1258416933 Q * AmokPaule Quit: Nettalk6 - www.ntalk.de
1258417002 Q * geb Quit: /
1258418708 Q * dowdle Remote host closed the connection
1258419499 J * kbad ~kyle@96-41-113-38.dhcp.mtpk.ca.charter.com
1258420264 J * arachnist arachnist@insomniac.pl
1258420499 J * kwowt ~urbee@93-103-199-233.dynamic.dsl.t-2.net
1258420531 Q * urbee Ping timeout: 480 seconds
1258424430 Q * arachnist Quit: Lost terminal
1258426921 P * kbad
1258427043 M * Bertl off to bed now .. have a good one everyone!
1258427049 N * Bertl Bertl_zZ
1258430291 J * saulus_ ~saulus@c207181.adsl.hansenet.de
1258430699 Q * SauLus Ping timeout: 480 seconds
1258430704 N * saulus_ SauLus
1258433519 Q * Piet Quit: Piet
1258433980 Q * hparker Remote host closed the connection
1258434271 J * fleischergesell ~fleischer@dslb-088-076-048-058.pools.arcor-ip.net
1258435232 J * mmgaggle ~kyle@96-41-113-38.dhcp.mtpk.ca.charter.com
1258437908 J * derjohn_mob ~aj@c195211.adsl.hansenet.de
1258437970 J * sharkjaw ~gab@90.149.121.45
1258440596 Q * kjj Ping timeout: 480 seconds
1258441480 J * tomreyn ~tom@04ZAACCAM.tor-irc.dnsbl.oftc.net
1258442566 J * thierryp ~thierry@home.parmentelat.net
1258442666 Q * derjohn_mob Ping timeout: 480 seconds
1258444322 Q * tomreyn Quit: tomreyn
1258445208 J * friendly ~friendly@ppp118-209-136-134.lns20.mel6.internode.on.net
1258446043 Q * quasisane Remote host closed the connection
1258446862 Q * sladen Ping timeout: 480 seconds
1258446910 J * Piet ~Piet__@04ZAACCCV.tor-irc.dnsbl.oftc.net
1258447245 J * sladen ~paul@starsky.19inch.net
1258447829 J * Piet_ ~Piet__@04ZAACCDI.tor-irc.dnsbl.oftc.net
1258447850 Q * Piet Remote host closed the connection
1258449183 N * Piet_ Piet
1258449641 J * sladen_ ~paul@starsky.19inch.net
1258449754 Q * sladen Ping timeout: 480 seconds
1258451949 J * BenG ~bengreen@94-169-110-10.cable.ubr22.aztw.blueyonder.co.uk
1258452079 Q * sladen_ Read error: Connection reset by peer
1258452089 J * sladen ~paul@starsky.19inch.net
1258452566 J * gnuk ~F404ror@pla93-3-82-240-11-251.fbx.proxad.net
1258453422 J * AmokPaule ~amokpaule@brsg-4dbba4f2.pool.mediaWays.net
1258454478 Q * BenG Quit: I Leave
1258454599 N * Bertl_zZ Bertl
1258454603 M * Bertl morning folks!
1258454612 M * AmokPaule morning
1258454623 M * fleischergesell hi Bertl
1258454636 M * fleischergesell need some help, again :/
1258454687 M * fleischergesell i set up a partition to share amongst my 3 guests, mounted it via the vservers fstab, but can't get usrquota to run
1258454734 M * fleischergesell it always says can't access /dev/md3 (which is the partition) - which is quite logical to me, because the guest has no md3 device - but how can i make this device accessible to it?
1258454761 M * Bertl did you read the wiki page regarding quota?
1258454780 M * fleischergesell sure, but i thought i have to do it another way now
1258454797 M * Bertl why?
1258454803 M * fleischergesell i mean, i don't want quota to be running in the whole vserver - just for that single partition i mount in
1258454854 M * fleischergesell so to me, enabling quota on hdv1 makes no sense, as I want quota to be enabled only on the mountpoint for my md3 device
1258454877 M * Bertl still you need a vroot device for most quota tools
1258454892 M * fleischergesell so i just create that vroot device for my md3?
1258454896 M * Bertl it now just has to use md3 instead of whatever you used before
1258454924 M * fleischergesell oh god, now i think i get it
1258454943 M * Bertl adjust the mtab to point to the proper vroot and show the proper mount options
1258454990 M * fleischergesell mh, i need to mount my md3 device anyway - is it not sufficient to list it in the guest's fstab?
1258455058 M * Bertl how so?
1258455081 M * fleischergesell well, that's what the FAQ is telling me to do if I want to share a dir/partition amongst guests
1258455151 M * Bertl I don't see that in the FAQ, could you point me to that misinformation?
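The vroot setup Bertl describes above can be sketched roughly as follows. This is an assumption-laden sketch, not the wiki's exact recipe: the device paths, guest name, and mount options are examples, and `vrsetup` is the util-vserver tool for binding a block device to a vroot device.

```shell
# Sketch only -- /dev/md3, the vroot minor number, the guest name "guest1"
# and the mount point "/shared" are all examples; adjust to your setup.

# on the host: bind the shared partition to a vroot device
vrsetup /dev/vroot/0 /dev/md3

# make that device node visible inside the guest
cp -a /dev/vroot/0 /vservers/guest1/dev/hdv1

# inside the guest, /etc/mtab then needs a matching entry carrying the
# quota mount options, e.g. a line like:
#   /dev/hdv1 /shared ext3 rw,usrquota 0 0
```

With that in place, quota tools run inside the guest should address `/dev/hdv1` instead of the (invisible) `/dev/md3`.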
1258455201 M * fleischergesell http://linux-vserver.org/Share_a_directory_among_multiple_guests -> "If the partition or directory is to be used by two guests, you should list it in both the guest fstabs."
1258455239 M * Bertl If a directory is *only* used by a guest (or, by guests), it should *only* be configured in the guest fstab in /etc/vservers//fstab
1258455292 M * Bertl so nothing about 'needs to be mounted on the host' there?
1258455325 M * fleischergesell i don't really want to mount it on the host, as this partition is - only - for guests - but i'm getting confused
1258455359 M * Bertl why?
1258455370 M * Bertl I mean, why are you confused?
1258455408 M * fleischergesell because you imply there is no need to mount the partition in the guest's fstab, and the tutorial says, at least for my purpose, there is
1258455500 M * Bertl I don't read what you seem to be reading from the tutorial
1258455528 J * arachnist arachnist@insomniac.pl
1258455530 M * Bertl where do you read that you need to mount it on the host?
1258455562 M * fleischergesell nowhere - because that's not what i want to do, nor did i say i want to - i figure this is a misunderstanding here :)
1258455645 M * fleischergesell i just wanted to know: if I list the mount in the guest's(!) fstab, do i still have to list it in the guest's mtab too?
1258455733 M * Bertl you probably want to do that for the quota tools, as it might show up without the quota mount options inside the guest (but maybe that works as expected now, just try it)
1258455775 M * Bertl the mtab is just a file with no relation (beyond the fact that it is occasionally written and used by mount) to the real mounts (and the kernel view of mounted filesystems)
1258455873 M * fleischergesell worked - for the first guest - now to try the other guests
1258455958 Q * AnOnJoe Remote host closed the connection
1258456730 J * BenG ~bengreen@94-169-110-10.cable.ubr22.aztw.blueyonder.co.uk
1258457042 M * fleischergesell works like a charm now - thank you for your time & patience Bertl
1258457161 Q * friendly Quit: Leaving.
1258457188 M * Bertl you're welcome!
1258457635 M * fleischergesell every now and then while performing an upgrade, i get this error: insserv: Service mountkernfs has to be enabled to start service udev
1258457662 M * fleischergesell in guests, mountkernfs shouldn't be enabled anyway - but how do you work around the requirement then?
1258457696 M * Bertl no idea, seems to be a misguided script or tool which produces that
1258457716 M * Bertl IMHO you have to find and disable it (or that part)
1258457741 M * Bertl btw, udev should be disabled in a guest, which is usually done on guest cleanup
1258457811 M * fleischergesell ya, but stupid debian packages re-install it, unfortunately
1258457857 M * fleischergesell for whatever purpose - right after install, dpkg sees "oh, it's chrooted, let's disable udev"
1258457959 M * Bertl yeah, debian is different :)
1258458009 M * fleischergesell which distro do you use?
1258458059 M * Bertl mostly Mandriva
1258458654 Q * sharkjaw Remote host closed the connection
1258459107 J * sharkjaw ~gab@90.149.121.45
1258459382 Q * fback Ping timeout: 480 seconds
1258459927 J * fback fback@red.fback.net
1258460558 M * fleischergesell is it possible to limit a cpu core exclusively to one guest?
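The cpuset assignment Bertl confirms just below can be sketched under util-vserver's /etc/vservers configuration layout. The guest name and the CPU/memory-node numbers are examples, not anything from the log:

```shell
# Sketch, assuming util-vserver's per-guest cpuset configuration directory;
# "guest1" and the core/node numbers are examples.
mkdir -p /etc/vservers/guest1/cpuset
echo guest1 > /etc/vservers/guest1/cpuset/name           # cpuset to use
echo 3      > /etc/vservers/guest1/cpuset/cpus           # pin guest to core 3
echo 0      > /etc/vservers/guest1/cpuset/mems           # memory node 0
echo 1      > /etc/vservers/guest1/cpuset/cpu_exclusive  # reserve the core
```

The settings take effect the next time the guest is started.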
1258460707 J * kir ~kir@swsoft-msk-nat.sw.ru
1258461923 M * Bertl sure, cpusets allow you to assign cpus to guests
1258463293 J * derjohn_mob ~aj@213.238.45.2
1258463485 Q * infowolfe Remote host closed the connection
1258465104 J * blues_ ~blues@avw243.neoplus.adsl.tpnet.pl
1258465226 Q * blues Ping timeout: 480 seconds
1258466824 Q * sharkjaw Quit: Leaving
1258467175 N * DoberMann[ZZZzzz] DoberMann
1258467601 Q * BenG Quit: I Leave
1258468967 Q * AmokPaule Remote host closed the connection
1258469000 Q * Piet Remote host closed the connection
1258469181 J * Piet ~Piet__@04ZAACCMM.tor-irc.dnsbl.oftc.net
1258470201 Q * Piet Remote host closed the connection
1258470380 J * hparker ~hparker@208.4.188.201
1258470452 J * Piet ~Piet__@04ZAACCND.tor-irc.dnsbl.oftc.net
1258471554 J * AmokPaule ~amokpaule@brsg-4dbba4f2.pool.mediaWays.net
1258471641 J * geb ~geb@earth.gebura.eu.org
1258471853 M * geb hi
1258472205 J * dowdle ~dowdle@scott.coe.montana.edu
1258472452 J * kjj ~kjj@pool-74-107-128-126.ptldor.fios.verizon.net
1258472767 M * fleischergesell what is good practice for limiting files with quota in a shared environment - do you use some kind of ratio "limit space/total space" = "limit files/total inodes possible"? that would be ~2.5 million files for 20GB. It is probably safer to lower the limit in order to avoid running out of inodes - so what do you think?
1258472862 M * Bertl check with df and df -i what you currently use, then set a reasonable limit based on that
1258473130 Q * _nono_ Quit: Leaving
1258473276 M * fleischergesell well, I have to guess what the usage will be - there is almost no data to look at yet
1258473310 M * fleischergesell that's why I asked for some kind of "good practice" or formula based on something, e.g. disk space limits
1258473354 M * fleischergesell though I understand running out of inodes is more significant for performance than running out of disk space - that may be incorrect
1258473364 M * Bertl you do not have _any_ disk with data in use?
1258473401 M * fleischergesell sure I do have data in use, but it is in no way comparable to what will be used
1258473419 M * fleischergesell oh, and i'm talking usrquota, sorry
1258473436 M * Bertl well, then you have to guess :)
1258473472 M * fleischergesell hehe, right - still, is it true that running out of inodes is more critical than having just "a full disk"?
1258473579 M * fleischergesell I'm only asking about the performance - say, having a disk that is 95% full but has 50% of inodes left, compared to a disk that is just 60% full but has 95% of inodes in use?
1258473653 M * Bertl doesn't matter, especially as quota and disk limits are virtual limitations
1258473776 M * fleischergesell well, the limits may be virtual, but the space and inodes are actually in use, no matter how virtual the accounting/limiting
1258473810 M * fleischergesell but good that it doesn't matter - I'll just set them according to the ratio share/total space I give each user
1258473829 M * Bertl correct, but your virtual limit of 90% is in no relation to the actual free inodes/space
1258473848 M * Bertl so it is rather unlikely that it influences performance somehow
1258473872 M * Bertl of course, once you are out of inodes _or_ space, your guest might get some problems
1258473920 M * fleischergesell I was more concerned about getting to a certain limit where it might actually influence performance - of course, having _no_ inodes/space left virtually disables most services running on the guest
1258473996 M * fleischergesell and that made me wonder: if I really am at a point where there are just, say, 2k inodes free - does that affect overall disk performance more than having just 20MB of space left?
1258474089 M * fleischergesell I don't have a device for testing here, otherwise it would be getting filled up by now
1258474581 M * Bertl you seem to ignore the fact that when your guest/user sees 2k free inodes, the system actually has 2M free inodes (for example)
1258474623 M * Bertl similar for the disk space, of course
1258474787 M * biz never raise all guest/user quotas to a margin where their total would affect performance of the "whole" filesystem on the host
1258474836 M * fleischergesell the disk is _solely_ used for data - nothing "runs" on the disk, it is not even mounted in the host
1258474852 M * fleischergesell and I was not talking about what I see, but what is actually happening on the disk / filesystem
1258474879 M * biz raising the limits for inodes too high might impact ext4 performance because of its inode (pre)reservation feature, and probably the overhead of the metadata journal will be hit too
1258474900 M * fleischergesell that is not the case for ext3, is it?
1258474901 M * biz on the other hand, if you raise the space limits too high, you will end up with poor performance because of fragmentation
1258474943 M * Bertl we are talking whole disk here, not guest/user limits
1258474958 M * biz yeah, the total of all guest/user limits :)
1258474997 M * fleischergesell well, I'm not really talking about limits anymore, I was just curious how "very few inodes left on disk" affects overall disk performance
1258475067 M * fleischergesell as "very little space on disk" affects performance because of (almost inevitable) fragmentation
1258475098 M * fleischergesell I'm sorry if my questions confuse
1258475282 M * biz sorry, I can't answer that, I usually "optimise" the filesystem for the task so that I won't come close to any limits (for example: a Maildir-based mailserver vs. a fileserver for big binaries)
1258475371 M * fleischergesell hehe, that's kinda hard in a shared environment where users can upload whatever they like until they reach their limits
1258475380 M * biz if you expect your service to hit any of the limits.. then add monitoring and prevent it
1258475388 M * fleischergesell I got that anyway
1258475424 M * fleischergesell but I cannot monitor as fast as a malicious user can create files / use up inodes :)
1258475509 M * fleischergesell well, I'll set my quota limits now in a way that even if each user uses up his respective limit, there are still ~10% of inodes left for that filesystem, just to avoid any complications
1258475513 M * biz simply put, set up user/guest hard limits that won't affect performance of the underlying filesystem if reached
1258475582 M * fleischergesell well, exactly that was the question: where is that limit, when will the underlying filesystem's performance be affected? But it seems rather hard to find an answer for
1258476006 M * fleischergesell anyway, thank you both very much - I know I'm a little annoying because I don't know some basics, but I'm working on it
1258476060 M * biz yeah, sorry.. I don't know numbers, and I doubt it would help because you have to take your workload into account etc.
1258476146 M * biz for space I almost always set limits so that 10-15% will still be available (ext2/3/4 reserves 5% for the super-user per default, but you can tune2fs it or change it at mkfs time)
1258476185 M * biz and I've never hit an inode limit in a production system yet :))
1258476192 M * fleischergesell hehe
1258476385 M * fleischergesell I'm not exactly sure how quota works or makes use of caching, but since it has to check before every write whether the user is still within his limits, any quota will affect performance too - do you know any numbers here?
1258476713 Q * jrdnyquist Quit: Leaving
1258476927 M * Bertl minimal, compared to the huge overhead actually writing to disk (i.e. the io) has
1258476940 M * Bertl anyway, off for now ... bbl
1258476945 N * Bertl Bertl_oO
1258477117 J * jrdnyquist ~jrdnyquis@slayer.caro.net
1258477626 J * bonbons ~bonbons@2001:960:7ab:0:2c0:9fff:fe2d:39d
1258478035 Q * gnuk Ping timeout: 480 seconds
1258478430 J * gnuk ~F404ror@pla93-3-82-240-11-251.fbx.proxad.net
1258478650 Q * nou Ping timeout: 480 seconds
1258479242 Q * AmokPaule Remote host closed the connection
1258479353 J * AmokPaule ~amokpaule@brsg-4dbba4f2.pool.mediaWays.net
1258480036 Q * derjohn_mob Ping timeout: 480 seconds
1258480279 Q * marl_scot Quit: Leaving
1258480612 J * _nono_ ~gomes@libation.ircam.fr
1258481465 J * derjohn_mob ~aj@c189001.adsl.hansenet.de
1258481550 Q * nenolod Remote host closed the connection
1258481576 J * nenolod ~nenolod@petrie.dereferenced.org
1258484430 Q * gnuk Quit: NoFeature
1258484449 Q * derjohn_mob Ping timeout: 480 seconds
1258484602 J * derjohn_mob ~aj@c189001.adsl.hansenet.de
1258487378 J * imcsk8 ~ichavero@148.229.1.11
1258488295 Q * derjohn_mob Ping timeout: 480 seconds
1258488352 Q * kir Quit: Leaving.
1258488447 J * derjohn_mob ~aj@c189001.adsl.hansenet.de
1258489518 J * hijacker_ ~hijacker@87-126-142-51.btc-net.bg
1258490350 J * dna ~dna@p54BC9840.dip0.t-ipconnect.de
1258491139 J * quasisane ~sanep@c-75-67-251-206.hsd1.nh.comcast.net
1258491239 N * Bertl_oO Bertl
1258491248 M * Bertl back now ...
1258491975 J * nou Chaton@causse.larzac.fr.eu.org
1258493726 Q * bonbons Quit: Leaving
1258494721 M * Radiance wb :)
1258494730 M * Radiance harry around ?
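The rule of thumb fleischergesell settles on in the quota discussion above (give each user the same share of inodes as of space, minus a ~10% safety margin) can be written out as arithmetic. All numbers here are illustrative: a 20 GB filesystem with the ext3 default of one inode per 8 KB (~2.6M inodes, matching the "~2.5 million files for 20GB" estimate in the log), and a user with a 1 GB space quota.

```shell
# Example numbers only: 20 GB filesystem, inode count as reported by
# `df -i`, a user with a 1 GB space quota, and a 10% safety margin.
total_space_kb=20971520   # 20 GB, from df
total_inodes=2621440      # from df -i (ext3 default: one inode per 8 KB)
user_space_kb=1048576     # this user's 1 GB space limit

# inode limit = same share of inodes as of space, scaled down to 90%
user_inodes=$(( total_inodes * user_space_kb / total_space_kb * 90 / 100 ))
echo "$user_inodes"   # 117964
```

The result could then be handed to setquota(8), e.g. `setquota -u alice 943718 1048576 106168 117964 /dev/vroot/0` (soft/hard block limits, then soft/hard inode limits), where the user name, device, and soft limits at 90% of the hard limits are again just example values.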
1258494751 Q * fleischergesell Ping timeout: 480 seconds
1258495566 Q * snooze Remote host closed the connection
1258495594 N * DoberMann DoberMann[ZZZzzz]
1258495956 Q * derjohn_mob Remote host closed the connection
1258496096 Q * hijacker_ Quit: Leaving
1258496281 M * harry Radiance: yesh
1258496315 M * Radiance ah cool :)
1258496465 J * derjohn_mob ~aj@c189001.adsl.hansenet.de
1258496763 Q * dna Quit: Verlassend
1258499838 Q * Piet Remote host closed the connection
1258499933 J * Piet ~Piet__@04ZAACC22.tor-irc.dnsbl.oftc.net
1258502176 J * snooze ~o@1-1-4-40a.gkp.gbg.bostream.se