1184630424 M * dallas does the scheduling overhead make that not so? I'm seeing the behavior I'm seeing when a specific guest has a quite high load... 15-20 or higher
1184630479 M * Bertl no, the load is accounted as 1 for each process
1184630497 M * Bertl regardless if it can/is running or not, as long as it _wants_ to run
1184630714 M * dallas so if a particular guest is very overloaded and has 100 processes all wanting to run but not able to due to not enough tokens it would raise the reported load on the host?
1184630747 M * Bertl exactly
1184630770 M * Bertl (well actually depends on the kernel)
1184631087 M * dallas is there another setting that might help keep these troublesome vservers in check? other than the interval and fillrate, etc?
1184631114 M * Bertl well, the question is what you want to achieve?
1184631141 Q * insomniaa Ping timeout: 480 seconds
1184631330 Q * gerrit Ping timeout: 480 seconds
1184631332 M * dallas I'd like to effectively completely contain a particular guest's cpu usage so it won't affect anything else on the server
1184631359 M * dallas it seems to be working to a point but loaded guests are seemingly increasing the load on other guests
1184631418 M * Bertl it is a shared system, and this sharing is what allows you to put more guests on a host than e.g. with Xen
1184631439 M * Bertl but of course, the sharing also affects other guests to some extent
1184631477 M * Bertl if you make sure that you do not overload the system, then it will definitely isolate the guests in a performant manner
1184631583 M * Bertl of course, fine tuning the TB schedulers can help a lot to improve guest performance
1184632021 M * dallas fillrate = "1"
1184632022 M * dallas interval = "16"
1184632022 M * dallas tokens = "500"
1184632022 M * dallas tokensmax = "1000"
1184632022 M * dallas tokensmin = "200"
1184632040 M * dallas that's what I'm using now on this particular guest
1184632068 M * dallas all other guests are the same other than interval being set higher or lower...
1184632072 M * Bertl well, your kernel clock is set to 100Hz or 250/1000?
1184632119 M * dallas it's set to 1000, I believe.
1184632141 M * Bertl okay, that means, that your guests will receive a single token, every 16/1000 seconds
1184632172 M * Bertl as the min is set to 200, that will give you bursts every
1184632174 M * dallas actually, we have it set to 250. sorry. so 16/250 seconds
1184632229 M * Bertl okay, so that will take 200/(16/250) jiffies to fill the bucket
1184632300 M * Bertl which means that the guest will run for a short amount every 12 seconds
1184632300 M * AStorm this VServer TB scheduler is funny
1184632307 M * AStorm it's bursty
1184632329 M * AStorm fair, but creates weird bursts, totally uninteractive :P
1184632336 M * AStorm unless I'm supposed to tune it to hell
1184632351 M * Bertl dallas: and then the guests will have tokens (to compete) for about a second
1184632389 M * Bertl AStorm: the behaviour can be completely tuned, the choice is yours if you prefer interactivity or less scheduling overhead
1184632448 M * AStorm and the default is bursty
1184632460 M * AStorm which kills my desktop (as it's scheduled as one too)
1184632462 M * AStorm :>
1184632479 M * AStorm looks funny, when something runs for 10 seconds, then slows to almost halt
1184632494 M * Bertl the defaults for Linux-VDesktop are different :)
1184632516 M * AStorm there is one? ;p
1184632595 Q * bzed Quit: Leaving
1184632660 Q * slack101 Ping timeout: 480 seconds
1184632681 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com
1184632715 M * Bertl dallas: try to make that a little more 'responsive' by lowering the min values
1184632746 M * Bertl dallas: btw, about how many guests are we talking here? 16?
1184632750 M * onox where do you change those values?
1184632764 M * AStorm onox, vsched command
1184632778 M * Bertl onox: usually in the config (tree)
1184632791 M * onox thx
1184632795 M * onox i'm gone
1184632798 Q * onox Quit: zZzZ
1184632806 M * dallas we have 11 guests on this host with different intervals but mostly at 16. some are set to 8 and some to 4
1184632829 M * dallas would raising the fill rate as well as the interval to keep the same ratio have a useful effect?
1184632943 M * Bertl it might, as it will distribute larger amounts of tokens
1184632987 M * Bertl i.e. when a guest is running at the 'designed' rate, then it will oscillate around the min value
1184632987 Q * Vudumen Read error: Connection reset by peer
1184633014 M * Bertl this can be eased by setting higher values for interval and rate
1184633050 M * Bertl in general (if you still have idle time left) it would be quite beneficial to all your guests when you enable the idle time skipping
1184633120 M * AStorm Bertl, hmm, can one make vserver-ran processes appear to have different gids?
1184633131 M * AStorm ah, nope, that won't do what I want :>
1184633213 M * dallas where is idle time skipping enabled?
1184633236 M * AStorm in VServer kernel options
1184633251 M * dallas ah, right. I think I have that set.. I'll have to double-check
1184633255 M * Bertl yes, it has to be enabled there, and then it can be simply selected in the config
1184633298 J * DoberMann_ ~james@AToulouse-156-1-155-211.w90-38.abo.wanadoo.fr
1184633310 M * dallas simply selected in which config?
1184633345 M * AStorm with vsched
1184633348 M * Bertl in the guest config, as interval2/rate2
1184633367 M * AStorm yep
1184633394 M * Bertl http://www.nongnu.org/util-vserver/doc/conf/configuration.html
1184633408 Q * DoberMann[ZZZzzz] Ping timeout: 480 seconds
1184633411 M * dallas ah, right. ok. I don't have those enabled there.
1184633412 J * Vudumen ~vudumen@perverz.hu
1184633441 M * Bertl dallas: best, make them identical for all guests, something like 10/40 (r/i)
1184633471 M * Bertl dallas: does that machine have more than one CPU?
1184633489 M * AStorm TB sched per-CPU code is weird :P
1184633498 M * AStorm but it works
1184633505 M * dallas yeah, two cpus
1184633521 M * dallas I believe I have it set so both cpus are treated the same
1184633526 M * Bertl then I would suggest to put all the problematic guests on one of them (with cpusets)
1184633550 M * Bertl this way, they can only harm each other (at least cpu wise :)
1184633607 M * dallas ah, good idea. though so far almost all of them are problematic. that's sort of why we're trying to isolate them from everyone else
1184633639 M * Bertl well, how good is your I/O subsystem?
1184633644 M * AStorm Bertl, hmm, I'd rather have dynamic migration fixed
1184633650 M * AStorm and the token bucket global
1184633671 M * Bertl dallas: because usually that is the typical bottleneck, not the CPU as one might expect
1184633676 M * AStorm so that the apps can be migrated as the scheduler sees fit
1184633683 M * Bertl AStorm: a per cpu TB has certain advantages
1184633691 M * AStorm of course it does
1184633705 M * AStorm but it'd better be optional :-)
1184633741 M * Bertl well, with a number of processes per guest, you get what you want by setting the values identical for each cpu
1184633756 M * AStorm mhm
1184633766 M * Bertl for a single process, it takes a little longer to even out
1184633779 M * dallas we have to run a lot of our sites off of NFS due to the nature of our setup so that is definitely a bottleneck in and of itself
1184633795 M * AStorm Well, why was "make vserver look like a process" approach dropped?
1184633806 M * Bertl dallas: then better make sure that this NFS is optimized :)
1184633807 M * AStorm With the current new infrastructure, that should be easy
1184633808 M * dallas the OS itself in the vservers is on a single scsi drive so not super fast but ok
1184633841 M * dallas we run everything over NFS including non-vserver (we do web hosting)
1184633856 J * Vudu ~vudumen@perverz.hu
1184633871 M * Bertl AStorm: because guests comprise several processes (usually) and thus you want them scheduled as separate processes (as long as you are in the 'green' area)
1184633875 Q * Vudumen Read error: Connection reset by peer
1184633896 M * AStorm hmm, what about just stacking the schedulers?
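The token-bucket arithmetic Bertl walks through above can be checked with a quick sketch. The values (HZ=250, fill rate 1, interval 16, tokens-min 200) are the ones quoted in the conversation:

```shell
# Refill time and burst length for the quoted scheduler parameters:
# one token arrives every INTERVAL jiffies, the guest may not run until
# MIN tokens have accumulated, and each jiffy on the CPU burns one token.
HZ=250; RATE=1; INTERVAL=16; MIN=200
awk -v hz="$HZ" -v r="$RATE" -v i="$INTERVAL" -v min="$MIN" 'BEGIN {
    printf "refill: %.1f s, burst: %.1f s\n", min / r * i / hz, min / hz
}'
# prints: refill: 12.8 s, burst: 0.8 s
```

This matches Bertl's figures: the guest runs for a short burst roughly every 12-13 seconds, and has tokens to compete with for a bit under a second.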
1184633922 M * Bertl adds an unnecessary layer to the already performance critical scheduler
1184633937 M * Bertl we know that from Xen and friends :)
1184633939 M * AStorm hmm, you're right
1184633968 M * AStorm Still, I'd love to have some CPU usage limiter per-group in the mainline
1184633974 M * AStorm like another rlimit
1184634030 M * AStorm if the VServers would utilise CFS' "fair scheduling group" approach... it'd be great
1184634065 M * Bertl implement that on top of CPU sets, and you can use it right now :)
1184634071 M * AStorm it's currently done to support per-group fair scheduling
1184634090 M * AStorm could be used to support per-vserver fair scheduling too
1184634965 M * dallas thanks for the info, everyone. I'm sure you'll see me again sometime!
1184635017 M * Bertl dallas: you're welcome!
1184635096 P * dallas
1184636628 Q * FireEgl Read error: No route to host
1184636824 J * Vudumen ~vudumen@perverz.hu
1184636824 Q * Vudu Read error: Connection reset by peer
1184636955 Q * slack101 Ping timeout: 480 seconds
1184636976 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com
1184637218 J * Vudu ~vudumen@perverz.hu
1184637218 Q * Vudumen Read error: Connection reset by peer
1184638871 J * Vudumen 871afc95a5@perverz.hu
1184638925 Q * Vudu Read error: Connection reset by peer
1184639351 J * emtty ~eric@dynamic-acs-24-154-33-109.zoominternet.net
1184639418 J * FireEgl FireEgl@Sebastian.Atlantica.DollarDNS.Net
1184641254 Q * slack101 Ping timeout: 480 seconds
1184641275 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com
1184641371 Q * Johnnie Ping timeout: 480 seconds
1184641371 Q * mugwump Read error: Connection reset by peer
1184641689 J * mugwump ~samv@watts.utsl.gen.nz
1184641923 J * Johnnie ~jdlewis@c-67-163-246-136.hsd1.pa.comcast.net
1184642001 M * ntrs_ Bertl, what's up with 2.2.0.1? What is new/fixed/changed?
1184642035 M * Bertl a few minor issues were corrected, but you should go for vs2.2.0.2 :)
1184642054 M * Bertl this version also fixes some outstanding CoW error path problems
1184642123 M * ntrs_ ok, when will 2.2.0.2 be available?
1184642142 M * Bertl it already is
1184642161 M * AStorm when will vs2.2.0-rc5 update patch be available?
1184642163 M * AStorm :>
1184642186 M * Bertl we are at 2.2.0.2-rc1 for 2.6.22 :)
1184642192 M * ntrs_ 2.2.0-rc5 is a pretty old one if I remember correctly.
1184642196 J * Vudu 9f45708ef8@perverz.hu
1184642242 M * ntrs_ Bertl, if you are at -rc1 then it may be a while before 2.2.0.2 is out. :)
1184642290 M * Bertl for 2.6.22, yes (but we don't need many release candidates for that, I guess)
1184642313 Q * Vudumen Ping timeout: 480 seconds
1184643838 J * Cooler Cooler@10001271528.0000031908.acesso.oni.pt
1184643849 M * Bertl welcome Cooler!
1184644790 Q * Johnnie Ping timeout: 480 seconds
1184645000 M * Bertl okay, off to bed now ... have a good one everyone! cya!
1184645006 N * Bertl Bertl_zZ
1184645326 J * Johnnie ~jdlewis@c-67-163-246-136.hsd1.pa.comcast.net
1184645964 J * HeinMueck ~Miranda@dslb-088-064-026-159.pools.arcor-ip.net
1184646291 J * Vudumen ~vudumen@perverz.hu
1184646305 J * agryppa ~kb2qzv@cab-dr-cas1-57.dial.airstreamcomm.net
1184646453 Q * Vudu Ping timeout: 480 seconds
1184646663 M * agryppa Hi, I need to restart vservers.default script in order to reach guests. Don't know where to start from here.
1184646929 M * Cooler you can't
1184646963 M * Cooler later
1184647133 Q * Cooler Quit: Leaving
1184647684 J * Vudu 8ff343c17b@perverz.hu
1184647713 Q * Vudumen Ping timeout: 480 seconds
1184648744 Q * agryppa Quit: Leaving
1184648746 J * gerrit ~gerrit@c-67-169-199-103.hsd1.or.comcast.net
1184649017 J * Gerardo ~Gerardo@host159.190-31-236.telecom.net.ar
1184649393 Q * EtherNet Read error: Operation timed out
1184649563 J * Vudumen ~vudumen@perverz.hu
1184649573 J * Supaplex supaplex@166-70-62-199.ip.xmission.com
1184649622 M * Supaplex how do vservers shutdown and restart when the host system does? How do I make sure they come back up on restart?
1184649713 Q * Vudu Ping timeout: 480 seconds
1184649781 M * Supaplex ah ha. faq++
1184649865 Q * slack101 Ping timeout: 480 seconds
1184649879 Q * Vudumen Remote host closed the connection
1184649886 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com
1184650380 J * Vudumen 708b442859@perverz.hu
1184651509 N * DoberMann_ DoberMann
1184652577 J * Vudu ~vudumen@perverz.hu
1184652633 Q * Vudumen Ping timeout: 480 seconds
1184652923 Q * Gerardo Ping timeout: 480 seconds
1184653120 N * DoberMann DoberMann[PullA]
1184654144 Q * slack101 Ping timeout: 480 seconds
1184654165 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com
1184654998 J * Vudumen ~vudumen@perverz.hu
1184654998 Q * Vudu Read error: Connection reset by peer
1184655361 Q * Vudumen Remote host closed the connection
1184655373 J * Vudumen 46504d690b@perverz.hu
1184657345 Q * rob-84x^ Ping timeout: 480 seconds
1184657390 J * rob-84x^ rob@submarine.ath.cx
1184658010 Q * ruskie Remote host closed the connection
1184658038 J * ruskie ruskie@ruskie.user.oftc.net
1184658098 J * melek|wo1k ~ben@84-93-217-68.plus.net
1184658099 Q * melek|work Read error: Connection reset by peer
1184658435 Q * slack101 Ping timeout: 480 seconds
1184658456 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com
1184658879 J * Vudu ~vudumen@perverz.hu
1184659003 Q * Vudumen Ping timeout: 480 seconds
1184660091 Q * Vudu Ping timeout: 480 seconds
1184660107 J * Vudumen ~vudumen@perverz.hu
1184660503 J * TheSeer ~theseer@border.office.nonfood.de
1184660510 M * TheSeer good morning ;)
1184662290 J * cedric ~cedric@80.70.39.67
1184662293 Q * melek|wo1k Quit: leaving
1184662381 J * boneb ~ben@mail.fourtwoseven.co.uk
1184662388 M * boneb Hi
1184662412 M * boneb Is there a mailing list for vserver? the link on the site doesn't seem to work?
1184662730 Q * slack101 Ping timeout: 480 seconds
1184662751 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com
1184663093 M * boneb or a forum or something?
1184663153 M * harry there is a mailinglist
1184663161 M * harry and there is this chan :)
1184663389 M * boneb I can't find the mailing list, do you have a link?
1184663490 M * Bertl_zZ boneb: http://linux-vserver.org/Communicate
1184663523 M * Bertl_zZ (I hate it when I get woken up)
1184663570 M * Ramjar harry: "[14:42] Ramjar: you're doing strange things in a guest". "[14:43] i asume it's gentoo?".
1184663576 M * Ramjar Thats right
1184663583 M * Ramjar i use -d gentoo
1184663596 M * Wonka user namespaces got into 2.6.22-git8...
1184663603 M * Ramjar harry and then i also used a stage4 for installation.
1184663674 J * cluk ~cluk@p5B17F87E.dip.t-dialin.net
1184663759 M * Ramjar i used this how to: http://www.gentoo.org/proj/en/vps/vserver-howto.xml <- initstyle gentoo
1184663824 M * Ramjar http://paste.linux-vserver.org/4599
1184663871 M * harry Bertl_zZ: as you can see there... it doesn't seem to work
1184663877 M * harry as you can read on the ml too
1184664017 M * harry Ramjar: i never used gentoo, so i don't really know how it works or what problems are with gentoo :)
1184664020 M * harry sry
1184664076 M * Ramjar i see. thnx any way. Someone? plz? :)
1184664087 M * TheSeer hmm..
1184664107 M * TheSeer how do i stop a vserver from eating all cpu without stopping it?
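Supaplex's question above (guests coming back after a host reboot) is handled through the util-vserver config tree: a guest carries a mark, and the distribution's vservers init script starts/stops all guests carrying the matching mark at boot/shutdown. A sketch, written to a scratch directory so it is safe to run; on a real host the directory would be /etc/vservers/&lt;guestname&gt;:

```shell
# Guests whose apps/init/mark contains "default" are started by the
# "vservers default" init script at host boot (the script agryppa mentions).
# $ROOT stands in for /etc/vservers/<guestname> here.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/apps/init"
echo default > "$ROOT/apps/init/mark"
cat "$ROOT/apps/init/mark"   # prints: default
```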
1184664205 J * lilalinux ~plasma@dslb-084-058-215-173.pools.arcor-ip.net
1184664409 J * werts ~wert@61.8.73.27
1184664434 M * Bertl_zZ harry: (un)subscription works as described, just the web interface is gone (read the list to figure why it isn't fixed)
1184664436 M * boneb Bertl_zZ: that gives http://list.linux-vserver.org/, which is not available
1184664469 M * Bertl_zZ boneb: _please_ actually _read_ the utl I gave you
1184664472 M * Bertl_zZ *url
1184664500 M * boneb ahh I see, oops :S
1184664528 M * TheSeer Bertl_zZ: got a hint for me too? i have a vserver that is eating cpu time like crazy (i have a load average of ~60 atm) and need to "cool" it down without killing it
1184664572 M * Bertl_zZ TheSeer: http://linux-vserver.org/CPU_Scheduler
1184664589 M * TheSeer i was trying to understand that page already ;)
1184664599 M * Bertl_zZ Ramjar: you are probably using an older set of util-vserver
1184664612 M * Bertl_zZ Ramjar: I would guess 0.30.212
1184664618 M * TheSeer Bertl_zZ: that page is somewhat theoretical
1184664641 M * TheSeer Bertl_zZ: i'm lost as to how to actually apply the schedule 'rules'
1184664674 M * harry an example of a config setup should be nice too :)
1184664710 M * harry for the cpu thingy
1184664778 M * Bertl_zZ http://www.nongnu.org/util-vserver/doc/conf/configuration.html
1184664798 M * Bertl_zZ /etc/vservers/vserver-name/sched
1184664832 M * Bertl_zZ http://oldwiki.linux-vserver.org/Scheduler+Parameters
1184664853 M * Bertl_zZ http://oldwiki.linux-vserver.org/vsched+explained
1184664882 M * TheSeer thanx
1184665039 M * cluk Hi
1184665044 M * sid3windr Bertl_zZ: maybe one/we could make a page with a webform where people can enter their email address and it automatigally sends an (un)subscribe mail to the list-request address?
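The /etc/vservers/&lt;name&gt;/sched directory Bertl_zZ points to holds one value per file (fill-rate, interval, tokens-min, ...), applied when the guest starts. For an already-running guest like TheSeer's, the values can be pushed with vsched; the option names below are an assumption based on the linked documentation, and xid 100 is a made-up example:

```shell
# Config-tree equivalent (one value per file, read at guest start):
#   /etc/vservers/<name>/sched/fill-rate  -> 10
#   /etc/vservers/<name>/sched/interval   -> 40
# For a running context, the same values would be applied with vsched;
# printed here rather than executed (syntax assumed from the linked docs).
XID=100; FILL_RATE=10; INTERVAL=40
printf 'vsched --xid %s --fill-rate %s --interval %s\n' \
    "$XID" "$FILL_RATE" "$INTERVAL"
# prints: vsched --xid 100 --fill-rate 10 --interval 40
```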
1184665049 M * sid3windr *automagically
1184665099 M * Bertl_zZ well, naturally I would prefer to resolve the issue, but yes, that is probably an option
1184665104 M * harry http://oldwiki.linux-vserver.org/Scheduler+Parameters <== you should not refer to that one, Bertl_zZ
1184665118 M * harry it doesn't work since 0.30.210+ version of the tools
1184665121 M * harry iirc
1184665141 M * cluk we are using linux-vserver for about a year now. no problems so far. until today. :)
1184665154 M * sid3windr Bertl_zZ: I guess one doesn't want to be labeled as "the list i've been wanting to unsub for 1.5 years now" :p
1184665174 M * sid3windr but I guess moving the list requires the webif or the owner to get to the subscriber list :/
1184665194 M * cluk i moved one of our guest systems from one host to another but when i try to start the guest there, the vserver start command hangs forever.
1184665203 M * Bertl_zZ sid3windr: (as I stated on the mailing list)
1184665211 M * sid3windr :]
1184665218 M * sid3windr I'm a bit behind on my email
1184665220 M * Bertl_zZ sid3windr: the problem is, some folks are too dumb to unsubscribe
1184665229 M * sid3windr 6300 unread in inbox (mostly spam)
1184665239 J * meandtheshell ~markus@85.127.104.80
1184665241 M * sid3windr as i've been offline for >week on holiday :)
1184665243 M * Bertl_zZ btw, I removed the comment from the page stating that it doesn't work
1184665269 M * Bertl_zZ cluk: _how_ did you move it? what tools? what kernel?
1184665328 M * cluk Bertl_zZ: i moved it using rsync -aAz --delete --numeric-ids for /etc/vserver/guest and /vserver/guest.
1184665392 M * cluk kernel is 2.6.16 debian vserver kernel on source and 2.6.18-4-vserver-amd64 on target
1184665416 M * cluk i did this move successfully for several other guests already
1184665418 M * Bertl_zZ both 64bit kernels?
1184665423 M * cluk yes.
1184665464 M * Bertl_zZ where does it hang? does it write some 'last' line?
1184665494 M * cluk no, no output at all.
1184665518 M * cluk ps xa shows the long /usr/sbin/chbind command
1184665552 M * cluk vps xa | grep XID shows a /usr/sbin/vattribute command
1184665593 M * Bertl_zZ then please do 'vserver --debug start' and upload the output to paste.linux-vserver.org
1184665858 M * Ramjar Bertl_zZ: 0.30.212-r2 thats right. gentoo portage latest version of util-vserver. so there is a new version?
1184665896 M * Bertl_zZ I'm pretty sure there is a newer one in gentoo too
1184665906 M * Bertl_zZ (yes, we have been on 0.30.213 for half a year now)
1184665918 M * Ramjar haha ohh :D
1184665931 M * Ramjar i did a sync but nothing more appeared. i will do a new try.
1184666047 M * cluk Bertl_zZ: ok i pasted the full output
1184666130 M * Bertl_zZ cluk: looks good so far, i.e. the problem must be in the host config or inside the guest
1184666159 M * Bertl_zZ cluk: I would opt for resolv.conf or the ip on the host
1184666178 M * Bertl_zZ cluk: i.e. the guest's /etc/init.d/rc doesn't finish
1184666223 M * Bertl_zZ cluk: check what processes are running inside the guest
1184666293 M * cluk vps is only showing one process running with xid 1007:
1184666297 M * cluk /usr/sbin/vcontext --migrate-self --endsetup --chroot --silent -- /etc/init.d/rc 3
1184666313 M * cluk the rc script itself does not seem to run.
1184666461 M * Bertl_zZ that _is_ the rc script running
1184666529 Q * werts
1184666530 M * Bertl_zZ (or better, the rc script _waiting_ for something)
1184666544 M * cluk ah, ok. i will add some debugging output.
1184666554 M * cluk thanks so far.
1184666555 Q * lilalinux Remote host closed the connection
1184666569 M * Bertl_zZ daniel_hozac: feature request: can we get an --strace option for vserver - start?
1184666610 M * Bertl_zZ daniel_hozac: and maybe similar for bash -x (for sysv init) (all guest side)
1184666774 M * cluk another minor problem: I am trying to kill the vcontext / rc process
1184666776 M * Bertl_zZ cluk: if it is a resolver/nameservice issue, it should continue after 3x30 seconds (or something like that)
1184666793 J * lilalinux ~plasma@dslb-084-058-215-173.pools.arcor-ip.net
1184666799 M * Bertl_zZ cluk: vserver stop (or use vkill)
1184666826 M * cluk Bertl_zZ: I have waited for about 4 hours. :)
1184666865 M * Bertl_zZ okay, that doesn't seem to be a resolver issue then
1184666870 M * cluk vserver stop is also hanging, vkill -c 1007 -s SIGKILL did nothing.
1184666895 M * Bertl_zZ hmm, you said debian kernel, yes?
1184666944 M * cluk yes, on the target it is the original debian etch vserver kernel.
1184666968 M * Bertl_zZ check with 'dmesg', any unusual stuff there?
1184667017 M * cluk no kernel oops or something like that.
1184667022 Q * lilalinux Remote host closed the connection
1184667025 Q * slack101 Ping timeout: 480 seconds
1184667038 M * Bertl_zZ okay, then try: vkill -c 1007 -s 9 -- 0
1184667046 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com
1184667096 M * cluk vcontext / rc still sitting there.
1184667100 M * Bertl_zZ cluk: what tool (util-vserver) version do you use?
1184667125 M * cluk 0.30.212-1
1184667151 M * Bertl_zZ okay, let's update that to 0.30.213
1184667201 M * Bertl_zZ is the vcontext shown as 'S', 'R' or 'D'?
1184667361 M * cluk vps shows it as H
1184667438 M * Bertl_zZ aha!
1184667456 M * Bertl_zZ so you managed (somehow) to put it on hold
1184667513 J * lilalinux ~plasma@dslb-084-058-215-173.pools.arcor-ip.net
1184667517 M * Bertl_zZ (could be a bug in the tools, so please try with 0.30.213)
1184667524 M * cluk uh. that is strange.
1184667545 M * cluk ok, I will try with 0.30.213 but this will take me some time. :)
1184667571 M * Bertl_zZ first, please show me (upload to paste.linux-vserver.org) the contents of /proc/virtual//sched
1184667645 M * cluk pasted.
1184667817 Q * meandtheshell Quit: Leaving.
1184667866 M * Bertl_zZ cluk: and it is still shown as 'H'?
1184667885 M * cluk yes, still H
1184667896 M * Bertl_zZ okay, let's try the following:
1184667979 M * Bertl_zZ vattribute --xid 1007 --flag ~sched_pause
1184667988 Q * lilalinux Remote host closed the connection
1184668028 J * meandtheshell ~markus@85.127.105.7
1184668042 M * Ramjar how do i deactivate swap? my vserver which i installed with stage4 is activating swap on boot.
1184668059 M * Bertl_zZ cluk: btw, do you have an idea where the '--flag default' comes from? is that in your config?
1184668084 M * Ramjar i did a swapoff -a but then i got Not superuser. (with su)
1184668090 M * Bertl_zZ Ramjar: be assured that - unless you gave a lot of extra capabilities - the guest will not be able to do that
1184668115 M * cluk Bertl_zZ: vattribute did not change anything, still H
1184668119 M * AStorm What would be the best way to unify guests? I want to avoid copying all files
1184668133 M * AStorm any hardlink + setattr?
1184668133 M * Bertl_zZ Ramjar: recent gentoo, IIRC, are guest aware and will handle that properly
1184668141 M * Bertl_zZ AStorm: vunify/vhashify
1184668143 M * AStorm or do I have to write a script myself?
1184668159 M * AStorm Bertl_zZ, for vunify to work, don't I have to copy the files first?
1184668162 M * cluk Bertl_zZ: about the '--flag default' I do not know but I will take a look.
1184668174 M * AStorm vhashify does something else, which I don't want
1184668194 M * Bertl_zZ vhashify most likely does what you want :)
1184668227 M * AStorm No, it only unifies _inside_ the vserver
1184668231 M * AStorm am I right?
1184668260 M * Bertl_zZ it unifies whatever dir is given (by default the guest root) with the hash store
1184668267 M * AStorm still, I want to avoid copying all files just to unify them later
1184668283 M * cluk Bertl_zZ: I pasted the content of /etc/vservers/guest/flags, no default inside.
1184668283 M * Bertl_zZ there is nothing copied, it creates hard links
1184668296 M * AStorm Bertl_zZ, so, unifying an empty dir will work?
1184668312 M * Bertl_zZ AStorm: sure, but it won't do anything :)
1184668317 M * AStorm gah
1184668329 M * AStorm I really would like it to just clone everything
1184668337 M * Bertl_zZ cluk: try vattribute --xid 1007 --flag ~sched_hard
1184668347 M * AStorm recursive hardlink + setattr
1184668371 M * Bertl_zZ AStorm: that is what vserver - build -m clone does IIRC
1184668383 M * AStorm does it copy, or really hardlink?
1184668386 M * Bertl_zZ (with unification enabled)
1184668435 M * cluk Bertl_zZ: yes, that did it. the process finished. i will try starting the vserver without sched_hard
1184668459 M * AStorm hmm, it also does initpost and initpre
1184668466 M * cluk thank you very much. I will be off for an hour but will report back.
1184668468 M * Bertl_zZ cluk: it looks like a bug to me, so either your kernel or the tools are broken
1184668886 Q * mjt Ping timeout: 480 seconds
1184669174 J * lilalinux ~plasma@dslb-084-058-215-173.pools.arcor-ip.net
1184669466 J * ktwilight_ ~ktwilight@236.124-66-87.adsl-dyn.isp.belgacom.be
1184669731 J * mjt ~mjt@nat.corpit.ru
1184669758 Q * ktwilight Ping timeout: 480 seconds
1184670172 M * AStorm Bertl_zZ, hmm, how would I clone while skipping one dir? (to avoid recursion)
1184670186 M * AStorm I want to clone / into /vservers
1184670240 M * mugwump find and cpio?
1184670394 M * AStorm cpio?
1184670396 M * AStorm more like ln
1184670407 J * Spyke ~jonas@pc19.hip.fi
1184670428 M * AStorm and then setattr
1184670512 M * AStorm but then, directory attrs have to be preserved
1184670551 M * AStorm which will make it too complex for find
1184670652 M * AStorm I'll write a script instead
1184671047 J * Piet hiddenserv@tor.noreply.org
1184671095 Q * Baby Remote host closed the connection
1184671168 Q * hardwire` Read error: Operation timed out
1184671320 Q * slack101 Ping timeout: 480 seconds
1184671341 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com
1184671386 J * hardwire ~bip@rdbck-7046.palmer.mtaonline.net
1184671651 J * otaku42 ~otaku42@legolas.otaku42.de
1184671654 M * otaku42 moin
1184671709 M * AStorm moin
1184671805 M * otaku42 i wonder what solution people use to monitor their vservers... is there any tool (nagios, cacti, ...) that plays better with vservers than others?
1184671869 M * AStorm Any will do.
1184671998 M * otaku42 AStorm: hmm, ok. is there anything that you would recommend? at this point i'm pretty flexible in that regard - or, rephrased: it's hard for me to decide :)
1184672024 M * AStorm I don't know, really :>
1184672056 M * otaku42 darn...
1184672296 J * flea ~flea@a83-132-13-23.cpe.netcabo.pt
1184672300 M * flea howdy ppl
1184673664 Q * Aiken Remote host closed the connection
1184674096 J * EtherNet ~Gerardo@host50.190-31-245.telecom.net.ar
1184675074 M * cluk Bertl_zZ: Removing sched_hard from the flags file made the vserver start again
1184675111 M * cluk with sched_hard enabled it depends on the fill rate:
1184675137 M * cluk fill rates from 1-50 and from 100-150 work ok, while all others do not.
1184675183 M * cluk all with interval set to 100
1184675295 M * cluk sometimes i see large negative values for 'tokens' like -858992956 when i cat /proc/virtual/1007/sched
1184675333 M * cluk might this indicate a 64bit kernel problem?
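The 'H' state that unmasked cluk's hang above is VServer-specific: a context put on hold by the scheduler, cleared in the log with `vattribute --xid 1007 --flag ~sched_hard` (the `~` prefix clears a flag). A tiny helper summarising the state letters discussed; R/S/D follow standard ps state codes, the reading of 'H' follows Bertl_zZ's explanation, and the function name is ours:

```shell
# Map the process-state letters discussed above to a short description.
state_meaning() {
    case "$1" in
        R) echo "running" ;;
        S) echo "interruptible sleep" ;;
        D) echo "uninterruptible sleep" ;;
        H) echo "on hold (scheduler pause / hard CPU limit)" ;;
        *) echo "unknown" ;;
    esac
}
state_meaning H   # prints: on hold (scheduler pause / hard CPU limit)
```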
1184675620 Q * slack101 Ping timeout: 480 seconds
1184675641 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com
1184675702 J * complexmind ~mark@cpc1-brig1-0-0-cust3.brig.cable.ntl.com
1184675812 M * cluk Bertl_zZ: ok, it is even worse. the whole sched_hard thing is not working as expected.
1184675843 M * cluk using a fill rate of 10 with an interval of 100 i can use 100% cpu time from within the guest. :(
1184675901 M * cluk i will try fixing the kernel
1184676198 J * ema ~ema@rtfm.galliera.it
1184676363 J * markus__ ~chatzilla@mail.netcare.at
1184676369 M * markus__ Hi :)
1184676430 M * AStorm cluk, not really
1184676445 M * AStorm probably the fill rate is so high, the vservers cannot use the tokens on time
1184676448 M * AStorm and then it overflows
1184676454 M * AStorm it's most certainly a bug
1184676481 M * AStorm I consider that TB sched a bug in its own right, but that might be just me :P
1184676619 M * markus__ I'm running two vservers. The machine has 4GB. One vserver is assigned 256MB, the other 768MB. Nothing else is currently running and there was nothing else running since the last restart, but still the host server shows a usage of 2GB ram. Is this somehow expected, overhead? Or is this number just not accurate?
1184676663 M * AStorm I think that last option is right
1184676730 M * markus__ me ?
1184676825 M * cluk AStorm: I would not care about the overflow if the scheduler would work as expected. :)
1184676848 M * AStorm cluk, well, it cannot work well when the no of tokens is negative
1184676858 M * AStorm I'd personally just rip that scheduler off and redo it
1184676875 M * AStorm but my sched-foo is not that high yet
1184676950 M * cluk the no of tokens is not _always_ negative, but the scheduler even does not work while it is positive
1184677020 M * AStorm Reread my stance on that scheduler
1184677035 Q * HeinMueck Read error: Connection reset by peer
1184677277 M * otaku42 hrm... i vaguely remember that there were some scripts for nagios available on linux-vserver.org, but i fail to find them
1184677362 M * otaku42 erm... forget that, i was wrong. issue solved.
1184677372 J * Gerardo ~Gerardo@host170.190-31-8.telecom.net.ar
1184677783 Q * EtherNet Ping timeout: 480 seconds
1184678431 M * daniel_hozac otaku42: might want to look at collectd or munin.
1184678488 M * ktwilight_ hm, can host and guests be listening on the same port?
1184678494 M * ktwilight_ or can guests be listening on the same port?
1184678504 M * ktwilight_ i guess not? 'cuz apache is complaining.
1184678504 M * daniel_hozac markus__: that's most likely the kernel caching/buffering things.
1184678511 M * harry markus__: ever thought about caches, buffers, virtual memory ?
1184678523 M * daniel_hozac ktwilight_: sure they can, as long as they're not listening on the same IP addresses.
1184678523 M * harry damn... too late ;)
1184678543 M * markus__ I do, I do.... just wondering :)
1184678545 M * ktwilight_ daniel_hozac, odd, i have specified different ips. will double check again
1184678580 M * daniel_hozac note that if you don't specifically limit daemons running on the host, they'll grab all IPs, rendering the guests unable to bind anything...
1184678718 M * ktwilight_ yup
1184678792 M * markus__ I'm currently limiting a vserver with fill-rate/interval to 1/8, so 1/8th of the whole CPU resources. The system has 4 CPUs. However, when I assign this value to another vserver, I get the impression the other server only gets 10MHz (yes, 10). Incredibly slow. I've been going through http://linux-vserver.org/CPU_Scheduler but it's not that straightforward to me how I can easily assign...
1184678793 M * daniel_hozac AStorm: the TB scheduler lets you do pretty much anything. how would you achieve the same?
1184678793 M * markus__ ..."portions" of all CPUs to individual vservers.
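With sched_hard, a guest's long-run share works out to roughly fill-rate/interval of one CPU (per CPU, since the token buckets are per-CPU as discussed earlier), so markus__'s 1/8 setting means 12.5% of a CPU, not a clock frequency. A quick check:

```shell
# Long-run CPU share implied by a token-bucket setting: fill-rate / interval.
awk 'BEGIN { printf "fill-rate 1 / interval 8 => %.1f%% of one CPU\n", 1 / 8 * 100 }'
# prints: fill-rate 1 / interval 8 => 12.5% of one CPU
```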
1184678833 M * ktwilight_ weird thing is, on the host, it has a couple of vhosts, they are configured to a specific ip address on the same port, but on netstat, it's still grabbing on all ip addresses, which is weird. no? 1184678833 M * daniel_hozac AStorm: (no, i'm not actually interested in a response.) 1184678834 M * AStorm daniel_hozac, I don't need anything, I just need a fair cpu limiter :> 1184678848 M * AStorm in percentage 1184678856 M * daniel_hozac AStorm: which is easily achieved with it. 1184678861 M * otaku42 daniel_hozac: thanks, doing so now 1184678891 M * daniel_hozac markus__: hmm? 1184678904 M * AStorm daniel_hozac, except it's a hack 1184678913 M * AStorm it piggybacks on top of the vanilla scheduler 1184678926 M * daniel_hozac it _extends_ the vanilla scheduler, yes. 1184678940 M * AStorm s/extends/breaks in the usual case/ 1184678946 M * AStorm :> 1184678949 M * daniel_hozac like the rest of Linux-VServer extends vanilla. 1184678950 J * lilalinux_ ~plasma@dslb-084-058-251-021.pools.arcor-ip.net 1184678952 M * daniel_hozac how does it break? 1184678954 M * daniel_hozac show me one case. 1184678969 M * AStorm daniel_hozac, changes interactivity expectations of the non-vserver load 1184678971 M * markus__ daniel_hozac: did I write completely confusing stuff? 1184678976 M * AStorm it shouldn't touch that at all 1184679011 M * daniel_hozac markus__: i'm not sure what you want to achieve, or what you think you have achieved. 1184679028 M * daniel_hozac AStorm: a guest is not one process. we're not UML or qemu. 1184679032 M * markus__ daniel_hozac: I wanted to give 1/8 of all CPU resources to each of the two vservers ... 1184679041 M * daniel_hozac AStorm: feel free to use one of those. 1184679041 M * AStorm daniel_hozac, I know, but it still shouldn't break other apps 1184679050 M * daniel_hozac there should be no other apps. 1184679051 M * markus__ daniel_hozac: reading through the page I thought I was doing this ... 
1184679053 M * AStorm make it a scheduling class 1184679053 M * daniel_hozac the host should be minimal. 1184679065 M * AStorm daniel_hozac, blah blah, blah and... blah :-) 1184679075 M * AStorm mark tasks running in a vserver with another scheduling class 1184679077 M * daniel_hozac AStorm: agreed. 1184679087 M * daniel_hozac AStorm: so lets just agree to disagree. 1184679090 M * AStorm :> 1184679094 M * AStorm ok 1184679103 M * AStorm still, you'll have to rewrite the scheduler for 2.6.23 1184679140 M * daniel_hozac markus__: and you feel that's not happening right now? 1184679249 M * daniel_hozac AStorm: we'll also have to use the user namespace. it's called progress. 1184679264 M * AStorm :> 1184679286 M * AStorm I can't wait to see proper fair "per-vserver" scheduling 1184679294 M * daniel_hozac we already have that. 1184679298 M * AStorm almost 1184679309 M * daniel_hozac no, we do. 1184679310 M * AStorm except it is another scheduler altogether 1184679319 M * markus__ daniel_hozac: exactly .. because the second vserver I gave the same ratio didn't even start in a meaningful time. Just starting syslogd took minutes before the next process was started. Removing the scheduler ratio was like lightspeed 1184679320 M * AStorm I'm talking like with the normal one 1184679321 Q * lilalinux Ping timeout: 480 seconds 1184679348 M * daniel_hozac markus__: what kernel is that with? 1184679350 M * ktwilight_ ah got it. i also need to define the ip address in ports.conf :) 1184679352 M * AStorm will be easy (the infrastructure is in place for per-group scheduling) 1184679362 M * markus__ daniel_hozac: 2.6.18-4-vserver-686 1184679396 M * markus__ daniel_hozac: do you know of a way within a vserver to verify how/if the scheduling limits really work? with memory limitation it's easy, just calling 'free' 1184679398 M * daniel_hozac markus__: it seems that one has a broken scheduler. might want to try with vanilla. 1184679420 M * markus__ mhmm .. 
you mean 2.6.18 not from debian with vanilla .. or the latest 2.6 ... ? 1184679436 M * daniel_hozac cat /proc/virtual//sched on the host shows you lots of information. 1184679443 M * daniel_hozac well, 2.6.20 at least. 1184679682 M * markus__ daniel_hozac: hmm .. interesting .. both of them look the same :) 1184679684 M * markus__ cat /proc/virtual/4/sched 1184679685 M * markus__ Token: 0 1184679687 M * markus__ FillRate: 1 1184679688 M * markus__ Interval: 8 1184679690 M * markus__ cat /proc/virtual/40/sched 1184679691 M * markus__ Token: 0 1184679693 M * markus__ FillRate: 1 1184679694 M * markus__ Interval: 8 1184679696 M * markus__ hmm .. same numbers .. interesting, because I only told the '40' context what ratio to have. Must be a default 1184679728 M * markus__ daniel_hozac: I see .. well, I'm only running two vservers with 2.6.18 and vserver 2.0.0 ... do you know of possible upgrade problems when changing to a recent kernel including a recent vserver version? 1184679915 Q * slack101 Ping timeout: 480 seconds 1184679936 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com 1184680154 M * markus__ Do I need to upgrade util-vserver, too, when upgrading kernel/vserver patch? 1184680222 M * AStorm markus__, usually not 1184680231 Q * flea Quit: brb 1184680454 M * markus__ ah, k 1184680463 A * markus__ is a bit frightened upgrading kernels and such ... 1184680718 J * onox ~onox@kalfjeslab.demon.nl 1184680745 M * markus__ Was there anything wrong using 2.6.21 instead of 2.6.20 ... ? 1184680776 M * daniel_hozac nope. 1184680796 M * daniel_hozac you'll want to make sure you have at least util-vserver 0.30.212, but 0.30.213 is preferred... 1184680894 M * markus__ I have 0.30.212 .. 
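markus__'s 1/8 ratio above can be sanity-checked with back-of-envelope token-bucket arithmetic. This is a sketch of how the TB scheduler is usually described, not an authoritative account: a guest is granted fill-rate tokens every interval timer ticks, and one token buys roughly one tick of CPU time, so the sustained share is fill-rate/interval. The HZ value of 250 below is only an assumed example (it depends on the kernel's CONFIG_HZ).

```shell
#!/bin/sh
# Back-of-envelope TB scheduler arithmetic (sketch, not authoritative).
# Assumptions: one token ~ one timer tick of CPU; HZ=250 is an example.
HZ=250        # kernel timer frequency (CONFIG_HZ) -- assumed
FILLRATE=1    # tokens granted per interval (markus__'s setting above)
INTERVAL=8    # interval length in ticks

# tokens granted per second of wall time:
awk -v hz="$HZ" -v f="$FILLRATE" -v i="$INTERVAL" \
    'BEGIN { printf "tokens/sec: %.3f\n", f * hz / i }'

# sustained CPU share, independent of HZ:
awk -v f="$FILLRATE" -v i="$INTERVAL" \
    'BEGIN { printf "cpu share:  %.4f\n", f / i }'
```

With fill-rate 1 and interval 8 the share works out to 0.125, i.e. the 1/8 markus__ intended; tokensmin and sched_hard then decide how strictly a guest is held to that share.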
1184681202 M * otaku42 l8er 1184681206 P * otaku42 1184682178 Q * Piet Quit: Piet 1184682277 J * UukGoblin ~jaaa@sr-fw1.router.uk.clara.net 1184682314 M * UukGoblin my XFS filesystem is fucked due to several kernel hangs after I upgraded to 2.6.19 and then 2.6.20 1184682325 M * UukGoblin using vserver+grsec 1184682346 M * UukGoblin I'll give you more details when I recover the FS 1184682361 M * UukGoblin and I'll check whether it will still die on newer versions 1184682377 M * AStorm there is a known bug in plain 2.6.20 and XFS 1184682382 M * AStorm upgrade to .2 1184682415 M * UukGoblin oh, that's good news 1184682438 M * UukGoblin I'll have a look what version it was exactly later on 1184682679 A * markus__ compiles his kernel *crossing fingers* 1184682753 M * UukGoblin btw, xfs_repair and xfs_check are randomly dying, segfaulting or aborting ;-) 1184682785 M * UukGoblin and now my FS has a ?--------- ? ? ? ? /lost+found 1184682807 M * UukGoblin wonder how much data I'll be able to recover when my new disks arrive :-) 1184683000 Q * Hollow Remote host closed the connection 1184683035 J * Hollow ~hollow@proteus.croup.de 1184683049 Q * FireEgl Read error: Connection reset by peer 1184683093 J * Pazzo ~ugelt@195.254.225.136 1184683107 A * UukGoblin wonders how would vserver+grsec work with ZFS on FUSE... 1184683368 M * Ramjar everytime I stop the vserver guest i get Deactivating swap. The funny part is that I havn't active swap anywhere. my /etc/fstab has only one line (/dev/hdv1 / ufs defaults 0 0) and when i run swapoff -a (with user: su) i get Not superuser. 
1184683406 Q * blizz Ping timeout: 480 seconds 1184683454 M * AStorm UukGoblin, ZFS on FUSE is still in devel 1184683461 M * AStorm it's not supposed to work well at all yet 1184683731 M * UukGoblin Ramjar, broken distro scripts in the guest system :-> 1184683751 M * Ramjar hmm 1184683752 M * UukGoblin maybe not broken but not clever enough to figure they're under a vserver 1184683766 M * UukGoblin AStorm, boo 1184683775 M * Ramjar is a new installation with Hollow stage4. cant belive it :) 1184683775 M * UukGoblin I need a decent filesystem ;-/ 1184683836 M * UukGoblin Not superuser possibly means that the vserver doesn't have the capability of playing with swap, which is good 1184683882 M * Ramjar well. its funny coz i cant see anywhere that it is defined to set swap active. 1184683910 M * UukGoblin check the scripts, have a look at how they decide whether there is swap somewhere or not 1184683951 Q * transacid Ping timeout: 480 seconds 1184683989 J * FireEgl FireEgl@FireEgl.CJB.Net 1184684015 M * AStorm UukGoblin, it's gentoo baselayout :P 1184684130 M * UukGoblin I don't know gentoo 1184684215 Q * slack101 Ping timeout: 480 seconds 1184684236 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com 1184684298 J * Baby ~miry@195.37.62.208 1184684358 J * transacid ~transacid@transacid.de 1184686138 M * markus__ thanks for everyone crossing fingers :) new kernel seems to work at a first glance 1184686229 M * markus__ Can I change the scheduler properties 1184686240 M * markus__ fillrate/interval also while the vserver is running? 
1184686271 J * arachnis1 ~arachnist@088156185052.who.vectranet.pl 1184686307 J * sid3wind1 luser@bastard-operator.from-hell.be 1184686313 J * pusling_ pusling@88.212.70.38 1184686323 J * kaner_ kaner@strace.org 1184686327 J * gerrit_ ~gerrit@c-67-169-199-103.hsd1.or.comcast.net 1184686362 Q * arachnist Read error: Connection reset by peer 1184686362 Q * sid3windr Read error: Connection reset by peer 1184686362 Q * kaner Remote host closed the connection 1184686362 Q * pusling Read error: Connection reset by peer 1184686362 Q * gerrit Read error: Connection reset by peer 1184686476 N * pusling_ pusling 1184686529 J * HeinMueck ~Miranda@dslb-088-064-008-142.pools.arcor-ip.net 1184686812 J * stefani ~stefani@flute.radonc.washington.edu 1184687113 M * daniel_hozac markus__: yes, with vsched. 1184687203 M * markus__ daniel_hozac: will this just change runtime and also write to /etc/ or just runtime and the rest is manual? 1184687230 J * bzed ~bzed@dslb-084-059-123-118.pools.arcor-ip.net 1184687260 M * daniel_hozac just runtime. 1184687510 Q * lilalinux_ Remote host closed the connection 1184687517 M * AStorm Heh, not funny 1184687528 M * AStorm I have to do chmod/chown before unification 1184687543 M * AStorm I've got a nice OOPS in the other combo 1184687586 M * daniel_hozac where is it? 1184687588 J * lilalinux ~plasma@dslb-084-058-251-021.pools.arcor-ip.net 1184687618 M * AStorm wait, pasting 1184687685 M * AStorm http://pastebin.ca/623413 1184687717 M * daniel_hozac filesystem? 1184687729 M * AStorm ext2, as you can see 1184687750 M * AStorm vs2.2.0-rc5 1184687776 M * AStorm for 2.6.22 1184687791 M * daniel_hozac and taintaed. 1184687793 M * daniel_hozac -a 1184687798 M * AStorm yep, but that doesn't matter 1184687802 M * daniel_hozac and not vanilla. 1184687806 M * AStorm that too 1184687811 M * daniel_hozac can't really say anything about it then. 
1184687814 M * AStorm but that doesn't matter either :P 1184687820 M * AStorm the trace should tell you enough :P 1184687828 M * daniel_hozac how? i'm not psychic. 1184687842 M * daniel_hozac generic_file_sendpage doesn't call any function pointer. 1184687852 M * daniel_hozac so there's no reason for RIP to be NULL. 1184687853 M * AStorm it failed on sys_fchmodat 1184687867 M * daniel_hozac so? 1184687874 M * daniel_hozac chmod works fine here. 1184687884 M * AStorm do you have 2.6.22-vs2.2.0-rc5? 1184687891 M * AStorm are we talking about that? 1184687894 M * daniel_hozac no. 1184687897 M * AStorm hmm 1184687899 Q * cluk Quit: Ex-Chat 1184687905 M * AStorm I remember you did some patching for that 1184687910 M * daniel_hozac but COW is the same basically across all kernels. 1184687943 M * AStorm hint - the device is almost full 1184687993 M * AStorm I'll better check the code 1184688018 M * daniel_hozac since no one else can, sounds like a plan 1184688040 M * AStorm q: where should I look at for that cow_break_link function? 1184688098 M * AStorm ok, found it :P 1184688109 M * daniel_hozac grep is hard to use sometimes... 1184688248 Q * ensc Ping timeout: 480 seconds 1184688361 M * AStorm hmm 1184688370 M * AStorm nothing obvious, unless 1184688462 M * AStorm I guess it might be cow_check_and_break 1184688464 M * Wonka there are cases where grep doesn't quite get it... perl -ne 'm// && print;' is for those :) 1184688496 J * bonbons ~bonbons@2001:5c0:85e2:0:20b:5dff:fec7:6b33 1184688501 M * AStorm first of all, #ifdef is missing there :P 1184688510 Q * slack101 Ping timeout: 480 seconds 1184688511 M * daniel_hozac no, it's not. 1184688531 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com 1184688608 M * AStorm daniel_hozac, it is, at sys_fchmodat 1184688648 M * daniel_hozac no. 1184688652 M * AStorm hmm? 1184688661 M * AStorm in vs2.2.0_rc5 for 2.6.22? 1184688670 M * daniel_hozac cow_check_and_break assumes the -EROFS checks. 
1184688762 J * ensc ~irc-ensc@p54B4E765.dip.t-dialin.net 1184688851 M * AStorm that's ok, but the COWBL ifdef is missing from it (unrelated to my OOPS) 1184688871 M * daniel_hozac from what? 1184688912 M * AStorm sys_fchmodat 1184688929 M * AStorm it uses cow_check_and_break, which isn't ifdeffed with COWBL 1184688929 M * daniel_hozac _it_shouldn't_be_ifdefed. 1184688935 M * AStorm hmm? why? 1184688942 M * daniel_hozac because it does the -EROFS checks. 1184688988 M * AStorm hmm, is IS_COW(foo) == false then? 1184689013 M * daniel_hozac yes. 1184689021 M * AStorm ah, so it's correct 1184689142 M * AStorm hmm, why path_lookup return values aren't checked in cow_break_link? 1184689157 M * AStorm kmalloc isn't checked either 1184689201 M * daniel_hozac hint: update your kernel. 1184689208 M * AStorm ? 1184689210 M * daniel_hozac though you're right, i missed kmalloc. 1184689234 M * daniel_hozac 2.2.0.2 checks the return values of everything else. 1184689240 M * AStorm Mhm 1184689245 M * AStorm is there 2.2.0.2 for 2.6.22? 1184689253 Q * ensc Ping timeout: 480 seconds 1184689262 M * daniel_hozac no, there's 2.6.22.1-vs2.2.0.2-rc1 1184689270 M * AStorm should work too, ok 1184689304 M * AStorm an incremental between vs2.2.0-rc5 and this would be the best 1184689315 M * daniel_hozac so, make one. 1184689324 M * AStorm i hope interdiff doesn't punt on me 1184689415 M * daniel_hozac it should. 1184689434 M * AStorm 1 out of 1 hunk FAILED -- saving rejects to file /tmp/interdiff-1.FxivAZ.rej 1184689435 M * AStorm :/ 1184689439 J * ensc ~irc-ensc@p54B4E765.dip.t-dialin.net 1184689462 M * AStorm and it's version, blah 1184689471 M * AStorm I'll mangle that part of the patch 1184689537 N * Bertl_zZ Bertl 1184689579 M * daniel_hozac morning Bertl! 1184689594 M * AStorm hmm, the changes are minor 1184689600 M * Bertl daniel_hozac: yep, something like that ... 1184689620 M * Bertl AStorm: please stop confusing people regarding the TB scheduler 1184689627 M * AStorm Bertl, ? 
1184689646 M * AStorm I shall, thank you. 1184689651 M * Bertl AStorm: this is very likely a debian kernel issue (as the TB scheduler is working fine on mainline systems) 1184689666 M * AStorm most likely, debian patches the scheduler 1184689720 M * AStorm Bertl, could you provide a VServer patchset with the scheduler changes totally ripped out? 1184689723 M * Bertl AStorm: and you keep talking about 'your great scheduler enhancements' while I haven't seen anything useful from your side ... 1184689746 M * AStorm Not mine, I'm trying to "motivate" you into porting VServer to 2.6.23 :P 1184689753 M * Bertl AStorm: it is trivial to remove the TB scheduler changes, they are even in a separate patch 1184689764 M * AStorm Bertl, give me a link, I'd be grateful 1184689790 Q * markus__ Quit: ChatZilla 0.9.78.1 [Firefox 2.0.0.4/2007051502] 1184689792 M * Bertl AStorm: and trust me, the TB scheduler is the result of a lot of thinking and has become _very_ solid over the years 1184689838 M * AStorm it does work, yes 1184689852 M * AStorm but could use a cleanup for the new infrastructure 1184689854 M * Bertl AStorm: so either you provide something you can prove to be superior to the existing solution, or you stop whining about the 'poor design' (you do not understand) 1184689869 M * AStorm Not poor design, you certainly misunderstood me :> 1184689893 M * daniel_hozac so what does "hack" mean to you? 1184689909 M * AStorm daniel_hozac, hmm, it's just that piggybacking, sounds like one 1184689930 M * AStorm other than that, the scheduler is a clean entitlement-based one 1184689941 N * sid3wind1 sid3windr 1184689941 M * Bertl recently we have a bunch of folks here who spend most of their time talking of redesigns instead of doing things ... 1184689947 M * AStorm Hmm 1184689951 Q * Pazzo Quit: ... 1184689958 M * daniel_hozac indeed... it is a meritocracy after all :) 1184689975 M * Bertl :) anyway, I have to leave now ... will be back later 1184689988 M * daniel_hozac cya! 
1184689989 M * AStorm Bertl, as I said, I'm unable to port VServer's scheduler to latest git on my own, would like a hand 1184689991 M * AStorm cya 1184689992 M * Bertl daniel_hozac: you got my 'feature request'? 1184689996 M * daniel_hozac yeah. 1184690014 M * daniel_hozac will add them later today. 1184690092 M * Bertl excellent! tx a lot! 1184690302 M * AStorm daniel_hozac, do you have a split patch for 2.6.22 too? 1184690322 M * daniel_hozac _i_ don't have split patches at all :) 1184690331 M * AStorm hmm 1184690348 M * Bertl k, later ... 1184690354 N * Bertl Bertl_oO 1184690544 J * Gerardo__ ~Gerardo@host170.190-31-8.telecom.net.ar 1184690938 Q * Gerardo Ping timeout: 480 seconds 1184691361 M * AStorm Hmm, I'll better go write per-xid proc hiding some time soon. 1184692169 Q * TheSeer Quit: Client exiting 1184693893 J * Ashsong ~chatzilla@orchard.laptop.org 1184694343 Q * fosco Ping timeout: 480 seconds 1184694490 J * fosco fosco@konoha.devnullteam.org 1184694638 Q * lilalinux Ping timeout: 480 seconds 1184694665 Q * cedric Quit: cedric 1184695353 M * matti Hi Bertl :) 1184695364 M * daniel_hozac _oO ;) 1184695429 M * matti Ops. 1184695431 M * matti Hi Daniel :) 1184695589 M * daniel_hozac hey matti 1184697048 N * DoberMann[PullA] DoberMann 1184697734 N * DoberMann DoberMann[PullA] 1184698408 J * Roey ~katz@dsl093-083-226.wdc1.dsl.speakeasy.net 1184698419 M * Roey hi all! 1184698426 M * Roey Hey Bertl_oO ;) 1184698461 M * Roey So I am getting this error when I try to upgrade GCC on my Debian system (which lives in a VServer instance) http://rafb.net/p/Zl5QgM89.html -- is it related to VServer, you think? 1184698583 M * daniel_hozac and what kind of system is it? 1184698620 M * AStorm Debian, heh 1184698654 M * AStorm it's probably because the script can't read enough proc to detect CPU type 1184698690 M * daniel_hozac i'm leaning more towards it being an x86_64 system running an x86 guest. 
1184698721 M * Roey hey daneil :) 1184698722 M * Roey *daniel 1184698725 M * Roey ok, it's like this: 1184698736 M * Roey upon further digging I found that the install script does a uname -m 1184698742 M * Roey which on my system (xeon) 1184698745 M * Roey gives back i386 1184698756 M * daniel_hozac what type of xeon is that? 1184698771 M * AStorm Roey, it's probably because the kernel is generic? 1184698782 M * AStorm Debian likes to use these (optimised for nothing) 1184698797 M * Roey AStorm, daniel_hozac: /proc/cpuinfo gives: model name: Intel(R) Xeon(TM) CPU 2.80GHz 1184698806 M * Roey And the kernel is not generic; I hand-rolled it 1184698808 M * AStorm Roey, uname -m uses the arch stored in the kernel 1184698813 M * Roey oh 1184698813 M * AStorm so it's that, heh 1184698825 M * Roey any way to trick the install script then? 1184698862 M * daniel_hozac sure. 1184698873 M * daniel_hozac echo i686 > /etc/vservers//uts/machine 1184698878 M * Roey oh 1184698881 M * Roey hmm 1184698892 M * Roey lemme see 1184698925 M * Roey daniel_hozac: I'd have to bring it down first, wouldn't I 1184698960 M * daniel_hozac vuname can change it while the guest is running. 1184698979 M * mjt hmm. I've a kernel compiled for i486, and when it's running on xeon it reports i686 in uname -m 1184699003 M * daniel_hozac as it should. 1184699008 M * mjt (can't say for the reverse as i686 kernel does not boot on i486 ;) 1184701380 M * Roey ok 1184701381 M * Roey thanks guys 1184701383 M * Roey it helped 1184701388 M * Roey now I can dist-upgrade fine 1184701420 Q * slack101 Ping timeout: 480 seconds 1184701441 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com 1184701495 M * Roey daniel_hozac, mjt, slack101: Where do you guys see room for growth with vserver? i.e., how will you push it? 1184701515 M * daniel_hozac what? 1184701536 M * Roey in light of stuff like openvz 1184701547 M * daniel_hozac what about it? 
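Roey's dist-upgrade fix above hinges on where `uname -m` gets its answer: it reports the machine string carried by the kernel (or a per-guest override), not what /proc/cpuinfo says about the CPU. A small sketch; the guest name "myguest" in the comments is illustrative, and the echo line is the override daniel_hozac quotes:

```shell
#!/bin/sh
# uname -m reports the kernel's machine string, not the CPU model --
# which is why a hand-rolled i386 kernel answers i386 even on a Xeon:
uname -m

# Per-guest override from the conversation above ("myguest" is an
# illustrative name); it is picked up when the guest (re)starts:
#   echo i686 > /etc/vservers/myguest/uts/machine
# util-vserver's vuname can make the same change on a running guest.
```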
1184701551 M * Roey you don't want to get vserver sidelined 1184701561 M * daniel_hozac of what? 1184701566 M * Roey ok 1184701572 M * Roey openvz and vserver are substitutes 1184701590 M * slack101 Roey, do you work for the openVZ people or something =D 1184701591 M * daniel_hozac well... depends on your point of view, i suppose. 1184701595 M * Roey openvz's coders are going on an advertisement blitz for it 1184701605 M * Guy- Roey: this isn't a commercial project 1184701605 M * Roey slack101: no, I just wnat to see vserver dominate :} 1184701612 M * Roey Guy-: I know, I know 1184701613 M * slack101 needs money 1184701615 M * slack101 thats all 1184701619 M * Roey oh.. 1184701620 M * Roey ok. 1184701630 Q * bragon Ping timeout: 480 seconds 1184701631 M * Guy- Roey: I don't think the developers care much whether there are many users or not, as long as they themselves can use it 1184701655 M * Roey Guy-: ok. There's always one who'll respond to me like that. Whatever ;) 1184701658 M * Guy- Roey: if vserver were hugely popular, that would just mean more dumb questions here :) 1184701677 M * daniel_hozac indeed. 1184701680 M * Guy- Roey: thus less time coding (bliss) and more time supporting (horror) 1184701711 M * slack101 Vserver needs money ........ 1184701720 M * daniel_hozac so you keep saying. 1184701727 M * slack101 yep 1184701730 M * Guy- Roey: of course, I'm not a developer, I'm just trying to articulate what I think the developers might possibly feel 1184701747 M * daniel_hozac with no solid arguments as to why, or any indication of it heading Bertl's way... 1184701791 M * Guy- Roey: I know this concept of not caring whether you have many users is hard to grasp if you come from commercial software development 1184701841 M * daniel_hozac then again, OLPC is using Linux-VServer. that's quite a few users... ;) 1184701912 M * slack101 OLPC? 1184702095 M * Guy- One Laptop Per Child? 1184702172 M * Roey Guy-: eh? 
1184702173 M * Roey dude 1184702192 M * Roey I'm a hard-core debian linux/kde/python user 1184702197 M * Roey don't get elitist on me. 1184702219 M * Roey Guy-: I've been using vserver now for a bit over a year 1184702223 M * Roey Guy-: I love it 1184702226 M * Guy- Roey: "elitism is bad, mmkay?" :) 1184702229 M * Roey ;) 1184702238 M * Roey Guy-: YES, a smile :) 1184702242 M * daniel_hozac Roey: so, spread the word. 1184702246 M * Roey I do 1184702312 M * Guy- Roey: I'm sorry if I injured your pride, it's just that it's mostly commercial types who don't understand how a project can not care about marketing itself 1184702320 M * Roey pride? 1184702339 M * Guy- well, you did take offense at my remark 1184702340 M * Roey heheh :) 1184702349 M * Roey nahh it's all good :) 1184702353 M * Guy- OK then :) 1184702355 M * Roey :) 1184702358 M * Roey ok 1184702371 M * Roey so, I really like VServer, 1184702392 M * Guy- good on yer :) so do I :) 1184702394 M * Roey but I just don't like how other projects try and 1184702399 M * Roey what's the word 1184702406 M * Roey drown out? other projects? 1184702414 M * Roey like Xen vs. everything else 1184702435 M * Guy- oh, sure, Xen does get its share of hype 1184702441 M * Guy- but in a way, I think that's good 1184702454 M * Guy- it draws away people who're easily influenced by hype 1184702459 M * Roey or Xen vs. VServer (people get used to thinking that Xen is the be-all end-all solution to running multiple "instances" on shared metal, regardless of the fact that vserver and xen complement each other, not substitute for one another) 1184702463 M * AStorm I'd really love to get some light jail to be in the mainline 1184702465 M * Roey Guy-: ahh 1184702467 M * AStorm a'la BSD one 1184702469 M * AStorm maybe better 1184702474 M * Guy- those are not typically the people who ask smart questions, are they 1184702495 M * AStorm VServer fits the job, but its anti-inclusion policy... 
:| 1184702497 M * daniel_hozac AStorm: wait a couple of more releases. 1184702506 M * AStorm Will do :-) 1184702521 M * daniel_hozac pid spaces should be available in 2.6.24 or 2.6.25, i guess. 1184702535 M * AStorm 2.6.23 is probable 1184702542 M * daniel_hozac (given that the patchsets have been around since 2.6.19... it's about time) 1184702545 M * daniel_hozac i don't think so. 1184702550 M * daniel_hozac the patchsets aren't ready. 1184702554 M * AStorm mhm. Is it in -mm already? 1184702565 M * AStorm (or was it?) 1184702576 M * daniel_hozac i believe it was at some point. 1184702677 M * daniel_hozac i have 900 unread emails in my containers folder though, so don't trust me :) 1184702714 M * AStorm :> 1184703001 M * HeinMueck Hi all! 1184703025 M * HeinMueck Can anyone explain this: 1184703026 M * HeinMueck top - 22:08:42 up 1 day, 4:35, 1 user, load average: 100.00, 99.97, 99.65 1184703026 M * HeinMueck Tasks: 104 total, 1 running, 102 sleeping, 0 stopped, 1 zombie 1184703026 M * HeinMueck Cpu(s): 0.3% user, 0.7% system, 0.0% nice, 99.0% idle 1184703045 M * daniel_hozac vps faux | grep D 1184703130 J * Piet hiddenserv@tor.noreply.org 1184703133 M * HeinMueck lots of lines :) 1184703139 M * daniel_hozac so, that's why. 1184703139 M * HeinMueck What should I look for? 1184703146 M * daniel_hozac dmesg | tail 1184703151 M * daniel_hozac might shed some light on the issue. 1184703181 M * daniel_hozac probably one of your filesystems has died on you, or you had a kernel crash with some important lock held. 1184703223 M * HeinMueck __alloc_pages: 0-order allocation failed (gfp=0x1d2/0) 1184703241 M * daniel_hozac out of RAM? 1184703245 M * HeinMueck thats the main content of the dmesg 1184703251 M * HeinMueck well, I think so 1184703268 M * daniel_hozac so, i'd assume that's why... 
1184703271 M * HeinMueck one of the vservers is a mailserver and I think it's the clamav 1184703327 M * HeinMueck When I kill -9 a process nothing happens - is there a weapon bigger than kill -9? 1184703351 M * daniel_hozac it's stuck in kernel code, there's nothing you can do. 1184703363 M * daniel_hozac you might want to get a trace of one of those processes. 1184703388 M * daniel_hozac strace initially, and then with the sysrq-trigger... 1184703426 M * HeinMueck Well, looking at these two lines makes me shiver 1184703427 M * HeinMueck Mem: 256832k total, 253504k used, 3328k free, 2280k buffers 1184703427 M * HeinMueck Swap: 498004k total, 497956k used, 48k free, 13276k cached 1184703464 M * daniel_hozac you just have 256 MiB RAM? 1184703472 M * daniel_hozac no wonder you ran out :) 1184703487 M * HeinMueck It seems to be a friend of mine :) 1184703505 M * HeinMueck I have this machine for 6 years now and it never had any problems. 1184703537 M * HeinMueck Now I'm relaying mail for a friend and thats where the problems started :) 1184703572 M * HeinMueck I think I will move the mail server to another machine tomorrow 1184703624 M * Guy- I don't think strace will work very well on a process in D state 1184703643 M * Guy- you might try "cat /proc/PID/wchan; echo" to see what syscall it's caught in 1184703658 M * Guy- and by all means, add more swap 1184703667 M * mjt at least it'll show the syscall where the process went awa^Winto kernel 1184703678 M * Guy- Kernel Must Have Virtual Memory :) 1184703779 J * ord ~jcurrey@67.11.10.179 1184704219 M * ord How do I umount a mount in a vserver from the host without stopping and changing fstab? vnamespace -e boris umount /tmp; vserver boris exec /bin/cat /proc/mounts | grep tmp #gives none /tmp tmpfs rw,nodev 0 0 1184704280 M * daniel_hozac vnamespace -e boris umount -n /vservers/boris/tmp 1184704314 P * stefani I'm Parting (the water) 1184704479 M * ord -n is normally to not update the mtab? I think its worse... 
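HeinMueck's load average of 100.00 next to a 99% idle CPU is exactly what daniel_hozac's `vps faux | grep D` is probing for: processes in uninterruptible sleep (state D) each add 1 to the load average whether or not they consume any CPU. A plain-Linux version of the same diagnosis (vps is util-vserver's context-aware ps wrapper; ordinary ps works on the host):

```shell
#!/bin/sh
# Find processes stuck in uninterruptible sleep (state D); each one
# inflates the load average even while the CPU sits idle:
ps -eo pid,stat,comm | awk 'NR > 1 && $2 ~ /^D/ { print }'

# Guy-'s wchan trick: which kernel function is a process sleeping in?
# (using our own shell's PID here just so the command has a valid target)
cat "/proc/$$/wchan"; echo
```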
the program that needs more space on the tmp has a file open on tmp, and therefore I cannot umount tmp :-( 1184704501 M * daniel_hozac exactly. 1184704574 M * ord is resizing tmpfs while mounted possible? I know I am screwed and am delaying the loss of data... 1184704586 M * daniel_hozac yep 1184704610 M * daniel_hozac usually kinda tricky though because mount attempts to be clever. 1184704635 M * daniel_hozac but give it a whirl, vnamespace -e boris mount -n -o remount,size=512m /vservers/boris/tmp 1184704741 M * HeinMueck Thanks daniel_hozac and Guy - will come back with a real question next time ;-) 1184704759 M * Guy- HeinMueck: my pleasure 1184704838 Q * mEDI_S Quit: mEDI_S 1184704876 J * mEDI_S ~medi@snipah.com 1184704975 M * HeinMueck I remember that I did a Linux training years ago, on a 386 with 128 meg. The full course, 20 people, had to telnet to that machine and get a file from another machine. I thought it would blow up in the next minute, took 2 minutes for some of the guys to see the char they typed. 1184704978 M * HeinMueck But: 1184704993 J * kwowt ~quote@pomoc.ircnet.com 1184704995 M * kwowt Hi 1184705002 M * HeinMueck After 60 Minutes or so every job was done and the machine went on working like a charm 1184705036 M * kwowt I ran out of disk space for vserver users on hda1, but i got more space on hda3, how do i add a user with root directory on hda3 instead of the current full hard drive? 1184705088 M * daniel_hozac first, make sure you set the barrier on the directory you want to have the guests in with setattr --barrier 1184705103 M * daniel_hozac then, just use --rootdir when you build your guests. 1184705125 M * kwowt setattr ? 1184705142 M * kwowt like 1184705145 M * kwowt 'setattr --barrier /vserver/' ? 1184705149 M * kwowt if /vserver is on hda3 1184705153 M * daniel_hozac sure. 1184705163 M * kwowt but that won't affect the currently added vservers right? 
1184705175 M * ord vnamespace -e boris mount --move /var/lib/vservers/boris/tmp /var/lib/vservers/boris/var/spud 1184705182 M * ord that worked... 1184705187 M * daniel_hozac kwowt: no. 1184705190 M * kwowt lemme try:p 1184705306 M * ord Thanks daniel for the clues... 1184705387 M * kwowt daniel_hozac 1184705397 M * kwowt i'm getting errors 'no such file or directory' while tryin to create vps 1184705404 M * kwowt ile or directory 1184705406 M * kwowt woops 1184705406 M * kwowt :) 1184705415 M * daniel_hozac which files? 1184705419 M * kwowt 'cannot open - no such file or directory' 1184705425 M * kwowt umm, files needed for the vserver 1184705427 M * kwowt his system 1184705456 M * daniel_hozac which files would that be? 1184705461 M * kwowt the whole system 1184705466 M * kwowt the one which its tryin to untar 1184705467 M * kwowt from stage3 1184705511 M * daniel_hozac hint: pasting the errors on say paste.linux-vserver.org is always a good idea... 1184705518 M * kwowt too many of them 1184705524 M * kwowt the whole systme is tryin to untar i guess 1184705549 M * daniel_hozac so, paste the command and the first couple of errors. 1184705561 M * kwowt vserver tomaz2 build --context 1007 --initstyle plain --rootdir /vservers -m template -- -d gentoo -t /root/stage3-x86-20070321.tar.bz2 1184705570 M * kwowt hm maybe cuz i ran out of space on hda1 1184705587 Q * HeinMueck Quit: Aah! 1184705598 M * daniel_hozac and /vservers is the new mount? 1184705601 M * kwowt yes 1184705652 M * daniel_hozac so what _is_ the error you're getting? 
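ord's /tmp rescue above, gathered in one place. The vnamespace invocations are quoted from the log and need util-vserver plus the running guest "boris", so they appear here as comments; the single live command only confirms that the running kernel supports tmpfs at all.

```shell
#!/bin/sh
# Recap of the guest-/tmp rescue above (commands quoted from the log;
# they require util-vserver, so they are shown as comments):
#
#   # unmount the guest's /tmp from the host, inside the guest's
#   # namespace (-n skips mtab, which that namespace does not share):
#   vnamespace -e boris umount -n /vservers/boris/tmp
#
#   # or grow the mounted tmpfs in place:
#   vnamespace -e boris mount -n -o remount,size=512m /vservers/boris/tmp
#
#   # or move the mount out of the way entirely, as ord did:
#   vnamespace -e boris mount --move /var/lib/vservers/boris/tmp \
#       /var/lib/vservers/boris/var/spud
#
# tmpfs supports resizing via remount on any reasonably modern kernel:
grep -c tmpfs /proc/filesystems
```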
1184705669 M * kwowt wait lemem clear out some space and try again 1184705685 Q * slack101 Ping timeout: 480 seconds 1184705706 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com 1184705889 M * kwowt oh 1184705892 M * kwowt maybe it was just a typo 1184705893 M * kwowt :p 1184705975 M * kwowt yep 1184705976 M * kwowt works now 1184705979 M * kwowt a typo :p 1184706252 J * Aiken ~james@ppp121-45-220-241.lns2.bne1.internode.on.net 1184706292 Q * bonbons Quit: Leaving 1184706586 M * kwowt thanks 1184706586 M * kwowt =) 1184707171 Q * meandtheshell Quit: Leaving. 1184707289 J * stephan ~stephan@evilhackerdu.de 1184707294 M * AStorm I've rewritten vserver clone in Python 1184707303 M * stephan heya 1184707306 N * stephan blizz 1184707308 M * AStorm to support cross-device operation and exclude list 1184707318 M * AStorm hi 1184707329 M * blizz whops.. okay, simple question: how can i umount the /tmp partition (tmpfs from default fstab) inside a vserver from outside? 1184707336 M * blizz (without granting any capabilities) 1184707377 M * daniel_hozac blizz: check the logs about an hour ago. 1184707391 M * blizz ahh, i just saw a FAQ entry 1184707410 M * blizz my server running irssi was down the last few hours :-/ 1184707456 M * daniel_hozac AStorm: clone doesn't make sense across devices, that's what -m rsync is for... 1184707532 M * blizz i executed: vnamespace -e the_xid umount /vservers//tmp 1184707533 M * AStorm daniel_hozac, except that requires rsync :> 1184707546 M * AStorm and probably won't correctly copy FIFOs and other junk 1184707555 M * daniel_hozac so, use cp. 1184707560 M * AStorm daniel_hozac, well, yeah 1184707566 M * daniel_hozac no point in reinventing the wheel... 1184707576 M * AStorm I needed that exclude list 1184707597 M * AStorm that's the main reason for rewrite 1184707617 M * daniel_hozac again, not something that makes sense for "clone". 
1184707646 M * AStorm that's why it's disabled by default 1184707935 M * AStorm The idea is that I want to exclude some dirs from cloning, vclone doesn't have that feature yet 1184708028 M * daniel_hozac as i said, exclude lists doesn't make sense when you're cloning. 1184708216 M * daniel_hozac hmm, unless you're cloning a live guest, i guess. 1184708587 M * AStorm or unless I'm cloning one into itself 1184708599 M * AStorm (e.g. / into /vservers contained in it) 1184708616 M * AStorm or overlapping ones 1184708627 M * AStorm or a vserver with mounts active 1184708649 M * AStorm (though that one should be caught by XDEV code) 1184708722 M * daniel_hozac again, cross-device cloning isn't cloning, it's copying. 1184708800 M * AStorm yes 1184708813 M * AStorm still, it might be convenient 1184708827 M * AStorm not having to fish for not yet copied dirs 1184708869 M * daniel_hozac cp and rsync make it their business to copy things. 1184708875 M * daniel_hozac vclone will do one thing: clone. 1184708978 M * AStorm what about mixing these two? 1184708988 M * AStorm unify as much as possible, but copy the rest? 1184709005 M * daniel_hozac you realize that's what it does, right? 1184709030 M * AStorm ... Blah 1184709031 M * daniel_hozac it links the unified files, and copies the rest. 1184709043 A * AStorm stands corrected 1184709054 M * AStorm the only missing thing is the exclude list then 1184709060 M * AStorm I can add that easily to C code 1184709103 M * AStorm hmm, Python code takes 2700 characters :P 1184709119 M * daniel_hozac util-vserver already has support for that. 1184709154 M * AStorm Didn't see it (in clone) 1184709173 M * daniel_hozac vclone isn't using it yet. 1184709244 M * AStorm ah, no problem then 1184709256 M * AStorm I can extend the scripts to use it 1184709290 M * daniel_hozac what scripts? to use what? 
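[editor's note] daniel_hozac's summary — vclone "links the unified files, and copies the rest" — plus the exclude list AStorm wants can be sketched in a few lines of Python. This is a simplified illustration, not vclone itself: real vclone consults the unification (immutable-unlink) attributes to decide what to link, whereas this sketch just tries a hard link and falls back to a copy (which also covers the cross-device case):

```python
import os
import shutil

def clone_tree(src, dst, excludes=()):
    """Clone src into dst: hard-link where possible, copy otherwise.
    `excludes` holds paths relative to src that are skipped entirely.
    Simplified sketch of the link-or-copy behaviour discussed above."""
    excludes = {os.path.normpath(e) for e in excludes}
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        rel = "" if rel == "." else rel
        # Prune excluded directories so we never descend into them.
        dirs[:] = [d for d in dirs
                   if os.path.normpath(os.path.join(rel, d)) not in excludes]
        os.makedirs(os.path.join(dst, rel), exist_ok=True)
        for f in files:
            relf = os.path.normpath(os.path.join(rel, f))
            if relf in excludes:
                continue
            s, d = os.path.join(src, relf), os.path.join(dst, relf)
            try:
                os.link(s, d)       # "clone": share the inode
            except OSError:
                shutil.copy2(s, d)  # cross-device etc.: plain copy
```

On one filesystem every file ends up hard-linked (same inode as the source); across devices, or into e.g. `/vservers` nested under `/`, the fallback copy kicks in — which is exactly why daniel_hozac calls the cross-device case copying rather than cloning.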
1184709335 M * AStorm uhm, to support exceptions while cloning 1184709358 M * AStorm scripts and vclone, of course 1184709395 M * daniel_hozac patches accepted... 1184709433 M * AStorm Tomorrow. Today it's too late here :-) 1184709979 Q * slack101 Ping timeout: 480 seconds 1184710000 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com 1184710275 N * Bertl_oO Bertl 1184710294 M * Bertl evening folks! 1184710397 N * Gerardo__ EtherNet 1184710466 M * Bertl daniel_hozac: btw, I applied your patches for 2.6.2[012] (the cow fixes) but I adjusted them for 2.6.19.7, could you have a look at the 2.6.19.7 version (minor change, I hope, but I like better that way) and tell me if I missed something obvious? 1184710485 M * Bertl +it 1184710668 Q * EtherNet Quit: Abandonando 1184710878 M * daniel_hozac Bertl: getting rid of !old_file/!new_file? 1184710913 M * Bertl well, no, actually getting rid of a = c ? b : a :) 1184710934 M * daniel_hozac well, that too, but that's implied by the former. 1184710935 M * Ashsong Bertl: Good evening! 1184710942 M * Bertl evening Ashsong! 1184710965 M * Bertl daniel_hozac: but I got the feeling that we can drop the !*_file check too (at some point) 1184711004 M * Ashsong Bertl: We're bumping up against XID tagging again, but I misplaced the name of the mount option to use to control it. 1184711006 M * daniel_hozac yeah, i didn't see it returning NULL anywhere. 1184711013 M * Ashsong Can you refresh my memory, please? 1184711017 M * daniel_hozac Ashsong: -o tag 1184711078 M * Bertl Ashsong: again sharing directories between guests? 1184711106 M * Ashsong Bertl: Long run, no. 1184711118 M * Ashsong Short run - activities crash if they can't write to their log files 1184711126 M * Ashsong And those log files are presently stored in /home 1184711136 M * Ashsong And I don't want to change every activity while I keep developing new features. 1184711143 M * Bertl i.c. 
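[editor's note] The cleanup Bertl describes in the COW fixes — "getting rid of a = c ? b : a" — is a general readability point: a conditional expression that may reassign a variable to itself is clearer as a plain conditional. A language-agnostic illustration (shown in Python; the names are placeholders, not from the actual patch):

```python
def pick(a, b, c):
    """Equivalent of the C pattern `a = c ? b : a`, rewritten so the
    assignment only happens when the condition actually holds."""
    if c:
        a = b
    return a
```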
1184711159 M * Ashsong But I also don't want to have to clean every file the activities create as I run them. 1184711160 M * Bertl do you know the log file names? 1184711167 M * Ashsong Sometimes. 1184711195 M * Ashsong I figured the pragmatic solution for this week was simply to disable tagging on the whole filesystem. 1184711218 M * Bertl okay, makes sense 1184711328 Q * Piet Quit: Piet 1184711369 M * Ashsong daniel_hozac: How would I control that option through the mount() function in libc? 1184711377 M * Ashsong (I don't see a flag specified for it) 1184711432 M * Ashsong (This is a long-term question for when it works with bind-mounts) 1184711609 M * Bertl Ashsong: it is one of the binary options you pass 1184711648 Q * s0undt3ch Ping timeout: 480 seconds 1184711715 Q * FireEgl Read error: Connection reset by peer 1184711756 M * Ashsong Which one? 1184711779 M * daniel_hozac tag. 1184711781 M * Bertl you know how mount(2) works? 1184711793 M * Ashsong e.g. when I call mount(), I might specify MS_RDONLY | MS_BIND from /usr/include/sys/mount.h 1184711806 M * Bertl yes, and you might specify const void *data 1184711822 J * s0undt3ch ~s0undt3ch@80.69.34.154 1184711824 M * Bertl all options not handled in unsigned long mountflags are passed there 1184711828 M * Ashsong Ah, okay. 1184711837 M * Ashsong So that should contain "tag" or "notag" as a string. 1184711839 M * Ashsong ? 1184711854 M * Bertl or nothing (if you want the default) 1184711864 M * Ashsong (which was what I was using) 1184711867 M * Ashsong Thanks. 1184711873 M * Bertl then you should have no tagging 1184711924 M * Ashsong Hmm. In that case, perhaps the error is being caused elsewhere. 1184711929 M * Ashsong I shall investigate. 1184711938 M * Ashsong Anyway, thanks for the lesson on mount()! 
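[editor's note] Bertl's point to Ashsong is that `tag`/`notag` is not a bit in `mountflags` but a filesystem-specific option passed through the final `const void *data` argument of mount(2). A hedged ctypes sketch of that call shape — the helper names are illustrative, actually remounting requires root and a tagging-capable (Linux-VServer) kernel:

```python
import ctypes
import ctypes.util
import os

MS_REMOUNT = 32  # from <sys/mount.h>

def tag_option(enable_tagging):
    """Data string that toggles XID tagging: "tag", "notag",
    or pass None/empty for the kernel default."""
    return b"tag" if enable_tagging else b"notag"

def remount_with_tag(target, enable_tagging):
    """Remount `target`, passing the tagging option via mount(2)'s
    data argument. Needs root and a vserver kernel to succeed."""
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    ret = libc.mount(None, target.encode(), None,
                     MS_REMOUNT, tag_option(enable_tagging))
    if ret != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
```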
1184711964 M * Bertl np 1184712715 J * FireEgl FireEgl@Sebastian.Atlantica.US.TO 1184712794 Q * Ashsong Quit: ChatZilla 0.9.78.1 [Firefox 2.0.0.4/2007051502] 1184712946 M * daniel_hozac Bertl: so the strace option is meant to strace the post-context creation part, right? 1184713052 M * Bertl yep, I would suggest to append it right before the init command 1184713065 M * Bertl and maybe with a dedicated log path or so? 1184713177 M * Bertl but I guess the 'bash -x' part will be more important (but probably easier, because less options) 1184713312 M * Bertl offtopic: bnc vs muh vs ezbounce (any pros/cons)? 1184714274 Q * slack101 Ping timeout: 480 seconds 1184714295 J * slack101 ~rwer@cpe-65-31-15-111.insight.res.rr.com 1184715392 N * DoberMann[PullA] DoberMann[ZZZzzz]
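[editor's note] Bertl's suggestion for the strace option — prepend strace "right before the init command", ideally with a dedicated log path — amounts to wrapping the guest's init argv. A small sketch (the default log path is an assumption, not a util-vserver convention; `-f` follows forks, `-o` writes to a file, both standard strace flags):

```python
def strace_init(init_cmd, log_path="/var/log/vserver-strace.log"):
    """Wrap a guest's init command in strace so the post-context-
    creation part of guest startup can be traced."""
    return ["strace", "-f", "-o", log_path] + list(init_cmd)

# e.g. strace_init(["/sbin/init"]) before handing off to the guest
```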