1184198769 M * AStorm Bertl, hmm, testsuite for VServer would have to check basic startup on some distro images 1184198775 M * AStorm quota management 1184198784 M * AStorm shutdown, killing 1184198797 M * AStorm of course vps and vpstree 1184198820 M * AStorm barrier and unification 1184198902 M * Bertl and a number of more detailed checks too 1184198963 M * AStorm Mhm. 1184198980 M * AStorm scheduler behaviour would be the hardest to check 1184199031 M * AStorm I have to ask Ingo about stackable classes for CFS 1184199107 M * AStorm because this is what VServer does, right? 1184199125 M * AStorm The hard CPU scheduler would have to be ported as a scheduling class 1184199149 M * Bertl yes, that would be an option, I guess 1184199161 M * AStorm I'll try to write a simple throttler for CFS 1184199170 M * Bertl although I'm unsure about the 'hold' functionality (if that can be done with a scheduling class) 1184199179 M * AStorm yes, it can 1184199185 M * AStorm it means sleeping indefinitely 1184199208 M * AStorm the "classes" are much more like separate schedulers in CFS 1184199212 M * AStorm much more powerful 1184199247 M * AStorm so, yes, you can easily hold tasks 1184199260 M * AStorm just like you've done now 1184199277 M * AStorm the scheduler would have to be rewritten so as to not piggyback on Vanilla one 1184199391 M * AStorm full entitlement based one (if you want to reuse the current one) 1184199473 M * AStorm hmm 1184199501 M * AStorm the tasks ran in the vserver would have to use it too for scheduling to work 1184199611 M * AStorm the current design of CFS infrastructure is that it only grabs a task to be scheduled 1184199633 M * AStorm (returned by the tick function) 1184199747 M * AStorm scheduling class can alter the task's load estimation 1184199978 M * AStorm multiCPU behaviour is done by the scheduler 1184200489 M * Bertl hmm, what if I want to have different scheduling decisions based on the cpu? 1184200494 M * Bertl (as we currently have) 1184200636 M * AStorm hmm, well, there are affinity classes, but this doesn't affect the scheduling class 1184200648 M * AStorm s/affinity classes/affinity flags/ 1184200656 Q * bzed Quit: Leaving 1184200664 M * AStorm You can decide based on the CPU task takes 1184200679 M * AStorm but that will interfere with load balancing in a weird way 1184200745 M * AStorm though I think you can mess with load estimator 1184201227 J * DoberMann_ ~james@AToulouse-156-1-78-185.w86-196.abo.wanadoo.fr 1184201333 Q * DoberMann[ZZZzzz] Ping timeout: 480 seconds 1184201853 J * oxylin ~jpeeters@chv78-2-88-161-189-78.fbx.proxad.net 1184202839 Q * oxylin Quit: Ex-Chat 1184203059 Q * slack101 Ping timeout: 480 seconds 1184203080 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184206076 M * Bertl okay, off for now .. back later 1184206081 N * Bertl Bertl_oO 1184207374 Q * slack101 Ping timeout: 480 seconds 1184207395 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184211449 Q * derjohn Ping timeout: 480 seconds 1184211626 J * derjohn ~derjohn@80.69.41.3 1184211699 Q * slack101 Ping timeout: 480 seconds 1184211720 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184214725 Q * Piet_ Quit: Piet_ 1184215511 M * Bertl_oO okay, back now, but almost off to bed .. anything important? 1184215515 N * Bertl_oO Bertl 1184215959 Q * slack101 Ping timeout: 480 seconds 1184215980 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184218021 M * Bertl okay, off to bed now ... have a good one everyone! 
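A first cut at the smoke tests AStorm lists above could be as small as the sketch below. It assumes util-vserver is installed, that a guest named "testguest" has already been built from some distro image, and that shallow checks (does it start, does vps see it, does it stop) are enough for a first pass; quota, barrier and unification checks would still have to be added on top.

    #!/bin/sh
    # minimal smoke test for one Linux-VServer guest ("testguest" is an assumption)
    set -e
    vserver testguest start                  # basic startup
    vserver-stat | grep -q testguest         # the context is visible on the host
    vps ax > /dev/null                       # vps runs against the guest's processes
    vpstree > /dev/null                      # vpstree runs without error
    vserver testguest exec true              # commands execute inside the guest
    vserver testguest stop                   # clean shutdown
    echo "smoke test passed"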
1184218026 N * Bertl Bertl_zZ 1184219878 N * DoberMann_ DoberMann 1184220279 Q * slack101 Ping timeout: 480 seconds 1184220300 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184222785 N * DoberMann DoberMann[PullA] 1184222862 Q * quasisane Ping timeout: 480 seconds 1184224023 J * dna ~naucki@92-243-dsl.kielnet.net 1184224335 J * hosungs ~chatzilla@ee3222.kaist.ac.kr 1184224354 M * hosungs hello 1184224433 M * hosungs is anyone out there? 1184224557 Q * slack101 Ping timeout: 480 seconds 1184224578 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184224614 J * Baby ~miry@195.37.62.208 1184224614 M * hosungs hello 1184224755 J * bzed ~bzed@dslb-084-059-106-081.pools.arcor-ip.net 1184224924 J * hosungs_ ~hosungs@ee3222.kaist.ac.kr 1184224961 M * hosungs_ hello? 1184224980 M * hosungs just me??? 1184225005 P * hosungs_ 1184225063 J * hosungs_ ~hosungs@ee3222.kaist.ac.kr 1184225072 P * hosungs_ 1184225219 J * hosungs_ ~hosungs@ee3222.kaist.ac.kr 1184225238 N * DoberMann[PullA] DoberMann 1184225298 P * hosungs 1184225319 J * DavidS ~david@p57A4A23B.dip0.t-ipconnect.de 1184225463 M * hosungs_ should i ask my question anyway? okay. i'm trying linux vserver for the first time with 2.6 kernel. i built the kernel myself and installed the util-vserver. they seemed installed okay. however, after i rebooted with the new kernel and tried the testme.sh script, i get this error: "chcontext: tools were built without legacy API support; can not continue... chcontext failed!" can anyone tell me what to do? 1184225500 J * ema ~ema@rtfm.galliera.it 1184227057 J * rgl ~Rui@84.90.10.107 1184227059 M * rgl hello 1184227076 M * rgl can I normal use use the quota command? 1184227683 M * hosungs_ Okay. I searched the chat log, and found a (maybe) clue: --enable-apis=NOLEGACY. I recompiled the util-vserver with that ./configure option, and now that error (tools were built without legacy API support) is gone. 1184227742 M * hosungs_ However, now I'm getting a new error: "chcontext: vc_new_s_context(): Function not implemented" Again, I got this error from testme.sh... 1184227807 M * hosungs_ I suppose everyone has gone to bed or something. I'll be back later. Hope to get in touch with VServer experts. Bye for now. 1184228371 M * mattzerah what version of utils are you using hosungs_ ? 1184228447 N * DoberMann DoberMann[PullA] 1184228935 Q * DavidS Quit: Leaving. 1184229142 J * meandtheshell ~markus@85.127.103.255 1184229443 Q * Aiken Remote host closed the connection 1184229486 Q * emtty Read error: Connection reset by peer 1184229936 J * Aiken ~james@ppp121-45-220-241.lns2.bne1.internode.on.net 1184230488 M * derjohn hosungs_, do you have a vserver patched kernel running? What does testme.sh say ? 1184230860 J * lilalinux ~plasma@dslb-084-058-201-078.pools.arcor-ip.net 1184231927 J * HeinMueck ~Miranda@host-88-217-199-211.customer.m-online.net 1184232495 Q * Aiken Remote host closed the connection 1184232885 J * DavidS ~david@p57A4A23B.dip0.t-ipconnect.de 1184232995 Q * esa Remote host closed the connection 1184233022 J * esa ~esa@ip-87-238-2-45.adsl.cheapnet.it 1184233139 Q * slack101 Ping timeout: 480 seconds 1184233160 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184233489 M * rgl oh. the last night warning on guest shutdown about quota was because the scripts at /etc/rcS.d are not run in the guest. so, the /etc/init.d/quota script wasn't being run to activate the quotas inside the guest. 
so, I guess its normal for vserver to only run the current runlevel scripts (at /etc/rc3.d/*)? 1184233554 M * rgl gag. normal runlevel on debian/ubuntu is 2 not 3. vserver defaults to 3 then? 1184233942 M * DavidS debian by default has all runlevels the same, if you use "update-rc.d defaults" to install a service, it will be started on all runlevels 1184233971 M * harry not all 1184233984 M * harry btw, rgl ... you can change default runlevel in /etc/inittab 1184234004 M * harry DavidS: i don't want my services started in runlevel 0, 1 or 6 ;) 1184234016 M * rgl harry, yup. I did it now :) 1184234067 M * DavidS harry: yeah "all" meaning in this case 2..5 1184234120 J * cedric ~cedric@80.70.39.67 1184234621 M * AStorm Hmm, where should I put the mtab for the VServer? 1184235027 M * rgl AStorm, /etc/vservers/ocelot/apps/init/mtab 1184235035 M * rgl (ocelot is my vserver name) 1184235085 M * AStorm mhm, apps/init 1184235151 M * rgl AStorm, http://www.nongnu.org/util-vserver/doc/conf/configuration.html :) 1184235170 M * AStorm hmm 1184235179 M * AStorm thanks for the doc 1184235204 M * rgl its the flower power page :D 1184235237 M * rgl AStorm, are you using quota? 1184235244 M * AStorm rgl, not yet 1184235270 M * rgl AStorm, do you known if a normal user can run the quota command? 1184235315 M * AStorm he can't 1184235327 M * AStorm it requires direct access to the filesystem 1184235327 M * rgl I think it should, or else he doesn't known how much quota he's using. though, for doing that, the /aquota.user needs have read permissions to everyone. 1184235341 M * AStorm uhm, he can read the quota 1184235346 M * AStorm df will tell you (I think) 1184235353 M * rgl df? 1184235363 M * AStorm uhm, no, sorry 1184235466 M * rgl humm, I can't change the permissions of /aquota.user. humm, maybe that can only be done when quotas are off. 1184237205 J * Aiken ~james@ppp121-45-220-241.lns2.bne1.internode.on.net 1184237442 Q * slack101 Ping timeout: 480 seconds 1184237463 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184238145 Q * ktwilight_ Read error: Connection reset by peer 1184238668 M * rgl its a good ideia to run quotacheck before the guest run quotaon? 1184238773 J * ktwilight ~ktwilight@12.104-66-87.adsl-dyn.isp.belgacom.be 1184239294 J * quasisane ~user@c-75-67-252-184.hsd1.nh.comcast.net 1184239313 M * arachnist /w 24 1184239437 J * Ramjar ~ramjar@195.159.98.150 1184239444 J * renihs ~penguin@83-65-34-34.arsenal.xdsl-line.inode.at 1184239465 M * Ramjar why do i sometimes lose my IP in vserver guest? suddenly (once in a month or something) i lose my IP (whole interface in guest disappears) 1184239480 M * Ramjar (kernel linux-2.6.18-vserver-2.1.1-r2) 1184239708 M * ard maybe another vserver has the same ip, and that one has been started and brought down 1184239718 M * ard is the ip still there in the root server or not? 1184239729 M * ard if not, then that must be your problem :-) 1184239904 M * Ramjar well no one else has that IP. but the strange thing is that the whole infterface disappears. 1184239935 M * Ramjar if someone else /another guest has that ip the interface shouldn't disappear. 1184239990 M * Ramjar is it able to find a gentoo-installation with vserver preinstalled? *easy as Centos or debian installation?* i have reinstalled with Centos Xen, but i really wanna get vserver to work. 
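A hedged fix for the quota warning rgl describes, assuming a Debian/Ubuntu style guest with an /etc/init.d/quota script: since util-vserver only runs the scripts of the configured runlevel (not rcS.d), make sure the quota script is linked into the runlevel the guest actually boots into, and check which runlevel that is.

    # inside the guest: register the quota init script in the normal runlevels
    update-rc.d quota defaults
    # see which runlevel the guest boots into by default (Debian/Ubuntu: 2)
    grep initdefault /etc/inittab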
1184240075 M * LaZaR i have a gentoo readme in germany for vserver 1184240406 Q * Aiken Quit: Leaving 1184241724 Q * slack101 Ping timeout: 480 seconds 1184241745 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184243177 Q * lilalinux Remote host closed the connection 1184243533 J * lilalinux ~plasma@dslb-084-058-201-078.pools.arcor-ip.net 1184244053 Q * ema Quit: leaving 1184244413 Q * HeinMueck Ping timeout: 480 seconds 1184245139 M * daniel_hozac LaZaR: is the one on gentoo.org not good enough for you? 1184245297 J * flea ~flea@a83-132-13-23.cpe.netcabo.pt 1184245308 M * daniel_hozac Ramjar: Gentoo doesn't really come with anything pre-installed... you might want to read http://www.gentoo.org/proj/en/vps/vserver-howto.xml 1184245364 M * flea howdy ppl 1184245370 M * flea daniel_hozac, Bertl_zZ [[]] 1184245479 M * daniel_hozac Ramjar: as for your problem, that might happen if the primary IP address is removed, and you don't have promote_secondaries enabled. 1184245505 M * Ramjar daniel_hozac thnx. i have done that once. but it thaks a loooong time :) so i just thought maybe someone have done a preinstall or something. 1184245509 M * Ramjar but now i knoe :) 1184245572 M * daniel_hozac well, that's the Gentoo-spirit ;) 1184245779 M * flea Ramjar: you have a stage3 prepared for vserver, with the correct baselayout version 1184245803 M * flea Ramjar: you just have to grab it and untar it and you have youserf the base you need 1184245806 M * daniel_hozac it's a stage4. 1184245810 M * daniel_hozac but that's for guests. 1184245818 M * flea or a stage4... it also has it. 1184245822 M * flea I use the stage3 1184245833 M * daniel_hozac (i had assumed this was about the host) 1184245844 M * flea sorry, maybe I'm off topic 1184245853 M * flea *oops* :S 1184245854 M * rgl woah. stage4 is new for me heheh. when I used gentoo there was only stage3- 1184246216 M * Ramjar oh 1184246226 M * Ramjar can i run debian etc on vserver guest? 1184246243 M * daniel_hozac yes. 1184246253 M * daniel_hozac the guest distro doesn't really depend on the host's. 1184246266 M * Ramjar i see. so i can run windows to :P 1184246276 M * daniel_hozac Windows runs on Linux now? 1184246294 M * Ramjar hehe yes. didnt you know? *just kidding* 1184246344 M * Ramjar but can i run solaris on vserver? (unix) 1184246356 M * daniel_hozac does it run on Linux? 1184246363 M * Ramjar hehe no, i know :) 1184246378 M * Ramjar okey so it is spec. Linux and nothing else 1184246383 M * Ramjar me like vserver anyway :) 1184246567 M * eyck- hmm, with vmware,qemu or vbox you can run solaris or win$$ on linux 1184246658 M * eyck- so, in this way, you can run solaris on vserver 1184247350 Q * quasisane Quit: ERC Version 5.2 (IRC client for Emacs) 1184247828 M * AStorm eyck-, hmm, as long as you don't have to give special permissions to them 1184247849 M * AStorm which you shouldn't have to (just create some device nodes) 1184248515 M * AStorm Hmm, are there any good howtos for syslog-ng configuration in vserver setting? 1184248612 M * LaZaR comment out: destination console_all { file("/dev/tty12"); }; 1184248621 M * LaZaR #log { source(src); destination(console_all); }; 1184248640 M * LaZaR remove pipe(/proc/kmsg) from source src { unix-stream("/dev/log"); internal(); }; 1184248760 M * AStorm uhm, I want to put logging inside another vserver 1184248765 M * AStorm and network the other syslogs 1184248791 M * LaZaR hm dont know about this 1184248794 M * AStorm you know, external logging 1184248814 M * daniel_hozac so, do it? 
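The promote_secondaries setting daniel_hozac mentions is a plain sysctl, so a hedged fix for Ramjar's disappearing addresses would look like the lines below (eth0 is an assumption; adjust to the real interface):

    # keep secondary addresses alive when the primary one is removed
    sysctl -w net.ipv4.conf.all.promote_secondaries=1
    sysctl -w net.ipv4.conf.eth0.promote_secondaries=1
    # make it persistent across reboots
    echo "net.ipv4.conf.all.promote_secondaries = 1" >> /etc/sysctl.conf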
1184248834 M * daniel_hozac nothing vserver-specific about that... 1184248922 M * AStorm I was hoping for some nice howto to do that in 30s :P 1184248949 M * AStorm oh, there are some :P 1184248966 M * daniel_hozac how unexpected. 1184248977 M * arachnist onaka ga suita 1184249081 M * AStorm daniel_hozac, hmm, can vservers sniff each other's networking? 1184249148 M * daniel_hozac they can't sniff at all. 1184250141 M * AStorm blah, you're right, no NET_ADMIN rights 1184250149 M * daniel_hozac more like no CAP_NET_RAW. 1184250224 M * AStorm If the vserver had that right, could it sniff? 1184250230 M * daniel_hozac yes. 1184250266 Q * flea 1184250307 Q * slack101 Ping timeout: 480 seconds 1184250328 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184250388 M * AStorm I wonder if I could unify just a given list of files 1184250402 M * daniel_hozac sure. 1184250441 M * AStorm what will unify do on different files? does it check for it? 1184250448 M * daniel_hozac check for what? 1184250457 M * daniel_hozac if the files differ? 1184250477 M * AStorm yes 1184250484 M * daniel_hozac also, you know that vunify is deprecated in favor of vhashify, right? 1184250502 M * AStorm I know, but I don't want to hash everything 1184250510 M * daniel_hozac so, create an exclude list. 1184250519 M * AStorm uhm, you don't understand my use case 1184250531 M * daniel_hozac no, as i've said, i'm not psychic. :) 1184250532 M * AStorm I want to refresh the unification after system upgrade 1184250540 M * AStorm I have a list of modified files 1184250549 M * AStorm don't want to check hashes on everything 1184250553 M * AStorm or exclude other files 1184250563 M * daniel_hozac it won't check hashes on already unified files. 1184250569 M * AStorm hmm 1184250575 M * AStorm but the hashes will differ... 1184250581 M * daniel_hozac unless you use the refresh option. 1184250585 M * AStorm I want it to overwrite the files 1184250587 M * AStorm :P 1184250589 M * AStorm cp -a? 1184250600 M * daniel_hozac sure. 1184250612 M * AStorm hmm, no problem with it then. 1184250712 M * AStorm hmm 1184250737 M * AStorm could the next version of util-vserver have the --overwrite option for vhashify? 1184250749 M * daniel_hozac overwrite what? 1184250756 M * AStorm which would find different files, overwrite and unify 1184250770 M * daniel_hozac you realize hashify is a one-guest operation, right? 1184250770 M * AStorm (except the exclude list, of course) 1184250779 M * AStorm daniel_hozac, yes, it is 1184250790 M * daniel_hozac so an "overwrite" doesn't make sense. 1184250810 M * AStorm hmm, wait wait. So, it only unifies files on a single guest? 1184250815 M * AStorm Not between guests? 1184250818 M * daniel_hozac yes. 1184250841 M * AStorm Hmm 1184250860 M * AStorm So, I'll have to write a tool to update COW guests then? 1184250881 M * daniel_hozac "rsync"? 1184250910 M * AStorm uhm, then remark the files as COW 1184250938 M * daniel_hozac it really sounds like you want vunify though. 1184250954 M * daniel_hozac it has the whole reference-guest concept. 1184250972 M * AStorm Exactly 1184251050 Q * renihs Remote host closed the connection 1184251050 M * AStorm the question still stands - does it check (in manual mode) for file identity? 1184251075 M * daniel_hozac i have no idea. i've never used it, nor looked at the code. 1184251083 M * AStorm Hmm. So I'll have to. 1184251114 M * daniel_hozac for it to make sense, it should be overwriting the destination guest with whatever's in the reference one though. 
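For reference, the capability daniel_hozac names is controlled per guest through util-vserver's bcapabilities file. The sketch below only illustrates the mechanism (granting it lets the guest sniff, which is exactly what you normally want to avoid); "mygst" is a placeholder and the exact token spelling should be checked against the configuration page linked above.

    # list any extra capabilities the guest is currently granted
    cat /etc/vservers/mygst/bcapabilities 2>/dev/null
    # granting raw sockets (NOT recommended; shown only to illustrate)
    echo NET_RAW >> /etc/vservers/mygst/bcapabilities
    vserver mygst restart    # takes effect on the next guest start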
1184251148 M * AStorm no, it ignores different files 1184251157 M * AStorm (according to the code) 1184251249 M * AStorm checks the stat data 1184251292 M * AStorm So, it's cp -a for me then :P 1184251318 M * daniel_hozac hmm? 1184251567 J * rgl_ ~Rui@84.90.10.107 1184251572 M * AStorm cp -a + vunify call 1184251591 M * AStorm of course, the vserver will have to be shut down for the update 1184252018 Q * rgl Ping timeout: 480 seconds 1184252685 N * Bertl_zZ Bertl 1184252689 M * Bertl morning folks! 1184253135 M * AStorm hmm, evening here ;-) 1184253169 M * daniel_hozac morning Bertl! 1184253212 M * daniel_hozac any ideas regarding the disk limit testing? 1184253278 M * Bertl well, what I saw from your patch, looks good 1184253306 M * Bertl haven't found the time to actually try it, will do so tonight 1184253345 M * Bertl IMHO the problem will be with reservations 1184253390 M * daniel_hozac yeah, that stuff is tricky. 1184253422 M * Bertl that is actually what stopped me last time when I tried to do a fully automated test suite for this :) 1184253444 M * Bertl but I ahve to admit, it was really half hearted ... 1184253650 M * daniel_hozac hehe. 1184254386 J * emtt1 ~eric@dynamic-acs-24-154-33-109.zoominternet.net 1184254529 M * Bertl wb emtt1! 1184254589 M * rgl_ Hey Bertl! I figured what the problem was with yesterday problem I was having with quota when the guest was shutingdown *G* 1184254609 Q * slack101 Ping timeout: 480 seconds 1184254630 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184254632 M * Bertl rgl_: let's hear! 1184254660 M * rgl_ Bertl, it was because the quotas weren't enable when the guest starts up. the script that enables them was inside rcS.d/ which vserver-util do no run *G* 1184254697 M * Bertl ts ts ... but they were working as expected nevertheless :) 1184254698 M * rgl_ Bertl, util-verver "only" runs the scripts inside the configured runlevel, which by default is 3. 1184254718 M * Bertl rgl_: which is correct, as S is not of any relevance 1184254830 M * rgl_ Bertl, they seemed to be running. which was odd *G* 1184254878 M * rgl_ Bertl, anyways, the fix was just to symlink the quota init script to run before any of the other sripts. then, the warning was gone :) 1184254951 M * Bertl yeah, simply adding it to the proper runlevel should suffice too 1184254962 M * Bertl okay, off for dinner ... back shortly 1184255070 J * stefani ~stefani@flute.radonc.washington.edu 1184255991 Q * matti Ping timeout: 480 seconds 1184256241 Q * ensc Ping timeout: 480 seconds 1184256875 J * bonbons ~bonbons@2001:5c0:85e2:0:20b:5dff:fec7:6b33 1184257331 J * ensc ~irc-ensc@p54B4D165.dip.t-dialin.net 1184257364 M * daniel_hozac Bertl: http://people.linux-vserver.org/~dhozac/p/k/delta-cow-fix09.diff fixes an oops caused by chcontext --xid 42 touch , when the current directory is below the barrier. 1184257416 M * daniel_hozac (granted, that should never happen in real life, but nonetheless it's nice to cover our bases) 1184257438 M * DavidS it could happen with vserver-exec, no? 1184257443 M * daniel_hozac no. 1184257450 M * daniel_hozac that's chrooted. 1184257455 M * DavidS ah! 
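The "cp -a + vunify" cycle AStorm settles on, or the rsync variant daniel_hozac suggests, could be scripted roughly as below. "reference" and "guest1" are placeholder guest names, the vdir symlinks are assumed to point at the guest roots, and re-unification here goes through the vserver ... hashify subcommand, so treat this as a sketch rather than a tested procedure.

    #!/bin/sh
    # push an upgrade made in a reference guest out to another guest, then re-unify
    set -e
    REF=/etc/vservers/reference/vdir
    GUEST=guest1
    vserver "$GUEST" stop                       # guest must be down for the update
    rsync -aH --delete "$REF"/usr/ "/etc/vservers/$GUEST/vdir/usr/"
    vserver "$GUEST" hashify                    # re-link identical files
    vserver "$GUEST" start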
1184257458 M * daniel_hozac (and such doesn't hit the barrier) 1184257461 M * daniel_hozac +as 1184258112 J * lilalinux_ ~plasma@dslb-084-059-011-165.pools.arcor-ip.net 1184258232 J * lilalinux__ ~plasma@dslb-084-058-221-118.pools.arcor-ip.net 1184258484 Q * lilalinux Ping timeout: 480 seconds 1184258604 Q * lilalinux_ Ping timeout: 480 seconds 1184258687 M * DavidS vserver+puppet really rules 1184258712 M * DavidS i'm currently testing my deployment and a full test cycle (including all application setup) looks like this: 1184258715 M * DavidS vserver ldap.edv-bus.at stop && rm -Rf /var/lib/vservers/ldap.edv-bus.at/ /etc/vservers/ldap.edv-bus.at/ && vserver puppetmaster exec puppetca --clean ldap.edv-bus.at && puppetd --test && vserver puppetmaster exec puppetca --sign ldap.edv-bus.at && vserver ldap.edv-bus.at exec puppetd --test 1184258855 M * Bertl back now 1184258902 Q * slack101 Ping timeout: 480 seconds 1184258923 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184258933 M * Bertl daniel_hozac: okay, I guess we have a bunch of patches which want to go into a new release (and some pending issues) no? 1184258973 M * Bertl dilinger: what I mean is, could you put them into a dir so that I can integrate them easily? 1184259069 M * Bertl s/dilinger/daniel_hozac of course :) 1184260527 M * rgl_ DavidS, cool :) 1184260577 M * rgl_ DavidS, can you talk about your setup a bit? I'm very interested in puppet, and vserver+puppet seems wonderful :D 1184260610 M * DavidS rgl_: svn co http://club.black.co.at:82/svn/manifests/trunk/ /tmp/davids_manifests 1184260611 M * DavidS ;) 1184260634 M * DavidS I have one host with several vserver guests, one of them is the puppetmaster ... 1184260636 M * rgl_ DavidS, gag! even better :) 1184260716 M * DavidS it's a bit of a mess at the moment, because i'm still migrating to modules, but it should give you an overview .. most of the stuff in modules/ should be fairly usable 1184260724 M * mjt anyone tried to use unionfs with vservers? It seems that vserver scripts are being confused about its "lower level" directories/branches... 1184260778 M * rgl_ DavidS, are you using debian? 1184260932 M * Bertl mjt: there is not much point in using unionfs, but IIRC, some folks are successfully using that 1184260950 M * mjt not much point? 1184260954 J * Pazzo ~ugelt@195.254.225.136 1184260991 M * Bertl mjt: well, there are two scenarios: 1184261009 M * Bertl - central update of many guests (no individual updates) 1184261019 M * Bertl - individual updates across guests 1184261039 M * slack101 good day Bertl 1184261041 M * Bertl in the first case, you are better off with a read only bind mount of the shared data 1184261055 M * Bertl in the second case, you are _much_ better off with unification 1184261057 M * DavidS rgl_: yes, exclusively 1184261065 Q * cedric Quit: cedric 1184261079 M * mjt hmm 1184261090 M * mjt unification.. i haven't tried it yet 1184261091 M * rgl_ DavidS, I'm using ubuntu. so everything should apply smootly :) 1184261118 M * DavidS rgl_: no,since i use "etch" in quite a few places .. but 1184261121 M * Bertl wb slack101! 1184261131 M * AStorm mjt, well, here unionfs works with vservers as usual 1184261136 M * DavidS but that's perhaps all in the apt and dbp modules 1184261147 M * mjt but i'm a bit worried about upgrading glibc on 50+ guests and running vunify to bring the differences back again 1184261154 M * rgl_ DavidS, I'll have to look deepear into puppet in near future. no more administration by "hand" for me! 
heheh 1184261172 M * Bertl mjt: the problem with unionfs is that the guests will diverge with each update, while unification will allow you to merge them transparently each week (or whenever you like) 1184261173 M * DavidS rgl_: see you on #puppet at freenode 1184261195 M * rgl_ DavidS, I known it :) 1184261196 M * mjt AStorm: well it works, but as i mentioned scripts gets confused.. and it works after telling which filesystems not to umount 1184261203 M * Bertl mjt: how do you update glibc on 50+ guests with unionfs? 1184261214 M * mjt i use it a bit differently 1184261223 M * mjt unionfs is readonly too 1184261242 M * AStorm mjt, is not :> 1184261244 M * mjt a thick layer on top of common image 1184261253 M * mjt thin err 1184261254 M * rgl_ DavidS, I known puppet a long time ago ;) 1184261265 M * AStorm Bertl, how do you do that on 50+ guests with vunify? 1184261270 M * AStorm My package manager is not supported :P 1184261285 M * mjt ie, a filesystems with all the components, unionfs with (mostly) configs specific to each guest 1184261292 N * rgl_ rgl 1184261315 M * Bertl mjt: no point in using unionfs then, bind mounts would suffice 1184261329 M * mjt just too many files to bind-mount 1184261332 M * AStorm Bertl, sharing configs is useful though 1184261344 M * mjt alot of stuff scattered inside /etc for example 1184261371 M * AStorm the proper way would be to unify the configs 1184261395 M * AStorm and use COW 1184261411 M * mjt ALL the files are read-only, so there will be no W in COW :) 1184261416 M * Bertl AStorm: there are tools to manage your guests (for most package managers) and there is a wrapper to execute something in a number of guests 1184261454 M * Bertl AStorm, mjt: if you are really concerned about the disk space during an upgrade, you can reunify after each guest upgrade 1184261475 M * mjt not a disk space, but the whole procedure becomes just too complex 1184261497 M * Bertl well, I don't see how it becomes simpler with unionfs 1184261520 M * mjt i chroot into the common root and update/install whatever i want. once. 1184261533 M * Bertl which is exactly the same you do with ro bind mounts 1184261535 M * mjt next i restart the servers if needed 1184261537 M * mjt yes 1184261546 M * Bertl so? 1184261562 M * mjt try to bind-mount 20+ DIFFERENT configs in /etc ;) 1184261566 M * rgl mjt, humm. that way you share everything in all guest? like, config and binaries? 1184261574 M * mjt no, not configs 1184261590 M * mjt configs lives in another directory - only the ones which are specific to each guest 1184261596 M * mjt and are union-mounted together 1184261606 M * Bertl mjt: I would simply mount /etc separately 1184261638 M * AStorm Bertl, it's _slow_ :> 1184261639 M * mjt yeah, and think why the hell my timezone is wrong after glibc upgrade for example :) 1184261646 M * AStorm I mean, the unification 1184261657 M * Bertl AStorm: and unionfs is _always_ slow :) 1184261660 M * AStorm I'll write some scripts which will just unify the changed files 1184261664 M * AStorm Bertl, uhm, no 1184261667 M * AStorm not that much :P 1184261681 M * Bertl AStorm: it adds a constant overhead to _any_ operation _all_ the time 1184261689 M * mjt there are common configs (/etc/timezone - maybe not a good example because it doesn't change often), which SHOUL be here in all guests. 
and there are configs specific to each guest, not shareable 1184261694 M * AStorm Bertl, the package manager in question is pkgcore - similar to portage 1184261710 M * Bertl mjt: that is configuration based on pure luck 1184261724 M * Bertl mjt: what if somebody modifies the 'shared' configs? 1184261726 M * mjt luck?? 1184261741 M * mjt it will be propagated to all the guests 1184261742 M * Bertl mjt: then your update will fail for this 1184261758 M * mjt but i repeat: the whole filesystem is read-only in all guests 1184261777 M * Bertl you are not making much sense here 1184261791 M * Bertl if it is read-only, then ro bind mounts are the way to go 1184261801 M * Bertl there will be absolutely no problem with updates 1184261807 M * mjt there's no 1184261815 M * mjt we're going in circles ;) 1184261822 M * Bertl if you want the guests to be able to modify the files 1184261827 M * mjt no 1184261832 M * Bertl you have to decide how to handle that on updates 1184261863 M * Bertl so, what's the point in using unionfs when everything is ro? 1184261887 M * mjt very easy management of guest-specific stuff 1184261904 M * Bertl sorry, doesn't make sense to me 1184261923 M * DavidS mjt: you modify it from the host, not from one of the guests? 1184261926 M * mjt one dir for each guest with its own files inside -- everything which is here replaces the same thing in common image 1184261940 M * mjt from host, yeah 1184261947 M * AStorm Bertl, it's an overlay 1184261952 M * mjt overlay, sure ;) 1184261956 M * AStorm much like COW 1184261964 M * mjt unionfs does COW 1184261972 M * AStorm mjt, not really 1184261980 M * AStorm it actually writes a separate file 1184261981 M * Bertl mjt: no, actually it doesn't 1184261992 M * AStorm and hides the original behind it 1184261997 M * mjt well, the end effect is like COW 1184262002 M * AStorm more or less 1184262004 M * Bertl nope, not at all 1184262011 M * mjt lol 1184262019 M * Bertl with CoW you get a completely _new_ file 1184262030 M * Bertl where the overlay now has _two_ files 1184262042 M * mjt in both cases we've 2 files 1184262044 M * Bertl and more important, every time you access a file 1184262059 M * Bertl unionfs has to check, what file to present to the user 1184262065 M * mjt yes 1184262071 M * Bertl (which adds the overhead) 1184262082 M * mjt all directory lookups will cost 2x of the simple case 1184262090 M * Bertl something like that 1184262097 M * mjt (in case 2 layers - with 3 layers, cost is 3x etc) 1184262115 M * Bertl thus CoW and unification is significantly faster 1184262117 M * AStorm unionfs is good for overlaying on RO filesystems 1184262123 M * AStorm and that's mostly it 1184262128 M * Bertl like CDs, yes 1184262143 M * mjt i use it alot for that very purpose here 1184262178 M * Bertl well, if you are running from a CD, then performance is not critical anyway :) 1184262241 M * Bertl mjt: trust me, I had a deep look at all kinds of unification/overlay systems 1184262298 M * mjt performance for reading a config file isn't critical either ;) 1184262299 M * Bertl http://vserver.13thfloor.at/TBVFS 1184262307 M * mjt (which is done once at startup) 1184262395 M * AStorm Bertl, well, unionfs will be in the mainline soon 1184262413 M * mjt . 
o O { aufs } 1184262435 M * mjt unionfs has issues which were discussed several times on lkml 1184262442 M * Bertl a lot of funny stuff is in mainline :) 1184262453 M * AStorm Bertl, yes, and most of the useful stuff isn't :P 1184262481 M * AStorm Write something like the VServer jail that will get accepted :> 1184262493 M * AStorm e.g. a small patch 1184262501 M * mjt Bertl: what's that double-backslashes everywhere on that TBVFS page? Seems those should be newlines, no? 1184262511 M * AStorm though they love LSMs 1184262518 A * mjt hates LSMs 1184262520 M * Bertl AStorm: Jacques did that 4 years ago :) 1184262554 M * mjt i use vserver as "jail" here. Like a DMZ on a single host 1184262558 M * Bertl mjt: yes, the wiki changed since, and I don't maintain those pages anymore 1184262584 M * AStorm Bertl, well, it isn't maintained 1184262634 M * Bertl hmm? 1184262674 M * AStorm Was that FServer or something? 1184262710 M * Bertl AStorm: no, Jacques did Linux-VServer before I took over maintainership 1184262718 M * AStorm ah, that 1184262742 M * AStorm hmm, wouldn't chroot + network virtualisation + capabilities be exactly what VServer is? 1184262753 M * AStorm (much like grsecurity-enhanced chroots) 1184262754 M * Bertl not at all 1184262768 M * AStorm if the chroot is secured, of course 1184262768 M * daniel_hozac pid space anyone? 1184262769 M * pusling AStorm: process seperation 1184262772 M * Bertl first, we do network isolation for a good reason (not virtualization) 1184262803 M * Bertl then, pid/user/ipc/uts isolation is required too 1184262805 M * AStorm daniel_hozac, hmm, since when VServer divides pid space? 1184262828 M * AStorm you're right 1184262828 M * Bertl AStorm: do you see the processes of other contexts? 1184262863 M * pusling talking about network isolation - can I as vserver root user add tun/tap devices ? (eg for aiccu ipv6 tunnels) 1184262881 M * AStorm Bertl, ah :> 1184262885 M * Bertl pusling: nope, not from the guest 1184262894 M * AStorm pusling, not with CAP_NET_ADMIN 1184262897 M * AStorm *without 1184262949 M * daniel_hozac actually, you can add the interface. 1184262954 M * daniel_hozac you just can't set it up or use it. 1184262965 M * Bertl right :) 1184262998 M * daniel_hozac Bertl: so, it looks like breaking a link with COW when you're over your disk limit merely truncates the file. 1184263016 M * daniel_hozac i mean, when breaking the link gets you over the disk limit. 1184263031 M * daniel_hozac that doesn't seem right to me. 1184263035 M * daniel_hozac it should return -ENOSPC, no? 1184263036 M * Bertl which sounds somewhat correct, although unexpected 1184263062 M * daniel_hozac okay, how so? 1184263075 M * Bertl but yeah, a better fault handling would probably be a good idea 1184263115 M * Bertl but I think all we can do is papering over this, until mainline has an in kernel copy function 1184263149 M * Bertl maybe what we can do is the following (for this specific case) 1184263198 M * Bertl check that the space is available (dlimit), allocate it before doing the copy, then let the copy run without checks, and if it succeds, return the handle 1184263217 M * Bertl otherwise return the appropriate error condition on the open 1184263224 Q * slack101 Ping timeout: 480 seconds 1184263225 M * AStorm /check/d 1184263232 M * AStorm just allocate and see if that fails 1184263243 M * Bertl daniel_hozac: or do you have a better suggestion? 
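The truncation case daniel_hozac reports further down can be provoked from userspace roughly as follows. Everything here is a sketch: xid 42 and the guest name are placeholders, /etc/services merely stands in for some file that is actually CoW-linked, and the vdlimit option spelling should be double-checked against vdlimit --help before relying on it.

    #!/bin/sh
    # provoke a CoW link break while the guest is over its disk limit;
    # the expectation is an ENOSPC-style failure, not a truncated file
    XID=42
    VDIR=/etc/vservers/guest1/vdir
    # shrink the block limit so the copy made by the link break cannot fit
    vdlimit --xid "$XID" --set space_total=1 "$VDIR"
    # appending to a unified (CoW-linked) file forces the break + copy
    vserver guest1 exec sh -c 'echo x >> /etc/services' || echo "write failed (expected)"
    # the file should still have its old size afterwards, not be truncated
    vserver guest1 exec wc -c /etc/services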
1184263245 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184263257 M * Bertl daniel_hozac: also, how to handle real file-system full conditions? 1184263271 M * daniel_hozac same way, no? 1184263275 M * AStorm Bertl, same as quota full - don't break the link, return ENOSPC 1184263315 M * Bertl daniel_hozac: the problem there is that you cannot do the reservations (without going down to filesystem level) 1184263325 M * AStorm blah 1184263346 M * AStorm fallocate call is not yet implemeted anywhere, heh 1184263348 N * DoberMann[PullA] DoberMann 1184263356 M * AStorm maybe just write and then rename? 1184263372 M * AStorm as in the good old days :P 1184263387 M * Bertl ah, and take the big kernel lock *G* 1184263402 M * AStorm there's no lock anymore? 1184263418 M * DavidS AStorm: that's why it's called OPEN source ;) 1184263421 M * daniel_hozac Bertl: yeah, that's true... i do think it'd be better to fail than trashing the file though. 1184263441 M * Bertl daniel_hozac: no, I think we should handle failures on the cow link break properly 1184263456 M * Bertl daniel_hozac: i.e. if the link break fails for whatever reasons 1184263482 M * Bertl just undo as much damage as possible, and return the file as readonly or with special return code 1184263485 M * daniel_hozac Bertl: exactly. 1184263486 M * Bertl (maybe ENOSPC) 1184263518 M * daniel_hozac we need to get an error return from sendfile. 1184263539 M * Bertl yep, that is probably the tricky part :) 1184263579 M * Bertl but I think this way is actually doable 1184263660 M * daniel_hozac hmm, should be fairly straight-forward. 1184263681 M * daniel_hozac in the if (ret < 0) after vfs_sendfile in cow_break_link, can't we just add || ret != size? 1184263708 M * daniel_hozac well, another if would be required as we'd have to set ret to -ENOSPC, but the idea's the same. 1184263732 M * Bertl give it a try, let me know how it goes ... 1184263750 M * Bertl (read: sounds good :) 1184263758 M * daniel_hozac btw, i have stray files. that should never happen, right? 1184263802 M * daniel_hozac could be from my previous crashes, so i'm not ready to blame anything yet... will see if they reappear when i redo the tests. 1184263810 M * Bertl those are good signs of CoW breaks going wrong 1184263829 M * Bertl we do not clean up on CoW failures (yet) 1184263838 M * daniel_hozac oh really? 1184263852 M * Bertl at least it is on my todo list :) 1184263860 M * daniel_hozac yeah, that's right... 1184263992 M * AStorm Hmm, how does one kill a virtual context by hand? 1184263998 M * daniel_hozac vc_ctx_kill 1184264006 M * AStorm uhm, from command line :P 1184264013 M * daniel_hozac vkill 1184264039 M * AStorm and that'd be vkill --xid , right? 1184264052 M * Bertl vkill --help :) 1184264057 M * AStorm hmm, I've got myself a dead context 1184264097 M * AStorm brb 1184264119 Q * AStorm Quit: Bye 1184264190 Q * ensc Remote host closed the connection 1184264372 M * daniel_hozac Bertl: what were the open issues you were talking about earlier? XFS sendfile, disk limit removal on superblock destruction (not very important, and i guess would require some testing), ... this COW stuff. is there more? 1184264437 M * Bertl IIRC, you mentioned a patch to fix something (but I do not remember what exactly) 1184264455 M * daniel_hozac http://people.linux-vserver.org/~dhozac/p/k/2.2.0.1/ are the patches i have lined up right now. 1184264457 M * Bertl ah, ext3 disk limits 1184264465 M * Bertl the off by one issue 1184264471 M * daniel_hozac right. 
1184264512 M * daniel_hozac btw, there was one thing i was a bit unsure about when looking at the disk limit stuff... 1184264561 M * Bertl okay, I should be back in business tomorrow (almost), I decided to completely upgrade my infrastructure (at both locations) and that did take some time ... 1184264572 M * daniel_hozac in ext3_new_blocks, right before the return in the success branch, there's a DQUOT_FREE_BLOCK with no corresponding DLIMIT_FREE_BLOCK. 1184264593 M * Bertl looks suspicious 1184264607 M * Bertl in general, we always pair DQUOT and DLIMIT 1184264609 M * daniel_hozac (same thing applies to ext4) 1184264622 M * daniel_hozac yeah, that's what i was thinking. 1184264725 M * daniel_hozac i'll fix that too then. 1184264752 M * Bertl I think ext3/4 fixed some 'missing' quota allocations recently 1184264770 M * Bertl so that might be the reason why it is not handled by dlimits (yet) 1184264814 M * daniel_hozac yeah. all the others seemed to have a dlimit friend. 1184264879 M * daniel_hozac http://people.linux-vserver.org/~dhozac/p/k/2.2.0.1/delta-ext-dlimit-fix02.diff fixes all of the issues. 1184264913 M * Bertl ah, excellent, so that is replacing the fix01 then 1184264916 M * daniel_hozac right. 1184264931 M * Bertl (which got removed this moment :) 1184264936 M * daniel_hozac yep :) 1184264948 M * daniel_hozac still present in the upper level directory. 1184265007 M * Bertl how is life trating you, btw? 1184265012 M * Bertl *treating even 1184265057 M * daniel_hozac very well, thanks. how about you? 1184265075 M * Bertl yeah, can't complain so far ... 1184265168 Q * Pazzo Quit: Ex-Chat 1184265176 M * daniel_hozac so what infrastructure were you upgrading? network? 1184265178 J * HeinMueck ~Miranda@dslb-088-065-254-049.pools.arcor-ip.net 1184265212 M * Bertl daniel_hozac: network, servers (at home), disk/raid ... 1184265230 M * daniel_hozac ah, everything. nice. 1184265234 M * Bertl daniel_hozac: and finally software (i.e. moved all from mandrake 8.2 to mandriva 2007.1 :) 1184265242 M * daniel_hozac hehehe. 1184265249 M * daniel_hozac how old is 8.2 now? 1184265265 M * Bertl well, actually it had nothing in common with 8.2 anymore 1184265281 M * daniel_hozac oh, you had upgraded everything? 1184265284 M * Bertl I kept updating parts on demand to newer versions 1184265311 M * Bertl but there is only so much you can upgrade without running into troubles 1184265318 M * daniel_hozac yeah.. 1184265340 M * Bertl so I decided to scratch the distro, and reinstall (only keeping improtant data) 1184265345 M * Bertl *important 1184265422 M * daniel_hozac sounds like a good idea. 1184265429 M * Bertl but it was about time now ... in that process I also replaced my k6/2 300 frontends :) 1184265456 M * daniel_hozac with what? 1184265471 M * Bertl (now running Athlon/XP 2GHz) 1184265494 M * Bertl those machines are way too fast for a frontend, btw :) 1184265499 M * daniel_hozac ah, nice upgrade. 1184265520 M * daniel_hozac hehe, definitely. 1184265539 M * Bertl server is an intel core2 duo at 64bit, which is really nice 1184265638 J * AStorm ~astralsto@host-81-190-179-124.gorzow.mm.pl 1184265746 M * daniel_hozac indeed, sounds good. i've only got that in my laptop for now, hope to upgrade the servers soon too... 1184265819 M * mjt hmm. how come - i built a "vserver" manually, without umounting anything, yet /proc/mounts in a vserver does not show "extra" directories mounted on host (such as /var etc) ? 1184265841 M * daniel_hozac it's chrooted, no? 
1184265842 M * mjt "manually" = entering sequence of commands on the command line - vnamespace vcontext ... 1184265851 M * daniel_hozac mounts which are not accessible aren't showed. 1184265868 M * mjt but it's a good idea to umount them anyway, right? 1184265897 M * mjt (i was hoping to see what i have to umount - in a vserver's /proc/mounts ;) 1184265912 M * AStorm mjt, you won't be able to unmount because of lack of capabilities :> 1184265925 M * mjt i know 1184265937 M * mjt it has to be done before entering guest 1184266025 M * daniel_hozac unmounting them means you only have to unmount them on the host if you want to e.g. reformat it, extend it, or whatever. 1184266057 M * mjt here's my ad-hoc script: http://paste.linux-vserver.org/4502 1184266062 M * mjt ;) 1184266086 M * mjt dunno what some things mean still - like vuname --set -t context=..., and vattribute 1184266107 M * daniel_hozac so you run it with vnamespace --new ... ? 1184266115 M * mjt yes - around that script 1184266125 M * mjt vnamespace --new ./start.sh 1184266169 M * mjt and now i wonder how to kill the running vserver ;) 1184266179 M * mjt (it's not present in /etc/vservers) 1184266203 Q * rgl Ping timeout: 480 seconds 1184266224 M * daniel_hozac vcontext --migrate --xid .. --chroot -- /etc/vserver-stop? 1184266228 M * mjt aha. killing all processes inside it killed the whole vserver 1184266234 M * daniel_hozac yep. 1184266255 M * mjt i mean if there's some "force-kill" thing, like a power-off button 1184266264 Q * AStorm Quit: Bye 1184266268 M * mjt vpower-off ;) 1184266296 M * daniel_hozac vkill -9 --xid -- 0? 1184266301 M * daniel_hozac uh, -s 9 1184266370 M * mjt that works ;) 1184266408 M * mjt man vattribute 1184266411 M * mjt err miswin 1184266717 Q * gerrit Ping timeout: 480 seconds 1184267130 M * Bertl daniel_hozac: btw, regarding networking and userspace support 1184267158 J * ensc ~irc-ensc@p54B4D165.dip.t-dialin.net 1184267165 M * Bertl daniel_hozac: am I right that the interface ips are added according to the alphanumerical order the interface dirs have? 1184267231 M * daniel_hozac i think that depends on bash. 1184267236 M * daniel_hozac i.e. it's just a glob. 1184267250 M * daniel_hozac but the idea is that they should be. 1184267255 M * Bertl hmm, okay, that still should be truem I guess 1184267258 M * Bertl *true 1184267273 M * daniel_hozac why do you ask? 1184267280 M * Bertl okay, how complicated would it be to support extended matches there? 1184267295 M * Bertl e.g. networks or ip ranges? 1184267331 M * daniel_hozac networks are already supported, no? 1184267372 M * Bertl hmm, what I meant is, entire networks assigned to a guest 1184267372 M * daniel_hozac but it shouldn't be too hard. 1184267384 M * daniel_hozac yeah, but that would that entail more than ip and prefix? 1184267422 M * Bertl nothing actually, just an additional 'marker' when using the ekrnel interface 1184267432 M * daniel_hozac right. 1184267438 M * Bertl i.e. it would set a flag/mask, that's it 1184267465 M * Bertl so from the userspace PoV you think that would be simple to extend? 1184267486 M * Bertl just asking, because I think for the first shot, I'm going for an ordered list, but with extended matching 1184267503 M * Bertl i.e. ip, range and network (only inclusion, no exclusion) 1184267505 M * daniel_hozac well, kinda. i guess it'd be easiest to just start fresh with another directory tree in that case. 
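Pulling daniel_hozac's answers together, the "power-off button" for a hand-built context like mjt's comes down to the two commands below; xid 42 is a placeholder, and /etc/init.d/rc 0 just stands in for whatever shutdown command the guest distribution actually uses.

    # graceful path: join the context inside its chroot and run its own shutdown
    vcontext --migrate --xid 42 --chroot -- /etc/init.d/rc 0
    # force "power off": SIGKILL every process in context 42 (pid 0 = all of them)
    vkill --xid 42 -s 9 -- 0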
1184267520 Q * slack101 Ping timeout: 480 seconds 1184267541 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184267542 M * daniel_hozac or... hm. i guess it should be pretty easy. 1184267566 M * Bertl this would allow to test in actual setups, without doing any of the fancy parsing tree things 1184267586 M * Bertl and honestly, I think it would suffice for 99% of all cases 1184267607 M * daniel_hozac sure. 1184267845 Q * Blissex Remote host closed the connection 1184269487 J * gerrit ~gerrit@bi01p1.co.us.ibm.com 1184269601 M * Bertl okay, off for now .. have to pick up my SO from the airport :) 1184269606 N * Bertl Bertl_oO 1184271658 N * Bertl_oO Bertl 1184271664 M * Bertl flight got delayed :/ 1184271728 M * daniel_hozac that's too bad. 1184271832 Q * slack101 Ping timeout: 480 seconds 1184271847 M * DavidS Bertl: that "just increases the suspense" ;) ... the more it sucks now, the better it is when it's over 1184271853 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184272843 M * meandtheshell what's an SO? 1184273052 M * daniel_hozac significant other ;) 1184273071 M * meandtheshell lol 1184273118 M * meandtheshell awesome ... there are two Bertls out there :) 1184273324 M * Bertl meandtheshell: hmm? 1184273352 M * Bertl ah, got it :) 1184273363 M * meandtheshell ;) 1184273445 J * Aiken ~james@ppp121-45-220-241.lns2.bne1.internode.on.net 1184273462 M * coderanger Bertl: Ping 1184273814 J * AStorm ~astralsto@host-81-190-179-124.gorzow.mm.pl 1184274038 M * coderanger Bertl: Never mind, just needed to rtfm 1184274282 M * Bertl coderanger: np 1184274385 N * DoberMann DoberMann[ZZZzzz] 1184274599 Q * gerrit Ping timeout: 480 seconds 1184275265 J * gerrit ~gerrit@bi01p1.co.us.ibm.com 1184275812 Q * DavidS Quit: Leaving. 1184276024 P * stefani I'm Parting (the water) 1184276316 J * markus_ ~chatzilla@chello213047089232.17.14.vie.surfer.at 1184276414 M * daniel_hozac Bertl: btw, http://people.linux-vserver.org/~dhozac/p/m/delta-dlimit-feat02.diff is slightly updated. i'm trying to figure out why 216 is failing (disk limit is reporting 1 more block freed than df again...) 1184276505 M * markus_ How do I properly specify a runlevel other than 3? I was reading http://linux-vserver.org/util-vserver:Documentation#.2Fetc.2Fvservers.2Fvserver-name.2Fapps.2Finit and setting the numner '7' in the file apps/init/runlevel, but then I get this on the console: "Usage: fakerunlevel \nPut a runlevel record in file ". Do I need some command in that file? There's... 1184276506 M * markus_ ...no documentation about it. 1184276578 Q * bonbons Quit: Leaving 1184276581 M * daniel_hozac it's erroring because 7 isn't between 0 and 6 (inclusive). 1184276608 M * markus_ hmm .. the reason is, I rsynced another machine which is using 7 ... 1184276644 M * daniel_hozac so you'll either have to patch out that sanity check from util-vserver, or make it use another runlevel. 1184276692 M * markus_ Why this check ... :-( 1184276768 Q * HeinMueck Quit: Aah! 1184276811 M * daniel_hozac probably because you normally use a runlevel in that range. 1184276816 M * coderanger Bertl: Another question, is there a good (read: not racey) way to deal with cleaning up after a container? 1184276828 Q * dna Quit: Verlassend 1184276839 M * daniel_hozac coderanger: why would it be racy, and what does cleaning up mean? 1184276853 M * markus_ daniel_hozac: thanks for that hint, I think I can change the runlevel number on the machine itself.. 
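For reference, the file markus_ is hitting is the per-guest runlevel setting; stock util-vserver's fakerunlevel only accepts 0 through 6, so short of patching that check out, the practical fix is to pick a runlevel in range and make the guest agree ("mygst" is a placeholder, and the sed assumes a sysvinit-style inittab):

    # runlevel util-vserver boots the guest into (must be 0-6)
    echo 4 > /etc/vservers/mygst/apps/init/runlevel
    # inside the guest, make initdefault match (and move the rc7.d links to rc4.d)
    vserver mygst exec sed -i 's/:7:initdefault:/:4:initdefault:/' /etc/inittab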
1184276878 M * Bertl coderanger: you can get a notification if you wait on context disposal 1184276910 M * Bertl coderanger: if you do not start a new one (with the same id) before your cleanup is finished, you should be fine 1184276924 M * coderanger Bertl: There is vc_wait_exit, but I would imagine we can't wait before it is created 1184276939 M * daniel_hozac you can use the state change helper as well. 1184276945 M * Bertl coderanger: correct 1184276952 M * coderanger daniel_hozac: Wazat? 1184276964 M * daniel_hozac which would call /sbin/vshelper once the context is destroyed. 1184276977 M * daniel_hozac (path is configurable, of course) 1184276985 M * coderanger daniel_hozac: Is this part of util-vserver or vserver itself? 1184276997 M * Bertl this is a kernel space helper 1184277009 M * Bertl note: you have to do any synchronization in userspace 1184277027 M * coderanger Bertl: Is this documented somewhere I can stare at? 1184277063 M * daniel_hozac kernel/vserver/helper.c? :) 1184277071 M * Bertl yep :) 1184277092 M * coderanger Heh, touche 1184277190 J * quasisane ~sanep@c-75-67-252-184.hsd1.nh.comcast.net 1184277198 M * daniel_hozac note that the helper is called a bit sooner than vc_wait_exit would return. 1184277232 M * daniel_hozac (i.e. before the context is unhashed, as opposed to after) 1184277248 M * coderanger This is for cleaning up the chroot and such, so split-second timing isn't needed 1184277261 M * Bertl who does the cleanups? 1184277268 M * daniel_hozac hmm, you probably want vc_wait_exit then. 1184277280 M * coderanger Rainbow will fork off another process to do it 1184277291 M * Bertl Rainbow is? 1184277296 M * markus_ Is util-vserver maintained anymore? last release somewhen last year ... ? 1184277297 M * daniel_hozac the mounts are still around when the helper is called. 1184277327 M * coderanger Bertl: The security serivce 1184277327 M * daniel_hozac markus_: eh? last release was in may. 1184277350 M * Bertl coderanger: then using the wait functionality is definitely the better way 1184277402 M * markus_ daniel_hozac: ops, I was looking in http://ftp.linux-vserver.org/pub/utils/util-vserver/ and https://savannah.nongnu.org/projects/util-vserver/, but seems old .. wrong place? 1184277417 M * markus_ ok, now I got it, my fault :) 1184277421 M * daniel_hozac markus_: that's where i'm looking. [ ] util-vserver-0.30.213.tar.bz2 03-May-2007 14:26 646K 1184277446 M * Bertl http://ftp.linux-vserver.org/pub/utils/util-vserver/util-vserver-0.30.213.tar.bz2 ? 1184277449 M * markus_ yes, hard to spot. So, is there a place to report features ? 1184277455 M * daniel_hozac savannah. 1184277463 M * daniel_hozac or here. 1184277508 M * markus_ Here? Ok :) I'ld like to propose for supporting runlevels greater than 6 ... 1184277548 M * daniel_hozac ok, done. 1184277554 M * Bertl markus_: do you have a distro which uses them? 1184277563 M * markus_ sme server does 1184277573 M * Bertl daniel_hozac: you should do the cron-o-john part :) 1184277585 M * daniel_hozac what? 1184277598 M * daniel_hozac (http://svn.linux-vserver.org/projects/util-vserver/changeset/2562) 1184277610 M * Bertl daniel_hozac: you know, travel back in time and fix it ... then explain that it is implemented now :) 1184277615 M * daniel_hozac oooh, hehe. 1184277636 M * markus_ Bertl: it's a "all in one" mail/web server made easy, based on .. hmm ... redhat maybe 1184277663 M * Bertl sounds interesting, and what does it use the extra runlevels for? 
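The "path is configurable" remark refers to a sysctl the vserver patch adds; a hedged sketch of pointing it at a site-specific hook follows (the script path is invented for illustration, and as noted above coderanger's case is better served by vc_wait_exit, since the helper fires slightly earlier):

    # the kernel-side helper path; default is /sbin/vshelper
    cat /proc/sys/kernel/vshelper
    # point it at a custom cleanup script (hypothetical path)
    echo /usr/local/sbin/vshelper-cleanup > /proc/sys/kernel/vshelper
    # the arguments and environment it is invoked with are defined in
    # kernel/vserver/helper.c, as Bertl points out above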
1184277729 M * markus_ well, 7 is the main runlevel 1184277741 M * markus_ per inittab it straight starts 7 as default 1184277762 M * daniel_hozac why not 2? or 3? or 4? or 5? 1184277805 Q * meandtheshell Quit: Leaving. 1184277814 M * markus_ uh, I don't know? I didn't wrote it. 1184278268 M * Bertl okay, off to the airport (again) .. back later 1184278274 N * Bertl Bertl_oO 1184278278 M * daniel_hozac good luck! 1184278722 Q * markus_ Quit: ChatZilla 0.9.78.1 [Firefox 2.0.0.4/2007051502] 1184279494 Q * hardwire Ping timeout: 480 seconds 1184279933 Q * lilalinux__ Remote host closed the connection 1184280399 Q * slack101 Ping timeout: 480 seconds 1184280420 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184281909 Q * gerrit Ping timeout: 480 seconds 1184281910 M * Supaplex slack101: sup slacker 1184282341 M * slack101 Supaplex, what it do pimpin 1184282547 J * gerrit ~gerrit@bi01p1.co.us.ibm.com 1184283397 Q * gerrit Quit: Client exiting 1184284688 N * Bertl_oO Bertl 1184284694 Q * slack101 Ping timeout: 480 seconds 1184284697 M * Bertl back now ... 1184284715 J * slack101 ~default@cpe-65-31-15-111.insight.res.rr.com 1184284717 M * Bertl (this time successful :)