1210550506 Q * assem Ping timeout: 480 seconds 1210550874 Q * Linus Remote host closed the connection 1210551200 J * assem ~terranpro@ip68-230-167-147.mc.at.cox.net 1210551726 Q * sid3windr Ping timeout: 480 seconds 1210551727 J * kriebel ~kriebel@216-164-160-36.c3-0.eas-ubr10.atw-eas.pa.static.cable.rcn.com 1210551834 J * sid3windr luser@bastard-operator.from-hell.be 1210552495 Q * Piet Quit: Piet 1210557737 J * fatgoose ~samuel@75.149.37.170 1210558363 Q * FireEgl Quit: Leaving... 1210560218 J * FireEgl FireEgl@adsl-226-58-240.bhm.bellsouth.net 1210562181 M * padde I'm having trouble with a squid instance inside a linux-vserver (CentOS guest on CentOS host)... are there known issues? 1210562219 M * padde (squid 'hangs' every now and then, cpu at 100%, not answering any requests) 1210562880 P * assem Leaving 1210564021 M * Bertl padde: what Linux-VServer kernel version? 1210564051 M * padde Bertl: 2.6.22.19-vs2.3.0.34.1 #1 SMP 1210564131 M * Bertl can you attach to the squid thread with strace -fF when it 'hangs' again? 1210564280 M * padde Bertl: within the guest? I tried 'strace -p pid' last time it happened (on both squid processes, the one running as root and the one running as squid), but strace didn't give me any output. 1210564309 M * Bertl well, try with strace -fF -p 1210564316 M * padde Bertl: ok, I will 1210564334 M * Bertl if that doesn't give any output, then double check in the dmesg logs (on the host) 1210564385 M * padde Bertl: I suspected that there are not enough file handles after I googled around for some time, and changed the resource limit so that 'ulimit -Ha' shows 'open files (-n) 1048576' now, but that doesn't seem to have helped at all 1210564524 M * Bertl squid is a little strange, but it works fine here 1210564530 M * padde in dmesg I found 'vxW: [xid #0] !!! limit: f5eb2054[VM,9] = 31 on exit.', but I don't know when this was logged 1210564545 M * Bertl that's harmless 1210564584 M * padde Bertl: what's your configuration? 
it's used by ca. 30 users here, configured to use 15 GB of cache. 1210564612 M * Bertl my config is very specific, as I wanted it to cache rpm packages too 1210564616 M * padde Bertl: otherwise nothing special... a few ACLs, nothing more. usually it consumes very little system resources 1210565170 M * Bertl okay, off to bed now ... have a good one everyone! 1210565175 N * Bertl Bertl_zZ 1210566293 J * kir ~kir@swsoft-msk-nat.sw.ru 1210567591 J * balbir ~balbir@122.167.223.107 1210568658 Q * fatgoose Read error: Connection reset by peer 1210569884 Q * bronson Ping timeout: 480 seconds 1210570307 J * bronson ~bronson@adsl-68-122-117-135.dsl.pltn13.pacbell.net 1210570946 J * ktwilight ~ktwilight@122.210-66-87.adsl-static.isp.belgacom.be 1210571324 Q * ktwilight_ Ping timeout: 480 seconds 1210571832 J * hijacker_ ~hijacker@213.91.163.5 1210571832 Q * hijacker Read error: Connection reset by peer 1210573899 Q * balbir Ping timeout: 480 seconds 1210576227 Q * FireEgl Quit: Leaving... 1210576234 J * balbir ~balbir@59.145.136.1 1210576699 Q * balbir Read error: Connection reset by peer 1210576735 Q * jsambrook Quit: Leaving. 1210578423 J * bfremon ~ben@lns-bzn-26-82-254-108-116.adsl.proxad.net 1210578667 J * JonB ~NoSuchUse@77.75.164.169 1210579446 J * FireEgl ~FireEgl@adsl-226-58-240.bhm.bellsouth.net 1210579452 J * balbir ~balbir@59.145.136.1 1210579732 Q * AStorm Quit: Leaving 1210582010 J * MatBoy ~MatBoy@wiljewelwetenhe.xs4all.nl 1210582817 J * dna ~dna@191-247-dsl.kielnet.net 1210583011 Q * balbir Quit: Ex-Chat 1210583478 J * balbir ~balbir@59.145.136.1 1210585846 J * bonbons ~bonbons@2001:960:7ab:0:2c0:9fff:fe2d:39d 1210585921 J * bfremo1 ~ben@lns-bzn-22-82-249-94-177.adsl.proxad.net 1210586256 Q * bfremon Ping timeout: 480 seconds 1210586574 Q * FireEgl Read error: Connection reset by peer 1210587413 J * FireEgl ~FireEgl@adsl-226-58-240.bhm.bellsouth.net 1210587442 Q * bfremo1 Quit: Leaving. 
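The debugging sequence Bertl suggests above (attach strace with fork-following, then fall back to the host's dmesg and check the file-handle limit) might look like this; the pgrep pattern and user name are illustrative assumptions, and it needs root on the host:

```shell
# Sketch of the checks suggested above; run as root on the host.
# The 'squid' user/process names are illustrative -- adjust to your setup.
pid=$(pgrep -u squid squid 2>/dev/null | head -n1 || true)
if [ -n "$pid" ]; then
    strace -fF -p "$pid"        # -f/-F: follow forks/vforks while squid 'hangs'
fi
dmesg 2>/dev/null | tail -n 50  # look for vxW:/limit messages from the kernel
ulimit -Hn                      # hard limit on open file descriptors
```

If strace stays silent even with -fF, the dmesg output on the host is the next place to look, as in the conversation above.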
1210587799 J * friendly ~friendly@ppp59-167-137-15.lns3.mel6.internode.on.net 1210587998 J * ntrs ~ntrs@77.29.64.237 1210588446 J * bfremon ~ben@lns-bzn-22-82-249-94-177.adsl.proxad.net 1210589405 J * lilalinux ~plasma@80.69.41.3 1210589530 Q * balbir Ping timeout: 480 seconds 1210590318 J * balbir ~balbir@59.145.136.1 1210591782 Q * Aiken Remote host closed the connection 1210592148 Q * bfremon Remote host closed the connection 1210592175 J * bfremon ~ben@lns-bzn-22-82-249-94-177.adsl.proxad.net 1210592307 Q * bfremon Remote host closed the connection 1210592551 Q * friendly Quit: Leaving. 1210592805 J * jsambrook ~jsambrook@aelfric.plus.com 1210592879 J * bfremon ~ben@lns-bzn-22-82-249-94-177.adsl.proxad.net 1210594110 M * arekm did vserver 2.3.0.33 for .22 have any known bugs like a mem leak or something? (trying to track down a problem and not sure if this is a vserver one) 1210594651 J * kwowt ~quote@193.77.185.75 1210594652 M * kwowt I'm having problems with torrent trackers on a Gentoo VPS using linux-vserver. I'll pay 20EUR to whoever helps me fix the issue. Query me. 1210595001 M * nkukard possible to bring up a vserver IP even if the host interface is down? 1210595608 Q * JonB Quit: This computer has gone to sleep 1210596074 M * pmjdebruijn kwowt: first explain your issue... 1210596321 M * kwowt torrent trackers work with apache2, but when you set the tracker to "private", so only registered users can seed and leech from the tracker, upload the torrent and try to seed it, the leecher gets 'torrent not registered with this tracker' and 'invalid info_hash' errors 1210596339 M * kwowt I noticed that this only happens when I run the tracker inside the guest vserver; if I move it outside, to the host environment, it works. 
1210596349 M * kwowt maybe it's a matter of another issue, the one where I can't run any apps (like apache, sshd, ftp) without first setting the listen IP for them in the configuration files 1210596366 M * kwowt like the port is in use 1210596368 Q * balbir Ping timeout: 480 seconds 1210596501 J * JonB ~NoSuchUse@192.38.8.25 1210596584 J * ntrs_ ~ntrs@77.29.69.187 1210596786 J * pflanze ~chris__@77-56-83-98.dclient.hispeed.ch 1210596854 M * pflanze Hello. Not really vserver's business, but I need it for running vservers with fair io, so: I'm looking for some documentation of the iosched parameters, 1210596872 M * pflanze especially the files under iosched/ when selecting one of the schedulers. 1210596907 M * pflanze cfq has 9 files with numbers in them of which I've got no idea what they do. 1210596965 M * pflanze I want good interactive behaviour, and cfq by default seems to take up to 6 seconds to react at all, 1210596985 M * pflanze which is surprisingly worse than the anticipatory scheduler which is reacting after 3 seconds for me. 1210597000 M * pflanze But I suspect that would be just a matter of tuning, so...? 1210597014 Q * ntrs Ping timeout: 480 seconds 1210597345 Q * FireEgl Quit: Leaving... 1210599055 Q * nenolod Quit: half of the fortune 500 gets their ass kicked by the US Government for loosing private data of customers due to random bullshit, usually invol 1210599156 J * nenolod ~nenolod@ip70-189-77-60.ok.ok.cox.net 1210599324 N * DoberMann[ZZZzzz] DoberMann 1210599502 J * rosehosting ~chatzilla@77.29.192.118 1210599545 Q * rosehosting 1210601894 N * Bertl_zZ Bertl 1210601927 M * Bertl morning folks! 
1210601975 M * Bertl pflanze: what kernel, setup, config, load 1210602017 M * pflanze 2.6.22.19 1210602045 M * Bertl kwowt: get an strace -fF of such a connect on the host and on the guest 1210602060 M * Bertl kwowt: and probably a tcpdump -vvnei ethX too 1210602083 M * pflanze load: well, not heavy load, very diverse, but long-running jobs (updatedb etc., and also direct partition scans) should not make reaction times bad. 1210602089 M * Bertl pflanze: so no Linux-VServer patch? 1210602099 M * pflanze On this test machine, no 1210602107 M * pflanze but I'm testing for a vserver server. 1210602138 M * Bertl hmm, you know that Linux-VServer kernels change the cfq behaviour to allow for per guest IO queues? 1210602146 M * pflanze which is running 2.6.22.18-vs2.2.0.6 1210602150 M * pflanze ah no I didn't know that 1210602175 M * pflanze so changing to cfq would already be better I guess 1210602193 M * Bertl you are talking about 6 seconds in regard of interactivity, what do you mean by that? 1210602209 M * Bertl it takes 6 seconds to type a char? 1210602223 M * pflanze I'm running cat /dev/sda14 | wc -c, and then do something else which needs to read a page. 1210602243 M * pflanze it seems the kernel likes to evict binaries from memory, 1210602252 M * Bertl you are running that in a separate cfq queue? 1210602266 M * pflanze so just bringing a program to foreground which hasn't been active for a bit is enough to make it wait at least 6 seconds. 1210602268 M * Bertl or 'just' as the same user, etc 1210602286 M * pflanze the cat is running as root, the rest as chris 1210602306 M * pflanze [although the root session has been started from the chris account through su] 1210602309 M * Bertl did you try to swap that, for a test? 1210602321 M * Bertl (and use ssh) 1210602327 M * pflanze checking. 
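The nine cfq files pflanze asks about live under /sys/block/&lt;dev&gt;/queue/iosched/. A small helper to dump them, as a sketch: the device name defaults to sda, and the tunable names in the comment are the 2.6.22-era cfq set (an assumption about that kernel generation):

```shell
# Sketch: dump the active I/O scheduler and its tunables for a block device.
# On 2.6.22-era kernels the nine cfq files are: back_seek_max,
# back_seek_penalty, fifo_expire_async, fifo_expire_sync, quantum,
# slice_async, slice_async_rq, slice_idle, slice_sync.
list_iosched() {
    dev="${1:-sda}"
    dir="/sys/block/$dev/queue/iosched"
    if [ ! -d "$dir" ]; then
        echo "no iosched directory for $dev" >&2
        return 1
    fi
    cat "/sys/block/$dev/queue/scheduler"  # active scheduler shown in [brackets]
    grep . "$dir"/*                        # print each tunable file with its value
}
```

Usage would be e.g. `echo cfq > /sys/block/sda/queue/scheduler` followed by `list_iosched sda`.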
1210602346 M * Bertl on the mainline kernel, cfq is per user 1210602386 J * yarihm ~yarihm@84-75-103-252.dclient.hispeed.ch 1210602858 M * pflanze well, a good tool to do more scientific measurements would be helpful 1210602889 M * pflanze anyway, at the moment I can't see any difference from running the cat as root, ssh root, chris, and with or without ionice -c 3 1210602950 M * pflanze an hour ago it seemed like switching off one of the two cores helped a lot. 1210602966 M * pflanze now I don't really see this again, strange. 1210602991 M * Bertl there are a bunch of factors involved there, for example, if whatever you try to do gets completely swapped out (rather unlikely with default settings) then you have to wait until binary and libraries are paged in 1210603001 M * pflanze sure 1210603011 M * pflanze which makes it a bit difficult. 1210603016 M * Bertl so, first you have to figure what actually happens 1210603042 M * Bertl iostat, vmstat and friends might help 1210603135 M * pflanze formerly, dpkg -S /usr/bin/top did take "forever", after about 3 minutes with no reaction I did just ctrl-z the cat and voila, within much less than a second the dpkg was done. 1210603162 M * pflanze now such a process seems to be getting disk access a few times a second, from hearing head clicking of the disk. 1210603166 M * pflanze no idea what changed. 1210603191 M * pflanze both with cfq 1210603213 M * Bertl anything in dmesg? 1210603253 M * pflanze I *did* modify some of the parameters in the anticipatory scheduler, but have since switched back to cfq 1210603264 M * pflanze no, nothing there except me switching off and on the second core 1210603276 M * Bertl how do you do that? 1210603303 M * pflanze echo 0 / 1 > /sys/devices/system/cpu/cpu?*/online 1210603344 M * pflanze (I've written a script for that) 1210603358 M * Bertl well, that will definitely mess up the kernel for some time :) 1210603393 M * pflanze hm, the bad behaviour above with dpkg -S etc. 
was w/o me having switched anything for a long time 1210603405 M * pflanze then after much trying, I switched it off, and it immediately improved. 1210603421 M * pflanze now after switching 2nd core on again, it's still quite ok. 1210603430 M * pflanze so my messing would just have improved things. 1210603463 M * pflanze ah 1210603477 M * pflanze I did mke2fs /dev/sda14 and mount that in the meantime 1210603496 M * pflanze strange that this should change something, but who knows 1210603512 A * pflanze testing with sda13 instead now 1210603530 M * pflanze yep bad 1210603536 M * Bertl are those different disks? 1210603548 M * pflanze no 1210603553 M * Bertl or just partitions on the same (dog slow) disk? 1210603569 M * pflanze dog slow disk probably, it's a laptop. 1210603578 M * Bertl in which case you should already know what the problem is 1210603588 M * pflanze it's 30-40MB/sec streaming performance, but not so fast otherwise, maybe 1210603599 M * Bertl the I/O scheduling actually makes things worse here 1210603606 M * pflanze why? 1210603612 M * Bertl because the disk has to constantly seek between the two partitions 1210603613 J * balbir ~balbir@59.145.136.1 1210603629 M * pflanze no, I'm only cat'ing one partition at a time 1210603648 M * pflanze the root fs is on the same disk, 1210603649 M * Bertl and the mke2fs? 1210603656 M * pflanze but that's what I wanted to test after all. 1210603661 M * pflanze ditto 1210603680 M * Bertl so you have an mke2fs and a cat running on different partitions, yes? 
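The core-toggle script pflanze mentions (echoing 0/1 into the cpu online files) could be wrapped like this; a sketch assuming sysfs CPU hotplug as in the conversation, with cpu0 skipped since it is typically not hot-pluggable (needs root to take effect):

```shell
# Sketch of a CPU core on/off toggle via the sysfs hotplug files.
# Usage: set_cores 0  (take cpu1..N offline)  /  set_cores 1  (bring them back)
set_cores() {
    case "$1" in
        0|1) ;;
        *) echo "usage: set_cores 0|1" >&2; return 2 ;;
    esac
    for f in /sys/devices/system/cpu/cpu[1-9]*/online; do
        # skip unwritable entries (not root, or the glob matched nothing)
        [ -w "$f" ] || continue
        echo "$1" > "$f" 2>/dev/null || true  # write may still fail in containers
    done
    return 0
}
```

As Bertl notes above, offlining a core perturbs the scheduler for a while, so measurements right after a toggle are suspect.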
1210603710 M * pflanze no, mke2fs was long finished 1210603721 M * pflanze I just have it mounted; 1210603740 M * pflanze which means, the kernel won't drop the partition's data after stopping cat; nothing else I guess 1210603743 M * Bertl mounted but otherwise unused filesystems (ext2/3) do not cause I/O load 1210603747 M * pflanze yep 1210603845 Q * Hollow Remote host closed the connection 1210603846 J * dna_ ~dna@191-247-dsl.kielnet.net 1210603851 J * Hollow ~hollow@proteus.croup.de 1210603978 M * pflanze it's really bad again now, like X not being responsive for 10-20 seconds 1210603996 M * pflanze whatever. 1210604021 M * pflanze and this was now with only 1 cpu. so much for my theory. 1210604023 M * pflanze I'd be happy to find out once how to tweak the scheduler to improve upon that. 1210604066 Q * balbir Quit: Ex-Chat 1210604124 M * pflanze disk io has always given bad interactivity on linux (or probably any OS--I'm almost only using linux), but it seems to me like it has gotten worse with that new laptop and/or kernel. 1210604126 M * Bertl as I said, vmstat, iostat, maybe top 1210604193 M * pflanze Actually there is a problem known to me: when I swap out xorg by running a big parallel compilation job, X with the nv or nvidia drivers will freeze for tens of minutes. 1210604238 M * pflanze I did report that as a bug to the nv xorg driver. The guy there just thought it was normal behaviour and pushed the bug up to xorg. 1210604255 Q * dna Ping timeout: 480 seconds 1210604268 M * pflanze I found out that in *this* case, switching to uniprocessor consistently helped; 1210604277 M * pflanze also running mlockall() in xorg seemed to help. 
1210604284 M * Bertl you don't have two cpus in your laptop :) 1210604292 M * Bertl you have two cores there 1210604295 M * pflanze yes 1210604311 M * Bertl which means that everything outside the 2nd level cache is shared 1210604312 M * pflanze But this is all different with my current cat /dev/sdaX test, since nothing is swapped out. 1210604339 M * Bertl also, a kernel difference would only be relevant if you compile with or without SMP support 1210604367 M * pflanze all kernels tried here were with SMP support 1210604382 M * pflanze but toggling off the second core did help with X 1210604413 M * pflanze but then it was only an X problem there, when running my compilation on the console, interactivity was no real problem 1210604432 M * pflanze and xorg with vesa driver was no problem either. 1210604442 M * pflanze One thing which could be special here is that the root partition is on dmcrypt. 1210604443 M * Bertl what chipset do you use? nvidia? 1210604446 M * pflanze yes 1210604457 M * Bertl well, that probably explains 90% of your issues 1210604470 M * pflanze so at first I thought, that's clearly a driver issue, and thus reported that bug 1210604481 M * pflanze driver or maybe hardware. 1210604510 M * pflanze but then now I did those cat checks, and saw no swapping (since linearly reading makes linux not cache the stuff so aggressively), 1210604531 M * pflanze and saw that bad io interactivity, and suddenly thought hey that might well still be just a kernel problem. 1210604567 M * pflanze maybe worsened by some particular driver workings or so. 1210604606 M * pflanze maybe it's related to dmcrypt, that's about my only hypothesis left. 
1210604659 J * balbir ~balbir@59.145.136.1 1210604674 M * pflanze (but the kcrypt kernel thread only takes half of one core for reading those 40MB/sec native disk speed, so it would have to be some interthread communication issue or so, not just cpu usage) 1210604717 M * pflanze (anyway sda13 and sda14 are of course not encrypted; just reading back evicted binaries will touch the encrypted root partition) 1210604802 M * pflanze (I've been meaning to play around with the schedulers for a long time for my vserver machine(s), so that's why I did the cat tests and came here; my excuse for being OT now) 1210604836 M * Bertl well, I think your laptop is just at the limits of hardware and scheduling 1210604864 M * pflanze But in any case I'm really disappointed that running ionice -c 3 does not help at all. 1210604884 J * hparker ~hparker@linux.homershut.net 1210604888 M * pflanze It should free up the disk accesses for other stuff, no?.. 1210604912 M * Bertl the encryption causes duplicated mappings of the disk blocks, the chipset has bad I/O properties, the increased latency and seeking will do the rest 1210604974 M * pflanze this is a ThinkPad T61, btw (I thought that'd be quite good hardware in comparison) 1210605024 M * Bertl ThinkPad with an nvidia chipset? 1210605032 M * pflanze yes 1210605046 M * pflanze they come with either intel or nvidia video 1210605054 M * pflanze nowadays. 1210605057 M * Bertl video, yes, but the chipset? 1210605060 M * pflanze ah 1210605110 M * pflanze what do you mean by chipset exactly, host bridge, pci bridge, ..? 1210605140 M * pflanze Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 0c) 1210605144 M * Bertl the thing which connects the cpu with the rest of the world :) 1210605157 M * pflanze PCI bridge: Intel Corporation Mobile PM965/GM965/GL960 PCI Express Root Port (rev 0c) (prog-if 00 [Normal decode]) 1210605162 M * pflanze how do I find out? 
1210605170 M * Bertl so that's an intel chipset then 1210605206 M * pflanze k 1210605385 M * pflanze still I'd welcome any pointer explaining the nine cfq parameter sys files 1210605871 M * Bertl use the source, chris :) 1210606558 J * vrwttnmtu ~eryktyktu@82-69-161-137.dsl.in-addr.zen.co.uk 1210606624 M * vrwttnmtu Hello all. I have a problem with Ubuntu guests on a Gentoo host. init 6 in the guest doesn't restart the guest. Is this a problem anyone knows about? 1210606644 M * vrwttnmtu PS. Hello all. :) 1210606775 M * Bertl hey, sounds like an ubuntu issue :) 1210606790 M * vrwttnmtu Well 1210606798 M * vrwttnmtu I don't have the problem with Debian 1210606799 M * vrwttnmtu :) 1210606803 M * Bertl look for a reboot or restart at the end of those scripts 1210606806 M * vrwttnmtu But you know, users.... 1210606832 M * Bertl and replace that by a reboot -f if you are using sysv init style 1210606886 M * vrwttnmtu Lemme check 1210606889 A * vrwttnmtu scampers off 1210606912 M * vrwttnmtu # cat apps/init/style 1210606912 M * vrwttnmtu plain 1210606915 M * vrwttnmtu Hmm 1210606937 M * vrwttnmtu What's the recommended style for Ubuntu guests? 1210606941 M * Bertl with plain init style, it is definitely an ubuntu issue, 1210606946 M * vrwttnmtu Hmm 1210606956 M * Bertl i.e. you have to check _why_ the ubuntu init doesn't reboot 1210606980 M * vrwttnmtu Hard to say, really. vserver ubuntu enter 1210606981 M * vrwttnmtu init 6 1210606986 M * vrwttnmtu Just nothing happens 1210607007 M * vrwttnmtu It also doesn't seem to boot properly either 1210607014 M * vrwttnmtu Doesn't seem to run the init scripts right either 1210607017 M * Bertl I'd use telinit, but double check that runlevel 6 is reboot on ubuntu :) 1210607021 M * vrwttnmtu # runlevel 1210607022 M * vrwttnmtu unknown 1210607037 M * vrwttnmtu Ubuntu doesn't have an /etc/inittab :( 1210607069 M * Bertl so what makes you think that it responds to 'normal' sysv init commands then? 
1210607093 M * vrwttnmtu Well, that's a good question. 1210607126 M * vrwttnmtu I don't know much about Ubuntu 1210607132 M * vrwttnmtu Other than it's "like Debian" 1210607159 M * Bertl well, obviously not, last time I checked, debian did use sysv :) 1210607163 M * vrwttnmtu Yep 1210607182 M * Bertl http://www.linux.com/feature/125977 1210607192 M * vrwttnmtu Another weirdness - if I restart the VPS from the host, SSH doesn't start up, even though it's set to. 1210607204 M * vrwttnmtu It's some sort of init problem, for sure 1210607272 M * vrwttnmtu # initctl list 1210607272 M * vrwttnmtu initctl: Unable to handle message: Message from illegal source 1210607280 M * vrwttnmtu Interesting 1210607318 M * vrwttnmtu No results found for "initctl: Unable to handle message: Message from illegal source". 1210607342 M * Bertl most likely missing source address remapping or so 1210607353 M * vrwttnmtu # ls -l /dev/initctl 1210607353 M * vrwttnmtu ls: cannot access /dev/initctl: No such file or directory 1210607359 M * vrwttnmtu Aaah, wonder if that's it. 1210607378 M * Bertl probably the upstart daemon has a hardcoded 127.0.0.1 check or so 1210607473 M * vrwttnmtu Just FYI, vserver foo stop gives "A timeout occured... and will be killed with SIGKILL", if that's of any use. 1210607498 M * Bertl nah, that just means that the 'upstart' doesn't react to the signal sent to it either 1210607519 M * Bertl I would simply say ubuntu guests are not (yet) supported 1210607547 M * Bertl doesn't mean it cannot be done with recent util-vserver if you figure the ubuntu guest details 1210607588 M * vrwttnmtu Is it something we can work out now, or does it take a long time? 1210607616 M * vrwttnmtu I'm quite surprised that Linux VServer and Ubuntu aren't a combo that people have asked for before actually.. 1210607618 M * Bertl well, if you figure out upstart 1210607630 M * vrwttnmtu What do I need to figure out about it? 
1210607651 M * Bertl how to tell it (from outside) to shutdown/reboot 1210607665 M * vrwttnmtu From the host, you mean? 1210607678 M * Bertl yes, via signalling, if that is possible at all 1210607681 M * vrwttnmtu OK 1210607689 M * vrwttnmtu I'll have a good look into it. 1210607733 M * vrwttnmtu vkill is for sending signals to vps processes, yes? 1210607742 M * Bertl yep 1210607745 M * vrwttnmtu OK 1210607753 M * vrwttnmtu Let me have a look. Thanks for the pointers.. 1210607760 M * Bertl np 1210608339 Q * balbir Ping timeout: 480 seconds 1210608670 Q * JonB Ping timeout: 480 seconds 1210608713 M * Bertl okay, off for now .. bbl 1210608717 N * Bertl Bertl_oO 1210608746 M * vrwttnmtu Oh 1210608750 M * vrwttnmtu Curses.. :) 1210608837 J * edlinuxguru ~edlinuxgu@216.223.13.111 1210608939 Q * docelic_ Quit: http://www.spinlocksolutions.com/ 1210609445 J * pmenier ~pmenier@ACaen-152-1-68-97.w83-115.abo.wanadoo.fr 1210609611 J * JonB ~NoSuchUse@77.75.164.169 1210611598 Q * bfremon Remote host closed the connection 1210611629 J * bfremon ~ben@lns-bzn-22-82-249-94-177.adsl.proxad.net 1210612209 N * pmenier pmenier_off 1210612904 Q * bfremon Ping timeout: 480 seconds 1210613457 Q * JonB Quit: This computer has gone to sleep 1210613521 J * balbir ~balbir@122.167.208.32 1210613582 J * bfremon ben@lns-bzn-22-82-249-94-177.adsl.proxad.net 1210614211 Q * vrwttnmtu Remote host closed the connection 1210614324 J * cryptronic ~oli@p54A3B6AC.dip0.t-ipconnect.de 1210614363 J * vrwttnmtu ~eryktyktu@82-69-161-137.dsl.in-addr.zen.co.uk 1210614429 Q * balbir Ping timeout: 480 seconds 1210614663 J * Piet ~piet@86.59.21.38 1210614880 M * vrwttnmtu Hey Piet 1210614886 M * vrwttnmtu Oops, wrong channel 1210614888 M * vrwttnmtu Sorry 1210614980 M * Piet my nick's the same on the other channels ;) 1210614996 Q * bfremon Ping timeout: 480 seconds 1210615040 J * hijacker ~Lame@87-126-142-51.btc-net.bg 1210615051 M * vrwttnmtu No, I'm monitoring another channel, and saying hello to everyone new 
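What vrwttnmtu goes off to try, sketched with util-vserver's tools: find the guest's init/upstart process from the host and signal it. This is a sketch assuming vkill's usual --xid/-s options; the guest name, the grep pattern, and $PID are illustrative, and whether upstart honours any such signal is exactly the open question in the conversation:

```shell
# Sketch (util-vserver assumed): signal the guest's init from the host.
vps ax | grep -i upstart              # vps: ps wrapper showing each process's context
vkill --xid ubuntu -s INT -- "$PID"   # $PID: the pid found above (illustrative);
                                      # SIGINT is what sysv init maps to ctrl-alt-del
```

If upstart ignores the signal, that matches the stop-timeout/SIGKILL behaviour reported above.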
1210615056 M * vrwttnmtu : 1210615057 M * vrwttnmtu :) 1210615414 J * JonB ~NoSuchUse@77.75.164.169 1210615819 J * balbir ~balbir@122.167.208.32 1210616580 J * Linus ~nuhx@bl7-130-23.dsl.telepac.pt 1210616589 A * Linus hi :D 1210616668 M * Linus can you tell me where is vs for 2.6.24 ??? 1210616674 M * Linus I lost the link :P 1210616694 M * Bertl_oO http://vserver.13thfloor.at/Experimental/ 1210616700 Q * vrwttnmtu Quit: Flee! Flee to the hills! 1210616728 M * Linus tks Bertl_oO :) 1210616734 M * Bertl_oO np 1210617006 Q * nebuchadnezzar Remote host closed the connection 1210617252 J * nebuchadnezzar ~nebu@zion.asgardr.info 1210618196 J * ntrs__ ~ntrs@77.29.74.122 1210618298 J * ensc|w ~ensc@www.sigma-chemnitz.de 1210618301 N * DoberMann DoberMann[Flim] 1210618606 Q * ntrs_ Ping timeout: 480 seconds 1210618620 J * blathijs ~matthijs@katherina.student.ipv6.utwente.nl 1210618634 M * blathijs Hi 1210618650 M * blathijs I'm considering whether to use vserver or openvz. 1210618661 M * blathijs Can anybody convince me to use Vserver? 1210618684 M * JonB blathijs: I don't think many in here have much experience with openvz 1210618734 M * blathijs Hmm, I'm getting a similar response in #openvz :-) 1210618745 M * blathijs Let's try another approach, then 1210618748 Q * hijacker Quit: Leaving 1210618753 M * JonB blathijs: what is it you want to achieve? 1210618783 M * blathijs JonB: Separation of different services on a general purpose server 1210618833 M * Bertl_oO if you want lightweight, then it's Linux-VServer, if you want compatibility with commercial stuff, then it's probably OVZ, as you can buy Virtuozzo(tm) then and simply switch 1210618874 M * blathijs It seems OpenVZ supports a lot of limits on a lot of things (disk, memory, IPC stuff, etc.), making it (next to) impossible to let one vps hog all resources and cause others to suffer. Does vserver provide something similar? 
1210618887 M * Bertl_oO yes 1210618887 J * bfremon ben@lns-bzn-22-82-249-94-177.adsl.proxad.net 1210618946 M * Bertl_oO http://linux-vserver.org/Documentation 1210618952 M * Bertl_oO http://linux-vserver.org/Resource_Limits 1210619054 M * blathijs Bertl_oO: That seems rather similar. I must have been reading out-of-date comparisons :-) 1210619089 M * Bertl_oO nah, probably just comparisons done by SWsoft :) 1210619094 M * blathijs hehe 1210619125 M * Bertl_oO we've had that stuff for several years now, so not much new there 1210619136 M * blathijs Is there any chance of merging vserver code into the mainline kernel in the future? 1210619155 M * Bertl_oO a lot of code is already in mainline 1210619175 M * Bertl_oO and Linux-VServer is constantly adapting to utilize mainline virtualization where appropriate 1210619250 M * blathijs So it's mainly new features and improvements that cause the vserver "patch" to remain, because the mainline kernel is lagging a bit behind? 1210619284 M * Bertl_oO either because mainline is lagging behind or because mainline is not interested in certain optimizations or virtualizations 1210619384 M * Bertl_oO and of course, there is the 'traditional' interface and backwards compatibility 1210619940 M * blathijs Another fancy feature of OpenVZ: It can suspend a VPS and dump its state into a file so it can be resumed on another physical machine. I don't think vserver can do something like that currently? 1210619985 M * Bertl_oO no, and we don't plan to add this, as it can be done with xen (or other solutions) and just bloats the code/causes potential issues 1210620011 M * nkukard harry, around? 1210620031 Q * bfremon Ping timeout: 480 seconds 1210620036 M * blathijs Bertl_oO: Does that mean you can suspend a vserver container using Xen? 1210620040 M * JonB Bertl_oO: which potential issues? 1210620081 M * Bertl_oO JonB: e.g. 
with networking, we would need to fully virtualize the stack (adds overhead and bloat and stack issues) to allow this for example 1210620089 M * edlinuxguru I personally do not value that as a feature, I can afford a shutdown outage when moving a system 1210620108 M * JonB edlinuxguru: some can't 1210620115 M * Bertl_oO blathijs: we do ip isolation not ip stack virtualization, avoids the double stack traversal 1210620143 M * Bertl_oO blathijs: as Linux-VServer works perfectly fine inside xen, you can use that to snapshot/migrate/whatever 1210620159 M * JonB how fast is XEN IO ? 1210620172 M * JonB disk io ? 1210620188 M * blathijs Bertl_oO: But that would be migrating all containers at the same time, not individual containers 1210620202 M * Bertl_oO depends on your setup 1210620243 M * blathijs You mean you can possibly group containers, by running multiple hosts with multiple containers each? 1210620257 M * Bertl_oO correct ... 1210620302 M * blathijs Why exactly would you need ip stack virtualization to allow dumping of a container? 1210620318 M * blathijs I can't seem to think of a critical case for that, really 1210620325 M * Bertl_oO because if you restore it, you need to restore all the connections/sockets 1210620332 M * edlinuxguru JonB: I think it is a risky feature. Honestly VMware says you can live migrate a system. I tried it; 1 out of every 10 tries it did not work right. 1210620352 M * blathijs Bertl_oO: Wouldn't storing a list of connections/sockets do the job just fine? 1210620355 M * Bertl_oO blathijs: and there is no guarantee that the ips are available on the target host/at that time, without a virtual stack 1210620375 Q * Linus Quit: I'll Be Back! 1210620439 M * blathijs I don't really know how OpenVZ's dumping stuff works, but it seems that with a virtual ip stack things mostly get a lot easier :-) 1210620464 M * Bertl_oO for the virtualization technology, yes, but for a price :) 
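For blathijs's earlier question about per-guest limits: Linux-VServer configures them per guest under /etc/vservers, the tree Bertl_oO's Resource_Limits link documents. A sketch assuming the usual util-vserver rlimits layout; the guest name and all values are illustrative:

```shell
# Sketch: per-guest resource limits in util-vserver's config tree.
# Guest name 'web1' and the values are illustrative; see
# http://linux-vserver.org/Resource_Limits for the full list of limit names.
mkdir -p /etc/vservers/web1/rlimits
echo 256    > /etc/vservers/web1/rlimits/nproc.hard    # max processes
echo 8192   > /etc/vservers/web1/rlimits/openfd.hard   # max open file descriptors
echo 131072 > /etc/vservers/web1/rlimits/rss.hard      # max resident pages
# the limits are applied on the next start of the guest
```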
1210620548 M * edlinuxguru Personally that is why I like vserver better than most of the other technologies. Every emulation layer just slows things down. 1210620557 J * doener ~doener@i577B9626.versanet.de 1210620664 Q * doener_ Ping timeout: 480 seconds 1210620695 M * blathijs In the end, don't OpenVZ and Vserver use the same kernel-level concepts? 1210620715 M * Bertl_oO similar basic principles, yes 1210620762 M * Bertl_oO blathijs: your point being? 1210620975 M * blathijs Bertl_oO: Similar principles, or also similar code? I.e., doesn't the mainline kernel have the concept of a container that both projects use? 1210620994 M * blathijs My point being that improvements of one project influence the other as well? 1210621023 M * Bertl_oO blathijs: there is no shared code between OVZ and Linux-VServer, but both projects adapt (and thus share) to the mainline changes 1210621186 M * edlinuxguru A little off topic. I am not too familiar with OpenVZ but I will say that Vserver works out of the box. You can get testing and going fast. I think it's such a shame that VMware made this brand and I found it to be a waste of time. I have been really happy with vserver. 1210621237 M * blathijs The guys in #openvz said that openvz was easy to set up and vserver was harder :-p 1210621259 M * Bertl_oO yeah, that now convinced me :) 1210621267 M * blathijs I think I'm gonna play with vserver for now 1210621272 M * JonB Bertl_oO: so, you are gonna drop any future development? 1210621277 M * blathijs Bertl_oO: Thanks for the info! 1210621372 M * Bertl_oO you're welcome! 1210621782 M * edlinuxguru How would everyone rate the linux/vserver controls against the solaris containers and resource controls in /etc/projects? 1210621823 M * edlinuxguru I did some research into the capabilities that solaris had and they seemed very strong. 1210621968 M * Bertl_oO well, I'd say we can do all that and much more :) 1210622046 M * JonB zfs? 
1210622082 M * Bertl_oO zfs is now a container feature? 1210622093 M * Bertl_oO (not that I regard zfs as a feature at all :) 1210622094 M * edlinuxguru ZFS is a filesystem 1210622110 M * JonB no, but a solaris feature 1210622175 M * edlinuxguru I specifically meant the features of solaris relating to software virtualization. They have containers, zones, processor groups, 1210622199 M * Bertl_oO yep, we have guests, contexts, cpu groups 1210622219 M * edlinuxguru http://www.cuddletech.com/blog/pivot/entry.php?id=803 1210622284 M * edlinuxguru One very cool feature that did not see the light of day was that supposedly the information in /etc/projects could be stored in ldap so it could work across a server farm 1210622307 M * edlinuxguru Let me restate: the feature may exist but was not very well documented 1210622322 M * Bertl_oO you can store /etc/vservers wherever you want too :) 1210622386 M * JonB Bertl_oO: even on this? http://www.engadget.com/2008/05/12/electricity-powered-datastorm-data-transfer-device-is-retrolutio/ 1210622427 M * Bertl_oO given a proper driver, why not? 1210622524 M * edlinuxguru I'm a little confused. I hope you're not taking a dig at me or anything. Believe me I am no solaris fan but I thought their zones were really ahead of their time and I liked how they can be configured in a single file. 1210622573 M * Bertl_oO nah, they are actually behind the time .. all the virtualization stuff (OS level) has been discovered and invented a few years ago 1210622615 M * Bertl_oO the only advantage for solaris was that Linux is way further behind in this area 1210622954 J * Aiken ~james@ppp118-208-54-233.lns4.bne1.internode.on.net 1210623051 M * edlinuxguru Someone brought up an interesting point. Has anyone ever seen a white paper putting /etc/vservers on an NFS system and using a SAN. 
That would be a great way to show off vserver in a high-end environment 1210623076 M * Bertl_oO whitepaper I don't know, but lycos is doing that in production 1210623180 M * JonB i wouldn't call NFS high end 1210623222 J * Linus ~nuhx@bl7-130-23.dsl.telepac.pt 1210623300 M * Linus what is nss ?? 1210623307 M * Linus checking for NSS... no 1210623307 M * Linus no 1210623307 M * Linus configure: error: internal error 1210623311 M * Linus :| 1210623337 M * Bertl_oO what does google say? 1210623379 M * Linus let me see 1210623381 M * Linus :P 1210623981 A * ard wouldn't call SAN high end 1210624319 M * edlinuxguru True, NFS is not high end, but you need some way to get read/write access to /etc/vservers across a farm. I could say OCFS2 and Lustre instead of NFS or a SAN, I guess 1210624807 M * edlinuxguru My point was not a technology focus, rather something to showcase vserver in an enterprise environment. 1210625216 M * geb why nfs ? 1210625290 Q * bonbons Quit: Leaving 1210625317 M * geb what do you do with nfs or OCFS2 ? 1210625327 M * geb having the vservers dir accessible over a network ? 1210625382 M * edlinuxguru Forget NFS. 1210625465 M * edlinuxguru I was saying that there is not much documentation that I found on vserver best practices or using it in a large environment.
1210625779 M * geb hum ok 1210625781 M * ard Hmmmm: 1210625789 M * ard root@chode:/extra/home/ard# chcontext --cap CAP_SYS_RESOURCE,CAP_SYS_NICE,CAP_SYS_ADMIN --xid sponlp1 sudo -u '#31' renice 0 32698 1210625789 M * ard 32698: old priority 0, new priority 0 1210625791 M * ard vs 1210625794 M * geb but that depends on what you want to do 1210625798 M * ard root@chode:/extra/home/ard# chcontext --cap CAP_SYS_RESOURCE,CAP_SYS_NICE,CAP_SYS_ADMIN --xid sponlp1 sudo -u '#0' renice 0 32698 1210625798 M * ard renice: 32698: setpriority: Operation not permitted 1210625845 M * geb vserver is very close to linux 1210625869 M * geb so you can do everything you can do with linux 1210625940 M * geb for example, it is very easy to put some vserver hosts in a failover/ha design with standard linux tools 1210625944 M * geb like drbd 1210626214 N * DoberMann[Flim] DoberMann[ZZZzzz] 1210626374 M * edlinuxguru I understand that. I just think vserver would benefit from some shiny PDF that says "do it this way". Like I am completely convinced that vserver is a better system than vmware or xen, but without the right supporting documents it might be hard to convince someone else. 1210626446 M * ard ah.... 1210626455 M * ard from --xid 1 I can do anything... 1210626468 M * ard but from --xid != 0, I can do nothing when UID=0 1210627066 Q * Linus Remote host closed the connection 1210627646 J * Linus ~nuhx@bl7-130-23.dsl.telepac.pt 1210627698 M * daniel_hozac ard: kernel? 1210628222 Q * Piet Quit: Piet 1210629143 Q * yarihm Quit: This computer has gone to sleep 1210629144 Q * geb Quit: Quitte 1210629736 J * dna__ ~dna@191-247-dsl.kielnet.net 1210630037 Q * JonB Quit: This computer has gone to sleep 1210630136 Q * dna_ Ping timeout: 480 seconds 1210630216 Q * MatBoy Quit: Ik ga weg 1210630812 J * marcello ~chatzilla@200.137.222.230 1210630844 Q * ntrs__ Ping timeout: 480 seconds 1210630844 M * marcello how do I backup in vserver? 1210630899 M * Medivh hi there...
anyone alive who could help me with a compile problem? 1210630913 M * Medivh trying to compile openvcp, and all i get is this: 1210630914 M * Medivh In file included from src/get.c:40: 1210630920 M * Medivh /usr/include/vserver.h:795: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'vc_get_task_tag' 1210630947 M * marcello Someone understands Portuguese? 1210630951 M * Medivh i'm using daniel's RPMs for centos 5 on x86_64, util-vserver version is 0.30.215 1210630970 M * Medivh marcello, what exactly do you want to backup? 1210631005 M * marcello I'd like to backup one vserver guest 1210631050 Q * larsivi Remote host closed the connection 1210631061 M * Medivh you could just rsync /vservers/name_of_guest/ to another machine 1210631079 M * Medivh i.e. rsync -azP /vservers/name_of_guest/ -e ssh backup@other.machine:/directory_to_backup_to/ 1210631096 M * marcello Medivh: ok, I use rdiff-backup! 1210631119 M * marcello Medivh: And how to restore? 1210631131 M * Medivh same way, just rsync it back from the backup machine 1210631156 M * Medivh like rsync -azP backup@other.machine:/directory_to_backup/to/ -e ssh /vservers/name_of_guest/ 1210631269 M * marcello Medivh: I mean bringing the service up, for example: vserver start? 1210631293 M * Medivh marcello, uhm, sorry? what do you mean exactly? 1210631317 M * marcello Medivh: I have difficulty with english! 1210631324 M * marcello :) 1210631375 M * Medivh hehe no problem, but please try explaining again what you meant 1210631447 M * marcello Medivh: I would like to make a backup so that if a problem eventually happens, I can restore the backup quickly. 1210631472 M * marcello Medivh: You understand me? 1210631503 M * Medivh hm well, you can just use the rsync... or you can make the backup on the same machine the vserver is running on of course, maybe just with cp -a ... worked for me, though i understand rsyncing is the recommended way 1210631588 M * marcello Medivh: Ok, I use rdiff-backup, you know?
1210631614 M * marcello Medivh: http://www.nongnu.org/rdiff-backup/ 1210631615 M * Medivh hrm, i'm sorry, i heard of it but never tried/used it 1210631627 M * Medivh but it sounds like it could do the job just fine 1210631692 M * marcello It uses librsync for backups. 1210631735 M * marcello Medivh: for me it's the most elegant 1210631804 M * Medivh hm, maybe i should take a look, might come in handy for some applications of mine as well 1210631804 M * marcello Medivh: I'd like to know how to restore the backup 1210631823 M * Medivh but i don't know how restoring works with rdiff-backup without reading up, sorry 1210631869 M * marcello Medivh: rdiff-backup just controls my backup 1210631904 M * marcello Medivh: What I wanted is to bring the guest up once the backup is restored. 1210631933 M * marcello for example: if it's the same: vserver start 1210631947 M * marcello Medivh: Understand? 1210631962 M * kriebel marcello: your backup will just be the files on disk, not the config in /etc/vservers 1210631972 M * kriebel unless you copy that, too 1210631993 M * kriebel by "files on disk", I mean the files for the vserver's filesystem 1210632031 M * kriebel you can stop the broken vserver, move it out of the way, then copy the backup version to /vservers 1210632034 M * kriebel then start again 1210632072 M * kriebel but you'd have to restore the backup if it's archived, and know where the backup copy is on your system 1210632078 M * marcello kriebel: and where are the data files? 1210632100 M * marcello kriebel: in /home/? 1210632101 M * kriebel that would be something you set up in your rdiff-backup configuration 1210632184 J * brag_ ~bragon@2001:7a8:aa58::1 1210632205 J * arthur_ ~arthur@pan.madism.org 1210632239 J * daniel_hozac_ ~daniel@ssh.hozac.com 1210632256 M * marcello kriebel: the configuration files are in /etc/vservers and data in /home/vservers/? 1210632297 M * marcello kriebel: Is this true?
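The stop / move-aside / copy-back / start cycle kriebel describes can be sketched as a small script. This is a minimal local sketch under assumed names and paths (GUEST, and mktemp stand-ins for /vservers and the backup location); in production the cp -a steps would be rsync runs against a backup host, and per kriebel's note the config under /etc/vservers must be backed up separately.

```shell
#!/bin/sh
# Minimal local sketch of the backup/restore cycle discussed above.
# GUEST and the directory layout are assumptions; in production the
# cp -a steps would be rsync runs against a backup host, e.g.
#   rsync -azP /vservers/$GUEST/ -e ssh backup@host:/backups/$GUEST/
# and you would stop the guest first with: vserver $GUEST stop
set -e
GUEST=demo-guest
VROOT=$(mktemp -d)     # stand-in for /vservers
BACKUPS=$(mktemp -d)   # stand-in for the backup location

# Fake a guest filesystem with one file in it.
mkdir -p "$VROOT/$GUEST/etc"
echo "guest data" > "$VROOT/$GUEST/etc/motd"

# Backup: archive the guest tree, preserving ownership and permissions.
cp -a "$VROOT/$GUEST" "$BACKUPS/"

# Something breaks: move the damaged tree aside instead of deleting it.
mv "$VROOT/$GUEST" "$VROOT/$GUEST-broken"

# Restore: copy the backup back; afterwards you would run: vserver $GUEST start
cp -a "$BACKUPS/$GUEST" "$VROOT/"

cat "$VROOT/$GUEST/etc/motd"   # prints "guest data"
```

Keeping the broken tree around as $GUEST-broken costs disk space but makes the restore reversible, which matters when you are not yet sure the backup is good.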
1210632304 Q * brag Ping timeout: 480 seconds 1210632314 Q * arthur Ping timeout: 480 seconds 1210632338 Q * dna__ Quit: Verlassend 1210632349 Q * daniel_hozac Ping timeout: 480 seconds 1210632359 Q * blathijs Ping timeout: 480 seconds 1210632458 M * kriebel marcello: my data files for linux-vserver are in /vservers 1210632472 M * kriebel but /home/vservers is a fine place for them 1210632533 N * arthur_ arthur 1210632556 M * Medivh kriebel: any chance you might have an idea on my compile issue? :) 1210632564 M * marcello kriebel: This is standard in Debian. 1210632678 M * kriebel marcello: I build my vservers with an argument for the path; I never let it use a default, so I don't know where it would put them 1210632693 M * kriebel but I don't think I have a vserver user, so I wouldn't put them in /home 1210632701 M * kriebel not that it really matters. It's an OK place 1210632709 M * kriebel Medivh: no, I didn't read very far up 1210632739 M * Medivh [00:19:45] trying to compile openvcp, and all i get is this: 1210632739 M * Medivh [00:19:46] In file included from src/get.c:40: 1210632739 M * Medivh [00:19:52] /usr/include/vserver.h:795: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'vc_get_task_tag' 1210632739 M * kriebel also, being a Debian user, I generally follow a rule of thumb that if you need to compile it, you don't need to be running it 1210632755 M * edlinuxguru You are also going to have to force-install an IPTABLES RPM, because openvcp needs to compile against it. 1210632758 M * Medivh i've been searching my a** off... can't figure it out this time :( 1210632792 M * kriebel what's on line 795 of /usr/include/vserver.h ? 1210632828 M * Medivh tag_t vc_get_task_tag(pid_t pid); 1210632895 M * kriebel is that a prototype?
1210632896 Q * Linus Remote host closed the connection 1210632907 M * Medivh uh, i'm not really sure, i'm not exactly good when it comes to C 1210632916 M * kriebel I used to be good 1210632921 M * kriebel sorry 1210632949 M * kriebel is openvcp one of those control programs? 1210632973 M * kriebel I looked at them, and when one wanted mysql, I balked and didn't look back 1210632989 M * kriebel marcello: let's assume your backups are just files in /backups 1210632990 M * Medivh yeah, exactly... like you can set up a web panel on machine X and it can control vserver startups/configuration on machines Y and Z... works like a charm usually, but i just can't get it compiled now, strangely 1210633011 M * Medivh worked fine on some older machines so i figure it prolly must be a syntax problem with the gcc version the new machine has, or so 1210633027 J * blathijs ~matthijs@katherina.student.ipv6.utwente.nl 1210633049 M * marcello kriebel: ok! 1210633050 M * Medivh hrm, trying to compile with compat-gcc comes to mind 1210633133 M * kriebel VSERVER=vservername; vserver $VSERVER stop; mv /home/vservers/$VSERVER /home/vservers/$VSERVER-broken; cp -a /backups/home/vservers/$VSERVER /home/vservers/; vserver $VSERVER start 1210633150 M * kriebel where you should replace vservername with the real name 1210633163 M * kriebel also, I didn't run that, so use caution 1210633187 A * kriebel goes to play Dungeons and Dragons (no joke) 1210633193 M * kriebel good luck, guys 1210633372 J * Linus ~nuhx@bl7-130-23.dsl.telepac.pt 1210633392 M * marcello kriebel: thanks! 1210633481 Q * edlinuxguru Ping timeout: 480 seconds 1210633931 M * ard daniel_hozac_ : Linux version 2.6.22.16-vs2.2.0.6-d64-core2 (root@etchdev64) (gcc version 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)) #1 SMP Wed Jan 30 12:19:13 CET 2008 1210634270 Q * balbir Ping timeout: 480 seconds 1210634773 Q * cryptronic Quit: Leaving.
1210634928 J * balbir ~balbir@122.167.180.77 1210634972 J * edlinuxguru ~edlinuxgu@68.sub-97-2-161.myvzw.com 1210635702 Q * marcello Quit: ChatZilla 0.9.82 [Firefox 3.0b5/2008050509] 1210635961 Q * Linus Remote host closed the connection 1210636173 J * Linus ~nuhx@bl7-130-23.dsl.telepac.pt