1177807311 Q * mountie Ping timeout: 480 seconds
1177807480 J * Nam ~nam@70.71.224.66
1177808947 Q * onox Quit: zZzZ
1177813083 J * FireEgl ~FireEgl@adsl-226-44-249.bhm.bellsouth.net
1177813998 Q * ensc Killed (NickServ (GHOST command used by ensc_))
1177814008 J * ensc ~irc-ensc@p54b4f89a.dip.t-dialin.net
1177814768 Q * Nam Read error: Connection reset by peer
1177814810 J * Nam ~nam@70.71.224.66
1177814837 J * DoberMann_ ~james@AToulouse-156-1-115-167.w90-30.abo.wanadoo.fr
1177814941 Q * DoberMann[Flim] Ping timeout: 480 seconds
1177815491 Q * Nam Ping timeout: 480 seconds
1177815647 Q * Adrinael Read error: Operation timed out
1177816127 J * Nam ~nam@70.71.224.66
1177817626 J * badari1 ~badari@bi01pt1.ct.us.ibm.com
1177817833 J * badari2 ~badari@bi01p1.co.us.ibm.com
1177817850 Q * badari Read error: Operation timed out
1177817871 Q * Nam Ping timeout: 480 seconds
1177818231 Q * badari1 Ping timeout: 480 seconds
1177818288 J * badari1 ~badari@bi01pt1.ct.us.ibm.com
1177818633 J * badari ~badari@bi01p1.co.us.ibm.com
1177818681 Q * badari2 Ping timeout: 480 seconds
1177819046 Q * badari1 Ping timeout: 480 seconds
1177820604 J * m0o ~m0o@60.53.27.224
1177820957 P * m0o
1177821502 J * infowolfe_ ~infowolfe@c-67-164-195-129.hsd1.ut.comcast.net
1177821584 Q * tzafrir_laptop Ping timeout: 480 seconds
1177821891 Q * infowolfe Read error: Operation timed out
1177822004 J * badari1 ~badari@bi01pt1.ct.us.ibm.com
1177822182 J * Nam ~nam@S0106001195551ff0.va.shawcable.net
1177822384 Q * badari Ping timeout: 480 seconds
1177824351 J * dna ~naucki@238-239-dsl.kielnet.net
1177825109 Q * DoberMann_ Ping timeout: 480 seconds
1177826138 Q * dna Quit: Verlassend
1177827737 J * badari2 ~badari@bi01p1.co.us.ibm.com
1177828156 Q * badari1 Ping timeout: 480 seconds
1177830101 J * derjohn2 ~aj@e180207210.adsl.alicedsl.de
1177831609 J * TradeHacker ~1@221.237.132.96
1177831633 P * TradeHacker
1177833134 Q * sladen Ping timeout: 480 seconds
1177833195 J * sladen paul@starsky.19inch.net
1177833520 J * DoberMann[Flim] ~james@AToulouse-156-1-18-183.w86-196.abo.wanadoo.fr
1177834221 Q * FireEgl Ping timeout: 480 seconds
1177835679 J * defri ~defri@202.3.217.122
1177836134 Q * sladen Ping timeout: 480 seconds
1177836369 P * defri Leaving
1177836388 J * sladen paul@starsky.19inch.net
1177837770 J * meandtheshel1 ~markus@85-124-174-45.dynamic.xdsl-line.inode.at
1177838461 J * Adrinael adrinael@rid7.kyla.fi
1177839885 J * bonbons ~bonbons@83.222.38.145
1177840089 J * Blissex ~Blissex@82-69-39-138.dsl.in-addr.zen.co.uk
1177840233 J * daniel_hozac ~daniel@c-2f1472d5.08-230-73746f22.cust.bredbandsbolaget.se
1177841138 M * mattzerah good evening daniel :)
1177842075 J * ema ~ema@rtfm.galliera.it
1177842258 N * Bertl_zZ Bertl
1177842269 M * Bertl morning folks!
1177842322 M * tanjix hi Bertl!
1177842807 M * Bertl hey!
1177842823 M * tanjix let me know when you have time for the tests we want to do :)
1177842872 M * Bertl I'm experiencing some network issues atm ... i.e. my connection is very flakey ... have to investigate that first
1177842939 M * tanjix no hurry
1177843016 M * Bertl you can start with installing (or making sure that it is installed) iostat and vmstat
1177843111 Q * Blissex Remote host closed the connection
1177843330 M * tanjix hmm where do i get them or see if already installed?
1177843373 M * Bertl that depends on your distro ..
1177843387 M * tanjix debian
1177843391 M * tanjix vmstat is there
1177843403 M * tanjix iostat seems to be included in the sysstat package
1177843427 M * tanjix both installed now
1177843653 Q * Aiken Quit: Leaving
1177843658 N * DoberMann[Flim] DoberMann
1177843915 J * tuxmania ~bonbons@83.222.38.145
1177843915 Q * bonbons Read error: Connection reset by peer
1177843961 Q * tuxmania
1177843975 J * bonbons ~bonbons@83.222.38.145
1177844047 J * tuxmania ~bonbons@83.222.38.145
1177844047 Q * bonbons Read error: Connection reset by peer
1177844221 J * bonbons ~bonbons@83.222.38.145
1177844221 Q * tuxmania Read error: Connection reset by peer
1177844421 J * tuxmania ~bonbons@83.222.38.145
1177844421 Q * bonbons Read error: Connection reset by peer
1177844436 Q * derjohn2 Ping timeout: 480 seconds
1177844637 J * derjohn2 ~aj@e180207210.adsl.alicedsl.de
1177844658 M * Bertl tanjix: then let's do a vmstat and iostat on a heavily loaded host and upload that somewhere
1177844681 M * Bertl tanjix: btw, what kernel/patches do you use on those systems?
1177844704 M * tanjix http://paste.linux-vserver.org/1603
1177844715 M * tanjix vps024:~# uname -a
1177844715 M * tanjix Linux vps024.vserver4free.de 2.6.19.7-vs2.2.0 #1 SMP Sun Apr 22 20:05:39 CEST 2007 i686 GNU/Linux
1177844715 M * tanjix vps024:~#
1177844737 M * tanjix vps024 currently has 230+ on load
1177844798 M * Bertl excellent, so let's get the stats and the 'ps auxwww' from the host and one of the guests
1177844836 M * tanjix stats see 1603 from pastebin
1177844839 M * tanjix ps aux follows
1177844882 M * tanjix http://paste.linux-vserver.org/1604
1177844929 M * tanjix load is now with 48,53
1177844952 M * tanjix raised up to 71 now
1177844956 M * tanjix very weird :)
1177844978 M * Bertl looks like performance issues with the I/O subsystem
1177844991 M * Bertl you see the hight iowait?
1177844994 M * Bertl *high
1177845018 M * tanjix 23%
1177845107 M * Bertl what chipset is used for the (SATA?) disks?
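The iostat numbers being collected here can be screened mechanically: the %util column of `iostat -x` says how busy each device is, and values near 100% mean the disk is saturated. A minimal sketch; the sample output below is hypothetical, and on a real host you would pipe `iostat -x sda sdb 10` into the same awk filter.

```shell
# Flag any device whose %util (the last column of `iostat -x`) exceeds 90%.
# The sample lines are hypothetical iostat output, for illustration only.
sample='Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.10 12.40 5.20 48.70 120.00 980.00 20.40 8.90 164.00 18.50 99.70
sdb 0.00 0.20 0.10 0.40 2.00 6.00 16.00 0.00 1.20 0.80 0.04'

# Skip the header, compare the last field numerically, report busy disks.
echo "$sample" | awk 'NR > 1 && $NF + 0 > 90 { print $1, "looks saturated:", $NF "% util" }'
```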
1177845134 M * waldi this just means that no other process is runnable at the same time
1177845164 M * tanjix Bertl: can iget that info form lspci? the sata things use the ahci driver
1177845165 Q * tuxmania Read error: Connection reset by peer
1177845169 M * Bertl waldi: with 30+ guests which are doing something, this points to I/O issues :)
1177845186 J * tuxmania ~bonbons@83.222.38.145
1177845196 M * Bertl tanjix: yep, lspci output would be fine
1177845213 M * waldi tanjix: please to iostat -x sda sdb 10
1177845216 M * waldi s/to/do/
1177845226 M * tanjix http://paste.linux-vserver.org/1605
1177845295 J * FireEgl ~FireEgl@adsl-226-44-249.bhm.bellsouth.net
1177845309 M * tanjix http://paste.linux-vserver.org/1606
1177845338 M * Bertl okay, let's add the dmesg output too
1177845341 M * waldi you just hit the maximum seek rate of the disk
1177845435 M * tanjix http://paste.linux-vserver.org/1607
1177845468 M * Bertl tanjix: how is the overall guest performance btw? is the host responsive or sluggish?
1177845503 M * tanjix ssh response some times takes a up to 20 seconds
1177845535 M * tanjix but most time after 1 or 2 second i get the login prompt - working in ssh shows no lags etc
1177845567 Q * weasel Quit: leaving
1177845594 M * derjohn Hi all, is this a known problem with bonbons v6 patch and 2.6.20.x ? root@derjohn:~# modprobe ipv6 -> FATAL: Error inserting ipv6 (/lib/modules/2.6.20.7-vs2.2.0-p3-squash-drbd-256ip/kernel/net/ipv6/ipv6.ko): Cannot allocate memory
1177845623 M * Bertl inside a guest?
1177845668 M * Bertl derjohn: and what does -p3 stand for?
1177845702 A * waldi .o0( kaputt-gepatcht ... )
1177845714 M * derjohn hello Bertl, this is on a host. "p3" stand for i386 $arch and "Pentium iii" optimitzed code.
1177845754 M * derjohn I might add that I changes the max number of ip addresses per guest to 256, I also changed that for the # of v6 ips. ... maybe thats the prob (just comes to my mind)
1177845762 M * Bertl so that is a final 2.2.0 release with squashfs, drbd and 256ips?
1177845786 M * waldi hmm, 256*16 + 256*4?
1177845788 M * Bertl yep, I assume you are running out of per cpu space here
1177845790 M * derjohn Bertl, yes and teh latest ipv6 patch ...
1177845807 M * Bertl but that is a wild guess, actually :)
1177845808 M * derjohn *the *g*
1177845813 M * waldi its more than on page
1177845841 M * waldi Bertl: why per-cpu?
1177845856 M * derjohn well, I am actually not *using* that much ips, but will the module allcocate memory in advance ?
1177845882 M * Bertl derjohn: the context will allocate _all_ the memory for 256 ips for _each_ guest
1177845884 M * derjohn well, I am on a single cpu, single core (AMD clawhammer)
1177845917 M * derjohn currently there is no guest running ... i was just preparing the host
1177845933 M * Bertl waldi: ipv6 has quite some amount of per-cpu space, and the out of memory sounds familiar
1177845937 M * derjohn And the wild assumption is, that it crosses a "does fit in a 4k page" limit ?
1177845974 M * derjohn well, I'l figure put my lowering the # of v6 ips ... how much does one ip use up?
1177845985 M * waldi 16+4?
1177845985 M * derjohn 16 byte ?
1177845992 M * waldi hmm, no
1177845996 M * waldi 16
1177846033 M * [PUPPETS]Gonzo derjohn: you don't have ipv6 anyways :)
1177846045 A * waldi needs to boot into the updated kernel
1177846056 M * derjohn [PUPPETS]Gonzo, nice to m33t you here .... no, not yet ... but maybe soon ;)
1177846088 J * weasel weasel@asteria.debian.or.at
1177846101 M * Bertl wb weasel!
1177846134 M * weasel hi
1177846245 J * bonbons ~bonbons@83.222.38.145
1177846245 Q * tuxmania Read error: Connection reset by peer
1177846566 J * tuxmania ~bonbons@83.222.38.145
1177846566 Q * bonbons Read error: Connection reset by peer
1177846624 M * Bertl tanjix: do you require the guests to be on separate partitions?
1177846654 M * tanjix Bertl: yes, to limit each guests disk space we use lvm
1177846654 M * Bertl i.e. are you using user/group quota or some kind of guest migration?
1177846678 M * Bertl tanjix: for the disk limit (as in disk space per guest) you do not need to separate them out
1177846729 M * tanjix Bertl: what to do then?
1177846731 M * Bertl also, it seems you have two identical disks in the system, do you use it as raid?
1177846740 M * tanjix no, they do not run as raid
1177846752 M * tanjix hda = host system - hdb = vservers on it
1177846788 M * Bertl the disks are 250GB of size
1177846799 M * tanjix yes
1177846805 M * Bertl the host system will probably be 3GB or so?
1177846827 M * tanjix 1,6GB used
1177846847 M * Bertl okay, so you would benefit from using them either as
1177846864 M * Bertl - raid 1, and gaining performance and data safety
1177846883 M * Bertl - raid 0/striping and gaining even more performance
1177846917 M * Bertl even putting half of the guests on the 'other' disk would probably improve overall performance
1177846936 M * Bertl (on a separate partition from the host system of course)
1177846951 M * arachnist since when raid1 means gaining performance? maybe in reads, but not writes
1177846988 M * Bertl arachnist: yes, for the reads, the writes will stay the same
1177847031 M * Bertl arachnist: but seriously, in hosting scenarios, what will be more, reads or writes :)
1177847046 M * arachnist writes, ofcourse ;)
1177847089 M * tanjix Bertl: if we give up the lvm thing how to limit the guest's usable disk contingent then?
1177847112 M * Bertl arachnist: actually you are right in tanjix case ...
1177847126 A * Bertl wonders why that is the case here ...
1177847165 M * Bertl tanjix: are there many logs or so running inside the guests?
1177847185 M * tanjix Bertl: sorry? didn't get you?
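A sketch of the raid 1 option suggested above. The mdadm command is shown only as a comment (it is destructive), and the /proc/mdstat excerpt below is a hypothetical example of what a healthy two-disk mirror would look like; the device names are illustrative.

```shell
# Build a raid 1 mirror from the two disks (destructive; sketch only):
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2
# A healthy mirror shows [UU] in /proc/mdstat; this excerpt is hypothetical:
mdstat='md0 : active raid1 hdb2[1] hda2[0]
      242187500 blocks [2/2] [UU]'

# [UU] means both members are up; [_U] or [U_] would mean a degraded mirror.
echo "$mdstat" | grep -q '\[UU\]' && echo 'raid1 healthy: both mirrors up'
```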
1177847203 M * Bertl it looks like your system is writing almost twice as much as reading
1177847217 M * Bertl so my first guess would be 'log files'
1177847264 M * tanjix each guest writes log for all services to /var/log insside each guest
1177847311 M * Bertl hmm ... and this is ext3, right?
1177847337 M * tanjix yes
1177847388 M * Bertl can you upload 'cat /proc/meminfo' too?
1177847436 M * tanjix sure.. mom
1177847438 A * derjohn hints about activating -O dir_index, filetype on ext3 .... finds over large dirs (amavis etc.) get a incredible speedup !
1177847468 M * tanjix http://paste.linux-vserver.org/1608
1177847498 M * Bertl why do we have highmem on that system?
1177847520 M * tanjix it has 4 gb ram
1177847532 M * Bertl and no 64bit capable cpu?
1177847540 M * tanjix it is
1177847571 M * tanjix this is what we talked yesteday that daniel_hozac meant an upgrade toa md64 would solve the problem
1177847589 M * Bertl well, it will improve the overall behaviour
1177847596 M * waldi over 3GiB active?
1177847617 M * tanjix 3,7 used right now
1177847657 M * tanjix anyways this is one of two servers with 4 gb ram and highmem active
1177847665 M * Bertl you also should consider using both disks for guests and/or using unification on them, I assume most guests are using the same distro
1177847669 M * tanjix other servers have "only" 2 gb ram and do not have better loads :)
1177847742 M * Bertl and you see better 'load values' when you boot with an older kernel on the same machine, yes?
1177847752 M * waldi loaded disk and too less memory
1177847772 M * tanjix i did not try an older kernel
1177847780 J * mountie ~mountie@trb229.travel-net.com
1177847880 J * bonbons ~bonbons@83.222.38.145
1177847880 Q * tuxmania Read error: Connection reset by peer
1177847916 M * Bertl tanjix: okay, but you said, you observed a difference, yes?
1177847934 Q * derjohn2 Ping timeout: 480 seconds
1177847958 M * tanjix yes the difference between the old servers which were active before we moved the guests to new hardware
1177847974 M * tanjix the old servers were old athon cpus and max 1 gb ram and 2.4 kernel
1177847979 M * tanjix had 20 guests on it
1177847986 M * tanjix and were working fine
1177847995 M * tanjix without high loads
1177848082 J * tuxmania ~bonbons@83.222.38.145
1177848082 Q * bonbons Read error: Connection reset by peer
1177848124 M * Bertl tanjix: ah, okay, that probably explains
1177848143 M * Bertl the 2.6 kernels are accounting load and iowait differently
1177848242 M * tanjix that means?
1177848254 M * Bertl probably nothing changed ...
1177848256 M * trippeh_ Load is calculated differently.
1177848285 M * trippeh_ Not much more to it
1177848289 M * tanjix but is it normal the load raises to 300 or more ?
1177848345 M * trippeh_ Not really. Are the disks in PIO mode or something? :P
1177848345 Q * tuxmania Read error: Connection reset by peer
1177848384 M * trippeh_ I've seen loads wrap at 1024 many times over without capacity problems though :P
1177848390 J * bonbons ~bonbons@83.222.38.145
1177848404 M * tanjix trippeh_: how to check ?
1177848458 M * trippeh_ hdparm /dev/disk usually shows it, if its PATA or SATA-hiding-as-PATA
1177848517 Q * ema Quit: leaving
1177848522 M * trippeh_ SCSI and "pure" SATA is generally not affected by disks dropping to PIO
1177848538 M * tanjix vps024:~# hdparm /dev/sda
1177848539 M * tanjix IO_support = 0 (default 16-bit)
1177848539 M * tanjix readonly = 0 (off)
1177848539 M * tanjix readahead = 256 (on)
1177848541 M * tanjix geometry = 30401/255/63, sectors = 488397168, start = 0
1177848541 M * tanjix vps024:~#
1177848571 M * Bertl not really valid for SATA (yet)
1177848707 Q * bonbons Remote host closed the connection
1177848712 M * tanjix i think i will try to update this machine to amd64 arch and see how the behaviour is, what do you think?
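On the "disks dropping to PIO" question: for PATA drives the legacy IDE driver logs a message when it downgrades a drive after DMA errors, so the kernel log is the quickest check. A sketch, assuming that message format; the dmesg excerpt below is a hypothetical example, and the live-system commands are shown as comments.

```shell
# On a live host: dmesg | grep -i 'DMA disabled'   (PATA / legacy IDE driver)
# or inspect the supported/active modes: hdparm -i /dev/hda
# Hypothetical dmesg excerpt showing a drive being downgraded to PIO:
kmsg='hda: dma_timer_expiry: dma status == 0x21
hda: DMA timeout error
hda: DMA disabled'

echo "$kmsg" | grep -q 'DMA disabled' && echo 'drive dropped to PIO mode'
```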
1177848828 J * dothebart ~willi@xdsl-81-173-172-105.netcologne.de
1177848828 Q * tudenbart Read error: Connection reset by peer
1177848850 M * Bertl tanjix: yes, and move half the guests to a partition on sda in the process
1177848868 M * Bertl tanjix: that will reduce the I/O overhead by a factor of 2
1177848888 M * Bertl s/partition/partitions/
1177848898 M * tanjix hmm ok, but bad if the first disk breaks down
1177848903 M * tanjix then hald of the vservers are gone
1177848910 M * tanjix half
1177849033 M * arachnist well, if you have everything on one disk, if one disk fails, everything goes down
1177849099 M * tanjix yes, but due to our experience the system disk breaks down more often than the second one
1177849465 M * Bertl lol
1177849494 J * bonbons ~bonbons@83.222.38.145
1177849575 J * phreak``_ ~phreak``@p548BD849.dip.t-dialin.net
1177849589 Q * phreak`` Killed (NickServ (GHOST command used by phreak``_))
1177849597 Q * phreak``_
1177849622 J * phreak`` ~phreak``@p548bd849.dip.t-dialin.net
1177849774 Q * Hollow Ping timeout: 480 seconds
1177850086 M * yang hello Bertl
1177850086 Q * bonbons Read error: Connection reset by peer
1177850104 J * bonbons ~bonbons@83.222.38.145
1177850112 M * Bertl hey yang!
1177850129 M * yang Bertl: How does your trip go ? Any floods in the area where you are?
1177850170 M * Bertl vacation is already over, no big floods (only tiny ones :)
1177850190 M * Bertl (i.e. I'm back home now, 20°C, sun :)
1177850226 M * yang I heard on the news, there were some floods on East coast
1177850276 M * Bertl yes, they had quite a weather there ...
1177850358 J * tuxmania ~bonbons@83.222.38.145
1177850358 Q * bonbons Read error: Connection reset by peer
1177850416 M * yang Bertl: will you publish any pics somewhere :)
1177850452 M * Bertl maybe ...
1177850651 M * yang Bertl: If I ever plan to visit the NY area, I will ask you for directions :)
1177850683 M * Bertl hehe, if you like adventures, go ahead .. be my guest :)
1177850687 J * bavi foobear@193.30.161.200
1177850690 M * bavi LooooooooooooooooL
1177850695 A * bavi slaps Bertl around a bit with a large trout
1177850697 A * bavi slaps Bertl around a bit with a large trout
1177850698 A * bavi slaps Bertl around a bit with a large trout
1177850703 M * bavi plz help breti
1177850718 M * Bertl hmm, which irc client was that with the 'slap button'?
1177850718 Q * tuxmania Read error: Connection reset by peer
1177850724 M * yang mIRC
1177850733 J * tuxmania ~bonbons@83.222.38.145
1177850734 M * bavi how do i make an interface which is global and shared between all vserver
1177850736 M * bavi MiRC
1177850739 M * Bertl that one is used by windows folks, right?
1177850745 M * bavi YAZ BERTI!!!
1177850746 M * yang yep
1177850763 M * bavi bertl i <3 u
1177850768 M * Bertl bavi: you don't make interfaces :)
1177850774 M * bavi ur the cutest guy i evah seen on the net
1177850776 M * bavi no way
1177850778 M * bavi beti...
1177850781 M * bavi berti*
1177850792 M * bavi but i need it shared, aka accessible
1177850797 M * bavi cant i ?
1177850806 M * bavi play with the interface scope or something ?
1177850806 M * yang Bertl: Do you know if there is any available "cpanel" for monitoring the guests ?
1177850808 M * Bertl and it looks like mirc is using an unreadable font and no nick completion too :)
1177850838 M * bavi Bertl: LOOOOL i am voting u #1 os developer of the year
1177850850 M * Bertl bavi: if you assign ips to an interface, and the same ips to the guests, they will 'share' the interface
1177850865 M * bavi Bertl: I told you , You are my king
1177850866 M * yang Bertl: users always ask me about these "cpanels" which some providers have...But I think this "webmin" is full of holes
1177850903 M * Bertl IIRC, quite a number of folks are writing on various control panels for guests and host
1177850907 M * brcc_ good morning bertl!
1177850926 M * Bertl unfortunately most panel solutions are either commercial or unfinished
1177850945 M * yang Bertl: ah, so you cannot really recomment any to work with linux-vserver ?
1177851023 M * Bertl has linux-vserver.org issues atm?
1177851023 Q * tuxmania Read error: Connection reset by peer
1177851037 M * brcc_ Bertl lylix told me he ported virtuatables
1177851041 J * bonbons ~bonbons@83.222.38.145
1177851057 M * brcc_ i was doing some changes on thursdat and friday and making it possible to work with iptables-restore and iptables-save
1177851063 M * brcc_ to you think it is worth to keep working on it?
1177851069 M * Bertl yang: IIRC, we have some links to folks working on something on the webpage
1177851101 M * Bertl brcc_: virtuatables being the shell wrapper transmitting the stuff to the host, yes?
1177851106 M * yang Bertl: yep, the website seems to be down
1177851109 M * bavi bertl: u are aware of the fact i am using vservers on a production environment for over 2 years
1177851118 M * brcc_ Bertl: Virtuatables is the daemon
1177851124 M * Bertl bavi: I am now :)
1177851133 M * bavi Bertl: well what about ur shell
1177851142 M * bavi Bertl: I gave u a shell back then
1177851154 M * bavi Bertl: do u use it or i'll delete it !?!?
1177851175 M * Bertl brcc_: okay, I think this is going to be replaced by a kernel interface in the future ...
1177851197 M * Bertl bavi: as I do not remember it, you can safely delete it :)
1177851216 M * bavi Bertl: E_OK FRIENDRICK
1177851225 M * bavi D:d;d:D:D:D:
1177851282 M * brcc_ Bertl: How long? I think i will finish what i am doing so we can have something usefull in 1 or 2 weeks
1177851299 M * brcc_ It already works but was missing iptables-save and iptables-restore
1177851322 M * Bertl brcc_: you should contact 'er' (sapan)
1177851361 M * Bertl brcc_: if I understood him correctly, he is interested in working on an userspace policy daemon connected to the kernel
1177851361 Q * bonbons Read error: Connection reset by peer
1177851363 J * bonbons ~bonbons@83.222.38.145
1177851380 Q * sladen Ping timeout: 480 seconds
1177851388 M * Bertl brcc_: i.e. it would be worth combining forces on that
1177851398 M * brcc_ Great
1177851410 M * brcc_ Another important point is quota support. how is that going? Do you need help ?
1177851416 M * brcc_ I've been away for a long time due to much work
1177851423 M * brcc_ Quota support is something really important
1177851435 M * Bertl well, it got put on hold (for shared partitions)
1177851450 M * Bertl we can dig it out anytime enough interest is present
1177851485 M * brcc_ I am just finishing iptables and after that i am able to do the tests for you
1177851498 M * brcc_ My main interest is to have quota+iptables and get cpanel and other commercial panels working
1177851514 M * brcc_ i've read on the web there are lot of people interested on getting those commercials panels fully working under linux-vserver
1177851534 M * brcc_ So as soon as i finish the iptables stuff i will tell you and we can go on with quota
1177851540 M * Bertl well, it would be worth getting those peoples here
1177851542 M * brcc_ I hope that will be done in < 1 week
1177851590 M * brcc_ sorry, i wrote it wrong. They want to have it working but not to work on it
1177851590 M * brcc_ hehe
1177851598 J * sladen paul@starsky.19inch.net
1177851619 M * Bertl I got that, but it would help to know that there actually _are_ folks who _really_ want that
1177851637 M * Bertl i.e. those folks would probably be available for testing and such
1177851670 M * brcc_ i am going to search the web again and mail them
1177851678 M * Bertl that'd be great!
1177851694 M * brcc_ I see lof of people using openvz, what are the main differences between linux-vserver and openvz ?
1177851714 M * brcc_ II tried a n openvz vps and it was possible to run vpns, firewall and change routes
1177851849 M * Bertl the main difference is that openvz is using virtualization in places where Linux-VServer uses isolation
1177851878 M * Bertl this looks more like a real system but has significant overhead
1177851910 M * brcc_ so linux-vserver is faster? Great.
1177851950 M * Bertl it is more resource efficient, which means it will be easier to get a higher number of guests working with less overhead
1177851978 M * brcc_ got it
1177852022 M * Bertl but in the near future, you will get the option to do layer 2 virtualization (private network stack for the guests) in Linux-VServer too, as it is going to become a mainline feature
1177852085 M * Bertl so you can then decide whether you want to have the fast but limited L3 isolation or the slower but more 'complete' L2 virtualization on a per guest basis
1177852188 M * brcc_ so the user will be able to choose if he wants hie linux-vserver to work as openvz?
1177852230 M * brcc_ In my opinion linux-vserver just looses to openvz when it comes to firewall/quota/routing/vpn. Bsides that, i dont see any other point
1177852230 Q * bonbons Read error: Connection reset by peer
1177852249 J * bonbons ~bonbons@83.222.38.145
1177852309 M * Bertl brcc_: well, the mainline L2 virtualization will be more performant than the OVZ implementation, and hopefully better written (from the coding PoV)
1177852337 M * Bertl brcc_: besides that, the L2 virtualization is not something OVZ came up with :)
1177852368 M * Bertl brcc_: FreeVPS (an early Linux-VServer spinoff) has that since 3 years now
1177852399 M * Bertl (including all the issues, like stability and network overehad)
1177852715 M * brcc_ couldnt you get it from freebps?
1177852717 M * brcc_ freevps
1177852728 M * brcc_ ahh those adds lot of overhead
1177852796 M * Bertl yep, I do not want to add that, but once it will be available via mainline, we will (of course) support it :)
1177852879 J * tuxmania ~bonbons@83.222.38.145
1177852879 Q * bonbons Read error: Connection reset by peer
1177852881 M * brcc_ do you have any idea about when it will be supported by mainline ?
1177852919 M * Bertl the first patches are already there, they have their issues, but basically they are working
1177852944 M * Bertl so it should be in mainline in half a year or maybe a year
1177852998 M * brcc_ great!
1177853011 M * brcc_ just wanted to make sure i should finish virtuatables in a week :)
1177853012 M * brcc_ hehehe
1177853031 M * brcc_ gotta go.. i will finish stuff this week and then be read to quota tests. i will let you know
1177853032 M * brcc_ thanks bertl
1177853033 M * brcc_ cya
1177853038 M * brcc_ reasdy
1177853039 M * brcc_ ready
1177853040 M * brcc_ cya
1177853068 Q * FireEgl Quit: BBL.
1177853102 M * Bertl brcc_: cya!
1177853323 Q * phreak`` Quit: Lost terminal
1177853398 J * phreak`` ~phreak``@p548bd849.dip.t-dialin.net
1177853673 M * meandtheshel1 Bertl: you mean the patch set of Dmitry Mishin?
1177853728 M * sid3windr I thought from ebiederman
1177853731 M * sid3windr *unsure*
1177853734 M * daniel_hozac they have one each.
1177853734 M * meandtheshel1 regarding the L2 virtualization
1177853739 M * meandtheshel1 I see
1177853748 M * daniel_hozac ebiederm's was more efficient, IIRC.
1177853757 M * daniel_hozac hey Bertl, welcome back, btw ;)
1177854076 M * Bertl daniel_hozac: tx!
1177854125 M * Bertl daniel_hozac: seems we have issues with the wiki recently, do you know what the problems are?
1177854178 M * Bertl meandtheshel1: I was referring to the ones from eric, which we actually tested with Linux-VServer already ...
1177854260 M * meandtheshel1 Bertl: I see - also, during the last 10min read http://lwn.net/Articles/219794/ which proves what daniel said
1177854260 Q * tuxmania Read error: Connection reset by peer
1177854275 J * tuxmania ~bonbons@83.222.38.145
1177854409 M * meandtheshel1 by the way - eric is around here (at #vserver) - if so what's his nick?
1177854509 M * Bertl if he is around, he goes by ebiederman
1177854537 M * meandtheshel1 I see
1177855336 J * bonbons ~bonbons@83.222.38.145
1177855336 Q * tuxmania Read error: Connection reset by peer
1177855424 M * sid3windr hmm
1177855431 M * sid3windr does l2 virtualization mean ipv6 is automatically there?
1177855534 J * tuxmania ~bonbons@83.222.38.145
1177855534 Q * bonbons Read error: Connection reset by peer
1177855572 M * Bertl sid3windr: yep, but ipv6 is already there for Linux-Vserver too :)
1177855578 M * sid3windr myeah
1177855585 M * sid3windr but it's not in the official tree, is it?
1177855595 M * sid3windr I thought of it when I saw bonbons join ;>
1177855598 M * Bertl it will be in 2.3 shortly
1177855670 M * tuxmania sid3windr: international connection is really unstable here at the moment... don't know what my ISP is doing
1177855699 M * sid3windr hmm, I didn't know you were .lu ;)
1177855709 M * sid3windr ircd in zagreb is a bit obscure though :p
1177855848 M * tuxmania think I should switch over to the other connection, will possibly be more stable. Guess it might be related to planned bandwidth upgrade
1177855871 M * sid3windr well you seem fine now though :)
1177856038 J * Hollow ~hollow@styx.xnull.de
1177856046 M * tuxmania I'm not that optimistic, it's failing every 15 minutes or so
1177856097 M * tuxmania Bertl: did you already take a look at porting patch to 2.6.21?
1177856164 M * tuxmania but think will have to wait till 2.6.21.y... 2.6.21.1 is not that stable yet (at least in regard to ACPI)
1177856339 Q * phreak`` Quit: leaving
1177856367 J * phreak`` ~phreak``@deimos.barfoo.org
1177856446 Q * phreak``
1177856491 M * bavi Bertl: I Love U
1177856492 M * bavi OK !?!?!
1177856501 A * bavi Kisses Bertl
1177856504 M * Bertl tuxmania: nope, I was on vacation till yesterday ...
1177856513 M * Bertl bavi: yeah okay :)
1177856534 J * phreak`` ~phreak``@deimos.barfoo.org
1177856570 Q * phreak``
1177856598 J * phreak`` ~phreak``@deimos.barfoo.org
1177856607 M * sid3windr tuxmania: see, is still working :p
1177856709 M * tuxmania sid3windr: irc yes, but MSN just lost connection
1177856850 J * bonbons ~bonbons@83.222.38.145
1177856850 Q * tuxmania Read error: Connection reset by peer
1177856879 M * bonbons sid3windr: see, connection lost once again
1177856931 M * sid3windr :/
1177856933 M * sid3windr sucks
1177857010 M * bonbons exactly
1177857679 J * tuxmania ~bonbons@83.222.38.145
1177857679 Q * bonbons Read error: Connection reset by peer
1177858110 J * bonbons ~bonbons@83.222.38.145
1177858110 Q * tuxmania Read error: Connection reset by peer
1177858310 J * tuxmania ~bonbons@83.222.38.145
1177858310 Q * bonbons Read error: Connection reset by peer
1177858449 M * phreak`` daniel_hozac: any idea what happened to Dave Hansen's r/o bind-mount patches ?
1177859097 M * Bertl they probably got ignored, as the original patches :)
1177859247 M * phreak`` well a part got merged (the helper stuff), but not the rest .. *shrug*
1177859247 Q * tuxmania Read error: Connection reset by peer
1177859250 J * bonbons ~bonbons@83.222.38.145
1177859283 M * phreak`` Bertl: so, sir: what about your split BME patches ? :)
1177859372 M * Bertl well, those are the original aptches :)
1177859403 M * Bertl i.e. serge/dave's work is the n-th revision of them .-..
1177859415 M * Bertl okay, dinnertime .. back later ...
1177859824 J * tuxmania ~bonbons@83.222.38.145
1177859824 Q * bonbons Read error: Connection reset by peer
1177860135 J * bonbons ~bonbons@83.222.38.145
1177860135 Q * tuxmania Read error: Connection reset by peer
1177860335 J * tuxmania ~bonbons@83.222.38.145
1177860335 Q * bonbons Read error: Connection reset by peer
1177860511 J * bonbons ~bonbons@83.222.38.145
1177860511 Q * tuxmania Read error: Connection reset by peer
1177861065 Q * bonbons Read error: Connection reset by peer
1177861090 J * bonbons ~bonbons@83.222.38.145
1177861340 J * tuxmania ~bonbons@83.222.38.145
1177861340 Q * bonbons Read error: Connection reset by peer
1177862084 M * Bertl translocating now .. back in a while ...
1177862090 N * Bertl Bertl_oO
1177862276 J * bonbons ~bonbons@83.222.38.145
1177862276 Q * tuxmania Read error: Connection reset by peer
1177862854 J * tuxmania ~bonbons@83.222.38.145
1177862854 Q * bonbons Read error: Connection reset by peer
1177863540 J * bonbons ~bonbons@83.222.38.145
1177863540 Q * tuxmania Read error: Connection reset by peer
1177863672 J * derjohn2 ~aj@e180207210.adsl.alicedsl.de
1177863740 J * tuxmania ~bonbons@83.222.38.145
1177863740 Q * bonbons Read error: Connection reset by peer
1177863930 J * bonbons ~bonbons@83.222.38.145
1177863930 Q * tuxmania Read error: Connection reset by peer
1177863990 Q * bonbons
1177864049 J * bonbons ~bonbons@ppp-111-235.adsl.restena.lu
1177866215 Q * bonbons Quit: Leaving
1177866489 J * bonbons ~bonbons@ppp-111-235.adsl.restena.lu
1177867757 Q * derjohn2 Ping timeout: 480 seconds
1177870588 M * blizz are context ids constant or variable per vserver instance?
1177870612 M * blizz do they change when i restart vservers?
1177870614 M * sid3windr there are dynamic ones but they're deprecated
1177870627 M * sid3windr you just define one in /etc/vservers//context
1177870633 M * blizz ahh, nice
1177870634 M * sid3windr and it's always that one for that vserver
1177870635 M * blizz just found that file :P
1177870639 M * blizz great, thanks
1177870644 M * sid3windr np :)
1177872529 J * FireEgl Proteus@adsl-226-44-249.bhm.bellsouth.net
1177875660 J * shedii ~siggi@ftth-237-144.hive.is
1177876459 J * newz2000 ~matt@12-210-150-228.client.mchsi.com
1177876767 M * newz2000 I'm having troubles enabling quotas inside my vserver... I get the error:
1177876768 M * newz2000 quotacheck: Can't find filesystem to check or filesystem not mounted with quota option.
1177876790 M * newz2000 I've mounted the loop file system outside of the vservers and enabled quotas in fstab.
1177876881 M * newz2000 Do I need to do something inside the vserver to let the quota tools know that quotas are enabled?
1177877400 J * dna ~naucki@153-238-dsl.kielnet.net
1177877429 Q * arachnist Ping timeout: 480 seconds
1177877444 M * daniel_hozac newz2000: have you modified the mtab?
1177877467 M * newz2000 daniel_hozac: yes, but I wasn't sure what to put for the first column, so I use "none"
1177877478 M * newz2000 none /srv / ufs defaults,usrquota,grpquota 0 0
1177877496 M * daniel_hozac well, that should be /dev/hdvX, or whatever you named the vroot device.
1177877503 M * daniel_hozac (inside the guest)
1177877525 M * newz2000 daniel_hozac: I may not be doing this the most sane way, but I thought I'd only enable quotas in /srv
1177877528 M * newz2000 (since the guest)
1177877555 M * daniel_hozac so /srv is a separate filesystem, right?
1177877559 M * newz2000 daniel_hozac: yes
1177877580 M * daniel_hozac and you've setup a vroot device for that filesystem and copied the device node into the guest?
1177877600 M * newz2000 no, I tried but it didn't work, so I just skipped that step. :-)
1177877608 M * daniel_hozac hmm?
1177877620 M * newz2000 sudo vrsetup /dev/vroot/0 /srv/hosts.bearfruit
1177877620 M * newz2000 open("/dev/vroot/0"): No such file or directory
1177877636 M * newz2000 also tried:
1177877639 M * newz2000 sudo vrsetup /dev/vroot0 /srv/hosts.bearfruit ioctl(): Invalid argument
1177877650 M * daniel_hozac /srv/hosts.bearfruit doesn't look like a device node to me.
1177877658 M * newz2000 no, it's a loopback filesystem
1177877666 M * newz2000 I used mount -o loop to mount it
1177877673 M * daniel_hozac you need to point it at the appropriate /dev/loopX device then.
1177877680 M * newz2000 ah, ok.
1177877698 M * newz2000 It's probably loop0, but is there a way to confirm that?
1177877711 M * newz2000 oh, ps ax told me.
1177877818 M * newz2000 daniel_hozac: what is the modern day equiv to /vservers/<guest>/dev/hdvX ?
1177877835 M * newz2000 it seems like stuff has moved to /etc/vservers and I don't really understand the new structure yet
1177877889 M * daniel_hozac /vservers/<guest>/dev/hdvX is still current.
1177877912 M * daniel_hozac /etc/vservers/ is just the guest configuration. /vservers/ contains the actual guest contents.
1177877929 M * newz2000 oh, interesting. On my computer, it's something like /etc/vservers/.default/vdirbase/
1177877990 M * daniel_hozac that's just a symlink to the actual destination, which would normally be /vservers.
1177878001 M * daniel_hozac on Debian and similar, you get /var/lib/vservers though...
1177878041 M * newz2000 ah, I see now. Yes, they are there.
1177878065 M * newz2000 probably some lsb related thing.
1177878152 M * daniel_hozac well, more like FHS.
1177878166 M * daniel_hozac but yes, i think that's why.
1177878222 M * newz2000 ok, I've done the vrsetup and copied the device to /dev/hdv2 on the guest. I've stopped and started the guest and updated mtab.
1177878237 M * newz2000 oh, quota_ctl
1177878400 M * newz2000 Hmm. I can't figure out how to do this either: - Step 2: Add the quota capability to the guest vserver
1177878406 M * newz2000 Add quota_ctl to /etc/vservers/<guest>/ccapabilities
1177878410 M * daniel_hozac yep.
1177878413 M * newz2000 that file doesn't exist, do I need to create it?
1177878439 M * daniel_hozac yep.
1177878710 M * newz2000 daniel_hozac: do I need to mount the filesystem in the host?
1177878719 M * newz2000 or do I need to somehow mount it in the guest?
1177878725 Q * bonbons Quit: Leaving
1177878785 M * daniel_hozac doesn't matter.
1177878835 J * arachnist arachnist@088156189068.who.vectranet.pl
1177879079 M * newz2000 daniel_hozac: ah ha. I think it's working, but when I restart the guest, the line leaves the mtab file... is there a way to make this last?
1177879134 M * daniel_hozac /etc/vservers/<guest>/apps/init/mtab, IIRC. the quota docs should mention it.
1177879161 M * newz2000 oh, it is there.
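[Editor's note: the quota-setup steps newz2000 pieces together across this thread, gathered into one sketch. Device names, the loop device number, the filesystem type, and the <guest> placeholder are illustrative assumptions, not taken from his actual setup:]

```shell
# 1. Find the loop device backing the mounted image
#    (losetup -a lists active loop devices).
losetup -a

# 2. Bind a vroot device to the loop device -- vrsetup wants a
#    device node, not the image file itself.
vrsetup /dev/vroot/0 /dev/loop0

# 3. Copy the device node into the guest, e.g. as /dev/hdv1.
cp -a /dev/vroot/0 /vservers/<guest>/dev/hdv1

# 4. Let the guest manage quotas (create the file if it is missing).
echo quota_ctl >> /etc/vservers/<guest>/ccapabilities

# 5. Make the guest's mtab entry survive restarts by adding it to
#    the mtab template applied at guest startup.
echo '/dev/hdv1 /srv ext3 rw,usrquota,grpquota 0 0' \
    >> /etc/vservers/<guest>/apps/init/mtab
```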
:-( Sorry, I looked right over it.
1177879164 M * newz2000 Thanks for your help
1177879242 M * daniel_hozac you're welcome!
1177879542 Q * dna Quit: Leaving
1177879916 J * tzafrir_laptop ~tzafrir@bzq-88-152-182-238.red.bezeqint.net
1177880657 M * newz2000 OK, I'm trying to chroot bind, and this isn't working, is there a way to enable this: mknod /var/lib/named/dev/null
1177880801 J * Aiken ~james@ppp222-137.lns2.bne1.internode.on.net
1177881231 J * derjohn2 ~aj@e180207210.adsl.alicedsl.de
1177883396 Q * shedii Quit: Leaving
1177883403 Q * shedi Quit: Leaving
1177883407 M * daniel_hozac newz2000: why do you need that?
1177883426 M * daniel_hozac newz2000: setting up the chroot should just be a one-time thing, so you could do it from the host, no?
1177883438 M * newz2000 daniel_hozac: ah, good point
1177883457 M * newz2000 daniel_hozac: have you seen a way to share home directories across vservers?
1177883472 M * daniel_hozac bind mount /home?
1177883496 M * newz2000 ah, that would do it
1177883508 M * newz2000 actually, I think I've done that before now that you mention it
1177884112 J * shedi ~siggi@ftth-237-144.hive.is
1177884888 N * DoberMann DoberMann[ZZZzzz]
1177885085 J * patco ~patco@212.25.63.46
1177885087 M * patco http://4ata.hit.bg/
1177885091 Q * patco
1177887306 Q * derjohn2 Ping timeout: 480 seconds
1177887596 Q * b0c1k4 Quit: Leaving
1177887633 J * lilalinux__ ~plasma@dslb-084-058-215-014.pools.arcor-ip.net
1177888062 Q * lilalinux_ Ping timeout: 480 seconds
1177888311 M * doener daniel_hozac: /away
1177888317 M * doener uhm, oops :)
1177888335 M * doener daniel_hozac: do you happen to know a way to iterate linewise over a bunch of text in a bash variable?
1177888351 M * sid3windr IFS="
1177888351 M * sid3windr "
1177888359 M * sid3windr for line in $HAX; do echo $line; done
1177888504 M * doener thanks
1177888984 M * doener hm, now I just need to actually get those newlines in there...
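[Editor's note: both of daniel_hozac's suggestions above boil down to doing the work from the host, where mknod is not restricted and a bind mount can be made persistent. A sketch under those assumptions; the <guest> placeholder and paths are illustrative:]

```shell
# Create the chroot's /dev/null from the host side
# (char device, major 1, minor 3, world read/write):
mknod -m 666 /vservers/<guest>/var/lib/named/dev/null c 1 3

# Share the host's /home with a guest via a bind mount; a line in
# the guest's config fstab (/etc/vservers/<guest>/fstab) makes the
# mount happen automatically each time the guest starts:
echo '/home /home none bind 0 0' >> /etc/vservers/<guest>/fstab
```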
;)
1177889001 M * doener (substitution eats them)
1177889090 J * toidinamai__ ~frank@i59F72F7A.versanet.de
1177889169 J * fatgoose_ ~samuel@206-248-130-94.dsl.teksavvy.com
1177889270 M * doener ah, got it
1177889529 Q * toidinamai_ Ping timeout: 480 seconds
1177889556 Q * fatgoose Ping timeout: 480 seconds
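[Editor's note: the trick sid3windr and doener converge on, written out as a runnable sketch. Variable names are made up; note that command substitution $(...) strips only *trailing* newlines, which is the snag doener hits -- interior newlines survive. Both $'\n' and <<< are bashisms:]

```shell
#!/bin/bash
# Build a multi-line string; $'\n' embeds real newline characters.
text=$'first line\nsecond line\nthird line'

# Variant 1 (sid3windr's): restrict word splitting to newlines.
oldIFS=$IFS
IFS=$'\n'
for line in $text; do
    printf '[%s]\n' "$line"
done
IFS=$oldIFS

# Variant 2: avoid touching IFS globally by feeding the variable
# to a while-read loop through a here-string.
while IFS= read -r line; do
    printf '(%s)\n' "$line"
done <<< "$text"
```

Variant 2 also preserves leading/trailing whitespace within each line and is immune to pathname expansion, which the unquoted `$text` in variant 1 is not.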