1244332907 Q * scientes_ Ping timeout: 480 seconds
1244333996 J * after ~after@121-73-173-251.dsl.telstraclear.net
1244334932 M * Bertl off to bed now ... have a good one everyone!
1244334939 N * Bertl Bertl_zZ
1244336752 Q * geb Remote host closed the connection
1244336943 Q * after charon.oftc.net synthon.oftc.net
1244336943 Q * MooingLemur charon.oftc.net synthon.oftc.net
1244336943 Q * __gh__ charon.oftc.net synthon.oftc.net
1244336943 Q * FloodServ charon.oftc.net synthon.oftc.net
1244336943 Q * FireEgl charon.oftc.net synthon.oftc.net
1244336943 Q * bragon charon.oftc.net synthon.oftc.net
1244336943 Q * nenolod charon.oftc.net synthon.oftc.net
1244336943 Q * fb_ charon.oftc.net synthon.oftc.net
1244336943 Q * Kamping_Kaiser charon.oftc.net synthon.oftc.net
1244336943 Q * jescheng charon.oftc.net synthon.oftc.net
1244336943 Q * uva_ charon.oftc.net synthon.oftc.net
1244336943 Q * Floops[w]1 charon.oftc.net synthon.oftc.net
1244336943 Q * DreamerC charon.oftc.net synthon.oftc.net
1244336943 Q * infowolfe charon.oftc.net synthon.oftc.net
1244336943 Q * sardyno charon.oftc.net synthon.oftc.net
1244336943 Q * sladen__ charon.oftc.net synthon.oftc.net
1244336943 Q * tam charon.oftc.net synthon.oftc.net
1244336943 Q * micah charon.oftc.net synthon.oftc.net
1244337023 J * tam ~tam@gw.nettam.com
1244337023 J * sladen__ ~paul@jasmine.wyrdweb.com
1244337023 J * sardyno ~me@pool-96-235-18-120.pitbpa.fios.verizon.net
1244337023 J * micah ~micah@micah.riseup.net
1244337172 J * Floops[w]1 ~baihu@205.214.201.176
1244337172 J * DreamerC ~DreamerC@122-116-181-118.HINET-IP.hinet.net
1244337172 J * infowolfe ~infowolfe@c-76-105-242-186.hsd1.or.comcast.net
1244337173 J * MooingLemur ~troy@shells195.pinchaser.com
1244337173 J * __gh__ ~gerrit@c-71-193-204-84.hsd1.or.comcast.net
1244337174 J * FireEgl FireEgl@173-16-9-10.client.mchsi.com
1244337174 J * bragon ~Alexandre@alucard.bragon.info
1244337174 J * nenolod nenolod@petrie.dereferenced.org
1244337174 J * fb_ fback@red.fback.net
1244337174 J * Kamping_Kaiser ~kgoetz@ppp121-45-111-232.lns10.adl6.internode.on.net
1244337175 J * jescheng ~jescheng@proxy-sjc-1.cisco.com
1244337175 J * uva_ bno@118-160-163-253.dynamic.hinet.net
1244337326 J * FloodServ services@services.oftc.net
1244343250 J * kgoetz kgoetz@203.209.167.143
1244344801 Q * jescheng Remote host closed the connection
1244344822 J * jescheng ~jescheng@proxy-sjc-1.cisco.com
1244354272 Q * nenolod Ping timeout: 480 seconds
1244354473 J * nenolod nenolod@petrie.dereferenced.org
1244355422 J * derjohn_foo ~aj@e180193002.adsl.alicedsl.de
1244355570 J * Net147 Net147@c211-30-18-226.rivrw2.nsw.optusnet.com.au
1244355734 M * Net147 i'm having trouble with shutdown/reboot logging. I can write /var/log/wtmp records fine in guest using halt -w but I can't read them back using the 'last' command. any ideas?
1244355785 M * Net147 reading wtmp files from other systems works so i'm guessing something is missing when writing the records.
1244357703 J * doener ~doener@i59F57CCD.versanet.de
1244357806 Q * doener_ Ping timeout: 480 seconds
1244359202 Q * jescheng Remote host closed the connection
1244359222 J * jescheng ~jescheng@proxy-sjc-1.cisco.com
1244359306 N * Bertl_zZ Bertl
1244359313 M * Bertl morning folks!
1244359324 M * Net147 morning Bertl
1244359330 M * Bertl Net147: do you get an error? if so, which one?
1244359348 M * Net147 Bertl: I don't get any error at all
1244359359 M * Net147 just displays no records
1244359374 M * Bertl sounds more like an userspace problem then, what does strace -fF reveal?
1244359401 M * Net147 for halt -w or last?
1244359429 M * Bertl I'd say for the one which fails :)
1244359438 M * Net147 none of them fail...
1244359447 M * Net147 just last doesn't print out anything
1244359468 M * Bertl well, I tought you considered that a failure ...
1244359480 M * Net147 it happens on gentoo guest as well. also happens on ubuntu jaunty vserver host.
1244359491 M * Net147 may be a login issue?
1244359503 M * Net147 I haven't tried with SSH login yet
1244359505 M * Net147 i'll do that
1244359612 M * Net147 hmm... can't login via SSH
1244359673 M * Net147 ah, need to bind to vserver guest ip
1244359744 M * Net147 that doesn't work either. address alraedy in use?
1244359833 M * Net147 when I try to connect to the vserver guest IP, it connects to host instead?
1244360048 M * Net147 Bertl: I noticed there is an arch init style in vserver.functions. has someone else been working with Arch support?
1244360457 J * ktwilight_ ~keliew@75.71-65-87.adsl-dyn.isp.belgacom.be
1244360457 Q * ktwilight Read error: Connection reset by peer
1244360638 M * Net147 Bertl: nevermind I got it working. apparently it requires a login entry first to associate with.
1244360667 M * Net147 Bertl: I needed to change bind address on vserver host from 0.0.0.0 to its LAN IP so that guest can also start sshd.
1244360879 J * ghislainocfs21 ~Ghislain@adsl2.aqueos.com
1244361105 M * Bertl that's expected and documented on the wiki
1244361135 M * Net147 yeah
1244361222 J * kernelmadness ~kernelmad@89.169.100.241
1244361234 Q * ghislainocfs2 Ping timeout: 480 seconds
1244362275 M * Net147 Bertl: there already seems to be an arch init style. should I modify arch init style or just use sysv?
1244362409 M * Bertl whatever suits you most ... if the arch init style is new, it might need some testing
1244362424 M * Net147 well the arch init style doesn't seem to work well...
1244362435 M * Net147 compared to using sysv with custom start/stop scripts
1244362441 M * Net147 dpesm
1244362449 M * Net147 doesn't set hostname, clean up leftover files, etc.
1244362599 J * misc--- ~misc@60-241-66-237.static.tpgi.com.au
1244363118 J * vk5foss kgoetz@203.209.167.143
1244363170 Q * kgoetz Read error: Connection reset by peer
1244363257 J * bonbons ~bonbons@2001:960:7ab:0:2c0:9fff:fe2d:39d
1244363617 J * scientes_ ~scientes@174-21-87-171.tukw.qwest.net
1244364950 M * misc--- hello... when was the last vserver software update (just curious)?
1244365743 M * Net147 last experimental release was 29th may
1244365804 M * misc--- ah cool, so it's still active then
1244366369 M * harrydg ofcourse it is
1244366377 M * harrydg we're ... active! :)
1244366385 M * harrydg "harry said from the couch"
1244366456 M * Net147 harrydg: you still planning on doing that vs+grsec patch for 2.6.29.4?
1244366661 M * harrydg yesh... but first i need a sollution for the bug you were talking about
1244366672 M * harrydg (and someone else mentioned another error)
1244366697 M * harrydg but if you really want it, i can spin another patch next week
1244366718 M * Net147 harrydg: Bertl fixed the bug I was having. 2.3.0.36.14 includes the fix.
1244366758 M * Net147 what was the other error?
1244366828 M * harrydg don't remember, i have it at work somewhere ;)
1244366845 M * Net147 errors at work aren't a good thing...
1244366860 M * harrydg bleh, who cares, it's a government job
1244366862 Q * pmenier_off Ping timeout: 480 seconds
1244366885 M * Net147 okay =P
1244366892 J * pmenier_off ~pmenier@ACaen-152-1-28-113.w83-115.abo.wanadoo.fr
1244366980 M * bragon harrydg: it seems it's a grsec/pax problme.
1244366998 M * bragon i'll try perhaps without the grsec patch
1244367041 M * Net147 bragon: what's the problem?
1244367064 J * larsivi ~larsivi@70.84-48-63.nextgentel.com
1244367113 M * bragon harrydg: so people report many issues with the 2.6.29.4 and vserver-2.3/grsec patch ?
1244367169 M * kernelmadness Hi guys. Just noticed that load average on guests with virt_load enabled calculated incorrectly. Busy VPS shows value about 0, while not busy vps shows a bit less 1. Any ideas?
1244367223 M * Bertl kernelmadness: what kernel?
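[editor's note] Net147's fix above (binding the host's sshd to its LAN address so the guest's sshd can bind the guest IP) comes down to one sshd_config directive. A minimal sketch; the addresses below are invented for illustration and do not appear in the log:

```conf
# /etc/ssh/sshd_config on the HOST (illustrative addresses)
# With the default wildcard bind (0.0.0.0), the host's sshd grabs port 22 on
# every address, including the guest IPs, so the guest's sshd cannot start.
ListenAddress 192.168.1.10      # bind the host's LAN IP only

# The guest's sshd can then bind the guest's own address, e.g. in the
# guest's sshd_config:
# ListenAddress 192.168.1.20
```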
1244367235 M * harrydg bragon: no
1244367249 M * harrydg bragon: net147 reported one~, and i think there was another one
1244367254 M * harrydg that's about it :)
1244367258 M * kernelmadness 2.6.29 with recent vs2.3.0.36.14
1244367351 M * bragon harrydg: ok, i'll compile it
1244367366 M * bragon it can't be worste that the 2.6.27
1244367385 M * bragon i use that for the moment :
1244367386 M * bragon Linux gerontius 2.6.27.15-grsec-2.1.12-vs2.3.0.36.4 #5 SMP Mon Feb 16 17:34:09 CET 2009 x86_64 Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83GHz GenuineIntel GNU/Linux
1244367554 M * harrydg bragon: so i read...
1244367556 M * bragon i use this on another serveur :
1244367557 M * bragon Linux alucard 2.6.22.19-vs2.3.0.34 #1 SMP Wed Nov 5 11:26:16 CET 2008 i686 Intel(R) Pentium(R) 4 CPU 3.00GHz GenuineIntel GNU/Linux
1244367570 M * bragon no issues with this version
1244367573 M * harrydg that's the one i still use on my webserver myself :)
1244367591 M * bragon very stable !
1244367593 M * harrydg well... the grsec one off course :)
1244367616 M * harrydg Linux ssh 2.6.22.18-grsec2.1.11-vs2.2.0.6 #1 Wed Feb 13 10:47:06 CET 2008 x86_64 x86_64 x86_64 GNU/Linux
1244367620 M * harrydg that one even :)
1244367626 M * bragon but i'm a shell provider inside one off my vserver. so using grsec is better !
1244367647 M * harrydg there is an update, but irrelevant for me
1244367662 M * harrydg i do want to upgrade to the latest version aswell...
1244367667 M * harrydg aswel
1244367702 M * bragon i must use a recent kernel because of a pci-e version of e1000 drivers
1244367777 M * harrydg i'd suggest you wait until somewhere next week
1244367796 M * harrydg i'll spin the 2.6.29.4 with the bugfix that net147 was talking about
1244367812 M * harrydg i'll see if the other one is even... in the kernel side ;)
1244367851 M * Bertl kernelmadness: cannot reproduce, works fine here with virt_load
1244367863 M * Bertl kernelmadness: what is your test setup?
1244367928 M * harrydg aaha... bertl... allways at work :)
1244367960 M * bragon harrydg: i can use it for my 2.6.29.4 ? : patch-2.6.29.2-vs2.3.0.36.12-grsec2.1.14-20090513.diff
1244367962 M * bragon ok ?
1244368051 M * kernelmadness Bertl: Gentoo box. 6 active guests. in config CONFIG_VSERVER_IDLETIME=y, CONFIG_VSERVER_IDLELIMIT=n
1244368089 M * harrydg bragon: no
1244368097 M * bragon http://paste.geeknode.org/801e76a7
1244368099 M * harrydg there are fs_struct changes
1244368106 M * harrydg which make kernel patching fail
1244368106 M * bragon harrydg: arf
1244368117 M * harrydg which are solved in bertl's latest patch
1244368128 M * harrydg 2.3.0.36.14
1244368131 M * bragon ok i can find that on ~bertl ?
1244368141 M * harrydg i'll merge that with the latest grsec this week
1244368152 M * harrydg no, it's on the site... gimme a sec
1244368171 M * bragon hum
1244368174 M * bragon ok
1244368176 M * harrydg http://vserver.13thfloor.at/Experimental/patch-2.6.29.4-vs2.3.0.36.14.diff
1244368187 M * bragon but no grsec with it
1244368190 M * bragon ok
1244368196 M * harrydg not yet, that, i'll do next week
1244368200 M * harrydg well... comming week
1244368207 M * Net147 cool
1244368234 M * bragon hum
1244368243 M * bragon i think that grsec is my problem
1244368430 M * harrydg bragon: don't know :)
1244368441 M * harrydg unless you tell me what your problem is exactly :)
1244368460 M * harrydg but if you suspect grsec, test without it and see what that gives you
1244368473 M * bragon vserver History Tracing is usefull for my issues ?
1244368475 M * harrydg as Bertl says... vserver adds security...
1244368486 M * harrydg might be :)
1244368497 M * bragon harrydg: the issues is un understable
1244368499 M * harrydg grsec adds more, but you have to konw what you're doing :)
1244368510 M * bragon because the server freeze
1244368513 M * bragon and that's all !
1244368515 M * bragon no log
1244368517 M * harrydg ah... that sounds more like a hardware problem to me :S
1244368519 M * bragon no kerne message
1244368526 M * bragon just a freeze
1244368539 M * bragon harrydg: no, i think not
1244368540 M * harrydg i'd suggest you give the non-grsec latest vserver patch a go
1244368582 M * harrydg i'll spin a new patch this week... which you can test somewhere else first (and i'll do the same ;))
1244368585 M * bragon i just change all the piece
1244368619 M * bragon the server is very new
1244368630 M * bragon i change it when my issues comes
1244368689 M * Bertl kernelmadness: and how do you test?
1244368756 M * bragon harrydg: i test with Bertl patch, if the issues comes with the Bertl's patch, i can say that it's not a grsec issues
1244368781 M * harrydg it's the first part in solving your problem...
1244368794 M * Bertl bragon: and then I would really appreciate a bug report :)
1244368795 M * harrydg if we can rule out kernel + vserver, we KNOW it's in grsec part
1244368820 M * bragon my bzImage is ready
1244368831 M * harrydg YAY!
1244368833 M * harrydg ;)
1244368976 A * bragon touch /fastboot
1244369010 M * kernelmadness Bertl: one vps is constantly under load(picture hosting), other(mysql) is a little less loaded. Others(mail, testing, dns) have almost no load. But, la on first 2 vps is around 0, and on others is around 0.3-1
1244369120 M * Bertl that doesn't sound like a proper test setup to me
1244369134 M * Bertl I'm testing here with a cpuhog, which creates a constant load of '1'
1244369157 M * Bertl putting that on the host, raises the load by 1, on the guest, with virt_load, the load stays at 0
1244369170 M * Bertl moving the cpuhog into the guest raises the guest load by one
1244369197 M * Bertl please don't confuse cpu usage per se with 'load' they might be quite different
1244369264 M * bragon gerontius ~ # uname -a
1244369270 M * bragon Linux gerontius 2.6.29.4-vs2.3.0.36.14 #1 SMP Sun Jun 7 11:59:57 CEST 2009 x86_64 Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83GHz
1244369277 M * bragon for the moment, it works for me
1244369299 M * Bertl what were the issues you were seeing?
1244369320 M * kernelmadness hmm. It's seems to be very synthetic test... load average consists of count of waiting processes + io load. It's very strange that resting vps has high la
1244369354 M * Bertl no, load only consists of number of running processes averaged over time
1244369365 M * bragon i have an interesting problem
1244369367 M * bragon lol
1244369372 M * bragon gerontius ~ # vserver meriadoc enter
1244369372 M * bragon smeagol / #
1244369377 M * bragon gerontius ~ # vserver meriadoc enter
1244369377 M * bragon xanadu / # logout
1244369378 M * bragon o_o
1244369384 M * Bertl kernelmadness: (where running means in 'running' state, not scheduled)
1244369425 M * Bertl bragon: util-vserver version?
1244369455 M * bragon sys-cluster/util-vserver-0.30.215
1244369471 M * bragon gerontius ~ # vserver sam enter
1244369471 M * bragon xanadu / # logout
1244369473 M * bragon arf
1244369484 M * Bertl too old for 2.6.29.x
1244369502 M * bragon ok i upgrade
1244369507 M * Bertl you need a recent pre of 0.30.216, otherwise the namespaces won't work
1244369525 M * Bertl (which can cause all kinds of issues, like e.g. being able to change the host name)
1244369554 M * bragon Bertl: do you know if 216 is in portage ?
1244369573 M * Bertl no idea, you have to check with a gentoo person
1244369585 M * bragon i check himself
1244369591 M * bragon s/himself/myself
1244369620 M * kernelmadness Bertl: well, in this case there must be no load at all, because of no 'running' processes on this vps
1244369645 M * bragon ok 0.30.216 isn't in portage
1244369771 M * Bertl kernelmadness: look, Linux-VServer doesn't change how the load is calculated, it just 'adjusts' where the average is stored (i.e. inside a separate place in the vx_info) .. so you get the same behaviour/calculation as on the host, just limited to guest processes
1244369806 M * bragon Bertl: sorry , but where can i found the : 0.30.216 ?
1244369844 M * Bertl http://people.linux-vserver.org/~dhozac/t/uv-testing/ (it's on the wiki, under download)
1244369867 M * bragon ok thanks
1244369892 M * Bertl kernelmadness: so one valid test with your 'test load' would be to run it on the host, and record the load average, and then run it inside the guest, while keeping the host under some load, and compare it
1244369911 M * kernelmadness Bertl: yeah, i understand this... but real behaviour of this thing is strange
1244369988 M * Bertl nobody ever said that load accounting wasn't strange :)
1244370290 M * bragon i just compile by hand the util-vserver-0.30.216-pre2833, with ./configure && make && make install
1244370299 M * bragon but i can't enter in the good guest
1244370318 M * kernelmadness Bertl: ok. I have another fact on this problem.. la in /proc/virtual//cvirt seems to be realistic. But when i enter in corresponding VPS, the numbers are different!
1244370389 M * Bertl okay, that's a good argument, how do you check inside a guest?
1244370401 M * kernelmadness top
1244370414 M * Bertl okay, try with uptime?
1244370424 M * kernelmadness ok, i'll try now
1244370580 M * bragon hum
1244370605 M * Bertl good guest?
1244370615 M * bragon not
1244370618 M * Bertl did you restart the guests since the update?
1244370638 M * bragon my vserver-info show the old util-vserver
1244370646 M * bragon i don't understand
1244370650 M * bragon the make install is ok !
1244370659 M * Bertl did you uninstall the old ones?
1244370664 M * bragon not !
1244370676 A * bragon slaps himself
1244370731 M * bragon gerontius util-vserver-0.30.216-pre2833 # vserver-info
1244370731 M * bragon -bash: /usr/sbin/vserver-info: No such file or directory
1244370861 M * bragon ok the prfix is /usr/local
1244370879 M * kernelmadness Bertl: wtf... load avg rising up to 0.5 when i do vserver enter. If i leaving it for a minute, la returns to 0
1244370902 M * Bertl sounds okay
1244370925 M * kernelmadness Bertl: is it normal?
1244370942 Q * derjohn_foo Ping timeout: 480 seconds
1244370944 M * Bertl well, entering a guest is quite process intensive
1244370973 M * Bertl the load average picks up on that ...
1244371028 M * kernelmadness in my case entering seems to be VERY intensive process :)
1244371068 M * Bertl what happens if you logon via ssh for example?
1244371083 N * pmenier_off pmenier
1244371090 M * Bertl I wouldn't be surprised to see a similar load spike
1244371110 M * Bertl (nowadays distros run all kind of useless stuff on new sessions :)
1244371227 M * kernelmadness Bertl: no, when i log in via ssh into guest there no any load spikes :(
1244371348 M * Bertl what about 'vserver exec uptime'?
1244371416 M * kernelmadness same effect. la rising to 0.2
1244371433 M * Bertl well, .2 isn't that much
1244371452 M * Bertl it's a single process running for 1/5th of a second
1244371563 M * kernelmadness yes, but la calculated on a minute interval.. isn't it?
1244371891 M * Bertl so .. now we are suspecting that the load averaging is flawed ... well, let's check that with the source code
1244372110 M * Bertl we agree that the host load calculation is what we want for the guests too, right?
1244372126 M * kernelmadness yes, of course
1244372198 M * Net147 if i'm adding new distribution to util-vserver, do I need to add package management stuff?
1244372210 J * harobed ~harobed@arl57-1-82-231-110-14.fbx.proxad.net
1244372237 M * Bertl Net147: depends
1244372280 M * Net147 well I see the Gentoo guests are built using template and template build method has package management enabled by default
1244372471 M * Net147 so just wondering what I need to do for package management when adding a build method
1244372618 M * Bertl kernelmadness: okay, I lied, load calculation for the guests is different than the normal load calculation .. and I remember now why :)
1244372666 M * kernelmadness Bertl: so, whar a difference? :)
1244372718 M * Bertl kernel load calculation is done in the periodic timer
1244372760 M * Bertl if we would extend that to the guest load calculation, we would need to iterate over all guests on every load update
1244373039 M * Bertl (this is quite inefficient and would add significant overhead)
1244373040 M * harrydg i think it might be a good idea to make a 0.30.216 release, since you NEED it to run the latest (be it experimental) patches
1244373073 M * Bertl kernelmadness: so what we do instead is to update the average whenever the process count changes
1244373119 M * Bertl (and weight between previous and new value according to the interval passed)
1244373201 M * Bertl this has the 'unwanted' sideeffect, that the load might show certain 'jumps' if the state doesn't change for some time
1244373231 M * Bertl we might use a little trick to elevate this issue without adding the unwanted overhead though
1244373312 M * Bertl we could update each guest in a round-robin manner from the timer
1244373601 Q * jescheng Remote host closed the connection
1244373622 J * jescheng ~jescheng@proxy-sjc-1.cisco.com
1244373639 M * Bertl kernelmadness: so, if you like to prepare a patch for that, I would consider inclusion
1244373812 M * Net147 Bertl: does 0.30.216 also work with stable releases?
1244373845 M * Bertl yep
1244373897 M * Net147 i've decided to leave the package management stuff out for now. get something working first.
1244374081 M * kernelmadness Bertl: ok, i'll think on this problem
1244377080 M * kernelmadness Bertl: can you point me, how to iterate through all vxi structs in kernel?
1244377596 M * misc--- I love vserver, I've used it for years without a single hitch, hats off to the devs and the people involves, bloody hell it's good.
1244377604 M * misc--- involved*
1244377657 M * Bertl thanks for the flowers!
1244377692 M * misc--- =)
1244377694 M * Bertl kernelmadness: check out __lookup_vx_info()
1244377722 M * kernelmadness Bertl: already there :)
1244377807 M * Bertl kernelmadness: note that you want to do it in a lockless fashion
1244377942 M * kernelmadness Bertl: ok, i'll try to imlement round-robin functionality in timer.c
1244378623 J * geb ~geb@earth.gebura.eu.org
1244381720 M * Net147 is there a function I can call from initpre/initpost to modify apps/init/cmd.start and apps/init/cmd.stop?
1244382197 M * Bertl echo?
1244382205 M * Bertl anyway ... nap attack ... bbl
1244382208 M * Net147 haha... yea that's it
1244382217 N * Bertl Bertl_zZ
1244382291 Q * geb Ping timeout: 480 seconds
1244382962 Q * larsivi Ping timeout: 480 seconds
1244385143 J * jazzanova ~boris@ool-44c5c39f.dyn.optonline.net
1244385152 M * jazzanova hi
1244385166 M * jazzanova i am having problem running xinetd on vserver
1244385197 M * jazzanova do I need to do something special to run it on a vserver ?
1244385384 J * geb ~geb@49.4.82-79.rev.gaoland.net
1244385681 Q * jazzanova Ping timeout: 480 seconds
1244385791 Q * Net147
1244386089 Q * geb Ping timeout: 480 seconds
1244387643 J * dowdle ~dowdle@97-121-196-15.blng.qwest.net
1244388001 Q * jescheng Remote host closed the connection
1244388022 J * jescheng ~jescheng@proxy-sjc-1.cisco.com
1244388432 J * ViRUS ~mp@p579B5652.dip.t-dialin.net
1244389958 J * nkukard ~nkukard@196.212.73.74
1244391257 J * derjohn_foo ~aj@e180203081.adsl.alicedsl.de
1244392650 J * Stacey-Chin ~furrylova@tor-irc.dnsbl.oftc.net
1244392677 J * staceydamaster ~furrylova@tor-irc.dnsbl.oftc.net
1244392698 P * Stacey-Chin
1244392809 J * Stacey-Chin ~furrylova@tor-irc.dnsbl.oftc.net
1244392860 N * Bertl_zZ Bertl
1244392865 M * Bertl back now ...
1244392939 A * Stacey-Chin is away: http://www.furrylovables.com - Don't even think about messin with me and my chins! (815) 476-2641 http://profile.myspace.com/index.cfm?fuseaction=user.viewProfile&friendID=418817686
1244392998 A * Stacey-Chin is back (gone 00:00:58)
1244393003 A * Stacey-Chin is away: http://www.furrylovables.com - Don't even think about messin with me and my chins! (815) 476-2641 http://profile.myspace.com/index.cfm?fuseaction=user.viewProfile&friendID=418817686
1244393005 M * Bertl Stacey-Chin: please avoid 'away' messages or similar spam on this channel
1244393012 P * Stacey-Chin http://www.furrylovables.com - Don't even think about messin with me and my chins! (815) 476-2641 http://profile.myspace.com/index.cfm?fuseaction=user.viewProfi
1244393034 P * staceydamaster http://www.furrylovables.com - Don't even think about messin with me and my chins! (815) 476-2641 http://profile.myspace.com/index.cfm?fuseaction=user.viewProfi
1244393060 M * kernelmadness Bertl, i noticed that LA updates invoked every 5 seconds. I think it will have no significant overhead if we update all guests la counters every time host updates its own
1244393124 M * Bertl well, considering that we might have 500 guests, it could still be messy :)
1244393141 M * Bertl but for a quick hack/test it should be fine
1244393178 M * kernelmadness oh, 500 guests.. its insanity!
1244393212 P * dowdle Konversation terminated!
1244393335 M * Bertl but it is interesting that the update timer runs every 5 seconds
1244393363 M * Bertl a nice approach I was thinking of is the following:
1244393407 M * Bertl we could check for the update when we schedule a guest, i.e. when we do the hard cpu checks
1244393470 M * kernelmadness and, how often occurs hard cpu checks?
1244393580 M * Bertl that depends on the scheduler decisions, but a) we already check/dereference the vx_info there, and b) we can test for the delta to be at least 5seconds (or whatever we consider appropriate)
1244393632 M * Bertl and finally, c) before a guest will read/use the load info, we definitely will get a chance to update it :)
1244393707 M * daniel_hozac but if the host tries to use the values, they might not be accurate?
1244393728 M * Bertl correct, but we could check there too
1244393741 M * daniel_hozac sure.
1244393749 M * Bertl (again, all structures are cache hot/available)
1244393752 M * kernelmadness yeah, thats right.. i tried to hardcode la update in timer.c and got inclusion problems. Seems to be hard to iterate through vxi's from random place in source code
1244393797 M * Bertl the trick for the timer would be to have a separate circular list through all vx_info structs, and just walk that there
1244393821 M * daniel_hozac that seems like noticable overhead.
1244393826 M * Bertl but I think it would be inferior to the scheduler approach .. your ideas/comments ?
1244393827 M * kernelmadness so, how to generate such list?
1244393927 M * Bertl a simple vx_info *rr in each vx_info, which gets added when a context is hashed and removed when unhashed
1244393974 M * Bertl daniel_hozac: the idea was to handle a single guest at a time, and then move on to the next in the next timer
1244393984 M * daniel_hozac wouldn't struct list_head be better?
1244394027 M * Bertl yes, actually something lock and problem free (removal) is required
1244394035 M * Bertl was just a simplification
1244394127 M * daniel_hozac it seems problematic to do it one at a time. i suppose we could just grab a reference to the "next one" though.
1244394194 M * Bertl yep, but my personal preference would be the scheduler check
1244394198 M * daniel_hozac me too.
1244394238 M * kernelmadness I can't completely understand, what means hashed/unhashed?
1244394251 M * daniel_hozac the only problem there is that the delta might be bigger than 5s, which would require a slightly different algorithm
1244394276 M * Bertl that algorithm is already there for load accounting (or should be)
1244394285 M * Bertl (for guest load accounting that is)
1244394292 M * daniel_hozac okay, no big then.
1244394458 M * Bertl kernelmadness: each vx_info (nx_info too) is stored in a lookup hash when it becomes visible, to allow for enter or proc entries
1244395126 M * kernelmadness ok, i understood
1244395163 M * kernelmadness vx_tokens_recalc - is it that place where we should do la recalc?
1244395680 Q * derjohn_foo Ping timeout: 480 seconds
1244395815 M * Bertl yep, put it there or nearby and give it a spin
1244395931 M * kernelmadness so, what is delta in context of this func? seconds/jiffies?
1244396989 M * Bertl for the vx_tokens_recalc?
1244397098 M * kernelmadness yes
1244397209 M * Bertl yes, that delta is ticks .. but don't bother there, my suggestion would be to move the load lock down after the delta calculation in vx_update_load, and call it right after the context switch (in sched.c) before line 4707
1244397861 M * kernelmadness hm. so, we perform delta check before locking, and if (delta < 5*HZ) simply return?
1244397888 M * Bertl yep
1244398003 M * kernelmadness oh, i have one notice. For now load_updates counter in /proc/virtual//cvirt increments fast enough, about 2 per second. is it wasted cycles without real la updating?
1244398230 M * Bertl how many contexts?
1244398274 M * kernelmadness 6
1244398277 M * Bertl btw, did you move the unlock too? otherwise you will unlock 'unlocked' locks :)
1244398329 M * Bertl so about 12 times per second in total, I guess that is acceptable ... but probably it would be okay to increment the counter only on actual updates
1244398382 M * kernelmadness no, unlock will be called only if (delta < 5*HZ). so, there is no problem i think
1244398395 M * kernelmadness oops
1244398399 M * kernelmadness if (delta > 5*HZ)
1244398729 J * larsivi ~larsivi@70.84-48-63.nextgentel.com
1244398826 M * kernelmadness Can't understand, how to propely get vxi in sched.c
1244398969 M * Bertl current->vx_info
1244399321 M * Bertl just check that it actually has a vx_info, before calling vx_update_load :)
1244399326 M * kernelmadness well, kernel compiles normally. The only question is where to test it.. :))
1244399336 M * Bertl kvm?
1244399524 M * kernelmadness it needs time to setup..
1244399569 M * Bertl well, then test it one real hardware ...
1244399912 M * kernelmadness kamikaze mode on. I'll try to test it on our dev server
1244400250 M * kernelmadness fuck. it not work
1244400466 M * Bertl did you get a kernel panic/stack trace?
1244400470 M * Bertl if so, please upload
1244400597 M * kernelmadness server is located in our dc, i can get there tomorrow :(
1244400608 M * Bertl no serial console?
1244400625 M * kernelmadness no(
1244400646 M * Bertl maybe a good time to install one?
1244400672 M * kernelmadness Good time to install virtual box :)
1244400757 M * Bertl kvm is usually way simpler, as you can boot on the command line
1244400767 M * Bertl i.e. no graphic stuff requires
1244400771 M * Bertl *required
1244401002 M * kernelmadness i'll try
1244401069 M * Bertl you can get boot images and test kernels from http://vserver.13thfloor.at/Stuff/QEMU/
1244401219 M * kernelmadness thx
1244402070 M * urbi someone told me his firm is using some kind of virtual servers that run on multiple host machines at a time, and when one server crashes another takes it over which results in greater uptime
1244402075 M * urbi what was he talking about?
1244402287 M * daniel_hozac probably vmware.
1244402343 M * urbi why isnt this kind of feature possible
1244402346 M * urbi in linux vserver?
1244402402 Q * jescheng Remote host closed the connection
1244402422 J * jescheng ~jescheng@proxy-sjc-1.cisco.com
1244402444 M * daniel_hozac it would be, if you modified some sort of clustering patch to work with Linux-VServer.
1244402573 M * Bertl urbi: when one server crashes, how can the other 'take over'?
1244402623 M * Bertl urbi: normally this can only be done with extremely tight synced machines processing identical code
1244402707 M * daniel_hozac it's shared storage, and i assume they also keep CPU and memory on there too.
1244402741 M * Bertl so basically snapshoting?
1244402949 M * urbi Bertl: thats what i was asking you:P
1244402956 M * Bertl urbi: note that it shouldn't be too hard to adapt Linux-VServer to Linux cluster implementations
1244403019 M * Bertl urbi: well, it really depends on what one needs/wants ... usually it is more than enough to start a backup guest when a machine dies (minimalistic form of high availability)
1244403090 M * urbi but then it has to concurrently write to both machines, right?
1244403131 M * Bertl but of course, you can take that to the extreme and use lockstepping
1244403154 M * Bertl usually you put HA clusters on a shared or replicated filesystem
1244403502 M * Bertl note that I know of a bunch of HA setups using heartbeat and Linux-VServer for that
1244404083 M * fb_ is util-vserver svn repo accessible for ro access?
1244404101 M * fb_ (for anonymous ro access, that is)
1244404131 M * Bertl I'd say so
1244404163 M * fb_ using which schema?
1244404192 M * daniel_hozac http://svn.linux-vserver.org/svn/util-vserver/trunk
1244404205 M * fb_ daniel_hozac: thanks :-)
1244404582 J * docelic ~docelic@78.134.201.243
1244405896 M * Radiance hmm, i remember i had this issue before but forgot, this happens when trying to start a vserver: /usr/local/sbin/chbind: line 135: 5347 Segmentation fault "${create_cmd[@]}" "${chain_cmd[@]}" -- "$@"
1244405899 M * Radiance appreciate any advise
1244405941 Q * harobed Ping timeout: 480 seconds
1244405977 M * Bertl I would opt for toolchain/kernel problem
1244405981 M * Bertl anything in dmesg?
1244406099 M * Radiance well, it's the same setup as on some other box (same kernel config, version, same util version)
1244406103 M * Radiance lemme check some more
1244406491 M * daniel_hozac same gcc, binutils, CFLAGS, etc?
1244406535 M * Radiance no, gcc version and binutils are different
1244406687 M * Radiance lemme do a restart
1244407508 M * Radiance hmm can't find what's wrong
1244407598 M * Radiance this means the init script in vserver itself or that of the system btw ? /etc/init.d/rc 3
1244407618 Q * scientes_ Read error: Connection reset by peer
1244407622 J * scientes ~scientes@174-21-87-171.tukw.qwest.net
1244407835 M * Bertl Radiance: 22:19 < Bertl> anything in dmesg?
1244407988 M * Radiance nope, nothing there
1244408019 M * Bertl what about testme.sh? does it run through or bail out too?
1244408043 M * Radiance weird, it's working now
1244408054 M * Radiance but only after i downloaded and compiled util-vserver-0.30.215
1244408058 M * Radiance instead of 214
1244408076 M * Radiance i normally use 214 so far
1244408130 A * mnemoc misses .216
1244408217 M * Bertl Radiance: what kernel do you use?
1244408256 M * Radiance the 2.6.22.19
1244408534 M * Bertl okay, 214 should be fine, as long as your toolchain compiles them correctly
1244408598 Q * uva_ Ping timeout: 480 seconds
1244409828 J * derjohn_foo ~aj@e180203081.adsl.alicedsl.de
1244409944 Q * bonbons Quit: Leaving
1244410765 Q * ViRUS Quit: If there is Artificial Intelligence, then there's bound to be some artificial stupidity. (Thomas Edison)
1244413444 Q * kernelmadness Quit: I am leaving you (xchat 2.4.5 or later)
1244413731 J * geb ~geb@49.4.82-79.rev.gaoland.net
1244413865 J * uva bno@118-168-232-237.dynamic.hinet.net
1244414670 Q * derjohn_foo Ping timeout: 480 seconds
1244415867 Q * geb Quit: /
1244416801 Q * jescheng Remote host closed the connection
1244416821 J * jescheng ~jescheng@proxy-sjc-1.cisco.com
1244418919 Q * PowerKe Ping timeout: 480 seconds