1211501050 M * cehteh $ fakechroot /usr/sbin/chroot ~/debian_lenny/ /bin/bash 1211501050 M * cehteh Segmentation fault 1211501060 M * cehteh .. mhm that really seems to be vserver related 1211501092 M * Bertl depends, segfaults usually point to gcc issues 1211501104 Q * dowdle Remote host closed the connection 1211501133 M * Bertl but running with gdb might shed some light 1211501152 M * cehteh arch_prctl(ARCH_SET_FS, 0x2ba02469b6e0) = 0 1211501152 M * cehteh --- SIGSEGV (Segmentation fault) @ 0 (0) --- 1211501156 M * cehteh strace 1211501168 Q * balbir Ping timeout: 480 seconds 1211501182 M * Bertl use 'where' to get a gdb backtrace 1211501191 M * cehteh well .. i just give that vserver chroot privileges and use real chroot 1211501210 M * cehteh that was my initial plan and it is a non-public build server 1211501333 M * cehteh (no debug info either) 1211501419 M * cehteh heh .. under valgrind it works :P 1211501694 M * Bertl so it can't be broken then :) 1211501842 M * cehteh relatively broken :) 1211501906 M * cehteh well .. i need chroots just as build environment, not for isolation/security 1211501968 M * cehteh prolly easier to set up some real ones instead of trying with fakechroot 1211502027 M * Bertl yep 1211502075 M * Bertl did you check the fakechroot limitations btw? 1211502109 M * Bertl i.e. sure that you do not simply hit one of them? 1211502167 M * cehteh quite much 1211502227 M * cehteh ok .. tomorrow is another day .. n8 1211502253 M * Bertl yep, have a good one, I guess I'll hit the sack too ... 1211502263 M * Bertl night everyone ... cya! 1211502270 N * Bertl Bertl_zZ 1211502695 M * daniel_hozac cehteh: do you have anything in dmesg? 1211502741 M * cehteh bash[22280:#40012]: segfault at 0000000000000000 rip 0000000000000000 rsp 00007fff9f35fba8 error 14 1211502760 M * cehteh (i don't have much kernel debugging on) 1211503090 M * daniel_hozac well, i was expecting an oops or a BUG. 1211503130 M * daniel_hozac must be a userspace issue then. 
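The debugging steps suggested above (a gdb backtrace via 'where', strace, and the dmesg segfault line) can be collected into one short session. This is a sketch, assuming core dumps are enabled; the guest path is the example from the log, and the core file's name and location vary by system:

```shell
# Let the crashing bash leave a core file, then reproduce the segfault:
ulimit -c unlimited
fakechroot /usr/sbin/chroot ~/debian_lenny/ /bin/bash    # segfaults

# Load the core into gdb; 'where' prints the backtrace Bertl asked for.
# Under fakechroot the crashing binary is the guest's bash:
gdb ~/debian_lenny/bin/bash core
#   (gdb) where

# Follow child processes with strace and inspect the last calls
# before the SIGSEGV:
strace -f -o chroot.trace fakechroot /usr/sbin/chroot ~/debian_lenny/ /bin/bash
tail -n 20 chroot.trace

# The kernel's one-line record of the fault; rip 0000000000000000 means
# the process jumped through a NULL function pointer:
dmesg | grep -i segfault
```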
1211503156 M * cehteh fakechroot is all about userspace 1211503189 M * cehteh but it likely crashes because something happens under vserver which it never expected 1211503213 M * cehteh well i don't care, it's too unimportant and too easy to circumvent 1211504109 Q * padde Remote host closed the connection 1211504183 J * padde ~padde@patrick-nagel.net 1211507480 J * DavidS1 ~david@p4FCC2065.dip0.t-ipconnect.de 1211507688 J * balbir ~balbir@122.167.194.18 1211507889 Q * DavidS Ping timeout: 480 seconds 1211509619 J * dowdle ~dowdle@67-42-226-224.blng.qwest.net 1211511731 J * marv ~marv@74.57.218.127 1211511742 N * marv marv_ 1211511793 M * marv_ is there a way to "vserver exec" a command to ALL guests? like vapt-get --all 1211511836 M * daniel_hozac vsomething vserver --all -- exec ls -l / 1211511868 M * marv_ vsomething command not found :P 1211511880 M * daniel_hozac really? are you sure you have util-vserver installed? 1211511886 M * marv_ i guess i could vserver-stat |awk 1211511894 M * marv_ and exec the command with that 1211511944 M * daniel_hozac it sounds like your util-vserver installation is incomplete at best. you really ought to fix that. 1211511965 M * marv_ ? 1211511973 M * marv_ oh it does exist 1211511977 M * marv_ i thought u were messing with me 1211512079 Q * derjohn Ping timeout: 480 seconds 1211512112 M * marv_ thx 1211512122 J * derjohn ~derjohn@dslb-084-058-195-055.pools.arcor-ip.net 1211512351 M * marv_ daniel_hozac, any plans for live migrations in linux-vserver project? 1211512378 M * daniel_hozac no. 1211512399 M * daniel_hozac migration practically requires virtualizing everything, and at that point you might as well use xen/kvm. 1211512430 M * marv_ well i'm doing single dom xen running vservers in that... 1211512532 M * daniel_hozac so then you've already accomplished what you want, no? 
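The two approaches from the exchange above (daniel_hozac's `vsomething` one-liner and marv_'s `vserver-stat | awk` fallback) can be written out. The vserver commands are shown as comments since they only work on a vserver host; the `vserver-stat` column layout assumed by the awk filter is an assumption, and guest names containing spaces would break `$NF`:

```shell
# daniel_hozac's one-liner, run on a vserver host:
#   vsomething vserver --all -- exec ls -l /
#
# marv_'s fallback: derive guest names from vserver-stat and loop.
#   for g in $(vserver-stat | awk 'NR>1 && $1!="0" {print $NF}'); do
#       vserver "$g" exec ls -l /
#   done
#
# The awk filter itself, demonstrated on sample vserver-stat-shaped output:
# take the last column (guest name), skip the header row and context 0 (host).
printf 'CTX   PROC  NAME\n0     90    root server\n40012 12    lenny-build\n' |
    awk 'NR>1 && $1!="0" {print $NF}'
# prints: lenny-build
```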
1211512542 M * marv_ not really 1211512567 M * marv_ when i take a dedicated server down for upgrades or whatever i'd like to shift the vservers to diffrent dedicated servers... not all on the same 1211512600 M * marv_ it wouldn't be as hard on the dedicated servers... i dont have a spare atm... 1211512771 M * marv_ anyways, still, 2 thumbs up on your project.... its the best, most flexible, lightweight system i've tried so far... 1211513173 M * Supaplex you betcha 1211513202 A * Supaplex agrees with marv_'s flattery, daniel_hozac + Bertl_zZ :) 1211516101 M * marv_ good night all... i'm out 1211516105 Q * marv_ Quit: Leaving 1211516159 Q * geb Ping timeout: 480 seconds 1211517362 Q * fatgoose Quit: fatgoose 1211519042 Q * hparker Quit: Read error: 104 (Peer reset by connection) 1211521015 J * sharkjaw ~gab@64.28.12.166 1211521504 Q * nkukard Ping timeout: 480 seconds 1211524334 N * DavidS1 DavidS 1211524849 Q * DavidS Quit: Leaving. 1211525129 Q * Slydder Quit: Leaving. 1211526935 J * bfremon ~ben@lns-bzn-54-82-251-127-233.adsl.proxad.net 1211526958 J * rgl ~rgl@bl8-142-23.dsl.telepac.pt 1211526961 A * rgl waves 1211527188 Q * meandtheshell1 Quit: Leaving. 1211527739 M * wibble anyone know if the latest stable patches work with > 2.6.23 kernel ?> 1211527895 J * nkukard ~nkukard@vc-196-207-41-246.3g.vodacom.co.za 1211528130 N * DoberMann[ZZZzzz] DoberMann 1211528464 Q * nkukard Quit: Leaving 1211528924 J * doener ~doener@i577BACE8.versanet.de 1211528988 J * dna ~dna@48-194-dsl.kielnet.net 1211530824 Q * alex_ Ping timeout: 480 seconds 1211530995 Q * hijacker Remote host closed the connection 1211531339 J * alex_ ~alex@62-249-237-101.no-dns-yet.enta.net 1211532136 M * awk 2.6.22.19 1211532144 M * awk is this correctly updated, latest stable version to use? 
1211532588 Q * FireEgl Read error: Connection reset by peer 1211532960 J * hijacker ~hijacker@213.91.163.5 1211533136 Q * hijacker Remote host closed the connection 1211533459 J * FireEgl FireEgl@adsl-212-220-247.bhm.bellsouth.net 1211533930 J * hijacker ~hijacker@213.91.163.5 1211534110 Q * hijacker Remote host closed the connection 1211534593 Q * MatBoy Quit: Ik ga weg 1211534997 J * MatBoy ~MatBoy@wiljewelwetenhe.xs4all.nl 1211535575 J * nkukard ~nkukard@196.212.73.74 1211536415 J * bfremon1 ~ben@lns-bzn-31-82-252-214-254.adsl.proxad.net 1211536778 Q * bfremon Ping timeout: 480 seconds 1211536837 J * Mojo1978 ~Mojo1978@ip-78-94-94-190.hsi.ish.de 1211539684 Q * balbir Ping timeout: 480 seconds 1211539792 M * transacid awk: yes, latest stable 1211540069 J * xanoro ~xanoro@p3EE033E0.dip0.t-ipconnect.de 1211540145 J * marl ~marl@84.13.35.31 1211540149 M * marl hi guys 1211540209 M * marl if i have a fresh guest system, does vserver require there to be any scripts in /etc/init.d/ ? 1211540351 J * balbir ~balbir@122.167.176.115 1211540445 M * daniel_hozac marl: no. 1211540521 M * marl as a quick work around to a guest bootup problem, i moved all the init.d scripts to a temp directory, when i try and start the guest, i get 'No command given; use '--help' for more information.' 1211540544 M * marl guest will only boot when i put all the init.d scripts back in place 1211540551 M * daniel_hozac which just means you didn't select the proper initstyle or configure the guest correctly. 1211540688 M * marl is there a way to disable init.d scripts in a guest, without starting it? (vserver build used to do it, but doesnt now) 1211540742 M * daniel_hozac hmm? 
1211540762 M * marl other than manually removing the symlinks in rc[0-6] 1211540813 M * daniel_hozac vserver start --rescue /bin/bash 1211540832 M * marl the first few guests that i built came up at the end of the build process and removed a load of unnecessary init scripts, but latest couple ive built dont anymore 1211540842 M * marl ah thanks 1211540858 M * daniel_hozac make sure you have recent utils. 1211540897 M * marl one other question, if i am only running one guest, is there a way of giving it the same ip as the host? or do i have to use iptables? 1211540900 J * meandtheshell1 ~sa@d91-128-17-22.cust.tele2.at 1211540956 M * daniel_hozac sure. 1211540969 M * daniel_hozac you can even have multiple guests using the host's IP address. 1211540984 M * daniel_hozac the guests will just be able to mess with each other's services. 1211541011 M * daniel_hozac (and remember to set nodev, or your host will lose connectivity when you stop it...) 1211541035 M * marl thanks :) 1211541286 Q * jsambrook Quit: Leaving. 1211541640 P * xanoro 1211542774 Q * derjohn Ping timeout: 480 seconds 1211543519 Q * FireEgl Read error: Connection reset by peer 1211544288 J * DavidS ~david@p5B2A3656.dip0.t-ipconnect.de 1211544416 J * FireEgl FireEgl@adsl-212-220-247.bhm.bellsouth.net 1211544672 Q * Aiken Remote host closed the connection 1211547064 Q * padde Ping timeout: 480 seconds 1211547666 J * padde ~padde@patrick-nagel.net 1211547692 J * hijacker ~hijacker@213.91.163.5 1211547785 M * marl can i just check i have the following correct? i am wanting to assign the same ip to a guest as the host, so i have ip and prefix set in the guest as per the host, and i need to rename the file 'dev' (containing eth0) to 'nodev' with same contents, is this correct? 
1211547891 Q * hijacker Remote host closed the connection 1211548087 Q * padde Remote host closed the connection 1211548464 Q * dowdle Remote host closed the connection 1211548501 Q * sharkjaw Remote host closed the connection 1211548713 J * hijacker ~hijacker@213.91.163.5 1211549672 J * derjohn ~derjohn@dslb-084-059-012-146.pools.arcor-ip.net 1211550123 M * daniel_hozac nodev is a boolean file, contents don't matter. 1211550440 Q * kir Quit: Leaving. 1211550620 M * marl ah, thanks, do i still need the dev file then yes? 1211550636 M * daniel_hozac no, just remove it. 1211550691 M * marl thanks :) 1211551493 J * docelic ~docelic@78.134.194.21 1211552271 J * dllx ~navid@softbank126113080078.bbtec.net 1211552283 M * dllx hi all 1211553526 J * bored2sleep ~bored2sle@66-111-53-150.static.sagonet.net 1211554018 Q * bfremon1 Quit: Leaving. 1211554077 J * bfremon ~ben@lns-bzn-31-82-252-214-254.adsl.proxad.net 1211554103 J * bfremon1 ~ben@lns-bzn-31-82-252-214-254.adsl.proxad.net 1211554338 Q * kwowt Read error: Connection reset by peer 1211554567 J * kwowt ~zero@193.77.185.75 1211554575 Q * FloodServ charon.oftc.net services.oftc.net 1211554746 J * FloodServ services@services.oftc.net 1211555018 J * yarihm ~yarihm@whitehead2.nine.ch 1211555399 J * docelic_ ~docelic@78.134.193.62 1211555600 Q * FloodServ charon.oftc.net services.oftc.net 1211555644 J * fatgoose ~samuel@76-10-149-199.dsl.teksavvy.com 1211555700 J * FloodServ services@services.oftc.net 1211555806 Q * docelic Ping timeout: 480 seconds 1211556674 Q * FloodServ Service unloaded 1211556739 J * FloodServ services@services.oftc.net 1211556909 Q * DavidS Quit: Leaving. 1211557167 N * Bertl_zZ Bertl 1211557172 M * Bertl morning folks! 1211557464 M * yarihm hi Bertl 1211557501 M * yarihm do you get along with the limits-project? I was asked to ask for a status update :) 1211557505 M * Bertl hey dllx, you gained an l since yesterday? :) 1211557545 M * Bertl yarihm: you mean with the force swap-out, yes? 
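The host-IP-sharing setup marl and daniel_hozac settle on earlier (ip/prefix as on the host, nodev as a boolean flag file, dev removed) would look roughly like this under util-vserver's /etc/vservers tree. A sketch: the guest name "guest1" and the address are examples, and the exact layout may differ between util-vserver versions:

```shell
cd /etc/vservers/guest1/interfaces/0    # first interface of guest "guest1"

echo 192.0.2.10 > ip        # use the host's actual address here
echo 24         > prefix    # and the host's prefix length

# nodev is a boolean flag file: only its presence matters, the contents
# don't.  With it set, util-vserver leaves the host interface alone, so
# stopping the guest cannot take the host's address (and connectivity) down.
touch nodev
rm -f dev                   # the dev file is simply removed, not renamed
```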
1211557556 M * yarihm Bertl, yep 1211557573 M * Bertl yeah, I was quite busy last week, but nevertheless there is some progress 1211557598 M * Bertl I want to do some testing this weekend, to see if we can force a single task to swap out a given amount 1211557632 M * Bertl if that works as expected, we can simply extend that to all tasks and run it with the normal swapper/reclaim process 1211557648 M * derjohn Bertl, ooh, interesting! 1211557687 M * derjohn Bertl, swap out even if RAM is still there? 1211557703 M * Bertl yarihm: but I'd like you to describe the issues you are currently seeing once again and in great detail if possible 1211557709 M * Bertl derjohn: jep, that's the idea 1211557749 M * Bertl yarihm: the reason is that I suspect that you are observing something else (maybe in addition) 1211557765 M * derjohn Bertl, does it really make sense? I mean ... souldnt we better use the RAM and swap if free RAM gets low ? 1211557791 M * Bertl yep, but that is kind-of contract work 1211557820 M * Bertl so it will definitely get some kind of tunables to turn it on/off 1211557823 M * derjohn but well, providers like to limit the guest to a "slow level", so $CUSTOMER buys upgrades ... 1211557868 M * derjohn an own swap-partition or wap file per guest? 1211557898 M * derjohn *swap 1211557925 M * Bertl nah, that would be extremely inefficient 1211557943 M * Bertl (so I'm pretty sure mainline will have that shortly :) 1211557965 M * dllx Bertl...nope :( 1211557997 P * bfremon1 1211558005 M * dllx u think u can give some advice? ;) 1211558056 M * Bertl dllx: strange I remember you with only one 'l' in the nick ... 1211558065 M * dllx heh 1211558067 M * dllx oops 1211558069 N * dllx dlx 1211558071 M * dlx typo 1211558078 M * derjohn well with an own swap-entity per process group would be inefficient? In puncto space or performance? 
1211558078 M * Bertl ah, see, I was right then :) 1211558093 M * derjohn dlx, *lol* 1211558114 M * Bertl derjohn: both, first, you will have a lot of empty swap space, think 100*1G 1211558141 M * dlx actually, brb 1211558145 M * Bertl derjohn: second, what about shared pages? you have to swap them out to _all_ swap spaces they belong to 1211558186 M * Bertl derjohn: and finally, that will cause a lot of swapping, even if there is memory left 1211558188 M * derjohn shared pages among different guests ?! 1211558214 M * Bertl granted, in the swap case that will be rare 1211558282 M * derjohn well, and 100 guests on one machine is a perspective I currently dont (we have usually 1,10 or max. 20 on one host). 1211558292 M * derjohn have ... 1211558334 M * derjohn but sure, in the future the iron gets bigger ....but 100 GB isnt very much either. 1211558384 M * Bertl well, on what system do you currently have 100G swap? 1211558385 M * derjohn Bertl, but I dont want to steal more time, keep on hacking that kernel thingy :) 1211558413 M * derjohn Bertl, never more than 1g swap on one machine 1211558433 M * derjohn to prevent machines using to much swap and draining disk performance ... 1211558440 M * FaUl Bertl: have you thought of some priority-like thing? so that more privilegied vserver have their programms not so fast swapped then less privilegied? 1211558468 M * Bertl yep, once we have the basic mechanism in place, that can be easily achieved 1211558484 M * Bertl either by adjust the soft limit of the guest or by other priorization means 1211558516 J * hparker ~hparker@linux.homershut.net 1211558603 M * yarihm Bertl, what we have here is the following situation (I'll come up with some example numbers): We have a machine with 4GB of RAM and 12GB Swap. On that machine there are say 16 customers with vservers. That makes about 256MB of physical RAM and 768MB of swap. 
Now so far so good, we set the virtual limits accordingly so that this is exactly what is displayed within the VServer of each customer. Now some of the customers use their vservers more intensely 1211558603 M * yarihm than others. As long as there are unused ressources, that's all fine for us, these customers can use 1GB of physical ram. The problem is when there are no ressources left. let's assume the machine has to swap out processes. we have customer 1 with a huge mysql database that is being used intensively. Customer 2 has a mysql DB, too but he has a better optimized website so he produces less querries. what now happens is (as we suspect) this: customer1's 1211558604 M * yarihm database will be placed in RAM completely by the kernel, even though he already uses more than 256MB of RAM while customer2's database will in the worst case be swapped out completely because he accesses his pages less frequently, even though he might not even use his 256MB of RAM yet with all processes in his context. That behaviour makes sense if all vservers belong to the same party because it optimizes overall throughput, but it is very much not 1211558609 M * yarihm desireable if you have unrelated parties on the system that compete for ressources. What we want is to allocate them fairly among the customers. 1211558715 M * Bertl okay, and how is the CPU distributed between them? 1211558754 M * yarihm I think in case of starvation fairly using the bucket scheduler 1211558804 M * Bertl okay, do you think we can do some kind of test setup where we try to replicate this example case? 1211558968 M * yarihm well, that should not be that much of a problem but it is hard to tell when a process is swapped out just from looking at e.g. top's output ... 
you can obviously feel it in terms of performance, but it's hard to say exactly 1211559007 M * Bertl well, that's easy to figure from proc actually 1211559025 M * yarihm good 1211559089 M * yarihm well, what I can do then is have a machine and boot it with mem=256MB, create two vservers, set the limits and do a database-benchmark in one of them while running another database in the other one without benchmark and look what happens 1211559096 M * Bertl cat /proc/<pid>/smaps, check the Rss vs Size and you see what is in memory 1211559130 M * Bertl the private clean vs dirty should show the swapped out data 1211559162 M * yarihm vs as in 'ratio' you mean? 1211559186 M * Bertl in this case it is difference 1211559209 M * yarihm ok 1211559433 M * Bertl because, the thing I'd expect to happen, if the CPU is configured for fair scheduling, is that the guest which gets swapped out, will be able to collect more CPU cycles, and thus should get more than enough time to work fine, even if some pages get swapped out 1211559553 M * Bertl another idea I had (in this regard) is that maybe we should not penalize guests for swap behaviour, but instead, give some additional tokens to those waiting for a swap-in? 1211559594 M * yarihm hm? you mean because it is accessing its memory much slower, it will use less CPU and thus have more tokens in the bucket which provides for more CPU in the end? 1211559643 M * Bertl well, currently it will get the same amount of tokens (in a fair setup) 1211559674 M * Bertl but, waiting for a page-in, will take some time, which, in turn, will give a bunch of tokens to use up at once 1211559700 M * Bertl while OTOH, the not-swapping guest will have short bursts of cpu 1211559705 M * yarihm so far so clear 1211559735 M * Bertl this should definitely give the swapping guest a good chance to do its work efficiently 1211559797 M * Bertl let's assume we have the swap all above soft limit done (i.e. 
it is working and tested) 1211559828 M * Bertl let's now look at the two guests again ... 1211559855 M * Bertl first, there is probably a reason why guestA has all pages in memory, no? 1211559879 M * Bertl second, there is also a good reason why the pages of guestB got swapped out in the first place 1211559890 M * Bertl what will happen now? 1211559907 M * Bertl guestA is forced to evict pages above soft limit 1211559918 M * Bertl guestB might be able to page in (and keep) some more pages 1211559969 M * Bertl guestA shortly after that will start paging in the evicted pages again, because it obviously needs them, otherwise they would not be in memory in the first place 1211559989 M * Bertl (probably trowing out other pages, typical trashing) 1211560014 M * Bertl the question now is, will guestB gain from that in any case? 1211560037 M * Bertl - it won't have more CPU cycles available, in a fair scheduling setup 1211560042 M * yarihm so what you suggest is this: some guests will have RAM, and their share of CPU used very fast and other guests will have no RAM and use their share of CPU not that fast. In the end they should be of about the same speed ... 1211560098 M * Bertl - it won't get more 'real-time' to work, as the paging guestB was doing, is now done by guestA 1211560151 M * yarihm well, it's clear that the swapping done by guestA will cause the disk-system to degrade in performance 1211560159 M * Bertl what I mean is, although I'm almost done with the swap above soft limit, it probably won't help your situation 1211560220 M * Bertl that is, why we definitely need to test it on an artificial test case, where we can qualify and quantify the behaviour 1211560269 M * Bertl the main question there is, does guestB get less work done because of guestA? 1211560301 M * Bertl if so, is the amount of work done by guestA greater than guestB? 1211560336 M * yarihm ok, so I would have thought that this would be happening: say both guests run an application. 
guestA uses more memory than is assigned, thus gets swapped out partially. guestB uses less memory than is assigned and thus does not get swapped out. Now both have (in case of CPU-cycle shortage) their equal share, but at least guestB won't have to wait for his data to be fetched from swap, while guestA naturally will have to do so 1211560380 M * yarihm thus guestB will maybe be affected by guestA's swapping in and out, but his memory-performance won't be affected 1211560399 M * Bertl yes, but the page in is not accounted by the scheduler 1211560433 M * yarihm i think you suggested penalizing guestA for that swap in with the scheduler as a solution to this, no? 1211560451 M * Bertl so, while the page-in might take some 'real-time' it will not reduce the amount of cpu time that guest receives 1211560474 M * Bertl yes, but I think that this might be the wrong approach too 1211560512 M * Bertl wrong because a) too complicated to implement, and b) not really helpful for the system 1211560522 M * yarihm I kind of feel the same, it sounds as if there would be a lot of penalties involved here :) but the question still is how we do get guestB to get his fair share of the machine's ressources if we don't do it that way 1211560566 M * Bertl and that's my point, as you consider being swapped out an 'unfair disadvantage' we should do two things 1211560591 M * Bertl first, we should find a way to measure this disadvantage quantitatively 1211560619 M * Bertl second, we should consider rewarding such guests with additional CPU time, if it turns out to be an actual disadvantage 1211560684 M * yarihm ok, that kind of sounds sensible but what I kind of feel bad about with this approach is that you trade oranges for apples ... 
you say: you don't have RAM, but instead of giving you RAM I'll give you some more CPU instead 1211560695 M * Bertl look, the thing is, I could simply finish the current approach (swap over soft) and consider that done, but I don't think that we are on the right track anymore (after what I've seen from my preliminary tests) 1211560738 M * Bertl it's not that _we_ take ram from the guest or so 1211560764 M * yarihm ok, I mean if you say so I can hardly say you are wrong in this :) I'll go and ask about the money-giver's opinions about this if you want me to 1211560768 M * Bertl the heuristics for the page management decide that the pages from guest B do not need to be in memory compared to guest A 1211560786 M * Bertl those heuristics currently work without even looking at any process 1211560795 M * yarihm yes. and that makes sense of course. but only if you want to optimize over-all-throughput 1211560798 M * Bertl so I consider them at least somewhat fair 1211560813 M * FaUl harhar, i just found this on my harddisk: patch-2.4.26-vs1.28.diff 1211560823 M * Bertl yarihm: yes, and that's actually what we want, if we want to get work done, no? 1211560824 M * FaUl oh, wait, there is also one for 2.4.25 ;-) 1211560859 J * xanoro ~xanoro@p3EE033E0.dip0.t-ipconnect.de 1211560877 J * m_o_d ~kane@80.48.115.5 1211560897 M * m_o_d hello 1211560919 M * Bertl h_e_l_l_o m_o_d! 1211560952 M * Bertl welcome xanoro! 1211560989 M * Bertl yarihm: okay, let me put that into some expressions and variables to think about, you are a physicist, yes? 1211560989 M * yarihm Bertl, but not in a shared-hosting environment. You don't want a customer paying $FOO using all the ressources a machine that was supposed to serve 24 customers (also paying $FOO) -> on the back of all other customers <- just because he figured out how to behave the most like the worst havoc on that system. and that's acutally what is happening right now. 
the more a customer's greed differs from the greed of the other customers on the server, the more 1211560989 M * yarihm the server will give to that server (in the sense of optimal overall performance). But that is rather undesirable 1211561037 M * yarihm "the more the server will give to that customer" I meant to say... 1211561062 M * yarihm Bertl, yes, I'm almost a physicist now:) 1211561178 M * yarihm nice would be: the greedier the less performance :) 1211561198 M * Bertl aha, okay, I have to comment on that before 1211561215 M * yarihm well, or best of all: independent of your greed, once resources are sparse, everyone gets exactly the same amount 1211561223 M * Bertl so, you actually want a greedy guest to get less work done than a nice one? 1211561239 M * yarihm no, not less, sorry, that was an admin-impulse :) 1211561256 M * Bertl okay, so we agree that we want to equalize power 1211561268 M * Bertl power is work done per time 1211561277 M * yarihm yes 1211561302 M * yarihm (that starts out like a very convincing argument for a physicist now, good start there ,) ) 1211561304 M * Bertl as both guests have the same realtime (ignoring virtual relativity :) they will need to do the same work 1211561337 M * m_o_d i have 2.6.25.4-vs2.3.x-vs2.3.0.34.10 when i reboot system i can run vserver but after 10 minutes vps is killed, and then when i want start vserver i have the message: kernel: Not cloning cgroup for unused subsystem ns, if anyone can help 1211561367 M * yarihm it's hard to quantify work here I guess, but so far so good 1211561374 M * Bertl m_o_d: that's interesting, what util-vserver version? 
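Bertl's suggestion earlier, to compare Rss against Size in /proc/<pid>/smaps to see how much of a process is resident, can be scripted. A minimal sketch; note that the Size minus Rss difference covers everything not resident (swapped out or simply never faulted in), and newer kernels also expose a per-mapping `Swap:` field:

```shell
# Sum Size vs Rss over every mapping of a process; the difference is the
# part of the address space that is not resident in memory.
# Usage: sh smaps-resident.sh <pid>   (defaults to the calling process)
pid=${1:-self}
awk '/^Size:/ { size += $2 }
     /^Rss:/  { rss  += $2 }
     END { printf "size=%d kB rss=%d kB not-resident=%d kB\n",
                  size, rss, size - rss }' "/proc/$pid/smaps"
```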
1211561399 J * xanor2 ~xanoro@p3EE033E0.dip0.t-ipconnect.de 1211561400 M * yarihm (i guess that's actually where the discussion will take place in the end, what does it mean to get some work done on a computer in the general case :) ) 1211561422 M * Bertl yarihm: okay, now each Time Unit can be broken down in several parts, like this: TU = CPU + SYS + IDLE 1211561443 M * Bertl where CPU is the time the CPU is working for that guest 1211561461 M * Bertl SYS is the time the CPU is working for system stuff, like swapping and so 1211561463 M * m_o_d Bertl: util-vserver 0.30.215-2 1211561482 M * yarihm Bertl, yes 1211561485 M * Bertl m_o_d: the 10 minutes are exact? 1211561512 M * Bertl yarihm: the IDLE is the time the cpu is not working (or more precisely, working for someone else) 1211561542 M * m_o_d Bertl: no, about 5-10 minutes 1211561563 M * Bertl m_o_d: do you get any messages in dmesg when it is 'killed'? 1211561588 Q * xanoro Ping timeout: 480 seconds 1211561588 M * Bertl yarihm: OTOH, a given amount of Work (for each guest) will take a certain time T, which again will consist of those three elements 1211561622 M * Bertl i.e. T_CPU, T_SYS and T_OTHER 1211561630 M * yarihm ok so far 1211561635 M * m_o_d Bertl: must check, but first reboot, afk 1211561648 Q * m_o_d Remote host closed the connection 1211561689 M * Bertl now, assuming that the hard cpu scheduler works as expected 1211561710 M * Bertl we can say that TA_CPU == TB_CPU for the same amount of work W 1211561720 M * yarihm (only affecting T_CPU, not T_SYS and T_OTHER) 1211561742 M * Bertl yeah, under the condition that TA == TB !! 1211561751 M * yarihm yes ... 
but now guestA uses more T_SYS and that will cause trouble 1211561771 M * Bertl wait wait, we are not there yet, we need some more basic equations :) 1211561783 M * yarihm ok 1211561788 M * yarihm i'll hold the horses then 1211561808 M * Bertl if, for whatever reason, the power is distributed unequally 1211561843 M * Bertl then, for a constant amount of work, the times must differ, right? 1211561856 M * yarihm that is correct, yes 1211561875 M * yarihm I really think i got your point there :) 1211561885 M * Bertl so, let's get to our unequal power scenario, let's assume that guestA has twice the power than guestB 1211561890 M * yarihm (but I also think that I did so before already) 1211561907 M * yarihm ok, unexpected turn there, please continue 1211561908 M * Bertl further let's hold to the identical work part 1211561924 M * Bertl so WA=WB, PA=2*PB 1211561932 M * yarihm yes 1211561943 M * Bertl what does that mean for TA and TB? 1211561945 M * yarihm so TB = 2*TA 1211561951 M * Bertl precisely 1211561965 M * Bertl assuming that the scheduler distribution is even 1211561984 M * Bertl that means that guestB will get twice as many cpu tokens 1211562026 M * Bertl what does that mean for the time distribution? 1211562044 M * yarihm thus in the end TB = TA one could suspect. but that's not true as only T_CPU counts, right? 1211562047 M * Bertl TB_CPU = 2*TA_CPU 1211562084 M * Bertl so the times we have for SYS and OTHER must get into the same relation 1211562120 M * yarihm why? 1211562137 M * yarihm (other than that you want to prove that TA_TOT = TB_TOT) 1211562148 M * yarihm :)) 1211562155 M * Bertl in our case, we know that TA_OTHER = TB_CPU and TB_OTHER = TA_CPU 1211562168 M * Bertl only two guests assumed on the otherwise idle machine 1211562188 M * yarihm well yes, but what about the swapping? 1211562195 M * yarihm i.e. 
TA_SYS
1211562196 M * Bertl that is the system part
1211562206 M * Bertl now let's get that out of the given equations
1211562218 M * yarihm ok, i'll watch you do that
1211562238 M * yarihm or was that an assumption, not a mathematical target definition?
1211562250 M * Bertl TB = 2*TA
1211562281 M * Bertl TB_CPU + TB_SYS + TB_OTHER = 2*TA_CPU + 2*TA_SYS + 2*TA_OTHER
1211562294 M * Bertl inserting the *_OTHER here
1211562323 M * Bertl TB_CPU + TB_SYS + TA_CPU = 2*TA_CPU + 2*TA_SYS + 2*TB_CPU
1211562368 M * Bertl TB_SYS = TA_CPU + 2*TA_SYS + TB_CPU
1211562401 Q * bfremon Quit: Leaving.
1211562413 M * Bertl substituting TB_CPU
1211562428 M * Bertl TB_SYS = 3*TA_CPU + 2*TA_SYS
1211562459 M * Bertl now that looks odd, doesn't it?
1211562473 M * yarihm let me think about that
1211562561 M * yarihm so you want to prove using that equation that TB_SYS increases as the CPU-time of A increases
1211562607 M * Bertl not exactly, basically we get the opposite if we assume that
1211562641 M * Bertl TA_SYS ~ 1/2 TB_SYS
1211562675 M * Bertl but let's assume that TB_SYS is much larger than TA_SYS
1211562681 J * bfremon ~ben@lns-bzn-31-82-252-214-254.adsl.proxad.net
1211562684 M * Bertl i.e. TB is swapping way more often
1211562712 J * bfremon1 ~ben@lns-bzn-31-82-252-214-254.adsl.proxad.net
1211562717 M * Bertl then we can get a relation like TA_SYS = -K*TA_CPU
1211562724 P * bfremon1
1211562740 M * Bertl or like TB_SYS = K*TA_CPU
1211562747 M * Bertl depending on the assumption
1211562820 J * Kokon ~richi@chello062178029134.10.11.vie.surfer.at
1211562827 M * Kokon hi
1211562842 M * Bertl hey Kokon!
1211562873 Q * bfremon
1211562891 M * Kokon i wanna install fedora core 9 in vserver, how can i register a new distribution? The link is old in the vserver warning.
1211562926 M * Bertl what util-vserver version?
1211562952 M * daniel_hozac F9 still needs work.
1211562963 M * yarihm Bertl, but still that means that if TA_CPU increases, TB_SYS increases. That's kind of unphysical as the system has limited resources :)
1211562966 M * Kokon Version: 0.30.212-1
1211562973 M * daniel_hozac and that is ancient :)
1211563009 M * Bertl yarihm: but as TA_CPU increases, TB_CPU increases as well, no?
1211563053 M * yarihm huh? why would that be. if TA_CPU increases, TB_SYS increases, thus TB_CPU decreases if we assume that TB = const
1211563053 M * Kokon where can i find the wiki entry for adding a new distribution?
1211563074 M * Bertl yarihm: TB_CPU = 2*TA_CPU
1211563115 M * Bertl yarihm: see, so as that is in contradiction to the result, we can assume that this cannot really happen
1211563142 M * daniel_hozac Kokon: it wouldn't help either way. you need to fix the scripts too.
1211563207 M * Kokon hard to fix the script?
1211563235 M * yarihm Bertl, heh, the whole thing gets less and less stringent to me now :)) but given that TA = TB by assumption (the desired result) you either can increase TB_CPU at the cost of TB_SYS or the other way around I guess :)
1211563287 M * daniel_hozac Kokon: it's not clear to me what the fix is. when it is, the trunk will get f9...
1211563336 M * Bertl yarihm: if the scheduler fails, yes
1211563348 M * yarihm Bertl, i personally think that the problem arises from the fact that the TB-scheduler only takes T_CPU into account. But I suspected that this was an issue because put simply: if one guest starts to swap, that will affect all IO for all contexts. But to minimize this one really could penalize the swapper
1211563353 M * Bertl yarihm: if the scheduler works, you also have TA_CPU == TB_CPU
1211563356 M * yarihm Bertl, why must the scheduler fail in this case
1211563358 M * yarihm ok
1211563407 M * Bertl now, although unfair, but somewhat intuitive, we could add some kind of token amount which 'compensates' for swap-ins
1211563416 M * yarihm so you need to make sure that TB_SYS == TA_SYS ... or, alternatively (fail the scheduler or modify it) make sure that TB_SYS + TB_CPU = TA_SYS + TA_CPU ... thus the penalty for the swapper would not be that stupid
1211563431 M * Kokon daniel_hozac: if i know where the script is, i can look at it and maybe change the important things for me. I'm new to vserver, but i can code in some prog. languages.
1211563444 M * Bertl that would mean for guestB that he gets additional TB_CPU for some of the TB_SYS
1211563498 M * daniel_hozac Kokon: the hacky way is to just append 2>/dev/null to the lines in /etc/rc.d/rc using /proc/cmdline.
1211563552 M * Bertl yarihm: note that I still think that this would give an unfair advantage to B, but it would fit your scheme
1211563601 M * yarihm Bertl, one moment pls, got an emergency issue here :((
1211563951 M * Bertl np
1211564511 M * Bertl have to leave now for half an hour or so ... bbl
1211564518 N * Bertl Bertl_oO
1211564854 J * xanoro ~xanoro@p3EE033E0.dip0.t-ipconnect.de
1211564858 M * dlx btw...does anyone know if u can get unique access to lo on vserver guests?
1211564882 M * daniel_hozac Linux-VServer 2.3 has isolated loopbacks.
1211564896 M * dlx that still in devel though, right?
1211564935 M * daniel_hozac it's the devel branch.
1211565228 M * dlx these are production debian boxes
1211565234 M * dlx any other way pre 2.3?
1211565244 Q * xanor2 Ping timeout: 480 seconds
1211565256 M * daniel_hozac i've been running 2.3 in production for months.
1211565478 N * DoberMann DoberMann[PullA]
1211565562 M * dlx hmm so to use that I have to upgrade my current kernel...and then patch the kernel?
1211565616 M * daniel_hozac i don't know what your current kernel is.
1211565625 M * dlx 2.6.17
1211565633 M * dlx debian stock
1211565643 M * daniel_hozac that'd be 2.6.18.
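[editor's note] Bertl's substitution chain above can be sanity-checked numerically. A minimal sketch (Python, names mine) under the assumptions read off the chat: T = T_CPU + T_SYS + T_OTHER, each guest's *_OTHER term is the other guest's CPU time (TA_OTHER = TB_CPU, TB_OTHER = TA_CPU), and the scheduler gives TB_CPU = 2*TA_CPU:

```python
def totals(ta_cpu, ta_sys):
    """Rebuild both guests' time budgets from the derivation above.

    Assumed relations (from the chat): TB_CPU = 2*TA_CPU, the *_OTHER
    terms cross over, and the derived TB_SYS = 3*TA_CPU + 2*TA_SYS.
    """
    tb_cpu = 2 * ta_cpu
    ta_other = tb_cpu          # A's "other" time is B's CPU time
    tb_other = ta_cpu          # and vice versa
    tb_sys = 3 * ta_cpu + 2 * ta_sys   # Bertl's result
    ta = ta_cpu + ta_sys + ta_other
    tb = tb_cpu + tb_sys + tb_other
    return ta, tb

# the observed wall-clock ratio TB = 2*TA then holds for any inputs:
for ta_cpu, ta_sys in [(1.0, 0.5), (3.0, 0.0), (0.25, 7.0)]:
    ta, tb = totals(ta_cpu, ta_sys)
    assert tb == 2 * ta
```

The check confirms the algebra is internally consistent; the "odd" part Bertl points at is only the physical interpretation (TB_SYS growing with TA_CPU), not the arithmetic.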
1211565659 M * dlx heh its an older debian unfortunately
1211566029 Q * xanoro Ping timeout: 480 seconds
1211566124 Q * [PUPPETS]Gonzo Remote host closed the connection
1211566311 J * dallas ~dallas@sf.newdream.net
1211566315 P * dallas
1211566392 J * dallas ~dallas@sf.newdream.net
1211566427 P * dallas
1211566435 J * [PUPPETS]Gonzo gonzo@fellatio.deswahnsinns.de
1211566770 J * dallas ~dallas@sf.newdream.net
1211566795 M * yarihm Bertl_oO, let's continue the discussion tomorrow or something, ok? seems as if we both had to go. I'll think about what you've said.
1211566808 Q * yarihm Quit: This computer has gone to sleep
1211566888 M * rgl hmm, I'm trying linux 2.5.25.3, but when I boot, it just sits waiting for the root file system... so odd, this happens even with the "defconfig" .config file. know anything about this?
1211566900 J * dowdle ~dowdle@scott.coe.montana.edu
1211566908 M * rgl oh, I have the root fs in lvm.
1211567459 N * Bertl_oO Bertl
1211567463 M * Bertl back now ...
1211567698 J * larsivi ~larsivi@144.84-48-50.nextgentel.com
1211567710 Q * docelic_ Quit: http://www.spinlocksolutions.com/
1211567880 M * rgl omg, something is so wacked... this kernel says: sd 0:0:0:0: [sda] 48918 52-byte hardware sectors (205 MB) how can that be? a 52-byte sector? :/
1211567952 M * rgl oh well, gtg...
1211567959 Q * rgl Quit: Saindo
1211568087 M * Bertl nice one ...
1211568419 Q * Kokon Ping timeout: 480 seconds
1211568763 Q * dna Quit: Verlassend
1211569405 J * mire ~mire@171-175-222-85.adsl.verat.net
1211569563 Q * mire
1211569641 J * mire ~mire@171-175-222-85.adsl.verat.net
1211569935 Q * phedny_ Quit: Reconnecting
1211570127 N * phedny Guest21
1211570133 J * phedny ~mark@2001:610:656::115
1211571389 Q * doener Ping timeout: 480 seconds
1211571952 M * arekm is there a way to make wall send a message to all ttys in all vserver guests + host?
1211572077 M * Bertl hmm ...
1211572167 M * Bertl you might try to do it in the spectator context?
1211572182 M * arekm 1? didn't work. even host didn't get the message
1211572200 M * Bertl what kernel version?
1211572211 M * arekm 2.6.22.19
1211572234 M * Bertl could you strace -fF the wall on the host and inside spectator and upload the results?
1211572341 M * arekm http://pld.pastebin.com/fb6adab4 host
1211572348 M * arekm http://pld.pastebin.com/f3079bcb1 host context 1
1211572357 J * bonbons ~bonbons@2001:960:7ab:0:2c0:9fff:fe2d:39d
1211572398 M * arekm 28682 stat("/dev/pts/33", 0x7fff9da95e70) = -1 ENOENT (No such file or directory)
1211572402 M * arekm in context 1
1211572503 M * arekm reboot, back in few secs
1211572507 Q * arekm Quit: leaving
1211573029 J * arekm arekm@carme.pld-linux.org
1211573175 M * arekm er
1211573208 M * Hawq re
1211573240 Q * bonbons Quit: Leaving
1211574035 J * bonbons ~bonbons@2001:960:7ab:0:2c0:9fff:fe2d:39d
1211574038 Q * bonbons
1211574202 J * bonbons ~bonbons@2001:960:7ab:0:2c0:9fff:fe2d:39d
1211574319 Q * Mojo1978 Ping timeout: 480 seconds
1211574419 J * Aiken ~james@ppp121-45-230-114.lns1.bne4.internode.on.net
1211575507 M * Bertl arekm: strange, I have the following check here:
1211575511 M * Bertl return vx_check((xid_t)de->d_inode->i_tag, VS_WATCH_P | VS_IDENT);
1211575524 M * Bertl which should allow the spectator to see pts
1211575540 M * Bertl could you double check that with your devpts_filter()
1211575568 M * daniel_hozac unless you have privacy enabled.
1211575592 M * Bertl correct
1211575607 M * arekm I have privacy set
1211575620 M * daniel_hozac then it's expected :)
1211575676 M * Bertl if you want to have an exception there, you might want to change that VS_WATCH_P to VS_WATCH
1211575693 M * arekm ok, thanks
1211576623 Q * dlx Quit: Leaving
1211578223 Q * bonbons Quit: Leaving
1211578319 Q * meandtheshell1 Quit: Leaving.
1211578444 Q * FireEgl Quit: Leaving...
1211578555 Q * larsivi Quit: Konversation terminated!
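[editor's note] The ENOENT in arekm's strace explains the silent failure: wall walks the ttys listed in utmp and writes to each device node, so a tty whose devpts entry is filtered out of the context simply gets skipped. A hypothetical Python mock of that delivery loop (not the real wall implementation):

```python
import os
import stat

def broadcast(message, ttys):
    """Mock of wall(1)'s delivery loop: stat each tty taken from utmp
    and write only to the ones that exist as character devices.
    In a context whose devpts entries are filtered (the ENOENT on
    /dev/pts/33 above), the tty is skipped and nobody sees the text."""
    delivered = []
    for tty in ttys:
        try:
            st = os.stat(tty)
        except FileNotFoundError:
            continue  # the "-1 ENOENT" case from the strace
        if stat.S_ISCHR(st.st_mode):
            with open(tty, "w") as f:
                f.write(message + "\n")
            delivered.append(tty)
    return delivered
```

With privacy enabled the spectator context fails the devpts filter's vx_check, so every stat hits the ENOENT branch, which matches daniel_hozac's "then it's expected".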
1211580850 Q * mire Read error: Connection reset by peer
1211581825 J * mire ~mire@71-173-222-85.adsl.verat.net
1211582256 Q * derjohn Ping timeout: 480 seconds
1211582265 J * derjohn ~derjohn@dslb-084-058-214-174.pools.arcor-ip.net
1211582778 Q * hparker Quit: Read error: 104 (Peer reset by connection)
1211583256 M * Guy- I've been using CONFIG_VSERVER_REMAP_SADDR=y in vs2.2 so far. I bound some host services to 127.x.y.z (not 127.0.0.1) and the guests could still reach them (this was what I wanted)
1211583272 M * Guy- will this continue to work in vs2.3? with or without VSERVER_AUTO_LBACK?
1211583371 M * Bertl yes, if you disable the mapping for the guest
1211583469 M * Guy- which mapping?
1211583646 M * Bertl the lback mapping
1211583670 M * Bertl when lback mapping is on, all 127.x.x.x ips will get mapped to 'the one'
1211583675 J * mire_ ~mire@151-172-222-85.adsl.verat.net
1211583683 M * Guy- ah
1211583704 M * Guy- but I can still have 127.0.0.1 be mapped to the guest's first IP?
1211583713 M * Bertl sure
1211583717 M * Guy- good
1211583734 M * Bertl just use that one as lback address
1211583775 M * Guy- I'm not sure I understand this at this point, but I'm sure it will all become clear :)
1211583828 M * Bertl well, it's quite simple actually
1211583840 M * Bertl before, we had no separate lback address
1211583848 M * Bertl the first IP took over that function
1211583850 M * Guy- I understand the mechanism (I think)
1211583858 M * Guy- I'm confused by the terminology
1211583873 M * Bertl now we moved that into a separate location, which is called lback
1211583896 M * Bertl lback = loop back ip
1211583905 M * Guy- thanks for the explanation, but you really needn't bother now, I can ask better questions when I've played with it some
1211583917 M * Bertl okay
1211584004 Q * mire Ping timeout: 480 seconds
1211584291 J * mire__ ~mire@238-174-222-85.adsl.verat.net
1211584696 Q * mire_ Ping timeout: 480 seconds
1211584890 J * mire_ ~mire@213-172-222-85.adsl.verat.net
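[editor's note] Bertl's "all 127.x.x.x ips will get mapped to 'the one'" can be illustrated with a tiny sketch. This is an illustration of the described behavior only, not the kernel code; the lback address here is an arbitrary example:

```python
import ipaddress

def remap_saddr(addr: str, lback: str) -> str:
    """Illustration of vs2.3 lback mapping as described above: with the
    mapping enabled, any 127.x.x.x address a guest uses collapses to
    its single lback address; other addresses pass through unchanged."""
    if ipaddress.ip_address(addr) in ipaddress.ip_network("127.0.0.0/8"):
        return lback
    return addr
```

This is why a host service bound to a distinct 127.x.y.z only stays reachable from a guest if the mapping is disabled for that guest, as Bertl says above.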
1211585270 Q * mire__ Ping timeout: 480 seconds
1211585314 J * mire ~mire@94-172-222-85.adsl.verat.net
1211585326 J * m_o_d ~m_o_d@host-80.54.30.252.ltv.pl
1211585541 J * dna ~dna@p54BCDCEC.dip.t-dialin.net
1211585630 Q * mire_ Ping timeout: 480 seconds
1211585676 J * mire_ ~mire@67-174-222-85.adsl.verat.net
1211585946 J * mire__ ~mire@221-173-222-85.adsl.verat.net
1211586051 Q * mire Ping timeout: 480 seconds
1211586326 Q * mire_ Ping timeout: 480 seconds
1211586771 Q * mire__ Ping timeout: 480 seconds
1211586818 J * mire__ ~mire@65-175-222-85.adsl.verat.net
1211587105 Q * MatBoy Remote host closed the connection
1211587108 J * rofi ~rofi@21.2.broadband7.iol.cz