1242260869 J * dowdle ~dowdle@97-121-207-202.blng.qwest.net
1242260997 Q * docelic Quit: http://www.spinlocksolutions.com/
1242261905 Q * geb Ping timeout: 480 seconds
1242262155 J * geb ~geb@earth.gebura.eu.org
1242265551 J * balbir_ ~balbir@122.172.22.28
1242266454 M * Bertl_oO off to bed now ... have a good one everyone!
1242266459 N * Bertl_oO Bertl_zZ
1242266601 Q * ViRUS Quit: If there is Artificial Intelligence, then there's bound to be some artificial stupidity. (Thomas Edison)
1242268483 Q * hparker Read error: No route to host
1242268560 Q * cehteh Ping timeout: 480 seconds
1242268609 J * hparker ~hparker@2001:470:1f0f:32c:290:96ff:fe50:40fa
1242269411 Q * dowdle Remote host closed the connection
1242275135 Q * balbir_ Ping timeout: 480 seconds
1242275765 M * Supaplex is there anything special I need to network from a guest instance? 2.6.26 on lenny, 0.30.216~r2772-6 util-vserver.
1242275811 M * Supaplex I don't need to network into it. vserver enter is plenty for my needs.
1242276423 M * Supaplex nm. i got a hold of the local lan admin. :)
1242277607 J * balbir_ ~balbir@59.145.136.1
1242278893 M * fb Supaplex: networking is a host business, not guest
1242278987 M * fb Supaplex: all vserver can do is give you a limited view of the host's networking (when bind()ing to 0.0.0.0, for example) and, if using the 2.3 series, also a virtualised loopback ipv4 interface
1242279062 M * geb http://linux-vserver.org/Networking_vserver_guests is another way
1242279063 M * Supaplex some older release seemed to allow outbound initiated connections to use the ip of the host
1242279114 M * geb if you want to block outbound, use iptables
1242279532 J * Floops[w]1 ~baihu@205.214.201.176
1242279559 M * Supaplex oh. that's new. I thought we couldn't do nat on self traffic.
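As fb and geb say above, guest networking is a host matter: guest sockets live in the host's network stack, so ordinary iptables rules matching the guest's IP cover both the "block outbound" and the NAT case. A minimal sketch, to be run as root on the host; the addresses and the choice of chains are assumptions for illustration, not taken from the log:

```shell
# block outbound traffic from a guest whose IP is 192.168.1.10
# (guest traffic traverses the host's OUTPUT chain, not FORWARD)
iptables -A OUTPUT -s 192.168.1.10 -j REJECT

# or rewrite the guest's outbound connections to use the host's
# public address (hypothetical: 203.0.113.1)
iptables -t nat -A POSTROUTING -s 192.168.1.10 -j SNAT --to-source 203.0.113.1
```

Whether SNAT applies to locally-originated ("self") traffic depends on the kernel version, which is what Supaplex's last remark is about.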
1242279926 Q * Floops[w]2 Ping timeout: 480 seconds
1242281049 Q * hparker Quit: Read error: 104 (Peer reset by connection)
1242281554 Q * Piet Quit: Piet
1242282049 Q * larsivi Ping timeout: 480 seconds
1242282942 J * sharkjaw ~gab@149-240-82.oke2-bras6.adsl.tele2.no
1242283039 J * thierryp ~thierry@zanzibar.inria.fr
1242283594 J * davidkarban ~david@193.85.217.71
1242283878 J * BenG ~bengreen@94-169-110-10.cable.ubr22.aztw.blueyonder.co.uk
1242285018 Q * arthur Quit: leaving
1242285480 J * cga ~weechat@194.244.1.35
1242285574 Q * sharkjaw Ping timeout: 480 seconds
1242286386 J * harobed ~harobed@pda57-1-82-231-115-1.fbx.proxad.net
1242288086 Q * BenG Remote host closed the connection
1242288512 J * mtg ~mtg@vollkornmail.dbk-nb.de
1242289290 J * dna ~dna@186-204-103-86.dynamic.dsl.tng.de
1242290802 J * BenG ~bengreen@94-169-110-10.cable.ubr22.aztw.blueyonder.co.uk
1242290867 Q * coda Quit: Leaving
1242291102 J * gnuk ~F404ror@pla93-3-82-240-11-251.fbx.proxad.net
1242293164 Q * DreamerC Quit: leaving
1242293179 J * DreamerC ~DreamerC@122-116-181-118.HINET-IP.hinet.net
1242293920 J * cehteh ~ct@pipapo.org
1242294812 J * sharkjaw ~gab@149-242-57.oke2-bras6.adsl.tele2.no
1242295216 J * jfsworld ~jf@202.157.130.21
1242295235 M * jfsworld hey, anybody around to answer some scheduling questions?
1242295728 N * Bertl_zZ Bertl
1242295733 M * Bertl morning folks!
1242295762 M * Bertl jfsworld: ask questions, avoid metaquestions :)
1242295769 M * jfsworld morning - and thanks
1242295794 M * jfsworld my question(s) basically revolve around how to enable fair scheduling
1242295816 M * Bertl in what kernel/patches?
1242295875 M * jfsworld patch-2.6.22.19-vs2.2.0.7
1242295945 M * Bertl so stable release, good, what do you consider 'fair scheduling'?
1242295963 M * jfsworld http://linux-vserver.org/CPU_Scheduler#Fair_Share
1242295999 M * jfsworld basically i want to be able to guarantee a minimum amount of cpu for contexts
1242296046 J * Keeper longhorn40@188.17.2.100
1242296059 M * Keeper Hi
1242296064 M * jfsworld and then on top of that, if there are extra cpu cycles, be able to give them to the busy contexts as well
1242296123 M * jfsworld i know some folks have asked about this recently already - but whatever replies they've gotten haven't exactly answered things for me
1242296180 M * Bertl okay, before I explain it once again, maybe ask some specific questions (where you are still in the dark)
1242296217 M * Keeper Bertl: Hi
1242296320 M * jfsworld let me dig out my old question, hang on
1242296339 M * jfsworld ok firstly terminology
1242296395 M * Keeper Bertl: cp -af /dev/loop0 /vservers//dev/hdv1 - could I use that to get standard non-shared quota working?
1242296402 M * jfsworld "fair share" is basically priority scheduling ?
1242296527 M * jfsworld ie. they are one and the same
1242296614 M * Bertl hmm, no, not really
1242296624 M * Keeper Bertl: I am trying to make it work, but is this safe?
1242296666 M * Bertl Keeper: bad idea, as you will grant full access to the loop device
1242296694 M * jfsworld ok so i guess for what i'm looking for, "fair share" would be it then?
1242296724 M * Bertl no idea, what are you looking for :)
1242296774 M * jfsworld 1) i want all contexts to get at least a minimum amount of cpu if they need it
1242296799 M * jfsworld 2) when the cpu is idle, i want to be able to give this idle cpu time to contexts that need it
1242296802 M * jfsworld that's it
1242296926 M * Bertl well, probably the answer you got is: there are no explicit reservations/guarantees
1242296990 M * jfsworld so which part of what i want cannot be done?
1242296991 M * jfsworld no. 1?
1242297007 M * jfsworld (want all contexts to get at least a minimum amount of cpu)
1242297012 M * Bertl I didn't say it cannot be done, I just said, there are no explicit reservations :)
1242297028 M * Bertl and I doubt that this is really what you want, btw
1242297059 M * Bertl if you have two contexts, one idle, the other running, would you still want the first to get a minimum of cpu?
1242297135 M * jfsworld no
1242297153 M * jfsworld or to rephrase
1242297167 M * jfsworld let's say i have 2 contexts with a 50/50 split
1242297189 M * jfsworld if one is idle and the other is running, is it possible to give 100% to context 2 then
1242297207 M * jfsworld then when context 1 starts to kick in, just scale back context 2's cpu usage?
1242297227 M * Bertl that makes more sense, and is how it is usually done
1242297275 M * Bertl you do that (on the stable branch) by using the hard cpu limit of the TB scheduling extension (with the desired 50/50 split)
1242297319 M * Bertl which will basically limit each guest to 50% cpu, and the idle time skipping feature, which allows them to use up the rest, when the other context(s) is(are) idle
1242297396 M * jfsworld ah, i didn't know the idle time skipping feature was for that
1242297420 M * jfsworld ok b4 i continue - "TB scheduling extension"?
1242297445 M * jfsworld i keep hearing this - but what exactly is this "extension"? sounds like an extra patch to me
1242297455 M * jfsworld and what does "TB" stand for?
1242297576 Q * dentifrice Ping timeout: 480 seconds
1242297580 M * Bertl Token Bucket
1242297609 M * jfsworld ah ok
1242297680 M * jfsworld and "extension" probably just means it was added on to the "base" code. I shouldn't let the word "extension" make me think it's not in stable already
1242297683 M * jfsworld am i correct?
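The token-bucket idea behind Bertl's answer can be illustrated with a toy simulation: every `interval` ticks the bucket gains `fill_rate` tokens, and a guest may run a tick only while it holds a token. This is a sketch of the concept only, not the kernel's implementation; all numbers are made up:

```shell
# fill-rate/interval = 1/2 ~= 50% of one cpu
fill_rate=1; interval=2; max=10
bucket=0; ran=0
for tick in $(seq 1 20); do
  # refill the bucket every $interval ticks, capped at $max
  [ $((tick % interval)) -eq 0 ] && bucket=$((bucket + fill_rate))
  [ $bucket -gt $max ] && bucket=$max
  # a guest with tokens runs and spends one per tick
  if [ $bucket -gt 0 ]; then
    bucket=$((bucket - 1))
    ran=$((ran + 1))
  fi
done
echo "ran $ran of 20 ticks"   # prints: ran 10 of 20 ticks
```

With the hard limit, an empty bucket means the guest is not scheduled at all; idle time skipping lets a guest keep running on otherwise-idle cpu even when its first bucket is empty.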
1242297710 M * Bertl yep, it means, it 'extends' the existing Linux scheduler
1242297747 M * Bertl we used to simply call it TB scheduler, but that is kind of wrong, as the original scheduler is not replaced
1242297782 J * kir ~kir@swsoft-mipt-nat.sw.ru
1242297799 M * jfsworld right ok
1242297817 M * jfsworld so next qn would be that this works fine for 2 contexts on 1 system
1242297846 M * jfsworld but what if i have more? say 3 contexts. Context 1 idle, 2 & 3 busy. How will the idle time be split between 2 and 3?
1242297958 M * Bertl according to the idle values (they are a separate set, like for the hard limit)
1242297993 M * Bertl so, let's assume you have 3 contexts, A, B, and C, with the following scheduler settings:
1242298042 M * Bertl A: 1/5 (1/1) B: 1/5 (1/2) C: 1/4 (1/2)
1242298105 M * Bertl then when all contexts are running, and the host is making up for the remaining 35%, the guests will get 20%:20%:25%
1242298132 M * Bertl when the host gets idle, the 35% will be divided according to 1:2:2
1242298141 M * Bertl sorry, 1:1/2:1/2
1242298177 M * jfsworld which works out to 1/2 for A, 1/4 for B, and 1/4 for C ?
1242298200 M * jfsworld 1 over (1 + 1/2 + 1/2)
1242298206 M * jfsworld for A
1242298215 M * Bertl only for the 35%
1242298221 M * jfsworld right
1242298227 M * jfsworld but the calculation is correct?
1242298230 M * Bertl yep
1242298250 M * Bertl now if guest A becomes idle, the calculations shift towards:
1242298286 M * Bertl 20% for B and 25% for C with the remaining 55% divided equally between them
1242298374 M * jfsworld right! this was what i wanted to check regarding the calculations - whether the calculation would be different if different contexts were idle
1242298418 M * jfsworld ok, to get this thing working, what do i need to enable in the kernel?
1242298435 M * jfsworld do i need to enable any special options?
1242298565 M * jfsworld do i need CONFIG_VSERVER_IDLELIMIT ?
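Bertl's arithmetic above can be double-checked with a quick script. The 1/5, 1/5, 1/4 hard settings and the 1, 1/2, 1/2 idle settings are his example; the variable names are mine:

```shell
# hard shares: fill-rate / interval, in percent of one cpu
a_hard=$(awk 'BEGIN { printf "%g", 1/5 * 100 }')   # A: 20
b_hard=$(awk 'BEGIN { printf "%g", 1/5 * 100 }')   # B: 20
c_hard=$(awk 'BEGIN { printf "%g", 1/4 * 100 }')   # C: 25

# cpu left over when all three guests are busy
idle=$(awk 'BEGIN { printf "%g", 100 - (20 + 20 + 25) }')   # 35

# the leftover is split by the idle values 1 : 1/2 : 1/2,
# so A gets 1/(1+1/2+1/2) = 1/2 of the 35%, B and C 1/4 each
a_extra=$(awk 'BEGIN { printf "%g", 35 * (1   / 2) }')      # 17.5
b_extra=$(awk 'BEGIN { printf "%g", 35 * (0.5 / 2) }')      # 8.75

# if A goes idle, B and C split the remaining 55% per 1/2 : 1/2
b_shift=$(awk 'BEGIN { printf "%g", (100 - 20 - 25) * (0.5 / (0.5 + 0.5)) }')  # 27.5

echo "A: ${a_hard}% + ${a_extra}%  B: ${b_hard}% + ${b_extra}%"
```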
1242298731 M * Bertl yes, you need hard and idle limit enabled
1242298743 M * Bertl and you need to activate them per guest too
1242298767 M * jfsworld right. I've got the options enabled already in the kernel so that's good
1242298793 M * jfsworld as for how to activate per guest... could u explain?
1242298808 J * dentifrice f30dc85282@91.194.60.102
1242298818 M * Bertl http://linux-vserver.org/Capabilities_and_Flags
1242298842 M * Bertl and for the config: http://www.nongnu.org/util-vserver/doc/conf/configuration.html
1242298939 M * jfsworld sorry. I know the flower page - but my problem is interpreting it..
1242298948 M * jfsworld eg. - do i need to set 'vattribute --set --xid --flag sched_hard' ? (since u mention hard limit)
1242299019 M * Bertl you can do that, or simply put it in the config (flags)
1242299052 M * jfsworld ok
1242299059 M * jfsworld and then how do i specify the ratio?
1242299060 M * Bertl /etc/vservers//flags
1242299085 M * Bertl the ratio is put in /etc/vservers//sched
1242299104 M * Bertl (again, see flower page for the files)
1242299255 M * jfsworld so fillrate and interval would give me the ratio for non-idle?
1242299266 M * Keeper Bertl: how could I do it?
1242299375 Q * scientes Ping timeout: 480 seconds
1242299379 M * Bertl for 'normal' well behaved quota, you do not need access to the block device, only quota ioctls, which can be proxied with the vroot device
1242299418 M * Bertl jfsworld: correct
1242299449 M * jfsworld ok i think i get it (but seriously, docs need work) - fill-rate/interval for non-idle, and fill-rate2/interval2 for idle
1242299472 M * Bertl go ahead, improve the wiki
1242299490 M * Keeper Bertl: That is, I need to pass the access through vroot?
1242299532 M * jfsworld :) yeah... i know there is a documentation movement going on, right
1242299538 M * Bertl Keeper: yep, you configure the vroot device to 'proxy' for a real block device, then you put the vroot device in the guest
1242299558 M * jfsworld anyway to answer - yeah i would. Except given how u've seen me ask stuff, i'm not so sure about that much!
1242299618 M * jfsworld Bertl: anyway, thanks for ur time. Really appreciate it. You've definitely helped me out here!
1242299659 M * Bertl well, I thought the http://linux-vserver.org/CPU_Scheduler page would explain it in quite some detail ...
1242299667 M * Bertl anyway .. you're welcome!
1242299703 M * jfsworld yeah i looked at it - and while it did give me some "clue", it also didn't answer some qns for me. Hence my emails/irc
1242299705 M * jfsworld but thanks
1242299726 M * jfsworld i'll come back and improve things when i'm more sure. Atm, i don't think i can contribute yet
1242299734 M * jfsworld thanks for ur time and effort
1242299747 M * Bertl np, have fun!
1242299757 M * jfsworld :) yup thanks
1242299761 M * jfsworld cya
1242299762 P * jfsworld
1242299787 M * Keeper Bertl: I don't quite understand how to pass loop0 through vroot; the site, for example, only describes LVM
1242299818 M * Bertl lvm or loop or even real device doesn't matter
1242299848 Q * harobed Read error: No route to host
1242300330 M * ghislainocfs2 jfsworld: you can do like me: just create a wiki page, give the link here so we can all review it, and if it's ok then you can just add a link in the doc list :)
1242300722 M * Keeper Bertl: I never understood how you can use loop in the guest environment
1242300776 M * Bertl you mean, do loop setup/mounts inside a guest?
1242300786 M * Bertl usually you do not allow that for security reasons
1242300809 M * Bertl but mounting a filesystem on a loop device _for_ a guest is considered secure
1242300910 M * Keeper Bertl: So I do not know what to do - what sequence of actions...
1242300947 M * Bertl what is it that you want to achieve?
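Pulling the scheduler discussion together, a per-guest setup might look like the sketch below. The file names (flags, sched/fill-rate, sched/interval, sched/fill-rate2, sched/interval2) and the sched_hard flag are the ones mentioned above and on the flower page; the guest name and the roughly-50% split are made up, and a scratch directory stands in for /etc/vservers so the sketch is safe to run unprivileged:

```shell
# scratch stand-in for /etc/vservers/myguest
guest=./myguest
mkdir -p "$guest/sched"

# activate the hard cpu limit for this guest (alternatively:
# vattribute --set --xid <xid> --flag sched_hard)
printf 'sched_hard\n' > "$guest/flags"

# first token bucket (non-idle): 1 token per 2 scheduler intervals,
# i.e. roughly a 50% hard share
printf '1\n' > "$guest/sched/fill-rate"
printf '2\n' > "$guest/sched/interval"

# second bucket, used to hand out idle time
printf '1\n' > "$guest/sched/fill-rate2"
printf '1\n' > "$guest/sched/interval2"
```

The guest picks the values up on its next start; check the flower page for the exact set of files supported by your util-vserver version.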
1242301276 M * Keeper Bertl: standard non-shared quota for the guest environment, through loop0
1242301373 M * AndrewLee How do I use hashify together with disk limits?
1242301499 M * AndrewLee I use vdlimit --xid and specify space for each hashified guest. And then I got a problem.
1242301509 M * Keeper Bertl: I do not know... But limits on the disk are of course needed
1242301571 M * AndrewLee only the latest guest works. And the other guests all have 'file not exist' or 'command not found' problems.
1242301581 M * thermoman Red neck americans said pigs would fly before there would be a black president -- 100 days after Obama won, swine flew.
1242301658 M * Keeper Is the principle of creating a loop the same as for Xen?
1242301750 M * Keeper A block file attached to the block device loop* ?
1242301831 J * harobed ~harobed@pda57-1-82-231-115-1.fbx.proxad.net
1242301875 M * PowerKe Keeper: in /etc/vservers/GUESTNAME/fstab you can put something like: /file_on_host /guest_mountpoint ext3 loop,rw 0 0
1242302157 M * Keeper PowerKe: Thank you, I thought that we should not create a block file with a filesystem
1242302284 M * PowerKe Keeper: You can use a block device directly as well if you like
1242302407 M * Keeper PowerKe: I need to get standard non-shared quota working but I do not know how best to do it... can you tell me what to do?
1242302567 M * Keeper Bertl: Thank you, I will try to figure it out and won't distract you
1242303402 M * Keeper Thanks to all who responded, I realized where I erred
1242303788 M * Bertl np
1242303933 J * FireEgl Proteus@WTF.4.1.0.c.0.7.4.0.1.0.0.2.ip6.arpa
1242304196 M * Keeper Bertl: Do the disk limits not include the size of the operating system?
1242304230 M * Bertl hmm?
1242304388 M * Keeper Bertl: It's just that my size reflects the space inside the guest system, but the guest OS limits are not accounted for there. But that is something I have not set up yet :D
1242304497 M * Keeper Bertl: But it is not so important, I like digging into how VServer works
1242304502 M * Bertl http://linux-vserver.org/Disk_Limits_and_Quota ?
1242304531 Q * FireEgl Ping timeout: 480 seconds
1242304609 M * Keeper Bertl: But it is not so important, I like digging into how VServer works... I will show you later what I mean...
1242304623 M * Bertl okay :)
1242305100 J * FireEgl Proteus@Sebastian.Atlantica.CJB.Net
1242305747 M * Keeper Bertl: can I use patch-2.6.28.10-vs2.3.0.36.11.diff?
1242305836 M * Bertl yep
1242307424 J * xxxx ~chatzilla@218.89.84.93
1242307903 J * dowdle ~dowdle@scott.coe.montana.edu
1242308603 Q * sid3windr Remote host closed the connection
1242308605 J * sid3windr luser@bastard-operator.from-hell.be
1242308904 Q * xxxx Quit: Chatzilla 0.9.75.1 [SeaMonkey 1.1.16/2009040306]
1242309233 J * Piet ~piet@tor-irc.dnsbl.oftc.net
1242309601 Q * sharkjaw Remote host closed the connection
1242309814 Q * balbir_ Ping timeout: 480 seconds
1242311396 Q * mtg Quit: Verlassend
1242311607 J * ViRUS ~mp@p579B483A.dip.t-dialin.net
1242311777 M * ViRUS I have some weird issue. I start an apache daemon with a ChrootDir directive and only the "main" process is visible in the guest. When terminating it, the spawned children hang and are only visible from the host.
1242311805 M * daniel_hozac are you using grsec?
1242311955 Q * davidkarban Quit: Ex-Chat
1242312396 M * ViRUS jepp
1242312408 M * ViRUS Using the linux vserver package + grsec
1242312425 M * daniel_hozac check your grsec settings then.
1242312775 M * ViRUS when starting apache2 with -D NO_DETACH I can kill it without any trouble. weird.
1242313395 Q * geb Quit: Quitte
1242313668 J * hparker ~hparker@2001:470:1f0f:32c:290:96ff:fe50:40fa
1242314411 Q * cga Quit: got a DELL??? update you BIOS with http://github.com/cga/dellbiosupdate.sh/tree/master ;)
1242315431 Q * BenG Quit: I Leave
1242315823 M * Bertl off for now ..
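To connect PowerKe's fstab line with Keeper's question: creating a loop-backed image for a guest might look like the sketch below. The paths, the guest name, and the 2 GB size are hypothetical; the mkfs step needs e2fsprogs and is left as a comment so the sketch runs anywhere:

```shell
# create a sparse 2 GB image file (dd with seek only writes the last MB)
img=./myguest.img
dd if=/dev/zero of="$img" bs=1M count=1 seek=2047 2>/dev/null

# then put a filesystem on it, as root:
#   mkfs.ext3 -F "$img"

# the guest's fstab entry, per PowerKe above
# (belongs in /etc/vservers/GUESTNAME/fstab; written to a scratch file here)
printf '%s\n' "$img / ext3 loop,rw 0 0" > ./fstab
```

Because the guest then lives on its own filesystem, standard non-shared quota applies to it directly; per-xid disk limits (vdlimit) are the alternative when guests share one host filesystem.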
bbl
1242315829 N * Bertl Bertl_oO
1242315997 J * thierryp_ ~thierry@vis041b.inria.fr
1242316078 Q * thierryp_ Remote host closed the connection
1242316276 Q * thierryp Ping timeout: 480 seconds
1242316301 J * hijacker_ ~hijacker@87-126-142-51.btc-net.bg
1242316427 M * Keeper Something is off - the limits show me: /dev/hdv1 2,0G 1,7M 2,0G 1% /
1242317528 J * bonbons ~bonbons@2001:960:7ab:0:2c0:9fff:fe2d:39d
1242318454 J * ViRUS_ ~mp@p579B56DC.dip.t-dialin.net
1242318528 Q * pmenier Quit: Konversation terminated!
1242318883 Q * ViRUS Ping timeout: 480 seconds
1242319662 Q * harobed Ping timeout: 480 seconds
1242319963 M * ghislainocfs2 i was wondering, is there a command to say: run 'xxx' on all the running vservers?
1242319991 M * ghislainocfs2 basically i would want a cron job on the host that launches commands in the guests' contexts
1242319998 M * daniel_hozac vsomething
1242320051 M * ghislainocfs2 hi daniel :) the one i see launches a command in one context, like vserver xxx exec i think
1242320077 M * ghislainocfs2 should i create a script for that? i was wondering if you can do a --all like for vapt-get
1242320123 Q * danman Quit: leaving
1242320153 M * ghislainocfs2 oh
1242320161 M * ghislainocfs2 vsomething is an actual command
1242320164 M * daniel_hozac yes.
1242320197 M * ghislainocfs2 lol i believed you told me to look for a vxxx command
1242320199 M * ghislainocfs2 funny
1242320212 M * ghislainocfs2 i will add this to the wiki page of vserver i made then
1242320216 M * ghislainocfs2 thanks a lot ! :p
1242320246 M * ghislainocfs2 i was pretty sure this should exist, this is a pretty common scenario
1242320318 J * thierryp ~thierry@home.parmentelat.net
1242320651 M * harry ViRUS_: read my readme :)
1242320733 M * harry http://people.linux-vserver.org/~harry/_README_
1242320741 M * harry # CONFIG_GRKERNSEC_CHROOT_FINDTASK is not set # don't enable this if you plan to use chroot stuff in vps'es, if not: go right ahead... not that it's very useful in vservers... ;)
1242320769 M * harry you can "disable" it on the fly
1242320822 M * harry the _ before the readme, and the capital letters are there for a reason... first in the list... and very important! ;)
1242321125 J * larsivi ~larsivi@70.84-48-63.nextgentel.com
1242321139 J * SlackLnx ~SlackWare@85.139.11.111
1242321307 M * ViRUS_ harry, thx for the hint
1242321309 N * ViRUS_ ViRUS
1242321313 M * harry np
1242321493 M * ViRUS harry, when turned off does it limit the guests to see the processes of the host?
1242321512 M * harry yesh!
1242321536 M * harry and the host ofcourse
1242321537 M * ViRUS so basically it's double protection and breaks chroot stuff inside a vserver
1242321575 M * harry since vserver is "chroot on steroids", it does impact it ;)
1242321597 M * ViRUS that's what I was thinking, too. So I better leave it disabled
1242321604 M * harry plus, findtask is a dirty hack ;)
1242321626 M * ViRUS with that 'extra' security it simply breaks chrooted daemon tasks inside the vserver - which is a bad thing
1242321650 M * harry hence the big fat warning in the readme file :)
1242321708 M * ViRUS heh. didn't see that one.
1242321717 M * harry ... :)
1242321736 J * harobed ~harobed@arl57-1-82-231-110-14.fbx.proxad.net
1242323318 J * cga ~weechat@82.84.130.131
1242323712 J * doener ~doener@i59F56D01.versanet.de
1242323816 Q * doener_ Ping timeout: 480 seconds
1242323964 M * Keeper good luck
1242324037 Q * Keeper Quit: Keeper
1242324813 Q * gnuk Quit: NoFeature
1242327819 Q * SlackLnx Read error: Connection reset by peer
1242328230 J * ^jan ~jan@83.Red-80-33-9.staticIP.rima-tde.net
1242328238 M * ^jan hi all
1242328317 M * ^jan any awake? I am getting this error lots of times in my logs, and also when executing dmesg: vxW: [»irqbalance«,4598:#101|49155] messing with the procfs.
1242328321 M * ^jan any idea? thanks
1242328385 M * ^jan sorry, it happens on the host, not inside a vps
1242331370 Q * thierryp Remote host closed the connection
1242331738 J * jrdnyquist ~jrdnyquis@slayer.caro.net
1242331780 J * Philadelphia bno@118-168-233-21.dynamic.hinet.net
1242331849 J * thierryp ~thierry@home.parmentelat.net
1242332236 Q * uva Ping timeout: 480 seconds
1242332772 Q * thierryp Remote host closed the connection
1242332950 Q * hijacker_ Quit: Leaving
1242333613 Q * bonbons Quit: Leaving
1242334656 Q * hparker Quit: Read error: 104 (Peer reset by connection)
1242335896 Q * dna Quit: Verlassend
1242336576 J * hparker ~hparker@2001:470:1f0f:32c:290:96ff:fe50:40fa
1242336627 Q * cga Quit: got a DELL??? update you BIOS with http://github.com/cga/dellbiosupdate.sh/tree/master ;)
1242337038 Q * hparker Remote host closed the connection
1242337348 Q * harobed Ping timeout: 480 seconds
1242337590 Q * ViRUS Remote host closed the connection
1242337670 J * hparker ~hparker@2001:470:1f0f:32c:290:96ff:fe50:40fa
1242338924 N * Bertl_oO Bertl
1242339139 Q * hparker Quit: Read error: 104 (Peer reset by connection)
1242340966 Q * Sebboh- Ping timeout: 480 seconds
1242341178 J * hparker ~hparker@2001:470:1f0f:32c:290:96ff:fe50:40fa
1242342571 Q * doener Ping timeout: 480 seconds
1242342715 Q * dowdle Remote host closed the connection
1242343109 J * saulus_ ~saulus@c150189.adsl.hansenet.de
1242343520 Q * saulus Ping timeout: 480 seconds
1242343529 N * saulus_ SauLus
1242345217 J * scientes ~scientes@97-113-120-192.tukw.qwest.net
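Returning to harry's remark that CONFIG_GRKERNSEC_CHROOT_FINDTASK can be "disabled" on the fly: grsecurity kernels built with sysctl support expose the chroot restrictions under /proc/sys/kernel/grsecurity/. The exact knob name below is an assumption; check that directory on the running kernel before relying on it.

```shell
# assumed knob name - verify with: ls /proc/sys/kernel/grsecurity/
sysctl -w kernel.grsecurity.chroot_findtask=0

# equivalent direct write, as root:
echo 0 > /proc/sys/kernel/grsecurity/chroot_findtask
```

Note that once grsec's grsec_lock sysctl has been set, these values can no longer be changed until reboot.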