1227916804 J * independence ~independe@titan.blinkenshell.org
1227916816 Q * docelic
1227917373 Q * bliz42 Quit: bliz42
1227917385 J * geb ~geb@79.82.4.112
1227917404 Q * geb
1227917476 J * kir ~kir@swsoft-msk-nat.sw.ru
1227917507 Q * kir
1227917621 Q * doener Quit: leaving
1227919249 Q * znyto Remote host closed the connection
1227920229 Q * mire Quit: Leaving
1227921738 M * yarihm still someone awake? A customer asked for direct disk-block access for his mysql server ... I saw the ADMIN_MAPPER cap, is that just administration of lvm or also usage of an lvm lv? What I'd like to do is restrict the customer's access to a single lvm lv if possible
1227921790 M * daniel_hozac just copy that device node to the guest.
1227921796 M * daniel_hozac you don't need any caps.
1227921959 M * Bertl yarihm: but be aware that this can endanger your security (host side)
1227922053 M * daniel_hozac how so?
1227922073 M * yarihm OK, i'll do that then. I'm aware of the security aspect of this, but there is not much I can do about it. However, the lv will be dedicated and the user trusted, so I guess this will not be that much of a problem
1227922186 M * Bertl daniel_hozac: several ways: ioctl stuff on that device, for example; if filesystems are mounted, corrupted filesystems can crash the kernel; if the filesystem can be shared somehow, permissions could be changed ...
1227922253 M * daniel_hozac well, mounting it is of course out of the question. and i'd hope the ioctls are protected by caps.
1227922291 M * Bertl I wouldn't be so sure about the ioctls and caps, they are ancient after all :)
1227922377 M * daniel_hozac i would hope users can't mess with LVs on regular systems.
1227923071 M * Bertl check sd_ioctl, follow down to scsi_cmd_ioctl()
1227923111 M * Bertl you can at least adjust timeouts without any caps
1227923152 M * Bertl or send stop unit, if I read that correctly :)
1227924904 M * Bertl don't forget, on a 'normal' system, e.g. /dev/sda is protected by unix owner/group and permissions
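
A minimal sketch of the device-node approach daniel_hozac suggests above: the node is recreated from the host side, so the guest itself needs no extra caps. The guest root /vservers/customer, the LV /dev/vg0/customer-data, and the 253:3 device numbers are all hypothetical; read the real major/minor from the host.

    # on the host: look up the LV's block major/minor numbers
    ls -l /dev/vg0/customer-data
    # -> brw-rw---- 1 root disk 253, 3 ... /dev/vg0/customer-data

    # recreate the node inside the guest's /dev with the same numbers
    mknod /vservers/customer/dev/mysql-disk b 253 3
    # adjust owner/mode to whatever the guest's mysqld runs as; as Bertl
    # notes above, these permissions are all that guards the raw device
    chmod 0660 /vservers/customer/dev/mysql-disk
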
1227925179 Q * nkukard Ping timeout: 480 seconds
1227925698 J * hparker ~hparker@linux.homershut.net
1227926186 M * Bertl okay, off to bed now .. have a good one everyone!
1227926195 N * Bertl Bertl_zZ
1227926367 J * sfreak ~sfreak@202-71-91-229.ap-w01.canvas.ne.jp
1227926499 M * sfreak Hi! anyone around?
1227926503 M * sfreak I am trying to set up a vserver with a public ip on debian etch... made an eth0:1 interface with the new ip, configured the vserver to use it but have no connectivity to the outside. where can I find more info on the network setup, routing etc.? thanks :)
1227926931 M * daniel_hozac there is nothing special about it.
1227926947 M * daniel_hozac it's just like a regular Linux box, except guests can only use a subset of the IP addresses
1227927093 M * sfreak do I need to set up any routing or anything for the vserver to be able to use the new address?
1227927168 M * daniel_hozac that depends entirely on your network setup.
1227927289 M * sfreak well... it's just a stand-alone server, no routing. eth0 has the primary IP for the host, and eth0:1 and its IP I want to use for the guest
1227927336 M * sfreak I used (roughly :) the following command to create the guest: vserver beta build -n beta --hostname beta.somewhere.net --netdev eth0:1 --interface 11.22.33.44/24 -m debootstrap -- -d etch
1227927394 M * sfreak --interface IP is the same as the IP of eth0:1 on the host
1227927758 M * daniel_hozac that's not what you want.
1227927772 M * daniel_hozac remove netdev, and use --interface 1=eth0:11.22.33.44/24.
1227927779 M * daniel_hozac can the host use eth0:1?
1227928386 M * sfreak ping -I eth0:1 www.debian.org works... so I assume the device works. I'll try the new command
1227928994 M * sfreak aaaah, that did it! now I have internet connectivity. thanks a lot!
1227929006 M * sfreak I didn't have much luck finding helpful documentation on this kind of thing, is there any?
1227929021 M * daniel_hozac http://linux-vserver.org/Building_Guest_Systems
1227929082 M * yarihm gn8
1227929085 Q * yarihm Quit: Leaving
1227929220 M * sfreak oh, no. it's still not working... that was a bit premature. darn.
1227929296 M * daniel_hozac we'll need more details on your specific network setup.
1227929720 M * sfreak something seems to be wrong with the device even on the host... seems my ping test was misleading. if I specify the interface IP it doesn't work, only when I specify the device name... I will investigate further, give me a minute
1227931294 M * sfreak hrm... I guess my problem indeed lies not with vserver but with my network setup. ping -I eth0:1 works, but ping -I secondary_IP doesn't - that got me confused. I'll talk to my hosting company, maybe they don't route the address yet... sorry for the confusion and thanks for the help
1227931786 Q * hparker Quit: Quit
1227932332 J * hparker ~hparker@linux.homershut.net
1227933471 Q * derjohn_mob Ping timeout: 480 seconds
1227935016 J * nkukard ~nkukard@196.212.73.74
1227936129 Q * ghislainocfs2 Ping timeout: 480 seconds
1227938177 N * quinq qzqy
1227942915 M * sfreak daniel_hozac: my provider routed the new IPs to the wrong server... I will try again as soon as that's changed
1227943291 Q * esa Ping timeout: 480 seconds
1227943554 M * sfreak everything's working now :)
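
For reference, the corrected invocation from daniel_hozac's fix above, plus its persistent form; the guest name and address are the ones from the conversation, and the config paths follow the usual util-vserver layout of one directory per interface index.

    # build with <index>=<dev>:<ip>/<prefix> and no --netdev
    vserver beta build -n beta --hostname beta.somewhere.net \
        --interface 1=eth0:11.22.33.44/24 -m debootstrap -- -d etch

    # the same settings, persisted per interface index
    cat /etc/vservers/beta/interfaces/1/dev     # eth0
    cat /etc/vservers/beta/interfaces/1/ip      # 11.22.33.44
    cat /etc/vservers/beta/interfaces/1/prefix  # 24
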
1227945532 J * esa ~esa@ip-87-238-2-45.static.adsl.cheapnet.it
1227945775 Q * Aiken Quit: Leaving
1227948348 J * ktwilight__ ~ktwilight@87.66.205.19
1227948463 Q * cga_ Quit: WeeChat 0.2.6
1227948646 Q * ktwilight_ Ping timeout: 480 seconds
1227949769 J * cga ~weechat@94.36.88.11
1227950524 Q * cga Quit: WeeChat 0.2.6
1227950541 J * Mojo1978 ~Mojo1978@ip-88-152-63-81.unitymediagroup.de
1227950599 Q * hparker Quit: Read error: 104 (Connection reset by peer)
1227951017 J * dib ~dib@149.25.70-86.rev.gaoland.net
1227951481 J * cga ~weechat@94.36.88.11
1227951576 J * derjohn_mob ~aj@e180215068.adsl.alicedsl.de
1227954569 J * bonbons ~bonbons@2001:960:7ab:0:2c0:9fff:fe2d:39d
1227956011 Q * cga Quit: WeeChat 0.2.6
1227956939 J * cga ~weechat@94.36.88.11
1227957623 J * balbir_ ~balbir@124.124.219.61
1227958075 J * Slydder1 ~chuck@dslb-088-072-251-168.pools.arcor-ip.net
1227958586 Q * balbir_ Ping timeout: 480 seconds
1227959097 J * balbir_ ~balbir@124.124.219.61
1227962212 Q * Slydder1 Read error: Connection reset by peer
1227962867 Q * balbir_ Ping timeout: 480 seconds
1227963416 Q * cga Quit: WeeChat 0.2.6
1227964184 J * kiorky_ ~kiorky@cryptelium.net
1227964214 Q * kiorky Remote host closed the connection
1227964293 J * cga ~weechat@94.36.88.11
1227964568 N * Bertl_zZ Bertl_oO
1227964680 J * doener ~doener@i577BB1ED.versanet.de
1227965185 J * balbir_ ~balbir@124.124.219.61
1227965601 Q * kiorky_ Ping timeout: 480 seconds
1227965802 J * kiorky ~kiorky@cryptelium.net
1227966165 Q * cga Quit: WeeChat 0.2.6
1227966273 Q * sfreak Quit: sfreak
1227966286 Q * balbir_ Ping timeout: 480 seconds
1227966656 J * geb ~geb@112.4.82-79.rev.gaoland.net
1227966712 J * cga ~weechat@94.36.88.11
1227968065 J * znyto ~znyto__@tor-irc.dnsbl.oftc.net
1227970242 Q * eyck Quit: leaving
1227970266 J * eyck 1BpOfMS5@nat05.nowanet.pl
1227970401 J * dna ~dna@52-200-103-86.dynamic.dsl.tng.de
1227971856 J * balbir_ ~balbir@122.167.213.157
1227972572 Q * mugwump synthon.oftc.net scorpio.oftc.net
1227972657 J * mugwump ~samv@watts.utsl.gen.nz
1227973509 N * qzqy quinq
1227973619 Q * kiorky Ping timeout: 480 seconds
1227974229 J * gdm ~gdm@lair.fifthhorseman.net
1227974314 M * gdm hi, i think i'm being stupid but i can't find on the wiki how to allocate a cpu core to a certain vserver
1227974491 M * gdm oh wait, is it /etc/vservers/vserver-name/cpuset (from the great flower page)?
1227974972 J * hparker ~hparker@linux.homershut.net
1227976013 M * Bertl_oO yep
1227976057 M * gdm thank you
1227976092 Q * hparker Remote host closed the connection
1227976134 J * hparker ~hparker@linux.homershut.net
1227976192 J * chI6iT41 ~chigital@tmo-100-242.customers.d1-online.com
1227976231 Q * hparker
1227976987 J * hparker ~hparker@linux.homershut.net
1227977413 J * kiorky ~kiorky@82.231.146.43
1227978359 J * bliz42 ~kdsmith@c-98-193-150-250.hsd1.tn.comcast.net
1227979008 Q * bonbons Quit: Leaving
1227979551 J * znyto_ ~znyto__@tor-irc.dnsbl.oftc.net
1227979926 Q * znyto Ping timeout: 480 seconds
1227980049 Q * ktwilight__ Quit: dead
1227980291 Q * chI6iT41 Ping timeout: 480 seconds
1227980718 Q * independence Remote host closed the connection
1227980749 J * independence independen@titan.blinkenshell.org
1227981153 J * Walex ~Walex@82-69-39-138.dsl.in-addr.zen.co.uk
1227983814 J * ktwilight ~ktwilight@87.66.205.19
1227984128 Q * hparker Quit: Quit
1227984341 J * hparker ~hparker@linux.homershut.net
1227989115 Q * hparker Quit: Quit
1227989838 J * hparker ~hparker@linux.homershut.net
1227990474 Q * geb Remote host closed the connection
1227990626 J * geb ~geb@79.82.4.112
1227992295 Q * hparker Quit: Quit
1227992423 Q * cga Quit: WeeChat 0.2.6
1227992564 Q * dib Quit: Quitte
1227992656 J * hparker ~hparker@linux.homershut.net
1227992786 Q * derjohn_mob Ping timeout: 480 seconds
1227994685 J * ktwilight_ ~ktwilight@87.66.193.115
1227994951 Q * ktwilight Ping timeout: 480 seconds
1227995254 Q * hparker Quit: Quit
1227995455 M * mnemoc is it possible to have a 943M modules dir when using allmodconfig, or is there something wrong with my build scripts?
1227995467 M * mnemoc (hi! ;-)
1227995468 M * daniel_hozac sounds reasonable.
1227995481 J * hparker ~hparker@linux.homershut.net
1227995484 M * daniel_hozac might want to strip debuginfo to another place.
1227995497 M * Bertl_oO actually that sounds kind of small with all debug info :)
1227995513 M * mnemoc really?
1227995515 M * mnemoc http://rafb.net/p/VeV44Z35.html
1227995525 M * mnemoc .22 (my last) wasn't that large
1227995565 M * mnemoc this sounds crazy :)
1227995573 M * mnemoc (to me)
1227996184 M * mnemoc do you know any trick to not get every DEBUG config enabled when running allmodconfig?
1227996233 M * daniel_hozac you want debuginfo though.
1227996247 M * daniel_hozac just not in the binaries you run.
1227996314 M * mnemoc .oO
1227996428 M * mnemoc you mean building the kernel twice?
1227996465 M * Bertl_oO nah, more like getting naked (i.e. 'strip' :)
1227996482 M * mnemoc :)
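
A sketch of the 'strip' route, keeping the debuginfo "in another place" as daniel_hozac suggests rather than deleting it; the kernel version in the path is an assumption.

    # detach the debug info from each module, then strip it from the binary,
    # leaving a debuglink so gdb/crash can still locate the symbols
    cd /lib/modules/2.6.27.7-vs2.3.0.36/kernel
    find . -type f -name '*.ko' | while read m; do
        objcopy --only-keep-debug "$m" "$m.debug"
        objcopy --strip-debug --add-gnu-debuglink="$m.debug" "$m"
    done

The *.debug files can then be moved out of /lib/modules entirely (e.g. under /usr/lib/debug), which is where most of the 943M goes.
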
1227996623 Q * doener Quit: leaving
1227996629 M * cehteh brr .. it's winter and cold!
1227996661 M * mnemoc i live 150m away from the ocean in spain, and I got snow today :|
1227996686 M * mnemoc that wasn't supposed to happen
1227998135 J * Aiken ~Aiken@ppp118-208-72-25.lns1.bne4.internode.on.net
1227998503 J * gcj ~chris@cpc3-cmbg7-0-0-cust452.cmbg.cable.ntl.com
1227998585 M * gcj hi all, i'm having a problem with system load spikes on my vserver from one particular context. is there anything I can do about it?
1227998615 M * Bertl_oO run fewer tasks inside the guest?
1227998699 M * gcj i could do that, but i was hoping for a more general solution :)
1227998748 M * Bertl_oO well, add more cpus maybe?
1227998769 M * gcj not without replacing the motherboard :)
1227998784 M * gcj is there anything the vserver patch can do to protect against this abuse?
1227998816 M * Bertl_oO hmm? load is no abuse ...
1227998841 M * gcj it is for me, if it makes the other virtual machines and the host unusable
1227998848 M * gcj e.g. i see system load go up to 30-50
1227998851 M * Bertl_oO a load of 10 means that 10 processes 'could' be running ... that doesn't mean that they are runnung or consuming resources
1227998863 M * Bertl_oO *running
1227998879 M * daniel_hozac gcj: sounds like an I/O problem.
1227998938 M * Bertl_oO depending on the Linux-VServer patch, you can do hard cpu limits for that guest (despite the hardware issues)
1227998946 M * gcj my graphs show that the cpu usage is almost entirely user space during these spikes
1227998949 M * gcj not iowait or something
1227998986 M * Bertl_oO well, then _something_ is consuming (or would be consuming) cpu in the guest; what should Linux-VServer do about that?
1227999006 M * gcj i'm not sure why the system becomes unresponsive when this happens
1227999028 M * Bertl_oO most likely because your config is suboptimal
1227999037 M * gcj my guess is that it's fork()ing a lot, as that would bring a normal system to its knees as well
1227999045 M * gcj could you help me to check my config?
1227999083 M * gcj strangely, system load is 66 at the moment and climbing
1227999089 M * gcj but the machine is usable right now
1227999112 M * Bertl_oO well, first, check what's actually happening
1227999131 M * gcj actually, i take that back, it's hanging on me again
1227999131 M * Bertl_oO are there 66+ processes inside the guest? are they proliferating?
1227999157 M * Bertl_oO you might have memory issues, maybe the system is thrashing?
1227999170 M * gcj Cpu(s): 84.3%us, 15.4%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
1227999177 M * gcj swap: 8392k used
1227999181 M * gcj over 1GB cached
1227999195 M * gcj 10130 1014 24 0 267m 57m 12m S 99.3 2.8 92:10.48 java
1227999226 M * Bertl_oO ah, java :) well memory/resource hog #1
1227999243 M * gcj yeah
1227999257 M * gcj i have a feeling a lot of threads are hidden in that one process
1227999258 M * gcj and it's spawning more
1227999315 M * gcj the process limit on that vserver is 64
1227999322 M * gcj but i don't know if that includes threads or not
1227999338 M * gcj or whether fast creation of threads could also strangle the host
1227999342 M * Bertl_oO check with /proc/virtual/<xid>/limits :)
1227999355 M * gcj PROC: 43 0/ 64 200/ 200 0
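
In practice that check looks like the following; the xid 40001 is a placeholder (list /proc/virtual for the real ones), and the column reading in the comment is an interpretation consistent with Bertl's remarks below.

    # per-context limits/usage, one row per resource
    ls /proc/virtual
    grep PROC /proc/virtual/40001/limits
    # -> PROC:   43    0/  64    200/ 200    0
    #    i.e. 43 in use, 0/64 observed min/max, 200/200 soft/hard limit,
    #    and 0 limit hits so far
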
1227999360 M * gcj what else should I look for?
1227999370 M * Bertl_oO and yes, excessive forking can have some impact
1227999389 M * Bertl_oO so you didn't reach the limit yet
1227999395 M * gcj i imagine a fork bomb would take down the host, but i don't know if a thread bomb could
1227999404 M * Bertl_oO and the limit is 200
1227999418 M * gcj oh yeah, sorry, not 64
1227999421 M * gcj does it include threads?
1227999450 M * Bertl_oO as there are no 'threads' per se (in Linux), yes
1227999469 M * gcj umm really? i thought there were kernel threads
1227999476 M * gcj e.g. /proc/x/task/*
1227999507 M * Bertl_oO yes, but there is no real difference between processes/tasks and threads
1227999517 M * Bertl_oO (at least for 'real' threads :)
1227999597 M * gcj ok, so I guess I could reduce the PROC limit for this vserver
1227999608 M * gcj is there anything else i can do?
1227999622 M * gcj e.g. reduce the priority of its tasks to run when the system load is high?
1227999627 M * Bertl_oO yes, I would suggest setting the priority bias (i.e. renicing the guest)
1227999638 M * gcj the whole guest? using the renice command?
1227999654 M * Bertl_oO (that is a Linux-VServer feature)
1227999698 M * Bertl_oO vsched --help
1227999734 M * gcj ok, it has a lot of options :)
1227999741 M * gcj what would you suggest as sensible settings?
1227999747 M * gcj and can i have them loaded when the vserver starts?
1227999765 M * Bertl_oO yep, see the flower page for the config files
1227999785 M * Bertl_oO (--prio-bias is the one you want)
1227999878 M * gcj which way does priority run? should I set it to +5 or -5 in this case?
1227999891 M * gdm Bertl_oO: can i clarify, is it #
1227999891 M * gdm *
1227999891 M * gdm Bertl_oO: can i clarify, is it o
1227999891 M * gdm +
1227999900 M * gdm woah, sorry
1227999921 M * Bertl_oO priority is inverse to niceness
1227999930 M * gcj ok, so -5 then
1227999956 M * gdm is it /etc/vservers/vsname/cpuset or /etc/vservers/vsname/sched i should be using to allocate CPU for the vservers?
1227999997 M * gdm i am confused by what a cpuset is and how i name it
1228000088 M * Bertl_oO well, it's a set of cpus (arranged together)
1228000132 M * gdm so if i have 0,1,2,3 i might put 0 and 1 together and call it 'foo' and 2 and 3 together and call them 'bar', is that right?
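
A sketch of both knobs discussed above as persistent per-guest config, following the flower page layout; the guest name, xid, cpu numbers, and cpuset label are placeholders.

    # runtime: renice the whole guest (negative bias = lower priority,
    # matching gcj's "-5 then" above)
    vsched --xid 42 --prio-bias -5

    # persistent: picked up when the guest starts
    mkdir -p /etc/vservers/myguest/sched
    echo -5 > /etc/vservers/myguest/sched/prio-bias

    # cpuset: pin the guest to cpus 0-1 under a label of your choosing
    mkdir -p /etc/vservers/myguest/cpuset
    echo foo > /etc/vservers/myguest/cpuset/name
    echo 0-1 > /etc/vservers/myguest/cpuset/cpus
    echo 0   > /etc/vservers/myguest/cpuset/mems
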
1228000142 M * gcj hey gdm :)
1228000148 M * gcj i think i know you from imc
1228000182 M * gcj are you working on the new uk server?
1228000232 M * gcj Bertl_oO, thanks, i think i applied it successfully
1228000235 M * gdm gcj: hey! yes i am from imc among other places and no, not on the new uk server, although these questions are of relevance
1228000243 M * Bertl_oO you're welcome!
1228000266 M * gcj can i come back for more help if it doesn't solve the problem?
1228000280 M * gcj and are there plans to deal with system load limiting in a vserver context?
1228000283 M * gdm haha, be careful, you might get addicted ;-)
1228000312 M * Bertl_oO gcj: load limiting? like killing off processes or what?
1228000344 M * gcj preventing or delaying fork() and clone()
1228000348 M * gcj if the system load is too high
1228000389 M * Bertl_oO well, preventing fork is done by the process limit
1228000406 M * gcj but that's a fixed limit, it doesn't depend on system load
1228000419 M * Bertl_oO adjust it dynamically?
1228000424 M * gcj and just delaying it would be preferable to me
1228000459 M * Bertl_oO well, not sure that will reduce the load issue for you :)
1228000486 M * gcj why not? if a fork() would increase the load, and we delay it until the load is lower
1228000492 M * gcj that should keep the load down, no?
1228000501 M * Bertl_oO but if you are volunteering to test something like that, we can do it :)
1228000515 M * gcj yes i'd be happy to test experimental kernel patches :)
1228000569 M * Bertl_oO well, get a test system ready (i.e. where you can simulate load and test it)
1228000590 M * gcj ok, a test system might take a little while
1228000595 M * daniel_hozac delaying fork is just going to make your system sluggish.
1228000607 M * gcj ideally i'd like to identify and reproduce the current problem first
1228000617 M * gcj i don't mean delaying all forks, just in that one context
1228000627 M * gcj oh wait, i see what you mean
1228000635 M * gcj which context is responsible?
1228000651 M * Bertl_oO well, delaying forks _inside_ a guest might be a valid approach for the fork soft limit
1228000673 M * Bertl_oO where delaying actually means hitting the context/forked process with a scheduler penalty
1228000681 Q * hparker Quit: Quit
1228000743 M * gcj i thought most of the load was created by the kernel in the fork operation?
1228000751 M * Bertl_oO nope
1228000752 M * gcj perhaps i should be running a preempt kernel as well?
1228000762 M * Bertl_oO the 'load' itself is completely harmless
1228000782 M * Bertl_oO you can have a load of 100 and still a fully responsive system
1228000790 M * gcj it seems very rare to me
1228000802 M * gcj hitting load > 50 seems to kill a system most of the time
1228000807 M * gcj unless it's zombie processes
1228000852 M * gcj and fork bombs do kill the system, even with no swap and virtually no touching of pages inside the forks
1228000869 M * Bertl_oO well, not inside a properly limited guest
1228000902 M * Bertl_oO i.e. the fork bomb is one of the examples/tests done with guests on a regular basis
1228000926 M * gcj that's good to know :)
1228000946 M * gcj presumably in that case, the forking parent doesn't die and keeps forking
1228000956 M * gcj even though sys_fork sometimes fails
1228000979 M * gcj so the load that it creates on the system carries on for some time, e.g. several minutes?
1228001023 M * Bertl_oO no, the load is averaged
1228001040 M * gcj i don't mean in the system load sense this time, i mean load as a concept
1228001079 M * gcj how long do you run the test for? and what does the system load reach/sustain?
1228001080 M * Bertl_oO hmm, well, we should probably find a common definition for load first :)
1228001313 M * Bertl_oO but I guess I'm off to bed now ... so we should continue this another time (maybe tomorrow?)
1228001338 M * gcj ok thanks again for your help :)
1228001361 M * Bertl_oO np, cya
1228001376 M * Bertl_oO have a good one everyone .. off to bed ...
1228001382 N * Bertl_oO Bertl_zZ
1228002616 N * _Hunger Hunger
1228003188 J * hparker ~hparker@linux.homershut.net
1228003191 Q * dna Quit: Verlassend
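
Closing the loop on gcj's plan earlier to lower the guest's PROC limit: per the flower page, that maps onto the rlimits config directory, read at guest start. A sketch, with the guest name and values as placeholders.

    # cap the context at 128 tasks (processes and threads alike,
    # per the discussion above); soft and hard limits respectively
    mkdir -p /etc/vservers/myguest/rlimits
    echo 128 > /etc/vservers/myguest/rlimits/nproc
    echo 128 > /etc/vservers/myguest/rlimits/nproc.hard
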