1155168270 Q * ekc 1155168445 Q * Savvy Read error: Connection reset by peer 1155170541 Q * Piet Remote host closed the connection 1155170593 J * Piet hiddenserv@tor.noreply.org 1155171251 Q * Hurga Remote host closed the connection 1155175513 Q * azazel Quit: Client exiting 1155176107 P * lilo2 1155176498 J * olilo hiddenserv@tor.noreply.org 1155176775 Q * olilo Quit: brb 1155176908 J * olilo hiddenserv@tor.noreply.org 1155179554 M * brc_ Is there any remote DDOS that would crash 2.6.17.4 ? 1155180160 Q * meandtheshell Quit: bye bye ... 1155182853 Q * bonbons Quit: Leaving 1155183362 Q * eyck Ping timeout: 480 seconds 1155187993 J * lolilol hiddenserv@tor.noreply.org 1155188067 Q * olilo Ping timeout: 480 seconds 1155188132 J * Piet_ hiddenserv@tor.noreply.org 1155188467 Q * Piet Ping timeout: 480 seconds 1155189969 N * lolilol olilo 1155190031 N * olilo lolilol 1155190075 N * lolilol olilo 1155191583 J * yarihm ~yarihm@whitehead2.nine.ch 1155191796 J * dna ~naucki@p54BCD8B8.dip.t-dialin.net 1155192127 J * Viper0482 ~Viper0482@p54975C48.dip.t-dialin.net 1155192879 J * dna_ ~naucki@p54BCD8B8.dip.t-dialin.net 1155193257 Q * brc_ Ping timeout: 480 seconds 1155193292 Q * dna Ping timeout: 480 seconds 1155193560 N * dna_ dna 1155193814 Q * Viper0482 Quit: one day, i'll find this peer guy and then i'll reset his connection!! 1155193830 J * Viper0482 ~Viper0482@p54975C48.dip.t-dialin.net 1155194086 J * brc_ bruce@201.19.143.188 1155194883 J * dna_ ~naucki@p54BCD8B8.dip.t-dialin.net 1155195035 Q * Viper0482 Quit: one day, i'll find this peer guy and then i'll reset his connection!! 
1155195057 J * Viper0482 ~Viper0482@p54975C48.dip.t-dialin.net 1155195162 Q * dna Ping timeout: 480 seconds 1155195282 J * ||Cobra|| ~cob@pc-csa01.science.uva.nl 1155195559 Q * Piet_ Quit: :tiuQ 1155195606 Q * Viper0482 Remote host closed the connection 1155195626 J * Viper0482 ~Viper0482@p54975C48.dip.t-dialin.net 1155195682 J * dna ~naucki@p54BCD8B8.dip.t-dialin.net 1155195987 Q * dna_ Ping timeout: 480 seconds 1155196330 J * eyck eyck@ghost.anime.pl 1155196815 J * coocoon ~coocoon@p54A07BB8.dip.t-dialin.net 1155196826 M * coocoon morning 1155196880 Q * Aiken Ping timeout: 480 seconds 1155197101 J * dna_ ~naucki@84.188.216.184 1155197377 Q * dna Ping timeout: 480 seconds 1155198323 Q * Hunger charon.oftc.net europa.oftc.net 1155198600 J * Hunger Hunger.hu@Hunger.hu 1155198757 Q * dna_ Ping timeout: 480 seconds 1155198784 Q * shedi charon.oftc.net helium.oftc.net 1155198784 Q * virtuoso_ charon.oftc.net helium.oftc.net 1155198784 Q * hap charon.oftc.net helium.oftc.net 1155198784 Q * fs charon.oftc.net helium.oftc.net 1155198784 Q * [PUPPETS]Gonzo charon.oftc.net helium.oftc.net 1155198784 Q * Greek0 charon.oftc.net helium.oftc.net 1155198784 Q * ntrs charon.oftc.net helium.oftc.net 1155198784 Q * nokoya charon.oftc.net helium.oftc.net 1155198784 Q * tokkee charon.oftc.net helium.oftc.net 1155198784 Q * matled_ charon.oftc.net helium.oftc.net 1155198784 Q * FloodServ charon.oftc.net helium.oftc.net 1155198793 J * shedi ~siggi@inferno.lhi.is 1155198793 J * virtuoso_ ~s0t0na@shisha.spb.ru 1155198793 J * hap ~penso@212.27.33.226 1155198793 J * fs fs@213.178.77.98 1155198793 J * Greek0 ~greek0@85.255.145.201 1155198793 J * ntrs ~ntrs@68-188-51-87.dhcp.stls.mo.charter.com 1155198793 J * [PUPPETS]Gonzo gonzo@langweiligneutral.deswahnsinns.de 1155198793 J * nokoya young@hi-230-82.tm.net.org.my 1155198793 J * tokkee tokkee@casella.verplant.org 1155198793 J * matled_ ~matled@85.131.246.184 1155198793 J * FloodServ services@services.oftc.net 1155198877 J * meandtheshell 
~markus@85-124-206-89.dynamic.xdsl-line.inode.at 1155198888 J * dna ~naucki@p54BCD8B8.dip.t-dialin.net 1155198971 Q * matled_ helium.oftc.net strange.oftc.net 1155198971 Q * ntrs helium.oftc.net strange.oftc.net 1155198971 Q * Greek0 helium.oftc.net strange.oftc.net 1155198971 Q * tokkee helium.oftc.net strange.oftc.net 1155198971 Q * nokoya helium.oftc.net strange.oftc.net 1155198981 J * Greek0 ~greek0@85.255.145.201 1155198981 J * ntrs ~ntrs@68-188-51-87.dhcp.stls.mo.charter.com 1155198981 J * nokoya young@hi-230-82.tm.net.org.my 1155198981 J * tokkee tokkee@casella.verplant.org 1155198981 J * matled_ ~matled@85.131.246.184 1155200259 J * dna_ ~naucki@p54BCD8B8.dip.t-dialin.net 1155200617 Q * dna Ping timeout: 480 seconds 1155200844 Q * coocoon Ping timeout: 480 seconds 1155201423 J * coocoon ~coocoon@p54A06ACB.dip.t-dialin.net 1155201480 Q * coocoon 1155201585 J * coocoon ~coocoon@p54A06ACB.dip.t-dialin.net 1155202217 Q * michal` Ping timeout: 480 seconds 1155202742 J * michal` ~michal@www.rsbac.org 1155203227 Q * dna_ Ping timeout: 480 seconds 1155203282 J * dna ~naucki@p54BCFEAD.dip.t-dialin.net 1155203911 J * dna_ ~naucki@p54BCFEAD.dip.t-dialin.net 1155204284 Q * ||Cobra|| Remote host closed the connection 1155204327 Q * dna Ping timeout: 480 seconds 1155204571 J * dna ~naucki@p54BCFEAD.dip.t-dialin.net 1155204987 Q * dna_ Ping timeout: 480 seconds 1155205264 J * dna_ ~naucki@p54BCFEAD.dip.t-dialin.net 1155205647 Q * dna Ping timeout: 480 seconds 1155206099 J * dna ~naucki@p54BCFEAD.dip.t-dialin.net 1155206118 Q * brc_ Quit: [BX] Reserve your copy of BitchX-1.1-final for the Atari 2600 today! 1155206128 J * brc bruce@201.19.143.188 1155206131 Q * brc Remote host closed the connection 1155206133 J * brc bruce@201.19.143.188 1155206241 J * ||Cobra|| ~cob@pc-csa01.science.uva.nl 1155206366 M * Nam question... for /etc/vservers/vserver-name/apps/init/mark ... if I put any string that matches in multiple vservers... 
when I start/stop any of the ones for that group, does the whole group start/stop? 1155206402 Q * dna_ Ping timeout: 480 seconds 1155206416 M * Nam for instance... if i put "bob" in three vserver configs for the mark... if I start/stop one of them.. they all start/stop? 1155206851 J * dna_ ~naucki@p54BCFEAD.dip.t-dialin.net 1155207182 Q * dna Ping timeout: 480 seconds 1155207221 M * Nam hmm... well, that doesn't appear to be the case 1155207289 M * Nam umm... 1155207290 M * Nam # depends 1155207290 M * Nam This file is used to configure vservers which must be running before the current vserver can be started. At shutdown, the current vserver will be stopped before its dependencies. Content of this file are vserver ids (one name per line). 1155207319 M * Nam I have the context id for a vserver in there, I tell the vserver to start... and it does, ignores the entry in the depends file 1155207326 M * Nam any reason for this? 1155207455 J * dna ~naucki@p54BCFEAD.dip.t-dialin.net 1155207572 Q * dna_ Ping timeout: 480 seconds 1155208019 J * |gerrit| ~kvirc@dslb-084-060-206-152.pools.arcor-ip.net 1155208246 J * dna_ ~naucki@p54BCFEAD.dip.t-dialin.net 1155208376 Q * dna_ 1155208557 Q * dna Ping timeout: 480 seconds 1155208618 M * Hollow Nam: the mark file is used by start-vservers 1155208637 M * Hollow it will start all vservers in a certain group/mark 1155208675 M * Nam ahhh... 1155208760 M * Hollow same applies for apps/init/depends 1155208822 M * Nam soo... I should be able to do "/lib/util-vserver/start-vservers -m test --start" and that will start all servers with the mark "test"? 1155208829 M * Nam i just tried it... and it didn't work 1155208850 M * Hollow in theory yes :) 1155208880 M * Hollow Nam: hm.. try to add --all 1155209009 M * Nam that seems to have worked... although...
i am getting a weird error 1155209024 M * Nam root@testbox:/lib/util-vserver# vserver-stat 1155209024 M * Nam CTX PROC VSZ RSS userTIME sysTIME UPTIME NAME 1155209024 M * Nam 0 81 795M 160.5M 11m33s98 8m12s40 2d02h18 root server 1155209024 M * Nam 49149 8 89.2M 7.7M 0m00s10 0m00s60 0m04s53 skytest 1155209024 M * Nam 49150 8 85M 7.6M 0m00s80 0m00s80 0m04s55 test1 1155209037 M * Nam this shows of course... the 2 vservers are running 1155209050 M * Nam ./start-vservers -m test --all --stop 1155209050 M * Nam Vserver '49149' does not exist; skipping it... 1155209050 M * Nam Vserver '49149' mentioned in '/etc/vservers/test1/apps/init/depends' does not exist; skipping it... 1155209069 M * Hollow you need to put names in depends 1155209071 M * Nam is it supposed to be the name 1155209072 M * Nam ah 1155209077 M * Hollow also you should not use dynamic context ids 1155209090 M * Nam This file is used to configure vservers which must be running before the current vserver can be started. At shutdown, the current vserver will be stopped before its dependencies. Content of this file are vserver ids (one name per line). 1155209101 M * Nam the flower page makes it confusing 1155209110 M * Hollow yeah, confusing formulation.. 1155209173 M * Nam you're one of the developers of this project, right? 1155209204 M * Hollow yep, i'm currently developing a new userspace implementation 1155209244 M * Nam i've talked with Bertl a bit... I'd like to extend the same thank you to you. This project is awesome so far, and I can't wait to see the project's future 1155209264 M * Hollow you'll love the future :D 1155209269 M * Nam I also look forward to hopefully being able to help out financially and equipment-wise 1155209311 M * Nam your project has allowed me to create something so beautiful... I hope to make a lot of money with it in the corporate industry, and help bring the products I am using more spotlight 1155209333 M * Nam really... the only two things I see missing from the project are...
and one I hear is already on the way 1155209346 M * Nam 1) Ability to update network configuration without rebooting the vserver 1155209358 M * Nam 2) slackware package management support 1155209382 M * Nam I've made my own scripts and such... but they are all custom and don't really interface with util-vserver at all 1155209394 M * Hollow well, 1) actually works, 2) depends.. in my new implementation there will be _no_ distro specific stuff ever 1155209412 M * Hollow 2) for util-vserver, dunno.. should not be too hard to add it 1155209451 M * Nam i've just been doing it all with custom bash scripts so far... I just am using slackware/slamd64 for everything.. so, it would be nice to see support for it ;) ;) 1155209461 M * Nam but, it's no biggie 1155209474 M * Hollow you're talking about managing slackware guest packages from the host, right? 1155209479 M * Nam the main thing is the network support changes without reboot 1155209487 M * Nam yup 1155209496 M * Nam i can do it with command line 1155209497 M * Hollow so.. why don't you do it from inside? 1155209515 M * Nam installpkg --root /vserver/ /path/to/pkg.tgz 1155209522 M * Nam something like that 1155209527 M * Hollow i never saw the advantage of letting util-vserver manage packages 1155209533 M * Nam i have to install the packages first 1155209536 M * Hollow as you can see, it results in total mess ;) 1155209553 M * Hollow vyum is a patch mess, same for rpm, etc pp 1155209558 M * Nam hehe, which is why I have been doing it all custom myself... I'm writing a next generation control 1155209562 M * Nam control pane 1155209564 M * Nam er... 1155209566 M * Nam control panel 1155209579 M * Nam i can't stand cpanel and most other control panel interfaces I have seen 1155209590 M * Nam I've been developing my own for about 2+ years now 1155209602 M * Nam getting ready to go public within probably this month or next 1155209620 M * Hollow for vservers or general like syscp? 1155209747 M * Nam hehe...
never heard of syscp before... but taking a quick look at it... imagine this on let's say.... 10000000 doses of steriouds 1155209748 M * Nam hehe 1155209759 M * Nam steroids 1155209775 M * Hollow well, syscp, webmin, cpanel, plesk.. all the same ;) 1155209784 M * Nam yea, all variants of the same thing 1155209790 M * Nam i'm taking things a step further 1155209800 M * Nam you heard of Levanta? 1155209807 M * Hollow no 1155209809 M * Nam http://www.levanta.com/ 1155209820 M * Nam mine should be able to outdo this one 1155209829 M * Nam and it has WAY more features 1155209874 M * Nam we are also integrating VoIP/PBX features into ours 1155209878 M * Hollow but it still is a panel for managing what is inside the vserver (or on a normal host without vserver)? 1155209905 M * Nam it will manage not only vservers, but the local and remote servers 1155209914 M * Nam i can manage an entire network from one interface 1155209952 M * Hollow maybe i should rephrase it: does your panel manage the creation, start, stop, config of vservers, or does it manage the services running inside the vservers? 1155209957 M * Nam run security checks, update software, move vservers from one machine to another, backup stuff, send phone/pager/sms reminders, all kinds of things 1155209966 M * Nam both 1155209970 M * Hollow omg.. 1155209973 M * Hollow what a mess 1155209980 M * Nam ? 1155210037 M * Hollow exactly the same with util-vserver, why does it do package management, why does it do network setup, etc... it's simply not its job 1155210071 M * Hollow imo you either manage vservers, or what is inside 1155210075 M * Hollow i.e. two apps 1155210085 M * Nam hehe, all i use util-vserver for really... start/stop/restart servers 1155210092 M * Nam vprocunhide 1155210101 M * Nam umm... that's it 1155210109 M * Nam i do everything else through my own software 1155210134 M * Nam if you strip all package management abilities for it...
i have no problem with that 1155210149 M * Hollow i don't strip them, i won't add them :p 1155210177 M * Hollow Nam: btw, http://svn.linux-vserver.org/cgi-bin/viewvc.cgi/vcd/trunk/doc/vcd.spec?view=markup if you're interested 1155210183 M * Nam ohh... and util-vserver would also apply the network settings for the new version right? 1155210216 M * Hollow hm? 1155210222 M * Hollow which new version? 1155210260 M * Nam Bertl_oO said only the current development version of the project supports updating network without a vserver reboot 1155210267 M * Nam i'm waiting for it to become stable 1155210281 M * Hollow ah, yep.. that feature is in 2.1 only 1155210290 M * Nam you're not planning on changing util-vserver completely, are you? 1155210305 M * Hollow well, i have reimplemented it from scratch 1155210312 M * renihs :) 1155210313 M * Hollow with different architecture in mind 1155210321 M * Nam that will cause a bunch of work on my part if you're going to change the vserver start/stop/restart script 1155210331 M * renihs lol ;) 1155210336 M * Hollow you can still use util-vserver 1155210344 M * Hollow no one forces you to change 1155210364 M * Hollow but util-vserver development is semi-dead... 1155210375 M * Nam well... if it is no longer supported in future versions, yes, i'd be forced to change, right 1155210427 M * Hollow but also the old utils from jacques still work with current kernels, so util-vserver will work on it for a long time 1155210436 M * Hollow just new features will probably not be implemented 1155210437 M * Nam ah, k 1155210456 M * Nam then it won't be much of a problem for a while, which I can update with a patch at that time 1155210459 M * Nam no biggie then 1155210464 M * Hollow yup 1155210478 M * Hollow it will also still take a while before vcd is ready 1155210508 M * Nam what will vcd do exactly?
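The mark/depends discussion above can be sketched as a config layout. This is a minimal, hedged sketch: it uses a scratch directory instead of the real /etc/vservers tree, and the guest names skytest and test1 are simply the ones that appear in the conversation.

```shell
#!/bin/sh
# Sketch of the apps/init/mark and apps/init/depends layout discussed
# above. A temporary directory stands in for /etc/vservers so this can
# be run anywhere; guest names are taken from the log.
CONFDIR=$(mktemp -d)

for guest in skytest test1; do
    mkdir -p "$CONFDIR/$guest/apps/init"
    # start-vservers -m test ... picks up every guest carrying this mark
    echo test > "$CONFDIR/$guest/apps/init/mark"
done

# depends takes guest *names*, one per line -- not context ids
echo skytest > "$CONFDIR/test1/apps/init/depends"

cat "$CONFDIR/test1/apps/init/mark"     # -> test
cat "$CONFDIR/test1/apps/init/depends"  # -> skytest
```

With a real util-vserver installation, `/lib/util-vserver/start-vservers -m test --all --start` would then (per the conversation) bring up both guests, starting skytest before test1 because of the depends entry.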
1155210529 M * Hollow in theory everything util-vserver currently does :) 1155210543 M * Hollow but many things are not implemented yet (mainly unification and disk limits) 1155210555 M * Nam umm.... I was hoping for an "and more"... ;) 1155210582 M * Hollow well, it already supports more (and less at the same time ;) 1155210632 M * Nam what is the vunify stuff for?... i haven't found that good of documentation, so I don't know what everything is yet 1155210679 M * Hollow there are different approaches, but in general you share (hardlinked) files across vservers to save space, but break the links once one vserver is writing to the file 1155210693 M * Nam ah 1155210698 M * Nam i don't need that 1155210711 M * Hollow me neither, that's why it's not implemented yet :D 1155210713 M * Nam customers pay for diskspace ;) 1155210751 M * Hollow also i use gentoo vserver only, and unification does not work that well with a source-distro 1155210784 M * Hollow well, it works, but after some updates the gain is gone 1155210799 M * Nam I plan to outsource the entire project in time... as long as I can talk my boss into it... although, for anyone to use it.. they do require certain setups of both hardware and internet capabilities 1155210826 M * Nam never used gentoo myself yet 1155210835 M * Nam i stick with my slack based distros 1155210840 M * Nam ie, slackware/slamd64 1155210853 M * Nam i'm running all this vserver stuff on 64bit 1155211000 M * Nam hmm.... if you used the fstab bind abilities, you could have one install that you keep updated, and it automatically updates the other ones.. but the /home or what ever stays the same... just an idea 1155211019 J * rgl Rui@217.129.151.190 1155211022 M * rgl hello 1155211059 M * Hollow Nam: well, you can share directories of course, e.g.
/usr which is supposed to be sharable on normal systems too 1155211140 M * Nam in theory, if /usr /bin /sbin and perhaps a couple more were all mounted through another system, have them all mounted from a single vserver and leave the rest of the dirs set up for the data and configurations, then, if you synchronize your start/stop/restart of all processes, you would essentially upgrade them all at the same time, with only updating one vserver 1155211153 M * Nam interesting concept... don't think i'll be doing that myself 1155211196 M * coocoon Nam: you have seen this http://linux-vserver.org/SlackwareVserverHowto maybe under upgrading vservers it will give some ideas 1155211211 M * Hollow depends, in some situations unification will be better for sure, but if you want to have "clones" the bind mount approach might work out quite well too 1155211248 M * Nam yea, i've seen all of that coocoon, i've made all of my own scripts... that helped get me started though... like 2 years ago 1155211255 M * Nam 1-2 years ago 1155211330 M * Nam yup... I plan on running each system on its own, and requiring customers to pay for it 1155211361 M * Nam our main clients are big-wig real estate companies soo... we should be bringing in a lot of high-paying customers easily 1155211362 Q * meandtheshell Ping timeout: 480 seconds 1155211801 M * rgl I'm trying to use busybox inside a guest, but it keeps failing to start, please have a look at http://pastie.caboo.se/7937 and tell me what you think 1155211845 Q * coocoon Quit: KVIrc 3.2.0 'Realia' 1155211876 J * coocoon ~coocoon@p54A06ACB.dip.t-dialin.net 1155211894 M * Nam /etc/vservers//apps/init/cmd.start and /etc/vservers//apps/init/cmd.stop need to be set up to tell the vserver what command to run to start the server 1155211971 M * rgl Nam, humm, why isn't that needed for a debian install that I have working?
1155212010 M * rgl I mean, for a debian guest 1155212029 M * Hollow Nam: the scriplets are for additional commands you want to run during vserver start/stop 1155212039 M * Nam perhaps /etc/vservers//apps/init/style is set... or there is some other reason that I am unaware of... 1155212050 M * Nam I had to set them in order for slackware to boot 1155212055 M * Nam i had to set them to /sbin/init 1155212059 M * Hollow which init style do you use? 1155212070 M * Nam slackware is bsd style 1155212071 M * matti Hollow: ;] 1155212079 M * Hollow hoi matti ;) 1155212087 M * Hollow always-grinning-matti 1155212088 M * Hollow :p 1155212136 M * rgl Nam, nope, there is nothing in the apps/init/ directory on the debian guest, very odd :D 1155212137 M * Hollow Nam: i.e. you have "bsd" in apps/init/style? 1155212167 M * Nam no, I have "/sbin/init" set in apps/init/cmd.start 1155212179 M * Hollow so, what is in apps/init/style? 1155212197 M * Nam plain 1155212209 M * Hollow hm, plain should call /sbin/init automagically 1155212228 M * Nam it wasn't working for me earlier 1155212265 M * Nam hmm... 1155212282 M * Nam i just removed the cmd.start/cmd.stop and it's still working 1155212287 M * Hollow heh 1155212299 M * Nam wonder what it was earlier 1155212301 M * Nam probably human error 1155212303 M * rgl humm, busybox init seems to be borked or something, http://pastie.caboo.se/7937 1155212311 M * Nam it's the damn "flower" hehehe 1155212324 M * Hollow Nam: yeah, it's so crappy.. 1155212350 M * Hollow rgl: init style? 1155212356 M * Nam ?... the bud i get is good, i don't know what you're talking about 1155212357 M * Nam hehe 1155212378 M * Hollow the flower page just sucks.. 1155212399 M * rgl Hollow, what do you mean? or, does it matter? isn't vserver just running /sbin/init ? 1155212401 M * Hollow fortunately you can switch css sheets 1155212402 M * Nam ah yea... I use the boring theme 1155212421 M * Hollow rgl: no, it depends on your init style..
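The init-style exchange above boils down to one config file. A minimal sketch, again with a scratch directory standing in for a real /etc/vservers/&lt;name&gt; tree: with style set to "plain", util-vserver runs /sbin/init inside the guest by itself, so cmd.start is only needed when the default does not fit.

```shell
#!/bin/sh
# Sketch of the apps/init settings discussed above. "plain" already
# calls /sbin/init in the guest, so cmd.start is normally unnecessary.
GUEST=$(mktemp -d)   # stands in for /etc/vservers/<name>
mkdir -p "$GUEST/apps/init"

echo plain > "$GUEST/apps/init/style"

# Only when the default does not fit would you add something like:
#   printf '%s\n' /sbin/init > "$GUEST/apps/init/cmd.start"

cat "$GUEST/apps/init/style"   # -> plain
```

This matches what Nam observed: after removing cmd.start/cmd.stop, the guest kept booting, because "plain" had been doing the work all along.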
1155212442 M * rgl Hollow, ah ok :) 1155212489 M * rgl Hollow, where can I see the different init styles? 1155212523 M * Hollow http://linux-vserver.org/InitStyles 1155212549 M * Hollow there is also a minix init style, not mentioned on that page.. 1155212570 M * rgl ah thx. 1155212591 M * rgl I think for busybox I should use "plain" :D 1155212595 M * Hollow in almost all cases you want plain 1155212608 M * Nam you should remake the config page, use something like this... http://javascript.cooldev.com/scripts/cooltree/demos/superdemo/ and make the files that you can click on, pop up with the information about that file or something 1155212631 M * Hollow uh.. please uninvent javascript 1155212632 M * Hollow :) 1155212644 M * Nam hehe... in the past... I would have agreed 1155212647 M * Nam not in today's age 1155212665 M * Nam jscript now is sooo powerful, I've forced myself to get into it 1155212671 M * rgl Hollow, humm, still the same problem. odd :/ 1155212672 M * Nam I am making AJAX applications now ;) 1155212686 M * Hollow well, i have javascript disabled in all my browsers, it just sucks 1155212704 M * Nam well... you're missing out on the new age of AJAX 1155212712 M * Nam you never used maps.google.com? 1155212714 M * Hollow if you need dynamic pages use php or python or whatever, just not java(script) 1155212714 M * Hollow ;) 1155212728 M * Hollow Nam: i use google earth .. 1155212747 M * phedny Hollow: php is slow 1155212751 M * Nam not anymore man... pages are changing over to javascript... actually, most pages use javascript 1155212757 M * phedny every time you click something there is a round-trip to the server ;) 1155212767 M * Nam javascript is yes...
WAY faster for client-side stuff 1155212780 M * Nam PHP is not meant to do the jobs of javascript 1155212799 M * Nam check out http://www.netvibes.com/ or http://www.live.com/ 1155212807 M * Nam with jscript enabled of course 1155212828 M * Nam i can do all of that 1155212836 M * Nam my control panel uses javascript 1155212844 M * Nam it runs like it's a desktop interface 1155212858 M * Nam if you fullscreen it, you could fool someone into believing it is 1155212920 M * Nam when google came out with google maps.. they changed the way the world looked at javascript 1155212936 M * Nam now there is a major movement to using jscript for soooo much 1155212947 M * Hollow yeah.. *sigh* 1155212974 M * Nam hell, even microsoft is talking about making a web based operating system for a product 1155212975 M * Nam hehehe 1155212984 M * Hollow i don't care about microsoft ;) 1155212988 M * Nam i know 1155212996 M * phedny like https://www.online-reserveringen.nl/brs-i/e1-en/new <-- that calendar I made is fast due to javascript 1155213030 M * phedny app is still in development by the way :) 1155213101 M * Nam hehe, check this out... I went even further.. once you selected a day, I made a PHP generated image using gd which displays the hours of a day, you can then click on an area and add time slots for things for the days in the calendar 1155213144 M * Nam looks like you're using div and positioning everything... correct? 1155213154 M * phedny at the moment I am 1155213170 M * Nam div's have become the new way for managing sites as well 1155213170 M * phedny but css is going to be done by someone that can do a much better job on it :) 1155213187 M * phedny for now it's just to display the things i need :) 1155213194 M * Nam myself, I make functional code...
I leave the look up to the graphics people 1155213196 M * Hollow well, i stopped web development some time ago, it just annoys me too much 1155213209 M * phedny Nam: same here 1155213230 M * phedny anyway, I'm going out for a break 1155213245 M * Nam i did too.. because of javascript actually.... i moved into just doing php stuff... but as i learned about the new technologies possible with javascript, man, I sure changed my opinion quickly 1155213261 M * Nam i hated javascript more than anyone else out there at one time 1155213266 M * Nam i couldn't stand it at all 1155213686 M * rgl oh, the busybox init is not running because vserver is not running it as pid 1 :/ 1155213693 M * rgl but how can I force that 1155213694 M * rgl ? 1155213915 M * Nam pid 0 is for the host, and i believe pid 1 is reserved for the monitor 1155213928 M * Hollow Nam: you're talking about xid 1155213931 M * Hollow he talks about pid 1155213933 M * Nam oh 1155213935 M * Nam right 1155213959 M * Hollow rgl: well, util-vserver should set the init pid to 1, which kernel/util-vserver version do you use? 1155214076 M * Nam Hollow: do you know what this means... /dev/pts/1: Operation not permitt/dev/pts/1: Operation not permitted 1155214085 M * Nam i get them when running vserver enter 1155214107 M * Hollow you probably don't have the vlogin patch for util-vserver :) 1155214111 M * Nam i think i asked Bertl before, and it wasn't any big deal regarding the message 1155214147 M * Nam never heard of it 1155214176 M * Hollow it's simple: 1155214187 M * Hollow once you login to the host you get a new pts 1155214204 M * Hollow now, vserver ... enter migrates to the given context and executes /bin/bash 1155214244 M * Hollow i.e.
once you are inside the guest, you cannot reach the pts device on the host, but it still is the pts device for the terminal 1155214259 M * Hollow my vlogin patch creates a new pts and proxies all i/o through it 1155214283 M * Hollow get it here: http://dev.croup.de/repos/gentoo-vps/util-vserver/patches/0.30.210-r18/util-vserver-0.30.210-vlogin.patch 1155214284 M * rgl Hollow, I've added this into apps/init/start.cmd: 1155214285 M * rgl /bin/ash 1155214285 M * rgl -c 1155214285 M * rgl echo $$ 1155214288 M * Nam ah.. do you plan on including it in the util-vserver release any time soon? 1155214301 J * debugger_ Rui@217.129.151.190 1155214304 N * debugger_ rgl_ 1155214313 M * rgl_ and it returns values != 0 :( 1155214318 M * Hollow well, the only one who is able to commit is enrico scholz, and as i said.. util-vserver development is kind of dead 1155214356 M * Hollow that's why we have tons of patches for util-vserver in gentoo 1155214367 M * rgl_ Hollow, 0.30.210 1155214408 M * Hollow and kernel? 1155214415 M * rgl_ 2.16.7 1155214418 M * Hollow iirc there was some initpid fix some time ago 1155214421 M * rgl_ 2.6.16.7 1155214422 M * Hollow hm 1155214449 M * rgl_ crap.. it's 2.6.17.7-vs2.0.2-rc27 1155214464 M * Nam thanks Hollow 1155214624 M * Hollow rgl_: sorry, no idea currently.. 1155214683 M * rgl_ Hollow, what is the expected result of cmd.start? I mean, does that command have to return? or does it only return when the system should be turned off? 1155214770 Q * rgl Ping timeout: 480 seconds 1155214796 M * Hollow rgl_: don't know.. never used cmd.start... have to look at the sources 1155214878 M * rgl_ Hollow, the util-vserver sources, correct? 1155214937 M * Hollow yep 1155215011 M * mnemoc Hollow: hi, any chance of separating the tool to create the vxdb and to set passwords? (to be able to change it, etc...) 1155215070 M * Hollow mnemoc: you can set passwords using the xmlrpc methods..
1155215164 M * mnemoc that gensql is very annoying for automated installations :\ 1155215196 M * Hollow well, you cannot package an sqlite database 1155215225 M * mnemoc but with gensql you can't make that a 'postinstallation' procedure 1155215279 M * mnemoc because of the interactivity 1155215310 M * Hollow well, i could implement some command line switches if that helps you 1155215321 M * Hollow but how do you know the password then? 1155215348 M * mnemoc about the xmlrpc method, what if i lose the password 1155215367 M * mnemoc uhm, what about a bogus initial account just capable of changing the password? :) 1155215375 M * Hollow what a mess 1155215377 M * Hollow :p 1155215378 M * mnemoc yes :p 1155215390 M * Hollow and you should not lose your passwords 1155215392 M * Hollow ;) 1155215426 M * mnemoc ugly generated password, and fs corruption... uh, surprise the vhelper.conf is lost 1155215437 M * mnemoc what was the f* password? :\ 1155215457 M * mnemoc ok ok, backups and blah blah blah 1155215461 M * Hollow well, the admin user can change it quite easily using the methods 1155215472 M * Hollow or you use plain SQL to change it 1155215511 M * Hollow UPDATE user set password="$(echo -n password | sha1sum)" WHERE name="vshelper" | sqlite /var/lib/vcd/vxdb 1155215519 M * Hollow + echo in the front 1155215524 M * mnemoc considering that's supposed to be encrypted, that's the part that would be nice to have a vxdbpasswd or something like that 1155215529 M * mnemoc sha1sum, uhm, ok 1155215555 M * mnemoc gensql is a script?! doh, i didn't check that 1155215563 M * Hollow no, it's C 1155215571 P * Roey Leaving 1155215648 M * mnemoc sqlite3 /var/lib/vcd/vxdb < doh.sql 1155215661 M * mnemoc vxpasswd admin -p bogus 1155215671 M * mnemoc vxpasswd vhelper -p weeee 1155215672 M * mnemoc :p 1155215696 M * mnemoc too messy?
looks more comfortable to me 1155215732 M * Hollow patches are welcome :) 1155215739 M * Hollow but yeah 1155215743 M * Hollow would probably be better 1155215755 M * mnemoc so suggestion accepted? 1155215758 M * Hollow yep 1155215789 M * mnemoc i have two ugly deadlines for tomorrow, after that you will get your patch :) 1155215862 J * pisco ~pampel@p50879D97.dip0.t-ipconnect.de 1155215969 Q * coocoon Quit: KVIrc 3.2.0 'Realia' 1155216340 J * coocoon ~coocoon@p54A06ACB.dip.t-dialin.net 1155216653 J * mef ~mef@targe.CS.Princeton.EDU 1155216659 P * mef 1155216754 M * Hollow mnemoc: great :) you know that the svn repos have changed location? 1155216882 M * mnemoc i had the 'idea' enqueued to be processed somewhere in my brain :p 1155216897 M * Hollow heh, http://svn.linux-vserver.org 1155216920 M * Hollow checkout via http://svn.linux-vserver.org/svn/ 1155216948 M * mnemoc thanks, pasted on my TODO :p 1155216976 M * Hollow i also split some things during the move, i.e. lucid is now a shared lib, the syscall wrappers have moved to libvserver, and vstatd became its own package 1155217028 M * mnemoc oh 1155217132 M * Hollow you can use vstatd with util-vserver too, so we split it 1155217332 Q * pagano Ping timeout: 480 seconds 1155217501 J * pagano ~pagano@131.154.5.20 1155217872 M * rgl_ Hollow, humm are you reimplementing util-vserver ? 1155217915 M * Hollow i am reimplementing userspace tools, yes 1155217923 M * Hollow but they don't have anything in common with util-vserver 1155218034 J * tatiane ~tatiane@201009032054.user.veloxzone.com.br 1155218071 Q * tatiane 1155218078 M * rgl_ Hollow, nice :D 1155218137 M * rgl_ Hollow, is it usable already? like, can it already replace the vserver binary? 1155218155 M * Hollow yes, it is usable, no it cannot replace the vserver command yet 1155218324 M * sid3windr Hollow: vcd is a daemon running on a host speaking xmlrpc to create/control vservers? 1155218333 M * Hollow exactly 1155218342 M * sid3windr eleet.
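Hollow's vxdb one-liner earlier in the conversation has the pipe the wrong way round (he immediately notes the missing echo himself), and `sha1sum` appends a trailing "  -" marker that must be stripped before the hash can be stored. A working sketch of the same idea, using the vxdb path and the user/password column names exactly as they appear in the log (the concrete password is just an example value):

```shell
#!/bin/sh
# Working form of the vxdb password reset sketched above. sha1sum
# prints "<hash>  -", so cut off the filename marker before storing it.
PASS=password   # example value only
HASH=$(printf '%s' "$PASS" | sha1sum | cut -d' ' -f1)

SQL="UPDATE user SET password = '$HASH' WHERE name = 'vshelper';"
echo "$SQL"

# Then feed it to the database (path as given in the log):
#   echo "$SQL" | sqlite /var/lib/vcd/vxdb
```

This is essentially what mnemoc's proposed `vxpasswd` helper would wrap: hash the cleartext, then issue the UPDATE against vxdb.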
1155218351 M * sid3windr was going to try and write something similar myself 1155218355 M * Hollow best to read: http://svn.linux-vserver.org/cgi-bin/viewvc.cgi/vcd/trunk/doc/vcd.spec?view=markup 1155218362 M * sid3windr somewhat disappointed in quality & support of openvcp 1155218641 M * sid3windr Hollow: looks good 1155218667 Q * FireEgl Ping timeout: 480 seconds 1155218676 M * Hollow expect a first release in september 1155218751 M * sid3windr :) 1155218760 M * matti ? 1155218787 M * matti Uh. 1155218996 Q * yarihm Quit: Leaving 1155219022 M * rgl_ Hollow, libvserver looks interesting :) 1155219060 M * rgl_ Hollow, the equivalent of the current vserver binary will be made a utility that calls vcd? 1155219069 M * Hollow rgl_: well, it's probably the most unspectacular part of all the new tools :o 1155219087 M * Hollow rgl_: no, vcc, vcd is the control daemon, whereas vcc is the control client 1155219096 M * Hollow there will be other clients as well 1155219100 M * Hollow e.g. a webinterface 1155219103 M * Hollow or an X11 gui 1155219112 M * Hollow everything with xmlrpc bindings will work 1155219131 M * rgl_ ah, so current vserver is replaced by vcc, which will connect to vcd? 1155219137 M * Hollow exactly 1155219166 M * sid3windr what can it do already at the moment? :) 1155219201 M * Hollow sid3windr: well, complete configuration, start/stop/restart/reboot, create 1155219222 M * Hollow still missing: scriplets, init dependencies, unification, disk limits 1155219235 M * Hollow also a few command line tools are still missing 1155219244 M * Hollow like vdu 1155219258 M * Hollow or showattr/setattr 1155219295 M * |gerrit| vcd seems very promising, I think openvcp becomes obsolete then 1155219299 M * mnemoc OT: what happens on the BARRIER if /vservers/ and /vservers/*/ are mount points? 1155219318 M * Hollow |gerrit|: well, openvcp is for util-vserver, so its place may still be there in the future..
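Since vcd speaks XML-RPC, "everything with xmlrpc bindings will work" means any client that can produce a standard methodCall payload. A hedged sketch of what such a payload looks like on the wire: the method name `vx.start` and the guest-name parameter are hypothetical, invented here for illustration; the real method list lives in the vcd.spec document linked above.

```shell
#!/bin/sh
# Build a minimal XML-RPC methodCall body, as a vcd client (vcc, a web
# interface, an X11 gui, ...) would POST to the daemon. Method name and
# argument below are hypothetical, not taken from vcd.spec.
xmlrpc_call() {
    method=$1; arg=$2
    printf '<?xml version="1.0"?>\n'
    printf '<methodCall><methodName>%s</methodName>\n' "$method"
    printf '  <params><param><value><string>%s</string></value></param></params>\n' "$arg"
    printf '</methodCall>\n'
}

xmlrpc_call vx.start test1
```

The point of the design is exactly what Hollow describes: vcd is "a kind of library", just exposed over XML-RPC instead of as an ELF shared object, so clients in any language can drive it.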
1155219353 M * Hollow but probably most people will use vcd if they want a webinterface then :p 1155219375 A * rgl_ is not really interested in webinterfaces :D 1155219394 M * Hollow rgl_: a lot of people want a webinterface for vservers 1155219418 J * FireEgl ~FireEgl@Atlantica.US 1155219434 M * rgl_ Hollow, I prefer good command line utilities (or libraries) that can be easy scripted :) 1155219455 M * sid3windr yea 1155219460 M * sid3windr I'm in it for the webinterface ;) 1155219479 M * Hollow rgl_: well, ISPs like to give their customers a nice webinterface to manage their guests 1155219619 M * rgl_ Hollow, don't get me wrong, I like them too, but they have to be customizable :) 1155219646 M * Hollow them? you mean the webinterface? 1155219653 M * rgl_ yes 1155219699 M * Hollow well, if customizable == skinnable, then it should be the smallest problem... css is your friend ;) 1155219765 M * rgl_ if the HTML is well done :) 1155219828 M * Hollow i don't code the webinterface anyway 1155219847 M * rgl_ will vcc able to do is work without vcd? 1155219852 M * Hollow no 1155219972 M * matti Hollow: Nice draft. 1155219974 M * rgl_ won't vcd sit idle most of the time? 1155219979 A * matti just finished reading. 1155219981 M * Hollow rgl_: probably 1155219998 M * Hollow matti: thanks :) some parts are still missing, i should update it asap.. 1155220123 M * matti ;-) 1155220243 M * rgl_ Hollow, basically, vcd (at least the XMLRPC Server part) is a wrapper for libvserver? 1155220286 M * matti I think, that libvserver should work as backend library for everything. 1155220287 M * Hollow rgl_: well.. i wouldn't call it a wrapper.. it just uses the libvserver functions... 1155220302 M * Hollow matti: indeed, i'd love util-vserver to be ported to libvserver 1155220310 M * matti Hollow: Well. 
1155220326 M * Hollow but i won't do it, because the libvserver shipped with util-vserver is not just a syscall wrapper 1155220339 M * Hollow it is utility, syscall wrapper and util-vserver library in one 1155220344 M * matti Hollow: With a strong backend library that will implement all hooks well. 1155220360 M * matti Hollow: It should be possible to build any tools the user wishes to have. 1155220374 M * matti Hollow: ncurses, qt, gtk, web gui. 1155220403 M * Hollow sure, you can build directly on libvserver, but you have to take care of all the ugly stuff yourself 1155220406 M * rgl_ Hollow, what I meant was, using libvserver one can do everything that is available on the xmlrpc interface? 1155220414 M * Hollow rgl_: no 1155220419 M * matti Hollow: Ugly stuff? 1155220430 M * Hollow libvserver is a very basic library for doing the vserver syscall 1155220449 M * rgl_ Hollow, humm, so why not make vcd a library? 1155220457 M * Hollow matti: using all the libvserver functions to start a vserver is "ugly" ;) 1155220482 M * matti Hollow: ;p 1155220492 M * matti Hollow: Only for brave people ;p 1155220495 M * Hollow heh 1155220511 M * Hollow rgl_: well, vcd is a kind of library 1155220516 M * Hollow just not an ELF one ;) 1155220519 M * Hollow but an XMLRPC one 1155220538 M * rgl_ and some stats gathering stuff ;) 1155220542 Q * Viper0482 Read error: Connection reset by peer 1155220552 M * Hollow rgl_: that's what vstatd is for 1155220560 J * Roey ~katz@h-69-3-4-130.mclnva23.covad.net 1155220578 J * Viper0482 ~Viper0482@p54975C48.dip.t-dialin.net 1155220614 M * rgl_ ah ok :) 1155220647 M * rgl_ anyways, I would rip the xmlrpc handling part, and make a library out of it :) 1155220906 J * insomnia1 ~insomniac@slackware.it 1155220906 Q * insomniac Read error: Connection reset by peer 1155221422 J * schimmi ~sts@port-212-202-73-176.dynamic.qsc.de 1155221502 J * debugger_ Rui@217.129.151.190 1155221617 Q * renihs Remote host closed the connection 1155221659 M * Hollow rgl_:
will probably not happen :) 1155221945 Q * rgl_ Ping timeout: 480 seconds 1155222655 Q * michal` Ping timeout: 480 seconds 1155223250 J * newz2000 ~matt@12-216-147-124.client.mchsi.com 1155223338 J * michal` ~michal@www.rsbac.org 1155223545 J * stefani ~stefani@tsipoor.banerian.org 1155223846 M * newz2000 my employer is looking for a few kernel devs. I know there are a few good ones who hang out here so just wanted to help get the word out... 1155223861 M * newz2000 I just posted the job listings here: http://www.ubuntu.com/employment 1155224155 J * mef ~mef@targe.CS.Princeton.EDU 1155224236 M * waldi ah, ben needs some help ... 1155224417 M * mnemoc :) 1155225182 P * Viper0482 und weg 1155225552 J * hallyn ~xa@adsl-75-21-68-95.dsl.chcgil.sbcglobal.net 1155225642 J * Viper0482 ~Viper0482@p54975C48.dip.t-dialin.net 1155225662 Q * Viper0482 1155225730 J * Viper0482 ~Viper0482@p54975C48.dip.t-dialin.net 1155225781 Q * Viper0482 1155225801 J * Viper0482 ~Viper0482@p54975C48.dip.t-dialin.net 1155225827 J * bonbons ~bonbons@83.222.36.236 1155225882 Q * bubulak Ping timeout: 480 seconds 1155225925 J * bubulak ~bubulak@whisky.pendo.sk 1155225964 J * ekc ~nospam@netblock-66-245-252-180.dslextreme.com 1155226126 Q * Viper0482 Quit: one day, i'll find this peer guy and then i'll reset his connection!! 
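The vcd/vcc split discussed above (a control daemon speaking XML-RPC, driven by any client with XML-RPC bindings, be it a CLI, web interface, or X11 GUI) can be sketched as follows. This is only an illustration of the pattern: the method name `vx.status` and its return shape are hypothetical, not taken from the real vcd interface, which is defined in the vcd.spec document linked earlier.

```python
# Sketch of a vcd-style control daemon plus a vcc-style client, using
# Python's stdlib XML-RPC support. The "vx.status" method and its return
# value are made up for illustration; a real vcd would call into the
# kernel via libvserver here.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def vx_status(name):
    # Placeholder: a real daemon would query guest state from the kernel.
    return {"name": name, "state": "running"}

# The daemon: bind an ephemeral local port and serve in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(vx_status, "vx.status")
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client: anything with XML-RPC bindings can drive the daemon,
# which is why a web interface or GUI can reuse the same backend.
proxy = xmlrpc.client.ServerProxy("http://%s:%d/" % (host, port))
status = proxy.vx.status("guest1")
print(status)
```

The point of the design is that vcd, not the client, owns all the "ugly" libvserver plumbing; clients stay thin.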
1155226150 Q * ||Cobra|| Remote host closed the connection 1155226364 Q * ekc 1155226365 Q * debugger_ Quit: Fui embora 1155226446 Q * Greek0 Quit: leaving 1155226474 J * Greek0 ~greek0@85.255.145.201 1155226692 J * ekc ~ekc@netblock-66-245-252-180.dslextreme.com 1155227468 P * newz2000 Talk to you later 1155228918 J * Viper0482 ~Viper0482@p54975C48.dip.t-dialin.net 1155229155 Q * michal` Ping timeout: 480 seconds 1155229644 J * michal` ~michal@www.rsbac.org 1155229717 Q * DreamerC Quit: leaving 1155229760 J * DreamerC ~dreamerc@59-112-2-131.dynamic.hinet.net 1155229805 Q * DreamerC 1155229827 J * DreamerC ~dreamerc@59-112-2-131.dynamic.hinet.net 1155230535 Q * ekc Remote host closed the connection 1155230901 Q * DreamerC Quit: leaving 1155230974 J * DreamerC ~dreamerc@59-112-2-131.dynamic.hinet.net 1155231170 J * meandtheshell ~markus@85-124-233-39.work.xdsl-line.inode.at 1155232010 M * Viper0482 hi 1155232060 M * Viper0482 after updateing my kernel i got some errors wenn i stop my vservers http://nopaste.php-q.net/231579 1155232082 M * brc How much memory unifying 20 vservers (same install) would give me ? Anyone have an idea? 1155232319 M * mnemoc memory? 1155232621 Q * DreamerC Quit: leaving 1155232657 J * DreamerC ~dreamerc@59-112-2-131.dynamic.hinet.net 1155232834 Q * DreamerC 1155232885 J * DreamerC ~dreamerc@59-112-2-131.dynamic.hinet.net 1155233048 M * ebiederm There would be more sharing in memory... 1155233076 M * ebiederm So you would get more page cache hits... 1155233238 M * doener mnemoc: unifying means not only the same library version, but actually the same library file is used, thus it's loaded into memory only once 1155233253 M * doener same for executables 1155233271 M * Greek0 ebiederm: hard to say IMHO, pretty much depends what you do in the vservers. do you use the same binaries/libs in all of them? 1155233279 M * mnemoc doener: oh, i had not think about that sideefect :p 1155233307 M * Greek0 e.g. 
20 www servers using apache or different programs in every vserver.. 1155233351 M * doener Greek0: of course, unifying a bunch of totally different vservers is pretty pointless 1155233369 M * ebiederm Greek0: At least glibc and it's dependencies should wind up shared. 1155233371 M * Greek0 I don't think you'd save _that_ much, since you can only share .text (i.e. program code), .rodata and stuff like that. this usually isn't a huge part of the memory footprint of a program 1155233412 M * Greek0 doener: you could easily unify a www server vserver, a mail server one, etc, and you'd still save a lot of space.. 1155233423 M * Greek0 (hard-disk wise) 1155233492 M * doener that would mean that they have stuff in common which probably includes at least some libraries/executables -> memory saved 1155233503 M * doener as ebiederm glibc and such are good candidates 1155233507 M * doener +said 1155233587 M * Greek0 well, glibc has about 1Mb .text, if that's all shared (which probably works) you save 1 mb per vserver 1155233622 M * Greek0 OTOH most of that probably isn't paged in anyway, if you don't use everything. 1155233648 J * s0undt3ch_ hlthsmi@bl7-253-243.dsl.telepac.pt 1155233664 M * ebiederm I suspect the real savings is in disk space where you save 500MB+ per vserver (assuming a fairly complete distro install) 1155233749 M * ebiederm brc: That answer your question? 1155233759 M * Greek0 mm. I guess if you run apache in every single vserver you might save something like 50 mb if you're lucky. 1155233847 M * ebiederm Greek0: The real win is likely in responsive ness as you won't have to page in portions of your .text segment if you haven't been scheduled for a while. 1155233948 M * Greek0 depends on your memory load. if memory isn't scarce nothing is paged out, even if you weren't running for some time 1155233964 M * brc all vservers have the same binaries and libs 1155234024 M * brc and rus the same stuff: apache, postfix, ftp, etc etc. 
1155234031 M * ebiederm Greek0: Yes. But with aggressive disk caching usually memory gets filled up, and clean disk pages are dropped from memory. 1155234057 M * ebiederm Greek0: So you are frequently paging even when you aren't using swap. 1155234077 Q * s0undt3ch Ping timeout: 480 seconds 1155234078 N * s0undt3ch_ s0undt3ch 1155234268 M * Greek0 ebiederm: ok, donno what gets dropped first, io caches or clean disk-backed memory.. 1155234316 M * ebiederm Greek0: same thing. 1155234358 M * Greek0 donno 1155234453 J * ekc ~ekc@netblock-66-245-252-180.dslextreme.com 1155234570 Q * DreamerC Quit: leaving 1155234592 J * DreamerC ~dreamerc@59-112-2-131.dynamic.hinet.net 1155234652 Q * DreamerC 1155234691 N * Belu_zZz Belu 1155234866 J * DreamerC ~dreamerc@59-112-2-131.dynamic.hinet.net 1155234911 Q * DreamerC 1155234918 J * DreamerC ~dreamerc@59-112-2-131.dynamic.hinet.net 1155234921 Q * DreamerC 1155234936 J * DreamerC ~dreamerc@59-112-2-131.dynamic.hinet.net 1155234959 M * ebiederm You might get a performance boost from shared caches on your text segments as well. 1155235145 Q * DreamerC 1155235331 J * DreamerC ~dreamerc@59-112-2-131.dynamic.hinet.net 1155235512 N * Bertl_oO Bertl 1155235518 M * Bertl evening folks! 1155235531 M * ebiederm Evening Bertl. 1155235539 M * ekc hello Bertl. 1155235572 M * Bertl hey ebiederm! LTNS 1155235622 M * coocoon morning bertl ;-) 1155235633 M * ebiederm yeah. I missed you at kernel summit and OLS ;) I'm sorry the organizers couldn't figure out how to sponsor you. 1155235690 M * Bertl ebiederm: well, that was not the problem, the kernel folks didn't invite me ... 1155235751 M * Bertl (obviously they didn't consider my presence desirable :) 1155235826 M * ebiederm I think Sam Villian was considered the better candidate because he was actually trying to get stuff merged into the kernel. 1155235839 M * ebiederm But then he couldn't make it either. A real mess from that side.
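The unification being discussed above boils down to replacing identical files across guest trees with hard links to a single inode: the data then exists once on disk, and because mapped file pages are keyed by inode, the shared `.text`/`.rodata` of common binaries and libraries occupies page cache only once. A minimal sketch of that core operation (the real tools also compare metadata and protect the shared files, e.g. by marking them immutable):

```python
# Toy "unification": if two files have identical contents, replace the
# second with a hard link to the first, so both guests share one inode.
import filecmp
import os
import tempfile

def unify(path_a, path_b):
    """Hard-link path_b to path_a if their contents are byte-identical."""
    if filecmp.cmp(path_a, path_b, shallow=False):
        os.unlink(path_b)
        os.link(path_a, path_b)

# Simulate the same library installed in two guest trees.
base = tempfile.mkdtemp()
a = os.path.join(base, "libc-guest1.so")
b = os.path.join(base, "libc-guest2.so")
for p in (a, b):
    with open(p, "wb") as f:
        f.write(b"\x7fELF fake library contents")

unify(a, b)
# Both names now point at one inode: disk blocks and, once mapped,
# page-cache pages are shared between the "guests".
print(os.stat(a).st_ino == os.stat(b).st_ino)
```

This matches the trade-off in the discussion: the guaranteed win is disk space (hundreds of MB per guest for a full distro install); the memory win depends on how much of the shared text actually gets paged in.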
1155235872 M * Bertl well, thats fine with me, I'm trying to get stuff into mainline since a few years now (see BME) 1155235934 M * ebiederm Bertl: Sorry not stuff in general but container type stuff. 1155235963 M * ebiederm Not that BME isn't exactly :) 1155235983 M * ebiederm But that was the perception of the relevant person on the comittie. 1155235992 M * Bertl ah, yeah, well, not my problem actually ... 1155236046 M * Bertl ebiederm: so how is it going? 1155236073 M * ebiederm I'm in the middle of working of a set of pid namespace patches and should be pushing them to -mm in the next couple of days. 1155236104 M * ebiederm The general sentiment from the conference is all in favor of merging this stuff as long as it can be done with 1155236106 M * ebiederm reasonable patches. 1155236124 M * ebiederm So I have finally gotten past my other distractions for a bit so I can get something done :) 1155236144 M * dhansen ebiederm: if you have any large janitorial-style patches, suka should be able to help you out a bit if you want to offload some work 1155236158 M * dhansen was the is_init() one helpful to see updated? 1155236184 M * ebiederm dhansen: Reasonable at least. I had forgotten about that corner of the problem. :) 1155236225 M * ebiederm dhansen: The big thing on a janotial side really looks like killing kernel_thread users and making them kthread users. 1155236264 M * ebiederm Until we get that we have the possibility of getting kernel processes trapped inside of a container, which is no fun. 1155236266 M * Bertl suggested that a long time ago 1155236269 M * dhansen ebiederm: yup. suka has a long list of those, and we plan to trickle them in over time 1155236295 M * dhansen but, I don't think they're super-high priority. If we run into any of them causing problems we can patch them right away/// 1155236306 M * Bertl (when we first hit the guest spawns kernel thread issue :) 1155236326 M * ebiederm Bertl: That was the other thing that was said. 
First get as many maintenance style patches as we can to lay the ground work. 1155236375 M * ebiederm So the patches to do real stuff get smaller. 1155236418 M * Bertl well, the problem I see there is, that probably no 'cleanup' will get in with the argument, we want to use/change that later ... (as usual) no? 1155236450 M * Bertl i.e. if you cannot give a good reason _right_now_ the cleanup is not accepted 1155236474 M * dhansen Bertl: I think just saying 'containers' is a good enough reason for most things at this point 1155236503 M * dhansen I did that with memory hotplug for like 6 months. "I need this cleanup for memory hotplug, please" :) 1155236503 M * ebiederm The decision has been made to do containers: And the kernel_thread kthread api change doesn't even need that. 1155236512 M * ebiederm That is the biggest cleanup on my radar. 1155236513 M * Bertl well, we'll see, I'll try to push a few things I have on my list, but I'm pretty sure they will be rejected 1155236530 M * ebiederm What do you have on your list right now? 1155236542 M * Hollow accounting syscalls 1155236543 M * Hollow ;) 1155236550 M * Bertl :) 1155236551 M * ebiederm dhansen: I would say saying containers is enough to get things considered. 1155236566 M * Bertl ebiederm: some socket cleanups (flags and similar) 1155236567 M * doener Bertl: I can go and move the changes a few lines up again ;) (-> kernel_thread mount problem patch) 1155236590 M * dhansen Bertl: you just need to remember that it might take some serious changes to what you're trying to push. The r/o bind mounts have like 2 lines of code in them from what I started with :) 1155236604 M * ebiederm As for accounting syscalls do you mean user bean counters and ckrm type things? 1155236608 M * Bertl ebiederm: cleanup regarding filesystem magic numbers and such 1155236691 M * ebiederm Bertl: The pieces sound sane but I don't have enough context to really know. 
1155236709 M * Bertl np, I'll prepare something 1155236744 M * Bertl I'm used to getting rejected and finding the changes a few months later in mainline :) 1155236845 M * Bertl ebiederm: what about quota in general? 1155236882 M * Bertl (filesystem quota I mean) 1155236898 M * dhansen Bertl: hch and viro were really helpful in steering me toward getting something that was mergable. Have you been able to have them take a look? 1155236932 M * Bertl dhansen: nope, it seems I can work with hch but viro just doesn't reply to my questions ... 1155236938 M * ebiederm What is the issue with quota? Sorry I have largely been ignoring the resource management side. 1155236965 M * Bertl ebiederm: well, quota is per user and group, for each filesystem 1155236970 M * dhansen Bertl: yeah, he can be kinda quiet. I had to send to akpm and ask for inclusion before I got anything ;) 1155236996 M * Bertl ebiederm: and the current design cannot cope with duplicates like having separate users per guest 1155237026 M * ebiederm Bertl: Got it. Yes. Part of the question there is how we wind up mapping multiple users to the filesystem. 1155237029 M * Bertl dhansen: well, we are not in kindergarden ... 1155237045 M * dhansen Bertl: are too! 1155237047 M * dhansen :) 1155237047 M * Bertl ebiederm: I did a modification (back in 2.4 :) which allows that 1155237055 M * Bertl dhansen: are not! :) 1155237077 M * Bertl ebiederm: basically it changes the current quota system to so called quota hashes 1155237085 M * ebiederm What was that book. I learned all I really needed to know in kindergarten? 1155237126 M * Bertl ebiederm: i.e. 
you do not have a single quota struct per superblock, but a (number of) hashes isntead 1155237187 M * Bertl ebiederm: this was extensively tested on 2.4 and used there for a longer time, it is currently ported to 2.6 basically, but I wouldn't care to do a more generic approach there 1155237223 M * Bertl ebiederm: this could, if done properly, allow generic, context based quota 1155237289 M * Bertl but let me put it this way, I can save the time of kicking it around and submitting it over and over when I already know that there is no mainline interest (and make it better but more linux-vserver specific instead) 1155237403 M * ebiederm Bertl: My gut feel is that we need to get to the user namespace before that is interesting. Mostly because it assumes a certain way of mapping different users to the filesystem that I'm not convinced we need to implement. 1155237403 M * dhansen Bertl: what problem does it solve? 1155237427 M * dhansen vservers with several filesystems, and wanting to account for them in a collective fashion? 1155237444 M * Bertl dhansen: no, guest on a single filesystem, wanting quota :) 1155237462 Q * |gerrit| Quit: KVIrc 3.2.0 'Realia' 1155237479 M * dhansen oh, and the quota being attached to the user itself is what kills you? 1155237495 M * dhansen yeah, user namespace sounds like the first step there 1155237499 M * Bertl dhansen: no, the only having a single quota structure per superblock kills me :) 1155237529 M * Bertl and trust me, that will not change with the namespace stuff 1155237542 M * Bertl unless you start handing out different superblocks per guest 1155237567 M * dhansen as ebiederm says, that should be OK. The user namespace stuff should interact with the filesystem in a way that you say 'this user is this on-disk uid' 1155237596 M * Bertl that's not a problem, how to handle uid=7 for xid=1 and 2 regarding quota? 1155237621 M * ebiederm Bertl: if we successfully mapped uids in the guest into different uids on the host. 
So the filesystem would only ever see a single set of uids. 1155237645 M * Bertl ebiederm: I do that with the U/GID mappings already 1155237656 M * ebiederm Bertl: Is this a problem when you put the high bits of xid into the uid to store it on disk? 1155237661 M * Bertl ebiederm: but that doesn't solve the quota issue 1155237680 M * ebiederm Ok. So we don't have a uid issue but we have a quota issue. 1155237689 M * Bertl ebiederm: look, that's all stuff we've been doing for ages (the xid persistence stuff) 1155237712 M * Bertl ebiederm: you can select between 5 different mappings for most linux filesystems 1155237721 M * ebiederm Bertl: Sure. I'm not certain about the implementation but I think the concept is sane. 1155237729 M * Bertl so xid file tagging is perfectly fine here 1155237743 M * Bertl ebiederm: nevertheless, this doesn't help the quota issue 1155237786 M * Bertl with the current implementation you can only have a single set of quota files, a single quota entry 1155237830 M * ebiederm Which leads to the problem that guest sysadmins cannot set independent quotas on their filesystems. 1155237836 M * Bertl it's a pity that you didn't even look at the config options of linux-vserver yet ... 1155237851 M * Bertl (otherwise you'd know the various taggings) 1155237872 M * Bertl ebiederm: which leads to the problem that guests cannot do anything with the quota 1155237879 M * Bertl except for hitting the limit :) 1155237905 M * ebiederm Bertl: That makes sense. 1155237924 M * ebiederm So the trick is you want multiple quota administrative domains. 1155237942 M * Bertl yep, we call it quota hashes, you basically add them on demand 1155237954 M * Bertl i.e. you add one per (superblock, xid) 1155237987 M * Bertl you can then quota-on with that hash and have your quota files as usual 1155238046 J * rgl Rui@217.129.151.190 1155238047 M * rgl hi 1155238052 M * Bertl hey rgl! 1155238119 M * rgl hey :D 1155238175 M * dhansen Bertl: it sounds interesting.
We'd certainly need that eventually for containers. It might be worth posting it. 1155238217 M * rgl I'm trying to get the quota of a user using quotactl(2), but how do I get to the device from a path? 1155238251 M * rgl do I have to iterate over mtab to get the device, or is there another way? 1155238271 J * debugger_ Rui@217.129.151.190 1155238274 M * ebiederm At the moment it does seem to rely on the concept of a filesystem supporting multiple user namespaces. Which means that it probably won't make sense until we start sorting through the user namespace. 1155238359 M * ebiederm The problem you are solving does seem to need to be solved. Delegating quota administration responsibility. 1155238403 M * dhansen ebiederm: yup, but there might be another way to do it, and it is best to know that earlier than later 1155238583 M * Bertl rgl: stat should give you that info 1155238601 M * debugger_ nick rgl 1155238607 N * debugger_ rgl_ 1155238716 M * Bertl ebiederm: well, I do not really see how one is related to the other, but I'm fine with that :) 1155238735 Q * rgl Ping timeout: 480 seconds 1155238748 M * rgl_ Bertl, but quotactl expects a string with the device name, and stat gives the device number 1155238750 M * Bertl ebiederm: you have the information, and you can do with it what you think is best ... 1155238762 M * ebiederm Bertl: Sure. 1155238798 M * ebiederm Bertl: My core recommendation is simply since it is a big chunk of work hold off for a bit and get the easier things first. 1155238806 M * ebiederm And see how it goes. 1155238881 M * ebiederm If your implementation really doesn't care about xids and you just happen to allocate a different one for different xids it is probably something that could be pushed.
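On rgl's quotactl question above (quotactl(2) wants a block device *name*, while stat(2) yields only the device *number*): one common approach is exactly the mtab iteration he suspects, done here via `/proc/mounts` on Linux. This is a sketch, not the only way; it takes the first mount whose mountpoint has the same `st_dev` as the queried path.

```python
# Resolve the block device name backing a path, as needed by quotactl(2):
# stat the path for its device number, then scan /proc/mounts for a
# mountpoint on the same device and return that entry's source field.
import os

def device_for(path):
    dev = os.stat(path).st_dev
    with open("/proc/mounts") as mounts:
        for line in mounts:
            source, mountpoint = line.split()[:2]
            try:
                if os.stat(mountpoint).st_dev == dev:
                    return source
            except OSError:
                continue  # skip mountpoints we cannot stat
    return None

print(device_for("/"))
```

Caveats: `/proc/mounts` octal-escapes spaces in paths (`\040`), and virtual filesystems report pseudo-sources like `tmpfs`, so a robust version would unescape fields and prefer entries whose source starts with `/`.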
1155238931 M * Bertl the hashes use the xid for getting the right one, but you will have something similar for namespaces too 1155238939 M * ebiederm To a large extent inexpensive resource limits are a hard problem, and not really one I suffer from so I have been concentrating on getting the other pieces general purpose. 1155238964 Q * ekc Ping timeout: 480 seconds 1155238977 M * Bertl aside from that, it is just a generalization of quota 1155239010 M * Bertl i.e. it can just as well be used to do per directory quota if you find the time to assign directory ids, or per vfsmount if you like :) 1155239062 M * ebiederm If you were going to make it general per vfsmount would probably be the way to go. 1155239146 M * Bertl heh, I know, but unfortunately it is not possible to make it generic and per vfsmount (because of the way quota works, i.e. inodes and such) but you can have quota per arbitrary context, and the context can depend on the vfsmount 1155239219 M * Bertl of course, you have to ensure (administratively) that the mounts do not share files, or if, that they belong to the super/shared context 1155239252 M * Bertl i.e. are not accounted (as long as they are shared) to quota at lls 1155239265 M * Bertl s/lls/all/ 1155239276 Q * shedi Read error: Connection reset by peer 1155239354 M * ebiederm Which makes this a truly sticky problem. 1155239382 M * ebiederm The limitations you mention suggest to me that you don't quite handle the general case. 1155239406 M * ebiederm And sometimes handling the general case requires major redesign. 1155239425 M * Bertl like switching from inodes to something else? 1155239469 M * ebiederm Possibly. 1155239479 M * Bertl look, the limitations are intrinsic with unix (inode based) filesystems with the possibility for hardlinks 1155239484 M * ebiederm I really don't have time to do this part justice until I get some of the other pieces finished. 1155239515 M * Bertl good, have fun! 1155239794 A * Belu is away (I'll be back later...)
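The "quota hash" idea Bertl describes above — one quota domain per (superblock, xid) added on demand, instead of a single quota structure per superblock — can be modeled as simple bookkeeping. The following toy model only illustrates why per-(superblock, xid) keying lets uid 7 in two guests carry independent quotas; all names and the per-uid structure are illustrative, not the kernel implementation.

```python
# Toy model of per-(superblock, xid) quota domains ("quota hashes"):
# the key insight is that keying by (sb, xid) instead of just sb gives
# each guest on a shared filesystem its own administrable quota set.
quota_hash = {}  # (superblock, xid) -> {uid: {"used": int, "limit": int}}

def quota_on(sb, xid):
    """Add a quota domain on demand, like quota-on with a given hash."""
    quota_hash.setdefault((sb, xid), {})

def charge(sb, xid, uid, blocks):
    """Account blocks to a uid within its guest's quota domain."""
    dom = quota_hash[(sb, xid)]
    entry = dom.setdefault(uid, {"used": 0, "limit": 1000})
    if entry["used"] + blocks > entry["limit"]:
        raise OSError("EDQUOT")  # over quota
    entry["used"] += blocks

# uid 7 in guest (xid) 1 and uid 7 in guest 2 no longer collide:
quota_on("sda1", 1)
quota_on("sda1", 2)
charge("sda1", 1, 7, 900)
charge("sda1", 2, 7, 900)  # fine: a separate quota domain
```

The administrative caveat from the discussion still applies: if two contexts can reach the same inode (hardlinks, shared mounts), the accounting must push such files into a shared context or skip them, since per-inode charging cannot be split.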
1155239794 N * Belu Belu_zZz 1155240042 J * ekc ~ekc@netblock-66-245-252-180.dslextreme.com 1155240478 M * ekc is there a seperate fill-rate and fill-interval for idle time? If so, is that set per-guest or for the entire system? 1155240516 M * Bertl ekc: yes, there is, and it is per guest 1155240548 M * ekc should I use vsched-0.2 to set idle time? 1155240589 M * Bertl yes, either that or the vcmd hack tool 1155240612 M * ekc ok. i'm looking at vsched-0.2 options. What is unhold minimum? 1155240711 M * Bertl that is what you know as min fil, similar size is the max 1155240797 M * ekc ok. why is there both a context id parameter and a bucket id parameter? Can there be more than 1 bucket per guest? 1155240816 P * coocoon So Long, and Thanks for All the Fish! 1155240944 M * Bertl ekc: yes, the bucket was generalized, currently 'other' buckets are not used, but that might change in the future 1155240948 J * coocoon ~coocoon@p54A05BC7.dip.t-dialin.net 1155240961 Q * rgl_ Quit: Fui embora 1155241044 M * ekc and how about cpuid? Do I have to set a seperate bucket per cpu per vserver? Or will one tick be added to the bucket per cpu? 1155241058 Q * Viper0482 Remote host closed the connection 1155241072 M * Bertl the new scheduler uses per cpu buckets, but you can set all buckets at once 1155241084 M * Bertl or if you prefer, each one separately 1155241111 M * ekc if I set all buckets at once, will the scheduler automatically load-balance processes across the cpu's? 1155242136 M * Bertl yes 1155242673 M * ekc When exactly is idle time advanced? If A is idle and B is hogging CPU, would the idle bucket for A advance and the regular bucket for B advance? 
1155243196 Q * bonbons Quit: Leaving 1155243293 M * Bertl ekc: no, the idle time is only considered when the idle task _would_ be scheduled 1155243296 Q * hallyn Quit: leaving 1155243324 M * Bertl instead of going idle (on that cpu) the scheduler makes a kind of time-warp 1155243347 M * Bertl advancing 'idle-time' until a guest can be scheduled 1155243536 M * ekc so, the scheduler advances idle time for all the guests that have idle-time (option -i in vsched-0.2) enabled? 1155243596 M * Bertl basically it advances idle time for all guests, but guest which have idle time set do benefit from that (i.e. they get tokens) 1155243919 M * ekc so, how does idle time help me to dynamically adjust hard scheduling so that whenever a new guest is started, all running guests share equally the CPU time? 1155243977 M * Bertl simple, just assign a 'minimal' but unrealistic amount by default (i.e. 1/1000 th for example) 1155244004 M * Bertl and then set the idle amount to something like 1/2 or 1/3 for all of them 1155244042 M * Bertl they will all get the same 'basic' amount (1/1000) and can use the idle amount when the cpu would go idle 1155244068 M * Bertl this results in all guests getting the same amount of tokens, self regulating the assigned cpu 1155244268 M * Bertl I agree it's not obvious at first glance, we had some hefty discussions in Princeton what scenarios this idle time skipping can fulfill ... 1155244445 Q * ekc Ping timeout: 480 seconds 1155244496 J * ekc ~ekc@netblock-66-245-252-180.dslextreme.com 1155244504 M * ekc argh. irc clients keeps dropping 1155244581 M * ekc let me see if i got this. the unrealistic fillrate/interval ratio of 1/1000 makes the cpu go idle. But, the scheduler preempts the idle task and fills the idle bucket for all active guests based on fillrate_idle/interval_idle. the guests use their idle buckets until they are all depleted, the idle task is scheduled but preempted, and the process repeats. is that right? 
1155244710 M * ekc is there any danger of this causing excessive context switching? 1155244919 M * Bertl almost, it doesn't start or preempt the idle task 1155244936 M * Bertl when the scheduler decides that the cpu _should_ go idle 1155244946 Q * mef Quit: Download Gaim: http://gaim.sourceforge.net/ 1155244950 Q * brc Ping timeout: 480 seconds 1155244961 M * Bertl then the idle time kicks in, and reschedules a new guest/task (if there are tokens avail) 1155245018 M * Bertl so, the scheduling behaviour is not really changed by that, and it doesn't suffer from excessive rescheduling either, because the 'idle-time' jump fills the buckets properly 1155245191 M * ekc ah. what about the 1/2, 1/3, etc.. fillrate_idle/interval_idle values? Don't they need to be more like 10/20 to make the scheduling smoother? or, is the ratio all that matters? 1155245412 M * Bertl yes, that is similar to the 'normal' ratios 1155245446 M * Bertl i.e. you can make the scheduling smoother (with lower values) and 'better' with higer ones 1155245453 N * Nam Nam-brb 1155245468 M * Bertl smoother means more scheduling overhead (i.e. probably not what you want) 1155245483 M * ekc so higher values are favorable for batch processing? and lower values is better for interactive processes? 1155245489 J * nammie ~nam@S0106001195551ff0.va.shawcable.net 1155245504 M * Bertl ekc: yes, while the typical scheduler decisions help there too 1155245507 N * nammie Nam 1155245586 M * ekc to set the idle time values, do I just append the '-i' option to vsched? like so: ./vsched -x 11 -R 10 -I 30 -M 10 -S 1000 -i 1155245614 M * ekc because, when i tried that /proc/virtual/11/sched did not contain info about idle time scheduling 1155245615 M * Bertl yes, and no, you can set separate values, so you repeat the commands 1155245646 M * Bertl check the source code, it probably explains better what I mean 1155245664 M * ekc ok 1155245786 M * ekc got it. 
the only idle-time-specific parameters are fill-rate and fill-interval. 1155245806 M * Bertl yep 1155245929 M * ekc i'm assuming you picked 1/1000 because HZ=1000. How can I check what/where HZ is defined? is that architecture specific? 1155245937 Q * Nam-brb Ping timeout: 480 seconds 1155245956 M * Bertl it's hard to check that, except from the (kernel) config 1155245986 M * Bertl but it's a good assumption because you probably won't have more than 1000 guests, and HZ will not be higher than that either 1155246063 M * ekc 1000 is the upper-limit on HZ? i'm checking kernel .config now 1155246342 M * ekc yup. param.h has HZ set to 1000 on x86_64 1155246537 M * ekc doesn't xen use HZ=100? Why the difference between xen and vserver? Does having a higher HZ allow vserver to host more active containers concurrently? 1155246570 M * Bertl we do not impose any restrictions to your choice :) 1155246581 M * Bertl i.e. if you want HZ=100 then just select that :) 1155246638 M * ekc i trust the vserver defaults implicitly :) just wondering what the tradeoffs are between different xen values 1155246661 M * Bertl well, for a server I usually suggest to use the lowest possible value 1155246700 J * shedi ~siggi@inferno.lhi.is 1155246701 M * Bertl of course, with a higher value you will get 'smoother' scheduling for a higher number of tasks, but for a slightly increased scheduling overhead 1155246874 J * brc_ bruce@201.19.206.230 1155246881 M * ekc skipping back to your earlier point. what would happen if I had > 1000 guests and used 1/1000 for fillrate/interval 1155247057 M * Bertl then they would get too many tokens in case all 1000 are running at once 1155247083 M * Bertl i.e. no restrictions would apply and the token buckets would start filling up (basically harmless) 1155247352 M * ekc I was reading that irc thread about adding containers to the kernel. What's the timetable for getting complete vserver-like functionality into the mainline? 
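Bertl's self-regulating scheme above — give every guest an unrealistically small base fill (say 1 token per 1000 ticks) plus an idle fill of 1/2 or so, and let the "idle-time jump" hand out tokens whenever the CPU would otherwise go idle — can be simulated in a few lines. This is a simplified model of the token-bucket behavior being described, not the kernel scheduler: bucket maximums, per-cpu buckets, and the exact idle-time advance are omitted.

```python
# Simplified simulation of hard-scheduling token buckets with idle time:
# each guest gets a tiny guaranteed fill, and when no guest has tokens
# (the cpu *would* go idle) an idle-time "jump" refills the buckets, so
# N active guests end up splitting the cpu roughly evenly.
class Guest:
    def __init__(self):
        self.fill = (1, 1000)    # guaranteed: 1 token per 1000 ticks
        self.idle_fill = (1, 2)  # generous idle-time ratio (1/2)
        self.tokens = 0

def tick(guests, t):
    # normal fill at each guest's fill interval
    for g in guests:
        rate, interval = g.fill
        if t % interval == 0:
            g.tokens += rate
    runnable = [g for g in guests if g.tokens > 0]
    if not runnable:
        # idle-time jump: instead of idling, advance "idle time" and
        # hand out idle tokens (simplified to one idle fill per jump)
        for g in guests:
            g.tokens += g.idle_fill[0]
        runnable = [g for g in guests if g.tokens > 0]
    chosen = runnable[0]
    chosen.tokens -= 1  # running for one tick consumes one token
    return chosen

guests = [Guest() for _ in range(3)]
ran = [tick(guests, t) for t in range(3000)]
shares = [ran.count(g) / len(ran) for g in guests]
print(shares)  # each guest gets roughly a third of the cpu
```

Starting a fourth guest needs no retuning: the idle jump simply refills four buckets instead of three, and the shares settle near 1/4 each — the self-regulation Bertl mentions.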
1155247371 M * Bertl you have to ask ebiederm for that ... 1155247394 M * Bertl but my guess would be that it takes at least 2-3 years to have that 1155247416 M * ekc ok. i won't hold my breath :) 1155247441 M * Bertl but as I already said, Linux-VServer will use the merged parts when they make sense 1155247487 M * ebiederm Currently I expect we are within a couple of months of having something useful in the kernel. But getting all of the resource limits and everything else will take longer. 1155247507 M * ekc ok. good to know 1155247584 M * ekc bertl: thanks for helping me understand idle-time. it's very subtle, but very useful. now, on to some experimenting 1155247691 M * ebiederm Bertl on the side of usefulness. How does the stuff that has made it into -mm look useful from your perspective? 1155247730 M * ebiederm s/How does/Does/ 1155248007 M * ebiederm Bertl? 1155248564 M * Bertl ebiederm: haven't had much time to check that, and I didn't get an email notifying me ... so I can't say yet 1155248574 M * Nam umm.... I'm getting a new error... I'm trying to setup apache... and when I try to tar up the package... I'm getting this... 1155248577 M * Nam tar cfj apache.tar.bz2 apache-pkg/ 1155248577 M * Nam bzip2: I/O or other error, bailing out. Possible reason follows. 1155248577 M * Nam bzip2: No space left on device 1155248577 M * Nam Input file = (stdin), output file = (stdout) 1155248577 M * Nam Broken pipe 1155248595 M * Bertl well, 'no space left on device?' 1155248622 M * Bertl i.e. either you ran out of inodes or blocks, or have additional disk limits/quotas active 1155248664 M * Nam hmm... 1155248664 M * ebiederm Bertl: Sorry for the lack of notification. I thought you were on the threads where we were discussion that but mabye not. Mostly it is just the uts and sysvipc namespace so nothing big. 
1155248666 M * Nam /dev/hdv1 134G 9.7G 124G 8% / 1155248682 M * Nam and I have no settings for the disk limits or quotas 1155248688 M * Bertl ebiederm: will have a look into that shortly 1155248694 M * Nam i do knowtice that it may be the /tmp mount 1155248749 M * Nam found the problem 1155248758 M * Nam /tmp mount 1155248819 J * comfrey ~comfrey@h-64-105-215-75.sttnwaho.covad.net 1155248874 M * Nam fixed 1155248995 M * Bertl wb comfrey! 1155249009 M * Bertl okay, I'm off for today ... have a good one everyone, cya! 1155249018 N * Bertl Bertl_zZ 1155249019 M * Nam youtoo Bertl 1155249061 J * Aiken ~james@tooax8-080.dialup.optusnet.com.au 1155249535 Q * ekc Ping timeout: 480 seconds 1155249707 Q * coocoon Quit: KVIrc 3.2.0 'Realia' 1155251240 P * stefani I'm Parting (the water)
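Nam's "No space left on device" with df showing 124G free is the classic trap Bertl names: the write landed on a different, smaller mount (here a separate /tmp) or the filesystem ran out of inodes rather than blocks. A quick way to check both counters per mount, sketched with `os.statvfs`:

```python
# Diagnose ENOSPC that df seems to contradict: report free blocks *and*
# free inodes for each mount a write might actually hit.
import os

def space_report(path):
    st = os.statvfs(path)
    return {
        "free_bytes": st.f_bavail * st.f_frsize,  # blocks left for users
        "free_inodes": st.f_favail,               # inodes left for users
    }

# bzip2 above wrote through /tmp, so checking only / was misleading.
for mount in ("/", "/tmp"):
    if os.path.exists(mount):
        print(mount, space_report(mount))
```

If `free_inodes` is 0 while `free_bytes` is large, you hit inode exhaustion; if the offending mount is a small tmpfs, that explains a full disk that df on / never shows.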