1125446435 Q * Doener Ping timeout: 480 seconds
1125446448 J * Doener ~doener@p54873DFA.dip.t-dialin.net
1125447144 N * Bertl_oO Bertl
1125447150 M * Bertl evening folks!
1125447444 M * mnemoc hi Bertl
1125447749 M * Bertl hey mnemoc! everything fine?
1125447766 J * litage ~nick@203.201.98.93
1125447783 M * Bertl welcome litage!
1125447807 M * litage howdy Bertl
1125447811 M * litage what's cookin
1125447830 M * Bertl 2.1.0 and a prerelease for 2.0.1 ...
1125447841 M * litage snazzy
1125447863 M * litage i'm going to be setting up my first vservers today, once i secure my new box
1125447921 M * Bertl sounds good!
1125447951 M * mnemoc Bertl: yes, everything fine :)
1125447969 M * Bertl mnemoc: did you get around to testing the devfs patch? (just asking)
1125447983 M * mnemoc Bertl: working fine
1125448057 M * mnemoc Bertl: 2.0.1-bp1? ;)
1125448160 M * Bertl heh, for 2.6.10 ? :)
1125448179 M * mnemoc 2.6.11.12
1125448180 M * mnemoc :p
1125448201 M * Bertl should not be too hard, we have the FOR-2.0.1 dir
1125448210 M * Bertl you can get most of that from there ...
1125448220 M * mnemoc split patch?
1125448236 M * Bertl it is the location where we accumulate the patches
1125448333 M * Bertl so aside from 2.6.13 adjustments, all real 'changes' should be reflected there
1125448341 M * Bertl http://vserver.13thfloor.at/Experimental/FOR-2.0.1/
1125448345 M * mnemoc thanks
1125448353 M * Bertl (not complete yet, though)
1125448365 M * Bertl will be up-to-date with a 2.0.1 release
1125448384 M * mnemoc great :D
1125448436 M * Bertl I expect maintainers of backports to apply updates themselves, and test them, but if you have questions or encounter issues, I'll gladly help ...
1125448438 J * monrad ~monrad@213083190134.sonofon.dk
1125448451 M * Bertl welcome monrad!
1125448500 M * mnemoc Bertl: i'll do :) thanks
1125448755 Q * litage Read error: Connection reset by peer
1125448761 M * monrad evening
1125449126 M * mnemoc Bertl: when i do backports, can i send them to you to preserve the authoritative download location?
1125449283 M * Bertl well, I guess I can make a directory for that somewhere ...
1125449318 M * mnemoc :D thanks a lot
1125449362 M * Bertl would something on vserver.13thfloor.at be okay for you?
1125449380 M * mnemoc sure
1125449415 M * Bertl any specific directory name? (like MNEMOC or T2 or whatever?)
1125449433 M * Bertl or just generic BACKPORTS?
1125449476 M * Bertl (i.e. how much updating/maintaining do you have in mind)
1125449476 M * mnemoc just backports ;)
1125449508 M * mnemoc i need to maintain a devfs linux26 for a long time
1125449518 M * Bertl those are against mainline, right?
1125449530 M * mnemoc yes, vanilla kernel
1125449639 J * litage ~nick@203.201.98.169
1125449644 M * Bertl okay, http://vserver.13thfloor.at/Stuff/BACKPORTS/
1125449672 M * mnemoc great! thanks a lot
1125449684 M * Bertl mnemoc: if you want to see something there, just give me a url (via email or on irc) and I'll put it there ...
1125449696 M * Bertl you're welcome!
1125449702 M * mnemoc ok, i'll do
1125449739 M * mnemoc when will you release 2.0.1?
1125449773 M * Bertl there will be a pre release soon, probably tonight, then a series of rc's and after 3-4 weeks a 2.0.1 release
1125449794 M * Bertl (of course, pure guestimations :)
1125449802 M * mnemoc :)
1125450000 M * ntrs Bertl, will the missing devfs affect vserver in any way?
1125450058 M * Bertl no, not at all, you just have to create the vroot device nodes yourself (if you need them)
1125450080 M * ntrs ok, I don't think I need them. So udev will work just fine?
1125450101 M * ntrs I still need to have the dev entries in all vservers, right?
1125450112 M * Bertl it is supposed to, and any issue there is considered a bug of course ...
1125450129 M * Bertl ntrs: yes, the guests are neither affected nor changed
1125450144 M * Bertl (guests have only 5 or 6 dev entries)
1125450157 M * ntrs yes, correct.
1125450203 M * ntrs what about the changed default HZ value from 1000 to 250? how will it affect the token bucket scheduler?
1125450239 M * Bertl it will basically multiply the total settings by 4
1125450256 M * Bertl so what was 10 seconds before will now be 40 seconds
1125450279 M * ntrs so, now we either have to change the HZ value or change all scheduler settings for all vservers. not very good.
1125450293 M * Bertl yes and no ...
1125450307 M * Bertl basically the important part is the rate/interval ratio
1125450317 M * Bertl example:
1125450335 M * Bertl assuming 1000 Hz, a rate of 10 and an interval of 100
1125450348 M * Bertl this results in roughly 10% cpu usage ...
1125450353 M * ntrs yes
1125450365 M * Bertl a maximum of 1000 will allow for 1 second bursting here
1125450375 M * ntrs yes. correct.
1125450380 M * Bertl now let's change to 250 Hz
1125450406 M * Bertl the 10/100 again means roughly 10%, but the 1000 now means 4 seconds burst
1125450439 M * Bertl so the change is there, but it will not affect the long-term behaviour
1125450447 Q * zobel Read error: Connection reset by peer
1125450465 M * ntrs ok. I see.
1125450514 M * Bertl from the server point of view it depends ...
1125450540 M * Bertl if you have a lot of guests running on a machine (let's say a total of 800-1200 processes min)
1125450564 M * Bertl then you probably want to go for a higher Hz value anyway, because of increased latency
1125450591 M * ntrs Ok. I guess changing the HZ value is not such a big deal.
1125450598 M * Bertl OTOH, if your guests do not do much interactively, you might even consider lowering it to 100
1125450627 M * Bertl (the scheduler overhead at 100Hz is pretty optimal for servers, not interactive stuff)
1125450684 M * ntrs Got it. Thanks
1125450690 M * Bertl you're welcome!
1125451452 Q * keyser_soze Quit: Abandonando
1125452010 Q * dddd44 Ping timeout: 480 seconds
1125452657 J * stefani ~stefani@c-24-19-46-211.hsd1.wa.comcast.net
1125453837 M * Bertl mnemoc: still around?
1125455102 M * mnemoc Bertl: re
1125455147 M * Bertl k, browsing the code (and linux-vserver changes) I stumbled over a fix in jfs quota code
1125455215 M * Bertl hmm .. no it was ext3 ...
1125455258 M * Bertl mnemoc: just wanted to let you know, because the patches will not cover that, nevertheless it might be interesting to you ...
1125455281 M * mnemoc will you add a delta for that?
1125455317 M * Bertl no, that's why I mention it ... the change is between 2.6.12 and 2.6.13, we 'just' follow the code
1125455384 M * Bertl fail2_free:
1125455390 M * Bertl + DQUOT_DROP(inode);
1125455403 M * Bertl (right after the DQUOT_FREE_INODE(inode);)
1125455421 M * Bertl fs/ext3/ialloc.c
1125456536 M * mnemoc my wife was using my laptop
1125456562 M * mnemoc now i get it
1125456622 M * Bertl send greetings to your wife then ... :)
1125457187 M * mnemoc :p
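A back-of-the-envelope check of the token-bucket arithmetic Bertl walks through above. This is only an illustrative sketch: it assumes a running context consumes one token per timer tick and is refilled with fill_rate tokens every interval ticks; the function and parameter names are ad hoc, not Linux-VServer tunables.

    # Illustrative only: token-bucket figures from the HZ discussion above.
    # Assumption: one token is consumed per timer tick of CPU time, and the
    # bucket gains `fill_rate` tokens every `interval` ticks.

    def cpu_share(fill_rate, interval):
        """Long-term CPU fraction a context can sustain: fill_rate / interval."""
        return fill_rate / interval

    def burst_seconds(tokens_max, fill_rate, interval, hz):
        """Rough time a full bucket lasts at 100% CPU before running dry."""
        net_drain_per_tick = 1 - fill_rate / interval  # tokens lost per tick while running
        return tokens_max / net_drain_per_tick / hz

    for hz in (1000, 250):
        print(hz, cpu_share(10, 100), round(burst_seconds(1000, 10, 100, hz), 2))
    # 1000 Hz -> ~10% CPU, ~1.1 s burst; 250 Hz -> ~10% CPU, ~4.4 s burst,
    # matching the "1 second" vs "4 seconds" figures above (which ignore refill).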
1125457748 M * eyck will this become a new #vserver tradition?
1125457758 M * eyck sending greetings to wives and girlfriends?
1125457805 M * Bertl eyck: it always was, ever will be (at least on my side :)
1125457855 M * Bertl eyck: so you read up on the wife/girlfriend issue on the internet *G* :)
1125457993 M * Bertl eyck: ah, before I forget, got no feedback yet for the 1.2.11-rc1, do you know some folks who use/test it?
1125458087 M * mnemoc on T2 we started to send greetings to children :(
1125458098 M * mnemoc 5 developers have at least one child
1125458144 M * mnemoc 1.2.11-rc1?
1125458171 M * Bertl http://vserver.13thfloor.at/Experimental/patch-2.4.31-vs1.2.11-rc1.diff
1125458218 M * mnemoc oh, i had forgotten about 2.4
1125458304 M * Bertl me too, but eyck reminded me ...
1125459786 J * ntrs_ ~ntrs@68.188.50.87
1125459786 Q * ntrs Read error: Connection reset by peer
1125461776 P * stefani parting (is such sweet sorrow)
1125462246 J * dddd44 ~dhb55@tor-irc.dnsbl.oftc.net
1125462585 M * mnemoc Bertl: reiser-fix02 without a fix1?
1125462627 M * Bertl yep, fix01 was never released ...
1125462631 M * mnemoc ok
1125462666 M * Bertl the hugetbl and ioprio might not apply for 2.6.11.x
1125462700 M * mnemoc ok
1125462718 A * mnemoc adapting reiser patch now
1125462730 M * mnemoc the 3 hunks failed :p
1125462844 M * Bertl try with -l (then adjust the results)
1125462853 M * mugwump ooo... ioprio?
1125462865 Q * OliverA Ping timeout: 480 seconds
1125462891 M * Bertl mugwump: not vserver related yet, just fixing up virtualization there ...
1125462922 M * Bertl (emphasis on yet)
1125462944 M * mugwump any thoughts on IO scheduling algorithms?
1125462982 M * Bertl maybe something like the borrowed time scheduler ...
1125462997 M * mnemoc Bertl: weird... all changes on reiser-fix02 were already there
1125463034 M * Bertl hmm ... maybe they got lost on the 2.6.13 branch
1125463066 M * mnemoc seems so
1125463093 M * Bertl sec, checking ...
1125463419 M * Bertl ah, yeah, I see now why that happened ...
1125463435 J * OliverA ~kvirc@ti200710a080-14563.bb.online.no
1125463441 M * Bertl check the REISERFS_FL_USER_MODIFYABLE vs. REISERFS_FL_USER_MODIFIABLE
1125463577 M * Bertl mnemoc: it's currently hard to compare reiserfs code, because they decided to re-indent the source recently ...
1125463596 M * mugwump I guess the killer for any fair IO scheduler will be dealing with disk IO queues. Manageable on IDE systems, but SCSI and SATA less so.
1125463638 M * mnemoc Bertl: REISERFS_FL_USER_MODIFYABLE vs. REISERFS_FL_USER_MODIFIABLE got applied
1125463662 M * Bertl make sure to change it in the 'other' locations too ...
1125463678 M * Bertl (will update the patch tomorrow or so)
1125463679 M * mugwump perhaps one part of an approach would be, if a vserver has 1/16th of resources, to limit the number of IOs that they may have queued on the hardware to 1/16th of the queue limit
1125463749 M * mugwump so, if a hardware device supports a queue length of 32, they would only ever get 2 slots, keeping the queues nice and short for interactive use by other vservers
1125463780 M * Bertl nah, that would hurt the overall performance pretty bad, I guess
1125463794 M * mnemoc Bertl: i had missed two of those :p
1125463833 M * Bertl mugwump: maybe just allow a context to queue stuff, but account for it, and block with a pretty high hysteresis?
1125463835 M * mugwump it wouldn't hurt peak performance, ie when the system is busy. again, perhaps a token system could be used to allow the length of the queue to burst to full size
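mugwump's per-context queue-slot idea can be sketched roughly as below. This is only an illustration of the idea being brainstormed here, not anything that exists in Linux-VServer; the class name, the share and queue_depth parameters, and the token-based burst allowance are all hypothetical.

    # Rough sketch of the idea above: cap the I/Os a context may have queued
    # on the hardware in proportion to its share, but let a token allowance
    # permit occasional bursts up to the full queue depth. Purely illustrative.

    class ContextQueueLimit:
        def __init__(self, share, queue_depth, burst_tokens):
            self.base_slots = max(1, int(share * queue_depth))  # e.g. 1/16 of 32 -> 2
            self.queue_depth = queue_depth
            self.tokens = burst_tokens      # spent to exceed the base allowance
            self.in_flight = 0

        def may_queue(self):
            if self.in_flight < self.base_slots:
                return True
            # beyond the base allowance: only while burst tokens remain
            if self.in_flight < self.queue_depth and self.tokens > 0:
                self.tokens -= 1
                return True
            return False

        def submitted(self):
            self.in_flight += 1

        def completed(self):
            self.in_flight -= 1

    ctx = ContextQueueLimit(share=1/16, queue_depth=32, burst_tokens=8)
    while ctx.may_queue():
        ctx.submitted()
    print(ctx.in_flight)   # 2 base slots + 8 burst slots = 10 requests in flight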
1125463906 M * Bertl I guess I will look into the swap stuff, including the swap token approach first, then see how we can extend the findings there to other I/O
1125463930 M * mugwump ah ... you have investigations there also?
1125463941 M * mugwump any prototypes?
1125463956 M * mugwump (even for old versions)?
1125463960 M * Bertl more theoretical models, nothing runnable ...
1125464082 M * mugwump I would guess there are other design decisions, too ... like how to handle multiple devices - swap vs partitions etc - including the fact that these devices may share spindles
1125464213 M * Bertl well, you have swap priorities and partitions/files, if you decide to put them on the same disk with equal priority .. then that is self inflicted :)
1125464261 M * Bertl http://vserver.13thfloor.at/Experimental/patch-2.6.13-vs2.0.1-pre2.diff.bz2
1125464347 T * Bertl http://linux-vserver.org/ | latest stable 2.0, 2.0.1-pre2, 1.2.10, 1.2.11-rc1, devel 2.1.0-pre5 -- He who asks a question is a fool for a minute; he who doesn't ask is a fool for a lifetime -- share the gained knowledge on the wiki, and we'll forget about the minute ;)
1125464458 M * Bertl okay, folks, I'm off to bed now ... have a nice whatever ... and cya later ...
1125464470 M * mnemoc cu Bertl
1125464482 M * Bertl mugwump: keep thinking about the I/O scheduling stuff, guess you're the right person for that!
1125464500 M * Bertl mnemoc: cya, and good luck with the updates ...
1125464512 N * Bertl Bertl_zZ
1125468869 J * blinky ~none@dsl-084-056-255-035.arcor-ip.net
1125468872 M * blinky hello!
1125468926 M * blinky i'm sorry to bother but i've searched everywhere i could (google, documentation etc) but the things i found didn't solve my problem :-(
1125468943 M * blinky i've a suse 9.0 installation copied into a vserver and want to start it but nothing happens.
1125468957 M * blinky somewhere they talked about replacing the rc-script... but this hasn't worked out :-(
1125468959 M * blinky any idea?
1125469064 M * blinky i tried the rc-script of the suse-image on linux-vserver.org ...
1125472322 M * eyck Bertl: I'm using it on a few machines since that weekend a few weeks ago, but unfortunately haven't found the time to test it with reiserfs yet, sorry, I hope next week I'll be able to provide data on that.
1125472795 Q * Doener Remote host closed the connection
1125472827 J * Doener ~doener@p54873DFA.dip.t-dialin.net
1125474162 Q * monrad Quit: Leaving
1125474800 J * Milf ~Miranda@ipsio370.ipsi.fraunhofer.de
1125474829 M * Milf anyone awake in here?
1125474948 M * Doener kind of
1125474959 M * Doener i.e. i didn't have coffee, yet ;)
1125474978 J * prae ~prae@gut75-1-81-57-27-189.fbx.proxad.net
1125475704 M * Milf Teehee
1125475747 M * Milf Can I ask you a question about a problem I have upgrading to 2.0 Vserver?
1125475764 M * Doener sure
1125475824 M * Milf Ok, I upgraded from a 2.4 Kernel to the new Vserver, also added new tools
1125475841 M * Milf But my vservers, built on the old tools, won't stop properly
1125475853 M * Milf I can issue the vserver stop, which will hang.
1125475871 M * Milf I can then do vkill --xid -s SIGKILL
1125475889 M * Milf that will kill the remaining init process and stop the server
1125475976 M * Milf But that's not good enough for production servers
1125475980 M * Milf Any ideas?
1125475993 M * Doener hm, guess that's related to the init-protection...
1125476035 M * Doener probably nobody has tried that with old-style config and fakeinit yet
1125476058 M * Milf I upgraded the config to new style
1125476071 M * Milf with old style I had even more problems
1125476093 M * Doener hm... which tools version?
1125476104 M * Milf 0.30.208
1125476111 M * Doener and which distro in the vservers?
1125476126 M * Milf Suse 8.2
1125476163 M * Doener it's just the init process that remains running in the vserver?
1125476197 M * Milf That and anything that is not stopped by rc scripts
1125476250 M * Doener hm, looks like the vkill part of the tools isn't run at all...
1125476297 M * Doener do you know where the "vserver foo stop" hangs?
1125476324 M * Milf Nope, want me to add a debug flag to the script?
1125476400 M * Doener vserver --debug foo stop
1125476416 M * Milf Hmmm, I added -x to the first line :)
1125476469 M * Milf Ok, same deal :)
1125476476 M * Milf How much do you want me to paste?
1125476494 M * Milf I see
1125476494 M * Milf ++ /usr/local/sbin/vserver-info - FEATURE vkill
1125476494 M * Milf ++ /usr/local/sbin/vkill -s INT --xid 6666 -- 1
1125476517 M * Milf and then there's a lot of things waiting
1125476606 M * Milf Last line is something waiting for process 9044
1125476632 M * Milf 9044 is: /usr/local/sbin/vwait --timeout 30 --terminate --status-fd 3 6666
1125476827 M * Doener hum
1125476959 M * Hollow morning Doener
1125476968 M * Milf And there it hangs indefinitely
1125476977 M * Doener no idea, and at the moment unfortunately not enough time to dig into that :/
1125476982 M * Doener morning Hollow
1125477007 M * Hollow Milf: did you try to place a `reboot -f` at the end of your init-stop sequence?
1125477027 M * Milf init-stop-sequence?
1125477032 M * Hollow dunno what distro this is about..
1125477057 J * toidinamai ~frank@toidinamai.de
1125477059 M * Milf Hmmm I just entered the vserver and typed 'reboot -f'
1125477063 M * toidinamai Hi!
1125477063 M * Hollow Milf: most distros would call it runlevel 6
1125477070 M * Milf that did have an effect
1125477078 M * toidinamai Is it possible to mount /home into a vserver?
1125477090 M * toidinamai --bind doesn't seem to work.
1125477113 M * Hollow toidinamai: use secure-mount or put it in the vserver fstab (/etc/vservers/name/fstab)
1125477125 M * SiD3WiNDR hmm
1125477129 M * SiD3WiNDR I do --bind just fine
1125477132 M * SiD3WiNDR do I miss anything? :)
1125477149 M * Hollow SiD3WiNDR: well, you have the bind mount on the host too then, don't you?
1125477178 M * SiD3WiNDR yes
1125477198 M * SiD3WiNDR but wasn't that sort of the question? :)
1125477211 M * Hollow with the approach above only the namespace of the context gets the mounts
1125477212 M * toidinamai Hollow: What do I put in the fstab?
1125477225 M * Milf So for a hotfix I can add reboot -f to one of the K-scripts in the correct runlevel?
1125477236 M * Hollow toidinamai: if you want e.g. the host's /home to be present in the guest's /home you put the following:
1125477248 M * Hollow /home /home none bind 0 0
1125477266 M * toidinamai Oh, ok.
1125477281 M * toidinamai Just typed /home /home but it looked weird. :-)
1125477295 M * Hollow secure-mount does mount the first /home from the host onto the chroot /home of the guest
1125477390 M * toidinamai thx - working like it should :-)
1125477439 M * Doener toidinamai: for details on why --bind didn't work, see http://linux-vserver.org/Namespaces
1125477471 M * Hollow namespaces are voodoo
1125477486 M * toidinamai Doener: Yeah, I've seen the WTH presentation.
1125477512 M * toidinamai Kind of plan9ish, isn't it?
1125477514 M * Doener ok. i didn't ;)
1125477537 M * toidinamai Doener: You can download the video.
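To tie Hollow's hints together: the per-guest fstab lives at /etc/vservers/<name>/fstab, and entries there are mounted only into that guest's namespace when the guest starts. A minimal sketch of such a file is shown below; apart from the /home bind line quoted above, the proc and tmpfs entries are just the kind of defaults a util-vserver guest typically ships with and may differ on your installation.

    # /etc/vservers/<name>/fstab -- illustrative sketch only
    none    /proc   proc    defaults                0 0
    none    /tmp    tmpfs   size=16m,mode=1777      0 0
    # host's /home bind-mounted into the guest (from the discussion above)
    /home   /home   none    bind                    0 0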
1125478290 J * erwan_taf ~erwan@81.80.43.77
1125478359 P * erwan_taf
1125479592 M * toidinamai Rehi.
1125479617 M * toidinamai Is it also possible to mount something readonly into the guest?
1125479658 M * toidinamai I would like to have /usr/portage shared. But the guests shouldn't be able to change it.
1125479930 M * BWare mount -o bind,ro /usr/portage /vservers/${VSNAME}/usr/portage
1125479969 M * toidinamai Oh, I didn't know read-only bind mounts were possible.
1125479986 M * BWare Worked for me
1125479998 M * BWare the only problem is /usr/portage/distfiles
1125480013 M * BWare and the inability to emerge sync within the vserver
1125480034 M * toidinamai Yeah, unionfs or something like that would be cool.
1125480054 M * toidinamai distfiles could be writeable IMHO.
1125480073 M * BWare try that when you did a ro mount ;)
1125480091 M * toidinamai I can mount distfiles too...
1125480102 M * BWare That would be an option
1125480107 M * toidinamai :-)
1125480116 M * BWare But can pose security issues for the host
1125480125 M * toidinamai Why?
1125480151 M * toidinamai That's what digests are for, no?
1125480175 M * BWare They are, but do you always trust them ;)
1125480202 M * toidinamai I have to. I can't trust the mirrors.
1125480216 M * BWare never recreated a digest when developing a new e-build ?
1125480233 M * BWare all it is is an md5sum
1125480237 M * toidinamai I do that in /usr/local/portage...
1125480267 M * BWare oke
1125480283 M * toidinamai ebuilds and digests should really be gpg signed.
1125480296 M * toidinamai And use sha or something better.
1125480309 M * BWare yeah, but that is not yet the case, but work is being done on securing the process
1125480357 M * BWare bbl... too much blood in caffeine system ;)
1125480367 M * toidinamai :-)
1125480912 M * Hollow toidinamai: read-only bind mounts don't work with vanilla, but with Bertl's bind mount extensions..
1125480932 M * Hollow toidinamai: and remember... /usr/portage /usr/portage none bind,ro 0 0 ;)
1125480937 M * Hollow easiest way ;)
1125481004 M * Hollow and to have a rw distfiles just do /usr/portage/distfiles /usr/portage/distfiles none bind,rw 0 0
1125481024 M * toidinamai I think I get the picture. :-)
1125481027 M * Hollow let the host sync once per day, and you're done
1125481051 M * Hollow and if you're looking for some helper scripts: http://home.xnull.de/work/gento/vserver/tools/
1125481064 M * Hollow s/gento/gentoo/
1125481134 M * toidinamai Hollow: thx, a README would be nice.
1125481158 M * Hollow vdispatch-conf, vemerge are just wrappers
1125481177 M * Hollow vesync recaches metadata for each vserver after the host has synced
1125481191 M * Hollow vschedcalc helps you to calculate scheduler token buckets
1125481214 M * Hollow vupdateworld updates world within each vserver and can pretend the output for cron to email you necessary updates
1125481223 M * toidinamai cat > README
1125481232 M * toidinamai :-)
1125481253 M * Hollow here you are
1125481272 M * Hollow lol, but it won't show up
1125481300 M * Hollow so, now look :P
1125481311 M * toidinamai Hehe.
1125481369 M * toidinamai Ok, I'll have to find something to eat.
1125481372 M * toidinamai bbl cu
1125481399 M * Hollow cu
1125485578 Q * VooDooMaster Quit: Nettalk6 der Freeware IRC-Client
1125485601 J * VooDooMaster VooDoo@topas.informatik.uni-ulm.de
1125486023 Q * fobi_ Remote host closed the connection
1125486496 M * VooDooMaster hi!
1125486532 M * VooDooMaster when I want to start sshd within a vserver I get an error that /etc/init.d/net contains errors
1125486581 M * Doener VooDooMaster: gentoo vserver?
1125486602 M * VooDooMaster Doener: Yes.
1125486609 J * fobi wht@liiifeeee.com
1125486617 M * Doener did you use the vserver base-layout to create the vserver?
1125486627 M * VooDooMaster I commented out the "need net" and it works ...
1125486647 M * VooDooMaster Doener: I used vserver-new and Hollow's stage3
1125486675 M * Doener hm, strange, i'd expect that to use the vserver base-layout and fix such stuff...
1125486697 M * VooDooMaster Doener: I thought so, too
1125486699 M * Doener Hollow: bug alert ;)
1125486856 M * VooDooMaster Doener: ;) Sorry ...
1125486975 Q * fobi Remote host closed the connection
1125487301 J * fobi wht@liiifeeee.com
1125487349 M * Hollow Doener: yep?
1125487413 M * Doener VooDooMaster created a vserver with your vserver-new and stage3, resulting in a message that "net" contains errors when starting "sshd"
1125487435 M * Doener sounds like a missing cleanup
1125487436 M * Hollow yeah, known bug, upgrade baselayout-vserver please
1125487452 M * Doener ok
1125487545 M * Hollow at least it seems like... which baselayout version is it?
1125487605 Q * fobi Remote host closed the connection
1125488113 J * fobi wht@liiifeeee.com
1125489361 J * renihs ~renihs___@193.170.52.70
1125490448 M * VooDooMaster Hollow: 1.11.12-r4
1125490517 Q * Loki|muh Remote host closed the connection
1125490521 J * Loki|muh loki@satanix.de
1125491690 M * Hollow VooDooMaster: yeah, try 1.11.13
1125491718 M * VooDooMaster Hollow: Already did it - working fine! you're doing a great job!
1125491731 M * Hollow thx :)
1125491795 M * Hollow VooDooMaster: DaPhreak is currently working on new stages, we'll see if we can fix some more bugs
1125492014 M * VooDooMaster Hollow: :)
1125492027 M * VooDooMaster Hollow: I'm looking forward to them ...
1125492752 P * kevinp Leaving
1125493117 Q * Loki|muh Read error: Connection reset by peer
1125493819 J * Loki|muh loki@satanix.de
1125493820 Q * blinky Ping timeout: 480 seconds
1125495325 Q * Aiken Ping timeout: 480 seconds
1125495405 J * Loki|muh_ loki@satanix.de
1125495661 Q * Loki|muh_ Read error: Connection reset by peer
1125495695 J * Loki|muh_ loki@satanix.de
1125495713 Q * Loki|muh Ping timeout: 480 seconds
1125497701 N * Loki|muh_ Loki|muh
1125498284 J * menomc ~amery@200.75.27.97
1125498390 Q * mnemoc Ping timeout: 480 seconds
1125499071 N * Bertl_zZ Bertl_oO
1125500396 J * monrad ~monrad@213083190134.sonofon.dk
1125500581 J * blinky ~none@dsl-084-056-255-035.arcor-ip.net
1125501002 N * BobR_zZ BobR
1125502629 J * jazzanova foobar@S0106000f3d018f36.vc.shawcable.net
1125502630 M * jazzanova hi
1125502663 M * jazzanova i have installed vserver support into the kernel
1125502668 M * jazzanova what do i do next
1125502681 M * jazzanova what is a vserver system image and what is a vserver guest image ?
1125502754 J * Blissex ~Blissex@82-69-39-138.dsl.in-addr.zen.co.uk
1125503498 J * stefani ~stefani@superquan.apl.washington.edu
1125503530 Q * Milf Ping timeout: 480 seconds
1125504220 Q * prae Quit: Execute Order 69 !
1125505096 M * menomc Bertl_oO: hi, q0.14 patch doesn't apply on 1.2.11-rc1
1125505310 M * jazzanova so i started my vserver guest, how do I get inside it ?
1125505333 M * jazzanova the enter command doesn't work, tells me it can't open a pty
1125505575 M * menomc rtfm?
1125505584 N * menomc mnemoc
1125509208 N * BobR BobR_oO
1125512051 Q * blinky Quit:
1125512683 Q * renihs Remote host closed the connection
1125513380 Q * dddd44 Ping timeout: 480 seconds
1125513460 J * tsal ~chatzilla@trane.webintl.com
1125513481 M * tsal so, uh..
1125513486 M * tsal the web server down?
1125513540 M * tsal is there a mirror for the docs?
1125513543 A * tsal sighs.
1125513550 M * tsal is there anyone actually HERE?
1125513605 M * tsal bah.
1125513606 Q * tsal Quit:
1125513825 M * jazzanova so in order to allow ssh into the vhost, i need to give it an ip ?
1125513829 M * jazzanova and eth0 ?
1125514104 J * nayco ~nayco@lns-vlq-47-nan-82-252-249-136.adsl.proxad.net
1125514139 M * nayco 'llo !!!
1125514246 J * prae ~benjamin@sherpadown.net
1125514340 M * nayco any netatalk users here (inside vservers, of course ;-) ! )
1125514344 M * nayco ?
1125516938 J * matta ~matta@69.93.28.254
1125518206 M * stefani nayco: sorta
1125518816 M * nayco hello, stefani !
1125518860 M * nayco Well, i've got a problem with atalkd not finding any device, while afpd starts fine...
1125518929 M * stefani i was able to get the deb to install and start up, but apparently papd is not quite right.
1125519044 M * nayco Well, for papd, I don't use it, so... I cannot tell (But I plan to use it in the future...)
1125519061 M * nayco How did you get atalkd running ?
1125519104 M * stefani i just installed the deb (debian linux, sarge)
1125519124 M * nayco huh ?
1125519136 M * stefani i do not recall doing anything special. i was able to mount an appleshare on the guest.
1125519146 M * nayco Did you give special capabilities to your vserver ?
1125519168 M * stefani not really no.
1125519213 M * nayco Oh, yes, i'm able too, but I need to specify the server's IP manually, it doesn't appear in the selector, nor in "network" on OsX
1125519247 M * nayco => That mean that atalkd does not run, but afpd does.
1125519252 M * nayco means
1125519280 M * nayco so, i'm stuck there... What does your atalkd.conf look like ?
1125519327 M * stefani um. have not looked at this in a while. in fact i was about to ask my own questions as we seem to need papd.
1125519463 M * nayco Well, sorry for papd, but I don't even start it (I said "NO" in /etc/netatalk.conf). But if you mail your question to the vserver mailing list, I can answer tomorrow (It's 22:17 here). I haven't got IRC at work.
1125519486 M * nayco I'll be able to start it and look at what happens.
1125519494 M * nayco What are the symptoms ?
1125519516 M * stefani printing not working.
1125519744 J * dddd44 ~dhb55@tor-irc.dnsbl.oftc.net
1125519771 M * nayco er... No error message when starting the service, or in /var/log/daemons/* ?
1125519772 M * nayco ?
1125519858 M * stefani nayco: sorry. am at work fixing two other machines right now :(
1125520125 M * nayco us ?
1125520137 M * nayco sorry: U.S. ;-) ?
1125520158 M * stefani y
1125520191 M * nayco Ok, so, post your problem to the list when you've got time, and i'll look at my papd daemon. And I'll be _very_ glad to look at your atalkd.conf :D
1125520203 M * stefani y
1125520587 Q * nayco Quit: Bonne nuit !
1125520838 Q * prae Quit: Pwet
1125520883 P * stefani I'm Parting (the water)
1125521250 J * li0uid romu@dsl-082-083-073-176.arcor-ip.net
1125521257 M * li0uid hello all
1125521271 J * prae ~benjamin@sherpadown.net
1125521372 M * li0uid question, do you vserver engineers develop interfaces for this vserver system, e.g. client and admin interfaces like virtuozzo?
1125521509 J * liQuid romu@dsl-082-083-073-176.arcor-ip.net
1125521509 Q * li0uid Read error: Connection reset by peer
1125521516 M * liQuid got disconnected sry
1125521626 Q * liQuid Quit:
1125521771 J * li0uid romu@dsl-082-083-066-058.arcor-ip.net
1125521774 M * li0uid re
1125521854 Q * prae Quit: Pwet
1125521997 M * li0uid hello?
1125522129 M * li0uid hm
1125523053 J * jkl eric@c-71-56-237-229.hsd1.co.comcast.net
1125523208 M * li0uid could someone answer my questions? :p
1125523548 J * nayco ~nayco@lns-vlq-47-nan-82-252-249-136.adsl.proxad.net
1125523695 Q * li0uid Read error: Connection reset by peer
1125524253 J * hillct ~hillct@client200-5.dsl.intrex.net
1125524303 M * hillct has anyone applied the 2.6.12 vs2.0 patch set to kernel 2.6.13? any issues?
1125524421 M * daniel_hozac there's 2.0.1-pre2 for 2.6.13.
1125524432 M * hillct ah
1125524441 M * hillct didn't see that on 13th floor
1125524445 M * hillct URL pls
1125524472 M * daniel_hozac http://vserver.13thfloor.at/Experimental/patch-2.6.13-vs2.0.1-pre2.diff
1125524508 M * hillct thanks
1125524851 M * matta anyone know what is new in 2.0.1 over 2.0?
1125524852 M * matta bugfixes?
1125524888 M * mnemoc matta: http://vserver.13thfloor.at/Experimental/FOR-2.0.1/
1125525062 M * matta good enough
1125525180 M * mnemoc matta: ioprio and hugetbl deltas are to fix 2.6.13-specific problems, the rest are to fix 2.0.0
1125525200 M * mnemoc brb
1125525670 M * hillct can someone remind me of the proper setting for the kernel option:
1125525671 M * hillct Persistent Inode Context Tagging
1125525683 M * hillct for x86_64 architecture?
1125525822 M * daniel_hozac architecture doesn't matter.
1125525838 M * hillct K
1125525846 M * hillct I must be thinking of something else
1125525856 M * hillct yeah, it's a FS thing
1125525867 M * hillct there was something I had to do to get XFS to play nice
1125525870 M * hillct last time
1125525887 M * hillct I know the dinode patch is in this release...
1125525889 M * hillct hmm
1125526108 J * Aiken ~james@tooax6-060.dialup.optusnet.com.au
1125527144 J * li0uid ~romu@dsl-082-083-066-058.arcor-ip.net
1125527548 M * hillct It appears that the latest userspace tools are still 0.30.206. Is that correct?
1125527659 M * li0uid some supporter in here?
1125527884 M * daniel_hozac 0.30.208
1125528005 M * hillct ah
1125528009 A * hillct digs
1125528011 M * hillct aha
1125528199 M * li0uid how many of these vservers will run on an AMD 2400+ with 512 MB DDR RAM?
1125528243 M * hillct I run on a dual opteron 1800
1125528258 M * hillct so if it's the architecture you're worried about it's no problem
1125528262 M * hillct get more memory though
1125528301 M * Beave this is probably going to come down to my setup, but I'd like to run snmp within each VPS, and then collect the information via MRTG.. however.
1125528316 M * Beave the way it's set up now, i just get all the traffic information from within each VPS.
1125528324 M * Beave is there any good way to set this up?
1125528354 M * daniel_hozac set what up? traffic monitoring?
1125528376 Q * neofutur Ping timeout: 480 seconds
1125528394 M * li0uid so ?
1125528395 M * Beave daniel_hozac : yes, on a per-vps type of setup.
1125528404 M * li0uid u run 1800 vservers?
1125528419 M * hillct no
1125528421 M * hillct sorry
1125528423 M * hillct processor
1125528436 M * Beave Right now, it just seems to capture (bandwidth usage) from the primary interface.
1125528436 M * daniel_hozac Beave: some iptables rules should be able to take care of that.
1125528445 M * li0uid i wanted to know how many vservers will run on such a system
1125528463 M * Beave daniel_hozac : yeah - i could also do something with perl and rrdtool, i was just wondering if there was a way i might avoid that.
1125528489 M * Beave Figured i might ask, to see if someone has run into the same problem and a possible solution.
1125528499 M * daniel_hozac Beave: not really, since vservers use the same interface (unless you have dedicated interfaces for all vservers).
1125528522 M * Beave yeppers. Which is what i've told the person hosting the VPS server (host) system.
1125528528 M * li0uid hello?
1125528532 M * Beave dang. oh well, thanks anyway.
1125528535 M * li0uid could someone help me with that? :P
1125528551 M * daniel_hozac li0uid: it depends too much on your setup.
1125528568 M * li0uid in what case
1125528646 M * daniel_hozac the guests' distribution, the services running in the guests, the load, whether or not you're using unification, etc. all play a part.
1125528691 M * li0uid no more specific answers u could give me?
1125528711 M * li0uid i never installed these vservers but i want to offer vservers soon
1125528736 M * li0uid just need some facts
1125528867 M * hillct figure out what the specs will be of the vservers you'll offer
1125528892 M * hillct multiply that by the number of vservers you want to host
1125528903 M * hillct add 20% for overhead
1125528915 M * hillct that's how I did it anyway
1125528926 M * daniel_hozac overhead caused by?
1125528941 M * hillct whatever is being run on the host machine
1125528946 A * hillct is just cautious
1125528958 M * hillct vserver is quite efficient
1125528963 M * hillct I just like to be safe
1125529075 M * li0uid e.g. the vservers are just used for shells, eggdrops, webhosting, bouncers
1125529125 M * hillct you need to decide what resources you want to allocate to each vserver
1125529233 M * li0uid and what resources are these?
1125529245 M * hillct memory
1125529249 M * hillct disk space
1125529252 M * hillct CPU
1125529263 M * li0uid echt vserver should have 1 GB
1125529268 M * li0uid each
1125529297 M * li0uid i dont know how much memory or cpu i have to give each vserver
1125529309 M * hillct my formula above does nicely, although I gather daniel_hozac feels it's a little excessive
1125529310 M * li0uid just so that the user can do everything with it
1125529327 M * li0uid and i get as much as possible
1125529577 M * li0uid ...
1125530156 M * li0uid cool
1125530289 J * neofutur ~neofutur@neofutur.net
1125531112 J * Level2 ~level2@chn61.neoplus.adsl.tpnet.pl
1125531132 M * Level2 hello
1125531152 M * hillct hi
1125531204 M * hillct Level2 what can we do for you?
1125531268 M * li0uid hillct still waiting for some answer :p
1125531280 M * hillct on what?
1125531287 M * hillct use the formula I suggested
1125531310 M * li0uid on an answer to how many vservers are possible
1125531325 M * li0uid just tell me how many are possible on a well-installed system
1125531348 M * hillct decide how many vservers you want, and what resources you want to allocate to them, and add 20% more for resources dedicated to the host server
1125531364 M * hillct or work in reverse
1125531414 M * li0uid i would like to run 100 of these vservers on an AMD XP 2400+ with 512 MB ram and a 1 Gbit port
1125531421 M * li0uid possible?
1125531438 M * hillct identify the minimum resources you feel comfortable dedicating to the host server, then divide the remainder by the minimum you feel is needed for each vserver, based on the expected usage of your vservers
1125531449 M * hillct to get the total vservers possible on your hardware
1125531464 M * hillct not a chance
1125531468 M * li0uid how do i know how many resources they will need
1125531476 M * li0uid if i dont even have them running
1125531478 M * hillct each vserver is a server
1125531480 M * li0uid dont u understand me?
1125531492 M * hillct I understand perfectly
1125531501 M * li0uid so just tell me a maximum
1125531505 M * li0uid oder a minimum
1125531509 M * li0uid or
1125531513 M * hillct you need to decide what resources each vserver requires based on what you intend to run in each
1125531535 M * li0uid they're just for hosting: shells, eggdrop, bouncer, webhosting
1125531556 M * li0uid but the client should have the possibility to use what he wants to
1125531560 M * hillct so decide how much memory, disk space and CPU you want to allocate to each
1125531563 M * li0uid i already said that
1125531575 M * hillct then do the math I defined for you
1125531576 M * li0uid and how so?
1125531605 M * li0uid i dont know how much cpu or memory i have to give each vserver
1125531625 M * li0uid i just want to get as much as possible man
1125531626 M * li0uid jesus..
1125531629 M * li0uid :P
1125531637 M * hillct that's something you need to decide for yourself based on how big your clients are
1125531649 M * hillct and how many are power user types etc
1125531665 M * hillct this isn't that hard
1125531674 M * hillct but only you can answer these questions
1125531705 M * hillct What do you expect to host on these vservers?
1125531708 M * li0uid i dont even know what power user types are
1125531715 M * hillct once you figure that out, you're set
1125531716 M * li0uid i already said that
1125531721 M * li0uid but for the 3rd time
1125531728 M * li0uid shells, eggdrop, bouncer, webhosting
1125531739 M * li0uid i dont think this will need many resources, or?
1125531754 M * hillct what web hosting? flat HTML, content management systems? Databases?
1125531768 M * li0uid the full package
1125531867 J * liQuid ~romu@dsl-082-083-066-018.arcor-ip.net
1125531872 M * liQuid sry got disconnected
1125531963 M * hillct full package of what?
1125531971 M * hillct pick a distribution
1125531977 M * hillct pick an application set
1125531989 M * hillct you need to research this information on your own
1125531990 M * liQuid just tell me a good one
1125531996 M * hillct Fedora
1125532000 M * liQuid ok
1125532007 M * liQuid then i take this
1125532009 M * liQuid so
1125532014 M * hillct a minimal install takes 200MB disk space
1125532021 M * liQuid i got 120 GB
1125532029 M * hillct a complete install without X takes nearly 1GB
1125532032 M * liQuid and i would like to run 100 vservers
1125532035 M * liQuid so 100GB
1125532045 M * liQuid then complete
1125532059 M * liQuid diskspace isn't the problem
1125532071 M * hillct memory is your problem
1125532086 M * liQuid yeah this server only has 512
1125532090 M * hillct and if you want 100 vservers you'll want at least a quad-CPU box
1125532099 M * liQuid that means?
1125532105 M * hillct 4 CPUs
1125532117 M * liQuid what?
1125532129 M * hillct to efficiently distribute load
1125532133 M * liQuid one machine with 4 CPUs?
1125532135 M * hillct for 100 vservers
1125532141 M * hillct sure
1125532162 M * liQuid all the hosters i know
1125532173 M * liQuid also only have one cpu in the server
1125532182 M * liQuid and maybe 1024 MB ram
1125532189 M * liQuid and they are also able to run 100 vservers
1125532190 M * liQuid lol
1125532199 M * hillct and my recommendation is to go with at least 32, maybe 64GB ram
1125532214 M * hillct really?
1125532223 M * liQuid i think so
1125532227 M * hillct each vserver must be quite minimal
1125532229 M * liQuid all the offers i saw
1125532239 M * hillct good luck with them
1125532245 M * liQuid hm
1125532260 Q * li0uid Ping timeout: 480 seconds
1125532261 M * liQuid dont they share the resources?
1125532280 M * liQuid i dont think 100 vservers are using the whole power at the same time
1125532464 Q * pusling Read error: Connection reset by peer
1125532471 J * pusling pusling@195.215.29.124
1125532509 M * liQuid hmm
1125532537 M * hillct they probably aren't
1125532550 M * hillct in my case, on my server, they're pretty heavily loaded
1125532559 M * hillct so I planned for it and it works
1125532580 M * hillct in your case, you need to take into account the expected usage of each vserver, as I've explained repeatedly
1125532614 M * liQuid but that's something i only see once they are used
1125532624 M * hillct yes
1125532627 M * liQuid ok
1125532635 M * hillct so make the appropriate usage estimates
1125532654 M * hillct you know your clients best, so only you can make those decisions
1125532657 M * liQuid but i am able to install as much as possible or?
1125532677 M * hillct the vserver will run whatever you put into it
1125532690 M * hillct so decide what you want then go with it
1125532718 M * liQuid it's the client's choice?
1125532751 M * hillct you still need to figure out what the maximum usage is going to be. that's how you'll presumably set your prices
1125532752 M * Aiken it seems to me that the best way here would be to start up a few vservers with representative use and see how many resources they use/need
1125532772 M * hillct then do that
1125532792 M * hillct if you don't understand the usage patterns of your clients, nobody else will either
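hillct's sizing rule of thumb (per-guest resources times the number of guests, plus roughly 20% for the host; or, in reverse, what is left after the host's reservation divided by the per-guest minimum) is easy to put into numbers. The sketch below only illustrates that arithmetic with made-up per-guest figures; it is not a recommendation from the channel.

    # Illustration of hillct's rule of thumb (made-up per-guest numbers).

    def total_needed(per_guest_mb, guests, overhead=0.20):
        """Forward direction: memory needed for `guests` guests plus host overhead."""
        return per_guest_mb * guests * (1 + overhead)

    def guests_possible(total_mb, host_reserved_mb, per_guest_mb):
        """Reverse direction: how many guests fit after reserving memory for the host."""
        return (total_mb - host_reserved_mb) // per_guest_mb

    # e.g. 100 lightweight guests at an assumed 64 MB each already want
    # ~7.7 GB of RAM, far beyond the 512 MB box discussed above:
    print(total_needed(64, 100))            # 7680.0 (MB)

    # while a 512 MB box with ~100 MB kept for the host fits only a handful:
    print(guests_possible(512, 100, 64))    # 6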