1120176027 M * complexho Yes, that's what I was thinking :) OK I will pull together the stuff I have from last time I tried and get a page going
1120176061 M * complexho brb need a smoke :)
1120176068 M * jonsmel I will as well and we'll make this easier for future ppl
1120176071 M * jonsmel :)
1120176072 M * jonsmel ok
1120176078 M * jonsmel I must be off as well
1120176109 M * jonsmel laterzzz bertl, complexho; for tomorrow the battle resumes!!
1120176126 N * jonsmel jonsmel_zZ
1120176160 M * Bertl night jonsmel_zZ!
1120176199 M * complexho night jonsmel_zZ :)
1120176233 M * complexho Bertl: OK, so we just might be in business with GFS... What was that you were saying about EVMS/LVM2?
1120176281 M * Bertl just that they should not make any difference
1120176298 M * complexho ok that's a good thing :)
1120176345 M * AprilDL I am using lilo. uname -r shows the version as my unpatched 2.6.11 kernel. lilo.conf says image=/vmlinuz and there is a symlink from there to my patched kernel.
1120176376 M * Bertl did you rerun lilo?
1120176437 M * AprilDL maybe we should start with what i should do now, because since i did it the debian way - I don't know if the debian tools I was using "reran lilo"
1120176476 M * Bertl well, after changing the symlink/entries in lilo.conf
1120176488 M * Bertl you should run 'lilo', it will update the bootloader config
1120176502 M * micah AprilDL: it depends which debian way you did it, but if you install the debian kernel package, it will run lilo if you answer yes to that question
1120176581 M * AprilDL i was following a kernel pkg newbie doc for debian http://newbiedoc.sourceforge.net/system/kernel-pkg.html. Anyway I have now rerun lilo and I am guessing now I reboot. Glory be to IP-based KVMs....
1120176645 M * AprilDL hmmm maybe first i will just try putting in an alternate boot from the old initrd image in case this fails
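A minimal sketch of the lilo.conf arrangement being discussed, with a fallback entry for the old kernel as AprilDL suggests; the labels, paths and initrd name are illustrative, not from the log:

    # /etc/lilo.conf (fragment)
    image=/vmlinuz                  # symlink pointing at the patched vserver kernel
        label=linux-vs
        read-only
    image=/vmlinuz.old              # previous known-good kernel as a fallback
        label=linux-old
        initrd=/boot/initrd.img-2.6.11
        read-only

    # after editing lilo.conf or retargeting the symlink, reinstall the boot map:
    lilo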
1120176707 M * complexho ok night all and thanks Bertl I am going to sleep on this now. thanks
1120176715 M * Bertl night complexho!
1120176738 N * complexho complexhozZ
1120176741 M * micah Bertl: been too busy to look at that chattr +t problem involving the debian patch?
1120176748 M * micah (it's ok if you have been!)
1120176769 M * Bertl don't even remember the issue :/
1120176850 M * Bertl but you can refresh my memory ...
1120176857 J * brc bruce@200141220059.user.veloxzone.com.br
1120176945 M * micah hehe
1120176948 M * micah let's see
1120176968 M * micah the problem was when vserver build was called, there was a chattr +t that happened and failed
1120176985 M * micah let me find the irc logs, it's easier to refresh :)
1120177219 M * micah need to refresh my copy of the logs
1120177233 M * micah oh hmm
1120177240 M * micah this irc log ends in may
1120177268 M * micah here we go
1120177332 M * micah Bertl: http://irc.13thfloor.at/LOG/2005-06/LOG_2005-06-21.txt
1120177352 M * micah search for chattr
1120177373 M * micah 1119396575
1120177820 M * Bertl hmm, -15 to -16 does not touch this ...
1120177833 M * Bertl so I assume it must be the same issue with the original patch
1120177844 M * Bertl but to make sure, could you verify that?
1120177865 M * Bertl I'll start a kernel compile here, and have a look at the -15 patches then ...
1120178038 M * micah Bertl: sure, I can verify anything
1120178059 M * micah what should I do to verify?
1120178091 M * Bertl get a 2.6.*-15 kernel patched with vserver (the original 1.9.x-4 or whatever)
1120178109 M * Bertl and see if the chattr +i fails as in your test kernel
1120178149 M * micah oh, yes you asked me to test that one already and I did
1120178164 M * micah and it gave the same results
1120178181 M * Bertl okay, so this was something which slipped through the initial tests
1120178186 M * Bertl interesting ...
1120178197 M * micah it is trying to do chattr -t, not +i
1120178211 M * Bertl yeah, but the +i fails as the -t does, no?
1120178233 M * micah but... maybe it is more interesting and relevant to think about waiting until 2.0 is ready and moving towards that instead of trying to fix old problems?
1120178251 M * Bertl well, I probably won't do a 2.0 for debian ...
1120178255 M * EverD Bertl, how to create the set of dev nodes for gentoo?
1120178255 M * EverD ls /vservers/gentoo/dev/
1120178255 M * EverD MAKEDEV@ full initctl| log= null ptmx pts/ random stderr urandom zero
1120178257 M * micah oh, right, I believe you are correct, I forgot the +i was also not working
1120178289 M * Bertl EverD: you know the util-vserver page describing the different build methods?
1120178399 M * EverD no, i didn't build my guest. i unpacked the distro and configured it. But there is now a problem with /dev
1120178408 M * micah Bertl: do you think it would be easier now that 1.9.x-4 has been ported to do 2.0? (I am wondering if I would be able to do it)
1120178434 M * Bertl EverD: http://linux-vserver.org/alpha+util-vserver
1120178445 M * Bertl it describes, among others, the skeleton build method
1120178455 M * EverD Bertl, thx. I'll check it
1120178468 M * Bertl EverD: use this with a fictional name to get a minimal/useful /dev
1120178476 M * Bertl then just copy that over to your guest
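A rough sketch of the trick Bertl suggests, assuming util-vserver's skeleton build method and a throwaway guest named "dummy"; the name and exact arguments are illustrative, not from the log:

    # build a skeleton guest just to obtain a minimal, sane /dev
    vserver dummy build -m skeleton
    # copy the generated device nodes into the existing gentoo guest
    cp -a /vservers/dummy/dev/. /vservers/gentoo/dev/
    # the throwaway guest can be removed afterwards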
1120178540 M * micah It is not right to say that 20 instances of mysql is the same as one instance of mysql with 20 subprocesses, right?
1120178545 M * Bertl micah: maybe, you might work yourself through the FOR-* patches and apply one after the other ...
1120178564 M * Bertl micah: probably not
1120178599 M * EverD Bertl, I'll try
1120178679 M * micah Bertl: someone is trying to tell me that vservers are less efficient because apache and mysql have to fork processes and the argument that they share memory footprints (even small) doesn't matter because the forked processes aren't shared
1120178711 M * Bertl micah: well, they are less efficient than a monolithic solution
1120178725 M * Bertl e.g. if you have _one_ apache which handles 100 domains
1120178743 M * Bertl then this is more efficient than 100 apaches handling _one_ domain each
1120178809 M * micah right, but 100 apaches handling one domain each is more efficient than 100 computers with 1 apache on each handling one domain each
1120178822 M * Bertl that's right
1120179230 M * Bertl micah: hmm, I hate to say it, but the compiled version of my patches for the debian kernel (aka 2.6.8-vs1.9.5.x-4) boots and works quite fine with extended attributes here (with qemu)
1120179243 M * micah hmmm
1120179270 M * micah I must have missed something :(
1120179280 M * Bertl # touch /tmp/x.dat
1120179280 M * Bertl # chattr +i /tmp/x.dat
1120179280 M * Bertl # lsattr /tmp/x.dat
1120179280 M * Bertl ----i-------- /tmp/x.dat
1120179280 M * Bertl # chattr -t /tmp/x.dat
1120179282 M * Bertl # lsattr /tmp/x.dat
1120179285 M * Bertl ----i-------- /tmp/x.dat
1120179287 M * Bertl # chattr +t /tmp/x.dat
1120179290 M * Bertl # lsattr /tmp/x.dat
1120179292 M * Bertl ----i------t- /tmp/x.dat
1120179295 M * Bertl (tmp is on / as ext2)
1120179320 M * Bertl I'll test again with the -16 version
1120179374 M * micah the only difference is that my partition was reiserfs and ext3
1120179392 M * matti ;]
1120179395 M * Bertl reiserfs needs a special mount option
1120179403 M * Bertl I hope you used that one ...
1120179408 M * matti ;p
1120179446 M * micah no wait, /tmp is where i was trying things
1120179464 M * micah hmm but special mount option for extended attributes?
1120179477 M * Bertl yup, attribs or attrs
1120179492 M * Bertl always forget how it is called
1120179633 M * micah I will have to re-make a kernel here
1120179720 M * matti Bandwidth must flow...
1120179722 M * matti Oops.
1120179727 M * Bertl :)
1120179727 M * matti Wrong window.
1120179729 M * matti ;]
1120180107 Q * EverD Quit:
1120180315 M * Bertl micah: hmm, it seems that ext3 indeed exposes that issue
1120180383 T * services.oftc.net : http://linux-vserver.org/ | latest stable 1.2.10, devel 1.9.5, 2.0-rc4, ng9.5 -- He who asks a question is a fool for a minute; he who doesn't ask is a fool for a lifetime -- share the gained knowledge on the wiki, and we'll forget about the minute ;)
1120180453 M * Bertl rechecking now with sane config ...
1120180455 M * micah Bertl: aha! so it was my ext3 / partition (where tmp is)
1120180675 M * _ag_ Bertl: previously, i used chattr +t with XFS and it didn't work, do you know that issue?
1120180716 M * Bertl what kernel version?
1120180723 M * _ag_ Bertl: 2.6.12.1
1120180760 M * _ag_ attributes compiled in, debian sid e2fsprogs
1120180806 M * Bertl didn't work means?
1120180841 M * _ag_ # chattr +t tmp
1120180841 M * _ag_ chattr: Operation not supported while setting flags on tmp
1120180873 M * Bertl not
1120180881 M * Bertl chattr: Function not implemented while setting flags on ...
1120180938 M * _ag_ well, it's pasted from now
1120180969 M * _ag_ strace gives:
1120180970 M * _ag_ ioctl(3, EXT2_IOC_SETFLAGS, 0xbfbf8bac) = -1 EOPNOTSUPP (Operation not supported)
1120180990 M * _ag_ of course, it's xfs, but it's being said it should work too :/
1120180996 M * _ag_ i'm confused
1120181020 M * Bertl give me a few minutes, I'm doing this every 2 months :)
1120181036 M * _ag_ however, setattr --barrier from util-vserver works
1120181043 M * Bertl after that, I figure the required flags and options, then we forget about the fs :)
1120181092 M * _ag_ "works" = doesn't give me an error :)
1120181240 M * Bertl okay, for the next time keywords: xattrib attrs xfs reiserfs
1120181246 M * Bertl ===================
1120181276 M * Bertl reiserfs requires the mount option "attrs" to make xattribs work as expected
1120181331 M * Bertl xfs does support xattribs natively, but not the tailmerge flag 't', therefore it gives: 'Operation not supported while setting flags on ...' when you try something with 't'
1120181366 M * Bertl xfs works quite fine with the barrier and iunlink flags as well as the 'i' immutable flag
1120181406 M * Bertl reiserfs _seems_ to work fine without the attrs option, but loses all information across mounts
1120181415 M * Bertl ---------------
1120181433 M * Bertl the ext3 seems to be a debian patch bug/issue, I'm currently looking into it
1120181460 M * _ag_ it seemed to me kernel 2.6 didn't really need that chattr +t hack, didn't it?
1120181470 M * Bertl ah, and I almost forgot, the +/-t stuff is an Ola bug :)
1120181513 M * _ag_ you mean the maintainer screwed up again? :O
1120181516 M * Bertl it has been neither required nor very useful for more than a year, and IIRC it was filed as a bugreport and closed by Ola with the 'will not be fixed' state
1120183900 M * Bertl micah: http://vserver.13thfloor.at/Stuff/Debian/patch-2.6.8-16-vs1.9.5.x-5.diff
1120183907 M * Bertl this fixes the 'ext3' issue
1120183970 A * Bertl is rebooting now (new kernel)
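Bertl's per-filesystem summary, restated as commands for reference; the device and mountpoint are illustrative, not from the log:

    # reiserfs: attribute flags only behave (and persist) with the "attrs" mount option
    mount -o attrs /dev/sda3 /vservers
    touch /vservers/x.dat
    chattr +i /vservers/x.dat
    umount /vservers && mount -o attrs /dev/sda3 /vservers
    lsattr /vservers/x.dat      # the 'i' flag should survive the remount

    # xfs: barrier, iunlink and 'i' work, but the tailmerge flag 't' does not:
    chattr +t file-on-xfs       # fails with 'Operation not supported'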
1120184668 J * tobi ~tobi@Ottawa-HSE-ppp265781.sympatico.ca
1120184726 M * tobi Evening, i just installed vserver and it was a breeze. Fantastic documentation on the wiki! One question though. Is there an agreed-upon way of giving a vserver its own IP?
1120186149 M * Bertl hey tobi!
1120186172 M * Bertl which kernel/config/tools do you use?
1120186173 M * tobi hey :)
1120186183 M * tobi i started out with a somewhat recent gentoo install
1120186200 M * Bertl k, so 2.6 based, probably 0.30.207 as tools?
1120186237 M * tobi yea 2.6.11.5 and 207 tools
1120186257 M * Bertl okay, how did you create the guest?
1120186258 M * tobi i pretty much followed the step-by-step 2.6 guide verbatim
1120186278 M * tobi debootstrap, it's a sarge install. i chrooted into it and removed all sorts of running services
1120186307 M * Bertl with the tools' 'build' function or manually?
1120186320 M * tobi with the build function, that's right
1120186350 M * Bertl okay, so if you build a guest this way, you can specify the --interface option/args at build time
1120186369 M * Bertl --interface eth0:192.168.3.1/21
1120186373 M * Bertl for example
1120186382 M * tobi nice! is there any way to retrofit this?
1120186385 M * Bertl but you can 'add' an ip later in the config
1120186403 M * Bertl you are currently using the new-style tree based config
1120186413 M * Bertl which is described on the famous flower page
1120186420 M * tobi hehe
1120186421 M * Bertl http://www.nongnu.org/util-vserver/doc/conf/configuration.html
1120186431 A * tobi loads it in links
1120186438 M * Bertl details for the tools can be found here:
1120186442 M * Bertl http://linux-vserver.org/alpha+util-vserver
1120186457 M * Bertl basically you have to add a directory for the 'new' ip
1120186485 M * tobi how is the network tunneled. does vserver use tun or another dummy device?
1120186503 M * Bertl neither; all networking happens on the host
1120186516 M * Bertl so there is no bridging/tunneling/whatever
1120186527 M * tobi interesting. i think i'm still thinking too much in terms of xen
1120186537 M * Bertl you basically provide an ip/alias for the guest
1120186553 M * Bertl and the guest is 'restricted' to a subset of the host ips
1120186571 M * Bertl i.e. bind/connect happens just on those ips
1120186603 M * Bertl on the flower page, look for 'interfaces'
1120186619 M * Bertl the section describes the requirements for the interface config
1120186626 M * Bertl basically you have 3 options there
1120186642 M * Bertl a) the ip is available on the host, nothing is done on startup
1120186665 M * Bertl b) the ip will be added to an interface (but without alias) on guest startup, and removed on shutdown
1120186684 M * Bertl c) the ip will be added/removed as alias (i.e. ifconfig sees it :)
1120186717 M * Bertl in your case, where you probably want a single ip, which is created on guest startup, you do:
1120186748 M * Bertl - create a dir /etc/vservers/<guest>/interfaces/0
1120186772 M * Bertl - echo "<ip>" >/etc/vservers/<guest>/interfaces/0/ip
1120186788 M * Bertl - echo "<mask>" >/etc/vservers/<guest>/interfaces/0/mask
1120186803 M * Bertl - echo "eth0" >/etc/vservers/<guest>/interfaces/0/dev
1120186811 M * Bertl (as an example)
1120186836 M * tobi beautiful
1120186843 M * Bertl the '0' is just an arbitrary name identifying the IP
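Bertl's steps with concrete values filled in, assuming tobi's guest is called "sarge1" and should get 192.168.1.3/24 on eth0; the guest name and addresses are illustrative, not from the log:

    mkdir -p /etc/vservers/sarge1/interfaces/0
    echo "192.168.1.3"   > /etc/vservers/sarge1/interfaces/0/ip
    echo "255.255.255.0" > /etc/vservers/sarge1/interfaces/0/mask
    echo "eth0"          > /etc/vservers/sarge1/interfaces/0/dev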
1120186928 M * tobi i'm setting up two computers as hosts right now. one is a gentoo machine and one running debian
1120186934 M * tobi the gentoo has the following problem
1120186950 M * tobi vserver Staging start
1120186950 M * tobi - /proc/uptime can not be accessed.
1120186970 M * Bertl you have to 'integrate' the vprocunhide script into host startup
1120186995 M * Bertl this configures the procfs security properly
1120187007 M * tobi aha, ok will do
1120187014 M * Bertl (without that, all entries are hidden for the guests)
1120187058 M * tobi thank you very much for walking me through this. i really appreciate your help
1120187059 J * eXplasm explasm@p549F7CF9.dip.t-dialin.net
1120187069 M * Bertl you're welcome!
1120188209 M * tobi Bertl: one thing which is odd to me ( but i'm not very experienced with linux networking ). I gave the vserver the ip of 192.168.1.3 but if i ssh there i land on the host box
1120188240 M * Bertl that's not surprising, because the host's sshd is still listening to _all_ ips
1120188258 M * Bertl you can change that by adding an appropriate Listen* directive in the sshd config
1120188269 M * Bertl (usually /etc/ssh/sshd_config)
1120188282 M * tobi aha i understand
1120188290 M * Bertl listing only 'host' ips there will allow the guests to bind sshds too
1120188319 M * tobi is there a way to set it up in such a way that this would be unnecessary?
1120188337 M * Bertl the guests can bind to 0.0.0.0 quite fine
1120188354 M * Bertl only the host needs to be limited to the 'host' ips
1120188361 M * tobi it would be a bit hard to explain to everyone that they have to watch their binds or one customer might steal another customer's web hits
1120188373 M * tobi oh ok, that's fine
1120188379 M * tobi thanks again
1120188386 M * Bertl my pleasure!
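A sketch of the sshd change Bertl describes, assuming 192.168.1.2 is the host's own IP (the address is illustrative, not from the log):

    # /etc/ssh/sshd_config on the host
    # bind the host's sshd to the host IP only, leaving guest IPs
    # (e.g. 192.168.1.3) free for the guests' own sshds
    ListenAddress 192.168.1.2

    # then reload sshd, e.g.
    /etc/init.d/ssh restart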
1120192755 N * Bertl Bertl_zZ
1120197615 Q * tobi Quit: tobi
1120198166 Q * are|lunch Ping timeout: 480 seconds
1120199187 J * are|lunch ~are@gateway-dsl.lihas.de
1120199299 M * are|lunch hi
1120199305 N * are|lunch _are_
1120201225 M * DaPhreak lo _are_
1120201622 Q * _are_ Ping timeout: 480 seconds
1120201984 J * Aiken_ ~james@tooax6-049.dialup.optusnet.com.au
1120201991 Q * Aiken_ Quit:
1120202002 Q * Aiken Read error: Connection reset by peer
1120202129 J * rs ~rs@mon75-8-82-230-181-39.fbx.proxad.net
1120202211 Q * DaPhreak Quit: Lost terminal
1120202219 J * Doener` ~doener@p54877B0A.dip.t-dialin.net
1120202252 J * DaPhreak ~phreak@styx.xnull.de
1120202268 Q * DaPhreak Quit:
1120202281 J * DaPhreak ~phreak@styx.xnull.de
1120202324 J * sukria ~sukria@sargon.lncsa.com
1120202546 Q * matti Ping timeout: 480 seconds
1120202666 Q * Doener Ping timeout: 480 seconds
1120202782 J * matti matti@linux.gentoo.pl
1120203415 J * kestrel ~athomas@vsrouter.swapoff.org
1120203419 M * kestrel hello
1120203505 M * kestrel i have a question...i'm using 2.6.12-vs2.0-rc4 and util-vserver-0.30.207, but util-vserver fails to build with:
1120203512 M * kestrel /usr/include/linux/err_kernel_only.h:1:2: #error Kernel only header included in userspace
1120203530 M * kestrel is that the right combination of utilities/patch?
1120203597 M * DaPhreak kestrel: at least it has worked for me for about 2 weeks or so (since .12 was released)
1120203681 M * Hollow kestrel: can you paste the complete error?
1120203685 M * Hollow moin DaPhreak
1120203700 M * kestrel could be my headers
1120203753 M * kestrel In file included from /usr/include/linux/spinlock.h:1,
1120203753 M * kestrel from ./kernel/dlimit.h:5,
1120203753 M * kestrel from ./linuxvirtual.h:24,
1120203753 M * kestrel from lib/syscall.c:28:
1120203758 M * kestrel /usr/include/linux/err_kernel_only.h:1:2: #error Kernel only header included in userspace
1120203777 M * kestrel from this compiler line:
1120203778 M * kestrel then mv -f "lib/.deps/lib_libvserver_la-syscall.Tpo" "lib/.deps/lib_libvserver_la-syscall.Plo"; else rm -f "lib/.deps/lib_libvserver_la-syscall.Tpo"; exit 1; fi
1120203778 M * kestrel gcc -DHAVE_CONFIG_H -I. -I. -I. -I ./lib -I ./ensc_wrappers -D_GNU_SOURCE -D_REENTRANT -DNDEBUG -march=i686 -O2 -pipe -Wl,-O1 -std=c99 -Wall -pedantic -W -funit-at-a-time -MT lib/lib_libvserver_la-syscall.lo -MD -MP -MF lib/.deps/lib_libvserver_la-syscall.Tpo -c lib/syscall.c -DPIC
1120203789 M * kestrel maybe it's my headers :\
1120203810 M * DaPhreak yeah seems so :) which kernel-headers version are these from ?
1120204050 M * kestrel it says they're from glibc 2.3.4...
1120204058 M * kestrel [root@sclera:/usr/include/linux]cat spinlock.h
1120204059 M * kestrel #include <linux/err_kernel_only.h>
1120204061 M * kestrel hmmm :\
1120204180 M * kestrel AHA!!! --with-kerneldir
1120204247 M * kestrel hmm
1120204253 M * kestrel thwarted at every turn
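Presumably the configure switch kestrel spotted points util-vserver's build at a specific kernel source tree; a hedged sketch (the path is illustrative, and per the log it did not actually cure the header problem here):

    ./configure --with-kerneldir=/usr/src/linux-2.6.12
    make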
1120204635 M * kestrel hmmm
1120204656 M * kestrel so you guys are using that combination of patch and utils?
1120204774 M * DaPhreak well at home and here at work, yes :)
1120204786 M * kestrel okay, well at least i'm on the right track
1120204811 M * DaPhreak heh, btw, what distro is that ? debian ?! (from the --with-kerneldir)
1120204822 M * kestrel arch
1120204943 M * Hollow i don't even have an err_kernel_only.h
1120205037 M * kestrel yeah, i'm pretty sure it's from glibc's headers
1120205711 M * Doener` morning folks
1120205732 M * Hollow morning Doener`
1120205992 J * Vudumen_ vudumen@perverz.hu
1120206054 Q * Vudumen Read error: Connection reset by peer
1120206556 M * kestrel looks like arch uses the headers from: http://ep09.pld-linux.org/~mmazur/linux-libc-headers/
1120207071 M * kestrel aggravating
1120207281 Q * rt Ping timeout: 480 seconds
1120207416 J * erwan_taf ~erwan@81.80.43.67
1120207796 N * BobR_oO BobR
1120209527 M * Doener` Hollow: http://www.13thfloor.at/~doener/vserver/patches/patch-libvserver-move-headers-around.diff
1120209583 M * Doener` that one moves the kernel headers into their own directory (kernel/), Bertl's vserver.h and syscall.h into lib/, and renames your vserver.h to libvserver.h
1120209603 M * Doener` (because of the moves it's not very readable ;)
1120210108 M * Doener` (the patch that is...)
1120210674 M * DaPhreak lo Doener` ;)
1120210708 M * DaPhreak i think he's back to school (Hollow that is)
1120211042 Q * janra Ping timeout: 480 seconds
1120212466 M * kestrel so, symlinking /usr/src/linux-2.6.12/include/linux to /usr/include did not fix the issue
1120212473 M * kestrel it changed it in new and exciting ways however
1120212571 J * dsoul_ darksoul@pingu.ii.uj.edu.pl
1120212617 Q * dsoul Read error: Connection reset by peer
1120212675 Q * sukria Quit: see you
1120212708 N * dsoul_ dsoul
1120213148 J * dsoul_ darksoul@pingu.ii.uj.edu.pl
1120213246 Q * dsoul Remote host closed the connection
1120213509 N * dsoul_ dsoul
1120214406 N * BobR BobR_Futter
1120215351 Q * erwan_taf Ping timeout: 480 seconds
1120215844 J * sukria ~sukria@213.223.184.201
1120217136 N * BobR_Futter BobR
1120218347 M * Hollow Doener`: thx, but the kernel headers are installed into the wrong dir now
1120218522 J * erwan_taf ~erwan@81.80.43.67
1120218893 P * erwan_taf Leaving
1120219104 M * kestrel hmm, it seems from this post: http://www.paul.sladen.org/vserver/archives/200311/0338.html
1120219112 M * kestrel that you need 2.4 headers in /usr/include
1120219349 M * sladen since you need to recompile the kernel you need the whole source, not just the headers
1120219538 M * kestrel yeah, i have the whole source as well
1120219552 M * kestrel but arch linux has 2.6 headers in /usr/include/{linux,asm} rather than 2.4 headers
1120219569 M * kestrel and it seems util-vserver does not compile with that combination
1120219573 M * DaPhreak well i also have the 2.6 headers in /usr/include
1120219587 M * DaPhreak and it compiles fine here :)
1120219604 A * kestrel shrugs
1120219615 M * kestrel the error in that post is exactly what i was getting
1120219622 M * kestrel and using 2.4 headers fixed it
1120219636 M * DaPhreak eh ? *lol* veeery odd :)
1120219678 M * kestrel we'll just smile, nod and move on i think :)
1120221320 N * jonsmel_zZ jonsmel
1120221331 J * prae ~prae@ezoffice.mandriva.com
1120222467 J * webster_ ~webster@p54934B01.dip.t-dialin.net
1120223448 J * eXplasm explasm@p549F7CF9.dip.t-dialin.net
1120223892 N * Bertl_zZ Bertl
1120223931 M * Bertl kestrel: it's not the fact that they are 2.6 headers, it's the fact that they are _broken_ 2.6 headers :)
1120224949 Q * webster_ uranium.oftc.net kinetic.oftc.net
1120224949 Q * dsoul uranium.oftc.net kinetic.oftc.net
1120224949 Q * Vudumen_ uranium.oftc.net kinetic.oftc.net
1120224949 Q * DaPhreak uranium.oftc.net kinetic.oftc.net
1120224949 Q * rs uranium.oftc.net kinetic.oftc.net
1120224949 Q * FaUl uranium.oftc.net kinetic.oftc.net
1120224949 Q * virtuoso uranium.oftc.net kinetic.oftc.net
1120224949 Q * id uranium.oftc.net kinetic.oftc.net
1120224949 Q * SiD3WiNDR uranium.oftc.net kinetic.oftc.net
1120224949 Q * Zoiah uranium.oftc.net kinetic.oftc.net
1120224949 Q * AprilDL uranium.oftc.net kinetic.oftc.net
1120224949 Q * zimbo uranium.oftc.net kinetic.oftc.net
1120224949 Q * mcp uranium.oftc.net kinetic.oftc.net
1120224987 J * Zoiah Zoiah@matryoshka.zoiah.net
1120225010 J * SiD3WiNDR luser@bastard-operator.from-hell.be
1120225021 J * webster_ ~webster@p54934B01.dip.t-dialin.net
1120225021 J * dsoul darksoul@pingu.ii.uj.edu.pl
1120225021 J * Vudumen_ vudumen@perverz.hu
1120225021 J * DaPhreak ~phreak@styx.xnull.de
1120225021 J * rs ~rs@mon75-8-82-230-181-39.fbx.proxad.net
1120225021 J * AprilDL ~chatzilla@ip68-9-200-247.ri.ri.cox.net
1120225021 J * mcp ~hightower@wolk-project.de
1120225021 J * zimbo ~zimbo@callisto.dom.bonis.de
1120225021 J * id ~id@relax-media.softwarezentrum.de
1120225021 J * virtuoso ~s0t0na@80.253.205.251
1120225021 J * FaUl ~immo@ip88.164.1211G-CUD12K-01.ish.de
1120225027 Q * DaPhreak Remote host closed the connection
1120225028 J * DaPhreak ~phreak@styx.xnull.de
1120226722 M * kestrel bertl: broken how?
1120226781 M * kestrel i tried these ones: http://ep09.pld-linux.org/~mmazur/linux-libc-headers/
1120226792 M * kestrel i also tried the ones from the running kernel itself
1120226993 M * Bertl headers from the kernel are _always_ broken
1120227009 M * Bertl you want to use libc headers (but working ones)
1120227091 M * jonsmel morning all!
1120227098 M * Bertl morning jonsmel
1120227108 M * jonsmel how goes it?
1120227140 M * kestrel and the ones at that url are broken too?
1120227159 M * Bertl no idea, they might be ..
1120227225 M * kestrel mmm
1120227694 N * BobR BobR_oO
1120227737 Q * anonymousc Quit: adios
1120227781 M * kestrel do you have a link to a package containing non-broken headers?
1120227818 M * Bertl you said the 2.4 headers worked, no?
1120227861 M * kestrel yes they did
1120227875 M * kestrel but i'd like to find out why the arch 2.6 headers don't work, if they're supposed to
1120227875 M * Bertl so those are (in this regard) non-broken headers then
1120227902 M * kestrel let me rephrase that then: do you have a link to a package containing non-broken 2.6 headers?
1120227936 M * Bertl well, define 2.6 headers ...
1120228085 M * kestrel well i think they come from here: http://www.kernel.org/pub/linux/kernel/v2.6/
1120228190 M * Bertl well, the thing is, they are _not_ intended for userspace
1120228220 M * Bertl the 'kernel headers' are just for use by the system libraries (i.e. (g)libc)
1120228241 M * Bertl the library then provides a sanitized version of those headers for userspace
1120228266 M * Bertl if the libc headers are not that well tested/sanitized, they are broken. period.
1120228281 M * Bertl but we can investigate and fix your headers if you like
1120228607 M * kestrel yes, i am aware that the headers in the kernel source need to be sanitised...which is why i was asking if you know of a sane package
1120228772 M * Bertl mine are from glibc 2.2.5
1120229057 J * knoppix_ ~knoppix@dsl-082-082-082-061.arcor-ip.net
1120230652 M * Doener` evening!
1120230693 M * Doener` Hollow: hm, i didn't see anything in there that would install the headers at all... but maybe that's because i got no experience with automake
1120231064 M * Bertl hey Doener`!
1120231070 M * Bertl off to dinner now ... back later
1120231085 N * Bertl Bertl_oO
1120231926 Q * prae Quit: Execute Order 69 !
1120233276 N * BobR_oO BobR
1120233328 J * janra janra@paradox.homeip.net
1120233409 N * Bertl_oO Bertl
1120233431 M * Bertl welcome janra!
1120233944 Q * rs Quit: rs
1120234501 Q * sukria Quit: see you
1120234531 N * BobR BobR_afk
1120237499 M * Doener` Hollow: i updated the patch, now the kernel includes are installed into $(includedir)/linux/vserver
1120237509 M * Doener` hope that's fine with you now ;)
1120237518 M * Doener` gone now, birthday bbq :)
1120237643 M * Bertl bbq?
1120237751 Q * gregster Remote host closed the connection
1120238174 M * Vudumen_ barbeque maybe? :)
1120238308 M * Bertl probably
1120238753 J * tomi ~tomi@pha-84-242-95-4.nat.karneval.cz
1120238755 M * tomi Hi
1120238765 M * Bertl hey tomi!
1120238771 M * tomi do you know in which deb is mkraid ?
1120238782 M * Bertl no idea :)
1120238792 M * tomi :-/
1120238959 M * eyck raidtools?
1120238984 M * _ag_ tomi: why don't you use packages.debian.org to find out?
1120238998 M * eyck why are you not using mdadm?
1120239017 M * tomi no
1120239023 J * _are_ foobar@dsl-084-056-136-105.arcor-ip.net
1120239027 M * tomi raidtools it isn't
1120239045 M * Bertl does debian policy permit mdadm? :)
1120239058 M * _are_ hi
1120239069 M * Bertl hey _are_!
1120239073 M * _ag_ Bertl: i suppose, it's in main :)
1120239081 M * _are_ well, mdadm is distributed with debian, so I guess, yes
1120239101 M * Bertl but, but ... it's only two years old, no?
1120239115 M * _are_ eh? guess I miss some context :->
1120239143 M * _are_ someone claims vserver can't be in debian because it is younger than 2 years?
1120239147 M * _ag_ Bertl: i feel the troll here :)
1120239177 M * Bertl _ag_: naah, just teasing a little ...
1120239221 M * Bertl probably because of:
1120239226 M * Bertl urpmf mkraid
1120239229 M * Bertl raidtools:/sbin/mkraid
1120239229 M * Bertl raidtools:/usr/share/man/man8/mkraid.8.bz2
1120239259 M * eyck I thought vserver was already in debian?
1120239279 M * Bertl yes, it is ... a broken version at least ...
1120239461 J * comdata ~mertins@D9257.d.pppool.de
1120239466 M * comdata moin
1120239470 M * Bertl welcome comdata!
1120239485 M * comdata finally I am back
1120239496 M * Bertl who are you? *G*
1120239503 M * comdata got a second opteron system
1120239640 M * comdata Bertl: so what's the status, I am currently doing a reinstall of a system and wanted to use kernel 2.6.12.2 but saw now that there are only patches for 2.6.11.x
1120239685 M * Bertl http://www.13thfloor.at/~doener/vserver/patches/patch-2.6.12-vs2.0-rc4.diff
1120239847 M * comdata did you have any chance testing it on a 64bit machine?
1120239886 M * Bertl well, this version no, but the 2.6.11 kernel version was running on x86_64, hppa64 and s390x
1120239924 M * comdata is dietlibc still very recommended for the userland utils, or is glibc ok?
1120239933 M * Bertl dietlibc it is ...
1120239955 M * eyck why do people insist on producing broken versions for debian to pick up?
1120240069 M * Bertl no idea ...
1120240111 M * Hollow Doener`: 0.1.1 out, with your patch
1120240584 M * _are_ well, I just swapped a 2.6.11.rc-something for a 2.6.11.12 with the 2.0-rc4 patch for the 11.11 version, seems to run fine on the dual opteron machine
1120240616 M * _are_ but then again I did this because the machine crashed a few times today, just like the whole power supply net, internet lines and phone lines for that company
1120240630 M * _are_ so everything up for more than 2h is stable compared to the rest ;)
1120241757 Q * knoppix_ Remote host closed the connection
1120242092 N * jonsmel jonsmel|LUNCH
1120242106 Q * _are_ Ping timeout: 480 seconds
1120243278 M * _ag_ eyck: what do you mean?
1120244165 Q * tomi Quit: Quitting
1120244341 Q * comdata Ping timeout: 480 seconds
1120244627 N * jonsmel|LUNCH jonsmel
1120244917 J * rs ~rs@212.43.230.5
1120245155 M * Bertl wb jonsmel! hey rs!
1120245330 M * jonsmel thanks, how's it going
1120245350 M * Bertl everything fine so far, and for you?
1120245360 M * jonsmel oh, it's coming
1120245367 M * jonsmel making a little progress with gfs
1120245388 M * jonsmel or, at least I've almost got the cluster functioning the way I want it to.
1120245423 N * complexhozZ complexho
1120245432 M * complexho hi jonsmel, Bertl !
1120245433 M * Bertl morning complexho!
1120245518 M * jonsmel hey complexho
1120245580 M * complexho jonsmel: I'd be very interested to hear what your objectives are for using GFS
1120245636 M * jonsmel We mainly want to use it so that we can have multiple nodes mounting the same share
1120245644 M * jonsmel ie... the vservers dir
1120245657 M * complexho so you can migrate a vserver without the copy step yes?
1120245677 M * jonsmel that way if we have hardware issues on one of the nodes we can, in a matter of seconds, bring up a particular vserver on another node
1120245695 M * jonsmel correct, but also it would be much faster than a copy
1120245697 M * complexho yeah, that's exactly what I was thinking
1120245710 M * complexho because the filesystem is mounted on all your nodes
1120245711 M * jonsmel We are not really using gfs for the cluster
1120245715 M * jonsmel right
1120245755 M * complexho have you considered locking at the vserver level to prevent multiple nodes attempting to boot vservers ?
1120245756 M * jonsmel We just needed a filesystem that will allow for multiple mounts to the same share
1120245783 Q * eXplasm Remote host closed the connection
1120245784 M * jonsmel well, we go about it a different way, symlinks and such
1120245807 M * complexho right ok so you use a symlink to make the vserver 'appear' in the right place?
1120245815 M * jonsmel right
1120245820 M * complexho that's a nice simple idea
1120245835 M * jonsmel so when we want a vserver to switch places, all we have to do is create one symlink
1120245835 M * Bertl well, nice but dangerous ...
1120245845 M * complexho I also wondered about the dlm running under a vserver specific environment
1120245850 M * jonsmel it seems to work very well right now
1120245870 M * Bertl jonsmel: did you test regarding security issues?
1120245874 M * complexho Bertl: does that have any effect on the chroot barrier?
1120245927 M * jonsmel which are you asking about?
1120245960 J * eXplasm explasm@p549F7CF9.dip.t-dialin.net
1120245978 M * Bertl well, mainly filesystem related issues ...
1120246026 M * complexho re the lock manager... As I understand it, any node needing to lock a file does so through the dlm. Considering that each vserver will only ever be started on one node, I was wondering if GFS might have the ability to do a recursive lock from the vserver root dir downwards and if that might have decent performance increases?
1120246039 J * comdata ~mertins@Db540.d.pppool.de
1120246044 M * complexho Guess I should put that to the linux-cluster channel really...
1120246082 M * complexho it's quite a specific case, GFS as a /vservers partition...
1120246106 Q * webster_ Quit:
1120246114 M * Bertl well, if you 'just' mount it once, where is the point in using GFS at all?
1120246116 M * jonsmel That is basically the way we are running it
1120246145 M * complexho but it is the instant fail-over/migrate thing. If you release the lock, the vserver can just be started from another node
1120246169 M * Bertl so it could after the 'new' mount, no?
1120246249 M * jonsmel well, the way we operate it is:
1120246260 M * jonsmel it is mounted on all nodes, currently 4
1120246298 M * jonsmel then we only specify certain vservers to start up on each node, of course none on two different nodes at once
1120246334 M * jonsmel that was the way we were operating for months before we tried to move to lustre
1120246365 M * complexho do you have a single shared /vserver filesystem or one fs per vs?
1120246376 M * jonsmel yeah... I beat gfs. I got it to operate the way I want "at least for trials right now"
1120246378 M * jonsmel :)
1120246387 M * jonsmel single shared vs dir
1120246404 M * jonsmel I have /dev/md0, a raid 5 array
1120246410 M * complexho right
1120246422 M * jonsmel gfs file system, and all it has on it is /vservers
1120246435 M * jonsmel that gets mounted as /vservers on all nodes
1120246458 M * complexho I was looking into creating separate filesystems using LVM2 or EVMS but apparently if you wanted to create a new fs or resize or anything like that you would have to dismount all other nodes first...
1120246471 M * jonsmel We should have this operational and live by mid next week
1120246489 M * complexho So a single vs partition seems like the only way with GFS for now - Lustre I believe can get around that
1120246503 M * jonsmel there are a lot of other issues with separate partitions for each vs that we didn't like as well
1120246522 M * complexho yes, but it seemed attractive to be able to resize and snapshot and such
1120246538 M * complexho alas probably not that practical for vservers
1120246587 M * complexho do you distribute your /etc/vservers on this shared fs as well?
1120246600 M * complexho or replicate
1120246676 M * jonsmel we don't distribute, that is where we do our symlinks specifying which vserver starts on what node
1120246699 M * complexho so your etc is shared too - cool
1120246732 M * jonsmel here is a typical setup
1120246747 M * jonsmel gfs mount -> /vservers
1120246753 M * jonsmel within /vservers
1120246800 M * jonsmel hmm
1120246839 M * jonsmel /vservers/CONFIGS holds all the startup items for the vs
1120246861 M * jonsmel /vservers/vserver_name is the actual vs
1120246884 M * jonsmel /etc/vservers/ symlink -> /vservers/vserver_name
1120246901 M * jonsmel that's basically it for a quicky
1120246924 M * complexho I'm right with you...
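A sketch of the shared-GFS layout jonsmel just described, assuming the GFS volume lives on /dev/md0 and a guest named "web1"; the names are illustrative, and whether the symlink points at the guest root or at its CONFIGS entry is a guess from the log:

    # on every node: mount the shared GFS volume as /vservers
    mount -t gfs /dev/md0 /vservers

    # shared layout, per the description above:
    #   /vservers/CONFIGS    startup items for the guests
    #   /vservers/web1       the actual guest

    # on exactly one node, a symlink selects where the guest runs:
    ln -s /vservers/web1 /etc/vservers/web1

    # "migration": remove the symlink here, recreate it on another
    # node, and start the guest there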
1120246953 M * jonsmel I have gfs set up now and I am setting up a test vserver to make sure it works
1120246981 M * jonsmel I now have gfs running where I can lose any node except the storage server and the cluster stays active
1120246989 M * jonsmel that was a fight but it works now
1120246993 M * complexho let me know how you get on, and I'll take the plunge again next week on our system - we have a couple of spare nodes for testing atm
1120247016 M * complexho hopefully our san will get us around that issue :)
1120247021 M * jonsmel like I said, I should have this running live by mid next week
1120247069 M * complexho good luck with it... GFS is a whore :P
1120247089 M * jonsmel thanks, I'll let you know, and yes it is. ;)
1120247118 M * jonsmel In the future we are going to work on getting Lustre and VS running together, of course with the help of Bertl
1120247131 M * jonsmel It is a much better route to go versus GFS
1120247163 M * complexho perhaps for your setup but for san I think GFS may be better performance-wise from what I've read
1120247187 M * complexho still, I can't say for certain for obvious reasons :)
1120247198 M * jonsmel Possibly, Lustre is supposed to have the highest speeds though
1120247222 M * jonsmel Which when we tested it, it was faster than GFS from a FS standpoint
1120247237 M * complexho so how much work did you say is needed to get Lustre working with vserver?
1120247262 M * Bertl my estimation was about 2 months ...
1120247263 M * jonsmel Bertl's guess was about 3 - 4 months
1120247270 M * jonsmel oh, 2 months
1120247273 M * jonsmel even better
1120247281 M * complexho u mean 2 man-months? ouch
1120247287 M * jonsmel right
1120247291 M * complexho blimey
1120247294 M * Bertl maybe less, but including testing
1120247303 M * jonsmel lustre didn't play nice with their kernel implementation
1120247311 M * Bertl a lot of stuff could be parallelized though ...
1120247314 M * complexho so is that to port all the xid tagging and stuff I was looking at last night?
1120247334 M * Bertl yup
1120247342 M * complexho like you have had to do for NFS et al
1120247344 M * complexho right
1120247373 M * complexho I do have a guy that would be up to it, it's just whether he'd be up for it ;)
1120247395 M * complexho is the effort parallelisable?
1120247417 M * Bertl 21:48 < Bertl> a lot of stuff could be parallelized though ...
1120247431 M * complexho I wondered if that's what you meant
1120247433 M * complexho :)
1120247439 M * Doener` and back again :)
1120247470 M * complexho jonsmel: I will have to take another look at Lustre again now...
1120247497 M * jonsmel That is the way we want to go but we just can't afford to do it right now
1120247504 M * jonsmel no manpower or $
1120247505 M * jonsmel hehe
1120247604 M * complexho well I need to get a cluster fs solution, have clients that may pay and a coder with some skill... There might be a way...
1120247662 M * complexho I suppose I might just have to get both running in a non-vserver scenario and do some testing on the san, then I can make a judgement on whether to stick with GFS or look more closely at getting Lustre working
1120247689 M * Bertl IMHO the first step would be getting lustre into the kernel (i.e. a single patch for the kernel which can build all required modules, either monolithic or _as modules_)
1120247708 M * jonsmel If you are testing
1120247711 M * complexho you mean like vserver is...
1120247723 M * jonsmel lustre is the easiest to set up with a vanilla 2.6.12 kernel
1120247733 M * Bertl this isn't even vserver related, and can be done by somebody somewhat experienced with the linux-kernel
1120247755 M * jonsmel yes, a patch like vserver does though
1120247761 M * complexho what proportion of the work would that make up iyo?
1120247786 M * Bertl something around 2-3 weeks
1120247810 M * complexho and then the rest would be splicing the two patches together into the kernel I guess :)
1120247818 M * Bertl no
1120247820 M * complexho and tweaking Lustre itself
1120247842 M * Bertl the next step would be arranging that they can both work together (which would take about 1 week)
1120247867 M * Bertl and after that, about 3-4 weeks for the adaptations (like xid, xattr and of course network issues)
1120248002 M * jonsmel if you want to move into testing lustre I can help get things set up for you. I have dealt with lustre for quite a few weeks figuring it out
1120248055 M * jonsmel It can be a little confusing to set up
1120248111 M * complexho ok cool, I will set up a test bed on the cluster next week and see if we can get something working
1120248153 M * jonsmel it would take about 1 - 2 hrs to get 2 nodes up and functioning correctly
1120248158 M * jonsmel probably closer to 1
1120248161 M * complexho really?
1120248171 M * complexho GFS or Lustre?
1120248190 M * jonsmel after messing with it for 3 weeks, I got pretty fast at setting it up
1120248192 M * jonsmel Lustre
1120248195 M * complexho :)
1120248202 M * jonsmel GFS takes a little longer
1120248240 M * jonsmel now that I got all the problems worked out with GFS it took me about 6 hrs to get 2 nodes set up
1120248267 M * jonsmel and well, I'm still adjusting minor items
1120248326 J * _are_ ~are@dsl-084-056-157-178.arcor-ip.net
1120248338 M * _are_ uff, no luck with ISPs today
1120248355 M * complexho have you got fenced working yet?
1120248392 M * jonsmel yea
1120248428 M * complexho cool, so that's the whole family singing then eh ;) there are a lot of components hanging together to just get a filesystem ;)
1120248428 M * jonsmel other issues dealing with making the cluster start and stop correctly
1120248440 M * jonsmel yes
1120248454 M * jonsmel gfs has many components it requires to run correctly
1120248459 Q * comdata Remote host closed the connection
1120248467 M * jonsmel I only care about having one of those components though
1120248474 M * _are_ oh, someone has good experience with gfs working?
1120248483 M * jonsmel that is why lustre was appealing to me
1120248513 M * jonsmel _are_: well I wouldn't quite say a good experience
1120248516 M * jonsmel :)
1120248521 M * _are_ :->
1120248542 M * jonsmel but definitely an experience
1120248546 M * _are_ well, I still don't dare work with distributed filesystems in production environments
1120248570 M * jonsmel yep
1120248718 M * complexho I suppose it is quite a scary thing...
1120248759 M * jonsmel complexho: just let me know when you want to try that and I'll make sure I'm available
1120248776 M * complexho ok how were you thinking of playing it?
1120248812 M * _are_ well, I played with afs/arla in 1997, it was quite an adventure back then, and now and then the server even ran for 3 days straight :-)
1120248814 M * jonsmel well, you'll want to get most of the lustre source from me, it is ready for the 2.6.12 kernel
1120248851 M * jonsmel for GFS you'll just want to check out the STABLE branch from redhat
1120248896 M * jonsmel Then I can basically walk you through setting them up
1120248910 M * jonsmel I don't have it all documented yet or I'd just send you that
1120248925 M * complexho well we can doc it as we go...
1120248930 M * jonsmel right
1120248953 M * jonsmel That's kind of what I have been doing, especially with the lustre setup
1120248967 M * complexho shall we try GFS first, considering we can go as far as vservers too
1120248982 M * jonsmel sure
1120249017 M * jonsmel Bertl: do you want us to move that conversation to #linux-cluster or keep it here
1120249040 M * Bertl np, keep it here or move it, it's up to you ...
1120249047 M * jonsmel ok
1120249065 M * jonsmel well since it is pertinent to VS we'll keep it here
1120249069 M * complexho well we can weigh it up, if the chan is already noisy we can come up with something else but log it
1120249075 M * jonsmel that way if anyone else wants to play along they can
1120249077 M * jonsmel :)
1120249089 M * jonsmel right
1120249182 M * complexho well I will catch you in here next week then and we'll set a time
1120249218 M * jonsmel sounds good
1120249236 M * jonsmel i'm getting ready to get outta here for the holiday
1120249241 M * complexho cool I will look forward to it
1120249246 M * jonsmel be back tuesday
1120249256 M * complexho are you going anywhere?
1120249266 M * jonsmel eh, kinda all over
1120249285 M * jonsmel mainly Cali..:)
1120249314 M * complexho heh..
1120249318 M * complexho raining here...
1120249396 M * jonsmel rain is nice, it's burning here
1120249404 M * complexho SWAP!
1120249410 M * jonsmel we have 13 wildfires all around us
1120249423 M * complexho hmm... UNSWAP!
1120249443 M * jonsmel :|
1120249688 M * jonsmel well, 'til next week, everyone enjoy
1120249701 M * complexho yeah see ya jonsmel !
1120249703 M * jonsmel later Bertl, complexho!
1120249739 N * jonsmel jonsmel|out_partying
1120249866 M * Bertl cya
1120250446 M * FaUl are you building a new d2char.mpq?
1120250449 M * FaUl OOPS
1120250493 M * FaUl there are things i hate about irssi - one of those things is that it changes the window focus when i switch with alt-a :-)
1120250708 Q * jonsmel|out_partying Ping timeout: 480 seconds
1120251634 Q * eXplasm Remote host closed the connection
1120251715 M * complexho Bertl: Another thing I have been looking at, following the libvserver announcement, was a c daemon we used to develop for freeVSD. Did you ever use vsd?
1120252060 Q * _are_ Ping timeout: 480 seconds
1120252626 M * _ag_ complexho: what are the differences between freevsd and vserver?
1120252655 M * complexho well, for one, freeVSD has not been developed in about 3-4 years
1120252677 M * _ag_ complexho: got it :P
1120252739 M * complexho but it was one of the early hosting VPS systems released under the GPL. It worked 100% in user space and sort of did the same partitioning thing as vserver but using xinetd and iptables to do the network virtualisation.
1120252927 M * complexho But VSD did have a really nice configuration daemon which did all the grunt work. It was network-transparent, used SSL encryption, and had quite a lot of modules to manage virtual servers. I'm looking at a copy of the last GPL'd release now to see if it might still be useful for what I am doing
1120252966 M * complexho and thought I would mention it on here
1120252997 M * Bertl complexho: no, didn't use vsd
1120253022 M * complexho would you be interested in seeing the daemon we used?
1120253031 M * daniel_hozac Doener` is already working on a daemon ;)
1120253040 M * Bertl complexho: sure, but not tonight ...
1120253054 M * Bertl I'm basically off to bed .. a little tired ...
1120253061 M * complexho ah ok ;)
1120253070 M * complexho I'll bring it up another time
1120253082 M * Bertl no, wrong, I'm off to bed ... have a nice one! :)
1120253091 N * Bertl Bertl_zZ
1120253105 M * complexho but if anyone else wants to see it let me know - my email is mark@ascendancy.ltd.uk
1120253113 M * complexho night Bertl
1120253116 Q * rs Quit: rs
1120253163 J * rs ~rs@212.43.230.5
1120253179 M * complexho daniel_hozac: is there any code released yet for Doener`'s daemon?
1120253194 M * complexho or a spec or discussion or anything?
1120253195 M * Doener` a pretty hackish, minimal test version...
1120253201 M * complexho hi Doener` !
1120253229 M * Doener` it only supports setup-less context creation (for 10 seconds, i.e. 1 sleeping process in the context) and entering a context
1120253244 M * Doener` will continue working on it tomorrow, when i'm sober again ;)
1120253248 M * complexho lol
1120253255 M * complexho in c?
1120253260 M * Doener` http://www.13thfloor.at/~doener/vserver/tools/vservd-0.02.tar.bz2
1120253262 M * Doener` yep
1120253276 M * Doener` it's my birthday, i'm allowed to be drunk ;)
1120253285 M * complexho cool :) congrats
1120253290 M * Doener` thanks
1120253361 M * complexho did you ever see the vsd daemon?
1120253377 M * Doener` usage: start vservd, start client, available commands in client "create #", "enter #", "quit" (replace # with an xid)
1120253386 M * Doener` no...
1120253405 M * Doener` could you send it to me via mail? i need to get to bed ;)
1120253421 M * complexho ok what's the address?
1120253429 M * complexho in fact I'll paste a URL 1 sec
1120253507 M * daniel_hozac any particular reason why it's not 0.0.2?
1120253569 M * complexho Doener`: email on way... anyone else: http://packetstormsecurity.org/UNIX/security/freevsd-1.4.9-2.tar.gz
1120253636 M * complexho we actually ported it to an early vserver (ctx-12 iirc) but unfortunately the company never released the daemon under GPL after that
1120253636 M * FaUl congrats Doener`
1120253638 M * Doener` daniel_hozac: yes. reason: i always forget the second dot ;)
1120253642 M * Doener` thanks FaUl
1120253646 M * Doener` thanks complexho
1120253666 Q * rs Quit: rs
1120253673 M * Doener` and good night, i start to hit the wrong keys all the time ;)
1120253698 M * daniel_hozac good night and congrats.
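A possible session following Doener`'s usage notes above, assuming the tarball builds binaries named vservd and client (the binary names are guesses; only the commands create, enter and quit are from the log):

    ./vservd &     # start the daemon
    ./client       # start the client, then at its prompt:
    create 42      # setup-less context with xid 42 (one sleeping process, ~10s)
    enter 42       # enter context 42
    quit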
1120254029 M * complexho so then, everyone is pissed except me eh?
1120254491 M * sladen complexho: hello
1120254499 M * complexho hi!
1120254512 M * complexho long time no speak :)
1120254728 M * sladen complexho: indeed! Hope you're doing well ...more later when I'm not fixing a broken vsat hub
1120254747 M * complexho yeah no probs :)
1120254764 M * complexho yeah life is good :)
1120254780 M * complexho how about u?
1120255500 M * complexho sladen: you use Redbus don't you... Are you in Harbour Exchange? Have you had regular power issues over the last 12 months?
1120256645 M * sladen complexho: don't have much in RB. But everyone has had power issues and wants to jump
1120256678 M * complexho we only have a couple in there - it has been dreadful...