1177459225 M * oliwel Bertl_vV: Game Over - Box is dead and I don't have a remote kvm.... must drive to the NOC tomorrow......
1177459250 M * oliwel thx anyway - will keep you up'd
1177459257 M * oliwel good night
1177459437 J * FireEgl ~FireEgl@adsl-220-216-111.bhm.bellsouth.net
1177459697 M * LarsW what could be the technical background when a provider offers vservers via Xen and says iptables support is not possible on such a domU?
1177459725 Q * meandtheshell Quit: Leaving.
1177461204 M * ballen maybe iptables is not compiled into their Xen domU kernel
1177461826 J * tzafrir_laptop ~tzafrir@62.90.10.53
1177464353 J * rob-84x^ rob@submarine.ath.cx
1177464958 P * LarsW
1177466357 Q * ballen Quit: ballen
1177468210 J * newz2000 ~matt@12-210-150-228.client.mchsi.com
1177468277 M * newz2000 for a couple of days I've been trying to join the mailing list but I get an "Unable to connect" msg. Wiki says the list is at: http://list.linux-vserver.org/
1177468297 M * newz2000 is that the right place? The archives work
1177468301 J * mcp ~hightower@wolk-project.de
1177468322 N * mcp codeman_
1177468484 Q * ensc Ping timeout: 480 seconds
1177468494 Q * _mcp Read error: Connection reset by peer
1177468682 Q * mnemoc Ping timeout: 480 seconds
1177468701 J * mnemoc ~amery@kilo105.server4you.de
1177469080 M * Bertl_vV newz2000: please send an email to martin at list-petersen dot dk
1177469098 M * newz2000 ok, thanks Bertl_vV
1177469103 M * Bertl_vV newz2000: yep, the url is correct ...
1177469874 J * ensc ~irc-ensc@p54B4E55A.dip.t-dialin.net
1177470066 P * newz2000
1177470537 J * ktwilight_ ~ktwilight@3.78-66-87.adsl-dyn.isp.belgacom.be
1177470681 Q * ktwilight Read error: Operation timed out
1177471413 Q * mire Ping timeout: 480 seconds
1177472028 J * mire ~mire@111-171-222-85.adsl.verat.net
1177475793 J * Loki|muh_ loki@satanix.de
1177475820 Q * Loki|muh Remote host closed the connection
1177475821 N * Loki|muh_ Loki|muh
1177475963 J * er ~sapan@pool-71-168-215-87.cmdnnj.fios.verizon.net
1177475983 M * er heyho
1177477560 J * derjohn2 ~aj@213.55.131.22
1177477914 J * mattzerah ~matt@121.50.222.55
1177477974 Q * tzafrir_laptop Ping timeout: 480 seconds
1177478082 Q * derjohn2 Ping timeout: 480 seconds
1177478375 Q * er Quit: er
1177478553 J * tzafrir_laptop ~tzafrir@62.90.10.53
1177479167 Q * softi42 Ping timeout: 480 seconds
1177479778 J * softi42 ~softi@p549D706E.dip.t-dialin.net
1177480423 N * DoberMann[ZZZzzz] DoberMann
1177482880 N * DoberMann DoberMann[PullA]
1177483247 J * dna ~naucki@51-198-dsl.kielnet.net
1177483872 Q * dna Quit: Leaving
1177484855 Q * opuk Quit: buhuuuuu :(
1177485386 J * opuk ~kupo@c213-100-138-228.swipnet.se
1177485401 J * meandtheshell ~markus@85-124-175-27.dynamic.xdsl-line.inode.at
1177486382 J * ramon ~ramon___@116.Red-81-35-25.dynamicIP.rima-tde.net
1177486452 M * ramon Hello, I want to bind-mount directories inside the virtual machine via the fstab, i.e. "/directory /view none bind 0 0", where /directory is inside the VM. What is the most convenient way to do it?
1177486596 M * oliwel Hi ramon
1177486625 M * oliwel just put "/vservers/www2/data/var /var none bind" into your guest's fstab
1177486634 M * ramon OK.
1177486639 M * oliwel note that you need to give the full path for the source
1177486668 N * DoberMann[PullA] DoberMann
1177487457 J * bzed ~bzed@dslb-084-059-125-190.pools.arcor-ip.net
1177487461 Q * ramon Quit: Leaving
1177488340 Q * id23 Remote host closed the connection
1177489116 J * Piet hiddenserv@tor.noreply.org
1177491773 J * dna ~naucki@211-233-dsl.kielnet.net
1177493070 J * derjohn2 ~aj@80.69.41.3
1177493207 J * ffffff ~user@tor-irc.dnsbl.oftc.net
1177493270 J * dothebart ~willi@xdsl-81-173-227-137.netcologne.de
1177493278 Q * tudenbart Read error: Connection reset by peer
1177493390 Q * ffffff
1177493921 Q * ktwilight_ Quit: dead
1177493993 J * shedi ~siggi@ftth-237-144.hive.is
1177494231 J * ktwilight ~ktwilight@3.78-66-87.adsl-dyn.isp.belgacom.be
1177495313 J * ramon ~ramon___@116.Red-81-35-25.dynamicIP.rima-tde.net
1177495332 M * ramon Hi.
1177495492 M * ramon What are the requirements for vserver enter to work? I have a minimal vm for running an app, with a customized init.
1177495687 M * oliwel What do you mean by "requirements"
1177495720 M * ramon What minimal files should be inside the vm, so that vserver enter works?
1177495729 M * ramon When I run vserver vm enter,
1177495737 M * ramon I see openpty(): no such file or directory
1177495738 M * oliwel basically you need a complete init process
1177495753 M * ramon Uff, I can't have that.
1177495760 M * oliwel looks like you don't have /dev/pts mounted
1177495765 M * oliwel you need
1177495769 M * ramon Ok, no problem
1177495773 M * oliwel a guest is a full running linux
1177495784 M * oliwel what are you looking for ?
1177495834 M * ramon Running a server application inside a vserver, with no operating system at all, just /bin and /usr/bin mounted (ro) from the host.
1177495852 M * ramon My /sbin/init is just a script that launches Tomcat.
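The guest fstab bind mount oliwel describes earlier might look like the following sketch (the path is his example; note his point that the source must be the full host-side path, not a path relative to the guest root):

```shell
# /etc/vservers/<guestname>/fstab  (guest name is an example)
# Source is the full path as seen from the host; target is inside the guest.
/vservers/www2/data/var  /var  none  bind  0 0
```

The mount is applied by the util-vserver tools when the guest is started, so a running guest has to be restarted (or the mount added by hand) for a new entry to show up.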
1177495865 M * oliwel hm -
1177495873 M * oliwel should work somehow
1177495894 M * oliwel you need a minimal /dev
1177495934 M * ramon Great. Mounted /dev/pts and vserver vm enter works.
1177496049 M * ramon This may be a VServer bug.
1177496092 M * ramon I have a file mounted outside VServer. That is a file, not a directory. mount --bind /usr/java/lib/tools.jar /opt/tomcat/lib/tools.jar
1177496110 M * ramon Inside the VM, the mount is not visible. I only see an empty file.
1177496261 M * oliwel no this is not a bug
1177496281 M * oliwel mounts from outside are not copied
1177496315 M * oliwel each VM runs in its own kernel context, so the mounts are not shared
1177496332 M * oliwel or you have to do the mount before you create the new vm
1177496339 M * oliwel in this case the mount should be inherited
1177496595 M * ramon Sorry for not being clear, I made the mount before vm start
1177496815 J * mady ~chris@e179192156.adsl.alicedsl.de
1177496827 M * ramon In addition, bind-mounting files (as opposed to directories) via the vserver fstab does not work.
1177496855 M * mady hi, can someone tell me which release of vserver is in Etch? didn't find it
1177496916 Q * mattzerah Quit: mattzerah
1177497348 M * ramon I can't find any reference to the VServer version number here in Etch.
1177497666 M * ramon A file mount --bind inside VServer returns EPERM.
1177497671 M * ramon Should I report a bug?
1177497860 M * ramon Are mount operations supposed to fail inside the VM? For me, it is failing unconditionally.
1177497889 Q * ramon Quit: Leaving
1177498125 J * ramon ~ramon___@116.Red-81-35-25.dynamicIP.rima-tde.net
1177498282 M * ramon ping
1177498438 Q * ramon Quit: Leaving
1177498462 J * ramon ~ramon___@116.Red-81-35-25.dynamicIP.rima-tde.net
1177498471 M * ramon anyone here?
1177498476 J * lilalinux ~plasma@dslb-084-058-203-000.pools.arcor-ip.net
1177498767 M * harry ramon: yes
1177498787 M * harry those are supposed to fail, yes ;)
1177498793 M * ramon Why?
1177498821 M * harry because by default, the guest doesn't have mount permissions
1177498835 M * ramon Well, I can live with it.
1177498846 M * harry that would be dangerous, as the guest would be able to mount other guests' filesystems
1177498851 M * harry then you should enable it :)
1177498873 M * ramon I don't see why. If the guest can only mount its own directories.
1177498882 M * ramon there is no security risk.
1177498892 M * ramon But I don't care. I can live with it.
1177498910 M * ramon But I would like to be able to place file (not directory) bind mounts in /etc/fstab
1177498910 M * harry let me look what to enable exactly ...
1177498921 M * ramon Don't bother.
1177498925 M * harry (you can do bind mounts at guest start of course, but i assume you don't want that?)
1177498954 M * ramon It's OK.
1177498960 M * harry you can do dynamic bind mounting from the host with vnamespace -e mount bleh :)
1177498966 M * harry mkay, then i'll shut up :)
1177499046 M * lilalinux How do I disable the tmp ramdisk by default for new vservers? I always forget about the limited tmp until it's too late :-)
1177499132 M * oliwel lilalinux: just remove the /tmp entry from the guest's fstab
1177499143 M * harry /usr/src/util-vserver-0.30.212/distrib/misc/fstab
1177499153 M * harry change it there, and reinstall the utils (i'd say)
1177499162 M * harry i'll see if there's another option...
1177499252 M * bzed on debian: /etc/vservers/.defaults/
1177499271 M * bzed i think
1177499273 M * bzed not sure, though
1177499362 M * harry yes
1177499368 M * harry that could be... put a custom fstab there
1177499534 M * ramon Thank you very much. vnamespace, that is going to be very useful.
1177499567 M * ramon See you.
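Spelled out, harry's vnamespace hint would look roughly like this (untested sketch; the context id, guest name and jar paths are examples, not from the log). The command runs `mount` inside the guest's mount namespace, so the bind mount becomes visible to the guest without giving the guest mount permissions:

```shell
# Enter the mount namespace of the guest with context id 42 (example)
# and bind the host's tools.jar over the empty file inside the guest.
vnamespace -e 42 mount --bind /usr/java/lib/tools.jar \
    /vservers/myguest/opt/tomcat/lib/tools.jar
```

Note the target path is given as seen from the host (`/vservers/myguest/...`), because the namespace's root is still the host root; only processes inside the guest are chrooted.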
1177499570 Q * ramon Quit: Leaving
1177499576 M * harry tataaaaaaaaaaa ;0
1177499578 M * harry ;)
1177499955 J * gonzo ~nano@96.Red-83-38-128.dynamicIP.rima-tde.net
1177499975 Q * gonzo Remote host closed the connection
1177500053 M * lilalinux oliwel: I know how to fix it, but I would prefer a commandline switch
1177500065 J * cdrx ~legoater@cap31-3-82-227-199-249.fbx.proxad.net
1177500104 J * bianchi ~nano@96.Red-83-38-128.dynamicIP.rima-tde.net
1177500109 M * oliwel lilalinux: that does not make sense to me and I guess there is no way to do so
1177500122 M * bianchi hi all
1177500190 M * lilalinux oliwel: honestly it doesn't make sense to me why /tmp should be a 16mb ramdisk :)
1177500224 M * oliwel that's your decision but i don't see the need for a commandline switch
1177500235 M * oliwel if you prefer another default just change the default fstab
1177500259 M * lilalinux oliwel: oh, the default one
1177500386 M * bianchi Hello, I'm trying to install etch+vservers with per-vserver quota support
1177500399 M * bianchi debian
1177500414 M * bianchi the 2.6.18-4-vserver-686 kernel doesn't support vroot
1177500449 M * bianchi Do I really need vroot support to get per-vserver quota?
1177500496 J * gdvqzar ~gdvqzar@ip132-213.17.dsl.minsktelecom.by
1177500599 J * kir ~kir@swsoft-mipt-nat.sw.ru
1177501093 Q * bianchi Quit: using sirc version 2.211+KSIRC/1.3.12
1177501658 M * harry lilalinux: the reason i think is: io is the biggest problem on servers... /tmp should only contain many small temporary files
1177501674 M * harry so... if you put that in ram, you don't need disk IO
1177502988 J * silly_ ~silly@pD95590E7.dip0.t-ipconnect.de
1177503494 Q * Aiken Remote host closed the connection
1177504140 J * SoftIce ~psmith@dsl-242-115-118.telkomadsl.co.za
1177504163 M * SoftIce hi where do I add bcaps ? /etc/vservers/vserver_host/bcaps ?
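The "default fstab" harry and bzed are pointing at above is the template used when building new guests. From memory it looks roughly like the fragment below (treat it as a sketch; exact contents vary by util-vserver version, and whether `/etc/vservers/.defaults/` is honoured for this is exactly what bzed was unsure about):

```shell
# Template fstab for new guests (e.g. distrib/misc/fstab in the
# util-vserver source, per harry). Removing or enlarging the tmpfs
# line changes the default /tmp for guests built afterwards.
/dev/hdv1  /         ufs     defaults            0 0
none       /proc     proc    defaults            0 0
none       /tmp      tmpfs   size=16m,mode=1777  0 0
none       /dev/pts  devpts  gid=5,mode=620      0 0
```

Already-built guests keep their own copy under `/etc/vservers/<guest>/fstab`, so they have to be edited individually.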
1177504182 M * SoftIce need to add SYS_RESOURCE
1177504238 M * SoftIce or is it in bcapabilities
1177504261 M * SoftIce bcapabilities
1177504263 M * SoftIce never mind!
1177504313 M * SoftIce brb
1177504441 M * harry there you go
1177504446 M * harry you google, you win :)
1177504490 M * SoftIce no no
1177504492 M * SoftIce I just tried both
1177504495 M * SoftIce and restarted them :D
1177504498 M * SoftIce and 1 worked 1 didn't
1177504504 M * SoftIce I couldn't find a nice example on google
1177504572 M * SoftIce ok, now next BIIGGGGGGGGGGGGGGGg question
1177504580 M * SoftIce I want to build trixbox into a vserver
1177504584 M * SoftIce I don't see any google help on that
1177504599 M * SoftIce do I just need to add the trixbox repo and it should work?
1177504620 Q * shedi Quit: Leaving
1177504925 M * harry don't know what trixbox is...sry
1177504950 M * SoftIce just a modified centos
1177504952 M * SoftIce for asterisk
1177505015 M * SoftIce harry: should I just be able to add the repo to newvserver-vars & then run newvserver -v --hostn ?
1177505026 M * SoftIce or would I have to install all the yum stuff first?
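For anyone hitting the same question: what worked for SoftIce was the `bcapabilities` file (not `bcaps`). A minimal sketch, with a hypothetical guest name:

```shell
# Grant the SYS_RESOURCE capability to the guest "myguest".
# util-vserver reads one capability name per line from this file.
echo SYS_RESOURCE >> /etc/vservers/myguest/bcapabilities

# The change only takes effect after the guest is restarted,
# which is what SoftIce did above.
vserver myguest restart
</imports>
```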
1177505068 M * lilalinux harry: maybe, but most programs don't care about that
1177505128 M * harry don't think so...you should ask daniel_hozac
1177505140 M * harry lilalinux: true
1177505150 M * harry but most programs don't write a lot of data in /tmp
1177505182 M * lilalinux harry: commons-fileupload, clamav
1177505215 M * lilalinux it's hard to track how and where all the installed programs store their temporary data
1177505294 M * lilalinux and even if each app only stores little data, the sum of all of them makes for a catastrophe
1177505335 J * thessy fttmEsV9@nat-1.rz.uni-karlsruhe.de
1177505367 M * lilalinux I think it should be the distro's responsibility to choose the /tmp criteria, not vserver's
1177505477 M * lilalinux my desktop's /tmp, which is deleted at reboot, contains 233M, all auto-generated by running apps
1177505716 M * harry : 14:55 lois ~ ;du -s /tmp/
1177505716 M * harry 11234 /tmp/
1177505722 M * harry on my desktop pc
1177505726 M * harry running... shitload of apps :)
1177505746 M * harry anyway, if you don't want tmp on ramfs, then disable it
1177505754 M * harry i told you how to do it ;_)
1177505819 M * thessy hi, I posted recently on the mailing list about my problems with a reproducible crash of a java server which only occurs in vserver. Is Herbert here?
1177506613 Q * thessy Remote host closed the connection
1177506703 Q * FireEgl Remote host closed the connection
1177506915 M * SoftIce hhm, any etch debian users here
1177506921 M * SoftIce mind giving me their sources.list
1177506927 M * SoftIce i see it doesn't like this non-us stuff I have
1177506934 M * SoftIce and I have no other boxes to compare sources.list files with
1177507178 M * lilalinux deb http://ftp.gwdg.de/pub/linux/debian/debian/ etch main non-free contrib
1177507178 M * lilalinux deb-src http://ftp.gwdg.de/pub/linux/debian/debian/ etch main non-free contrib
1177507178 M * lilalinux deb http://security.debian.org/ etch/updates main contrib
1177507178 M * lilalinux deb-src http://security.debian.org/ etch/updates main contrib
1177507520 M * SoftIce thanks lilalinux
1177507566 Q * SoftIce
1177508090 M * lilalinux I thought disk IO is buffered in Linux? Is there really such a big difference between a 16M ramdisk and a buffered 16M disk?
1177508795 M * harry lilalinux: your disk cache is only 8MB on most disks and it's shared for all the mount points/partitions/io on the disk
1177508824 M * harry so if you have a physical disk for /tmp alone and you have a 16MB cache disk
1177508826 M * harry you're right
1177508832 M * harry else: you're wrong ;)
1177508852 M * arachnist harry: new disks have 16MB of cache
1177508869 M * harry yes, but new disks are mostly NOT used for /tmp alone
1177508886 M * harry especially not if you have multiple servers running
1177508887 M * arachnist harry: dunno about others, but every seagate disk since the barracuda 7200.7 has >=8MB of cache
1177508912 M * harry arachnist: if you run vserver, you probably run more than 1 virtual server
1177508925 M * harry so if you have 2, both using 10MB of /tmp space, you can't cache it all
1177508931 M * arachnist right
1177508938 M * harry AND... you can't use the disk for ANYTHING else!!
1177508946 M * harry because that would use the disk cache too
1177508975 M * harry so a lot of small files and a lot of io => cache pollution, misses, ...
1177508977 M * harry so just ramdisk it
1177509472 M * lilalinux nas@nas:~$ free
1177509472 M * lilalinux total used free shared buffers cached
1177509472 M * lilalinux Mem: 2076960 1701872 375088 0 315452 874104
1177509472 M * lilalinux -/+ buffers/cache: 512316 1564644
1177509472 M * lilalinux Swap: 2650684 60 2650624
1177509507 M * lilalinux to me that looks like Linux is using all available ram for buffering, if it's not needed elsewhere
1177509585 M * lilalinux however, as soon as the 16M are full apps will crash, yes I know I can change that, but to newbies that might not be that obvious
1177509595 P * silly_
1177509600 M * lilalinux they will get unreproducible errors
1177509607 M * lilalinux and blame vserver
1177509620 M * lilalinux because the error only occurs inside a vserver
1177510011 J * FireEgl Proteus@adsl-220-216-111.bhm.bellsouth.net
1177510343 M * harry it would be more useful to make /tmp larger maybe..
1177510360 M * harry though i only had problems with 16MB on 1 of the 30 or so servers i installed
1177510425 M * sid3windr if you run clamav inside the vserver it will die
1177510433 M * sid3windr it unpacks its updates in /tmp and they're >16M
1177510535 M * harry then just remove the /tmp mount...
1177510544 M * harry it's not a big deal, it's in the fstab
1177510550 M * harry you can see it with the mount command
1177510563 M * harry okay, maybe there are a few apps that want more...
1177510608 M * sid3windr :)
1177510622 M * sid3windr it is easily fixed, I'm just saying :)
1177511200 M * ruskie just don't have /tmp mounted by default...
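Harry's "make /tmp larger" alternative is a one-line change in the guest's fstab (sketch; guest name and size are examples). This keeps /tmp on tmpfs, which still lives in the page cache rather than forcing disk IO, but gives apps like clamav the headroom they need:

```shell
# In /etc/vservers/myguest/fstab, change the default
#   none  /tmp  tmpfs  size=16m,mode=1777  0 0
# to something like:
none  /tmp  tmpfs  size=64m,mode=1777  0 0
```

As with the other fstab entries, this is applied at guest start, so the guest needs a restart afterwards.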
1177511519 J * oliwel_ ~chatzilla@ppp-82-135-72-64.dynamic.mnet-online.de
1177511621 J * arachnis1 arachnist@088156189068.who.vectranet.pl
1177511653 Q * arachnist Read error: Connection reset by peer
1177511786 A * lilalinux votes for ruskie
1177511846 Q * oliwel Ping timeout: 480 seconds
1177511859 N * oliwel_ oliwel
1177512345 Q * FireEgl Quit: ...
1177512596 Q * derjohn2 Ping timeout: 480 seconds
1177513846 J * derjohn2 ~aj@80.69.41.3
1177514818 Q * dna Quit: Leaving
1177514852 Q * mady Quit: leaving
1177515268 J * stefani ~stefani@flute.radonc.washington.edu
1177515484 J * FireEgl ~FireEgl@adsl-220-216-111.bhm.bellsouth.net
1177515638 J * baldy_ ~baldy@pptp.dial.ipv6-network.de
1177515723 Q * baldy Read error: Operation timed out
1177515818 Q * derjohn2 Remote host closed the connection
1177516574 J * baldy ~baldy@pptp.dial.ipv6-network.de
1177516686 Q * baldy_ Ping timeout: 480 seconds
1177516942 J * boci^ ~boci@pool-6582.adsl.interware.hu
1177517027 Q * Piet Quit: Piet
1177518439 J * DoberMann_ ~james@AToulouse-156-1-92-237.w90-30.abo.wanadoo.fr
1177518546 Q * DoberMann Ping timeout: 480 seconds
1177519054 J * Fluggo ~alli@83.233.11.89
1177519191 M * Fluggo Hello. I'm wondering if there is any goal to be included in the mainline kernel and if so, how close are we? Is there any work in progress on live migration of guests between physical hosts?
1177519244 M * daniel_hozac mainline is "slowly" working towards adding the virtualizations/isolations provided by Linux-VServer and similar technologies.
1177519260 M * daniel_hozac see e.g. the IPC and uts namespaces which were added in 2.6.19.
1177519314 M * daniel_hozac as for live migration, we're not doing any work in that area.
1177519328 M * daniel_hozac but if mainline suddenly grows support for it, i don't see why we wouldn't use it...
1177519434 M * arachnis1 live migration/freezing/cloning of processes would be cool
1177519446 M * arachnis1 but that's definitely not something the vserver team should care about
1177519454 M * Fluggo ok. But no word of it yet ? This is a very important part for me, and the only alternative to VServer as it is now is OpenVZ, but I prefer the VServer way of doing things
1177519502 M * daniel_hozac well, it somewhat depends on what you want to achieve.
1177519546 M * daniel_hozac it should be possible to use Xen and have one big domU for your Linux-VServer hosts, which you can live-migrate to another Xen host.
1177519760 M * Fluggo Guests load-balancing themselves between physical hosts depending on load would be nice : ) But mostly if I have to take the host down for maintenance (i.e. manual migration) or automatic migration because of hardware failure on a host.
1177519805 M * daniel_hozac failover on hardware failure is something people are already doing, you just need shared storage.
1177519836 M * daniel_hozac (i'm assuming hardware failure means: unresponsive, no way to migrate the processes off of it, etc.)
1177519844 M * Fluggo correct
1177520103 M * Fluggo I have two identical servers, each with 2 HP Storageworks enclosures connected via SCSI (28 drives for each server)
1177520116 M * bragon hello world
1177520203 M * daniel_hozac Fluggo: so you'd need one of the network-RAID patches.
1177520212 M * daniel_hozac (like drbd)
1177520353 M * Fluggo thanks for the tip : )
1177520443 M * Fluggo that should do the things I want with VServer
1177520469 M * Fluggo have you used it yourself ?
1177520515 M * daniel_hozac drbd and failover?
1177520538 M * bragon i use drbd with heartbeat
1177520557 M * bragon it works fine
1177520566 M * Fluggo yes, that solution looks very nice...
1177520594 M * bragon true :)
1177520831 M * bragon http://www.redhat.com/software/rha/gfs/
1177520840 M * bragon i saw that too, but i haven't tested it
1177520965 M * daniel_hozac note that GFS isn't a good choice, as it doesn't support the Linux-VServer flags.
1177520968 M * daniel_hozac e.g. barrier.
1177520974 M * bragon ha ok
1177521015 M * daniel_hozac IIRC we had a patch for that a while back, but it's not in 2.2.0.
1177521409 M * Fluggo are 2 dedicated Gbit network cards (one in each server) and one TP crossover cable a good solution dedicated for drbd and heartbeat data? or how have you done it, bragon ?
1177521538 M * bragon i have 4 cards
1177521541 M * bragon 2 per server
1177521690 M * bragon yes, i have a crossover cable between the 2 servers for drbd and heartbeat
1177521707 M * bragon which distribution do you use ?
1177521711 M * Fluggo slackware
1177521715 M * bragon ok
1177521793 Q * lyli1 Quit: Leaving.
1177521800 M * Fluggo and you ?
1177521811 M * bragon gentoo
1177521824 M * bragon because on debian all packages were outdated
1177521833 J * lylix ~eric@dynamic-acs-24-154-33-109.zoominternet.net
1177521981 M * arachnis1 debian sid isn't that bad when it comes to "current" packages... but i still prefer gentoo over debian
1177522016 M * bragon arachnis1 to use sid in production ...
1177522090 M * arachnis1 bragon: to use gentoo in production... i wouldn't use gentoo, nor debian sid
1177522098 N * arachnis1 arachnist
1177522127 J * dna ~naucki@151-199-dsl.kielnet.net
1177522146 M * bragon arachnist no issues here
1177522187 M * arachnist i also had no issues with gentoo on my desktop/home router, but still...
1177522329 M * arachnist i'd be happier if gentoo had a real QA
1177522343 M * bragon a real qa ?
1177522378 M * arachnist real as in, people who don't just check .ebuild code correctness, but also whether packages work as one would/should expect them to work
1177522408 M * bragon ok i understand
1177522441 M * arachnist debian has quite good qa, but debian has other issues... /me looks at dpkg/apt-*....
1177522516 M * arachnist the ideal distro for me would probably be debian with paludis and gentoo's baselayout
1177522587 M * bragon probably
1177522645 M * arachnist but then, .debs lack a real alternative to USE flags
1177522682 M * arachnist so probably it'd be more of a gentoo w/ debian's qa
1177522704 M * arachnist ok, c'ya
1177522839 M * bragon i'm not english, so this discussion is difficult for me
1177523314 Q * opuk Quit: Lost terminal
1177523439 J * opuk ~kupo@c213-100-138-228.swipnet.se
1177523828 J * duckx ~Duck@tox.dyndns.org
1177523994 Q * lylix Ping timeout: 480 seconds
1177525280 Q * sid3windr Ping timeout: 480 seconds
1177525591 J * sid3windr luser@bastard-operator.from-hell.be
1177528042 J * lylix ~eric@dynamic-acs-24-154-33-109.zoominternet.net
1177528687 Q * tzafrir_laptop Ping timeout: 480 seconds
1177528764 N * codeman_ mcp
1177529061 Q * softi42 Quit: Leaving
1177529471 Q * ruskie Read error: Connection reset by peer
1177529480 J * ruskie ruskie@goatse.co.uk
1177529590 Q * ag- Killed (NickServ (GHOST command used by ag-_))
1177529595 J * ag- ~ag@archon.plz.fr
1177529609 Q * waldi Ping timeout: 480 seconds
1177529666 J * tzafrir_laptop ~tzafrir@bzq-88-153-210-69.red.bezeqint.net
1177529879 J * trippeh_ atomt@uff.ugh.no
1177529965 Q * trippeh Read error: Operation timed out
1177530006 M * Fluggo Anyone using DRBD? If you use DRBD, can you still run some vservers on the other host (if you have, say, 2) so you don't have one host always sitting idle syncing data? How is your setup?
1177530170 J * b0c1 ~boci@pool-4881.adsl.interware.hu
1177530252 J * waldi ~waldi@bblank.thinkmo.de
1177530447 J * matti_ matti@acrux.romke.net
1177530508 Q * matti Read error: Operation timed out
1177530508 N * matti_ matti
1177530530 N * matti Guest49
1177530609 Q * boci^ Ping timeout: 480 seconds
1177530617 Q * hardwire synthon.oftc.net oxygen.oftc.net
1177530617 Q * badari synthon.oftc.net oxygen.oftc.net
1177530617 Q * Bertl_vV synthon.oftc.net oxygen.oftc.net
1177530617 Q * infowolfe synthon.oftc.net oxygen.oftc.net
1177530617 Q * ntrs synthon.oftc.net oxygen.oftc.net
1177530617 Q * Adrinael synthon.oftc.net oxygen.oftc.net
1177530617 Q * eyck synthon.oftc.net oxygen.oftc.net
1177530617 Q * bored2sleep synthon.oftc.net oxygen.oftc.net
1177530617 Q * besonen_ synthon.oftc.net oxygen.oftc.net
1177530617 Q * bXi synthon.oftc.net oxygen.oftc.net
1177530617 Q * djbclark synthon.oftc.net oxygen.oftc.net
1177530617 Q * nebuchadnezzar synthon.oftc.net oxygen.oftc.net
1177530617 Q * mEDI_S synthon.oftc.net oxygen.oftc.net
1177530617 Q * harry synthon.oftc.net oxygen.oftc.net
1177530617 Q * glut synthon.oftc.net oxygen.oftc.net
1177530642 J * hardwire ~bip@rdbck-3091.wasilla.mtaonline.net
1177530642 J * badari ~badari@bi01p1.co.us.ibm.com
1177530642 J * Bertl_vV herbert@IRC.13thfloor.at
1177530642 J * infowolfe ~infowolfe@c-67-164-195-129.hsd1.ut.comcast.net
1177530642 J * ntrs ntrs@68-188-55-120.dhcp.stls.mo.charter.com
1177530642 J * Adrinael adrinael@rid7.kyla.fi
1177530642 J * eyck ~eyck@nat.nowanet.pl
1177530642 J * bored2sleep ~bored2sle@66.111.53.150
1177530642 J * besonen_ ~besonen@dsl-db.pacinfo.com
1177530642 J * bXi bluepunk@irssi.co.uk
1177530642 J * djbclark dclark@opensysadmin.com
1177530642 J * nebuchadnezzar ~nebu@zion.asgardr.info
1177530642 J * mEDI_S ~medi@snipah.com
1177530642 J * harry ~harry@d54C2508C.access.telenet.be
1177530642 J * glut glut@no.suid.pl
1177530782 M * oliwel Fluggo: Hi - I am using drbd
1177530804 M * Fluggo hi there : )
1177530815 M * oliwel Fluggo: Any questions on this ?
1177531081 M * Fluggo I have two servers with 2 storageworks enclosures each, connected via SCSI. This setup is going to run a couple of vservers and sync data for failover with each other via DRBD (using one TP crossover cable for heartbeat and data syncing between them on Gbit). I also don't want one machine sitting idle just syncing data. Anything wrong with this setup, or tips in general ?
1177531224 M * oliwel Fluggo: I am doing the same
1177531236 M * oliwel there is one issue that i am currently debugging
1177531270 M * oliwel with newer kernels there are problems controlling the drbd devices from within the vserver scripts
1177531290 M * oliwel I used to put the primary/secondary calls in the vserver scripts
1177531302 M * oliwel this is not working atm and causes bad kernel oopses
1177531397 M * Fluggo how do you split the load when both are online.. do you effectively lose half of the storage capacity due to mirroring ?
1177531509 M * oliwel yes I do
1177531527 M * oliwel I have a total of 5 servers running (mail, database, www)
1177531544 M * oliwel and I just move them around by hand according to their load
1177531712 M * tanjix hello @ll
1177531717 M * Fluggo each server with its own hard drives ? So you have a copy of all the vservers on each host ?
1177531802 M * Fluggo im going to have around 5 vservers at the start, but just one of each type
1177531980 M * oliwel Fluggo:
1177532007 M * Fluggo m
1177532012 M * oliwel Fluggo: actually I have the rootfs of all servers mirrored on both hosts, as the www servers reuse the same rootfs
1177532021 M * oliwel the data and config part is on a drbd disk each
1177532061 M * oliwel so I have a vservers partition that is present on both systems - very static data
1177532069 M * oliwel and the live data is mirrored using drbd
1177532077 M * oliwel works quite well
1177532099 M * oliwel if I need more resources for a guest I just shut it down and fire it up on another node
1177532114 M * oliwel heartbeat runs for failover and will resume all interrupted guests
1177532120 M * oliwel if the second node dies
1177532154 M * oliwel currently I am migrating to a round robin with 3 machines because the load is getting too high - so this means I have each of the services running on 2 out of 3 boxes
1177532196 M * Fluggo ok, I understand..
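A DRBD resource for the "live data on a drbd disk each" part of oliwel's setup might look roughly like this (a sketch only; resource name, hostnames, devices and the crossover-link addresses are all hypothetical, and the syntax matches the drbd releases of that era):

```shell
# drbd.conf fragment: mirror one guest's live-data partition between
# the two nodes over the dedicated crossover link.
resource vserver-data {
  protocol C;               # synchronous replication, safest for failover
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sda5;    # live-data partition on node1
    address   10.0.0.1:7788;  # crossover-link IP
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

Heartbeat then promotes the surviving node to primary, mounts /dev/drbd0 and starts the affected guests there, which is the failover behaviour oliwel describes. Note his warning above about driving the primary/secondary transitions from within the vserver scripts on newer kernels.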
1177532272 M * Fluggo I think it's smart to do as you did and separate the live and static data
1177533467 Q * cdrx Quit: Leaving
1177533671 J * Aiken ~james@ppp222-137.lns2.bne1.internode.on.net
1177533681 Q * Aiken
1177533702 J * Aiken ~james@ppp222-137.lns2.bne1.internode.on.net
1177533895 J * dreamind ~dreamind@C2107.campino.wh.tu-darmstadt.de
1177534857 J * ntrs_ ntrs@68-188-55-120.dhcp.stls.mo.charter.com
1177534907 Q * ntrs Read error: Connection reset by peer
1177535582 N * DoberMann_ DoberMann[ZZZzzz]
1177536321 Q * Fluggo Quit: Leaving
1177536766 Q * bzed Quit: Leaving
1177536789 J * bzed ~bzed@dslb-084-059-125-190.pools.arcor-ip.net
1177537493 Q * jkl Remote host closed the connection
1177537495 J * jkl jkl@c-67-173-253-237.hsd1.co.comcast.net
1177537497 Q * virtuoso Remote host closed the connection
1177537543 J * virtuoso ~s0t0na@80.253.205.251
1177537585 Q * Vudumen Remote host closed the connection
1177537605 J * Vudumen ~vudumen@217.20.138.14
1177537610 Q * spyke Remote host closed the connection
1177537628 J * spyke ~jonas@pc19.hip.fi
1177537691 Q * blizz Remote host closed the connection
1177537698 J * blizz ~blizz@evilhackerdu.de
1177537715 J * tudenbart ~willi@xdsl-213-196-249-64.netcologne.de
1177537925 Q * mountie Ping timeout: 480 seconds
1177537950 J * yarihm ~yarihm@84-75-103-239.dclient.hispeed.ch
1177538141 Q * dothebart Ping timeout: 480 seconds
1177538201 P * glen_
1177538639 J * mountie ~mountie@trb229.travel-net.com
1177538731 Q * yarihm Read error: Connection reset by peer
1177538864 Q * Hollow Remote host closed the connection
1177538890 J * Hollow ~hollow@styx.xnull.de
1177539103 Q * dreamind Quit: dreamind
1177539508 Q * fosco Remote host closed the connection
1177539531 J * fosco fosco@konoha.devnullteam.org
1177539625 Q * b0c1 Quit: Leaving
1177540160 Q * Borg- Ping timeout: 480 seconds
1177540212 Q * weasel Remote host closed the connection
1177540217 J * weasel weasel@asteria.debian.or.at
1177540242 Q * dna Quit: Leaving
1177542027 J * lilalinux_ ~plasma@dslb-084-058-216-074.pools.arcor-ip.net
1177542359 Q * meandtheshell Quit: Leaving.
1177542460 Q * lilalinux Ping timeout: 480 seconds
1177542924 P * stefani I'm Parting (the water)
1177544059 Q * nox Remote host closed the connection
1177544247 J * _mcp ~hightower@wolk-project.de
1177544263 Q * mcp Read error: Connection reset by peer