1149466208 N * nammie Nam
1149466627 M * micah ryanc: don't use the newvserver script, it's broken
1149466649 M * micah in fact I recommend removing the vserver-debiantools package
1149466691 M * ryanc hmm
1149466696 M * ryanc I see.
1149466784 M * ryanc a vserver can't make an iface promisc can it?
1149466807 M * micah no, not unless you allow it to
1149466941 M * ryanc not allowed by default?
1149473315 N * Bertl_oO Bertl_zZ
1149476985 M * micah ryanc: right
1149478462 M * locksy Can someone explain how to add capabilities to a vserver (i.e. NET_ADMIN & NET_RAW) - using util-vserver tools
1149478522 M * daniel_hozac locksy: echo CAP_NET_ADMIN >> /etc/vservers/.../bcapabilities, as per the flower page.
1149478993 M * locksy ta, was missing the 'b'
1149479003 M * locksy must have misread the page :(
1149479024 M * Nam hehe, the flower page
1149479050 M * Nam although the background and colors suck and make it hard to read, hehe, I still love the background
1149479376 M * locksy Ok, I get:
1149479377 M * locksy XID: 49162
1149479378 M * locksy BCaps: 00000000344c14ff
1149479378 M * locksy CCaps: 0000000000000101
1149479378 M * locksy CFlags: 0000000202020010
1149479380 M * locksy CIPid: 0
1149479400 M * locksy but iptables still whines about needing to be root :(
1149490513 M * glen_ hello. what does this error say? do I have an incompatible kernel and tools?
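daniel_hozac's one-liner above is the whole recipe. A minimal sketch, using a scratch directory in place of the real /etc/vservers so it can be tried anywhere; the guest name "test" is made up:

```shell
# Grant CAP_NET_ADMIN and CAP_NET_RAW to a guest named "test" (hypothetical).
# CONFDIR stands in for the real /etc/vservers/test; note the leading 'b'
# in "bcapabilities", which is what locksy was missing.
CONFDIR=${CONFDIR:-./vserver-conf/test}
mkdir -p "$CONFDIR"
echo CAP_NET_ADMIN >> "$CONFDIR/bcapabilities"
echo CAP_NET_RAW   >> "$CONFDIR/bcapabilities"
cat "$CONFDIR/bcapabilities"
```

On a real host the guest then needs a stop/start; a running context keeps the capability set it was created with.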
1149490514 M * glen_ # vserver test start
1149490514 M * glen_ chbind: vc_net_create(): Invalid argument
1149498789 M * bon hello :)
1149506738 M * daniel_hozac glen_: you disabled dynamic contexts in your kernel but you didn't specify a static one for your guest?
1149507285 M * glen_ daniel_hozac: yes. could be! checking
1149507316 M * glen_ daniel_hozac: thx. that was the problem!
1149507510 M * bon hm
1149507515 M * bon my guest shows nothing in df -h
1149507515 M * bon ;)
1149507814 M * orionpanda I've got a question about xattr and file attributes in the kernel.
1149507948 M * orionpanda xattr, extended attributes, are name=value pairs stored on the inode. These are used with getfattr, setfattr.
1149507978 M * orionpanda file attributes are used with chattr and lsattr. these are flags set on the inode to give it special behavior.
1149508003 M * orionpanda so, config_ext3_fs_xattr refers to xattr, not file attributes. which one does vserver use for COW/hashify?
1149508090 M * orionpanda i'm totally confused about this.
1149508314 M * Hollow orionpanda: you don't need xattr for the vserver flags to work on files..
1149508339 M * orionpanda do I need xattr for COW?
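The fix daniel_hozac pointed glen_ at can be sketched as follows: with dynamic contexts compiled out of the kernel, each guest needs a fixed context id. A scratch directory again stands in for /etc/vservers, and xid 1001 is an arbitrary example:

```shell
# Assign a static context id (xid 1001, arbitrary) to guest "test" so
# chbind/vc_net_create no longer fails with "Invalid argument" when
# dynamic contexts are disabled. CONFDIR stands in for /etc/vservers/test.
CONFDIR=${CONFDIR:-./vserver-conf/test}
mkdir -p "$CONFDIR"
echo 1001 > "$CONFDIR/context"
cat "$CONFDIR/context"
# then: vserver test start
```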
1149508346 M * orionpanda what do I need it for?
1149508375 M * Hollow no, not even for cow..
1149508456 M * orionpanda hmm. i've been reading these emails from the archives about enabling xattr to get cow support.
1149508579 M * orionpanda also, there's a unify utility using chattr: http://mirrors.paul.sladen.org/sam.vilain.net/vserver/unify-dirs
1149511044 M * micah orionpanda: that is really ancient
1149511092 M * orionpanda yes, it is.
1149511117 M * orionpanda so, I can get cow/vhashify working with xattr turned off?
1149511276 M * micah I can definitely say yes for vhashify, as I've used that without xattr, but I've not used cow yet, so I cannot say
1149511351 M * Hollow if cow would need xattr, Kconfig would depend on it
1149511466 M * bonbons Hollow: isn't xattr required for the persistence?
1149511559 M * Hollow well, i have never enabled xattr in my kernels
1149511566 M * Hollow and vserver iattrs still work fine
1149511602 M * orionpanda if setattr doesn't use xattr, then I should be able to use cow with xattr disabled. I guess 'setattr --iunlink' uses file attributes, not xattr?
1149511633 M * Hollow indeed
1149511683 M * bonbons and none of the information gets lost after reboot? For me, reiserfs mounted without xattr forgot everything about IMMUTABLE and IUNLINK after reboot
1149511708 M * Hollow ah, yes.. reiser is dumb in this case IIRC
1149511710 M * orionpanda ok. the reason I asked is i'm trying to get vserver to run on a cluster fs and i'm having problems with setattr. i thought it was an xattr problem. i guess it's a file attribute problem instead
1149511733 M * orionpanda bonbons: i'm having the same problems with lustre, too
1149511888 M * bonbons with xfs it's working for me (no special mount option)
1149511966 M * orionpanda cow breaks links correctly?
1149511967 M * orionpanda touch a; ln a b; setattr --iunlink a; echo "a" > a; ls -i a b
1149512046 M * orionpanda that works great on a local ext3 fs. however, i'm having trouble with the iunlink tag. the immutable and append-only tags work fine, just not iunlink
1149512097 M * bonbons yes, but you need to set IUNLINK AND IMMUTABLE for it to work
1149512125 M * bonbons IUNLINK alone should permit everything except unlinking
1149512151 M * bonbons in other words, it should only disallow unlink()
1149512202 M * orionpanda oh. i thought that iunlink was all that i needed for cow.
1149512251 M * bonbons CoW needs the combination of IMMUTABLE and IUNLINK
1149512301 M * orionpanda so, chattr +u , setattr --iunlink
1149512306 M * orionpanda thanks for the info, good to know
1149512377 M * orionpanda thankfully, lustre supports the immutable flag; however, it refuses to set iunlink; i'm trying to figure out why; now i know it's not an xattr problem.
1149512534 M * Hollow isn't iunlink vserver specific, so each filesystem needs to be patched for it?
1149512641 M * orionpanda doesn't ioctl just set a bit mask? I thought only the client needs to be patched for it. I thought, for example, that my patched client could set iunlink on an un-patched NFS server. Am I wrong?
1149512737 M * Hollow *shrug*
1149512813 M * orionpanda hrmm. so confusing
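The IMMUTABLE-plus-IUNLINK rule bonbons states can be checked with orionpanda's link-breaking test. A sketch that only makes sense on a Linux-VServer patched kernel with CoW link breaking enabled; on a stock kernel util-vserver's setattr is unavailable and the write to an immutable file simply fails, so treat this as illustration, not as runnable everywhere:

```shell
# CoW link-breaking test (needs a vserver-patched kernel, util-vserver's
# setattr, and root; it will not behave this way on a stock kernel).
touch a
ln a b                      # a and b now share one inode
setattr --iunlink a         # mark for unlink-on-write
chattr +i a                 # CoW requires IMMUTABLE *and* IUNLINK
echo "new" > a              # the write triggers copy-on-write...
ls -i a b                   # ...a and b should now list different inodes
```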
1149526340 N * Bertl_zZ Bertl
1149526345 M * Bertl morning folks!
1149526403 M * Bertl orionpanda: you are wrong :)
1149526659 M * Bertl welcome me2!
1149526700 M * me2 hello, i have a few questions, feel free not to answer them
1149526717 M * me2 first of all, can you specify which CPU a vserver shall use?
1149526722 M * Bertl let's hear ...
1149526733 M * me2 on a SMP system that is
1149526768 M * Bertl yes, in several ways, for SMP and SMT
1149526785 M * harry it is??? how?
1149526789 M * me2 quick documentation reference for this one?
1149526868 M * Bertl well, first, with the existing cpuset/cpu affinity stuff
1149526878 M * Bertl (similar to any normal linux system)
1149526890 M * Bertl and then with the per-cpu token bucket scheduler
1149526891 M * me2 is it a vserver parameter? or the usual stuff?
1149526924 M * Bertl me2: the cpuset/affinity is not linux-vserver specific, the per-cpu token bucket scheduler is
1149526953 M * me2 yes, and how do you specify in the vserver conf which cpu is allocated or preferred?
1149526998 M * me2 will the vserver processes be sent to a specific cpu by default, or run on both of them?
1149527005 M * me2 both/all
1149527014 M * daniel_hozac all of them.
1149527021 M * daniel_hozac just like regular processes.
1149527023 M * Bertl that's actually a todo atm, because the current config does not directly support the new scheduler
1149527046 M * me2 isn't it a waste of cpu time to have them on more cpus?
1149527070 M * Bertl you can easily bind a complete guest to a single cpu if that is what you prefer
1149527080 M * daniel_hozac yep, i do that on my builder.
1149527121 M * me2 i ask these because i am curious to what extent it matters if a VPS is hosted on an SMP system and allocated 10% (let's say) of CPU time, compared to a vserver on a single-cpu system with 20%
1149527129 M * me2 if they can be roughly compared
1149527150 M * Bertl you mean 10% on each CPU, yes (and dual cpu smp :)
1149527163 M * me2 10% on a 2 cpu system
1149527173 M * me2 versus 20% on a single
1149527190 M * Bertl yeah, but 10% _for each_ cpu on the dual SMP system :)
1149527206 M * me2 can you specify for each one?
1149527214 M * me2 or just 10% and it applies to both
1149527232 M * Bertl yes, you can adjust them independently
1149527252 M * Bertl so you could have 10% on the first cpu and 30% on the second
1149527267 M * me2 aha
1149527275 M * Bertl (where we actually do not specify percentage at all :)
1149527290 M * me2 i get it
1149527293 M * Bertl (but I know, folks prefer to think in %)
1149527337 M * me2 also, can you specify swap size for each vserver?
1149527346 M * Bertl not directly
1149527358 M * me2 let's say i want to give it 256mb out of 1024, and 512 swap out of 2048
1149527375 M * me2 so i need to allow X of swap+ram
1149527377 M * me2 right?
1149527395 M * Bertl you can adjust virtual and RSS memory, but the system will decide how much ram/swap that will correspond to
1149527426 M * me2 aha, so it does the calculation based on the memory it has (ram+swap)
1149527465 M * Bertl that is because the 'straightforward' RAM+SWAP approach is not only very complicated, but also adds unreasonable overhead, reduces overall performance and disallows efficient sharing
1149527466 M * me2 what if i have 4 vservers, 1gb ram, 2gb swap, and i want to give each one 768 memory
1149527484 M * me2 if 2 of them eat up all of the ram
1149527491 M * me2 what happens to the other 2?
1149527509 M * me2 or will the kernel decide a fair allocation for all?
1149527519 M * Bertl what happens on your normal linux system if java eats up all your ram?
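Since, as Bertl notes, the cpuset/affinity side is plain Linux rather than vserver-specific, binding a whole guest to one CPU needs no vserver tooling at all. A sketch using util-linux's taskset; the guest name "test" and pid 1234 are made up:

```shell
# Bind a guest to CPU 0 via ordinary Linux CPU affinity. Affinity is
# inherited by child processes, so starting the guest under taskset
# pins everything it spawns. "test" and 1234 are hypothetical examples.
taskset -c 0 vserver test start
# or re-pin an already-running process:
taskset -pc 0 1234
```

The per-cpu token bucket scheduler Bertl mentions is the vserver-specific layer on top of this, and per his remark its config integration was still a todo at the time.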
1149527534 M * me2 the programmer gets killed, system trashed
1149527556 M * Bertl hmm, maybe we have different ways to handle that :)
1149527578 M * me2 i wanted to know how i can guarantee some actual ram to every vserver, while also using swap
1149527580 M * Bertl usually I'd answer that with: swap is used, and if that is exhausted the OOM killer strikes
1149527607 M * Bertl that is very similar to what you get in a properly configured guest
1149527630 M * Bertl to some extent, the swap is just used, and if that is not available you get OOM killings
1149527646 M * me2 i know that, i wanted to know what happens practically with the vservers and if there is a way to make sure they get fair access to resources if one of them tries to use up all the memory
1149527652 M * Bertl with strictly no overcommitment, you can have normal OOM replies
1149527674 M * Bertl i.e. you can 'just' limit the ram to whatever you want
1149527690 M * Bertl (but usually no-overcommitment is not what folks use)
1149527731 M * me2 ok, so it's better to have lots of memory and divide it with no swap and a hard limit
1149527756 M * me2 to make sure nobody trashes the system
1149527764 M * Bertl you can (and probably should) use swap too
1149527782 M * Bertl but it's definitely better to keep enough ram around for the active working set
1149527792 M * Bertl (to avoid thrashing)
1149527819 M * me2 i am asking for a supposed hosting system; right now we use vservers, and since we take care of the servers we handle this
1149527864 M * Bertl e.g. the following would be more than sufficient:
1149527867 M * me2 but i am writing a paper covering vservers/vps and i wanted to know all the limitations and problems that may occur when you buy/rent a vserver
1149527888 M * Bertl assume you have 4GB memory available, and another 4GB swap space
1149527893 M * me2 yes
1149527901 M * Bertl you put, let's say, 10 guests on that
1149527919 M * me2 ok
1149527920 M * Bertl this doesn't mean that you would limit them to 400M each
1149527939 M * Bertl you would probably set a hard limit of 2GB or maybe even 3GB for each guest
1149527941 M * me2 what would be a soft and a hard limit?
1149527956 M * Bertl and specify a soft limit of maybe 256MB or 512MB
1149527964 M * me2 and if one of them has a messy java app?
1149527979 M * Bertl inside the guest that will look like 256MB ram and 1.75GB swap
1149528003 M * me2 aaha
1149528004 M * Bertl the guest with the messy java app will not be able to hog the entire system
1149528022 M * Bertl but it will definitely use up to 2/3GB of memory
1149528038 M * me2 yeah, but that's ok overall
1149528039 M * Bertl parts which are unused will be swapped out
1149528073 M * me2 does the memory that is not allocated when you sum up all the soft limits matter?
1149528082 M * Bertl if all 10 guests start an evil java app, they will probably thrash the system, but that cannot be handled with 4GB memory in any way
1149528106 M * me2 meaning, should i divide the memory by the number of vservers, or leave something out
1149528110 M * me2 and divide the rest
1149528117 M * Bertl it is not directly relevant
1149528140 M * Bertl but it is probably a good idea to keep a relation
1149528163 M * me2 i saw some comments on different forums saying it matters if there is lots of memory available for the system
1149528180 M * Bertl yes, definitely, memory is one of the first limitations
1149528199 M * Bertl directly followed by I/O bandwidth for the disk subsystem
1149528200 M * me2 nice
1149528206 M * me2 one last thing
1149528229 M * me2 have you benchmarked vserver against virtuozzo?
1149528253 M * Bertl nope
1149528288 M * me2 from what i see they (swsoft) appeal a lot to fud; maybe that is exaggerated, but i don't see facts, only descriptions
1149528302 M * Bertl I do not have a license for this proprietary software, and I would assume they somewhat forbid testing against competing products
1149528319 M * me2 well, someone should do it :)
1149528334 M * Bertl well, go ahead if you dare ...
1149528341 M * me2 i'll try to see who sells virtuozzo in our country
1149528360 M * me2 why not, there has to be a decent way of comparing two products that do the same thing
1149528379 M * me2 they claim a lot, it should be checked
1149528390 M * Bertl FWIW, we tested with OVZ, and it performs quite well; not as fast as Linux-VServer, but in some aspects more accurate virtualization (but also more overhead)
1149528407 M * me2 aha
1149528421 M * me2 well, at least we know that they are in the same ballpark
1149528423 M * me2 right?
1149528455 M * Bertl for example, they do precise accounting of dentries with path name lengths, while we do only the minimum necessary to make it secure/safe
1149528477 M * Bertl yes, they are very similar, give or take a feature
1149528481 M * me2 is there a way to keep accounting of traffic made by a vserver?
1149528485 M * me2 or of cpu used?
1149528508 M * me2 i guess traffic can be handled with the traditional tools
1149528511 M * Bertl sure, all of the values are available; you can use rrdtool and such to collect and graph
1149528526 M * me2 inside the vserver?
1149528535 M * Bertl usually you do that on the host
1149528551 M * Bertl but some things could be collected on the guest too
1149528563 M * me2 used cpu?
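Bertl's 4GB/10-guest example maps onto util-vserver's per-guest rlimits directory. A sketch with a scratch directory in place of /etc/vservers; the file names and the page-sized units are my assumptions about the util-vserver layout, so verify them against the flower page before relying on this:

```shell
# Soft limit 512MB / hard limit 2GB of RSS for guest "test" (hypothetical).
# CONFDIR stands in for /etc/vservers/test; RSS values are assumed to be
# counted in 4KB pages (an assumption worth verifying).
CONFDIR=${CONFDIR:-./vserver-conf/test}
mkdir -p "$CONFDIR/rlimits"
echo $((512 * 1024 / 4))  > "$CONFDIR/rlimits/rss.soft"   # 131072 pages = 512MB
echo $((2048 * 1024 / 4)) > "$CONFDIR/rlimits/rss.hard"   # 524288 pages = 2GB
cat "$CONFDIR/rlimits/rss.soft" "$CONFDIR/rlimits/rss.hard"
```

As described in the chat, a guest limited this way sees roughly "256MB ram and 1.75GB swap" from the inside, while the host decides how the total actually maps onto RAM versus swap.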
1149528584 M * Bertl lycos used very detailed graphing of all aspects, and it didn't take much cpu to do that
1149528599 M * me2 the firewall, iptables - can/will it be implemented in the guest system? so that the guest admin can make up his own ruleset
1149528611 M * Bertl yes and no
1149528624 M * me2 be a bit more verbose please
1149528644 M * Bertl in the future, it will be possible to do firewalling and routing inside a guest, but we will not do per-guest iptables and such :)
1149528654 M * me2 i saw that with other products they do a hack: have a host daemon that reads the guest settings and applies them at host level
1149528677 M * Bertl network stack virtualization bears a lot of overhead, so it's worth doing something like that
1149528697 M * Bertl basically, with a virtual network stack each packet traverses the stack twice
1149528706 M * Bertl which roughly gives you half the performance
1149528780 M * me2 will there be a visual/web tool for vserver administration done by you/the project?
1149528781 M * Bertl you also have to apply policy to adding/removing rules and such
1149528804 M * Bertl there already is one (or actually two of them)
1149528837 M * me2 shame on me, i didn't know
1149528846 M * Bertl welcome sereNity!
1149528868 M * me2 ok, this has cleared up all my list
1149528875 M * me2 thank you Herbert
1149528896 M * Bertl you're welcome! feel free to hang around!
1149528942 M * me2 i undertake irc therapy
1149528961 M * me2 keep my irc client closed, it saves time :)
1149528975 M * me2 but from time to time it is worth it to open it
1149528989 M * me2 i'll go back to my paper writing
1149528993 M * me2 thank you again
1149528993 M * bon hello :)
1149528995 M * bon hi berlt
1149528998 M * bon bertl even .)
1149529053 M * Bertl hey bon!
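For the traffic side, since guests share the host's network stack, the "traditional tools" mentioned above work on the host's INPUT/OUTPUT chains. A sketch of a per-guest accounting chain whose counters a cron job could feed into rrdtool; 10.0.0.2 is a hypothetical guest IP and root on the host is required:

```shell
# Per-guest traffic accounting on the host with plain iptables counters.
# 10.0.0.2 is a hypothetical guest IP; requires root on the host.
iptables -N acct-guest 2>/dev/null || true
iptables -I INPUT  -d 10.0.0.2 -j acct-guest
iptables -I OUTPUT -s 10.0.0.2 -j acct-guest
iptables -A acct-guest -j RETURN
# periodically read the packet/byte counters (e.g. into rrdtool):
iptables -L acct-guest -v -x -n
```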
1149530156 N * sars sarnold
1149530666 M * doener hi
1149531314 M * Bertl hey doener!
1149533715 M * Bertl okay, off for now .. back later this evening ...
1149533719 N * Bertl Bertl_oO
1149535324 M * morrigan so the documentation requirement is hopefully satisfied now
1149535408 M * morrigan whoops sorry
1149535416 M * jake- hehe
1149537836 Q * Snow-Man charon.oftc.net helium.oftc.net 1149537836 Q * starlein charon.oftc.net helium.oftc.net 1149537836 Q * mugwump_ charon.oftc.net helium.oftc.net 1149537836 Q * Loki|muh charon.oftc.net helium.oftc.net 1149537836 Q * bubulak charon.oftc.net helium.oftc.net 1149537836 Q * FloodServ charon.oftc.net helium.oftc.net 1149537836 Q * derjohn charon.oftc.net helium.oftc.net 1149537836 Q * bogus charon.oftc.net helium.oftc.net 1149537836 Q * daniel_hozac charon.oftc.net helium.oftc.net 1149537836 Q * Medivh charon.oftc.net helium.oftc.net 1149537836 Q * trasher charon.oftc.net helium.oftc.net 1149537836 Q * ray6 charon.oftc.net helium.oftc.net 1149537836 Q * tokkee charon.oftc.net helium.oftc.net 1149537836 Q * blackfire charon.oftc.net helium.oftc.net 1149537836 Q * jake- charon.oftc.net helium.oftc.net 1149537836 Q * bragon charon.oftc.net helium.oftc.net 1149537836 Q * otaku42_away charon.oftc.net helium.oftc.net 1149537836 Q * wenchien charon.oftc.net helium.oftc.net 1149537836 Q * meebey charon.oftc.net helium.oftc.net 1149537836 Q * waldi charon.oftc.net helium.oftc.net 1149537836 Q * harry charon.oftc.net helium.oftc.net 1149537836 Q * sid3windr charon.oftc.net helium.oftc.net 1149537836 Q * weasel charon.oftc.net helium.oftc.net 1149537875 J * bonbons ~bonbons@83.222.39.166 1149537958 J * derjohn ~derjohn@80.69.37.19 1149537958 J * sid3windr luser@bastard-operator.from-hell.be 1149537958 J * bragon ~bragon@sd866.sivit.org 1149537958 J * otaku42_away ~otaku42@legolas.otaku42.de 1149537958 J * bogus ~bogusano@fengor.net 1149537958 J * daniel_hozac ~daniel@c-2d1472d5.010-230-73746f22.cust.bredbandsbolaget.se 1149537958 J * meebey meebey@booster.qnetp.net 1149537958 J * Medivh ck@paradise.by.the.dashboardlight.de 1149537958 J * trasher daniel@217.160.128.201 1149537958 J * ray6 ~ray@vh5.gcsc2.ray.net 1149537958 J * tokkee tokkee@ssh.faui2k3.org 1149537958 J * harry ~harry@d54C2508C.access.telenet.be 1149537958 J * wenchien 
~wenchien@221-169-69-23.adsl.static.seed.net.tw 1149537958 J * blackfire blackfire@dp70.internetdsl.tpnet.pl 1149537958 J * jake- psybnc@murlocs.org 1149537958 J * weasel weasel@weasel.noc.oftc.net 1149537958 J * FloodServ services@services.oftc.net 1149537958 J * anonc ~anonc@staffnet.internode.com.au 1149537958 J * matti matti@linux.gentoo.pl 1149537958 J * waldi ~waldi@bblank.thinkmo.de 1149537958 J * cohan ~cohan@koniczek.de 1149537958 J * dsoul darksoul@vice.ii.uj.edu.pl 1149537958 J * VAndreas ~Hossa@212.110.98.7 1149537958 J * starlein ~star@fo0bar.de 1149537958 J * bubulak ~bubulak@cicka.wnet.sk 1149537958 J * mugwump_ ~samv@watts.utsl.gen.nz 1149537958 J * Loki|muh loki@satanix.de 1149537958 J * Hollow ~hollow@home.xnull.de 1149537958 J * nebuchadnezzar ~nebu@zion.asgardr.info 1149537958 J * DreamerC ~dreamerc@59-112-22-42.dynamic.hinet.net 1149537958 J * phedny ~mark@volcano.p-bierman.nl 1149537958 J * Snow-Man ~sfrost@kenobi.snowman.net 1149537958 J * redtux ~redtux@pc199.pub.univie.ac.at 1149537958 J * gdm ~gdm@64.62.195.81 1149537958 J * Zaki ~Zaki@212.118.99.32 1149537958 J * BenBen ~benny@defiant.wavecon.de 1149537958 J * insomniac ~insomniac@slackware.it 1149537958 J * baggins baggins@kenny.mimuw.edu.pl 1149537958 J * cemil ~cemil@defiant.wavecon.de 1149537958 J * dna ~naucki@dialer-145-87.kielnet.net 1149537958 J * orionpanda orionpanda@netblock-66-245-252-180.dslextreme.com 1149537958 J * Viper0482 ~Viper0482@p549755FA.dip.t-dialin.net 1149537958 J * cdrx ~legoater@cap31-3-82-227-199-249.fbx.proxad.net 1149537958 J * gerrit ~gerrit@bi01p1.co.us.ibm.com 1149538301 Q * bonbons Quit: Leaving 1149538726 Q * BANDITT_ Quit: The 7 Deadly Sins: compartilhe o momento, compartilhe a vida!   [www.t7ds.com.br] 1149538745 J * Thorsten ~Thorsten@dslb-084-058-179-255.pools.arcor-ip.net 1149539876 J * namulator ~nam@S0106001195551ff0.va.shawcable.net 1149539917 M * Thorsten Hi! 
I'm currently using 2.6.15.5-vs2.0.1.2 and would like to update my kernel to a recent version. And like every time I don't know which is the highest stable kernel that is supported and which vserver patch I should use :-) 1149539940 M * Thorsten patch-2.6.16.17-vs2.0.2-rc21.diff or patch-2.6.16.17-vs2.1.1-rc21.diff ? 1149539956 M * Thorsten So 2.6.16.17 is the highest? 1149540158 M * doener both patches should apply cleanly to .20 1149540160 Q * lilalinux Ping timeout: 480 seconds 1149540172 M * doener (except for the Makefile hunk of course) 1149540214 M * Thorsten ok, and what's the difference? I've accidentally installed a non-stable vserver patch in the past and wasn't very happy with it ;-) 1149540274 M * doener between 2.0.2-rc21 and 2.1.1-rc21? lots AFAIK... both work well for me though 1149540301 Q * nammie Ping timeout: 480 seconds 1149540317 M * Thorsten But the first one is the stable one? 1149540354 M * Thorsten linux-vserver.org looks like that at least 1149540382 M * Thorsten Stable Sources ... Development Sources 1149540412 N * namulator Nam 1149540414 J * notlilo_ ~lilofree@tor-irc.dnsbl.oftc.net 1149540435 M * doener 2.0.2-rc21 is the latest rc for the stable series, yes 1149540452 Q * notlilo Killed (NickServ (GHOST command used by notlilo_)) 1149540466 N * notlilo_ notlilo 1149540488 M * Thorsten Ok, I will try that tonight. Thanks doener! 1149540507 M * doener yw 1149540679 J * lilalinux ~plasma@dslb-084-058-236-114.pools.arcor-ip.net 1149542751 J * DreamerC_ ~dreamerc@59-112-10-239.dynamic.hinet.net 1149543003 Q * meandtheshell Quit: bye bye ... 1149543157 Q * DreamerC Ping timeout: 480 seconds 1149543521 J * Aiken ~james@tooax7-063.dialup.optusnet.com.au 1149544823 Q * DreamerC_ Quit: leaving 1149544840 J * DreamerC ~dreamerc@59-112-10-239.dynamic.hinet.net 1149546437 Q * Viper0482 Remote host closed the connection 1149547450 Q * Thorsten Ping timeout: 480 seconds 1149547508 N * Bertl_oO Bertl 1149547513 M * Bertl evening folks!
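The update Thorsten and doener settled on above (vanilla 2.6.16.17 plus the stable-series patch 2.0.2-rc21) could look roughly like the dry-run sketch below. The commands are echoed rather than executed, and the download location is an assumption (the 13thfloor.at site appears later in the log for experimental splits; the exact path for stable patches may differ).

```shell
#!/bin/sh
# Dry-run sketch: patch a vanilla kernel tree with the stable
# Linux-VServer patch. Versions come from the discussion above;
# the download URL is assumed, not confirmed.
kver=2.6.16.17
vspatch="patch-$kver-vs2.0.2-rc21.diff"

echo "wget http://vserver.13thfloor.at/Stable/$vspatch"
echo "tar xjf linux-$kver.tar.bz2 && cd linux-$kver"
echo "patch -p1 < ../$vspatch"
echo "make menuconfig    # enable the Linux-VServer options"
```

Per doener's note, the same patch should also apply to later 2.6.16.x point releases, modulo the version string hunk in the Makefile.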
1149547520 M * doener evening Bertl 1149547789 Q * cdrx Read error: Operation timed out 1149547809 Q * doener Quit: leaving 1149548038 Q * dna Quit: Verlassend 1149548092 J * dollzrealm dollzrealm@ppp-69-218-216-183.dsl.wotnoh.ameritech.net 1149548098 M * dollzrealm Hello 1149548113 M * dollzrealm any php pros in here? 1149548116 M * Bertl hello dollzrealm! 1149548129 M * dollzrealm hi Bert1 1149548168 M * dollzrealm i need some help with a php script 1149548180 M * Bertl well, wrong channel ... 1149548188 M * dollzrealm oh, where do I go 1149548217 M * Bertl maybe #php or something like that? 1149548260 M * orionpanda bertl: (re xattr vs file-attributes): So, even if a client is patched to support iunlink tags, I need to patch the remote server's kernel, too? 1149548280 M * orionpanda Also, is iunlink implemented using xattr or file attributes? 1149548286 M * Bertl for nfs to be xid and/or xattr aware, yey 1149548290 M * Bertl *yes 1149548310 M * Bertl iunlink is xattr 1149548314 P * dollzrealm 1149548326 J * notlilo_ debian-tor@tor-irc.dnsbl.oftc.net 1149548329 M * orionpanda ah. I've been trying to figure that out 1149548382 M * Bertl notlilo_: hmm? 1149548385 M * orionpanda so, I could set the iunlink flag using setfattr (instead of using setattr)? 1149548422 Q * notlilo Remote host closed the connection 1149548446 M * Bertl setfattr? 1149548502 M * orionpanda man setfattr: "set extended attributes for filesystem objects; associates a new value with an extended attribute name for each specified file" 1149548529 M * Bertl ah, no, extended attr != xattr 1149548550 M * orionpanda lol. wow. i'm confused. 1149548571 M * orionpanda whih one does CONFIG_EXT2_FS_XATTR=y 1149548585 M * orionpanda which one does config_ext2_fs_xattr enable? 1149548601 M * Bertl that's extended attributes (which is confusing, I agree) 1149548631 M * orionpanda I don't need that config parameter for iunlink support?
1149548633 M * Bertl you cannot disable xattrs for ext2 1149548826 M * Bertl what was the filesystem you were asking for (in the first place)? 1149548830 M * orionpanda ok. so my goal is to set iunlink (for cow support) on a Lustre cluster fs. I've got immutable tags working. Now I need to patch the kernel to support iunlink. 1149548845 M * Bertl ah, lustre, yes 1149548864 M * Bertl where does the lustre fs come from? 1149548870 M * orionpanda I'm looking through the vserver patch to determine where vserver adds iunlink support. I'm looking at ext2_fs.h right now 1149548880 M * orionpanda clusterfs.com 1149548903 M * Bertl is it available as a patch to the kernel (2.6.16.19 or so)? 1149548955 M * orionpanda It became open source a few months ago. Well, the storage servers patch 2.6.12. However, there is a patchless client for lustre, which I am using. This allows me to patch the client with the latest vserver 1149549056 M * orionpanda However, that still doesn't give me iunlink support. So, I'm trying to figure out where in the kernel the iunlink magic takes place. 1149549070 M * Bertl well, you need support on both sides, client and server I guess 1149549147 M * orionpanda That's what I was afraid of. 1149549223 M * orionpanda So, if I was using NFS instead, I would need to patch the NFS server and the NFS client to get iunlink support? 1149549227 M * Bertl http://vserver.13thfloor.at/Experimental/split-2.6.16-rc4-vs2.1.1-rcX/ 1149549243 M * Bertl check out the inode, isync and iattr patches 1149549350 M * orionpanda excellent. thanks for narrowing this down. Also, about COW: does the split cow patch need to be applied to the server, too? Or, is COW a client-side operation? 1149549383 M * Bertl I think that could be done on the client 1149549396 M * Bertl but it depends on some kind of sendpage implementation 1149549491 N * notlilo_ notlilo 1149549547 M * orionpanda are there some more recent splits in the works?
I've read that the cow split has changed quite a bit since feb'06 1149549570 M * Bertl I will do one shortly (e.g. in the next two days) 1149549598 M * orionpanda thanks. awesome. 1149549768 M * orionpanda If I want to enable COW on two hard-linked files, is setting them to 'iunlink' sufficient or do I need to set 'immutable', too? 1149549797 M * Bertl you need both flags to make it qualify for cow 1149550254 M * orionpanda If I wanted to use NFS instead, would I need to apply the vserver patch to the server? to get iunlink support? 1149550310 M * Bertl yes, although VoW over nfs is not really tested 1149550313 M * Bertl *CoW 1149550657 M * orionpanda ok. well, I'm going to parse the iattr, isync, inode, cow splits and try to create a cow/iunlink patch for lustre. 1149550668 J * Thorsten ~Thorsten@dslb-084-058-129-155.pools.arcor-ip.net 1149550700 M * Bertl orionpanda: excellent, if you encounter issues, feel free to contact me 1149550710 M * Aiken I think cow might be why I moved from guests on nfs to guests on aoe 1149551044 M * orionpanda hmm. never thought about using ata-over-ethernet. How is that working out for you? Bandwidth/latency/scalability issues? 1149551181 M * Aiken that example is being used over 10mbit, found it a bit faster than nfs. 1149551204 M * Aiken it being a block device you also get the normal filesystem caching, which at times makes it a lot faster than nfs 1149551228 M * Aiken on 100MBit fastest to slowest: aoe, iscsi, nfs 1149551242 M * Bertl (sidenote) nfs without locking has similar caching 1149551309 M * Aiken some of the tests I did involved copying lots of files (I used a kernel tree for testing) from any of those 3 to local storage 1149551319 M * harry Bertl: where's rc22 ? 1149551320 M * Aiken repeat copies from nfs were always the same time 1149551320 M * harry ;) 1149551342 M * orionpanda Is AoE a single unified namespace? So, can 10 AoE devices present one mount-point to the clients?
Are there cache coherency issues across multiple clients? 1149551345 M * Aiken repeat copies from aoe or iscsi were very fast as they came from the local cache 1149551361 M * harry (yeah, i'm a bitch, i know :)) 1149551379 M * Aiken aoe is a block device 1149551383 M * harry btw, Bertl ... would be nice if there are diffs between the updates available ;) 1149551396 M * Bertl harry: sitting an my desk :) 1149551400 M * Bertl *on 1149551426 M * Aiken I have it mounted on /vservers 1149551431 A * harry slaps Bertl :p 1149551433 M * harry no hurry 1149551450 M * Aiken I don't see why you could not have separate aoe block devices for each guest 1149551455 M * harry can't upgrade until i have my new network interfaces anyway : 1149551456 M * harry :) 1149551474 Q * shedi Read error: Connection reset by peer 1149551513 M * orionpanda aiken: would cow work with per-guest aoe block devices? 1149551555 M * Aiken I don't think so, hardlinks work on the same filesystem, different block devices = different filesystems 1149551586 M * orionpanda right. 1149551643 M * orionpanda well, it would make implementing redundancy easy. Right now I have each lustre server mirrored with DRBD. With AoE I could simply use raid on the client side over multiple aoe block devices 1149551705 M * Aiken I still have the 2.6.17 kernel from the other installed, easy enough to test the current state of cow + nfs 1149551727 M * orionpanda I'm still trying to get my head around how AoE would work with multiple client machines. 1149551770 M * orionpanda I definitely would not use NFS. It doesn't scale. Plus, it's slow. Admin nightmare. 1149551807 M * Bertl distributed filesystem? 1149551811 M * orionpanda Your AoE idea is really interesting, though. I'm googling now. 1149551835 M * Aiken multiple clients, I misread what you said before. Might be nasty 1149551860 Q * gerrit Read error: Operation timed out 1149551863 M * orionpanda Yes, which is why I went with Lustre. It's great in principle.
Except satisfying all the dependencies between DRBD, Vserver, Lustre server, and Lustre Client is trying my patience. 1149551885 M * Bertl well, maybe ocfs? 1149551899 M * Aiken have not looked at iscsi for a bit but I think there might be something about multiple clients with it 1149551918 M * Aiken I forget if that was being implemented or if someone thought it would be a good idea
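Aiken's setup from the exchange above (guests kept on an AoE block device mounted at /vservers, one device per guest being possible) could be sketched as the dry run below. The shelf/slot numbers are assumed; the /dev/etherd/e<shelf>.<slot> naming follows the Linux aoe driver's convention. Commands are echoed, not executed.

```shell
#!/bin/sh
# Dry-run sketch of an AoE-backed guest store: load the AoE initiator
# on the client and mount the exported block device at /vservers.
# Shelf/slot (e0.0) are assumed example values.
shelf=0
slot=0
dev="/dev/etherd/e$shelf.$slot"

echo "modprobe aoe           # AoE initiator on the client"
echo "mount $dev /vservers   # guest root filesystems live here"
```

Because the guests then sit on an ordinary local block device, normal page-cache behaviour applies, which matches Aiken's observation that repeat copies from aoe were much faster than from nfs. The flip side, as the end of the discussion notes, is that mounting one device from multiple clients concurrently is unsafe without a cluster-aware filesystem.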