1129594195 J * litage ~nick@203.220.55.70
1129594246 M * litage can a vserver guest be on a different subnet from its host? (eg: host is 10.0.10.10 (private), guest is 202.169.42.111 (public))
1129594333 M * daniel_hozac sure.
1129594369 M * daniel_hozac you will need to use NAT in order for the vserver to be able to communicate with the rest of the world though.
1129594415 M * litage daniel_hozac: where would i use nat?
1129594473 M * daniel_hozac iptables -t nat -I POSTROUTING -o <interface> -s <guest IP> -j SNAT --to <host IP>
1129594513 M * litage daniel_hozac: ah, so i'd have to use that iptables rule for each vserver guest?
1129594553 M * daniel_hozac or a generic one that matches all guests' IP addresses.
1129594692 M * litage daniel_hozac: since the vserver host is on a private subnet, the router would have to be configured to route each of the guest public ip addresses to the vserver host's private ip, right?
1129594760 M * daniel_hozac well, if you set that up on your router, you won't need NAT.
1129595131 M * litage daniel_hozac: if that wasn't set up on the router, how would the packets destined for a guest (public ip) ever reach the host (private ip)?
1129600986 Q * Hollow Remote host closed the connection
1129600993 J * Hollow ~hollow@home.xnull.de
1129601747 J * Aiken_ ~james@tooax8-079.dialup.optusnet.com.au
1129602055 Q * Aiken Ping timeout: 480 seconds
1129602597 J * stefani ~stefani@c-24-19-46-211.hsd1.wa.comcast.net
1129605207 P * stefani parting (is such sweet sorrow)
1129605658 Q * shuri Remote host closed the connection
1129611310 Q * Johnsie Quit: G'bye!
1129611711 Q * litage Ping timeout: 480 seconds
1129614478 J * Johnsie ~john@acs-24-154-53-217.zoominternet.net
1129615226 J * ntrs_ ~ntrs@68-188-50-87.dhcp.stls.mo.charter.com
1129615228 Q * ntrs Read error: Connection reset by peer
1129616002 Q * dddd44 Read error: Connection reset by peer
1129616522 J * litage ~nick@203.220.55.70
1129616786 J * dddd44 dhb55@60.49.78.240
1129620675 Q * litage Ping timeout: 480 seconds
1129621096 J * litage ~nick@203.220.55.70
1129623474 Q * dddd44 Read error: Connection reset by peer
1129623601 Q * andrew_ Ping timeout: 480 seconds
1129623647 J * dddd44 dhb55@60.49.78.240
1129624052 J * neofutur_ ~neofutur@neofutur.net
1129624052 Q * neofutur Read error: Connection reset by peer
1129624843 N * Bertl_oO Bertl
1129624847 M * Bertl morning folks!
1129625047 M * FireEgl g'morning.. =)
1129625103 M * Bertl good morning FireEgl! how's your setup?
1129625284 M * matti B. :)
1129625328 M * FireEgl I'm sorry to say that I'll be using UML.. =( See, I want ipv6 support in the latest kernel.. And ngnet doesn't apply to vs2.1.0. =/ I also want to use grsecurity, which I'm more likely to get working in UML than in vserver.
1129625395 M * FireEgl I really like vserver though.. And I'll probably switch back as soon as I can have ipv6 work in it.
1129625538 M * Bertl hey, no problem with that, if UML meets your needs better than linux-vserver, that is fine for me ...
1129625599 M * FireEgl Is ngnet being worked on? The last patch I see for it is dated in May..
1129625615 M * Bertl currently it's on hold .. waiting for some funding ...
1129625621 M * FireEgl =/
1129625646 M * Bertl we know how we will do it (the implementation), and what you probably looked at was considered a prototype
1129625666 M * FireEgl =)
1129625693 J * andrew_ ~andrew@tnlug.linux.org.tw
1129625704 M * Bertl if everything goes as expected, it should be done in a few months
1129625735 J * erwan_taf ~erwan@81.80.43.77
1129626014 M * Bertl welcome andrew_! erwan_taf!
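The SNAT setup discussed above, written out as a minimal sketch. The addresses come from litage's example (host 10.0.10.10, guest 202.169.42.111); the interface name eth0 and the 202.169.42.0/24 range used for the generic rule are assumptions for illustration only.

    # rewrite traffic leaving the guest's public address so it carries the
    # host's private address; eth0 is assumed to be the host's outgoing interface
    iptables -t nat -I POSTROUTING -o eth0 -s 202.169.42.111 -j SNAT --to 10.0.10.10

    # or, as daniel_hozac suggests, one generic rule covering all guest
    # addresses (hypothetical 202.169.42.0/24 range)
    iptables -t nat -I POSTROUTING -o eth0 -s 202.169.42.0/24 -j SNAT --to 10.0.10.10

As the rest of the exchange points out, this is only needed when the upstream router does not route the guests' public addresses to the host; with such a route in place the NAT rules can be dropped.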
1129626020 M * erwan_taf hey Bertl \o/
1129626041 A * erwan_taf hides.... he hasn't looked at vserver in so loooooooong
1129626287 Q * dddd44 Read error: Connection reset by peer
1129626397 J * dddd44 dhb55@60.49.78.240
1129628147 J * jayeola ~jayeola@host-87-74-49-120.bulldogdsl.com
1129628390 M * andrew_ Bertl: hi
1129628508 M * andrew_ Bertl: I am trying to use vdlimit to limit space usage for a guest
1129628554 M * andrew_ Bertl: I found it's so difficult to use manually every time, the command is so long. :p
1129628707 M * andrew_ Bertl: So I made a simple script to do that: http://people.linux.org.tw/~andrew/DiskLimit.sh
1129628817 M * Bertl cool!
1129628913 M * Bertl andrew_: you actually might want to 'improve' the calculations, if you have it in a script ...
1129628957 M * andrew_ Bertl: I don't know how, could you give me some hints?
1129628992 M * Bertl well, currently the du will traverse the entire tree (and sum that)
1129629013 M * Bertl this might include other mounts as well as unified files
1129629094 M * andrew_ Bertl: Oh, the calculations are just a copy from http://linux-vserver.org/Disk+Limits
1129629121 M * Bertl either by using patched tools (find?) or doing some more checks, you could easily improve that ...
1129629144 M * Bertl andrew_: yeah, I know, it's just a 'simple' first shot version ...
1129629192 M * andrew_ Bertl: Seems limiting space doesn't require a stopped vserver, so that would probably include /proc and some others...
1129629266 M * andrew_ Bertl: Let me see, maybe the -x option with du would be helpful?
1129629287 M * Bertl for example ..
1129629322 M * matti http://www.officeguns.com/ :)
1129629525 M * andrew_ Bertl: Seems du -sx works fine.
1129629613 M * andrew_ Bertl: Would you support disk limits in the new-style config file in the future?
1129629679 M * andrew_ Bertl: The setting from vdlimit would be gone after a reboot, wouldn't it?
1129629681 M * Bertl I guess they will be added to the tools sooner or later ...
1129629761 M * andrew_ Bertl: Is Enrico online?
1129629829 M * Bertl sometimes, but it has been quite some time now ...
1129629920 M * andrew_ Bertl: What's his nickname on irc?
1129629956 M * Bertl ensc
1129630009 M * andrew_ Bertl: Thanks
1129630040 Q * AndrewLee Quit: leaving
1129630084 N * andrew_ AndrewLee
1129630349 M * AndrewLee Bertl: Is the script good enough to share on the wiki?
1129630396 M * Bertl sure .. just put it somewhere, and add a link to the disk limit page
1129630671 M * AndrewLee Bertl: done, thanks. :)
1129630689 M * Bertl thank you!
1129632866 Q * serving- Ping timeout: 480 seconds
1129636161 Q * Aiken_ Ping timeout: 480 seconds
1129636315 Q * dddd44 Ping timeout: 480 seconds
1129636585 J * prae ~prae@ezoffice.mandriva.com
1129636646 Q * nokoya Remote host closed the connection
1129636683 J * nokoya young@hi-230-82.tm.net.org.my
1129636700 Q * nokoya Remote host closed the connection
1129636731 J * nokoya young@hi-230-82.tm.net.org.my
1129637135 M * mef bertl: We are running up against the clock on our submission to EUROSYS. Unfortunately, we have not had a chance to figure out why the lmbench fork, fork+execve, and fork+execve(/bin/sh -c hello) results are 10s to 100s of usecs slower compared to regular Linux.
1129637181 J * yarihm ~yarihm@84-74-18-28.dclient.hispeed.ch
1129637229 M * mef bertl: any chance you could look into this on your oprofile'd system?
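The improvement Bertl and andrew_ settle on (du -sx so the scan stays on one filesystem and skips other mounts) can be sketched as below. This is not andrew_'s DiskLimit.sh; the vdlimit options (--xid, --set key=value) are assumed from the Disk Limits wiki page cited above, and the guest path, context id and limits are made-up example values.

    #!/bin/sh
    # hypothetical example values
    XID=100                      # context id of the guest
    VDIR=/vservers/guest1        # guest root on the limited filesystem
    SPACE_TOTAL=5000000          # space limit, in 1K blocks
    INODES_TOTAL=500000          # inode limit

    # -s: summarize, -x: stay on one filesystem (skips /proc, bind mounts, ...)
    SPACE_USED=$(du -sx "$VDIR" | cut -f1)
    INODES_USED=$(find "$VDIR" -xdev | wc -l)

    # assumed vdlimit invocation, modelled on http://linux-vserver.org/Disk+Limits
    vdlimit --xid "$XID" \
            --set space_used="$SPACE_USED" --set space_total="$SPACE_TOTAL" \
            --set inodes_used="$INODES_USED" --set inodes_total="$INODES_TOTAL" \
            --set reserved=5 "$VDIR"

As noted later in the exchange, these settings are not yet persisted by the new-style configuration, so a script like this has to be re-run after a reboot.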
1129637387 M * mef bertl: if so, please download http://www.cs.princeton.edu/~mef/lmbench3-mef.tgz, cd into lmbench3/src; make results (use gcc32 or gcc33, not gcc4.x), ... after compiling the code it will ask you a bunch of questions for the setup. Just hit control-c and cd to ../bin/i686-pc-linux-gnu
1129637643 M * Bertl hmm, should be able to do that ...
1129637722 M * mef then run
1129637724 M * Bertl will test on x86_64 and alpha, no x86 currently available for such tests
1129637731 M * mef cp ./hello /tmp/hello
1129637746 M * mef for i in for exech shell; do ./lat_proc -P 1 -N 1000 $i; done
1129637777 M * mef for i in fork exec shell; do ./lat_proc -P 1 -N 1000 $i; done
1129637819 M * mef actually you can skip fork.
1129637828 P * erwan_taf Leaving
1129637834 M * mef it is exec and shell that are off by 10s to 100s of usecs.
1129637877 M * Bertl okay, will check ...
1129638004 M * mef Hmph... my student bind mounts the lmbench files into the vserver chroot area and /tmp is therefore on a different disk... I'd be darned if that is the difference. Will ask him when he wakes up.
1129638037 M * Bertl okay ...
1129638038 M * mef However, we've observed that type of difference between stock linux and vserver since March 2004.
1129638039 J * Blissex pcg@82-69-39-138.dsl.in-addr.zen.co.uk
1129638049 M * mef Back in the 2.4 days.
1129638053 M * mef 2.4 days for us.
1129638096 M * Bertl I will investigate ... a recent comparison can't hurt ...
1129638111 M * mef great... thank you.
1129638128 M * mef the comparison with Xen has been interesting.
1129638146 M * mef If you are interested, I can send you what we've got tonight.
1129638152 M * mef the deadline is 8pm.
1129638165 M * mef or midnight GMT.
1129638399 M * Bertl yeah, I'd be pleased ...
1129639149 M * mef One thing that is surprisingly weird with Xen is that the basic bcopy benchmark in lmbench is a bunch faster in a guest domain than in the host. In fact, it is faster compared to any other system. I am still trying to get my head around that difference. Presumably within their guest domain the memory malloc'd for the bcopy loop has a more favorable virtual memory mapping for the cache. It is the only way that I can justify that.
1129639186 M * mef Oddly enough in the host domain, the bcopy performance is similar to what one gets when running stock linux on the bare hardware.
1129639190 M * mef wacked.
1129639198 M * mef or... neat...
1129639234 M * Bertl well, you sure you're not running on hot caches there?
1129639324 Q * Loki|muh Remote host closed the connection
1129639339 J * Loki|muh loki@satanix.de
1129639351 M * Bertl mef: I mean, after all it would make sense to cache the I/O stuff for the domains, as they have to hand it through ...
1129639392 M * Bertl (I/O and memory access that is)
1129639442 M * Bertl also, on what time source is the benchmarking based?
1129639455 M * mef bertl: are you talking about bcopy benchmark or the lat_proc benchmark?
1129639482 M * Bertl the bcopy benchmark ...
1129639513 M * mef bertl: bcopy is just bcopy... there is no I/O involved. With the exception for doing malloc, everything else is in user-level.
1129639543 M * Bertl page faulting and cache handling is not userspace
1129639569 M * mef bertl: good point... page faulting is definitely not handled in userspace.
1129639578 M * mef bertl: not sure what you mean by cache handling.
1129639579 M * Bertl and what about the time source?
1129639599 J * serving serving@213.186.190.162
1129639618 M * Bertl mef: do you have an url to the bcopy source?
1129639632 M * mef bertl: it is part of the lmbench3 that you downloaded.
1129639643 M * mef take a look into lmbench3/src/bw_mem.c
1129639646 M * Bertl ah, okay ... good ...
1129639698 M * mef as you can see, loop_bcopy is just a simple bcopy loop.
1129639725 M * mef What I hadn't considered is the difference in handling page faults, and from what I gather those are slower.
1129639759 M * mef But then there is the time source... from what I gather there is a bit of jumping through hoops that they have to do for the time source.
1129639833 M * Bertl also time is not time in xen, and you will get different 'virtual' cycles
1129639843 M * mef right... I am with you on that.
1129639950 M * Bertl so anything not based on a synchronized wall clock (ntpd) will probably give strange results
1129639961 M * mef ntpd is even worse...
1129639968 M * mef let's not go there.
1129639989 M * mef anything not based on the CPU's cycle counter will be off.
1129640005 M * Bertl well, you won't get that on xen :)
1129640036 M * mef where in Linux is the gettimeofday() stuff supported? I am looking into their linux guest code and would like to see how they do this differently from regular linux.
1129640087 M * mef is it usually handled in arch/i386?
1129640197 M * Bertl i386/kernel/time.c:129 void do_gettimeofday(struct timeval *tv)
1129640215 M * Bertl there should be an implementation for xen too
1129640223 M * mef there is.
1129640576 J * Loki|muh_ loki@satanix.de
1129640576 Q * Loki|muh Read error: Connection reset by peer
1129642077 M * mef bertl: the faster numbers for xen would only make sense if gettimeofday is off by 1/100th of a second. That seems unlikely.
1129642171 M * Bertl what HZ does the xen domain0 kernel use? and what the guest?
1129643943 J * shuri ~shuri@64.235.209.226
1129644037 M * Bertl welcome shuri!
1129644073 M * shuri hi Bertl
1129644103 M * shuri Nice Prague Conference :)
1129644159 M * shuri congratulations
1129644272 M * Bertl thanks!
1129644324 M * Bertl misko was so kind to invite me, so I was there ...
1129644372 M * shuri :)
1129644549 M * mef bertl: both xen0 and xenU use HZ=100
1129644568 M * Bertl and did you set the same for vanilla and vserver?
1129644595 M * mef bertl: well, I just now realized this difference, so no, vanilla & vserver use HZ=1000
1129644614 M * mef bertl: vserver and vanilla use HZ=1000
1129644636 M * Bertl what 'other' differences are there? :)
1129644652 M * mef bertl: LLLLLLLLLLLLLLLLLOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLL
1129644683 M * SiD3WiNDR err :p
1129644712 M * mef bertl: I just found out that our SPECWEB99 benchmark went belly up because one of the clients died. Crapola!
1129644735 M * mef bertl: praying for a small miracle between now and midnight GMT.
1129644767 M * shuri miracle :)
1129644800 M * Bertl mef: well, seems my x86_64 test machine didn't like the 2.6.13 kernel ...
1129644813 M * Bertl (will now check if I can get it online again)
1129644813 M * mef bertl: ?
1129644822 M * mef bertl: werps.
1129644842 M * mef bertl: ok
1129644947 M * mef bertl: what do you expect will be the result if I run the "osdb benchmark" under one UID and repeatedly run "dd if=/dev/zero of=./antisocial bs=500M count=1; rm -f ./antisocial" in a loop for the duration of the benchmark?
1129644986 M * mef bertl: will linux's CFQ give "osdb" running under one UID half the disk i/o vs. the antisocial dd running under a different UID?
1129644997 M * Bertl no idea, probably the benchmark will be bad ...
1129645027 M * mef You had mentioned before that CFQ should "isolate" one from the other.
1129645067 M * Bertl well, it can not do miracles ... if you use up I/O bandwidth, you will not have it available for other stuff?!
1129645165 M * Bertl Vudumen: ping!
1129645209 M * mef bertl: the xen folks presumably figured out how to limit the amount of I/O bandwidth a particular VM can consume.
1129645236 M * Bertl yes, that's quite easy in the xen concept ...
1129645238 M * mef bertl: this was the reason I asked you whether something along the lines of a filter, such as the hard CPU scheduler, could be done.
1129645259 M * Bertl but, let's make an example:
1129645272 M * Bertl - two guests, both should get 50% I/O bandwidth
1129645287 M * Bertl now how do you configure that in xen?
1129645323 M * mef bertl: to be honest, I have not configured that with Xen yet. But their 2003 SOSP paper states that they do something along those lines.
1129645342 M * Bertl okay, let's assume you do: 50% I/O for A, and B ...
1129645348 M * mef bertl: as mentioned, small miracles are required today.
1129645351 M * Bertl (you magically set that for xen)
1129645367 M * Bertl now what happens if you run only guest A?
1129645380 M * Bertl - a) it will get 50% I/O
1129645388 M * Bertl - b) it will get 100% I/O
1129645414 M * Bertl and what will be the result in an I/O benchmark (assuming such a thing exists)
1129645424 M * mef It should get b).
1129645444 M * mef the limiting algorithm should be work preserving.
1129645446 M * Bertl so xen 'should' share the resources? like the memory? or the cpu power?
1129645476 M * Bertl (btw, that's not how xen works :)
1129645510 M * mef bertl: I know your TB for the CPU scheduler could not express this work-preserving nature. I.e., if you specified 50% of the CPU it would give the vserver 50% and not 100%.
1129645557 M * mef bertl: This is something that Andy Bavier fixed by sliding in a different algorithm for your CPU scheduler.
1129645570 M * Bertl aha, url?
1129645589 M * Bertl and btw, you _can_ do that quite fine, but you can't do it on xen
1129645590 M * mef bertl: it is in our cvs tree.
1129645622 M * Bertl if you do not enable the hard cpu limitation, you will get 100%
1129645696 M * Bertl did you send me your preliminary paper yet?
1129645716 M * Bertl mef: cvs is available where?
1129645720 M * mef bertl: Andy's scheduler will let you specify a fair share and also give specific vservers a guarantee (reservation) of CPU cycles (i.e., 10%). It lacks documentation, which Andy will eventually write.
1129645722 M * Bertl (your cvs tree)
1129645749 M * mef bertl: cvs -d :pserver:anonymous@cvs.planet-lab.org:/cvs co linux-2.6
1129645768 M * mef you might only need the relevant linux-2.6/kernel and linux-2.6/include/ subtrees.
1129645794 M * mef This is a FC4-1398 kernel with vserver 2.0
1129645842 M * Bertl thanks, where could I get that from?
1129646194 M * mef You can also get that from our cvs tree
1129646222 M * mef cvs -d :pserver:anonymous@cvs.planet-lab.org:/cvs co -r fedora-2_6_12-1_1398_FC4 linux-2.6
1129646242 M * mef I may also have a tar.bz2 lying around somewhere.
1129646243 M * Bertl excellent!
1129646299 J * michal_ ~michal@mprivacy-update.de
1129646402 M * Bertl mef: okay, I guess I can try on the alpha, if the kernel currently runs on alpha :)
1129646443 M * michal_ hey Herbert :)
1129646450 M * Bertl hey michal_!
1129646474 M * michal_ temporary problems but i am back ;)
1129646492 M * Bertl good to hear!
1129646506 M * michal_ thx. how was the open weekend ?
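Bertl's point that a guest only stays at its 50% share when the hard CPU limit is enabled can be made concrete with the token-bucket settings of the new-style configuration. This is a sketch only: the file names under sched/ (fill-rate, interval, tokens, tokens-min, tokens-max) and the sched_hard flag are assumptions about the util-vserver layout, and the guest name is made up.

    # token bucket: <fill-rate> tokens are added every <interval> jiffies,
    # so 15/30 corresponds to roughly 50% of one CPU
    mkdir -p /etc/vservers/guest1/sched
    echo 15  > /etc/vservers/guest1/sched/fill-rate
    echo 30  > /etc/vservers/guest1/sched/interval
    echo 128 > /etc/vservers/guest1/sched/tokens      # initial fill
    echo 32  > /etc/vservers/guest1/sched/tokens-min
    echo 512 > /etc/vservers/guest1/sched/tokens-max

    # without this flag the guest may still consume idle CPU (the "100%" case
    # above); with it, the 50% share becomes a hard ceiling
    echo sched_hard >> /etc/vservers/guest1/flags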
1129646517 M * Bertl definitely fun!
1129646547 M * Bertl we had a nice 'workshop' there (well, basic theory and some examples)
1129646560 M * michal_ that's fine. many interesting people ?
1129646637 M * Bertl yeah, actually I made contact with some of them ...
1129646652 M * Bertl (e.g. pasky regarding cogito and such)
1129646964 M * misko saying bad things about me?
1129646967 M * misko Jesus hears you!
1129647011 M * Bertl misko: when did I say bad things about you? :)
1129647195 Q * shuri Remote host closed the connection
1129647385 M * Bertl mef: I'm now going to reconfigure my server (to allow for some of your tests), you owe me! :)
1129647523 M * mef bertl: ok
1129648080 M * mef bertl: is there a cheap way for you to get from Wien to Lausanne? I'll be there next week and would love to buy you a beer!
1129648129 M * mef bertl: trains apparently take 12 hours. And there are no flights from Wien to Geneva on cheap airlines like easyjet. I have not checked ryan air. ;)
1129648158 J * yungyuc ~yungyuc@220-135-53-220.HINET-IP.hinet.net
1129648191 M * Bertl I doubt there is ..
1129648195 M * Bertl welcome yungyuc!
1129648422 M * mef bertl: do you have information from previous oprofile measurements on how long clock interrupt handling usually takes? Is it on the order of a few usecs on a relatively fast box? Or could it take on the order of 10 usecs?
1129648527 M * Bertl it could take quite some amount
1129648538 M * mef oh really...
1129648555 M * mef so the difference in HZ (i.e., 100 and 1000) is significant then.
1129648566 M * Bertl don't worry, we will have 'some' values till midnight :)
1129648596 M * mef I need to write text now.
1129648599 M * mef :(
1129648602 M * Bertl make that ...
1129649115 J * stefani ~stefani@superquan.apl.washington.edu
1129649121 M * Bertl welcome stefani!
1129649282 M * stefani hola
1129650102 J * liquid3649 ~liquid@p54975B8F.dip.t-dialin.net
1129650114 M * Bertl welcome liquid3649!
1129650124 M * liquid3649 hi bertl
1129652022 M * misko fuck
1129652029 M * misko Personalities : [raid1] [raid5]
1129652029 M * misko md4 : active raid5 hdd1[4](F) hdc1[5](F) hdg1[2] hdf1[6](F)
1129652029 M * misko 586075008 blocks level 5, 64k chunk, algorithm 2 [4/1] [__U_]
1129652036 M * misko just while eating......... :-(
1129652037 Q * misko Quit: ircII EPIC4-2.2 -- Are we there yet?
1129652976 M * Bertl mef: how long does the lmbench take to run?
1129652984 M * Bertl (complete set of tests)
1129653170 N * nokoya nokoya-
1129653190 N * nokoya- nokoya
1129653409 Q * prae Quit: Execute Order 69 !
1129653875 M * liquid3649 hi, to limit the ram used in a vserver do i set flags, ulimits and rss in rlimits?
1129653967 M * Bertl the rss and vm limits are enough ...
1129654252 M * liquid3649 is it normal that the processes get killed if they need more ram than set in rss?
1129654284 M * Bertl yes, that is the normal linux behaviour with overcommitment
1129654366 M * liquid3649 ok
1129654544 N * Loki|muh_ Loki|muh
1129654839 Q * Johnsie Read error: Connection reset by peer
1129656129 M * Bertl mef: tests are started ... will be back around 22:00 CET, will need some time then to redo them for different kernels ...
1129656140 M * Bertl off now, back later ...
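A sketch of what "the rss and vm limits" look like in the new-style configuration liquid3649 is asking about. The rlimits file names (rss.hard, as.hard) are assumptions about the util-vserver layout, and the guest name, values and units are made-up examples.

    # per-guest resource limits are assumed to live under /etc/vservers/<name>/rlimits/
    mkdir -p /etc/vservers/guest1/rlimits

    # cap resident memory and total address space for the whole guest
    # (units are an assumption; check the memory-limits documentation)
    echo 131072 > /etc/vservers/guest1/rlimits/rss.hard
    echo 262144 > /etc/vservers/guest1/rlimits/as.hard

    # the limits apply from the next guest start; processes exceeding the rss
    # limit are killed, matching the overcommit behaviour Bertl describes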
1129656145 N * Bertl Bertl_oO
1129656625 J * Johnsie ~john@acs-24-154-53-217.zoominternet.net
1129657066 J * prae ~benjamin@sherpadown.net
1129657158 Q * Greek0 Read error: Connection reset by peer
1129657172 J * Greek0 ~greek0@85.255.145.201
1129658858 Q * liquid3649 Quit: Verlassend
1129660695 J * micah_ micah@micha.hampshire.edu
1129661111 Q * micah Ping timeout: 480 seconds
1129662964 Q * Hollow Read error: Connection reset by peer
1129662967 J * Hollow ~hollow@home.xnull.de
1129663240 Q * jayeola Quit: leaving
1129663540 Q * yungyuc Remote host closed the connection
1129663554 J * yungyuc ~yungyuc@220-135-53-220.HINET-IP.hinet.net
1129663921 N * micah_ micah
1129664954 Q * prae Quit: Pwet
1129665496 Q * yarihm Read error: Connection reset by peer
1129665500 J * yarihm ~yarihm@84-74-18-28.dclient.hispeed.ch
1129665701 Q * yarihm Quit:
1129666428 J * mrec ~revenger@p54B04172.dip0.t-ipconnect.de
1129666849 Q * mrec_ Ping timeout: 480 seconds
1129666982 Q * tchan Quit: WeeChat 0.1.6-cvs
1129667007 J * tchan ~tchan@c-67-174-18-204.hsd1.il.comcast.net
1129667062 J * spd1snd ~psingh@68-232-133-13.chvlva.adelphia.net
1129667166 Q * tchan Quit:
1129667208 J * tchan ~tchan@c-67-174-18-204.hsd1.il.comcast.net
1129668240 J * eyck eyck@81.219.64.71
1129668403 J * Aiken ~james@tooax7-192.dialup.optusnet.com.au
1129669373 N * Bertl_oO Bertl
1129669425 M * Bertl evening folks!
1129669429 M * Bertl mef: ping!
1129670281 M * Vudumen Bertl: pong! :) (round-trip time: 7:1:53)
1129670285 M * Vudumen hi :)
1129670294 M * Bertl hey Vudumen! not too bad ... :)
1129670325 M * Vudumen well i was with my ex-bride
1129670527 M * Bertl hmm, ex-bride, sounds interesting ...
1129670608 M * Vudumen (msg... :)
1129670918 M * Bertl Vudumen: okay, so have a good night then, and cya tomorrow!
1129670949 M * Vudumen Bertl: gn for you too :)
1129671150 Q * michal_ Ping timeout: 480 seconds
1129671318 J * michal_ ~michal@mprivacy-update.de
1129672307 M * mrec good evening bertl :)
1129673196 M * Bertl evening mrec!
1129674155 Q * Vudumen Ping timeout: 480 seconds
1129674565 Q * michal_ Ping timeout: 480 seconds
1129674602 Q * spd1snd Quit: spd1snd
1129674687 J * michal_ ~michal@mprivacy-update.de
1129675451 J * Vudumen vudumen@perverz.hu
1129675769 P * stefani I'm Parting (the water)
1129676187 J * yarihm ~yarihm@84-74-18-28.dclient.hispeed.ch
1129676371 M * yarihm umm ... where is the faq entry for multiple IPs with new-generation configs?
1129676465 M * Bertl hmm .. probably on the faq page, if there is any (entry)
1129676479 M * Bertl but it's quite simple and straightforward ...
1129676520 M * Bertl either at config time, use --interface more than once
1129676548 M * Bertl or after creation, add more directories in /etc/vservers/<guest>/interfaces
1129676709 M * yarihm well.... but is /etc/vservers/<guest>/interfaces/<number> not the IPROOTDEV? i.e. interfaces/0 is on eth0? ... or am i mistaken there?
1129676771 M * daniel_hozac the number can be anything.
1129676869 M * yarihm oh, good ... thanks
1129676928 M * daniel_hozac text if you feel like it.
1129677802 M * yarihm great, works ... thanks guys
1129677924 M * Bertl okay, I'm off to bed now .. have a nice whatever everyone!
1129677936 N * Bertl Bertl_zZ
1129678459 M * yarihm night Bertl_zZ
1129678705 Q * Blissex Remote host closed the connection
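To make the answer above concrete: each numbered directory under /etc/vservers/<guest>/interfaces describes one address, and as daniel_hozac says the directory name is arbitrary and not tied to a device. A minimal sketch for adding a second IP; the guest name, address and device are example values only.

    # second address for the guest "guest1"; the directory name "1" is arbitrary
    mkdir -p /etc/vservers/guest1/interfaces/1
    echo 192.0.2.42 > /etc/vservers/guest1/interfaces/1/ip
    echo 24         > /etc/vservers/guest1/interfaces/1/prefix
    echo eth0       > /etc/vservers/guest1/interfaces/1/dev

    # the new address is picked up the next time the guest is started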