1198027266 M * Bertl okay, off to bed now ... have a good one everyone!
1198027266 N * Bertl Bertl_zZ
1198027266 N * Hollow_ Hollow
1198027784 M * bardia hello, does anyone know which flag i need to turn on to get around 'tried to spawn a kernel thread'?
1198027915 M * daniel_hozac why do you need to spawn kernel threads in a guest?
1198027935 M * daniel_hozac i assume you're also aware of the problems with running them there?
1198027936 M * bardia looks similar to http://www.paul.sladen.org/vserver/archives/200704/0005.html but that was for cifs mounts and i need to do nfs4 mounts inside the guest
1198027945 M * daniel_hozac why?
1198027954 M * daniel_hozac why can't you do the mount on the host, in the guest?
1198028040 M * bardia i don't understand. i'm migrating an existing autofs machine into a vserver client. i remember reading something about accessing automounts through the host, but there were no specifics
1198028172 M * bardia nfs3 mounts can be simply enabled with the bcapabilities/SYS_ADMIN option. if i could transition those automounts to happen via the vserver host that'd actually be better, but for now it's happening in the client
1198028331 M * bardia this is inside my lab intranet so i'm not too worried about security, if those are the 'problems' you're referring to
1198028393 M * daniel_hozac "simply"... CAP_SYS_ADMIN is not to be taken lightly.
1198028415 M * daniel_hozac security is not the only issue.
1198028426 M * daniel_hozac kernel threads, especially NFS, don't really like dying.
1198028434 M * daniel_hozac unless it's on their terms.
1198028450 M * daniel_hozac so it's rather likely you'll have a guest that you cannot stop.
1198028509 M * daniel_hozac but, nonetheless, if you really want kernel threads in guests, give it the KTHREAD ccap.
1198028510 M * bardia i agree, i did run into that at some point maybe. if i could somehow access the host's automounts inside the guest that'd be ideal. at this point the host and guest have the same automounts
1198028666 M * bardia if there's a clever way to do this i'd prefer it. i tried for instance in my /etc/vserver/client/fstab to point to the ghosted autofs folder on the host, but attempting to cd into a folder on the client would fail and not invoke an automount as desired.
1198028791 M * daniel_hozac i've never used automount, so i don't know how it works.
1198028868 M * daniel_hozac i'd assume there's a way to get it to mount stuff in the guest's namespace though.
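[ed: a minimal sketch of what "do the mount on the host, in the guest" can look like: from the host, enter the guest's mount namespace and mount there, so the guest needs no extra capabilities. The guest name "client", the export path, and the use of a static context id are assumptions to adapt:]

    xid=$(cat /etc/vservers/client/context)   # static xid, if one is configured
    vnamespace -e "$xid" -- mount -t nfs4 nfsserver:/export /mnt/export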
1198028981 M * bardia btw KTHREAD seems to not be working, is this a new flag? i'm running deb lenny atm
1198029036 M * daniel_hozac it's 2.3 only.
1198029036 M * bardia unknown ccap 'KTHREAD' when i try to start the client
1198029049 M * bardia oh
1198029155 M * daniel_hozac you'll probably have to specify the bit, i.e. ^24
1198029170 M * daniel_hozac i don't think i've added it to the utils yet...
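[ed: with a util-vserver that does not know the KTHREAD name yet, the raw bit daniel_hozac mentions can go into the guest's ccapabilities file directly; "client" is a placeholder guest name, and bit 24 is taken from the hint above, so verify it against your kernel's headers:]

    echo '^24' >> /etc/vservers/client/ccapabilities
    vserver client stop && vserver client start   # takes effect on the next start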
1198033188 M * bardia qstat -nr1
1198033202 M * bardia oops, ignore
1198054038 N * Bertl_zZ Bertl
1198054046 M * Bertl morning folks!
1198055963 M * Bertl okay, off for now ... back later
1198055968 N * Bertl Bertl_oO
1198056051 M * gebura hi
1198060166 M * NetAsh hello
1198060191 M * NetAsh does someone have some experience with vserver guests and autofs?
1198067494 M * awk Hi, hmm, adding a second IP to a vserver?
1198067514 M * awk I have a vserver with 1 IP on eth0 and I want to add a second IP on eth1
1198067519 M * awk is this possible?
1198067552 M * JonB yes
1198067565 M * JonB which util-vserver version are you using?
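[ed: to answer awk's question for a non-legacy util-vserver: each additional address is a numbered directory under the guest's interfaces/; the guest name "foo" and the address below are placeholders:]

    mkdir /etc/vservers/foo/interfaces/1
    echo eth1          > /etc/vservers/foo/interfaces/1/dev
    echo 192.168.1.10  > /etc/vservers/foo/interfaces/1/ip
    echo 24            > /etc/vservers/foo/interfaces/1/prefix
    # the address is assigned when the guest starts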
1198072838 M * Julius chrooting inside a vserver is kind of strange
1198072848 M * Julius mknod is forbidden
1198073265 M * Julius is it possible to allow the creation of some nodes?
1198073265 M * Julius random/urandom/zero/null
1198073265 M * Wonka those should be created before starting the vserver
1198073265 M * Julius yeah
1198073265 M * Julius so creating chroots inside of a vhost is impossible?
1198073265 M * Julius can't even copy the existing nodes
1198073265 M * Wonka looks like
1198073485 N * marv _marv
1198073570 M * daniel_hozac Julius: if you use 2.3, you can set up the device map to let that guest create the device nodes you need.
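[ed: on kernels without the 2.3 device map, the usual workaround is what Wonka describes: create the nodes in the guest's root from the host, before the guest starts. A minimal sketch, with /vservers/foo as a placeholder guest root; the major/minor numbers are the standard Linux ones:]

    cd /vservers/foo/dev
    mknod -m 666 null    c 1 3
    mknod -m 666 zero    c 1 5
    mknod -m 644 random  c 1 8
    mknod -m 644 urandom c 1 9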
1198074070 M * Julius http://julius.homelinux.org/nopaste/index.php?id=18 <- what version do i have?
1198074103 M * daniel_hozac 2.0.2.2-rc9.
1198074154 M * _marv anybody got a doc on creating a .conf for each guest?
1198074168 M * _marv trying to get vserver-copy working
1198074181 M * daniel_hozac don't use vserver-copy.
1198074201 M * daniel_hozac it's been legacy for years.
1198074216 M * _marv haha really? i've had trouble with chxid and the whole xid system when i cp -a vservers and edit their settings manually...
1198074246 M * daniel_hozac use vserver ... build -m clone...
1198074255 M * _marv thx
1198074582 M * _marv daniel_hozac, thx... rsync is more what i was looking for though...
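[ed: a sketch of the clone build daniel_hozac means, which copies the source guest and handles the xid-related attributes that a plain cp -a misses; the guest names are placeholders, and the exact --source syntax is an assumption to check against your util-vserver version:]

    vserver newguest build -m clone \
        --hostname newguest --interface eth0:192.168.1.11/24 \
        -- --source /vservers/oldguest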
1198077810 M * tuxbublin1 hello
1198077819 M * Bertl_oO wb tuxbublin1!
1198077845 M * tuxbublin1 Bertl_oO: just a question, i wanna monitor a guest through snmp
1198077872 M * Bertl_oO hmm, for what purpose? (i.e. why not monitor the host?)
1198077915 M * tuxbublin1 well if that's the way....
1198077941 M * Bertl_oO it works inside a guest too, but I'm curious what data you expect to gather
1198077953 M * tuxbublin1 traffic, logged-in users etc
1198077959 M * sid3windr can't log traffic :)
1198077975 M * tuxbublin1 that's what i see :)
1198077975 M * Bertl_oO not easily from inside the guest
1198077987 M * Bertl_oO but it's trivial on the host
1198077993 M * tuxbublin1 from the host i won't have the logged-in users from the guest etc
1198078010 M * Bertl_oO maybe a mix?
1198078078 M * Julius "from the host i won't have the logged-in users from the guest" <- why?
1198078092 M * Julius you can access all the logfiles & /proc
1198078100 M * tuxbublin1 mmmmh
1198078141 M * tuxbublin1 would require some snmp tweaks
1198078337 M * tuxbublin1 mmh, actually getting all info except cpu usage and traffic
1198078365 M * tuxbublin1 got processes, logged-in users
1198078376 M * tuxbublin1 disk space, load
1198078479 M * m_o_d hello
1198078520 M * tuxbublin1 hello m_o_d
1198078548 M * Bertl_oO tuxbublin1: you get the cpu usage too (on the host)
1198078599 M * tuxbublin1 yes, i'm running snmpd only on the guest right now
1198078646 M * Bertl_oO wb dowdle! hey m_o_d!
1198078653 M * m_o_d one question: i have a vserver+grsec kernel, does grsec work in a guest (can i use gradm in a guest to create grsec acls) or only on the host?
1198078677 M * Bertl_oO I don't know for sure, but I would say, only on the host
1198078768 A * dowdle grunts
1198078868 M * Bertl_oO welcome systest!
1198080371 M * m_o_d if grsec only works on the host, what can i use from a guest shell server?
1198082800 M * gebura m_o_d, grsec adds some protection at the kernel level
1198082810 M * gebura since this kernel is shared by the host and the vservers
1198082820 M * gebura all will use it
1198082846 M * gebura for example, the patch that removes the ability to see other users' processes
1198082860 M * gebura (one of the most visible ones, but it is just an example)
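[ed: for the monitoring thread above, a rough sketch of pulling per-guest numbers from the host, along the lines Bertl_oO suggests; the /proc/virtual file names vary between vserver kernel versions, so treat them as assumptions to check on your system, and "foo" is a placeholder guest name:]

    xid=$(cat /etc/vservers/foo/context)
    cat /proc/virtual/$xid/cacct   # per-context network accounting
    cat /proc/virtual/$xid/sched   # per-context cpu time
    vps aux | grep foo             # guest processes, seen from the host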
1198083582 M * rgl hello
1198083616 M * rgl for some reason my server rebooted itself, and now the raid1 software is in this weird state: md3 : inactive sda7[0](S). do you guys know how to fix this? it should have two partitions (sda7 and sdb7) :/
1198083661 M * _marv sda and sdb are 2 different drives
1198083663 M * _marv not partitions
1198083746 M * rgl they are different drives and partitions
1198083756 M * gebura rgl, first you have to check if sda is ok
1198083770 M * gebura (with hdparm and/or smart)
1198083784 M * daniel_hozac rgl: sounds like you lost one of the disks.
1198083791 M * rgl oh, I did a: mdadm --assemble /dev/md3 /dev/sda7 /dev/sdb7
1198083802 M * rgl and it's now resyncing again :(
1198083813 M * rgl maybe I should not have done this?
1198083851 M * gebura i think it is good
1198083852 M * gebura http://tldp.org/HOWTO/Software-RAID-HOWTO-8.html
1198083942 M * rgl daniel_hozac, it's odd, because if I did lose one of them I should have something in the logs. (and they are both new, so if they are bad, it's really unlucky me :()
1198084042 M * rgl I think LVM is screwing me :(
1198084088 M * rgl the array failed to stop; still, LVM seems to be sniffing the sda7 partition for VGs, which seems to bork the array? does that make any sense?
1198084689 M * rgl daniel_hozac, there's something strange happening with the vserver utility :(
1198084698 M * daniel_hozac hmm?
1198084703 M * daniel_hozac i seriously doubt that...
1198084721 M * rgl vserver calipo start
1198084722 M * rgl Unknown tag; use '-l' to get list of valid tags
1198084722 M * rgl /proc/uptime can not be accessed. Usually, this is caused by
1198084722 M * rgl procfs-security. Please read the FAQ for more details
1198084722 M * rgl http://linux-vserver.org/Proc-Security
1198084722 M * rgl Failed to start vserver 'calipo'
1198084734 M * rgl BUT env - /usr/sbin/vserver calipo start works
1198084754 M * daniel_hozac so you'll want to figure out what environment setting messes you up.
1198084801 M * rgl you've never seen this before?
1198084945 M * _marv rgl, did you run vprocunhide since you rebooted?
1198084989 M * rgl _marv, I didn't run it explicitly. I only used the vserver and vserver-stat commands.
1198085007 M * _marv try to run it...
1198085014 M * _marv i just saw that -l flag error though
1198085017 M * _marv i never saw that
1198085036 M * _marv or just never noticed it
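[ed: two checks that follow from the exchange above; the vprocunhide path is the usual location on Debian-style installs, so adjust if needed:]

    /usr/lib/util-vserver/vprocunhide               # re-apply /proc visibility after a reboot
    env - /usr/sbin/vserver calipo start            # known good: empty environment
    env - PATH=$PATH /usr/sbin/vserver calipo start # then re-add variables a batch at a time until it breaks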
1198085244 M * rgl gag: Dec 19 17:16:49 host kernel: vserver-info[21778:#0] trap stack segment rip:4028ea rsp:7269702f73726576 error:0
1198085260 M * rgl Dec 19 17:25:01 host kernel: vxW: [xid #0] !!! limit: ffff81011cdcf078[VM,9] = 63 on exit.
1198085266 M * rgl these look ugly :/
1198085336 M * _marv i'm really new to linux-vserver, i couldn't say...
1198085342 M * _marv still exploring pretty much, i must say
1198085887 M * rgl vserver piranha enter
1198085887 M * rgl vnamespace: vc_enter_namespace(): Invalid argument
1198085908 M * rgl not even with env - /usr/sbin/vserver piranha enter :(
1198085924 M * _marv i don't know what happened... but it doesn't sound good
1198086042 M * rgl http://irc.13thfloor.at/LOG/2006-12/LOG_2006-12-21.txt
1198086050 M * rgl it's the same error.
1198086054 M * ard rgl: that limit, isn't that the inode count?
1198086075 M * rgl ard, I have no idea :(
1198086088 M * ard did you set disklimits?
1198086094 M * rgl nope
1198086135 M * ard hmm
1198086141 M * ard [18:18] for some reason my server rebooted itself, and now the raid1 software is in this weird state: md3 : inactive sda7[0](S). do you guys know how to fix this?
1198086148 M * ard 2.6.19?
1198086156 M * ard or lower?
1198086161 M * rgl 2.6.22.14-vs2.2.0.5
1198086221 M * ard Hmmmm, in that case the reboot sounds really nasty...
1198086240 M * ard haven't got spontaneous reboots with 2.6.20+ and weird hardware
1198086264 M * ard I hope you have a serial console and some logging :-)
1198086278 M * rgl I don't :(
1198086302 M * rgl there is nothing in the logs that says why it rebooted. just a wtmp entry :(
1198086315 M * ard hmmm
1198086320 M * _marv how's the temps?
1198086324 M * ard does the wtmp say crash or reboot?
1198086334 M * _marv temperature
1198086346 M * rgl reboot   system boot  2.6.22.14-vs2.2. Wed Dec 19 16:29 - 17:45  (01:16)
1198086346 M * rgl reboot   system boot  2.6.22.14-vs2.2. Wed Dec 19 16:10 - 17:45  (01:35)
1198086363 M * ard and people logged in before the reboot?
1198086390 M * ard they should have "crash" if it wasn't a normal reboot :-)
1198086418 M * ard and in that case it is a good thing to look at your hardware :-(
1198086424 M * rgl no, just me. I was in the datacenter setting this up. when I came home, it had rebooted :/
1198086427 M * ard as _marv says: temps and stuff
1198086454 M * rgl I'll look at them :)
1198086457 M * ard or power failure :-)
1198086472 M * rgl still need to set up lmsensors.
1198086529 M * rgl but first I'm crossing my fingers for the raid1 sync to end. still 180 mins away
1198086680 M * ard but you do have a serial console right now?
1198086689 M * rgl no
1198086692 M * rgl only ssh
1198086703 M * ard datacenter? No serial console?
1198086711 M * rgl I'm at home now
1198086729 M * rgl in the DC I was using a KVM
1198086749 M * ard yes I know :-). But for me the first thing I set up is the serial console and the remote power boot :-)
1198086769 M * rgl a serial console to where?
1198086774 M * ard but then again, I get paid to think out things like that :-)
1198086784 M * ard your server in the datacenter
1198086802 A * rgl has access to a remote power switch
1198086816 M * rgl but where do you connect it to?
1198086826 M * ard to the serial port on your server...
1198086847 M * rgl gag. I'm not getting my message across, hehe
1198086854 M * ard usually a datacenter has console servers you can ssh to, and then you can connect with rs232 to your server
1198086860 M * rgl ah!
1198086868 M * rgl now that makes sense.
1198086878 M * rgl but here I don't have that :/
1198086893 M * ard all server hardware nowadays supports BIOS and console redirection to the serial port
1198086898 M * rgl maybe I should really invest in something like that.
1198086936 M * ard If you have a single server, and the datacenter does not provide the service, it might be better to just keep your fingers crossed :-)
1198086943 M * rgl I don't know if they have a "serial server" like that.
1198086950 M * ard if you have 2 servers, cross-connect them :-)
1198086962 M * rgl I only have one :/
1198087014 M * rgl I should really get another one, but I don't have enough money for that now :/
1198087022 M * ard it more or less depends on whether you lose money if it's down for a day :-)
1198087067 M * ard in Holland, most datacenters have that as a normal service, and some have it for a small extra price
1198087092 M * rgl I'll have to ask them. It's peace of mind having something like that!
1198087130 M * rgl because so much shit can happen if for some reason I mess up, e.g., a kernel install
1198087217 M * ard yes :-)
1198087242 M * ard always add panic=20 or so to your kernel parameters
1198087263 M * ard that will make sure you don't have to power cycle too often :-)
1198087406 M * AStorm isn't that the automatic power cycle? when you use that, add some netconsole too
1198087412 M * AStorm that can detect some oopses
1198087418 M * AStorm and inform you of them
1198087621 M * ard well, it doesn't really power cycle. Some hardware will stay in an unknown state, and maybe make the system crash... (non-PC hardware like a NAS f.i.)
1198087658 M * ard it just initiates a reboot after a panic
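[ed: what the two suggestions look like as kernel arguments, e.g. appended to the kernel line in grub's menu.lst; the addresses and MAC are placeholders, and the netconsole syntax should be checked against Documentation/networking/netconsole.txt for your kernel:]

    # reboot on its own 20 seconds after a panic:
    panic=20
    # stream kernel messages to a remote listener (local port@ip/dev, remote port@ip/mac):
    netconsole=6665@192.0.2.10/eth0,514@192.0.2.1/00:11:22:33:44:55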
1198088107 M * rgl I've talked with them, and I think I'm going to install an IPMI board. what do you guys think?
1198088124 M * rgl (them == the datacenter guys)
1198088790 M * rgl omg, now I can't enter nor stop a guest :((
1198088802 M * rgl can you guys help me out? pulease?
1198088817 M * rgl always: vnamespace: vc_enter_namespace(): Invalid argument :(
1198088869 M * rgl I can enter and stop other vms, but this particular one I can't
1198088876 M * ard did it really boot a vserver kernel?
1198088884 M * ard ah
1198088886 M * ard ok
1198088887 M * ard hmmm
1198088899 M * rgl sure. there is another vm running:
1198088908 M * rgl vserver-stat
1198088908 M * rgl CTX  PROC    VSZ     RSS  userTIME  sysTIME    UPTIME NAME
1198088908 M * rgl 30    110 227.3M   1.1G   7m03s54  4m39s41   1h11m32 piranha
1198088908 M * rgl 40     52 665.3M 317.5M   0m33s30  0m05s58   1h03m05 calipo
1198088927 A * ard suggests rebooting
1198088940 M * ard due to the oopses
1198088945 M * rgl what will happen to the raid1 sync?
1198088956 M * rgl will it start again from scratch?
1198088964 M * ard depends
1198088973 M * ard is it running now?
1198088976 M * rgl yes
1198089011 M * ard then it will work after the reboot
1198089013 M * rgl the only raid1 that is giving me problems is the one where I have an LVM PV :/
1198089017 M * ard is it missing a disk?
1198089049 M * rgl it has all disks (two)
1198089073 M * ard ah, no serial console...
1198089074 M * ard hmmm
1198089098 M * rgl but I'm afraid if I reboot, that array will fail to start too, and I'll have to stop the LVM PV and mdadm --assemble again :/
1198089102 A * ard doesn't want to be in your place :-)
1198089130 M * rgl I don't want anyone in this place either. it sux :(
1198089130 M * ard you can always start a raid without assembling
1198089167 M * ard but it needs careful consideration of what you are typing :-)
1198089193 M * rgl what do you mean? /me is so stressed that I can't think too straight
1198089315 M * ard let's recap: you have a server that spontaneously rebooted. everything was coming up except for one vserver
1198089335 M * rgl lemme start first :D
1198089353 M * rgl I have two disks that I'm using in a software raid1 setup.
1198089379 M * rgl on those disks, I've made 4 partitions, and 4 raid1 devices, md0 to md3.
1198089395 M * ard and md3 did not autostart?
1198089407 M * rgl the last raid, md3, is also an LVM physical volume (PV).
1198089437 M * rgl when the server booted, that raid was in the stopped state and with only one disk.
1198089448 M * ard a single disk.
1198089469 M * ard so, you want to: mdadm --start /dev/md3 /dev/sda7 or something like that?
1198089488 M * rgl /proc/mdstat displayed this: md3 : inactive sda7[0](S)
1198089492 M * ard and then mdadm --add /dev/md3 /dev/sdb7 to get a HA raid1
1198089519 A * ard would try to --start it with mdadm
1198089534 M * ard if you don't trust it: mdadm -E /dev/sda7
1198089544 M * ard to examine the raid superblock on sda7
1198089589 M * ard keep a manual page of mdadm available :-)
1198089589 M * rgl the thing is, lvm seems to take ownership of the sda7 disk, and I have to stop it first with: vgchange --available n raide
1198089595 M * ard ah
1198089630 M * ard you must change the start order of lvm and raid :-)
1198089655 M * ard the superblock of LVM is interfering with raid1...
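[ed: the recovery sequence being discussed, collected in one place; it assumes sda7/sdb7 are healthy, and "raide" is the VG name from above. Read mdadm(8) before running any of it:]

    vgchange --available n raide    # release sda7 so md can claim it
    mdadm -E /dev/sda7              # examine the raid superblock first
    mdadm --assemble /dev/md3 /dev/sda7 /dev/sdb7
    mdadm --add /dev/md3 /dev/sdb7  # only if it assembled degraded, with sdb7 missing
    vgchange --available y raide    # bring the VG back once md3 is up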
1198089676 M * ard if you have no other PVs, just *really* stop LVM
1198089694 M * ard start the raid1 on that disk and then start LVM
1198089735 M * rgl I have another two disks with LVM PVs, but those can be stopped for trying out :/
1198089751 M * rgl how do you change the order?
1198089774 M * ard depends on your system :-)
1198089791 M * ard I haven't messed with LVM for a while
1198089800 M * ard but one of my servers just works
1198089816 M * ard all my partitions are marked auto-raid
1198089824 M * ard raid is built into the kernel
1198089844 M * ard LVM also
1198089865 M * ard /dev/sda8 : start= 33206418, size=279370287, Id=fd
1198089880 M * rgl OMG! sda7 is not marked like that!
1198089893 M * ard md5 : active raid1 sdb8[1] sda8[0]
1198089893 M * ard 139685056 blocks [2/2] [UU]
1198089896 M * rgl sda1 Boot Primary Linux raid autodetect  98.71
1198089897 M * rgl sda5 Logical Linux raid autodetect 10001.95
1198089897 M * rgl sda6 Logical Linux raid autodetect 10001.95
1198089897 M * rgl sda7 Logical Linux                479797.04
1198089908 M * ard ah :-)
1198089920 M * ard well... fix it now with fdisk
1198089930 M * rgl what did I do wrong? omg :( /me cries
1198089941 M * ard fdisk /dev/sda
1198089946 M * ard t 7 fd
1198089954 M * ard something like that?
1198089959 M * rgl I can change that while it's still running?
1198089963 M * ard but that will only help after the reboot
1198089964 M * ard yes
1198089989 M * ard your partition table will not be reread however, because it is in use
1198090007 M * ard so you can either reboot (fingers crossed) and let it come up as intended
1198090034 M * ard or remove sda7 from the VG so you can start the raid, and after that add it again
1198090054 M * ard PVs can be removed and added on the fly on the LVM version I was using
1198090062 M * ard I assume that's still possible
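[ed: ard's fdisk steps spelled out; as noted above, the kernel only rereads the table after a reboot since the disk is in use:]

    fdisk /dev/sda
    # t   -> change a partition's type
    # 7   -> select partition 7 (sda7)
    # fd  -> "Linux raid autodetect"
    # w   -> write the table and quit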
1198090075 M * rgl I can't remove it either, because it's being used by the guest that the vserver util refuses to stop or enter :/
1198090097 M * ard well, if it is part of a VG, the data will be moved to the other disks
1198090110 M * Bertl_oO rgl: vkill?
1198090119 M * ard as long as you have enough unallocated storage
1198090121 M * rgl the VG has only a single PV.
1198090125 M * ard ah
1198090145 M * ard listen to Bertl_oO, he is the wise man :-)
1198090153 M * rgl Bertl_oO, :)
1198090159 M * ard and sometimes he is just a script :-)
1198090168 M * ard waiting to greet anybody that joins :-)
1198090229 M * rgl Bertl_oO, can you pulease tell me how I should use vkill?
1198090329 M * ard Usage: vkill [--xid|-c <xid>] [-s <signal>] [--] <pid>*
1198090331 M * ard :-)
1198090346 M * ard vps to decide what to kill
1198090371 M * ard Bertl_oO :
1198090372 M * ard [18:27] gag: Dec 19 17:16:49 host kernel: vserver-info[21778:#0] trap stack segment rip:4028ea rsp:7269702f73726576 error:0
1198090372 M * ard [18:27] Dec 19 17:25:01 host kernel: vxW: [xid #0] !!! limit: ffff81011cdcf078[VM,9] = 63 on exit.
1198090419 M * Bertl_oO xid=0 should never throw such messages
1198090485 M * ard and the fact that it spontaneously rebooted...
1198090495 M * ard 2.6.22.14 with 2.2.0.5
1198090650 M * rgl ard, only if I knew what to kill :/
1198090666 M * ard anything within that vserver :-)
1198090679 M * rgl there seems to be no "v"-prefixed utility running inside the vserver
1198090688 M * ard But if the state is D, you are probably out of luck
1198090693 M * ard no...
1198090699 M * ard from the host... vps aux
1198090712 M * ard and from the host kill stuff inside the vserver
1198090731 M * ard the vservers are restricted, the host never is...
1198090735 M * ard (although... :-)
1198090751 M * rgl nothing is in the D state
1198090785 M * ard vps aux | grep vserverthatdoesntstop
1198090794 M * rgl I did that :)
1198090800 M * ard vkill all those pids with --xid
1198090843 M * daniel_hozac pid 0 and -1 mean the same thing with vkill as they do with kill...
1198090852 M * ard ah
1198090877 M * rgl what is pid 0?
1198090878 M * ard vkill --xid ... -1 it is :-)
1198090883 M * ard group
1198090906 M * daniel_hozac man kill
1198090907 M * rgl it sends a signal to the group of the current command?
1198090940 M * ard If I am correct...
1198090955 M * rgl daniel_hozac, my man page does not mention pid 0 :(
1198090965 M * daniel_hozac 0: All processes in the current process group are signaled.
1198090966 M * ard man 2 kill
1198090972 M * daniel_hozac -1: All processes with pid larger than 1 will be signaled.
1198090992 M * rgl oh sorry, I was looking at kill(1) :(
1198091023 M * ard well, my kill(1) is also lacking something :-)
1198091183 M * rgl omg, mysql does not stop :(
1198091225 M * rgl ah, it's dead now :D
1198091246 M * rgl vserver-stat only has one guest now! :)
1198091280 M * rgl now I busted it!
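[ed: the vkill invocation this boils down to, assembled from the usage line and daniel_hozac's note; xid 30 is piranha's context id from the vserver-stat output above:]

    vkill --xid 30 -s 9 -- -1    # SIGKILL every process in context 30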
1198091295 M * yarihm daniel_hozac: i've got a question regarding the way debian/ubuntu vservers are set up. the install works fine, but I personally end up doing some cosmetic work (removing initscripts from runlevels, fixing some minor things like tty-(re)setting and hwclock-related things), as compared to e.g. gentoo vservers where everything looks nice out of the box. I personally tend to do a tarball and then use the skeleton method to avoid this. would you consider accepting patches against util-vserver, or should that be dealt with micah being the debian-maint?
1198091309 M * rgl how do I run /etc/vservers/piranha/scripts/postpost-stop ?
1198091350 M * daniel_hozac yarihm: are you using 0.30.214?
1198091360 M * yarihm rgl: you mean as a shell script? if you do not use any variables or arguments given by util-vserver, you can just do something like /etc/vservers/piranha/scripts/postpost-stop
1198091364 M * yarihm daniel_hozac: let me check
1198091379 M * rgl yarihm, I don't know in what context that script runs :(
1198091380 M * daniel_hozac yarihm: Hollow did a fair bit of work to create a debian initpost script, it removes most runlevel stuff and creates a guest that doesn't spew garbage.
1198091396 M * rgl can I run it directly on the host?
1198091405 M * yarihm 0.30.214-5~bpo40+2_i386 here
1198091454 M * daniel_hozac yarihm: and you're building etch/lenny/sid guests?
1198091465 M * yarihm rgl: see, util-vserver gives some arguments to that script; some write scripts that use them, some scripts don't. it depends. if not, you can run it on the host system as it is, I think
1198091478 M * yarihm daniel_hozac: etch in that case
1198091491 M * daniel_hozac yarihm: and what problems are you seeing?
1198091492 M * yarihm daniel_hozac: ubuntu hardy has similar issues though
1198091512 M * daniel_hozac i haven't set up the Ubuntu distributions to run that initpost script.
1198091519 M * daniel_hozac since, well, i haven't tested it.
1198091532 M * rgl yarihm, it has worked :D
1198091538 M * rgl (running on the host)
1198091541 M * yarihm daniel_hozac: nothing spectacular, they are as I said only cosmetic problems. the umount scripts are being run during shutdown, stty is being run during shutdown, hwclock tries to sync, the kernel logger hangs during startup
1198091580 M * daniel_hozac yarihm: and you're sure you ran the initpost script? that should remove all of those.
1198091584 M * yarihm daniel_hozac: ubuntu behaves slightly differently to debian ...
1198091604 M * m_o_d re
1198091605 M * yarihm well, what initpost script? where do i find it? is it not run automatically for etch guests?
1198091629 M * m_o_d can i use gradm in a guest system?
1198091649 M * daniel_hozac it is.
1198091681 M * rgl gag. still, after restarting the guest, I can't use vserver enter :(
1198091759 M * daniel_hozac why not?
1198091777 M * rgl vnamespace: vc_enter_namespace(): Invalid argument
1198091784 M * yarihm daniel_hozac: hmm ... not here it seems. i bootstrapped with "vserver foo build -m debootstrap --interface ... --hostname ... -- --mirror ... --dist ..." IIRC
1198091815 M * yarihm rgl: you are trying vserver piranha enter, right?
1198091815 M * daniel_hozac rgl: did you disable the namespace for that guest or something?
1198091834 M * rgl yarihm, correct.
1198091855 M * rgl daniel_hozac, I didn't do anything like that explicitly :/
1198091862 M * rgl how can I tell?
1198091889 M * rgl I can enter and leave the other guest, but this one I can't :/
1198091900 M * daniel_hozac ls -l /etc/vservers/<guest>/
1198091938 M * rgl drwxr-xr-x 4 root root 4096 2007-12-19 12:24 apps/
1198091938 M * rgl lrwxrwxrwx 1 root root   41 2007-12-19 12:24 cache -> /etc/vservers/.defaults/cachebase/piranha
1198091938 M * rgl -rw-r--r-- 1 root root   10 2007-12-10 21:45 ccapabilities
1198091938 M * rgl -rw-r--r-- 1 root root    3 2007-12-19 12:24 context
1198091938 M * rgl drwxr-xr-x 2 root root 4096 2007-12-19 12:24 cpuset/
1198091940 M * rgl -rw-r--r-- 1 root root  193 2007-12-10 21:43 fstab
1198091940 M * rgl drwxr-xr-x 5 root root 4096 2007-12-19 12:24 interfaces/
1198091942 M * rgl -rw-r--r-- 1 root root    8 2007-12-19 12:24 name
1198091944 M * rgl lrwxrwxrwx 1 root root   25 2007-12-19 12:24 run -> /var/run/vservers/piranha
1198091946 M * rgl drwxr-xr-x 2 root root 4096 2007-12-19 19:10 scripts/
1198091948 M * rgl drwxr-xr-x 2 root root 4096 2007-12-19 12:24 uts/
1198091950 M * rgl lrwxrwxrwx 1 root root   40 2007-12-19 12:24 vdir -> /etc/vservers/.defaults/vdirbase/piranha/
1198091952 M * rgl oh, sorry for the spam :/
1198091955 M * daniel_hozac rgl: please use paste.linux-vserver.org for anything longer than 3 lines.
1198091973 M * daniel_hozac yarihm: try /usr/lib*/util-vserver/distributions/debian/initpost /etc/vservers/<guest> /usr/lib*/util-vserver/util-vserver-vars
1198091999 M * daniel_hozac yarihm: i also just reworked this piece of code earlier today, so you might want to get the version in trunk.
1198092049 M * rgl daniel_hozac, I'm sorry. I will from now on.
1198092141 M * rgl oh well, going to reboot. it can't get worse than this :(
1198092410 M * rgl omfg. now I've rebooted and the raid is still not correctly started /me shoots self
1198092574 M * rgl you guys know how to stop LVM from automatically running?
1198093889 M * rgl oh, the raid is now correctly started before lvm... ufff... I had to edit lvm.conf :/
1198093920 M * rgl to only include the drives/partitions that I know to have a PV.
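[ed: the kind of lvm.conf change rgl describes: a filter in the devices section that accepts only the devices known to carry PVs and rejects everything else, so LVM no longer grabs raid member partitions like sda7 at boot; the device patterns are examples:]

    # /etc/lvm/lvm.conf, inside the devices { } section:
    filter = [ "a|^/dev/md.*|", "r|.*|" ]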
1198094164 M * rgl still, I can't enter the guest... omg! oh why?
1198095494 M * rgl how do I create a guest directory skeleton (without bootstrapping the OS)?
1198095549 M * rgl instead of -m debootstrap I use -m skeleton?
1198095584 M * daniel_hozac sure.
1198095597 M * rgl daniel_hozac, thx!
1198095619 M * rgl daniel_hozac, can't you help me troubleshoot why vserver refuses to start my guest? I'm really lost :(
1198095652 M * daniel_hozac Bertl_oO already told you to diff the environments... where's the diff?
1198095703 M * rgl I'll make it.
1198095728 M * rgl but before that I need to recreate the env. -m skeleton is giving: No build-method specified
1198095748 M * daniel_hozac hmm?
1198095748 M * rgl oh, scrap that. I messed up!
1198096001 M * rgl daniel_hozac, my env only has this: http://paste.linux-vserver.org/10687
1198096014 M * rgl why does it work with one guest, and not the other?
1198096023 M * daniel_hozac hmm?
1198096062 M * rgl I have two guests: calipo and piranha. I can start/stop/enter calipo just nicely. but piranha I can't
1198096083 M * rgl are you seeing any reason for that?
1198096101 M * daniel_hozac without more info, no.
1198096130 M * rgl what more info do you need? is there some debug command I can run?
1198096166 M * daniel_hozac cat /proc/mounts
1198096175 M * daniel_hozac how does piranha fail to start?
1198096240 M * rgl daniel_hozac, proc mounts is at http://paste.linux-vserver.org/10688
1198096272 M * rgl and at http://paste.linux-vserver.org/10689 is how piranha fails to start
1198096328 M * daniel_hozac mv /etc/vservers/piranha/scripts{,.bak}
1198096331 M * daniel_hozac try again.
1198096360 M * rgl I've completely removed the scripts/ directory
1198096373 M * rgl and the quota stuff that I had before.
1198096381 M * daniel_hozac and it still fails?
1198096383 M * rgl (which was the only reason for having the scripts)
1198096386 M * rgl yes
1198096419 M * rgl http://paste.linux-vserver.org/10690 contains all files inside the vserver directory
1198096424 M * daniel_hozac i thought this was a console vs. ssh issue, and not a per-guest issue?
1198096485 M * rgl at first I thought it was that, but it's not really. it simply fails to launch this second guest.
1198096617 M * rgl I'm so wasted :(
1198096629 M * rgl doing env - /usr/sbin/vserver piranha start does start it. :/
1198096664 M * rgl omg... now vserver piranha enter works...
1198096676 M * rgl what is going on here? :((
1198096805 A * ard suggests a good night's sleep, or a strong mug of espresso :-)
1198096873 M * rgl ard, I didn't do anything different. it just started to work? something is really hacky here :/
1198096905 M * rgl I also need that night of sleep :D
1198097225 M * rgl daniel_hozac, any idea why using vserver piranha start with that environment from the paste does not work?
1198097619 M * daniel_hozac not really, no.
1198097640 M * rgl ok, thx daniel_hozac!
1198097718 M * daniel_hozac presumably it has something to do with your PATH.
1198097810 M * rgl it's the default path :/
1198097823 M * rgl and it works fine with one guest, and not another.
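[ed: one way to see what actually differs between the two guests, anticipating daniel_hozac's question below; the paths follow the config layout shown earlier:]

    diff -r /etc/vservers/calipo /etc/vservers/piranha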
1198097832 M * rgl isn't that really strange?
1198097864 M * daniel_hozac without seeing what differs between them, i can't really say...
1198097886 M * rgl between them?
1198097896 M * rgl using the default env and no env at all?
1198097901 M * daniel_hozac no, between the guests.
1198097911 M * rgl how do I see that?
1198097932 A * rgl doesn't really know how the guests are started, and where they pick the env up :(
1198098059 M * daniel_hozac does the other guest have any scripts?
1198098084 M * rgl init.d scripts? or vserver scripts? (it does not have any vserver scripts)
1198098123 M * daniel_hozac vserver scripts.
1198098683 M * ldng Hi, I'm sure this is a classic, but I'll try anyway :P Trying to build a first guest with debootstrap gives me this error: "WARNING: --nid is not supported by this version. chbind: kernel does not provide network isolation". My guess would be that I'm missing some option in my kernel, but I don't know which. Any advice, anyone?
1198099113 M * daniel_hozac why would you enable the legacy version?
1198099132 M * daniel_hozac did you even read the help text?
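[ed: for ldng's error, a quick way to check how the running kernel was configured; the exact option names differ between vserver patch versions, so the symbols are only a hint of what to look for (the legacy networking option should be off, the network isolation one on):]

    zgrep VSERVER /proc/config.gz    # or: grep VSERVER /boot/config-$(uname -r)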