1196899732 Q * mire Ping timeout: 480 seconds
1196900344 J * mire ~mire@75-168-222-85.adsl.verat.net
1196902958 Q * dowdle Remote host closed the connection
1196905187 Q * meebey Remote host closed the connection
1196905217 J * meebey meebey@booster.qnetp.net
1196905960 Q * meebey Remote host closed the connection
1196905990 J * meebey meebey@booster.qnetp.net
1196906632 Q * mire Ping timeout: 480 seconds
1196907623 Q * meebey Remote host closed the connection
1196907653 J * meebey meebey@booster.qnetp.net
1196908152 Q * faheem_ Read error: Connection reset by peer
1196909618 Q * meebey Remote host closed the connection
1196909648 J * meebey meebey@booster.qnetp.net
1196910963 Q * meebey Remote host closed the connection
1196910993 J * meebey meebey@booster.qnetp.net
1196917228 Q * balbir resistance.oftc.net oxygen.oftc.net
1196917228 Q * tam resistance.oftc.net oxygen.oftc.net
1196917228 Q * mick_work resistance.oftc.net oxygen.oftc.net
1196917228 Q * brc resistance.oftc.net oxygen.oftc.net
1196917296 J * balbir ~balbir@122.167.196.19
1196917296 J * mick_work ~clamwin@adsl-068-157-089-099.sip.bct.bellsouth.net
1196917296 J * tam ~tam@gw.nettam.com
1196917296 J * brc bruce@megarapido.cliquerapido.com.br
1196919957 J * shuri ~shuri@64.235.209.226
1196920328 Q * shuri Quit: Leaving
1196920360 N * Bertl_zZ Bertl_oO
1196920385 M * Bertl_oO morning folks ...
1196921296 M * hparker morning Bertl_oO
1196921861 J * DLange ~dlange@p57A300EB.dip0.t-ipconnect.de
1196922229 Q * meebey Remote host closed the connection
1196922234 J * sharkjaw ~gab@shell.ormset.no
1196922259 J * meebey meebey@booster.qnetp.net
1196923976 J * DavidS ~david@213.20.156.34
1196924081 N * DavidS DavidS|FMO
1196924081 Q * meebey Remote host closed the connection
1196924111 J * meebey meebey@booster.qnetp.net
1196924955 P * notabene
1196925088 J * Alikus ~alikus@217.150.200.212
1196925573 J * kir_home ~kir@81.5.110.176
1196925577 J * notabene ~notabene@62.242.54.137
1196925772 J * JonB ~NoSuchUse@kg0-223.kollegiegaarden.dk
1196925912 Q * fatgoose_ Quit: fatgoose_
1196926817 J * vserveraddict ~vserverad@ADSL2.AQUEOS.COM
1196927537 Q * balbir Read error: Operation timed out
1196927624 Q * JonB Quit: This computer has gone to sleep
1196928308 J * balbir ~balbir@122.167.176.252
1196929077 Q * notabene Quit: User is away.
1196929321 J * notabene ~notabene@62.242.54.137
1196929678 J * gebura ~gebura@77.192.186.197
1196929861 Q * meebey Remote host closed the connection
1196929891 J * meebey meebey@booster.qnetp.net
1196929947 J * rgl ~rgl@84.90.10.245
1196929950 M * rgl hellos
1196929981 M * DavidS|FMO hi rgl
1196930025 M * rgl the next supported linux will be 2.6.24, correct?
1196930129 M * gebura hi
1196930329 J * dna ~dna@241-213-dsl.kielnet.net
1196930492 Q * notabene Quit: ChatZilla 0.9.79 [Firefox 2.0.0.4/2007060115]
1196931124 J * notabene ~notabene@62.242.54.137
1196931164 J * JonB ~NoSuchUse@kg0-223.kollegiegaarden.dk
1196931220 Q * notabene
1196931357 M * hparker If I'm going to rsync migrate from a real server to a virtual can I just cp /etc/vserver/existing_guest /etc/vserver/new_guest and edit that?
1196931429 M * DavidS|FMO hparker: yes, but be sure to fix all important symlinks and values
1196931437 M * DavidS|FMO there are quite a few ...
1196931444 M * hparker hrrmm
1196931476 M * hparker Maybe i'll follow the guide at linux-vserver.org :P
1196931484 M * DavidS|FMO context, name, vdir, run, cache, uts/nodename, interfaces/* are the most important i think
1196931490 M * DavidS|FMO that's always a good idea :)
1196931497 M * hparker Especially since I only have gentoo guests till now and this one is FC
1196931507 M * hparker And a live server :P
1196931534 A * hparker reads up on a minimal FC install
1196931607 M * JonB hparker: why not just make a new?
1196931634 M * hparker JonB: i'm lazy, thought it might be quicker/easier to cp it
1196931777 M * JonB hparker: when i moved a real server to a vserver i made a new vserver
1196931784 M * JonB removed most stuff in it's /
1196931784 M * DavidS|FMO hparker: you can make a new _and_ rsync it over ;)
1196931793 M * DavidS|FMO (on top of the new server)
1196931793 M * JonB and then i rsync'ed the real server over
1196931799 M * JonB i kept /dev
1196931816 M * JonB and /etc i cleaned out after moving
1196931847 M * hparker Well, since it's an old FC1 system that no longer resembles FC1 from custom packages, I'm going to need an empty /
1196931879 J * notabene ~notabene@62.242.54.137
1196931951 M * JonB hparker: keep /dev from the vserver guest
1196931973 M * hparker makes sense
1196932004 Q * notabene
1196932035 J * notabene ~nb@62.242.54.137
1196932249 J * mire ~mire@75-168-222-85.adsl.verat.net
1196933323 Q * DavidS|FMO Quit: Leaving.
1196933741 J * virtuoso_ ~s0t0na@ppp91-122-102-47.pppoe.avangard-dsl.ru
1196934149 Q * virtuoso Ping timeout: 480 seconds
1196936226 A * hparker waits on nightly backup to finish :P
1196936775 M * JonB hparker: why do you have to wait for that?
1196936874 M * hparker Cuz i'm a lazy ass, my backup is to bzip up the guests completely every night with a chincy little script that looks for directories in /vservers/ to archive.. Don't want to add one while it's running
1196936915 M * hparker It's 04:30 here, great time for backups... unless you've somehow managed to flip your days and nights yet again
1196937006 M * hparker It's "only" got 10 gig to archive :P
1196937089 Q * brc Ping timeout: 480 seconds
1196937104 J * brc bruce@megarapido.cliquerapido.com.br
1196937266 M * hparker I need to implement something a bit better once i free up some hardware
1196937389 J * lilalinux ~plasma@dslb-084-058-239-053.pools.arcor-ip.net
1196937648 M * JonB hparker: i use rsync to big harddisks in a lvm on another machine
1196937657 M * JonB hparker: i hardlink identical files together
1196937664 M * hparker that's what i'm going to look into
1196937677 M * hparker I can't, too many gentoo guests
1196937680 M * JonB hparker: that means i have a full backup of everything at a specific date
1196937699 M * JonB hparker: sure you can hardlink everything together, let me explain
1196937715 M * JonB hparker: initially i copy everything to the backup machine
1196937728 M * JonB hparker: there i hardlink identical files
1196937732 M * hparker What if they're compiled with different use flags?
1196937743 M * JonB hparker: then they are not identical
1196937758 M * hparker hrrmm... so check md5sum or something?
1196937768 M * JonB hparker: no, far more simple
1196937777 M * JonB hparker: i sort all the files by size
1196937781 M * hparker Oh, I'm all for simple ;)
1196937803 M * JonB files that has the same size are compared bit for bit, and if they are identical i hardlink them
1196937807 M * JonB this takes days
1196937810 M * JonB if not weeks
1196937830 M * JonB once that is done, i use cp -al to make a hardlink copy
1196937849 M * hparker Ahh... Well, it'd always be doing that for me... i've got 6-8 gentoo guests already and going to be adding more
1196937850 M * JonB like from daily to weekly
1196937855 M * JonB or monthly or so
1196937868 M * JonB then i rsync new data ontop of the daily part
1196937878 M * JonB because rsync does not touch unchanged files
1196937892 M * hparker hrrmm
1196937917 M * hparker I might hit you up for your script when I get to that part (If you can share it)
1196937938 M * JonB and rsync writes to a temporary file when there are changes, deletes the old and writes the temporary file to the new file
1196937963 M * JonB this ensures that the hardlink to any identical file is broken before rsync writes any new data
1196937990 M * JonB the next day i can just cp -al daily to a new place and i have a new backup
1196938000 M * JonB i'm afraid i can not share the scripts
1196938004 M * JonB but i can tell you what i did
1196938019 M * hparker Fair enough ;)
1196938105 J * DavidS ~david@213.20.156.34
1196938113 M * JonB hparker: inside my daily directory i have 8 days with the day number since 1/1 and the day name
1196938121 M * JonB hparker: i deleted the oldest
1196938136 M * JonB but i only copy the one from yesterday to today, and then i rsync ontop of today
1196938151 M * JonB once a week i copy today to a weekly
1196938161 M * JonB and i keep 5 weeks
1196938176 M * JonB once a month i copy today to a monthly, deleting the oldest
1196938181 M * JonB and i keep a yearly i never delete
1196938233 M * hparker On your initial setup, how did you do the file comparison?
1196938258 M * JonB hparker: i use find to print out the inode number, the file size and full path to file
1196938285 M * JonB then i have another script that reads that, takes the file size and create a special file with the name of the size
1196938297 M * JonB into that file i echo the output from find
1196938323 M * JonB so, in the end all files of the same size will be written inside a file with a name of that size
1196938330 M * JonB one file per line
1196938347 M * JonB the lines consist of inode, size and full path
1196938361 M * JonB for every line i check the full path to see if the inode is still the same
1196938367 M * JonB as in the line
1196938388 M * JonB if the inode is the same, and there are multiple lines with the same inode, then i ignore those
1196938418 M * JonB for any inode that does not match the top one, i compare the files bit by bit and if they are identical i hardlink the 2 files together
1196938424 M * JonB after that i continue to the next line
1196938455 M * hparker How did you do the compare?
1196938461 M * JonB hparker: cmp
1196938473 M * hparker ouch.. explains the time required
1196938484 M * JonB hparker: no
1196938491 M * JonB hparker: md5sum takes the same time
1196938505 M * hparker oh? ok
1196938514 M * hparker i've never compared them, so...
1196938520 M * JonB hparker: md5sum has to travel the entire length of both files
1196938526 M * JonB compute the md5sum and then compare them
1196938540 M * JonB suppose you run cmp on 2 different files
1196938550 M * JonB cmp will exit on the first bit that is not identical
1196938559 M * JonB that might be pretty early in the file
1196938575 M * JonB only if they are identical will i have to go all the way to the end
1196938585 M * hparker ahh
1196938588 M * hparker makes sense
1196938600 M * JonB it takes alot of time to use find --print on 243987t048w049934 files
1196938639 M * JonB it takes even longer time to use a bash script that uses readline to read 1 line per file and then do something with it
1196938668 M * hparker yeah
1196938674 M * JonB before that i used sort to make one big file
1196938693 M * JonB but there are advantages to keeping small files
1196938705 M * JonB you dont have to use so much memory
1196938710 M * JonB and the diskspace is the same
1196938719 M * JonB but maybe readline was not the smartest choice
1196938764 M * hparker heh
1196938784 M * hparker not sure, i've never looked into doing what you've done there, but sounds interesting
1196938801 M * hparker I was considering cheating with rdiff-backup
1196938811 M * JonB thats a possibility
1196938836 M * JonB i inherited the basic idea of cp -al for hardlinks and rsync ontop
1196938864 M * hparker ahh
1196938867 M * JonB i polished it a bit to make daily, weekly, monthly and yearly copies with a nice readable name
1196938888 M * JonB and then i made those scripts that hardlinked data that was identical, but not hardlinked
1196939022 M * JonB the reason was that my work has alot of dvd images
1196939029 M * JonB and people store them in their homedir
1196939041 M * JonB and an official place on the server
1196939044 M * JonB on a burning machine ...
1196939058 M * JonB i found it better to hardlink all those identical files
1196939087 M * hparker makes sense
1196939104 M * JonB at least on the backup server
1196939111 M * JonB i dont touch files on the server
1196939120 M * JonB because i never know if some user damages it
1196939129 M * hparker ahh
1196939212 M * JonB so, if a DVD image that is hardlinked between /official/place and /home/user/dvd.iso is changed in /home, then rsync will notice that, break the hardlink and transfer the new file into "/home" on the backup
1196939255 M * hparker And won't change in /official/place ... Nice
1196939277 M * JonB yes
1196939286 M * JonB but rdiff backup might also be an option
1196939317 M * JonB i looked at afbackup and stored to removeable harddisks, but that didnt work optimally
1196939323 M * hparker I was going to go with it at first, but opted for this because i was still learning and wanted restores to be quick-n-easy
1196939358 M * hparker And it came in handy when someone stole the partition tables from both drives :P
1196939421 M * hparker And I still need to get the networking on it squared away.. i kinda slammed this together one weekend because a customer wanted a vhost
1196939447 M * hparker It works, but i don't think it's quite right
1196939456 M * JonB it's better than nothing
1196939462 M * hparker heh
1196939508 M * hparker I only have 1 IP for it, provider thinks they're made of gold still... So, I've had to be a bit more creative with things
1196939569 M * JonB well, there are not so many free ip addresses
1196939614 M * hparker When I had my ISP I just had to justify the usage and I got another block
1196939703 M * hparker hell, an ISP I do work for added a connection from another provider... Had a /24 and /25 with the first, and wanting to get away from them as they're high priced.. New provider wouldn't give him smaller than a /23
1196939803 M * hparker But i'm now dealing with the ISP I sold to, not a real connection
1196939938 M * JonB okay
1196940034 M * hparker i just do a little hosting and such now... I figure if it covers the electricity and 'net connection I'm ok
1196940435 M * JonB hparker: and you gotta have net anyway
1196940444 M * hparker yup
1196940483 M * hparker though i have been looking for better paying work... I've got a PC shop, but it's been sporadic and the feast or famine is getting a bit old
1196940500 M * JonB ofc. you could always get a stay at home server http://www.stayathomeserver.com/ (windows home server commercial, but quite funny)
1196940564 M * hparker lol
1196941332 Q * DavidS Quit: Leaving.
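[editor's note] The snapshot rotation JonB describes above (cp -al a hardlink copy of the newest snapshot, then rsync new data on top) can be sketched roughly as below. The snapshot() helper and all paths are illustrative, not his actual scripts (which he said he cannot share); rsync and GNU cp (for -al) are assumed to be installed.

```shell
#!/bin/sh
# Sketch of the cp -al + rsync snapshot rotation described above.

snapshot() {            # snapshot SRC BACKUPDIR
    src=$1; backup=$2
    today="$backup/$(date +%j-%A)"      # day-of-year plus day name, as in the log
    mkdir -p "$backup"
    prev=$(ls -1d "$backup"/*/ 2>/dev/null | tail -n 1)
    prev=${prev%/}
    if [ -n "$prev" ] && [ "$prev" != "$today" ]; then
        cp -al "$prev" "$today"         # cheap hardlink copy of the newest snapshot
    else
        mkdir -p "$today"
    fi
    # rsync leaves unchanged files (and their hardlinks) alone; changed
    # files are written to a temporary file first, which breaks the link
    # before any new data lands.
    rsync -a --delete "$src"/ "$today"/
}

# demo on throwaway directories
work=$(mktemp -d)
mkdir -p "$work/src"
echo hello > "$work/src/file"
snapshot "$work/src" "$work/daily"
```

Weekly/monthly/yearly copies would just cp -al the current daily tree aside on a schedule, as described in the log.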
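[editor's note] The size-then-cmp deduplication pass can be sketched as below. This is a simplification of what JonB describes: it only compares consecutive same-size files after sorting, rather than grouping every file of a given size; the dedup() name and demo paths are made up, and GNU find/stat are assumed.

```shell
#!/bin/sh
# Sketch of the "sort by size, cmp, hardlink" pass described above.

dedup() {               # dedup DIR
    find "$1" -type f -printf '%s %p\n' | sort -n |
    while read -r size path; do
        # cmp -s exits on the first differing byte, so non-identical
        # same-size files are rejected cheaply (the point made in the log).
        if [ "$size" = "$prevsize" ] && cmp -s "$prevpath" "$path"; then
            ln -f "$prevpath" "$path"   # identical: replace with a hardlink
        else
            prevpath=$path              # new content: future link target
        fi
        prevsize=$size
    done
}

# demo: two identical files and one different one
d=$(mktemp -d)
echo samecontent > "$d/a"
echo samecontent > "$d/b"
echo other-data12 > "$d/c"
dedup "$d"
```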
1196943618 Q * JonB Quit: This computer has gone to sleep
1196944034 P * vserveraddict
1196944082 Q * yang Ping timeout: 480 seconds
1196944767 J * JonB ~NoSuchUse@kg0-223.kollegiegaarden.dk
1196945409 M * hparker ugh
1196945423 M * hparker Can't get either apt-rpm or yum to work
1196945448 M * hparker Though I might can fix yum install
1196945660 M * hparker cp: cannot stat `/etc/vservers/.defaults/vdirbase/.pkg/mars.rwisp.com/yum/etc/yum.conf': No such file or directory
1196945684 M * hparker So, I add the directory and put a yum.conf there... Now I get:
1196945707 M * hparker vserver pkgmgmt-directory exists already; please try to use '--force'; or remove it manually
1196946208 Q * rgl Ping timeout: 480 seconds
1196946434 M * daniel_hozac hparker: do you have /vservers/.pkg?
1196946554 M * hparker I created it
1196946600 M * hparker And then created /vservers/.pkg/mars.rwisp.com/yum/etc and copied a yum.conf to it... and then I moved the yum.conf
1196946610 M * hparker It's unhappy with me
1196946691 M * daniel_hozac don't create it, it'll be created by build.
1196946751 M * hparker the first error was when I ust have /vservers/.pkg
1196946758 M * hparker s/ust/just
1196947782 Q * Aiken Quit: Leaving
1196948382 Q * kir_home Ping timeout: 480 seconds
1196948568 M * hparker irritating
1196948574 A * hparker pokes Hollow
1196948738 M * daniel_hozac what is it that you're trying to do?
1196948787 M * hparker Build a FC1 guest so i can rsync an existing server to it
1196948800 M * hparker I guess it could be FC anything
1196948822 M * daniel_hozac uh, if you're just gonna rsync something over it, why not use -m skeleton?
1196948844 M * hparker cuz... I didn't know about it? ;)
1196948862 A * hparker pokes vserver build --help some more
1196948905 M * hparker just s/yum/skeleton?
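[editor's note] The -m skeleton route daniel_hozac suggests looks roughly like the commands below. The guest name, context id, address and old-host name are placeholders, and this needs a Linux-VServer host with util-vserver installed as root, so it is a sketch rather than something runnable here.

```shell
# Build an empty (skeleton) guest: the config tree and an empty vdir,
# no packages installed -- ideal as an rsync target.
vserver mars.rwisp.com build -m skeleton \
    --context 42 --hostname mars \
    --interface eth0:192.0.2.10/24

# Then fill it from the live server, keeping the guest's own /dev
# (as advised earlier in the log) and skipping pseudo-filesystems:
rsync -aHx --numeric-ids \
    --exclude /dev --exclude /proc --exclude /sys \
    oldhost:/ /vservers/mars.rwisp.com/
```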
1196949028 N * virtuoso_ virtuoso
1196949028 Q * meebey Remote host closed the connection
1196949058 J * meebey meebey@booster.qnetp.net
1196949086 M * hparker Seems to have worked, thanks daniel_hozac!
1196949134 M * daniel_hozac you're welcome
1196949337 P * friendly12345
1196950290 M * Bertl_oO yeah, the tools are quite powerful nowadays ...
1196950556 M * hparker And thank everyone for making them that way
1196950602 M * Bertl_oO daniel_hozac: btw, does the rsync build target work with real hosts too?
1196951050 J * doener ~doener@i577AFF28.versanet.de
1196951304 M * Bertl_oO hey doener! is it really you?
1196951312 N * Bertl_oO Bertl
1196951351 M * doener yep
1196951362 M * Bertl LTNS, how's going?
1196951399 Q * mick_work Remote host closed the connection
1196951416 M * doener "slow" probably describes it pretty well... my biggest "achievements" lately were some trivial fixes to git
1196951469 Q * doener_ Ping timeout: 480 seconds
1196951470 M * doener overall, I'm not quite excited about how things evolve, but well...
1196951549 M * doener and you? Everything smooth with you moving to a "different" place? (IIRC it's not _that_ far from your old home, is it?)
1196951558 M * doener how's l-v stuff going?
1196951576 M * doener btw, did you see the per-process capability bounding patches?
1196951592 M * Bertl well, everything fine here, except for the 'usual' cold I get every winter ...
1196951607 M * daniel_hozac Bertl: sure, it just invokes rsync.
1196951632 M * doener heh, hope you get well soon. Over here, it's always my gf who gets ill in the winter. I usually catch a cold in the summer time...
1196951632 M * Bertl the relocation is going good I guess, as usual takes a lot of time, but as you remembered correctly, the short distance helps a lot there
1196951691 M * Bertl on the Linux-VServer I'm a little lazy, mostly because I do not want to fix all the mainline issues ... thus we decided to skip 2.6.23 and go for 2.6.24
1196951704 J * shuri ~shuri@64.235.209.226
1196951722 M * Bertl regarding the per-process capability bounding .. no I missed that, do you have a short overview for me?
1196951766 M * doener I did just see that topic showing up on lkml. Interestingly, it was a bug report ;-) But I can probably get you some lkml.org urls
1196951782 M * daniel_hozac it's on containers, i read it last night. :)
1196951913 M * Bertl does it implement/fix the posix stuff?
1196951972 M * doener http://marc.info/?l=linux-kernel&m=119610783722314&w=2 (lkml.org was tooooo slow)
1196951974 A * hparker wonders why his ~/ takes the longest to transfer
1196951988 M * doener ... that's the patch by Serge
1196952106 M * Bertl hmm, might work for our purpose ... have to look at it
1196952140 M * doener there also seems to be a 64bit capabilities patch in -mm but I can't find where that comes from
1196952174 M * Bertl now that's something ... not that we didn't suggest one a year ago :)
1196952303 M * doener Bertl: probably it's this thread: http://www.mail-archive.com/linux-security-module@vger.kernel.org/msg01938.html
1196952377 M * doener stupid 2321 different mailing lists for the kernel... grml
1196952389 M * daniel_hozac heh
1196952458 J * pusling_ pusling@77.75.162.71
1196952470 Q * _gh_ Ping timeout: 480 seconds
1196952532 Q * pusling Read error: Connection reset by peer
1196952731 M * Bertl doener: you can't have enough kernel mailing lists, to ensure that the subsystem maintainer never gets the proper mail :)
1196952903 Q * JonB Ping timeout: 480 seconds
1196952914 J * allen ~allen@60.52.1.244
1196952924 M * Bertl welcome allen!
1196953077 Q * sharkjaw Quit: Leaving
1196953436 Q * allen
1196953945 Q * lilalinux Quit: Leaving
1196953976 Q * sfs|alfa Quit: [BX] Occifer, take me drunk, I'm home
1196954035 Q * shuri Quit: Leaving
1196954542 J * john ~chatzilla@84-53-64-122.adsl.unet.nl
1196954630 Q * john
1196954785 J * JonB ~NoSuchUse@192.38.8.25
1196956534 J * shuri ~shuri@64.235.209.226
1196957217 Q * gebura Quit: Quitte
1196957617 J * dowdle ~dowdle@scott.coe.montana.edu
1196957753 M * Bertl wb dowdle!
1196957866 A * dowdle drinks coffee
1196958220 M * JonB dowdle: addict?
1196958252 J * Abaddon abaddon@68-71.is.net.pl
1196958263 M * Bertl wb Abaddon!
1196958283 M * Abaddon hi
1196959897 Q * JonB Ping timeout: 480 seconds
1196960058 J * JonB hidden-use@192.38.9.151
1196960700 Q * JonB Quit: This computer has gone to sleep
1196960991 J * JonB hidden-use@192.38.9.151
1196961383 Q * meebey Remote host closed the connection
1196961414 J * meebey meebey@booster.qnetp.net
1196961434 J * bonbons ~bonbons@2001:960:7ab:0:20b:5dff:fec7:6b33
1196961588 Q * meebey Remote host closed the connection
1196961618 J * meebey meebey@booster.qnetp.net
1196961637 M * Bertl wb bonbons!
1196961649 M * bonbons hey Bertl!
1196961657 M * Bertl how's going?
1196962112 M * bonbons fine, just not much time available
1196962182 M * Bertl as usual ...
1196962480 M * bonbons quite so
1196962524 Q * larsivi Quit: Konversation terminated!
1196962667 J * fatgoose ~samuel@76-10-149-199.dsl.teksavvy.com
1196962673 Q * fatgoose Remote host closed the connection
1196962681 J * larsivi ~larsivi@144.84-48-50.nextgentel.com
1196962759 J * fatgoose ~samuel@76-10-149-199.dsl.teksavvy.com
1196963229 Q * shuri Quit: Leaving
1196964641 Q * esa Ping timeout: 480 seconds
1196964987 N * bzed jeden
1196964989 J * esa bip@ip-87-238-2-45.adsl.cheapnet.it
1196965016 N * jeden bzed
1196965604 J * Infinito argos@201-10-156-171.gnace701.dsl.brasiltelecom.net.br
1196965915 Q * rob-84x^ Quit: That's it for today
1196968113 Q * meebey Remote host closed the connection
1196968144 J * meebey meebey@booster.qnetp.net
1196968505 J * Darkglow ~pdesnoyer@208.71.184.41
1196968514 M * Darkglow hi all
1196968524 M * Darkglow still having issues with hashify... :(
1196968543 M * daniel_hozac how so?
1196968557 M * Darkglow on 1 host, I have only 1 vserver (for now).
1196968604 M * Darkglow after installation using some scripts and debootstrap, chxid globally and then I run hashify
1196968619 M * Darkglow now, some config files are not writable in /etc
1196968638 M * daniel_hozac so you're on a 2.0 kernel?
1196968642 M * Darkglow so I chxid the file and it's still unwritable... do I have to restart the vserver after chxid ?
1196968665 M * Darkglow (I use the kernel from Debian Etch, not 100% sure what version it is...
1196968681 M * daniel_hozac 2.0.2.2-rc9
1196968682 Q * bragon Ping timeout: 480 seconds
1196968684 M * Darkglow + utils from backports.
1196968687 Q * Abaddon Quit: leaving
1196968694 M * daniel_hozac you might want to get a kernel from backports too.
1196968736 M * Darkglow why ? (what are the bug fixes important to me in this situation ? cow fixing ?
1196968750 M * daniel_hozac 2.2 _has_ COW.
1196968754 M * daniel_hozac 2.0 does not.
1196968759 M * Darkglow ah.
1196968761 M * Darkglow good point !
1196968795 M * Darkglow so in 2.0 (I have a few machines with it now... I need to find a solution for now and I will think about a backport kernel after christmas ;-)
1196968820 M * Darkglow if I want to access a file that was in context 0, I use chxid right ?
1196968870 M * daniel_hozac what?
1196968881 J * bragon ~bragon@2001:7a8:aa58::1
1196968947 M * Darkglow ok : 1 file --> /etc/nagios/nrpe.cfg in my vserver. It had context 0. I could not edit it. I changed the xid using chxid -c path_to_file
1196968969 M * Darkglow lsxid shows the correct xid, but I still cannot write to it.
1196968978 M * Darkglow maybe I have to restart the vserver ?
1196969099 M * daniel_hozac it has nothing to do with that.
1196969111 M * daniel_hozac it's immutable, you can't edit immutable files.
1196969150 M * Darkglow how do I make files non-immutable ?
1196969152 M * daniel_hozac context 0 files are accessible by all guests.
1196969162 M * daniel_hozac you need to unhashify it.
1196969240 M * Darkglow easy command to do that or do I "cp, rm, mv" the file?
1196969319 M * Darkglow in 2.2, this is not the same right ? It would break the link and create a new file ? (cow right?)
1196969321 M * daniel_hozac yeah, that'll do it.
1196969323 M * daniel_hozac yes.
1196969366 M * Darkglow then new kernel for vservers just gained a big vote ;-) had a couple of issues with these files in the past.
1196969463 M * Darkglow thanks again... it's starting to sync in now (this whole hash/cow/immutable stuff). ;-)
1196969980 Q * ryker Ping timeout: 480 seconds
1196970237 Q * mire Ping timeout: 480 seconds
1196970476 J * rgl ~rgl@84.90.10.245
1196970479 A * rgl waves
1196970681 M * Bertl hey rgl!
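[editor's note] The "cp, rm, mv" trick for unhashifying a single file boils down to re-creating it under a new inode, which breaks the hardlink to the shared copy (on 2.2 kernels COW does this automatically on write). A minimal illustration of the inode mechanics with plain coreutils; the breaklink() helper and demo files are made up, and no vserver immutable/iunlink attributes are involved here.

```shell
#!/bin/sh
# Re-create a file under a new inode: copy aside, then rename over it.

breaklink() {           # breaklink FILE
    cp -p "$1" "$1.tmp.$$" && mv "$1.tmp.$$" "$1"
}

d=$(mktemp -d)
echo 'config' > "$d/shared"      # stands in for the unified master copy
ln "$d/shared" "$d/guest.cfg"    # guest file hardlinked to it
breaklink "$d/guest.cfg"         # guest file now has its own inode
```

After this, edits to guest.cfg no longer touch the shared copy, which is exactly why the edit in the other guests stays intact.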
1196970868 J * mire ~mire@15-169-222-85.adsl.verat.net
1196970869 Q * dna Read error: Connection reset by peer
1196970893 J * dna ~dna@241-213-dsl.kielnet.net
1196971465 Q * duckx Read error: Connection reset by peer
1196971649 J * duckx ~Duck@81.57.39.234
1196972065 J * Abaddon abaddon@68-71.is.net.pl
1196972500 Q * Alikus Remote host closed the connection
1196973473 Q * JonB Quit: This computer has gone to sleep
1196974399 J * yarihm ~yarihm@84-75-119-160.dclient.hispeed.ch
1196974459 Q * yarihm
1196974467 J * yarihm ~yarihm@84-75-119-160.dclient.hispeed.ch
1196974515 Q * yarihm
1196975103 J * ryker ~ryker@71.239.15.74
1196975200 J * Aiken ~james@ppp121-45-202-26.lns1.bne1.internode.on.net
1196975780 P * Darkglow Konversation terminated!
1196975885 J * alex^^ ~email@78-86-117-217.zone2.bethere.co.uk
1196975901 M * Bertl wb alex^^!
1196976043 M * alex^^ hi :D
1196976044 M * alex^^ thanks
1196976048 M * alex^^ http://pastebin.com/m6d5bde72 # can someone please tell me why my vservers are not rebooting after i did a system reboot?
1196976053 M * alex^^ having some issues with vserver today
1196976057 M * alex^^ shes being unstable on me
1196976100 M * Bertl 'rebooting' means starting, and 'system reboot' refers to the host system?
1196976104 M * alex^^ yes
1196976117 M * alex^^ system reboot
1196976141 M * Bertl looks like your host system didn't run 'vprocunhide'
1196976154 M * Bertl which util-vserver version do you use?
1196976165 M * alex^^ um
1196976177 M * alex^^ im sorry, how do i check again?
1196976179 M * Bertl (check with vserver-info - SYSINFO)
1196976183 M * Bertl (please use paste.linux-vserver.org for everything longer than 3 lines)
1196976218 M * alex^^ http://pastebin.com/m4e878585.
1196976219 M * alex^^ http://pastebin.com/m4e878585
1196976259 M * Bertl hmm, you might want to update them (for newer kernels like you use) to 0.30.214
1196976268 M * Bertl you can get it from backports
1196976287 M * alex^^ whats interesting is the system was playing up on me, directory listing and the likes
1196976296 M * Bertl look for the vprocunhide (runlevel) script and run it
1196976296 M * alex^^ so i performed a system reboot, and now im getting this
1196976303 M * alex^^ ok
1196976323 M * alex^^ would i normally find it in /usr/bin something?
1196976406 M * Bertl I have no debian system at the hand ATM
1196976436 M * alex^^ i just did an install of util-vserver
1196976444 M * alex^^ its like it was never there
1196976445 M * alex^^ hmm wtf
1196976497 M * daniel_hozac from source or package?
1196976497 M * alex^^ ahh hang
1196976504 M * alex^^ ahh nuts :)
1196976513 M * alex^^ yes i just installed vserver from a package
1196976513 M * Bertl rpm -ql util-vserver-sysv-0.30.214-0.1
1196976516 M * Bertl /etc/rc.d/init.d/vprocunhide
1196976519 M * alex^^ while i have vserver source ;)
1196976523 M * Bertl (this is on mandriva)
1196976554 M * daniel_hozac Debian is special, the packages have /etc/init.d/util-vserver only.
1196976561 M * alex^^ yeap
1196976568 M * alex^^ The configured vshelper '/sbin/vshelper' does not match the 'vshelper'
1196976568 M * alex^^ script of the util-vserver package. Maybe you have two versions installed?
1196976570 M * alex^^ woops
1196976585 M * alex^^ i installed a package whilst i had my other vserver built from source
1196976595 M * alex^^ been a while since i touched this system, forgotten what i did
1196976624 M * alex^^ right!
1196976626 M * alex^^ got it
1196976630 M * alex^^ echo "/usr/local/lib/util-vserver/vshelper" >/proc/sys/kernel/vshelper
1196976635 M * alex^^ shes firing up
1196976722 M * alex^^ okay im back online now
1196976743 M * alex^^ now gotta figure out why i was getting very weird directory permission errors
1196976749 M * alex^^ also does the kernel / fstab support quotas?
1196976755 M * alex^^ in guest systems
1196976781 M * daniel_hozac http://oldwiki.linux-vserver.org/Standard%20non-shared%20quota
1196976797 M * alex^^ Dec 6 21:32:33 moon postfix/qmgr[4301]: fatal: open dictionary: expecting "type:name" form instead of "Warning:"
1196976804 M * alex^^ im getting this in my postfix now .. hmm
1196976835 M * daniel_hozac looks like something is warning.
1196977031 M * Guy- umm... I just realized I'm somewhat confused as far as contexts and namespaces are concerned
1196977042 M * Guy- if I enter a vserver and cat /proc/mounts, I see what I expect to see
1196977065 M * Guy- however, whether I try vcontext or vnamespace to cat /proc/mounts, I always see the mounts of the host as well
1196977076 M * daniel_hozac yes.
1196977113 M * Guy- why is that? what other operation does 'vserver exec' carry out that limits what the spawned process sees in /proc?
1196977121 M * daniel_hozac chroot
1196977161 M * Guy- so the problem is that I'm cat'ing the /proc/mounts of the host?
1196977166 M * Guy- I thought that file was virtual
1196977203 M * daniel_hozac no, the problem is that all those mountpoints are reachable from the host.
1196977204 M * Guy- or does the kernel function that generates it check the location of the root of the reading process?
1196977269 M * Guy- so /proc/mounts is filtered based on what the reading process can access based on its root?
1196977294 M * Bertl yes, but it is also based on the namespace
1196977294 M * Guy- and the mountpoints are re-based to be relative to that root?
1196977313 M * daniel_hozac yes.
1196977320 M * Bertl the thing is, you are observing two different mechanisms here at once
1196977341 M * Guy- I thought three
1196977344 M * Bertl multiple (filesystem) namespaces and vfs root/mount points
1196977400 M * Guy- how tightly is the security context (vcontext) and the filesystem namespace (vnamespace) coupled?
1196977411 M * Guy- are they completely orthogonal?
1196977416 M * Bertl in the current design, the guest namespace sees most of the host system, except for the actual guest processes, which are chrooted into the guest
1196977449 M * Bertl this allows to administrate the guests from their namespace
1196977538 M * Bertl a different, in our opinion inferior, approach would be to start an init process with all the host stuff removed
1196977654 M * Guy- OK, thanks
1196977663 M * Bertl you're welcome!
1196977711 M * Guy- when I create a new context, it 'inherits' the filesystem namespace from context0, is that correct?
1196977713 J * Piet ~piet@tor.noreply.org
1196977779 M * daniel_hozac if you just create the context, it does not have a filesystem namespace.
1196977830 Q * Infinito Quit: Quitte
1196977847 M * Guy- so contexts and namespaces are completely orthogonal? I can create one without the other?
1196977861 M * Guy- and, say, enter context x and namespace y at the same time?
1196977885 M * daniel_hozac sure.
1196977889 M * daniel_hozac namespaces are in mainline.
1196977899 M * daniel_hozac have been since 2.4.19 or something.
1196977945 M * Guy- but they aren't used for anything in mainline, are they?
1196977974 M * Bertl not that I know of, but they could be
1196977982 M * Bertl i.e. the interfaces are all there
1196977983 M * daniel_hozac there's pam_namespace.
1196977995 M * Guy- ah, indeed
1196978007 M * Guy- is the filesystem namespace and the uts namespace the same thing?
1196978012 M * Guy- or are these also different?
1196978014 M * Bertl daniel_hozac: didn't know that, tx!
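[editor's note] The distinction discussed above (context entry, namespace entry, and the extra chroot that `vserver exec` does) can be made visible with the util-vserver tools roughly as below. The xid 42 and guest name "mars" are placeholders, and this needs a running guest on a Linux-VServer host, so it is a sketch only.

```shell
# context only: you are in the guest's xid, but still see the host's mounts
vcontext --migrate --xid 42 -- cat /proc/mounts

# namespace only: the guest's mount namespace, but not chrooted into the vdir
vnamespace --enter 42 -- cat /proc/mounts

# vserver ... exec combines context + namespace + chroot, so /proc/mounts
# shows only what is reachable from the guest's root
vserver mars exec cat /proc/mounts
```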
1196978034 M * daniel_hozac uts and filesystem are wildly separate things.
1196978043 M * daniel_hozac Bertl: i found out about it the other day :)
1196978067 M * Guy- so, when I create a vserver, the following kernelspace objects are created:
1196978070 M * Guy- - filesystem namespace
1196978073 M * Guy- - security context
1196978075 M * Guy- - uts namespace
1196978078 M * Guy- - network context
1196978080 M * daniel_hozac - ipc namespace
1196978104 M * Bertl soo, user and optionally, pid/network namespace
1196978108 M * Bertl *soon
1196978126 M * daniel_hozac hmm, pid is going to be optional?
1196978130 M * Guy- the uts namespace determines the hostname and what else?
1196978141 M * Bertl daniel_hozac: yes, if possible ...
1196978159 M * Guy- and the network namespace will be for routes and such?
1196978169 M * daniel_hozac Bertl: what would the alternative be?
1196978184 M * Bertl I think we do not want to sacrifice the blend through isolation performance for separate spaces (unconditionally)
1196978200 M * daniel_hozac we can still do blend through, can't we?
1196978211 M * Bertl yes, but that is not the real benefit
1196978244 M * Bertl once we see that pid spaces are as fast as the unpartitioned thing, we can drop the isolation
1196978272 M * Bertl I asked for performance numbers a number of times, but AFAIK, none were reported yet
1196978281 M * daniel_hozac oh, yeah.
1196978301 M * Bertl and from the implementation it looks like a lot of overhead to me
1196978302 M * daniel_hozac i've lost the overview... are pid namespaces merged fully yet?
1196978322 M * Bertl they should be there, but I didn't dare to test with them yet :)
1196978342 M * daniel_hozac heh.
1196978351 M * Bertl exploding nsproxies and such *G*
1196978368 M * daniel_hozac what?
1196978380 M * Bertl just kidding ...
1196978396 M * AStorm heh
1196978399 M * daniel_hozac i realized that, i just figured it was a reference to something i had missed :)
1196978426 M * Bertl well, we had one version where the nsproxy got stuck (in mainline) IIRC
1196978426 M * AStorm nah, it's almost no overhead
1196978439 M * AStorm but it isn't finished yet API wise
1196978447 M * AStorm and still has bugs
1196978453 M * Bertl and we also had an nsproxy version which did free itself at least once too often :)
1196978461 M * daniel_hozac no overhead?
1196978483 M * Bertl _almost_ <- could mean 2-20%, no?
1196978492 M * daniel_hozac hehehe
1196978507 M * AStorm Bertl: no, it means <2%
1196978521 M * Bertl AStorm: you _have_ real numbers?
1196978522 M * AStorm I couldn't measure anything here
1196978524 M * AStorm no
1196978525 M * Bertl if so, please share
1196978532 M * AStorm because I wasn't in the single mode when measuring
1196978539 M * AStorm and couldn't get anything over noise
1196978544 M * AStorm maybe later
1196978551 M * Bertl how did you test them?
1196978567 M * daniel_hozac it seems to me like walking a list of pids is bound to be slower than just accessing the pid directly.
1196978577 M * AStorm Bertl: hackbench
1196978591 M * AStorm vs ps ax
1196978621 M * AStorm daniel_hozac: well, "pid directly" equals reading the list
1196978624 M * AStorm :>
1196978628 M * Bertl I think a relevant test would be to run at least 10 spaces side by side, and compare them to a single setup (without the spaces enabled) at an equal number of processes doing some pid bound work
1196978643 M * AStorm Bertl: I did 5 namespaces
1196978649 M * AStorm each with 50 processes
1196978654 M * AStorm maybe too little ;P
1196978662 M * Bertl and you compared it to?
1196978671 M * AStorm To normal hackbench
1196978680 M * AStorm 5x 50 processes
1196978687 M * Bertl running on a kernel with pid spaces disabled
1196978687 M * daniel_hozac AStorm: there is no list without pid namespaces.
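[The measurement problem AStorm ran into — the signal drowning in scheduler noise — is the usual one for microbenchmarks. A hedged sketch of the comparison Bertl asks for: time the same pid-bound work with and without the feature under test at an equal process count, and take the minimum of several runs rather than the mean. The workload here is a stand-in (a tight getpid() loop), not hackbench itself:]

```python
# Sketch: benchmark a pid-bound workload robustly.  The workload is a
# hypothetical stand-in (repeated getpid() calls), not hackbench; the point
# is the methodology -- repeat the run and keep the minimum, since noise
# (scheduler, caches) only ever makes a run slower, never faster.
import os
import time

def pid_bound_work(iterations=200_000):
    for _ in range(iterations):
        os.getpid()                      # cheap syscall touching the pid path

def best_of(runs=5, iterations=200_000):
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        pid_bound_work(iterations)
        times.append(time.perf_counter() - t0)
    return min(times)                    # min, not mean: robust to noise spikes

if __name__ == "__main__":
    print(f"best run: {best_of():.4f}s")
```

[Run the same script once on a kernel with pid spaces enabled and once without, at the same total process count, and compare the minima.]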
1196978696 Q * meebey Remote host closed the connection
1196978718 M * AStorm daniel_hozac: check yourself, it's the queue of task_struct that has to be walked anyway
1196978724 Q * rgl Quit: Enough
1196978726 J * meebey meebey@booster.qnetp.net
1196978733 M * AStorm indeed, pidns adds overhead, but it's constant
1196978739 M * Guy- will pid namespaces mean that each vserver can, theoretically, run 32000 processes?
1196978752 J * shuri ~shuri@64.235.209.226
1196978755 M * Bertl yes
1196978765 M * AStorm Guy-: if your machine can, which it won't be able to I guess ;P
1196978767 M * daniel_hozac AStorm: i mean the list of pids _per_task_struct.
1196978780 M * AStorm daniel_hozac: as I said, it's constant overhead
1196978781 M * Guy- AStorm: no, I don't think so either :)
1196978795 M * AStorm because you're scanning the list anyway
1196978828 M * AStorm unless you're getting the current task pid, which you'll have regardless of namespaces w/o any scanning
1196978830 M * daniel_hozac AStorm: but there is no list without pid namespaces...
1196978840 M * AStorm daniel_hozac: so?
1196978855 M * AStorm which part of constant overhead eludes you? :-)
1196978872 M * daniel_hozac it's also not constant as you can have varying depths of namespaces.
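[The "list of pids per task_struct" and the depth dependence daniel_hozac points at can be sketched as a toy model (names here are hypothetical, not the kernel's): one pid object carries a number for its own namespace and for every ancestor, like the kernel's struct pid with its per-level upid array. Translating a pid into some namespace's view then costs O(nesting depth), independent of the total number of tasks, and every guest namespace hands out its own pid 1 and its own full pid range:]

```python
# Toy model of nested pid namespaces (illustrative names, not kernel code).
# A Pid records one numeric pid per ancestry level; looking up the number a
# given namespace sees is a fixed walk of that per-task list -- O(depth),
# which is what "constant overhead" vs. "varying depths" is about.
import itertools

class PidNamespace:
    def __init__(self, parent=None):
        self.parent = parent
        self.level = 0 if parent is None else parent.level + 1
        self._next = itertools.count(1)          # each ns numbers its tasks itself

    def alloc(self):
        return next(self._next)

class Pid:
    def __init__(self, ns):
        self.numbers = {}                        # level -> (namespace, numeric pid)
        while ns is not None:                    # walk up the ancestry: O(depth)
            self.numbers[ns.level] = (ns, ns.alloc())
            ns = ns.parent

    def nr_in(self, ns):
        """The pid as seen from `ns`, or None if `ns` cannot see this task."""
        entry = self.numbers.get(ns.level)
        return entry[1] if entry and entry[0] is ns else None

host = PidNamespace()
host_tasks = [Pid(host) for _ in range(3)]       # host-only tasks take pids 1..3
guest = PidNamespace(host)
init = Pid(guest)                                # first task inside the guest
print(init.nr_in(guest), init.nr_in(host))       # -> 1 4: pid 1 inside, pid 4 on the host
```

[Because each namespace allocates independently, every guest can in principle use the whole pid range — which is the answer to Guy-'s 32000-processes question, subject to the machine coping.]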
1196978878 M * AStorm xids are also constant overhead
1196978917 M * AStorm daniel_hozac: well, it's semi-nested
1196978932 M * AStorm the code is weirder than you think it is
1196978936 M * AStorm :-)
1196978956 M * AStorm you can call it O(nest level) if you wish
1196978961 M * AStorm that should be small anyway
1196979013 M * AStorm there are many alternative implementations to be considered though
1196979022 M * AStorm I think they wouldn't drop anything that improves performance
1196979025 M * AStorm :>
1196979038 M * AStorm first, they have to be working though
1196979065 N * pusling_ pusling
1196979562 Q * bonbons Quit: Leaving
1196979636 J * derjohn_mobil ~aj@84-73-35-99.dclient.hispeed.ch
1196980057 Q * alex^^ Quit: ircN 8.00 for mIRC (20070730)
1196980859 Q * derjohn_mobil Ping timeout: 480 seconds
1196981594 Q * tam cation.oftc.net resistance.oftc.net
1196981594 Q * dna cation.oftc.net resistance.oftc.net
1196981594 Q * dowdle cation.oftc.net resistance.oftc.net
1196981594 Q * hparker cation.oftc.net resistance.oftc.net
1196981594 Q * zLinux cation.oftc.net resistance.oftc.net
1196981594 Q * AndrewLee cation.oftc.net resistance.oftc.net
1196981594 Q * faheem__ cation.oftc.net resistance.oftc.net
1196981594 Q * arachnist cation.oftc.net resistance.oftc.net
1196981594 Q * besonen_mobile cation.oftc.net resistance.oftc.net
1196981594 Q * shuri cation.oftc.net resistance.oftc.net
1196981594 Q * micah cation.oftc.net resistance.oftc.net
1196981594 Q * Aiken cation.oftc.net resistance.oftc.net
1196981594 Q * hardwire cation.oftc.net resistance.oftc.net
1196981594 Q * Skram cation.oftc.net resistance.oftc.net
1196981594 Q * djbclark cation.oftc.net resistance.oftc.net
1196981594 Q * ryker cation.oftc.net resistance.oftc.net
1196981594 Q * mire cation.oftc.net resistance.oftc.net
1196981594 Q * quasisane cation.oftc.net resistance.oftc.net
1196981594 Q * bored2sleep cation.oftc.net resistance.oftc.net
1196981594 Q * fatgoose cation.oftc.net resistance.oftc.net
1196981594 Q * brc cation.oftc.net resistance.oftc.net
1196981594 Q * balbir cation.oftc.net resistance.oftc.net
1196981594 Q * mountie cation.oftc.net resistance.oftc.net
1196981594 Q * Hollow cation.oftc.net resistance.oftc.net
1196981594 Q * neuralis cation.oftc.net resistance.oftc.net
1196981594 Q * emag cation.oftc.net resistance.oftc.net
1196981621 J * djbclark dclark@opensysadmin.com
1196981621 J * Skram ~mark@HERCULES.sentiensystems.net
1196981621 J * hardwire ~bip@rdbck-7085.palmer.mtaonline.net
1196981676 J * mountie ~mountie@trb229.travel-net.com
1196981676 J * Hollow ~hollow@proteus.croup.de
1196981676 J * neuralis ~krstic@solarsail.hcs.HARVARD.EDU
1196981676 J * emag ~Itoc5OI6@gurski.org
1196981676 J * balbir ~balbir@122.167.176.252
1196981676 J * brc bruce@megarapido.cliquerapido.com.br
1196981676 J * fatgoose ~samuel@76-10-149-199.dsl.teksavvy.com
1196981676 J * besonen_mobile ~besonen_m@71-220-198-145.eugn.qwest.net
1196981676 J * zLinux ~zLinux@88.213.17.215
1196981676 J * AndrewLee ~andrew@flat.iis.sinica.edu.tw
1196981676 J * faheem__ ~faheem@152.16.8.94
1196981676 J * arachnist ~arachnist@pool-71-174-118-56.bstnma.fios.verizon.net
1196981676 J * hparker ~hparker@linux.homershut.net
1196981676 J * dowdle ~dowdle@scott.coe.montana.edu
1196981676 J * dna ~dna@241-213-dsl.kielnet.net
1196981676 J * Aiken ~james@ppp121-45-202-26.lns1.bne1.internode.on.net
1196981676 J * micah ~micah@micah.riseup.net
1196981676 J * shuri ~shuri@64.235.209.226
1196981676 J * bored2sleep ~bored2sle@66-111-53-150.static.sagonet.net
1196981676 J * quasisane ~sanep@c-76-118-191-64.hsd1.nh.comcast.net
1196981676 J * mire ~mire@15-169-222-85.adsl.verat.net
1196981676 J * ryker ~ryker@71.239.15.74
1196981746 J * tam ~tam@gw.nettam.com
1196981759 M * Bertl wb tam!
1196981768 M * Bertl okay, off to bed now .. have a good one everyone!
1196981774 N * Bertl Bertl_zZ
1196982703 Q * DLange Quit: Bye, bye. Hasta luego.
1196983377 Q * doener Quit: leaving
1196983736 J * doener ~doener@i577AFF28.versanet.de
1196983757 Q * nou Ping timeout: 480 seconds
1196983876 J * Q_ kurt@d54C3F9BC.access.telenet.be
1196985167 Q * Abaddon Quit: g'night