1178843611 J * lyli1 ~eric@dynamic-acs-24-154-33-109.zoominternet.net 1178848392 J * click_ click@ti511110a080-0476.bb.online.no 1178848512 Q * click Ping timeout: 480 seconds 1178849728 J * nox ~nox@static.88-198-17-175.clients.your-server.de 1178849818 J * svenk ~sven@pulsar.digital.udk-berlin.de 1178850739 J * Loki|muh_ loki@satanix.de 1178850740 Q * Loki|muh Read error: Connection reset by peer 1178850750 N * Loki|muh_ Loki|muh 1178850850 N * ensc Guest127 1178850860 J * ensc ~irc-ensc@p54B4E32E.dip.t-dialin.net 1178850967 Q * Guest127 Ping timeout: 480 seconds 1178851777 J * edog ~edog@office.aichyna.com 1178853943 J * stefani ~stefani@c-24-19-46-211.hsd1.mn.comcast.net 1178855578 Q * FloodServ charon.oftc.net synthon.oftc.net 1178855578 Q * besonen charon.oftc.net synthon.oftc.net 1178855578 Q * Johnnie charon.oftc.net synthon.oftc.net 1178855578 Q * stefani charon.oftc.net synthon.oftc.net 1178855578 Q * mugwump charon.oftc.net synthon.oftc.net 1178855578 Q * ruskie charon.oftc.net synthon.oftc.net 1178855578 Q * mstrobert charon.oftc.net synthon.oftc.net 1178855578 Q * mattzerah charon.oftc.net synthon.oftc.net 1178855578 Q * bragon charon.oftc.net synthon.oftc.net 1178855578 Q * Rich_Estill charon.oftc.net synthon.oftc.net 1178855578 Q * weasel charon.oftc.net synthon.oftc.net 1178855578 Q * micah charon.oftc.net synthon.oftc.net 1178855578 Q * djbclark charon.oftc.net synthon.oftc.net 1178855578 Q * johnb charon.oftc.net synthon.oftc.net 1178855578 Q * soltesz charon.oftc.net synthon.oftc.net 1178855578 Q * ntrs_ charon.oftc.net synthon.oftc.net 1178855578 Q * lyli1 charon.oftc.net synthon.oftc.net 1178855578 Q * infowolfe charon.oftc.net synthon.oftc.net 1178855578 Q * brcc_ charon.oftc.net synthon.oftc.net 1178855578 Q * besonen_mobile charon.oftc.net synthon.oftc.net 1178855578 Q * hardwire charon.oftc.net synthon.oftc.net 1178855578 Q * mountie charon.oftc.net synthon.oftc.net 1178855700 J * stefani ~stefani@c-24-19-46-211.hsd1.mn.comcast.net 1178855700 J * lyli1 ~eric@dynamic-acs-24-154-33-109.zoominternet.net 1178855700 J * mugwump ~samv@watts.utsl.gen.nz 1178855700 J * soltesz ~soltesz@aegis.CS.Princeton.EDU 1178855700 J * ntrs_ ntrs@68-188-55-120.dhcp.stls.mo.charter.com 1178855700 J * infowolfe ~infowolfe@c-67-164-195-129.hsd1.ut.comcast.net 1178855700 J * FloodServ services@services.oftc.net 1178855700 J * ruskie ruskie@ruskie.user.oftc.net 1178855700 J * mstrobert ~mstrobert@wkstn.wycliffe.ca 1178855700 J * micah ~micah@micah.riseup.net 1178855700 J * brcc_ bruce@i.am.someasshole.com 1178855700 J * mattzerah ~matt@121.50.222.55 1178855700 J * bragon ~bragon@sam.geeknode.org 1178855700 J * besonen_mobile ~besonen_m@71-220-227-185.eugn.qwest.net 1178855700 J * hardwire ~bip@rdbck-1922.wasilla.mtaonline.net 1178855700 J * djbclark dclark@opensysadmin.com 1178855700 J * weasel weasel@weasel.noc.oftc.net 1178855700 J * besonen ~besonen@dsl-db.pacinfo.com 1178855700 J * Rich_Estill ~restill@c-24-11-195-139.hsd1.mi.comcast.net 1178855700 J * johnb ~johnb@ge0.fw0.Athens.oh.us.splicednetworks.net 1178855700 J * Johnnie ~jdlewis@c-67-163-247-109.hsd1.pa.comcast.net 1178855700 J * mountie ~mountie@trb229.travel-net.com 1178856598 M * Bertl_oO okay, off to bed now ... have a good one everyone! cya! 
1178856603 N * Bertl_oO Bertl_zZ 1178856961 P * stefani parting (is such sweet sorrow) 1178858049 Q * virtuoso Ping timeout: 480 seconds 1178858078 J * virtuoso ~s0t0na@80.253.205.251 1178859299 J * lchvdlch ~nestor@201.240.69.78 1178859508 Q * lchvdlch 1178861933 Q * edog Remote host closed the connection 1178862570 J * ktwilight_ ~ktwilight@105.122-66-87.adsl-dyn.isp.belgacom.be 1178862723 Q * ruskie Remote host closed the connection 1178862913 J * ruskie ruskie@ruskie.user.oftc.net 1178862919 Q * ktwilight Ping timeout: 480 seconds 1178863107 J * comfrey ~comfrey@cpe-024-211-195-185.nc.res.rr.com 1178864619 Q * opuk Ping timeout: 480 seconds 1178864844 Q * DavidS Quit: Leaving. 1178864977 J * opuk ~kupo@c213-100-138-228.swipnet.se 1178865093 J * BenG ~ben@82-45-23-100.cable.ubr03.azte.blueyonder.co.uk 1178865103 P * BenG 1178867950 J * meandtheshell ~markus@85-124-39-136.dynamic.xdsl-line.inode.at 1178869910 J * DavidS ~david@vpn.uni-ak.ac.at 1178870982 J * dna ~naucki@122-198-dsl.kielnet.net 1178871035 Q * comfrey Quit: Lost terminal 1178873671 Q * cdrx Quit: Leaving 1178873675 J * cdrx ~legoater@cap31-3-82-227-199-249.fbx.proxad.net 1178873790 J * ktwilight ~ktwilight@112.69-66-87.adsl-dyn.isp.belgacom.be 1178874204 Q * ktwilight_ Ping timeout: 480 seconds 1178874218 J * ktwilight_ ~ktwilight@134.91-66-87.adsl-dyn.isp.belgacom.be 1178874507 Q * ktwilight Ping timeout: 480 seconds 1178874573 J * ktwilight ~ktwilight@181.210-66-87.adsl-static.isp.belgacom.be 1178874837 Q * ktwilight_ Ping timeout: 480 seconds 1178875276 P * EdwardTLS Leaving 1178875927 J * ktwilight_ ~ktwilight@98.214-66-87.adsl-static.isp.belgacom.be 1178875964 J * lilalinux ~plasma@dslb-084-058-216-158.pools.arcor-ip.net 1178876104 Q * meandtheshell Quit: Leaving. 1178876136 J * meandtheshel1 ~markus@85-124-39-136.dynamic.xdsl-line.inode.at 1178876307 Q * ktwilight Ping timeout: 480 seconds 1178876893 Q * meandtheshel1 Quit: Leaving. 1178876902 J * meandtheshell ~markus@85-124-39-136.dynamic.xdsl-line.inode.at 1178877617 J * ciphergoth ~paul@host226.lshift.net 1178877658 M * ciphergoth I'm trying to set up a Fedora 6 guest. I've installed "yum" on the guest, but when it goes to fetch the mirror list, it uses this URL 1178877671 M * ciphergoth GET /mirrorlist?repo=core-Null&arch=i386 1178877703 M * ciphergoth ie it substitutes Null for $releasever in "mirrorlist" 1178877716 M * daniel_hozac does the guest have fedora-release installed? 1178877727 M * ciphergoth how do I check? 1178877758 M * daniel_hozac rpm -q fedora-release 1178877808 M * ciphergoth nope 1178877843 M * daniel_hozac that'd be why. 1178877846 M * ciphergoth So sudo vyum server -- install fedora-release then? 1178877855 M * daniel_hozac i guess so. 1178877928 M * ciphergoth weird - apart from warning me about my bad version of yum, that prints no output and fedora-release is still not installed 1178878047 M * daniel_hozac you _have_ internalized package management, right? 1178878100 M * ciphergoth ...internalized package management? 1178878116 M * ciphergoth I installed a few things with vyum including yum and then tried to start using it 1178878120 M * daniel_hozac vserver pkgmgmt internalize 1178878123 M * ciphergoth there's another step? 1178878124 M * ciphergoth aha 1178878181 M * ciphergoth oooooooo! 1178878185 M * ciphergoth many thanks! 1178878248 M * ciphergoth it's a happy vserver now 1178878251 M * daniel_hozac you're welcome! 1178878400 Q * meandtheshell Quit: Leaving. 
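A rough host-side recap of the fix worked out above, assuming the guest is named "server" as in the exchange and a util-vserver setup comparable to the one discussed (exact behaviour varies by version):

    # is fedora-release installed in the guest? its absence is what makes
    # yum expand $releasever to "Null" in the mirrorlist URL
    vserver server exec rpm -q fedora-release

    # hand package management over to the guest's own rpm/yum database
    vserver server pkgmgmt internalize

    # then install fedora-release (here via the host-side vyum wrapper)
    vyum server -- install fedora-release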
1178878441 J * meandtheshel1 ~markus@85.124.39.136 1178880495 A * trippeh bought some new drives yesterday: 1178880497 M * trippeh PV Size 8192.00 EB 1178880510 M * daniel_hozac some? 1178880513 M * daniel_hozac geez. 1178880600 M * trippeh 8 192 exabytes = 8.79609302 × 10^12 gigabytes 1178880616 M * trippeh 8 589 934 592 terabytes 1178880618 M * trippeh :-o 1178880626 M * daniel_hozac how many drives is that? 1178880634 M * trippeh 4x500GB :P 1178880653 M * trippeh (lvm userspace is beeing a bit confused) 1178880676 M * daniel_hozac ah :P 1178880695 M * trippeh Just added another raid5 array to my home fileserver 1178880708 M * daniel_hozac hehe 1178880716 M * trippeh Benched it to 750MB/s now, with all 12 drives saturated concurrently 1178880731 M * daniel_hozac damn 1178880748 M * trippeh Pretty good with just plain SATA controllers on a intel core2duo desktop motherboard ;) 1178880753 M * Loki|muh wtf? _HOME_ fileserver? ;) 1178880773 M * Wonka you are mad 1178880779 M * meandtheshel1 s/home/campus home/ 8-] 1178880807 M * trippeh High Definition puts quite a demand on home storage! 1178880808 M * trippeh ;) 1178880817 M * meandtheshel1 rofl - nutter! 1178880849 M * trippeh But the I/O capabilities on modern consumer chipsets are getting preeetty impressive 1178880858 M * daniel_hozac indeed. 1178880884 M * trippeh Using software raid5 it gets about 350-400MB/s with about 20% cpu usage, spread on 11 drives 1178880912 M * trippeh I need 10GigE ;-) 1178880933 M * meandtheshel1 so HD needs 400MB/s - wow - that would be a 2 second entertainment here 1178880938 M * meandtheshel1 :) 1178880957 M * trippeh Well.. ;) 1178880984 Q * weeble Ping timeout: 480 seconds 1178880995 M * trippeh With that kind of transfer rates on cheap hardware, who needs SAN's ;) 1178881002 M * trippeh *cough* 1178881182 A * trippeh wonders what kind of data rate uncompressed 1080p HD would use 1178881893 J * weeble ~weeble@81.52.144.1 1178881935 M * sid3windr hmpf 1178881952 M * sid3windr my triple-raid5-lvm with 13 disks only does 180M on software raid on dual xeon 2.4 o_O 1178881960 M * sid3windr with 2 pci-x 8-port sata controllers ;/ 1178882089 M * daniel_hozac haha. 1178882914 Q * SoftIce Ping timeout: 480 seconds 1178884543 M * trippeh sid3windr: Hehe. This is not measured on the lvm, but on the invidual md sets in paralell 1178884559 M * sid3windr ah 1178884591 M * sid3windr Timing buffered disk reads: 160 MB in 3.03 seconds = 52.87 MB/sec 1178884596 M * sid3windr 5x 320G RAID5 1178884621 M * sid3windr Timing buffered disk reads: 176 MB in 3.02 seconds = 58.37 MB/sec 1178884624 M * sid3windr 4x 200G RAID5 1178884634 M * sid3windr Timing buffered disk reads: 178 MB in 3.03 seconds = 58.76 MB/sec 1178884636 M * sid3windr 4x 250G RAID5 1178884640 M * sid3windr not that great eh? :o 1178884689 M * trippeh Could be better, but then again hdparm isn't very good at measuring sequential reads nowadays 1178884752 M * trippeh for i in /dev/md?; do dd if=$i of=/dev/zero iflag=direct bs=65536 & done 1178884761 M * trippeh then monitor with iostat -m 1178884817 M * trippeh Modern dd's will print the average at the end also 1178885297 M * trippeh It's not like its a very real test or anything, but its always fun to see how much I/O you can pull off ;) 1178885337 Q * derjohn Remote host closed the connection 1178885709 J * derjohn ~derjohn@80.69.41.3 1178886757 M * sid3windr of dev zero? 1178886760 M * sid3windr or of dev null? 
:p 1178887030 M * trippeh of = output to oblivion ;) 1178887037 M * sid3windr yes, so that shouldn't be /dev/zero 1178887043 M * trippeh Ah, typo 1178887043 M * sid3windr you'll be overwriting zeroes with your hd data :p 1178887055 M * sid3windr and then when you cat it, you'll get a filesystem! 1178887055 M * sid3windr =) 1178887078 M * harry why the hell would you write to an output device??? 1178887103 M * harry you want to test to a really fast backup disk! /dev/null 1178887225 M * sid3windr easy on the question marks there buddy ;) 1178887247 M * harry oink? ): 1178887247 M * harry ;) 1178887470 M * sid3windr this little piggy went to the market.. :p 1178887933 M * harry http://i2.photobucket.com/albums/y23/drsanity/pearlsbeforeswine.gif 1178888604 M * DavidS harry: make it stop .. this hurts ... *plunk* *head in sand* 1178888658 Q * DavidS Quit: Leaving. 1178888670 M * harry http://thedefeatists.typepad.com/apoplectic/images/blogosphere.gif 1178888937 M * sid3windr heh 1178888942 M * sid3windr I don't have many comments on my blog 1178888950 M * sid3windr most people reading it are with me on irc and comment there :p 1178889428 M * harry i don't blog 1178889434 M * harry it's useles anyways... 1178889453 M * harry the most important thing is: i rule and i know it ;0 1178889453 M * harry ;) 1178890970 J * ema ~ema@rtfm.galliera.it 1178891113 J * FireEgl FireEgl@4.0.0.0.1.0.0.0.c.d.4.8.0.c.5.0.1.0.0.2.ip6.arpa 1178891137 Q * ciphergoth Quit: Client exiting 1178892828 N * click_ click 1178892834 M * mjt trippeh: it's a hardware raid(s), isn't it? 1178892864 M * sid3windr /dev/md* usually isn't hardware raid 1178892886 M * mjt er. double sigh :) 1178892910 M * mjt 1) s/trippeh/sid3windr/; and 2) i didn't notice /dev/md* 1178892998 N * Bertl_zZ Bertl 1178893002 M * Bertl morning folks! 1178893020 M * mjt here, 14-disk raid10 array gives a bit more than 600Mb/sec transfer rate at max. Looks like it's maxing out SCSI bus speed (two busses) 1178893048 M * mjt Hi Bertl ! 1178893136 M * harry Timing buffered disk reads: 664 MB in 3.00 seconds = 221.01 MB/sec 1178893144 M * harry that's on my raid5 over 5 disks 1178893160 Q * cdrx Quit: Leaving 1178893202 M * harry (they're all 146GB 15krpm disks) 1178893283 M * bon what gives such output? 1178893286 M * bon let me check mine 1178893289 M * mjt hdparm -t 1178893303 M * Bertl but it is not very relevant 1178893314 M * Bertl you want to run bonnie++ or so 1178893341 M * bon Timing buffered disk reads: 78 MB in 3.04 seconds = 25.67 MB/sec 1178893344 M * bon not very good :) 1178893350 M * bon that's raid5 over 4 disks 1178893376 M * Bertl hardware or software? 1178893386 M * bon software 1178893394 M * mjt /dev/md11: 1178893394 M * mjt Timing buffered disk reads: 1831 MB in 3.00 seconds = 610.33 MB/sec 1178893427 M * mjt 14-disk raid10 over 2 scsi busses 1178893781 M * trippeh hdparm seems to give about 130-200MB/s on each of my 3-4 disk software raid5's (md, 32k chunk) 1178893811 Q * meandtheshel1 Quit: Leaving. 1178893858 M * trippeh Plain boring 7200rpm SATA drives :) 1178893875 M * sid3windr :| 1178893882 M * sid3windr I wonder what I'm missing then 1178893900 M * sid3windr I have 64k and 128k chunks 1178893903 M * sid3windr also 7200rpm sata's 1178893909 M * trippeh Chipset with plenty of I/O bandwidth? :) 1178893918 M * mjt maybe different controller(s)? 
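Putting trippeh's parallel-read test together with the /dev/null correction that follows it, a minimal sketch (iflag=direct and bs=65536 are as quoted above; iostat comes from the sysstat package; without a count= the reads run over the whole devices):

    # read every md array sequentially in parallel, discarding the data
    for i in /dev/md?; do
        dd if="$i" of=/dev/null iflag=direct bs=65536 &
    done

    # monitor per-device throughput (e.g. from another terminal); modern dd
    # versions also print an average transfer rate when each job finishes
    iostat -m 1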
1178893934 M * sid3windr hehe 1178893944 M * sid3windr I have a dual xeon serverboard with a few different pci busses 1178893953 M * sid3windr both controllers are in pci-x 64bit slots each on a separate bus :/ 1178893966 M * sid3windr the controller chip is tagged EXPERIMENTAL though ;> 1178893970 M * sid3windr (in kernel config) 1178893972 M * trippeh sata_mv? :P 1178893974 M * sid3windr ack 1178894013 M * trippeh My controllers are either PCI-X or inside the south bridge (AHCI) 1178894015 M * mjt for example, on one of machines around me, there's an ICH7 - that one wont give more than 80Mb/sec, regardless on the number of drives in an array. 1178894040 M * sid3windr sucks 1178894044 M * mjt 0000:00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) Serial ATA Storage Controller IDE (rev 01) 1178894045 M * trippeh Err, PCI-E or inside the south bridge 1178894060 M * mjt inside the bridge 1178894080 M * trippeh 00:1f.2 SATA controller: Intel Corporation 82801HB (ICH8) SATA AHCI Controller (rev 02) 1178894083 M * mjt and it doesn't support NCQ, too... 1178894089 M * sid3windr :) 1178894094 M * sid3windr I have ICH7 in my firewalls 1178894094 M * trippeh 6 ports all saturating all the drives on that one 1178894096 M * sid3windr they don't need a lot of IO ;) 1178894109 M * sid3windr "real" servers have areca controllers 1178894118 M * sid3windr but at home I can't afford that ;) 1178894122 J * meandtheshel1 ~markus@85-125-158-186.dynamic.xdsl-line.inode.at 1178894138 M * trippeh ICH6/7/8 are good, fast controllers - as long they have R in the end of their name, eg is AHCI enabled 1178894178 M * trippeh (and has that useless bios raid thingy... hnngh.) 1178894181 M * sid3windr RICH? :p 1178894197 M * sid3windr ah in the end:p 1178894210 M * sid3windr 00:1f.2 RAID bus controller: Intel Corporation 82801GR/GH (ICH7 Family) Serial ATA Storage Controller RAID (rev 01) 1178894213 M * sid3windr this one's in my firewall 1178894216 M * sid3windr s 1178894240 M * trippeh sid3windr: Hmm, with RAID, probably means you can enable "native" mode in the BIOS, thus getting AHCI (and good performance) 1178894255 M * trippeh NCQ and all the other goodies 1178894355 M * trippeh I wish someone would make a pci-e card with an AHCI controller, and lots of ports 1178894358 M * mjt i tought ICH7 is pre-AHCI 1178894369 M * trippeh sata_mv sucks 1178894391 M * sid3windr it does? 1178894398 M * sid3windr performance wise or error handling wise or otherwise? :) 1178894409 M * trippeh THe chips are okay, but the driver is .. 
immature and lacks pretty much everything 1178894414 M * sid3windr myea 1178894429 M * sid3windr works well for me so far *knock on wood* 1178894456 M * trippeh Yah, I have it in production :P 1178894464 M * trippeh On several servers 1178894482 M * sid3windr so fix the driver, then I can use it with more confidence at home 1178894482 M * sid3windr :p 1178894520 M * trippeh But a drive error easily sends the driver into a irq storm (better have some spare cpu cores, or else you'll be in shit) 1178894550 A * trippeh have had 500000 intr/s on a single core more than once 1178894586 M * trippeh libata-dev has a version with wastly improved error handling 1178894597 A * trippeh wonders what the upstream status of that is now 1178894618 M * sid3windr uhu 1178894622 M * sid3windr I heard about the drive error stuff 1178894720 J * Piet hiddenserv@tor.noreply.org 1178894750 Q * Piet Remote host closed the connection 1178894758 M * trippeh There was also some NCQ work beeing done 1178894774 M * trippeh Nothing releasable though 1178894782 M * trippeh (for sata_mv) 1178894808 J * Piet hiddenserv@tor.noreply.org 1178895496 Q * eyck Ping timeout: 480 seconds 1178895730 J * DavidS david@chello062178045213.16.11.tuwien.teleweb.at 1178896859 J * eyck ~eyck@nat.nowanet.pl 1178897975 J * stefani ~stefani@flute.radonc.washington.edu 1178898118 J * HZPfT[jId ~hollow@85.10.237.60 1178898133 Q * Hollow Read error: Connection reset by peer 1178898155 M * Bertl wb DavidS! eyck! 1178898165 M * Bertl welcome stefani! 1178898176 M * DavidS good morning Bertl, everyone! 1178898178 N * HZPfT[jId Hollow 1178898205 M * DavidS Bertl: bad news .. i had those dst_release() BUGs today with the stable Debian kernel :-( 1178898281 Q * ||Cobra|| Read error: No route to host 1178898366 M * stefani wb 1178898465 M * mstrobert Good morning gentlemen. 1178898486 J * ||Cobra|| ~cob@pc-csa01.science.uva.nl 1178901360 J * phreak`` ~phreak``@deimos.barfoo.org 1178901400 J * marcfiu ~mef@aegis.CS.Princeton.EDU 1178903299 Q * FloodServ Service unloaded 1178903319 J * FloodServ services@services.oftc.net 1178904022 Q * zLinux Remote host closed the connection 1178905066 M * mstrobert After much time spent getting 32-bit Fedora installed on my 64-bit Debian server, I wrote a little set of instructions so others don't have to spend the same amount of time trying to figure it out: http://linux-vserver.org/Installing_32-bit_Fedora_on_64-bit_Debian 1178905246 M * daniel_hozac mstrobert: you really shouldn't edit files in /usr/lib/util-vserver. 1178905255 M * mstrobert daniel_hozac: Probably so. 1178905275 M * mstrobert daniel_hozac: what's the right way to do that? 1178905302 M * daniel_hozac cp -a /usr/lib/util-vserver/distributions/fc6/ /etc/vservers/.distributions/fc6 1178905306 M * daniel_hozac edit the /etc copy. 1178905310 M * mstrobert ahh 1178905318 M * daniel_hozac though i have to admit, that stuff's really ugly. 1178905325 M * daniel_hozac you shouldn't have to edit anything. 1178905344 M * daniel_hozac we just have to find a way to convince yum it should install as if it was a 32-bit system. 1178905353 M * mstrobert Yeah. 1178905358 M * daniel_hozac have you tried setarch i386 vserver ...? 1178905431 M * mstrobert Hmm, I'm not familiar with setarch ? 1178905581 M * daniel_hozac it changes the personality, making e.g. uname return i686 or whatever instead. 1178905603 M * mstrobert Hmm, I see web sites about Fedora referencing it, but packages.debian.org doesn't make me think it's in Debian. 
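For the earlier point about not editing files under /usr/lib/util-vserver: a minimal sketch of the override daniel_hozac describes, using fc6 because that is the distribution being built (the mkdir is only a precaution in case the override directory does not exist yet):

    mkdir -p /etc/vservers/.distributions
    cp -a /usr/lib/util-vserver/distributions/fc6/ /etc/vservers/.distributions/fc6
    # make any changes to the copy under /etc/vservers/.distributions/fc6;
    # the files shipped under /usr/lib/util-vserver stay untouched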
1178905632 M * daniel_hozac that's possible, i suppose. 1178905655 M * daniel_hozac i guess you'll have to build it yourself. 1178905659 M * mstrobert ok 1178906511 Q * virtuoso Remote host closed the connection 1178906575 Q * ema Quit: leaving 1178906591 M * Bertl what about, export ARCH=i386; linux32 ... ? 1178906592 J * virtuoso ~s0t0na@80.253.205.251 1178906799 M * daniel_hozac i guess linux32 does the same thing, yeah. 1178908183 N * nebuchad` nebuchadnezzar 1178909301 Q * mjt Remote host closed the connection 1178909651 Q * DavidS Quit: Leaving. 1178909691 M * Bertl okay, nap attack .. back later ... 1178909695 N * Bertl Bertl_zZ 1178909770 Q * nebuchadnezzar Remote host closed the connection 1178909879 J * nebuchadnezzar ~nebu@zion.asgardr.info 1178911777 J * SoftIce ~bongo@vc-196-207-45-253.3g.vodacom.co.za 1178911871 M * SoftIce re' bah 1178911878 M * SoftIce http://paste.linux-vserver.org/1740 1178911878 Q * shedi Quit: Leaving 1178911910 M * SoftIce I see there are 2 kernels for vserver, k7 and 686 as you can see in my paste 1178911921 M * SoftIce based on my kernel type, would I be safe to say to use the k7 1178911937 M * SoftIce my information on amd kernels are very limited 1178911956 M * SoftIce I just don't want to chose the wrong kernel and it doesn't boot :D 1178911990 M * ntrs_ I am seriously thinking about starting to use unification with COW on our production machines. 1178912011 M * daniel_hozac ntrs_: what's keeping you? 1178912019 M * ntrs_ Is it production ready? Any little problems or quirks I should know about before I start tinkering with it? 1178912027 M * SoftIce wow, daniel_hozac you suprise me the amount of time you online :) 1178912031 M * daniel_hozac SoftIce: i'd go for an amd64 kernel, but that's just me. 1178912046 J * shedi ~siggi@ftth-237-144.hive.is 1178912065 M * daniel_hozac ntrs_: they've all been fixed, AFAIK. 1178912067 M * SoftIce daniel_hozac: so you say use that k7 kernel I pasted? 1178912074 M * SoftIce and not the 686? 1178912082 M * daniel_hozac no, i say go for the amd64 one. 1178912097 M * ntrs_ daniel_hozac, I heard that there were problems updating guests with yum or apt-get, is that still true? 1178912115 M * daniel_hozac yum has always worked with unification, even without COW. 1178912127 M * daniel_hozac dpkg had some problems, but that's fixed now. 1178912130 M * ntrs_ I mean how will a guest's behavior be different than a non-unified guest? 1178912141 M * daniel_hozac not at all. 1178912152 M * daniel_hozac it's just hardlinks. 1178912162 M * ntrs_ what should we be doing differently in our daily work? creating deleting guests for example 1178912168 M * SoftIce sp4679a:~# apt-cache search linux-image | grep vserver | grep 64 1178912169 M * SoftIce sp4679a:~# 1178912182 M * SoftIce daniel_hozac: no amd64 kernel with debian 1178912185 M * daniel_hozac there is 1178912192 M * daniel_hozac you just installed a 32-bit OS. 1178912222 M * daniel_hozac ntrs_: well, you'd have to add a vserver ... hashify step to the end of your install procedure. 1178912255 M * daniel_hozac ntrs_: and you'd want to run find /vservers/.hash -links 1 -type f | xargs rm -f regularly. 1178912265 M * SoftIce daniel_hozac: ahh, well then second option the k7 kernel ? 1178912278 M * daniel_hozac either one should work. 1178912297 M * SoftIce I see, daniel_hozac: by the way, this is the box I migrated from ubuntu to debian :) 1178912306 M * SoftIce wasn't the easiest job! 
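A minimal sketch of the personality trick agreed on above, assuming the yum build method and purely illustrative guest name, context and address (mstrobert's wiki page linked earlier documents the full procedure):

    # linux32 makes uname report i686, so yum installs the 32-bit Fedora Core 6
    # tree even though the host kernel is 64-bit
    linux32 vserver fc6-guest build -m yum \
        --context 42 --hostname fc6-guest \
        --interface eth0:192.168.1.42/24 -- -d fc6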
1178912366 M * ntrs_ daniel_hozac, is the hashify or the task that needs to be run regularly expensive, in terms of CPU or IO usage? 1178912387 M * daniel_hozac well, it depends. 1178912406 M * daniel_hozac hashify after build really shouldn't be expensive, most of it should be in the cache still. 1178912423 M * daniel_hozac (note that you might avoid that entirely by using build -m clone) 1178912434 M * ntrs_ daniel_hozac, how can I convert a host full of currently running guests to hashified servers? Can I do it while they are running? 1178912460 M * daniel_hozac the find might be a bit IO expensive, as it has to traverse the entire tree. 1178912472 M * daniel_hozac sure, just run vserver ... hashify 1178912481 M * ntrs_ wait a minute... 1178912506 M * daniel_hozac you'll have to restart the services inside to see the effect though. 1178912539 M * ntrs_ vserver ... hashify will do what exactly? I was under the impression that there is supposed to be something like a main (template) image to which the guests will be hashified, is that right? 1178912552 M * daniel_hozac no. 1178912561 M * ntrs_ I have different distro guests on the same host 1178912571 M * daniel_hozac hashify links identical files. 1178912591 M * ntrs_ hmmm, so no need of a template of each distro on the host? 1178912601 M * daniel_hozac no, but it would be beneficial for the build. 1178912608 M * ntrs_ I see 1178912627 M * daniel_hozac i.e. that would let you skip the vserver ... hashify step of the build procedure. 1178912654 M * ntrs_ But, isn't one of the guests supposed to be a main guest? I mean the hardlinks have to point to a real file somewhere. 1178912668 M * daniel_hozac "real file"? 1178912671 M * daniel_hozac hardlinks are all identical. 1178912679 M * ntrs_ an actual file that is not a hardlink. 1178912680 M * daniel_hozac there's no "master" file or anything like that. 1178912692 M * ntrs_ Isn't a hardlink a link to a file? 1178912698 M * daniel_hozac that's a symlink. 1178912717 M * daniel_hozac a hardlink is a reference to the same inode that another file is pointing at. 1178912733 M * daniel_hozac making them the same file, just different names. 1178912735 M * ntrs_ Ok, but the inode has the actual data of the file 1178912766 M * ntrs_ now, what happens when two guests have the same hardlinks and one decides to delete it 1178912782 M * daniel_hozac the link count is decreased. 1178912784 M * ntrs_ Oh, here's a better case 1178912795 M * ntrs_ On the initial hashify 1178912802 M * ntrs_ say there are 30 guests 1178912872 M * ntrs_ all have the same identical file 1178912896 M * ntrs_ which inode will remain and which will be removed? 1178912899 M * ntrs_ How is that decided? 1178912910 M * daniel_hozac the first one to get hashified. 1178912922 M * ntrs_ so they may belong to different guests. 1178912933 M * daniel_hozac sure. 1178912937 M * ntrs_ the first one will remain, all others will be deleted? 1178912941 M * daniel_hozac yep. 1178912948 M * daniel_hozac (once you restart the services) 1178912954 M * ntrs_ So there should be incredible disk space savings in that case. 1178912963 M * daniel_hozac that's the idea. 1178912971 M * ntrs_ very interesting. 1178913000 M * ntrs_ how is hashify doing the comparisons? just the hash value of every file? 1178913046 M * daniel_hozac the hash and ownership/size/permissions. 1178913076 M * ntrs_ is the "..." in the vserver ... hashify something like a special option meaning "all guests"? 1178913088 M * daniel_hozac no, it's to be replaced by the guest name. 
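The hardlink behaviour daniel_hozac describes can be seen directly with a couple of shell commands; a tiny demonstration (file names are arbitrary):

    echo data > a
    ln a b                      # "a" and "b" are now two names for the same inode
    stat -c '%i %h %n' a b      # same inode number, link count 2 on both
    rm a                        # deleting one name just drops the link count
    stat -c '%i %h %n' b        # data still there, link count back to 1

This is why vhashify needs no "master" file: every guest's name for a shared inode is equally the file.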
1178913093 M * SoftIce grrr, somebody here who knows grub well enough, right, if you have defaul 0 , I understand that boots the first image, but you also have a safe image, eg: single user mode, same kernel second option, how do you boot an entire different image, would that be default 1 or default 2 1178913097 M * ntrs_ ok, so I'd have to do it one by ine. 1178913123 M * daniel_hozac SoftIce: default 2, if it's after the single user. 1178913129 M * daniel_hozac ntrs_: well, you could use vsomething. 1178913143 M * ntrs_ vsomething? 1178913174 M * SoftIce daniel_hozac: http://paste.linux-vserver.org/1742 1178913180 M * SoftIce so default as you say would be 2 ? 1178913190 M * daniel_hozac vsomething vserver --all -- hashify, e.g. 1178913224 M * ntrs_ I never heard of vsomething 1178913232 M * ntrs_ Is that a new command in the tools? 1178913235 M * daniel_hozac no. 1178913256 M * daniel_hozac vyum's using it. 1178913280 M * daniel_hozac and nowadays, so are vrpm, vapt-get, vemerge*, etc. 1178913302 M * ntrs_ is this all going to work with 0.30.212 or do I need a newer version? 1178913357 M * daniel_hozac build -m clone is new in 0.30.213. 1178913369 M * daniel_hozac but the rest should work fine with 0.30.212. 1178913412 M * daniel_hozac SoftIce: yes. 1178913524 M * ntrs_ this should work across distros too, I mean in case some distros use the same files, right? 1178913536 M * daniel_hozac yeah. 1178913730 M * ntrs_ daniel_hozac, is there a step by step guide how to hashify a host with multiple running guests? 1178913739 M * ntrs_ I don't want to mess up something. 1178913789 M * daniel_hozac it should just be a matter of setting up hashification (http://linux-vserver.org/Frequently_Asked_Questions#How_do_I_manage_a_multi-guest_setup_with_vhashify.3F) 1178913803 M * daniel_hozac and then running vserver ... hashify 1178913863 M * ntrs_ ok 1178913895 M * ntrs_ what does the vunify do? 1178913925 M * daniel_hozac as it says, vhashify is reusing vunify's configuration. 1178913945 M * daniel_hozac vunify is the predecessor, where you needed the reference guest. 1178913970 M * ntrs_ ok, and there is no longer a need of vunify? 1178913990 M * daniel_hozac well, i consider vhashify to be vastly superior 1178914216 M * SoftIce This program is part of util-vserver 0.30.212 1178914223 M * SoftIce is this an acceptable version? 1178914244 M * daniel_hozac if you don't need anything present only in 0.30.213, i don't see why not. 1178914275 M * SoftIce great 1178914331 M * ntrs_ daniel_hozac, what I was trying to ask is if vunify is needed or not for proper funcioning of vhashify? 1178914350 M * daniel_hozac the binary? no. 1178914449 M * ntrs_ what about this? 1178914451 M * ntrs_ There seems to be a catch when a hashified file has multiple hardlinks inside a guest, or when another internal hardlink is added after hashification. Link breaking will remove all the internal hardlinks too, so the guest will end up with different copies of the original file. The correct solution would be to not hashify files that have multiple links prior to hashification, and to break the link to the hashified version when a new int 1178914451 M * ntrs_ ernal hardlink is created. Apparently, this is not implemented yet (?). 1178914497 M * daniel_hozac still true. 1178914543 M * ntrs_ how is this a problem in real life? 
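For SoftIce's grub question, a sketch of a legacy menu.lst with three entries; "default" is a zero-based index over the "title" stanzas, so booting a third, different image means default 2, as answered above (kernel names, devices and the single-user entry are illustrative and simplified):

    default 2      # 0 = first title, 1 = second, 2 = third
    timeout 5

    title vserver kernel                       # entry 0
    kernel /vmlinuz-2.6.18-4-vserver-k7 root=/dev/sda1 ro

    title vserver kernel (single-user)         # entry 1
    kernel /vmlinuz-2.6.18-4-vserver-k7 root=/dev/sda1 ro single

    title fallback kernel                      # entry 2
    kernel /vmlinuz-2.6.18-4-k7 root=/dev/sda1 ro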
1178914550 J * Loki|muh_ loki@satanix.de 1178914550 Q * Loki|muh Read error: Connection reset by peer 1178914562 N * Loki|muh_ Loki|muh 1178914572 M * daniel_hozac i have never had a problem with it, so i can't say. 1178914584 M * ntrs_ ok, got it 1178914754 Q * toidinamai__ Ping timeout: 480 seconds 1178914893 J * bonbons ~bonbons@ppp-111-219.adsl.restena.lu 1178914918 Q * SoftIce 1178914930 P * stefani I'm Parting (the water) 1178915824 M * ntrs_ daniel_hozac, does the filesystem have to be prepared in any way in advance for the COW? 1178915847 M * daniel_hozac no. 1178915869 M * ntrs_ also, can any external file operations from the host on guest's files cause problems? 1178915879 M * ntrs_ Like deleting files inside the guest from the host? 1178915899 M * daniel_hozac no, everything still works as normal. 1178915908 M * ntrs_ ok. that sound great. 1178915910 J * yarihm ~yarihm@84-75-103-239.dclient.hispeed.ch 1178917292 Q * dna Quit: Verlassend 1178917954 Q * bonbons Quit: Leaving 1178918894 M * mstrobert daniel_hozac: I cleaned up http://linux-vserver.org/Installing_32-bit_Fedora_on_64-bit_Debian so it doesn't do awful things like edit things in /usr 1178918908 M * mstrobert daniel_hozac: and linux32 worked great! 1178918937 M * daniel_hozac good to know. 1178918949 M * daniel_hozac is the proxy thing needed? isn't it sufficient just to export http_proxy? 1178920154 M * mstrobert daniel_hozac: setting the proxy in yum.conf isn't necessarily needed for building. But the environment variable would then be required every time vyum is invoked on the system. So putting it in yum.conf makes that simpler. 1178920183 M * mstrobert (sorry for delay; took me a bit to verify that :) 1178920409 M * daniel_hozac but don't you usually use the proxy for the host as well? 1178920417 M * daniel_hozac i.e. it should already be in the environment? 1178920927 M * mstrobert daniel_hozac: that would make sense. And there may be something regarding this that I'm not familiar with, but I'm not aware of a correct system-wide way that one sets their proxy so that just every process can find it in the environment. It seems something that needs to get set by users in their programs (firefox, gnome), or by admins for system programs in the programs' config files (eg I have my proxy set in /etc/apt/apt.conf so that apt knows 1178920927 M * mstrobert how to get through it.) 1178920931 M * mstrobert daniel_hozac: On my workstation, http_proxy is set in my terminals, perhaps by gnome. But on the server it's not. And I wouldn't just want it in a .profile or something, because then crond wouldn't know about the proxy. 1178921110 M * daniel_hozac /etc/profile? 1178921164 M * mstrobert But if process execution doesn't run through sh, then it wouldn't hit that,right? 1178921189 M * daniel_hozac nope. 1178921223 M * mstrobert Maybe it does always go through sh at some point, though, before ever getting to whatever runs vyum 1178921285 M * mstrobert (oh, though perhaps not for stuff in /etc/crontab; that might get run via execve()) 1178921307 M * mstrobert (maybe) 1178921315 M * daniel_hozac i'm quite certain that's run through system(3). 1178921332 M * mstrobert well then that would wtork 1178921366 M * mstrobert hmm, if that works, that would make lots of silly proxy things work more smoothly in Linux :) 1178921378 M * mstrobert I shall have to try that 1178921431 M * ntrs_ daniel_hozac, are you here? 1178921435 M * daniel_hozac yeah. 1178921443 M * ntrs_ One more thing I just thought of. 
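The two ways of getting yum through the proxy that are weighed above, sketched side by side; the proxy URL is a placeholder and the yum.conf is the one inside the guest being built:

    # 1) per-tool: set it once in the guest's yum.conf, so every vyum/yum run picks it up
    #    ([main] and proxy= are standard yum.conf syntax)
    [main]
    proxy=http://proxy.example.com:3128

    # 2) per-environment: export http_proxy, e.g. from /etc/profile -- but note the
    #    caveat above that processes not started through a shell will not inherit it
    export http_proxy=http://proxy.example.com:3128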
1178921449 M * ntrs_ about the vhashify thing... 1178921464 M * ntrs_ what happens if I backup guests using rsync to another machine. 1178921479 M * ntrs_ Will the actual files be backed up or just the links? 1178921490 M * ntrs_ I don't want to break the backups 1178921513 M * daniel_hozac a hardlink is the file. 1178921548 M * daniel_hozac so backing them up individually would work fine. 1178921592 M * daniel_hozac if you get all of /vservers and use -H, you should get some savings for your backups too. 1178921608 M * ntrs_ rsync -aq -e ssh /vservers/someguest backupserver:/backup/someguest 1178921616 M * ntrs_ will the above work as expected? 1178921643 M * daniel_hozac yep. 1178921650 M * ntrs_ yes, but wouldn't -H copy the links and not the files? 1178921667 M * daniel_hozac or well, it won't change just because you move to hashification. 1178921689 M * daniel_hozac there's really no "link" to speak of when referring to hardlinks. 1178921698 M * ntrs_ I see, ok 1178921701 M * daniel_hozac you just have multiple names for the same file. 1178921722 M * ntrs_ when I restore a guest from the backup server I would have to do the rehashify thing again, right? 1178921732 M * daniel_hozac right, 1178921737 M * ntrs_ ok 1178921918 M * yarihm daniel_hozac: have you heard of any progress on the OpenSuSE-Front (in terms of vserver ... create ... -- -d opensuse I mean)? 1178921943 M * daniel_hozac no. 1178922426 Q * mstrobert Quit: Jesus is Lord! 1178922672 M * yarihm ok 1178923267 Q * fosco Read error: Connection reset by peer 1178923269 J * fosco fosco@konoha.devnullteam.org 1178924440 J * lilalinux_ ~plasma@dslb-084-058-196-030.pools.arcor-ip.net 1178924440 Q * lilalinux Read error: Connection reset by peer
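A sketch of the backup paths discussed above; host names and directories are placeholders, and the per-guest command is the one quoted in the exchange:

    # back up one guest: hardlinked files are transferred as ordinary files, so nothing breaks
    rsync -aq -e ssh /vservers/someguest backupserver:/backup/someguest

    # back up the whole tree with -H (not implied by -a) to keep the cross-guest
    # hard links, and their space savings, on the backup side as well
    rsync -aqH -e ssh /vservers/ backupserver:/backup/vservers/

    # after restoring a guest from backup, re-run hashification as noted above, e.g.:
    vserver someguest hashify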