1295223810 J * blackmamba69 ~blackmamb@75-149-57-177-SFBA.hfc.comcastbusiness.net
1295223840 M * blackmamba69 how do I kill guests using the context id because it has no name?
1295223917 M * daniel_hozac vkill --xid
1295224357 M * Bertl "where the guests have no name" ...
1295224371 M * Bertl do we know why that happens?
1295224703 M * daniel_hozac people removing the directories before killing the guest, e.g.
1295224718 M * daniel_hozac why people don't use vserver ... delete that does that for you is beyond me :-)
1295224840 M * Bertl okay, got the impression that there were several non PEBKAC events recently
1295225059 M * daniel_hozac i haven't gotten any reports, but i've been pretty busy elsewhere.
1295225134 M * Bertl blackmamba69: so, did you remove guest directories before stopping the guest?
1295226432 J * ichavero_ ~ichavero@148.229.9.250
1295226810 Q * blackmamba69 Quit: blackmamba69
1295227277 J * blackmamba69 ~blackmamb@75-149-57-177-SFBA.hfc.comcastbusiness.net
1295227290 M * blackmamba69 thanks it worked
1295227308 J * maod ~maod@173-203-86-60.static.cloud-ips.com
1295227331 M * blackmamba69 How do we have 2 guests share 1 root filesystem?
1295227412 M * maod I assume if you have 2 configs point to the same root, it will work, but one would need to make each guest not modify stuff in /var probably? I'm a newbie at this too
1295227959 M * maod I tried dupvserver, creating 2 guests from one root filesystem, and then booted both, but when booted, the 1st guest's name disappears. why is this?
1295227981 M * daniel_hozac don't use dupvserver.
1295227997 M * maod what's the equivalent then?
1295228014 M * maod or what is the proper way of creating 2 guests that branch from the same root filesystem?
1295228018 M * daniel_hozac vserver ... build -m rsync/clone
1295228060 M * maod but that would clone the whole filesystem right? if we have a root fs that is 500mb, and created 10 guests, we'd use up 5gb unnecessarily right?
1295228079 M * daniel_hozac not if you set your files to COW and use clone.
1295228081 M * maod do people use unionfs to solve something like this?
1295228180 M * maod is the COW parameter specified on vserver build? or is that when starting the guest?
1295228197 M * daniel_hozac that's set on the files.
1295228204 M * daniel_hozac before you run vserver ... build.
1295228444 M * maod ok, i see in the faq now, so after creating the necessary dirs, i just call vserver test1 hashify right?
1295228480 M * daniel_hozac that's one way to do it.
1295228490 M * maod is that not the recommended way?
1295228593 M * Bertl maod: usually people don't use unionfs, because it wastes resources and is less flexible than using unification
1295228620 M * maod yeah, i see that unification is basically the same as unionfs, the modified files are stored as a hash
1295228655 M * Bertl except for the fact that unionfs gives different inodes which lead to different mappings, caches, etc
1295228675 M * Bertl (and thus wastes a lot of precious memory for no gain at all)
1295228811 M * Bertl blackmamba69: I presume you got that dupvserver from that outright evil debian package, right?
1295228830 M * maod (i'm working with blackmamba)
1295228842 M * maod but yes, it was googled on the debian page
1295230032 M * maod so we just created a new guest using "vserver test1 build -m clone" but it looks like it actually duplicated the directory of the base clone system.
1295230052 M * maod i thought it wouldn't actually duplicate the files...
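
A minimal sketch of the fix daniel_hozac points at, assuming the stock util-vserver tools; the xid value is a placeholder (on the host, running contexts show up under /proc/virtual):

    # kill a guest that has lost its name, addressing it by context id (xid)
    vkill --xid 40001
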
1295230319 M * Bertl did you prepare unification and hashify the first guest
1295230321 M * Bertl ?
1295230728 Q * yarihm Quit: This computer has gone to sleep
1295230759 M * maod Bertl: yeah, it created the .hash files
1295230796 M * maod Bertl: shouldn't the "du -h" for the new clone show a significantly decreased directory size?
1295230808 M * Bertl no, those are hardlinks
1295230822 M * Bertl try with du -h /path/to/guest1 /path/to/guest2
1295230850 M * maod when I do "df" and subtract the block counts, it still went up significantly
1295230929 M * maod so my base file system is 230M, when I do "du -h base test1" it shows 110M. Does this mean each guest consumes 110M in hashes above the original filesystem?
1295230967 M * Bertl the hashes are not relevant, they are hardlinks too
1295230986 M * Bertl so no (relevant) disk space is consumed by them
1295231015 M * maod so how would I discover how much a guest is actually using compared to the base?
1295231032 M * maod I can see that the file inodes are the same
1295231085 M * Bertl something like 'du -shx /path/to/base /path/to/copy'
1295231112 M * Bertl should show the actually 'duplicated' amount, but that depends on your distro and unification config
1295231119 M * melbar If I understand it right, your base system is 230M and the 110M is in files not shared by the base
1295231125 M * melbar is that right?
1295231150 M * Bertl could be, but sounds wrong for a typical distro
1295231152 M * melbar du takes care of not counting the inodes twice, IIRC
1295231162 M * Bertl average not shared is about 10-30M
1295231183 M * Bertl (mostly config files and /var/* entries)
1295231185 M * melbar yeah, that's what I got here
1295231187 M * maod when I did the du -shx, it showed base at 230M and test1 at 110M
1295231208 M * melbar 110M sounds a bit too high
1295231219 M * melbar for a freshly 'cloned' vserver
1295231250 M * maod I don't think I had anything special for my unification config -- i just used defaults, whatever they happen to be. the distro is lenny (so lenny base, and lenny clone)
1295231296 M * Bertl could you upload the commands you used to create the guest and then clone it somewhere?
1295231314 M * Bertl (e.g. paste.linux-vserver.org)
1295231315 M * maod I ran this "vserver test1 build -m clone -- --source=/path/to/base"
1295231383 M * maod http://paste.linux-vserver.org/18674
1295231411 M * melbar Perhaps I should read the source code before asking this, but what does hashify do beyond what can be done with: cp -dlprx $base $clone ; find $clone -not -type d -print0 | xargs -0 setattr --iunlink
1295231412 M * melbar ?
1295231423 M * Bertl maod: not just the clone command, the build command for 'base' and the unification setup as well
1295231439 M * maod I basically created the base using the standard mechanism "newvserver" then I created the necessary hash directories, I booted the base server, then ran "vserver base hashify", then shutdown the base server.
1295231458 M * Bertl newvserver is definitely not a standard mechanism
1295231470 M * Bertl it is part of the same evil package you should get rid of ASAP
1295231489 M * Bertl (i.e. it creates broken guests with weird configs)
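
A sketch of the measurement Bertl describes: when both trees are passed to a single du invocation, hardlinked (unified) files are counted only once, so the second figure approximates the space the clone does not share with the base. Paths and the output numbers are illustrative only:

    du -shx /var/lib/vservers/base /var/lib/vservers/test1
    # 230M  /var/lib/vservers/base
    #  25M  /var/lib/vservers/test1   <- only the non-unified files
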
1295231519 M * melbar maod: I cloned my vserver using the commands I showed above
1295231545 M * melbar but that was me trying to understand how things work under the hood
1295231569 M * Bertl melbar: apples and oranges
1295231588 M * Bertl hashify will find identical files and 'join' them
1295231603 M * Bertl the commands you listed are more like a clone with no exceptions
1295231605 M * melbar even in different paths, I guess
1295231613 M * maod okay, here's the full command list: http://paste.linux-vserver.org/18675
1295231629 M * melbar that's the point of hashing, right?
1295231633 M * Bertl yep
1295231652 M * Bertl maod: okay, remove both guests, create a new one with 'vserver ... build ...'
1295231687 M * Bertl then prepare unification, hashify the guest and clone it with the command you used
1295231692 M * maod remove the base one too?
1295231703 M * maod so remove the original hashified base?
1295231707 M * Bertl especially the base, it is broken anyway
1295231730 M * Bertl and remove the debian-vserver-something package
1295231743 M * maod ok. retrying now :-)
1295231743 M * melbar I've been using vserver ... build ... -m debootstrap -- -d lenny ...
1295231747 M * Bertl (i.e. the one containing newvserver)
1295231751 M * melbar is that the recommended way?
1295231759 M * Bertl melbar: yep, that's how you do it
1295231759 M * maod so i'll empty out the .hash directory too then right?
1295231776 M * melbar anyway, it's working almost perfectly
1295231796 M * Bertl maod: yep, you can do that
1295231872 M * maod it's running the "vserver base build -m debootstrap..." command now
1295231917 M * maod the next step would be to create the vunify dir right? then do the vserver hashify command?
1295231927 M * melbar maod: I find having an apt-cacher-ng nearby quite handy to speed up new vserver ... builds
1295231937 M * Bertl maod: yep
1295231947 M * maod melbar: yeah, I do that on my xen server
1295232229 M * maod Bertl: for our base vserver, do we create a /etc/vservers/base/apps/vunify directory?
1295232303 M * Bertl yep
1295232459 M * maod ok, it looks a little better now. so when I do a "du -shx base test1", I see that base is 180M and test1 is 55M, does that look right?
1295232483 M * Bertl still looks a little high to me
1295232495 M * Bertl can you upload the commands for that?
1295232670 M * blackmamba69 http://paste.linux-vserver.org/18676
1295232680 M * maod (posted the pastie via blackmamba)
1295232698 M * maod it's basically copied directly from my bash history
1295232712 M * Bertl k, let me see how it does here ...
1295232721 M * maod with a few things omitted which were mostly me looking into various directories
1295232844 M * maod we have /etc/vservers/.defaults/apps/vunify/hash/root pointing to /var/lib/vservers/.hash (which is the dir that contains all the hashes)
1295232964 M * maod maybe we could even create an ssh tunnel and have you watch us type commands into screen to see what we're doing wrong hehe
1295232996 M * Bertl nah, no problem, pastebin is fine, currently building
1295233003 M * maod btw, thanks for all this help. this is mostly just a side project of mine :P
1295233051 M * Bertl np
1295233211 M * melbar maod: reproduced here, got the exact same 55M you said
1295233272 M * maod meaning, the commands are fine so far? a clone is still 55MB even though most of the stuff is identical?
1295233296 M * melbar seems ok. I have a theory about it, trying to convince myself of it
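
A sketch of the rebuilt workflow as it unfolds above, assuming a Debian lenny guest under /var/lib/vservers; the hostname, address and guest names are placeholders, and the guest is started before hashify as maod did:

    # build the base guest the recommended way (debootstrap, not newvserver)
    vserver base build -m debootstrap --hostname base \
        --interface eth0:192.168.1.10/24 -- -d lenny

    # mark the guest for unification, then hashify it
    mkdir -p /etc/vservers/base/apps/vunify
    vserver base start
    vserver base hashify
    vserver base stop

    # clone it; identical files stay shared as CoW hard links
    vserver test1 build -m clone -- --source=/var/lib/vservers/base
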
1295233452 M * Bertl http://paste.linux-vserver.org/18677
1295233469 M * Bertl this is how your commands (I had to adjust the test1 to testx) turned out here
1295233490 M * maod do you have the command history for yours? did you run the exact same commands as me too?
1295233496 M * Bertl sec
1295233507 M * maod or why do melbar and I have a size discrepancy of 55MB?
1295233590 M * Bertl http://paste.linux-vserver.org/18678
1295233625 M * Bertl no idea about the discrepancy, maybe debian specific?
1295233664 M * Bertl util-vserver 0.30.216-pre2921 installed from sources here
1295233693 M * melbar probably
1295233743 M * melbar when I do the cp -dlprx ... ; find ... | xargs, the base <-> clone difference is ~4M before I start the clone, goes to 6 after I start it
1295233780 M * melbar my guess, which I still can't prove, is that debian's util-vserver is excluding way too much stuff from being unified
1295233789 M * Bertl probably
1295233825 M * melbar I'm writing a perl script to compare inode numbers to get a listing of what, exactly, isn't being shared. it might elucidate something
1295233839 M * Bertl you can get that simpler
1295233852 M * Bertl just find for files with nlink = 1
1295233853 M * maod Bertl: are you on debian, or another distro? (for your root system)
1295233861 M * daniel_hozac find -links 1
1295233866 M * melbar of course
1295233875 M * Bertl maod: definitely not on debian here
1295233879 M * melbar thinko
1295233915 M * blackmamba69 Bertl: what distro are you using?
1295233937 M * Bertl somewhat customized mandriva version
1295233994 M * daniel_hozac if the hashify default doesn't work for you, and you're not really aiming to do hashification, why don't you just set it manually?
1295234022 M * melbar a lot of .deb packages in /var/cache/apt/archives aren't being shared
1295234030 M * melbar that's the culprit, I think
1295234051 M * daniel_hozac why would you even have packages there in the first place?
1295234061 M * Bertl was going to ask that
1295234072 M * Bertl but obviously that's what you get on debian?!
1295234175 M * melbar let me recheck something here
1295234225 M * maod Bertl: hm, when typing your exact commands starting from scratch, I get this: "Could not find a place for the hashified files at '/etc/vservers/.defaults/apps/vunify/hash'". Do you not have to create any symlinks that point to the .hash directory?
1295234274 M * maod it seems like we still need to do something like "ln -s /vservers/.hash /etc/vservers/.defaults/apps/vunify/hash/root"
1295234274 M * melbar ln -s /var/lib/vservers/.hash/ /etc/vservers/.defaults/apps/vunify/hash/0
1295234298 M * maod (well I'm just following Bertl's bash history line by line)
1295234307 M * Bertl maod: not using anything in /var/lib/vservers, /vservers is the default
1295234324 M * Bertl and the symlink probably already existed from a previous run
1295234348 M * maod yeah sorry, I meant to substitute your path in the error since I'm using /var/lib instead of / for my vservers
1295234348 M * Bertl so you probably need to add:
1295234351 M * Bertl ln -s /vservers/.hash /etc/vservers/.defaults/apps/vunify/hash/root
1295234441 J * ktwilight ~keliew@91.176.116.82
1295234463 Q * ktwilight_ Read error: Connection reset by peer
1295234512 M * maod yeah hm. it must be a debian thing then. I followed your commands line by line just now
1295234547 M * melbar I had used a non-fresh vserver as a starting point. Redid it all and the diff is now 21M, which seems about right to me
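
Two small sketches of what is worked out above: pointing the default unification hash store at a directory on the guest filesystem, and listing which files in a clone are not shared. Guests are assumed to live under /var/lib/vservers; the "root" link name follows Bertl's example (melbar used "0"):

    # tell util-vserver where the unification hash store lives
    mkdir -p /var/lib/vservers/.hash
    mkdir -p /etc/vservers/.defaults/apps/vunify/hash
    ln -s /var/lib/vservers/.hash /etc/vservers/.defaults/apps/vunify/hash/root

    # unified files have nlink > 1, so files with a single link are the
    # ones this clone does not share with any other guest
    find /var/lib/vservers/test1 -xdev -type f -links 1
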
1295234547 M * Bertl what is your util-vserver version and what are the default exclude paths?
1295234598 M * maod weird, so when I made another clone, the first clone actually increased in size
1295234635 M * Bertl hmm?
1295234655 M * Bertl that sounds more like du reporting weird things then :)
1295234674 M * maod yeah, "du -shx base test1" (shows 55M), then I did another build/clone and did "du -shx base test1" (shows 74M)
1295234694 M * maod "du -shx base test2" shows 55M (for test2)
1295234714 M * maod when I do "du -shx base test3" for a third clone, test2 goes up to 74M
1295234750 M * melbar what does 'find test1 -links 1' tell you?
1295234751 M * maod we're using vserver 0.30.216-pre2772
1295234787 M * melbar it might tell you clues on what the differences are about
1295234812 M * maod melbar: it shows me files under /usr, /var, and /etc
1295234825 M * melbar nothing big?
1295234853 M * maod /var/cache/apt/archives is in there (with a bunch of deb files)
1295234871 M * maod does -links 1 tell me the differences it has with base?
1295234881 M * Bertl it seems /var/cache is updated by debian somehow
1295234911 M * melbar maod: before hashifying/cloning the base, did you apt-get some stuff on it?
1295234925 M * maod nope
1295234938 M * maod it was whatever the debootstrap command installs as the base lenny package
1295234949 M * melbar that's weird. I did, and that's where the .debs and 55M came from in my first try.
1295234979 M * maod yeah, i'm guessing that's why ours comes in at 55M and bertl's comes in at 3M
1295235092 M * maod I wonder if we add that to our exclude config, then our clones won't have the archives directory
1295235160 M * Bertl check /usr/lib*/util-vserver/defaults/vunify-exclude
1295235193 M * Bertl add the apt subdir there
1295235224 M * Bertl (but not as exclude, as exception to the exclude)
1295235313 M * Bertl i.e. something like +/var/cache/apt
1295235399 M * melbar no question about it -- I just debootstrapped a brand new lenny and /var/apt/caches/archives comes out with 42MB of .debs
1295235424 M * melbar s{caches}{cache}
1295235467 M * melbar so if I understand it right, when hashify asks the package management what to exclude, all those debs come along
1295235513 M * maod yeah, i guess we have to rebuild the hash
1295235516 M * melbar Bertl's +/var/cache/apt will probably do the trick
1295235521 M * maod with hashify (if we modify the excludes)
1295235547 M * Bertl just run hashify on all guests involved
1295235626 M * Bertl you might want to exclude other directories in /var as debian seems to put a lot of stuff in /var :)
1295235650 Q * ichavero_ Quit: This computer has gone to sleep
1295235660 M * melbar anyway, IIRC, the whole exclusion thing is for people who don't have COWBL, right?
1295235680 M * melbar if we do have COWBL, why not unify everything?
1295235691 M * Bertl it doesn't make sense to unify stuff which will definitely be different between guests
1295235720 M * Bertl but in general, unification with CoW link breaking should be fine
1295235721 M * melbar but won't COWBL automatically break them in no time? :)
1295235737 M * Bertl not in no time as it has to copy the files first
1295235741 M * melbar I mean, when you start the clones and play around with them?
1295235756 M * Bertl yes, very likely
1295235759 M * melbar minor performance impact on first runs, right?
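
A sketch of the fix Bertl outlines, assuming the exclude list lives at /usr/lib/util-vserver/defaults/vunify-exclude (it may sit under /usr/lib64 on some installs) and that the guests involved are running:

    # a leading "+" marks an exception to the excludes, so the apt cache
    # becomes eligible for unification again
    echo '+/var/cache/apt' >> /usr/lib/util-vserver/defaults/vunify-exclude

    # re-run hashify on every guest involved to regain the shared space
    vserver base hashify
    vserver test1 hashify
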
1295235775 M * Bertl something like that
1295235777 M * melbar then over time it settles
1295235789 M * melbar I can live with that
1295235904 M * melbar so for people with COWBL, the main advantage in hashify, unless I'm missing something, is the path-independent unification thing
1295235914 M * Bertl correct
1295235929 M * melbar I'm doing an application that needs to clone vservers as fast as possible
1295235960 M * melbar cp -dplxr ; find ... | xargs setattr ... runs faster in my system than hashify
1295235969 M * melbar probably because it doesn't have to compute hashes
1295235987 M * Bertl why not use the clone build method? should be as fast
1295236000 Q * derjohn_mob Ping timeout: 480 seconds
1295236007 M * melbar will try, thanks
1295236350 M * daniel_hozac the main advantage is that you can rerun it.
1295236366 M * daniel_hozac so when you update them all, you can rerun it to regain your savings.
1295236492 M * melbar right, but before that you use a lot more space
1295236512 M * daniel_hozac huh?
1295236512 M * melbar until you reunify
1295236525 M * melbar but sure it's an advantage
1295236574 M * melbar Sorry for being unclear. I meant, when you update them all, you use a lot more space, then, as you said, you regain it by reunifying.
1295236582 M * Bertl correct
1295236591 Q * hparker Remote host closed the connection
1295236599 M * Bertl but you could update one by one and re-unify after each update
1295236622 M * melbar sure. keeping the size increase as small as possible.
1295236629 M * Bertl but unification isn't so much about saving disk space nowadays
1295236649 M * Bertl it's basically saving precious memory by keeping the file mappings and caches down
1295236652 M * melbar I guess it saves a lot of memory
1295236654 M * melbar right
1295236662 M * melbar just what I was about to say
1295236671 M * Bertl otherwise 100+ guests are hard to handle :)
1295236712 M * melbar I think this is all pretty cool -- very lightweight. I got my friends impressed with dozens of vservers running just fine
1295236720 M * melbar try this with vmware :)
1295236725 M * Bertl lol
1295236783 M * melbar in my current app, I don't need to care much about reunifying because the clones are rather short-lived
1295236818 M * melbar and always recloned. but the memory and disk savings were the key element that made it viable.
1295236848 M * melbar at least on the unimpressive hardware I had at hand
1295236923 M * maod Bertl: I messed around and finally reduced it to 3.5M by removing debian packages, man pages, and some log files it generated
1295236941 M * Bertl yes, that sounds about right
1295236942 M * melbar that sounds about right, maod. got the same here.
1295236975 M * melbar bertl, you were right, clone method just as fast as my ugly cp ... ; find | xargs stuff
1295237191 M * melbar bertl, hozac, maod: thanks for the clarifications/help/discussions. going to bed, gnite you all.
1295237200 N * melbar melbar_zZ
1295237204 M * Bertl have a good one ...
1295239183 J * hparker ~hparker@2001:470:1f0f:32c:beae:c5ff:fe01:b647
1295241856 M * arekm blahm
1295242398 J * derjohn_mob aj@tmo-068-202.customers.d1-online.com
1295243987 Q * blackmamba69 Quit: blackmamba69
1295246317 Q * hparker Quit: Quit
1295246635 M * Bertl off to bed now ... have a good one everyone!
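
A sketch of the update strategy Bertl and daniel_hozac describe: update guests one at a time and re-hashify after each, so the temporary growth from CoW link breaking stays small. Guest names and the apt commands are examples, and the guests are assumed to be running:

    for g in base test1 test2; do
        vserver "$g" exec apt-get update           # update inside the guest
        vserver "$g" exec apt-get -y dist-upgrade  # breaks CoW links on changed files
        vserver "$g" hashify                       # re-unify to regain the savings
    done
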
1295246640 N * Bertl Bertl_zZ
1295247870 J * yarihm ~yarihm@80-219-219-44.dclient.hispeed.ch
1295247952 J * ichavero_ ~ichavero@148.229.9.250
1295248254 J * melbar ~kiko@187.59.242.167
1295248356 Q * melbar_zZ Read error: Connection reset by peer
1295249813 Q * ichavero_ Quit: Leaving
1295250752 J * ghislain ~AQUEOS@adsl2.aqueos.com
1295251271 Q * nkukard Ping timeout: 480 seconds
1295251478 Q * derjohn_mob Ping timeout: 480 seconds
1295251957 J * nkukard ~nkukard@41-133-165-5.dsl.mweb.co.za
1295255496 J * ghislain1 ~AQUEOS@adsl2.aqueos.com
1295255764 Q * ghislain Ping timeout: 480 seconds
1295256529 J * swenTjuln ~kvirc@77.111.2.36
1295258339 M * arekm grsec for .37, yee
1295259159 J * thierryp ~thierry@zankai.inria.fr
1295260020 J * BenG ~bengreen@cpc2-aztw22-2-0-cust83.aztw.cable.virginmedia.com
1295260214 Q * BenG
1295261586 J * petzsch ~markus@dslb-092-078-144-230.pools.arcor-ip.net
1295261710 J * robs ~robs@78.6.1.121
1295261758 M * robs hi
1295261876 M * robs anyone solved the problem outlined here? http://www.freak-search.com/de/thread/2643371/vserver_problems_installing_ubuntu_10.10_as_a_guest_of_a_debian_host
1295261901 M * robs I have the same problem (ie stop "service" or start "service" hangs)
1295261980 M * robs this is the debug I get from "stop dbus" inside a ubuntu 10.04 guest
1295261983 M * robs http://pastebin.com/AjA2sAKG
1295263500 J * barismetin ~barismeti@zanzibar.inria.fr
1295263948 Q * Piet Remote host closed the connection
1295264876 J * Piet ~Piet__@04ZAABZKN.tor-irc.dnsbl.oftc.net
1295266840 Q * arekm Quit: leaving
1295267507 J * arekm arekm@carme.pld-linux.org
1295269776 Q * yarihm Quit: This computer has gone to sleep
1295271977 N * ensc Guest561
1295271987 J * ensc ~irc-ensc@p5DF2DE5A.dip.t-dialin.net
1295272388 Q * Guest561 Ping timeout: 480 seconds
1295272790 N * Bertl_zZ Bertl
1295272795 M * Bertl morning folks!
1295272816 M * melbar morning
1295272858 M * Bertl robs: interesting, any upstart debug logs maybe?
1295272885 M * Bertl (or is that from the upstart log?)
1295272959 M * robs Bertl: that's from the upstart log with debugging
1295273007 M * Bertl so upstart says everything fine, yes?
1295273049 M * robs Bertl: yes, as the page on freak search says, it seems the problem has something to do with expecting a fork()
1295273068 M * Bertl the question then is, what makes the 'stop service' hang?
1295273072 M * robs Bertl: I can give you access to the test server if needed
1295273086 M * robs Bertl: and start too.
1295273110 M * Bertl I wouldn't (want to) know how to debug upstart .. I consider it pre-alpha atm
1295273131 M * robs Bertl: :(
1295273152 M * Bertl but if you are comfortable with debugging upstart, we can do some tests
1295273194 M * Bertl of course, it would be a good idea to contact the upstart developer (he already paid a visit here back then when it was impossible to start upstart at all :)
1295273300 M * robs Bertl: you know his name ?
1295273549 M * Bertl http://irc.13thfloor.at/LOG/2009-10/LOG_2009-10-16.txt
1295273559 M * Bertl 1255716292 J * Keybuk ~scott@halo.netsplit.com
1295273606 M * Bertl Scott James Remnant it seems
1295275143 Q * bsingh Ping timeout: 480 seconds
1295275317 J * hparker ~hparker@2001:470:1f0f:32c:beae:c5ff:fe01:b647
1295275689 J * bsingh ~balbir@122.172.9.42
1295276240 M * robs Bertl: I try to ask on freenode#upstart ..
1295276352 M * Bertl doesn't need to be scott, just somebody who knows how upstart works (i.e. the internals)
1295276384 M * robs Bertl: that channel seems deadly silent .. I may open a bug on launchpad
1295278111 Q * petzsch Quit: Leaving.
1295279277 Q * vizz- Quit: leaving
1295279376 J * dna ~dna@dslb-088-074-205-215.pools.arcor-ip.net
1295280075 J * dna_ ~dna@dslb-094-223-173-230.pools.arcor-ip.net
1295280513 Q * dna Ping timeout: 480 seconds
1295281162 J * vizz ~vizz@2001:1608:12::1004
1295282359 Q * swenTjuln Quit: KVIrc Insomnia 4.0.1, revision: 4541, sources date: 20100627, built on: 2010-08-03 16:04:47 UTC http://www.kvirc.net/
1295282969 Q * thierryp Remote host closed the connection
1295283712 J * bonbons ~bonbons@2001:960:7ab:0:2c0:9fff:fe2d:39d
1295283732 J * FireEgl ~FireEgl@173-25-19-139.client.mchsi.com
1295284148 J * petzsch ~markus@dslb-092-078-144-230.pools.arcor-ip.net
1295284486 Q * barismetin Remote host closed the connection
1295286137 J * hijacker_ ~hijacker@87-126-142-51.btc-net.bg
1295286732 Q * robs Quit: Sto andando via
1295289709 J * dna__ ~dna@dslb-094-222-125-010.pools.arcor-ip.net
1295290146 Q * dna_ Ping timeout: 480 seconds
1295291736 J * thierryp ~thierry@home.parmentelat.net
1295292166 Q * thierryp Remote host closed the connection
1295293023 Q * bsingh Ping timeout: 480 seconds
1295293564 J * bsingh ~balbir@122.172.6.78
1295294328 Q * kolorafa Ping timeout: 480 seconds
1295294436 Q * bsingh Ping timeout: 480 seconds
1295294976 J * bsingh ~balbir@122.172.26.173
1295295422 J * dna_ ~dna@dslb-088-074-197-070.pools.arcor-ip.net
1295295726 Q * bsingh Ping timeout: 480 seconds
1295295828 Q * imcsk8 Remote host closed the connection
1295295848 Q * dna__ Ping timeout: 480 seconds
1295296106 J * imcsk8 ~ichavero@148.229.1.11
1295296257 J * bsingh ~balbir@122.172.17.121
1295300671 Q * hijacker_ Quit: Leaving
1295301900 Q * petzsch Quit: Leaving.
1295303010 Q * bonbons Quit: Leaving
1295303268 Q * dna_ Quit: Verlassend
1295307210 Q * Piet Remote host closed the connection
1295307373 J * Piet ~Piet__@659AABZ56.tor-irc.dnsbl.oftc.net