1195258473 J * hparker ~hparker@linux.homershut.net
1195259495 Q * dowdle Remote host closed the connection
1195261095 Q * mire Ping timeout: 480 seconds
1195262254 Q * m_stone_ Quit: leaving
1195262307 J * mjeanson ~mjeanson@bas2-sherbrooke40-1128575760.dsl.bell.ca
1195263379 Q * matthew_ Ping timeout: 480 seconds
1195263885 J * Infinito argos@200-140-155-92.gnace701.dsl.brasiltelecom.net.br
1195265115 J * mire ~mire@245-169-222-85.adsl.verat.net
1195265360 Q * Infinito Read error: Connection reset by peer
1195265394 J * Infinito argos@200-140-155-92.gnace701.dsl.brasiltelecom.net.br
1195265537 Q * Infinito
1195265919 Q * mjeanson Quit: Quitte
1195266053 Q * mire Read error: Operation timed out
1195268897 Q * hparker Quit: Quit
1195269050 J * hparker ~hparker@linux.homershut.net
1195270340 J * FireEgl FireEgl@4.0.0.0.1.0.0.0.c.d.4.8.0.c.5.0.1.0.0.2.ip6.arpa
1195270783 J * matthew__ ~matthew@81.168.74.31
1195276228 Q * quasisane Read error: Operation timed out
1195276680 J * ntrs_ ~ntrs@79.125.224.31
1195277088 Q * ntrs Read error: Operation timed out
1195277188 J * quasisane ~sanep@c-76-118-191-64.hsd1.nh.comcast.net
1195277287 Q * quasisane Remote host closed the connection
1195278346 J * _jmcaricand_zzz ~jmcarican@d90-144-119-177.cust.tele2.fr
1195279597 Q * _jmcaricand_zzz Quit: KVIrc 3.2.4 Anomalies http://www.kvirc.net/
1195280442 Q * Aiken Ping timeout: 480 seconds
1195284327 Q * FireEgl Read error: Connection reset by peer
1195284772 J * Aiken ~james@ppp59-167-115-173.lns3.bne4.internode.on.net
1195285107 J * DLange ~dlange@p57A33A57.dip0.t-ipconnect.de
1195286293 J * arachnist ~arachnist@pool-71-174-118-56.bstnma.fios.verizon.net
1195286317 Q * arachnist
1195286339 J * arachnist ~arachnist@pool-71-174-118-56.bstnma.fios.verizon.net
1195286424 Q * arachnis1 Quit: brb/bbl
1195286524 J * JonB ~NoSuchUse@kg1-20.kollegiegaarden.dk
1195286725 J * DavidS ~david@85.125.165.34
1195286791 Q * Aiken Read error: Connection reset by peer
1195286873 J * mire ~mire@245-169-222-85.adsl.verat.net
1195287120 Q * AStorm Remote host closed the connection
1195287196 J * AStorm ~astralsto@tor-irc.dnsbl.oftc.net
1195288345 Q * mire Ping timeout: 480 seconds
1195289372 Q * JonB Ping timeout: 480 seconds
1195289382 J * JonB ~NoSuchUse@kg0-199.kollegiegaarden.dk
1195289392 J * igraltist ~user4@kasten-edv.de
1195289662 J * Aiken ~james@ppp59-167-115-173.lns3.bne4.internode.on.net
1195289750 Q * AStorm Remote host closed the connection
1195289790 J * AStorm ~astralsto@tor-irc.dnsbl.oftc.net
1195289805 J * dna ~dna@146-211-dsl.kielnet.net
1195289960 N * Bertl_zZ Bertl
1195289965 M * Bertl morning folks!
1195289977 M * JonB hey Bertl
1195290368 Q * AStorm Remote host closed the connection
1195290395 J * AStorm ~astralsto@tor-irc.dnsbl.oftc.net
1195290845 Q * transacid Remote host closed the connection
1195290878 J * transacid ~transacid@transacid.de
1195291923 Q * JonB Quit: This computer has gone to sleep
1195292097 Q * DavidS Ping timeout: 480 seconds
1195293077 J * larsivi ~larsivi@101.84-48-201.nextgentel.com
1195293354 J * Yvo ~yvonne@91.64.217.106
1195293869 M * Bertl wb Yvo! larsivi!
1195293944 M * larsivi hey :)
1195293953 J * virtuoso ~s0t0na@ppp91-122-26-204.pppoe.avangard-dsl.ru
1195293985 M * Yvo good morning :-)
1195294291 Q * hparker Quit: peer reset by connection
1195294364 Q * virtuoso_ Ping timeout: 480 seconds
1195294476 M * Bertl okay, off for now .. back a little later
1195294482 N * Bertl Bertl_oO
1195295622 Q * larsivi Quit: Konversation terminated!
1195295735 J * ema ~ema@rtfm.galliera.it
1195295968 M * igraltist hi
1195296086 M * igraltist i have this for the guest
1195296087 M * igraltist http://paste.debian.net/42689
1195296093 M * igraltist in bcapabilitie
1195296094 M * igraltist s
1195296123 M * igraltist on the host is no X-server installed also no graphic-driver
1195296151 M * igraltist when i start now the X in the guest its says that no device found
1195296217 M * AStorm you're missing a device node, maybe?
1195296248 M * igraltist which one?
1195296266 M * igraltist i have all file from my old /dev copied
1195296304 M * AStorm uhm, so what's the point of vserver then, if you enabled almost all caps and added all devices?
1195296328 M * igraltist this is not the question for me
1195296329 M * AStorm maybe your gfx driver needs more of proc
1195296333 M * AStorm e.g. /proc/maps
1195296375 M * AStorm e.g. /proc/iomem that is
1195296376 M * AStorm or proc/irq
1195296394 M * AStorm try stracing X server
1195296809 Q * igraltist Ping timeout: 480 seconds
1195297351 J * pme ~pme@ACaen-152-1-14-52.w83-115.abo.wanadoo.fr
1195297382 N * pme pmenier
1195298300 J * meandtheshell ~sa@85.127.117.127
1195298685 J * JonB ~NoSuchUse@kg0-199.kollegiegaarden.dk
1195298802 Q * ntrs_ Read error: Connection reset by peer
1195298826 J * ntrs_ ~ntrs@79.125.224.31
1195300165 J * igraltist ~user4@kasten-edv.de
1195300268 J * zLinux ~zLinux@88.213.31.164
1195300313 M * igraltist hi
1195300332 M * igraltist how can allow the guest to get more proc access
1195300353 M * igraltist i really dont like to put the whole xserver stuff on the host
1195300355 M * AStorm vprocunhide
1195300360 M * AStorm check it configuration
1195300364 M * AStorm *its
1195300391 M * igraltist i must start it otherwise the guest want start
1195300432 M * AStorm ....
1195301176 Q * JonB Quit: This computer has gone to sleep
1195301246 M * ard /etc/vserver/.default/apps/vprocunhide/files
1195301253 M * ard put the files there
1195301276 A * ard is also trying to run the xserver in a vserver environment :-)
1195301297 N * pmenier pmenier_off
1195301354 M * igraltist ard and is it working your xserver?
1195301395 M * AStorm mine worked fine, with nvidia driver too
1195301488 M * ard igraltist : the xserver works so far, but I am missing the keyboard :-)
1195301554 M * AStorm ard: you forgot to make /dev/input available ;P
1195301565 M * ard ye
1195301566 M * ard s
1195301576 M * igraltist what i must i put in vprocunhide/files?
1195301580 M * ard but it was not my primary goal this week :-)
1195301588 M * igraltist the /proc ?
1195301593 M * AStorm igraltist: proc files your x server needs
1195301596 M * AStorm actually, your driver
1195301605 M * AStorm no, NEVER put your whole proc in it
1195301613 M * ard igraltist : http://oldwiki.linux-vserver.org/MoreUbuntu
1195301627 M * AStorm btw, why isn't proc unhiding per-vserver?
1195301645 M * ard that's what I realised :-)
1195301664 M * ard anyway: it's much better than running it in the root environment :-)
1195301682 M * AStorm mhm
1195302129 M * igraltist so i hope now i have add enough to start the X
1195302135 M * igraltist go to reboot :)
1195302319 Q * arachnist Quit: brb/bbl
1195302330 J * arachnist arachnist@088156187175.who.vectranet.pl
1195303019 Q * ensc Remote host closed the connection
1195304078 N * pmenier_off pmenier
1195305539 J * JonB ~NoSuchUse@kg0-199.kollegiegaarden.dk
1195306694 J * stux ~stux@AOrleans-109-1-4-4.w80-11.abo.wanadoo.fr
1195307287 M * stux hello sirs, i have a little question
1195307293 M * stux i try to mount a nfs partition between a guest and a host, the nfs server handles permissions well (tested with another physical machine) but my guest always says "Permission denied" Without any further log (even in syslog).
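[Editor's note: the vprocunhide file list ard points at can be populated roughly as below. The specific /proc entries are assumptions drawn from the paths AStorm mentions (iomem, irq), not from a tested setup; add only what your own X driver turns out to need, never all of /proc, and note that the config path varies slightly between util-vserver versions.]

```shell
# Hypothetical sketch: reveal a few /proc entries to guests via vprocunhide.
# The listed entries are guesses for a graphics driver; verify with strace
# which files your driver actually opens before adding them.
mkdir -p /etc/vserver/.default/apps/vprocunhide
cat >> /etc/vserver/.default/apps/vprocunhide/files <<'EOF'
/proc/iomem
/proc/ioports
/proc/irq/
/proc/bus/pci/
EOF
vprocunhide   # re-run so the new entries take effect
```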
1195307310 M * stux has got a similar problem ?
1195307407 M * JonB i dont think guests can mount
1195307423 M * stux hmm ...
1195307428 M * JonB is it to share data between a vserver guest and a vserver host?
1195307436 M * stux yes
1195307474 M * JonB the vserver host that the guest is running under?
1195307502 M * stux host on slackware 12, guest on debian etch
1195307524 M * JonB how many vserver hosts do you have?
1195307536 M * stux actually , 5
1195307553 M * stux ah , hosts ...
1195307557 M * stux no 1 sorry
1195307562 M * JonB okay, good
1195307567 M * stux running 5 guests
1195307600 M * JonB why would you want to share data using NFS?
1195307667 M * stux because i'm running nis+nfs for all my network, and i'd like to do the same on guests
1195307682 M * JonB NIS is one thing
1195307686 M * JonB NFS is another
1195307705 M * JonB are the vserver host also the NFS server?
1195307713 M * stux yes
1195307726 Q * ema Quit: leaving
1195307728 M * JonB then share the NFS data directly
1195307739 M * stux how can i do that ?
1195307778 M * JonB well, just mount the partition the data is on inside the vserver
1195307793 M * JonB edit /etc/vservers//fstab
1195307821 M * stux i will try ...
1195307917 M * stux something like
1195307933 M * stux /dev/sda9 /data ext3 defaults 0 0 ?
1195307959 M * JonB i think it uses regular fstab syntax
1195308009 M * stux ok, it seems working ... thank you very much ;)
1195308132 M * JonB you're velcome
1195308220 M * stux bye bye, and thanks again
1195308239 P * stux Quitte
1195309853 J * vrwttnmtu ~eryktyktu@82-69-161-137.dsl.in-addr.zen.co.uk
1195309864 M * vrwttnmtu Hello all :)
1195309887 M * AStorm hello
1195309923 M * AStorm I wonder how hard will be patching vserver on top of xen patched kernel, shall see
1195310024 M * JonB AStorm: why do you want to do that?
1195310065 M * vrwttnmtu Hello all. Does anyone have any experience building CentOS guests? I get this error: mount point /etc/rpm does not exist. Please enlighten with a cluestick :)
1195310097 M * AStorm JonB: to run certain Dom0 services in vservers ;P
1195310107 M * JonB AStorm: what is dom0?
1195310112 M * AStorm hmm, that shouldn't be necessary though
1195310126 M * AStorm JonB: Xen host
1195310131 M * AStorm (as opposed to Xen guest)
1195310132 M * AStorm domain 0
1195310139 M * JonB okay
1195310168 M * ruskie hmm I would have though going the other way might be better...
1195310183 M * ruskie i.e. have xen then in domUs vservers
1195310349 J * ntrs__ ~ntrs@79.125.224.31
1195310350 Q * ntrs_ Read error: Connection reset by peer
1195310640 M * AStorm ruskie: usually that works
1195310653 M * AStorm unless the service in question depends on hardware
1195310659 M * AStorm e.g. X server
1195310671 M * AStorm I could forward required PCI devices though
1195310709 M * AStorm I know that X in a vserver works extremely well
1195310731 M * AStorm (after unhiding some proc, adding 2 bcaps and some device nodes)
1195310757 M * AStorm also, vservers inside a xen node could also be useful to limit damage due to a security break
1195310784 M * JonB AStorm: you're asuming that xen is error free ?
1195310793 M * AStorm no
1195310804 M * AStorm that's why security in layers is important
1195310809 M * AStorm *in depth even
1195310815 M * JonB AStorm: oh, just like a onion
1195310826 M * AStorm I'd also add Smack as an ACL
1195310844 M * AStorm and then it should be really secure
1195310892 M * AStorm Xen is like hardware-level security
1195310911 M * AStorm vserver is OS-level separation, like chroots
1195310917 M * AStorm Smack is an ACL
1195310936 M * AStorm together, they'd create a MAC system
1195310956 M * AStorm s/ACL/file ACL/
1195310985 M * AStorm Smack could do that on their own, but to turn it into separation would be cumbersome
1195311024 M * daniel_hozac vrwttnmtu: mkdir /etc/rpm
1195311032 M * daniel_hozac vrwttnmtu: the Debian rpm package is broken.
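[Editor's note: the per-guest fstab approach JonB suggests to stux above can be sketched as follows. The guest name "myguest" and the device are placeholders, not from the log; JonB's own message elides the guest name in the path.]

```shell
# Hypothetical sketch: share a host partition with a guest directly,
# instead of NFS-mounting from inside the guest (which guests can't do).
# Regular fstab syntax, in the guest's own fstab on the host:
echo '/dev/sda9 /data ext3 defaults 0 0' >> /etc/vservers/myguest/fstab
# the entry is mounted into the guest's namespace when the guest starts:
vserver myguest restart
```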
1195311046 M * AStorm :p
1195311058 M * vrwttnmtu No dynamically linked rpm binary found; exiting...
1195311068 M * daniel_hozac uh, you did install rpm, right?
1195311068 M * vrwttnmtu It's a Gentoo host
1195311071 M * vrwttnmtu Nope
1195311079 M * AStorm hmm, I can emulate most of vserver using Smack and filesystem capabilities ;P
1195311080 M * daniel_hozac ... might want to do that first then.
1195311091 M * AStorm in fact, everything except network separation
1195311138 M * AStorm maybe even that, if Smack is strong enough
1195311148 P * Yvo
1195311158 M * vrwttnmtu daniel_hozac, Does this method download the RPMs over the net?
1195311171 M * vrwttnmtu I don't need a store of them already, do I?
1195311186 M * AStorm so, the only thing missing would be scheduling
1195311300 M * daniel_hozac vrwttnmtu: the yum method will use yum to download everything, yes.
1195311313 M * vrwttnmtu Hurrah, excellent.
1195311330 M * vrwttnmtu Does it support Centos 5 yet, do you know?
1195311342 M * daniel_hozac CentOS 5 was added in 0.30.213 or 0.30.214...
1195311366 M * vrwttnmtu I'm on .213 - I'll try it, and if not, I'll upgrade
1195311777 J * mark__ ~mark@gate12.kolornet.pl
1195311818 M * mark__ hello everyone
1195311832 M * mark__ I've got a little question about vserver usage
1195311899 M * JonB yes?
1195311914 M * mark__ suppose you have tmpfs on many vserver instances
1195311927 M * mark__ it eats up RAM from common pool, right?
1195311933 M * mark__ "shared memory" I believe
1195311951 M * mark__ do your vserver installations eat up a lot of RAM if you have, say, 20 vservers on a host?
1195311954 Q * pmenier Quit: Konversation terminated!
1195311971 M * JonB mark__: you do not need to have a /tmp in ram
1195311978 M * mark__ or is it better to disable it entirely in fstab for particular
1195311980 M * mark__ I know
1195312002 M * mark__ the tradeoff is "less ram, but more disk usage"
1195312003 M * JonB mark__: i think they are taking up 20*16mb
1195312017 M * mark__ which is not enough in many cases, for apt-get for instance
1195312018 M * JonB mark__: only if you actually use /tmp alot
1195312036 M * mark__ suppose you have apache and exim or postfix instance in each
1195312040 Q * phedny Quit: Lost terminal
1195312041 M * mark__ + mysql
1195312053 M * JonB does any of those really use /tmp?
1195312056 M * mark__ yep
1195312065 M * mark__ exim does in many situations
1195312069 M * mark__ I don't know about mysql;
1195312073 M * mark__ ssh does
1195312087 M * mark__ apache sessions are stored on /tmp
1195312088 M * JonB mark__: but does ssh for many data
1195312097 M * mark__ well ssh is small potatoes
1195312133 M * mark__ prob is, if I allow more ram to tmpfs I'm left with significantly less ram
1195312135 J * Pazzo ~ugelt@reserved-225136.rol.raiffeisen.net
1195312141 J * phedny ~mark@ip56538143.direct-adsl.nl
1195312142 M * mark__ and 4G is all I have on 32 bit arch
1195312164 M * mark__ i'm not that adventurous to try 64-bit linux
1195312168 M * daniel_hozac you realize that it's backed by swap too, right?
1195312176 M * mark__ besides little works there and hardware is expensive
1195312179 M * daniel_hozac and that it only uses RAM if it's necessary?
1195312180 M * mark__ sure,
1195312196 M * mark__ but tmpfs takes up ram that apps need, right?
1195312203 M * daniel_hozac "little works there"?! what doesn't work on 64-bit, except proprietary crap?
1195312206 M * mark__ so there's lots of disk swapping bc of vm activity
1195312222 M * mark__ my last encounter with 64-bit debian didn't end up very well
1195312232 M * mark__ actually my colleague's, a Debian maniac
1195312244 M * mark__ there were some services we had trouble to get working
1195312251 M * mark__ I don't remember exactly now
1195312255 M * mark__ what it was
1195312265 M * mark__ he couldn't make it work anyway
1195312279 M * mark__ besides I don't have hardware with support for more than 8G of ram, so it's academic anyway
1195312285 M * mark__ more than 4G of ram, that is
1195312323 M * mark__ most apps have to be redesigned for 64bit not just recompiled
1195312332 Q * phedny
1195312336 Q * AStorm Quit: ET calling home
1195312337 M * mark__ there's little advantage running 32bit apache on 64bit OS
1195312378 M * daniel_hozac uh, no.
1195312394 M * daniel_hozac just recompiling gives you a significant performance boost on the retarded architecture that is x86.
1195312412 M * mark__ I agree that x86 is brain-damaged
1195312424 M * mark__ but it's not like I have any choice moving elsewhere for simple reasons of prices
1195312450 M * mark__ maybe I'll give 64bit try again
1195312455 M * mark__ anyway, about disk activity on /tmp
1195312480 M * mark__ question is, is it worth making the tradeoff
1195312487 M * JonB mark__: wont you have the same problem in non vservers ?
1195312509 M * mark__ well sure but if you have a single server it usually has enough ram so you can spare some for tmpfs
1195312521 M * mark__ with vserver instances it is much more tightly packed
1195312529 M * mark__ well that's the main point of vserver, right?
1195312538 M * mark__ using up resources more efficiently
1195312549 M * daniel_hozac mark__: it's simple really. whichever way you go, in the worst case scenario, you'll have thrashing.
1195312568 M * daniel_hozac with tmpfs, that lessens as you don't have to write to disk all the time.
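[Editor's note: the per-guest tmpfs being debated above (JonB's "20*16mb") lives in each guest's fstab, so it can be resized or removed per guest rather than accepted as-is. The guest name and sizes below are illustrative assumptions, not values from the discussion.]

```shell
# In /etc/vservers/<guest>/fstab -- illustrative entries:
none /tmp tmpfs size=16m,mode=1777 0 0
# give a busier guest more headroom (tmpfs is swap-backed and only
# consumes RAM for data actually stored in it):
# none /tmp tmpfs size=128m,mode=1777 0 0
# or delete the line entirely to fall back to an on-disk /tmp
```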
1195312604 M * mark__ thrashing is more likely to happen when apps don't get enough ram
1195312643 M * daniel_hozac but you'd have the same effect if they were all writing to the disk for /tmp, no?
1195312645 M * mark__ and if some of vservers fills up their tmpfs'es with stuff they could just as well keep on the disk
1195312649 M * mark__ not really
1195312657 M * mark__ many apps leave temporary files around
1195312671 M * JonB mark__: then dont run them
1195312672 M * mark__ sure I can reduce this effect with doing tmpwatch/tmpreaper
1195312684 M * mark__ it's not up to me, it's up to users of particular vserver
1195312697 M * mark__ what is that they run there
1195312816 J * AStorm ~astralsto@tor-irc.dnsbl.oftc.net
1195312818 M * mark__ oh, anyway, more specific question: is there really no way of getting user/group quotas INSIDE a vserver apart from running it all using LVM?
1195312964 M * daniel_hozac since nobody has ever stuck around to actually test that, no.
1195312974 M * mark__ well I tried testing it
1195312988 M * mark__ I mean simplemindedly, by installing quota and quotatools
1195312994 M * daniel_hozac hmm? did you just imagine the patch, or what?
1195312997 M * mark__ on a shared partition
1195313001 M * mark__ nope :-)
1195313006 M * daniel_hozac so, you didn't test it.
1195313018 M * mark__ where's the patch?
1195313032 M * vrwttnmtu Hmm, thanks PowerFox
1195313034 M * daniel_hozac you'd have to talk to Bertl_oO.
1195313047 M * mark__ ok
1195313235 M * mark__ thanks
1195313294 J * phedny ~mark@ip56538143.direct-adsl.nl
1195313713 M * mark__ another thing, just to be sure: have you policed/shaped bandwidth for vserver instances? does it work well?
1195313871 M * daniel_hozac just use tc.
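[Editor's note: daniel_hozac's "just use tc" works because, from the host's point of view, each guest's traffic is ordinary IP traffic from that guest's address, so standard traffic control applies. The interface, rate, and guest IP below are invented for illustration.]

```shell
# Hypothetical sketch: cap one guest's outbound traffic at 10mbit.
# Run on the host; 192.0.2.10 stands in for the guest's assigned IP.
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit ceil 10mbit
# classify packets whose source address is the guest's IP
tc filter add dev eth0 parent 1: protocol ip prio 1 \
    u32 match ip src 192.0.2.10/32 flowid 1:10
```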
1195313998 M * mark__ ok
1195314012 M * mark__ concise enough for instruction :-)
1195314014 M * mark__ thx
1195314195 Q * Aiken Quit: Leaving
1195314430 Q * JonB Quit: This computer has gone to sleep
1195314489 M * arachnist nice, i get ~83MB/s network throughput between vserver and a xen domU
1195314598 M * AStorm arachnist: you're a Xen convert... infidel!
1195314607 M * AStorm ;)
1195314746 M * arachnist AStorm: don't try to tell me you don't use xen at all
1195314775 M * AStorm notice the smiley
1195314829 M * arachnist and i get about 500MB/s between vservers ;>>
1195314850 M * arachnist or at least that's how fast lighttpd can push data here
1195314864 M * AStorm uhm, that's because copy between domU and dom0 is mostly CPU-limited
1195314886 M * AStorm xen-sources have a slightly older version of Xen that doesn't implement efficient request batching
1195314896 M * AStorm but, that discussion should happen at #xen ;P
1195314927 M * arachnist AStorm: not #xen but ##xen ;P
1195314941 M * arachnist or, maybe...
1195314943 M * arachnist :>
1195314950 J * pmenier ~pmenier@ACaen-152-1-14-52.w83-115.abo.wanadoo.fr
1195314986 M * AStorm ##xen would be good too, but on freenode :>
1195315006 M * mark__ Astorm: I'm new to vserver, so honestly, can you tell what is status of the project? active/stale, developing steadily or not much?
1195315058 J * hparker ~hparker@linux.homershut.net
1195315059 M * arachnist mark__: active/alive
1195315120 M * mark__ I'll be happy to find that out in practice, arachnist, bc we plan to use vserver for VPS (commercial) hosting
1195315133 M * mark__ I like it so much more than XEN
1195315152 M * mark__ so it would be pity to see it gone
1195315162 M * mark__ why is it not in mainline kernel yet?!
1195315184 M * mark__ I mean, a bit of polishing here and there and you have just about optimum virtualization solution
1195315248 M * mark__ btw, does vserver work with 64-bit kernel?
1195315370 M * hparker It works fine 64bit
1195315380 M * AStorm mark__: active, alive, stable
1195315401 M * AStorm mark__: it's not in the mainline because of architectural "arguments" from mainline devs
1195315412 M * hparker As for going in kernel, take a look at http://www.montanalinux.org/linux-vserver-interview.html
1195315418 M * AStorm something similar is slowly trickling in, that is namespaces and containers
1195315433 M * AStorm and cgroups
1195315541 J * click_ click@ti511110a080-2269.bb.online.no
1195315642 Q * click Ping timeout: 480 seconds
1195316901 M * mark__ this is pretty funny:
1195316902 M * mark__ http://www.fsfe.org/it/fellows/rca/from_out_there/from_zero_to_virtualization_linux_vserver_vs_openvz
1195316963 M * mark__ having read that, though, another question exploded in my skull: is there a way to limit a guest to that particular amount of memory? I mean, "free" or "vmstat" show entire memory is available
1195317395 J * dreamind ~dreamind@cable-134-163.iesy.net
1195317427 N * dreamind Guest1007
1195317595 N * phedny Guest1008
1195317603 J * phedny ~mark@ip56538143.direct-adsl.nl
1195317625 M * mark__ hello?
1195317638 M * mark__ is it possible to limit memory per guest (vserver instance)?
1195317666 M * matthew__ mark__: yes, I believe it is
1195317693 M * mark__ ok, good, but how? I didn't find anything on the Great Tree: http://www.nongnu.org/util-vserver/doc/conf/configuration.html
1195317710 M * matthew__ err, not totally sure. It's not something I've ever done.
1195317715 M * mark__ ok
1195317897 M * Loki|muh mark__: its written in the faq on the homepage afair
1195317900 M * Guest1007 hi folks
1195317911 M * hparker mark__: yes, with rlimits
1195318004 Q * Guest1008 Ping timeout: 480 seconds
1195318019 M * mark__ Loki|muh: huh? sorry
1195318027 M * mark__ it's there in faq. overlooked it
1195318083 M * vrwttnmtu daniel_hozac, Trying to install CentOS guest, and it's not showing much output. It's just hung (?) after "Warning, could not load sqlite, falling back to pickle"
1195318098 M * vrwttnmtu Is it actually working, or has it got stuck?
1195318131 M * hparker I don't want no pickle.... Just want to ride my motor.... sickle
1195318172 M * vrwttnmtu Or just have sex with my bi-sickle. http://news.bbc.co.uk/1/hi/scotland/glasgow_and_west/7098116.stm
1195318236 M * vrwttnmtu daniel_hozac, Wow. Lots of "Missing Dependancies" : "/usr/share/apps/khangman/data is needed by package kde-i18n-Italian"
1195318269 M * hparker yikes
1195318272 M * vrwttnmtu Yeah
1195318299 M * hparker them bikes, they've got some sharp edges
1195318808 Q * Guest1007 Quit: Guest1007
1195319023 M * faheem__1 Hi. Is it correct that inside the vserver, localhost should use the IP address of the host? That is what I've been doing.
1195319342 M * faheem__1 vrwttnmtu: I could get a centos guest to work on Debian etch. What is your platform?
1195319361 M * Loki|muh faheem__1: localhost should be the IP of the guest itself
1195319381 M * vrwttnmtu Gentoo
1195319387 M * faheem__1 Loki|muh: Are you sure?
1195319417 M * vrwttnmtu I'm running vserver xxxx build -m yum --interface eth0:xxxx/24 --hostname=xxxx -- -d centos5
1195319424 M * faheem__1 Loki|muh: I think I got this info from somewhere, but don't remember now where.
1195319424 M * Loki|muh faheem__1: its everywhere in my setup ;)
1195319457 M * faheem__1 vrwttnmtu: Yes, similar.
1195319497 M * vrwttnmtu Hmm
1195319509 M * faheem__1 vrwttnmtu: It threw a fit about yum being insecure, but otherwise completed successfully.
1195319516 M * vrwttnmtu Yep, same here
1195319526 M * vrwttnmtu Blah, chroot, blah, patches, blah :)
1195319626 M * faheem__1 vrwttnmtu: ?
1195319634 A * hparker needs to try a RH like install on his gentoo server sometime
1195319672 M * faheem__1 vrwttnmtu: Why are you using centos? Some people I work with insist on using RH stuff, which is why I'm using it.
1195319692 M * vrwttnmtu faheem__1, Don't know.
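[Editor's note: hparker's "yes, with rlimits" maps onto the util-vserver configuration tree. The guest name, limit names, and values below are assumptions from memory of that scheme, not from the log; check the FAQ entry mark__ found and your util-vserver documentation before relying on them, since units (pages vs. bytes) differ between limits.]

```shell
# Hypothetical sketch: per-guest memory limits via the rlimits directory.
# Values are invented; on many setups rss/as are counted in pages.
mkdir -p /etc/vservers/myguest/rlimits
echo 65536  > /etc/vservers/myguest/rlimits/rss   # resident set limit
echo 131072 > /etc/vservers/myguest/rlimits/as    # address-space limit
# limits take effect when the guest is (re)started
vserver myguest restart
```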
1195319695 J * click click@ti511110a080-2692.bb.online.no
1195319699 M * vrwttnmtu faheem__1, Someone asked, I try to provide
1195319703 M * faheem__1 Wouldn't touch it with a 10 foot pole otherwise. (Apologies to RH fans).
1195319774 M * vrwttnmtu Ours is not to reason why
1195319794 M * arachnist faheem__1: even if you could find a 10 foot pole, i'm sure he'd resist
1195319798 M * hparker hence the reason I need to try it
1195319812 Q * click_ Ping timeout: 480 seconds
1195319903 J * ntrs_ ~ntrs@79.125.240.124
1195320336 Q * ntrs__ Ping timeout: 480 seconds
1195320870 J * JonB ~NoSuchUse@kg0-199.kollegiegaarden.dk
1195321044 Q * mark__ Quit: Konversation terminated!
1195321079 Q * bragon Ping timeout: 480 seconds
1195321141 M * faheem__1 Does anyone understand the issue with 127.0.0.1 inside a guest?
1195321206 M * hparker iirc that's changed in the development stuff
1195321217 M * JonB faheem__: how many of you are there?
1195321225 M * JonB faheem__: i think bertl knows
1195321263 M * faheem__1 JonB: ?
1195321295 M * JonB faheem__ + faheem__1: how many times do you have to be online?
1195321320 M * faheem__1 JonB: Oh. I'm IRC illiterate, and I'm online via different computers.
1195321329 M * faheem__1 I think one home and one work.
1195321336 M * arachnist ssh ?
1195321341 M * faheem__1 I guess I need to work at being less IRC illiterate.
1195321343 M * arachnist ever heard of it? :>
1195321343 M * faheem__1 arachnist: Yes.
1195321356 M * faheem__1 Too many other things to think about though.
1195321369 M * JonB faheem: it is annoying
1195321370 M * arachnist i have separate vserver for my irc session and rtorrent ;P
1195321386 M * faheem__1 And I think I reached by quota for learning new computer stuff sometime this summer.
1195321405 M * faheem__1 JonB: Annoying that I'm logged in via different machines? Sorry to be annoying.
1195321420 M * JonB faheem: use a irc proxy
1195321485 M * faheem__1 arachnist: sorry, ever heard of what?
1195321494 M * faheem__1 JonB: Yes, I know. Need to figure out how to use it.
1195321507 M * faheem__1 JonB: And do other things more sensibly.
1195321976 Q * JonB Quit: Leaving
1195322114 N * AStorm Guest1015
1195322119 J * AStorm ~astralsto@tor-irc.dnsbl.oftc.net
1195322133 Q * Guest1015 Ping timeout: 480 seconds
1195322292 J * quasisane ~sanep@c-76-118-191-64.hsd1.nh.comcast.net
1195322614 J * mire ~mire@245-169-222-85.adsl.verat.net
1195322642 Q * vrwttnmtu Quit: Flee! Flee to the hills!
1195322723 J * JonB ~NoSuchUse@kg0-199.kollegiegaarden.dk
1195322872 Q * michal Ping timeout: 480 seconds
1195323193 J * michal ~michal@www.rsbac.org
1195323238 J * bonbons ~bonbons@2001:960:7ab:0:20b:5dff:fec7:6b33
1195323342 Q * pmenier Quit: Konversation terminated!
1195323552 Q * arachnist Quit: reboot
1195323739 J * ema ~ema@rtfm.galliera.it
1195323772 J * arachnist arachnist@088156187175.who.vectranet.pl
1195323937 Q * JonB Ping timeout: 480 seconds
1195324732 Q * Johnnie Ping timeout: 480 seconds
1195325294 J * Johnnie ~jdlewis@c-67-163-142-234.hsd1.ct.comcast.net
1195325477 Q * arachnist Quit: leaving
1195325530 J * arachnist ~arachnist@pool-71-174-118-56.bstnma.fios.verizon.net
1195325553 Q * ema Quit: leaving
1195325619 Q * Pazzo Quit: ...
1195325912 J * JonB ~NoSuchUse@kg0-199.kollegiegaarden.dk
1195326654 Q * JonB Quit: This computer has gone to sleep
1195326887 Q * arachnist Quit: brb
1195326892 J * arachnist arachnist@088156184167.who.vectranet.pl
1195326932 J * mwagner122 ~mwagner23@pD9E3CA5E.dip.t-dialin.net
1195326993 M * mwagner122 Question: hi guys... im trying to limit the size of each vservers directory. However setting the values on http://linux-vserver.org/Disk_Limits_and_Quota seems to have absolutely no effect. each client still can use as much space as he wants. Do i have to meet any certain requirements to make this work? Thanks alot
1195327241 J * Novotelecom_user ~user@l64-91-1.cn.ru
1195327282 M * Novotelecom_user hi all
1195327290 N * Novotelecom_user `Ivan_siberian
1195327698 P * `Ivan_siberian
1195328010 N * Bertl_oO Bertl
1195328051 M * Bertl mwagner122: you need to make sure to have tagging enabled, and use a filesystem which supports dlimits
1195328070 M * Bertl mwagner122: using recent tools should do the rest for you
1195328173 M * mwagner122 Bertl: thanks for your answer. im using ext3. that should work, right? could you tell me how to check if tagging is enabled?
1195328190 M * Bertl yes, ext3 is definitely supported
1195328339 J * Mark17 ~Mark@vnc.streamservice.nl
1195328342 M * Mark17 hello
1195328358 M * Bertl welcome Mark17!
1195328380 M * Mark17 how can i move a vps from master 1 (with ssh) to master 2
1195328383 M * Mark17 ?
1195328408 M * Bertl mwagner122: check with /proc/mounts
1195328429 M * Bertl Mark17: the rsync build method of util-vserver should do the trick for you
1195328475 M * mwagner122 Bertl: May i paste "cat /proc/mounts" for review here? its like 12 lines... i have no idea if its enabled
1195328500 M * Bertl (please use paste.linux-vserver.org for everything longer than 3 lines)
1195328511 M * Mark17 Bertl: master1 kan only make connection to the rest off the world, but they cannot start the contact with master1 (is there a push option somehow?)
1195328548 M * Bertl rsync works over ssh, so that should be fine
1195328562 M * Bertl you might need to do the rsync manually though
1195328594 M * mwagner122 bertl: could you check "http://paste.linux-vserver.org/9576" ? Is it enabled? thanks alot
1195328596 M * Mark17 rsync is to get files if i am correct
1195328599 M * Mark17 not to push files
1195328610 M * Bertl doesn't matter which direction you do it
1195328631 M * Bertl rsync -axHPSD --numeric-id local-dir host:remote-dir
1195328637 M * Mark17 lets find out (what directories/files do i have to copy?)
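[Editor's note: Bertl's two prerequisites for mwagner122 (tagging enabled, a dlimit-capable filesystem) can be checked and a limit applied roughly as below. The vdlimit options follow the Disk_Limits_and_Quota page cited above, but the context id, paths, and numbers are invented; verify against that page and your util-vserver version.]

```shell
# 1. check that the guest filesystem is mounted with tagging enabled
#    (look for a tag/tagxid option on the relevant mount)
grep /vservers /proc/mounts
# 2. hypothetical disk limit for context id 100; values are invented:
#    space_total is commonly in 1 KiB blocks, so 10485760 ~ 10 GB
vdlimit --xid 100 --set space_total=10485760 \
    --set inodes_total=1000000 --set reserved=5 /vservers/myguest
```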
1195328742 M * Bertl you want to copy the guest data, and most of the config 1195328760 M * Bertl you might want to adjust the symlinks in the config on the destination 1195328768 M * Bertl (if the guest pathes change) 1195328948 J * Yvo ~yvonne@i3ED6D37D.versanet.de 1195328967 M * Mark17 only the ip off the guest changes 1195328972 M * Mark17 other things can be the same 1195328978 M * Mark17 all guests have to be moved 1195328989 M * Mark17 what is the easiest way? 1195329060 M * Bertl again, rsync, I'd suggest to rsync first while the guest are stiöörunning 1195329069 M * Bertl *still running 1195329088 M * Mark17 nobody can access the server from outside 1195329098 M * Mark17 and the guests arent running at this moment ;) 1195329102 M * Bertl then stop the guest one by one, and resync 1195329125 Q * transacid Remote host closed the connection 1195329126 M * Bertl okay, even easier, then rsync all at once 1195329191 M * Mark17 --numeric-id is? 1195329218 M * Bertl it ensures that rsync doesn't use the uid/gid -> name mapping from your host 1195329238 M * Mark17 it gives an error 1195329239 M * Bertl (which is definitely what you want, as guests can have completely different mappings) 1195329246 M * Mark17 that it is an unknown option 1195329254 M * Bertl because I made a typo and missed an 'm' 1195329272 M * Bertl (or maybe an 's' :) 1195329279 M * Bertl check with --help/man 1195329316 M * hparker help man, help! 1195329637 M * Mark17 indeed you did mis an s 1195329646 M * Mark17 --numeric-ids 1195329891 J * transacid ~transacid@transacid.de 1195329941 M * faheem__1 Bertl: Could you clarify this 127.0.0.1 issue? Why doesn't this work inside a vserver guest? 1195329988 M * Bertl faheem__1: it doesn't? 1195330032 M * faheem__1 Bertl: To be clearer, the default assoc of 127.0.0.1 with localhost seems to have some problem. 1195330057 M * Bertl it does? 1195330107 M * faheem__1 Bertl: Yes. Sec. I'll try to find a link. 1195330176 M * faheem__1 See, eg. 
first line of http://lena.franken.de/linux/debian_and_vserver/sendmail.html 1195330240 M * faheem__1 I don't really understand the point. Was hoping you could clarify. 1195330301 M * Bertl well, I don't know why whoever wrote that got this opinion 1195330312 M * Bertl fact is the following: 1195330327 M * Bertl - 127.0.0.1 is remapped (to avoid collisions) inside Linux-VServer guests 1195330352 M * Bertl - depending on kernel version, configuration and guest setup, the mapping happens either 1195330375 M * Bertl (forget the either :) 1195330382 M * Bertl + on source ips 1195330412 M * Bertl + on destination ips 1195330429 M * faheem__1 Bertl: Not sure what this means. What is it remapped to? 1195330431 M * Bertl + on system ip queries (i.e. info) 1195330463 M * faheem__1 Bertl: What I know is that with the default settings for a Debian guest in /etc/hosts 1195330463 M * Bertl 127.0.0.1 is remapped to some other IP, and that ip is mapped back to 127.0.0.1 (in recent kernels for info) 1195330476 M * faheem__1 i.e. localhost 127.0.0.1 1195330490 M * faheem__1 localhost can't be reached. 1195330496 M * Bertl everything works fine, if you have the proper kernel and config 1195330515 M * Bertl of course, localhost can be reached quite fine 1195330561 M * faheem__1 Bertl: ping localhost hangs 1195330574 M * faheem__1 Am I doing something wrong, or is this expected behaviour?... 1195330584 M * Bertl ping doesn't happen on the IP layer, it uses ICMP 1195330603 M * Bertl so if you want that to work (with older debian kernels) you need to add the proper -I option 1195330622 M * faheem__1 Bertl: There is also a problem with databases, i.e. postgres. 1195330641 M * faheem__1 Bertl: I'm running 2.6.18 (Debian etch default). 1195330662 M * Bertl if they are configured to expect connections from 127.0.0.1 and that remapping isn't activated 1195330670 M * Bertl yes, as I said, older kernels :) 1195330676 M * faheem__1 There is a link here http://linux-vserver.org/Problematic_Programs. 
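The -I workaround Bertl mentions for older kernels can be sketched as follows: instead of relying on 127.0.0.1 remapping, ping binds explicitly to the guest's first assigned IP. The address is a placeholder, and `echo` makes this a dry run since the real command only makes sense inside a guest.

```shell
# Sketch of pinging via the guest's first assigned IP on older kernels
# that lack the reverse lback mapping.  FIRST_IP is hypothetical.
FIRST_IP=192.0.2.10
ping_cmd="ping -c 1 -I $FIRST_IP localhost"
echo "$ping_cmd"    # drop the echo to actually run it inside the guest
```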
1195330695 M * faheem__1 Bertl: So the programs are doing something wrong, then? 1195330698 M * faheem__1 Ie postgres? 1195330737 M * faheem__1 Bertl: So, 2.6.18 is an older kernel? This remapping is not activated here, then? 1195330754 M * Bertl no, your config is assuming wrong things :) 1195330766 M * faheem__1 Bertl: which config? 1195330771 M * faheem__1 postgres config? 1195330780 M * Bertl i.e. if we have the remapping from 127.0.0.1 to the first assigned IP (as done by the debian kernels) 1195330780 M * faheem__1 or kernel config? 1195330808 M * Bertl then naturally, you have to allow the first IP for database connections, instead of 127.0.0.1 1195330833 M * Bertl alternatively you can use one of the recent kernels, which also does the reverse mapping 1195330938 M * faheem__1 Bertl: The reverse mapping is not happening in 2.6.18, and that is causing problems? Correct summary? 1195330956 M * Bertl no, that is causing problems if you _assume_ that it happens :) 1195330972 M * faheem__1 Bertl: Is postgres for example assuming this? 1195330990 M * faheem__1 I'm not assuming anything, as far as I'm aware. 1195331024 M * Bertl no, postgres is not assuming anything either, but the debian default config probably is 1195331031 P * Yvo 1195331056 Q * bonbons Read error: No route to host 1195331073 M * faheem__1 Bertl: Oh, the kernel default config? 1195331092 M * Bertl well, no, the postgres permissions I think 1195331103 M * Loki|muh Bertl: since when is this remapping active? 1195331109 M * Loki|muh -active+available 1195331115 M * Bertl in recent devel kernels 1195331135 M * Bertl 2.2.x is able to do most of the mappings correctly but not all 1195331163 M * faheem__1 Bertl: 2.2.x? 
1195331182 M * Bertl vs2.2.x (current stable branch) 1195331193 M * Bertl vs2.3.x current devel 1195331194 J * bonbons ~bonbons@2001:960:7ab:0:20b:5dff:fec7:6b33 1195331205 M * faheem__1 Bertl: Sorry, confused, thought you were talking about the Linux kernel for a moment. :-) 1195331220 M * Bertl yeah, I know, we use similar numbering 1195331269 M * faheem__1 Bertl: What is the recommended workaround? Online people recommended setting localhost to the "public ip". 1195331305 M * Bertl that is a good start 1195331328 M * Bertl (if "public ip" means first assigned ip) 1195331338 M * faheem__1 Bertl: Upgrading to a more recent kernel probably not an option, unfortunately. 1195331357 M * Bertl another good decision is to explicitly allow the first assigned ip for connections (in the security config) 1195331369 M * Bertl faheem__1: sure that is an option too 1195331375 M * faheem__1 Bertl: Yes. first and only assigned ip. :-) 1195331388 M * Bertl faheem__1: compile your vs2.3.x kernel and it should work out of the box 1195331411 M * faheem__1 Bertl: I mean, from a stability perspective. I'm using the Debian precompiled kernels for etch. 1195331426 M * faheem__1 Bertl: I'm sure it would work just fine if I did it. 1195331426 M * Bertl well, that's not very stability oriented 1195331438 M * faheem__1 Bertl: What isn't? 1195331444 M * Bertl we fixed quite a number of bugs since that version 1195331459 M * Bertl so you have the stability of all the old (and unfixed) bugs :) 1195331470 M * faheem__1 Bertl: What version was in 2.6.18? 1195331489 M * faheem__1 Bertl: Well, that's Debian's way. It's their way or the highway. :-) 1195331502 M * Bertl I'd assume it is an older vs2.0 kernel, maybe an early vs2.2 one? 1195331513 M * faheem__1 Bertl: There are always new bugs... 1195331534 M * faheem__1 Bertl: Debian does backports, but only for critical stuff. 
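The "set localhost to the first assigned ip" workaround discussed above can be sketched like this. The IP is a placeholder and a temp file stands in for the guest's /etc/hosts, so the sketch is safe to run anywhere; inside a real guest you would edit /etc/hosts itself.

```shell
# Sketch: point "localhost" at the guest's first assigned IP instead
# of 127.0.0.1 (for older kernels without the reverse lback mapping).
FIRST_IP=192.0.2.10                          # hypothetical first assigned IP
hosts=$(mktemp)                              # stand-in for the guest's /etc/hosts
printf '127.0.0.1\tlocalhost\n' > "$hosts"   # the stock Debian entry

# Rewrite the loopback entry in place (via a temp copy for portability).
sed "s/^127\.0\.0\.1/$FIRST_IP/" "$hosts" > "$hosts.new" && mv "$hosts.new" "$hosts"
result=$(cat "$hosts")
echo "$result"
rm -f "$hosts"
```

As Bertl notes, the complementary step is to allow the first assigned IP in each service's access config (e.g. the database's permitted client addresses) where 127.0.0.1 was allowed before.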
1195331554 M * Bertl not during a stable release/branch here 1195331556 P * mwagner122 1195331580 M * Bertl i.e. vs2.2.x will not introduce any fancy new features, only fixes 1195331580 M * Loki|muh Bertl: is there a way in a live system to fetch the version of the vserver-patch? 1195331591 M * faheem__1 Bertl: Ok. I vaguely understand the point. Thanks for taking the time to explain it. 1195331600 M * Bertl you're welcome! 1195331618 M * Bertl Loki|muh: not really, you can query the API version, but that's about it 1195331659 M * faheem__1 Bertl: Oh, just to be clear, is 127.0.0.1 remapped to the first public IP, or something else? 1195331685 M * faheem__1 Bertl: Oh, sorry. You already said Debian did that. 1195331700 M * Bertl yep, recent versions use a special 'lback' address 1195331709 M * Bertl which is usually chosen as 127.x.y.1 1195331978 M * Mark17 how can I change the ips of the vps servers on a master when I am logged in on the master with the user root? 1195332016 M * Bertl you can edit all config files from the host with an editor or rewrite them with 'echo' 1195332022 M * faheem__1 Bertl: Any documentation about this online you can point me to? 1195332044 M * Bertl Mark17: the network config will be in /etc/vservers//interfaces// 1195332064 M * Mark17 I will change that and see if that works 1195332075 M * Bertl faheem__1: yes, the source code .. AFAIK, nobody did document specifics yet 1195332371 M * faheem__1 Bertl: Oh, Ok. Thanks for the info. I added it to my "irc sessions with vserver developers" notes. :-) 1195332614 J * Infinito ~argos@200-140-155-92.gnace701.dsl.brasiltelecom.net.br 1195332768 Q * DLange Quit: Bye, bye. Hasta luego. 1195332852 Q * derjohn Ping timeout: 480 seconds 1195332888 J * derjohn ~derjohn@dslb-084-058-216-091.pools.arcor-ip.net 1195333148 M * Bertl wb Infinito! 1195333169 M * Infinito :) 1195333177 M * Bertl faheem__1: are those notes public? 
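Rewriting a guest's IP "with 'echo'", as Bertl suggests, can be sketched like this. The guest name, interface index, and address are hypothetical, and a temp directory stands in for /etc/vservers so the sketch is harmless to run; on a real host you would write under the actual config tree.

```shell
# Sketch: change a guest's IP from the host by rewriting the
# interfaces config file.  All names/addresses are placeholders.
NEWIP=192.0.2.20                             # hypothetical new address
confdir=$(mktemp -d)                         # stand-in for /etc/vservers/<guest>
mkdir -p "$confdir/interfaces/0"             # interface index 0, as an example

# util-vserver reads the address from the 'ip' file of each interface dir.
echo "$NEWIP" > "$confdir/interfaces/0/ip"
newip_written=$(cat "$confdir/interfaces/0/ip")
echo "$newip_written"
rm -rf "$confdir"
```

The change takes effect the next time the guest is started.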
1195333196 M * Bertl faheem__1: if not, then you might consider adding a few words/pages to the wiki 1195333485 J * dowdle ~dowdle@71-36-198-163.blng.qwest.net 1195333492 M * Bertl wb dowdle! 1195333526 M * dowdle Greetings a script Bertl runs? 1195333538 M * Bertl yep, all wetware :) 1195333640 M * faheem__1 Bertl: No, just private cut and pastes of the irc sessions. :-) 1195333669 M * faheem__1 Bertl: Where would be a good place to put such info? 1195333682 M * faheem__1 dowdle: Yes, Bertl is fast. :-) 1195333704 M * Bertl go through the wiki, look where you would expect to find that info, and add it there :) 1195333737 M * faheem__1 Bertl: networking, maybe... 1195334052 M * Bertl sounds good to me ... 1195334226 J * larsivi ~larsivi@101.84-48-201.nextgentel.com 1195334644 J * click_ click@ti511110a080-1164.bb.online.no 1195334764 Q * click Ping timeout: 480 seconds 1195334966 J * Aiken ~james@ppp59-167-115-173.lns3.bne4.internode.on.net 1195335754 Q * bonbons Quit: Leaving 1195335884 J * Yvo ~yvonne@i3ED6D37D.versanet.de 1195335941 P * Yvo 1195336773 M * bzed faheem__1: backports.org also ships pretty new stuff. but the last stable release only gets security fixes (and a .23 kernel is planned) 1195336860 N * AStorm Guest1042 1195336863 J * AStorm ~astralsto@tor-irc.dnsbl.oftc.net 1195336994 Q * Guest1042 Ping timeout: 480 seconds 1195337645 Q * AStorm Remote host closed the connection 1195337684 J * AStorm ~astralsto@tor-irc.dnsbl.oftc.net 1195338659 J * FireEgl FireEgl@4.0.0.0.1.0.0.0.c.d.4.8.0.c.5.0.1.0.0.2.ip6.arpa 1195339343 M * faheem__1 bzed: Yes, debian stable only gets security fixes. I'm reluctant to mess with the stock Debian vserver patched kernel. It is mostly working Ok for me. 1195339386 M * Bertl with some adjustments, it should work fine for you 1195339408 M * Bertl (it just doesn't have all the fixes/enhancements of recent kernels) 1195339414 M * bzed faheem__1: etch +1/2 should come with a .23 kernel, I hope there's a vserver patch for it until then... 
:) 1195339437 M * Bertl bzed: we'll see, but unlikely for the stable branch 1195339473 M * faheem__1 bzed: What is etch +1/2, you mean the next release/or the one after that? 1195339497 M * faheem__1 Looks like you've written etch + half... :-) 1195339536 M * bzed faheem__1: there will be an etch and a half release... mainly with a new kernel 1195339579 M * faheem__1 bzed: Oh, so you do mean etch + half. News to me... I've never heard of a stable + half before. 1195339600 M * faheem__1 Usually they just have very conservative changes for critical security bugs and stuff. 1195339621 M * bzed first time that's supposed to happen 1195339627 M * Bertl well, I guess they will put in already outdated stuff there too 1195339648 M * faheem__1 bzed: Interesting. I try to keep up with Debian news, but must have missed it. link? 1195339655 M * Bertl but let's wait and see, maybe uv-0.30.215 (or at least 214) will be there 1195339660 M * bzed no clue where that is posted 1195339683 M * bzed the summary of the irc meeting was posted somewhere 1195339688 M * faheem__1 bzed: Ok. Thanks for the info. 1195339695 M * faheem__1 bzed: Which irc meeting? 1195339699 M * bzed Bertl: there won't be updates to the rest of the system if I remember right 1195339743 M * faheem__1 bzed: I get some hits for etch and a half... 1195339756 M * faheem__1 http://wiki.debian.org/EtchAndAHalf 1195339854 M * bzed faheem__1: <20071106234347.GB20602@colo.lackof.org> on debian-release@l.d.o 1195339855 N * AStorm Guest1047 1195339857 M * faheem__1 Well, that's good news. Debian being a bit adventurous. Who'd have thought it? 1195339862 J * AStorm ~astralsto@tor-irc.dnsbl.oftc.net 1195339862 Q * Guest1047 Ping timeout: 480 seconds 1195339896 M * Bertl bzed: so there wouldn't be a point in having a Linux-VServer patch for 2.6.23 then, as it would definitely require newer tools 1195339952 M * faheem__1 Bertl: Maybe the Debian people would consider a waiver? No harm in asking. 
1195339976 M * faheem__1 There aren't that many tools tightly tied to the kernel like that. 1195340012 M * faheem__1 bzed: Yes, saw that. 1195340012 M * bzed Bertl: other tools have to be updated, too - for example the wireless stuff, so I guess that wouldn't be a problem 1195340026 M * Bertl well, if debian folks show up here asking for a kernel patch, that shouldn't be a problem ... 1195340052 M * faheem__1 Bertl: Do you talk to the Debian kernel team much? 1195340074 M * Bertl (folks being maintainers) that would count as real interest and of course we would try to provide something 1195340089 M * Bertl yes waldi and friends should be around ... 1195340456 M * faheem__1 Bertl: I'm getting the german paypal page from http://linux-vserver.org/Donations#Donating_Money again. 1195340485 M * Bertl please use the link on 13thfloor.at, that one should be fine 1195340517 M * Bertl http://www.13thfloor.at/vserver/donate/ 1195340534 M * faheem__1 Bertl: Suggest redirecting to that link. Thanks. 1195340575 M * Bertl yeah, well, we'll add the magic paypal option to circumvent their 'smart logic' there too :) 1195340762 Q * dna Quit: Verlassend 1195341130 J * click click@ti511110a080-0771.bb.online.no 1195341234 Q * click_ Ping timeout: 480 seconds 1195341501 J * click_ click@ti511110a080-0720.bb.online.no 1195341617 Q * click Ping timeout: 480 seconds 1195342251 Q * larsivi Quit: Konversation terminated! 1195342375 M * faheem__1 Bertl: Can you confirm receipt of the donation? 1195342382 M * Bertl sec 1195342560 M * Bertl yep, I can confirm a 18.92 USD donation. thanks! 1195342850 Q * meandtheshell Quit: Leaving. 1195343033 J * click click@ti511110a080-0284.bb.online.no 1195343149 Q * click_ Ping timeout: 480 seconds