1178411358 M * brcc_ Daniel, you still there?
1178411381 M * brcc_ I am comparing vserver memory limits with a real linux server and things seem kinda weird
1178412078 M * fatgoose how weird?
1178412244 M * brcc_ I am doing some calculations here.. comparing a 128mb physical with a 128mb vserver
1178412596 M * fatgoose ok
1178412627 Q * yarihm Quit: This computer has gone to sleep
1178412820 M * Bertl brcc_: and what is weird?
1178412946 Q * bzed Quit: Leaving
1178413277 M * brcc_ bertl
1178413286 M * brcc_ comparing with uml
1178413293 M * brcc_ it seems I can run much more stuff using uml
1178413298 M * brcc_ with the same amount of ram
1178413332 M * Bertl that would really surprise me :)
1178413369 M * brcc_ I am going crazy over here, doing lots of comparisons. With vs2.0, if you ran 50 apache processes, they wouldn't be accounted
1178413373 M * brcc_ with vs2.2, 128mb ram, with 40 you run into problems
1178413428 M * Bertl and with uml?
1178413445 M * Bertl btw, do you have the limit overview of vs2.0 at hand?
1178413465 M * brcc_ yes, I have everything here
1178413471 M * brcc_ the limit when running 40 processes?
1178413480 M * brcc_ I mean, 100
1178413490 M * Bertl no, the vs2.0 with 50 or so
1178413504 M * Bertl or 100, whatever
1178413567 M * brcc_ ok, hold on
1178413863 M * brcc_ Bertl: http://paste.uni.cc/15239
1178414039 M * Bertl and with 2.2?
1178414122 Q * gerrit Ping timeout: 480 seconds
1178414473 M * brcc_ http://paste.uni.cc/15240
1178414604 M * Bertl and you are sure you are running the same guests on them?
1178414646 M * Bertl e.g. the 2.0.2 has only half the number of files open
1178414653 M * brcc_ no
1178414656 M * brcc_ these guests are different
1178414659 M * brcc_ the ones with java were the same
1178414711 J * gerrit ~gerrit@c-67-160-146-170.hsd1.or.comcast.net
1178414751 M * brcc_ but what I mean is, on vs2.0 if you run 10 or 100 you don't lose much ram
1178414758 M * brcc_ on vs2.2 if you run 40 you lose almost 128mb
1178414783 M * brcc_ imagine how sad it would be to upgrade a server from vs2.0 to vs2.2 without knowing this
1178414784 M * brcc_ hehe
1178414899 M * Bertl how do you define/test 'lose ram'?
1178414986 M * brcc_ running 100 apache processes would use around 50mb
1178415001 M * brcc_ running 120 would use around 60-70 (just examples)
1178415006 M * brcc_ (this on vs2.2)
1178415010 M * brcc_ oops
1178415014 M * brcc_ this on vs2.0
1178415020 M * brcc_ while on vs2.2, 40 would fill 128mb ram
1178415121 M * Bertl and you define 'use' as the RSS accounting?
1178415141 M * Bertl or do you do some serious checks on the host?
1178415180 M * brcc_ rss accounting, as shown in free -m
1178415199 M * Bertl okay, as we told you before, that was wrong/incomplete with 2.0
1178415202 M * brcc_ and it doesn't make it possible to launch new processes when the limit is reached
1178415207 M * brcc_ yes, I got it
1178415208 M * brcc_ hehe
1178415241 M * Bertl so, except for the 'wrong' display in 2.0 (which we know about) nothing changed, no?
1178415277 M * Bertl btw, you can get the 'new' rss amount by adding up anon and rss (on 2.0)
1178415333 M * Bertl what seems more important to me would be to measure the memory consumption on the host
1178415340 M * Bertl (i.e. with /proc/meminfo)
1178415347 M * brcc_ yes, I got it
1178415350 M * brcc_ new rss = old rss + anon
1178415355 M * Bertl and to compare that with e.g. UML
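Besides free -m inside the guest, the per-guest accounting that free is derived from can be read directly on the host. A minimal sketch, assuming the vs2.x /proc layout and a made-up xid of 1042:

    # example xid; the field layout differs between vs2.0 and vs2.2 kernels
    XID=1042
    cat /proc/virtual/$XID/limit
    # the RSS/ANON/RMAP rows are the accounted values from which the
    # in-guest free/meminfo output is synthesized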
1178415387 M * Bertl if it turns out that UML uses less memory than Linux-VServer, then we have something to look into :)
1178415403 M * brcc_ but what I am afraid of is, with vs2.0 new apache processes could be forked without problems when new connections were received (until reaching MaxClients)
1178415416 M * brcc_ now this number of connections is really limited
1178415433 M * brcc_ maybe I am seeing it wrong
1178415450 M * Bertl well, raise the limit, or completely disable it
1178415475 M * Bertl your old setup has no limits on anon memory
1178415501 M * Bertl if you 'just' set a limit on RMAP, then you get the same behaviour (regarding limits) you had before
1178415516 M * Bertl (which I would not suggest to do, btw :)
1178415524 M * brcc_ hehe, got it
1178415539 M * brcc_ on the old approach I had people using, for example, 500mb but just showing 100mb. right?
1178415551 M * Bertl something like that
1178415589 M * Bertl the values are not that critical, because if you have several guests, they might actually share the RMAP to some degree
1178415596 M * Bertl (not the ANON though)
1178415600 M * brcc_ I am wondering if I can run 100 processes on uml
1178415640 M * brcc_ got it. so anon=shared
1178415681 J * er ~sapan@pool-71-168-215-87.cmdnnj.fios.verizon.net
1178415691 M * Bertl no, anon is anonymous mappings :)
1178415713 M * er hi Bertl
1178415725 M * Bertl hey er! LTNS!
1178415751 M * brcc_ ahh ok
1178415753 M * brcc_ sorry, I read it wrong
1178415756 M * brcc_ the rmap, not the anon
1178415757 M * brcc_ hehe
1178415757 M * Bertl np
1178415760 M * brcc_ too late here
1178415764 M * brcc_ going to go out and drink a bit
1178415766 M * er yep, I disappeared into a black hole, one of the many that I frequent
1178415770 M * brcc_ and think about memory limiting :)
1178415776 M * brcc_ thanks for the explanation bertl
1178415779 M * Bertl er: and how was it?
1178415780 M * brcc_ good night everyone
1178415786 M * Bertl brcc_: have a good one!
1178415867 M * er Bertl: not too bad, I've been in worse.
1178415895 M * er Bertl: did you follow my thread with Daniel a little while ago?
1178415921 M * Bertl not much ...
1178415940 M * Bertl but I can read up on it if it was important?
1178415976 M * er I can re-ask in 1 line
1178416122 M * Bertl then please do so :)
1178416192 M * er ok... if I modify sk->sk_xid for a connection from X->Y, will it get transferred to guest Y?
1178416248 M * Bertl the socket? no, only partially, i.e. it will give 'funny' results
1178416273 M * Bertl sockets currently carry 4 identifiers, two for the xid and two for the nid
1178416450 M * er hm. ok. and is there a way of doing this?
1178416489 M * Bertl what do you actually want to do? move sockets around?
1178416515 M * Bertl and what about the tasks using those sockets?
1178416532 J * fatgoose_ ~samuel@206-248-175-36.dsl.teksavvy.com
1178416582 Q * fatgoose_
1178416597 M * er what I actually want to do: get a transparent HTTP proxy which currently works with VNET to work without VNET
1178416650 M * er it consists of a process sitting in the root context, which listens for HTTP requests and, depending on the Host: header, sets the xid of an incoming connection to the guest which has registered for that particular host
1178416716 M * Bertl why would you do that?
1178416742 M * er currently, the only 2 steps that VNET seems to carry out to this effect are to allow userspace processes to set a socket's xid
1178416767 M * er why: because guests share the same IP address
1178416786 M * Bertl yeah, but that is self-inflicted, isn't it?
1178416791 M * er and their http servers need to run transparently
1178416795 M * er Bertl: I know :)
1178416806 M * Bertl and you need to use separate ports to bind to anyways
1178416826 M * Bertl so your transparent proxy has to map to various ports
1178416844 M * Bertl which in turn makes the context tagging obsolete
1178416871 Q * fatgoose Ping timeout: 480 seconds
1178416906 M * Bertl or am I missing something?
1178416981 M * er hm. VNET's autobind feature, which reroutes sys_bind, seems to allow multiple guests to bind to the same port
1178417041 M * Bertl which sounds like a bug, and is a feature we didn't know about last time, no?
1178417062 M * Bertl how do you handle two sshd daemons bound to the same port?
1178417080 M * er right, because we weren't looking at anything other than raw sockets.
1178417115 M * er Bertl: VNET introduces an xid field in struct sk_buff
1178417145 M * Bertl we use the fwmark or the secmark instead
1178417172 M * er and sets that field based on its own bind table in one of the NF hooks it implements
1178417176 M * er right
1178417199 M * er but right now you're only looking at those fields in RAW sockets.
1178417220 M * Bertl so how does it handle the two sshds bound to port 22?
1178417454 M * er I suppose it relies on the fact that when one of the bound guests accept()s, only one get_tcp_connection in tcp_ipv4.c will succeed - and that'll be the one with the right xid set for the connection
1178417481 M * Bertl okay, but which xid will be set?
1178417571 M * er Bertl: the transparent proxy defines one policy for that
1178417597 M * Bertl transparent proxy for ssh?
1178417623 M * er another policy is defined by libproper, which defines the slice which has access to the port at a time
1178417646 M * er Bertl: well, ssh makes this particular function a bug more than a feature
1178417681 M * er HTTP slides it closer towards a feature, even though the bugginess is quite clearly visible
1178417748 M * er anyhow, that seems to answer my question: you think that doing what VNET does right now should produce funny results
1178417764 M * Bertl well, if you want to implement something hacky like this, the best way is probably to add checks for the skb_mark to normal sockets, and disable the collision detection (or limit it to contexts)
1178417787 M * er right - I was planning to come to that eventually
1178417791 M * er VNET does the latter right now
1178417798 M * er and uses xids for the former
1178417801 M * Bertl so that ports can be bound without ever knowing that there is another guest using the same port
1178417824 M * Bertl but you should base that on nid instead of xid
1178417829 M * er ok
1178418054 Q * infowolfe Quit: Leaving
1178418843 Q * ensc Killed (NickServ (GHOST command used by ensc_))
1178418853 J * ensc ~irc-ensc@p54b4ecf5.dip.t-dialin.net
1178419296 J * DoberMann_ ~james@AToulouse-156-1-176-30.w90-38.abo.wanadoo.fr
1178419406 Q * DoberMann[ZZZzzz] Ping timeout: 480 seconds
1178419946 Q * FireEgl Quit: Bye...
1178419975 J * FireEgl FireEgl@2001:5c0:84dc:0:81bf:9202:bf7b:c83f
1178420158 Q * FireEgl
1178420660 M * Bertl okay, off to bed now .. have a good one everyone!
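As an aside on Bertl's remark above that Linux-VServer relies on the fwmark or secmark rather than an skb xid field: the sketch below shows plain netfilter fwmark tagging, nothing VNET- or Linux-VServer-specific, with a made-up guest address and mark value.

    # mark incoming HTTP traffic destined for one example guest address
    iptables -t mangle -A PREROUTING -d 10.0.0.101 -p tcp --dport 80 \
             -j MARK --set-mark 101
    # anything that can match the mark (e.g. policy routing) can then
    # treat the flow per guest
    ip rule add fwmark 101 table 101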
1178420667 N * Bertl Bertl_zZ
1178420887 J * ktwilight_ ~ktwilight@112.82-66-87.adsl-dyn.isp.belgacom.be
1178421297 Q * ktwilight Ping timeout: 480 seconds
1178422329 P * er
1178423492 Q * cehteh Ping timeout: 480 seconds
1178425581 Q * shedi Ping timeout: 480 seconds
1178426116 J * shedi ~siggi@ftth-237-144.hive.is
1178433040 J * tudenbart ~willi@xdsl-81-173-229-65.netcologne.de
1178433491 Q * dothebart Ping timeout: 480 seconds
1178436261 J * phedny ~mark@ip56538143.direct-adsl.nl
1178436293 J * dna ~naucki@211-231-dsl.kielnet.net
1178436664 Q * phedny_ Ping timeout: 480 seconds
1178436916 Q * hardwire Ping timeout: 480 seconds
1178437855 J * infowolfe ~infowolfe@67.164.195.129
1178438718 J * mattzerah ~matt@121.50.222.55
1178442216 J * FireEgl FireEgl@Sebastian.Atlantica.CJB.Net
1178442599 J * infowolfe_ ~infowolfe@67.164.195.129
1178442921 Q * infowolfe Ping timeout: 480 seconds
1178443801 J * kwowt69 ~quote@pomoc.ircnet.com
1178443802 M * kwowt69 hi
1178443805 M * kwowt69 derjohn here?
1178443838 M * kwowt69 or anyone else, using his IP patch
1178444001 M * kwowt69 I've patched the kernel but I still don't know how to add more than 16 IPs
1178444138 M * FaUl iirc you have to patch util-vserver as well
1178444171 M * kwowt69 with which patch?
1178444495 M * FaUl I'm not quite sure
1178444513 M * FaUl I've never used that ip-patch
1178444565 M * kwowt69 argh
1178444581 M * kwowt69 how come every time I start to screw around with these IPs my filesystem breaks down!
1178444617 Q * bavi
1178445140 J * yarihm ~yarihm@84-74-20-183.dclient.hispeed.ch
1178445191 N * DoberMann_ DoberMann
1178445920 M * sid3windr because you're using reiserfs?
1178446281 Q * sladen Ping timeout: 480 seconds
1178446404 M * daniel_hozac kwowt69: just add the addresses. util-vserver 0.30.212+ doesn't have a limit on the number of IP addresses.
1178446515 J * sladen paul@starsky.19inch.net
1178447042 J * ema ~ema@rtfm.galliera.it
1178447052 Q * Aiken Quit: Leaving
1178447773 M * derjohn kwowt69, yes
1178447836 M * derjohn kwowt69, I see daniel_hozac did already answer. Besides that: it's not my patch, I just adopted what I found ....
1178447893 M * sid3windr adopted? then it IS your patch! ;)
1178448346 Q * Nam Ping timeout: 480 seconds
1178448798 Q * infowolfe_ Quit: Leaving
1178448799 Q * derjohn Remote host closed the connection
1178448834 J * infowolfe ~infowolfe@67.164.195.129
1178449044 J * derjohn ~derjohn@80.69.41.3
1178449231 Q * gerrit Ping timeout: 480 seconds
1178449259 M * derjohn well, yes, I mean I "took that patch over into my patchset". But the patch simply changes a single define, that's all .. (with .212+ utils you don't even have to change the utils anymore ...) I wonder when this define will become a compile-time option ...
1178449422 J * suser ~tux@bzq-82-81-19-143.red.bezeqint.net
1178449487 M * daniel_hozac make the patch?
1178449529 J * bzed ~bzed@dslb-084-059-121-166.pools.arcor-ip.net
1178449896 M * kwowt69 sid3windr: I'm using ext3
1178449965 J * gerrit ~gerrit@67.160.146.170
1178450399 Q * suser Remote host closed the connection
1178450522 Q * toidinamai_ Ping timeout: 480 seconds
1178451360 Q * yarihm Read error: Connection reset by peer
1178451469 J * yarihm ~yarihm@84-74-20-183.dclient.hispeed.ch
1178452335 Q * yarihm Quit: Leaving
1178452358 M * derjohn Bertl_zZ, daniel_hozac, bonbons: I changed the #define PERCPU_ENOUGH_ROOM in include/linux/percpu.h to a value of 65536. Now everything seems to work, no more "could not allocate percpu ....". What is the major drawback of that? +32K seems not much nowadays.
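One way to dig into that question is to survey who actually uses the per-cpu allocator, which is the grep derjohn describes a few lines further down. Paths below are examples for a patched 2.6.2x tree.

    cd /usr/src/linux           # example path to the patched tree
    # dynamic per-cpu users (derjohn finds ipv4/ipv6, vserver and xfs)
    grep -rl 'alloc_percpu' net/ipv4 net/ipv6 fs/xfs kernel/vserver
    # and where the per-cpu room is sized
    grep -n 'PERCPU_ENOUGH_ROOM' include/linux/percpu.h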
1178452587 J * Piet hiddenserv@tor.noreply.org
1178452642 J * yarihm ~yarihm@84-74-20-183.dclient.hispeed.ch
1178453329 N * Bertl_zZ Bertl
1178453333 M * Bertl morning folks!
1178453407 M * Bertl derjohn: that is basically harmless, but the question is, does it work with mainline (which uses 32k)? if so, the calculation must be going wrong somewhere and/or something uses more percpu space than expected (which should better be identified)
1178453454 M * Bertl derjohn: fact is, vs2.2.0 should precisely calculate the percpu room for the given number of (max) contexts
1178453599 M * Bertl okay, off for now .. back later ...
1178453603 N * Bertl Bertl_oO
1178453655 M * derjohn Bertl, hmmm, I grepped through the sources for alloc_percpu and can only see ipv{4,6} stuff, vserver stuff and xfs_mount using it. I cannot say which module "combinations" show the problem, but I wonder why the problem does not come up with an AMD64-optimized kernel.
1178453706 M * derjohn bonbons says he uses this patch with a similar patchset on i386 without problems .. but I didn't get hold of him for further investigation.
1178453881 M * derjohn hm, I reduced the # of max. CPUs from 8 (default) to 4. Could that be the source of the problem? Well, I would think it leaves more space per cpu, but maybe the calculation works "better" with 8+ cpus?
1178453976 J * doener ~doener@host.magicwars.de
1178455924 J * Val ~val@v41.org
1178455933 M * Val hi :)
1178456087 M * Val kernel 2.6.20 + vs2.2.0 -> built one vserver with 1G of space available and dlimits set -> df -h shows 100% used (1G) but only 235M actually used on disk...
1178456128 M * Val an idea?
1178456145 M * Val (util-vserver 0.30.212)
1178456164 M * Val (xid tags are set)
1178456341 M * derjohn Val, hm, you say within the guest there is 100% used, but from the host (du -hs) only 235M are used?
1178456375 M * Val yes
1178456398 M * Val I see the dlimit cache dir links are not created by the vserver build command
1178456402 M * Val right?
1178456425 M * Val testing again
1178456437 M * derjohn hm, did you delete stuff from the host?
1178456489 M * derjohn did you set the path correctly? (i.e. if you copy a /etc/vservers/xxx dir, the path which is limited still points to the old one)
1178456513 M * Val I did not copy anything
1178456522 M * Val I built a new vserver
1178456536 M * derjohn Val, I assume you already did "chxid" on the vserver's dir?
1178456548 M * Val using an initpost script to put some missing stuff
1178456558 M * Val derjohn: yup :)
1178456579 M * derjohn hm, the % used should be updated immediately.
1178456589 J * bonbons ~bonbons@158.64.111.14
1178456603 M * derjohn Val, maybe you also limited inodes? maybe they ran out?
1178456608 M * Val perhaps the missing dlimits cache link is the trick
1178456610 M * derjohn bonbons, welcome!
1178456633 M * Val derjohn: nope, 40000 inodes are set
1178456648 M * derjohn Val, I am using dlimits but never had to link the cache (debian's util-vserver)
1178456662 M * Val I'm on debian too
1178456663 M * derjohn Val, 40000, well, that is not very much.
1178456680 M * Val ... this is the first time my script failed building a new vserver
1178456690 M * derjohn I had to raise it several times for someone who compiles packages on the guest (hello s0undt3ch!)
1178456705 M * derjohn Val, you had limits before?
1178456715 M * Val yep, many times
1178456751 M * Val wait... yes, it works
1178456758 M * Val it was the cache link
1178456764 M * Val :)
1178456765 M * derjohn hm, did you upgrade the utils?
1178456768 M * Val yep
1178456788 M * derjohn if so, could you file a bug or inform micah about what you found out?
1178456794 M * Val what I did:
1178456799 M * Val mkdir -p /var/cache/vservers/dlimits/$vhost/cache
1178456805 M * derjohn (maybe daniel_hozac, too)
1178456825 M * derjohn Val, and this wasn't created with the .212 (etch?) utils?
1178456828 M * Val derjohn: I will
1178456830 M * bonbons hey derjohn
1178456836 M * Val yes, the etch one
1178456846 M * derjohn Val, good to know ;)
1178456865 M * Val I'll publish my script too
1178456866 M * derjohn bonbons, hello! did you get my mail? I have one "update"
1178456871 M * derjohn Val, fine!
1178456889 M * derjohn Val, on the linux-vserver.org wiki, right?
1178456893 M * Val when I take some time to patch the vserver debootstrap build method to include it
1178456916 M * derjohn bonbons, can you read the backlog here? if not, I'll p-msg you
1178456949 M * derjohn Val, what do you want to patch there?
1178456974 M * derjohn debootstrap is not responsible for the missing link
1178456992 M * bonbons I have no backlog of today, but it should be archived on l-v.org...
1178456994 M * Val some bad init scripts for devices / mount points etc.
1178457010 M * derjohn bonbons, I already p-msged you ... got it?
1178457012 M * Val some rights/default xid/default limits
1178457046 M * Val improved autofs support for roaming user accounts between vservers
1178457048 M * Val ...
1178457076 M * derjohn hm, maybe it makes more sense to use the template function?
1178457083 M * derjohn of util-vserver
1178457251 M * Val yup
1178457280 M * Val may be an addon to the util-vserver package
1178457287 M * Val util-vserver-addon
1178457292 M * Val :)
1178457699 Q * gerrit Ping timeout: 480 seconds
1178457869 M * daniel_hozac Val: hmm? what was the problem?
1178457925 M * daniel_hozac the disk limit stuff works fine for me...
1178457967 M * derjohn daniel_hozac, the prob was a missing cache dir (see above)
1178458426 J * gerrit ~gerrit@c-67-160-146-170.hsd1.or.comcast.net
1178459124 N * DoberMann DoberMann[PullA]
1178460578 N * DoberMann[PullA] DoberMann
1178460640 P * Val
1178460669 M * daniel_hozac derjohn: but that directory should be created by the tools...
1178460707 M * derjohn daniel_hozac, ok, then Val did something strange ... or maybe set a dlimit on that dir ;)
1178460758 M * daniel_hozac the host cannot be limited by disk limits.
1178461435 J * cehteh ~ct@pipapo.org
1178463130 J * transacid ~transacid@transacid.de
1178463654 Q * yarihm Ping timeout: 480 seconds
1178466171 J * toidinamai ~frank@i59F7192E.versanet.de
1178466627 J * yarihm ~yarihm@84-74-20-183.dclient.hispeed.ch
1178466685 Q * ema Quit: leaving
1178466872 M * s0undt3ch yep, I break all your guests derjohn :)
1178466955 J * jrsharp ~jrsharp@c-71-228-234-243.hsd1.tn.comcast.net
1178466989 M * jrsharp hey all
1178466999 M * nebuchadnezzar hello
1178467008 N * Bertl_oO Bertl
1178467015 M * Bertl greetings!
1178467018 M * jrsharp anyone running vserver on the latest stable, 2.6.21.1?
1178467039 M * jrsharp I tried the latest posted patch, but it would not compile...
1178467039 M * nebuchadnezzar I just have a crazy GCC in a guest, I can not kill it, it just eats all my CPUs :-(
1178467055 M * jrsharp should I just be satisfied with 2.6.20.4?
1178467072 M * Bertl hmm?
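For reference, the two pieces involved in Val's fix: the cache directory util-vserver keeps its disk-limit state in, plus the limits themselves. The guest name, xid and values below are examples, and the vdlimit option names should be double-checked against your util-vserver version.

    VHOST=myguest; XID=1042     # example values
    # the per-guest dlimit cache dir that was missing in Val's case
    mkdir -p /var/cache/vservers/dlimits/$VHOST/cache
    # 1GB of space and 40000 inodes on the guest's directory, 5% reserved
    vdlimit --xid $XID \
            --set space_total=1048576 \
            --set inodes_total=40000 \
            --set reserved=5 \
            /vservers/$VHOST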
1178467091 M * Bertl jrsharp: could you upload the compile error you get?
1178467096 M * jrsharp sure
1178467116 M * Bertl and what patch precisely did you use?
1178467175 M * brcc_ Bertl good morning
1178467182 M * brcc_ I am already awake and playing with memory limits
1178467208 M * brcc_ using openvz I can have 200 apache processes with memory usage which looks like the vs2.0 behavior
1178467246 M * Bertl well, they probably get the accounting wrong too, as we did before
1178467267 M * brcc_ hehehe
1178467270 M * Bertl but as I said, you should double check with memory usage on the host
1178467283 M * jrsharp bertl: the latest stable vserver patch for 2.6
1178467288 M * jrsharp it was meant for 2.6.20.4
1178467300 M * Bertl jrsharp: why not use the patch for 2.6.21?
1178467307 M * jrsharp I didn't see it
1178467319 M * brcc_ if I limit just rmap (and not rss), the free command inside the guest will act weird, won't it? I mean, there would be memory problems while the free command still shows plenty of memory?
1178467347 M * jrsharp the latest listed on the website, and the latest I found on the ftp, was the one for 2.6.20.4
1178467364 M * Bertl http://vserver.13thfloor.at/Experimental/patch-2.6.21-vs2.2.0-rc1.diff
1178467383 M * jrsharp the compile error I get is in sched.c, too few args to task_running_tick
1178467385 M * Bertl brcc_: yes, that is true, meminfo is based on rss
1178467398 M * Bertl jrsharp: take the one for 2.6.21 please :)
1178467420 M * Bertl jrsharp: and let us know how it goes ...
1178467435 M * jrsharp yes
1178467437 M * jrsharp definitely
1178467441 M * brcc_ Bertl: any workaround I can do there?
1178467517 M * Bertl you mean, to get the supposedly 'wrong' values?
1178467528 M * brcc_ yes
1178467541 M * brcc_ wrong values, but something that can be compared to what competitors have
1178467552 M * Bertl interesting ...
1178467588 M * brcc_ the problem is: "why can I run tomcat in a 128mb vps at all ISPs but I can't at yours?"
1178467597 M * Bertl well, it should not be too hard to revert the fixes :) but we are actually more interested in getting the accounting as good as possible
1178467688 M * Bertl but before we continue here, I'd prefer to get a comparison with e.g. UML or a real machine ... could you do some tests there with, let's say, a fixed number of 100 apache processes and see how much memory is actually used? i.e. get the /proc/meminfo before and after the start of those processes?
1178467708 M * brcc_ sure
1178467723 M * brcc_ is it ok to do that with vmware?
1178467731 M * Bertl sure
1178467820 M * brcc_ one moment
1178467824 M * brcc_ going to do that on my local machine
1178467846 M * Bertl make sure to have a comparable kernel
1178467854 M * Bertl i.e. same arch, same kernel version
1178468008 M * brcc_ hmm, I don't have that
1178468009 M * brcc_ hehe
1178468011 M * brcc_ http://pastebin.ca/474055
1178468048 M * brcc_ this was not on vmware, it was on my local machine. 90 apache processes are using just 10 MB
1178468071 M * brcc_ while on vs2.2, 50 apache processes will use all 128MB
1178468081 M * brcc_ don't they share ram?
1178468122 M * brcc_ I don't know how this works, but does each fork take the same amount of memory as the parent? don't they share?
1178468519 M * brcc_ do you think that downgrading to the old stable would be an immediate solution to my specific problem?
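Bertl's test protocol from above, spelled out as a script. A rough sketch: it assumes apache is already configured to start a fixed number of workers (e.g. prefork StartServers pinned to 100) and is meant to be run on each system being compared (host, UML, bare metal).

    #!/bin/sh
    # snapshot the interesting /proc/meminfo fields before and after
    snap() { egrep '^(MemFree|Buffers|Cached|AnonPages):' /proc/meminfo; }
    snap > /tmp/meminfo.before
    apachectl start
    sleep 5                     # give the children time to fork
    snap > /tmp/meminfo.after
    diff -u /tmp/meminfo.before /tmp/meminfo.after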
1178468544 M * Bertl fork itself will not share, clone/thread can share
1178468605 M * brcc_ so in theory, if I have 100 apache processes they should use (100 * (memory of parent))
1178468623 M * Bertl if they are separate processes, yes
1178468743 M * Bertl so your example uses roughly 10MB for the 100 apache processes?
1178468789 M * brcc_ yes, 10 mb
1178468798 M * brcc_ I just tried on openvz and it behaves the same way
1178468829 M * brcc_ I signed up with an openvz provider just to check out how those vps' work.. :)
1178468831 M * Bertl okay, could you do the same on vs2.2 and capture both the meminfo and the limits for the guest?
1178468844 M * brcc_ sure
1178468853 M * Bertl (keep the guest unlimited)
1178468864 M * brcc_ ok
1178469128 M * brcc_ I am just waiting for an upload to finish on this guest so I can reboot it with unlimited RSS
1178469140 M * Bertl np
1178469218 M * brcc_ 7 mins left
1178469997 T * * http://linux-vserver.org/ | latest stable 2.2.0, 2.0.3-rc2, devel 2.3.0.12, stable+grsec 2.0.2.1, 2.2.0 | util-vserver-0.30.213 | libvserver-1.0.2 & vserver-utils-1.0.3 | He who asks a question is a fool for a minute; he who doesn't ask is a fool for a lifetime -- share the gained knowledge on the Wiki, and we'll forget about the minute ;)
1178469997 T * daniel_hozac -
1178470071 M * brcc_ Bertl: http://paste.uni.cc/15252
1178470332 M * Bertl okay, did you capture the limit/accounting values too?
1178470347 M * Bertl if not, what do they show right now?
1178470480 J * hardwire ~bip@rdbck-1922.wasilla.mtaonline.net
1178470641 J * Orange ~debian@71-39-124-91.pool.ukrtel.net
1178470804 M * brcc_ 1 sec
1178470830 M * brcc_ can I paste here or should I create another pastebin?
1178470840 M * jrsharp bertl: the patch compiled just fine now
1178470971 M * brcc_ Bertl: http://paste.uni.cc/15253
1178471140 Q * jrsharp Quit: Leaving
1178471254 Q * Orange Quit: Wherever you spit, there are familiar faces everywhere... I UsE tHe MoSt ReAl ScRiPt NeOn Script 8.5! DoWnLoAd It FrOm http://script.i-neon.lv
1178471718 Q * yarihm Quit: Leaving
1178472746 M * Bertl brcc_: so you would be happier to see the anon as used memory, yes?
1178473069 J * er ~sapan@pool-71-168-215-87.cmdnnj.fios.verizon.net
1178473076 M * Bertl wb er!
1178473092 M * er thanks, wow that was quick
1178473118 M * er almost at the same time as the SYN-ACK
1178473132 M * Bertl well, I'm known to be fast :)
1178473139 M * er and modest
1178473141 M * er :)
1178473157 M * Bertl :)
1178473251 M * bXi they should shoot people who have utf8 characters in their quit messages
1178473265 M * Bertl without notice :)
1178473347 M * bXi especially when they're advertising some mirc script
1178475130 M * brcc_ Bertl: I dunno, I liked how it used to be before
1178475147 M * brcc_ it was more like a "real" server and the accounting virtuozzo does
1178475182 M * brcc_ the new behavior introduces lots of limitations when running a web hosting environment, for example
1178475197 M * brcc_ if you have, for example, 10 more web accesses than expected you will reach the memory limit
1178475212 M * brcc_ with the old approach you could have 100 more without big problems
1178475222 M * brcc_ (this is what happens on openvz and "real" servers)
1178475226 M * Bertl well, I think we can make that a backwards compatible option or so
1178475253 M * Bertl I have to think about it a little more and do some tests I guess
1178475265 M * brcc_ ok
1178475278 M * Bertl but if you like, there should be a simple hack to go back to the previous behaviour
1178475313 M * brcc_ Great. I would appreciate that if it is not hard for me (I don't know how to play with kernel source :))
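Whether the forked apache children really share pages with their parent can be checked from /proc/<pid>/smaps (available since 2.6.14). A sketch; the binary name (apache vs. apache2 vs. httpd) varies by distribution. Untouched COW pages normally show up in the shared columns, so a large shared sum is what explains the "100 processes in 10MB" observation.

    # sum shared vs. private memory over all apache processes
    for pid in $(pidof apache); do
        cat /proc/$pid/smaps
    done | awk '
        /^Shared_(Clean|Dirty):/  { shared  += $2 }
        /^Private_(Clean|Dirty):/ { private += $2 }
        END { printf "shared: %d kB  private: %d kB\n", shared, private }'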
1178475369 M * brcc_ I think that having ANON accounted is great, but it still seems not to be correct if we compare this particular apache fork case
1178475504 M * brcc_ what if I go back to the "old stable"?
1178475524 M * Bertl kernel/vserver/limit.c, edit vx_vsi_meminfo/vx_vsi_swapinfo
1178475549 M * Bertl to use whatever limit you consider appropriate (for your purpose) instead of RLIMIT_RSS
1178475684 M * brcc_ after changing that, the output of free will be taken from "RMAP", for example?
1178475742 M * Bertl yep, that is the function which controls free/meminfo
1178475766 M * Bertl you can also add some math there, e.g. only account 1/3rd or so
1178475809 M * brcc_ great. so to get the old behavior I will change that to use RMAP, and in /etc/vservers/rlimits I will have rmap.hard instead of rss.hard
1178475822 M * brcc_ mv rss.hard rmap.hard
1178477729 J * dreamind ~dreamind@C2107.campino.wh.tu-darmstadt.de
1178479239 J * dothebart ~willi@xdsl-81-173-172-3.netcologne.de
1178479247 Q * tudenbart Read error: Connection reset by peer
1178479518 Q * dreamind Quit: dreamind
1178481090 Q * ntrs_ Ping timeout: 480 seconds
1178481147 Q * toidinamai Ping timeout: 480 seconds
1178481222 J * toidinamai ~frank@i59F7192E.versanet.de
1178482356 Q * bonbons Quit: Leaving
1178482370 M * daniel_hozac Bertl: hmm, doesn't fork() create COW-mappings of the anonymous pages?
1178482761 M * Bertl hmm ...
1178483658 J * ema ~ema@rtfm.galliera.it
1178483704 Q * er Quit: er
1178483973 J * onox ~onox@kalfjeslab.demon.nl
1178484640 P * newz2000
1178485304 Q * dna Quit: Leaving
1178486107 Q * bzed Ping timeout: 480 seconds
1178486379 J * bzed ~bzed@dslb-084-059-121-166.pools.arcor-ip.net
1178487202 J * Aiken ~james@121.45.222.137
1178487569 M * Bertl morning Aiken!
1178487591 M * Aiken hi
1178488078 Q * ema Quit: leaving
1178488431 Q * gerrit Ping timeout: 480 seconds
1178489019 Q * sid3windr Ping timeout: 480 seconds
1178489153 J * gerrit ~gerrit@c-67-160-146-170.hsd1.or.comcast.net
1178489428 Q * Johnnie Read error: Connection reset by peer
1178489482 J * sid3windr luser@bastard-operator.from-hell.be
1178490442 Q * fs Ping timeout: 480 seconds
1178490452 J * fs fs@213.178.77.98
1178492017 J * ntrs_ ntrs@68-188-55-120.dhcp.stls.mo.charter.com
1178492307 Q * onox Quit: zzzz
1178493321 N * DoberMann DoberMann[ZZZzzz]
1178494008 J * toidinamai_ ~frank@i59F74F59.versanet.de
1178494135 Q * Aiken Remote host closed the connection
1178494367 Q * toidinamai Ping timeout: 480 seconds
1178494380 Q * Piet Quit: Piet
1178494883 J * Aiken ~james@ppp222-137.lns2.bne1.internode.on.net
1178494939 Q * ruskie Read error: Connection reset by peer
1178495304 J * ruskie ruskie@ruskie.user.oftc.net
1178495432 Q * gerrit Ping timeout: 480 seconds
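Spelled out, the userspace half of the change brcc_ sketches above (the kernel half being the vx_vsi_meminfo edit in kernel/vserver/limit.c); "web1" is an example guest name, and note Bertl's earlier caveat that he would not actually recommend limiting RMAP only.

    cd /etc/vservers/web1/rlimits   # example guest
    mv rss.hard rmap.hard           # limit file-backed pages instead of RSS
    vserver web1 restart            # rlimits are applied at guest start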