1209083313 Q * samtc Read error: Connection reset by peer
1209084055 J * fatgoose ~samuel@76-10-149-199.dsl.teksavvy.com
1209086718 Q * dowdle Remote host closed the connection
1209087134 J * friendly ~friendly@ppp59-167-167-175.lns1.mel4.internode.on.net
1209089502 Q * cehteh Ping timeout: 480 seconds
1209090981 J * dtbartle ~dtbartle@caffeine.csclub.uwaterloo.ca
1209091977 Q * fatgoose Ping timeout: 480 seconds
1209093442 J * fatgoose ~samuel@76-10-149-199.dsl.teksavvy.com
1209093508 J * cehteh ~ct@pipapo.org
1209094799 Q * ryker Ping timeout: 480 seconds
1209095658 J * AndrewLe1 ~andrew@flat.iis.sinica.edu.tw
1209095722 J * mspang mspang@artificial-flavours.csclub.uwaterloo.ca
1209095774 Q * AndrewLee Ping timeout: 480 seconds
1209095794 N * AndrewLe1 AndrewLee
1209096526 Q * padde Remote host closed the connection
1209097250 J * padde ~padde@patrick-nagel.net
1209098641 J * sharkjaw ~gab@64.28.12.166
1209100481 J * virtuoso_ ~s0t0na@ppp92-101-14-147.pppoe.avangarddsl.ru
1209100662 J * bardia ~bardia@lnc.usc.edu
1209100893 Q * virtuoso Ping timeout: 480 seconds
1209101238 J * Slydder ~chuck@194.59.17.53
1209101335 M * Slydder morning all
1209102458 M * phedny morning
1209103427 J * cryptronic ~oli@p54A3AFEC.dip0.t-ipconnect.de
1209103943 Q * nkukard Ping timeout: 480 seconds
1209104932 J * nkukard ~nkukard@196.212.73.74
1209104932 Q * nkukard
1209105118 J * nkukard ~nkukard@196.212.73.74
1209105708 Q * nkukard Remote host closed the connection
1209105902 N * Bertl_zZ Bertl
1209105906 M * Bertl morning folks!
1209106144 J * nkukard ~nkukard@196.212.73.74
1209106188 Q * cryptronic Quit: Leaving.
1209107109 J * dna ~dna@93-200-dsl.kielnet.net
1209109782 J * bfremon ~ben@ANantes-252-1-17-104.w82-126.abo.wanadoo.fr
1209110172 J * FireEgl ~FireEgl@adsl-226-44-128.bhm.bellsouth.net
1209110250 Q * hparker Quit: Read error: 104 (Peer reset by connection)
1209110383 Q * Loki|muh Ping timeout: 480 seconds
1209110804 J * dna_ ~dna@7-221-dsl.kielnet.net
1209110813 Q * friendly Quit: Leaving.
1209110990 J * yarihm ~yarihm@whitehead2.nine.ch
1209111010 M * yarihm hi everyone
1209111019 Q * dna Ping timeout: 480 seconds
1209111024 M * yarihm Bertl, did that work out with the money-transfer?
1209111024 M * Bertl wb yarihm!
1209111039 M * Bertl yarihm: yes, arrived yesterday
1209111348 M * Bertl yarihm: do you have a test machine (for yourself)?
1209111400 M * yarihm Bertl, I can get one, that should not be a problem
1209111415 M * Bertl okay, would be good to prepare a test setup then
1209111438 M * Bertl i.e. a controlled environment, for test loads and such
1209111486 M * yarihm that's no problem. How do you do the load-tests? I can give you full access to the machine. x86 and amd64 are available
1209111512 M * Bertl well, we have to design a few simple test tools for this specific purpose
1209111514 M * yarihm if you want and have the time to give instructions I can do the testing, too. We did it using bonnie++ and stress so far
1209111517 M * yarihm ok
1209111536 M * yarihm what do you have in mind?
1209111549 M * Bertl bonnie only focuses on disk i/o
1209111567 M * Bertl if I got you correctly, we want to test memory pressure
1209111573 M * yarihm exactly, stress does only do VM-related things (memory, CPU)
1209111580 M * yarihm yes, stress provides that to some extent
1209111606 M * Bertl precisely, but we also need a way to produce specific allocations (for testing)
1209111631 P * pusling
1209111732 M * yarihm My C/C++ is rather rusty to say the least, in case you had something of that kind in mind
1209111763 M * Bertl np, I'll provide the tools for that, but you could do the test setup and adjustments
1209111783 M * yarihm ok, I'll do that part then
1209111847 M * Bertl you have an idea how the memory management works?
1209111861 M * yarihm so a normal machine with some contexts is ok then? Our host systems all run Debian and I guess even though you can choose from quite a range of guests here, the contexts' distro doesn't really matter
1209111909 M * yarihm Bertl, you mean paging, segmentation and the concept of virtual memory? well, yes, but most likely not to the extent desirable for this. But I'm keen on learning more if you can spare the time :)
1209111921 M * Bertl for testing, we need a very controlled environment, i.e. a minimum of processes running, no disturbances
1209111989 M * yarihm well, I can strip down a system such that only ssh is left and whatever you want to run within the contexts ... or what would be your exact requirements?
1209112005 M * Bertl yes, that sounds good
1209112018 M * yarihm if sshd is too much, we also have an IP-based KVM switch ... it's not that nice to work with though
1209112039 M * Bertl nah, no problem with ssh on the host, the impact is minimal
1209112078 M * yarihm there is not much more running anyway on our host systems. But I'll stop cron and syslog, too ... and whatever is running besides the kernel's processes
1209112098 M * Bertl syslog can probably stay as well, we might need that
1209112147 M * Bertl what we want to avoid are processes which kick in at arbitrary times, and memory/cpu hogs
1209112204 M * yarihm np. about the knowledge of memory management, do you have any pointers? I'm a theoretical physicist (actually a student thereof at ETH Zurich) but I do have a computational physics background and did all offered courses on IT security, so I'd be interested there
1209112242 M * yarihm No problem getting that. I'll turn off the backups and the monitoring along with the cron, so syslog and ssh are the only ones left
1209112270 M * Bertl http://linux-mm.org/ (good start)
1209112445 M * Bertl so, the aim is to get a 'fair' distribution of memory and 'preferred swapout' for processes in guests which are over the soft limit, right?
1209112479 J * Punkie ~Punkie@goc.coolhousing.net
1209112616 M * grobie morning
1209112696 M * yarihm Bertl, yes, exactly. We'd like the memory behaviour to be like on a normal machine, or at least closer to it. Just so that if a context is within its soft limits (i.e. the range displayed as "swap" in top inside the context if VIRT_MEM is activated) and physical memory is exhausted, it really is swapped out. Preferably it would not be swapped out if there is enough memory, of course
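[Editorial note: the allocation tools Bertl offers to provide above are not part of this log. Purely as an illustration of the kind of load generator such a test setup needs, here is a minimal user-space "memory hog" sketch; the file name and interface are invented for this example.]

```c
/* hog.c - illustrative only: allocate a fixed amount of memory and keep
 * touching it, to create controlled memory pressure inside one guest.
 * Usage: ./hog <megabytes> <seconds>
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <megabytes> <seconds>\n", argv[0]);
        return 1;
    }
    size_t mb = (size_t)atol(argv[1]);
    long secs = atol(argv[2]);
    size_t len = mb << 20;
    long page = sysconf(_SC_PAGESIZE);

    char *buf = malloc(len);
    if (!buf) {
        perror("malloc");
        return 1;
    }

    time_t end = time(NULL) + secs;
    do {
        /* touch every page so it is really backed by RAM (or forces paging) */
        for (size_t off = 0; off < len; off += (size_t)page)
            buf[off]++;
    } while (time(NULL) < end);

    free(buf);
    return 0;
}
```

[Running a few instances of something like this inside a guest that is over its soft limit, next to an idle guest, would recreate the situation discussed below.]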
1209112729 M * yarihm or in other words, as you put it, that if swapping needs to take place, it will be the memory regions of processes that are over the soft limit
1209112784 M * yarihm no matter whether another process in a different context is less active and would normally not be in physical memory but swapped, in favour of the other process that is over the soft limit
1209112822 M * yarihm I'm not sure whether I provide a good specification here :) I hope it is clear what the intention is
1209112871 M * Bertl I guess so ... here are the things we can do, with the advantages and disadvantages
1209112914 M * Bertl first, we could tag every page with a context specific tag (as there is no connection from a page to the task)
1209112947 Q * Punkie Quit: Leaving
1209112958 M * Bertl this is the approach done in 2.6.25+ with the cgroups
1209112974 J * Punkie ~Punkie@goc.coolhousing.net
1209113020 M * Bertl the disadvantages here are: we need to dereference a lot of things, search for the context, and extend the page struct, which again costs memory and time
1209113041 M * Bertl especially the larger page struct hurts memory wise
1209113105 M * Bertl a different approach is to do per context page reclaiming
1209113179 M * Bertl i.e. when memory pressure is high, we walk all contexts over the soft limit, and start page reclaiming on a per process basis
1209113246 M * yarihm i see. but on the upside you could maybe use some of the codebase of 2.6.25+? if you now have a connection from the page to the task, how do you know whether a specific process is above the soft limit whereas another process with the same context tag is decided to be one of those below? That should depend on the process's activity, no?
1209113255 M * Bertl this has the advantage that we do not need to extend the page struct, but the disadvantage that we can only adjust the odds, not actually solve the problem
1209113268 M * yarihm mhm ... i see
1209113332 M * Bertl I would go for the per context reclaim, because that is not very intrusive and doesn't add any overhead if not used
1209113342 M * yarihm page reclaiming is the process of asking for a page to be transferred to physical memory? I'm sorry to ask such stupid questions, maybe I should first read linux-mm.org
1209113368 M * Bertl reclaiming is getting it free of data (i.e. getting the data out of the memory)
1209113405 M * yarihm when talking of memory, you refer to physical memory, not virtual memory (well, obviously I guess)
1209113415 M * Bertl yes
1209113436 M * Bertl we cannot change anything in regard to the virtual memory allocations
1209113456 M * Bertl we have a hard limit there on the address space, and that's about what can be done
1209113566 M * yarihm you say this increases the odds that the preferred behaviour happens. what does that mean exactly in terms of "exploitability" of this by a "malicious" context? Or put differently, how likely is it that in an aggressive environment the page reclaiming cannot effectively change the swapping decisions?
1209113630 M * Bertl well, regardless of the mechanism, the swapping for itself can _always_ be exploited
1209113649 M * Bertl let's assume we have 'perfect' behaviour like on a real machine
1209113655 M * yarihm ok
1209113667 M * yarihm (I mean, ok, go on...)
1209113668 M * Bertl i.e. when over 'physical' memory, we start paging out
1209113687 M * Bertl (we don't bother about the mechanism to achieve that atm)
1209113696 M * yarihm k, still with you
1209113720 M * Bertl now let's consider two guests running, with the very same amount of memory assigned
1209113793 M * Bertl what will happen if one of those 'ideal' guests allocates twice the amount of 'virtual' memory (as there is physical memory assigned) and then starts filling that bottom to top, over and over again
1209113822 M * Bertl naturally it will begin paging out (thrashing)
1209113851 M * Bertl and of course, that will hurt not only the performance of the other guest, but also the overall system performance
1209113883 M * Bertl the only way to avoid that would be hard partitioning, including the swap disks
1209113923 M * yarihm yes, that makes sense. Of course that would not be required. We are fully aware that if someone starts to swap, this will affect the other contexts' IO performance
1209113949 M * yarihm of course it would be nice if that weren't so, but that's quite some work I guess
1209113953 M * Bertl but, that is the point where my second suggestion could help to improve things
1209113987 M * yarihm ok, don't yet see that
1209113988 M * Bertl i.e. when a context is causing heavy page out/in then we could penalize it cpu wise (via the scheduler)
1209114001 M * yarihm ah, yeah, that's what you mentioned in the mail
1209114044 M * yarihm that sounds very good indeed. I thought with "second suggestion" you meant the idea of doing the page reclaiming
1209114064 M * yarihm instead of adding the tag to the page struct
1209114088 N * virtuoso_ virtuoso
1209114162 M * Bertl so, in general, are you fine with the proposed solution to do the per context reclaiming (when memory pressure is high, or on demand) and extend that by the penalization based on page I/O?
1209114170 M * yarihm but still I'm not quite clear on what exactly "adjust the odds" means ... do you have an estimation of how reliable that would be?
1209114188 M * Bertl well, it would work like this:
1209114212 M * Bertl - memory pressure gets high or somebody (userspace) triggers it
1209114231 M * Bertl - we walk all contexts, picking those which are over the soft limit
1209114253 M * Bertl - for those contexts, we walk all processes
1209114273 M * Bertl - for each process, we walk the pages and try to reclaim at least
1209114295 M * Bertl (memCurrent - memSoftLimit)
1209114297 Q * grobie Quit: Coyote finally caught me
1209114300 J * grobie ~grobie@valgrind.schnuckelig.eu
1209114334 M * Bertl the problem here is:
1209114348 M * Bertl - we cannot guarantee that this many pages can be reclaimed
1209114385 M * Bertl - we cannot guarantee that the process with the largest amount of memory will be reclaimed
1209114548 M * Bertl but statistically we will achieve exactly what we want, the contexts which are over limit will start paging out
1209114576 M * Bertl and will keep paging out as long as they are over limit and memory pressure is high
1209114691 M * Bertl alternatively we could do the reclaiming whenever a page is requested (considering that the context is over limit) but I think that would cause more overhead for the checks than it would buy us
1209114765 M * yarihm I'm not quite clear on point 4: you walk the pages and try to reclaim current-softlimit. say memCurrent is 600MB and memSoftLimit is 256MB (actually it would be something like 1.75GB if the hard limit is 2GB, right?), you'd then ask the process to return ~350MB of physical memory? Most likely I've gotten something wrong here
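[Editorial note: to make the walk Bertl outlines above concrete, here is a small self-contained toy model. All structures, names and numbers are invented for this illustration; the real implementation would live in the kernel and is not shown in this log.]

```c
/* toy model of the per-context reclaim walk sketched above -- not kernel code */
#include <stdio.h>

struct proc { const char *name; long reclaimable_pages; };
struct ctx  { const char *name; long rss_pages; long soft_limit_pages;
              struct proc *procs; int nprocs; };

/* try to push one context back under its soft limit */
static long reclaim_context(struct ctx *c)
{
    long want = c->rss_pages - c->soft_limit_pages;  /* memCurrent - memSoftLimit */
    long done = 0;

    for (int i = 0; i < c->nprocs && done < want; i++) {
        struct proc *p = &c->procs[i];
        long take = p->reclaimable_pages;
        if (take > want - done)
            take = want - done;
        p->reclaimable_pages -= take;   /* "page out" / purge these pages */
        done += take;
    }
    c->rss_pages -= done;
    return done;   /* may be less than 'want' if nothing more is reclaimable */
}

int main(void)
{
    struct proc guest1[] = { { "sshd", 2000 }, { "apache", 120000 } };
    struct ctx  contexts[] = {
        { "guest1", 150000, 64000, guest1, 2 },  /* over its soft limit   */
        { "guest2",  30000, 64000, NULL,   0 },  /* under, left untouched */
    };

    /* "memory pressure is high": walk all contexts over the soft limit */
    for (int i = 0; i < 2; i++)
        if (contexts[i].rss_pages > contexts[i].soft_limit_pages)
            printf("%s: reclaimed %ld pages\n", contexts[i].name,
                   reclaim_context(&contexts[i]));
    return 0;
}
```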
1209114798 M * Bertl no, that's exactly it
1209114841 M * yarihm but what if said process is, say, sshd? most likely that one can't give you 350MB of physical memory as it only allocates say 20MB of vmem
1209114880 M * Bertl we are talking context here, so if the context only consists of sshd, and has 600MB RSS
1209114925 M * Bertl then we have to remove 344MB worth of pages from sshd
1209114949 M * Bertl but, if we have sshd and apache running inside
1209114970 Q * hijacker Read error: Connection reset by peer
1209114983 M * Bertl then we start (randomly) with one of them, and try to reclaim pages, which, if it is sshd, will not succeed at this point
1209114993 M * Bertl but it will succeed once we reach apache
1209114997 J * hijacker ~hijacker@213.91.163.5
1209115016 M * yarihm who or what decides whether we succeed with the page reclaim?
1209115037 M * Bertl if there are no pages to be reclaimed, we cannot reclaim. period.
1209115060 M * Bertl if there are, we reclaim as long as we are over limit. period.
1209115132 M * yarihm well, when is a page reclaimable? I mean you ask the memory management to page out the page and put it into swap, right?
1209115145 M * Bertl nah, not necessarily
1209115158 M * Bertl memory pages can have different sources and data
1209115184 M * Bertl first, there are caches (we do not consider or handle them inside a guest)
1209115213 M * Bertl then, there is mapped memory, i.e. file based mappings (like executables and libraries and such)
1209115265 M * Bertl we have two cases here, the mapping can be shared or unique to a process (in theory to a guest, but that is no information we have)
1209115348 M * Bertl finally there are anonymous pages or on-write copies of mapped pages
1209115386 M * Bertl those have to be paged out to swap, because they have no representation
1209115442 M * Bertl caches and read-only mappings can be simply purged (this is the preferred source for new pages :)
1209115615 M * yarihm i see. ok then, there is only one thing which I didn't yet get. if you reclaim pages per context, why do you go to the processes and ask each one to give back such a big amount of memory? and you say that with our ssh example it would fail (because that one does not have that amount available), but neither does apache, as that one forks and has many small children, each being a process, right?
1209115671 M * Bertl nah, I'm not going to each process and asking it to reclaim the entire amount (which doesn't work, as the process doesn't know about page reclaiming anyway)
1209115693 M * yarihm ok, that makes sense :)
1209115706 M * Bertl but, I'm going over each process, and take a look at all pages of those processes, and if I find some which can be reclaimed, I do that
1209115737 M * Bertl (or at least initiate it, as it takes quite a while to page out a page)
1209115768 M * yarihm can it be that there are no pages that can be reclaimed, such that some context still "steals" physical memory from other contexts?
1209115791 M * Bertl not if the limits are set properly
1209115813 M * Bertl non-reclaimable pages are for example locked memory pages
1209115852 M * Bertl i.e. if you don't allow more pages to be locked in memory than the soft limit, this is not a problem
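[Editorial note: as an aside on what "locked memory pages" means above: memory pinned with the standard mlock(2) syscall may not be paged out, so a reclaim pass has to skip it. A minimal illustration (file name invented for this example):]

```c
/* lockmem.c - illustration of non-reclaimable (locked) pages: memory locked
 * with mlock(2) may not be paged out, so a per-context reclaim pass has to
 * skip it. Subject to RLIMIT_MEMLOCK / the guest's limits. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 16UL << 20;            /* 16 MB */
    char *buf = malloc(len);
    if (!buf) { perror("malloc"); return 1; }

    memset(buf, 0, len);                /* fault the pages in */

    if (mlock(buf, len) != 0) {         /* pin them: now they cannot be reclaimed */
        perror("mlock");                /* typically fails if over RLIMIT_MEMLOCK */
        return 1;
    }

    printf("%zu bytes locked in RAM, sleeping...\n", len);
    sleep(60);                          /* keep them locked for a while */

    munlock(buf, len);
    free(buf);
    return 0;
}
```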
1209115926 M * yarihm is that option available at the moment or would that be implemented along with the "feature"?
1209115950 M * Bertl that's available in the limits
1209115974 M * yarihm ok, so that sounds very good to me then
1209116070 M * Bertl okay
1209116090 M * yarihm if that's ok with you, I think this is a good way to do this. (but why does this only increase the odds? to me it seems as if this worked quite reliably)
1209116111 M * Bertl well, remember the example with sshd and apache?
1209116115 M * yarihm yes
1209116138 M * Bertl it would be better if we could reclaim the memory from apache (not sshd) in this example
1209116164 M * Bertl but we have no way to tell or decide that without adding a very complex heuristic system
1209116184 M * Bertl and that is definitely something we do not want to put into the kernel
1209116185 M * yarihm well, if apache needs it, it will get it back from the memory manager at some point, right?
1209116209 M * Bertl yes, again causing page-in/out
1209116212 M * yarihm ah, sorry, i mean if sshd is affected and needs it, it will get it back
1209116259 M * Bertl so, what I meant with adjusting the odds is, that statistically we will get the proper effect from the host PoV, but in the micro management, we will cause some distortions
1209116268 M * yarihm well, that's tough luck, but as you said, anything else is hardly possible. the one context that causes this will furthermore be punished in terms of cpu anyway
1209116286 M * Bertl in the second step, yes
1209116291 M * yarihm yes
1209116361 M * yarihm but is there no way to prefer the pages of more "active" processes in a context to be asked last for a page reclaim?
1209116377 M * Bertl define more active :)
1209116392 M * yarihm is there some statistic available on how much a process accesses its memory, how much mem-io it causes or something?
1209116416 M * Bertl not really, but let's assume there is
1209116416 M * yarihm i mean in our apache and ssh example, the apache will most likely be much more active in terms of memory than ssh, right?
1209116449 M * Bertl then we still have the problem that we would need to 'sort' the processes of a context by this criterion
1209116471 M * Bertl and I'm not sure that we want to spend a lot of time on sorting when we encounter memory pressure :)
1209116478 M * yarihm ok, let's assume that. then it would be better to page out the sshd before the apache, right?
1209116489 M * yarihm hmm :)
1209116516 M * yarihm but don't you lose more time then, if the wrong process was paged out and needs to be paged in soon thereafter?
1209116526 M * Bertl the normal system-wide page reclaiming doesn't have this problem, as the pages are sorted in an LRU manner
1209116546 M * Bertl so pages which are not really used get paged out first
1209116564 M * yarihm mhm, LRU = least recently used, I take from the context
1209116568 M * Bertl but this behaviour obviously isn't what you want
1209116572 M * yarihm yes
1209116589 M * yarihm you are perfectly right on this :)) that's actually the behaviour we want to break ,(
1209116592 M * yarihm ,) that is
1209116629 M * Bertl so, while I have no guarantee that the proposed solution actually solves your issues
1209116631 M * yarihm so there is no LRU ordering in a per-context context?
1209116652 M * Bertl from my PoV the statistics are on your side :)
1209116662 M * Bertl no, there is no LRU ordering per context
1209116692 M * Bertl but, if the proposed process does not work out that well (for whatever reason)
1209116741 M * Bertl we can extend it by reclaiming a fixed amount (e.g. 256 pages max) and selecting them from the 'top' pages (i.e. find the 256 best pages for reclaiming or so)
1209116760 M * yarihm well, look, it's the n00b discussing with the pro here. I sure hope that you are right, otherwise I'm going to look stupid, but since I absolutely do trust your judgement in this, I'm ok with that
1209116770 M * Bertl note that we don't want to go there if it isn't needed :)
1209116852 M * yarihm that sounds perfectly fair to me.
1209116857 M * Bertl okay, so we have a plan ... now let's get working ...
1209116882 M * yarihm okay. I
1209116897 M * Bertl your task: get the test system up and running, and create a bunch of test cases where you can recreate the unwanted behaviour with a minimal setup
1209116898 M * yarihm 'll get two machines, a 64bit one and a 32bit one for us
1209116929 M * Bertl I'll provide some tools to do basic allocations and such
1209116933 M * yarihm there is just one question: how do I tell whether a given application is paged out or not?
1209116947 J * balbir ~balbir@59.145.136.1
1209116954 M * yarihm i did that up to now using simple math and vtop :) not that professional
1209117140 M * Bertl hmm, good question, we'll have to investigate that too, I think
1209117176 M * Bertl /proc/*/maps and statm could provide basic data
1209117356 Q * bfremon Quit: Leaving.
1209117501 M * yarihm Bertl, how do I parse that from /proc/$PID/maps? didn't find statm
1209117540 J * bfremon ~ben@ANantes-252-1-17-104.w82-126.abo.wanadoo.fr
1209117569 M * Bertl ah, we have something better, /proc/#/smaps
1209117599 M * Bertl size and rss
1209117632 M * yarihm ah, statm was in /proc too, not a command :)
1209117681 M * yarihm ok, i'll write something that sums up all Rss and all Size fields. but isn't that the same information that is contained in top?
1209117727 M * Bertl maybe .. don't know
1209117774 M * yarihm we'll find out
1209117784 M * yarihm i'll check for that
1209117930 J * ktwilight ~ktwilight@87.66.201.159
1209117993 Q * bfremon Quit: Leaving.
1209118268 Q * ktwilight_ Ping timeout: 480 seconds
1209118324 J * ktwilight_ ~ktwilight@96.102-67-87.adsl-dyn.isp.belgacom.be
1209118719 Q * ktwilight Ping timeout: 480 seconds
1209118972 N * pmenier_off pmenier
1209119237 J * bfremon ~ben@ANantes-252-1-17-104.w82-126.abo.wanadoo.fr
1209119326 Q * Punkie Quit: Leaving
1209121568 J * Punkie ~Punkie@goc.coolhousing.net
1209122778 Q * bfremon Ping timeout: 480 seconds
1209123410 J * bfremon ~ben@ANantes-252-1-14-205.w82-126.abo.wanadoo.fr
1209124528 N * Bertl Bertl_oO
1209124531 M * Bertl_oO bbl
1209124749 N * pmenier pmenier_off
1209124935 Q * sharkjaw Quit: Leaving
1209125713 M * ard woei
1209125723 A * ard runs 2.6.25-2.3.0.34.5
1209125739 M * ard But I've f*ked up my sshd config :-(
1209125749 A * ard cries
1209125759 M * ard a headless macmini without console...
1209125949 A * ard lies
1209125953 M * ard I have a BT console
1209126700 J * mess-mate ~chatzilla@ALille-254-1-55-184.w86-196.abo.wanadoo.fr
1209126708 M * mess-mate hi folks
1209126768 N * Bertl_oO Bertl
1209126776 M * Bertl hey mess-mate!
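[Editorial note: earlier in this stretch yarihm plans to sum up the Size and Rss fields of /proc/<pid>/smaps to see how much of a process is actually resident; that script is not part of the log. A rough sketch of the idea, written in C here to match the other examples (any language would do):]

```c
/* smaps_sum.c - illustrative sketch: sum the Size: and Rss: fields of
 * /proc/<pid>/smaps, roughly "how much is mapped" vs "how much is in RAM".
 * Usage: ./smaps_sum <pid>
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    char path[64];
    snprintf(path, sizeof(path), "/proc/%s/smaps", argv[1]);

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }

    char line[256];
    unsigned long kb, size_kb = 0, rss_kb = 0;

    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "Size: %lu kB", &kb) == 1)
            size_kb += kb;
        else if (sscanf(line, "Rss: %lu kB", &kb) == 1)
            rss_kb += kb;
    }
    fclose(f);

    /* whatever is mapped but not resident is either swapped out or never faulted in */
    printf("size: %lu kB, rss: %lu kB, not resident: %lu kB\n",
           size_kb, rss_kb, size_kb - rss_kb);
    return 0;
}
```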
1209126821 M * mess-mate I've found a good article about vserver (french) at http://fr.wikibooks.org/wiki/Vserver
1209126853 M * Bertl have to trust your judgement on that, I don't speak/read french :)
1209126870 M * mess-mate There they talk about viu.conf = guste1.conf
1209126884 M * mess-mate guste1 > guest1
1209126889 M * Bertl then it can't be that good :)
1209126908 M * Bertl we abandoned the one-file config about 4 years ago, or so
1209126918 M * mess-mate so there isn't a guestx.conf any more, is there?
1209126946 J * tobifix ~tobifix@muedsl-82-207-216-166.citykom.de
1209126959 M * Bertl nope, no guest.conf anymore :)
1209126965 M * tobifix hey folks
1209126966 M * tobifix ;)
1209126971 M * Bertl wb tobifix!
1209127012 M * mess-mate It is replaced partially by separate conf files? But not all of what was in that guestx.conf
1209127036 M * mess-mate my keyboard lags :(
1209127037 M * Bertl it was completely replaced by the config tree
1209127049 M * tobifix hey Bertl, how are you?
1209127058 M * Bertl fine, thanks! and you?
1209127066 M * tobifix fine :)
1209127070 M * tobifix the sun is shining
1209127073 M * tobifix aah, i love it
1209127088 M * Bertl quick, install linux on it :)
1209127156 M * mess-mate Bertl: what was the reason for abandoning guestx.conf?
1209127170 M * Bertl lack of flexibility
1209127210 M * mess-mate Bertl: a guestx.conf looks to me like a sort of template?
1209127251 M * Bertl nah, it is just a config, templates contain rudimentary guest data too
1209127273 M * Bertl (or in the current definition, only guest data)
1209127500 M * mess-mate When I start a machine (fresh) and want to start a vserver, I have to vprocunhide. Otherwise it won't start.
1209127506 M * mess-mate Can it be done automatically?
1209127532 M * Bertl usually this is part of the util-vserver specific init scripts
1209127552 M * Bertl you might want to check that they are enabled and/or complain to your distribution maintainer
1209127593 M * mess-mate It's your util-vserver I'm talking about :)
1209127619 M * Bertl nope, sorry, the mainline version is maintained by daniel_hozac, I'm doing the kernel stuff
1209127627 N * DoberMann[ZZZzzz] DoberMann
1209127661 M * mess-mate Ah ok, I didn't know who, what, why :)
1209127666 M * Bertl np
1209127708 M * Bertl but for example here (on mandriva) I have
1209127716 M * Bertl /etc/rc.d/init.d/vprocunhide
1209127726 M * Bertl and that is started on system startup
1209127775 M * mess-mate It's also done by debian. I'm testing the original util-vserver version now.
1209127841 M * Bertl this package (for mandriva) is created directly from the sources (util-vserver 0.30.215) and there is nothing added (source wise)
1209127879 M * Bertl so it is definitely part of the tar package :)
1209128055 M * mess-mate Is there a gui for that in mandriva?
1209128072 M * Bertl a gui for what?
1209128092 M * mess-mate For a complete vserver set-up
1209128129 M * Bertl not that I know of, what would you need a gui for? (gui == graphical user interface)
1209128201 M * mess-mate http://linuxeduquebec.org/spip.php?page=article-imprim&id_article=302 and others
1209128260 M * Bertl nice, didn't know something like that existed
1209128274 M * mess-mate http://linuxeduquebec.org/Gerer-des-vservers
1209128411 M * pmjdebruijn yaya
1209128426 M * pmjdebruijn it seems our iSCSI problem has been fixed in Ubuntu 8.04 LTS with 2.6.24-ubuntu
1209128546 Q * balbir Ping timeout: 480 seconds
1209128996 Q * Aiken Remote host closed the connection
1209129086 M * mess-mate Bertl: and this WAS for debian: an easy complete control, isn't it?
1209129102 M * mess-mate http://fr.wikibooks.org/wiki/Vserver
1209129157 Q * bfremon Remote host closed the connection
1209129300 M * Bertl well, I'm not so fond of those graphical wrappers for perfectly fine command line solutions ... to be honest, I prefer a command line over any gui, but others love GUIs in all variations :)
1209129351 M * ard GUIs can be nice if you have no keyboard (like on an iPaq :-) )
1209129363 M * ard but I still start rxvt on my iPaq :-)
1209129598 J * bfremon ~ben@ANantes-252-1-14-205.w82-126.abo.wanadoo.fr
1209129863 M * mess-mate I'm not a gui defender, but from the point of view of a user..
1209129905 M * mess-mate we set up, say, a webserver and maybe an ftpserver on the console.
1209129921 Q * bfremon Quit: Leaving.
1209129955 J * bfremon ~ben@ANantes-252-1-14-205.w82-126.abo.wanadoo.fr
1209129977 M * mess-mate Next year we have to set up a mailserver and have to remember the commands (maybe not)
1209130011 J * hparker ~hparker@linux.homershut.net
1209130012 M * mess-mate That's why a gui is useful.
1209130045 M * mess-mate Or a dbconf as for debian.
1209130055 M * mess-mate dbconf > debconf
1209130627 M * pmjdebruijn Bertl: do you mind if I private message you?
1209130706 M * Bertl no
1209130982 Q * tobifix Read error: Connection reset by peer
1209131003 J * tobifix ~tobifix@muedsl-82-207-204-116.citykom.de
1209131691 M * ard If I define a single ip address with 2.3, what should I do to get the server to listen to 127.0.0.1 and the single ip?
1209131718 M * ard there is an nflag single_ip but I think I need no_single_ip?
1209131730 M * Bertl depends on your kernel config and guest setup
1209131843 M * ard ah
1209131864 M * ard CONFIG_VSERVER_AUTO_LBACK=y
1209131864 M * ard CONFIG_VSERVER_AUTO_SINGLE=y
1209131883 M * ard I think I do not want the auto_single in this case?
1209131901 M * Bertl yes, or you want to add ~SINGLE_IP
1209131912 A * ard sighs
1209131927 A * ard never figured there was a not option :-0
1209131932 M * ard Bertl: tanx!
1209131974 Q * quasisane Read error: Connection reset by peer
1209131987 M * Bertl np, ! or ~ inverts flags
1209132036 M * daniel_hozac (as it says on http://linux-vserver.org/util-vserver:Capabilities_and_Flags) :)
1209132049 M * ard Bertl rules!
1209132064 M * Bertl credit goes to daniel_hozac for that one :)
1209132082 M * ard daniel_hozac rules!
1209132090 M * ard :-)
1209132138 A * ard should read the wiki changelist more often :-)
1209132197 M * ard although it hasn't been changed in more than 1 1/2 years (except for that single line)
1209132226 J * docelic_ ~docelic@78.134.204.121
1209132628 Q * docelic Ping timeout: 480 seconds
1209133769 J * doener ~doener@i577AE7E6.versanet.de
1209134149 J * quasisane ~sanep@c-75-68-59-175.hsd1.nh.comcast.net
1209134358 J * _Keks_ ~keks@80.64.184.134
1209134378 M * _Keks_ Hi @ all
1209134462 M * Bertl hey _Keks_!
1209134469 M * _Keks_ Hi Bertl
1209134523 M * _Keks_ could you tell me the status of your 2.6.24 patch?
1209134561 M * Bertl outdated, use the 2.6.25 one :)
1209134567 M * _Keks_ hmpf
1209134584 M * _Keks_ ok
1209134588 M * _Keks_ i'll try again
1209134596 J * chris__ ~chris@204-112-106-123.dedicated.mts.net
1209134596 M * _Keks_ could you tell me the status of your 2.6.25 patch?
1209134599 M * _Keks_ :)
1209134620 M * Bertl it is kind-of working, but quite some work to do
1209134675 M * _Keks_ sounds good
1209134702 M * Bertl if you are going to try it, please provide feedback
1209134706 M * _Keks_ are there still "problems" with the sched?
1209134729 M * Bertl the scheduler is not implemented yet, the mainline scheduler was changed significantly
1209134740 M * _Keks_ i know
1209134750 M * _Keks_ that's the reason
1209134751 M * Bertl i.e. needs a rewrite of the HardCPU scheduler
1209134752 M * _Keks_ :)
1209134839 M * _Keks_ just a second
1209135181 Q * Slydder Quit: Leaving.
1209135312 Q * Punkie Quit: ...vanishing...
1209135348 J * cryptronic ~oli@p54A3AFEC.dip0.t-ipconnect.de
1209135651 P * ktwilight_ dead
1209135656 J * ktwilight_ ~ktwilight@96.102-67-87.adsl-dyn.isp.belgacom.be
1209135664 M * ktwilight_ oopsie :)
1209135689 Q * pmenier_off Ping timeout: 480 seconds
1209135711 J * pmenier_off ~pmenier@ACaen-152-1-47-138.w83-115.abo.wanadoo.fr
1209135803 J * balbir ~balbir@59.145.136.1
1209136171 Q * nkukard Ping timeout: 480 seconds
1209136315 J * mire ~mire@155-174-222-85.adsl.verat.net
1209136606 M * _Keks_ back again
1209136631 M * Bertl was barely a second :)
1209136637 M * _Keks_ :)
1209136642 M * _Keks_ i am sooo sorry
1209136664 M * _Keks_ back to topic
1209136691 M * _Keks_ i have one vserver system running
1209136701 M * Bertl congrats!
1209136716 M * _Keks_ which could take advantage of the new sched
1209136719 M * _Keks_ ;)
1209136741 M * _Keks_ it's not only 1 system i have
1209136748 M * infowolfe Bertl, that machine of mine is still up (changed out the ram)
1209136801 M * Bertl so you had hardware issues?
1209136811 M * Bertl _Keks_: okay, so?
1209136829 M * _Keks_ i am still working on a redundant vserver based firewall
1209136848 M * infowolfe Bertl, that box that i gave you the oopses from, on a fluke, we replaced the (cheap) ram with good ram, and it's been up >24h
1209136859 M * infowolfe on 2.6.25-vs
1209136875 M * Bertl excellent news then!
1209136877 M * _Keks_ but i think the new sched isn't needed for this project
1209136920 M * infowolfe Bertl, excellent in some ways, terrible in others (if the bad ram exposed a bug that should be fixed)
1209136948 J * nkukard ~nkukard@196.212.73.74
1209136953 M * Bertl bad ram seldom exposes bugs, it causes issues :)
1209136994 M * Bertl the only case where bad ram actually could expose a bug is if you are testing bad ram management and failover :)
1209137007 A * infowolfe mutters something about waking up, lack of caffeine
1209137035 M * infowolfe Bertl, so you're saying the multiple oopses we saw yesterday have nothing to do with how the code was written and everything to do with the ram that was in the box?
1209137049 M * Bertl yes, definitely
1209137061 M * Bertl when you cannot trust the memory, the code is worthless
1209137097 Q * balbir Ping timeout: 480 seconds
1209137261 M * _Keks_ i did not look too deep into the vserver code, but is there still an improvement of the host system with kernel 24+ and the vserver patches?
1209137307 M * Bertl I don't understand the question ...
1209137353 M * infowolfe _Keks_, i just have notoriously shitty hardware, and the only mainline kernels that run well on my hardware are 2.6.19.7 and 2.6.25
1209137419 M * _Keks_ the guest doesn't take adv. of the new sched because it is not fully "supported" yet, right?
1209137449 M * Bertl both the guest and the host use the same scheduler, which is the mainline scheduler
1209137475 M * Bertl on 2.6.22, the scheduler is extended by the Hard CPU scheduler extensions
1209137492 M * Bertl which allows you to do per guest CPU management
1209137505 Q * mire Quit: Leaving
1209137544 M * _Keks_ that means the base system takes adv. of the mainline sched?
1209137555 M * _Keks_ in 24+
1209137590 M * Bertl both the host and the guest will 'take advantage' (whatever that means) of the (new) mainline scheduler
1209137626 M * _Keks_ ok
1209137654 M * _Keks_ imho one big adv. of the "new" sched is the fairness
1209137694 M * Bertl not really, the old scheduler, together with the HardCPU and IdleTime management, provides the very same fairness, even more
1209137705 M * _Keks_ hm
1209137706 M * _Keks_ ic
1209137744 M * Bertl but yes, the new mainline scheduler is an improvement for mainline :)
1209137753 M * _Keks_ :)
1209137807 J * DavidS ~david@p57A485D8.dip0.t-ipconnect.de
1209137881 M * _Keks_ means I don't run into problems if i want to use a 25 kernel and do not use any cpu specific limits
1209137897 Q * alex__ Remote host closed the connection
1209137908 M * Bertl yep, that's mostly it
1209137923 M * _Keks_ that's great
1209138172 M * _Keks_ thanks a lot, you have successfully recruited a new beta tester
1209138176 M * _Keks_ ;)
1209138316 M * Bertl excellent! welcome to the testing club :)
1209138437 M * mess-mate I get this error: /usr/local/lib/util-vserver/vserver.delete: line 19: : Dir or file not found
1209138469 M * mess-mate Have I missed something?
1209138492 M * _Keks_ i am only going to update our oracle vserver
1209138497 M * _Keks_ :)
1209138554 M * _Keks_ should i start with vs2.3.0.34.5? ;)
1209138653 Q * nkukard Quit: Leaving
1209138672 J * nkukard ~nkukard@196.212.73.74
1209138725 M * Bertl yes, vs2.3.0.34.5 is the current version
1209138991 Q * cryptronic Quit: Leaving.
1209139249 J * thei0s ~G0D@lk.84.20.235.126.dc.cable.static.lj-kabel.net
1209139266 Q * thei0s
1209139319 Q * nkukard Ping timeout: 480 seconds
1209139415 P * mess-mate I'm not here right now.
1209139564 J * nkukard ~nkukard@196.212.73.74
1209139839 Q * chris__ Remote host closed the connection
1209140341 J * larsivi ~larsivi@144.84-48-50.nextgentel.com
1209140411 Q * dna_ Ping timeout: 480 seconds
1209140434 Q * hparker Quit: Read error: 104 (Peer reset by connection)
1209140533 M * _Keks_ bertl, which kernel options should i enable/disable?
1209140578 M * _Keks_ hard cpu limits should be disabled, right?
1209140722 M * Bertl doesn't matter, it is ignored, but make sure to disable the scheduler monitor
1209140758 J * hparker ~hparker@linux.homershut.net
1209141192 J * mire ~mire@155-174-222-85.adsl.verat.net
1209141473 M * _Keks_ wuupppss
1209141491 M * _Keks_ where to find that monitor?
1209141531 M * ard6 vserver debug
1209141539 M * _Keks_ thx
1209141554 A * ard6 found out that hard cpu limits did matter :-)
1209141563 M * ard6 in compilation, I mean
1209142034 M * mspang Bertl: I copied the vserver that can't be shut down to http://csclub.uwaterloo.ca/~mspang/vserver/
1209142040 M * mspang and the kernel I used
1209142061 M * Bertl great, tx!
1209142065 Q * docelic_ Quit: http://www.spinlocksolutions.com/
1209142119 M * mspang no problem :)
1209142125 M * Bertl okay, I made a copy
1209142288 M * _Keks_ seems like i am blind
1209142301 M * _Keks_ i can't find any scheduler monitor
1209142336 M * Bertl it depends on other options, kernel debug and vserver debug
1209142347 M * Bertl isn't shown by default
1209142385 M * _Keks_ ah ok
1209142402 M * _Keks_ i am not blind :)
1209142477 M * pmenier_off Hi all
1209142509 M * _Keks_ hi
1209142511 M * pmenier_off Bertl: should CONFIG_GROUP_SCHED be enabled or disabled?
1209142524 M * Bertl as you like it
1209142527 M * _Keks_ :)
1209142540 M * pmenier_off thanks :)
1209142774 M * Bertl np
1209142833 J * dna_ ~dna@197-221-dsl.kielnet.net
1209142840 J * pflanze ~chris__@77-56-93-197.dclient.hispeed.ch
1209143328 Q * nkukard Quit: Leaving
1209143363 J * nkukard ~nkukard@196.212.73.74
1209143523 M * tobifix Bertl, are there docs which explain how the new sched works?
1209143543 M * Bertl yes, there is some documentation about the design
1209143558 M * tobifix tell me, where can i find that?
1209143585 M * Bertl http://kerneltrap.org/node/8059 (good start)
1209143620 M * tobifix thx a lot
1209143625 M * Bertl np
1209143726 Q * bardia Ping timeout: 480 seconds
1209143831 J * bardia ~bardia@lnc.usc.edu
1209143999 J * ryker ~ryker@76.16.114.60
1209145747 M * _Keks_ cu later
1209145753 M * _Keks_ thx for the help
1209145756 M * Bertl cya!
1209145778 M * _Keks_ kernel compiled successfully
1209145792 M * _Keks_ installing later
1209145799 M * _Keks_ bye
1209145803 P * _Keks_ Kopete 0.12.7 : http://kopete.kde.org
1209146939 J * dna__ ~dna@197-221-dsl.kielnet.net
1209147004 J * FmUQZMlEx ~hollow@proteus.croup.de
1209147152 Q * dna_ Read error: Connection reset by peer
1209147153 Q * Hollow Remote host closed the connection
1209147153 Q * FireEgl Read error: Connection reset by peer
1209147181 Q * _gh_ Read error: Connection reset by peer
1209147201 J * _gh_ ~gerrit@c-67-169-199-103.hsd1.or.comcast.net
1209147515 Q * doener Quit: leaving
1209148046 J * FireEgl FireEgl@adsl-226-44-128.bhm.bellsouth.net
1209149448 Q * yarihm Quit: Leaving
1209149530 N * DoberMann DoberMann[PullA]
1209149784 Q * dna__ Ping timeout: 480 seconds
1209149861 J * dna ~dna@229-240-dsl.kielnet.net
1209150324 Q * pflanze Quit: Leaving
1209151889 T * * http://linux-vserver.org/ |stable 2.2.0.7, devel 2.3.0.34, grsec 2.2.0.7|util-vserver-0.30.215|libvserver-1.0.2|vserver-utils-1.0.3| He who asks a question is a fool for a minute; he who doesn't ask is a fool for a lifetime -- share the gained knowledge on the Wiki, and we forget about the minute.
1209151889 T * harry -
1209153783 J * JonB ~NoSuchUse@0x573501a3.kjnxx10.adsl-dhcp.tele.dk
1209153848 J * bonbons ~bonbons@2001:960:7ab:0:2c0:9fff:fe2d:39d
1209154172 J * hijacker_ ~Lame@87-126-142-51.btc-net.bg
1209154589 J * hijacker__ ~Lame@87-126-142-51.btc-net.bg
1209154589 Q * hijacker_ Read error: Connection reset by peer
1209154665 Q * hijacker__
1209154919 Q * _er Ping timeout: 480 seconds
1209155471 J * doener ~doener@i577AD844.versanet.de
1209157213 J * _er ~sapan@aegis.CS.Princeton.EDU
1209157283 Q * larsivi Ping timeout: 480 seconds
1209157539 Q * pmenier_off Ping timeout: 480 seconds
1209157853 Q * JonB Quit: This computer has gone to sleep
1209157859 J * Aiken ~james@ppp121-45-247-4.lns2.bne4.internode.on.net
1209158221 M * Bertl okay, off to bed now ... have a good one everyone!
1209158227 N * Bertl Bertl_zZ
1209158354 J * JonB ~NoSuchUse@0x573501a3.kjnxx10.adsl-dhcp.tele.dk
1209159181 Q * bonbons Quit: Leaving
1209159741 Q * JonB Quit: This computer has gone to sleep
1209160629 Q * simon_ Ping timeout: 480 seconds
1209160731 J * yarihm ~yarihm@84-75-103-252.dclient.hispeed.ch
1209160897 J * simon_ ~simon@vegtastic.falafelexpress.se
1209161267 Q * dna Ping timeout: 480 seconds
1209162282 N * DoberMann[PullA] DoberMann[ZZZzzz]
1209163271 J * JonB ~NoSuchUse@0x573501a3.kjnxx10.adsl-dhcp.tele.dk
1209164238 Q * JonB Quit: This computer has gone to sleep
1209165762 Q * Aiken Quit: Leaving
1209167363 Q * bfremon Remote host closed the connection