1391731319 Q * zerick Read error: Connection reset by peer
1391734801 Q * fisted Remote host closed the connection
1391734822 J * fisted ~fisted@xdsl-87-78-188-79.netcologne.de
1391739274 J * undefined ~undefined@00011a48.user.oftc.net
1391744925 Q * _BovineSpongiformEncephalitis_ Ping timeout: 480 seconds
1391755837 J * Ghislain ~aqueos@adsl1.aqueos.com
1391759579 J * thierryp ~thierry@2a01:e35:2e2b:e2c0:dc0e:389f:79f2:3a13
1391760474 M * Bertl off to bed now ... have a good one everyone!
1391760488 N * Bertl Bertl_zZ
1391760768 M * Ghislain have a good sleep :)
1391761041 Q * thierryp Remote host closed the connection
1391761062 J * thierryp ~thierry@home.parmentelat.net
1391761546 Q * thierryp Ping timeout: 480 seconds
1391766493 J * thierryp ~thierry@zebra.inria.fr
1391769971 Q * BenG_ Quit: I Leave
1391771112 N * l0kit Guest402
1391771117 J * l0kit ~1oxT@0001b54e.user.oftc.net
1391771421 J * thierryp_ ~thierry@zebra.inria.fr
1391771421 Q * thierryp Read error: Connection reset by peer
1391771522 Q * Guest402 Ping timeout: 480 seconds
1391773764 Q * jrklein Remote host closed the connection
1391773772 J * jrklein ~osx@proxy.dnihost.net
1391774749 Q * ircuser-1_ Read error: Operation timed out
1391775100 Q * ircuser-1 Ping timeout: 480 seconds
1391776706 Q * Aiken Remote host closed the connection
1391777598 J * ircuser-1 ~ircuser-1@35.222-62-69.ftth.swbr.surewest.net
1391777890 Q * click Read error: Connection reset by peer
1391778001 Q * fisted Remote host closed the connection
1391778026 J * fisted ~fisted@xdsl-84-44-225-168.netcologne.de
1391782097 J * tarzeau ~gurkan@mail.aiei.ch
1391782099 M * tarzeau sladen: hi :)
1391782118 M * tarzeau sladen: https://answers.launchpad.net/ubuntu/+question/115258 3 years have passed, i was curious if anything happened on bdale's side?
1391782441 P * undefined
1391783953 M * sladen tarzeau: received, will look later
1391786046 M * ard is there a reason not to use btrfs as /vserver ?
1391786055 M * ard (besides btrfs not being stable)
1391786225 J * Will ~chatzilla@c-98-217-158-46.hsd1.ma.comcast.net
1391786791 Q * thierryp_ Remote host closed the connection
1391787006 M * Will Hi All - I'm trying to come up with a very fast way to replace a guest if it were to become compromised by an attacker (my guests run Mate, Firefox, etc). Each of my guests is installed into its own logical volume. I've been looking at LVM snapshots, where the guest's /home would be in one nested LV and everything else for the guest would be in another nested LV. Then I could snapshot each nested LV, and potentially restore each independently. I've looked at kpartx, but can't get it to work on Wheezy. Can someone suggest how to go about partitioning a guest inside an existing logical volume, or possibly some other approach to restore a guest's "system" quickly along with existing data? Thx!
1391787313 M * daniel_hozac why would you use nested partitions?
1391787326 M * daniel_hozac it's a lot easier to just use separate volumes.
1391787911 M * Will thanks daniel - I was thinking that the guest all has to be in one logical volume. So how does a guest know about another volume?
1391787945 M * daniel_hozac you put the filesystems in the guest's fstab
1391788102 J * _BovineSpongiformEncephalitis_ ~MadCow@0001c2a3.user.oftc.net
1391788122 M * Will OK - cool! Won't get a chance to try this until tomorrow, but thanks so much for that tip! I assume I'd clone the guest as normal, modify its fstab, and then move its /home to the other LV?
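To make daniel_hozac's tip concrete: a minimal sketch of the per-guest fstab he is referring to (ard pins down the path just below), assuming a util-vserver guest named "guest1" and a volume group "vg0", both names hypothetical. Mount points here are interpreted relative to the guest's root when the guest starts:

    # /etc/vservers/guest1/fstab -- mount points are guest-relative
    none                  /proc  proc   defaults            0 0
    none                  /tmp   tmpfs  size=64m,mode=1777  0 0
    # /home on its own LV, so the system LV can be snapshotted
    # and rolled back without touching user data
    /dev/vg0/guest1-home  /home  ext4   defaults            0 2

With this split, restoring a compromised guest means rolling back (or re-creating) only the system LV while the /home LV stays as it is.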
1391788350 M * ard to be clear: daniel_hozac is talking about /etc/vservers//fstab
1391788894 M * Will Hi ard, yes that was my understanding too...
1391788910 M * ard Vaxon Networks
1391788910 M * ard Domein Admin, Jac. P. Thijsseweg 1, 2408ER Alphen aan den Rijn, Netherlands
1391788918 M * ard fok...
1391788948 M * ard I pasted something into one screen session, and screen switched to the other...
1391788991 P * glen
1391790070 Q * Will Quit: ChatZilla 0.9.90.1 [Firefox 23.0.1/20130814063812]
1391791504 N * Bertl_zZ Bertl
1391791508 M * Bertl morning folks!
1391791755 J * thierryp ~thierry@home.parmentelat.net
1391792024 Q * ggherdov Read error: Connection reset by peer
1391792088 J * ggherdov sid11402@id-11402.ealing.irccloud.com
1391792160 M * Ghislain hello bertl
1391792198 M * Ghislain in my recent kernel i have all my irqs handled on CPU0, so i added irqbalance to the host, but it seems that it balances only host threads and not any guest threads
1391792260 M * Ghislain cannot find any clue about which part of the kernel config can change this behavior
1391792572 M * Ghislain in /proc/interrupts it's all on cpu0, damn it
1391792736 M * Bertl usually the IRQ routing is configured via proc (default smp affinity)
1391792782 M * Bertl note that it is preferable to assign certain IRQs to specific cores and not have them all handle every IRQ, but still, both setups should be possible
1391792888 M * Bertl check with e.g. cat /proc/irq/0/smp_affinity
1391792895 M * Ghislain even with affinity set to fff it all happens on cpu0; irqbalance seems to balance processes with their irq affinity, but only those of the guest
1391792923 M * Ghislain it's fff
1391792947 M * Ghislain but look at the disk irq:
1391792948 M * Ghislain 34: 792744976 0 0 0 0 0 0 0 0 0 0 0 IO-APIC-fasteoi megasas
1391792956 M * Ghislain all on cpu0
1391793069 M * Bertl well, it doesn't seem to be a problem of the 3.13 kernel, as it doesn't happen in my setup
1391793083 Q * DelTree Ping timeout: 480 seconds
1391793086 M * Bertl do you have any unusual kernel command line options?
1391793132 M * Ghislain not at all, no kernel boot options apart from root=uuid....
1391793216 M * Bertl then I'd suggest to boot with e.g. init=/bin/bash and see if the system shows the same behaviour, just to rule out userspace
1391793278 M * Ghislain oh, fun thing to try, will do
1391793654 M * Ghislain yep, same, all on cpu0, but there are not many interrupts so it might not be relevant
1391793664 M * Ghislain top
1391793668 M * Ghislain oops
1391793672 M * Bertl what if you exclude cpu 0 from the list?
1391793687 M * Bertl do the interrupts still get routed to cpu 0 ?
1391793712 M * Bertl note that there is a new affinity_hint there as well
1391793748 J * zerick ~eocrospom@190.187.21.53
1391793755 M * Bertl (well, new is relative :)
1391793941 M * Ghislain in /proc/irq/ ?
1391794154 M * Ghislain cannot write to affinity_hint
1391794833 M * Bertl that is read-only, i.e. it is feedback from the driver
1391794864 M * Bertl you change the affinity via e.g. echo 4 >/proc/irq//smp_affinity
1391795442 M * Ghislain by default it seems to be on 0; if i change it to cpu4 i start to get some interrupts on it
1391795565 M * MooingLemur we use the affinity settings on 10 GbE cards. They often have multiple queues so you can balance the IRQs across CPUs, and then it's easy to peg both ports (at least with iperf)
1391795568 M * Ghislain you do not have any irq balancer and yet your irqs are balanced ?
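Pulling Bertl's commands together into one sequence: a sketch of checking and pinning a single IRQ, assuming the megasas controller sits on IRQ 34 as in Ghislain's paste. The mask is a hex CPU bitmap (bit n = cpuN), so fff covers cpu0-11 and 4 means cpu2 only:

    cat /proc/irq/34/smp_affinity        # current mask, e.g. fff
    cat /proc/irq/34/affinity_hint       # driver's suggestion, read-only
    echo 4 > /proc/irq/34/smp_affinity   # route IRQ 34 to cpu2 only
    grep megasas /proc/interrupts        # counters should now grow in the cpu2 column

Note the setting is not persistent across reboots; it would normally be applied from a boot script.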
1391795589 M * Ghislain i'd love to see your .config to diff with mine
1391795616 M * MooingLemur however, we tried balancing disk interrupts and it made performance worse (mpt2sas)
1391795765 M * Ghislain i wonder why we have all of network and disk on cpu0. that worries me, as before we had things balanced across all cpus. do not know what changed
1391795867 M * Bertl yes, I do not suggest an irq balancer
1391795908 M * Bertl i.e. it is better to keep threads on a specific cpu, and distribute the load over different cpus by assigning each irq to a different cpu
1391795953 M * Bertl most high end controllers have a number of irqs to spread
1391795993 M * Ghislain mine use perc6 or the simple dell r210 ahci ones
1391796008 M * Ghislain dell servers, so nothing fancy
1391796041 M * Bertl still, it might be good to put, e.g. eth0 on cpu0, eth1 on cpu1, and the perc6 on cpu2 (just as an example)
1391796179 M * Ghislain hum, i think cpu1 is the HT core of cpu0, no ? so better to skip the odd ones in my case
1391796187 M * Ghislain but i got the point
1391796230 M * Ghislain just as guests are pinned down to certain cpus, it will need careful manual planning
1391796845 M * Ghislain thanks all for the talk, i will see how i can tune this :)
1391797001 M * Bertl you can get the info about siblings from proc as well
1391797044 M * Bertl in cpuinfo there is a core id and a number of siblings per cpu
1391797435 Q * thierryp Quit: ciao folks
1391799753 M * Ghislain well, in cpuinfo all the siblings lines are at 8
1391799834 M * Ghislain the core id seems to show which ones are paired, but i don't know how to tell which one is the real core and which one is the fake HT one :)
1391799916 M * Bertl usually there is no real or fake between siblings
1391799938 M * Bertl they simply share certain structures like caches and processing pipelines
1391799990 M * Ghislain ok, i have 8 cores but the E3-1230 is a 4-core HT part, i really don't know how the system figures it out :)
1391800041 Q * zerick Ping timeout: 480 seconds
1391800065 M * Bertl upload the cpuinfo, I'll take a look :)
1391800171 M * Ghislain http://paste.linux-vserver.org/41091
1391800294 J * zerick ~eocrospom@190.187.21.53
1391800406 M * Ghislain i see nothing indicating a difference between two affiliated cores :)
1391800423 M * Bertl as I already said, there is no difference
1391800438 M * Ghislain ok :)
1391800448 M * Bertl you have four cores, which are split into 8 HT siblings
1391800464 M * Bertl (something you might consider disabling, btw)
1391800468 M * Ghislain i was wrongly believing that the HT core was a simulated one, not a real one
1391800486 M * Bertl cpu0/1 are the siblings of the first core
1391800492 M * Bertl cpu2/3 the second, etc.
1391800547 M * Ghislain k
1391800550 M * Bertl take as an example a multi-threaded program on a normal core, or two tasks running on a single core
1391800576 M * Bertl which task is the 'real' one, and which is 'time sliced' ?
1391800677 M * Ghislain i see what you mean
1391800724 M * Ghislain the sibling is just a pack of register space used to ease some slicing, i guess (this is how i see it)
1391800747 M * Ghislain of course, if this is it, then the two sets exist and none is the real one
1391800797 M * Ghislain so why disable it, if it just enables the kernel to have an easier time sharing slices on this core ?
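The sibling pairing Bertl reads out of the cpuinfo paste can also be taken straight from proc and sysfs; a small sketch, nothing here is specific to that box:

    # each logical cpu with the physical core it belongs to
    grep -E '^(processor|core id)' /proc/cpuinfo
    # or ask the topology directly: which logical cpus share a core
    for c in /sys/devices/system/cpu/cpu[0-9]*; do
        echo "$(basename $c): $(cat $c/topology/thread_siblings_list)"
    done

On the E3-1230 above this should print pairs such as 0-1, 2-3, 4-5, 6-7, matching Bertl's cpu0/1, cpu2/3 reading.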
1391800813 M * Bertl http://en.wikipedia.org/wiki/Hyper-threading
1391800904 M * Bertl with an increased number of cores, you get increased scheduling overhead in the kernel
1391800949 M * Bertl of course, it always depends on the setup and usage patterns
1391800986 M * Bertl a simple test is to prepare a typical workload and run 4 threads without HT and 8 threads with HT and compare the performance throughput
1391801006 M * Bertl i.e. how much work was done in total and what the latencies are
1391801160 M * Ghislain of course for parallel workloads it can help, but for massive cpu bursts it limits the core, as it must bounce between the two shadow selves
1391801415 M * Ghislain ok, got to go. Thanks for the enlightening discussion ! :)
1391801426 M * Ghislain will play a little with affinity this weekend
1391801701 Q * Romster Ping timeout: 480 seconds
1391803277 Q * zerick Remote host closed the connection
1391803597 J * Aiken ~Aiken@2001:44b8:2168:1000:21f:d0ff:fed6:d63f
1391805549 J * click click@ice.vcon.no
1391805954 Q * _BovineSpongiformEncephalitis_ Ping timeout: 480 seconds
1391806473 J * zerick ~eocrospom@200.1.177.147
1391806661 J * bonbons ~bonbons@2001:a18:205:7b01:d0e7:4929:e878:9174
1391807614 Q * SteeleNivenson Ping timeout: 480 seconds
1391809341 J * DelTree ~deplagne@2a00:c70:1:213:246:39:115:2
1391809343 Q * DelTree
1391809351 J * DelTree ~deplagne@2a00:c70:1:213:246:39:115:2
1391811763 Q * bonbons Quit: Leaving
1391813108 J * GabiP ~Gabi@188.27.182.2
1391814965 J * SteeleNivenson ~SteeleNiv@static-68-161-224-153.ny325.east.verizon.net
1391815940 J * _BovineSpongiformEncephalitis_ ~MadCow@0001c2a3.user.oftc.net
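Bertl's 4-threads-without-HT vs 8-threads-with-HT comparison above can be run without a reboot or BIOS change by taking the second sibling of each core offline through sysfs. A sketch; "mybench" is a hypothetical stand-in for whatever represents the real workload:

    # leave one thread per core (cpu1/3/5/7 are the second siblings on the box above)
    for c in 1 3 5 7; do echo 0 > /sys/devices/system/cpu/cpu$c/online; done
    time mybench --threads 4    # total work and latencies with HT effectively off
    # bring the siblings back and repeat
    for c in 1 3 5 7; do echo 1 > /sys/devices/system/cpu/cpu$c/online; done
    time mybench --threads 8    # compare against the 4-thread run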