1227139334 M * daniel_hozac in 2.3, no. 1227139349 M * daniel_hozac in <2.3, it's 16 unless you modify the kernel. 1227140062 J * nox ~nox@static.88-198-17-175.clients.your-server.de 1227140102 M * nox daniel_hozac: you made my day!!! 1227140110 M * nox an this is 19. and 20. 1227140115 M * nox :) 1227140888 M * micah daniel_hozac: probably should remove logrotate and cron, rather than leave cron disabled 1227141076 Q * pisco_ Ping timeout: 480 seconds 1227141238 M * jescheng danile_hozac: can you explain abit more about modifying the kernel. Is there a patch for <2.3? 1227141714 J * ntrs__ ~ntrs@77.29.11.70 1227142190 J * pisco ~pisco@86.59.118.153 1227142197 Q * ntrs__ Ping timeout: 480 seconds 1227142496 Q * jescheng Remote host closed the connection 1227142506 J * jescheng ~jescheng@proxy-sjc-1.cisco.com 1227142619 Q * noxx Quit: leaving 1227143285 Q * dowdle Remote host closed the connection 1227144205 J * hparker|laptop ~hparker@2001:470:1f0f:32c:212:f0ff:fe0f:6f86 1227146154 Q * geb Remote host closed the connection 1227146207 J * Aiken ~Aiken@ppp118-208-61-83.lns4.bne1.internode.on.net 1227146221 Q * Aiken 1227146237 Q * hparker Remote host closed the connection 1227146261 N * hparker|laptop hparker 1227146771 Q * jescheng Quit: Leaving 1227148705 Q * hparker Ping timeout: 480 seconds 1227148927 J * hparker ~hparker@2001:470:1f0f:32c:212:f0ff:fe0f:6f86 1227149186 J * mike ~mike@dav-82-6.ResHall.Berkeley.EDU 1227149259 M * mike is there a way to make a guest's root directory read-only? 1227149368 M * daniel_hozac sure. 1227149386 M * daniel_hozac put /vservers/ / ext3 bind,ro 0 0 in the guest's fstab. 1227149444 M * mike oh, it's that simple, I figured since there wasn't a / entry in the fstab generated by build that that wouldn't work 1227149446 M * mike thanks 1227152464 J * derjohn_foo ~aj@p5B23DE7F.dip.t-dialin.net 1227152896 Q * derjohn_mob Ping timeout: 480 seconds 1227156919 N * quinq qzqy 1227158394 Q * bliz42 Quit: leaving 1227158751 Q * hparker Quit: Read error: 104 (Peer reset by connection) 1227159648 Q * mike Quit: mike 1227160902 Q * _gh_ Ping timeout: 480 seconds 1227160995 J * ntrs__ ~ntrs@77.29.21.65 1227163478 Q * doener Quit: leaving 1227164134 Q * derjohn_foo Ping timeout: 480 seconds 1227165404 J * mtg ~mtg@vollkornmail.dbk-nb.de 1227166731 Q * larsivi Ping timeout: 480 seconds 1227167452 J * chI6iT41 ~chigital@services.mivitec.net 1227167831 J * dna ~dna@52-200-103-86.dynamic.dsl.tng.de 1227168848 N * Bertl_zZ Bertl 1227168857 M * Bertl morning folks! 1227169053 J * ntrs_ ~ntrs@77.29.8.149 1227169478 Q * ntrs__ Ping timeout: 480 seconds 1227169903 Q * ntrs_ Ping timeout: 480 seconds 1227170279 M * arapaho Hi. I'm testing patch-2.6.26.6-vs2.3.0.35.6.diff against 2.6.26 debian source 1227170296 M * arapaho Host has xfs filesystem, running 3 vservers 1227170298 M * Bertl okay? 1227170309 M * arapaho absolutely 1227170316 M * pmjdebruijn XFS seems popular in combo with vserver :) 1227170337 M * ghislainocfs21 hello :) 1227170343 M * arapaho is there anything that I can give to improve that experimental patch ? 
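A minimal sketch of the read-only root setup daniel_hozac describes above, assuming the guest's tree lives under /vservers/<name> and that util-vserver reads the per-guest fstab from /etc/vservers/<name>/fstab (<name> is a placeholder for the guest's name, not a path taken from the log):

    # /etc/vservers/<name>/fstab -- keep the entries that 'vserver ... build'
    # already generated (e.g. the tmpfs /tmp), and add this line so the
    # guest's tree is bind-mounted read-only as its /
    /vservers/<name>    /    ext3    bind,ro    0 0

Anything the guest still has to write to keeps its own writable entry (the generated tmpfs /tmp line, for instance).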
1227170359 M * arapaho pmjdebruijn: I'm using XFS for years now ;) 1227170395 M * Bertl well, I would start with 2.3.0.35.10 (for 2.6.27.6) + one patch (or wait a few minutes for the .11 1227170426 M * Bertl (that's the version being worked on) 1227170442 M * Bertl and yes, if you feel like kernel coding, there are a bunch of areas which need some love 1227170446 N * morrigan_oO morrigan 1227170461 J * davidkarban ~david@193.85.217.71 1227170719 M * arapaho Bertl: I understand. Next Debian GNU/Linux will be shipped with a 2.6.26 kernel. As it's the next stable, and as we run Debian host for around 300 vservers, we must test 2.6.26 with 2.3.0.35 extensively 1227170734 M * arapaho s/host/hosts/ 1227170746 M * Bertl well, then I'd start backporting the changes to that kernel :) 1227170763 M * arapaho ok. 1227170776 M * Bertl you don't want the known and fixed bugs in your debian patch, I guess 1227170784 M * pmjdebruijn arapaho: do you use debian supplied kernels? 1227170801 M * pmjdebruijn arapaho: http://lkml.org/lkml/2008/10/11/235 1227170809 M * Bertl btw, I consider 2.6.26 to be a really unfortunate choice, as 2.6.27 is a) a lot more stable, and b) will be maintained for some while 1227170815 M * pmjdebruijn arapaho: we use debian as well, but we'd not even consider Debian supplied kernels 1227170951 M * Bertl but hey, the 2.6.26 kernel will be one in a long row of unfortunate choices :) that's probably 'the debian way' 1227170971 M * arapaho we don't use debian supplied 'kernel_image'. We use debian kernel sources 1227171006 M * arapaho and yes Debian choose 2.6.26 1227171008 M * pmjdebruijn arapaho: some crap 1227171011 M * pmjdebruijn same crap 1227171023 M * pmjdebruijn Debian has never had good kernels 1227171049 M * pmjdebruijn one of the few downsides of Debian 1227171065 M * pmjdebruijn arapaho: make-kpkg is your friend :) 1227171077 M * arapaho yes we use make-kpkg 1227171080 J * Mojo1978 ~Mojo1978@ip-88-152-50-100.unitymediagroup.de 1227171131 J * larsivi ~larsivi@85.221.53.194 1227171223 M * Bertl arapaho: as I said, get your patch updated to vs2.3.0.35.10/11 and follow the development closely, backporting changes to your kernel (and of course, do a lot of testing, as your kernel will otherwise be untested) 1227171234 M * arapaho sorry i made a call at the same time. 1227171247 M * arapaho Bertl: ok. 1227171271 M * pmjdebruijn arapaho: I'd trust Adrian Bunk's maintenance more than most other distro 1227171286 M * pmjdebruijn Adrian's 2.6.16.62 is probably one of the most stable kernels I've ever run... 1227171301 M * pmjdebruijn which is _still_ maintained btw 1227171323 M * pmjdebruijn in a couple of month's Adrian will stop maintaining 2.6.16 and move on to maintaining 2.6.27 1227175919 J * Aiken ~Aiken@ppp118-208-62-96.lns4.bne1.internode.on.net 1227176456 P * ghislainocfs21 1227176472 J * ghislainocfs2 ~Ghislain@adsl2.aqueos.com 1227177814 Q * Aiken Remote host closed the connection 1227179083 J * ntrs_ ~ntrs@77.29.8.149 1227179568 Q * ntrs_ Ping timeout: 480 seconds 1227179808 M * Bertl off for now .. bbl 1227179821 N * Bertl Bertl_oO 1227180803 J * kir ~kir@swsoft-msk-nat.sw.ru 1227182334 J * gnuk ~F404ror@pla93-3-82-240-11-251.fbx.proxad.net 1227183693 J * mrfree ~mrfree@host1-89-static.40-88-b.business.telecomitalia.it 1227185470 J * yarihm ~yarihm@guest-docking-nat-1-133.ethz.ch 1227186322 J * doener ~doener@87.122.198.96 1227187875 M * Bertl_oO back now ... 
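Since arapaho mentions building with make-kpkg, a rough sketch of that build against the Debian source tree; the directory layout and version suffix below are assumptions, the patch name is the one arapaho is testing, and it is assumed the patch applies cleanly to Debian's tree:

    cd /usr/src/linux-source-2.6.26
    patch -p1 < ../patch-2.6.26.6-vs2.3.0.35.6.diff      # the experimental patch under test
    cp /boot/config-$(uname -r) .config && make oldconfig
    make-kpkg clean
    fakeroot make-kpkg --initrd --append-to-version=-vs2.3.0.35 kernel_image

Per Bertl's advice above, the same procedure applies when rebasing onto the current 2.3.0.35.x patch instead.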
1227187879 N * Bertl_oO Bertl 1227187983 M * ghislainocfs2 bertl, can you tell me if there any work on the hard scheduler for 2.3 ? i saw that there is some on ipv6 and xfs and i wondered for this part :) 1227188057 M * Bertl no work on the CFS version of the TB-Hard CPU scheduler yet 1227188068 M * ghislainocfs2 ok thanks ! :) 1227189474 M * Bertl daniel_hozac: any pending patches we should add to 2.3.0.36 ? 1227189486 M * daniel_hozac the mainline patches. 1227189506 M * Bertl ah, yes, do you have the urls at hand? 1227189515 M * daniel_hozac https://lists.linux-foundation.org/pipermail/containers/2008-October/013825.html and https://lists.linux-foundation.org/pipermail/containers/2008-October/013823.html 1227189528 M * Bertl excellent, will check and integrate 1227190259 N * qzqy quinq 1227191535 Q * yarihm Quit: This computer has gone to sleep 1227192029 M * Bertl daniel_hozac: okay, uploaded two patches for review (if you find some time) 1227192086 M * daniel_hozac they're the patches from the emails, right? 1227192116 M * daniel_hozac they look find to me. 1227192130 M * Bertl nah, those are the adapted versions for vs2.3.0.35.10 1227192148 M * daniel_hozac well, yeah. 1227192155 M * Bertl http://vserver.13thfloor.at/Experimental/delta-sigkill-fix01.diff 1227192160 M * Bertl http://vserver.13thfloor.at/Experimental/delta-netns-fix01.diff 1227192163 M * daniel_hozac right. 1227192217 M * daniel_hozac they look good to me. 1227192295 M * Bertl excellent, tx! 1227192569 Q * larsivi Remote host closed the connection 1227193950 J * hparker ~hparker@2001:470:1f0f:32c:215:f2ff:fe60:79d4 1227194170 J * ntrs_ ~ntrs@77.29.10.185 1227194459 N * Bertl Bertl_zZ 1227194464 M * Bertl_zZ off for a nap ... bbl 1227194511 Q * chI6iT41 Ping timeout: 480 seconds 1227195049 Q * bluegene_ Quit: leaving 1227196136 M * micah daniel_hozac: the reason I suggest removing cron altogether is because on Debian as soon as there is a new security update, it will get activated... so it should just be removed 1227196867 J * derjohn_mob ~aj@80.69.42.51 1227197816 Q * derjohn_mob Ping timeout: 480 seconds 1227198141 J * dowdle ~dowdle@scott.coe.montana.edu 1227198172 M * blathijs micah: Huh? Cron is not installed on new vservers? Or is that only with recent util-vserver? 1227198197 M * micah blathijs: i'm speaking of debian guests, cron is installed, but the startup links are disabled 1227198243 M * blathijs micah: Has that recently changed? My guests seem to have cron running just fine... 1227198266 M * micah blathijs: yes, it is a recent change, and only will apply to newly built guests 1227198328 M * daniel_hozac if by recent you mean a year old, sure. 1227198375 M * blathijs I don't see the "preserve cron links" stuff in my util-vserver's debian/initpost, so that agrees with daniel_hozac :-) 1227198521 M * micah blathijs: yeah, this is only a surprise to people who are running debian stable, who will be upgrading to a new stable sometime soon and this behavior will change 1227198591 M * blathijs But still, I don't remember enabling cron, so I guess I upgraded cron somewhere along the way then 1227198597 M * blathijs (as it is enabled then as you say) 1227198616 M * daniel_hozac is that ever going to get fixed? 1227198632 M * daniel_hozac it is by far the most annoying thing with Debian. 1227198651 J * doener_ ~doener@i577AF444.versanet.de 1227198712 M * daniel_hozac actually, apt asking questions is probably right up there too. 
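For reference, the two deltas Bertl posted are the adapted versions for vs2.3.0.35.10, so they go on top of a tree that already carries that patch; a sketch of stacking them, assuming (as with Bertl's other delta patches) that they apply with -p1 against the patched tree:

    cd linux-2.6.27.6
    patch -p1 < ../patch-2.6.27.6-vs2.3.0.35.10.diff
    patch -p1 < ../delta-sigkill-fix01.diff    # SIGKILL fix adapted from the containers list
    patch -p1 < ../delta-netns-fix01.diff      # netns fix adapted from the containers list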
1227198829 M * ghislainocfs2 for debian you shoudl not remove the cron by update-rc -f cron remove 1227198849 M * ghislainocfs2 but you should set it up to be off for all the runklevel 1227198857 M * ghislainocfs2 this way upgrade will NOT reenable it 1227198865 M * daniel_hozac that requires specifying the priority. 1227198868 M * daniel_hozac which is just crack. 1227198871 M * ghislainocfs2 this is the way you disable a service in debian 1227198878 M * ghislainocfs2 yes lol 1227198905 M * ghislainocfs2 but knowing it makes life simpler :) 1227198918 M * daniel_hozac knowing the priorities? 1227198937 M * ghislainocfs2 no, knowing how to disable in apt friendly way a service 1227198954 Q * doener Ping timeout: 480 seconds 1227198985 Q * dowdle Remote host closed the connection 1227199192 M * daniel_hozac if only it was as easy as chkconfig off. 1227199338 Q * mrfree Quit: Leaving 1227199367 M * blathijs I usually rather like Debian's ways, but this is indeed rather braindead 1227199390 M * micah i still think it would be better to remove the cron package altogether 1227199424 M * micah because this leaves the least surprises 1227199460 M * daniel_hozac sure. 1227199530 M * micah (not just for start/stop links, but also for the admin who installs a service that requires a functional cron: depency will be satisfied, but the cron will be off.) 1227199618 M * daniel_hozac patches accepted, as always. 1227199682 J * jsullivan ~jsullivan@cpe-76-178-208-188.maine.res.rr.com 1227199744 M * jsullivan Good day, everyone. 1227199756 M * jsullivan I think we've hit a showstopping problem on our vserver deployment. 1227199779 M * jsullivan My fault for not investigating thoroughly enough but I thought I'd see if the issue has changed. 1227199780 M * jsullivan NFS 1227199802 M * jsullivan We are planning a large hosted NFS deployment and were intrigued with vservers resource utilization . . 1227199808 M * jsullivan compared to Xen and KVM. 1227199820 M * jsullivan But, it looks like kernel-mode nfs is not supported . . 1227199831 M * jsullivan and we do not want to go with user mode for lack of support. 1227199842 M * jsullivan Is there any chance the NFS issue has been addressed? 1227199844 M * daniel_hozac run it on the host. 1227199847 J * dowdle ~dowdle@scott.coe.montana.edu 1227199852 M * blathijs micah: That would also require removing logrotate, which seems to be the only package that debootstrap installs that depends on cron 1227199854 M * daniel_hozac it doesn't make sense to run kernel services in a guest. 1227199867 M * jsullivan Yes - that's what tipped me off at first. 1227199896 M * jsullivan Alas, we were hoping to carve out many separate NFS servers. 1227199909 M * jsullivan OK - I'll assume this is one application for which vserver is not ideal. 1227199914 M * jsullivan We certainly have been impressed . . 1227199920 M * jsullivan and want to use it wherever we can. 1227199921 M * jsullivan Thanks. 1227200219 M * ard Hmmmm, I really don't see a reason that nfs cannot be used from the host... 1227200239 M * jsullivan I'd love to have that discussioon if no one minds . . . 1227200240 M * ard Since the host (as wel as the host on the client) is considered save 1227200244 M * jsullivan perhaps I am just braincramping . . 1227200249 M * jsullivan and blinding myself to the obvious. 1227200270 M * jsullivan I have two issues . . . 
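A sketch of the two approaches discussed above for a Debian guest; the stop priority and runlevels below are arbitrary hand-picked values, which is exactly the annoyance daniel_hozac is pointing at:

    # micah's suggestion: drop the packages entirely (blathijs notes logrotate
    # is the only debootstrap-installed package that pulls in cron)
    apt-get remove --purge cron logrotate

    # ghislainocfs2's "apt-friendly" alternative: leave cron installed but
    # give it only stop links, so a later upgrade will not re-enable it
    # (a plain 'update-rc.d -f cron remove' gets undone by the next upgrade)
    update-rc.d -f cron remove
    update-rc.d cron stop 11 2 3 4 5 .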
1227200274 M * ard in a xen situation you have a number of unsafe hosts, therefore you need a number of unsafe nfs servers 1227200304 M * jsullivan Security is one - this is a hosted environment and I wouldn't want any of our clients to ever touch the underlying host. 1227200322 M * jsullivan The second is multiplicity . . 1227200323 M * ard In a vserver situation I consider the host save, and the vservers don't have the right to mount or otherwise access the nfs server except through the vfs layer 1227200362 M * jsullivan I can see that point of view . . 1227200371 M * jsullivan but the second issue is eminanently practical. 1227200391 M * jsullivan We had originally envisioned several very large, shared file servers carved up among our clients. 1227200411 M * jsullivan We began to realize some of the LDAP issues that created for uniqueness among uids, uidnumbers and gidnmbers. 1227200428 M * jsullivan Thus, we started leaning toward a separate file server for each client since it was so easy to clone guests in vserver. 1227200446 M * jsullivan That's when we realized we might have a problem with a kernel service in a guest. 1227200482 M * jsullivan Unless I'm missing something . . . 1227200490 M * ard but why does the file server needs to be seperate? You can have multiple names point to the same server ;-) 1227200493 M * jsullivan think we have only a few options. 1227200502 M * jsullivan uids 1227200507 M * daniel_hozac what about them? 1227200509 M * ard per vserver you can point to a different mountpount 1227200531 M * ard ah 1227200542 M * ard the nfs server does not care about uids 1227200555 M * ard neither the host or the vserver actually 1227200566 M * jsullivan That's true. 1227200569 M * ard only the name resolver cares about uid :-) 1227200586 M * jsullivan Let me step back and think though about the details of our setup. 1227200593 M * jsullivan Perhaps there is a better way as you suggest. 1227200625 M * ard what's uid 100 on the nfsserver does not really matter, only the vserver client decides what uid 100 means 1227200650 M * ard of course you will have some problems if you do ls on your nfsserver ;-) 1227200651 M * jsullivan We had originally planned a large shared virtual desktop server with many NFS servers . . 1227200661 M * jsullivan If I turn that around, I might be able to get around the uid problem. 1227200701 M * jsullivan No, wait a second . . . 1227200708 M * jsullivan this may be my ignorance of NFS . . . 1227200721 M * jsullivan but doesn't NFS look to the uid for security? 1227200736 M * jsullivan We are using the NFS mount for home and shared directories. 1227200746 M * ard in what way? 1227200753 M * daniel_hozac NFS is inherently insecure. 1227200756 M * jsullivan We do not want to assign a single user to all NFS users. 1227200783 M * daniel_hozac the client decides what uid the user is, and the server if that uid has access to the files. 1227200784 M * ard daniel_hozac : that's another thing ;-) 1227200793 M * jsullivan NFS will only allow a user to write to the mounted store if the uid matches the uid on the nfs server and has appropriate rights. 1227200800 M * jsullivan Isn't that the case? 1227200811 M * daniel_hozac assuming a well-behaved client, sure. 1227200840 M * jsullivan So my uids on my various client virtual desktops much all be replicated on my nfs server (which I am doing via LDAP). 1227200848 M * daniel_hozac no. 1227200853 M * ard uids are numbers, not names... 1227200861 M * jsullivan OK - please correct me - glad to learn. 
1227200862 M * daniel_hozac as ard said, the server doesn't care what the uid is. 1227200882 M * jsullivan Really? 1227200886 M * ard jups 1227200888 M * jsullivan It doesn't for the mount. 1227200896 M * jsullivan but it does for file access, doesn't it? 1227200903 M * ard no 1227200911 M * ard it is still number matching, not name matching ;-) 1227200914 M * daniel_hozac it uses the uid to check for access. 1227200923 M * jsullivan Yes, exactly. 1227200939 M * jsullivan Let's say a file has rwx------ 1227200944 M * ard In a typical unix environment there is only id(think of it as a numnber) checking. 1227200946 M * jsullivan and is owned by uid 2004 1227200968 M * jsullivan If uid 2005 on the client tries to access the file, it will fail . . 1227200969 M * ard Converting that to a name is just another story. Can be done by ldap, or the local /etc/passwd 1227200980 M * ard jups 1227200990 M * jsullivan wherease if uid 2004 on the client tries to access, he will succeed. 1227200992 M * jsullivan Correct? 1227200999 M * ard jups 1227201011 M * jsullivan So, the nfs server needs to know about uid 2004. 1227201021 M * daniel_hozac "uid 2004" is all it needs to know. 1227201026 M * ard it already knows, because it's a number ;-) 1227201034 M * jsullivan Yes, exactly. 1227201050 M * jsullivan Now, if I have 10000 users in various containers in my LDAP tree . . . 1227201066 M * jsullivan let's say spread among 1000 clients . . . 1227201078 M * jsullivan If I have separate nfs servers for them . . . . 1227201100 M * jsullivan they only need to search a small subset of the tree for the uids . . 1227201114 M * jsullivan and they only need to know of a small subset of the uids. 1227201118 M * daniel_hozac the server doesn't need to search at all. 1227201124 M * jsullivan The search scope creates a naming problem. 1227201132 M * ard better yet, your server doesn't really care about ldap... 1227201138 M * ard those names mean nothing 1227201151 M * ard the name is only used during login for name -> uid mapping 1227201172 M * ard And it then forks processes with that uid 1227201205 M * ard if you do ls -l of a directory with unresolvable uids, you just get numbers instead of file owners... 1227201208 M * jsullivan Ah, right - it's the uidnumbers which must be completely unique. 1227201233 M * daniel_hozac no, they don't. 1227201235 M * ard yes, if you can get that unique across the company your the man :-) 1227201258 M * jsullivan We do actually have an LDAP constraint and numbering convention to do that. 1227201265 M * daniel_hozac if you have separate administrative domains, with separate exports, you really don't need them to be unique across the board. 1227201283 M * ard in the end ldap or /etc/passwd really don't matter, it's the uid and gid's that count 1227201291 M * jsullivan That's what we are discussing internally right now. 1227201336 M * jsullivan Am I being imposing on the list or may I walk this through just a bit further to make sure I understand you? 1227201409 M * ard well, there are no other questions currently, and daniel_hozac and Bertl_zZ are amazingly patient ;-) 1227201424 M * jsullivan OK thanks. 1227201446 M * jsullivan Let's say I have two users . . . 
1227201453 M * jsullivan I'll shorten the dns 1227201465 M * jsullivan uid=jsullivan.dc=client1 1227201466 J * bonbons ~bonbons@2001:960:7ab:0:2c0:9fff:fe2d:39d 1227201479 M * jsullivan uid=jsullivan,dc=client2, uidnumber 2004 1227201494 M * jsullivan uid=jsullivan.dc=client1 uidnumber 2005 1227201516 M * jsullivan on the NFS server I have the following file structure: 1227201537 Q * mtg Quit: Verlassend 1227201565 M * jsullivan oops! 1227201578 A * ard suddenly understands the reason for the uid confusion ;-) 1227201582 M * jsullivan "data/client1/jsullivan owned by 2005" 1227201599 M * jsullivan "data/client2/jsullivan owned by 2004" 1227201652 M * jsullivan NFS will keep permissions sorted based upon uidnumber so the two jsullivans are not confused. 1227201658 M * ard if we say uid, we mean that uidnumber, but apparantly ldap refers to the name 1227201666 M * ard correct ;-) 1227201694 M * jsullivan Ah, yes. sorry I didn't explain that. 1227201703 M * jsullivan The issues are more practical than technical. 1227201719 M * jsullivan for example, when a user creates a new directory in their shared area . . . 1227201721 M * ard well, I guess that's part of your confusion ;-). Unix file rights are really simple 1227201738 M * jsullivan and they click on the Konqueror drop down box to assign a group . . 1227201756 M * jsullivan they will see all the groups of every client if the LDAP scope is root. 1227201765 M * jsullivan Of course, that is not NFS's problem 1227201775 M * jsullivan and where turning our setup around might solve the problem. 1227201790 M * jsullivan However, given the directory layout above . . . 1227201807 M * jsullivan if I do ls -l on data/client1/jsullivan . . . 1227201827 M * jsullivan won't the NFS server do an ldap query searching for a name . . 1227201829 M * jsullivan Ah 1227201836 M * ard what you will see depends on your resolving 1227201841 M * jsullivan It will get it right because it is searching for the uidnumber 1227201843 M * ard the nfs server won't care about it 1227201848 M * ard correct 1227201866 M * ard the nfs server just tells the uidnumber to the client, and the client will resolve it 1227201900 M * ard and in your case the resolving will be done with ldap I guess :-) 1227201904 M * jsullivan OK - so if I keep uidnumbers unique I solve the name resolution problem . . . 1227201926 M * jsullivan If I limit the LDAP scope on the Virtual Desktop servers instead of the NFS server . . 1227201931 M * ard Well, there is also groupidnumber ;-) 1227201932 M * jsullivan I solve the group visibility problem. 1227201944 M * jsullivan Yes, we can keep those unique much easier than uids. 1227201962 M * ard the nfsserver doesn't care about ldap, you can have an /etc/passwd with just root (to log in), and nothing else 1227201964 M * jsullivan The only remaining issue is the security of having clients touching the underlying host. 1227202002 M * jsullivan The uid is set on the file by the client and the server does not care? 1227202016 M * ard correct :-) 1227202021 M * ard that makes nfs insecure 1227202022 M * jsullivan Ah, I didn't realize that. 1227202035 M * jsullivan I thought the uid had to live on the server as a user. 1227202040 M * jsullivan That makes a big difference. 1227202052 A * ard hears a coin drop ;-) 1227202058 M * jsullivan 1227202092 M * jsullivan I suppose it is a visceral reaction that I would never want anyone touching a host . . 
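An illustration of the point ard and daniel_hozac are making, reusing the uidnumber from jsullivan's example; the paths and command output are invented for illustration:

    # on a virtual-desktop client, LDAP resolves the name to a number
    $ id jsullivan
    uid=2005(jsullivan) gid=2005(jsullivan) groups=2005(jsullivan)

    # on the NFS server there is no such account and no LDAP lookup at all;
    # the files simply carry the number, and the number is all it checks
    $ ls -ln /data/client1/jsullivan
    drwx------ 2 2005 2005 4096 Nov 20 12:00 Documents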
1227202095 M * ard but the security of vserver is that the host can deny the vserver access to other directories on the vserver 1227202096 M * jsullivan in a virtualized environment. 1227202099 M * ard but the security of vserver is that the host can deny the vserver access to other directories on the nfsserver 1227202146 A * ard needs to make a short meeting 1227202154 M * jsullivan Thanks very much, ard. 1227202163 J * geb ~geb@79.82.4.4 1227202169 M * jsullivan This was most helpful although I still must consider the security. 1227202430 M * ard well, there is one other thing you must know about nfs: 1227202471 M * ard if you open a file, the client asks the file-id from the nfs server, and just sends read and write messages with just that file-id 1227202494 M * ard the file-id usually is made op from the mountpoint number and the inodenr. 1227202548 M * ard So if you were running an unsafe nfsclient, the client could just put another inodenr there, and so write to portions of the disk which might not have been exported 1227202621 M * ard so that rules out xen as client with nfs servers. 1227202640 M * jsullivan Ah, you're back, ard. 1227202646 M * ard if you use vservers, you are actually assuming that the host is safe, and will not be rooted 1227202696 M * ard so that's your security... A rooted host can -with difficulty- do weird things. 1227202732 M * jsullivan Let me digest that for a second. 1227202735 M * ard The chance of a xen domu being rooted is much greater than the chance of a vserver host being rooted, because the host is tied down ;-) 1227202752 M * jsullivan That's exactly what I'm concerned about - being able to access files outside the export. 1227202778 M * jsullivan Pardon my ignorance but would you please define "rooted"? 1227202781 M * ard if you can secure the vserver host then there is no problem ;-) 1227202811 M * ard rooted is getting uidnumber=0 with full system access. 1227202817 M * jsullivan Ah, OK. 1227202851 M * ard a vserver can also be rooted (getting uidnumnber=0) but you won't get full system access, hence the rooting is contained 1227202857 Q * davidkarban Quit: Ex-Chat 1227202886 M * daniel_hozac that depends entirely on the exploit. 1227202912 M * ard well, yes ;-) 1227202913 M * jsullivan If I've gained root access on a vserver host, how am I contained? 1227202927 A * ard assumed no kernel exploits 1227202928 J * chI6iT41 ~chigital@tmo-100-136.customers.d1-online.com 1227202943 M * jsullivan To me that's the ultimate security breach. 1227202965 M * ard jsullivan : you cannot change ip's, you cannot sniff, you cannot mount, you cannot make device nodes. You can only destroy the vserver 1227202970 M * ard ow 1227202971 M * ard wait 1227202973 M * ard sorry 1227202978 M * jsullivan Really? 1227202984 M * ard vserver host is indeed the ultimate breach 1227202985 M * jsullivan Not my experiecne 1227202990 M * jsullivan Ah, OK. 1227203027 M * ard but the host should contain nothing more than setting up iptables, starting the vservers and having a serial console login or so ;-) 1227203035 M * jsullivan Is changing the inodenr as simple as hacking the packet to change the field and checksum? 1227203057 M * jsullivan that's why we don't want to run user services in the host! 1227203061 M * ard that's the idea.. 1227203102 M * jsullivan Hmm . . . making the host the file server would solve the nfs and samba broadcast problems for us . . 
1227203117 M * jsullivan just still that queezy feeling about users on the host :( 1227203132 M * ard samba broadcast is not needed when you use wins 1227203143 M * jsullivan Ah, we would be using WINS. 1227203158 M * ard and I would not run the samba server on the host 1227203160 M * jsullivan Does that preclude needing to bind the broadcast address to the vserver samba guest? 1227203197 M * daniel_hozac why would you have users on the host? 1227203199 M * ard maybe samba barfs about not being able, but I don't see a reason for samba to broadcast or multicast when wins is there 1227203227 M * jsullivan Because they are using the file system exported via nfs, daniel. 1227203239 M * daniel_hozac so? 1227203249 J * davidkarban ~david@193.85.217.71 1227203249 M * jsullivan Thus, cracking the system by changing inodenr could give them access to other clients' data. 1227203272 M * ard Hmmmm, jsullivan is confused again ;-) 1227203282 M * jsullivan Not unusual! 1227203287 M * jsullivan Deconfuse me please. 1227203296 M * ard for nfs you don't need users... 1227203312 M * ard a file has an owner number, and a group number 1227203324 M * jsullivan Perhaps I misunderstood your earlier comment. 1227203348 M * jsullivan going back to the above scenario . . . 1227203358 M * ard well I was talking about rogue nfs clients. That's not your scenario 1227203381 M * jsullivan If I export /data/client1 to VDhost1:/data/ 1227203385 J * er ~sapanbhat@pool-71-188-83-144.cmdnnj.east.verizon.net 1227203386 M * ard the vserver host is the nfs client, which mounts the nfs server for the vservers 1227203410 M * jsullivan Whoops! That's a whole different scenario. 1227203428 M * ard you must see it like this: 1227203434 M * er Bertl_zZ: hi. could you ping me when you get back? 1227203436 M * jsullivan I thought you were recommending running the NFS server on the host and having clients mount from the host. 1227203443 M * ard you have a big server with a lot of diskspace, and you have all the vservers on that server 1227203449 M * jsullivan YEs. 1227203471 M * jsullivan By the way, the user clients are not on the vserver system. 1227203485 M * ard now, we move that diskspace to another server, mount that from the vserver host, and nothing changes for the vserver guests 1227203514 M * jsullivan Yes, but that's not what I'm doing. 1227203523 M * jsullivan Perhaps I didn't explain clearly. 1227203527 M * ard so you must see the nfs space as part of your vserver host... not of the vserver guests 1227203540 M * geb hi 1227203544 M * ard the vserver host may assign it to the guests ;-) 1227203560 M * jsullivan Oh, not what I was trying to do at all. 1227203578 M * jsullivan In my case, I have a server with oodles of disk space. 1227203587 M * jsullivan That server is running vserver. 1227203604 M * ard well, then they can share that space ;-) 1227203608 M * jsullivan I have another separate computer with minimal disk space which runs user virtual desktops. 1227203638 M * ard what is a user virtual desktop? 1227203640 M * jsullivan The VD servers need to mount NFS shares from the vserver. 1227203666 M * jsullivan We will be using either X2go or noMachine to provide desktops to remote users. 1227203688 M * jsullivan So jsullivan will be sitting on VD1/jsullivan just for arguments sake. 1227203724 M * jsullivan VD1:/data/users/jsullivan - jsullivan's home directory . . 1227203730 M * jsullivan should be mounted from the nfs server. 1227203757 Q * pmenier Quit: Konversation terminated! 
1227203771 M * jsullivan Let's say VD1 is used by client1 and VD2 is used by client2 1227203786 M * jsullivan We had wanted two separate NFS servers carved out of the vserver host . . 1227203792 M * jsullivan one for each client. 1227203793 M * ard the VDserver actually runs the applications? 1227203797 M * jsullivan Yes. 1227203828 M * jsullivan so when the user on VD1 creates a new subdirectory in their home directory, it is being created on the nfs server. 1227203842 M * ard And VDserver starts each individual desktop? 1227203858 M * jsullivan Each VD server can host hundreds of virtual desktops. yes. 1227203886 M * ard in that case it's not a vserver question ;-). 1227203886 M * jsullivan My understanding is we cannot run two nfs servers on the vserver as guests. 1227203900 M * jsullivan No, the VDs are not on vserver. 1227203910 N * Bertl_zZ Bertl 1227203910 M * jsullivan Actually they will be but that's a different question. 1227203914 M * Bertl back now ... 1227203914 M * ard on the very big diskspace server just create a volume that's exported to the VDserver 1227203928 M * jsullivan Yes, that's what we're coming down to. 1227203943 M * jsullivan We would have like to carve that big disk space into two vserver guests each running NFS. 1227203947 M * ard and as long as the VDservers are secure, you can just mount what you want 1227203950 M * jsullivan one exports to VD1 and the other to VD2. 1227203972 M * jsullivan If I understand your suggestion correctly, we should run NFS on the vserver host . . 1227203980 M * ard no, just make to volume's and export each volume to a VD server? 1227203982 M * jsullivan and export separate subdirectories to VD1 and VD2. 1227204002 M * ard yes, since the nfs server is in the kernel 1227204024 M * ard and you don't want to go down the road to run nfs-userland, since that code made my colleage cry... 1227204029 M * jsullivan So our previous discussion dispelled all my concerns about uids,uidnumbers and gidnumbers . . . 1227204036 M * jsullivan Bertl_zZ: hi. could you ping me when you get back? 1227204261 M * jsullivan Yes - in fact we are a bit paranoid about security and will be using microperimeter security to separate the VDs from the host systems. 1227204303 M * jsullivan So we start with a vserver host with a huge disk - /dev/sda 1227204313 M * jsullivan We carve it into logical volumes . . 1227204331 M * jsullivan and create a big file sharing volume named Files. 1227204351 M * Bertl ard: thanks! 1227204362 M * ard well, vserver has nothing to do with it :-). 1227204368 M * jsullivan on /dev/VG00/Files I have the following directory structure. 1227204375 M * jsullivan data/client1 1227204376 M * ard vserver could have meant something on the vdserver side ;-) 1227204380 M * jsullivan data/client2 1227204420 M * jsullivan now, on the vserver host, I run nfs and export data/client1 to VD1:/data . . 1227204422 M * ard Bertl : sorry about the bandwidth from this discussion ;-) 1227204434 M * jsullivan Oops - should I bow out for a while? 1227204440 M * jsullivan I've been consuming an awful lot. 1227204535 M * Bertl nah, it's fine 1227204536 M * ard well, I've learned about x2go and nomachine 1227204552 Q * davidkarban Quit: Ex-Chat 1227204566 M * jsullivan So I export data/client1 to VD1 and export data/client2 to VD2 1227204590 M * jsullivan Can a malicious user on VD2 crack the inodenr and access files on data/client1? 1227204651 M * jsullivan Or can they only jump around in data/client2 1227204651 M * jsullivan ? 
1227204651 M * ard yeas 1227204653 M * ard eh 1227204654 M * ard yeas 1227204662 M * ard hmmm, keyboard problem ;-) 1227204666 M * jsullivan LOL 1227204689 M * ard you should have created /dev/VG00/FilesVD1 and /dev/VG00/FilesVD2 1227204690 M * jsullivan Then I suppose they could also hop into /etc/vserver, correct? 1227204702 M * jsullivan Ah, they cannot cross volumes? 1227204704 M * ard and export FileVD1 to VD1 and FilesVD2 to VD2 1227204708 M * Bertl well, let me put it this way: your 'thin clients' are connected via nfs to a server with a big partition, right? 1227204717 M * jsullivan Yes. 1227204751 M * Bertl so, to do anything evil, somebody would have to 'crack' your NFS security, and write to places which are not exported, for example 1227204755 M * jsullivan Hmm . . .that takes away some of the resource sharing advantages of vserver but is no worse than KVM or Xen. 1227204771 M * Bertl and he needs to do that over network, right? 1227204777 M * jsullivan Yes. 1227204801 M * Bertl so, you are looking in the wrong place here, if you want to know if NFS can be considered secure, right? :) 1227204808 M * jsullivan No. 1227204820 M * Bertl no? are we the NFS channel? :) 1227204828 M * jsullivan The discussion really began about vserver perhaps not being a good solution for us . . . 1227204839 M * jsullivan because we wanted to isolate the NFS file systems . . 1227204843 M * ard and then I corrected him... 1227204852 M * jsullivan and could not run nfs on the vserver guests. 1227204872 M * jsullivan daniel and ard suceeded in allaying most of my concerns . . 1227204886 M * ard So in the end it's not a vserver discussion, but it took me a while to figure that out ;-) 1227204887 M * jsullivan but we were discussing the security implications of running a single nfs server on the vserver host. 1227204911 M * jsullivan If we can be comfortable with the security of running nfs on a vserver host . . 1227204916 M * jsullivan we can use vserver. 1227204925 M * jsullivan If not, we'll relunctantly move over to Xen or KVM. 1227204967 M * jsullivan If we could theoretically run nfs in a vserver guest, we would solve this inodenr cracking roblem. 1227204969 M * ard but where is that choice between vserver and xen? (you can run vserver guests in a DOMu vserver host) 1227205003 M * jsullivan Well, if we are running Xen, we do not have many of the great advantages of vserver but we would be able to isolate . . 1227205007 M * jsullivan each nfs server to each client. 1227205027 M * jsullivan Then they could only mangle their own data ) 1227205029 M * jsullivan :) 1227205077 M * Bertl jsullivan: it's not very different to Xen from the security PoV 1227205097 M * Bertl you can as well (on Linux-VServer) mount each nfs entry separately for each guest 1227205114 M * ard well, what you mean is that you make a volume for each xen DOMu, and export that, which would be as secure as exporting that volume from the host ;-) 1227205136 M * ard But what are the clients... 1227205155 M * ard If I am correct there are 2 clients: VDserver1 and VDserver2 ? 1227205166 M * jsullivan Yes . . . fr discussion's sake. 1227205198 M * jsullivan So, the inodenr crack cannot cross volumes? 1227205206 M * ard ah, you have a cluster of VDservers, of which each can spawn multiple desktops 1227205211 M * ard correct 1227205226 M * jsullivan Yes. 1227205236 M * jsullivan We are scaling for thousands of users with hundreds on each VD. 1227205241 M * jsullivan VD server. 
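A sketch of ard's one-filesystem-per-client layout on the file-serving host; the volume names come from the discussion, while sizes, mount points, filesystem type and client hostnames are invented. Exporting whole, separate filesystems (rather than subdirectories of one big volume) is what keeps a forged file handle from one VD server out of the other client's data:

    # one logical volume and filesystem per client
    lvcreate -L 200G -n FilesVD1 VG00 && mkfs.xfs /dev/VG00/FilesVD1
    lvcreate -L 200G -n FilesVD2 VG00 && mkfs.xfs /dev/VG00/FilesVD2
    mount /dev/VG00/FilesVD1 /export/client1
    mount /dev/VG00/FilesVD2 /export/client2

    # /etc/exports on the host: each VD server only ever sees its own filesystem
    /export/client1    vd1.example.com(rw,sync,no_subtree_check)
    /export/client2    vd2.example.com(rw,sync,no_subtree_check)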
1227205259 M * jsullivan We are also planning to use vserver for that but have not begun testing. 1227205274 M * jsullivan So I think I see the end of the discussion here . . . 1227205288 M * jsullivan the inodenr crack cannot cross volumes. 1227205298 M * ard if you use VDserver in a vserver guest, it automatically means that your vserver host is 99.99% secure 1227205301 M * jsullivan If we use separate volumes, we are as secure as Xen or KVM . . 1227205318 M * jsullivan It will be on a different piece of hardware. 1227205319 M * ard so the VDservers would then not be able to do that kind of cracks 1227205322 M * jsullivan On the other side of a firewall. 1227205333 J * cga ~weechat@94.36.113.17 1227205359 M * jsullivan If we use separate volumes, we are as secure as Xen or KVM . . 1227205368 M * jsullivan but we loose the pooled disk sharing of vserver . . 1227205372 M * ard on the export site 1227205374 M * jsullivan probably an acceptable trade. 1227205428 M * jsullivan This has been extremely helpful and may pull us back into the vserver fold for our file services solution. 1227205430 M * jsullivan Thanks. 1227205486 A * ard would try to secure the VDserver within vserver guests 1227205497 M * ard that also helps against headaches ;-) 1227205520 M * jsullivan Yes, that's the plan. 1227205528 A * ard should actually go home 1227205537 M * ard jsullivan let us know! 1227205538 M * jsullivan A very large vserver dedicated to several VDs in vserver guests. 1227205547 M * ard jsullivan x2go and nomachine sounds interesting 1227205552 M * jsullivan yes - if this launches as expected, we hope it will be a nice showcase. 1227205571 M * jsullivan I think X2go with vserver is a real alternative to the Citrix offerings. 1227205599 M * jsullivan that's the very next project after setting up file services. 1227205602 M * ard anyway: O/~ 1227205612 M * jsullivan Adios. 1227205673 M * jsullivan btw, if anyone is interested, the X2go folks are in Vienna tomorrow. 1227205680 M * Bertl as a side note: security inside a guest is much higher than on a default Xen system for example 1227205755 M * jsullivan I'm honestly a bit surprised to hear that. 1227205763 M * jsullivan I thought Xen was very well isolated? 1227206253 Q * ghislainocfs2 Quit: Leaving. 1227206552 Q * sardyno Quit: leaving 1227206586 J * sardyno ~me@pool-96-235-18-120.pitbpa.fios.verizon.net 1227206677 M * geb jsullivan, http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=464969 1227206730 M * geb well isolated ? ;) 1227206819 M * Bertl jsullivan: the main point is, inside a Xen domU, you have basically all capabilities you have on a 'normal' linux system, and thus can do all kind of evil stuff 1227206842 M * Bertl jsullivan: while in Linux-VServer, you get a minimal set of capabilities, which is considered 'secure' 1227206857 M * blathijs jsullivan: What is this inodenr crack you keep referring? 1227206871 M * blathijs Is that a known and unfixable weakness in NFS, or what? 1227206878 M * Bertl jsullivan: of course, you could secure a Xen domU as well, even by putting a Linux-VServer installation in each domU :) 1227206918 M * Bertl blathijs: I think that is a spooky (but fictional) idea jsullivan is trying to handle/address/prevent 1227206934 M * Bertl daniel_hozac: any last words on vs2.3.0.36? 1227206951 M * daniel_hozac godspeed 1227206954 M * daniel_hozac :) 1227207012 M * jsullivan sorry, had stepped away for a second. 1227207024 M * jsullivan the inodenr problem is something ard mentioned. 
1227207063 M * jsullivan Apparently one can hack NFS packets to change the reference inodenr so one can potentially write to portions of the file system outside of the export. 1227207086 Q * dna Quit: Verlassend 1227207106 M * Bertl jsullivan: well, that for example, is something you would be missing the capabilities inside a Linux-VServer guest 1227207123 M * Bertl jsullivan: but I can imagine that you can do that from a Xen domU :) 1227207155 M * Bertl okay, 2.6.27.6-vs2.3.0.36 is up (usual place) 1227207244 M * blathijs Uh, this would be NFS packet modification from the client right? So capabilities on the server aren't really relevant? 1227207253 M * jsullivan That's interesting. 1227207271 M * Bertl blathijs: yes, and the client would be a Xen domU or a Linux-VServer guest 1227207274 M * jsullivan So assuming for the sake of discussion I have not limited what users can install on a vserver guest . . 1227207284 J * ghislainocfs2 ~Ghislain@adsl2.aqueos.com 1227207309 M * jsullivan they could not use a packet altering program to change the packet stream because a guest does not have sufficient privileges? 1227207334 M * blathijs Ah, I was thinking we were discussing the server here... 1227207336 M * blathijs Wouldn't they still just be able to use a usermode NFS client? 1227207369 M * jsullivan Ah, we are hoping to use vserver for both :) 1227207384 M * blathijs ah, right :-) 1227207417 M * Bertl jsullivan: yes, correct, by default you have no access to the networking below layer 3 (IP) 1227207435 M * jsullivan I hadn't considered that. 1227207438 M * jsullivan Thank you. 1227207529 M * jsullivan I was comparing security in terms of possible access to the host . . . 1227207541 M * jsullivan I was not thinking about what a guest could do. 1227207550 M * jsullivan or could not do. 1227207638 M * jsullivan You've all been most helpful in dispelling my ignorance and enlightening my thinking :-) 1227207651 M * Bertl you're welcome! 1227207672 M * Bertl feel free to hang around and ask questions, till you know what suits your needs best 1227207697 M * jsullivan I had better hop off and let someone else monopolize the list for a while :) 1227207697 M * jsullivan Bye. 1227207725 M * blathijs :-) 1227207743 M * jsullivan Well . . . .while I've got your attention . . . 1227207795 M * jsullivan We are toying with the idea of combining vserver . . . 1227207795 M * jsullivan with either X2go or noMachine to produce . . 1227207795 M * jsullivan a hybrid virtual desktop within a virtual machine. 1227207821 M * jsullivan Has anyone investigated automating the spawning of vserver clones in conjunction with a virtual desktop launch? 1227207860 M * jsullivan In the case of x2go, this would give us a fully open source alternative to . . . 1227207868 M * jsullivan products from Citrix and Qumranet. 1227207941 M * Bertl sounds good :) 1227207943 M * cehteh mhm 1227207959 M * cehteh anyone bored? want to hack on a distributed filesystem? 1227208072 Q * er Quit: er 1227208089 M * jsullivan Bertl, do you know of anyone who has done this already and could save us the research time? 1227208098 M * jsullivan Otherwise, we'll plug ahead on our own and publish our results. 
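Bertl's point about guests having no access below layer 3 comes down to the guest's capability set; a rough illustration of where that boundary is configured with util-vserver (the guest name is a placeholder, and the default set simply omits the network capabilities):

    # per-guest extra capabilities live here; with nothing like NET_RAW or
    # NET_ADMIN listed, processes in the guest cannot craft raw packets or
    # reconfigure interfaces, which is what blocks the packet-altering attack
    cat /etc/vservers/<guest>/bcapabilities     # typically absent or empty

    # deliberately adding a capability would re-open exactly that hole
    echo NET_RAW >> /etc/vservers/<guest>/bcapabilities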
1227208127 J * larsivi ~larsivi@9.80-202-30.nextgentel.com 1227208148 M * Bertl I know about folks running Xvnc inside Linux-VServer guests, and I know that Lycos is using NFS to serve their Linux-VServer guests 1227208185 M * Bertl so the various parts can be considered tested, but still you will have to figure out the details for your setup 1227208436 M * jsullivan OK - we'll slog through the details. I'm more concerned about automating the process during user login. 1227208436 M * jsullivan Take care. 1227209299 J * bliz42 ~ksmith@c-98-193-150-250.hsd1.tn.comcast.net 1227209309 J * er ~sapanbhat@pool-71-188-83-144.cmdnnj.east.verizon.net 1227211084 Q * jsullivan Quit: using sirc version 2.211+KSIRC/1.3.12 1227211557 Q * er Quit: er 1227212288 J * ntrs__ ~ntrs@77.29.15.194 1227212304 Q * chI6iT41 Ping timeout: 480 seconds 1227212740 Q * ntrs_ Ping timeout: 480 seconds 1227213134 A * Hawq back 1227213468 J * chI6iT41 ~chigital@tmo-096-243.customers.d1-online.com 1227215093 Q * ghislainocfs2 Quit: Leaving. 1227215399 J * derjohn_mob ~aj@80.69.42.51 1227215644 M * Hawq Bertl: I've updated to 2.6.27.6 and newest patch + applied vspid revert. /proc is now visible but I can't start any guest 1227215876 M * Bertl okay, what error do you get? 1227216015 M * Hawq not error really... just message "Usage: init 0123456SsQqAaBbCcUu" 1227216039 M * daniel_hozac that's expected. 1227216048 M * daniel_hozac you can't use the plain initstyle without the pid virtualization. 1227216057 M * daniel_hozac try sysv. 1227216090 M * Bertl yep, should fix this problem 1227216227 J * yarihm ~yarihm@77-56-182-18.dclient.hispeed.ch 1227216246 J * openblast ~quassel@static.226.173.47.78.clients.your-server.de 1227216260 Q * bonbons Quit: Leaving 1227216303 J * dna ~dna@52-200-103-86.dynamic.dsl.tng.de 1227216411 M * Hawq okay, guest has started. checking if console switching now works 1227216464 M * Hawq bingo. it works as it should 1227216480 M * Bertl okay, so now we have to narrow that down somewhat 1227216532 M * Bertl so could you do me a diff between your version and the mainline Linux-VServer patch? 1227216550 M * Bertl I'll try to break that down into smaller patches, and we try them one by one 1227217131 M * Hawq its taking a while. diff was huge again. 1227217168 M * Bertl try to run 'make mrproper' first (on a copy of your kernel tree) 1227217254 M * Hawq I'm doing copy of my tree now to have it for quick recompiling 1227217279 M * daniel_hozac cp -al? 1227217305 M * Hawq cp -R actually 1227217317 M * daniel_hozac -al would be infinitely faster ;) 1227217325 M * Bertl that was more a hint than a question :) 1227217331 M * Hawq its done, make mrproper run on copy, now diffing 1227217365 Q * dna Quit: Verlassend 1227217437 M * Hawq http://hawk.furud.net/files/vs.diff.bz2 1227218334 M * Bertl hmm, that is now between what kernels? 1227218365 M * Bertl kernel org, 2.6.27 and your version or what? 1227218391 M * Hawq 2.6.27.6 vanilla and my version 1227218404 M * Hawq but let me check 1227218423 M * Hawq I've more than 10 kernel trees now, maybe I got something wrong 1227218426 M * Bertl okay, could you apply the Linux-VServer patch your testing was based on, and get me a diff between that and your version? 1227218553 M * Hawq blah, yeah, I compared vanilla .5 while my vs is .6. just a moment, I'll crate proper patch. 1227218977 M * Hawq http://hawk.furud.net/files/vs.diff 1227219009 M * Hawq its diff between vanilla + patch-2.6.27.6-vs2.3.0.35.10.diff and my tree 1227219040 M * Bertl k, tx! 
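The initstyle fix daniel_hozac suggests is a one-line change in the guest's util-vserver configuration, assuming the usual configuration layout (the guest name is a placeholder):

    # 'plain' needs the pid virtualization that was reverted here; use 'sysv'
    mkdir -p /etc/vservers/<guest>/apps/init
    echo sysv > /etc/vservers/<guest>/apps/init/style
    vserver <guest> stop
    vserver <guest> start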
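The diff Bertl asks for, sketched with daniel_hozac's cp -al hint; tree names are placeholders and the base patch is assumed to sit one directory up. Hard-linked copies are near-instant, and patch writes new files instead of editing the linked ones in place, so the pristine trees survive:

    # reference tree: vanilla 2.6.27.6 plus the base Linux-VServer patch
    cp -al linux-2.6.27.6 linux-2.6.27.6-vs2.3.0.35.10
    ( cd linux-2.6.27.6-vs2.3.0.35.10 && patch -p1 < ../patch-2.6.27.6-vs2.3.0.35.10.diff )

    # diff a cleaned copy of the modified tree against it; mrproper strips
    # the build output that made the first diff huge
    cp -al my-modified-tree my-modified-tree.clean
    make -C my-modified-tree.clean mrproper
    diff -urN linux-2.6.27.6-vs2.3.0.35.10 my-modified-tree.clean > vs.diff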
1227219178 Q * ntrs__ Ping timeout: 480 seconds 1227219328 Q * FloodServ charon.oftc.net services.oftc.net 1227219762 J * FloodServ services@services.oftc.net 1227220942 Q * cga Quit: WeeChat 0.2.6 1227221077 J * ktwilight__ ~ktwilight@69.73-66-87.adsl-dyn.isp.belgacom.be 1227221364 Q * ktwilight_ Ping timeout: 480 seconds 1227221657 M * Hawq anything more to test today Bertl? if not I'll be going to sleep. 1227221677 M * Bertl okay, I'll prepare something for tomorrow 1227221687 M * Bertl I see three areas we can break this down to 1227221701 M * Bertl so I'll prepare three patches and we test with that tomorrow 1227221711 M * Hawq great 1227221877 M * Hawq good night then, I'm off to bed. 1227222596 Q * derjohn_mob Ping timeout: 480 seconds 1227222910 Q * chI6iT41 Quit: bin weg 1227223661 J * derjohn_mob ~aj@80.69.42.51 1227224007 Q * Mojo1978 Read error: Connection reset by peer 1227224196 Q * derjohn_mob Ping timeout: 480 seconds 1227224906 Q * yarihm Quit: Leaving