1520212179 J * born_ ~born@xdsl-78-35-193-176.netcologne.de
1520212622 Q * born Ping timeout: 480 seconds
1520217248 M * Bertl_oO off to bed now ... have a good one everyone!
1520217249 N * Bertl_oO Bertl_zZ
1520232557 J * daniel_hozac_ ~daniel@217-211-16-149-no42.tbcn.telia.com
1520232581 Q * daniel_hozac Ping timeout: 480 seconds
1520233102 J * romster ~romster@158.140.215.184
1520233960 Q * romster Ping timeout: 480 seconds
1520234984 J * romster ~romster@158.140.215.184
1520235470 Q * romster Ping timeout: 480 seconds
1520236051 J * romster ~romster@158.140.215.184
1520236540 Q * romster Ping timeout: 480 seconds
1520237419 J * nikolay ~nikolay@149.235.255.3
1520237466 J * romster ~romster@158.140.215.184
1520237980 Q * romster Ping timeout: 480 seconds
1520239894 N * Bertl_zZ Bertl
1520239898 M * Bertl morning folks!
1520240123 M * Le_Coyote Mornin Bertl
1520240144 M * Le_Coyote Question. I've been reading about these testme/testfs tools
1520240156 M * Le_Coyote The website only points to scripts that are from 2006. Are they still valid?
1520240218 M * Le_Coyote I've never used them before
1520240293 M * Bertl yes, they are still valid, although some modifications have been done e.g. for debian
1520240322 M * Bertl (where tools changed and broke the script IIRC)
1520240651 M * Le_Coyote I see
1520240687 M * Le_Coyote But they're kind of diagnostic tools, right?
1520240709 M * Bertl they do a bunch of tests to verify that everything works as expected
1520240720 M * Bertl tests can be adjusted via commandline
1520240744 M * Le_Coyote Ok so it's still a good idea to run them when you build a new kernel then
1520240780 M * Bertl sure, can help diagnose issues if you use them properly
1520240819 M * Le_Coyote Building 4.9.86 at the moment, I think I might use that as my next "production" kernel
1520240881 M * Bertl Greg will be happy to hear :)
1520240907 M * Le_Coyote Who's Greg?
1520240920 M * Bertl the guy maintaining the 4.9 branch :)
1520240925 M * Le_Coyote Ah 'k :)
1520240951 M * Le_Coyote Well it's got all the spectre/meltdown mitigations AND a vserver patch, so what else :)
1520241030 M * Bertl 4.4 too, no?
1520241214 M * Guest212 Gothmog: You are using systemd on the host. Either stop using systemd or try this patch: https://github.com/linux-vserver/util-vserver/pull/26
1520241229 M * Le_Coyote Nope, last I tried was 4.4.111, and I think it was missing one of the two Spectre variant mitigations
1520241331 M * Bertl well, we are at 4.4.120 now ...
1520241442 M * Le_Coyote Yeah, but if 4.9's working, that's good for me too
1520241452 M * Bertl fair enough!
1520241482 M * Le_Coyote Aha, booting new kernel.
1520241524 M * Ghislain 4.9.86 ... 85 was a few days ago ...
1520241560 M * Bertl yeah, 4.9.85 is soo old fashioned! :)
1520241605 M * Le_Coyote Bertl: is there a man page or some documentation on how to use testme.sh? Apart from just running it without any arguments?
1520241616 M * Le_Coyote It succeeds, which I guess is good :) but is there anything else beyond that?
1520241632 M * Bertl no man page, but there is the source
1520241635 M * Ghislain lol, well you know it only patches flaws in tg3, bnx2 and e1000. so that is some drivers that nobody uses
1520241675 M * Ghislain it's only 100% of the network cards i use anyway ^^
1520241675 M * Bertl let me check here ... nope, not affected :)
1520241685 M * Le_Coyote Gotcha
1520241696 M * Ghislain let me guess, you use 10mbps 3com cards :p
1520241708 M * Bertl Le_Coyote: for testme.sh you want to use -L
1520241739 M * Bertl for testfs.sh you want to use -xyz and check according to your filesystems
1520241746 M * Le_Coyote 'k
1520241750 M * Le_Coyote thanks
1520241780 M * Ghislain well only ext4 is supported so no need for others
1520241822 M * Bertl hmm?
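The flags Bertl mentions above can be sketched as follows. The flag names (-L, -xyz) come straight from the conversation; everything else (script location, the need to run as root) is an assumption, and since there is no man page the exact semantics have to be checked against the script source.

```shell
# Sketch only: run the Linux-VServer test scripts after booting a new kernel.
# The scripts are downloaded from the Linux-VServer site; paths are illustrative.

# testme.sh with -L, as suggested above (check the source for what -L does
# in your version of the script):
sh ./testme.sh -L

# testfs.sh with -xyz, then compare the reported results against the
# filesystems you actually use on the host:
sh ./testfs.sh -xyz
```

Both need to run on a vserver-patched kernel to report anything meaningful.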
1520242029 M * Ghislain xfs and others do not work anymore unless i am mistaken; vserver support is ext4 only now, btrfs is not supported and xfs no longer works with the tools
1520242046 M * Ghislain as they changed some flags if i recall
1520242100 M * Le_Coyote 'wish I could do it all over and use ZFS
1520242205 M * Guy- I use zfs with vserver extensively
1520242245 M * Guy- the fs where all the guest roots are is ext4 in a zvol, because that's about the only way to get hashification working (OK, I suppose I could also use jfs there)
1520242260 M * Guy- but /var and /tmp of the guests is on zfs
1520242269 M * Guy- also /usr/share is zfs, with dedup enabled
1520242664 N * Guest212 AlexanderS
1520242696 M * Bertl it really depends on what features you are looking for (filesystem wise)
1520242731 M * Bertl it is correct that ext3 is the one with the most complete Linux-VServer specific support
1520242788 M * Le_Coyote Guy-: is ZFS installed on the bare-metal disk in your setup? Do you use mirroring at all? Just curious
1520242824 M * Guy- Le_Coyote: yes I use mirroring
1520242849 M * Guy- Le_Coyote: in some cases, zfs is on bare metal; in others, the disks are on raid controllers that don't support jbod, so I have a raid0 "array" on each individual disk and use zfs on that
1520242894 M * Bertl interesting ... what controller does not support JBOD?
1520242901 M * Le_Coyote I was told in the case of mirroring, it's best to have ZFS on the metal. I have LVM on top of a soft RAID1 (mdadm), dunno if I would benefit from any ZFS feature in that setup
1520242914 M * Guy- Bertl: for example, HP p4xx
1520242961 M * Guy- Le_Coyote: well, you could benefit from compression, deduplication, zfs send/receive, convenient administration, cheap snapshotting etc.
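The ZFS features Guy- lists map to short standard zfsonlinux commands. The pool and dataset names below are illustrative, not from anyone's actual setup:

```shell
# Mirrored pool on bare metal (hypothetical devices):
zpool create tank mirror /dev/sda /dev/sdb

# Transparent compression and deduplication are per-dataset properties
# (dedup needs a lot of RAM for its table -- use it selectively):
zfs set compression=lz4 tank/guests
zfs set dedup=on tank/usr-share

# Cheap snapshot, and send/receive for replication:
zfs snapshot tank/guests@before-upgrade
zfs send tank/guests@before-upgrade | ssh backuphost zfs recv backup/guests
```

Snapshots are copy-on-write, so taking one is nearly free; the cost accrues as the live data diverges from it.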
1520243005 M * Guy- Le_Coyote: what you could not benefit from is for example self healing, or data aware resilvering
1520243091 M * Le_Coyote Guy-: ah yeah, I obviously don't know about all the features :)
1520243127 M * Bertl how good are the ZFS recovery tools when something goes wrong?
1520243138 M * Le_Coyote Compression and snapshots were things that sounded nice
1520243194 M * Guy- Bertl: it's hard to say -- it's difficult for things to go very wrong because zfs is always consistent on disk, unless you have an unsafe write cache
1520243226 M * Bertl well, let's say a disk gets (physically) disconnected (for whatever reason)
1520243245 M * Guy- if there is sufficient redundancy in the pool, then life just goes on
1520243252 M * Guy- if there isn't, you're screwed
1520243277 M * Bertl doesn't sound great
1520243288 M * Guy- well, what are you going to do if half the data is missing?
1520243333 M * Bertl on a raid setup with ext3, that's not a big deal, once the disks are back online, recovery is only a matter of time
1520243349 M * Guy- ah, if you reconnect the disk, zfs can resume too
1520243369 M * Guy- I thought you meant someone took the disk for good
1520243399 M * Guy- the moment the pool doesn't have enough devices to continue, no further i/o is permitted to it
1520243401 M * Bertl nah, okay, so it can figure out what's recent and what not and bring the filesystem/redundancy back to normal?
1520243414 M * Ghislain my users need quota on disk so i can only have it on ext4
1520243475 M * Guy- Bertl: it never overwrites data or metadata in place, so at worst you have to discard the last transaction or two
1520243512 M * Guy- Bertl: since it's RAID and LVM and the fs rolled into one, it knows exactly which blocks are in use and on a mirror resync only syncs those
1520243538 M * Bertl ever had a case where it couldn't mount/fix a filesystem?
1520243543 M * Guy- and it uses variable stripe sizes for its raid5/raid6-like thing so every write is always a full stripe write, eliminating the "write hole"
1520243592 M * Guy- Bertl: yes -- in one case it was due to an unsafe write cache combined with an unclean shutdown; in the other I accidentally wrote to the same pool from two separate computers concurrently
1520243623 M * Bertl any chance to recover data in such a case?
1520243659 M * Guy- if you're lucky, and there is a consistent "uberblock", then yes (you just pass a special command line switch to 'zpool import')
1520243668 M * Guy- if not, then only with extreme difficulty
1520243695 M * Bertl what patches do you use and how easy/hard is it to integrate them with e.g. Linux-VServer?
1520243723 M * Guy- zfsonlinux compiles entirely out of tree (although there is support for integrating it into the kernel source if you want)
1520243736 M * Guy- I do both; no interference with linux-vserver
1520243751 M * Guy- like ships in the night (and they have radar :)
1520243795 M * Guy- some stuff is inconvenient with vserver; for example, if you create a new zfs instance (a new filesystem in an existing pool) after starting a guest, there is no way to mount that fs in the guest without rebooting the guest
1520243797 M * Bertl maybe I should give it a try ... what Linux-VServer features are missing on ZFS?
1520243817 M * Guy- iunlink is missing for hashification
1520243835 M * Guy- cow link breaking would likely also not work
1520243885 M * Guy- the biggest inconvenience is the namespace issue mentioned above -- it should be possible to add new filesystems to existing mount namespaces, but afaict it isn't
1520243929 M * Bertl did you try creating the instance inside the admin namespace of that guest?
1520243946 M * Guy- what's an admin namespace?
1520243998 M * Guy- you mean mount namespace?
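The "special command line switch" Guy- alludes to is most likely zpool import's rewind recovery mode; this is an assumption on my part, since the switch is not named in the conversation:

```shell
# Rewind recovery sketch: -F discards the last few transactions to reach
# a consistent uberblock. -n is a dry run that reports whether the rewind
# would succeed without actually importing the pool.
zpool import -nF tank
zpool import -F tank
```

If no consistent uberblock can be found this way, recovery falls into the "extreme difficulty" territory described above.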
1520244075 M * Bertl when util-vserver creates a guest, there are two mount namespaces created
1520244095 M * Bertl one which is considered the admin namespace for this guest (where all the mounts and stuff are)
1520244125 M * Bertl and the other one which is the guest's namespace (where everything outside the guest is removed)
1520244148 M * Guy- no -- how do I find out the ID of the admin namespace?
1520244163 M * Bertl they are connected so mounts from the admin space will propagate into the guest
1520244195 M * Guy- (even if this works, it's only a workaround for a case where the new fs is only supposed to be mounted inside that one guest -- my typical use case is for it to be mounted on the host and in more than one guest)
1520244240 M * Bertl that shouldn't be a problem though
1520244249 M * Bertl you just have to mount it more than once
1520244338 M * Guy- but I can't create the same fs in the admin namespace of several guests, can I
1520244357 M * Guy- I can be in exactly one namespace when I issue the 'zfs create' command
1520244359 M * Bertl shouldn't be a problem
1520244391 M * Bertl well, it might be a problem with zfs if it doesn't support a 'mount' separated from a 'create'
1520244392 M * Guy- (also, speaking of which -- zfs instances can't be removed unless all guests are stopped that saw them when they were started)
1520244432 M * Bertl similar problem, it should be 'unmounted' by the guests
1520244443 M * Guy- it isn't even mounted in the guests
1520244455 M * Bertl but in the admin namespace
1520244461 M * Guy- zfs does support a mount separate from create, but my impression was that the "block device" (which is not a block device, it's just an object in the zfs pool) that you'd mount needs to have existed before the guest is started
1520244466 M * Guy- OK, let's try this
1520244475 M * Guy- how do I find the admin namespace of a guest?
1520244514 M * Bertl check the vserver enter command
1520244527 M * Guy- I use vserver enter all the time
1520244532 M * Guy- that puts me in the admin namespace?
1520244614 M * Bertl sorry, vnamespace
1520244631 M * Bertl it has an -i/--index option
1520244664 M * Guy- so I see -- what do I put there?
1520244670 M * Bertl 0 or 1
1520244731 M * Guy- so e.g. "vnamespace -e 445 -i 0 zsh" would land me in the admin namespace of ctx 445?
1520244753 M * Bertl if you do
1520244766 M * Bertl vnamespace --enter 42 --index 0 -- /bin/bash
1520244778 M * Bertl (or leave out the --index 0) you'll end up in the admin namespace
1520244781 M * Bertl if you do
1520244790 M * Bertl vnamespace --enter 42 --index 1 -- /bin/bash
1520244797 M * Bertl you end up in the guest namespace
1520244806 M * Bertl (check with e.g. df to see the difference)
1520244967 M * Guy- OK, this is mighty confusing...
1520244968 M * Guy- tank/shared/var/samba 4.4G 60M 4.4G 2% /var/lib/vservers/samba/var
1520244972 M * Guy- I have this mountpoint
1520244982 M * Guy- I attempted to create a new zfs instance called "test" under it
1520245012 M * Guy- it ended up getting mounted on /shared/var/samba/test, but /shared/var/samba in this namespace is not a mountpoint, it's just a subdirectory of the root fs
1520245042 M * Guy- I suppose I could zfs create the fs without mounting it and then specify a custom mountpoint
1520245056 M * Guy- but then I'd need to remember to change it back to the default before rebooting
1520245088 M * Guy- (the mountpoint is a property of the fs; you don't typically have entries for zfs in fstab)
1520245499 M * Bertl well, you could do that with the pre/post scripts
1520245511 M * Bertl if a conventional fstab mount doesn't work for zfs
1520245646 M * Bertl setting the mountpoint to 'legacy' should provide fstab-compatible behaviour (according to google :)
1520249680 M * Guy- yes, but that's icky :)
1520249703 M * Guy- the point of the exercise would be to get a new fs into a guest without rebooting the guest, with minimal hassle
1520249716 M * Guy- and for it to be mounted regularly the next time the host reboots
1520249745 M * Guy- the vnamespace workaround seems quite horrible, even if it works
1520249904 M * Bertl why? when the fstab is used, it should be the same as with any other filesystem
1520249928 M * Bertl i.e. you add it to the guest's fstab, then you enter the admin namespace and do the mount
1520249974 M * Bertl you can even create a script for that which does all of it in a single command
1520250047 M * Guy- I'd only add it to the guest's fstab in the form of a bind mount, if that (in many cases it'd be covered by an existing rbind mount)
1520250077 M * Guy- almost none of my guests have private filesystems that they alone mount; in most cases they're bind mounts of host filesystems
1520250081 M * Bertl which is perfectly fine
1520250134 M * Guy- the normal workflow is "zfs create pool/path/to/fs", which causes it to inherit its mountpoint from its parent, and also causes it to be mounted immediately
1520250144 M * Guy- say, I have two guests that will need to see this new fs
1520250155 M * Guy- (and the host)
1520250175 M * Guy- currently, I add the new bind mount (if no rbind covers it) to the guest fstab and reboot the guest
1520250177 M * Bertl not sure why the host, but normally you would run your script twice
1520250193 M * Bertl in the host case, you also want to mount the filesystem on the host
1520250217 M * Guy- but I'd need to set the mountpoint property to legacy to get the fs mounted in the proper place inside the guest, no?
1520250235 M * Bertl probably not
1520250246 M * Bertl but you need to keep creation and mounting apart
1520250247 M * AlexanderS Guy-: You can do the bind mount from the admin namespace.
1520250283 M * Bertl yes, that is where it would happen with any other filesystem as well
1520250291 M * Guy- AlexanderS: also of an fs that didn't exist when the namespace was created?
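The create/mount separation discussed above can be sketched with standard zfs commands; the dataset name is borrowed from Guy-'s example, the target path is illustrative:

```shell
# Create the dataset WITHOUT letting zfs auto-mount it anywhere:
zfs create -o mountpoint=legacy tank/shared/var/samba/test

# With mountpoint=legacy the dataset behaves like a conventional filesystem:
# it can go in fstab and is mounted with plain mount(8), in whatever
# namespace you happen to be in at the time.
mount -t zfs tank/shared/var/samba/test /var/lib/vservers/samba/var/test
```

This is exactly the "keep creation and mounting apart" point: the default (inherited) mountpoint triggers an immediate automount in the namespace where `zfs create` runs, which is what gets in the way here.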
1520250298 Q * Aiken Remote host closed the connection
1520250308 M * Bertl that's why you mount it in the admin namespace
1520250337 M * Bertl the procedure is rather straightforward ...
1520250343 M * Bertl 1) enter admin namespace
1520250359 M * Bertl 2) mount your 'new' filesystem under /some/path
1520250378 M * Guy- I can't do that without setting the mountpoint to legacy
1520250380 M * Bertl 3) --bind mount a directory from your filesystem to the guest
1520250392 M * Bertl 4) exit admin namespace
1520250399 M * Guy- OK, I'll try this in a minute
1520250421 M * Bertl you can even unmount the filesystem from /some/path before you exit
1520250435 M * Bertl (will be delayed because of the bind mount)
1520250582 M * Guy- # mount -t zfs tank/shared/var/samba/test /var/lib/vservers/samba/var/test -o zfsutil
1520250585 M * Guy- filesystem 'tank/shared/var/samba/test' is already mounted
1520250587 M * Guy- this is in the admin namespace
1520250602 M * Guy- I can't mount it under the proper guest path again, because it's mounted on the host
1520250625 M * Guy- but the host mountpoint is not visible as a mountpoint in the admin namespace, so I can't bind mount either
1520250643 M * Bertl so zfs doesn't support a second mount?
1520250656 M * Bertl (in a different namespace that is)
1520250665 M * Guy- apparently not
1520250695 M * Bertl well, then it doesn't play nicely with the Linux VFS layer
1520250719 M * Bertl which is sad but a ZFS problem
1520253863 M * Bertl off for now ... bbl
1520253868 N * Bertl Bertl_oO
1520255635 M * Ghislain that's why the jails in freebsd are cool, they are first-class citizens so ZFS is aware of them and they are aware of zfs. Too bad linux doesn't have the same integration for virtualisation :)
1520256037 M * Ghislain we just lack the bertlFS ;p
1520256983 M * Ghislain i'm starting to get encryption requests, do any of you use encrypted FS on the guests?
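Bertl's four-step procedure can be sketched as a single script. The ctx id (42), dataset and paths are illustrative; the vnamespace invocation is the one shown earlier in the conversation, and the dataset is assumed to have mountpoint=legacy so it does not automount elsewhere:

```shell
#!/bin/sh
# Sketch: mount a freshly created zfs dataset into a running guest
# without rebooting it, via the guest's admin namespace (index 0).
vnamespace --enter 42 --index 0 -- /bin/sh -e <<'EOF'
# 2) mount the new filesystem under a staging path
mount -t zfs tank/newfs /some/path
# 3) bind-mount a directory from it into the guest's tree; the mount
#    propagates from the admin namespace into the guest
mount --bind /some/path /var/lib/vservers/guest/mnt/newfs
# optionally unmount the staging path; the bind mount keeps the
# filesystem itself busy, so the unmount of the superblock is delayed
umount /some/path
EOF
# 4) the admin namespace is left automatically when the shell exits
```

As the transcript then shows, step 2 fails if the dataset is already mounted on the host, since zfsonlinux (at the time) refused a second mount of the same dataset in a different namespace.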
1520257908 M * Guy- Ghislain: no, but don't use zfs encryption for the time being, it's not production ready
1520257987 M * Ghislain :) GDPR; when the app is not ok they want the disk to be encrypted, but i think they lack the technical knowledge to know it only protects when the server is down
1520258479 M * Guy- still, better than nothing, right? it's also something to wave under an auditor's nose
1520258488 M * Guy- but if that's your use case, I'd go with LUKS
1520258528 M * Guy- performance impact is negligible (with modern CPUs), and it's fully transparent to the upper layers of the storage stack
1520258583 M * Guy- (of course, it's not an "encrypted fs", it's an encrypted block device)
1520258627 M * Ghislain yes, the issue is where do you keep the secrets
1520258645 M * Ghislain it means any start/stop or reboot would need manual entering of the secret
1520258650 M * Ghislain for each guest
1520258685 M * Ghislain if you store them on the host you can automate, but that defeats the security
1520258829 M * Guy- you can store the secrets on a different computer and have them retrieved automatically on boot; this adds a thin layer of additional security (it's not sufficient to steal the hard drives; you also need the host's IP)
1520258841 M * Guy- I have a box set up like this
1520258909 M * Guy- I have scripts to retrieve the secret using either SSH or split-horizon DNS (in the latter case, I use seccure to encrypt the LUKS key with a passphrase I store on the local disk, because the DNS response would otherwise be in plaintext)
1520258964 M * Guy- it's a rented hosted box, and the idea with the encryption is to protect against snooping by people who may obtain physical access to the hardware after we're done using it
1520261008 M * Ghislain that's cool, i wanted to play with a "privacy" box so isps stop their spying, by having a host with vpn and such that would be encrypted and need console access to boot, where i enter the password
1520262863 M * Guy- even then, you could start dropbear in initramfs so you could enter the password from afar
1520263088 M * AlexanderS Debian even has scripts for that usecase by default.
1520263273 M * Guy- yes
1520263408 M * Ghislain oh really? damn
1520263676 M * Guy- dropbear-initramfs - lightweight SSH2 server and client - initramfs integration
1520263707 M * Guy- it needs a bit of manual labour to set up, though
1520263797 M * Ghislain seems easy enough
1520263837 M * Ghislain just do not forget to update the key in initramfs when i rotate my ssh key on my client
1520263842 M * Ghislain :)
1520263854 M * Ghislain now i have to try that, damn it
1520263920 M * Guy- how often do you rotate your ssh key, and how did you arrive at that rotation frequency?
1520263983 M * Ghislain every couple of years
1520264019 M * Ghislain it's only pure inside paranoia; i reinstall my computer from scratch and rotate my key when i do, to start fresh
1520264053 M * Guy- I envy you all that free time :)
1520264058 M * Ghislain i delete my old keys when i am certain i did not miss one (but puppet is here to make sure i don't)
1520264082 M * Ghislain well there are enough rainy weekends in the year for that
1520264226 M * Guy- no family, huh?
1520264271 M * Guy- I don't want to complain, but having kids has certainly removed a large chunk of my free time :)
1520265199 M * Ghislain yes but you can rent the kids for great benefit when they can work in the mines
1520265207 M * Ghislain especially uranium ones
1520265356 M * Guy- I'll look into that, thanks
1520265553 M * Ghislain eheh, i am all for helping
1520268144 M * CcxWrk Guy-: Top of the line ROT26 encryption - twice as safe as ROT13.
1520268253 M * CcxWrk Ideally you don't let unencrypted data even touch the server. But those managing the servers sadly don't tend to have much say about that.
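The dropbear-in-initramfs setup AlexanderS and Guy- discuss looks roughly like this on Debian; the package and file names match Debian of that era, but the key filename is illustrative and releases differ, so verify against your distribution's docs:

```shell
# Install the initramfs-integrated dropbear SSH server (Debian package):
apt-get install dropbear-initramfs

# Authorize your client key for the pre-boot environment
# (remember to redo this when you rotate the client key):
cat ~/.ssh/id_ed25519.pub >> /etc/dropbear-initramfs/authorized_keys

# Rebuild the initramfs so dropbear and the key are included:
update-initramfs -u

# After the next reboot, ssh into the initramfs and unlock the root
# LUKS volume, e.g. with Debian's cryptroot-unlock helper:
#   ssh root@host
#   cryptroot-unlock
```

This covers the "enter the password from afar" case: the box stays encrypted at rest but can be unlocked over SSH instead of requiring console access.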
1520268803 Q * nikolay Quit: Leaving
1520281242 J * Aiken ~Aiken@2001:44b8:2168:1000:b26e:bfff:fe2a:b951
1520284444 M * Gothmog AlexanderS: thanks, I will try that
1520284599 Q * Ghislain Quit: Leaving.
1520285416 Q * any0n Remote host closed the connection
1520285449 J * any0n ~k@4G4AABIOE.tor-irc.dnsbl.oftc.net
1520286671 Q * tokkee Remote host closed the connection
1520289884 J * sannes1 ~ace@2a02:fe0:c130:1d90:243a:5d81:2dad:3f61
1520289884 Q * sannes Read error: Connection reset by peer