1502419587 J * fstd_ ~fstd@x4db6960f.dyn.telefonica.de
1502420042 Q * fstd Ping timeout: 480 seconds
1502420042 N * fstd_ fstd
1502424074 M * Bertl_oO off to bed now ... have a good one everyone!
1502424076 N * Bertl_oO Bertl_zZ
1502437711 Q * transacid Ping timeout: 480 seconds
1502437953 J * transacid ~transacid@transacid.de
1502440312 J * transaci1 ~transacid@transacid.de
1502440348 Q * transacid Read error: No route to host
1502441793 Q * transaci1 Ping timeout: 480 seconds
1502446431 J * transacid ~transacid@transacid.de
1502447847 N * Bertl_zZ Bertl
1502447849 M * Bertl morning folks!
1502449546 Q * transacid Quit: reboot
1502451219 J * transacid ~transacid@transacid.de
1502463839 J * Gremble ~Gremble@cpc87179-aztw31-2-0-cust6.18-1.cable.virginm.net
1502464934 Q * Gremble Quit: Leaving
1502469256 J * click_ click@ice.vcon.no
1502469360 J * menomc ~amery@kwa.jpi.io
1502469363 J * funnel_ ~funnel@81.4.123.134
1502469375 J * kshannon_ ~kris@server.kris.shannon.id.au
1502469379 J * s1aden ~paul@starsky.19inch.net
1502469379 J * l0kit_ ~1oxT@ns3096276.ip-94-23-54.eu
1502469388 J * karasz_ ~karasz@kwa.jpi.io
1502469393 J * tokkee_ tokkee@osprey.tokkee.org
1502469451 Q * PowerKe magnet.oftc.net liquid.oftc.net
1502469451 Q * DelTree magnet.oftc.net liquid.oftc.net
1502469451 Q * click magnet.oftc.net liquid.oftc.net
1502469451 Q * fback magnet.oftc.net liquid.oftc.net
1502469451 Q * sladen magnet.oftc.net liquid.oftc.net
1502469451 Q * mnemoc magnet.oftc.net liquid.oftc.net
1502469451 Q * l0kit magnet.oftc.net liquid.oftc.net
1502469451 Q * mcp magnet.oftc.net liquid.oftc.net
1502469451 Q * kshannon magnet.oftc.net liquid.oftc.net
1502469451 Q * geb magnet.oftc.net liquid.oftc.net
1502469451 Q * funnel magnet.oftc.net liquid.oftc.net
1502469451 Q * tokkee magnet.oftc.net liquid.oftc.net
1502469451 Q * karasz magnet.oftc.net liquid.oftc.net
1502469455 N * funnel_ funnel
1502469510 J * mcp ~mcp@wolk-project.de
1502469743 J * PowerKe_ ~tom@d54C69995.access.telenet.be
1502469743 J * fback fback@red.fback.net
1502469743 J * geb ~geb@mars.gebura.eu.org
1502469743 J * DelTree ~deplagne@2a00:c70:1:213:246:56:18:2
1502471819 Q * DoberMann Quit: and presto, disk swap ;p
1502474606 Q * Defaultti Quit: WeeChat .
1502474676 J * Defaultti defaultti@lakka.kapsi.fi
1502475788 Q * dustinm` Quit: Leaving
1502476087 J * dustinm` ~dustinm`@68.ip-149-56-14.net
1502478838 J * DoberMann ~james@2a01:e35:8b44:84c0::2
1502479566 Q * transacid Remote host closed the connection
1502479894 J * transacid ~transacid@transacid.de
1502481592 M * yang Bertl: so the rsync, once it's synced and you restart it, then all it does on previously completed sync is the message "sending incremental file list" ?
1502481606 M * yang and no other output ?
1502481632 M * Guy- yang: without -v, it probably doesn't print anything else, no
1502481641 M * Guy- with -v, it outputs a summary at the end
1502481727 M * yang you are right, thank you
1502481811 M * yang Any hint regarding shredding an existing directory on the online system, would this command do ?
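Guy-'s description of rsync's re-run behaviour can be checked against a throwaway pair of directories (the paths below are illustrative, not from the channel); the second run transfers nothing and, with -v, prints only the "sending incremental file list" header plus a byte summary:

```shell
src=$(mktemp -d)
dst=$(mktemp -d)
echo hello > "$src/file.txt"

# First run copies file.txt; the second finds nothing changed and,
# because of -v, emits only the file-list header and a summary.
rsync -av "$src/" "$dst/"
rsync -av "$src/" "$dst/"

rm -r "$src" "$dst"
```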
1502481815 M * yang find /home/user -type f | while read i ; do echo $i; shred -u --random-source=/dev/urandom -n 10 -z -v "$i"; done
1502481947 M * Guy- well, as long as your filenames don't contain crap like newlines, probably yes
1502481955 M * yang ok
1502481982 M * Guy- find /home/user -type f -exec shred -u --random-source=/dev/urandom -n 10 -z -v {} +
1502481993 M * Guy- this is safer, if shred can take more than one filename
1502481996 M * Guy- it's also faster
1502482020 M * yang ok
1502482188 M * Guy- if there are lots of files and your storage backend benefits from parallelism, then you could do find /home/user -type f -print0 | xargs -0 -P 4 shred -u --random-source=/dev/urandom -n 10 -z -v
1502482257 M * yang find /home/user -type f -print0 | xargs -0 shred -u --random-source=/dev/urandom -n 10 -z -v {} \;
1502482387 M * Guy- I don't think the {} \; is necessary
1502482393 M * Guy- that would just be with find -exec
1502482415 M * Guy- xargs without parallelism offers no advantages over find -exec {} +
1502482437 M * Guy- however, it's better than find -exec {} \;
1502482503 M * Bertl you mean xargs -1 ?
1502482544 M * Bertl because by default, xargs will use more args
1502482618 M * Guy- yes, so will find -exec {} +
1502482630 M * Guy- (it's a gnuism)
1502482730 M * yang find /home/user -xdev -type f -exec shred -u --random-source=/dev/urandom -n 10 -z -v {} \
1502482733 M * yang what about this ?
1502482760 M * Guy- this will just print an error message
1502482777 M * Guy- after the -exec you either need "{} +" or "{} \;" at the end
1502482842 M * Guy- (well, technically {} doesn't have to be at the end, but either the plus or the semicolon does)
1502483193 M * Bertl find . -exec true \; -print
1502483340 M * Guy- yes, OK, at the end of the -exec clause :)
1502483482 M * yang how many cycles are enough, default seems to be 3, 10 cycles makes it very slow on delete...
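The variants traded back and forth above can be tried safely against a scratch directory. This sketch is not from the channel: the paths are illustrative, the -P 4 job count is an arbitrary example (GNU xargs requires an explicit number after -P), and the iteration count is lowered to keep the demo quick:

```shell
# Create a scratch directory with a couple of files to wipe.
tmp=$(mktemp -d)
printf 'secret\n' > "$tmp/a.txt"
printf 'secret\n' > "$tmp/b.txt"

# Variant 1: -exec ... {} + passes many filenames to one shred process
# and is safe with any filename, newlines included.
find "$tmp" -type f -exec shred -u -n 3 -z {} +

# Recreate the files to demonstrate the xargs form as well.
printf 'secret\n' > "$tmp/a.txt"
printf 'secret\n' > "$tmp/b.txt"

# Variant 2: NUL-delimited xargs with parallelism; -P takes a job
# count, so several shred processes run at once.
find "$tmp" -type f -print0 | xargs -0 -P 4 shred -u -n 3 -z

rmdir "$tmp"
```

As the channel goes on to discuss, neither variant guarantees the old blocks are actually gone on journaling or copy-on-write filesystems.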
1502483494 M * yang it will take more than a day to shred with 10 cycles
1502483501 M * yang with lots of files
1502483519 M * Guy- yang: what storage are you using, and what kind of adversary are you shredding against?
1502483566 M * Bertl more importantly, what's your filesystem?
1502483610 M * Guy- yes, that also matters, to be sure -- with zfs, for example, shredding won't do any good at all
1502483635 M * Bertl neither with jfs, reiser, xfs, ext3/4, btrfs ...
1502483637 M * yang ext4
1502483672 M * Guy- why doesn't it work with the traditional overwrite-in-place filesystems?
1502483696 M * Bertl ext3/4 has journaling and will put the data in new blocks
1502483710 M * Bertl (well, depending on the journal options, of course)
1502483772 M * Bertl I think it's even in the man page of shred
1502483818 M * Guy- well, the new data will be written to the journal first, but then it will also be committed to the file, no?
1502483842 M * Bertl but the old block will usually not be overwritten
1502483854 M * Bertl (which is the sole purpose, no?)
1502483889 M * Guy- I'm not saying you're wrong, but I don't see why it wouldn't be overwritten
1502483920 M * Bertl well, let's say you have journal=metadata
1502483959 M * Bertl in this case, it is certain that the new data will be written to a new block first, otherwise it wouldn't allow to roll back or replay the journal
1502483997 M * Guy- aren't you mixing up journal= and data=?
1502484045 M * Guy- there isn't even a journal=metadata option
1502484098 M * Guy- the way I understand it, with data=ordered (which is the default), you're fine -- the idea in this mode is to commit file data before metadata changes (such as the file growing), so that you can't end up with a file that has previously-owned garbage at the end
1502484119 M * Guy- shred never grows the file; the only piece of metadata that gets changed is the mtime
1502484123 M * Bertl that was not a mount option, that was informative :)
1502484145 M * Bertl the mount options are data=journal|ordered|writeback
1502484169 M * Guy- with data=journal, data writes are also journaled, which means that when shred overwrites a block, the new block is first committed to the journal, and then to the file, overwriting the previous contents
1502484228 M * Guy- I don't see how/when/where/why ext3 or ext4 would allocate a new data block in the main fs to store new file data instead of overwriting it in place (potentially after committing it to the journal first)
1502484251 M * Bertl I was going to get to data=journal, which is probably the only mode where shred is effective
1502484278 M * Guy- now I have convinced myself that you're wrong :)
1502484342 M * Guy- with data=ordered, it is certainly effective; with data=journal, if there is a crash before the shred data is committed to the main fs, then it's not effective
1502484359 M * Guy- but if there is a crash while you shred, it's not effective in any case
1502484401 M * Bertl maybe, I wouldn't count on it on modern filesystems in general
1502484416 M * Guy- the only instance where data is not overwritten instantly in place (on ext3/ext4) is with data=journal
1502484433 M * Guy- (well, "instantly", modulo any write caching)
1502484454 M * Bertl but somebody who is running shred 6-10 times to make sure the data is wiped should not 'trust' in assumptions :)
1502484566 M * Guy- yes, it's probably a better idea to use encryption and just throw away the key instead of shredding
1502488811 Q * DoberMann Quit: fan killed me
1502489346 J * DoberMann ~james@2a01:e35:8b44:84c0::2
1502491331 Q * sannes1 Ping timeout: 480 seconds
1502495353 M * Bertl off to bed now ... have a good one everyone!
1502495354 N * Bertl Bertl_zZ
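Whether shred overwrites in place on ext3/4 hinges on the data= mount mode debated above. A quick way to see which mode a given mount uses (the root mount point here is illustrative; data=ordered is the default and is usually not listed explicitly):

```shell
# Print filesystem type and mount options for the root mount.
# "data=journal" or "data=writeback" in the options means a non-default
# journaling mode; no data= option at all means the default, data=ordered.
awk '$2 == "/" { print $3, $4 }' /proc/mounts
```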
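Guy-'s closing suggestion (encrypt up front, then discard the key rather than multi-pass shredding) can be sketched at file level with openssl; the filenames and cipher choice are illustrative only, and for whole volumes the analogous tool would be LUKS/cryptsetup:

```shell
tmp=$(mktemp -d)

# Generate a random passphrase; in the encrypt-then-forget scheme this
# is the only thing that ever needs destroying.
key=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')

echo 'sensitive data' > "$tmp/plain.txt"
openssl enc -aes-256-ctr -pbkdf2 -k "$key" \
    -in "$tmp/plain.txt" -out "$tmp/cipher.bin"
rm "$tmp/plain.txt"

# "Shredding" is now just forgetting the key: without it, cipher.bin is
# noise, no matter how many stale copies of blocks the filesystem kept.
unset key
rm -r "$tmp"
```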