I’m writing a program that wraps around dd to try and warn you if you are doing anything stupid. I have thus been giving the man page a good read. While doing this, I noticed that dd supports units all the way up to quettabytes, a unit orders of magnitude larger than all the data on the entire internet.
This has caused me to wonder: what’s the largest storage operation you guys have done? I’ve taken a couple of images of hard drives that were a single terabyte large, but I was wondering if the sysadmins among you have had to do something with e.g. a giant RAID 10 array.
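To give an idea of what I mean by “warn”, here’s a very rough sketch of the kind of guard rail I have in mind (hypothetical, not my actual program): refuse to run dd if the output device is currently mounted.

safe_dd() {
    # look for of=/dev/... among the arguments and bail out if that device is mounted
    for arg in "$@"; do
        case "$arg" in
            of=/dev/*)
                dev="${arg#of=}"
                if grep -q "^$dev " /proc/mounts; then
                    echo "refusing to write to $dev: it is currently mounted" >&2
                    return 1
                fi
                ;;
        esac
    done
    dd "$@"
}

Usage would be e.g. safe_dd if=disk.img of=/dev/sdX bs=4M status=progress, with sdX standing in for a real device.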
8 TB but I’m just a regular Joe with a penchant for piracy.
Ahoy!
Arrrrrr!
Not that big by today’s standards, but I once downloaded the Windows 98 beta CD from a friend over dialup, 33.6k at best. Took about a week as I recall.
I remember downloading the scene on American Pie where Shannon Elizabeth strips naked over our 33.6 link and it took like an hour, at an amazing resolution of like 240p for a two minute clip 😂
And then you busted after 15 seconds?
Totally worth it.
Yep, downloaded XP over a 33.6k modem, but I’m in NZ so 33.6 was more advertising than reality; it took weeks.
In similar fashion, I downloaded Dude, Where’s My Car? over dialup, using what was at the time the latest tech: a download manager that would split the file into 2 MB chunks and download them in order.
It took like 4 days.
I once robocopied 16tb of media
I obviously downloaded a car after seeing that obnoxious anti-piracy ad.
I’m currently backing up my /dev folder to my unlimited cloud storage. The backup of the file
/dev/random
has been running for two weeks.
No wonder. That file is super slow to transfer for some reason. But wait till you get to /dev/urandom. That file has TBs to transfer at whatever pipe you can throw at it…
Cool, so I learned something new today. Don’t run
cat /dev/random
Why not try /dev/urandom?
😹
Ya know, if not for the other person’s comment, I might have been gullible enough to try this…
That’s silly. You should compress it before uploading.
I’m guessing this is a joke, right?
/dev/random and other “files” in /dev are not really files, they are interfaces which can be used to interact with virtual or hardware devices. /dev/random spits out cryptographically secure random data. Another example is /dev/zero, which spits out only zero bytes.
Both are infinite.
Not all “files” in /dev are infinite, for example hard drives can (depending on which technology they use) be accessed under /dev/sda, /dev/sdb, and so on.
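If you want to poke at them harmlessly, something like this shows the difference (assuming GNU coreutils for head and od; byte counts picked arbitrarily):

head -c 16 /dev/urandom | od -An -tx1    # 16 random bytes, different every run
head -c 16 /dev/zero | od -An -tx1       # 16 zero bytes, every time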
I’m aware of that. I was quite sure the author was joking, with the slightest bit of concern of them actually making the mistake.
a .png of your mom’s width
I worked at a niche factory some 20 years ago. We had a tape robot with 8 tapes at some 200GB each. It’d do a full backup of everyone’s home directories and mailboxes every week, and incremental backups nightly.
We’d keep the weekly backups on-site in a safe. Once a month I’d do a run to another plant one town over with a full backup.
I guess at most we’d need five tapes. If they still use it, and with modern tapes, it should scale nicely. Today’s LTO tapes are 18 TB. Driving five tapes half an hour would give a nice bandwidth of 50 GB/s. The bottleneck would be the write speed to tape, at 400 MB/s.
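Back-of-the-envelope, if anyone wants to check my numbers (assuming 5 tapes at 18 TB each and a 30 minute drive):

echo "5 * 18 * 10^12 / 1800" | bc                       # 50000000000 bytes/s, i.e. 50 GB/s for the drive
echo "5 * 18 * 10^12 / (400 * 10^6) / 3600" | bc -l     # ~62.5 hours just to write the 90 TB at 400 MB/s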
My Chia crypto farm at its peak had about 1.5 PB of plots, each plot was I think about 100ish gigs? I’d plot them on a dedicated machine and then move them to storage for farming. I think I’d move around 10TB per night.
It was done with a combination of PowerShell and bash scripts on Windows, Linux, and the built-in Windows Subsystem for Linux.
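For the curious, the core of that kind of nightly move can be a single rsync; a sketch (paths and hostname are made up, and the real scripts had more error handling):

# move finished plots to the farming box, deleting them from the plotting disk afterwards
rsync -av --remove-source-files --progress /mnt/plotting/*.plot farmer@storage:/mnt/farm/plots/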
Approximately 2 petabytes.
I’ve imaged an entire 128GB SSD to my NAS…
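If anyone wants to do the same, the usual one-liner is along these lines (device name and host are placeholders; double-check if= before running anything like this):

dd if=/dev/sdX bs=4M status=progress | gzip -c | ssh nas 'cat > ssd-backup.img.gz'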
Why would dd have a limit on the amount of data it can copy? Afaik dd doesn’t check nor do anything fancy; if it can copy one bit it can copy infinitely many.
Even if it did any sort of validation, if it can do anything larger than RAM it needs to be able to do it in chunks.
Not looking at the man page, but I expect you can limit it if you want, and the parser for that parameter knows about these unit names. If it were me, it’d be one parser for byte-size values, and it’d work for chunk size, limit, sync interval, and whatever else dd does.
Also probably limited by the size of the number tracking. I think dd reports the number of bytes copied at the end even in unlimited mode.
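For what it’s worth, GNU dd does take size suffixes on bs= and a count= to cap how much is copied, so something like this exercises that same parser (numbers picked arbitrarily; per the OP, the suffix list runs all the way up to Q):

dd if=/dev/zero of=/dev/null bs=1M count=1024 status=progress    # exactly 1 GiB, copied in 1 MiB chunks
dd if=/dev/zero of=/dev/null bs=1M count=1T status=progress      # 1 EiB worth; would run for a very long time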
Well, they do nickname it disk destroyer, so if it was unlimited and someone messed it up, it could delete the entire simulation that we live in. So it’s for our own good, really.
It’s less about dd’s limits and more a laugh at the fact that it supports units so large it might take decades or more before anyone needs a unit that size.
No, it can’t copy infinite bits, because it has to store the current address somewhere. If they implemented unbounded integers for this, they’d still be limited by your RAM, as that number can’t grow infinitely without infinite memory.
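For scale, assuming a plain 64-bit byte counter (which is only a guess at how dd tracks position):

echo "2^64" | bc           # 18446744073709551616 bytes, about 18.4 EB
echo "10^30 / 2^64" | bc   # a quettabyte is roughly 5 * 10^10 times bigger than that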
Currently pushing about 3-5 TB of images to AI/ML scanning per day. Max we’ve seen through the system is about 8 TB.
Individual file? Probably 660 GB of backups before a migration at a previous job.
Do cloud platform storage operations count? If so, in the hundreds of terabytes (work)
20 TB (out of 21 TB usable), with a second 6x6TB ZFS raidz2 server as my send target.
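For the shape of it, the whole thing boils down to a recursive snapshot plus a send/receive pipe, roughly like this (pool names, snapshot name, and host are placeholders for my real ones):

zfs snapshot -r tank@weekly
zfs send -R tank@weekly | ssh backupbox zfs receive -Fu backuptank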
I mean, dd claims it can handle a quettabyte, but how can we be sure.
dd if=/dev/zero of=/dev/null status=progress
dd can’t really handle quettabytes! GNU has taken us all for fools! Alert the masses! Wake up sheeple!