From my other blog, the long one:
The other day, my uncle asked me if my other uncle was “inventing yottabytes.” I took the question more literally than I should have and replied with some snide remark about it making no sense to say that anyone is “inventing yottabytes.” What he was getting at, obviously, is this: is my uncle working on the technology to handle that amount of information? The name of my uncle’s company is, in fact, Yottabyte, and in terms of far-reaching goals sure, he’d love to be the one to harness (or just store) that amount of data, but so far it’s just a name. The fact is, no one has yet coined a term for any amount of data larger than a yottabyte because there’s nowhere close to that much information waiting to be stored.
Let’s get some perspective: 1 yottabyte = 1,000,000,000,000,000,000,000,000 bytes = 10^24 bytes. If printed out, it would be something like 500 quintillion (500,000,000,000,000,000,000) pages of text. Currently, the largest packets of data that are being thrown around are exabytes, and only by big guns like the NSA.[1] 1 exabyte = 1,000,000,000,000,000,000 bytes = 10^18 bytes; that’s 1 quintillion bytes. A yottabyte is 1 septillion bytes. In other words, it’s a whole hell of a lot bigger.[2]
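If you want to sanity-check that page count yourself, the arithmetic is simple. A minimal sketch, assuming roughly 2,000 bytes of plain text per printed page (my assumption for the estimate, not a figure from the post):

```python
# Back-of-the-envelope scale comparison.
YOTTABYTE = 10**24  # bytes
EXABYTE = 10**18    # bytes

BYTES_PER_PAGE = 2_000  # assumed: ~2 KB of plain text per printed page

pages = YOTTABYTE // BYTES_PER_PAGE
print(f"{pages:.2e} pages")   # 5.00e+20 -> 500 quintillion pages
print(YOTTABYTE // EXABYTE)   # 1,000,000 -> a yottabyte is a million exabytes
```

Under that assumption, the numbers line up with the estimate above: 5 × 10^20 pages, and a full million exabytes per yottabyte.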
Storing this much data requires massive databases, and I mean massive. The NSA’s Utah Data Center will cover 1 million square feet and include four 25,000-square-foot facilities housing rows and rows of servers, not to mention the 60,000 tons of cooling equipment needed to keep the servers from overheating and the huge electrical substation charged with meeting the center’s estimated 65-megawatt demand. To put it simply: big data is super high maintenance.
This is why “we wouldn’t dream of blanketing every square meter of Earth with cameras, and recording every moment for all eternity/human posterity — we simply don’t have the storage capacity.”[3] The question is: will we ever? The solution, it turns out, may be in our very own DNA.
(Continued on Girl Meets Whiskey.)