My Mom got into beekeeping about a year ago. I’m not sure what the ranks are in beekeeping, but I’d call her semi-pro at this point. She can locate the queen and tell whether she’s sick. She knows where the drones and babies are. She feeds the bees, smokes them to calm them, and knows the right time to harvest the honey. She has had her fair share of stings and also has all the sting-proof gear.
Whenever I go to visit them I’m intrigued by the whole beekeeping thing. Luckily, the last time I was there my Mom and Dad were checking the hive, so I decided to get it on film. That’s when I learned my lessons in beekeeping:
There is a reason for the sting-proof gear. (I had none.)
There is a reason for the all-white clothing. (I was wearing a bright blue shirt.)
Bees get bothered when you rip their hive apart to look at them; don’t push the envelope by trying to film them.
You know that look you get from people when you tell them that you haven’t seen Star Wars? I get that same look here at Fogo when I tell them we don’t have raised floors in our Carrollton and New Orleans Data Centers. And then the conversation kind of goes on from there…
Him: Really? No raised floors?
Him: But how do you…
Him: And how do you…
Me: Cold Aisle Containment.
Him: Man, that must be…
Me: Cheaper? Yeah, a lot.
Him: And it’s probably…
Me: Cleaner? Yeah, a lot.
It’s actually not all that out of the ordinary. According to TechTarget’s 2010 Data Center Decisions survey, 59% of IT respondents use raised flooring in their current data center, but only 43% expect to use raised floors in a future data center. A big reason for the change is cooling capacity. With data center densities on the rise, managers are finding it harder to push enough cooling through the raised floor. Providers are also finding newer and more innovative ways of cabling. Operators with existing raised floors aren’t going in and tearing them out, but they are thinking twice before putting them in their new data centers.
Here at Fogo, we like the slab for all of the above reasons, and also because it makes the most sense for our clients. From a business perspective, the slab is more cost-effective and fits well with our product delivery objectives.
Steve Hambruch, data center architect at Data Center Resources, says that the move to other (non-raised floor) solutions is a trend. I guess we’re trendy.
Geek Warning: The following post contains numbers, acronyms, and four-syllable words.
I remember when I was in college I was using those 100 MB Iomega Zip disks for storing school papers and projects. I thought I was so far ahead of the curve. Everyone else had a pile of 3.5-inch floppies, while I had my entire body of schoolwork on that one Zip disk. They never took off because they couldn’t match the popularity of the 3.5-inch floppy or the storage space of rewritable CDs and DVDs, which were both right around the corner. Then the USB drives hit. I bought my first 1 GB thumb drive for 40 bucks at the school bookstore in my last year of college. Just two years ago I bought my first 1 TB hard drive. It holds my entire music collection (including all 20 Rush albums #ProgRock) as well as 3,000+ pictures. Throw in some raw video from a VHS conversion project and I’m already running out of space. I’m guessing we’ll see petabyte drives pretty soon if they’re not already here.
Now I’m reading articles talking about exabytes and zettabytes. (Don’t lie, you had to go look up zettabyte just like I did.) A zettabyte is 1 billion terabytes. To try to put it in perspective, in 2009 the entire Internet was estimated to contain 500 exabytes. According to the Cisco Visual Networking Index, 1 exabyte amounts to 36,000 years of HDTV video, or the equivalent of streaming the entire Netflix catalog 3,177 times. During its 2011 fiscal year, Seagate reported selling a combined total of 330 exabytes of hard drives.
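If you want to sanity-check those conversions yourself, here’s a quick back-of-the-envelope sketch in Python, using the decimal SI units that drive makers count in (a terabyte as 10^12 bytes, and so on):

```python
# Decimal (SI) storage units, in bytes.
TB = 10**12  # terabyte
EB = 10**18  # exabyte
ZB = 10**21  # zettabyte

print(ZB // TB)       # 1000000000 -> a zettabyte really is a billion terabytes
print(500 * EB / ZB)  # 0.5  -> the 2009 Internet estimate is half a zettabyte
print(330 * EB / ZB)  # 0.33 -> Seagate's FY2011 drives, about a third of a zettabyte
```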
If that didn’t blow your mind, this will: some nerds over at IBM Research in California have figured out how to store one bit of data with just 12 atoms. Today’s hard drives use more than a million atoms to store a single bit and more than half a billion to store a byte, which is 8 bits, or the letter “A”. The storage technique is based on an unconventional form of magnetism called antiferromagnetism. (Use that one at your next Community Technology Mixer.) Basically, with conventional ferromagnetism the magnetic field from one bit interferes with its neighbor, so you can’t pack the bits close together. In an antiferromagnet, neighboring spins point in opposite directions and cancel each other out, so the bits can be packed closer together, allowing for increased data storage density. Once they get all the kinks worked out, we could be seeing petabyte thumb drives.
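The atom math is just as fun to run. Here’s the same kind of rough sketch, using the round numbers from the story above (order-of-magnitude figures, not exact specs):

```python
# Round numbers from the IBM Research story above.
atoms_per_bit_today = 1_000_000  # "more than a million atoms" per bit today
atoms_per_bit_demo = 12          # the antiferromagnetic demo

print(atoms_per_bit_today / atoms_per_bit_demo)  # ~83,333x fewer atoms per bit
print(8 * atoms_per_bit_demo)                    # 96 atoms for a whole byte (8 bits)
```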
We’re dense here at Fogo. Although we’re not talking in exabytes yet, terabytes are the norm over here. We’ve got the space you’re looking for. Whether it’s cabinet space or hard drive space, there’s plenty of room. We’re your density… I mean, your destiny.
Sorry for the hiatus. On Fire | The Official Blog of Fogo Data Centers is back and in full force for 2012.
We took a little time off to revamp and improve the blog (plus, we slacked off a little over the holiday season). You can expect more of the same Fogo goodness that you’ve come to love from the blog, plus a little more fun.
News of Facebook’s new arctic data center, the size of 11 football fields, got me thinking: What’s the biggest data center in the world?
According to Data Center Knowledge, it’s the colossal Lakeside Technology Center in Chicago. This multi-tenant facility spans a massive 1.1 million square feet, more than 20 times the size of Facebook’s newest big site.
The Lakeside Technology Center won’t be the biggest for long, however. QTS has announced a 1.2 million square foot facility coming soon to a Virginia near you.
Of course, it’s not the size that matters. It’s how you wiggle the worm.
While these massive sites are impressive to behold, we’re more than happy with our cozy, sophisticated, and strategically placed facilities of up to 30,000 square feet located in Georgia and New Orleans.
It hasn’t fully arrived yet, but cloud computing holds a lot of promise. Maximum accessibility of files. Maximum interoperability between devices. And minimized IT costs (both hardware and software) to name a few.
At the same time, there are several legitimate concerns about cloud computing. Let me count the ways.
Proxy servers are nothing new. But the way Amazon’s new cloud-powered Silk browser will use one is totalitarian to say the least.
In short, to help speed up mobile browsing performance, the new, slick, and aggressively priced Kindle Fire will handshake only with Amazon’s massive EC2 stack, which then fetches all the requested content, caches it, and optimizes it, all while the Kindle itself enjoys a lighter workload.
The stated performance gains are enticing and all, but what happens if EC2 goes down, as it has a couple of notable and extended times this year? Will Silk or other cloud-powered browsers offer workarounds for that?
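To make that worry concrete, here’s a toy sketch in Python of what a more resilient cloud browser could do: try the accelerating proxy first, then quietly fall back to fetching the page directly if the cloud side is unreachable. (The proxy endpoint is made up for illustration; this is not Amazon’s actual Silk API.)

```python
import urllib.parse
import urllib.request

# Hypothetical accelerator endpoint, invented for this sketch.
CLOUD_PROXY = "https://accelerator.example.com/fetch?url="

def fetch(url, timeout=5):
    """Try the cloud proxy first; fall back to a direct fetch if it's down."""
    proxied = CLOUD_PROXY + urllib.parse.quote(url, safe="")
    try:
        # Happy path: the cloud side fetches, caches, and optimizes the page.
        with urllib.request.urlopen(proxied, timeout=timeout) as resp:
            return resp.read()
    except OSError:
        # Covers URLError, refused connections, and timeouts: the cloud side
        # is unreachable, so do the heavy lifting on-device instead.
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()

page = fetch("http://example.com/")
```

Nothing fancy, but it shows the point: the device should always be able to limp along without the mothership.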
In Fogo’s opinion, this might not be the best vision for cloud browsing, at least not without further explanation. Of course, we’ll have a better picture once Silk launches on Nov. 15 with the Kindle Fire. It’ll certainly be interesting to see where this thing goes. (And don’t even get me started on the privacy concerns of Silk.)
Bloomberg reported this week that tech giants are starting to build their own servers, instead of buying them from Dell or HP.
“Hewlett-Packard, Dell and companies that sell the computers off the shelf are losing sales in a key market because Facebook and larger rival Google Inc. are leading a switch among Internet companies to do-it-yourself servers,” the magazine wrote. “These customized machines now account for 20 percent of the U.S. market for servers, which generated $31.9 billion globally last year, said Jeffrey Hewitt, an analyst at Stamford, Connecticut-based Gartner Inc.”
The reason, as Facebook’s director of hardware design Frank Frankovsky put it: “We weren’t able to get exactly what we wanted. People want to be able to build (servers) their way. They kind of want a Burger King: ‘I don’t like pickles — why do I have to have pickles?’”
For our part, we primarily use Dell hardware to power both of our data centers (special projects aside), and for the most part we’ve been able to get what we want. The good news, whether you’re a do-it-yourselfer or not, is that increased pressure from Facebook and Google will only make off-the-shelf servers better, more customizable, and cheaper.