By: Christopher Rogers on September 22nd, 2017


Technology’s Changing a Familiar Face


The face of something very familiar is changing. Within a few years you may not recognize the “face” of your data center, because many of the technologies you’ve relied on for the last 5-10 years are being disrupted: traditional networking by software-defined networking (SDN), hardware virtualization by container technologies, monolithic storage arrays by a variety of technologies such as all-flash arrays (AFA), hyper-converged storage, and cloud-based storage, and the list goes on. They’re all changing the data center because speed, agility, and responsiveness are what the business is demanding from IT. We’re not going to focus on all of those today, just the one that will be front and center in this year’s OktoberTekfest demo hall. First, let’s take a look at the current storage situation.

 

REWIND
I’m sure reading “speed, agility, and responsiveness” made your mind go to how quickly you can make changes within your current storage environment, right? You chuckle, but you know it’s true. Those big storage arrays are designed to be entrenched for the long haul, three to five years or longer. Why? Because it’s your data, and it’s a bear to move. Of course, things like virtualization have made it easier to move, but it’s still not fun, and it’s a tedious process. And then there’s what it takes to run and maintain it: dedicated hardware and a specialized talent pool to administer it. How quickly can you onboard new storage, get it configured so it’s usable within the environment, and then migrate data over to it?


Or how about that mission-critical enterprise OLTP application? Is it still isolated in some way within your current storage environment? Is it attached to its own tier of disk within the storage array? By now I’m sure you’ve moved it to an AFA. Is your AFA dedicated to that application? If not, is the AFA configured with quality of service (QoS) to provide the best response? What about within the compute environment? Is that mission-critical enterprise OLTP application running on a set of dedicated hosts? If it’s virtualized, is it running on a dedicated set of servers, an isolated cluster or clusters?
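
To make the QoS idea concrete: under the hood, storage QoS usually amounts to rate-limiting each workload’s I/O so a noisy neighbor can’t starve the OLTP application. Here’s a minimal token-bucket sketch in Python of that mechanism; it’s a toy model for illustration, not any array vendor’s actual implementation.

import time

class IopsLimiter:
    """Toy token-bucket limiter that caps a workload at max_iops.

    Illustration only: real array QoS is enforced in the storage
    controller, not in application code.
    """

    def __init__(self, max_iops: float):
        self.max_iops = max_iops
        self.tokens = max_iops               # start with a full bucket
        self.last_refill = time.monotonic()

    def acquire(self) -> None:
        """Block until one I/O 'token' is available, then spend it."""
        while True:
            now = time.monotonic()
            elapsed = now - self.last_refill
            # Refill in proportion to elapsed time, capped at one second's worth.
            self.tokens = min(self.max_iops, self.tokens + elapsed * self.max_iops)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.max_iops)

# A hypothetical batch job capped at 500 IOPS so the OLTP
# workload keeps the headroom it needs.
batch_limiter = IopsLimiter(max_iops=500)
for _ in range(5):
    batch_limiter.acquire()                  # would wrap each disk I/O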


We all know storage is an entrenched part of the data center; it houses the data, the most important asset your business has. I’m not saying it will be replaced in the short term, but is that most of your data center? What percentage of the systems in your environment require that type of hardware? 10%? 20%? I’m sure you’d agree that somewhere in that range is pretty safe, isn’t it? But how did you go about your last storage purchase? Did you size it for all workloads but dedicate the best disks, and maybe specific storage processors, to that enterprise OLTP application (if possible)? For the other 80% or so that’s most likely virtualized, what did you do? Maybe you went all-flash, hoped for space savings through deduplication and other storage efficiencies, sized accordingly, and bought an AFA? Or maybe the budget wasn’t that good, so you layered some flash in with traditional disk and created a tiering scheme? We all know there are many ways to slice this, but the point is you had to make some hard decisions and concessions, with budget playing a pretty big role as well, right? Dedicated storage arrays are not cheap. And we haven’t even started talking about the storage area network (SAN) built to support access to the storage, or the compute that needs to access it.
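
Since tiering came up: conceptually, a tiering engine just ranks data by how hot it is and promotes the busiest chunks to flash until flash runs out of room. Here’s a deliberately simplified Python sketch of that placement pass; real arrays work on fine-grained heat maps and move data asynchronously, so treat this as an illustration of the idea, not any product’s algorithm.

from dataclasses import dataclass

@dataclass
class Extent:
    """A chunk of a volume, tracked by how often it was read."""
    extent_id: int
    reads_last_hour: int

def place_extents(extents, flash_slots, hot_threshold=100):
    """Toy tiering pass: the hottest extents go to flash until it
    fills up; everything else lands on spinning disk."""
    ranked = sorted(extents, key=lambda e: e.reads_last_hour, reverse=True)
    flash, disk = [], []
    for extent in ranked:
        if len(flash) < flash_slots and extent.reads_last_hour >= hot_threshold:
            flash.append(extent)
        else:
            disk.append(extent)
    return flash, disk

# Hypothetical workload: two hot extents, two cold ones, room for
# three extents on flash.
sample = [Extent(1, 900), Extent(2, 15), Extent(3, 400), Extent(4, 2)]
flash_tier, disk_tier = place_extents(sample, flash_slots=3)
print([e.extent_id for e in flash_tier])   # -> [1, 3]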


Does any of that scream agile IT? Did you finish that process feeling you were agile and ready for the change in business requirements six months from now? Or did you leave that process hoping not to go through it again until the next storage refresh three to five years from now? Maybe you feel better about your agility this time versus last time, but I don’t know many who feel completely confident they can make changes fast enough to meet rapidly changing business requirements. Let’s be honest, I don’t know any IT person who deals with storage requirements who will ever feel 100% confident, do you? We all know systems have finite scale and limits, and that’s before we even address the issue of putting all your eggs in one huge basket.

 

ENOUGH
“Ok, ok,” you say. “You’ve rehashed my pain and agony, now what?” Well, that’s where Tekfest comes in. We’re not offering a storage panacea, but there are some compelling updates in technology that just might help with your storage pain. It might not be today, it might not be tomorrow, but these new storage trends are going to transform the data center. Stop by our demo booth to see one of those technologies in action: Cisco’s HyperFlex, a hyper-converged storage solution. I know, you may say hyper-converged technology has been out for a little while now, and it’s coming along, but it’s not ready to compete with your large disk arrays. It’s funny you mention that. Stop by and chat with us during the Tekfest demo session to see how innovations really are changing the face of storage in the data center, not in the future but today.

 

 

About the Author:

Christopher Rogers has over 20 years of experience in the IT industry. He has experience in most areas of IT, with extensive experience in LAN, WAN, compute, storage, virtualization, and automation. For the last 10 years he has focused on the data center. As a Director of Data Center Solutions at Internetwork Engineering, he focuses on helping customers solve business problems through technology. Connect with Chris on LinkedIn.