What does it take to design your agency’s next-generation data center? Before we can answer that question, we must define what the next-generation data center actually is. A next-generation data center (NGDC) is a storage environment that uses software to virtualize, optimize, and simplify data management. The software creates the basis for a Data Fabric, making it possible to move data across different environments and put it to work in support of mission-critical activities.

Sounds complicated, right? In truth, the NGDC is a modern, simple approach that allows agencies to deliver real-time insights. Legacy data center infrastructures are limited in what they can do: they cannot scale out, and they are too complex and too costly to manage. When designing your NGDC, some key elements must be in place to ensure its effectiveness. In a recent webinar hosted by FedScoop, Rob Gordon, SolidFire Federal CTO at NetApp, discussed the importance of decoupling, self-healing, and microservices in the NGDC. Below are some other key elements he says you should consider when building out your NGDC:

  • The Ability to Scale Out: The NGDC must have architecture and infrastructure that can scale out. True scale-out gives you the ability to add in very small increments – not just capacity, but compute and memory as well. It means being able to move an asset from one cluster to another transparently, so assets can be deployed in any environment. In the NGDC, you want to get away from the hassle of migrations and upgrades. You should be able to simply add another node to your NGDC as capacity and performance needs grow.
  • Guaranteed Performance: If you can guarantee performance and quality of service, you can have applications running side by side in the same infrastructure, making it easier for you to manage. At NetApp, the term for this is 3-Dimensional Quality of Service (QoS), meaning performance requirements can be set for each individual application so that other apps or volumes are not affected.
  • Automate Management: A fully automated NGDC is critical to delivering on the mission. It reduces the risk of human error associated with complex administration tasks. You should have a comprehensive API, built from the ground up. Your NGDC should offer complete management integrations where workloads are balanced and distributed across multiple servers.
  • In-Line Data Reduction: Look at how you can reduce your data set through deduplication, compression, and thin provisioning. If you can deduplicate data at a global level, it will create efficiencies for your NGDC.
  • Data Assurance: Ensure your NGDC has real-time replication so data is instantaneously copied to one or more places as it is being generated. Integrated backup and recovery minimizes the need for external infrastructure and appliances.
  • Global Efficiencies: Your NGDC should have always-on deduplication capabilities and global thin provisioning to optimize efficiency and space utilization across your storage area networks.
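The guaranteed-performance idea above can be sketched as a simple per-volume admission model. This is an illustrative sketch only – the names, numbers, and throttling logic below are assumptions for demonstration, not NetApp’s actual API. It assumes a QoS policy expressed as a guaranteed floor, a sustained ceiling, and a short-term burst ceiling, so one volume’s demand cannot starve its neighbors:

```python
from dataclasses import dataclass

@dataclass
class VolumeQoS:
    """Hypothetical per-volume QoS settings (illustrative names, not a real API)."""
    min_iops: int    # guaranteed floor, honored even under contention
    max_iops: int    # sustained ceiling
    burst_iops: int  # short-term ceiling, usable while burst credits remain

def throttle(qos: VolumeQoS, requested_iops: int, burst_credits: int) -> int:
    """Return the IOPS actually granted to a volume in this interval."""
    ceiling = qos.burst_iops if burst_credits > 0 else qos.max_iops
    return min(requested_iops, ceiling)

db = VolumeQoS(min_iops=1000, max_iops=5000, burst_iops=8000)
print(throttle(db, requested_iops=7000, burst_credits=10))  # burst allowed: 7000
print(throttle(db, requested_iops=7000, burst_credits=0))   # capped at max: 5000
```

Because every volume has an explicit ceiling, a noisy neighbor can never consume more than its configured share, which is what makes mixed workloads on shared infrastructure manageable.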
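The deduplication described in the bullets above is typically done at the block level by content hashing: identical blocks are stored once and referenced many times. A minimal sketch of that idea, assuming fixed-size blocks and SHA-256 fingerprints (details any real system would tune differently):

```python
import hashlib

def dedupe(blocks):
    """Store each unique block once, keyed by its content hash.
    The volume keeps only a list of hash references to the block store."""
    store = {}   # content hash -> physical block (written once)
    refs = []    # logical layout: one hash reference per incoming block
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # write only if unseen
        refs.append(digest)
    return store, refs

# Three 4 KiB blocks, one of which is a duplicate.
data = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
store, refs = dedupe(data)
print(len(refs), len(store))  # 3 logical blocks, 2 physical blocks
```

Keeping the hash index global, rather than per-volume, is what the “global level” in the bullet refers to: a block written by one workload never needs to be stored again for any other.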


There is no doubt that with these practices in place, you will be well on your way to simplifying things for your agency and creating a well-designed NGDC. That said, agencies still face challenges acquiring some of these technologies and services. NetApp has made the buying process simpler for agencies by giving them a buying model that makes sense: individual nodes can be purchased and deployed as the environment changes. Finally, Rob’s advice for those moving to the NGDC is to start small. Big migrations have been known to fail, but a small core team that moves small assets one at a time works well when it comes to designing your NGDC.

Find out more about designing the Next Generation Data Center here.