There’s a veritable alphabet soup of acronyms that govern the operation of data centers for federal government agencies. In February 2010, then-Federal CIO Vivek Kundra introduced the Federal Data Center Consolidation Initiative (FDCCI) to help agencies stem the “growth in redundant infrastructure [that] is costly, inefficient, unsustainable, and has a significant impact on energy consumption.” With a roughly 300 percent increase in the number of data centers between 1998 and 2009, it was estimated that, left unchecked, “Federal servers and data centers could exceed 12 billion kWh [of electricity] by 2011.” Beyond the impact of energy consumption on federal spending, Kundra and the taskforce noted that data center infrastructure was underutilized, saddling each agency with redundant costs ranging from real estate to hardware and software.

While most agencies made some degree of progress under the FDCCI in reducing their data center footprint, the relative lack of success in meeting targets drove the introduction of the Data Center Optimization Initiative (DCOI) in 2016. Discussing these initiatives, Tom Rascon, Chief Technology Officer for the Department of Defense and Intelligence business at NetApp’s U.S. Public Sector, shared that “the DCOI is very much an extension of the FDCCI, but with clearer pathways to success through the GSA portal – such as the shared marketplace and the cloud services guidance.” Rascon’s civilian counterpart at NetApp, Mike Dye, noted that from his perspective “agencies have made significant strides on eliminating the low-hanging fruit – the data centers tucked away in back closets – but now they have to address power consumption issues in their main data centers.”

Both Dye and Rascon pointed to major IT changes in the seven years since the FDCCI was issued that make the DCOI goals more achievable. “While we had strong offerings in cloud storage, what we didn’t have back in 2010 was flash storage,” Rascon shared. “Now that flash storage is available at a lower cost than spinning disk, agencies should be looking to invest. Not only is it faster and more responsive, it offers greater capacity in a smaller space and creates far less heat than spinning disk,” he continued. Dye added concrete metrics: “Three years ago, it would take four racks of space to store one petabyte of data; with flash, the same amount of data can be stored in 4U. Similarly, using flash storage reduces power consumption by about 12x over spinning disk.”
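Dye’s figures lend themselves to a quick back-of-the-envelope comparison. The sketch below uses only the numbers from his quote plus one standard assumption (a 42U full-height rack); the specific kW figure in the usage line is a hypothetical example, not an agency measurement.

```python
# Back-of-the-envelope comparison based on the figures Dye cites.
# The 42U-per-rack assumption is a common full-height rack size,
# used here purely for illustration.

RACK_UNITS_PER_RACK = 42

# Space: 1 PB on spinning disk vs. flash, per Dye's quote
disk_space_u = 4 * RACK_UNITS_PER_RACK   # "four racks" ≈ 168U
flash_space_u = 4                        # "the same data in 4U"
space_reduction = disk_space_u / flash_space_u

# Power: flash draws roughly 1/12 the power of spinning disk, per Dye
POWER_REDUCTION_FACTOR = 12

def flash_power_kw(disk_power_kw):
    """Estimate flash power draw for an existing disk footprint."""
    return disk_power_kw / POWER_REDUCTION_FACTOR

print(f"Space reduction: {space_reduction:.0f}x")
print(f"A hypothetical 60 kW disk footprint shrinks to ~{flash_power_kw(60):.0f} kW on flash")
```

At those ratios, a petabyte-scale migration frees roughly 42x the rack space and cuts the storage power bill by an order of magnitude, which is the efficiency gain the two CTOs are describing.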

So, is it possible, as FCC CIO David Bray suggested at a recent event, for federal agencies to operate with just three data centers?

While Dye and Rascon are firm in their assessment that federal agencies are still a ways off from Bray’s vision of data center optimization, they’re optimistic about the gains in efficiency that agencies are making.  For agencies still grappling with the fundamentals of data center consolidation and optimization, the two CTOs offered these best practices:

  1. Start with an assessment that not only inventories assets and data location but also develops a snapshot of patterns of data access and usage. That will guide capacity planning, help determine where data should be stored – on-premises, in a private cloud, a public cloud, or a hybrid environment – and help tier data to drive further cost efficiencies.
  2. Understand power consumption rates. With many agencies not directly responsible for their power costs, it’s important to establish a benchmark of MWh usage.
  3. Take time to investigate new technologies for storage media, cloud environments, containerization, and virtualization that enable reductions in physical space and power consumption.
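The second best practice – establishing a MWh benchmark – can be sketched as a simple conversion from average power draw to annual energy and cost. All inputs below are hypothetical examples for illustration; the PUE (power usage effectiveness) multiplier and electricity rate would come from an agency’s own facilities data.

```python
# Illustrative MWh benchmark for a data center: convert average IT
# power draw into annual energy use and cost. The 250 kW load,
# 1.8 PUE, and $0.10/kWh rate are assumed example values.

HOURS_PER_YEAR = 8760

def annual_mwh(avg_draw_kw, pue=1.8):
    """Annual energy in MWh, including facility overhead via PUE."""
    return avg_draw_kw * pue * HOURS_PER_YEAR / 1000

def annual_cost(mwh, rate_per_kwh=0.10):
    """Annual electricity cost at a given $/kWh rate."""
    return mwh * 1000 * rate_per_kwh

usage = annual_mwh(avg_draw_kw=250)   # e.g. 250 kW of IT load
print(f"{usage:.0f} MWh/year, ~${annual_cost(usage):,.0f}/year")
```

Even a rough benchmark like this gives an agency a baseline against which consolidation and flash-migration savings can be measured year over year.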


Want to learn more about data center optimization strategies? Here’s a good place to start.
