As the GovDataDownload team looks at the technologies and IT trends that will make an impact on public sector organizations during 2019, data management remains top of mind. From AI solutions to the Internet of Things (IoT) and how applications and services move to the edge, core, and cloud, transforming and managing data is at the core of IT success in 2019. Atish Gude, NetApp’s senior vice president and chief strategy officer, shared his perspectives on the impact of these technologies in a recent blog post.
For our readers, we’ve highlighted Atish Gude’s top predictions below:
Still at an early stage of development, AI technologies will see action in an explosion of new projects, the majority of which will begin in public clouds.
A rapidly growing body of AI software and service tools – mostly in the cloud – will make early AI development, experimentation, and testing easier. This will enable AI applications to deliver high performance and scalability, both on- and off-premises, and to support multiple data access protocols and varied new data formats. Accordingly, the infrastructure supporting AI workloads will also have to be fast, resilient, and automated, and it must support the movement of workloads within and among multiple clouds, on-premises and off. As AI becomes the next battleground for infrastructure vendors, most new development will use the cloud as a proving ground.
Edge devices will get smarter and more capable of making processing and application decisions in real-time.
Traditional IoT devices have been built around an inherent “phone home” paradigm: collect data, send it for processing, wait for instructions. But even with the advent of 5G networks, real-time decisions can’t wait for data to make the round trip to a cloud or data center and back, and the rate of data growth is only increasing. As a result, data processing will have to happen close to the consumer, intensifying the demand for more processing capability at the edge. IoT devices and applications – with built-in services such as data analysis and data reduction – will get better, faster, and smarter about deciding what data requires immediate action, what data gets sent home to the core or to the cloud, and even what data can be discarded.
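That act/send/discard triage can be sketched as a simple local filter. The sensor fields, thresholds, and category names below are hypothetical illustrations, not anything from the original post:

```python
# Hypothetical edge-device triage: classify each reading locally so only
# the data that matters makes the trip to the core or cloud.
# All thresholds and field names here are illustrative assumptions.

def triage(reading):
    """Decide at the edge what to do with one sensor reading."""
    temp = reading["temperature_c"]
    if temp >= 90.0:
        # Out of safe range: act locally, immediately, no round trip.
        return "act"
    if abs(temp - reading["baseline_c"]) >= 5.0:
        # Anomalous but not urgent: worth sending home for analysis.
        return "send"
    # Routine reading: discard at the edge to reduce data volume.
    return "discard"

readings = [
    {"temperature_c": 95.0, "baseline_c": 70.0},
    {"temperature_c": 76.5, "baseline_c": 70.0},
    {"temperature_c": 70.2, "baseline_c": 70.0},
]
decisions = [triage(r) for r in readings]
print(decisions)  # ['act', 'send', 'discard']
```

The point of the pattern is that only the second reading ever leaves the device; the first is handled on the spot and the third never consumes bandwidth at all.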
The demand for highly simplified IT services will drive continued abstraction of IT resources and the commoditization of data services.
Remember when car ads began boasting that your first tune up would be at 100,000 miles? (Well, it eventually became sort of true.) Point is, hardly anyone’s spending weekends changing their own oil or spark plugs or adjusting timing belts anymore. You turn on the car, it runs. You don’t have to think about it until you get a message saying something needs attention. Pretty simple.
The same expectations are developing for IT infrastructure, starting with storage and data management: developers and practitioners don’t want to think about it; they just want it to work. “Automagically,” please. Especially with containerization and serverless technologies, the trend toward abstraction of individual systems and services will drive IT architects to design for data and data processing and to build hybrid, multi-cloud data fabrics rather than just data centers.
With the application of predictive technologies and diagnostics, decision makers will rely more and more on extremely robust yet “invisible” data services that deliver data when and where it’s needed, wherever it lives. These new capabilities will also automate the brokerage of infrastructure services as dynamic commodities and the shuttling of containers and workloads to and from the most efficient service provider solutions for the job.
Hybrid, multi-cloud will be the default IT architecture for most larger organizations while others will choose the simplicity and consistency of a single cloud provider.
Containers will make workloads extremely portable. But data itself can be far less portable than compute and application resources, and that affects the portability of runtime environments. Even if you solve for data gravity, data consistency, data protection, data security, and the rest, you can still face the problem of platform lock-in: the cloud provider-specific services you’re writing against are not portable across clouds at all.
As a result, smaller organizations will either develop in-house capabilities as an alternative to cloud service providers, or they’ll choose the simplicity, optimization, and hands-off management that comes from buying into a single cloud provider. And you can count on service providers to develop new differentiators to reward those who choose lock-in.
On the other hand, larger organizations will demand the flexibility, neutrality, and cost-effectiveness of being able to move applications between clouds. They’ll leverage containers and data fabrics to break lock-in, to ensure total portability, and to control their own destiny. Whatever path they choose, organizations of all sizes will need to develop policies and practices to get the most out of their choice.
Container-based cloud orchestration will enable true hybrid cloud application development.
Containers promise, among other things, freedom from vendor lock-in. While containerization technologies like Docker will continue to have relevance, the de facto standard for multi-cloud application development (at the risk of stating the obvious) will be Kubernetes. But here’s the cool part: new container-based cloud orchestration technologies will enable true hybrid cloud application development, producing applications that serve both public cloud and on-premises use cases with no more porting back and forth. That will make it progressively easier to move workloads to where data is being generated, rather than moving data to the workloads as has traditionally been the case.
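A minimal Kubernetes Deployment manifest illustrates why that portability holds: because it targets the Kubernetes API rather than any one cloud’s services, the same file applies unchanged to a managed cloud cluster or an on-premises one. The application name and image below are hypothetical placeholders:

```yaml
# Minimal, hypothetical Deployment sketch. Nothing here is specific to
# any cloud provider, so the same manifest runs on any conformant
# Kubernetes cluster, public cloud or on-premises.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: registry.example.com/demo-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Lock-in re-enters the moment an application depends on provider-specific managed services that sit outside manifests like this one, which is exactly the trade-off described above.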
Read the full blog post here.