How Data Gravity Impacts Data Center Design

Every few years, enterprise technology encounters a game-changing trend that causes us to rethink the way we approach data center design. From the early days of mainframe computing, we have been gathering data for processing and analysis.

With the arrival of cloud computing in the late 1990s, data became centralized on hosted remote servers accessible via the internet. The cloud made data storage less expensive and highly scalable, so pools of data grew into data lakes and then into oceans of data, far too much to transfer easily for analytics. Data gravity now requires organizations to bring data processing closer to the data for faster, more efficient analytics, which means data center design has to accommodate a new set of processing criteria.

The Growing Black Hole of Data Gravity

Simply put, data gravity means that data and applications are attracted to one another: the more data that has to be processed, the closer the applications need to be to the data source. We are now seeing an explosion in big data analytics, where massive amounts of unrelated data are correlated to identify patterns. The oceans of data stored in the cloud are ideal for big data analytics, but the data sets themselves are too large to move around. For effective processing, analytics software needs the source data stored locally, and when the data sets are too big to move in and out of the cloud economically, the applications have to come to the data instead. With more data being generated every day, data gravity will continue to increase.
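
To see why moving large data sets is impractical, consider a rough back-of-the-envelope estimate. The sketch below is illustrative only; the data set size, link speed and efficiency factor are assumptions, not figures from any specific deployment.

```python
# Rough estimate of how long it takes to move a large data set over a network.
# The sizes, link speed and efficiency factor below are illustrative assumptions.

def transfer_days(data_set_petabytes: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days needed to move a data set over a link of the given nominal speed,
    assuming only the stated fraction of the bandwidth is usable."""
    bits = data_set_petabytes * 1e15 * 8       # petabytes -> bits
    usable_bps = link_gbps * 1e9 * efficiency  # nominal Gbps -> usable bits/sec
    return bits / usable_bps / 86_400          # seconds -> days

# Example: a 1 PB data set over a 10 Gbps WAN link at ~70% efficiency
print(f"{transfer_days(1, 10):.1f} days")      # roughly 13 days
```

Even with a dedicated 10 Gbps link, a single petabyte takes roughly two weeks to move, which is exactly why the applications are brought to the data rather than the other way around.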

As more data is generated, especially from new sources such as the Internet of Things (IoT), 5G mobile access, blockchain and artificial intelligence (AI), cloud data oceans are essentially becoming black holes of data. IDC predicts that the size of the datasphere will grow from 33 zettabytes in 2018 to 175 zettabytes by 2025. Gartner predicts that by 2020, more than half of all data will be generated and processed outside the data center.
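
As a quick sanity check on the IDC forecast cited above, the implied compound annual growth rate can be computed directly; this is a minimal sketch based only on the two figures in the text, not part of the IDC report itself.

```python
# Implied compound annual growth rate of the global datasphere,
# using the IDC figures cited above (33 ZB in 2018, 175 ZB in 2025).
start_zb, end_zb, years = 33, 175, 2025 - 2018
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"{cagr:.0%} per year")   # roughly 27% per year
```

That works out to the datasphere growing by more than a quarter every year, a roughly fivefold increase over seven years.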

Normally, data analytics applications reside in their own hardware and software stacks, e.g., a data center in which the data itself is kept in direct-attached storage (DAS). Analytics applications built on platforms such as Hadoop and Splunk need easy data access because the data has to migrate before it can be analyzed. As cloud data pools become larger, analytics applications have to move closer to the data sources, which is why new data center corridors are popping up in metropolitan locations, creating centralized ecosystems to counteract data gravity.

Solving the Data Distribution Problem

As data sets become larger and denser, they become harder to move across networks. Larger data sets, such as those used in big data, require more data storage and faster network connections. The 451 Group notes that 84 percent of enterprises are adding data storage, and 30 percent are already storing 250 petabytes annually. To address the problem, new data center districts are emerging, with campuses that continue to expand to consolidate data storage and reduce data transmission costs.
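
To put the 250-petabyte figure in perspective, here is a simple sketch of the sustained network throughput needed just to ingest that much data in a year. It is an order-of-magnitude illustration only and ignores protocol overhead, traffic bursts and replication.

```python
# Sustained throughput needed to ingest 250 PB of new data per year.
# Simplified estimate: ignores protocol overhead, bursts and replication.
petabytes_per_year = 250
bits = petabytes_per_year * 1e15 * 8           # petabytes -> bits
seconds_per_year = 365 * 24 * 60 * 60
sustained_gbps = bits / seconds_per_year / 1e9
print(f"{sustained_gbps:.0f} Gbps sustained")  # roughly 63 Gbps
```

Ingesting a quarter of an exabyte per year means saturating the equivalent of several 10 Gbps links around the clock, which is why consolidating storage and compute on the same campus cuts transmission costs so sharply.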

Data centers are expanding in regions that are becoming high-density data center gravity wells. Northern Virginia, for example, has become home to more than 100 data centers and more than 10 million square feet of data center floor space. Colocation providers such as Digital Realty and CyrusOne believe that data gravity is creating new data center ecosystems. Digital Realty is focusing on retooling interconnections to handle bigger data loads, while CyrusOne anticipates placing more hybrid cloud facilities in multi-tenant data centers near data gravity wells.

To address power requirements, some data centers are installing self-contained power plants. Data center design now includes renewable energy sources, such as solar and wind power, as well as generators and fuel cells. Deploying green energy systems not only gives data center operators energy autonomy, but also shortens time to completion and saves money. In California, for example, energy costs are expected to rise following Pacific Gas & Electric's bankruptcy filing, driven by the fines and lawsuits tied to wildfires caused by an aging electricity infrastructure. Northern California data centers also had to contend with rolling blackouts, PG&E's approach to minimizing wildfire risk, making the need for reliable data center power even greater.

Modular Data Center Design Offers Flexibility

The need for rapid data center deployment and expansion is driving adoption of modular data center designs. Because they are high density, compact and flexible, modular data centers can be installed wherever they are needed to address the challenges of data gravity. They can be custom designed to meet specific computing requirements, and they are readily scalable, so adding computing power and data storage is easy. They also can be designed for quick, trouble-free installation almost anywhere.

Here are just some of the advantages of modular data center design:

  • Fast deployment – Modular data centers can be designed, fabricated, shipped and installed faster than stick-built data centers. In fact, installation time can be cut by as much as 30 percent.

  • Scalable – The modular approach means the data center is readily scalable, so more data storage and computing power can be added as required. The modular design is standardized, and because it is designed and built to specification in advance, it can easily be replicated or expanded.

  • Versatile – Data center modules are fully customizable to accommodate specific hardware or configurations. They can be designed as self-contained skid-based systems, racks that are ready to install or other modular systems.

  • High density and low PUE – Because the components are matched and the systems are prefabricated, modular data centers are very compact and require a smaller footprint. The compact design also lowers power usage effectiveness (PUE), which translates into operational cost savings (see the sketch after this list).

  • Reduced cooling requirements – Modular data center design also reduces cooling demands. Cooling accounts for about half of data center energy costs, but in a modular design the components are matched to use less energy, which in turn requires less cooling, and the layout is configured for optimal airflow.
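
To make the PUE point above concrete, here is a minimal sketch of how PUE is calculated (total facility energy divided by IT equipment energy). The load figures are hypothetical and only illustrate how a smaller cooling load pulls PUE down; they are not measurements from any real facility.

```python
# Power usage effectiveness (PUE) = total facility energy / IT equipment energy.
# The kW figures below are hypothetical, chosen only to illustrate the ratio.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """PUE for a facility drawing the given IT, cooling, and other loads
    (lighting, power-distribution losses, etc.)."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# A traditional facility where cooling is roughly as large as the IT load
print(f"{pue(it_kw=500, cooling_kw=450, other_kw=75):.2f}")   # about 2.05

# A compact modular build with matched components and optimized airflow
print(f"{pue(it_kw=500, cooling_kw=150, other_kw=50):.2f}")   # about 1.40
```

The closer PUE gets to 1.0, the smaller the share of power spent on anything other than the IT equipment itself, which is where the operational savings come from.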

As data gravity continues to take its toll on enterprise infrastructures, modular data centers will fill a necessary gap, making it easier to install scalable data centers closer to cloud data storage facilities. Because modular data centers are compact, scalable and adaptable to almost any data processing requirement, they make it possible to bring more computing power to the data ocean.

If you want to learn more about the advantages of modular data center design, be sure to download The Complete Guide to Modular Data Centers.
