Imagine this. You’re planning to migrate your ERP applications to the cloud, and you’re considering two different data centers. One is located in Nashville near the offices with the majority of your administrative staff. The other is in Omaha. You don’t have offices in Omaha, but you have several manufacturing facilities scattered throughout the Great Plains states including Kansas, Oklahoma, and Nebraska. The two data centers seem to offer comparable security, resiliency, and capacity. The Nashville data center has more customers, but it still has plenty of room to expand. The Omaha data center is smaller, but by no means is it a “fly by night” operation. One is considered an edge data center and the other isn’t.
Which one do you choose?
What are your IT Infrastructure priorities?
Many organizations will gravitate toward Nashville. After all, it’s closer to the center of their IT operations, making it easier for those responsible for infrastructure strategy to keep an eye on things. That’s sound reasoning, but you also need to look at the decision from your end users’ point of view.
In the simple scenario above, the administrative users in the Nashville office use productivity applications like Microsoft Office 365 and a variety of financial applications. The operational users in the manufacturing facilities rely on applications like ERP, configuration and quoting systems, warehouse management, and the like. If the organization’s supply chain is digitally integrated, end users may include customers and vendors as well.
Arguably, the operational users are going to be more impacted by delays. If you’ve never looked closely at an application like MRP (material requirements planning) or supply chain operations, suffice it to say their complex computational functions can be data intensive.
You might be thinking: so what if my production planners in Nebraska have to wait an extra 30 milliseconds for a screen to come up, or if MRP takes a little longer to run? Irritating, yes, but not the end of the world.
Industry is becoming much more real time. No matter what business you’re in, chances are good you’re seeing old processes and protocols transformed as computing power and the internet allow you to do things in ways that would have been impossible just ten to fifteen years ago. When you’re competing with other more digital-savvy organizations, that 30 extra milliseconds of wait time can make a difference, especially when it’s your customer doing the waiting.
The Internet of Things (IoT) is accelerating the transformation of businesses as well. Manufacturers are automating a wide array of manufacturing tasks, many of which are hazardous to humans. Some surgeries are now performed by robots. Hotels are getting in on the act, using robots for room service. Package delivery services are even using drones to deliver packages. For every one of these IoT use cases, wait time can be a show-stopper.
Many organizations are also exploring Artificial Intelligence (AI) to determine how it can be used to improve the customer experience while decreasing costs. The biggest challenge may not be the capabilities of AI so much as the average person’s willingness to communicate with a computer program mimicking a human. If the computer must wait for a response from the central data center before it responds to something the customer said, the interaction is going to be very cumbersome.
Edge Data Centers Satisfy the Need for Speed
The business of business is changing in so many ways thanks to technology. However, each of these transformative innovations must overcome latency.
Put simply, latency is a measure of the time you spend waiting for information to travel from your PC to the data center and for a response to come back. The more data intensive your request is (refer to any of the examples above), the more the impact of latency is felt by your end users.
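The arithmetic behind this is simple but easy to underestimate, because an application rarely makes just one trip to the data center. A minimal sketch (the request counts and latency figures below are hypothetical, chosen only to illustrate the math) shows how per-request latency compounds across a "chatty" application session:

```python
def total_wait_ms(round_trips: int, latency_ms: float) -> float:
    """Total network wait for a sequence of sequential request/response
    round trips, ignoring server processing time."""
    return round_trips * latency_ms

# Hypothetical workload: an ERP screen that issues 200 sequential
# round trips to the data center before it finishes rendering.
near = total_wait_ms(200, 5)    # nearby (edge) data center: ~5 ms round trip
far = total_wait_ms(200, 80)    # distant data center: ~80 ms round trip

print(f"Edge:    {near / 1000:.1f} s of network wait")  # 1.0 s
print(f"Distant: {far / 1000:.1f} s of network wait")   # 16.0 s
```

A 75-millisecond difference per request is imperceptible on its own, but multiplied across hundreds of round trips it becomes seconds of waiting on every screen, for every user, all day.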
This is where the edge data center comes in. There are many ways to decrease latency, e.g., faster processing speeds, lower resource utilization rates, and direct connections that bypass the internet, but the distance your data has to travel is one of the primary factors. An edge data center is simply a data center located closer to your end users; putting it there reduces that distance, and with it the time your users spend waiting for the system to process their requests.
As TierPoint Director of Product Management, Dominic Romeo, explains, “When you’re inside a 50-mile radius, latencies get really, really low. The time it takes for the end user to send a command to the server and for the server to come back with a response is in the neighborhood of single-digit milliseconds versus double- or triple-digit milliseconds of round-trip time. That can have a tremendous impact on productivity and the customer experience.”
Find Your Edge (Hint: It’s where your users are)
It isn’t the location of your data center that puts it at “the edge.” It’s the location of your end users. For operational users in the central Midwest, a data center in Omaha may offer the best user experience. You might even consider splitting workloads, housing some workloads in Omaha and some in a data center closer to your home office in Nashville.
One of TierPoint’s advantages is our 40+ data centers in tier 2 and 3 markets that are farther from the public cloud data centers. We can even help you leverage hyperscalers like AWS and Azure, and we offer colocation options where you house your equipment in our data center. (A great solution if you have hardware that isn’t fully depreciated.)
We recently spent some time with Dominic Romeo discussing edge computing and the growing need for edge data centers. To learn more, you can access each segment of our interview at the links below.
Part 1: What Edge Computing Really Means