AI, or artificial intelligence, is the simulation of human intelligence by machines. Through the almost instant analysis of multiple data points – sometimes millions of them – systems can mimic a human response. In the case of sophisticated AI chatbots, for example, you might not know when you’re talking to a human and when you’re talking to a machine. Edge computing can help facilitate that experience further.
Consumer Response to AI is Changing
For everyday business use, chatbots are one of the most frequently discussed potential applications of AI, and consumer response to their use is changing rapidly. In a recent study, 62% of respondents said they’d be willing to use an online chatbot to communicate with a business or brand.
When you think about it, this makes sense. Messaging with a support chatbot isn’t all that different from messaging with a human. When we use a chatbot, we aren’t expecting to make an emotional connection with whoever – or whatever – is on the other side. So long as we get the information we’re looking for quickly, fewer and fewer of us feel the need to get it from an actual human.
The rise in the use of personal assistant AI is one indicator of just how willing we are to talk to a computer. More than half (52%) of smartphone owners already use Siri or other voice-assist AI applications.
However, in the study mentioned above, this willingness to interact with a machine came with some notable caveats. A majority (61%) agreed that it would be more frustrating if a chatbot couldn’t solve their problem than if a human couldn’t, and 79% said they needed to know that a human would step in if they asked to speak to someone.
For AI to be able to help us reach our goals, people need to be willing to interact with the machine to allow it to learn. As the research shows, customers are willing to use AI, but they don’t just want a bot that can mimic human interaction. They want a bot that can perform better than its human counterparts.
Edge computing can help deliver.
Enabling AI Will Require Edge Computing Resources
Data is one necessary component of a functioning AI. The more data points available to the AI, the better. Data is the input the program uses to learn and adapt the way it ‘sees’ the world. Data helps the chatbots in use today reply to a customer’s inquiry in a more human-like fashion, even teaching them to mimic human emotions such as empathy. (Though as the research shows, empathy from a machine might still be a bridge too far for many people.)
Another necessary component is speed. An autonomous vehicle needs to process debris (or a pedestrian) in the roadway in the blink of an eye. A surgical robot needs to be able to detect a patient’s vital signs and condition with each step it takes. A room service robot needs to be able to process hotel guests’ requests and carry them out. The faster our chatbot can provide accurate answers to the customer’s query, the better the user experience.
Combine the need for speed with the capability to gather and process millions of data points in the blink of an eye and what you get is a perfect scenario for edge computing.
What is Edge Computing?
Say what you will about Wikipedia, it offers a pretty good working definition of edge computing:
Edge computing is a distributed computing paradigm which brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.
What ‘closer’ means, however, depends on the application. For many of our current customers, that might simply mean deploying workloads in a data center closer to where they do business. This alone can significantly reduce latency for applications such as MRP, supply chain operations, and customer-facing portals.
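To see why physical distance matters, consider propagation delay alone: light travels through optical fiber at roughly two-thirds of its speed in a vacuum, so the distance to the data center sets a hard floor on round-trip time no matter how fast the servers are. The short Python sketch below illustrates this back-of-the-envelope estimate; the example distances are assumed for illustration, not taken from any particular deployment:

```python
# Rough lower bound on network round-trip time from propagation delay,
# assuming signals travel through fiber at about two-thirds of c.
SPEED_OF_LIGHT_KM_S = 299_792  # speed of light in vacuum, km/s
FIBER_FACTOR = 0.67            # typical fraction of c in optical fiber

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds for a given one-way distance."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000  # there and back, converted to ms

# Hypothetical distances to a data center:
for km in (50, 500, 5000):
    print(f"{km:>5} km away: ~{round_trip_ms(km):.1f} ms minimum round trip")
```

Real-world latency is much higher once routing, queuing, and processing are added on top, but the floor set by distance is exactly why moving workloads closer to where they’re used pays off.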
Also read: What Edge Computing Really Means
Other types of applications require the data center to be even closer. An industrial robot might require on-board AI capable of analyzing the environment around it as it performs tasks. Amazon’s annual ‘picking challenge’ shows just how hard it can be for robots to learn to sort through thousands of items of all different sizes and either pick them or stock them accurately. To date, the winning entries still can’t perform as accurately or as quickly as their human counterparts, and Amazon’s director of robotics fulfillment has said that fully, end-to-end warehouse automation is still at least 10 years away.
In a scenario like warehouse fulfillment, latency needs to be reduced to allow the robotic worker to even come close to human-like speeds. For an application like this, the data center would probably need to be located on the property itself. Much development is being done in prefabricated, modular data centers for these types of applications.
Finally, robotic surgery is another example of an eventual AI use case where edge computing will be vital. The ‘robotic surgeries’ being performed today are actually human doctors using robotic instruments to help them make more precise movements.
As every doctor will tell you, you never know what will happen during a surgery until you make that first cut. (Or maybe they won’t tell you that because they don’t want you to worry, but it’s still true.) To fully automate this scenario would require a robot capable of responding to a life-threatening event such as an unexpected bleed. That sort of application would most likely require on-board AI, where the edge becomes the device itself.
Find Your Edge
For most businesses, edge computing still means housing workloads in a traditional data center closer to the customer. Read our Strategic Guide to Edge Computing to help as you develop your own edge strategy.