Edge computing refers to processing that takes place at or near the "edge" of a network, where Internet of Things (IoT) data is generated or collected. Companies that use edge computing with edge analytics – including artificial intelligence and machine learning – capture valuable, real-time insights that can drive competitive advantage.
The biggest advantage of edge computing – vastly reduced latency in analytic processing – is the reason for all the buzz about this technique. Before the advent of edge computing, the data flowing from connected assets had to travel from the edge of the network back to a data center or cloud for processing. This delay limited the potential for businesses to quickly (or automatically) take advantage of insights from their data.
With edge computing and edge analytics, companies can process data on the spot, automating decision making and action. We’re talking as real-time as it gets for the data analytics that drives critical business decisions. Because processing is handled right at the device that's collecting or generating data, edge computing and analysis are ideal for cases where internet or cellular connections are spotty, or when bandwidth is constrained (think offshore oil platforms, mines and remote customer sites).
For businesses seeking competitive advantage, edge computing holds tremendous promise. For example, edge computing reduces IT costs because you no longer need to move all your high-frequency IoT data to the cloud or keep it on-premises for analysis or long-term storage. Instead, analytics at the edge can identify which data to move and which to store for deeper analysis (such as significant changes in temperature or vibration, or all sensor data captured minutes before and after failures). High-value data can also be compressed, further reducing the overall data volume and network bandwidth that’s needed to move data to the cloud.
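The idea of deciding at the edge which readings are worth forwarding can be sketched simply. Below is a minimal, hypothetical example of a deadband filter: only readings that differ significantly from the last forwarded value leave the device. The 2.0-degree threshold and the temperature data are illustrative assumptions, not a specific product's behavior.

```python
# Hypothetical edge-side filtering sketch: forward only readings that
# change "significantly"; the rest stay local. Threshold is an assumption.

def filter_significant(readings, threshold=2.0):
    """Keep the first reading and any reading that differs from the
    last forwarded value by more than `threshold` (a deadband filter)."""
    forwarded = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            forwarded.append(value)
            last_sent = value
    return forwarded

# Of 8 temperature samples, only the meaningful changes leave the edge.
temps = [70.1, 70.3, 70.2, 75.0, 75.4, 80.2, 80.1, 80.3]
print(filter_significant(temps))  # [70.1, 75.0, 80.2]
```

Even this simple rule cuts the forwarded volume by more than half on the sample above; real deployments layer similar logic with compression before anything crosses the WAN.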
When edge computing combines with artificial intelligence (AI), the advantages multiply. Consider some of the outcomes companies achieve using AI-embedded IoT analytics at the edge of a network to uncover on-the-spot hidden patterns in data:
- Lightning-fast, business-critical decisions made with confidence.
- Millions saved by avoiding unplanned downtime.
- Enhanced operational efficiency.
- Differentiated customer experiences.
- Faster innovation.
- Stronger data security.
Data Analytics at the Edge
From a myriad of sensors to industrial controllers and connected devices like vehicles, wearables and drones, vast amounts of IoT data are generated today. Industries that can quickly seize and analyze this data close to its source are well-positioned to thrive in the digital economy.
Real-world, cross-industry examples
Industries are still early in their adoption of edge computing. But given the high-value business needs it can support, expect edge computing to drive a new wave of business innovation. According to IDC, by 2022 over 40% of organizations' cloud deployments will include edge computing and 25% of endpoint devices and systems will execute AI algorithms.
In anticipation of this trend, many equipment manufacturers are already building generalized computing capabilities into their IoT devices and products. This is extra, unused processing power that can be used in the future for machine and sensor applications that haven’t yet been envisioned. This “overbuilding” of edge devices attests to the fact that manufacturers expect growing demand for edge computing and analytics power to support new innovations, services and business scenarios.
Let's look at some real-world examples that showcase how edge computing is being used today – and point to what’s possible in the future.
Maximizing energy grid uptime through smart vegetation management
The US electrical grid – a vast network consisting of over 200,000 miles of high-voltage transmission lines and 5.5 million miles of local distribution lines – is under constant attack by vegetation. Operations and maintenance costs for the grid represent up to 35% of the total operating budget, and vegetation removal costs are the largest line item. (California's independent operators, for example, spend more than $250 million a year on vegetation management for high-voltage distribution lines alone.)
Most utilities today operate with a time-based, highly manual approach to vegetation management that’s been in place for almost 100 years. With this approach, cutting frequency is determined by on-site inspection. But with edge computing, utilities can instantly analyze data captured by drones in real time to assess types of vegetation, growth rates, rainfall and more. For example, they can:
- Identify dangerous trees and vegetation encroachments more efficiently and effectively.
- Cut maintenance costs.
- Build predictive models of vegetation growth patterns.
- Provide comprehensive right-of-way inventories.
- Identify fire risk areas.
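The shift described above, from time-based to condition-based maintenance, can be sketched with a simple growth model. The example below is illustrative only: the minimum clearance, the drone-measured clearance and the growth rate are assumptions used to show how a trim date could be predicted instead of fixed on a calendar.

```python
# Hedged sketch of condition-based trim scheduling: from a drone-measured
# clearance and an estimated growth rate, predict when vegetation will
# breach the minimum clearance. All figures here are assumptions.

MIN_CLEARANCE_FT = 10.0  # required distance between vegetation and line

def days_until_trim(clearance_ft, growth_ft_per_day):
    """Days until the measured clearance shrinks below the minimum."""
    if clearance_ft <= MIN_CLEARANCE_FT:
        return 0  # already encroaching: trim now
    return int((clearance_ft - MIN_CLEARANCE_FT) / growth_ft_per_day)

# A span with 13 ft of clearance and fast-growing vegetation (0.05 ft/day)
print(days_until_trim(13.0, 0.05))  # 60 -> schedule a crew in ~2 months
print(days_until_trim(9.5, 0.05))   # 0  -> immediate fire/outage risk
```

The value of running this at the edge is that each drone pass updates the forecast per span, so crews go where growth is fastest rather than everywhere on a fixed cycle.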
Fog versus edge computing: What's the difference?
With fog computing, businesses logically and efficiently distribute data, compute, storage and applications between data source and cloud. They do so according to what makes sense to achieve their desired outcomes. The result is a fog network, which generally focuses on edge devices that speak to each other – such as IoT gateways. In contrast, edge computing focuses on physical devices (such as routers, switches, integrated access devices (IADs), multiplexers and network access devices) that are attached to or embedded into a "thing," such as a cell tower, industrial machinery or other physical asset.
Increasing return on assets
Consider a company that operates a wind turbine farm. With cloud computing alone, the company would operate these assets until IoT devices and sensors detect an issue, such as excessive winds. But there's a long loop between the sensors and the analytical software in the cloud. If winds quickly increase to dangerous levels, processing delays and failure to immediately shut down turbines could result in serious damage, costly downtime and expensive repairs.
With hundreds of sensors on each wind turbine, things like output, weather conditions, wear and tear, and overall operation relative to target parameters can be measured continuously. Then real-time analysis and machine learning can use this IoT data to recognize a dangerous state and trigger an immediate shutdown. There are no delays caused by shuttling sensor data to the cloud over costly WAN links for processing, or by sending analytic outcomes (or decisions) back to the edge. Filtering IoT data at the edge cuts down on the amount of data that needs to be transported over the network, reducing costs even more.
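A local shutdown trigger like the one described can be sketched in a few lines. This is not SAS's (or any vendor's) implementation: the 25 m/s cut-out speed and the 3-sample rolling window are assumptions, chosen so a single gust does not trip the turbine while a sustained breach does.

```python
# Illustrative edge-controller sketch: shut down a turbine when a short
# rolling average of wind speed exceeds a cut-out limit. The limit and
# window size are assumptions, not real turbine parameters.
from collections import deque

CUT_OUT_MS = 25.0   # assumed cut-out wind speed (m/s)
WINDOW = 3          # samples to average, so one gust is ignored

def make_monitor():
    recent = deque(maxlen=WINDOW)
    def check(wind_speed_ms):
        """Return 'SHUTDOWN' when the rolling average breaches the limit."""
        recent.append(wind_speed_ms)
        avg = sum(recent) / len(recent)
        return "SHUTDOWN" if len(recent) == WINDOW and avg > CUT_OUT_MS else "RUN"
    return check

check = make_monitor()
decisions = [check(speed) for speed in [18.0, 22.0, 24.0, 27.0, 28.0]]
print(decisions[-1])  # SHUTDOWN -- sustained winds topped the limit
```

Because this loop runs on the turbine's own controller, the decision arrives in milliseconds rather than after a round trip to the cloud.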
Helping smart manufacturing detect and correct errors early
Some manufacturers today are using computer vision – powered by cameras or edge computing devices embedded in machinery – to detect issues faster and earlier. Embedded computer vision is highly accurate, detecting defects in real time, as the factory is making products. As a result, it delivers fewer false positives and much earlier detection of product deviations than traditional methods. Using IoT data processing at the edge, manufacturers can adjust machinery or computer systems before products are out of spec. And they can automatically trigger immediate shutdowns – for example, when edge devices detect significant, unexpected defects.
As a result, factories can expect higher manufacturing yields, reduced manual inspections, more asset uptime and lower risk of shipping products outside of customer specs – all business-critical KPIs.
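The "catch deviations before products are out of spec" step can be illustrated with a classic statistical process control (SPC) rule rather than a trained vision model, which is what the sketch below assumes: flag the line when a measurement drifts outside 3-sigma control limits computed from a known-good baseline. The dimensions and limits are illustrative.

```python
# Minimal early-detection sketch using SPC control limits (an assumed
# stand-in for a full computer-vision model): flag measurements that
# drift outside 3-sigma limits learned from in-spec production.
import statistics

def control_limits(baseline):
    """Compute 3-sigma control limits from known-good measurements."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - 3 * sigma, mean + 3 * sigma

def inspect(measurement, limits):
    lower, upper = limits
    return "OK" if lower <= measurement <= upper else "OUT_OF_SPEC"

# Baseline widths (mm) from in-spec production, then live readings.
baseline = [10.02, 9.98, 10.01, 9.99, 10.00, 10.02, 9.97, 10.01]
limits = control_limits(baseline)
print(inspect(10.00, limits))  # OK
print(inspect(10.40, limits))  # OUT_OF_SPEC -> adjust or stop the machine
```

Running the check on-device means the adjust-or-stop decision happens within the production cycle, before further out-of-spec units are made.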
Differentiating in-store retail customer experiences to maximize sales
Retailers are using video cameras as edge devices to monitor the paths customers take in retail environments. These devices use edge computing to merge past buying and omnichannel histories of each customer and to generate unique, real-time offers based on customer profile and geolocation. (Offers are sent to customers who have opted in to store mobile apps.) Edge devices that capture more IoT data can target these offers even more effectively. Consider that edge devices can track a customer’s proximity to a store, path through the store, and more. If they see a customer looking at diapers for a while, they can instantly send a coupon or incentive for diapers or other baby-related products.
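The dwell-then-offer pattern described above can be sketched as follows. Everything here is a hypothetical simplification: the zone names, the ping format and the 60-second dwell threshold are assumptions standing in for whatever a real in-store system would use.

```python
# Hypothetical in-store targeting sketch: an edge device accumulates how
# long an opted-in shopper dwells near each product zone and triggers an
# offer once a threshold is crossed. Zones and threshold are assumptions.

DWELL_THRESHOLD_S = 60  # assumed seconds near a zone before an offer fires

def offers_for(pings, threshold=DWELL_THRESHOLD_S):
    """pings: list of (zone, seconds_observed) tuples from in-store sensors.
    Returns zones whose accumulated dwell time crosses the threshold."""
    dwell = {}
    triggered = []
    for zone, seconds in pings:
        dwell[zone] = dwell.get(zone, 0) + seconds
        if dwell[zone] >= threshold and zone not in triggered:
            triggered.append(zone)  # e.g., push a diaper coupon to the app
    return triggered

pings = [("diapers", 20), ("snacks", 10), ("diapers", 25), ("diapers", 30)]
print(offers_for(pings))  # ['diapers'] -- 75s of dwell crosses 60s
```

Keeping this state on the in-store device means the offer can arrive while the shopper is still standing in the aisle, which is the whole point of doing it at the edge.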
Edge computing can also support distinct product experiences that build loyalty and drive retention. For example, car manufacturers are building edge computing power into cars that can detect when a customer is passing a service center. By crunching data about the car's operational and maintenance history and combining that information with location-based information, they can alert drivers when their car needs service. Edge computing can also detect when certain parts are trending toward failure, then notify the customer or prompt a local service center to contact the customer and schedule maintenance. These approaches often lead to higher customer satisfaction, retention and brand loyalty.
Enabling new and innovative business models
Edge analytics can enable new business models that drive new revenue streams. Heating and air conditioning manufacturers are building edge computing into assets so they can self-analyze sensor data and proactively report status to asset owners and maintenance service providers. For example, edge computing devices can indicate whether the system is operating within expected parameters and to what degree. They can show risk of potential failures, as well as opportunities to operate more efficiently. Manufacturers can offer owners this reporting feature as an optional, value-added (paid) service.
Edge computing can also help ensure continued service and asset operations despite intermittent cloud connections. Think of an offshore drilling rig that loses internet access during a hurricane. With edge computing, it can continue to monitor machinery data and make real-time, corrective actions to keep people and the environment safe.
Similarly, edge computing can transform customer and patient care delivery models. In health care, it can be used to enhance the patient experience as well as the clinician’s productivity and effectiveness. Connected patients can capture their own vitals (such as blood pressure, blood sugar, heart rate and rhythm data) using IoT-enabled phones or watches, and instantly share this data with clinicians through a patient portal. In this way, edge analytics can facilitate continuous patient monitoring, more effective doctor-patient communications, and faster, more accurate clinical decision making and diagnosis.
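The continuous-monitoring idea can be sketched as a local screen on the patient's own device, so only flagged readings need to reach the clinician's portal. The reference ranges below are common textbook ranges used purely as illustrative assumptions, not clinical guidance.

```python
# Illustrative edge-side vitals screen on a patient's device: readings
# are checked locally and only out-of-range vitals are flagged for the
# clinician. Ranges are assumed reference values, not medical advice.

NORMAL_RANGES = {
    "heart_rate_bpm": (60, 100),
    "systolic_mmhg": (90, 120),
    "glucose_mg_dl": (70, 140),
}

def screen_vitals(reading):
    """Return the list of vital signs that fall outside their range."""
    flags = []
    for vital, value in reading.items():
        low, high = NORMAL_RANGES[vital]
        if not (low <= value <= high):
            flags.append(vital)
    return flags

reading = {"heart_rate_bpm": 118, "systolic_mmhg": 115, "glucose_mg_dl": 130}
print(screen_vitals(reading))  # ['heart_rate_bpm'] -> alert the clinician
```

Screening locally keeps routine readings on the device and surfaces only the exceptions, which is what makes continuous monitoring practical for both patient and clinician.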
The outcomes could be happier, healthier and safer customers, patients and workers. Longer asset life. Reduced downtime and environmental impacts. And higher return on assets.
Can companies afford to delay?
Given the cost of not processing IoT data at the edge, we expect adoption to accelerate quickly. The manufacturing and transportation industries have been early adopters. Other industries – such as health care, agriculture, urban governments and retail – are expected to catch up quickly as part of their digital transformation efforts.
Make no mistake: Companies that uncover and automatically act on new insights at the source will gain a competitive advantage they can use to leapfrog their competitors. Viewed from this perspective, late adopters of edge computing and analytics strategies are putting profits and market share at risk.