
No Cloud Needed: When AI Meets the Edge

By Cloud Computing Outlook | Friday, June 07, 2019

FREMONT, CA: Since the advent of AI, one factor has held back its rapid rise: the need for extensive processing power, cloud infrastructure, and data centers to host its large, sophisticated models. This has made it challenging to bring AI solutions to smartphones and edge devices.

However, recent innovations in software, hardware, and energy technologies are allowing AI-powered products and services to break free from their dependence on powerful cloud-computing services. Almost 80 percent of smartphones shipped in 2022 are predicted to include AI capabilities, and over the last three years, investors have poured almost $1.5 billion into AI chip startups.

Based on market research, AI application processors are likely to see a 46 percent compound annual growth rate through 2023, by which point AI is expected to power almost every smartphone. Furthermore, Intel Corp. recently announced the release of its Ice Lake chips, which include Deep Learning Boost technology and additional AI instructions. Arm Ltd. also announced a processor series designed for AI applications in smartphones and other edge devices.

According to an analyst at IHS Markit, every processor developer will announce a competitive AI platform to establish itself in the next generation of technology. AI chips are also being implemented in internet of things (IoT) devices, including robots, cars, drones, cameras, and wearables. Hailo, one of some 75 companies developing machine learning chips, raised $21 million in funding in January; three months later, it unveiled a processor with deep learning capabilities.

New research hints at significant reductions in the size of neural networks, which could result in compact yet capable AI software. Google LLC recently launched TensorFlow Lite, a machine learning library for mobile devices. The library enables smart cameras to identify wildlife and to assist with medical diagnoses. Google also released an on-device speech recognizer to bolster the functionality of Gboard, its virtual keyboard application. The transcription model behind the feature is only about 80 megabytes, small enough to be hosted on the A series chip inside a Pixel phone.
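To make the idea concrete, the sketch below shows one common way of shrinking a model for edge deployment with TensorFlow Lite: converting a network with post-training quantization and then running it through the TFLite interpreter, the same runtime used on phones and other edge devices. The tiny model and random input here are illustrative placeholders, not anything described in the article.

```python
# Minimal sketch: shrink a Keras model with TensorFlow Lite post-training
# quantization and run it with the on-device interpreter.
import numpy as np
import tensorflow as tf

# A small illustrative model standing in for any trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to the compact .tflite format with default post-training
# quantization, which typically stores weights as 8-bit integers
# instead of 32-bit floats, cutting the model size roughly 4x.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Run inference with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy_image = np.random.rand(1, 224, 224, 3).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy_image)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```

On an actual device, the converted .tflite file would be bundled with the app and executed through the platform's TFLite runtime rather than from Python; the conversion and inference steps, however, follow the same pattern.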

However, incorporating AI into mobile devices is no easy task: machine learning algorithms consume a great deal of power, so dedicated hardware architectures are needed to run them efficiently. The successful implementation of machine learning on edge devices will enable a new array of applications across smartphones, smart cameras, and monitoring sensors.
