
What is Edge AI and how is it done?

As explained in an earlier blog, edge AI means running your AI inference as close as possible to the sensor capturing the data. Conventional AI requires you to first transmit the data to a server, where a GPU farm processes the information, runs your trained models, and extracts the inference. Edge AI is a paradigm shift in which the processing happens almost at the sensor itself.

How is Edge AI done?

Edge AI requires additional compute resources alongside the sensor. This compute unit is typically an extra piece of hardware capable of running inference. We would not use this hardware to train or create new models; model training and evaluation will already be complete, and we use the trained model together with this hardware to run inference.
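To make this division of labour concrete, here is a minimal sketch of the edge-inference loop. The `read_sensor` and `tiny_model` functions are hypothetical stand-ins, since the real camera driver and trained network depend entirely on the hardware and framework you choose:

```python
# Minimal sketch of the edge-inference pattern: the model was trained
# elsewhere; the edge device only runs the forward pass on local data.
# `read_sensor` and `tiny_model` are hypothetical stand-ins for a real
# camera driver and a real trained network.

def read_sensor():
    """Stand-in for grabbing one reading/frame from the sensor."""
    return [0.9, 0.1, 0.4]  # e.g. normalised pixel statistics

def tiny_model(features, weights=(0.5, 0.3, 0.2), threshold=0.5):
    """Stand-in for a trained model: weighted sum + threshold."""
    score = sum(w * x for w, x in zip(weights, features))
    return "defect" if score > threshold else "ok"

def run_inference_once():
    features = read_sensor()      # raw data never leaves the device
    return tiny_model(features)   # inference happens at the edge

print(run_inference_once())
```

The key point of the pattern is the second line of `run_inference_once`: only the final label (not the raw sensor data) ever needs to go over the network.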

Compute Resources

There are myriad choices for enabling AI compute at the edge: a computer, a Raspberry Pi, a kit like the Jetson TX2, an FPGA, and so on. You need to make a decision based on cost and application. I have come across medtech companies that ship a laptop along with their high-end cameras; these laptops are more than capable of running analyses on the images to detect issues in a retinal scan. There are also grain scanners and sorters that use high-speed cameras along with FPGAs to separate good grains from bad.

Edge AI has been one of the most interesting areas of development recently. Google, Nvidia, Intel, and others have all released products targeting this space. We will discuss some of these offerings and highlight each of them. In later blogs, I will explain how to get these devices working with examples.

Intel Movidius Stick


Movidius, now owned by Intel, pioneered VPUs (Vision Processing Units). Their chips are designed to process images at very high speed with low power consumption. One of their offerings is the Movidius stick, which plugs into a USB slot on a Raspberry Pi or an Ubuntu PC/laptop. Using Intel's OpenVINO platform, the stick accelerates image processing; on Raspberry Pis, speedups of up to 80 times have been reported. One drawback of these sticks is limited model support: the stick supports a few TensorFlow and Caffe models. It is known to support Inception and MobileNet models but may not work with custom models, so if you plan to use this device, verify that your models will run on it. Another drawback is that it supports only USB 2.0, which limits the speed at which the stick communicates with the host computer.

Nvidia Jetson Nano


Nvidia GPUs have been the standard AI hardware platform of choice. The Jetson Nano, at $99, is their latest offering targeted at small, cheap, and low-power applications. They have other offerings like the Jetson TX2, TX1, and Tegra, which are bigger, more expensive, and capable of a lot more. The Jetson Nano is a good device and supports more models than the Movidius and the Google Coral; its flexibility in compatible models and applications seems to be the best of the three. It is also possible to train models on the Jetson Nano, although I can't think of many use cases for this feature for now. On the downside, it has limited connectivity options, as it supports neither WiFi nor Bluetooth. Connectivity is primarily through Ethernet, so it is appropriate for applications where wired connections suffice.

Google Coral TPU

The TPU, or Tensor Processing Unit, is an ASIC designed by Google for processing neural networks. It is highly optimised for matrix operations, which are the foundation of neural network processing. Google recently released the Google Coral dev board, which brings the power of TPUs to the edge. The Coral boards are designed for quick inference at the edge; they do not support back-propagation and therefore cannot be used for training. The board has WiFi, Bluetooth, and Ethernet, so it has much better connectivity than the Jetson Nano.
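To see why matrix operations are the foundation the TPU is built around, here is a tiny plain-Python illustration (no TPU or framework involved) of a dense neural-network layer. The weights and inputs are made-up numbers for demonstration:

```python
# A dense (fully connected) layer is y = relu(W.x + b): a matrix-vector
# multiply, a bias add, and an activation. The multiply is exactly the
# operation a TPU's systolic array accelerates in hardware.

def matvec(W, x):
    """Multiply matrix W (given as a list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, vi) for vi in v]

def dense_layer(W, b, x):
    return relu([wx + bi for wx, bi in zip(matvec(W, x), b)])

# Made-up 2x2 weights, bias, and input for illustration
W = [[1.0, -2.0],
     [0.5,  1.0]]
b = [0.0, -1.0]
x = [3.0, 1.0]

print(dense_layer(W, b, x))  # matvec gives [1.0, 2.5]; bias + ReLU -> [1.0, 1.5]
```

A real network stacks many such layers, so nearly all of the inference work is matrix multiplication; that is the arithmetic the TPU (and the Edge TPU on the Coral board) is specialised for.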

In this blog we have barely scratched the surface of the myriad options for implementing AI at the edge. In later blogs we will look at how to use these tools and more.

Praveen Pavithran
