from other drivers, the road, and conditions
around vehicles in the Nauto network. Fleets equipped with Nauto
can automatically capture and upload video of significant events and
insights in real time to help fleet managers improve overall driver performance and enhance the safety and efficiency of an entire fleet, according to the company. The platform also uses the VERA (Vision Enhanced Risk Assessment) scoring system, which provides a risk rating based on the frequency and severity of distraction events.
Funds raised in this round will be used to fuel the company’s growth
and the deployment of its retrofit safety and networking system into
more vehicles around the globe, as well as support the expansion of
the Nauto data platform in autonomous vehicle research and development across multiple automakers, the press release noted. The more Nauto units are deployed and the more miles vehicles driving with Nauto accumulate, the more precise the network will become, the company suggests.
“SoftBank and Greylock, along with our key strategic partners, are
turbo-charging Nauto’s ability to make roads safer today and to create
an onramp to autonomy for the near future,” said Nauto founder and
CEO Stefan Heck. “At a time when traffic fatalities are climbing and
distracted driving causes more than half of all crashes, we’re tackling
that problem by putting Nauto’s safety features into more commercial
fleet vehicles — from trucks and vans to buses and passenger cars —
to warn drivers and coach them on how to stay focused.”
“And,” he continued, “in pursuit of the profoundly transformational impact autonomous vehicle technology can have on business and society, we’ll now more rapidly be able to gather the billions more miles of real driving experience and data required to get a precise understanding of how the best drivers behave behind the wheel.”
SoftBank Group Corp. Chairman and CEO Masayoshi Son also commented: “While building an increasingly intelligent telematics business, Nauto is also generating a highly valuable dataset for autonomous driving at massive scale, which will help accelerate the development and adoption of safe, effective self-driving technology.”
Deep learning device from Intel enables artificial
intelligence programming at the edge
Intel (Santa Clara, CA, USA; www.intel.com) released the Movidius
Neural Compute Stick, a deep learning inference kit and self-contained artificial intelligence (AI) accelerator that delivers dedicated
deep neural network processing capabilities to a range of host devices.
The Movidius Neural Compute Stick features the Myriad 2 vision processing unit (VPU), which contains hybrid processing elements including twelve 128-bit VLIW processors and two 32-bit RISC processors. The device supports the Caffe framework and provides a USB 3.0 Type-A interface. Minimum host requirements are an x86_64 computer running Ubuntu 16.04 with 1 GB of RAM and 4 GB of free storage space.
Designed to reduce development, tuning, and deployment barriers, the device delivers deep neural network processing in a small form factor, bringing machine intelligence and AI out of data centers and into end-user devices at the edge.
“The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance – more than 100 gigaflops of performance within a 1W power envelope – to run real-time deep neural networks directly from the device,” said Remi El-Ouazzane, vice president and general manager of Movidius (San Mateo, CA, USA; www.movidius.com), an Intel company. “This enables a wide range of AI applications to be deployed offline.”
With the Movidius Neural Compute Stick, users can do the following:
• Compile: Automatically convert a trained Caffe-based convolutional neural network (CNN) into an embedded neural network optimized to run on the onboard Movidius Myriad 2 VPU (see the sketch after this list).
• Tune: Layer-by-layer performance metrics for both industry-standard and custom-designed neural networks enable effective tuning for optimal real-world performance at ultra-low power. Validation scripts allow developers to compare the accuracy of the optimized model on the device to the original PC-based model.
• Accelerate: Unique to the Movidius Neural Compute Stick, the device can behave as a discrete neural network accelerator by adding dedicated deep learning inference capabilities to existing computing platforms for improved performance and power efficiency.
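As a rough illustration of the compile and tune steps above, the sketch below drives the kit's command-line tools from Python. The tool names (mvNCCompile, mvNCCheck) reflect the NCSDK v1 toolchain, but the exact flags and the file names used here are assumptions for illustration, not verified syntax.

```python
# Sketch: wrapping the (assumed) NCSDK command-line tools from Python.
# mvNCCompile converts a trained Caffe CNN into a Myriad 2 graph file;
# mvNCCheck runs the model on the stick and compares the result against
# the host-side Caffe output. Flags and file names are illustrative.
import subprocess


def compile_caffe_model(prototxt: str, weights: str, out_graph: str = "graph") -> None:
    """Convert a trained Caffe network into a graph file for the Myriad 2 VPU."""
    subprocess.run(
        ["mvNCCompile", prototxt, "-w", weights, "-o", out_graph],
        check=True,
    )


def validate_model(prototxt: str, weights: str) -> None:
    """Compare on-device accuracy with the original PC-based model."""
    subprocess.run(
        ["mvNCCheck", prototxt, "-w", weights],
        check=True,
    )


if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    compile_caffe_model("deploy.prototxt", "model.caffemodel")
    validate_model("deploy.prototxt", "model.caffemodel")
```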
Furthermore, the Neural Compute Stick comes with the Movidius
Neural Compute software development kit, which enables deep learning developers to profile, tune, and deploy CNNs on low-power applications that require real-time processing.
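To give a sense of what deployment looks like in practice, the following is a minimal inference sketch against the SDK's Python module (assumed here to be named mvnc, as in the v1 release of the kit). The device handling calls, the graph file name, and the input shape are illustrative assumptions rather than an official example.

```python
# Minimal inference sketch against the (assumed) NCSDK v1 Python API.
# Opens the first Neural Compute Stick found, loads a precompiled graph
# file, pushes one half-precision input tensor, and reads back the output.
import numpy as np
from mvnc import mvncapi as mvnc  # module name assumed from NCSDK v1

# Find and open the first attached Neural Compute Stick.
devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError("No Movidius Neural Compute Stick found")
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load the graph file produced by the compile step onto the Myriad 2 VPU.
with open("graph", "rb") as f:
    graph = device.AllocateGraph(f.read())

# Run a single inference on a placeholder image (shape is illustrative).
image = np.random.rand(224, 224, 3).astype(np.float16)
graph.LoadTensor(image, "user object")
output, _ = graph.GetResult()
print("Output vector length:", len(output))

# Release device resources.
graph.DeallocateGraph()
device.CloseDevice()
```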