videantis enjoys design wins of deep learning processor for mobile phones, drones and VR/AR

Videantis, a specialist supplier of computer vision and video coding semiconductor solutions, has announced its v-MP6000UDX visual processing architecture and v-CNNDesigner tool. The new processor increases deep learning algorithm performance by up to three orders of magnitude, while maintaining software compatibility with the company's established v-MP4000HDX architecture.

Videantis continues to see strong demand for smart sensing systems that combine deep learning with other computer vision and video processing techniques such as SLAM or structure from motion, wide-angle lens correction, and video compression. The company positions itself as the only supplier that can run all these tasks on a single unified processing architecture. This simplifies SoC design and integration, eases software development, reduces unused dark silicon, and provides additional flexibility to address a wide variety of use cases.

“We’ve quietly been working on our deep learning solution together with a few select customers for quite some time and are now ready to announce this exciting new technology to the broader market,” says Hans-Joachim Stolberg, CEO at videantis. “Efficiently running deep convolutional nets in real time requires new performance levels and careful optimization, which we’ve addressed with both a new processor architecture and a new optimization tool. Compared to other solutions on the market, we took great care to create an architecture that truly processes all layers of CNNs on a single architecture, rather than adding standalone accelerators where performance breaks down on the data transfers in between.”

He also said: “Using some clever design features, the v-MP6000UDX architecture we’re announcing today increases throughput on key neural network implementations by roughly 3 orders of magnitude, while remaining extremely low power and compatible with our v-MP4000HDX architecture. This compatibility ensures a seamless upgrade path for our customers toward adding deep learning capabilities to their systems, without having to rewrite the computer vision software they’ve already developed for our architecture.”

Mike Demler, Senior Analyst at The Linley Group, said, “Embedded vision is enabling a wide range of new applications such as automotive ADAS, autonomous drones, new AR/VR experiences, and self-driving cars. Videantis is providing an architecture that can run all the visual computing tasks that a typical embedded vision system needs, while meeting stringent power, performance, and cost requirements.”

“With its innovative, specialized processors, videantis has long been a pioneer in enabling the proliferation of computer vision into mass-market applications,” said Jeff Bier, founder of the Embedded Vision Alliance. “By enabling the deployment of deep learning as well as conventional computer vision algorithms, processors like the v-MP6000UDX are making the promise of more intelligent devices a reality.”

The v-MP6000UDX processor architecture includes an extended instruction set optimized for running convolutional neural nets, increases the multiply-accumulate throughput eightfold to 64 MACs per core, and extends the maximum core count from a typical 8 to up to 256. Alongside the new architecture, videantis also announced v-CNNDesigner, a new tool that enables easy porting of neural networks that have been designed and trained using frameworks such as TensorFlow or Caffe.
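As a rough sketch of where the raw throughput gain comes from, the figures in the announcement can be combined in a back-of-the-envelope calculation. Note that the 8-MAC-per-core baseline is inferred from the stated eightfold increase, and the clock rate is not given, so this counts only MACs per cycle; it is an illustrative estimate, not a videantis specification.

```python
# Back-of-the-envelope peak MAC throughput, using figures from the announcement.
# The 8-MAC baseline is inferred from the stated "eightfold" increase to 64.

MACS_PER_CORE_OLD = 8     # inferred v-MP4000HDX baseline (64 / 8)
MACS_PER_CORE_NEW = 64    # stated: 64 MACs per core on v-MP6000UDX
TYPICAL_CORES_OLD = 8     # stated: typically 8 cores previously
MAX_CORES_NEW = 256       # stated: up to 256 cores

peak_macs_per_cycle = MACS_PER_CORE_NEW * MAX_CORES_NEW
baseline_macs_per_cycle = MACS_PER_CORE_OLD * TYPICAL_CORES_OLD

print(peak_macs_per_cycle)                          # 16384 MACs per cycle
print(peak_macs_per_cycle // baseline_macs_per_cycle)  # 256x from MAC count alone
```

The 256x gain from MAC count alone falls short of "three orders of magnitude," which suggests the remaining speedup comes from the extended CNN instruction set and other architectural improvements rather than raw parallelism.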

v-CNNDesigner analyzes, optimizes, and parallelizes trained neural networks for efficient processing on the v-MP6000UDX architecture. Using this tool, the task of implementing a neural network is fully automated and it just takes minutes to get CNNs running on the low power videantis processing architecture.
