
Invincea Labs Blog

Machine learning has become an integral part of many of the cloud services we use on a daily basis, such as Google Assistant and Apple's Siri. The neural networks that form the back end of these services typically run on high-performance computing (HPC) nodes equipped with GPU hardware accelerators. Emerging applications such as autonomous cars and drones combine machine learning and computer vision, and demand a high-throughput, low-latency platform while maintaining strict size, weight, and power (SWaP) constraints. These applications in particular are well suited to FPGA hardware accelerators.

Updated for Vivado/PetaLinux 2017.1