ONNX-To-Hardware Design Flow for Adaptive Neural-Network Inference on FPGAs
Manca, Federico; Ratto, Francesco; Palumbo, Francesca
2025-01-01
Abstract
The challenges involved in executing Neural Networks (NNs) at the edge include providing diversity, flexibility, and sustainability. That implies, for instance, supporting evolving applications and algorithms in an energy-efficient way. Hardware (hw) or software accelerators can deliver fast and efficient computation of NNs, while flexibility can be exploited to support long-term adaptivity. Nonetheless, handcrafting an NN for a specific device, although it may lead to an optimal solution, takes time and experience, which is why frameworks for hw accelerators are being developed. This work, starting from a preliminary semi-integrated ONNX-to-hardware toolchain [23], focuses on enabling Approximate Computing (AC) by leveraging the distinctive ability of the original toolchain to favor adaptivity. The goal is to allow lightweight, adaptable NN inference on FPGAs at the edge.


