Design Accelerators by Regulating Transformations

DART (Design Accelerators by Regulating Transformations) automates hardware accelerator design, improving performance and efficiency across cloud computing, IoT, and edge systems. Through regulated transformations over heterogeneous graphs, DART streamlines development from high-level models to optimized hardware.

Why DART?

  • Advanced Automation: Meta-Programming Design-Flow Patterns automate reusable optimizations, boosting productivity and hardware efficiency [1] (see the sketch after this list)
  • Diverse Hardware Targeting: Auto-generation of optimized designs across CPUs, GPUs, and FPGAs from unified high-level descriptions simplifies multi-platform deployments [2]
  • Flexible Neural Network Optimization: Customizable cross-stage optimization strategies enhance deep learning accelerator performance, translating high-level models directly to efficient FPGA implementations [3]
  • Machine Learning Integration: Bayesian optimization reduces design exploration effort, accelerating custom processor and FPGA design [4]
  • Edge Device Optimization: Hardware-aware optimizations automate efficient FPGA configurations for AI deployment in IoT environments [5]
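
To make the first bullet concrete, here is a minimal Python sketch of what a reusable, meta-programmed design-flow pattern could look like: optimization passes are ordinary functions that can be composed into flows and reused across designs. The names (Design, unroll, pipeline, design_flow) are illustrative assumptions, not DART's actual API.

    # Illustrative sketch only: Design, unroll, pipeline and design_flow are
    # hypothetical names, not DART's actual API.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Design:
        """Toy stand-in for a hardware design description."""
        loops: List[str]
        attrs: Dict[str, int] = field(default_factory=dict)

    Transform = Callable[[Design], Design]

    def unroll(factor: int) -> Transform:
        """Pattern: unroll every loop by a fixed factor."""
        def apply(d: Design) -> Design:
            d.attrs.update({f"{loop}.unroll": factor for loop in d.loops})
            return d
        return apply

    def pipeline(ii: int = 1) -> Transform:
        """Pattern: pipeline every loop with the given initiation interval."""
        def apply(d: Design) -> Design:
            d.attrs.update({f"{loop}.II": ii for loop in d.loops})
            return d
        return apply

    def design_flow(*passes: Transform) -> Transform:
        """Meta-programming pattern: compose passes into a reusable flow."""
        def apply(d: Design) -> Design:
            for p in passes:
                d = p(d)
            return d
        return apply

    # The same flow can be reused across different designs and targets.
    fpga_flow = design_flow(unroll(4), pipeline())
    print(fpga_flow(Design(loops=["conv", "gemm"])).attrs)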

Technology

Using heterogeneous graphs, DART's transformation engine supports multiple design languages, including imperative (such as C++), declarative (such as HeteroCL), and dataflow (such as MaxJ) styles, and enables powerful optimization strategies guided by meta-programming [1][2]. The project also applies machine learning techniques, such as Bayesian optimization and predictive modeling [3][4], to explore complex design spaces and speed up the development of hardware tailored to tasks like deep learning. With partners including Microsoft, Intel, Xilinx, and academic institutions worldwide, DART is building a flexible, open-source foundation for more intelligent, efficient, and reusable hardware systems [5].
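
As an illustration of the machine-learning side, the following Python sketch runs a generic Bayesian-optimization loop over a toy two-parameter design space (loop unroll factor and FIFO depth), with an invented cost model standing in for a synthesis or profiling run. It shows the general technique only, not DART's exploration engine.

    # A generic Bayesian-optimization loop over a toy design space; the cost
    # model and parameters below are invented for illustration only.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def estimated_cost(cfg):
        """Toy stand-in for a synthesis or profiling run (lower is better)."""
        unroll, depth = cfg
        return 1000 / unroll + 5 * unroll + 2 * abs(depth - 32)

    # Candidate configurations: (loop unroll factor, FIFO depth).
    candidates = np.array([(u, d) for u in (1, 2, 4, 8, 16)
                           for d in range(8, 65, 8)], dtype=float)

    # Seed the surrogate model with a few random evaluations.
    rng = np.random.default_rng(0)
    X = candidates[rng.choice(len(candidates), size=3, replace=False)]
    y = np.array([estimated_cost(c) for c in X])

    for _ in range(10):
        # Fit a Gaussian-process surrogate to the evaluations so far.
        gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        # Pick the candidate with the highest expected improvement.
        imp = y.min() - mu
        z = imp / (sigma + 1e-9)
        ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
        nxt = candidates[int(np.argmax(ei))]
        X = np.vstack([X, nxt])
        y = np.append(y, estimated_cost(nxt))

    print("best configuration:", X[int(np.argmin(y))], "cost:", y.min())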

DART is built on the idea that hardware design can benefit from the flexibility and structure of software-style transformations. Unlike traditional compilers that act as black boxes, DART’s transformation engine lets developers see, control, and experiment with how designs evolve.
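
A minimal sketch of that idea, assuming a hypothetical engine API (none of these names come from DART): each transformation is applied as an explicit step, a regulating check can reject it, and every intermediate design is recorded so developers can inspect how the design evolved.

    # Minimal sketch assuming a hypothetical engine API (not DART's): each
    # transformation is an explicit step, a regulating check may reject it,
    # and every intermediate design is recorded for inspection.
    from copy import deepcopy

    class TransformEngine:
        def __init__(self, design):
            self.design = design
            self.history = [("initial", deepcopy(design))]

        def apply(self, name, transform, check=lambda d: True):
            """Apply a transformation only if the regulating check accepts it."""
            candidate = transform(deepcopy(self.design))
            if not check(candidate):
                print(f"rejected: {name}")
                return self
            self.design = candidate
            self.history.append((name, deepcopy(candidate)))
            return self

    # A toy dataflow design: {node: list of successor nodes}.
    design = {"load": ["mac"], "mac": ["store"], "store": []}

    def duplicate_mac(d):
        """Toy transformation: split the MAC node to expose parallelism."""
        d["mac0"], d["mac1"] = ["store"], ["store"]
        d["load"] = ["mac0", "mac1"]
        del d["mac"]
        return d

    engine = TransformEngine(design)
    engine.apply("duplicate_mac", duplicate_mac, check=lambda d: len(d) <= 8)

    # Developers can replay exactly how the design evolved.
    for step, snapshot in engine.history:
        print(step, snapshot)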

Read More

[1] J. Vandebon, J.G.F. Coutinho, W. Luk: Meta-Programming Design-Flow Patterns for Automating Reusable Optimisations. HEART 2022

[2] J. Vandebon, J.G.F. Coutinho, W. Luk: Auto-Generating Diverse Heterogeneous Designs. RAW 2024

[3] Z. Que et al.: MetaML: Automating Customizable Cross-Stage Design-Flow for Deep Learning Acceleration. FPL 2023

[4] J.G.F. Coutinho et al.: Exploring Machine Learning Adoption in Customisable Processor Design. ASICON 2023

[5] M. Rognlien et al.: Hardware-Aware Optimizations for Deep Learning Inference on Edge Devices. ARC 2022