Abstract
Practical deployment of convolutional neural networks (CNNs) and cryptographic algorithms on constrained devices is challenging due to their huge computation and memory requirements. Developing separate hardware accelerators for AI and cryptography incurs large area consumption, which is not desirable in many applications. This article proposes a viable solution to this issue by expressing both CNN inference and cryptographic computation as generic-matrix-multiplication (GEMM) operations and mapping them onto the same accelerator for reduced hardware consumption. A novel systolic tensor array (STA) design is proposed to reduce data movement, effectively reducing the number of operand registers by 2×. Two novel techniques, input layer extension and polynomial factorization, are proposed to mitigate the under-utilization issue found in existing STA architectures. Additionally, the tensor processing element (TPE) is fused using DSP units to reduce the look-up table (LUT) and flip-flop (FF) consumption for implementing multipliers. On top of that, a novel memory-efficient factorization technique is proposed to allow computation of polynomial convolution on the same STA. Experimental results show that Cryptensor achieves 21.6% better throughput for a VGG-16 implementation on the XC7Z020 FPGA and up to 8.40× better energy efficiency compared to an existing ResNet-18 implementation on the XC7Z045 FPGA. Cryptensor can also flexibly support multiple security levels of the NTRU scheme with no additional hardware. The proposed hardware unifies the computation of two domains that are critical for IoT applications, greatly reducing the hardware consumption on edge nodes.
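To illustrate the unifying idea in the abstract, the following minimal Python/NumPy sketch (not the paper's implementation; function names `gemm`, `conv2d_as_gemm`, and `poly_mul_as_gemm`, stride-1/no-padding convolution, and the ring x^N − 1 are assumptions for illustration) shows how both a CNN convolution, lowered via im2col, and an NTRU-style polynomial convolution, expressed through a circulant matrix, reduce to the same GEMM kernel that a shared accelerator could implement.

```python
# Minimal sketch: both workloads reduce to one GEMM kernel.
# Assumptions (not from the paper): stride 1, no padding, ring x^N - 1.
import numpy as np

def gemm(a, b):
    # The single kernel a unified accelerator would implement.
    return a @ b

def conv2d_as_gemm(x, w):
    # x: (C, H, W) input, w: (K, C, R, S) weights -> (K, out_h, out_w).
    C, H, W = x.shape
    K, _, R, S = w.shape
    out_h, out_w = H - R + 1, W - S + 1
    # im2col: each column holds one flattened C x R x S input patch.
    cols = np.stack([x[:, i:i + R, j:j + S].ravel()
                     for i in range(out_h) for j in range(out_w)], axis=1)
    return gemm(w.reshape(K, -1), cols).reshape(K, out_h, out_w)

def poly_mul_as_gemm(a, b, q):
    # a(x) * b(x) mod (x^N - 1, q): circulant matrix of a times coefficients of b.
    N = len(a)
    circ = np.stack([np.roll(a, i) for i in range(N)], axis=1)
    return gemm(circ, b) % q
```

Because both routines funnel their arithmetic through the same `gemm` call, a single systolic array servicing that kernel can, in principle, serve CNN layers and polynomial convolutions alike, which is the reuse the article exploits.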
Original language | English |
---|---|
Pages (from-to) | 4735-4748 |
Number of pages | 14 |
Journal | IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems |
Volume | 42 |
Issue number | 12 |
DOIs | |
State | Published - 1 Dec 2023 |
Keywords
- Convolutional neural network (CNN)
- ResNet-18
- VGG-16
- cryptography
- field programmable gate array (FPGA)
- generic-matrix-multiplication (GEMM)
- polynomial convolution
- systolic tensor array (STA)