High-performance computing (HPC) specialist Inspur has announced the release of TF2, a full-stack framework for efficient artificial intelligence computing on field-programmable gate arrays (FPGAs), under the permissive Apache 2.0 licence.

“The deployment of AI applications covers the cloud, the edge, and the mobile end, and has highly diverse requirements. TF2 can greatly improve the efficiency of application deployment across different ends and quickly adapt to the model inference requirements in different scenarios,” says Liu Jun, general manager for artificial intelligence and high-performance computing at Inspur Group. “AI users and developers are welcome to join the TF2 open-source community to jointly accelerate the deployment of AI applications and facilitate the implementation of more AI applications.”

The release covers both halves of the TF2 framework. The first is a model optimisation and conversion tool that compresses, prunes, and quantises network models from common deep-learning frameworks; the second is a runtime engine that converts the optimised model files into files that run on the target FPGA, with improved performance and efficiency – Inspur claims up to 12.8 times the speed of a more general implementation on the same hardware when running the FaceNet model. The project also includes a software-defined reconfigurable chip design architecture, built to support current convolutional neural network (CNN) models while allowing easy porting to other network models, including Transformer and LSTM.
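The announcement does not detail the exact algorithms TF2's optimisation tool uses, but quantisation in this context typically means mapping floating-point weights to low-precision integers so they fit FPGA arithmetic units more efficiently. As a purely illustrative sketch – not TF2's actual code – symmetric post-training int8 quantisation of a weight tensor might look like:

```python
import numpy as np

def quantise_int8(weights):
    """Symmetric per-tensor int8 quantisation: map floats to [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Example: quantise a small random weight matrix and check the round-trip error.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantise_int8(w)
w_hat = dequantise(q, scale)
max_err = np.max(np.abs(w - w_hat))  # bounded by roughly scale / 2
```

The pay-off on an FPGA is that int8 multipliers consume far fewer logic resources than float32 ones, at the cost of a small, bounded approximation error per weight.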

Alongside the release, Inspur has confirmed plans to continue investing in an open-source community around TF2, with automatic model analysis, structural pruning, sparse computing, and other features to follow in the coming months. Inspur has named Kuaishou, Shanghai University, and MGI as early members of the community.

The TF2 framework is available now on GitHub, along with supporting documentation.