Deep learning compilers act as a bridge between frameworks and hardware, allowing code to be developed once and reused across different processors. Recently, SOPHGO open-sourced its self-developed TPU compilation tool, TPU-MLIR (Multi-Level Intermediate Representation). TPU-MLIR is an open-source compiler project for the TPU family of deep learning processors. It provides a complete toolchain that converts pre-trained neural networks from various frameworks into binary files (bmodel) that run efficiently on the TPU, enabling faster inference. This course is driven by hands-on exercises and aims to help you intuitively understand, practice, and master the SOPHON deep learning TPU compiler framework.
The TPU-MLIR project has already been applied to BM1684X, the latest generation of SOPHGO's artificial intelligence processors; together with the processor's high-performance ARM core and the corresponding SDK, it enables rapid deployment of deep learning algorithms. The course covers the basic syntax of MLIR and the implementation details of the compiler's optimization passes, such as graph optimization, INT8 quantization, operator splitting, and address allocation.
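To give a feel for the workflow before the course begins, the sketch below drives the toolchain's command-line utilities from Python. The tool names (model_transform.py, run_calibration.py, model_deploy.py) follow the TPU-MLIR developer manual, but the model, file names, and exact flags here are illustrative assumptions; the options can vary by release, so consult the manual for your version.

```python
# Minimal sketch of the TPU-MLIR conversion flow, driven from Python.
# Tool names come from the TPU-MLIR developer manual; the flags and
# file names below are illustrative assumptions, not a verified recipe.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Convert the framework model (here: a hypothetical ONNX file) to MLIR.
run(["model_transform.py",
     "--model_name", "resnet18",
     "--model_def", "resnet18.onnx",      # hypothetical input model
     "--input_shapes", "[[1,3,224,224]]",
     "--mlir", "resnet18.mlir"])

# 2. Generate an INT8 calibration table from a small sample dataset.
run(["run_calibration.py", "resnet18.mlir",
     "--dataset", "./calib_images",       # hypothetical image folder
     "--input_num", "100",
     "-o", "resnet18_cali_table"])

# 3. Lower to an INT8 bmodel for the BM1684X processor.
run(["model_deploy.py",
     "--mlir", "resnet18.mlir",
     "--quantize", "INT8",
     "--calibration_table", "resnet18_cali_table",
     "--chip", "bm1684x",
     "--model", "resnet18_int8.bmodel"])
```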
Compared to other compilation tools, TPU-MLIR has the following advantages:
1. Simple and convenient
Users can get started quickly by reading the developer manual and the included examples to understand the model conversion process and its principles. TPU-MLIR is built on MLIR, the current mainstream compiler infrastructure, so users can also treat it as a case study in applying MLIR. Since the project ships a complete toolchain, users can finish model conversion directly through the existing interfaces, without adapting each network by hand.
2. Universal
TPU-MLIR currently supports the TFLite and ONNX formats; models in these two formats can be converted directly into bmodels that the TPU can use. What about other formats? Models written in the other mainstream deep learning frameworks can first be converted to ONNX with the conversion tools the ONNX ecosystem provides, and then converted into a bmodel as usual; see the sketch below.
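For example, a PyTorch model can be exported to ONNX with PyTorch's built-in exporter before being handed to TPU-MLIR. A minimal sketch, in which the model choice and file names are placeholders:

```python
# Minimal sketch: export a PyTorch model to ONNX so TPU-MLIR can consume it.
# The model choice and file names are placeholders for illustration.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # NCHW input matching the model

torch.onnx.export(
    model, dummy_input, "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,  # any opset the downstream toolchain accepts
)
```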
3. Precision and efficiency coexist
Some precision loss can occur during model conversion. TPU-MLIR supports INT8 symmetric and asymmetric quantization, which greatly improves performance, and it combines this with SOPHGO's in-house Calibration and Tune technologies to keep model accuracy high. Beyond that, TPU-MLIR applies extensive graph optimization and operator-splitting techniques to make sure the model runs efficiently. The sketch after this paragraph illustrates the two quantization schemes.
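For intuition, the following is textbook INT8 quantization arithmetic, not TPU-MLIR's actual calibration code: symmetric quantization maps the range [-max|x|, max|x|] onto [-127, 127] with a single scale, while asymmetric quantization maps [min, max] onto [0, 255] using a scale plus a zero point.

```python
# Textbook INT8 quantization arithmetic, for intuition only;
# TPU-MLIR's real calibration and tuning are more sophisticated.
import numpy as np

x = np.random.randn(1000).astype(np.float32) * 3.0 + 1.0

# Symmetric: one scale, zero point fixed at 0, range [-127, 127].
scale_sym = np.abs(x).max() / 127.0
q_sym = np.clip(np.round(x / scale_sym), -127, 127).astype(np.int8)
x_sym = q_sym.astype(np.float32) * scale_sym  # dequantized values

# Asymmetric: scale plus zero point, range [0, 255].
scale_asym = (x.max() - x.min()) / 255.0
zero_point = np.round(-x.min() / scale_asym)
q_asym = np.clip(np.round(x / scale_asym) + zero_point, 0, 255).astype(np.uint8)
x_asym = (q_asym.astype(np.float32) - zero_point) * scale_asym

print("symmetric  MSE:", np.mean((x - x_sym) ** 2))
print("asymmetric MSE:", np.mean((x - x_asym) ** 2))
```

For a roughly symmetric distribution the two schemes perform similarly; asymmetric quantization pays off when the data range is strongly skewed, such as post-ReLU activations.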
4. Ultimate cost-effectiveness, building the next generation of deep learning compilers
To run on GPUs, every operator in a neural network model needs a GPU implementation; to adapt to the TPU, every operator needs a TPU version. On top of that, some scenarios require supporting different product models of the same processor, and compiling by hand for each one is very time-consuming. Deep learning compilers aim to solve exactly these problems: TPU-MLIR's suite of automatic optimization tools saves a great deal of manual optimization time, so models developed on RISC-V can be ported to the TPU smoothly and free of charge to obtain the best price/performance ratio.
5. Comprehensive information
The course includes video lessons in Chinese and English, written guides, and code scripts, with abundant video material, detailed application guidance, and clear code. TPU-MLIR stands on the shoulders of the MLIR giant, and all of the project's code is now open source and free for all users.
Code Download Link: https://github.com/sophgo/tpu-mlir
TPU-MLIR Development Reference Manual: https://tpumlir.org/docs/developer_manual/01_introduction.html
The Overall Design Ideas Paper: https://arxiv.org/abs/2210.15016
Video Tutorials: https://space.bilibili.com/1829795304/channel/collectiondetail?sid=734875
This course introduces the hardware circuit design of the CV1812H development board from the "Huashan Pi" series and how to use its peripheral resources. It also provides tutorials on the deep learning hardware-acceleration interfaces, along with some basic deep learning examples.
Huashan Pi (the CV1812H development board) is an open-source ecosystem development board jointly launched by SOPHGO and its ecosystem partners. It provides an open-source development environment based on RISC-V and implements functions for vision and deep learning scenarios. The processor integrates the second-generation self-developed deep learning tensor processing unit (TPU), a self-developed intelligent image-processing engine (Smart ISP), a hardware-level high-security data-protection architecture (Security), a speech-processing engine, and H.264/H.265 intelligent video encoding and decoding. It also comes with a matching multimedia software platform and an IVE hardware-acceleration interface, making deep learning deployment and execution more efficient, fast, and convenient. Mainstream deep learning frameworks such as Caffe, PyTorch, ONNX, MXNet, and TensorFlow (Lite) can be easily ported to the platform. The course has the following features:
1. Rich and complete materials, including the development board's hardware design, SDK usage documents, platform development guides, and sample code scripts.
2. A scientific and reasonable learning path. The course first introduces the development board and basic routines, then delves into the internal system architecture and code to explain the development details, and finally presents practical projects that make full use of the development board and can serve as references for users' own development.
3. Suitable for different audiences. Users who just want to use the development functions quickly will find many code samples for direct use and feature demonstration, which can easily be modified and combined to achieve different functions. Enthusiasts and developers in related industries will also find detailed SDK development guidelines and code-analysis documents that support an in-depth understanding.
4. Long-term maintenance of the course. In the future, we will launch more development courses, to communicate with developers and grow together.
Link to the open-source code for the Huashan Pi development board: https://github.com/sophgo/sophpi-huashan.git
Multimedia, commonly understood as "multi" plus "media," refers to the integration of media forms such as text, sound, images, and video. In recent years there has been a surge in emerging multimedia applications and services, such as 4K ultra-high definition, VR, holographic projection, and 5G live streaming.
Multimedia and Artificial Intelligence
Deep learning builds on multimedia technologies, such as image processing and recognition, and audio processing and speech recognition. This course is based on the BM1684 deep learning processor, which delivers a peak performance of 17.6 TOPS INT8 and 2.2 TFLOPS FP32 and supports 32-channel HD hardware decoding. It demonstrates the two core capabilities of such a processor: computing power plus multimedia processing power.
Key Technologies and Indicators for Intelligent Multimedia
Key technologies include encoding and decoding, image processing, and media communication. Key indicators include the number of decoding channels, frame rate, resolution, the richness of the image-processing interfaces, latency, and protocol support.
This course focuses on three of these areas: image-processing technology, encoding and decoding technology, and media communication technology. Through a combination of theory and practice, students will learn the intelligent-multimedia theory relevant to artificial intelligence and quickly master basic practical methods.
Related GitHub links
sophon_ffmpeg: https://github.com/sophgo/sophon_ffmpeg
sophon_opencv: https://github.com/sophgo/sophon_opencv
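As a practical starting point, the sketch below decodes a video source through the standard OpenCV Python API. The assumption, to be verified against the repository's documentation, is that the sophon_opencv fork preserves this interface while routing decoding through the hardware codec, so the same loop would apply there; the file name is a placeholder.

```python
# Minimal decode loop using the standard OpenCV Python API.
# Assumption: the sophon_opencv fork preserves this interface while
# offloading decoding to the hardware codec; verify against its docs.
import cv2

cap = cv2.VideoCapture("test.mp4")  # placeholder file; an RTSP URL also works
if not cap.isOpened():
    raise RuntimeError("failed to open video source")

frames = 0
while True:
    ok, frame = cap.read()          # returns (success flag, BGR frame)
    if not ok:
        break
    frames += 1
    # ... hand `frame` to preprocessing / inference here ...

cap.release()
print("decoded", frames, "frames")
```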