The courses offered at Sophon Academy Online include: BM16 Development Board Series, CV18 Development Board Series, Computer Vision, Large Language Models, AI Compiler, and Professional Skills Certification. The development board courses primarily cover the deployment and usage of boards such as the Milk-V Duo, Shaolin, Huashan, and SE5. The Computer Vision course covers both the theoretical and practical aspects of multimedia programming, including hands-on segments drawn from the development board courses. The AI Compiler course provides a comprehensive overview of TPU-MLIR, covering theoretical knowledge, environment setup, and programming interfaces. The certification course equips learners with the knowledge required of IT operations engineers. You can choose a suitable course based on your needs.
As a bridge between frameworks and hardware, a deep learning compiler makes it possible to develop code once and reuse it across different computing processors. Recently, SOPHGO also open-sourced its self-developed TPU compiler tool, TPU-MLIR (Multi-Level Intermediate Representation). TPU-MLIR is an open-source TPU compiler for deep learning processors. The project provides a complete toolchain that converts pre-trained neural networks from various frameworks into a binary bmodel file that runs efficiently on the TPU, enabling faster inference. This course is driven by hands-on practice and leads you to intuitively understand, practice, and master the TPU compiler framework for deep learning processors.
At present, the TPU-MLIR project has been applied to BM1684X, the latest generation of deep learning processor developed by SOPHGO. Combined with the processor's high-performance ARM core and the corresponding SDK, it enables rapid deployment of deep learning algorithms. The course covers the basic syntax of MLIR and the implementation details of the compiler's main optimizations, such as graph optimization, INT8 quantization, operator splitting, and address allocation.
TPU-MLIR has several advantages over other compilation tools:
1. Simple and convenient
By reading the development manual and the samples included in the project, users can understand the model conversion process and principles and get started quickly. Moreover, TPU-MLIR is built on MLIR, the current mainstream compiler infrastructure, so users can also learn how MLIR is applied in practice. The project provides a complete toolchain, so users can complete model conversion directly through the existing interfaces without adapting the flow to each individual network.
2. General
At present, TPU-MLIR already supports the TFLite and ONNX formats; models in these two formats can be converted directly into bmodel files usable by the TPU. What about models in other formats? ONNX provides a set of conversion tools that can convert models written in the major deep learning frameworks into ONNX format, from which the conversion to bmodel can proceed.
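For example, a PyTorch model can be exported to ONNX with the standard torch.onnx.export API before being handed to TPU-MLIR. This is only a minimal sketch: the torchvision model and file names below are placeholders, not part of the course material.

    # Illustrative only: export a PyTorch model to ONNX before TPU-MLIR conversion.
    # The model (torchvision ResNet-18) and file names are placeholders.
    import torch
    import torchvision

    model = torchvision.models.resnet18(pretrained=True).eval()
    dummy_input = torch.randn(1, 3, 224, 224)  # batch, channels, height, width

    torch.onnx.export(
        model,
        dummy_input,
        "resnet18.onnx",        # output ONNX file to feed into TPU-MLIR
        input_names=["input"],
        output_names=["output"],
        opset_version=13,       # a commonly supported opset; adjust as needed
    )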
3. Precision and efficiency coexist
During model conversion, some accuracy can be lost. TPU-MLIR supports INT8 symmetric and asymmetric quantization, which greatly improves performance while, combined with SOPHGO's Calibration and Tune techniques, preserving high model accuracy. In addition, TPU-MLIR applies extensive graph optimization and operator-splitting techniques to keep the model running efficiently.
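As a rough illustration of the difference between the two quantization schemes (a sketch only, not TPU-MLIR's exact implementation): symmetric INT8 quantization maps a tensor with a scale alone, while asymmetric quantization also uses a zero point to cover the full value range.

    # Sketch of symmetric vs. asymmetric quantization; assumes x has a nonzero range.
    import numpy as np

    def quantize_symmetric(x):
        # scale chosen so the largest magnitude maps to 127
        scale = np.abs(x).max() / 127.0
        q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
        return q, scale  # dequantize with q * scale

    def quantize_asymmetric(x):
        # full [min, max] range mapped onto [0, 255] via scale and zero point
        scale = (x.max() - x.min()) / 255.0
        zero_point = np.round(-x.min() / scale)
        q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
        return q, scale, zero_point  # dequantize with (q - zero_point) * scale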
4. Ultimate cost-effectiveness: building the next generation of deep learning compilers
To support computation on graphics processors, a GPU version of each operator in a neural network model has to be developed; to adapt to the TPU, a TPU version of each operator is needed as well. In addition, some scenarios require adapting to different models of the same processor family, and compiling by hand each time is very time-consuming. Deep learning compilers are designed to solve these problems. TPU-MLIR's suite of automatic optimization tools saves a great deal of manual optimization time, so that models developed on RISC-V can be ported smoothly to the TPU for the best balance of performance and cost.
5. Complete materials
The course includes Chinese and English video lectures, documentation, code scripts, and other materials: rich video content, detailed application guidance, and clear code scripts. TPU-MLIR is built on the shoulders of the MLIR giant, and all of the project's code is now open source and freely available to all users.
Code Download Link: https://github.com/sophgo/tpu-mlir
TPU-MLIR Development Reference Manual: https://tpumlir.org/docs/developer_manual/01_introduction.html
The Overall Design Ideas Paper: https://arxiv.org/abs/2210.15016
Video Tutorials: https://space.bilibili.com/1829795304/channel/collectiondetail?sid=734875
Course catalog
No. | Course Name | Category | Video | Docs | Code
1.1 | Deep Learning Compiler Basics | TPU_MLIR Basics | √ | √ | √
1.2 | MLIR Basics | TPU_MLIR Basics | √ | √ | √
1.3 | MLIR Basic Structure | TPU_MLIR Basics | √ | √ | √
1.4 | Op Definition in MLIR | TPU_MLIR Basics | √ | √ | √
1.5 | Introduction to TPU_MLIR (1) | TPU_MLIR Basics | √ | √ | √
1.6 | Introduction to TPU_MLIR (2) | TPU_MLIR Basics | √ | √ | √
1.7 | Introduction to TPU_MLIR (3) | TPU_MLIR Basics | √ | √ | √
1.8 | Quantization Overview | TPU_MLIR Basics | √ | √ | √
1.9 | Quantization Derivation | TPU_MLIR Basics | √ | √ | √
1.10 | Quantization Calibration | TPU_MLIR Basics | √ | √ | √
1.11 | Quantization-Aware Training (1) | TPU_MLIR Basics | √ | √ | √
1.12 | Quantization-Aware Training (2) | TPU_MLIR Basics | √ | √ | √
2.1 | Pattern Rewriting | TPU_MLIR in Practice | √ | √ | √
2.2 | Dialect Conversion | TPU_MLIR in Practice | √ | √ | √
2.3 | Front-End Conversion | TPU_MLIR in Practice | √ | √ | √
2.4 | Lowering in TPU_MLIR | TPU_MLIR in Practice | √ | √ | √
2.5 | Adding a New Operator | TPU_MLIR in Practice | √ | √ | √
2.6 | TPU_MLIR Graph Optimization | TPU_MLIR in Practice | √ | √ | √
2.7 | Common TPU_MLIR Operations | TPU_MLIR in Practice | √ | √ | √
2.8 | TPU Principles (1) | TPU_MLIR in Practice | √ | √ | √
2.9 | TPU Principles (2) | TPU_MLIR in Practice | √ | √ | √
2.10 | Back-End Operator Implementation | TPU_MLIR in Practice | √ | √ | √
2.11 | TPU Layer Optimization | TPU_MLIR in Practice | √ | √ | √
2.12 | bmodel Generation | TPU_MLIR in Practice | √ | √ | √
2.13 | To ONNX Format | TPU_MLIR in Practice | √ | √ | √
2.14 | Add a New Operator | TPU_MLIR in Practice | √ | √ | √
2.15 | TPU_MLIR Model Adaptation | TPU_MLIR in Practice | √ | √ | √
2.16 | Fuse Preprocess | TPU_MLIR in Practice | √ | √ | √
2.17 | Accuracy Verification | TPU_MLIR in Practice | √ | √ | √
Deep neural network models can be trained and tested quickly and then deployed by industry to perform real-world tasks effectively. Deploying such systems on small, low-power deep learning edge computing platforms is highly valued by industry. This course takes a practice-driven approach, leading you to intuitively learn, practice, and master the knowledge and technology of deep neural networks.
The SOPHON deep learning micro server SE5 is a high-performance, low-power edge computing product equipped with BM1684, the third-generation TPU processor developed independently by SOPHGO. With INT8 computing power of up to 17.6 TOPS, it supports 32 channels of Full HD video hardware decoding and 2 channels of encoding. This course will quickly guide you through the powerful features of the SE5 server, helping you understand the basics of deep learning and master its basic applications.
Course Features
1. One-stop service
All common problems encountered in SE5 applications can be found here.
• Provides a full-stack solution for deep learning micro servers
• Breaks down the development process step by step, in detail and clearly
• Supports all mainstream frameworks; the products are easy to use
2. Systematic teaching
It covers everything from environment setup, application development, and model conversion to product deployment, and provides an image-based hands-on environment.
• How is the environment built?
• How is the model compiled?
• How is the application developed?
• How are scenarios deployed?
3. Complete materials
The course includes video tutorials, document guides, code scripts, and other comprehensive materials.
• Rich video materials
• Detailed application guidance
• Clear code scripts
Code download link: https://github.com/sophon-ai-algo/examples
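To give a taste of the application-development units, the sketch below loads a compiled bmodel and runs one inference through the SOPHON SAIL Python binding. The bmodel path, input shape, and tensor names are placeholders, and the exact SAIL API may differ between SDK releases, so treat this as an assumption-laden illustration rather than the course's reference code.

    # Hypothetical sketch: run a bmodel on an SE5 with the SOPHON SAIL Python binding.
    # Paths, tensor names, and shapes are placeholders; check the SDK docs for your release.
    import numpy as np
    import sophon.sail as sail

    engine = sail.Engine("resnet18_int8.bmodel", 0, sail.IOMode.SYSIO)  # device id 0
    graph_name = engine.get_graph_names()[0]
    input_name = engine.get_input_names(graph_name)[0]

    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)   # placeholder input
    outputs = engine.process(graph_name, {input_name: dummy})   # dict of output arrays
    print({name: arr.shape for name, arr in outputs.items()})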
4. Free cloud development resources
Apply online for free use of the SE5-16 microserver cloud testing space
• SE5-16 microserver cloud testing space can be used for online development and testing, supporting user data retention and export
• SE5-16 microserver cloud testing space has the same resource performance as the physical machine environment
Cloud platform application link: https://account.sophgo.com/sign_in?service=https://cloud.sophgo.com&locale=zh-CN
Cloud platform usage instructions: https://cloud.sophgo.com/tpu.pdf
This course introduces the hardware circuit design and peripheral resource usage of the Shaolin Pi, and provides tutorials on using the deep learning hardware acceleration interfaces along with some basic deep learning examples.
"Shaolin Pi" is a development platform based on BM1684 with about 20 TOPS computing power. It has good hardware scalability based on the Mini-PCIe interface, a rich ecosystem, and various connectable peripherals.
Course features:
The content materials are rich and complete, including development board hardware design, peripheral interface instructions, development board upgrade process, and sample code scripts.
The learning path is well structured: it starts with an introduction to the development board and its basic usage, deepens understanding of the development details through the internal system architecture and code, and ends with practical projects that make full use of the board and serve as a reference for users' own development.
The course is rich in practical projects and provides many examples of code usage and feature demonstrations; different functions can be implemented simply by modifying and combining the code.
Code download link: https://github.com/sophgo/sophpi-shaolin
Note: The model conversion part can refer to the SE5 development series courses.
There are many types of intelligent robots; the most widely used are wheeled mobile robots, mainly applied to indoor or warehouse patrol, planetary exploration, teaching, scientific research, and civilian transportation. In this course, the intelligent car obtains video through its built-in camera (visual sensor), recognizes the surrounding environment, and achieves autonomous navigation and obstacle avoidance in a small space using sensors such as lidar and an inertial measurement unit (IMU). The course takes a practical approach, guiding you to intuitively learn the Robot Operating System (ROS) and to use the Shaolin Pi development board to build an intelligent-car vision application platform. Through hands-on programming of the intelligent car, you will master the basic knowledge and applications of deep learning.
The Shaolin Pi development board is a high-performance, low-power edge computing product equipped with BM1684, the third-generation TPU processor independently developed by SOPHGO, with INT8 computing power of up to 17.6 TOPS. It supports hardware decoding of 32 channels of Full HD video and encoding of 2 channels. The board has a flexible peripheral configuration, supporting 3 mini-PCIe and 4 USB interfaces as well as DC and Type-C power supply. Depending on the scenario, the board can be configured for the best balance of cost, energy consumption, and functionality. This course will help you quickly master the powerful features of the Shaolin Pi development board. Through this course, you will master the basics of the Robot Operating System (ROS) and deep learning, and understand the basic applications of deep learning.
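To give a flavor of the ROS programming covered in the course, here is a minimal sketch of a node that subscribes to the car's camera topic. The topic name and message type are assumptions and may differ on the actual KT001 setup.

    # Hypothetical sketch of a ROS 1 (rospy) node subscribing to the car's camera feed.
    # The topic name "/camera/image_raw" is an assumption; check the actual robot config.
    import rospy
    from sensor_msgs.msg import Image

    def on_image(msg):
        # msg.data holds the raw pixel bytes; here we only log the frame size
        rospy.loginfo("received frame %dx%d", msg.width, msg.height)

    if __name__ == "__main__":
        rospy.init_node("camera_listener")
        rospy.Subscriber("/camera/image_raw", Image, on_image)
        rospy.spin()  # keep the node alive and processing callbacks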
Course Features
1. One-stop Service
All common issues related to the KT001 intelligent car can be found here.
2. Systematic Teaching
From product introduction to environment building, and then to visual application.
3. Complete Materials
The course includes video tutorials, document guides, code scripts, etc., which are detailed and rich.
Code download link: https://github.com/sophgo/sophon_robot
Course Catalogue
TPU-MLIR is a compiler dedicated to TPU deep learning processors. The project offers a complete toolchain that converts pre-trained neural network models from different deep learning frameworks (PyTorch, ONNX, TFLite, and Caffe) into efficient model files (bmodel/cvimodel) that run on the SOPHON TPU. By quantizing models into bmodel/cvimodel files of different precisions, they are optimized for acceleration on the SOPHON TPU. This enables models for object detection, semantic segmentation, object tracking, and more to be deployed onto the underlying hardware for acceleration.
This course aims to comprehensively and visually demonstrate the usage of the TPU-MLIR compiler through practical demonstrations, enabling a quick understanding of how to convert and quantize various deep learning models and deploy and test them on the SOPHGO TPU. Currently, TPU-MLIR is applied to BM168X and CV18XX, the latest-generation deep learning processors developed by SOPHGO; combined with each processor's high-performance ARM core and the corresponding SDK, it enables rapid deployment of deep learning algorithms.
Advantages of this course in model porting and deployment:
Currently supported frameworks include PyTorch, ONNX, TFLite, and Caffe. Models from other frameworks need to be converted into ONNX models. For guidance on converting network models from other deep learning architectures into ONNX, please refer to the ONNX official website: https://github.com/onnx/tutorials.
Understanding the principles and operational steps of TPU-MLIR through the development manual and related deployment cases allows for model deployment from scratch. Familiarity with Linux commands and with the model compilation and quantization commands is sufficient for hands-on practice.
Model conversion needs to be executed within the Docker environment provided by SOPHGO and primarily involves two steps: using model_transform.py to convert the original model into an MLIR file, and using model_deploy.py to convert the MLIR file into bmodel format. The bmodel is the model file format that can be accelerated on SOPHGO TPU hardware.
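For illustration, a typical two-step conversion might look like the sketch below. The flag names follow the TPU-MLIR Quick Start manual, but the model name, shapes, calibration table, and target chip are placeholders and may need adjusting for your model and SDK version.

    # Illustrative two-step conversion, run inside the SOPHGO Docker environment.
    # File names, shapes, and the calibration table are placeholders.
    import subprocess

    # Step 1: original model (ONNX here) -> MLIR file
    subprocess.run([
        "model_transform.py",
        "--model_name", "resnet18",
        "--model_def", "resnet18.onnx",
        "--input_shapes", "[[1,3,224,224]]",
        "--mlir", "resnet18.mlir",
    ], check=True)

    # Step 2: MLIR file -> INT8 bmodel for a BM1684X target
    subprocess.run([
        "model_deploy.py",
        "--mlir", "resnet18.mlir",
        "--quantize", "INT8",
        "--calibration_table", "resnet18_cali_table",  # produced by calibration beforehand
        "--chip", "bm1684x",
        "--model", "resnet18_int8.bmodel",
    ], check=True)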
Quantized bmodel models can be run on the TPU in PCIe and SoC modes for performance testing.
Rich instructional videos, including detailed theoretical explanations and practical operations, along with ample guidance and standardized code scripts, are open-sourced within the course for all users to learn.
SOPHON-SDK Development Guide | https://doc.sophgo.com/sdk-docs/v23.05.01/docs_latest_release/docs/SOPHONSDK_doc/en/html/index.html
TPU-MLIR Quick Start Manual | https://doc.sophgo.com/sdk-docs/v23.05.01/docs_latest_release/docs/tpu-mlir/quick_start/en/html/index.html
Example model repository | https://github.com/sophon-ai-algo/examples
TPU-MLIR Official Repository | https://github.com/sophgo/tpu-mlir
SOPHON-SAIL Development Manual | https://doc.sophgo.com/sdk-docs/v23.05.01/docs_latest_release/docs/sophon-sail/docs/en/html/
Multimedia, commonly understood as the combination of "multi" and "media," refers to the integration of media forms such as text, sound, images, and videos. In recent years, there has been a surge in emerging multimedia applications and services, such as 4K ultra-high-definition, VR, holographic projection, and 5G live streaming.
Multimedia and Artificial Intelligence
Deep learning builds on multimedia technologies such as image processing and recognition, audio processing and speech recognition, and so on. This course is based on the BM1684 deep learning processor, which has a peak performance of 17.6 TOPS INT8 and 2.2 TFLOPS FP32 and supports 32-channel HD hardware decoding. It demonstrates the two core capabilities of a processor: computing power and multimedia processing power.
Key Technologies and Indicators for Intelligent Multimedia
Key technologies include coding and decoding technology, image processing technology, and media communication technology. Key indicators include the number of decoding channels, frame rate, resolution, level of richness of the image processing interface, latency, and protocol support.
This course will focus on introducing the three aspects of image processing technology, coding and decoding technology, and media communication technology. Through a combination of theory and practice, students will learn about intelligent multimedia related theories for artificial intelligence and quickly master basic practical methods.
Related GitHub links
sophgo_ffmpeg: https://github.com/sophgo/sophon_ffmpeg
sophgo_opencv: https://github.com/sophgo/sophon_opencv
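As a small taste of the image-processing practice, the sketch below decodes a video and resizes each frame using the standard OpenCV Python API. The file name is a placeholder, and it is assumed here that the sophon_opencv fork linked above keeps a compatible cv2 interface.

    # Illustrative sketch: decode a video and resize frames with OpenCV's Python API.
    # "input.mp4" is a placeholder; sophon_opencv is assumed to expose the same cv2 interface.
    import cv2

    cap = cv2.VideoCapture("input.mp4")
    frame_count = 0
    while True:
        ok, frame = cap.read()                 # decode the next frame
        if not ok:
            break                              # end of stream or decode error
        small = cv2.resize(frame, (640, 360))  # simple image-processing step
        frame_count += 1
    cap.release()
    print("decoded", frame_count, "frames")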
This course aims to familiarize learners with SOPHON products, understand their basic usage, and grasp their application scenarios to achieve a preliminary understanding of SOPHON products. The course covers product introductions, SE5 server development environment setup, product deployment, and application examples. Completing all the content of this course enables you to qualify for the 'Junior IT Operations Engineer' certification exam.
Course catalog
Admission requirements/Recommendations
This course is the study course corresponding to the "Junior IT Operations Engineer" certification exam and is designed to provide learners with basic product knowledge and skills. Although the course assumes no programming background, to help learners better grasp the content we recommend the following prerequisites:
Basic Linux operations: Most of the development is done in a Linux environment, and the development involves basic Linux operations, including file management, network configuration, the text editor Vim, and more.
Basic Docker usage: including pulling images, creating containers, running/deleting containers, etc.
Programming languages: The tutorials in this course cover the Python and C++ programming languages, and the SOPHGO toolchain also provides APIs in both languages for developers to call.
Despite the above prerequisites and recommendations, inexperienced learners are welcome to join the course. The course uses a simple, easy-to-understand teaching approach, with examples and exercises that help students gradually acquire programming skills. Inexperienced learners can quickly pick up the prerequisites through Chapter 2, "Common Commands"; those with development experience can skip Chapter 2 and move directly to deployment in Chapters 3 and 4. Developers looking for an extra challenge can also try porting and deploying a new model on the device.