Latest course
Compiler: TPU-MLIR Environment Construction and Use Guide

TPU-MLIR is a compiler dedicated to SOPHGO's TPU processors. The project offers a complete toolchain that converts pre-trained neural network models from various deep learning frameworks (PyTorch, ONNX, TFLite, and Caffe) into efficient model files (bmodel/cvimodel) that run on the SOPHON TPU. By quantizing models into bmodel/cvimodel files of different precisions, the compiler optimizes them for acceleration on the SOPHON TPU. This enables models for object detection, semantic segmentation, object tracking, and related tasks to be deployed onto the underlying hardware for accelerated inference.

This course is mainly divided into three parts:

  1. Building and configuring a local development environment, and understanding the SOPHON SDK, the core theory of the TPU-MLIR compiler, and the relevant acceleration interfaces.
  2. Converting and quantizing example deep learning models from ONNX, TFLite, Caffe, and PyTorch, along with methods for converting models from other deep learning frameworks into the intermediate ONNX format.
  3. Guiding participants through the practical porting of four example algorithms (covering detection, recognition, and tracking): compilation, conversion, quantization, and final deployment onto the SOPHON BM1684X tensor processor for performance testing.

This course aims to demonstrate the usage of the TPU-MLIR compiler comprehensively and intuitively through practical demonstrations, enabling a quick understanding of how to convert and quantize various deep learning models and test their deployment on the SOPHGO TPU. TPU-MLIR is currently applied to BM168X and CV18XX, the latest-generation deep learning processors developed by SOPHGO; combined with the processors' high-performance ARM cores and the corresponding SDK, it enables rapid deployment of deep learning algorithms.

Advantages of this course in model porting and deployment:

1. Supports multiple deep learning frameworks

Currently supported frameworks include PyTorch, ONNX, TFLite, and Caffe. Models from other frameworks need to be converted into ONNX models. For guidance on converting network models from other deep learning architectures into ONNX, please refer to the ONNX official website: https://github.com/onnx/tutorials.

2. User-friendly operation

Understanding the principles and operational steps of TPU-MLIR through the development manual and the related deployment cases allows model deployment from scratch. Familiarity with basic Linux commands and the model compilation and quantization commands is sufficient for hands-on practice.

3. Simplified quantization deployment steps

Model conversion is executed within the Docker environment provided by SOPHGO and primarily involves two steps: using model_transform.py to convert the original model into an MLIR file, and using model_deploy.py to convert the MLIR file into a bmodel. The bmodel is the model file format that can be accelerated on SOPHGO TPU hardware.
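Following the two steps above, a typical invocation inside the Docker environment looks roughly like the sketch below. The model name, file paths, input shape, and quantization mode here are placeholders; consult the TPU-MLIR Quick Start Manual for the full set of flags.

```shell
# Step 1: convert the original ONNX model into a top-level MLIR file
model_transform.py \
    --model_name yolov5s \
    --model_def yolov5s.onnx \
    --input_shapes [[1,3,640,640]] \
    --mlir yolov5s.mlir

# Step 2: lower the MLIR file into a bmodel for the target chip
model_deploy.py \
    --mlir yolov5s.mlir \
    --quantize F16 \
    --chip bm1684x \
    --model yolov5s_f16.bmodel
```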

4. Adaptable to multiple architectures and modes of hardware

Quantized bmodel models can be run on the TPU in both PCIe and SoC modes for performance testing.

5. Comprehensive documentation

Rich instructional videos, including detailed theoretical explanations and practical operations, along with thorough guidance and standardized code scripts, are open-sourced within the course for all users to learn from.

SOPHON-SDK Development Guide https://doc.sophgo.com/sdk-docs/v23.05.01/docs_latest_release/docs/SOPHONSDK_doc/en/html/index.html
TPU-MLIR Quick Start Manual https://doc.sophgo.com/sdk-docs/v23.05.01/docs_latest_release/docs/tpu-mlir/quick_start/en/html/index.html
Example model repository https://github.com/sophon-ai-algo/examples
TPU-MLIR Official Repository https://github.com/sophgo/tpu-mlir
SOPHON-SAIL Development Manual https://doc.sophgo.com/sdk-docs/v23.05.01/docs_latest_release/docs/sophon-sail/docs/en/html/
Compiler Development

As a bridge between frameworks and hardware, a deep learning compiler can achieve the goal of developing code once and reusing it across various processors. Recently, SOPHGO open-sourced its self-developed TPU compiler tool, TPU-MLIR (Multi-Level Intermediate Representation). TPU-MLIR is an open-source TPU compiler for deep learning processors. The project provides a complete toolchain that converts pre-trained neural networks from various frameworks into a binary bmodel file that runs efficiently on the TPU, achieving more efficient inference. Driven by hands-on practice, this course leads you to intuitively understand, practice, and master the TPU compiler framework for intelligent deep learning processors.

At present, the TPU-MLIR project has been applied to BM1684X, the latest-generation deep learning processor developed by SOPHGO. Combined with the processor's high-performance ARM core and the corresponding SDK, it enables rapid deployment of deep learning algorithms. The course covers the basic syntax of MLIR and the implementation details of various optimization operations in the compiler, such as graph optimization, INT8 quantization, operator partitioning, and address allocation.
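For readers new to MLIR, the generic snippet below (not TPU-MLIR-specific; it uses the upstream func and tosa dialects purely for illustration) gives a feel for the textual syntax the course introduces:

```mlir
// One function inside a module; SSA values (%a, %b, %sum) carry
// explicit tensor types, and operations are written in generic form.
module {
  func.func @add(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
    %sum = "tosa.add"(%a, %b) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
    return %sum : tensor<4xf32>
  }
}
```

TPU-MLIR defines its own dialects on top of this infrastructure and lowers models through them toward the bmodel output.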

TPU-MLIR has several advantages over other compilation tools:

1. Simple and convenient

By reading the development manual and the samples included in the project, users can understand the model conversion process and principles and quickly get started. Moreover, TPU-MLIR is built on MLIR, the current mainstream compiler infrastructure, so users can also learn how MLIR is applied through it. Since the project provides a complete toolchain, users can quickly complete model conversion through the existing interfaces without having to adapt to each different network.

2. General

At present, TPU-MLIR supports the TFLite and ONNX formats, and models in these two formats can be directly converted into a bmodel usable by the TPU. What if a model is in neither format? ONNX provides a set of conversion tools that can convert models written in the major deep learning frameworks on the market into the ONNX format, after which the conversion to bmodel can proceed.

3. Precision and efficiency coexist

Accuracy is sometimes lost during model conversion. TPU-MLIR supports INT8 symmetric and asymmetric quantization, which greatly improves performance while, combined with the vendor's Calibration and Tune technology, preserving high model accuracy. In addition, TPU-MLIR applies many graph optimization and operator partitioning techniques to ensure that the model runs efficiently.
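To make the symmetric/asymmetric distinction concrete, here is a minimal, framework-free sketch of the two INT8 schemes. This is illustrative arithmetic only, not TPU-MLIR's actual calibration code, and it assumes the input list contains at least two distinct values.

```python
def quantize_symmetric(values, num_bits=8):
    """Map floats to signed ints in [-127, 127] with one scale; zero point is fixed at 0."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for INT8
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

def quantize_asymmetric(values, num_bits=8):
    """Map floats to unsigned ints in [0, 255] using a scale plus a zero point."""
    qmax = 2 ** num_bits - 1                  # 255 for INT8
    lo, hi = min(values), max(values)
    scale = (hi - lo) / qmax
    zero_point = round(-lo / scale)
    return [round(v / scale) + zero_point for v in values], scale, zero_point

def dequantize(quantized, scale, zero_point=0):
    """Recover approximate float values from the quantized integers."""
    return [(q - zero_point) * scale for q in quantized]
```

Symmetric quantization is cheaper at runtime (no zero-point offset), while asymmetric quantization uses the full integer range for skewed value distributions; calibration chooses the ranges that minimize the accuracy loss.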

4. Ultimate cost-effectiveness: building the next generation of deep learning compilers

To support GPU computation, each operator in a neural network model needs a GPU version; to adapt to the TPU, a TPU version of each operator must be developed as well. In addition, some scenarios require adapting to different models of the same processor line, and compiling manually each time is very time-consuming. Deep learning compilers are designed to solve these problems. TPU-MLIR's suite of automatic optimization tools saves a great deal of manual optimization time, so that models developed on RISC-V can be ported smoothly and freely to the TPU for the best cost-performance.

5. Comprehensive materials

The course includes video lessons in both Chinese and English, documentation, and code scripts: detailed and rich video materials, thorough application guidance, and clear code scripts. TPU-MLIR is built standing on the shoulders of the MLIR giant, and all code of the entire project is now open source and free for all users.

Code Download Link: https://github.com/sophgo/tpu-mlir

TPU-MLIR Development Reference Manual: https://tpumlir.org/docs/developer_manual/01_introduction.html

The Overall Design Ideas Paper: https://arxiv.org/abs/2210.15016

Video Tutorials: https://space.bilibili.com/1829795304/channel/collectiondetail?sid=734875

Course catalog

 

No.    Course                              Category
1.1    Deep Learning Compiler Basics       TPU_MLIR Basics
1.2    MLIR Basics                         TPU_MLIR Basics
1.3    MLIR Basic Structure                TPU_MLIR Basics
1.4    Op Definition in MLIR               TPU_MLIR Basics
1.5    Introduction to TPU_MLIR (1)        TPU_MLIR Basics
1.6    Introduction to TPU_MLIR (2)        TPU_MLIR Basics
1.7    Introduction to TPU_MLIR (3)        TPU_MLIR Basics
1.8    Quantization Overview               TPU_MLIR Basics
1.9    Quantization Derivation             TPU_MLIR Basics
1.10   Quantization Calibration            TPU_MLIR Basics
1.11   Quantization-Aware Training (1)     TPU_MLIR Basics
1.12   Quantization-Aware Training (2)     TPU_MLIR Basics
2.1    Pattern Rewriting                   TPU_MLIR Practice
2.2    Dialect Conversion                  TPU_MLIR Practice
2.3    Front-End Conversion                TPU_MLIR Practice
2.4    Lowering in TPU_MLIR                TPU_MLIR Practice
2.5    Adding a New Operator               TPU_MLIR Practice
2.6    TPU_MLIR Graph Optimization         TPU_MLIR Practice
2.7    Common TPU_MLIR Operations          TPU_MLIR Practice
2.8    TPU Principles (1)                  TPU_MLIR Practice
2.9    TPU Principles (2)                  TPU_MLIR Practice
2.10   Back-End Operator Implementation    TPU_MLIR Practice
2.11   TPU Layer Optimization              TPU_MLIR Practice
2.12   bmodel Generation                   TPU_MLIR Practice
2.13   To ONNX Format                      TPU_MLIR Practice
2.14   Add a New Operator                  TPU_MLIR Practice
2.15   TPU_MLIR Model Adaptation           TPU_MLIR Practice
2.16   Fuse Preprocess                     TPU_MLIR Practice
2.17   Accuracy Verification               TPU_MLIR Practice

(Each course provides video, documentation, and code materials.)
The Concept and Practice of LLM

Welcome to the Large Models course! This course will take you deep into the realm of large models and help you master the skills needed to apply these powerful models. Whether you are interested in the field of deep learning or looking to apply large models in real-world projects, this course will provide you with valuable knowledge and hands-on experience.

Large models are deep learning models with enormous numbers of parameters and complex structures. They perform exceptionally well on large-scale datasets and complex tasks such as image recognition, natural language processing, and speech recognition. The emergence of large models has driven significant changes in the field of deep learning, leading to breakthroughs across many domains.

In this course, you'll learn the fundamental concepts and principles of large models. We'll delve into the foundational theory, development history, and commonly used large models, as well as evolving techniques such as prompting and in-context learning in LLMs (Large Language Models). As the course progresses, we'll move on to practical applications. You'll learn how to deploy widely used large models such as Stable Diffusion and ChatGLM2-6B onto SOPHGO's latest-generation deep learning processor, the SOPHON BM1684X. The SOPHON BM1684X is the fourth-generation tensor processor introduced by SOPHGO specifically for the field of deep learning. It delivers 32 TOPS of computing power, supports 32 channels of HD hardware decoding and 12 channels of HD hardware encoding, and is applicable to scenarios such as deep learning, computer vision, and high-performance computing.

Whether you are inclined toward in-depth academic research on large models or their industrial applications, this course will give you a solid foundation and practical skills. Are you ready to take on the challenge of large models? Let's explore this fascinating field together!


Why choose SOPHON Practical Training


Professional Skills Development

Focus on learning new technologies in demand, grasp the theory alongside practical application, and enhance professional technical skills.

Industry-Standard Tools and Frameworks

Supports mainstream frameworks such as PyTorch, TensorFlow, Caffe, PaddlePaddle, and ONNX, using tools and software that adhere to industry standards.

Online Self-Paced Learning

Learn at your own pace, anytime and anywhere online, with cost-effective and engaging instructor-led training.

SOPHON Technical Competence Certification

SOPHON technical competence certification attests to your learning outcomes in the relevant field, serving as evidence of your improved abilities.

SOPHON.NET Cloud Development Space

Offering cloud spaces for course development, facilitating algorithm testing and development without hardware limitations.

Industry Application Cases

Learn about intelligent accelerated-computing applications in industries such as drones, robotics, autonomous driving, and manufacturing.
