Online Course

Milk-V Duo Development Board Practical Course
Beginner | 0.3h
Large Model Theory and Practice
Advanced | 2.4h
Shaolin Pi Development Board Practical Course
Beginner | 1.6h
RISC-V+TPU Development Board Practical Course
Beginner | 2.2h
SE5 Development Series Course
Beginner | 5.7h

Certification

An authoritative certification system to help advance your career!
Latest Course
Compiler Development

Deep learning compilers act as a bridge between frameworks and hardware, so that code developed once can be reused across different computational processors. SOPHGO has open-sourced its self-developed TPU compilation tool, TPU-MLIR (Multi-Level Intermediate Representation), an open-source compiler project focused on the deep learning processor (TPU). The project provides a complete toolchain that converts pre-trained neural networks from various frameworks into binary files (bmodel) that run efficiently on the TPU, enabling faster inference. The course is driven by hands-on exercises and aims to help learners intuitively understand, practice, and master the SOPHON deep learning processor (TPU) compiler framework.

The TPU-MLIR project has already been applied to BM1684X, the latest-generation artificial intelligence processor developed by SOPHON; together with the processor's high-performance ARM core and the corresponding SDK, it enables rapid deployment of deep learning algorithms. The course covers the basic syntax of MLIR and the implementation details of the compiler's optimization passes, such as graph optimization, INT8 quantization, operator splitting, and address allocation.
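
To make "graph optimization" concrete, the sketch below shows one textbook transformation of this kind: folding a BatchNorm layer into the preceding convolution so that only a single convolution remains at inference time. It is a generic NumPy illustration of the idea, not TPU-MLIR's actual pass.

```python
# Generic illustration of a classic graph optimization (Conv + BatchNorm fusion),
# written with NumPy only; TPU-MLIR implements such passes on MLIR, not like this.
import numpy as np

def fuse_conv_bn(conv_w, conv_b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters into the preceding convolution.

    conv_w: (out_ch, in_ch, kh, kw) weights, conv_b: (out_ch,) bias,
    gamma/beta/mean/var: (out_ch,) BatchNorm parameters.
    """
    scale = gamma / np.sqrt(var + eps)             # per-output-channel scale
    fused_w = conv_w * scale[:, None, None, None]  # scale every output filter
    fused_b = (conv_b - mean) * scale + beta       # shift the bias accordingly
    return fused_w, fused_b
```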

Compared to other compilation tools, TPU-MLIR has the following advantages:

1. Simple and convenient

Users can get started quickly by reading the development manual and the included examples to understand the model conversion flow and the principles behind it. TPU-MLIR is designed on top of MLIR, the current mainstream compiler infrastructure, so users can also use the project to learn how MLIR is applied in practice. A complete toolchain is provided, and users can finish model conversion directly through the existing interfaces without having to adapt each network themselves, as sketched below.
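
As a rough sketch of what completing the conversion through the existing interface looks like, the following Python snippet drives the two conversion tools shipped with the project. The tool names and flags follow the public quick-start guide and may differ between releases; treat it as an illustration and consult the development manual for exact usage.

```python
# Sketch only: drive the TPU-MLIR conversion tools from Python.
# Flag names follow the public quick-start guide; verify them against the
# development manual for the release you are using.
import subprocess

def onnx_to_bmodel(onnx_path, name, input_shapes, chip="bm1684x"):
    # Step 1: translate the framework model into the top-level MLIR form.
    subprocess.run(
        ["model_transform.py",
         "--model_name", name,
         "--model_def", onnx_path,
         "--input_shapes", input_shapes,   # e.g. "[[1,3,224,224]]"
         "--mlir", f"{name}.mlir"],
        check=True)
    # Step 2: lower the MLIR to a bmodel for the target processor.
    subprocess.run(
        ["model_deploy.py",
         "--mlir", f"{name}.mlir",
         "--quantize", "F16",              # or INT8 with a calibration table
         "--chip", chip,
         "--model", f"{name}_f16.bmodel"],
        check=True)

# onnx_to_bmodel("resnet18.onnx", "resnet18", "[[1,3,224,224]]")
```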

2. Universal

TPU-MLIR currently supports the TFLite and ONNX formats; models in these two formats can be converted directly into bmodels that the TPU can run. What if a model is in neither format? The ONNX ecosystem provides conversion tools that turn models written in the mainstream deep learning frameworks into ONNX format, after which the conversion to bmodel proceeds as usual.
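
For example, a PyTorch model can usually be brought into this flow with the standard torch.onnx exporter, as in the minimal sketch below; torchvision's resnet18 is used only as a stand-in model.

```python
# Minimal sketch: export a PyTorch model to ONNX so it can enter the
# TPU-MLIR conversion flow. resnet18 is only a stand-in example model.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)          # example input defining the shape
torch.onnx.export(
    model, dummy, "resnet18.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=13)
```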

3. Precision and efficiency coexist

Precision can be lost during model conversion. TPU-MLIR supports INT8 symmetric and asymmetric quantization, which greatly improves performance, and it combines this with the vendor's own Calibration and Tune technologies to keep model accuracy high. Beyond that, TPU-MLIR applies a large number of graph optimization and operator-splitting techniques to ensure that models run efficiently.
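
The difference between the two INT8 schemes can be illustrated with the usual scale and zero-point formulas. The NumPy sketch below is a generic textbook illustration, not TPU-MLIR's internal implementation.

```python
# Generic illustration of INT8 symmetric vs. asymmetric quantization,
# not TPU-MLIR's internal implementation.
import numpy as np

def quantize_symmetric(x):
    scale = np.abs(x).max() / 127.0                        # zero point fixed at 0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale                                        # dequant: q * scale

def quantize_asymmetric(x):
    scale = (x.max() - x.min()) / 255.0
    zero_point = int(round(-128 - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point                            # dequant: (q - zp) * scale

x = np.random.randn(1000).astype(np.float32)
q, s = quantize_symmetric(x)
print("max abs reconstruction error:", np.abs(q.astype(np.float32) * s - x).max())
```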

4. Achieving ultimate cost-effectiveness and creating the next generation of deep learning compilers

To support graphics computing, every operator in a neural network model needs a graphics version; to adapt to the TPU, every operator likewise needs a TPU version. In addition, some scenarios require adapting to different product models of the same computational processor, and doing each adaptation and compilation by hand is very time-consuming. Deep learning compilers aim to solve these problems. TPU-MLIR's set of automatic optimization tools saves a great deal of manual optimization time, allowing models developed on RISC-V platforms to be ported to the TPU smoothly and at no extra cost, achieving the best ratio of performance to price.

5. Comprehensive information

The course includes video lessons in Chinese and English, written guides, and code scripts, with abundant video material, detailed application guidance, and clear code. TPU-MLIR is built on the shoulders of the MLIR giant, and all of the project's code is now open source and freely available to all users.

Code Download Link: https://github.com/sophgo/tpu-mlir

TPU-MLIR Development Reference Manual: https://tpumlir.org/docs/developer_manual/01_introduction.html

The Overall Design Ideas Paper: https://arxiv.org/abs/2210.15016

Video Tutorials: https://space.bilibili.com/1829795304/channel/collectiondetail?sid=734875

 

RISC-V+TPU Development Board Practical Course

This course introduces the hardware circuit design and peripheral-resource programming of the CV1812H development board from the "Huashan Pi" series, and provides tutorials on using the deep learning hardware-acceleration interfaces together with some basic deep learning examples.

Huashan Pi (the CV1812H development board) is an open-source ecosystem development board jointly launched by SOPHGO and its ecosystem partners. It provides an open-source development environment based on RISC-V and implements functions for vision and deep learning scenarios. The processor integrates the second-generation self-developed deep learning tensor processor (TPU), a self-developed intelligent image processing engine (Smart ISP), a hardware-level high-security data protection architecture (Security), a speech processing engine, and H.264/H.265 intelligent encoding and decoding. It also comes with a matching multimedia software platform and an IVE hardware-acceleration interface, making deep learning deployment and execution more efficient, fast, and convenient. Mainstream deep learning frameworks such as Caffe, PyTorch, ONNX, MXNet, and TensorFlow (Lite) can be easily ported to the platform.

Course Features

1. Rich and complete content materials, including hardware design of the development board, SDK usage documents, platform development guides, and sample code scripts.

2. A well-structured learning path. The course first introduces the development board and basic routines, then goes deeper into the system architecture and code to explain the development details, and finally presents practical projects that make full use of the board and can serve as references for users' own development.

3. Suitable for different audiences. For users who want to start developing quickly, the course provides many code samples and feature demonstrations that can easily be modified and combined to implement different functions. For enthusiasts and developers in related industries, it also provides detailed SDK usage guidelines and code-analysis documents that support a deeper understanding.

4. Long-term maintenance. More development courses will be launched in the future, so that we can keep communicating with developers and grow together.

Course Contents

Link to the open-source code for the Huashan Pi development board: https://github.com/sophgo/sophpi-huashan.git

Intelligent Multimedia and TPU Programming Practical Course

Multimedia, commonly understood as the combination of "multi" and "media," refers to the integration of media forms such as text, sound, images, and videos. In recent years, there has been a surge in emerging multimedia applications and services, such as 4K ultra-high-definition, VR, holographic projection, and 5G live streaming.

Multimedia and Artificial Intelligence

Deep learning builds on multimedia technologies such as image processing and recognition, and audio processing and speech recognition. This course is based on the BM1684 deep learning processor, which delivers peak performance of 17.6 TOPS INT8 and 2.2 TFLOPS FP32 and supports 32-channel HD hardware decoding, demonstrating the two core capabilities of the processor: computing power plus multimedia processing power.

Key Technologies and Indicators for Intelligent Multimedia

Key technologies include encoding and decoding, image processing, and media communication. Key indicators include the number of decoding channels, frame rate, resolution, richness of the image processing interfaces, latency, and protocol support.

This course focuses on three of these aspects: image processing, encoding and decoding, and media communication. Through a combination of theory and practice, students will learn the intelligent-multimedia theory relevant to artificial intelligence and quickly master basic practical methods.
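
As a taste of the practical side, the minimal sketch below uses the standard OpenCV Python API for the decode-and-preprocess loop that such applications are built around. It assumes the sophon_opencv fork linked below keeps this interface while offloading decode to the processor, and the input file name is only a placeholder.

```python
# Minimal decode -> preprocess loop using the standard OpenCV Python API.
# Assumption: the sophon_opencv fork keeps this interface; "input.h264" is a
# placeholder file name for a local test stream.
import cv2

cap = cv2.VideoCapture("input.h264")
frames = 0
while True:
    ok, frame = cap.read()                    # decode the next frame
    if not ok:
        break
    small = cv2.resize(frame, (640, 360))     # typical preprocessing step
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    frames += 1
cap.release()
print("decoded frames:", frames)
```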

Related GitHub links

sophon_ffmpeg: https://github.com/sophgo/sophon_ffmpeg

sophon_opencv: https://github.com/sophgo/sophon_opencv


Why Choose SOPHON Practical Training


Professional Skills Development

Learn the new technologies currently in focus, master both theory and experiments, and improve your professional skills.

Industry-Standard Tools and Frameworks

Supports mainstream frameworks such as PyTorch, TensorFlow, Caffe, PaddlePaddle, and ONNX, using industry-standard tools and software.

Online Self-Paced Learning

Set your own learning pace and study online anytime, anywhere, enjoying training from expert instructors at low cost and in a more engaging way.

SOPHON Technical Competence Certification

The SOPHON technical competence certification shows that you have achieved specific learning outcomes in the relevant fields and serves as proof of your personal development.

SOPHON.NET Cloud Development Space

Provides the cloud development space the courses require, offering convenient cloud resources for algorithm development and testing so that development is no longer constrained by hardware.

Industry Application Cases

Learn intelligent accelerated-computing applications for industries such as drones, robotics, autonomous driving, and manufacturing.
