Introduction

There are many types of intelligent robots; the most widely used are wheeled mobile robots, which serve in indoor and warehouse patrol, planetary exploration, teaching, scientific research, and civilian transportation. In this course, the intelligent car obtains video through its built-in camera (a visual sensor) to recognize the surrounding environment, and achieves autonomous navigation and obstacle avoidance in a small space using sensors such as lidar and an inertial measurement unit (IMU). This course takes a practical approach to guide you through the Robot Operating System (ROS) and the use of the Shaolin Pi development board to build an intelligent car vision application platform. By programming the intelligent car in hands-on exercises, you will master the basics and applications of deep learning.

The Shaolin Pi development board is a high-performance, low-power edge computing product equipped with the third-generation TPU processor BM1684, independently developed by SOPHGO, delivering INT8 computing power of up to 17.6 TOPS. It supports hardware decoding of 32 channels of full-HD video and encoding of 2 channels. The board offers flexible peripheral configuration, with 3 mini-PCIe slots and 4 USB interfaces, and accepts both DC and Type-C power supply. Depending on the scenario, the board can be configured for the best balance of cost, power consumption, and functionality. This course will help you quickly master the powerful features of the Shaolin Pi development board; through it you will learn the basics of the Robot Operating System (ROS) and deep learning, and understand their fundamental applications.

Course Features

1. One-stop Service

All common issues related to the KT001 intelligent car can be found here.

  • Provides a full-stack solution for the KT001 intelligent car.
  • Comprehensively explains the basic concepts and practical applications of ROS.
  • Centers on practical applications, explaining a wide range of computer vision case studies: image processing based on OpenCV, object detection based on YOLOv5, multi-object tracking based on DeepSort, face detection based on RetinaFace, face recognition based on ResNet, and the principles and implementation of action recognition based on TSM (a minimal OpenCV sketch follows this list).
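
To give a flavour of the OpenCV case study, here is a minimal Python sketch of basic image processing. It is only an illustration, not code from the course repository; the file names are placeholders.

    # Minimal OpenCV sketch: load a frame, convert to grayscale, and extract edges.
    # "frame.jpg" is an illustrative placeholder, not a file shipped with the course.
    import cv2

    frame = cv2.imread("frame.jpg")                  # read a BGR image from disk
    if frame is None:
        raise FileNotFoundError("frame.jpg not found")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # single-channel grayscale
    blur = cv2.GaussianBlur(gray, (5, 5), 0)         # smooth to suppress noise
    edges = cv2.Canny(blur, 50, 150)                 # Canny edges (low/high thresholds)

    cv2.imwrite("edges.jpg", edges)                  # save the result for inspection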

2. Systematic Teaching

From product introduction to environment setup to vision applications.

  • What is the composition of the intelligent car?
  • How is the intelligent car assembled?
  • How is the environment built?
  • How is the application developed?

3. Complete Materials

The course includes video tutorials, documentation guides, code scripts, and other detailed and comprehensive materials.

  • Abundant video materials.
  • Detailed application guidance.
  • Clear code scripts.

Code download link: https://github.com/sophgo/sophon_robot

Course Catalogue

Chapters (38 lessons)

1_ Product Introduction
1.1 Overview of the Complete Vehicle
1.2 Shaolin Pi Hardware Schematics
1.3 Emei Pi Hardware Schematics
1.4 Introduction to the Sensor Modules
1.5 KT001 Line Tracking and Traffic Sign Recognition

2_ Environment Setup
2.1 Development Environment Setup
2.2 Core Board (Shaolin Pi) System Installation and Connection
2.3 Core Board (Shaolin Pi) Wireless Access Configuration
2.4 Core Board (Shaolin Pi) Wireless Hotspot Configuration
2.5 Control Board (Emei Pi) Development Environment Setup

3_ Operation Basics
3.1 Common Linux Commands
3.2 Basic Use of Docker
3.3 Basic Use of the VIM Editor

4_ Theoretical Foundations
4.1 Introduction to Basic ROS Concepts
4.2 ROS Architecture Design
4.3 ROS Runtime Basics
4.4 Topic Communication in ROS
4.5 Service Communication in ROS
4.6 Messages in ROS
4.7 Logging in ROS
4.8 Introduction to Intelligent Car Control Theory
4.9 Introduction to the Intelligent Car PID Controller
4.10 Introduction to Intelligent Car Odometry and State Updates
4.11 Introduction to the Intelligent Car Two-Wheel Differential-Drive Kinematic Model

5_ Intelligent Car Basic Functions
5.1 Multi-Machine Communication: Building a Distributed ROS System
5.2 Keyboard Control
5.3 IMU Angle Correction Function Test
5.4 Lidar Following
5.5 Lidar Obstacle Avoidance
5.6 SLAM Mapping and Navigation
5.7 Depth Camera Usage Instructions

6_ Intelligent Car Applications
6.1 Image Processing Application Based on OpenCV
6.2 Multi-Object Tracking Application Based on DeepSort
6.3 Action Recognition Application Based on TSM
6.4 Image-Based Person Identification Application
6.5 Object Detection Application Based on YOLOv5

7_ Q&A: Summary of Frequently Asked Questions
7.1 Guide to Flashing the Intelligent Car Core Board (Shaolin Pi)
7.2 Intelligent Car Battery Level Indicator and Low-Voltage Warning

Objective

After completing this course, students will be able to master the following skills:

  • Setting up the ROS environment and configuring communication between the intelligent car and a virtual machine
  • Familiarity with commonly used ROS development commands and ROS tools
  • Understanding the composition of ROS, its framework logic, and its communication mechanisms (a minimal publisher sketch follows this list)
  • Mastery of the architecture and use of the BM1684 TPU processor platform, as well as setting up and using the cross-compilation environment
  • Implementing neural networks such as human tracking, face detection, face recognition, human body keypoint detection, and action detection on the TPU platform
  • Basic ability to solve concrete problems using deep learning
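
As a concrete illustration of ROS topic communication, here is a minimal rospy publisher. It assumes a ROS 1 environment and a robot that listens on /cmd_vel for geometry_msgs/Twist messages, a common convention for differential-drive robots; it is a hedged sketch, not code from the course repository.

    #!/usr/bin/env python
    # Minimal ROS 1 sketch: publish velocity commands on /cmd_vel at 10 Hz.
    # The topic name and message type are assumptions, not taken from the course code.
    import rospy
    from geometry_msgs.msg import Twist

    def main():
        rospy.init_node("simple_driver")                        # register with the ROS master
        pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)  # advertise the velocity topic
        rate = rospy.Rate(10)                                   # 10 Hz publishing loop

        cmd = Twist()
        cmd.linear.x = 0.1    # drive forward at 0.1 m/s
        cmd.angular.z = 0.0   # no rotation

        while not rospy.is_shutdown():
            pub.publish(cmd)
            rate.sleep()

    if __name__ == "__main__":
        main()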

Course Participants

This course comprehensively and systematically introduces the basics of intelligent car ROS development, the TPU hardware platform and hands-on TPU practice, as well as person identification, action detection, and related topics. To take this course, you need a basic Python programming foundation, basic Linux operation skills, and a general theoretical foundation in robotics.

Course Recommendation


Compiler Development

As a bridge between frameworks and hardware, a deep learning compiler makes it possible to write code once and reuse it across different compute processors. SOPHGO has recently open-sourced its self-developed TPU compiler tool, TPU-MLIR (Multi-Level Intermediate Representation). TPU-MLIR is an open-source compiler for deep learning TPUs. The project provides a complete toolchain that converts pre-trained neural networks from various frameworks into a bmodel binary that runs efficiently on the TPU, enabling faster inference. This course is driven by hands-on practice, leading you to intuitively understand, practice, and master the TPU compiler framework for deep learning processors.
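
As a concrete illustration of this flow, the sketch below drives the two main TPU-MLIR command-line tools from Python. The tool names and flags follow the TPU-MLIR quick-start documentation but may differ between releases, and the YOLOv5s file names are placeholders; treat this as a hedged example and consult the development manual linked below.

    # Hedged sketch of the TPU-MLIR conversion flow, driven via subprocess.
    # Flag names follow the TPU-MLIR quick-start docs and may vary between releases.
    import subprocess

    # Step 1: translate an ONNX model into the top-level MLIR representation.
    subprocess.run([
        "model_transform.py",
        "--model_name", "yolov5s",
        "--model_def", "yolov5s.onnx",       # pre-trained network exported to ONNX
        "--input_shapes", "[[1,3,640,640]]",
        "--mlir", "yolov5s.mlir",
    ], check=True)

    # Step 2: lower the MLIR file to a bmodel that runs on the TPU. F16 is used here;
    # INT8 would additionally need a calibration table from run_calibration.py.
    subprocess.run([
        "model_deploy.py",
        "--mlir", "yolov5s.mlir",
        "--quantize", "F16",
        "--chip", "bm1684x",
        "--model", "yolov5s_f16.bmodel",
    ], check=True)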

At present, the TPU-MLIR project has been applied to BM1684X, the latest generation of deep learning processor developed by SOPHGO. Combined with the processor's high-performance ARM cores and the corresponding SDK, it enables rapid deployment of deep learning algorithms. The course covers the basic syntax of MLIR and the implementation details of the compiler's optimization passes, such as graph optimization, INT8 quantization, operator splitting, and address allocation.

TPU-MLIR has several advantages over other compilation tools:

1. Simple and convenient

By reading the development manual and the samples included in the project, users can understand the model conversion process and principles and quickly get started. TPU-MLIR is built on MLIR, the current mainstream compiler infrastructure, so users can also learn MLIR by working with it. The project provides a complete toolchain, so users can complete model conversion directly through the existing interfaces without adapting the compiler to each network.

2. General

At present, TPU-MLIR supports the TFLite and ONNX formats, and models in these two formats can be converted directly into a bmodel that runs on the TPU. What about models in other formats? ONNX provides conversion tools that can turn models written in today's mainstream deep learning frameworks into the ONNX format, after which the bmodel conversion can proceed as usual (a hedged PyTorch-to-ONNX example is sketched below).
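
The sketch below shows the ONNX route mentioned above using torch.onnx.export from PyTorch. The model (a torchvision ResNet-18) and the input shape are placeholders; the exported file would then be fed to the TPU-MLIR toolchain.

    # Hedged example: export a PyTorch model to ONNX so TPU-MLIR can take over.
    # The model and input shape are placeholders for illustration only.
    import torch
    import torchvision

    model = torchvision.models.resnet18(weights=None)  # any trained PyTorch model would do
    model.eval()

    dummy_input = torch.randn(1, 3, 224, 224)           # one example input fixing the shape

    torch.onnx.export(
        model,
        dummy_input,
        "resnet18.onnx",        # output file later consumed by the TPU-MLIR tools
        input_names=["input"],
        output_names=["output"],
        opset_version=13,
    )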

3. Precision and Efficiency Coexist

Accuracy can be lost during model conversion. TPU-MLIR supports INT8 symmetric and asymmetric quantization, which greatly improves performance while, combined with the original vendor's Calibration and Tune technology, keeping model accuracy high. In addition, TPU-MLIR applies extensive graph optimization and operator splitting techniques to ensure that models run efficiently.
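
To make the quantization terms concrete, the short NumPy sketch below shows the basic arithmetic behind symmetric and asymmetric INT8 quantization of a toy tensor. It only illustrates the math and does not reproduce TPU-MLIR's calibration or tuning.

    # Toy illustration of symmetric vs. asymmetric INT8 quantization arithmetic.
    import numpy as np

    x = np.array([-0.8, -0.1, 0.0, 0.4, 1.2], dtype=np.float32)

    # Symmetric: zero point fixed at 0, scale taken from the largest magnitude.
    scale_sym = np.abs(x).max() / 127.0
    q_sym = np.clip(np.round(x / scale_sym), -127, 127).astype(np.int8)
    x_sym = q_sym.astype(np.float32) * scale_sym                   # dequantized approximation

    # Asymmetric: scale and zero point derived from the full [min, max] range.
    scale_asym = (x.max() - x.min()) / 255.0
    zero_point = np.round(-x.min() / scale_asym) - 128             # map x.min() to -128
    q_asym = np.clip(np.round(x / scale_asym) + zero_point, -128, 127).astype(np.int8)
    x_asym = (q_asym.astype(np.float32) - zero_point) * scale_asym

    print("original   :", x)
    print("symmetric  :", x_sym)
    print("asymmetric :", x_asym)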

4. Ultimate Cost-Performance: Building the Next-Generation Deep Learning Compiler

To support GPU computation, each operator in a neural network model needs a GPU implementation; to adapt to the TPU, a TPU version of each operator must be developed as well. In addition, some scenarios require adapting to different models of the same compute processor, and doing this manually every time is very time-consuming. Deep learning compilers are designed to solve these problems. TPU-MLIR's suite of automatic optimization tools saves a great deal of manual optimization time, so that models developed for RISC-V can be ported smoothly to the TPU for the best cost-performance ratio.

5. Complete Materials

The course includes Chinese and English video tutorials, documentation guides, code scripts, and other detailed and comprehensive materials:

  • Abundant video materials
  • Detailed application guidance
  • Clear code scripts

TPU-MLIR is built on the shoulders of the MLIR giant; all of the project's code is now open source and freely available to every user.

Code Download Link: https://github.com/sophgo/tpu-mlir

TPU-MLIR Development Reference Manual: https://tpumlir.org/docs/developer_manual/01_introduction.html

The Overall Design Ideas Paper: https://arxiv.org/abs/2210.15016

Video Tutorials: https://space.bilibili.com/1829795304/channel/collectiondetail?sid=734875

Course Catalogue

 

No.    Course Name                          Category
1.1    Deep Learning Compiler Basics        TPU_MLIR Basics
1.2    MLIR Basics                          TPU_MLIR Basics
1.3    Basic Structure of MLIR              TPU_MLIR Basics
1.4    Op Definition in MLIR                TPU_MLIR Basics
1.5    Introduction to TPU_MLIR (1)         TPU_MLIR Basics
1.6    Introduction to TPU_MLIR (2)         TPU_MLIR Basics
1.7    Introduction to TPU_MLIR (3)         TPU_MLIR Basics
1.8    Quantization Overview                TPU_MLIR Basics
1.9    Quantization Derivation              TPU_MLIR Basics
1.10   Quantization Calibration             TPU_MLIR Basics
1.11   Quantization-Aware Training (1)      TPU_MLIR Basics
1.12   Quantization-Aware Training (2)      TPU_MLIR Basics
2.1    Pattern Rewriting                    TPU_MLIR in Practice
2.2    Dialect Conversion                   TPU_MLIR in Practice
2.3    Front-End Conversion                 TPU_MLIR in Practice
2.4    Lowering in TPU_MLIR                 TPU_MLIR in Practice
2.5    Adding a New Operator                TPU_MLIR in Practice
2.6    TPU_MLIR Graph Optimization          TPU_MLIR in Practice
2.7    Common TPU_MLIR Operations           TPU_MLIR in Practice
2.8    TPU Principles (1)                   TPU_MLIR in Practice
2.9    TPU Principles (2)                   TPU_MLIR in Practice
2.10   Back-End Operator Implementation     TPU_MLIR in Practice
2.11   TPU Layer Optimization               TPU_MLIR in Practice
2.12   bmodel Generation                    TPU_MLIR in Practice
2.13   To ONNX Format                       TPU_MLIR in Practice
2.14   Add a New Operator                   TPU_MLIR in Practice
2.15   TPU_MLIR Model Adaptation            TPU_MLIR in Practice
2.16   Fuse Preprocess                      TPU_MLIR in Practice
2.17   Accuracy Verification                TPU_MLIR in Practice

Video, documentation, and code materials are provided for each lesson.

Milk-V Duo Development Board Practical Course

This course introduces the hardware circuit design and basic environment setup, and provides simple development examples along with basic deep learning examples.

Milk-V Duo is an ultra-compact embedded development platform based on the CV1800B. It is small yet full-featured: it is equipped with dual cores that can run Linux and an RTOS separately, and it supports a variety of peripherals.

  • Scalability: the Milk-V Duo core board exposes interfaces such as GPIO, I2C, UART, SDIO1, SPI, ADC, and PWM (a hedged GPIO example follows this list).
  • Diverse peripherals: the Milk-V Duo core board can be extended with devices such as LEDs, portable screens, cameras, Wi-Fi modules, and so on.
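
As a hedged illustration of driving a GPIO pin from user space, the sketch below uses the legacy Linux sysfs GPIO interface from Python. Whether a Python interpreter and the sysfs interface are available on a particular Milk-V Duo image depends on the firmware, and the GPIO number 25 is purely hypothetical; consult the board documentation for the real pin mapping.

    # Hedged sketch: blink an LED through the legacy sysfs GPIO interface.
    # The GPIO number and the availability of sysfs GPIO are assumptions.
    import time

    GPIO = "25"                      # hypothetical GPIO number
    BASE = "/sys/class/gpio"

    def write(path, value):
        with open(path, "w") as f:
            f.write(value)

    write(f"{BASE}/export", GPIO)                 # expose the pin to user space
    write(f"{BASE}/gpio{GPIO}/direction", "out")  # configure it as an output

    try:
        for _ in range(10):                       # blink ten times
            write(f"{BASE}/gpio{GPIO}/value", "1")
            time.sleep(0.5)
            write(f"{BASE}/gpio{GPIO}/value", "0")
            time.sleep(0.5)
    finally:
        write(f"{BASE}/unexport", GPIO)           # release the pin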

Course features:

  • The content is rich and complete, including development board hardware design, peripheral interface instructions, basic environment setup methods, and sample code scripts.
  • The learning path is well structured, starting from an introduction to the development board and its basic usage and then moving on to practical projects, making full use of the board and providing a reference for users' own development.
  • The practical projects are plentiful: the course provides many examples of working code and function demonstrations, and different functions can be implemented by simply modifying and combining the code.


SE5 Development Series Course

Deep neural network models can be trained and tested quickly and then deployed by industry to perform real-world tasks effectively. Deploying such systems on compact, low-power deep learning edge computing platforms is highly valued by industry. This course takes a practice-driven approach, leading you to intuitively learn, practice, and master the knowledge and technology of deep neural networks.

The SOPHON deep learning microserver SE5 is a high-performance, low-power edge computing product equipped with the third-generation TPU processor BM1684, developed independently by SOPHGO. With INT8 computing power of up to 17.6 TOPS, it supports 32 channels of full-HD video hardware decoding and 2 channels of encoding. This course will quickly guide you through the powerful features of the SE5 server. Through it, you can understand the basics of deep learning and master its basic applications.

Course Features

1. One-stop service 

All common problems encountered in SE5 applications can be found here.

 • Provides a full-stack solution for deep learning microservers

 • Breaks down the development process step by step, clearly and in detail

 • Supports all mainstream frameworks, making the product easy to use

2. Systematic teaching 

It covers everything from environment setup, application development, and model conversion to product deployment, together with a ready-made hands-on environment image.

• How is the environment built? 

• How is the model compiled? 

• How is the application developed? 

• How are scenarios deployed?

3. Complete materials

The course includes video tutorials, document guides, code scripts, and other comprehensive materials. 

• Rich video materials 

• Detailed application guidance 

• Clear code scripts 

Code download link: https://github.com/sophon-ai-algo/examples

4. Free cloud development resources 

Apply online for free use of the SE5-16 microserver cloud testing space.

• SE5-16 microserver cloud testing space can be used for online development and testing, supporting user data retention and export 

• SE5-16 microserver cloud testing space has the same resource performance as the physical machine environment 

Cloud platform application link: https://account.sophgo.com/sign_in?service=https://cloud.sophgo.com&locale=zh-CN

Cloud platform usage instructions: https://cloud.sophgo.com/tpu.pdf