SOPHON SM5 Mini is a deep learning computing module with powerful computing capability. It is positioned for edge computing scenarios that demand high performance, and supports deep learning analysis of over 16 channels of HD video.
SOPHON SM5 Mini is equipped with the third-generation TPU processor BM1684, independently developed by SOPHGO. It delivers up to 17.6 TOPS of INT8 computing power and can process over 16 channels of HD video simultaneously. With the footprint of a credit card and rich I/O interfaces, it can be easily integrated into edge or embedded devices. The toolchain is complete and easy to use, and the cost of algorithm migration is low.
32-Channel HD Video Hardware Decoding
With 17.6 TOPS of INT8 computing power, and up to 35.2 TOPS with Winograd convolution acceleration, SM5 Mini far surpasses similar products in the industry. Typical power consumption for 16-channel video stream analysis is below 16W.
SM5 Mini supports up to 32 channels of full HD video decoding in H.264/H.265 formats, and can perform face detection or video structuring on over 16 channels of HD video streams.
Supports PCIe slave mode and SoC host mode, as well as FP32 high-precision and INT8 low-precision inference
Supports mainstream deep learning frameworks, including Caffe, TensorFlow, PyTorch, Paddle and MXNet
It is applied in visual computing deep learning scenarios including intelligent public security, parks, retail, power, robots and UAVs.
SOPHON SDK is a one-stop toolkit providing a series of software tools, including the underlying driver environment, compiler and inference deployment tools. This easy-to-use toolkit covers model optimization, efficient runtime support and the other capabilities required for neural network inference, offering an efficient full-stack solution for the development and deployment of deep learning applications. SOPHON SDK minimizes the development cycle and cost of algorithms and software, so users can quickly deploy deep learning algorithms on SOPHGO's deep learning hardware products to build intelligent applications.
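As a rough sketch of the low-cost migration workflow described above: a trained model is compiled offline into a BMODEL for the BM1684, then loaded by the on-device runtime. The commands below are an illustrative assumption based on the SDK's PyTorch compiler (`bmnetp`); the model file, shapes and output paths are hypothetical placeholders, and exact flags may differ by SDK version.

```shell
# Hedged sketch, assuming SOPHON SDK is installed and on PYTHONPATH.
# Compile a (hypothetical) traced PyTorch model into a BMODEL for BM1684.
python3 -m bmnetp \
    --model=resnet50_traced.pt \      # placeholder: your traced model
    --shapes="[1,3,224,224]" \        # placeholder: input tensor shape
    --target=BM1684 \                 # target the SM5 Mini's TPU
    --outdir=./compiled_resnet50      # output directory for compilation.bmodel
```

The resulting `.bmodel` is then deployed on the module through the SDK's runtime libraries; other frameworks use their corresponding compilers in the same pattern.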
Deep learning processor
Deep learning performance: Supports FP32/FP16/BF16/INT8; capable of intelligent analysis of up to 16 channels of high-definition video
RISC-V (SOC host mode): 8-core ARM A53, 2.3GHz main frequency

High-speed data interfaces
PCIE EP interface: PCIe 3.0 x4
PCIE RC interface: PCIe 3.0 x4
Ethernet: Dual Gigabit Ethernet ports

Video decoding and encoding
Video decoding capability: Up to 32 channels of full HD video
Video decoding formats: H.264 and H.265
Maximum decoding resolution: 4K; 8K (semi-real-time)
Picture decoding/encoding performance: 480 pictures/sec @1080p

Low-speed data interfaces: RS485 / RS232 / GPIO / SDIO / PWM / I2C / SPI, etc.

Power consumption
Typical power consumption: <20W
Maximum power consumption: 25W

Heat dissipation mode: SM5-1-M includes a passive heatsink

Dimensions (L x W x H)
56 x 54 x 8mm (without heatsink)
62 x 58 x 33.45mm (SM5-1-M with passive heatsink)