BMNNSDK (SOPHGO Neural Network SDK) is SOPHGO's proprietary deep learning SDK based on the BM deep learning processor. With its powerful tools, you can deploy deep learning applications in the runtime environment and deliver maximum inference throughput and efficiency.
Two device driver modes, PCIe and SoC, are supported, giving developers more deployment choices.
Combined with the deep learning processor independently developed by SOPHGO, it provides maximum inference throughput and a streamlined application deployment environment.
The runtime library provides a programming interface for manipulating the underlying computing resources, so users can conduct in-depth development.
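As an illustration, here is a minimal sketch of requesting a device and managing device memory through a BMLib-style runtime interface. The function names follow the BMLib convention; the header path and exact signatures may vary across SDK versions, so treat them as assumptions.

    #include <vector>
    #include "bmlib_runtime.h"  // BMLib-style runtime header; path is an assumption

    int main() {
        bm_handle_t handle;
        // Request device 0 (a PCIe card or the SoC on-chip device).
        if (bm_dev_request(&handle, 0) != BM_SUCCESS) return -1;

        // Allocate a device memory buffer for the input tensor.
        std::vector<float> host_input(1 * 3 * 224 * 224, 0.f);
        bm_device_mem_t dev_input;
        bm_malloc_device_byte(handle, &dev_input,
                              host_input.size() * sizeof(float));

        // Copy the input from system memory to device memory.
        bm_memcpy_s2d(handle, dev_input, host_input.data());

        // ... launch the compiled network here via BMRuntime ...

        bm_free_device(handle, dev_input);
        bm_dev_free(handle);
        return 0;
    }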
The runtime library provides concurrent processing capabilities and supports multi-process and multi-thread modes.
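A hedged sketch of the multi-thread mode follows: each thread drives its own inference stream. run_inference is a hypothetical helper standing in for an application's BMRuntime calls, not an SDK function.

    #include <cstdio>
    #include <thread>
    #include <vector>

    // Hypothetical per-thread worker; in a real application this would
    // wrap BMRuntime calls that launch the compiled network.
    void run_inference(int thread_id) {
        std::printf("thread %d: run the network here\n", thread_id);
    }

    int main() {
        std::vector<std::thread> workers;
        for (int i = 0; i < 4; ++i)  // four concurrent inference threads
            workers.emplace_back(run_inference, i);
        for (auto& t : workers) t.join();
        return 0;
    }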
BMNNSDK supports two kinds of compilation. For layers that the TPU supports, you can use BMNet to compile and deploy directly. For layers that the TPU does not yet support, you can extend the compiler through the BMNet programming interface, using the BMKernel programming interface or RISC-V instructions to add custom network layers, which enables users to compile non-public networks.
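The sketch below is purely illustrative of this extension flow: CustomLayerCtx, my_swish_forward, and register_custom_layer are hypothetical placeholders, not the actual BMNet or BMKernel API; consult the BMNet and BMKernel programming guides for the real extension interface.

    #include <cstdio>

    // Hypothetical context describing the custom layer being compiled.
    struct CustomLayerCtx { /* tensor shapes, layer parameters, ... */ };

    // Hypothetical callback that would implement the layer with BMKernel
    // primitives (or RISC-V code) when the TPU lacks built-in support.
    void my_swish_forward(CustomLayerCtx* ctx) {
        (void)ctx;
        std::printf("emit BMKernel ops for the custom layer here\n");
    }

    int main() {
        // Hypothetical registration so the compiler can handle networks
        // containing this layer:
        // register_custom_layer("Swish", my_swish_forward);
        return 0;
    }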
We provide developers with a Docker image for development that integrates the tools and libraries required by BMNNSDK; developers can use it to develop deep learning applications.
Once integrated, the compiled network and the deep learning application can be deployed through BMRuntime. During deployment, you can program against the BMNet inference engine API.
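A minimal deployment sketch, assuming a BMRuntime-style C interface as found in SOPHGO's runtime libraries; the header name and exact signatures may differ between SDK releases, so treat them as assumptions.

    #include <cstdlib>
    #include "bmruntime_interface.h"  // BMRuntime-style header; name is an assumption

    int main() {
        bm_handle_t handle;
        bm_dev_request(&handle, 0);       // open device 0

        // Create a runtime context and load the compiled network (bmodel).
        void* p_bmrt = bmrt_create(handle);
        bmrt_load_bmodel(p_bmrt, "net.bmodel");

        // Query the network that was compiled by BMNet.
        const char** net_names = nullptr;
        bmrt_get_network_names(p_bmrt, &net_names);
        const bm_net_info_t* info = bmrt_get_network_info(p_bmrt, net_names[0]);
        (void)info;  // shapes and dtypes of inputs/outputs live here

        // ... prepare bm_tensor_t inputs/outputs and launch the network ...

        std::free(net_names);             // caller frees the name array
        bmrt_destroy(p_bmrt);
        bm_dev_free(handle);
        return 0;
    }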