SOPHON-DEMO Introduction

1. SOPHON-DEMO Introduction

SOPHON-DEMO is developed on top of the SOPHONSDK interface and provides a series of porting examples for mainstream algorithms, including model compilation and quantization with TPU-NNTC and TPU-MLIR, inference-engine porting based on BMRuntime, and pre-processing and post-processing porting based on BMCV/OpenCV.
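For orientation, the examples can be fetched from the project's public repository. The repository URL below is assumed from the project name and should be verified against Sophgo's official channels:

```shell
# Fetch the SOPHON-DEMO examples (repository URL assumed; verify on
# Sophgo's official GitHub before relying on it)
git clone https://github.com/sophgo/sophon-demo.git
cd sophon-demo
ls   # the tutorial, sample, and application modules live at the top level
```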

SOPHONSDK is a deep learning SDK built by Sophgo Technologies for its self-developed deep learning processors. It covers the capabilities required for the neural network inference stage, such as model optimization and an efficient runtime, providing an easy-to-use, efficient full-stack solution for developing and deploying deep learning applications. It is currently compatible with BM1684/BM1684X/BM1688 (CV186X). Related terms are explained below:

| Term | Description |
|---|---|
| BM1688/CV186AH | Sophgo's fifth-generation tensor processor for the deep learning field |
| BM1684X | Sophgo's fourth-generation tensor processor for the deep learning field |
| BM1684 | Sophgo's third-generation tensor processor for the deep learning field |
| Intelligent Vision Deep Learning Processor | Neural network computing unit in BM1688/CV186AH and BM1684/BM1684X |
| VPU | Video encoding and decoding unit in BM1688/CV186AH and BM1684/BM1684X |
| VPP | Graphics operation acceleration unit in BM1684/BM1684X |
| VPSS | Video processing subsystem in BM1688/CV186AH, comprising the graphics operation acceleration unit and the decoding unit; also called VPP |
| JPU | JPEG image encoding and decoding unit in BM1688/CV186AH and BM1684/BM1684X |
| SOPHONSDK | Sophgo's original deep learning development toolkit for BM1688/CV186AH and BM1684/BM1684X |
| PCIe Mode | A working mode in which BM1688/CV186AH or BM1684/BM1684X serves as a PCIe acceleration device |
| SoC Mode | A working mode in which BM1688/CV186AH or BM1684/BM1684X runs independently as the host; customer algorithms run directly on it |
| arm_pcie Mode | A working mode of BM1684/BM1684X in which a board carrying the processor is inserted into an ARM server as a PCIe slave device; customer algorithms run on the ARM host |
| BMCompiler | Optimizing compiler for deep neural networks targeting the intelligent vision deep learning processors; converts networks from various deep learning frameworks into instruction streams that run on the processor |
| BMRuntime | Inference interface library for the intelligent vision deep learning processors |
| BMCV | Hardware-accelerated graphics operation interface library |
| BMLib | A low-level library wrapping the kernel driver; provides device management, memory management, data transfer, API dispatch, A53 enable, and power control |
| mlir | Intermediate model format generated by TPU-MLIR, used for model migration or quantization |
| BModel | Model file format for deep neural networks on the intelligent vision deep learning processors, containing the target network's weights, instruction streams, etc. |
| BMLang | High-level programming model for the intelligent vision deep learning processors; developers need no knowledge of the underlying hardware |
| TPUKernel | Development library based on the atomic operations of the intelligent vision deep learning processors (a set of interfaces wrapping the BM1688/CV186AH and BM1684/BM1684X instruction sets) |
| SAIL | SOPHON Inference library with Python/C++ interfaces; a further encapsulation of BMCV, sophon-media, BMLib, BMRuntime, etc. |
| TPU-MLIR | Compiler project for the intelligent vision deep learning processors; converts pre-trained neural networks from different frameworks into BModels that run efficiently on Sophgo's processors |

1.1 BModel

BModel: A deep neural network model file format for Sophgo's intelligent vision deep learning processors, containing target network weights, instruction streams, etc.

Stage: Models of the same network compiled with different batch sizes can be combined into one BModel; each input batch size corresponds to a different stage, and at inference time BMRuntime automatically selects the stage matching the input shape. Different networks can also be combined into one BModel and addressed by network name.

Dynamic and Static Compilation: Models can be compiled dynamically or statically, selected via parameters during model conversion. A dynamically compiled BModel accepts, at runtime, any input shape smaller than the shape set at compile time; a statically compiled BModel accepts only the exact shape set at compile time.

Remarks

Prefer statically compiled models: a dynamically compiled model requires the ARM9 microcontroller inside the BM168X to generate instruction streams for the intelligent vision deep learning processor in real time, based on the actual input shape. Dynamically compiled models therefore execute less efficiently than statically compiled ones. When possible, use a statically compiled model, or a statically compiled model that supports multiple input shapes.
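As a sketch of how the two compilation modes are chosen, TPU-MLIR exposes this at deploy time through a flag. The model name, shapes, and quantization settings below are placeholder values, and the flag spellings should be checked against your installed TPU-MLIR version:

```shell
# Convert the framework model to MLIR (placeholder model and shapes)
model_transform.py \
    --model_name yolov5s \
    --model_def yolov5s.onnx \
    --input_shapes [[1,3,640,640]] \
    --mlir yolov5s.mlir

# Static BModel: only the compiled shape is accepted at runtime
model_deploy.py --mlir yolov5s.mlir --quantize F16 \
    --chip bm1684x --model yolov5s_static.bmodel

# Dynamic BModel: add --dynamic; shapes smaller than the compiled
# shape are accepted, at some cost in execution efficiency
model_deploy.py --mlir yolov5s.mlir --quantize F16 \
    --chip bm1684x --dynamic --model yolov5s_dynamic.bmodel
```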

1.2 bm_image

BMCV: BMCV provides a set of machine vision libraries optimized for SOPHON deep learning processors. Using the processor's tensor computing unit and VPP modules, it performs operations such as color space conversion, scaling, affine transformation, projective transformation, linear transformation, box drawing, JPEG encoding/decoding, BASE64 encoding/decoding, NMS, sorting, and feature matching.

bm_image: BMCV APIs are all centered on bm_image, where one bm_image object corresponds to one image. Users construct a bm_image with bm_image_create, pass it to the various BMCV functions, and must call bm_image_destroy to release it after use.

BMImage: In the SAIL library, bm_image is encapsulated as BMImage. For related information, refer to the SOPHON-SAIL User Manual.

The following are the bm_image struct and related data format definitions:

typedef enum bm_image_format_ext_{
    FORMAT_YUV420P,
    FORMAT_YUV422P,
    FORMAT_YUV444P,
    FORMAT_NV12,
    FORMAT_NV21,
    FORMAT_NV16,
    FORMAT_NV61,
    FORMAT_RGB_PLANAR,
    FORMAT_BGR_PLANAR,
    FORMAT_RGB_PACKED,
    FORMAT_BGR_PACKED,
    FORMAT_RGBP_SEPARATE,
    FORMAT_BGRP_SEPARATE,
    FORMAT_GRAY,
    FORMAT_COMPRESSED
} bm_image_format_ext;

typedef enum bm_image_data_format_ext_{
    DATA_TYPE_EXT_FLOAT32,
    DATA_TYPE_EXT_1N_BYTE,
    DATA_TYPE_EXT_4N_BYTE,
    DATA_TYPE_EXT_1N_BYTE_SIGNED,
    DATA_TYPE_EXT_4N_BYTE_SIGNED,
} bm_image_data_format_ext;

// bm_image struct definition is as follows
struct bm_image {
    int width;
    int height;
    bm_image_format_ext image_format;
    bm_image_data_format_ext data_type;
    bm_image_private* image_private;
};

2. Directory Structure and Description

The examples provided by SOPHON-DEMO are divided into three modules, ordered from easy to difficult: tutorial, sample, and application:

Warning

  • The tutorial module contains examples of basic interface usage;
  • The sample module contains some classic algorithm serial examples on SOPHONSDK;
  • The application module contains some typical applications for typical scenarios.
| Module | Link |
|---|---|
| tutorial | LINK1 |
| sample | LINK2 |
| application | LINK3 |

3. Version Description

| Version | Description |
|---|---|
| 0.2.1 | Improved and fixed documentation and code issues; supplemented CV186X support for some examples; adapted YOLOv5 to SG2042; added GroundingDINO and Qwen1_5 to the sample module; StableDiffusionV1_5 now supports multiple resolutions; Qwen, Llama2, and ChatGLM3 added web and multi-session modes; added blend and stitch examples to the tutorial module |
| 0.2.0 | Improved and fixed documentation and code issues; added the application and tutorial modules; added ChatGLM3 and Qwen examples; SAM added a web UI; adapted BERT, ByteTrack, and C3D to BM1688; renamed YOLOv8 to YOLOv8_det and added a C++ post-processing acceleration method; optimized auto_test for common examples; updated the TPU-MLIR installation method to pip |
| 0.1.10 | Fixed documentation and code issues; added ppYoloe, YOLOv8_seg, StableDiffusionV1.5, and SAM examples; refactored yolact; adapted CenterNet, YOLOX, and YOLOv8 to BM1688; supplemented BM1688 performance data for YOLOv5, ResNet, PP-OCR, and DeepSORT; WeNet provides a C++ cross-compilation method |
| 0.1.9 | Fixed documentation and code issues; added segformer, YOLOv7, and Llama2 examples; refactored YOLOv34; YOLOv5, ResNet, PP-OCR, DeepSORT, LPRNet, RetinaFace, YOLOv34, and WeNet adapted to BM1688; accelerated OpenPose post-processing; chatglm2 added a compilation method and int8/int4 quantization |
| 0.1.8 | Improved and fixed documentation and code issues; added BERT, ppYOLOv3, and ChatGLM2; refactored YOLOX; PP-OCR added beam search; OpenPose added tpu-kernel post-processing acceleration; updated the SFTP download method |
| 0.1.7 | Fixed documentation issues; some examples support BM1684 mlir; refactored the PP-OCR and CenterNet examples; YOLOv5 added sail support |
| 0.1.6 | Fixed documentation issues; added ByteTrack, YOLOv5_opt, and WeNet examples |
| 0.1.5 | Fixed documentation issues; added the DeepSORT example; refactored the ResNet and LPRNet examples |
| 0.1.4 | Fixed documentation issues; added C3D and YOLOv8 examples |
| 0.1.3 | Added the OpenPose example; refactored the YOLOv5 examples (adapted arm PCIe, support for TPU-MLIR-compiled BM1684X models, ffmpeg replacing opencv for decoding, etc.) |
| 0.1.2 | Fixed documentation issues; refactored the SSD examples; LPRNet/cpp/lprnet_bmcv uses ffmpeg instead of opencv for decoding |
| 0.1.1 | Fixed documentation issues; refactored LPRNet/cpp/lprnet_bmcv using the BMNN classes |
| 0.1.0 | Provided 10 examples including LPRNet; adapted to BM1684X (x86 PCIe, SoC) and BM1684 (x86 PCIe, SoC) |

4. Environment Dependencies

SOPHON-DEMO mainly depends on TPU-MLIR, TPU-NNTC, LIBSOPHON, SOPHON-FFMPEG, SOPHON-OPENCV, SOPHON-SAIL, with the following version requirements:

| SOPHON-DEMO | TPU-MLIR | TPU-NNTC | LIBSOPHON | SOPHON-FFMPEG | SOPHON-OPENCV | SOPHON-SAIL | Release Date |
|---|---|---|---|---|---|---|---|
| 0.2.0 | >=1.6 | >=3.1.7 | >=0.5.0 | >=0.7.3 | >=0.7.3 | >=3.7.0 | >=23.10.01 |
| 0.1.10 | >=1.2.2 | >=3.1.7 | >=0.4.6 | >=0.6.0 | >=0.6.0 | >=3.7.0 | >=23.07.01 |
| 0.1.9 | >=1.2.2 | >=3.1.7 | >=0.4.6 | >=0.6.0 | >=0.6.0 | >=3.7.0 | >=23.07.01 |
| 0.1.8 | >=1.2.2 | >=3.1.7 | >=0.4.6 | >=0.6.0 | >=0.6.0 | >=3.6.0 | >=23.07.01 |
| 0.1.7 | >=1.2.2 | >=3.1.7 | >=0.4.6 | >=0.6.0 | >=0.6.0 | >=3.6.0 | >=23.07.01 |
| 0.1.6 | >=0.9.9 | >=3.1.7 | >=0.4.6 | >=0.6.0 | >=0.6.0 | >=3.4.0 | >=23.05.01 |
| 0.1.5 | >=0.9.9 | >=3.1.7 | >=0.4.6 | >=0.6.0 | >=0.6.0 | >=3.4.0 | >=23.03.01 |
| 0.1.4 | >=0.7.1 | >=3.1.5 | >=0.4.4 | >=0.5.1 | >=0.5.1 | >=3.3.0 | >=22.12.01 |
| 0.1.3 | >=0.7.1 | >=3.1.5 | >=0.4.4 | >=0.5.1 | >=0.5.1 | >=3.3.0 | - |
| 0.1.2 | Not supported | >=3.1.4 | >=0.4.3 | >=0.5.0 | >=0.5.0 | >=3.2.0 | - |
| 0.1.1 | Not supported | >=3.1.3 | >=0.4.2 | >=0.4.0 | >=0.4.0 | >=3.1.0 | - |
| 0.1.0 | Not supported | >=3.1.3 | >=0.3.0 | >=0.2.4 | >=0.2.4 | >=3.1.0 | - |

Warning

  1. Version requirements may differ between examples; see each example's README for details. Additional third-party libraries may also need to be installed.

  2. BM1688/CV186X and BM1684X/BM1684 use different SDKs. For SDK versions not yet published on the official website, please contact technical support to obtain them.

5. Technical Information

Tips

Please obtain related documents, materials, and video tutorials through the Sophgo official website Technical Information.

6. Community

Tips

The Sophgo community encourages developers to communicate and learn together; you can reach the community through the following channels.

Sophgo Community website: https://www.sophgo.com/

Sophgo Developer Forum: https://developer.sophgo.com/forum/index.html

