Run Official YOLOv5 Example

3.1 Get Official Precompiled Demo

Download Official Example Code

Get from GitHub

# Clone official RKNN example repository
git clone https://github.com/rockchip-linux/rknn_model_zoo.git
cd rknn_model_zoo

# View directory structure
ls -la

Directory Structure Description

rknn_model_zoo/
├── models/                    # Pretrained models
│   └── CV/                   # Computer Vision models
│       └── object_detection/ # Object Detection models
│           └── yolo/         # YOLO series models
│               └── yolov5/   # YOLOv5 model
├── examples/                 # Example code
│   └── yolov5/              # YOLOv5 example
│       ├── python/          # Python version
│       └── cpp/             # C++ version
└── docs/                    # Documentation

Download Precompiled Model

# Enter YOLOv5 model directory
cd models/CV/object_detection/yolo/yolov5

# Download precompiled RKNN model
wget https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5s.rknn

# Or download from Baidu Netdisk (if GitHub download is slow)
# Link: https://pan.baidu.com/s/1XXX
# Extraction code: XXXX

Get Test Data

Download Test Images

# Create test data directory
mkdir -p test_data/images

# Download COCO test images
cd test_data/images
wget https://github.com/ultralytics/yolov5/raw/master/data/images/bus.jpg
wget https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg

# Or use your own test images
# cp /path/to/your/image.jpg ./

Prepare Video Test File (Optional)

# Download test video
wget https://sample-videos.com/zip/10/mp4/SampleVideo_1280x720_1mb.mp4

# Or use camera (will be used later)
# Ensure camera device is available
ls /dev/video*
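
If you plan to test with a camera later, a short probe confirms that OpenCV can actually grab frames before running the full demo. A minimal sketch (the filename and camera index 0 are assumptions; adjust to your device):

#!/usr/bin/env python3
# camera_probe.py - minimal check that OpenCV can grab a frame

import cv2

cap = cv2.VideoCapture(0)  # index 0 usually maps to /dev/video0
if not cap.isOpened():
    print("Cannot open camera 0")
else:
    ret, frame = cap.read()
    if ret:
        print(f"Got frame: {frame.shape[1]}x{frame.shape[0]}")
    else:
        print("Camera opened but returned no frame")
    cap.release()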

Verify Downloaded Files

Check Model File

# Check RKNN model file
ls -lh yolov5s.rknn
file yolov5s.rknn

# View model information (requires RKNN tool)
python3 -c "
from rknnlite.api import RKNNLite
rknn = RKNNLite()
ret = rknn.load_rknn('yolov5s.rknn')
if ret == 0:
    print('Model loaded successfully')
    rknn.init_runtime()
    print('Runtime initialized successfully')
else:
    print('Model loading failed')
"

3.2 Run YOLOv5 Demo on Board

Prepare Board Environment

Transfer Files to Development Board

# Use scp to transfer files
scp -r rknn_model_zoo/ root@192.168.1.100:/home/root/

# Or use rsync (more efficient)
rsync -av --progress rknn_model_zoo/ root@192.168.1.100:/home/root/rknn_model_zoo/

# SSH login to development board
ssh root@192.168.1.100

Install Dependencies (Board Side)

# Update package lists
sudo apt update

# Install Python dependencies
pip3 install opencv-python numpy pillow

# Install system dependencies
sudo apt install python3-opencv

# Verify camera (if used)
ls /dev/video*
v4l2-ctl --list-devices
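
Before moving on, it is worth confirming that the Python stack imports cleanly on the board; a minimal sanity check using the packages installed above:

#!/usr/bin/env python3
# env_check.py - verify the board-side Python dependencies

import cv2
import numpy as np
from rknnlite.api import RKNNLite

print(f"OpenCV: {cv2.__version__}")
print(f"NumPy: {np.__version__}")
print("rknnlite import OK")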

Python Version Example Run

Basic Image Detection

#!/usr/bin/env python3
# yolov5_image_demo.py

import cv2
import numpy as np
import time
from rknnlite.api import RKNNLite

# COCO dataset class names
CLASSES = [
    'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
    'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
    'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
    'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
    'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
    'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
    'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
    'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
    'hair drier', 'toothbrush'
]

def preprocess_image(image, input_size=(640, 640)):
    """Image preprocessing"""
    # Resize maintaining aspect ratio
    h, w = image.shape[:2]
    scale = min(input_size[0] / w, input_size[1] / h)
    new_w, new_h = int(w * scale), int(h * scale)
    
    # Resize image
    resized = cv2.resize(image, (new_w, new_h))
    
    # Create new image and center it
    new_image = np.full((input_size[1], input_size[0], 3), 114, dtype=np.uint8)
    top = (input_size[1] - new_h) // 2
    left = (input_size[0] - new_w) // 2
    new_image[top:top+new_h, left:left+new_w] = resized
    
    # Convert to RGB and normalize
    new_image = cv2.cvtColor(new_image, cv2.COLOR_BGR2RGB)
    new_image = new_image.astype(np.float32) / 255.0
    
    # Convert to NCHW format
    new_image = np.transpose(new_image, (2, 0, 1))
    new_image = np.expand_dims(new_image, axis=0)
    
    return new_image, scale, (left, top)
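
# NOTE: this demo feeds normalized float32 NCHW data to the runtime. Whether
# that is correct depends on how the model was converted: quantized models
# often expect raw uint8 NHWC input (with mean/std folded into the model),
# so match this preprocessing to your conversion settings.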

def postprocess_output(outputs, scale, offset, conf_threshold=0.5, nms_threshold=0.45):
    """Post-process output results"""
    predictions = outputs[0][0]  # Get prediction results
    
    # Parse prediction results
    boxes = []
    scores = []
    class_ids = []
    
    for detection in predictions:
        confidence = detection[4]
        if confidence > conf_threshold:
            # Get class scores
            class_scores = detection[5:]
            class_id = np.argmax(class_scores)
            class_score = class_scores[class_id]
            
            if class_score > conf_threshold:
                # Parse bounding box
                x_center, y_center, width, height = detection[:4]
                
                # Convert to actual coordinates
                x_center = (x_center - offset[0]) / scale
                y_center = (y_center - offset[1]) / scale
                width = width / scale
                height = height / scale
                
                # Convert to top-left coordinates
                x1 = int(x_center - width / 2)
                y1 = int(y_center - height / 2)
                x2 = int(x_center + width / 2)
                y2 = int(y_center + height / 2)
                
                boxes.append([x1, y1, x2, y2])
                scores.append(float(confidence * class_score))
                class_ids.append(class_id)
    
    # Non-Maximum Suppression (NMS); cv2.dnn.NMSBoxes expects (x, y, w, h)
    # boxes, so convert from the (x1, y1, x2, y2) corner format used above
    if len(boxes) > 0:
        nms_boxes = [[x1, y1, x2 - x1, y2 - y1] for x1, y1, x2, y2 in boxes]
        indices = cv2.dnn.NMSBoxes(nms_boxes, scores, conf_threshold, nms_threshold)
        if len(indices) > 0:
            indices = indices.flatten()
            return [boxes[i] for i in indices], [scores[i] for i in indices], [class_ids[i] for i in indices]
    
    return [], [], []

def draw_detections(image, boxes, scores, class_ids):
    """Draw detection results"""
    for box, score, class_id in zip(boxes, scores, class_ids):
        x1, y1, x2, y2 = box
        
        # Draw bounding box
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
        
        # Draw label
        label = f"{CLASSES[class_id]}: {score:.2f}"
        label_size = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 2)[0]
        cv2.rectangle(image, (x1, y1 - label_size[1] - 10), (x1 + label_size[0], y1), (0, 255, 0), -1)
        cv2.putText(image, label, (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2)
    
    return image

def main():
    # Initialize RKNN
    rknn = RKNNLite()
    
    # Load model
    print("Loading RKNN model...")
    ret = rknn.load_rknn('yolov5s.rknn')
    if ret != 0:
        print("Model loading failed!")
        return
    
    # Initialize runtime
    print("Initializing runtime...")
    ret = rknn.init_runtime()
    if ret != 0:
        print("Runtime initialization failed!")
        return
    
    # Load test image
    image_path = 'test_data/images/bus.jpg'
    image = cv2.imread(image_path)
    if image is None:
        print(f"Cannot load image: {image_path}")
        return
    
    print(f"Processing image: {image_path}")
    
    # Preprocess
    input_data, scale, offset = preprocess_image(image)
    
    # Inference
    print("Starting inference...")
    start_time = time.time()
    outputs = rknn.inference(inputs=[input_data])
    inference_time = time.time() - start_time
    print(f"Inference time: {inference_time:.3f}s")
    
    # Postprocess
    boxes, scores, class_ids = postprocess_output(outputs, scale, offset)
    
    # Draw results
    result_image = draw_detections(image.copy(), boxes, scores, class_ids)
    
    # Save results
    output_path = 'yolov5_result.jpg'
    cv2.imwrite(output_path, result_image)
    print(f"Result saved to: {output_path}")
    
    # Print detection results
    print(f"Detected {len(boxes)} objects:")
    for i, (box, score, class_id) in enumerate(zip(boxes, scores, class_ids)):
        print(f"  {i+1}. {CLASSES[class_id]}: {score:.3f} at {box}")
    
    # Release resources
    rknn.release()

if __name__ == "__main__":
    main()

Run Image Detection

# Enter example directory
cd /home/root/rknn_model_zoo/examples/yolov5/python

# Copy the downloaded model and test images here so the script finds them
cp ../../../models/CV/object_detection/yolo/yolov5/yolov5s.rknn .
cp -r ../../../models/CV/object_detection/yolo/yolov5/test_data .

# Run image detection
python3 yolov5_image_demo.py

# View results
ls -la yolov5_result.jpg
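
To run detection over a whole folder rather than a single image, the demo's helpers can be reused in a small loop. A sketch, assuming yolov5_image_demo.py (the script above) sits in the same directory:

#!/usr/bin/env python3
# yolov5_batch_demo.py - run detection on every image in a folder

import glob
import os
import cv2
from rknnlite.api import RKNNLite
from yolov5_image_demo import preprocess_image, postprocess_output, draw_detections

rknn = RKNNLite()
assert rknn.load_rknn('yolov5s.rknn') == 0
assert rknn.init_runtime() == 0

os.makedirs('results', exist_ok=True)
for path in glob.glob('test_data/images/*.jpg'):
    image = cv2.imread(path)
    if image is None:
        continue
    input_data, scale, offset = preprocess_image(image)
    outputs = rknn.inference(inputs=[input_data])
    boxes, scores, class_ids = postprocess_output(outputs, scale, offset)
    cv2.imwrite(os.path.join('results', os.path.basename(path)),
                draw_detections(image, boxes, scores, class_ids))
    print(f"{path}: {len(boxes)} objects")

rknn.release()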

Real-time Camera Detection

Camera Detection Script

#!/usr/bin/env python3
# yolov5_camera_demo.py

import cv2
import numpy as np
import time
from rknnlite.api import RKNNLite
import threading
import queue

class YOLOv5Camera:
    def __init__(self, model_path, camera_id=0):
        self.model_path = model_path
        self.camera_id = camera_id
        self.rknn = None
        self.cap = None
        self.frame_queue = queue.Queue(maxsize=2)
        self.result_queue = queue.Queue(maxsize=2)
        self.running = False
        
    def init_model(self):
        """Initialize RKNN model"""
        self.rknn = RKNNLite()
        ret = self.rknn.load_rknn(self.model_path)
        if ret != 0:
            print("Model loading failed!")
            return False
        
        ret = self.rknn.init_runtime()
        if ret != 0:
            print("Runtime initialization failed!")
            return False
        
        print("RKNN model initialized successfully")
        return True
    
    def init_camera(self):
        """Initialize camera"""
        self.cap = cv2.VideoCapture(self.camera_id)
        if not self.cap.isOpened():
            print(f"Cannot open camera {self.camera_id}")
            return False
        
        # Set camera parameters
        self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
        self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
        self.cap.set(cv2.CAP_PROP_FPS, 30)
        
        print("Camera initialized successfully")
        return True
    
    def capture_thread(self):
        """Camera capture thread"""
        while self.running:
            ret, frame = self.cap.read()
            if ret:
                if not self.frame_queue.full():
                    self.frame_queue.put(frame)
                else:
                    # Drop old frame
                    try:
                        self.frame_queue.get_nowait()
                        self.frame_queue.put(frame)
                    except queue.Empty:
                        pass
            time.sleep(0.01)
    
    def inference_thread(self):
        """Inference thread"""
        while self.running:
            try:
                frame = self.frame_queue.get(timeout=1.0)
                
                # Preprocess
                input_data, scale, offset = preprocess_image(frame)
                
                # Inference
                start_time = time.time()
                outputs = self.rknn.inference(inputs=[input_data])
                inference_time = time.time() - start_time
                
                # Postprocess
                boxes, scores, class_ids = postprocess_output(outputs, scale, offset)
                
                # Draw results
                result_frame = draw_detections(frame.copy(), boxes, scores, class_ids)
                
                # Add FPS information
                fps = 1.0 / inference_time if inference_time > 0 else 0
                cv2.putText(result_frame, f"FPS: {fps:.1f}", (10, 30), 
                           cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
                
                if not self.result_queue.full():
                    self.result_queue.put(result_frame)
                else:
                    try:
                        self.result_queue.get_nowait()
                        self.result_queue.put(result_frame)
                    except queue.Empty:
                        pass
                        
            except queue.Empty:
                continue
    
    def run(self):
        """Run detection"""
        if not self.init_model():
            return
        
        if not self.init_camera():
            return
        
        self.running = True
        
        # Start threads
        capture_thread = threading.Thread(target=self.capture_thread)
        inference_thread = threading.Thread(target=self.inference_thread)
        
        capture_thread.start()
        inference_thread.start()
        
        print("Starting real-time detection, press 'q' to exit...")
        
        try:
            while True:
                try:
                    result_frame = self.result_queue.get(timeout=1.0)
                    cv2.imshow('YOLOv5 Real-time Detection', result_frame)
                    
                    if cv2.waitKey(1) & 0xFF == ord('q'):
                        break
                except queue.Empty:
                    continue
                    
        except KeyboardInterrupt:
            print("User interrupted")
        
        # Clean up resources
        self.running = False
        capture_thread.join()
        inference_thread.join()
        
        if self.cap:
            self.cap.release()
        if self.rknn:
            self.rknn.release()
        cv2.destroyAllWindows()
        
        print("Detection ended")

# Reuse the helpers defined in the image demo above
from yolov5_image_demo import preprocess_image, postprocess_output, draw_detections

if __name__ == "__main__":
    detector = YOLOv5Camera('yolov5s.rknn', camera_id=0)
    detector.run()

Run Camera Detection

# Check camera device
ls /dev/video*

# Run camera detection
python3 yolov5_camera_demo.py

# If the board has no display attached, modify the code to write results
# to a video file instead of calling cv2.imshow (see the sketch below)
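
A minimal sketch of that headless variant: inside run() in yolov5_camera_demo.py, replace the cv2.imshow display loop with a cv2.VideoWriter. The codec and frame rate below are assumptions; the frame size must match the camera resolution set in init_camera():

        # Headless variant: write annotated frames to a file instead of a window
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        writer = cv2.VideoWriter('detection_output.mp4', fourcc, 15.0, (640, 480))
        try:
            while True:
                try:
                    result_frame = self.result_queue.get(timeout=1.0)
                    writer.write(result_frame)
                except queue.Empty:
                    continue
        except KeyboardInterrupt:
            print("User interrupted")
        finally:
            writer.release()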

Performance Testing and Optimization

Performance Benchmark

#!/usr/bin/env python3
# yolov5_benchmark.py

import cv2
import numpy as np
import time
from rknnlite.api import RKNNLite

# Reuse the preprocessing helper from the image demo above
from yolov5_image_demo import preprocess_image

def benchmark_model(model_path, test_image_path, num_runs=100):
    """Model performance benchmark"""
    # Initialize model
    rknn = RKNNLite()
    ret = rknn.load_rknn(model_path)
    if ret != 0:
        print("Model loading failed!")
        return
    
    ret = rknn.init_runtime()
    if ret != 0:
        print("Runtime initialization failed!")
        return
    
    # Load test image
    image = cv2.imread(test_image_path)
    input_data, _, _ = preprocess_image(image)
    
    print(f"Starting performance test, running {num_runs} inferences...")
    
    # Warm up
    for _ in range(10):
        rknn.inference(inputs=[input_data])
    
    # Timed runs
    times = []
    for i in range(num_runs):
        start_time = time.time()
        outputs = rknn.inference(inputs=[input_data])
        end_time = time.time()
        times.append(end_time - start_time)
        
        if (i + 1) % 20 == 0:
            print(f"Completed {i + 1}/{num_runs} inferences")
    
    # Compute statistics
    times = np.array(times)
    avg_time = np.mean(times)
    min_time = np.min(times)
    max_time = np.max(times)
    std_time = np.std(times)
    
    print(f"\nPerformance Test Results:")
    print(f"Average Inference Time: {avg_time*1000:.2f} ms")
    print(f"Minimum Inference Time: {min_time*1000:.2f} ms")
    print(f"Maximum Inference Time: {max_time*1000:.2f} ms")
    print(f"Standard Deviation: {std_time*1000:.2f} ms")
    print(f"Average FPS: {1/avg_time:.2f}")
    
    rknn.release()

if __name__ == "__main__":
    benchmark_model('yolov5s.rknn', 'test_data/images/bus.jpg')

3.3 Run Simulation Demo on PC

PC Simulation Environment Preparation

Install RKNN-Toolkit2

# Activate virtual environment
source rknn_env/bin/activate

# Confirm RKNN-Toolkit2 is installed
pip list | grep rknn

# If not installed
pip install rknn-toolkit2
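
A quick import check confirms the toolkit is usable in this environment (no board needs to be attached for simulation):

#!/usr/bin/env python3
# toolkit_check.py - confirm RKNN-Toolkit2 is importable on the PC

from rknn.api import RKNN

rknn = RKNN()  # construct without loading a model
print("RKNN-Toolkit2 OK")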

PC Simulation Code

Simulation Inference Script

#!/usr/bin/env python3
# yolov5_pc_simulation.py

import cv2
import numpy as np
import time
from rknn.api import RKNN

# This script reuses preprocess_image, postprocess_output, draw_detections and
# CLASSES from yolov5_image_demo.py above; copy those functions into this file
# (the demo module itself imports rknnlite, which is usually absent on the PC)

def pc_simulation_demo():
    """PC simulation demo"""
    # Create RKNN object
    rknn = RKNN(verbose=True)
    
    # Load RKNN model
    print("Loading RKNN model...")
    ret = rknn.load_rknn('yolov5s.rknn')
    if ret != 0:
        print("Model loading failed!")
        return
    
    # Initialize runtime (simulation mode)
    # With target=None, RKNN-Toolkit2 runs on the PC-side simulator instead of
    # a connected board. Note: some Toolkit2 versions can only simulate models
    # built in the current session, not prebuilt .rknn files; in that case use
    # the conversion + simulation workflow below.
    print("Initializing simulation runtime...")
    ret = rknn.init_runtime()
    if ret != 0:
        print("Simulation runtime initialization failed!")
        return
    
    # Load test image
    image_path = 'test_data/images/bus.jpg'
    image = cv2.imread(image_path)
    if image is None:
        print(f"Cannot load image: {image_path}")
        return
    
    print(f"Processing image: {image_path}")
    
    # Preprocess (use same preprocessing function)
    input_data, scale, offset = preprocess_image(image)
    
    # Simulation inference
    print("Starting simulation inference...")
    start_time = time.time()
    outputs = rknn.inference(inputs=[input_data])
    inference_time = time.time() - start_time
    print(f"Simulation inference time: {inference_time:.3f}s")
    
    # Postprocess
    boxes, scores, class_ids = postprocess_output(outputs, scale, offset)
    
    # Draw results
    result_image = draw_detections(image.copy(), boxes, scores, class_ids)
    
    # Save results
    output_path = 'yolov5_pc_simulation_result.jpg'
    cv2.imwrite(output_path, result_image)
    print(f"Simulation result saved to: {output_path}")
    
    # Print detection results
    print(f"Simulation detected {len(boxes)} objects:")
    for i, (box, score, class_id) in enumerate(zip(boxes, scores, class_ids)):
        print(f"  {i+1}. {CLASSES[class_id]}: {score:.3f} at {box}")
    
    # Release resources
    rknn.release()

if __name__ == "__main__":
    pc_simulation_demo()

Integrated Model Conversion and Simulation

Complete Conversion + Simulation Workflow

#!/usr/bin/env python3
# yolov5_convert_and_simulate.py

import os
import cv2
import numpy as np
import time
from rknn.api import RKNN

# Reuses preprocess_image, postprocess_output, draw_detections and CLASSES
# from yolov5_image_demo.py above; copy those functions into this file

def convert_and_simulate():
    """Integrated model conversion and simulation workflow"""
    # Create RKNN object
    rknn = RKNN(verbose=True)
    
    # Configure model conversion parameters
    print("Configuring model conversion parameters...")
    rknn.config(
        target_platform='rk3568',
        # RKNN-Toolkit2 uses 'asymmetric_quantized-8' (the '-u8' spelling is
        # the older Toolkit1 name)
        quantized_dtype='asymmetric_quantized-8',
        optimization_level=3
    )
    
    # Load the ONNX model if present, otherwise fall back to the prebuilt RKNN model
    onnx_model_path = 'yolov5s.onnx'
    from_onnx = os.path.exists(onnx_model_path)
    if from_onnx:
        print(f"Loading ONNX model: {onnx_model_path}")
        ret = rknn.load_onnx(model=onnx_model_path)
        if ret != 0:
            print("ONNX model loading failed!")
            return
    else:
        print("ONNX model not found, loading RKNN model directly...")
        ret = rknn.load_rknn('yolov5s.rknn')
        if ret != 0:
            print("RKNN model loading failed!")
            return
    
    # Build model (only when converting from ONNX)
    if from_onnx:
        print("Building RKNN model...")
        # Quantization requires a calibration list: a text file with one
        # image path per line (see the note after this script)
        ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
        if ret != 0:
            print("Model building failed!")
            return
        
        # Export RKNN model
        print("Exporting RKNN model...")
        ret = rknn.export_rknn('./yolov5s_converted.rknn')
        if ret != 0:
            print("Model export failed!")
            return
    
    # Initialize simulation runtime
    print("Initializing simulation runtime...")
    ret = rknn.init_runtime()
    if ret != 0:
        print("Simulation runtime initialization failed!")
        return
    
    # Prepare test data
    test_images = [
        'test_data/images/bus.jpg',
        'test_data/images/zidane.jpg'
    ]
    
    for image_path in test_images:
        if not os.path.exists(image_path):
            print(f"Skipping non-existent image: {image_path}")
            continue
            
        print(f"\nProcessing image: {image_path}")
        
        # Load image
        image = cv2.imread(image_path)
        if image is None:
            print(f"Cannot load image: {image_path}")
            continue
        
        # Preprocess
        input_data, scale, offset = preprocess_image(image)
        
        # Simulation inference
        start_time = time.time()
        outputs = rknn.inference(inputs=[input_data])
        inference_time = time.time() - start_time
        print(f"Inference time: {inference_time:.3f}s")
        
        # Postprocess
        boxes, scores, class_ids = postprocess_output(outputs, scale, offset)
        
        # Draw and save results
        result_image = draw_detections(image.copy(), boxes, scores, class_ids)
        output_path = f'result_{os.path.basename(image_path)}'
        cv2.imwrite(output_path, result_image)
        print(f"Result saved to: {output_path}")
        
        # Print detection results
        print(f"Detected {len(boxes)} objects:")
        for i, (box, score, class_id) in enumerate(zip(boxes, scores, class_ids)):
            print(f"  {i+1}. {CLASSES[class_id]}: {score:.3f}")
    
    # Release resources
    rknn.release()
    print("\nSimulation completed!")

if __name__ == "__main__":
    convert_and_simulate()
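
The dataset argument used above (./dataset.txt is a name assumed here) is a plain text file listing calibration images for quantization, one path per line, for example:

test_data/images/bus.jpg
test_data/images/zidane.jpg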

Accuracy Comparison Test

Original Model vs RKNN Model Accuracy Comparison

#!/usr/bin/env python3
# accuracy_comparison.py

from rknn.api import RKNN
import cv2
import numpy as np

# Reuses preprocess_image and postprocess_output from yolov5_image_demo.py above

def compare_accuracy():
    """Compare accuracy of original model and RKNN model"""
    
    # Load original PyTorch model (if available)
    try:
        import torch
        original_model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
        original_model.eval()
        print("Original PyTorch model loaded successfully")
    except Exception:
        print("Cannot load original PyTorch model, skipping accuracy comparison")
        return
    
    # Load RKNN model
    rknn = RKNN(verbose=False)
    ret = rknn.load_rknn('yolov5s.rknn')
    if ret != 0:
        print("RKNN model loading failed!")
        return
    
    ret = rknn.init_runtime()
    if ret != 0:
        print("RKNN runtime initialization failed!")
        return
    
    # Test image
    test_image = 'test_data/images/bus.jpg'
    image = cv2.imread(test_image)
    
    # Original model inference (the hub model expects RGB; OpenCV loads BGR)
    print("Original model inference...")
    original_results = original_model(image[..., ::-1])
    
    # RKNN model inference
    print("RKNN model inference...")
    input_data, scale, offset = preprocess_image(image)
    rknn_outputs = rknn.inference(inputs=[input_data])
    rknn_boxes, rknn_scores, rknn_class_ids = postprocess_output(rknn_outputs, scale, offset)
    
    # Compare results
    print(f"\nAccuracy Comparison Results:")
    print(f"Original model detected objects: {len(original_results.pandas().xyxy[0])}")
    print(f"RKNN model detected objects: {len(rknn_boxes)}")
    
    # Detailed comparison (needs further implementation)
    # ...
    
    rknn.release()

if __name__ == "__main__":
    compare_accuracy()
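
The "detailed comparison" left open above can be made concrete by matching detections between the two models with IoU. A sketch (the 0.5 threshold is an arbitrary choice):

import numpy as np

def iou(box_a, box_b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_detections(ref_boxes, test_boxes, iou_threshold=0.5):
    """Count reference boxes that have an IoU-matching test box."""
    matched, used = 0, set()
    for rb in ref_boxes:
        best_j, best_iou = -1, 0.0
        for j, tb in enumerate(test_boxes):
            if j not in used and iou(rb, tb) > best_iou:
                best_j, best_iou = j, iou(rb, tb)
        if best_iou >= iou_threshold:
            matched += 1
            used.add(best_j)
    return matched

# Example wiring inside compare_accuracy():
#   ref = original_results.pandas().xyxy[0]
#   ref_boxes = ref[['xmin', 'ymin', 'xmax', 'ymax']].values.tolist()
#   print(f"Matched {match_detections(ref_boxes, rknn_boxes)} / {len(ref_boxes)}")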

Common Issues and Solutions

Runtime Troubleshooting

# 1. Check NPU driver
lsmod | grep rknpu
dmesg | grep -i npu

# 2. Check Python environment
python3 -c "from rknnlite.api import RKNNLite; print('OK')"

# 3. Check model file
file yolov5s.rknn
ls -lh yolov5s.rknn

# 4. Check camera
v4l2-ctl --list-devices
ls /dev/video*

# 5. Monitor memory usage
free -h
top -p $(pgrep python3)

Performance Optimization Suggestions

  1. Multi-threading: decouple capture, inference, and display into separate threads (as in the camera demo above).
  2. Memory reuse: pre-allocate input/output buffers instead of allocating per frame.
  3. Batch processing: use batch inference where the model and runtime support it.
  4. Preprocessing: use OpenCV's optimized routines for resizing and normalization.
  5. Postprocessing: replace per-detection Python loops with vectorized NumPy operations (see the sketch below).
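
As a sketch of the last point, the per-detection loop in postprocess_output can be expressed as NumPy array operations. This assumes the same (N, 85) YOLOv5 output layout and letterbox scale/offset convention as the demo above:

import numpy as np

def decode_boxes_vectorized(predictions, scale, offset, conf_threshold=0.5):
    """Vectorized version of the decoding loop in postprocess_output.
    predictions: (N, 85) array of [cx, cy, w, h, obj_conf, 80 class scores]."""
    preds = predictions[predictions[:, 4] > conf_threshold]
    class_ids = np.argmax(preds[:, 5:], axis=1)
    class_scores = preds[np.arange(len(preds)), 5 + class_ids]
    keep = class_scores > conf_threshold
    preds, class_ids, class_scores = preds[keep], class_ids[keep], class_scores[keep]
    scores = preds[:, 4] * class_scores

    # Undo letterboxing and convert centers to corner coordinates
    cx = (preds[:, 0] - offset[0]) / scale
    cy = (preds[:, 1] - offset[1]) / scale
    w, h = preds[:, 2] / scale, preds[:, 3] / scale
    boxes = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1).astype(int)
    return boxes, scores, class_ids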

Summary

In this chapter you ran the official YOLOv5 example end to end:

  1. Get Official Demo: download the precompiled model and example code.
  2. Run on Board: run image detection and real-time camera detection on the GM-3568JHF development board.
  3. PC Simulation: verify the model on the PC-side simulator.

This lays the foundation for model conversion and custom application development. The next chapter explains in detail how to convert a model and deploy your own model to the RK3568 NPU.
