Running the Official YOLOv5 Demo

3.1 Obtaining the Official Precompiled Demo

Downloading the official sample code

Getting it from GitHub

# Clone the official RKNN example repository
git clone https://github.com/rockchip-linux/rknn_model_zoo.git
cd rknn_model_zoo

# View the directory structure
ls -la

Directory structure

rknn_model_zoo/
├── models/                    # Pretrained models
│   └── CV/                   # Computer-vision models
│       └── object_detection/ # Object-detection models
│           └── yolo/         # YOLO-series models
│               └── yolov5/   # YOLOv5 models
├── examples/                 # Sample code
│   └── yolov5/              # YOLOv5 examples
│       ├── python/          # Python version
│       └── cpp/             # C++ version
└── docs/                    # Documentation

Downloading the precompiled model

# Enter the YOLOv5 model directory
cd models/CV/object_detection/yolo/yolov5

# Download the precompiled RKNN model
wget https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5s.rknn

# Or download from Baidu Netdisk (if GitHub is slow)
# Link: https://pan.baidu.com/s/1XXX
# Extraction code: XXXX

Getting test data

Downloading test images

# Create a test-data directory
mkdir -p test_data/images

# Download COCO test images
cd test_data/images
wget https://github.com/ultralytics/yolov5/raw/master/data/images/bus.jpg
wget https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg

# Or use your own test images
# cp /path/to/your/image.jpg ./

Preparing a test video (optional)

# Download a test video
wget https://sample-videos.com/zip/10/mp4/SampleVideo_1280x720_1mb.mp4

# Or use a camera (used later in this chapter)
# Make sure the camera device is available
ls /dev/video*

Verifying the downloaded files

Checking the model file

# Check the RKNN model file
ls -lh yolov5s.rknn
file yolov5s.rknn

# Inspect the model (requires the RKNN tools)
python3 -c "
from rknnlite.api import RKNNLite
rknn = RKNNLite()
ret = rknn.load_rknn('yolov5s.rknn')
if ret == 0:
    print('Model loaded successfully')
    rknn.init_runtime()
    print('Runtime initialized successfully')
else:
    print('Failed to load model')
"

3.2 Running the YOLOv5 Demo on the Board

Preparing the board environment

Transferring files to the board

# Transfer the files with scp
scp -r rknn_model_zoo/ root@192.168.1.100:/home/root/

# Or use rsync (more efficient)
rsync -av --progress rknn_model_zoo/ root@192.168.1.100:/home/root/rknn_model_zoo/

# Log in to the board over SSH
ssh root@192.168.1.100

Installing dependencies (on the board)

# Update the package manager
sudo apt update

# Install Python dependencies
pip3 install opencv-python numpy pillow

# Install system dependencies
sudo apt install python3-opencv

# Verify the camera (if used)
ls /dev/video*
v4l2-ctl --list-devices

Running the Python example

Basic image detection

#!/usr/bin/env python3
# yolov5_image_demo.py

import cv2
import numpy as np
import time
from rknnlite.api import RKNNLite

# COCO dataset class names
CLASSES = [
    'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
    'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
    'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
    'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
    'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
    'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
    'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
    'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
    'hair drier', 'toothbrush'
]

def preprocess_image(image, input_size=(640, 640)):
    """Preprocess an image (letterbox resize)"""
    # Aspect-ratio-preserving resize
    h, w = image.shape[:2]
    scale = min(input_size[0] / w, input_size[1] / h)
    new_w, new_h = int(w * scale), int(h * scale)
    
    # Resize the image
    resized = cv2.resize(image, (new_w, new_h))
    
    # Create a padded canvas and center the resized image on it
    new_image = np.full((input_size[1], input_size[0], 3), 114, dtype=np.uint8)
    top = (input_size[1] - new_h) // 2
    left = (input_size[0] - new_w) // 2
    new_image[top:top+new_h, left:left+new_w] = resized
    
    # Convert to RGB and normalize
    new_image = cv2.cvtColor(new_image, cv2.COLOR_BGR2RGB)
    new_image = new_image.astype(np.float32) / 255.0
    
    # Convert to NCHW layout
    new_image = np.transpose(new_image, (2, 0, 1))
    new_image = np.expand_dims(new_image, axis=0)
    
    return new_image, scale, (left, top)

def postprocess_output(outputs, scale, offset, conf_threshold=0.5, nms_threshold=0.45):
    """Post-process the raw network output"""
    predictions = outputs[0][0]  # Predictions for the first (only) image
    
    # Parse the predictions
    boxes = []
    scores = []
    class_ids = []
    
    for detection in predictions:
        confidence = detection[4]
        if confidence > conf_threshold:
            # Class scores
            class_scores = detection[5:]
            class_id = np.argmax(class_scores)
            class_score = class_scores[class_id]
            
            if class_score > conf_threshold:
                # Bounding box (center format, in input-tensor coordinates)
                x_center, y_center, width, height = detection[:4]
                
                # Map back to original-image coordinates
                x_center = (x_center - offset[0]) / scale
                y_center = (y_center - offset[1]) / scale
                width = width / scale
                height = height / scale
                
                # Convert to corner coordinates
                x1 = int(x_center - width / 2)
                y1 = int(y_center - height / 2)
                x2 = int(x_center + width / 2)
                y2 = int(y_center + height / 2)
                
                boxes.append([x1, y1, x2, y2])
                scores.append(float(confidence * class_score))
                class_ids.append(class_id)
    
    # Non-maximum suppression (NMSBoxes expects boxes as [x, y, w, h])
    if len(boxes) > 0:
        nms_boxes = [[x1, y1, x2 - x1, y2 - y1] for x1, y1, x2, y2 in boxes]
        indices = cv2.dnn.NMSBoxes(nms_boxes, scores, conf_threshold, nms_threshold)
        if len(indices) > 0:
            indices = indices.flatten()
            return [boxes[i] for i in indices], [scores[i] for i in indices], [class_ids[i] for i in indices]
    
    return [], [], []

def draw_detections(image, boxes, scores, class_ids):
    """Draw detection results on the image"""
    for box, score, class_id in zip(boxes, scores, class_ids):
        x1, y1, x2, y2 = box
        
        # Draw the bounding box
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
        
        # Draw the label
        label = f"{CLASSES[class_id]}: {score:.2f}"
        label_size = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 2)[0]
        cv2.rectangle(image, (x1, y1 - label_size[1] - 10), (x1 + label_size[0], y1), (0, 255, 0), -1)
        cv2.putText(image, label, (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 2)
    
    return image

def main():
    # Initialize RKNN
    rknn = RKNNLite()
    
    # Load the model
    print("Loading RKNN model...")
    ret = rknn.load_rknn('yolov5s.rknn')
    if ret != 0:
        print("Failed to load model!")
        return
    
    # Initialize the runtime
    print("Initializing runtime...")
    ret = rknn.init_runtime()
    if ret != 0:
        print("Failed to initialize runtime!")
        return
    
    # Load the test image
    image_path = 'test_data/images/bus.jpg'
    image = cv2.imread(image_path)
    if image is None:
        print(f"Cannot load image: {image_path}")
        return
    
    print(f"Processing image: {image_path}")
    
    # Preprocess
    input_data, scale, offset = preprocess_image(image)
    
    # Inference
    print("Running inference...")
    start_time = time.time()
    outputs = rknn.inference(inputs=[input_data])
    inference_time = time.time() - start_time
    print(f"Inference time: {inference_time:.3f}s")
    
    # Post-process
    boxes, scores, class_ids = postprocess_output(outputs, scale, offset)
    
    # Draw the results
    result_image = draw_detections(image.copy(), boxes, scores, class_ids)
    
    # Save the result
    output_path = 'yolov5_result.jpg'
    cv2.imwrite(output_path, result_image)
    print(f"Result saved to: {output_path}")
    
    # Print the detections
    print(f"Detected {len(boxes)} objects:")
    for i, (box, score, class_id) in enumerate(zip(boxes, scores, class_ids)):
        print(f"  {i+1}. {CLASSES[class_id]}: {score:.3f} at {box}")
    
    # Release resources
    rknn.release()

if __name__ == "__main__":
    main()

Running image detection

# Enter the example directory
cd /home/root/rknn_model_zoo/examples/yolov5/python

# Run image detection
python3 yolov5_image_demo.py

# Check the result
ls -la yolov5_result.jpg

Real-time camera detection

Camera detection script

#!/usr/bin/env python3
# yolov5_camera_demo.py

import cv2
import numpy as np
import time
from rknnlite.api import RKNNLite
import threading
import queue

class YOLOv5Camera:
    def __init__(self, model_path, camera_id=0):
        self.model_path = model_path
        self.camera_id = camera_id
        self.rknn = None
        self.cap = None
        self.frame_queue = queue.Queue(maxsize=2)
        self.result_queue = queue.Queue(maxsize=2)
        self.running = False
        
    def init_model(self):
        """Initialize the RKNN model"""
        self.rknn = RKNNLite()
        ret = self.rknn.load_rknn(self.model_path)
        if ret != 0:
            print("Failed to load model!")
            return False
        
        ret = self.rknn.init_runtime()
        if ret != 0:
            print("Failed to initialize runtime!")
            return False
        
        print("RKNN model initialized successfully")
        return True
    
    def init_camera(self):
        """Initialize the camera"""
        self.cap = cv2.VideoCapture(self.camera_id)
        if not self.cap.isOpened():
            print(f"Cannot open camera {self.camera_id}")
            return False
        
        # Set camera parameters
        self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
        self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
        self.cap.set(cv2.CAP_PROP_FPS, 30)
        
        print("Camera initialized successfully")
        return True
    
    def capture_thread(self):
        """Camera capture thread"""
        while self.running:
            ret, frame = self.cap.read()
            if ret:
                if not self.frame_queue.full():
                    self.frame_queue.put(frame)
                else:
                    # Drop the oldest frame
                    try:
                        self.frame_queue.get_nowait()
                        self.frame_queue.put(frame)
                    except queue.Empty:
                        pass
            time.sleep(0.01)
    
    def inference_thread(self):
        """Inference thread"""
        while self.running:
            try:
                frame = self.frame_queue.get(timeout=1.0)
                
                # Preprocess
                input_data, scale, offset = preprocess_image(frame)
                
                # Inference
                start_time = time.time()
                outputs = self.rknn.inference(inputs=[input_data])
                inference_time = time.time() - start_time
                
                # Post-process
                boxes, scores, class_ids = postprocess_output(outputs, scale, offset)
                
                # Draw the results
                result_frame = draw_detections(frame.copy(), boxes, scores, class_ids)
                
                # Add an FPS overlay
                fps = 1.0 / inference_time if inference_time > 0 else 0
                cv2.putText(result_frame, f"FPS: {fps:.1f}", (10, 30), 
                           cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
                
                if not self.result_queue.full():
                    self.result_queue.put(result_frame)
                else:
                    try:
                        self.result_queue.get_nowait()
                        self.result_queue.put(result_frame)
                    except queue.Empty:
                        pass
                        
            except queue.Empty:
                continue
    
    def run(self):
        """Run the detector"""
        if not self.init_model():
            return
        
        if not self.init_camera():
            return
        
        self.running = True
        
        # Start the worker threads
        capture_thread = threading.Thread(target=self.capture_thread)
        inference_thread = threading.Thread(target=self.inference_thread)
        
        capture_thread.start()
        inference_thread.start()
        
        print("Starting real-time detection, press 'q' to quit...")
        
        try:
            while True:
                try:
                    result_frame = self.result_queue.get(timeout=1.0)
                    cv2.imshow('YOLOv5 Real-time Detection', result_frame)
                    
                    if cv2.waitKey(1) & 0xFF == ord('q'):
                        break
                except queue.Empty:
                    continue
                    
        except KeyboardInterrupt:
            print("Interrupted by user")
        
        # Clean up
        self.running = False
        capture_thread.join()
        inference_thread.join()
        
        if self.cap:
            self.cap.release()
        if self.rknn:
            self.rknn.release()
        cv2.destroyAllWindows()
        
        print("Detection finished")

# Reuses the preprocessing and post-processing functions defined earlier
# (omitted here; use the functions from the image demo above)

if __name__ == "__main__":
    detector = YOLOv5Camera('yolov5s.rknn', camera_id=0)
    detector.run()

Running camera detection

# Check camera devices
ls /dev/video*

# Run camera detection
python3 yolov5_camera_demo.py

# Without a display you can save a video instead
# (modify the code to write to a video file rather than show a window)

Performance testing and optimization

Performance benchmark

#!/usr/bin/env python3
# yolov5_benchmark.py

import cv2
import numpy as np
import time
from rknnlite.api import RKNNLite

# Reuses preprocess_image from the image demo above

def benchmark_model(model_path, test_image_path, num_runs=100):
    """Model performance benchmark"""
    # Initialize the model
    rknn = RKNNLite()
    ret = rknn.load_rknn(model_path)
    if ret != 0:
        print("Failed to load model!")
        return
    
    ret = rknn.init_runtime()
    if ret != 0:
        print("Failed to initialize runtime!")
        return
    
    # Load the test image
    image = cv2.imread(test_image_path)
    input_data, _, _ = preprocess_image(image)
    
    print(f"Starting benchmark: {num_runs} inference runs...")
    
    # Warm-up
    for _ in range(10):
        rknn.inference(inputs=[input_data])
    
    # Timed runs
    times = []
    for i in range(num_runs):
        start_time = time.time()
        outputs = rknn.inference(inputs=[input_data])
        end_time = time.time()
        times.append(end_time - start_time)
        
        if (i + 1) % 20 == 0:
            print(f"Completed {i + 1}/{num_runs} runs")
    
    # Statistics
    times = np.array(times)
    avg_time = np.mean(times)
    min_time = np.min(times)
    max_time = np.max(times)
    std_time = np.std(times)
    
    print(f"\nBenchmark results:")
    print(f"Average inference time: {avg_time*1000:.2f} ms")
    print(f"Minimum inference time: {min_time*1000:.2f} ms")
    print(f"Maximum inference time: {max_time*1000:.2f} ms")
    print(f"Standard deviation: {std_time*1000:.2f} ms")
    print(f"Average FPS: {1/avg_time:.2f}")
    
    rknn.release()

if __name__ == "__main__":
    benchmark_model('yolov5s.rknn', 'test_data/images/bus.jpg')

3.3 Running the Simulation Demo on a PC

Preparing the PC simulation environment

Installing RKNN-Toolkit2

# Activate the virtual environment
source rknn_env/bin/activate

# Confirm that RKNN-Toolkit2 is installed
pip list | grep rknn

# If it is not installed yet
pip install rknn-toolkit2

PC simulation code

Simulation inference script

#!/usr/bin/env python3
# yolov5_pc_simulation.py

import cv2
import numpy as np
import time
from rknn.api import RKNN

def pc_simulation_demo():
    """PC-side simulation demo"""
    # Create the RKNN object
    rknn = RKNN(verbose=True)
    
    # Load the RKNN model
    print("Loading RKNN model...")
    ret = rknn.load_rknn('yolov5s.rknn')
    if ret != 0:
        print("Failed to load model!")
        return
    
    # Initialize the runtime (leave target unset to run on the PC simulator)
    print("Initializing simulation runtime...")
    ret = rknn.init_runtime()
    if ret != 0:
        print("Failed to initialize simulation runtime!")
        return
    
    # Load the test image
    image_path = 'test_data/images/bus.jpg'
    image = cv2.imread(image_path)
    if image is None:
        print(f"Cannot load image: {image_path}")
        return
    
    print(f"Processing image: {image_path}")
    
    # Preprocess (using the same preprocessing function as before)
    input_data, scale, offset = preprocess_image(image)
    
    # Simulated inference
    print("Running simulated inference...")
    start_time = time.time()
    outputs = rknn.inference(inputs=[input_data])
    inference_time = time.time() - start_time
    print(f"Simulated inference time: {inference_time:.3f}s")
    
    # Post-process
    boxes, scores, class_ids = postprocess_output(outputs, scale, offset)
    
    # Draw the results
    result_image = draw_detections(image.copy(), boxes, scores, class_ids)
    
    # Save the result
    output_path = 'yolov5_pc_simulation_result.jpg'
    cv2.imwrite(output_path, result_image)
    print(f"Simulation result saved to: {output_path}")
    
    # Print the detections
    print(f"Simulation detected {len(boxes)} objects:")
    for i, (box, score, class_id) in enumerate(zip(boxes, scores, class_ids)):
        print(f"  {i+1}. {CLASSES[class_id]}: {score:.3f} at {box}")
    
    # Release resources
    rknn.release()

if __name__ == "__main__":
    pc_simulation_demo()

Integrated model conversion and simulation

Complete convert-and-simulate workflow

#!/usr/bin/env python3
# yolov5_convert_and_simulate.py

import os
import cv2
import numpy as np
import time
from rknn.api import RKNN

def convert_and_simulate():
    """Integrated model conversion and simulation workflow"""
    # Create the RKNN object
    rknn = RKNN(verbose=True)
    
    # Configure the conversion parameters
    # (mean 0 / std 255 match the 0-1 normalization used in preprocessing)
    print("Configuring conversion parameters...")
    rknn.config(
        mean_values=[[0, 0, 0]],
        std_values=[[255, 255, 255]],
        target_platform='rk3568'
    )
    
    # Load the ONNX model if present, otherwise fall back to the RKNN model
    onnx_model_path = 'yolov5s.onnx'
    loaded_from_onnx = os.path.exists(onnx_model_path)
    if loaded_from_onnx:
        print(f"Loading ONNX model: {onnx_model_path}")
        ret = rknn.load_onnx(model=onnx_model_path)
        if ret != 0:
            print("Failed to load ONNX model!")
            return
    else:
        print("ONNX model not found, loading the RKNN model directly...")
        ret = rknn.load_rknn('yolov5s.rknn')
        if ret != 0:
            print("Failed to load RKNN model!")
            return
    
    # Build the model (only when converting from ONNX)
    if loaded_from_onnx:
        print("Building RKNN model...")
        # Quantization requires a calibration list: one image path per line
        ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
        if ret != 0:
            print("Failed to build model!")
            return
        
        # Export the RKNN model
        print("Exporting RKNN model...")
        ret = rknn.export_rknn('./yolov5s_converted.rknn')
        if ret != 0:
            print("Failed to export model!")
            return
    
    # Initialize the simulation runtime
    print("Initializing simulation runtime...")
    ret = rknn.init_runtime()
    if ret != 0:
        print("Failed to initialize simulation runtime!")
        return
    
    # Prepare the test data
    test_images = [
        'test_data/images/bus.jpg',
        'test_data/images/zidane.jpg'
    ]
    
    for image_path in test_images:
        if not os.path.exists(image_path):
            print(f"Skipping missing image: {image_path}")
            continue
            
        print(f"\nProcessing image: {image_path}")
        
        # Load the image
        image = cv2.imread(image_path)
        if image is None:
            print(f"Cannot load image: {image_path}")
            continue
        
        # Preprocess
        input_data, scale, offset = preprocess_image(image)
        
        # Simulated inference
        start_time = time.time()
        outputs = rknn.inference(inputs=[input_data])
        inference_time = time.time() - start_time
        print(f"Inference time: {inference_time:.3f}s")
        
        # Post-process
        boxes, scores, class_ids = postprocess_output(outputs, scale, offset)
        
        # Draw and save the results
        result_image = draw_detections(image.copy(), boxes, scores, class_ids)
        output_path = f'result_{os.path.basename(image_path)}'
        cv2.imwrite(output_path, result_image)
        print(f"Result saved to: {output_path}")
        
        # Print the detections
        print(f"Detected {len(boxes)} objects:")
        for i, (box, score, class_id) in enumerate(zip(boxes, scores, class_ids)):
            print(f"  {i+1}. {CLASSES[class_id]}: {score:.3f}")
    
    # Release resources
    rknn.release()
    print("\nSimulation finished!")

if __name__ == "__main__":
    convert_and_simulate()

Accuracy comparison

Original model vs. RKNN model accuracy

#!/usr/bin/env python3
# accuracy_comparison.py

import cv2
import numpy as np
import torch
from rknn.api import RKNN

def compare_accuracy():
    """Compare the accuracy of the original model and the RKNN model"""
    
    # Load the original PyTorch model (if available)
    try:
        original_model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
        original_model.eval()
        print("Original PyTorch model loaded successfully")
    except Exception:
        print("Cannot load the original PyTorch model, skipping accuracy comparison")
        return
    
    # Load the RKNN model
    rknn = RKNN(verbose=False)
    ret = rknn.load_rknn('yolov5s.rknn')
    if ret != 0:
        print("Failed to load RKNN model!")
        return
    
    ret = rknn.init_runtime()
    if ret != 0:
        print("Failed to initialize RKNN runtime!")
        return
    
    # Test image
    test_image = 'test_data/images/bus.jpg'
    image = cv2.imread(test_image)
    
    # Inference with the original model (the hub model expects RGB input)
    print("Running the original model...")
    original_results = original_model(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    
    # Inference with the RKNN model
    print("Running the RKNN model...")
    input_data, scale, offset = preprocess_image(image)
    rknn_outputs = rknn.inference(inputs=[input_data])
    rknn_boxes, rknn_scores, rknn_class_ids = postprocess_output(rknn_outputs, scale, offset)
    
    # Compare the results
    print(f"\nAccuracy comparison:")
    print(f"Original model detections: {len(original_results.pandas().xyxy[0])}")
    print(f"RKNN model detections: {len(rknn_boxes)}")
    
    # Detailed comparison (left to implement)
    # ...
    
    rknn.release()

if __name__ == "__main__":
    compare_accuracy()
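The detailed comparison left open above can be filled in by matching the two detection sets with IoU (intersection over union). A minimal pure-Python sketch; the function names and the 0.5 threshold are illustrative, not part of the official demo:

```python
def box_iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(ref_boxes, test_boxes, iou_threshold=0.5):
    """Greedily match each reference box to the best unmatched test box.

    Returns (number of matches, mean IoU of the matches)."""
    matched, ious = set(), []
    for ref in ref_boxes:
        best_j, best_iou = -1, iou_threshold
        for j, test in enumerate(test_boxes):
            if j in matched:
                continue
            iou = box_iou(ref, test)
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_j >= 0:
            matched.add(best_j)
            ious.append(best_iou)
    return len(matched), (sum(ious) / len(ious) if ious else 0.0)
```

Feeding the PyTorch boxes as `ref_boxes` and the RKNN boxes as `test_boxes` gives a match rate and a mean localization agreement, which is usually enough to spot quantization regressions.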

Common problems and solutions

Runtime troubleshooting

# 1. Check the NPU driver
lsmod | grep rknpu
dmesg | grep -i npu

# 2. Check the Python environment
python3 -c "from rknnlite.api import RKNNLite; print('OK')"

# 3. Check the model file
file yolov5s.rknn
ls -lh yolov5s.rknn

# 4. Check the camera
v4l2-ctl --list-devices
ls /dev/video*

# 5. Monitor memory usage
free -h
top -p $(pgrep python3)

Performance optimization tips

# 1. Multi-threading
import threading
import queue

# 2. Memory reuse
# Pre-allocate input/output buffers

# 3. Batching
# Use batch inference if the model supports it

# 4. Preprocessing optimization
# Use OpenCV's DNN module for preprocessing

# 5. Post-processing optimization
# Replace Python loops with vectorized operations
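As an illustration of tip 5, the per-detection Python loop in postprocess_output can be replaced by NumPy array operations. A sketch of the filtering stage only (NMS omitted; assumes the usual (N, 85) YOLOv5 output layout with box, objectness, and 80 class scores per row):

```python
import numpy as np

def filter_detections_vectorized(predictions, conf_threshold=0.5):
    """Vectorized confidence/class filtering for (N, 85) YOLOv5 output rows."""
    predictions = np.asarray(predictions, dtype=np.float32)
    conf = predictions[:, 4]
    keep = conf > conf_threshold                     # objectness filter
    predictions, conf = predictions[keep], conf[keep]

    class_scores = predictions[:, 5:]
    class_ids = np.argmax(class_scores, axis=1)      # best class per row
    best = class_scores[np.arange(len(class_ids)), class_ids]
    keep = best > conf_threshold                     # class-score filter

    boxes = predictions[keep, :4]                    # cx, cy, w, h (input coords)
    scores = conf[keep] * best[keep]
    return boxes, scores, class_ids[keep]
```

The two boolean masks do in two array passes what the loop does row by row, which matters when the raw output has tens of thousands of candidate rows.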

Summary

Having worked through this chapter, you have successfully run the official YOLOv5 demo, covering:

  1. Obtaining the official demo: downloading the precompiled model and sample code
  2. Running on the board: image detection and real-time camera detection on the GM-3568JHF board
  3. PC simulation: validating the model in simulation on a PC

This lays the groundwork for model conversion and custom application development. The next chapter explains model conversion in detail, so you can deploy your own models on the RK3568 NPU.

Contributor: jxc