[Deep Learning] Building a Line-Following Robot with Deep Learning

Tags: python, deep learning, pytorch


After installing the Docker environment, add your non-root user to the docker group. Reference commands:

sudo groupadd docker
sudo gpasswd -a ${USER} docker
sudo systemctl restart docker  # CentOS7/Ubuntu
# log out and log back in so the group change takes effect
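To confirm the group change took effect after logging back in, a quick stdlib check can be used; `in_docker_group` here is a hypothetical helper, not part of any tooling:

```python
import getpass
import grp

# Hypothetical helper (not part of the OriginBot tooling): check whether
# a user is listed in the local "docker" group. The gpasswd change only
# applies to new login sessions.
def in_docker_group(user: str) -> bool:
    try:
        group = grp.getgrnam("docker")
    except KeyError:  # group does not exist on this machine
        return False
    return user in group.gr_mem

print(in_docker_group(getpass.getuser()))
```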

Model Training

The model described above can directly reuse the standard definition from PyTorch; both the dataset split and the training loop are encapsulated in the code of the line_follower_model package.

(figure: model_training)

Next, run the following commands to start training:

cd ~/dev_ws/src/originbot_desktop/originbot_deeplearning/line_follower_model
ros2 run line_follower_model training

Error encountered: ./best_line_follower_model_xy.pth cannot be opened

thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning/line_follower_model$ ros2 run line_follower_model training
/home/thomas/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/home/thomas/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /home/thomas/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
100.0%
0.672721, 30.660010
save
Traceback (most recent call last):
  File "/home/thomas/dev_ws/install/line_follower_model/lib/line_follower_model/training", line 33, in <module>
    sys.exit(load_entry_point('line-follower-model==0.0.0', 'console_scripts', 'training')())
  File "/home/thomas/dev_ws/install/line_follower_model/lib/python3.10/site-packages/line_follower_model/training_member_function.py", line 131, in main
    torch.save(model.state_dict(), BEST_MODEL_PATH)
  File "/home/thomas/.local/lib/python3.10/site-packages/torch/serialization.py", line 628, in save
    with _open_zipfile_writer(f) as opened_zipfile:
  File "/home/thomas/.local/lib/python3.10/site-packages/torch/serialization.py", line 502, in _open_zipfile_writer
    return container(name_or_buffer)
  File "/home/thomas/.local/lib/python3.10/site-packages/torch/serialization.py", line 473, in __init__
    super().__init__(torch._C.PyTorchFileWriter(self.name))
RuntimeError: File ./best_line_follower_model_xy.pth cannot be opened.


This happens because the package directories are owned by root, so the current user has no write permission (a chown to your user would also work; the session below uses chmod 777):

thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning$ ls -l
total 8
drwxr-xr-x 3 root root 4096 Mar 27 11:03 10_model_convert
drwxr-xr-x 7 root root 4096 Mar 27 14:29 line_follower_model
thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning$ sudo chmod 777 *
[sudo] password for thomas: 
thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning$ ls
10_model_convert  line_follower_model
thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning$ ls -l
total 8
drwxrwxrwx 3 root root 4096 Mar 27 11:03 10_model_convert
drwxrwxrwx 7 root root 4096 Mar 27 14:29 line_follower_model
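The RuntimeError above is simply torch.save failing to create the output file. A small defensive pre-check (illustrative only, not part of the package) would surface the real cause immediately:

```python
import os

# Illustrative pre-check before saving a checkpoint: torch.save only
# reports "cannot be opened", so verify directory write access first.
BEST_MODEL_PATH = "./best_line_follower_model_xy.pth"

target_dir = os.path.dirname(os.path.abspath(BEST_MODEL_PATH))
if not os.access(target_dir, os.W_OK):
    raise PermissionError(f"no write access to {target_dir}; "
                          "fix ownership/permissions before training")
print("write access OK")
```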

Run the training again:

ros2 run line_follower_model training

thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning/line_follower_model$ ros2 run line_follower_model training
/home/thomas/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/home/thomas/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
0.722548, 6.242182
save
0.087550, 5.827808
save
0.045032, 0.380008
save
0.032235, 0.111976
save
0.027896, 0.039962
save
0.030725, 0.204738
0.025075, 0.036258
save
0.028099, 0.040965
0.016858, 0.032197
save
0.019491, 0.036230
0.018325, 0.043560
0.019858, 0.322563
0.015115, 0.070269
0.014820, 0.030373
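Each line of the log above prints "train_loss, test_loss" for one epoch, and "save" appears whenever the test loss improves on the best seen so far. The pattern, sketched with sample values (an assumption about the package's logic, matching the log's behavior):

```python
# Sketch (inferred from the log, not the package's literal code) of
# best-model checkpointing: print "train_loss, test_loss" per epoch
# and save the weights only when the test loss improves.
best_loss = float("inf")
saved = []
test_losses = [6.24, 5.83, 0.38, 0.11, 0.040, 0.205, 0.036]  # sample values
for epoch, loss in enumerate(test_losses):
    if loss < best_loss:
        best_loss = loss
        saved.append(epoch)  # torch.save(model.state_dict(), PATH) goes here
print(saved)  # [0, 1, 2, 3, 4, 6] -- epoch 5 regressed, so no save
```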

Training takes a while, anywhere from tens of minutes to about an hour, so be patient. Once it completes, you will find the generated file best_line_follower_model_xy.pth:

thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning/line_follower_model$ ls -l
total 54892
-rw-rw-r-- 1 thomas thomas 44789846 Mar 28 13:28 best_line_follower_model_xy.pth

Model Conversion

A floating-point model trained in PyTorch runs inefficiently if deployed directly on the RDK X3. To make full use of the BPU's 5 TOPS of compute, the float model must first be converted to a fixed-point model.
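"Fixed-point conversion" means each layer's float values are mapped to 8-bit integers through a per-layer scale; the per-node thresholds in the compilation log below are exactly these calibrated ranges. A toy symmetric-int8 illustration for intuition only (not Horizon's actual implementation):

```python
# Toy symmetric int8 quantization: values in [-threshold, threshold]
# are mapped to integers in [-127, 127] via scale = threshold / 127.
# Horizon's toolchain does this per layer with calibrated thresholds.
def quantize_int8(values, threshold):
    scale = threshold / 127.0
    return [max(-128, min(127, round(v / scale))) for v in values]

print(quantize_int8([0.5, -1.2, 3.0], threshold=3.0))  # [21, -51, 127]
```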

(figure: model_transform)

Generating the ONNX Model

Next, run generate_onnx to convert the trained model into ONNX format:

ros2 run line_follower_model generate_onnx

After it finishes, the best_line_follower_model_xy.onnx model is generated in the current directory:

thomas@J-35:~/dev_ws/src/originbot_desktop/originbot_deeplearning/line_follower_model$ ls -l
total 98556
-rw-rw-r-- 1 thomas thomas 44700647 Apr  2 21:02 best_line_follower_model_xy.onnx
-rw-rw-r-- 1 thomas thomas 44789846 Apr  2 19:37 best_line_follower_model_xy.pth


 

Starting the AI Toolchain Docker

Unpack the previously downloaded AI toolchain Docker image and the OE package. The OE package's directory structure is as follows:

 
. 
├── bsp 
│   └── X3J3-Img-PL2.2-V1.1.0-20220324.tgz 
├── ddk 
│   ├── package 
│   ├── samples 
│   └── tools 
├── doc 
│   ├── cn 
│   ├── ddk_doc 
│   └── en 
├── release_note-CN.txt 
├── release_note-EN.txt 
├── run_docker.sh 
└── tools 
    ├── 0A_CP210x_USB2UART_Driver.zip 
    ├── 0A_PL2302-USB-to-Serial-Comm-Port.zip 
    ├── 0A_PL2303-M_LogoDriver_Setup_v202_20200527.zip 
    ├── 0B_hbupdate_burn_secure-key1.zip 
    ├── 0B_hbupdate_linux_cli_v1.1.tgz 
    ├── 0B_hbupdate_linux_gui_v1.1.tgz 
    ├── 0B_hbupdate_mac_v1.0.5.app.tar.gz 
    └── 0B_hbupdate_win64_v1.1.zip 
 

Copy the 10_model_convert package from the originbot_desktop repository into the OE package's ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/ directory.


Then copy the annotated dataset folder image_dataset and the generated best_line_follower_model_xy.onnx model from the line_follower_model package into the ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/ directory above. Keep only about 100 images in image_dataset for calibration:
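Thinning image_dataset down to roughly 100 images can be done by hand or with a quick script; `sample_calibration` below is a hypothetical helper, not part of the toolchain, which simply reads whatever is in the directory:

```python
import random

# Hypothetical helper to keep a random ~100-image subset for
# calibration; a fixed seed makes the selection reproducible.
def sample_calibration(files, k=100, seed=0):
    files = list(files)
    if len(files) <= k:
        return files
    return random.Random(seed).sample(files, k)

subset = sample_calibration([f"img_{i:03d}.png" for i in range(500)])
print(len(subset))  # 100
```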


Then return to the root of the OE package and launch the AI toolchain Docker image:

cd /home/thomas/Me/deeplearning/horizon_xj3_open_explorer_v2.3.3_20220727/
sh run_docker.sh /data/ 

 

Generating Calibration Data

Inside the running Docker image, perform the following:

cd ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper 
sh 02_preprocess.sh 


The command runs as follows:

thomas@J-35:~/Me/deeplearning/horizon_xj3_open_explorer_v2.3.3_20220727$ sudo sh run_docker.sh /data/ 
[sudo] password for thomas: 
run_docker.sh: 14: [: unexpected operator
run_docker.sh: 23: [: openexplorer/ai_toolchain_centos_7_xj3: unexpected operator
docker version is v2.3.3
dataset path is /data
open_explorer folder path is /home/thomas/Me/deeplearning/horizon_xj3_open_explorer_v2.3.3_20220727
[root@1e1a1a7e24f4 open_explorer]# cd ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper 
[root@1e1a1a7e24f4 mapper]# sh 02_preprocess.sh 

cd $(dirname $0) || exit

python3 ../../../data_preprocess.py \
  --src_dir ./image_dataset \
  --dst_dir ./calibration_data_bgr_f32 \
  --pic_ext .rgb \
  --read_mode opencv
Warning please note that the data type is now determined by the name of the folder suffix
Warning if you need to set it explicitly, please configure the value of saved_data_type in the preprocess shell script
regular preprocess
write:./calibration_data_bgr_f32/xy_008_160_31a8e30a-eca6-11ee-bb07-dfd665df7b81.rgb
write:./calibration_data_bgr_f32/xy_009_160_39c18c40-eca6-11ee-bb07-dfd665df7b81.rgb
write:./calibration_data_bgr_f32/xy_028_092_3327df66-ec9b-11ee-bb07-dfd665df7b81.rgb

Compiling the Fixed-Point Model

Next, run the following commands to generate the fixed-point model file, which will later be deployed on the robot:

cd ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper
sh 03_build.sh

The command runs as follows:

[root@1e1a1a7e24f4 mapper]# sh 03_build.sh
2024-04-02 21:46:50,078 INFO Start hb_mapper....
2024-04-02 21:46:50,079 INFO log will be stored in /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/hb_mapper_makertbin.log
2024-04-02 21:46:50,079 INFO hbdk version 3.37.2
2024-04-02 21:46:50,080 INFO horizon_nn version 0.14.0
2024-04-02 21:46:50,080 INFO hb_mapper version 1.9.9
2024-04-02 21:46:50,081 INFO Start Model Convert....
2024-04-02 21:46:50,100 INFO Using abs path /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/best_line_follower_model_xy.onnx
2024-04-02 21:46:50,102 INFO validating model_parameters...
2024-04-02 21:46:50,231 WARNING User input 'log_level' deleted,Please do not use this parameter again
2024-04-02 21:46:50,231 INFO Using abs path /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/model_output
2024-04-02 21:46:50,232 INFO validating model_parameters finished
2024-04-02 21:46:50,232 INFO validating input_parameters...
2024-04-02 21:46:50,232 INFO input num is set to 1 according to input_names
2024-04-02 21:46:50,233 INFO model name missing, using model name from model file: ['input']
2024-04-02 21:46:50,233 INFO model input shape missing, using shape from model file: [[1, 3, 224, 224]]
2024-04-02 21:46:50,233 INFO validating input_parameters finished
2024-04-02 21:46:50,233 INFO validating calibration_parameters...
2024-04-02 21:46:50,233 INFO Using abs path /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/calibration_data_bgr_f32
2024-04-02 21:46:50,234 INFO validating calibration_parameters finished
2024-04-02 21:46:50,234 INFO validating custom_op...
2024-04-02 21:46:50,234 INFO custom_op does not exist, skipped
2024-04-02 21:46:50,234 INFO validating custom_op finished
2024-04-02 21:46:50,234 INFO validating compiler_parameters...
2024-04-02 21:46:50,235 INFO validating compiler_parameters finished
2024-04-02 21:46:50,239 WARNING Please note that the calibration file data type is set to float32, determined by the name of the calibration dir name suffix
2024-04-02 21:46:50,239 WARNING if you need to set it explicitly, please configure the value of cal_data_type in the calibration_parameters group in yaml
2024-04-02 21:46:50,240 INFO *******************************************
2024-04-02 21:46:50,240 INFO First calibration picture name: xy_008_160_31a8e30a-eca6-11ee-bb07-dfd665df7b81.rgb
2024-04-02 21:46:50,240 INFO First calibration picture md5:
83281dbdee2db08577524faa7f892adf  /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/calibration_data_bgr_f32/xy_008_160_31a8e30a-eca6-11ee-bb07-dfd665df7b81.rgb
2024-04-02 21:46:50,265 INFO *******************************************
2024-04-02 21:46:51,682 INFO [Tue Apr  2 21:46:51 2024] Start to Horizon NN Model Convert.
2024-04-02 21:46:51,683 INFO Parsing the input parameter:{'input': {'input_shape': [1, 3, 224, 224], 'expected_input_type': 'YUV444_128', 'original_input_type': 'RGB', 'original_input_layout': 'NCHW', 'means': array([123.675, 116.28 , 103.53 ], dtype=float32), 'scales': array([0.0171248, 0.017507 , 0.0174292], dtype=float32)}}
2024-04-02 21:46:51,684 INFO Parsing the calibration parameter
2024-04-02 21:46:51,684 INFO Parsing the hbdk parameter:{'hbdk_pass_through_params': '--fast --O3', 'input-source': {'input': 'pyramid', '_default_value': 'ddr'}}
2024-04-02 21:46:51,685 INFO HorizonNN version: 0.14.0
2024-04-02 21:46:51,685 INFO HBDK version: 3.37.2
2024-04-02 21:46:51,685 INFO [Tue Apr  2 21:46:51 2024] Start to parse the onnx model.
2024-04-02 21:46:51,770 INFO Input ONNX model infomation:
ONNX IR version:          6
Opset version:            11
Producer:                 pytorch2.2.2
Domain:                   none
Input name:               input, [1, 3, 224, 224]
Output name:              output, [1, 2]
2024-04-02 21:46:52,323 INFO [Tue Apr  2 21:46:52 2024] End to parse the onnx model.
2024-04-02 21:46:52,324 INFO Model input names: ['input']
2024-04-02 21:46:52,324 INFO Create a preprocessing operator for input_name input with means=[123.675 116.28  103.53 ], std=[58.39484253 57.12000948 57.37498298], original_input_layout=NCHW, color convert from 'RGB' to 'YUV_BT601_FULL_RANGE'.
2024-04-02 21:46:52,750 INFO Saving the original float model: resnet18_224x224_nv12_original_float_model.onnx.
2024-04-02 21:46:52,751 INFO [Tue Apr  2 21:46:52 2024] Start to optimize the model.
2024-04-02 21:46:53,782 INFO [Tue Apr  2 21:46:53 2024] End to optimize the model.
2024-04-02 21:46:53,953 INFO Saving the optimized model: resnet18_224x224_nv12_optimized_float_model.onnx.
2024-04-02 21:46:53,953 INFO [Tue Apr  2 21:46:53 2024] Start to calibrate the model.
2024-04-02 21:46:53,954 INFO There are 100 samples in the calibration data set.
2024-04-02 21:46:54,458 INFO Run calibration model with kl method.
2024-04-02 21:47:06,290 INFO [Tue Apr  2 21:47:06 2024] End to calibrate the model.
2024-04-02 21:47:06,291 INFO [Tue Apr  2 21:47:06 2024] Start to quantize the model.
2024-04-02 21:47:09,926 INFO input input is from pyramid. Its layout is set to NHWC
2024-04-02 21:47:10,502 INFO [Tue Apr  2 21:47:10 2024] End to quantize the model.
2024-04-02 21:47:11,101 INFO Saving the quantized model: resnet18_224x224_nv12_quantized_model.onnx.
2024-04-02 21:47:14,165 INFO [Tue Apr  2 21:47:14 2024] Start to compile the model with march bernoulli2.
2024-04-02 21:47:15,502 INFO Compile submodel: main_graph_subgraph_0
2024-04-02 21:47:16,985 INFO hbdk-cc parameters:['--fast', '--O3', '--input-layout', 'NHWC', '--output-layout', 'NHWC', '--input-source', 'pyramid']
2024-04-02 21:47:17,276 INFO INFO: "-j" or "--jobs" is not specified, launch 2 threads for optimization
2024-04-02 21:47:17,277 WARNING missing stride for pyramid input[0], use its aligned width by default.
[==================================================] 100%
2024-04-02 21:47:25,296 INFO consumed time 8.06245
2024-04-02 21:47:25,555 INFO FPS=121.27, latency = 8246.2 us   (see main_graph_subgraph_0.html)
2024-04-02 21:47:25,895 INFO [Tue Apr  2 21:47:25 2024] End to compile the model with march bernoulli2.
2024-04-02 21:47:25,896 INFO The converted model node information:
========================================================================================================================================
Node                                              ON   Subgraph  Type                    Cosine Similarity  Threshold                   
----------------------------------------------------------------------------------------------------------------------------------------
HZ_PREPROCESS_FOR_input                           BPU  id(0)     HzSQuantizedPreprocess  0.999952           127.000000                  
/conv1/Conv                                       BPU  id(0)     HzSQuantizedConv        0.999723           3.186383                    
/maxpool/MaxPool                                  BPU  id(0)     HzQuantizedMaxPool      0.999790           3.562476                    
/layer1/layer1.0/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.999393           3.562476                    
/layer1/layer1.0/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.999360           2.320694                    
/layer1/layer1.1/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.997865           5.567303                    
/layer1/layer1.1/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.998228           2.442273                    
/layer2/layer2.0/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.995588           6.622376                    
/layer2/layer2.0/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.996943           3.076967                    
/layer2/layer2.0/downsample/downsample.0/Conv     BPU  id(0)     HzSQuantizedConv        0.997177           6.622376                    
/layer2/layer2.1/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.996080           3.934074                    
/layer2/layer2.1/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.997443           3.025215                    
/layer3/layer3.0/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.998448           4.853349                    
/layer3/layer3.0/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.998819           2.553357                    
/layer3/layer3.0/downsample/downsample.0/Conv     BPU  id(0)     HzSQuantizedConv        0.998717           4.853349                    
/layer3/layer3.1/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.998631           3.161120                    
/layer3/layer3.1/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.998802           2.501193                    
/layer4/layer4.0/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.999474           5.645166                    
/layer4/layer4.0/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.999709           2.401657                    
/layer4/layer4.0/downsample/downsample.0/Conv     BPU  id(0)     HzSQuantizedConv        0.999250           5.645166                    
/layer4/layer4.1/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.999808           5.394126                    
/layer4/layer4.1/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.999865           3.072157                    
/avgpool/GlobalAveragePool                        BPU  id(0)     HzSQuantizedConv        0.999965           17.365398                   
/fc/Gemm                                          BPU  id(0)     HzSQuantizedConv        0.999967           2.144315                    
/fc/Gemm_NHWC2NCHW_LayoutConvert_Output0_reshape  CPU  --        Reshape
2024-04-02 21:47:25,897 INFO The quantify model output:
===========================================================================
Node      Cosine Similarity  L1 Distance  L2 Distance  Chebyshev Distance  
---------------------------------------------------------------------------
/fc/Gemm  0.999967           0.007190     0.005211     0.008810
2024-04-02 21:47:25,898 INFO [Tue Apr  2 21:47:25 2024] End to Horizon NN Model Convert.
2024-04-02 21:47:26,084 INFO start convert to *.bin file....
2024-04-02 21:47:26,183 INFO ONNX model output num : 1
2024-04-02 21:47:26,184 INFO ############# model deps info #############
2024-04-02 21:47:26,185 INFO hb_mapper version   : 1.9.9
2024-04-02 21:47:26,185 INFO hbdk version        : 3.37.2
2024-04-02 21:47:26,185 INFO hbdk runtime version: 3.14.14
2024-04-02 21:47:26,186 INFO horizon_nn version  : 0.14.0
2024-04-02 21:47:26,186 INFO ############# model_parameters info #############
2024-04-02 21:47:26,186 INFO onnx_model          : /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/best_line_follower_model_xy.onnx
2024-04-02 21:47:26,186 INFO BPU march           : bernoulli2
2024-04-02 21:47:26,187 INFO layer_out_dump      : False
2024-04-02 21:47:26,187 INFO log_level           : DEBUG
2024-04-02 21:47:26,187 INFO working dir         : /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/model_output
2024-04-02 21:47:26,187 INFO output_model_file_prefix: resnet18_224x224_nv12
2024-04-02 21:47:26,188 INFO ############# input_parameters info #############
2024-04-02 21:47:26,188 INFO ------------------------------------------
2024-04-02 21:47:26,188 INFO ---------input info : input ---------
2024-04-02 21:47:26,189 INFO input_name          : input
2024-04-02 21:47:26,189 INFO input_type_rt       : nv12
2024-04-02 21:47:26,189 INFO input_space&range   : regular
2024-04-02 21:47:26,189 INFO input_layout_rt     : None
2024-04-02 21:47:26,190 INFO input_type_train    : rgb
2024-04-02 21:47:26,190 INFO input_layout_train  : NCHW
2024-04-02 21:47:26,190 INFO norm_type           : data_mean_and_scale
2024-04-02 21:47:26,191 INFO input_shape         : 1x3x224x224
2024-04-02 21:47:26,191 INFO mean_value          : 123.675,116.28,103.53,
2024-04-02 21:47:26,191 INFO scale_value         : 0.0171248,0.017507,0.0174292,
2024-04-02 21:47:26,192 INFO cal_data_dir        : /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/calibration_data_bgr_f32
2024-04-02 21:47:26,192 INFO ---------input info : input end -------
2024-04-02 21:47:26,192 INFO ------------------------------------------
2024-04-02 21:47:26,192 INFO ############# calibration_parameters info #############
2024-04-02 21:47:26,193 INFO preprocess_on       : False
2024-04-02 21:47:26,193 INFO calibration_type:   : kl
2024-04-02 21:47:26,193 INFO cal_data_type       : N/A
2024-04-02 21:47:26,194 INFO ############# compiler_parameters info #############
2024-04-02 21:47:26,194 INFO hbdk_pass_through_params: --fast --O3
2024-04-02 21:47:26,194 INFO input-source        : {'input': 'pyramid', '_default_value': 'ddr'}
2024-04-02 21:47:26,226 INFO Convert to runtime bin file sucessfully!
2024-04-02 21:47:26,226 INFO End Model Convert
[root@1e1a1a7e24f4 mapper]# 

After the compilation succeeds, the final model file resnet18_224x224_nv12.bin is generated under the model_output path.
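As a sanity check on the log above: the reported scale_value per channel is simply the reciprocal of the per-channel std used at training time (standard ImageNet normalization):

```python
# Cross-check of values taken from the conversion log: scale = 1 / std.
stds = [58.39484253, 57.12000948, 57.37498298]  # per-channel std from the log
scales = [round(1.0 / s, 7) for s in stds]
print(scales)  # [0.0171248, 0.017507, 0.0174292] -- matches scale_value
```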

Copy the model file resnet18_224x224_nv12.bin back into the line_follower_model package for later deployment.

Model Deployment

Copy the compiled fixed-point model resnet18_224x224_nv12.bin into the model folder of the line_follower_perception package on the OriginCar, replacing the existing model, then rebuild the workspace on the OriginCar.

scp -r ./resnet18_224x224_nv12.bin [email protected]:/root/dev_ws/src/origincar/origincar_deeplearning/line_follower_perception/model/ 

 

After the build completes, deploy the model with the following commands, where the model_path and model_name parameters specify the model's path and name:

cd /root/dev_ws/src/origincar/origincar_deeplearning/line_follower_perception/
ros2 run line_follower_perception line_follower_perception --ros-args -p model_path:=model/resnet18_224x224_nv12.bin -p model_name:=resnet18_224x224_nv12

The command runs as follows:

root@ubuntu:~/dev_ws/src/origincar/origincar_deeplearning/line_follower_perception# ros2 run line_follower_perception line_follower_perception --ros-args -p model_path:=model/resnet18_224x224_nv12.bin -p model_name:=resnet18_224x224_nv12
[INFO] [1712122458.232674628] [dnn]: Node init.
[INFO] [1712122458.233179215] [LineFollowerPerceptionNode]: path:model/resnet18_224x224_nv12.bin

[INFO] [1712122458.233256001] [LineFollowerPerceptionNode]: name:resnet18_224x224_nv12

[INFO] [1712122458.233340036] [dnn]: Model init.
[EasyDNN]: EasyDNN version = 1.6.1_(1.18.6 DNN)
[BPU_PLAT]BPU Platform Version(1.3.3)!
[HBRT] set log level as 0. version = 3.15.25.0
[DNN] Runtime version = 1.18.6_(3.15.25 HBRT)
[A][DNN][packed_model.cpp:234][Model](2024-04-03,13:34:18.775.957) [HorizonRT] The model builder version = 1.9.9
[INFO] [1712122458.918322553] [dnn]: The model input 0 width is 224 and height is 224
[INFO] [1712122458.918465125] [dnn]: Task init.
[INFO] [1712122458.920699164] [dnn]: Set task_num [4]

Starting the Camera

First, place the OriginCar in the line-following scene.

Start the camera driver in zero-copy mode with the following commands to speed up internal image processing:

export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
export CYCLONEDDS_URI='<CycloneDDS><Domain><General><NetworkInterfaceAddress>wlan0</NetworkInterfaceAddress></General></Domain></CycloneDDS>'
ros2 launch origincar_bringup usb_websocket_display.launch.py 

Once the camera is up, the dynamically detected line position can be seen in the line-following terminal.

Starting the Robot

Start the OriginCar base, and the robot begins line-following autonomously:

ros2 launch origincar_base origincar_bringup.launch.py 

Copyright notice: this is an original article by the blogger, released under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/chdlr/article/details/137152412
