
    zhuguiqin1's column: Detectron2 in Practice: Training a Human Keypoint Detection Model

    Author: [db:作者]  Date: 2021-09-11 16:53

    Detectron2 is a mature object detection framework, and its official tutorials are well documented. Following the official guide, this article trains a human keypoint detection model on the COCO 2017 dataset using a single TITAN RTX GPU (24 GB of memory); training took about four days.

    The steps are as follows:

    1. Installing the Detectron2 code base

    Following the official install instructions, installation is straightforward. I installed from local source:

    git clone https://github.com/facebookresearch/detectron2.git
    python -m pip install -e detectron2

    After the first command runs, a local clone of the repository is created with the following directory structure (the output directory shown here is created later, during training):

    detectron2
    ├── configs
    ├── datasets
    ├── demo
    ├── detectron2
    ├── dev
    ├── docker
    ├── docs
    ├── GETTING_STARTED.md
    ├── INSTALL.md
    ├── LICENSE
    ├── MODEL_ZOO.md
    ├── output
    ├── projects
    ├── README.md
    ├── setup.cfg
    ├── setup.py
    ├── tests
    └── tools
    

    2. Downloading the human keypoint dataset

    First, open the official COCO dataset download page.

    On the download page, the Images and Annotations sections list the image archives and the annotation files, respectively.

    From the Images section, download three large archives, corresponding to the training, validation, and test sets:

    2017 Train images [118K/18GB]
    2017 Val images [5K/1GB]
    2017 Test images [41K/6GB]

    From the Annotations section, download one annotation archive:

    2017 Train/Val annotations [241MB]. Unpacking this file yields the following directory structure:

    Among these, person_keypoints_train2017.json and person_keypoints_val2017.json are the training and validation annotation files for human keypoint detection; they are the files we actually need:

    annotations
    ├── captions_train2017.json
    ├── captions_val2017.json
    ├── instances_train2017.json
    ├── instances_val2017.json
    ├── person_keypoints_train2017.json    keypoint annotations for the training set
    └── person_keypoints_val2017.json      keypoint annotations for the validation set
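    In these files, each entry in the annotations list stores 17 keypoints per person as a flat list of [x, y, v] triplets, where v is a visibility flag (0 = not labeled, 1 = labeled but not visible, 2 = labeled and visible). A minimal sketch of reading such an annotation; the sample dict below is illustrative, not taken from the real file:

```python
# Standard COCO keypoint order (17 keypoints for the person category).
COCO_KEYPOINT_NAMES = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

# Hypothetical annotation entry, shaped like those in
# annotations/person_keypoints_val2017.json.
sample_annotation = {
    "image_id": 552,        # illustrative values, not from the real file
    "category_id": 1,       # 1 = person
    "num_keypoints": 2,
    # 17 keypoints flattened as [x1, y1, v1, x2, y2, v2, ...]
    "keypoints": [120, 80, 2, 130, 75, 1] + [0, 0, 0] * 15,
}

def visible_keypoints(ann):
    """Return {name: (x, y)} for keypoints whose visibility flag v > 0."""
    kps = ann["keypoints"]
    out = {}
    for i, name in enumerate(COCO_KEYPOINT_NAMES):
        x, y, v = kps[3 * i : 3 * i + 3]
        if v > 0:
            out[name] = (x, y)
    return out

print(visible_keypoints(sample_annotation))
# {'nose': (120, 80), 'left_eye': (130, 75)}
```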

    Create a coco directory under the datasets directory of the local clone, and move the training set, validation set, and annotation files into it:

    
    datasets
    ├── coco
    │   ├── annotations
    │   ├── test2017
    │   ├── train2017
    │   └── val2017
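    This layout matches Detectron2's built-in COCO dataset convention (by default it looks under ./datasets relative to the working directory). The skeleton can also be created programmatically; a small sketch, with the helper name being my own:

```python
from pathlib import Path

def make_coco_layout(root: str) -> list:
    """Create the datasets/coco directory skeleton that Detectron2 expects.

    Returns the list of created directories. The archives downloaded above
    still need to be extracted into the matching subdirectories.
    """
    base = Path(root) / "datasets" / "coco"
    created = []
    for sub in ["annotations", "train2017", "val2017", "test2017"]:
        d = base / sub
        d.mkdir(parents=True, exist_ok=True)
        created.append(d)
    return created

if __name__ == "__main__":
    import tempfile
    # Demonstrate in a throwaway directory; in practice, pass the clone root.
    with tempfile.TemporaryDirectory() as tmp:
        for d in make_coco_layout(tmp):
            print(d.relative_to(tmp))
```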

    3. Environment configuration and model training
    To train, first change into the detectron2 directory of the code base and run:

    python tools/train_net.py --config-file ./configs/COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml  SOLVER.IMS_PER_BATCH 8 SOLVER.BASE_LR 0.0025
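    The two SOLVER overrides adapt the config for a single GPU: the base config assumes an 8-GPU setup with IMS_PER_BATCH 16 and BASE_LR 0.02, so a smaller batch calls for a smaller learning rate. A sketch of the linear scaling rule; note that strict scaling for batch 8 would give 0.01, so the 0.0025 used above is a more conservative choice (it matches the 1-GPU value suggested in GETTING_STARTED.md for batch size 2):

```python
def linear_scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear scaling rule: learning rate scales with total batch size."""
    return base_lr * new_batch / base_batch

# Reference values from Detectron2's Base-RCNN-FPN config (8 GPUs).
REF_LR, REF_BATCH = 0.02, 16

print(linear_scaled_lr(REF_LR, REF_BATCH, 8))   # strict rule for batch 8
print(linear_scaled_lr(REF_LR, REF_BATCH, 2))   # strict rule for batch 2
```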

    After roughly four days of training, if everything goes well you will get results like the following:

    [07/19 03:54:17] d2.evaluation.evaluator INFO: Total inference pure compute time: 0:05:37 (0.067663 s / iter per device, on 1 devices)
    [07/19 03:54:17] d2.evaluation.coco_evaluation INFO: Preparing results for COCO format ...
    [07/19 03:54:17] d2.evaluation.coco_evaluation INFO: Saving results to ./output/inference/coco_instances_results.json
    [07/19 03:54:19] d2.evaluation.coco_evaluation INFO: Evaluating predictions with unofficial COCO API...
    [07/19 03:54:19] d2.evaluation.fast_eval_api INFO: Evaluate annotation type *bbox*
    [07/19 03:54:19] d2.evaluation.fast_eval_api INFO: COCOeval_opt.evaluate() finished in 0.77 seconds.
    [07/19 03:54:19] d2.evaluation.fast_eval_api INFO: Accumulating evaluation results...
    [07/19 03:54:19] d2.evaluation.fast_eval_api INFO: COCOeval_opt.accumulate() finished in 0.10 seconds.
    [07/19 03:54:19] d2.evaluation.coco_evaluation INFO: Evaluation results for bbox: 
    |   AP   |  AP50  |  AP75  |  APs   |  APm   |  APl   |
    |:------:|:------:|:------:|:------:|:------:|:------:|
    | 55.244 | 83.255 | 60.158 | 36.447 | 63.029 | 72.722 |
    [07/19 03:54:20] d2.evaluation.fast_eval_api INFO: Evaluate annotation type *keypoints*
    [07/19 03:54:25] d2.evaluation.fast_eval_api INFO: COCOeval_opt.evaluate() finished in 5.44 seconds.
    [07/19 03:54:25] d2.evaluation.fast_eval_api INFO: Accumulating evaluation results...
    [07/19 03:54:25] d2.evaluation.fast_eval_api INFO: COCOeval_opt.accumulate() finished in 0.03 seconds.
    [07/19 03:54:25] d2.evaluation.coco_evaluation INFO: Evaluation results for keypoints: 
    |   AP   |  AP50  |  AP75  |  APm   |  APl   |
    |:------:|:------:|:------:|:------:|:------:|
    | 63.696 | 85.916 | 69.254 | 59.249 | 72.083 |
    [07/19 03:54:25] d2.engine.defaults INFO: Evaluation results for keypoints_coco_2017_val in csv format:
    [07/19 03:54:25] d2.evaluation.testing INFO: copypaste: Task: bbox
    [07/19 03:54:25] d2.evaluation.testing INFO: copypaste: AP,AP50,AP75,APs,APm,APl
    [07/19 03:54:25] d2.evaluation.testing INFO: copypaste: 55.2440,83.2547,60.1577,36.4470,63.0290,72.7224
    [07/19 03:54:25] d2.evaluation.testing INFO: copypaste: Task: keypoints
    [07/19 03:54:25] d2.evaluation.testing INFO: copypaste: AP,AP50,AP75,APm,APl
    [07/19 03:54:25] d2.evaluation.testing INFO: copypaste: 63.6964,85.9162,69.2536,59.2487,72.0832
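    The copypaste: lines at the end of the log are designed for easy extraction into a spreadsheet. A small parser sketch, using the summary lines from the log above as sample input:

```python
# Parse Detectron2's "copypaste:" evaluation summary into
# {task: {metric: value}}. Sample text taken from the log above.
LOG = """\
copypaste: Task: bbox
copypaste: AP,AP50,AP75,APs,APm,APl
copypaste: 55.2440,83.2547,60.1577,36.4470,63.0290,72.7224
copypaste: Task: keypoints
copypaste: AP,AP50,AP75,APm,APl
copypaste: 63.6964,85.9162,69.2536,59.2487,72.0832
"""

def parse_copypaste(text):
    """Each task contributes three lines: name, metric header, values."""
    results, task, header = {}, None, None
    for line in text.splitlines():
        payload = line.split("copypaste: ", 1)[-1]
        if payload.startswith("Task: "):
            task, header = payload[len("Task: "):], None
        elif task and header is None:
            header = payload.split(",")
        elif task:
            values = [float(v) for v in payload.split(",")]
            results[task] = dict(zip(header, values))
            task, header = None, None
    return results

metrics = parse_copypaste(LOG)
print(metrics["keypoints"]["AP"])   # 63.6964
```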
    

    Finally, the following model files are generated in the detectron2/output directory:

    config.yaml                                    inference          model_0034999.pth  model_0084999.pth  model_0134999.pth  model_0184999.pth  model_0234999.pth
    events.out.tfevents.1626131087.ubuntu.23424.0  last_checkpoint    model_0039999.pth  model_0089999.pth  model_0139999.pth  model_0189999.pth  model_0239999.pth
    events.out.tfevents.1626131279.ubuntu.24690.0  log.txt            model_0044999.pth  model_0094999.pth  model_0144999.pth  model_0194999.pth  model_0244999.pth
    events.out.tfevents.1626218003.ubuntu.28190.0  metrics.json       model_0049999.pth  model_0099999.pth  model_0149999.pth  model_0199999.pth  model_0249999.pth
    events.out.tfevents.1626218189.ubuntu.29394.0  model_0004999.pth  model_0054999.pth  model_0104999.pth  model_0154999.pth  model_0204999.pth  model_0254999.pth
    events.out.tfevents.1626218366.ubuntu.30715.0  model_0009999.pth  model_0059999.pth  model_0109999.pth  model_0159999.pth  model_0209999.pth  model_0259999.pth
    events.out.tfevents.1626218400.ubuntu.30986.0  model_0014999.pth  model_0064999.pth  model_0114999.pth  model_0164999.pth  model_0214999.pth  model_0264999.pth
    events.out.tfevents.1626218466.ubuntu.31493.0  model_0019999.pth  model_0069999.pth  model_0119999.pth  model_0169999.pth  model_0219999.pth  model_0269999.pth
    events.out.tfevents.1626218502.ubuntu.31791.0  model_0024999.pth  model_0074999.pth  model_0124999.pth  model_0174999.pth  model_0224999.pth  model_final.pth
    events.out.tfevents.1626262859.ubuntu.20243.0  model_0029999.pth  model_0079999.pth  model_0129999.pth  model_0179999.pth  model_0229999.pth
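    The file names follow Detectron2's periodic checkpointing: with the 3x schedule (SOLVER.MAX_ITER 270000) and the default SOLVER.CHECKPOINT_PERIOD of 5000, a model_{iter:07d}.pth file is written every 5000 iterations (iteration indices are zero-based, hence the trailing 999s), plus model_final.pth at the end. A sketch reproducing the listing above:

```python
def checkpoint_names(max_iter=270000, period=5000):
    """Reproduce Detectron2's periodic checkpoint file names.

    Iterations are zero-based, so the checkpoint after the first `period`
    iterations is saved at iteration period - 1 (e.g. model_0004999.pth).
    """
    names = [f"model_{it:07d}.pth" for it in range(period - 1, max_iter, period)]
    names.append("model_final.pth")
    return names

names = checkpoint_names()
print(names[0], names[-2], names[-1])
# model_0004999.pth model_0269999.pth model_final.pth
```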

    We run inference with the last saved model to test the quality of the trained model; judging from the three resulting images, it works very well.

    cd ./detectron2/demo
    python demo.py --config-file ../configs/COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml --output ./output  --input ./keypoints_input/000000000552.jpg  ./keypoints_input/000000001152.jpg   ./keypoints_input/000000581918.jpg --opts MODEL.WEIGHTS  ../output/model_final.pth

    (Keypoint visualizations for the three input images were shown here in the original post.)