Unverified commit 81be5074 authored by i-robot, committed by Gitee

!3046 [Wuhan University of Technology][ONNX][MCNN]

Merge pull request !3046 from 杜闯/MCNN-ONNX
parents 361835b4 eea81a08
@@ -12,6 +12,9 @@
- [Training](#training)
- [Evaluation Process](#evaluation-process)
- [Evaluation](#evaluation)
- [ONNX Export And Evaluation](#onnx-export-and-evaluation)
- [ONNX Export](#onnx-export)
- [ONNX Evaluation](#onnx-evaluation)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
@@ -119,7 +122,8 @@ bash run_infer_310.sh ../mcnn.mindir ../test_data/images ../test_data/ground_tru
├── modelarts
├── scripts
│ ├──run_distribute_train.sh // distributed training
│ ├──run_eval.sh // evaluation on Ascend
│ ├──run_eval_onnx_gpu.sh // evaluation of the exported ONNX models on GPU
│ ├──run_infer_310.sh // inference on Ascend 310
│ ├──run_standalone_train.sh // standalone training
│ ├──run_train_gpu.sh // training on GPU
@@ -134,7 +138,9 @@ bash run_infer_310.sh ../mcnn.mindir ../test_data/images ../test_data/ground_tru
│ ├──Mcnn_Callback.py // Mcnn Callback
├── train.py // training script
├── eval.py // evaluation script
├── eval_onnx.py // evaluation script for the exported ONNX models
├── export.py // export script
├── export_onnx.py // ONNX export script
```
## [Script Parameters](#contents)
@@ -147,6 +153,7 @@ Major parameters in train.py and config.py are as follows:
--batch_size: Training batch size.
--device_target: Device on which the code runs. Optional values are "Ascend", "GPU".
--ckpt_path: Absolute path to the checkpoint file saved after training.
--onnx_path: Absolute path to the directory containing the exported ONNX files.
--train_path: Path to the training images.
--train_gt_path: Path to the training ground-truth labels.
--val_path: Path to the test images.
@@ -243,6 +250,34 @@ Before running the command below, please check the checkpoint path used for eval
MAE: 105.87984801910736 MSE: 161.6687899899305
```
## [ONNX Export And Evaluation](#contents)
Note that all ONNX-related scripts must be run on GPU.
### ONNX Export
The command below produces multiple MCNN ONNX files, one for each distinct input shape in the evaluation data; each file is named "<height>_<width>.onnx".
```bash
python export_onnx.py --ckpt_file [CKPT_PATH] --val_path [VAL_PATH] --val_gt_path [VAL_GT_PATH]
# example: python export_onnx.py --ckpt_file mcnn_ascend_v170_shanghaitecha_official_cv_MAE112.11.ckpt --val_path /data0/dc/mcnn/models/official/cv/MCNN/data/original/shanghaitech/part_A_final/test_data/images/ --val_gt_path /data0/dc/mcnn/models/official/cv/MCNN/data/original/shanghaitech/part_A_final/test_data/ground_truth_csv/
```
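If you want to verify the export, the sketch below (an illustration only, not part of this repository) iterates over the generated *.onnx files in the current directory, prints the fixed input shape recorded in each file, and compares the ONNX Runtime output with the original MindSpore network on random input. It assumes the checkpoint used for export is available locally; "best.ckpt" is a placeholder to replace with your actual path.

```python
# Illustrative sketch only: check each exported MCNN ONNX model against the checkpoint.
import glob

import numpy as np
import onnxruntime as ort
import mindspore
from mindspore import Tensor, context, load_checkpoint, load_param_into_net
from src.mcnn import MCNN

context.set_context(mode=context.GRAPH_MODE, device_target="GPU")  # adjust to your MindSpore build

network = MCNN()
load_param_into_net(network, load_checkpoint("best.ckpt"))  # placeholder checkpoint path
network.set_train(False)

for onnx_file in sorted(glob.glob("*.onnx")):
    sess = ort.InferenceSession(onnx_file, providers=["CUDAExecutionProvider"])
    inp = sess.get_inputs()[0]
    # Shapes are fixed at export time, so every dimension is a concrete integer
    # and the file name encodes "<height>_<width>".
    x = np.random.rand(*inp.shape).astype(np.float32)
    onnx_out = sess.run(None, {inp.name: x})[0]
    ms_out = network(Tensor(x, mindspore.float32)).asnumpy()
    print(onnx_file, inp.shape, "max abs diff:", np.max(np.abs(onnx_out - ms_out)))
```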
### ONNX Evaluation
Note that ONNX_PATH should be the absolute path of the directory containing the exported ONNX files, including the trailing '/', for example: '/data0/dc/mcnn/models/official/cv/MCNN/'.
```bash
bash run_eval_onnx_gpu.sh [VAL_PATH] [VAL_GT_PATH] [ONNX_PATH]
# example: bash run_eval_onnx_gpu.sh /data0/dc/mcnn/models/official/cv/MCNN/data/original/shanghaitech/part_A_final/test_data/images/ /data0/dc/mcnn/models/official/cv/MCNN/data/original/shanghaitech/part_A_final/test_data/ground_truth_csv/ /data0/dc/mcnn/models/official/cv/MCNN/
```
You can view the results in the file "log_onnx". The accuracy on the test dataset will be as follows:
```text
MAE: 112.11429375868578 MSE: 172.62108098880813
```
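For reference, these are count-level metrics: for each test image the predicted and ground-truth crowd counts are obtained by summing the corresponding density maps, and the errors are then averaged over the test set (the value reported as "MSE" is computed as a root-mean-square error in eval_onnx.py). A minimal restatement with hypothetical counts:

```python
# Restatement of the metric used in eval_onnx.py (counts are density-map sums).
import numpy as np

def count_metrics(gt_counts, et_counts):
    gt = np.asarray(gt_counts, dtype=np.float64)
    et = np.asarray(et_counts, dtype=np.float64)
    mae = np.mean(np.abs(gt - et))
    mse = np.sqrt(np.mean((gt - et) ** 2))  # reported as "MSE", computed as RMSE
    return mae, mse

# Hypothetical three-image example:
print(count_metrics([120.0, 80.0, 200.0], [110.0, 95.0, 230.0]))
```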
# [Model Description](#contents)
## [Performance](#contents)
......
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""
eval for exported onnx model
"""
import argparse
import onnxruntime as ort
from mindspore.common import set_seed
import numpy as np
from src.dataset import create_dataset
from src.data_loader_3channel import ImageDataLoader_3channel
parser = argparse.ArgumentParser(description='MindSpore MCNN Example')
parser.add_argument('--onnx_path', type=str, default="./", help='Directory containing the exported ONNX models.')
parser.add_argument('--val_path',
                    default='../MCNN/data/original/shanghaitech/part_A_final/test_data/images',
                    help='Location of test images.')
parser.add_argument('--val_gt_path',
                    default='../MCNN/data/original/shanghaitech/part_A_final/test_data/ground_truth_csv',
                    help='Location of ground-truth CSV files.')
args = parser.parse_args()
set_seed(64678)
def create_session(onnx_checkpoint_path):
    """Create an ONNX Runtime session on GPU and return it together with its input name."""
    providers = ['CUDAExecutionProvider']
    session = ort.InferenceSession(onnx_checkpoint_path, providers=providers)
    input_name = session.get_inputs()[0].name
    return session, input_name
if __name__ == "__main__":
    local_path = args.val_path
    local_gt_path = args.val_gt_path
    onnx_path = args.onnx_path

    data_loader_val = ImageDataLoader_3channel(local_path, local_gt_path, shuffle=False, gt_downsample=True,
                                               pre_load=True)
    ds_val = create_dataset(data_loader_val, target="GPU", train=False)
    ds_val = ds_val.batch(1)

    mae = 0.0
    mse = 0.0
    for sample in ds_val.create_dict_iterator(output_numpy=True):
        im_data = sample['data']
        gt_data = sample['gt_density']
        # Each test image has its own spatial size, so load the ONNX model
        # exported for this height/width (see export_onnx.py).
        im_data_shape = im_data.shape
        onnx_file = onnx_path + str(im_data_shape[2]) + '_' + str(im_data_shape[3]) + '.onnx'
        sess, input_each = create_session(onnx_file)
        density_map = sess.run(None, {input_each: im_data})[0]
        # Crowd counts are the sums of the density maps.
        gt_count = np.sum(gt_data)
        et_count = np.sum(density_map)
        mae += abs(gt_count - et_count)
        mse += ((gt_count - et_count) * (gt_count - et_count))
    mae = mae / ds_val.get_dataset_size()
    mse = np.sqrt(mse / ds_val.get_dataset_size())  # reported "MSE" is a root-mean-square error
    print('MAE:', mae, ' MSE:', mse)
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""export checkpoint file into onnx models"""
import argparse as argp
import mindspore
from mindspore import Tensor, context, load_checkpoint, load_param_into_net, export
import numpy as np
from src.mcnn import MCNN
from src.dataset import create_dataset
from src.data_loader_3channel import ImageDataLoader_3channel
parser = argp.ArgumentParser(description='MCNN ONNX Export ')
parser.add_argument("--device_id", type=int, default=4, help="Device id")
parser.add_argument("--ckpt_file", type=str, default="./best.ckpt", help="Checkpoint file path.")
parser.add_argument('--val_path',
                    default='../MCNN/data/original/shanghaitech/part_A_final/test_data/images',
                    help='Location of test images.')
parser.add_argument('--val_gt_path',
                    default='../MCNN/data/original/shanghaitech/part_A_final/test_data/ground_truth_csv',
                    help='Location of ground-truth CSV files.')
args = parser.parse_args()
context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
context.set_context(device_id=args.device_id)
if __name__ == "__main__":
    data_loader_val = ImageDataLoader_3channel(args.val_path, args.val_gt_path, shuffle=False, gt_downsample=True,
                                               pre_load=True)
    ds_val = create_dataset(data_loader_val, target="GPU", train=False)
    ds_val = ds_val.batch(1)

    network = MCNN()
    param_dict = load_checkpoint(args.ckpt_file)
    load_param_into_net(network, param_dict)

    for sample in ds_val.create_dict_iterator(output_numpy=True):
        # Export one ONNX file per distinct input size, named "<height>_<width>.onnx".
        im_data_shape = sample['data'].shape
        export_path = str(im_data_shape[2]) + '_' + str(im_data_shape[3])
        inputs = Tensor(np.ones(list(im_data_shape)), mindspore.float32)
        export(network, inputs, file_name=export_path, file_format="ONNX")
numpy
pandas
opencv-python
onnxruntime-gpu
@@ -16,7 +16,7 @@
if [ $# != 4 ]
then
echo "Usage: sh run_eval.sh [RUN_OFFLINE] [VAL_PATH] [VAL_GT_PATH] [CKPT_PATH]"
echo "Usage: bash run_eval.sh [RUN_OFFLINE] [VAL_PATH] [VAL_GT_PATH] [CKPT_PATH]"
exit 1
fi
......
#!/bin/bash
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
if [ $# != 3 ]
then
echo "Usage: bash run_eval_onnx_gpu.sh [VAL_PATH] [VAL_GT_PATH] [ONNX_PATH]"
exit 1
fi
ulimit -u unlimited
export DEVICE_ID=0
export RANK_SIZE=1
export VAL_PATH=$1
export VAL_GT_PATH=$2
export ONNX_PATH=$3
if [ -d "eval_onnx" ];
then
rm -rf ./eval_onnx
fi
mkdir ./eval_onnx
cp ../*.py ./eval_onnx
cp *.sh ./eval_onnx
cp -r ../src ./eval_onnx
cd ./eval_onnx || exit
env > env_onnx.log
echo "start evaluation for device $DEVICE_ID"
python eval_onnx.py --val_path=$VAL_PATH \
--val_gt_path=$VAL_GT_PATH --onnx_path=$ONNX_PATH &> log_onnx &
cd ..