Commit c0537b98 authored by 格物致知

[Wuhan University of Technology][ONNX] PSPNet

parent d5528935
# Contents
- [Contents](#contents)
- [PSPNet Description](#pspnet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
@@ -16,10 +17,11 @@
- [Evaluation Result](#evaluation-result)
- [Export MindIR](#export-mindir)
- [310 infer](#310-infer)
- [ONNX CPU infer](#onnx-cpu-infer)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [Inference Performance](#inference-performance)
- [Distributed Training Performance](#distributed-training-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
@@ -36,6 +38,8 @@ The pyramid pooling module fuses features under four different pyramid scales. For
# [Dataset](#contents)
- [Semantic Boundaries Dataset](http://home.bharathh.info/pubs/codes/SBD/download.html)
- [PASCAL VOC 2012 Website](http://host.robots.ox.ac.uk/pascal/VOC/voc2012)
- It contains 11,355 finely annotated images, split into training and validation sets of 8,498 and 2,857 images respectively.
- The paths in train.txt and val.txt are partial (relative) paths, and the .mat files in the cls directory need to be converted to images. You can run preprocess_dataset.py to convert the .mat files and generate train_list.txt and val_list.txt, as shown below:
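A minimal invocation might look like the following; the flag name below is an assumption, so check the script's argparse help for the real options:
```shell
# hypothetical flag; run `python preprocess_dataset.py --help` for the actual arguments
python preprocess_dataset.py --data_dir /path/to/SBD
```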
@@ -112,9 +116,10 @@ Datasets: attributes (names and colors) are needed; please download them as follows:
└── voc2012_pspnet50.yaml
├── src # PSPNet
├── dataset # data processing
├── pt_dataset.py
├── create_voc_list.py # generate train_list.txt and val_list.txt
└── pt_transform.py
├── model # models for training and test
├── PSPNet.py
├── resnet.py
@@ -130,6 +135,7 @@ Datasets: attributes (names and colors) are needed; please download them as follows:
├── run_distribute_train_ascend.sh # multi cards distributed training in ascend
├── run_train1p_ascend.sh # 1P training in ascend
├── run_infer_310.sh # 310 infer
├── run_eval_onnx_cpu.sh # ONNX infer
└── run_eval.sh # validation script
└── train.py # The training python file for ADE20K/VOC2012
```
@@ -208,10 +214,18 @@ epoch time: 428776.845 ms, per step time: 403.365 ms
Before running the following commands, check the checkpoint path used for evaluation in config/ade20k_pspnet50.yaml and config/voc2012_pspnet50.yaml.
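The entry to check might look like the snippet below; the key name (ckpt under TEST) is inferred from args.ckpt in the eval code and should be verified against the actual yaml files:
```yaml
TEST:
  ckpt: /path/to/PSPNet_best.ckpt  # assumed key name; confirm in your config
```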
#### Evaluation on GPU
```shell
bash run_eval.sh [YAML_PATH] [DEVICE_ID]
```
#### Evaluation on CPU
```shell
bash run_eval.sh [YAML_PATH] cpu
```
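When the second argument is cpu, run_eval.sh dispatches to eval_cpu.py instead of eval.py (see scripts/run_eval.sh), so make sure device_target in the config file is set accordingly.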
### Evaluation Result
The results in eval.log are as follows:
@@ -221,13 +235,21 @@ ADE20K: mIoU/mAcc/allAcc 0.4164/0.5319/0.7996.
VOC2012: mIoU/mAcc/allAcc 0.7380/0.8229/0.9293.
```
## [Export](#contents)
### Export MindIR
```shell
python export.py --yaml_path [YAML_PATH] --ckpt_file [CKPT_PATH]
```
The ckpt_file parameter is required.
### Export ONNX
```shell
python export.py --yaml_path [YAML_PATH] --ckpt_file [CKPT_PATH] --file_format ONNX
```
Both the ckpt_file and yaml_path parameters are required.
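To confirm the export succeeded, a quick sanity check with onnxruntime (the same API eval_onnx_cpu.py uses) might look like this; the model path is illustrative:
```python
import onnxruntime

# PSPNet inference is CPU-only here (see eval_onnx_cpu.py), so use the CPU provider
sess = onnxruntime.InferenceSession("PSPNet.onnx", providers=["CPUExecutionProvider"])
print([(inp.name, inp.shape) for inp in sess.get_inputs()])
```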
## 310 infer
@@ -237,6 +259,14 @@ The ckpt_file parameter is required.
bash run_infer_310.sh [MINDIR_PATH] [YAML_PATH] [DATA_PATH] [DEVICE_ID]
```
## ONNX CPU infer
- Note: before running ONNX CPU inference, export the ONNX model first.
```shell
bash PSPNet/scripts/run_eval_onnx_cpu.sh PSPNet/config/voc2012_pspnet50.yaml
```
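The evaluation log is written to ./LOG/eval_onnx.txt by the script.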
# [Model Description](#contents)
## Performance
@@ -51,3 +51,7 @@ TEST:
result_path: ./result/ade/
color_txt: ./ade20k/ade20k_colors.txt
name_txt: ./ade20k/ade20k_names.txt
ONNX_INFER:
onnx_path: /home/mindspore/pspnet/PSPNet/PSPNet.onnx
device_target: cpu
\ No newline at end of file
@@ -51,3 +51,8 @@ TEST:
result_path: ./result/voc/
color_txt: ./voc2012/voc2012_colors.txt
name_txt: ./voc2012/voc2012_names.txt
ONNX_INFER:
onnx_path: /home/mindspore/pspnet/PSPNet/PSPNet.onnx
device_target: cpu
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
""" Evaluate on CPU Platform. """
import os
import time
import logging
import argparse
import cv2
import numpy
from src.dataset import pt_dataset, pt_transform
import src.utils.functions_args as fa
from src.utils.p_util import AverageMeter as AM
from src.utils.p_util import intersectionAndUnion, check_makedirs, colorize
import mindspore.numpy as np
import mindspore
from mindspore import Tensor
import mindspore.dataset as ds
from mindspore import context
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore.train.serialization import load_param_into_net, load_checkpoint
def get_log():
""" Eval Logger """
logger_name = "eval-logger"
log = logging.getLogger(logger_name)
log.setLevel(logging.INFO)
handler = logging.StreamHandler()
fmt = "[%(asctime)s %(levelname)s %(filename)s line %(lineno)d %(process)d] %(message)s"
handler.setFormatter(logging.Formatter(fmt))
log.addHandler(handler)
return log
def main():
""" The main function of the evaluate process """
logger.info("=> creating PSPNet model ...")
logger.info("Class number: %s", args.classes)
value_scale = 255
mean = [0.485, 0.456, 0.406]
mean = [item * value_scale for item in mean]
std = [0.229, 0.224, 0.225]
std = [item * value_scale for item in std]
gray_folder = os.path.join(args.result_path, 'gray')
color_folder = os.path.join(args.result_path, 'color')
test_transform = pt_transform.Compose([pt_transform.Normalize(mean=mean, std=std, is_train=False)])
test_data = pt_dataset.SemData(
split='val', data_root=args.data_root,
data_list=args.val_list,
transform=test_transform)
test_loader = ds.GeneratorDataset(test_data, column_names=["data", "label"], shuffle=False)
    # samples are iterated one at a time (unbatched); each item is a (C, H, W) image tensor
colors = numpy.loadtxt(args.color_txt).astype('uint8')
names = [line.rstrip('\n') for line in open(args.name_txt)]
from src.model import pspnet
PSPNet = pspnet.PSPNet(
feature_size=args.feature_size,
num_classes=args.classes,
backbone=args.backbone,
pretrained=False,
pretrained_path="",
aux_branch=False,
deep_base=True
)
ms_checkpoint = load_checkpoint(args.ckpt)
load_param_into_net(PSPNet, ms_checkpoint, strict_load=True)
PSPNet.set_train(False)
test_model(test_loader, test_data.data_list, PSPNet, args.classes, mean, std, args.base_size, args.test_h,
args.test_w, args.scales, gray_folder, color_folder, colors)
if args.split != 'test':
calculate_acc(test_data.data_list, gray_folder, args.classes, names, colors)
def net_process(model, image, mean, std=None, flip=True):
""" Give the input to the model"""
transpose = ops.Transpose()
input_ = transpose(image, (2, 0, 1)) # (473, 473, 3) -> (3, 473, 473)
    mean = np.array(mean)
    if std is None:
        input_ = input_ - mean[:, None, None]
    else:
        std = np.array(std)
        input_ = (input_ - mean[:, None, None]) / std[:, None, None]
expand_dim = ops.ExpandDims()
input_ = expand_dim(input_, 0)
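    # optional test-time augmentation: also run the horizontally flipped image and average both predictions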
if flip:
flip_input = np.flip(input_, axis=3)
concat = ops.Concat(axis=0)
input_ = concat((input_, flip_input))
model.set_train(False)
output = model(input_)
_, _, h_i, w_i = input_.shape
_, _, h_o, w_o = output.shape
if (h_o != h_i) or (w_o != w_i):
bi_linear = nn.ResizeBilinear()
output = bi_linear(output, size=(h_i, w_i), align_corners=True)
softmax = nn.Softmax(axis=1)
output = softmax(output)
if flip:
output = (output[0] + np.flip(output[1], axis=2)) / 2
else:
output = output[0]
output = transpose(output, (1, 2, 0)) # Tensor
output = output.asnumpy()
return output
def calculate_acc(data_list, pred_folder, classes, names, colors):
""" Calculation evaluating indicator """
colors = colors.tolist()
overlap_meter = AM()
union_meter = AM()
target_meter = AM()
length = len(data_list)
for index, (image_path, target_path) in enumerate(data_list):
image_name = image_path.split('/')[-1].split('.')[0]
pred = cv2.imread(os.path.join(pred_folder, image_name + '.png'), cv2.IMREAD_GRAYSCALE)
if args.prefix == 'voc':
target = cv2.imread(target_path)
target = cv2.cvtColor(target, cv2.COLOR_BGR2RGB)
anno_label = convert(target, colors)
if args.prefix == 'ADE':
anno_label = cv2.imread(target_path, cv2.IMREAD_GRAYSCALE)
anno_label -= 1
overlap, union, target = intersectionAndUnion(pred, anno_label, classes)
overlap_meter.update(overlap)
union_meter.update(union)
target_meter.update(target)
acc = sum(overlap_meter.val) / (sum(target_meter.val) + 1e-10)
logger.info(
'Evaluating {0}/{1} on image {2}, accuracy {3:.4f}.'.format(index + 1, length, image_name + '.png', acc))
iou_class = overlap_meter.sum / (union_meter.sum + 1e-10)
accuracy_class = overlap_meter.sum / (target_meter.sum + 1e-10)
mIoU = numpy.mean(iou_class)
mAcc = numpy.mean(accuracy_class)
allAcc = sum(overlap_meter.sum) / (sum(target_meter.sum) + 1e-10)
logger.info('Eval result: mIoU/mAcc/allAcc {:.4f}/{:.4f}/{:.4f}.'.format(mIoU, mAcc, allAcc))
for i in range(classes):
logger.info('Class_{} result: iou/accuracy {:.4f}/{:.4f}, name: {}.'.format(i, iou_class[i], accuracy_class[i],
names[i]))
def scale_proc(model, image, classes, crop_h, crop_w, h, w, mean, std=None, stride_rate=2 / 3):
""" Process input size """
ori_h, ori_w, _ = image.shape
pad_h = max(crop_h - ori_h, 0)
pad_w = max(crop_w - ori_w, 0)
pad_h_half = int(pad_h / 2)
pad_w_half = int(pad_w / 2)
if pad_h > 0 or pad_w > 0:
image = cv2.copyMakeBorder(image, pad_h_half, pad_h - pad_h_half, pad_w_half, pad_w - pad_w_half,
cv2.BORDER_CONSTANT, value=mean)
new_h, new_w, _ = image.shape
image = Tensor.from_numpy(image)
stride_h = int(numpy.ceil(crop_h * stride_rate))
stride_w = int(numpy.ceil(crop_w * stride_rate))
g_h = int(numpy.ceil(float(new_h - crop_h) / stride_h) + 1)
g_w = int(numpy.ceil(float(new_w - crop_w) / stride_w) + 1)
pred_crop = numpy.zeros((new_h, new_w, classes), dtype=float)
count_crop = numpy.zeros((new_h, new_w), dtype=float)
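    # slide crop_h x crop_w windows over the padded image; count_crop tracks per-pixel coverage so overlapping predictions can be averaged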
for idh in range(0, g_h):
for idw in range(0, g_w):
s_h = idh * stride_h
e_h = min(s_h + crop_h, new_h)
s_h = e_h - crop_h
s_w = idw * stride_w
e_w = min(s_w + crop_w, new_w)
s_w = e_w - crop_w
image_crop = image[s_h:e_h, s_w:e_w].copy()
count_crop[s_h:e_h, s_w:e_w] += 1
pred_crop[s_h:e_h, s_w:e_w, :] += net_process(model, image_crop, mean, std)
pred_crop /= numpy.expand_dims(count_crop, 2)
pred_crop = pred_crop[pad_h_half:pad_h_half + ori_h, pad_w_half:pad_w_half + ori_w]
pred = cv2.resize(pred_crop, (w, h), interpolation=cv2.INTER_LINEAR)
return pred
def get_parser():
"""
Read parameter file
-> for ADE20k: ./config/ade20k_pspnet50.yaml
-> for voc2012: ./config/voc2012_pspnet50.yaml
"""
parser = argparse.ArgumentParser(description='MindSpore Semantic Segmentation')
parser.add_argument('--config', type=str, required=True, default='./config/ade20k_pspnet50.yaml',
help='config file')
parser.add_argument('opts', help='see ./config/voc2012_pspnet50.yaml for all options', default=None,
nargs=argparse.REMAINDER)
args_ = parser.parse_args()
assert args_.config is not None
cfg = fa.load_cfg_from_cfg_file(args_.config)
if args_.opts is not None:
cfg = fa.merge_cfg_from_list(cfg, args_.opts)
return cfg
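# typical invocation (see scripts/run_eval.sh): python3 eval_cpu.py --config=config/voc2012_pspnet50.yaml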
def test_model(data_iter, path_iter, model, classes, mean, std, origin_size, c_h, c_w, scales, gray_folder,
color_folder, colors):
""" Generate evaluate image """
logger.info('>>>>>>>>>>>>>>>>Evaluation Start>>>>>>>>>>>>>>>>')
model.set_train(False)
batch_time = AM()
data_time = AM()
begin_time = time.time()
scales_num = len(scales)
img_num = len(path_iter)
for i, (input_, _) in enumerate(data_iter):
data_time.update(time.time() - begin_time)
input_ = input_.asnumpy()
image = numpy.transpose(input_, (1, 2, 0))
h, w, _ = image.shape
pred = numpy.zeros((h, w, classes), dtype=float)
for ratio in scales:
long_size = round(ratio * origin_size)
new_h = long_size
new_w = new_h
if h < w:
new_h = round(long_size / float(w) * h)
else:
new_w = round(long_size / float(h) * w)
new_image = cv2.resize(image, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
pred = pred + scale_proc(model, new_image, classes, c_h, c_w, h, w, mean, std)
pred = pred / scales_num
pred = numpy.argmax(pred, axis=2)
batch_time.update(time.time() - begin_time)
begin_time = time.time()
if ((i + 1) % 10 == 0) or (i + 1 == img_num):
logger.info('Test: [infer {} images, {} images in total] '
'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
'Batch {batch_time.val:.3f} ({batch_time.avg:.3f}).'.format(i + 1, img_num,
data_time=data_time,
batch_time=batch_time))
check_makedirs(color_folder)
check_makedirs(gray_folder)
gray = numpy.uint8(pred)
image_path, _ = path_iter[i]
color = colorize(gray, colors)
image_name = image_path.split('/')[-1].split('.')[0]
cv2.imwrite(os.path.join(gray_folder, image_name + '.png'), gray)
color.save(os.path.join(color_folder, image_name + '.png'))
logger.info('<<<<<<<<<<<<<<<<< End Evaluation <<<<<<<<<<<<<<<<<')
def convert(label, colors):
"""Convert classification ids in labels."""
annotation = numpy.zeros((label.shape[0], label.shape[1]))
for i in range(len(label)):
for j in range(len(label[i])):
if colors.count(label[i][j].tolist()):
annotation[i][j] = colors.index(label[i][j].tolist())
else:
annotation[i][j] = 0
a = Tensor(annotation, dtype=mindspore.uint8)
annotation = a.asnumpy()
return annotation
if __name__ == '__main__':
cv2.ocl.setUseOpenCL(False)
context.set_context(mode=context.GRAPH_MODE, device_target="CPU", save_graphs=False)
args = get_parser()
logger = get_log()
main()
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
""" ONNX EVALUATE CPU"""
import os
import time
import logging
import argparse
import cv2
import numpy
from src.dataset import pt_dataset
from src.dataset import pt_transform as trans
import src.utils.functions_args as fa
from src.utils.p_util import AverageMeter, intersectionAndUnion, check_makedirs, colorize
import mindspore.numpy as np
import mindspore
from mindspore import Tensor
import mindspore.dataset as ds
from mindspore import context
import mindspore.nn as nn
import mindspore.ops as ops
import onnxruntime
cv2.ocl.setUseOpenCL(False)
context.set_context(mode=context.GRAPH_MODE, device_target="CPU",
save_graphs=False)
def get_logger():
""" logger """
logger_name = "main-logger"
log = logging.getLogger(logger_name)
log.setLevel(logging.INFO)
handler = logging.StreamHandler()
fmt = "[%(asctime)s %(levelname)s %(filename)s line %(lineno)d %(process)d] %(message)s"
handler.setFormatter(logging.Formatter(fmt))
log.addHandler(handler)
return log
# Since onnxruntime-gpu cannot run PSPNet inference on the GPU platform, this function only provides the CPUExecutionProvider.
def getonnxmodel():
model = onnxruntime.InferenceSession(args.onnx_path, providers=['CPUExecutionProvider'])
return model
def get_config():
config_parser = argparse.ArgumentParser(description='MindSpore Semantic Segmentation')
config_parser.add_argument('--config', type=str, required=True, help='config file')
config_parser.add_argument('opts', help='see ./src/config/voc2012_pspnet50.yaml for all options', default=None,
nargs=argparse.REMAINDER)
args_ = config_parser.parse_args()
assert args_.config is not None
cfg = fa.load_cfg_from_cfg_file(args_.config)
if args_.opts is not None:
cfg = fa.merge_cfg_from_list(cfg, args_.opts)
return cfg
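# typical invocation (see scripts/run_eval_onnx_cpu.sh): python3 eval_onnx_cpu.py --config=config/voc2012_pspnet50.yaml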
def main():
""" The main function of the evaluate process """
logger.info("=> Load PSPNet ...")
logger.info("%s: class num:%s", args.prefix, args.classes)
value_scale = 255
m = [0.485, 0.456, 0.406]
m = [item * value_scale for item in m]
s = [0.229, 0.224, 0.225]
s = [item * value_scale for item in s]
gray_folder = os.path.join(args.result_path, 'gray')
color_folder = os.path.join(args.result_path, 'color')
test_transform = trans.Compose([trans.Normalize(mean=m, std=s, is_train=False)])
test_data = pt_dataset.SemData(
split='val', data_root=args.data_root,
data_list=args.val_list,
transform=test_transform)
test_loader = ds.GeneratorDataset(test_data, column_names=["data", "label"],
shuffle=False)
    # samples are iterated one at a time (unbatched); each item is a (C, H, W) image tensor
colors = numpy.loadtxt(args.color_txt).astype('uint8')
names = [line.rstrip('\n') for line in open(args.name_txt)]
model = getonnxmodel()
test(test_loader, test_data.data_list, model, args.classes, m, s, args.base_size, args.test_h,
args.test_w, args.scales, gray_folder, color_folder, colors)
if args.split != 'test':
calculate_acc(test_data.data_list, gray_folder, args.classes, names, colors)
def net_process(model, image, mean, std=None, flip=False):
""" Give the input to the model"""
transpose = ops.Transpose()
input_ = transpose(image, (2, 0, 1)) # (473, 473, 3) -> (3, 473, 473)
    mean = np.array(mean)
    if std is None:
        input_ = input_ - mean[:, None, None]
    else:
        std = np.array(std)
        input_ = (input_ - mean[:, None, None]) / std[:, None, None]
expand_dim = ops.ExpandDims()
input_ = expand_dim(input_, 0)
if flip:
flip_input = np.flip(input_, axis=3)
concat = ops.Concat(axis=0)
input_ = concat((input_, flip_input))
input_ = input_.asnumpy()
inputs = {model.get_inputs()[0].name: input_}
output = model.run(None, inputs)
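    # model.run returns a list of output arrays; wrapping it in Tensor adds a leading dimension, hence the 5-D shape unpacked below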
output = Tensor(output)
_, _, h_i, w_i = input_.shape
_, _, _, h_o, w_o = output.shape
if (h_o != h_i) or (w_o != w_i):
bi_linear = nn.ResizeBilinear()
output = bi_linear(output, size=(h_i, w_i), align_corners=True)
softmax = nn.Softmax(axis=2)
output = softmax(output)
if flip:
output = (output[0] + np.flip(output[1], axis=2)) / 2
else:
output = output[0][0]
output = transpose(output, (1, 2, 0)) # Tensor
output = output.asnumpy()
return output
def scale_proc(model, image, classes, crop_h, crop_w, h, w, mean, std=None, stride_rate=2 / 3):
""" Process input size """
ori_h, ori_w, _ = image.shape
pad_h = max(crop_h - ori_h, 0)
pad_w = max(crop_w - ori_w, 0)
pad_h_half = int(pad_h / 2)
pad_w_half = int(pad_w / 2)
if pad_h > 0 or pad_w > 0:
image = cv2.copyMakeBorder(image, pad_h_half, pad_h - pad_h_half, pad_w_half, pad_w - pad_w_half,
cv2.BORDER_CONSTANT, value=mean)
new_h, new_w, _ = image.shape
image = Tensor.from_numpy(image)
stride_h = int(numpy.ceil(crop_h * stride_rate))
stride_w = int(numpy.ceil(crop_w * stride_rate))
g_h = int(numpy.ceil(float(new_h - crop_h) / stride_h) + 1)
g_w = int(numpy.ceil(float(new_w - crop_w) / stride_w) + 1)
pred_crop = numpy.zeros((new_h, new_w, classes), dtype=float)
count_crop = numpy.zeros((new_h, new_w), dtype=float)
for idh in range(0, g_h):
for idw in range(0, g_w):
s_h = idh * stride_h
e_h = min(s_h + crop_h, new_h)
s_h = e_h - crop_h
s_w = idw * stride_w
e_w = min(s_w + crop_w, new_w)
s_w = e_w - crop_w
image_crop = image[s_h:e_h, s_w:e_w].copy()
count_crop[s_h:e_h, s_w:e_w] += 1
pred_crop[s_h:e_h, s_w:e_w, :] += net_process(model, image_crop, mean, std)
pred_crop /= numpy.expand_dims(count_crop, 2)
pred_crop = pred_crop[pad_h_half:pad_h_half + ori_h, pad_w_half:pad_w_half + ori_w]
pred = cv2.resize(pred_crop, (w, h), interpolation=cv2.INTER_LINEAR)
return pred
def test(test_loader, data_list, model, classes, m, s, base_size, crop_h, crop_w, scales, gray_folder,
color_folder, colors):
""" Generate evaluate image """
logger.info('>>>>>>>>>>>>>>>> Start Evaluation >>>>>>>>>>>>>>>>')
data_time = AverageMeter()
batch_time = AverageMeter()
end = time.time()
data_num = len(data_list)
scales_num = len(scales)
for index, (input_, _) in enumerate(test_loader):
data_time.update(time.time() - end)
input_ = input_.asnumpy()
image = numpy.transpose(input_, (1, 2, 0))
        height, width, _ = image.shape
        pred = numpy.zeros((height, width, classes), dtype=float)
        for ratio in scales:
            long_size = round(ratio * base_size)
            new_h = long_size
            new_w = long_size
            if height > width:
                new_w = round(long_size / float(height) * width)
            else:
                new_h = round(long_size / float(width) * height)
            image_scale = cv2.resize(image, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
            pred += scale_proc(model, image_scale, classes, crop_h, crop_w, height, width, m, s)
pred = pred / scales_num
pred = numpy.argmax(pred, axis=2)
batch_time.update(time.time() - end)
end = time.time()
if ((index + 1) % 10 == 0) or (index + 1 == data_num):
logger.info('Test: [{}/{}] '
'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
'Batch {batch_time.val:.3f} ({batch_time.avg:.3f}).'.format(index + 1, data_num,
data_time=data_time,
batch_time=batch_time))
check_makedirs(gray_folder)
check_makedirs(color_folder)
gray = numpy.uint8(pred)
color = colorize(gray, colors)
image_path, _ = data_list[index]
image_name = image_path.split('/')[-1].split('.')[0]
gray_img = os.path.join(gray_folder, image_name + '.png')
color_img = os.path.join(color_folder, image_name + '.png')
cv2.imwrite(gray_img, gray)
color.save(color_img)
logger.info('<<<<<<<<<<<<<<<<< End Evaluation <<<<<<<<<<<<<<<<<')
def convert_label(label, colors):
"""Convert classification ids in labels."""
mask_map = numpy.zeros((label.shape[0], label.shape[1]))
for i in range(len(label)):
for j in range(len(label[i])):
if colors.count(label[i][j].tolist()):
mask_map[i][j] = colors.index(label[i][j].tolist())
a = Tensor(mask_map, dtype=mindspore.uint8)
mask_map = a.asnumpy()
return mask_map
def calculate_acc(data_list, pred_folder, classes, names, colors):
""" Calculation evaluating indicator """
colors = colors.tolist()
overlap_meter = AverageMeter()
union_meter = AverageMeter()
target_meter = AverageMeter()
for i, (image_path, label_path) in enumerate(data_list):
image_name = image_path.split('/')[-1].split('.')[0]
pred = cv2.imread(os.path.join(pred_folder, image_name + '.png'), cv2.IMREAD_GRAYSCALE)
if args.prefix != "ADE":
target = cv2.imread(label_path)
target = cv2.cvtColor(target, cv2.COLOR_BGR2RGB)
anno_label = convert_label(target, colors)
if args.prefix == 'ADE':
anno_label = cv2.imread(label_path, cv2.IMREAD_GRAYSCALE)
anno_label -= 1
overlap, union, label = intersectionAndUnion(pred, anno_label, args.classes)
overlap_meter.update(overlap)
union_meter.update(union)
target_meter.update(label)
accuracy = sum(overlap_meter.val) / (sum(target_meter.val) + 1e-10)
logger.info(
'Evaluating {0}/{1} on image {2}, accuracy {3:.4f}.'.format(i + 1, len(data_list), image_name + '.png',
accuracy))
iou_class = overlap_meter.sum / (union_meter.sum + 1e-10)
accuracy_class = overlap_meter.sum / (target_meter.sum + 1e-10)
mIoU = numpy.mean(iou_class)
mAcc = numpy.mean(accuracy_class)
allAcc = sum(overlap_meter.sum) / (sum(target_meter.sum) + 1e-10)
logger.info('Eval result: mIoU/mAcc/allAcc {:.4f}/{:.4f}/{:.4f}.'.format(mIoU, mAcc, allAcc))
for i in range(classes):
logger.info('Class_{} result: iou/accuracy {:.4f}/{:.4f}, name: {}.'.format(i, iou_class[i], accuracy_class[i],
names[i]))
if __name__ == '__main__':
args = get_config()
logger = get_logger()
main()
onnxruntime-gpu
numpy
opencv-python
mindspore-gpu
\ No newline at end of file
@@ -18,7 +18,9 @@ if [ $# != 2 ]
then
echo "=============================================================================================================="
echo "Usage: bash /PSPNet/scripts/run_eval.sh [YAML_PATH] [DEVICE_ID]"
echo "for example: bash PSPNet/scripts/run_eval.sh PSPNet/config/voc2012_pspnet50.yaml 0"
echo "Warning: before cpu infer, you need check device_target in config file."
echo "for gpu example: bash PSPNet/scripts/run_eval.sh PSPNet/config/voc2012_pspnet50.yaml 0"
echo "for cpu example: bash PSPNet/scripts/run_eval.sh PSPNet/config/voc2012_pspnet50.yaml cpu"
echo "=============================================================================================================="
exit 1
fi
@@ -31,5 +33,11 @@ export RANK_ID=0
export DEVICE_ID=$2
echo "start evaluating for device $DEVICE_ID"
env > env.log
if [ "$2" == "cpu" ]
then
python3 eval_cpu.py --config="$YAML_PATH" > ./LOG/eval_log.txt 2>&1 &
fi
if [ "$2" != "cpu" ]
then
python3 eval.py --config="$YAML_PATH" > ./LOG/eval_log.txt 2>&1 &
fi
#!/bin/bash
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
if [ $# != 1 ]
then
echo "=============================================================================================================="
echo "Usage: bash /PSPNet/scripts/run_eval_onnx_cpu.sh [YAML_PATH]"
echo "for example: bash PSPNet/scripts/run_eval_onnx_cpu.sh PSPNet/config/voc2012_pspnet50.yaml"
echo "=============================================================================================================="
exit 1
fi
rm -rf LOG
mkdir ./LOG
export YAML_PATH=$1
export RANK_SIZE=1
export RANK_ID=0
echo "start evaluating on CPU"
env > env.log
python3 eval_onnx_cpu.py --config="$YAML_PATH" > ./LOG/eval_onnx.txt 2>&1 &
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
from argparse import ArgumentParser
def parse_args():
"""
parse args
"""
parser = ArgumentParser(description="generate train_list.txt or val_list.txt")
parser.add_argument("--dataset_list_txt", type=str, default="", help="path of val.txt in VOC2012")
parser.add_argument("--image_prefix", type=str, default="JPEGImages",
help="the relative path in the data_root, until to the level of images.jpg")
parser.add_argument("--mask_prefix", type=str, default="SegmentationClass",
help="the relative path in the data_root, until to the level of mask.jpg")
parser.add_argument("--output_txt", type=str, default="voc2012_val.txt", help="name of output txt")
args = parser.parse_args()
return args
def main():
    args = parse_args()
    # each output line pairs an image path and its mask path, separated by a space
    with open(args.dataset_list_txt, 'r') as f, open(args.output_txt, 'w') as fw:
        for i in f:
            name = i.strip()
            image_path = args.image_prefix + "/" + name + ".jpg "
            label_path = args.mask_prefix + "/" + name + ".png\n"
            fw.write(image_path + label_path)


if __name__ == '__main__':
    main()
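# Example (paths are illustrative):
#   python create_voc_list.py --dataset_list_txt VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt \
#       --image_prefix JPEGImages --mask_prefix SegmentationClass --output_txt voc2012_val.txt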