Commit 634959b8 authored by wang-yujie2

ONNX infer:fasttext support

parent 3cfe08b3
README.md
@@ -12,6 +12,9 @@
- [Configuration File](#configuration-file)
- [Training Process](#training-process)
- [Inference Process](#inference-process)
- [ONNX Export And Evaluation](#onnx-export-and-evaluation)
- [ONNX Export](#onnx-export)
- [ONNX Evaluation](#onnx-evaluation)
- [Model Description](#model-description)
- [Performance](#performance)
- [Training Performance](#training-performance)
@@ -55,7 +58,7 @@ architecture. In the following sections, we will introduce how to run the script
- [MindSpore](https://gitee.com/mindspore/mindspore)
- For more information, please check the resources below:
- [MindSpore Tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/docs/en/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
## [Quick Start](#content)
@@ -162,13 +165,15 @@ The FastText network script and code result are as follows:
│ ├──run_standalone_train.sh // shell script for standalone train on Ascend.
│ ├──run_distribute_train_gpu.sh // shell script for distributed train on GPU.
│ ├──run_eval_gpu.sh // shell script for standalone eval on GPU.
│ ├──run_eval_onnx_gpu.sh // shell script for standalone eval_onnx on GPU.
│ ├──run_standalone_train_gpu.sh // shell script for standalone train on GPU.
├── ag_config.yaml // ag dataset arguments
├── dbpedia_config.yaml // dbpedia dataset arguments
-├── yelpp_config.yaml // yelpp dataset arguments
+├── yelp_p_config.yaml // yelp_p dataset arguments
├── mindspore_hub_conf.py // mindspore hub scripts
├── export.py // Export API entry.
├── eval.py // Infer API entry.
├── eval_onnx.py // Infer onnx API entry.
├── requirements.txt // Requirements of third party package.
├── train.py // Train API entry.
```
@@ -184,6 +189,7 @@ The FastText network script and code result are as follows:
bash create_dataset.sh [SOURCE_DATASET_PATH] [DATASET_NAME]
```
example: bash create_dataset.sh your_path/fasttext/dataset/ag_news_csv ag
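After the script finishes, the dataset directory contains one MindRecord file per test bucket; the evaluation scripts below rebuild these file names from the bucket lengths. A quick way to check the output (the `test_dataset_bs_*.mindrecord` pattern comes from `eval_onnx.py`; the `./scripts/ag/` directory is illustrative):

```python
import glob

# List the per-bucket MindRecord files produced by create_dataset.sh.
# Example (illustrative path): ./scripts/ag/test_dataset_bs_467.mindrecord
for path in sorted(glob.glob('./scripts/ag/test_dataset_bs_*.mindrecord')):
    print(path)
```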
### [Configuration File](#content)
Parameters for both training and evaluation can be set in the configuration file. All datasets use the same parameter names; parameter values can be changed as needed.
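The loading pattern can be sketched as follows; this is a minimal illustration only (the repository's actual loader is `model_utils/config.py`, and the `load_config` helper here is hypothetical):

```python
import argparse
import yaml

# Hypothetical simplification of model_utils/config.py, for illustration only.
def load_config(yaml_path, overrides=None):
    """Read defaults from a dataset YAML file, then apply explicit overrides."""
    with open(yaml_path, 'r', encoding='utf-8') as f:
        cfg = yaml.safe_load(f)          # e.g. {'data_name': 'ag', 'model_ckpt': '', ...}
    cfg.update(overrides or {})          # command-line values override YAML defaults
    return argparse.Namespace(**cfg)     # attribute access, e.g. config.data_name

# Usage: config = load_config('./ag_config.yaml', {'model_ckpt': './fasttext_ag.ckpt'})
```

Because the parameter names are identical across `ag_config.yaml`, `dbpedia_config.yaml`, and `yelp_p_config.yaml`, switching datasets only requires pointing `--config_path` at a different file.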
@@ -237,7 +243,7 @@ Parameters for both training and evaluation can be set in config.py. All the dat
bash run_eval.sh [DATASET_PATH] [DATASET_NAME] [MODEL_CKPT]
```
-Note: The `DATASET_PATH` is path to mindrecord. eg. `/dataset_path/*.mindrecord`
+Note: The `DATASET_PATH` is the path to the mindrecord files, e.g. `/dataset_path/`
- Running on GPU
@@ -248,7 +254,35 @@ Parameters for both training and evaluation can be set in config.py. All the dat
bash run_eval_gpu.sh [DATASET_PATH] [DATASET_NAME] [MODEL_CKPT]
```
-Note: The `DATASET_PATH` is path to mindrecord. eg. `/dataset_path/*.mindrecord`
+Note: The `DATASET_PATH` is the path to the mindrecord files, e.g. `/dataset_path/`
### [ONNX Export And Evaluation](#content)
Note: all ONNX-related scripts must be run on GPU.
- ONNX Export
- The command below produces multiple FastText ONNX files, one per input shape of the evaluation data; each file name encodes the input shape it was exported for.
```bash
python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [ONNX] --config_path [CONFIG_PATH] \
    --onnx_path [ONNX_PATH] --dataset_path [DATASET_PATH]
```
example: python export.py --ckpt_file ./checkpoint/fasttext_ascend_v170_dbpedia_official_nlp_acc98.62.ckpt --file_name fasttext --file_format ONNX --config_path ./dbpedia_config.yaml --onnx_path ./fasttext --dataset_path ./fasttext/scripts/dbpedia/
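The exported file names follow a fixed pattern that `eval_onnx.py` reconstructs for every batch. A sketch of that convention (the helper name and example shapes are illustrative; the real bucket sizes come from `test_buckets` in the config):

```python
def onnx_file_for_batch(onnx_dir, data_name, src_tokens_shape):
    """Build the per-shape ONNX file name, e.g. fasttext_512_467_dbpedia.onnx.

    One file is exported per (batch_size, seq_length) bucket of the test set,
    so inference can load the file whose input shape matches each batch.
    """
    batch_size, seq_length = src_tokens_shape
    return f"{onnx_dir}/fasttext_{batch_size}_{seq_length}_{data_name}.onnx"

# Illustrative shapes only:
# onnx_file_for_batch('./fasttext', 'dbpedia', (512, 467))
# -> './fasttext/fasttext_512_467_dbpedia.onnx'
```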
- ONNX Evaluation
- Note that `ONNX_PATH` should be the absolute path of the directory containing the exported onnx files, such as: '/home/mindspore/ls/models/official/nlp/fasttext'.
```bash
cd ./scripts
bash run_eval_onnx_gpu.sh [DATASET_PATH] [DATASET_NAME] [ONNX_PATH]
```
example: bash run_eval_onnx_gpu.sh /home/mindspore/fasttext/scripts/dbpedia/ dbpedia /home/mindspore/official/nlp/fasttext
You can view the results in the file `eval_onnx.log`.
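Internally, `eval_onnx.py` (shown in full further down) opens one ONNX Runtime session per input shape and feeds each batch to the matching file. A condensed sketch of that per-batch step, assuming the CUDA execution provider is available:

```python
import onnxruntime as ort

def infer_batch(onnx_file, src_tokens, src_tokens_length):
    """Run one evaluation batch against the ONNX file matching its shape."""
    session = ort.InferenceSession(onnx_file, providers=['CUDAExecutionProvider'])
    src_t, src_t_len = [x.name for x in session.get_inputs()]
    # Inputs are int32 arrays: [batch, seq_len] tokens and [batch, 1] lengths.
    [predicted_idx] = session.run(None, {src_t: src_tokens,
                                         src_t_len: src_tokens_length})
    return predicted_idx
```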
## [Model Description](#content)
...
ag_config.yaml
@@ -37,6 +37,7 @@ dataset_path: ""
data_name: "ag"
run_distribute: False
model_ckpt: ""
onnx_path: ""
# export option
device_id: 0
ckpt_file: ""
@@ -54,4 +55,4 @@ model_ckpt: "existed checkpoint address."
device_id: "Device id"
ckpt_file: "Checkpoint file path"
file_name: "Output file name"
file_format: "Output file format, choice in ['AIR', 'ONNX', 'MINDIR']"
\ No newline at end of file
file_format: "Output file format, choice in ['AIR', 'ONNX', 'MINDIR']"
dbpedia_config.yaml
@@ -37,6 +37,7 @@ dataset_path: ""
data_name: "dbpedia"
run_distribute: False
model_ckpt: ""
onnx_path: ""
# export option
device_id: 0
ckpt_file: ""
@@ -54,4 +55,4 @@ model_ckpt: "existed checkpoint address."
device_id: "Device id"
ckpt_file: "Checkpoint file path"
file_name: "Output file name"
file_format: "Output file format, choice in ['AIR', 'ONNX', 'MINDIR']"
\ No newline at end of file
file_format: "Output file format, choice in ['AIR', 'ONNX', 'MINDIR']"
eval_onnx.py (new file)
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""FastText for Evaluation"""
import numpy as np
import onnxruntime as ort
import mindspore.common.dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as deC
from model_utils.config import config

def create_session(checkpoint_path, target_device):
    """Create ONNX runtime session"""
    if target_device == 'GPU':
        providers = ['CUDAExecutionProvider']
    elif target_device in ('CPU', 'Ascend'):
        providers = ['CPUExecutionProvider']
    else:
        raise ValueError(f"Unsupported target device '{target_device}'. Expected one of: 'CPU', 'GPU', 'Ascend'")
    session = ort.InferenceSession(checkpoint_path, providers=providers)
    input_names = [x.name for x in session.get_inputs()]
    return session, input_names

def load_infer_dataset(batch_size, datafile, bucket):
    """data loader for infer"""
    def batch_per_bucket(bucket_length, input_file):
        # Validate before building the per-bucket file name.
        if not input_file:
            raise FileNotFoundError("input file parameter must not be empty.")
        input_file = input_file + '/test_dataset_bs_' + str(bucket_length) + '.mindrecord'
        data_set = ds.MindDataset(input_file,
                                  columns_list=['src_tokens', 'src_tokens_length', 'label_idx'])
        type_cast_op = deC.TypeCast(mstype.int32)
        data_set = data_set.map(operations=type_cast_op, input_columns="src_tokens")
        data_set = data_set.map(operations=type_cast_op, input_columns="src_tokens_length")
        data_set = data_set.map(operations=type_cast_op, input_columns="label_idx")
        data_set = data_set.batch(batch_size, drop_remainder=False)
        return data_set

    # Concatenate the per-bucket datasets into a single evaluation dataset.
    for i, bucket_len in enumerate(bucket):
        ds_per = batch_per_bucket(bucket_len, datafile)
        if i == 0:
            data_set = ds_per
        else:
            data_set = data_set + ds_per
    return data_set

def run_fasttext_onnx_infer(target_label1):
    """run infer with FastText"""
    dataset = load_infer_dataset(batch_size=config.batch_size, datafile=config.dataset_path, bucket=config.test_buckets)
    predictions = []
    target_sens = []
    onnx_path = config.onnx_path
    for batch in dataset.create_dict_iterator(output_numpy=True, num_epochs=1):
        target_sens.append(batch['label_idx'])
        src_tokens = batch['src_tokens']
        src_tokens_length = batch['src_tokens_length']
        # Select the exported ONNX file whose input shape matches this batch.
        src_tokens_shape = batch['src_tokens'].shape
        onnx_file = onnx_path + '/fasttext_' + str(src_tokens_shape[0]) + '_' + str(src_tokens_shape[1]) + '_' \
                    + config.data_name + '.onnx'
        session, [src_t, src_t_len] = create_session(onnx_file, config.device_target)
        [predicted_idx] = session.run(None, {src_t: src_tokens, src_t_len: src_tokens_length})
        predictions.append(predicted_idx)
    from sklearn.metrics import accuracy_score, classification_report
    # Batches come from different buckets and may differ in size, so flatten
    # the per-batch arrays into flat label and prediction lists before scoring.
    target_sens = np.array(target_sens).flatten()
    merge_target_sens = []
    for target_sen in target_sens:
        merge_target_sens.extend(target_sen)
    target_sens = merge_target_sens
    predictions = np.array(predictions).flatten()
    merge_predictions = []
    for prediction in predictions:
        merge_predictions.extend(prediction)
    predictions = merge_predictions
    acc = accuracy_score(target_sens, predictions)
    result_report = classification_report(target_sens, predictions, target_names=target_label1)
    print("********Accuracy: ", acc)
    print(result_report)

def main():
    if config.data_name == "ag":
        target_label1 = ['0', '1', '2', '3']
    elif config.data_name == 'dbpedia':
        target_label1 = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13']
    elif config.data_name == 'yelp_p':
        target_label1 = ['0', '1']
    else:
        raise ValueError(f"Unsupported dataset name '{config.data_name}'. Expected one of: 'ag', 'dbpedia', 'yelp_p'")
    print(target_label1)
    run_fasttext_onnx_infer(target_label1)


if __name__ == '__main__':
    main()
export.py
@@ -24,6 +24,7 @@ from src.fasttext_model import FastText
from model_utils.config import config
from model_utils.moxing_adapter import moxing_wrapper
from eval import load_infer_dataset
if config.data_name == "ag":
target_label1 = ['0', '1', '2', '3']
@@ -75,10 +76,22 @@ def run_fasttext_export():
     src_tokens_shape = [batch_size, 2955]
     src_tokens_length_shape = [batch_size, 1]
-    file_name = config.file_name + '_' + config.data_name
-    src_tokens = Tensor(np.ones((src_tokens_shape)).astype(np.int32))
-    src_tokens_length = Tensor(np.ones((src_tokens_length_shape)).astype(np.int32))
-    export(ft_infer, src_tokens, src_tokens_length, file_name=file_name, file_format=config.file_format)
+    if config.file_format == 'ONNX':
+        dataset = load_infer_dataset(batch_size=config.batch_size, datafile=config.dataset_path,
+                                     bucket=config.test_buckets)
+        for batch in dataset.create_dict_iterator(output_numpy=True, num_epochs=1):
+            src_tokens_shape = batch['src_tokens'].shape
+            src_tokens_length_shape = batch['src_tokens_length'].shape
+            file_name = config.file_name + '_' + str(src_tokens_shape[0]) + '_' + str(src_tokens_shape[1]) + '_' \
+                        + config.data_name
+            src_tokens = Tensor(np.ones((src_tokens_shape)).astype(np.int32))
+            src_tokens_length = Tensor(np.ones((src_tokens_length_shape)).astype(np.int32))
+            export(ft_infer, src_tokens, src_tokens_length, file_name=file_name, file_format=config.file_format)
+    else:
+        file_name = config.file_name + '_' + config.data_name
+        src_tokens = Tensor(np.ones((src_tokens_shape)).astype(np.int32))
+        src_tokens_length = Tensor(np.ones((src_tokens_length_shape)).astype(np.int32))
+        export(ft_infer, src_tokens, src_tokens_length, file_name=file_name, file_format=config.file_format)
def modelarts_pre_process():
...
scripts/create_dataset.sh
@@ -65,7 +65,7 @@
if [ $DATASET_NAME == 'yelp_p' ];
then
echo "Begin to process ag news data"
echo "Begin to process yelp_p news data"
if [ -d "yelp_p" ];
then
    rm -rf ./yelp_p
...
scripts/run_eval_gpu.sh
@@ -15,8 +15,8 @@
# ============================================================================
echo "=============================================================================================================="
echo "Please run the script as: "
echo "sh run_eval_gpu.sh DATASET_PATH DATASET_NAME MODEL_CKPT"
echo "for example: sh run_eval_gpu.sh /home/workspace/ag/test*.mindrecord ag device0/ckpt0/fasttext-5-118.ckpt"
echo "bash run_eval_gpu.sh DATASET_PATH DATASET_NAME MODEL_CKPT"
echo "for example: bash run_eval_gpu.sh /home/workspace/ag/ ag device0/ckpt0/fasttext-5-118.ckpt"
echo "It is better to use absolute path."
echo "=============================================================================================================="
@@ -32,7 +32,8 @@ DATASET=$(get_real_path $1)
echo $DATASET
DATANAME=$2
MODEL_CKPT=$(get_real_path $3)
echo "MODEL_CKPT:${MODEL_CKPT}"
echo "DATANAME: ${DATANAME}"
config_path="./${DATANAME}_config.yaml"
echo "config path is : ${config_path}"
@@ -49,5 +50,10 @@ cp -r ../scripts/*.sh ./eval
cd ./eval || exit
echo "start eval on standalone GPU"
-python ../../eval.py --config_path $config_path --device_target GPU --dataset_path $DATASET --data_name $DATANAME --model_ckpt $MODEL_CKPT> log_fasttext.log 2>&1 &
+python ../../eval.py \
+    --config_path $config_path \
+    --device_target GPU \
+    --dataset_path $DATASET \
+    --data_name $DATANAME \
+    --model_ckpt $MODEL_CKPT > log_fasttext.log 2>&1 &
cd ..
scripts/run_eval_onnx_gpu.sh (new file)
#!/bin/bash
# Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
echo "=============================================================================================================="
echo "Please run the script as: "
echo "bash run_eval_onnx.sh DATASET_PATH DATASET_NAME ONNX_PATH"
echo "for example: bash run_eval_onnx.sh /home/workspace/ag/ ag device0/ckpt0"
echo "It is better to use absolute path."
echo "=============================================================================================================="
if [ $# != 3 ]
then
echo "Usage: bash run_eval_onnx.sh [DATASET_PATH] [DATASET_NAME] [ONNX_PATH]"
exit 1
fi
get_real_path(){
    if [ "${1:0:1}" == "/" ]; then
        echo "$1"
    else
        echo "$(realpath -m $PWD/$1)"
    fi
}
DATASET=$(get_real_path $1)
echo $DATASET
DATANAME=$2
ONNX_PATH=$(get_real_path $3)
echo "ONNX_PATH:${ONNX_PATH}"
echo "DATANAME: ${DATANAME}"
config_path="./${DATANAME}_config.yaml"
echo "config path is : ${config_path}"
if [ -d "eval_onnx" ];
then
    rm -rf ./eval_onnx
fi
mkdir ./eval_onnx
cp ../*.py ./eval_onnx
cp ../*.yaml ./eval_onnx
cp -r ../src ./eval_onnx
cp -r ../model_utils ./eval_onnx
cp -r ../scripts/*.sh ./eval_onnx
cd ./eval_onnx || exit
echo "start eval on standalone GPU"
python ../../eval_onnx.py \
    --config_path $config_path \
    --device_target GPU \
    --onnx_path $ONNX_PATH \
    --dataset_path $DATASET \
    --data_name $DATANAME > eval_onnx.log 2>&1 &
cd ..
src/dataset.py
@@ -95,7 +95,7 @@ class FastTextDataPreProcess():
train_dataset_list = []
test_dataset_list = []
spacy_nlp = spacy.load('en_core_web_lg', disable=['parser', 'tagger', 'ner'])
-spacy_nlp.add_pipe(spacy_nlp.create_pipe('sentencizer'))
+spacy_nlp.add_pipe('sentencizer')
with open(self.train_path, 'r', newline='', encoding='utf-8') as src_file:
reader = csv.reader(src_file, delimiter=",", quotechar='"')
...
yelp_p_config.yaml
@@ -37,6 +37,7 @@ dataset_path: ""
data_name: "yelp_p"
run_distribute: False
model_ckpt: ""
onnx_path: ""
# export option
device_id: 0
ckpt_file: ""
@@ -54,4 +55,4 @@ model_ckpt: "existed checkpoint address."
device_id: "Device id"
ckpt_file: "Checkpoint file path"
file_name: "Output file name"
file_format: "Output file format, choice in ['AIR', 'ONNX', 'MINDIR']"
\ No newline at end of file
file_format: "Output file format, choice in ['AIR', 'ONNX', 'MINDIR']"