Commit 73400843 authored by i-robot, committed by Gitee

!517 Change the format of the `device_target` examples in the READMEs.

Merge pull request !517 from chenhaozhe/change-readme
parents 9cbc637d 6a9e4d79
Showing 112 additions and 79 deletions
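The change drops the single quotes around values such as `Ascend` and `GPU` in the README examples. A plausible reason (my inference; the commit message does not spell it out) is that only a shell strips those quotes: a value entered in a YAML file or the ModelArts UI field is passed through verbatim, so the quote characters would end up inside the string. A minimal Python sketch of the difference, using a hypothetical `--device_target` argument:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--device_target", type=str, default="Ascend")

# On a shell command line the quotes are removed before Python sees them,
# so --device_target='Ascend' and --device_target=Ascend both arrive as Ascend.
args = parser.parse_args(["--device_target=Ascend"])
assert args.device_target == "Ascend"

# A UI field or config parser may pass the value through verbatim, so written
# quotes stay in the string and break checks like device_target == "Ascend".
args = parser.parse_args(["--device_target='Ascend'"])
assert args.device_target == "'Ascend'"
```

In the shell commands below, the quoted and unquoted forms therefore behave the same; the unquoted form simply keeps the README examples consistent with config-file and UI usage.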
......@@ -232,13 +232,13 @@ sh run_eval_gpu.sh [IMGS_PATH] [ANNOS_PATH] [CHECKPOINT_PATH] [COCO_TEXT_PARSER_
# a. Set "enable_modelarts=True" on default_config.yaml file.
# Set "checkpoint_url='s3://dir_to_trained_model/'" on default_config.yaml file.
# Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file.
# Set "device_target='Ascend'" on default_config.yaml file.
# Set "device_target=Ascend" on default_config.yaml file.
# Set "file_format='MINDIR'" on default_config.yaml file.
# Set other parameters on default_config.yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "checkpoint_url='s3://dir_to_trained_model/'" on the website UI interface.
# Add "ckpt_file='/cache/checkpoint_path/model.ckpt'" on the website UI interface.
# Add "device_target='Ascend'" on the website UI interface.
# Add "device_target=Ascend" on the website UI interface.
# Add "file_format='MINDIR'" on the website UI interface.
# Add other parameters on the website UI interface.
# (2) Set the code directory to "/path/deeptext" on the website UI interface.
......
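The ModelArts instructions above alternate between editing `default_config.yaml` and adding `key=value` pairs on the UI. These model scripts typically read the YAML defaults first and then let command-line style overrides replace individual keys; a simplified sketch of that pattern (an illustrative stand-in, not the repository's `model_utils` code; assumes PyYAML):

```python
import argparse
import yaml  # PyYAML, assumed available in the training environment

def load_config(yaml_path, cli_args=None):
    """Load default_config.yaml, then apply --key=value overrides."""
    with open(yaml_path) as f:
        cfg = yaml.safe_load(f)  # e.g. {'enable_modelarts': False, 'device_target': 'Ascend', ...}
    parser = argparse.ArgumentParser()
    for key, value in cfg.items():
        parser.add_argument(f"--{key}", default=value)
    cfg.update(vars(parser.parse_args(cli_args)))
    return cfg

# Example: cfg = load_config("default_config.yaml", ["--device_target=Ascend", "--file_format=MINDIR"])
```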
# Contents
- [Contents](#contents)
- [DenseNet Description](#densenet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
......@@ -12,17 +13,20 @@
- [Script Parameters](#script-parameters)
- [Training Process](#training-process)
- [Training](#training)
- [Distributed Training](#distributed-training)
- [Distributed Training](#distributed-training)
- [Evaluation Process](#evaluation-process)
- [Evaluation](#evaluation)
- [Export Process](#export-process)
- [Export](#Export)
- [Inferenct Process](#Inferenct-process)
- [Inferenct](#Inferenct)
- [export](#export)
- [Inference Process](#inference-process)
- [Inference](#inference)
- [Model Description](#model-description)
- [Performance](#performance)
- [DenseNet121](#densenet121)
- [Training accuracy results](#training-accuracy-results)
- [Training performance results](#training-performance-results)
- [DenseNet100](#densenet100)
- [Training performance](#training-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
......@@ -179,13 +183,13 @@ After installing MindSpore via the official website, you can start training and
# run training example
export CUDA_VISIBLE_DEVICES=0
python train.py --net=[NET_NAME] --dataset=[DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target='GPU' > train.log 2>&1 &
python train.py --net=[NET_NAME] --dataset=[DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target=GPU > train.log 2>&1 &
# run distributed training example
bash run_distribute_train_gpu.sh 8 0,1,2,3,4,5,6,7 [NET_NAME] [DATASET_NAME] [DATASET_PATH]
# run evaluation example
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target='GPU' --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target=GPU --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
OR
bash run_distribute_eval_gpu.sh 1 0 [NET_NAME] [DATASET_NAME] [DATASET_PATH] [CHECKPOINT_PATH]
......@@ -301,7 +305,7 @@ You can modify the training behaviour through the various flags in the `densenet
```python
export CUDA_VISIBLE_DEVICES=0
python train.py --net [NET_NAME] --dataset [DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target='GPU' > train.log 2>&1 &
python train.py --net [NET_NAME] --dataset [DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target=GPU > train.log 2>&1 &
```
......@@ -313,7 +317,7 @@ You can modify the training behaviour through the various flags in the `densenet
```python
python train.py --net=[NET_NAME] --dataset=[DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target='CPU' > train.log 2>&1 &
python train.py --net=[NET_NAME] --dataset=[DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target=CPU > train.log 2>&1 &
```
......@@ -387,7 +391,7 @@ You can modify the training behaviour through the various flags in the `densenet
```python
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target='GPU' --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target=GPU --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
OR
bash run_distribute_eval_gpu.sh 1 0 [NET_NAME] [DATASET_NAME] [DATASET_PATH] [CHECKPOINT_PATH]
......@@ -416,7 +420,7 @@ You can modify the training behaviour through the various flags in the `densenet
```python
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target='CPU' --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target=CPU --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
```
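For context on the evaluation commands above, `eval.py` scripts of this shape usually pick the device from `--device_target`, restore the checkpoint given by `--ckpt_files`, and switch the network to evaluation mode. A minimal sketch with MindSpore's standard checkpoint APIs (hypothetical helper, not the repository code):

```python
from mindspore import context
from mindspore.train.serialization import load_checkpoint, load_param_into_net

def prepare_eval(net, device_target, ckpt_file):
    # device_target is the plain string from the command line, e.g. "GPU" or "CPU".
    context.set_context(mode=context.GRAPH_MODE, device_target=device_target)
    params = load_checkpoint(ckpt_file)   # read the .ckpt file
    load_param_into_net(net, params)      # copy the weights into the network
    net.set_train(False)                  # switch to evaluation mode
    return net
```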
......
......@@ -24,8 +24,11 @@
- [Inference](#推理)
- [Model Description](#模型描述)
- [Performance](#性能)
- [DenseNet121](#densenet121)
- [Training accuracy results](#训练准确率结果)
- [Training performance results](#训练性能结果)
- [DenseNet100](#densenet100)
- [Training results](#训练结果)
- [Description of Random Situation](#随机情况说明)
- [ModelZoo Homepage](#modelzoo主页)
......@@ -178,13 +181,13 @@ Dataset used by DenseNet-100: Cifar-10
```python
# training example
export CUDA_VISIBLE_DEVICES=0
python train.py --net=[NET_NAME] --dataset=[DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target='GPU' > train.log 2>&1 &
python train.py --net=[NET_NAME] --dataset=[DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target=GPU > train.log 2>&1 &
# distributed training example (a setup sketch follows this block)
bash run_distribute_train_gpu.sh 8 0,1,2,3,4,5,6,7 [NET_NAME] [DATASET_NAME] [DATASET_PATH]
# evaluation example
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target='GPU' --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target=GPU --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
OR
bash run_distribute_eval_gpu.sh 1 0 [NET_NAME] [DATASET_NAME] [DATASET_PATH] [CHECKPOINT_PATH]
```
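For the distributed training example above, the launcher script starts one process per device and `--is_distributed` makes each process join the communication group. A simplified sketch of that setup, assuming MindSpore's communication API rather than the repository's exact code:

```python
from mindspore import context
from mindspore.communication.management import init, get_rank, get_group_size
from mindspore.context import ParallelMode

def setup_device(device_target, is_distributed):
    context.set_context(mode=context.GRAPH_MODE, device_target=device_target)
    if not is_distributed:
        return 0, 1                      # rank and group size for single-device runs
    init()                               # join the group created by mpirun / the rank table
    context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL,
                                      gradients_mean=True,
                                      device_num=get_group_size())
    return get_rank(), get_group_size()
```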
......@@ -291,7 +294,7 @@ Dataset used by DenseNet-100: Cifar-10
```python
export CUDA_VISIBLE_DEVICES=0
python train.py --net=[NET_NAME] --dataset=[DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target='GPU' > train.log 2>&1 &
python train.py --net=[NET_NAME] --dataset=[DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target=GPU > train.log 2>&1 &
```
The above python command runs in the background; logs and model checkpoints are generated in the `output/202x-xx-xx_time_xx_xx/` directory.
......@@ -299,7 +302,7 @@ python train.py --net=[NET_NAME] --dataset=[DATASET_NAME] --train_data_dir=[DATA
- Running in a CPU environment
```python
python train.py --net=[NET_NAME] --dataset=[DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target='CPU' > train.log 2>&1 &
python train.py --net=[NET_NAME] --dataset=[DATASET_NAME] --train_data_dir=[DATASET_PATH] --is_distributed=0 --device_target=CPU > train.log 2>&1 &
```
The above python command runs in the background; logs and model checkpoints are generated in the `output/202x-xx-xx_time_xx_xx/` directory.
......@@ -362,7 +365,7 @@ bash run_distribute_train_gpu.sh 8 0,1,2,3,4,5,6,7 [NET_NAME] [DATASET_NAME] [DA
Run the following command for evaluation.
```eval
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target='GPU' --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target=GPU --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
OR
bash run_distribute_eval_gpu.sh 1 0 [NET_NAME] [DATASET_NAME] [DATASET_PATH] [CHECKPOINT_PATH]
```
......@@ -385,7 +388,7 @@ bash run_distribute_train_gpu.sh 8 0,1,2,3,4,5,6,7 [NET_NAME] [DATASET_NAME] [DA
Run the following command for evaluation.
```eval
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target='CPU' --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
python eval.py --net=[NET_NAME] --dataset=[DATASET_NAME] --eval_data_dir=[DATASET_PATH] --device_target=CPU --ckpt_files=[CHECKPOINT_PATH] > eval.log 2>&1 &
```
The above python command runs in the background. The results can be viewed in the "eval/eval.log" file. The accuracy of DenseNet-100 on the Cifar-10 test dataset is as follows:
......
......@@ -83,7 +83,7 @@ You can start training using python or shell scripts. The usage of shell scripts
```bash
# training example
python:
GPU: mpirun --allow-run-as-root -n 8 --output-filename log_output --merge-stderr-to-stdout python train.py --is_distributed=True --platform='GPU' --dataset_path='~/imagenet/train/' > train.log 2>&1 &
GPU: mpirun --allow-run-as-root -n 8 --output-filename log_output --merge-stderr-to-stdout python train.py --is_distributed=True --platform=GPU --dataset_path='~/imagenet/train/' > train.log 2>&1 &
shell:
GPU: cd scripts && bash run_distribute_train_for_gpu.sh 8 0,1,2,3,4,5,6,7 ~/imagenet/train/
......@@ -106,7 +106,7 @@ You can start evaluation using python or shell scripts. The usage of shell scrip
```bash
# infer example
python:
GPU: CUDA_VISIBLE_DEVICES=0 python eval.py --platform='GPU' --dataset_path='~/imagenet/val/' > eval.log 2>&1 &
GPU: CUDA_VISIBLE_DEVICES=0 python eval.py --platform=GPU --dataset_path='~/imagenet/val/' > eval.log 2>&1 &
shell:
GPU: cd scripts && bash run_eval_for_gpu.sh '~/imagenet/val/' 'checkpoint_file'
......
......@@ -695,14 +695,14 @@ If you need to use the trained model to perform inference on multiple hardware p
# Set context
device_id = int(os.getenv('DEVICE_ID'))
context.set_context(mode=context.GRAPH_MODE,
device_target='Ascend',
device_target=Ascend,
device_id=device_id)
# Load unseen dataset for inference
dataset = create_dataset(dataset_path=config.data_path,
do_train=False,
batch_size=config.batch_size,
target='Ascend')
target=Ascend)
# Define model
net = squeezenet(num_classes=config.class_num)
......@@ -731,7 +731,7 @@ If you need to use the trained model to perform inference on multiple hardware p
do_train=True,
repeat_num=1,
batch_size=config.batch_size,
target='Ascend')
target=Ascend)
step_size = dataset.get_dataset_size()
# define net
......
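Note that the snippets above are Python source rather than shell commands or YAML entries, so the device target there has to stay a quoted string; a bare `Ascend` would be looked up as a Python variable. A minimal sketch with MindSpore's context API:

```python
import mindspore.context as context

# Correct: inside Python code the device target is a string literal.
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend", device_id=0)

# Without quotes Python would look for a variable named Ascend:
# context.set_context(device_target=Ascend)   # NameError: name 'Ascend' is not defined
```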
......@@ -123,7 +123,7 @@ After installing MindSpore via the official website and Dataset is correctly gen
# a. Set "enable_modelarts=True" on default_config.yaml file.
# Set "datapath='/cache/data/amazon_beauty/data_mr'" on default_config.yaml file.
# Set "ckptpath='./ckpts'" on default_config.yaml file.
# (options)Set "device_target='GPU'" on default_config.yaml file if run on GPU.
# (options)Set "device_target=GPU" on default_config.yaml file if run on GPU.
# (options)Set "num_epoch=680" on default_config.yaml file if run on GPU.
# (options)Set "dist_reg=0" on default_config.yaml file if run on GPU.
# Set other parameters on default_config.yaml file you need.
......@@ -150,7 +150,7 @@ After installing MindSpore via the official website and Dataset is correctly gen
# Set "datapath='/cache/data/amazon_beauty/data_mr'" on default_config.yaml file.
# Set "ckptpath='/cache/checkpoint_path'" on default_config.yaml file.
# Set "checkpoint_url='s3://dir_to_your_trained_ckpt/'" on default_config.yaml file.
# (options)Set "device_target='GPU'" on default_config.yaml file if run on GPU.
# (options)Set "device_target=GPU" on default_config.yaml file if run on GPU.
# (options)Set "num_epoch=680" on default_config.yaml file if run on GPU.
# (options)Set "dist_reg=0" on default_config.yaml file if run on GPU.
# Set other parameters on default_config.yaml file you need.
......@@ -179,7 +179,7 @@ After installing MindSpore via the official website and Dataset is correctly gen
# Set "checkpoint_url='s3://dir_to_your_trained_ckpt/'" on default_config.yaml file.
# Set "file_name='bgcf'" on default_config.yaml file.
# Set "file_format='AIR'" on default_config.yaml file.
# (options)Set "device_target='GPU'" on default_config.yaml file if run on GPU.
# (options)Set "device_target=GPU" on default_config.yaml file if run on GPU.
# Set other parameters on default_config.yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "ckpt_file=/cache/checkpoint_path/model.ckpt" on the website UI interface.
......
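The export-related keys above (`file_name`, `file_format`) generally correspond to MindSpore's `export` call. A minimal sketch with a placeholder network and input shape (the real ones come from the repository's export script):

```python
import numpy as np
from mindspore import nn, Tensor
from mindspore.train.serialization import export

net = nn.Dense(64, 10)                                # placeholder for the trained network
dummy_input = Tensor(np.zeros((1, 64), np.float32))   # placeholder input shape

# file_name / file_format correspond to the config keys above; "MINDIR" also works.
export(net, dummy_input, file_name="bgcf", file_format="AIR")
```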
......@@ -136,7 +136,7 @@ BGCF contains two main modules. The first is sampling, which generates node-copy-based
# a. Set "enable_modelarts=True" in the default_config.yaml file.
# Set "datapath='/cache/data/amazon_beauty/data_mr'" in the default_config.yaml file.
# Set "ckptpath='./ckpts'" in the default_config.yaml file.
# (optional) If running on GPU, set "device_target='GPU'" in the default_config.yaml file.
# (optional) If running on GPU, set "device_target=GPU" in the default_config.yaml file.
# (optional) If running on GPU, set "num_epoch=680" in the default_config.yaml file.
# (optional) If running on GPU, set "dist_reg=0" in the default_config.yaml file.
# Set other parameters in the default_config.yaml file as needed.
......@@ -162,7 +162,7 @@ BGCF contains two main modules. The first is sampling, which generates node-copy-based
# a. Set "enable_modelarts=True" in the default_config.yaml file.
# Set "datapath='/cache/data/amazon_beauty/data_mr'" in the default_config.yaml file.
# Set "ckptpath='./ckpts'" in the default_config.yaml file.
# (optional) If running on GPU, set "device_target='GPU'" in the default_config.yaml file.
# (optional) If running on GPU, set "device_target=GPU" in the default_config.yaml file.
# (optional) If running on GPU, set "num_epoch=680" in the default_config.yaml file.
# (optional) If running on GPU, set "dist_reg=0" in the default_config.yaml file.
# Set other parameters in the default_config.yaml file as needed.
......@@ -190,7 +190,7 @@ BGCF contains two main modules. The first is sampling, which generates node-copy-based
# Set "checkpoint_url='s3://dir_to_your_trained_ckpt/'" in the default_config.yaml file.
# Set "file_name='bgcf'" in the default_config.yaml file.
# Set "file_format='AIR'" in the default_config.yaml file.
# (optional) Set "device_target='GPU'" in the default_config.yaml file.
# (optional) Set "device_target=GPU" in the default_config.yaml file.
# Set other parameters in the default_config.yaml file as needed.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "ckpt_file=/cache/checkpoint_path/model.ckpt" on the website UI interface.
......
......@@ -84,12 +84,12 @@ After dataset preparation, you can start training and evaluation as follows:
# a. Set "enable_modelarts=True" on [DATASET_NAME]_config.yaml file.
# Set "dataset_path='/cache/data/[DATASET_NAME]'" on [DATASET_NAME]_config.yaml file.
# Set "data_name='[DATASET_NAME]'" on [DATASET_NAME]_config.yaml file.
# (option)Set "device_target='GPU'" on [DATASET_NAME]_config.yaml file if run with GPU.
# (option)Set "device_target=GPU" on [DATASET_NAME]_config.yaml file if run with GPU.
# (option)Set other parameters on [DATASET_NAME]_config.yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "dataset_path='/cache/data/[DATASET_NAME]'" on the website UI interface.
# Add "data_name='[DATASET_NAME]'" on the website UI interface.
# (option)Set "device_target='GPU'" on the website UI interface if run with GPU.
# (option)Set "device_target=GPU" on the website UI interface if run with GPU.
# (option)Set other parameters on the website UI interface.
# (3) Upload a zip dataset to the S3 bucket. (You can also upload the original dataset, but it may be slow.)
# (4) Set the code directory to "/path/fasttext" on the website UI interface.
......@@ -105,14 +105,14 @@ After dataset preparation, you can start training and evaluation as follows:
# Set "data_name='[DATASET_NAME]'" on [DATASET_NAME]_config.yaml file.
# Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on [DATASET_NAME]_config.yaml file.
# Set "model_ckpt='/cache/checkpoint_path/model.ckpt'" on [DATASET_NAME]_config.yaml file.
# (option)Set "device_target='GPU'" on [DATASET_NAME]_config.yaml file if run with GPU.
# (option)Set "device_target=GPU" on [DATASET_NAME]_config.yaml file if run with GPU.
# (option)Set other parameters on [DATASET_NAME]_config.yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "dataset_path='/cache/data/[DATASET_NAME]'" on the website UI interface.
# Add "data_name='[DATASET_NAME]'" on the website UI interface.
# Add "checkpoint_url='s3://dir_to_trained_ckpt/'" on the website UI interface.
# Add "model_ckpt='/cache/checkpoint_path/model.ckpt'" on the website UI interface.
# (option)Set "device_target='GPU'" on the website UI interface if run with GPU.
# (option)Set "device_target=GPU" on the website UI interface if run with GPU.
# (option)Set other parameters on the website UI interface.
# (3) Upload or copy your trained model to S3 bucket.
# (4) Upload a zip dataset to the S3 bucket. (You can also upload the original dataset, but it may be slow.)
......
......@@ -76,7 +76,7 @@ After installing MindSpore via the official website, you can start training and
--ckpt_path='./checkpoint' \
--eval_file_name='auc.log' \
--loss_file_name='loss.log' \
--device_target='Ascend' \
--device_target=Ascend \
--do_eval=True > ms_log/output.log 2>&1 &
# run distributed training example
......@@ -86,7 +86,7 @@ After installing MindSpore via the official website, you can start training and
python eval.py \
--dataset_path='dataset/test' \
--checkpoint_path='./checkpoint/deepfm.ckpt' \
--device_target='Ascend' > ms_log/eval_output.log 2>&1 &
--device_target=Ascend > ms_log/eval_output.log 2>&1 &
OR
bash scripts/run_eval.sh 0 Ascend /dataset_path /checkpoint_path/deepfm.ckpt
```
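One small detail in the commands above is the `--do_eval=True` flag: argparse does not turn the strings 'True'/'False' into booleans by itself, so scripts like these usually pass such flags through a converter. A sketch of that common pattern (not necessarily the exact helper used here):

```python
import argparse

def str2bool(value):
    """Map command-line strings such as 'True', 'false', '1' onto real booleans."""
    if isinstance(value, bool):
        return value
    if value.lower() in ("true", "t", "yes", "1"):
        return True
    if value.lower() in ("false", "f", "no", "0"):
        return False
    raise argparse.ArgumentTypeError(f"expected a boolean, got {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--do_eval", type=str2bool, default=False)
parser.add_argument("--device_target", type=str, default="Ascend")
args = parser.parse_args(["--do_eval=True", "--device_target=Ascend"])
print(args.do_eval, args.device_target)   # True Ascend
```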
......@@ -108,7 +108,7 @@ After installing MindSpore via the official website, you can start training and
--ckpt_path='./checkpoint' \
--eval_file_name='auc.log' \
--loss_file_name='loss.log' \
--device_target='GPU' \
--device_target=GPU \
--do_eval=True > ms_log/output.log 2>&1 &
# run distributed training example
......@@ -118,7 +118,7 @@ After installing MindSpore via the official website, you can start training and
python eval.py \
--dataset_path='dataset/test' \
--checkpoint_path='./checkpoint/deepfm.ckpt' \
--device_target='GPU' > ms_log/eval_output.log 2>&1 &
--device_target=GPU > ms_log/eval_output.log 2>&1 &
OR
bash scripts/run_eval.sh 0 GPU /dataset_path /checkpoint_path/deepfm.ckpt
```
......@@ -132,14 +132,14 @@ After installing MindSpore via the official website, you can start training and
--ckpt_path='./checkpoint' \
--eval_file_name='auc.log' \
--loss_file_name='loss.log' \
--device_target='CPU' \
--device_target=CPU \
--do_eval=True > ms_log/output.log 2>&1 &
# run evaluation example
python eval.py \
--dataset_path='dataset/test' \
--checkpoint_path='./checkpoint/deepfm.ckpt' \
--device_target='CPU' > ms_log/eval_output.log 2>&1 &
--device_target=CPU > ms_log/eval_output.log 2>&1 &
```
- Running on [ModelArts](https://support.huaweicloud.com/modelarts/)
......@@ -316,7 +316,7 @@ Parameters for both training and evaluation can be set in config.py
--ckpt_path='./checkpoint' \
--eval_file_name='auc.log' \
--loss_file_name='loss.log' \
--device_target='Ascend' \
--device_target=Ascend \
--do_eval=True > ms_log/output.log 2>&1 &
```
......@@ -362,7 +362,7 @@ Parameters for both training and evaluation can be set in config.py
python eval.py \
--dataset_path='dataset/test' \
--checkpoint_path='./checkpoint/deepfm.ckpt' \
--device_target='Ascend' > ms_log/eval_output.log 2>&1 &
--device_target=Ascend > ms_log/eval_output.log 2>&1 &
OR
bash scripts/run_eval.sh 0 Ascend /dataset_path /checkpoint_path/deepfm.ckpt
```
......
......@@ -79,7 +79,7 @@ The FM and deep learning parts share the same raw input feature vector, allowing DeepFM to learn from the raw input features simultaneously
--ckpt_path='./checkpoint' \
--eval_file_name='auc.log' \
--loss_file_name='loss.log' \
--device_target='Ascend' \
--device_target=Ascend \
--do_eval=True > ms_log/output.log 2>&1 &
# run distributed training example
......@@ -89,7 +89,7 @@ The FM and deep learning parts share the same raw input feature vector, allowing DeepFM to
python eval.py \
--dataset_path='dataset/test' \
--checkpoint_path='./checkpoint/deepfm.ckpt' \
--device_target='Ascend' > ms_log/eval_output.log 2>&1 &
--device_target=Ascend > ms_log/eval_output.log 2>&1 &
OR
bash scripts/run_eval.sh 0 Ascend /dataset_path /checkpoint_path/deepfm.ckpt
```
......@@ -111,7 +111,7 @@ The FM and deep learning parts share the same raw input feature vector, allowing DeepFM to
--ckpt_path='./checkpoint' \
--eval_file_name='auc.log' \
--loss_file_name='loss.log' \
--device_target='GPU' \
--device_target=GPU \
--do_eval=True > ms_log/output.log 2>&1 &
# run distributed training example
......@@ -121,7 +121,7 @@ The FM and deep learning parts share the same raw input feature vector, allowing DeepFM to
python eval.py \
--dataset_path='dataset/test' \
--checkpoint_path='./checkpoint/deepfm.ckpt' \
--device_target='GPU' > ms_log/eval_output.log 2>&1 &
--device_target=GPU > ms_log/eval_output.log 2>&1 &
OR
bash scripts/run_eval.sh 0 GPU /dataset_path /checkpoint_path/deepfm.ckpt
```
......@@ -300,7 +300,7 @@ The FM and deep learning parts share the same raw input feature vector, allowing DeepFM to
--ckpt_path='./checkpoint' \
--eval_file_name='auc.log' \
--loss_file_name='loss.log' \
--device_target='Ascend' \
--device_target=Ascend \
--do_eval=True > ms_log/output.log 2>&1 &
```
......@@ -344,7 +344,7 @@ The FM and deep learning parts share the same raw input feature vector, allowing DeepFM to
python eval.py \
--dataset_path='dataset/test' \
--checkpoint_path='./checkpoint/deepfm.ckpt' \
--device_target='Ascend' > ms_log/eval_output.log 2>&1 &
--device_target=Ascend > ms_log/eval_output.log 2>&1 &
OR
bash scripts/run_eval.sh 0 Ascend /dataset_path /checkpoint_path/deepfm.ckpt
```
......
......@@ -117,7 +117,7 @@ You can start training using python or shell scripts. The usage of shell scripts
# (1) Add "config_path='/path_to_code/MINDlarge_config.yaml'" on the website UI interface.
# (2) Perform a or b.
# a. Set "enable_modelarts=True" on MINDlarge_config.yaml file.
# Set "platform='Ascend'" on MINDlarge_config.yaml file.
# Set "platform=Ascend" on MINDlarge_config.yaml file.
# Set "dataset='large'" on MINDlarge_config.yaml file.
# Set "dataset_path='/cache/data/MINDlarge'" on MINDlarge_config.yaml file.
# Set "save_checkpoint_path='./checkpoint'" on MINDlarge_config.yaml file.
......@@ -145,7 +145,7 @@ You can start training using python or shell scripts. The usage of shell scripts
# (1) Add "config_path='/path_to_code/MINDlarge_config.yaml'" on the website UI interface.
# (2) Perform a or b.
# a. Set "enable_modelarts=True" on MINDlarge_config.yaml file.
# Set "platform='Ascend'" on MINDlarge_config.yaml file.
# Set "platform=Ascend" on MINDlarge_config.yaml file.
# Set "dataset='large'" on MINDlarge_config.yaml file.
# Set "dataset_path='/cache/data/MINDlarge'" on MINDlarge_config.yaml file.
# Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on MINDlarge_config.yaml file.
......@@ -172,7 +172,7 @@ You can start training using python or shell scripts. The usage of shell scripts
# (1) Add "config_path='/path_to_code/MINDlarge_config.yaml'" on the website UI interface.
# (2) Perform a or b.
# a. Set "enable_modelarts=True" on MINDlarge_config.yaml file.
# Set "platform='Ascend'" on MINDlarge_config.yaml file.
# Set "platform=Ascend" on MINDlarge_config.yaml file.
# Set "file_format='AIR'" on MINDlarge_config.yaml file.
# Set "batch_size=1" on MINDlarge_config.yaml file.
# Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on MINDlarge_config.yaml file.
......
......@@ -238,13 +238,13 @@ The entire code structure is as follows:
# Train 1p with Ascend
# (1) Perform a or b.
# a. Set "enable_modelarts=True" on base_config.yaml file.
# Set "run_platform='Ascend'" on default_config.yaml file.
# Set "run_platform=Ascend" on default_config.yaml file.
# Set "mindrecord_path='/cache/data/face_detect_dataset/mindrecord_train/data.mindrecord'" on default_config.yaml file.
# (optional)Set "checkpoint_url='s3://dir_to_your_pretrain/'" on default_config.yaml file.
# (optional)Set "pretrained='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file.
# Set other parameters on default_config.yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "run_platform='Ascend'" on the website UI interface.
# Add "run_platform=Ascend" on the website UI interface.
# Add "mindrecord_path='/cache/data/face_detect_dataset/mindrecord_train/data.mindrecord'" on the website UI interface.
# (optional)Add "checkpoint_url='s3://dir_to_your_pretrain/'" on the website UI interface.
# (optional)Add "pretrained='/cache/checkpoint_path/model.ckpt'" on the website UI interface.
......@@ -259,13 +259,13 @@ The entire code structure is as follows:
# Eval 1p with Ascend
# (1) Perform a or b.
# a. Set "enable_modelarts=True" on base_config.yaml file.
# Set "run_platform='Ascend'" on default_config.yaml file.
# Set "run_platform=Ascend" on default_config.yaml file.
# Set "mindrecord_path='/cache/data/face_detect_dataset/mindrecord_train/data.mindrecord'" on default_config.yaml file.
# Set "checkpoint_url='s3://dir_to_your_pretrain/'" on default_config.yaml file.
# Set "pretrained='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file.
# Set other parameters on default_config.yaml file you need.
# b. Add "enable_modelarts=True" on the website UI interface.
# Add "run_platform='Ascend'" on the website UI interface.
# Add "run_platform=Ascend" on the website UI interface.
# Add "mindrecord_path='/cache/data/face_detect_dataset/mindrecord_test/data.mindrecord'" on the website UI interface.
# Add "checkpoint_url='s3://dir_to_your_pretrain/'" on the website UI interface.
# Add "pretrained='/cache/checkpoint_path/model.ckpt'" on the website UI interface.
......
......@@ -298,7 +298,7 @@ epoch[179], iter[14930], loss:1.694281, 13417.38 imgs/sec, lr=0.0250000003725290
# Set "batch_size=1" on reid_1p_config.yaml file.
# Set "file_format='AIR'" on reid_1p_config.yaml file.
# Set "file_name='FaceRecognitionForTracking'" on reid_1p_config.yaml file.
# Set "device_target='Ascend'" on reid_1p_config.yaml file.
# Set "device_target=Ascend" on reid_1p_config.yaml file.
# Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on reid_1p_config.yaml file.
# Set "pretrained='/cache/checkpoint_path/model.ckpt'" on reid_1p_config.yaml file.
# Set other parameters on reid_1p_config.yaml file you need.
......@@ -306,7 +306,7 @@ epoch[179], iter[14930], loss:1.694281, 13417.38 imgs/sec, lr=0.0250000003725290
# Add "batch_size=1" on the website UI interface.
# Add "file_format='AIR'" on the website UI interface.
# Add "file_name='FaceRecognitionForTracking'" on the website UI interface.
# Add "device_target='Ascend'" on the website UI interface.
# Add "device_target=Ascend" on the website UI interface.
# Add "checkpoint_url='s3://dir_to_trained_ckpt/'" on the website UI interface.
# Add "pretrained='/cache/checkpoint_path/model.ckpt'" on the website UI interface.
# Add other parameters on the website UI interface.
......
......@@ -83,13 +83,13 @@
```bash
# run training example
python train.py --device_id=0 --device_type='Ascend' > train.log 2>&1 &
python train.py --device_id=0 --device_type=Ascend > train.log 2>&1 &
# run distributed training example
bash ./scripts/run_train_ascend.sh [RANK_TABLE_FILE]
# run evaluation example
python eval.py --checkpoint_path ./ckpt_0 --device_type='Ascend' > ./eval.log 2>&1 &
python eval.py --checkpoint_path ./ckpt_0 --device_type=Ascend > ./eval.log 2>&1 &
# run inference example
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
......@@ -107,13 +107,13 @@
```bash
# run training example
python train.py --device_id=0 --device_type='GPU' > train_gpu.log 2>&1 &
python train.py --device_id=0 --device_type=GPU > train_gpu.log 2>&1 &
# run distributed training example
bash ./scripts/run_train_gpu.sh 8 0,1,2,3,4,5,6,7
# run evaluation example
python eval.py --checkpoint_path ./ckpt_0 --device_type='GPU' > ./eval_gpu.log 2>&1 &
python eval.py --checkpoint_path ./ckpt_0 --device_type=GPU > ./eval_gpu.log 2>&1 &
```
# Script Description
......@@ -176,7 +176,7 @@
- Running in the Ascend processor environment
```bash
python train.py --device_id=0 --device_type='Ascend' > train.log 2>&1 &
python train.py --device_id=0 --device_type=Ascend > train.log 2>&1 &
```
The above python command runs in the background; the results can be viewed in the generated train.log file.
......@@ -196,7 +196,7 @@
To run in a GPU environment, change device_target from Ascend to GPU in the configuration file src/config.py.
```bash
python train.py --device_id=0 --device_type='GPU' > train_gpu.log 2>&1 &
python train.py --device_id=0 --device_type=GPU > train_gpu.log 2>&1 &
```
The above python command runs in the background; the results can be viewed in the generated train_gpu.log file.
......@@ -232,7 +232,7 @@
"./ckpt_0" is the directory where the trained .ckpt model files are saved.
```bash
python eval.py --checkpoint_path ./ckpt_0 --device_type='Ascend' > ./eval.log 2>&1 &
python eval.py --checkpoint_path ./ckpt_0 --device_type=Ascend > ./eval.log 2>&1 &
```
- Evaluating the ImageNet-1k dataset in the GPU environment
......@@ -240,7 +240,7 @@
"./ckpt_0" is the directory where the trained .ckpt model files are saved.
```bash
python eval.py --checkpoint_path ./ckpt_0 --device_type='GPU' > ./eval_gpu.log 2>&1 &
python eval.py --checkpoint_path ./ckpt_0 --device_type=GPU > ./eval_gpu.log 2>&1 &
OR
bash ./scripts/run_eval.sh
```
......
# Contents
- [Contents](#contents)
- [SqueezeNet Description](#squeezenet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
......@@ -11,15 +12,31 @@
- [Script and Sample Code](#script-and-sample-code)
- [Script Parameters](#script-parameters)
- [Training Process](#training-process)
- [Usage](#usage)
- [Running on Ascend](#running-on-ascend)
- [Running on GPU](#running-on-gpu)
- [Result](#result)
- [Evaluation Process](#evaluation-process)
- [Usage](#usage-1)
- [Running on Ascend](#running-on-ascend-1)
- [Running on GPU](#running-on-gpu-1)
- [Result](#result-1)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [SqueezeNet on CIFAR-10](#squeezenet-on-cifar-10)
- [SqueezeNet on ImageNet](#squeezenet-on-imagenet)
- [SqueezeNet_Residual on CIFAR-10](#squeezenet_residual-on-cifar-10)
- [SqueezeNet_Residual on ImageNet](#squeezenet_residual-on-imagenet)
- [Inference Performance](#inference-performance)
- [SqueezeNet on CIFAR-10](#squeezenet-on-cifar-10-1)
- [SqueezeNet on ImageNet](#squeezenet-on-imagenet-1)
- [SqueezeNet_Residual on CIFAR-10](#squeezenet_residual-on-cifar-10-1)
- [SqueezeNet_Residual on ImageNet](#squeezenet_residual-on-imagenet-1)
- [How to use](#how-to-use)
- [Inference](#inference)
- [Continue Training on the Pretrained Model](#continue-training-on-the-pretrained-model)
- [Transfer Learning](#transfer-learning)
- [Transfer Learning](#transfer-learning)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
......
# Contents
- [Contents](#contents)
- [SqueezeNet1_1 Description](#squeezenet1_1-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
......@@ -9,19 +10,27 @@
- [Script and Sample Code](#script-and-sample-code)
- [Script Parameters](#script-parameters)
- [Training Process](#training-process)
- [Usage](#usage)
- [Running on Ascend](#running-on-ascend)
- [Result](#result)
- [Evaluation Process](#evaluation-process)
- [Inference Process](#inference-process)
- [Export MindIR](#export-mindir)
- [Infer on Ascend310](#infer-on-ascend310)
- [result](#result)
- [Usage](#usage-1)
- [Running on Ascend](#running-on-ascend-1)
- [Result](#result-1)
- [Inference process](#inference-process)
- [Export MindIR](#export-mindir)
- [Infer on Ascend310](#infer-on-ascend310)
- [result](#result-2)
- [Model Description](#model-description)
- [Performance](#performance)
- [Evaluation Performance](#evaluation-performance)
- [SqueezeNet on ImageNet](#squeezenet-on-imagenet)
- [Inference Performance](#inference-performance)
- [SqueezeNet on ImageNet](#squeezenet-on-imagenet-1)
- [310 Inference Performance](#310-inference-performance)
- [SqueezeNet on ImageNet](#squeezenet-on-imagenet-2)
- [How to use](#how-to-use)
- [Inference](#inference)
- [Continue Training on the Pretrained Model](#continue-training-on-the-pretrained-model)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
......
......@@ -102,7 +102,7 @@ First set the config for data, train, eval in src/config.py
```python
# run training example
python train.py --amp_level 'O3' --device_target='GPU' --train_feat_dir your train dataset dir
python train.py --amp_level 'O3' --device_target=GPU --train_feat_dir your train dataset dir
# run evaluation example
# if you want to eval a specific model, you should specify model_dir to the ckpt path:
......
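The `--amp_level 'O3'` flag above is typically forwarded to MindSpore's high-level `Model` wrapper, which applies the corresponding mixed-precision policy. A minimal sketch with stand-in network, loss, and optimizer (not the repository's code):

```python
from mindspore import nn, Model

net = nn.Dense(16, 4)                                   # stand-in for the real network
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O3" casts the network to float16; "O0" keeps everything in float32.
model = Model(net, loss_fn=loss, optimizer=opt, metrics={"accuracy"}, amp_level="O3")
```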
......@@ -75,7 +75,7 @@ Fat - DeepFFM consists of three parts. The FFM component is a factorization mach
--ckpt_path='./checkpoint/Fat-DeepFFM' \
--eval_file_name='./auc.log' \
--loss_file_name='./loss.log' \
--device_target='Ascend' \
--device_target=Ascend \
--do_eval=True > output.log 2>&1 &
# run distributed training example
......@@ -106,7 +106,7 @@ Fat - DeepFFM consists of three parts. The FFM component is a factorization mach
--ckpt_path='./checkpoint/Fat-DeepFFM' \
--eval_file_name='./auc.log' \
--loss_file_name='./loss.log' \
--device_target='GPU' \
--device_target=GPU \
--do_eval=True > output.log 2>&1 &
# run distributed training example
......@@ -196,7 +196,7 @@ Parameters for both training and evaluation can be set in config.py
--ckpt_path='./checkpoint' \
--eval_file_name='./auc.log' \
--loss_file_name='./loss.log' \
--device_target='Ascend' \
--device_target=Ascend \
--do_eval=True > output.log 2>&1 &
```
......@@ -237,7 +237,7 @@ Parameters for both training and evaluation can be set in config.py
--dataset_path=' /dataset_path' \
--checkpoint_path='/ckpt_path' \
--device_id=0 \
--device_target='Ascend' > ms_log/eval_output.log 2>&1 &
--device_target=Ascend > ms_log/eval_output.log 2>&1 &
OR
bash scripts/run_eval.sh 0 Ascend /dataset_path /ckpt_path
```
......
......@@ -62,14 +62,14 @@ After installing MindSpore via the official website, you can start training and
--ckpt_path='./checkpoint' \
--eval_file_name='auc.log' \
--loss_file_name='loss.log' \
--device_target='Ascend' \
--device_target=Ascend \
--do_eval=True > ms_log/output.log 2>&1 &
# run evaluation example
python eval.py \
--test_data_dir='dataset/test' \
--checkpoint_path='./checkpoint/autodis.ckpt' \
--device_target='Ascend' > ms_log/eval_output.log 2>&1 &
--device_target=Ascend > ms_log/eval_output.log 2>&1 &
OR
bash scripts/run_eval.sh 0 Ascend /test_data_dir /checkpoint_path/autodis.ckpt
```
......@@ -191,7 +191,7 @@ Parameters for both training and evaluation can be set in `default_config.yaml`
--ckpt_path='./checkpoint' \
--eval_file_name='auc.log' \
--loss_file_name='loss.log' \
--device_target='Ascend' \
--device_target=Ascend \
--do_eval=True > ms_log/output.log 2>&1 &
```
......@@ -219,7 +219,7 @@ Parameters for both training and evaluation can be set in `default_config.yaml`
python eval.py \
--test_data_dir='dataset/test' \
--checkpoint_path='./checkpoint/autodis.ckpt' \
--device_target='Ascend' > ms_log/eval_output.log 2>&1 &
--device_target=Ascend > ms_log/eval_output.log 2>&1 &
OR
bash scripts/run_eval.sh 0 Ascend /test_data_dir /checkpoint_path/autodis.ckpt
```
......