diff --git a/benchmark/ascend/resnet/README.md b/benchmark/ascend/resnet/README.md index f8d916c772e519e0d5a83a74f65efab8a0baaff8..2ba337d61bcc45fd0851728988ab7fb470ea5be6 100644 --- a/benchmark/ascend/resnet/README.md +++ b/benchmark/ascend/resnet/README.md @@ -454,13 +454,13 @@ Export on ModelArts (If you want to run in modelarts, please check the official # Set "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file. # Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on default_config.yaml file. # Set "file_name='./resnet'" on default_config.yaml file. -# Set "file_format='AIR'" on default_config.yaml file. +# Set "file_format='MINDIR'" on default_config.yaml file. # Set other parameters on default_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add "checkpoint_url='s3://dir_to_trained_ckpt/'" on the website UI interface. # Add "file_name='./resnet'" on the website UI interface. -# Add "file_format='AIR'" on the website UI interface. +# Add "file_format='MINDIR'" on the website UI interface. # Add other parameters on the website UI interface. # (4) Set the code directory to "/path/resnet" on the website UI interface. # (5) Set the startup file to "export.py" on the website UI interface. @@ -590,4 +590,4 @@ Refer to the [ModelZoo FAQ](https://gitee.com/mindspore/models#FAQ) for some com - **Q: How to use `boost` to get the best performance?** - **A**: We provide the `boost_level` in the `Model` interface, when you set it to `O1` or `O2` mode, the network will automatically speed up. The high-performance mode has been fully verified on resnet50, you can use the `resnet50_imagenet2012_Boost_config.yaml` to experience this mode. Meanwhile, in `O1` or `O2` mode, it is recommended to set the following environment variables: ` export ENV_FUSION_CLEAR=1; export DATASET_ENABLE_NUMA=True; export ENV_SINGLE_EVAL=1; export SKT_ENABLE=1;`. \ No newline at end of file + **A**: We provide the `boost_level` in the `Model` interface, when you set it to `O1` or `O2` mode, the network will automatically speed up. The high-performance mode has been fully verified on resnet50, you can use the `resnet50_imagenet2012_Boost_config.yaml` to experience this mode. Meanwhile, in `O1` or `O2` mode, it is recommended to set the following environment variables: ` export ENV_FUSION_CLEAR=1; export DATASET_ENABLE_NUMA=True; export ENV_SINGLE_EVAL=1; export SKT_ENABLE=1;`. diff --git a/benchmark/ascend/resnet/README_CN.md b/benchmark/ascend/resnet/README_CN.md index 3df9996ecf1abc624f932e36789d63bdb8241cc3..5fa688b2c044bbfffb8adac2a2541a63c006ddf9 100644 --- a/benchmark/ascend/resnet/README_CN.md +++ b/benchmark/ascend/resnet/README_CN.md @@ -680,12 +680,12 @@ ModelArts导出mindir # 设置 "checkpoint_file_path='/cache/checkpoint_path/model.ckpt" 在 yaml 文件。 # 设置 "checkpoint_url=/The path of checkpoint in S3/" 在 yaml 文件。 # 设置 "file_name='./resnet'"参数在yaml文件。 -# 设置 "file_format='AIR'" 参数在yaml文件。 +# 设置 "file_format='MINDIR'" 参数在yaml文件。 # b. 
增加 "enable_modelarts=True" 参数在modearts的界面上。 # 增加 "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" 参数在modearts的界面上。 # 增加 "checkpoint_url=/The path of checkpoint in S3/" 参数在modearts的界面上。 # 设置 "file_name='./resnet'"参数在modearts的界面上。 -# 设置 "file_format='AIR'" 参数在modearts的界面上。 +# 设置 "file_format='MINDIR'" 参数在modearts的界面上。 # (4) 在modelarts的界面上设置代码的路径 "/path/resnet"。 # (5) 在modelarts的界面上设置模型的启动文件 "export.py" 。 # 模型的输出路径"Output file path" 和模型的日志路径 "Job log path" 。 diff --git a/official/cv/alexnet/README.md b/official/cv/alexnet/README.md index 69e1d87b41a4ba93a3f0ea7b232eac5976312d95..45d6ad67015ba91c45dd376962b62ee78ef8fb59 100644 --- a/official/cv/alexnet/README.md +++ b/official/cv/alexnet/README.md @@ -163,13 +163,13 @@ bash run_standalone_eval_ascend.sh [cifar10|imagenet] [DATA_PATH] [CKPT_NAME] [D # (1) Perform a or b. # a. Set "enable_modelarts=True" on base_config.yaml file. # Set "file_name='alexnet'" on base_config.yaml file. - # Set "file_format='AIR'" on base_config.yaml file. + # Set "file_format='MINDIR'" on base_config.yaml file. # Set "checkpoint_url='/The path of checkpoint in S3/'" on beta_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on base_config.yaml file. # Set other parameters on base_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "file_name='alexnet'" on the website UI interface. - # Add "file_format='AIR'" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url='/The path of checkpoint in S3/'" on the website UI interface. # Add "ckpt_file='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add other parameters on the website UI interface. diff --git a/official/cv/alexnet/README_CN.md b/official/cv/alexnet/README_CN.md index c284782bc76605c287bfd6a7b4eadec68221d66c..cdf4289c00bb4416b83bfc65f0b2717ea8164cb0 100644 --- a/official/cv/alexnet/README_CN.md +++ b/official/cv/alexnet/README_CN.md @@ -152,13 +152,13 @@ bash run_standalone_eval_ascend.sh [cifar10|imagenet] [DATA_PATH] [CKPT_NAME] [D # (1) 执行 a 或者 b. # a. 在 base_config.yaml 文件中设置 "enable_modelarts=True" # 在 base_config.yaml 文件中设置 "file_name='alexnet'" - # 在 base_config.yaml 文件中设置 "file_format='AIR'" + # 在 base_config.yaml 文件中设置 "file_format='MINDIR'" # 在 base_config.yaml 文件中设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在 base_config.yaml 文件中设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在 base_config.yaml 文件中设置 其他参数 # b. 在网页上设置 "enable_modelarts=True" # 在网页上设置 "file_name='alexnet'" - # 在网页上设置 "file_format='AIR'" + # 在网页上设置 "file_format='MINDIR'" # 在网页上设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在网页上设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在网页上设置 其他参数 diff --git a/official/cv/brdnet/export.py b/official/cv/brdnet/export.py index 32c92bdc70712705b015da690d512918b327213f..569915f985b1c527669d3c1855e13d37a3ef003a 100644 --- a/official/cv/brdnet/export.py +++ b/official/cv/brdnet/export.py @@ -35,7 +35,7 @@ parser.add_argument("--image_height", type=int, default=500, help="Image height. 
parser.add_argument("--image_width", type=int, default=500, help="Image width.") parser.add_argument("--ckpt_file", type=str, required=True, help="Checkpoint file path.") parser.add_argument("--file_name", type=str, default="brdnet", help="output file name.") -parser.add_argument("--file_format", type=str, choices=["AIR", "ONNX", "MINDIR"], default="AIR", help="file format") +parser.add_argument("--file_format", type=str, choices=["AIR", "ONNX", "MINDIR"], default="MINDIR", help="file format") parser.add_argument('--device_target', type=str, default='Ascend' , help='device where the code will be implemented. (Default: Ascend)') parser.add_argument("--device_id", type=int, default=0, help="Device id") diff --git a/official/cv/brdnet/infer/README_CN.md b/official/cv/brdnet/infer/README_CN.md index c539398516b4833f0421ba1ce60a6641b5b36c67..2d5293ca3ac7f6a3df248d84a4d06e57e7a7fe4a 100644 --- a/official/cv/brdnet/infer/README_CN.md +++ b/official/cv/brdnet/infer/README_CN.md @@ -50,7 +50,7 @@ python export.py \ --image_width=500 \ --ckpt_file=xxx/brdnet.ckpt \ --file_name=brdnet \ ---file_format=AIR \ +--file_format='AIR' \ --device_target=Ascend \ --device_id=0 \ ``` diff --git a/official/cv/centerface/README.md b/official/cv/centerface/README.md index a80ffa5f0b254884460af39f17c36de9a7c91889..5c4824aebd7cc61216811db2aee9348a4c8382d0 100644 --- a/official/cv/centerface/README.md +++ b/official/cv/centerface/README.md @@ -286,13 +286,13 @@ bash eval_all.sh [GROUND_TRUTH_PATH] [FILTER_EASY](optional) [FILTER_MEDIUM](opt # (1) Perform a or b. # a. Set "enable_modelarts=True" on base_config.yaml file. # Set "file_name='centerface'" on base_config.yaml file. - # Set "file_format='AIR'" on base_config.yaml file. + # Set "file_format='MINDIR'" on base_config.yaml file. # Set "checkpoint_url='/The path of checkpoint in S3/'" on beta_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on base_config.yaml file. # Set other parameters on base_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "file_name='centerface'" on the website UI interface. - # Add "file_format='AIR'" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url='/The path of checkpoint in S3/'" on the website UI interface. # Add "ckpt_file='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add other parameters on the website UI interface. diff --git a/official/cv/deeplabv3/README.md b/official/cv/deeplabv3/README.md index 4f55129f70865688a18f8eacd4cf68287714c4ab..ee4c417dd5238fd875a4ed96c8632d6193465728 100644 --- a/official/cv/deeplabv3/README.md +++ b/official/cv/deeplabv3/README.md @@ -445,7 +445,7 @@ bash run_eval_s8_multiscale_flip.sh # Set "export_model='deeplab_v3_s8'" on base_config.yaml file. # Set "export_batch_size=1" on base_config.yaml file. # Set "file_name='deeplabv3'" on base_config.yaml file. - # Set "file_format='AIR'" on base_config.yaml file. + # Set "file_format='MINDIR'" on base_config.yaml file. # Set "checkpoint_url='/The path of checkpoint in S3/'" on beta_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on base_config.yaml file. # Set other parameters on base_config.yaml file you need. @@ -453,7 +453,7 @@ bash run_eval_s8_multiscale_flip.sh # Add "export_model='deeplab_v3_s8'" on the website UI interface. # Add "export_batch_size=1" on the website UI interface. # Add "file_name='deeplabv3'" on the website UI interface. 
- # Add "file_format='AIR'" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url='/The path of checkpoint in S3/'" on the website UI interface. # Add "ckpt_file='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add other parameters on the website UI interface. diff --git a/official/cv/deeplabv3/README_CN.md b/official/cv/deeplabv3/README_CN.md index 63b111ea4c9b20932b734be6e7200d29d84f4fcd..a0e9cafb21ac3769b857d62e2afe431818c04197 100644 --- a/official/cv/deeplabv3/README_CN.md +++ b/official/cv/deeplabv3/README_CN.md @@ -446,7 +446,7 @@ bash run_eval_s8_multiscale_flip.sh # 在 base_config.yaml 文件中设置 "export_model='deeplab_v3_s8'" # 在 base_config.yaml 文件中设置 "export_batch_size=1" # 在 base_config.yaml 文件中设置 "file_name='deeplabv3'" - # 在 base_config.yaml 文件中设置 "file_format='AIR'" + # 在 base_config.yaml 文件中设置 "file_format='MINDIR'" # 在 base_config.yaml 文件中设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在 base_config.yaml 文件中设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在 base_config.yaml 文件中设置 其他参数 @@ -454,7 +454,7 @@ bash run_eval_s8_multiscale_flip.sh # 在网页上设置 "export_model='deeplab_v3_s8'" # 在网页上设置 "export_batch_size=1" # 在网页上设置 "file_name='deeplabv3'" - # 在网页上设置 "file_format='AIR'" + # 在网页上设置 "file_format='MINDIR'" # 在网页上设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在网页上设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在网页上设置 其他参数 diff --git a/official/cv/deeplabv3plus/export.py b/official/cv/deeplabv3plus/export.py index 2bae359da90477a73a70040b3f22b663022a778e..90e5808d8d3d44c34fb288e88135a2fa624f8bd7 100644 --- a/official/cv/deeplabv3plus/export.py +++ b/official/cv/deeplabv3plus/export.py @@ -24,4 +24,4 @@ if __name__ == '__main__': # load the parameter into net load_param_into_net(network, param_dict) input_data = np.random.uniform(0.0, 1.0, size=[32, 3, 513, 513]).astype(np.float32) - export(network, Tensor(input_data), file_name=args.model + '-300_11.air', file_format='AIR') + export(network, Tensor(input_data), file_name=args.model + '-300_11.air', file_format='MINDIR') diff --git a/official/cv/depthnet/export.py b/official/cv/depthnet/export.py index 8fb2369825648516beab6201d51314797cca10b5..017bd24cb5468af7406a83bef310d71c24951a1c 100644 --- a/official/cv/depthnet/export.py +++ b/official/cv/depthnet/export.py @@ -57,7 +57,7 @@ if __name__ == "__main__": export(coarse_net, Tensor(input_rgb_coarsenet), file_name=os.path.join(mindir_dir, "FinalCoarseNet"), file_format='MINDIR') export(coarse_net, Tensor(input_rgb_coarsenet), file_name=os.path.join(air_dir, "FinalCoarseNet"), - file_format='AIR') + file_format='MINDIR') else: fine_net = FineNet() fine_net_file_name = os.path.join(ckpt_dir, "FinalFineNet.ckpt") @@ -68,4 +68,4 @@ if __name__ == "__main__": export(fine_net, Tensor(input_rgb_finenet), Tensor(input_coarse_depth), file_name=os.path.join(mindir_dir, "FinalFineNet"), file_format='MINDIR') export(fine_net, Tensor(input_rgb_finenet), Tensor(input_coarse_depth), - file_name=os.path.join(air_dir, "FinalFineNet"), file_format='AIR') + file_name=os.path.join(air_dir, "FinalFineNet"), file_format='MINDIR') diff --git a/official/cv/east/export.py b/official/cv/east/export.py index aeaeecd417748763895f40dd98dc35dd01169d4f..eff2a0c5f7293d64dc6c670ad25b6a40c4e3f4e8 100644 --- a/official/cv/east/export.py +++ b/official/cv/east/export.py @@ -57,7 +57,7 @@ parser.add_argument( "AIR", "ONNX", "MINDIR"], - default="AIR", + default='MINDIR', help="file format") args_opt = 
parser.parse_args() diff --git a/official/cv/east/infer/README_CN.md b/official/cv/east/infer/README_CN.md index 3995bab9179f596718e51bd4d6beff44a3816aad..28acf92523e3672cda30a06c276f4e05b4f9b5a5 100644 --- a/official/cv/east/infer/README_CN.md +++ b/official/cv/east/infer/README_CN.md @@ -95,7 +95,7 @@ python export.py \ --image_width=1280 \ --ckpt_file=xxx/east.ckpt \ --file_name=east \ ---file_format=AIR \ +--file_format='MINDIR' \ --device_target=Ascend \ --device_id=0 \ ``` diff --git a/official/cv/faster_rcnn/README.md b/official/cv/faster_rcnn/README.md index 21f3209cc6926c2e2c734b2471d467f28ba9b1d4..b8401aba55a6aa891e50984d696bc06440a346f4 100644 --- a/official/cv/faster_rcnn/README.md +++ b/official/cv/faster_rcnn/README.md @@ -268,13 +268,13 @@ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [ANN_FILE] [IMAGE_WIDTH](optiona # (1) Perform a or b. # a. Set "enable_modelarts=True" on base_config.yaml file. # Set "file_name='faster_rcnn'" on base_config.yaml file. - # Set "file_format='AIR'" on base_config.yaml file. + # Set "file_format='MINDIR'" on base_config.yaml file. # Set "checkpoint_url='/The path of checkpoint in S3/'" on beta_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on base_config.yaml file. # Set other parameters on base_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "file_name='faster_rcnn'" on the website UI interface. - # Add "file_format='AIR'" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url='/The path of checkpoint in S3/'" on the website UI interface. # Add "ckpt_file='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add other parameters on the website UI interface. diff --git a/official/cv/faster_rcnn/README_CN.md b/official/cv/faster_rcnn/README_CN.md index 911d8507080ea1e207ae131dfb952f7e5ca9849b..b234fb61e0e3f87c88015bc7f7e067423365b87a 100644 --- a/official/cv/faster_rcnn/README_CN.md +++ b/official/cv/faster_rcnn/README_CN.md @@ -268,13 +268,13 @@ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [ANN_FILE] [IMAGE_WIDTH](optiona # (1) 执行 a 或者 b. # a. 在 base_config.yaml 文件中设置 "enable_modelarts=True" # 在 base_config.yaml 文件中设置 "file_name='faster_rcnn'" - # 在 base_config.yaml 文件中设置 "file_format='AIR'" + # 在 base_config.yaml 文件中设置 "file_format='MINDIR'" # 在 base_config.yaml 文件中设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在 base_config.yaml 文件中设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在 base_config.yaml 文件中设置 其他参数 # b. 
在网页上设置 "enable_modelarts=True" # 在网页上设置 "file_name='faster_rcnn'" - # 在网页上设置 "file_format='AIR'" + # 在网页上设置 "file_format='MINDIR'" # 在网页上设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在网页上设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在网页上设置 其他参数 diff --git a/official/cv/fastscnn/infer/README_CN.md b/official/cv/fastscnn/infer/README_CN.md index e294cba022fa25b1619835a6f73d1feebe2e87b8..b8535fb95af24d54e85c8e373788c7e9f95b9d61 100644 --- a/official/cv/fastscnn/infer/README_CN.md +++ b/official/cv/fastscnn/infer/README_CN.md @@ -49,7 +49,7 @@ python export.py \ --image_width=768 \ --ckpt_file=xxx/fastscnn.ckpt \ --file_name=fastscnn \ ---file_format=AIR \ +--file_format='MINDIR' \ --device_target=Ascend \ --device_id=0 \ diff --git a/official/cv/inceptionv3/README.md b/official/cv/inceptionv3/README.md index e445e702241703a3da1304e7b25c07f7b370c723..49226736664b8fe1797bd31a0bdc3e5f352ec07a 100644 --- a/official/cv/inceptionv3/README.md +++ b/official/cv/inceptionv3/README.md @@ -179,13 +179,13 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # (1) Perform a or b. # a. Set "enable_modelarts=True" on base_config.yaml file. # Set "file_name='inceptionv3'" on base_config.yaml file. - # Set "file_format='AIR'" on base_config.yaml file. + # Set "file_format='MINDIR'" on base_config.yaml file. # Set "checkpoint_url='/The path of checkpoint in S3/'" on beta_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on base_config.yaml file. # Set other parameters on base_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "file_name='inceptionv3'" on the website UI interface. - # Add "file_format='AIR'" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url='/The path of checkpoint in S3/'" on the website UI interface. # Add "ckpt_file='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add other parameters on the website UI interface. diff --git a/official/cv/inceptionv3/README_CN.md b/official/cv/inceptionv3/README_CN.md index cb1910b17b3a137367a716a6edb501030160c6d7..cbfee900011dd911b1b2ef80bf382d5e5d203d1e 100644 --- a/official/cv/inceptionv3/README_CN.md +++ b/official/cv/inceptionv3/README_CN.md @@ -183,13 +183,13 @@ InceptionV3的总体网络架构如下: # (1) 执行 a 或者 b. # a. 在 base_config.yaml 文件中设置 "enable_modelarts=True" # 在 base_config.yaml 文件中设置 "file_name='inceptionv3'" - # 在 base_config.yaml 文件中设置 "file_format='AIR'" + # 在 base_config.yaml 文件中设置 "file_format='MINDIR'" # 在 base_config.yaml 文件中设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在 base_config.yaml 文件中设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在 base_config.yaml 文件中设置 其他参数 # b. 在网页上设置 "enable_modelarts=True" # 在网页上设置 "file_name='inceptionv3'" - # 在网页上设置 "file_format='AIR'" + # 在网页上设置 "file_format='MINDIR'" # 在网页上设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在网页上设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在网页上设置 其他参数 diff --git a/official/cv/lenet/README.md b/official/cv/lenet/README.md index 0fec01ff4794d5caa928dc886c29846d1a1a5740..db1b730477b4f0a53f133cd09921a2aa6569af75 100644 --- a/official/cv/lenet/README.md +++ b/official/cv/lenet/README.md @@ -135,13 +135,13 @@ bash run_standalone_eval_ascend.sh [DATA_PATH] [CKPT_NAME] # (1) Perform a or b. # a. Set "enable_modelarts=True" on base_config.yaml file. # Set "file_name='lenet'" on base_config.yaml file. - # Set "file_format='AIR'" on base_config.yaml file. 
+ # Set "file_format='MINDIR'" on base_config.yaml file. # Set "checkpoint_url='/The path of checkpoint in S3/'" on beta_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on base_config.yaml file. # Set other parameters on base_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "file_name='lenet'" on the website UI interface. - # Add "file_format='AIR'" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url='/The path of checkpoint in S3/'" on the website UI interface. # Add "ckpt_file='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add other parameters on the website UI interface. diff --git a/official/cv/lenet/README_CN.md b/official/cv/lenet/README_CN.md index 769d424d04b5224f35dbde22ffd05ce1ab220b39..aacfccf33b184f15621fba9031f3064b32fc2e40 100644 --- a/official/cv/lenet/README_CN.md +++ b/official/cv/lenet/README_CN.md @@ -135,13 +135,13 @@ bash run_standalone_eval_ascend.sh [DATA_PATH] [CKPT_NAME] # (1) 执行 a 或者 b. # a. 在 base_config.yaml 文件中设置 "enable_modelarts=True" # 在 base_config.yaml 文件中设置 "file_name='/cache/train/lenet'" - # 在 base_config.yaml 文件中设置 "file_format='AIR'" + # 在 base_config.yaml 文件中设置 "file_format='MINDIR'" # 在 base_config.yaml 文件中设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在 base_config.yaml 文件中设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在 base_config.yaml 文件中设置 其他参数 # b. 在网页上设置 "enable_modelarts=True" # 在网页上设置 "file_name='/cache/train/lenet'" - # 在网页上设置 "file_format='AIR'" + # 在网页上设置 "file_format='MINDIR'" # 在网页上设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在网页上设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在网页上设置 其他参数 diff --git a/official/cv/maskrcnn/README.md b/official/cv/maskrcnn/README.md index bd70672b1802b094e3c8fc5aa9f0b6600ef113e2..790cd660f10e4c3c76f005c5cd47b197519c715c 100644 --- a/official/cv/maskrcnn/README.md +++ b/official/cv/maskrcnn/README.md @@ -310,13 +310,13 @@ bash run_eval.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH] [DATA_PATH] # (1) Perform a or b. # a. Set "enable_modelarts=True" on base_config.yaml file. # Set "file_name='maskrcnn'" on base_config.yaml file. - # Set "file_format='AIR'" on base_config.yaml file. + # Set "file_format='MINDIR'" on base_config.yaml file. # Set "checkpoint_url='/The path of checkpoint in S3/'" on beta_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on base_config.yaml file. # Set other parameters on base_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "file_name='maskrcnn'" on the website UI interface. - # Add "file_format='AIR'" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url='/The path of checkpoint in S3/'" on the website UI interface. # Add "ckpt_file='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add other parameters on the website UI interface. diff --git a/official/cv/maskrcnn/README_CN.md b/official/cv/maskrcnn/README_CN.md index 2e37e9c03d776c751c6c07fb00f443d234ebee96..1bbee945c5f571cb6d27390735e60c50a1f8cd01 100644 --- a/official/cv/maskrcnn/README_CN.md +++ b/official/cv/maskrcnn/README_CN.md @@ -295,13 +295,13 @@ bash run_eval.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH] [DATA_PATH] # (1) 执行 a 或者 b. # a. 
在 base_config.yaml 文件中设置 "enable_modelarts=True" # 在 base_config.yaml 文件中设置 "file_name='maskrcnn'" - # 在 base_config.yaml 文件中设置 "file_format='AIR'" + # 在 base_config.yaml 文件中设置 "file_format='MINDIR'" # 在 base_config.yaml 文件中设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在 base_config.yaml 文件中设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在 base_config.yaml 文件中设置 其他参数 # b. 在网页上设置 "enable_modelarts=True" # 在网页上设置 "file_name='maskrcnn'" - # 在网页上设置 "file_format='AIR'" + # 在网页上设置 "file_format='MINDIR'" # 在网页上设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在网页上设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在网页上设置 其他参数 diff --git a/official/cv/maskrcnn_mobilenetv1/README.md b/official/cv/maskrcnn_mobilenetv1/README.md index 3d8b112692eca0c76bb48dfe421eb825d61c1c3f..166e8c963a8a29421153e4dcc2a514649f59115a 100644 --- a/official/cv/maskrcnn_mobilenetv1/README.md +++ b/official/cv/maskrcnn_mobilenetv1/README.md @@ -284,13 +284,13 @@ pip install mmcv=0.2.14 # (1) Perform a or b. # a. Set "enable_modelarts=True" on base_config.yaml file. # Set "file_name='maskrcnn_mobilenetv1'" on base_config.yaml file. - # Set "file_format='AIR'" on base_config.yaml file. + # Set "file_format='MINDIR'" on base_config.yaml file. # Set "checkpoint_url='/The path of checkpoint in S3/'" on beta_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on base_config.yaml file. # Set other parameters on base_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "file_name='maskrcnn_mobilenetv1'" on the website UI interface. - # Add "file_format='AIR'" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url='/The path of checkpoint in S3/'" on the website UI interface. # Add "ckpt_file='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add other parameters on the website UI interface. diff --git a/official/cv/mobilenetv2/README.md b/official/cv/mobilenetv2/README.md index e7b2b046a42a9e986fc26a8beae47b408ed0a876..47df3292941993a2654191c3afeab12d929c64b5 100644 --- a/official/cv/mobilenetv2/README.md +++ b/official/cv/mobilenetv2/README.md @@ -170,13 +170,13 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil # (1) Perform a or b. # a. Set "enable_modelarts=True" on base_config.yaml file. # Set "file_name='mobilenetv2'" on base_config.yaml file. - # Set "file_format='AIR'" on base_config.yaml file. + # Set "file_format='MINDIR'" on base_config.yaml file. # Set "checkpoint_url='/The path of checkpoint in S3/'" on beta_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on base_config.yaml file. # Set other parameters on base_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "file_name='mobilenetv2'" on the website UI interface. - # Add "file_format='AIR'" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url='/The path of checkpoint in S3/'" on the website UI interface. # Add "ckpt_file='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add other parameters on the website UI interface. 
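The README hunks above all change the same configuration key: `file_format` now defaults to `'MINDIR'` instead of `'AIR'`. For reference, a minimal, self-contained sketch of the `mindspore.export` call that this key ultimately drives; `TinyNet` is a toy placeholder rather than any real backbone, and the checkpoint/file names are illustrative only:

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor, export

class TinyNet(nn.Cell):
    """Toy placeholder network; only the export call below is the point."""
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.fc = nn.Dense(3 * 224 * 224, 10)

    def construct(self, x):
        return self.fc(self.flatten(x))

net = TinyNet()
# In the real export.py scripts the weights come from ckpt_file
# (e.g. '/cache/checkpoint_path/model.ckpt') via load_checkpoint/load_param_into_net;
# that step is skipped here so the sketch runs stand-alone.
net.set_train(False)

dummy_input = Tensor(np.zeros([1, 3, 224, 224], np.float32))
# file_format is the value set in base_config.yaml / the ModelArts UI; 'MINDIR' is the new default.
export(net, dummy_input, file_name="mobilenetv2", file_format="MINDIR")
```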
diff --git a/official/cv/mobilenetv2/README_CN.md b/official/cv/mobilenetv2/README_CN.md index 88caa2261ec6d26791c7f16bf6b0781da3f4a538..52285ae0f2c474f19ba84e1b57b22b8a4e33323c 100644 --- a/official/cv/mobilenetv2/README_CN.md +++ b/official/cv/mobilenetv2/README_CN.md @@ -166,13 +166,13 @@ MobileNetV2总体网络架构如下: # (1) 执行 a 或者 b. # a. 在 base_config.yaml 文件中设置 "enable_modelarts=True" # 在 base_config.yaml 文件中设置 "file_name='mobilenetv2'" - # 在 base_config.yaml 文件中设置 "file_format='AIR'" + # 在 base_config.yaml 文件中设置 "file_format='MINDIR'" # 在 base_config.yaml 文件中设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在 base_config.yaml 文件中设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在 base_config.yaml 文件中设置 其他参数 # b. 在网页上设置 "enable_modelarts=True" # 在网页上设置 "file_name='mobilenetv2'" - # 在网页上设置 "file_format='AIR'" + # 在网页上设置 "file_format='MINDIR'" # 在网页上设置 "checkpoint_url='/The path of checkpoint in S3/'" # 在网页上设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在网页上设置 其他参数 diff --git a/official/cv/nasnet/modelarts/train_start.py b/official/cv/nasnet/modelarts/train_start.py index e0eddf012e16d8563df4b28db1d66ed57ba57c9f..0aaef199284bf5f32440bbb81f5b4c8aaa814282 100755 --- a/official/cv/nasnet/modelarts/train_start.py +++ b/official/cv/nasnet/modelarts/train_start.py @@ -64,7 +64,7 @@ def export_models(checkpoint_path): input_data = Tensor(np.zeros([1, 3, 224, 224]), mstype.float32) if args_opt.export_mindir_model: - export(net, input_data, file_name=output_file, file_format="MINDIR") + export(net, input_data, file_name=output_file, file_format="MINDIR") if args_opt.export_air_model and context.get_context("device_target") == "Ascend": export(net, input_data, file_name=output_file, file_format="AIR") if args_opt.export_onnx_model: diff --git a/official/cv/resnet/README.md b/official/cv/resnet/README.md index 0a66278094da1b6dcd625fa787aeb15adcf4b5af..000e75673f9523c41bfac31bd587a545595e906f 100644 --- a/official/cv/resnet/README.md +++ b/official/cv/resnet/README.md @@ -719,13 +719,13 @@ Export on ModelArts (If you want to run in modelarts, please check the official # Set "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file. # Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on default_config.yaml file. # Set "file_name='./resnet'" on default_config.yaml file. -# Set "file_format='AIR'" on default_config.yaml file. +# Set "file_format='MINDIR'" on default_config.yaml file. # Set other parameters on default_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add "checkpoint_url='s3://dir_to_trained_ckpt/'" on the website UI interface. # Add "file_name='./resnet'" on the website UI interface. -# Add "file_format='AIR'" on the website UI interface. +# Add "file_format='MINDIR'" on the website UI interface. # Add other parameters on the website UI interface. # (4) Set the code directory to "/path/resnet" on the website UI interface. # (5) Set the startup file to "export.py" on the website UI interface.
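The modelarts `train_start.py` scripts above gate each format behind its own flag rather than a single `file_format` value. As a reference, a minimal sketch of that pattern with each format kept consistent with its flag (`net` and `output_file` are supplied by the surrounding training script; the signature here is simplified and illustrative):

```python
import numpy as np
from mindspore import Tensor, context, export
import mindspore.common.dtype as mstype

def export_models(net, output_file, export_mindir_model=True,
                  export_air_model=False, export_onnx_model=False):
    """Export the trained net in each requested format; MINDIR is the usual default."""
    input_data = Tensor(np.zeros([1, 3, 224, 224]), mstype.float32)
    if export_mindir_model:
        export(net, input_data, file_name=output_file, file_format="MINDIR")
    # AIR export is only meaningful on Ascend targets.
    if export_air_model and context.get_context("device_target") == "Ascend":
        export(net, input_data, file_name=output_file, file_format="AIR")
    if export_onnx_model:
        export(net, input_data, file_name=output_file, file_format="ONNX")
```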
diff --git a/official/cv/resnet/README_CN.md b/official/cv/resnet/README_CN.md index 3df9996ecf1abc624f932e36789d63bdb8241cc3..5fa688b2c044bbfffb8adac2a2541a63c006ddf9 100644 --- a/official/cv/resnet/README_CN.md +++ b/official/cv/resnet/README_CN.md @@ -680,12 +680,12 @@ ModelArts导出mindir # 设置 "checkpoint_file_path='/cache/checkpoint_path/model.ckpt" 在 yaml 文件。 # 设置 "checkpoint_url=/The path of checkpoint in S3/" 在 yaml 文件。 # 设置 "file_name='./resnet'"参数在yaml文件。 -# 设置 "file_format='AIR'" 参数在yaml文件。 +# 设置 "file_format='MINDIR'" 参数在yaml文件。 # b. 增加 "enable_modelarts=True" 参数在modearts的界面上。 # 增加 "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" 参数在modearts的界面上。 # 增加 "checkpoint_url=/The path of checkpoint in S3/" 参数在modearts的界面上。 # 设置 "file_name='./resnet'"参数在modearts的界面上。 -# 设置 "file_format='AIR'" 参数在modearts的界面上。 +# 设置 "file_format='MINDIR'" 参数在modearts的界面上。 # (4) 在modelarts的界面上设置代码的路径 "/path/resnet"。 # (5) 在modelarts的界面上设置模型的启动文件 "export.py" 。 # 模型的输出路径"Output file path" 和模型的日志路径 "Job log path" 。 diff --git a/official/cv/resnet_thor/README_CN.md b/official/cv/resnet_thor/README_CN.md index 30972e4c8c44da39188576120ebad64985a09f2c..5bfb88e093ee29188bb10c93b8e246adb7a718fb 100644 --- a/official/cv/resnet_thor/README_CN.md +++ b/official/cv/resnet_thor/README_CN.md @@ -120,7 +120,7 @@ bash run_eval_gpu.sh [DATASET_PATH] [CHECKPOINT_PATH] │ └── dataset.py # 数据预处理 ├── eval.py # 推理脚本 ├── train.py # 训练脚本 - ├── export.py # 将checkpoint文件导出为AIR文件 + ├── export.py # 将checkpoint文件导出为MINDIR,AIR文件 └── mindspore_hub_conf.py # MinSpore Hub仓库的配置文件 ``` diff --git a/official/cv/shufflenetv2/modelarts/train_start.py b/official/cv/shufflenetv2/modelarts/train_start.py index 5e036baeff073b5cd283751a0cbf78ca562cc341..592709420d4537ff4f7c81a1a835063d9b159d37 100755 --- a/official/cv/shufflenetv2/modelarts/train_start.py +++ b/official/cv/shufflenetv2/modelarts/train_start.py @@ -59,7 +59,7 @@ def export_models(checkpoint_path): input_data = Tensor(np.zeros([1, 3, 224, 224]), mstype.float32) if args_opt.export_mindir_model: - export(network, input_data, file_name=output_file, file_format="MINDIR") + export(network, input_data, file_name=output_file, file_format="MINDIR") if args_opt.export_air_model and context.get_context("device_target") == "Ascend": export(network, input_data, file_name=output_file, file_format="AIR") if args_opt.export_onnx_model: diff --git a/official/cv/vgg16/README.md b/official/cv/vgg16/README.md index a2431e540781a9ec5370b7873cfd3a9cec2c8574..93153c227e0aa3c79113ec24ade40093f2e5f7ba 100644 --- a/official/cv/vgg16/README.md +++ b/official/cv/vgg16/README.md @@ -250,13 +250,13 @@ python eval.py --config_path=[YAML_CONFIG_PATH] --device_target="GPU" --dataset= # (2) Perform a or b. # a. Set "enable_modelarts=True" on imagenet2012_config.yaml file. # Set "file_name='vgg16'" on imagenet2012_config.yaml file. -# Set "file_format='AIR'" on imagenet2012_config.yaml file. +# Set "file_format='MINDIR'" on imagenet2012_config.yaml file. # Set "checkpoint_url='s3://dir_to_your_trained_model/'" on imagenet2012_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on imagenet2012_config.yaml file. # Set other parameters on imagenet2012_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "file_name=vgg16" on the website UI interface. -# Add "file_format=AIR" on the website UI interface. +# Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url=s3://dir_to_your_trained_model/" on the website UI interface.
# Add "ckpt_file=/cache/checkpoint_path/model.ckpt" on the website UI interface. # Add other parameters on the website UI interface. diff --git a/official/cv/vgg16/README_CN.md b/official/cv/vgg16/README_CN.md index 46b4e6f514a2becff571f8793e73021839668956..82602cc9bb03375a297d7a5b92955bd18581a8a6 100644 --- a/official/cv/vgg16/README_CN.md +++ b/official/cv/vgg16/README_CN.md @@ -251,13 +251,13 @@ python eval.py --config_path=[YAML_CONFIG_PATH] --device_target="GPU" --dataset= # (2) 执行a或者b # a. 在 imagenet2012_config.yaml 文件中设置 "enable_modelarts=True" # 在 imagenet2012_config.yaml 文件中设置 "file_name='vgg16'" -# 在 imagenet2012_config.yaml 文件中设置 "file_format='AIR'" +# 在 imagenet2012_config.yaml 文件中设置 "file_format='MINDIR'" # 在 imagenet2012_config.yaml 文件中设置 "checkpoint_url='s3://dir_to_your_trained_model/'" # 在 imagenet2012_config.yaml 文件中设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在 imagenet2012_config.yaml 文件中设置 其他参数 # b. 在网页上设置 "enable_modelarts=True" # 在网页上设置 "file_name=vgg16" -# 在网页上设置 "file_format=AIR" +# 在网页上设置 "file_format='MINDIR'" # 在网页上设置 "checkpoint_url=s3://dir_to_your_trained_model/" # 在网页上设置 "ckpt_file=/cache/checkpoint_path/model.ckpt" # 在网页上设置 其他参数 diff --git a/official/cv/yolov4/README_CN.md b/official/cv/yolov4/README_CN.md index 9baa09db39ff0c7d7a19c0fad3a7a51ddb36d620..9a3a6e2d9730f88001010f13803d4db028496986 100644 --- a/official/cv/yolov4/README_CN.md +++ b/official/cv/yolov4/README_CN.md @@ -258,7 +258,7 @@ YOLOv4需要CSPDarknet53主干来提取图像特征进行检测。 您可以从[ ├─config.py # 参数配置 ├─cspdarknet53.py # 网络主干 ├─distributed_sampler.py # 数据集迭代器 - ├─export.py # 将MindSpore模型转换为AIR模型 + ├─export.py # 将MindSpore模型转换为MINDIR,AIR模型 ├─initializer.py # 参数初始化器 ├─logger.py # 日志函数 ├─loss.py # 损失函数 diff --git a/official/gnn/bgcf/README.md b/official/gnn/bgcf/README.md index 991c10c420b8428228c97cd4ef7b8833d0ab43d3..da7a765001e121c3b2bca6e5fe28057b0bd61ddc 100644 --- a/official/gnn/bgcf/README.md +++ b/official/gnn/bgcf/README.md @@ -185,7 +185,7 @@ After installing MindSpore via the official website and Dataset is correctly gen # Add "ckpt_file=/cache/checkpoint_path/model.ckpt" on the website UI interface. # Add "checkpoint_url=s3://dir_to_your_trained_ckpt/" on the website UI interface. # Add "file_name=bgcf" on the website UI interface. - # Add "file_format=AIR" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # (options)Add "device_target=GPU" on the website UI interface if run on GPU. # Add other parameters on the website UI interface. # (2) Upload or copy your trained model to S3 bucket. 
diff --git a/official/gnn/bgcf/README_CN.md b/official/gnn/bgcf/README_CN.md index 6904dddb4521fc49b065038261bdcbccdaf93786..0514e16630b07b31ee0f71e64793a138805e7e0b 100644 --- a/official/gnn/bgcf/README_CN.md +++ b/official/gnn/bgcf/README_CN.md @@ -196,7 +196,7 @@ BGCF包含两个主要模块。首先是抽样,它生成基于节点复制的 # 在网页上设置 "ckpt_file=/cache/checkpoint_path/model.ckpt" # 在网页上设置 "checkpoint_url=s3://dir_to_your_trained_ckpt/" # 在网页上设置 "file_name=bgcf" - # 在网页上设置 "file_format=AIR" + # 在网页上设置 "file_format='MINDIR'" # (可选)Add "device_target=GPU" # 在网页上设置 其他参数 # (2) 上传你的预训练模型到 S3 桶上 diff --git a/official/nlp/emotect/export.py b/official/nlp/emotect/export.py index 2598a83d660fd65dbe6d795bfa95b0a1d00ab15c..335e21155b0767dd844db919b1ca734296900a7f 100644 --- a/official/nlp/emotect/export.py +++ b/official/nlp/emotect/export.py @@ -28,7 +28,7 @@ parser.add_argument("--number_labels", type=int, default=3, help="batch size") parser.add_argument("--ckpt_file", type=str, required=True, help="Bert ckpt file.") parser.add_argument("--file_name", type=str, default="emotect", help="bert output air name.") parser.add_argument("--file_format", type=str, choices=["AIR", "ONNX", "MINDIR"], - default="AIR", help="file format") + default='MINDIR', help="file format") parser.add_argument("--device_target", type=str, default="Ascend", choices=["Ascend", "GPU", "CPU"], help="device target (default: Ascend)") args = parser.parse_args() diff --git a/official/nlp/ernie/export.py b/official/nlp/ernie/export.py index 2b1ba81f0f3d9d32baf8d2521a9ff2da73b4b426..e56a863eb97be5f6718e368a9350298232ccd8b7 100644 --- a/official/nlp/ernie/export.py +++ b/official/nlp/ernie/export.py @@ -31,7 +31,7 @@ parser.add_argument("--number_labels", type=int, default=3, help="number of labe parser.add_argument("--ckpt_file", type=str, required=True, help="Ernie ckpt file.") parser.add_argument("--file_name", type=str, default="ernie_finetune", help="Ernie output air name.") parser.add_argument("--file_format", type=str, choices=["AIR", "ONNX", "MINDIR"], - default="AIR", help="file format") + default='MINDIR', help="file format") parser.add_argument("--device_target", type=str, default="Ascend", choices=["Ascend", "GPU", "CPU"], help="device target (default: Ascend)") args = parser.parse_args() diff --git a/official/nlp/mass/README.md b/official/nlp/mass/README.md index d421207e629e924aae71919625fca3aefd53a43c..25aaf112058dd7b04b3bdc53c1eff0485c1f494b 100644 --- a/official/nlp/mass/README.md +++ b/official/nlp/mass/README.md @@ -603,13 +603,13 @@ Export on ModelArts (If you want to run in modelarts, please check the official # Set "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file. # Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on default_config.yaml file. # Set "file_name='./mass'" on default_config.yaml file. -# Set "file_format='AIR'" on default_config.yaml file. +# Set "file_format='MINDIR'" on default_config.yaml file. # Set other parameters on default_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add "checkpoint_url='s3://dir_to_trained_ckpt/'" on the website UI interface. # Add "file_name='./mass'" on the website UI interface. -# Add "file_format='AIR'" on the website UI interface. +# Add "file_format='MINDIR'" on the website UI interface. # Add other parameters on the website UI interface. # (2) Set the code directory to "/path/mass" on the website UI interface. 
# (3) Set the startup file to "export.py" on the website UI interface. diff --git a/official/nlp/mass/README_CN.md b/official/nlp/mass/README_CN.md index fc8f203e79a99ec6b960fd3005855629ce4ad4d6..673e20e917b307e5ffb5679fe7df2341b6e4cc4e 100644 --- a/official/nlp/mass/README_CN.md +++ b/official/nlp/mass/README_CN.md @@ -609,12 +609,12 @@ ModelArts导出mindir # 设置 "checkpoint_file_path='/cache/checkpoint_path/model.ckpt" 在 yaml 文件。 # 设置 "checkpoint_url=/The path of checkpoint in S3/" 在 yaml 文件。 # 设置 "file_name='./mass'"参数在yaml文件。 -# 设置 "file_format='AIR'" 参数在yaml文件。 +# 设置 "file_format='MINDIR'" 参数在yaml文件。 # b. 增加 "enable_modelarts=True" 参数在modearts的界面上。 # 增加 "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" 参数在modearts的界面上。 # 增加 "checkpoint_url=/The path of checkpoint in S3/" 参数在modearts的界面上。 # 设置 "file_name='./mass'"参数在modearts的界面上。 -# 设置 "file_format='AIR'" 参数在modearts的界面上。 +# 设置 "file_format='MINDIR'" 参数在modearts的界面上。 # (3) 在modelarts的界面上设置代码的路径 "/path/mass"。 # (4) 在modelarts的界面上设置模型的启动文件 "export.py" 。 # 模型的输出路径"Output file path" 和模型的日志路径 "Job log path" 。 diff --git a/research/audio/fcn-4/README.md b/research/audio/fcn-4/README.md index 34e07d0c6ded4b4ae8f8e5c5653460bf8ea4bfd8..10b81360b3235f052dc3ece523c1df38143617bd 100644 --- a/research/audio/fcn-4/README.md +++ b/research/audio/fcn-4/README.md @@ -158,13 +158,13 @@ SLOG_PRINT_TO_STDOUT=1 python eval.py --device_id 0 # (1) Perform a or b. # a. Set "enable_modelarts=True" on base_config.yaml file. # Set "file_name='fcn-4'" on base_config.yaml file. - # Set "file_format='AIR'" on base_config.yaml file. + # Set "file_format='MINDIR'" on base_config.yaml file. # Set "checkpoint_url='/The path of checkpoint in S3/'" on beta_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on base_config.yaml file. # Set other parameters on base_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "file_name='fcn-4'" on the website UI interface. - # Add "file_format='AIR'" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url='/The path of checkpoint in S3/'" on the website UI interface. # Add "ckpt_file='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add other parameters on the website UI interface. diff --git a/research/cv/CycleGAN/src/utils/args.py b/research/cv/CycleGAN/src/utils/args.py index 8bb0d59f0d3f6402c8a22f60a1385f730b2051b5..0fad81a8f1fb57e256e8097031e0e4a99fd8fad7 100644 --- a/research/cv/CycleGAN/src/utils/args.py +++ b/research/cv/CycleGAN/src/utils/args.py @@ -112,7 +112,7 @@ parser.add_argument("--export_batch_size", type=int, default=1, \ parser.add_argument("--export_file_name", type=str, default="CycleGAN", \ help="output file name.") parser.add_argument("--export_file_format", type=str, choices=["AIR", "ONNX", "MINDIR"], \ - default="AIR", help="file format") + default='MINDIR', help="file format") args = parser.parse_args() diff --git a/research/cv/FaceDetection/README.md b/research/cv/FaceDetection/README.md index a53b1a3c465c32ed2aba17c4f4beb632ff8469b0..a225a28025f64e497ee3f56ce807b213dab84f7c 100644 --- a/research/cv/FaceDetection/README.md +++ b/research/cv/FaceDetection/README.md @@ -285,14 +285,14 @@ The entire code structure is as following: # Set "checkpoint_url='s3://dir_to_your_pretrain/'" on default_config.yaml file. # Set "pretrained='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file. # Set "batch_size=1" on default_config.yaml file. 
- # Set "file_format='AIR'" on default_config.yaml file. + # Set "file_format='MINDIR'" on default_config.yaml file. # Set "file_name='FaceDetection'" on default_config.yaml file. # Set other parameters on default_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "checkpoint_url=s3://dir_to_your_pretrain/" on the website UI interface. # Add "pretrained=/cache/checkpoint_path/model.ckpt" on the website UI interface. # Add "batch_size=1" on the website UI interface. - # Add "file_format=AIR" on the website UI interface. + # Add "file_format='MINDIR'" on the website UI interface. # Add "file_name=FaceDetection" on the website UI interface. # Add other parameters on the website UI interface. # (3) Upload or copy your trained model to S3 bucket. diff --git a/research/cv/IRN/export.py b/research/cv/IRN/export.py index c2d59ad0def741ec8155c4436da2c28c1bcb7a06..b496f3b8ce9502651fc05da443724abc933089d6 100644 --- a/research/cv/IRN/export.py +++ b/research/cv/IRN/export.py @@ -45,7 +45,7 @@ parser.add_argument('--file_name', type=str, default='wrn-autoaugment', help='Output file name.',) parser.add_argument( '--file_format', type=str, choices=['AIR', 'ONNX', 'MINDIR'], - default='AIR', help='Export format.', + default='MINDIR', help='Export format.', ) parser.add_argument( '--device_target', type=str, choices=['Ascend', 'GPU', 'CPU'], diff --git a/research/cv/SinGAN/train_modelarts.py b/research/cv/SinGAN/train_modelarts.py index 0c753e4346ba3661671639f967d74a7cb73cf2ea..ea9dd10244b8ad13629507008e9211c6e386a8a7 100644 --- a/research/cv/SinGAN/train_modelarts.py +++ b/research/cv/SinGAN/train_modelarts.py @@ -114,7 +114,7 @@ def export_AIR(opt, reals): G_curr.set_train(False) x = Tensor(functions.generate_noise([opt.nc_z, opt.nzx, opt.nzy])) y = Tensor(functions.generate_noise([opt.nc_z, opt.nzx, opt.nzy])) - export(G_curr, x, y, file_name='%s/SinGAN' % (opt.out_mindir), file_format="MINDIR") + export(G_curr, x, y, file_name='%s/SinGAN' % (opt.out_mindir), file_format="MINDIR") scale_num += 1 print("SinGAN exported") diff --git a/research/cv/VehicleNet/export.py b/research/cv/VehicleNet/export.py index caefa0723de8cbf3eb6e5f77f224cd8192d4eafe..e487c1312469fdad472cc20d4b74f2f607b494b0 100644 --- a/research/cv/VehicleNet/export.py +++ b/research/cv/VehicleNet/export.py @@ -30,7 +30,7 @@ if __name__ == '__main__': parser.add_argument("--ckpt_url", type=str, required=True, help="Checkpoint file path.") parser.add_argument("--file_name", type=str, default="vehiclenet", help="output file name.") parser.add_argument('--file_format', type=str, choices=["AIR", "ONNX", "MINDIR"], - default='AIR', help='file format') + default='MINDIR', help='file format') args = parser.parse_args() context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target) diff --git a/research/cv/autoaugment/export.py b/research/cv/autoaugment/export.py index 074cb6846327b31f449cdc608e43d9f3633565e3..56b789cc715b17214c765d097b97cec0499d0dc0 100644 --- a/research/cv/autoaugment/export.py +++ b/research/cv/autoaugment/export.py @@ -44,7 +44,7 @@ parser.add_argument( ) parser.add_argument( '--file_format', type=str, choices=['AIR', 'ONNX', 'MINDIR'], - default='AIR', help='Export format.', + default='MINDIR', help='Export format.', ) parser.add_argument( '--device_target', type=str, choices=['Ascend', 'GPU', 'CPU'], diff --git a/research/cv/centernet_det/default_config.yaml b/research/cv/centernet_det/default_config.yaml index
bd125cc168ab0eb340d653874c30756d2b644d90..184d0b98b1109c6b2df64816c445972bf3293e06 100644 --- a/research/cv/centernet_det/default_config.yaml +++ b/research/cv/centernet_det/default_config.yaml @@ -235,7 +235,7 @@ eval_config: export_config: input_res: dataset_config.input_res ckpt_file: "./ckpt_file.ckpt" - export_format: "AIR" + export_format: "MINDIR" export_name: "CenterNet_Hourglass" --- diff --git a/research/cv/r2plus1d/infer/README_CN.md b/research/cv/r2plus1d/infer/README_CN.md index b8046557c54ddb3148c6bc43f73167e5a8eeb793..41e90c2724ffdbc9184973530e35d69dcea87371 100644 --- a/research/cv/r2plus1d/infer/README_CN.md +++ b/research/cv/r2plus1d/infer/README_CN.md @@ -92,7 +92,7 @@ python export.py \ --image_width=112 \ --ckpt_file=xxx/r2plus1d_best_map.ckpt \ --file_name=r2plus1d \ ---file_format=AIR \ +--file_format='MINDIR' \ --device_target=Ascend ``` diff --git a/research/cv/res2net/README.md b/research/cv/res2net/README.md index 199f6fa24df8a79b8e716f0074940817f88e5a48..48d7e1033bf5925245fb2db111b321ccc1852f5d 100644 --- a/research/cv/res2net/README.md +++ b/research/cv/res2net/README.md @@ -489,13 +489,13 @@ Export on ModelArts (If you want to run in modelarts, please check the official # Set "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on default_config.yaml file. # Set "checkpoint_url='s3://dir_to_trained_ckpt/'" on default_config.yaml file. # Set "file_name='./res2net'" on default_config.yaml file. -# Set "file_format='AIR'" on default_config.yaml file. +# Set "file_format='MINDIR'" on default_config.yaml file. # Set other parameters on default_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "checkpoint_file_path='/cache/checkpoint_path/model.ckpt'" on the website UI interface. # Add "checkpoint_url='s3://dir_to_trained_ckpt/'" on the website UI interface. # Add "file_name='./res2net'" on the website UI interface. -# Add "file_format='AIR'" on the website UI interface. +# Add "file_format='MINDIR'" on the website UI interface. # Add other parameters on the website UI interface. # (2) Set the config_path="/path/yaml file" on the website UI interface. # (3) Set the code directory to "/path/res2net" on the website UI interface. 
diff --git a/research/cv/squeezenet/export.py b/research/cv/squeezenet/export.py index b3d83d7306c2b368577c02f7ad0b58b7dba27720..f93d0711856d796a17a1eb30e1fceac37224cef8 100644 --- a/research/cv/squeezenet/export.py +++ b/research/cv/squeezenet/export.py @@ -51,4 +51,4 @@ if __name__ == '__main__': input_arr = Tensor(np.zeros([1, 3, 227, 227], np.float32)) export(net, input_arr, file_name=onnx_filename, file_format="ONNX") - export(net, input_arr, file_name=air_filename, file_format="AIR") + export(net, input_arr, file_name=air_filename, file_format="MINDIR") diff --git a/research/cv/ssd_mobilenetV2/modelart/start.py b/research/cv/ssd_mobilenetV2/modelart/start.py index f5c122021d176816814a551a876e166adc67622e..f1949be259ab72bc0841828a89bb3544780bda28 100644 --- a/research/cv/ssd_mobilenetV2/modelart/start.py +++ b/research/cv/ssd_mobilenetV2/modelart/start.py @@ -199,7 +199,7 @@ def get_args(): "AIR", "ONNX", "MINDIR"], - default='AIR', + default='MINDIR', help='file format') args_opt = parser.parse_args() diff --git a/research/cv/ssd_mobilenetV2_FPNlite/config/ssdlite_mobilenetv2-fpn.yaml b/research/cv/ssd_mobilenetV2_FPNlite/config/ssdlite_mobilenetv2-fpn.yaml index e8b0514a5d2db41518b61f9b06fcd3fa3f374114..85e0fc90f361e0f6e9c0656e1e8248584e07aea3 100644 --- a/research/cv/ssd_mobilenetV2_FPNlite/config/ssdlite_mobilenetv2-fpn.yaml +++ b/research/cv/ssd_mobilenetV2_FPNlite/config/ssdlite_mobilenetv2-fpn.yaml @@ -137,5 +137,5 @@ exp_device_id: "Device id" exp_batch_size: "Export batch size" exp_ckpt_file: "Checkpoint file path." exp_file_name: "output file name." -exp_file_format: "file format. choices - 'AIR', 'ONNX', 'MINDIR', default - 'AIR'" -exp_device_target: "device target. choices - 'Ascend', default = 'Ascend'" \ No newline at end of file +exp_file_format: "file format. choices - 'AIR', 'ONNX', 'MINDIR', default - 'MINDIR'" +exp_device_target: "device target. choices - 'Ascend', default = 'Ascend'" diff --git a/research/cv/vgg19/README.md b/research/cv/vgg19/README.md index b48fc9c79c5568ccd4ed4319b18cd31cd4dfeeb5..26a375cf352c8dd19108ae6887ae9dae6a469e8e 100644 --- a/research/cv/vgg19/README.md +++ b/research/cv/vgg19/README.md @@ -229,13 +229,13 @@ python eval.py --config_path=[YAML_CONFIG_PATH] --device_target="GPU" --dataset= # (2) Perform a or b. # a. Set "enable_modelarts=True" on imagenet2012_config.yaml file. # Set "file_name='vgg19'" on imagenet2012_config.yaml file. -# Set "file_format='AIR'" on imagenet2012_config.yaml file. +# Set "file_format='MINDIR'" on imagenet2012_config.yaml file. # Set "checkpoint_url='s3://dir_to_your_trained_model/'" on imagenet2012_config.yaml file. # Set "ckpt_file='/cache/checkpoint_path/model.ckpt'" on imagenet2012_config.yaml file. # Set other parameters on imagenet2012_config.yaml file you need. # b. Add "enable_modelarts=True" on the website UI interface. # Add "file_name=vgg19" on the website UI interface. -# Add "file_format=AIR" on the website UI interface. +# Add "file_format='MINDIR'" on the website UI interface. # Add "checkpoint_url=s3://dir_to_your_trained_model/" on the website UI interface. # Add "ckpt_file=/cache/checkpoint_path/model.ckpt" on the website UI interface. # Add other parameters on the website UI interface. 
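Several export scripts keep variable or directory names that still say `air` while now writing MINDIR output (for example the squeezenet hunk above). Where that matters, the output location can be derived from `file_format` instead; a small illustrative helper, not part of the patch, with hypothetical names:

```python
import os

def export_target(out_root, stem, file_format):
    """Place the exported file under a directory named after its format.

    mindspore.export appends the matching extension (.mindir/.air/.onnx)
    when file_name has none, so only the directory and stem are built here.
    """
    sub_dir = {"MINDIR": "mindir", "AIR": "air", "ONNX": "onnx"}[file_format]
    target_dir = os.path.join(out_root, sub_dir)
    os.makedirs(target_dir, exist_ok=True)
    return os.path.join(target_dir, stem)

# e.g. export(net, input_arr, file_name=export_target("outputs", "squeezenet", "MINDIR"),
#             file_format="MINDIR")  ->  outputs/mindir/squeezenet.mindir
```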
diff --git a/research/cv/vgg19/README_CN.md b/research/cv/vgg19/README_CN.md index 9c99d010afa6c31c57edc3024bec75be5e0236ec..6eed2a4151b16d13dd130ac5eac7093740aebe68 100644 --- a/research/cv/vgg19/README_CN.md +++ b/research/cv/vgg19/README_CN.md @@ -246,13 +246,13 @@ python eval.py --config_path=[YAML_CONFIG_PATH] --device_target="GPU" --dataset= # (2) 执行a或者b # a. 在 imagenet2012_config.yaml 文件中设置 "enable_modelarts=True" # 在 imagenet2012_config.yaml 文件中设置 "file_name='vgg19'" -# 在 imagenet2012_config.yaml 文件中设置 "file_format='AIR'" +# 在 imagenet2012_config.yaml 文件中设置 "file_format='MINDIR'" # 在 imagenet2012_config.yaml 文件中设置 "checkpoint_url='s3://dir_to_your_trained_model/'" # 在 imagenet2012_config.yaml 文件中设置 "ckpt_file='/cache/checkpoint_path/model.ckpt'" # 在 imagenet2012_config.yaml 文件中设置 其他参数 # b. 在网页上设置 "enable_modelarts=True" # 在网页上设置 "file_name=vgg19" -# 在网页上设置 "file_format=AIR" +# 在网页上设置 "file_format='MINDIR'" # 在网页上设置 "checkpoint_url=s3://dir_to_your_trained_model/" # 在网页上设置 "ckpt_file=/cache/checkpoint_path/model.ckpt" # 在网页上设置 其他参数 diff --git a/research/cv/wgan/src/args.py b/research/cv/wgan/src/args.py index cb702576feb965439f29cc7cfc752c79b1b7b02c..f7bce7b357b80b250513f4cce3ae0392949e912f 100644 --- a/research/cv/wgan/src/args.py +++ b/research/cv/wgan/src/args.py @@ -56,7 +56,7 @@ def get_args(phase): parser.add_argument('--ckpt_file', type=str, required=True, help="Checkpoint file path.") parser.add_argument('--file_name', type=str, default="WGAN", help="output file name prefix.") parser.add_argument('--file_format', type=str, choices=["AIR", "ONNX", "MINDIR"], \ - default='AIR', help='file format') + default='MINDIR', help='file format') parser.add_argument('--nimages', required=True, type=int, help="number of images to generate", default=1) elif phase == 'eval': diff --git a/research/nlp/gpt2/export.py b/research/nlp/gpt2/export.py index c694df2a367db7e32a2df0fc05f6f7b76a0a6e9b..617d59b17e0ff19b3dbba6e5601299a8e9bd519c 100644 --- a/research/nlp/gpt2/export.py +++ b/research/nlp/gpt2/export.py @@ -49,5 +49,5 @@ if __name__ == "__main__": print("==================== Start exporting ==================") print(" | Ckpt path: {}".format(Load_checkpoint_path)) print(" | Air path: {}".format(save_air_path)) - export(net, *input_data, file_name=os.path.join(save_air_path, 'gpt2'), file_format="AIR") + export(net, *input_data, file_name=os.path.join(save_air_path, 'gpt2'), file_format="MINDIR") print("==================== Exporting finished ==================") diff --git a/research/recommend/EDCN/README.md b/research/recommend/EDCN/README.md index 4cb1f19342af75313f680ffffa00a169e79c7326..58a7ac3921ba79e9db6e90d6510f0817b09d423d 100644 --- a/research/recommend/EDCN/README.md +++ b/research/recommend/EDCN/README.md @@ -176,7 +176,7 @@ Parameters for both training and evaluation can be set in `default_config.yaml` batch_size: 16000 # batch_size for exported model. ckpt_file: '' # the path of the weight file to be exported relative to the file `export.py`, and the weight file must be included in the code directory. file_name: "edcn" # output file name. - file_format: "MINDIR" # output file format, you can choose from AIR or MINDIR, default is AIR" + file_format: "MINDIR" # output file format, you can choose from AIR or MINDIR, default is MINDIR" ``` ## [Training Process](#contents) @@ -261,12 +261,12 @@ Parameters for both training and evaluation can be set in `default_config.yaml` # 1. Set "enable_modelarts: True" # 2. 
Set "ckpt_file: ./{path}/*.ckpt"('ckpt_file' indicates the path of the weight file to be exported relative to the file `export.py`, and the weight file must be included in the code directory.) 3. Set "file_name: edcn" - # 4. Set "file_format='AIR'"(you can choose from AIR or MINDIR) + # 4. Set "file_format='MINDIR'"(you can choose from AIR or MINDIR) # b. adding on the website UI interface. # 1. Add "enable_modelarts=True" # 2. Add "ckpt_file=./{path}/*.ckpt"('ckpt_file' indicates the path of the weight file to be exported relative to the file `export.py`, and the weight file must be included in the code directory.) # 3. Add "file_name=edcn" - # 4. Add "file_format='AIR'"(you can choose from AIR or MINDIR) + # 4. Add "file_format='MINDIR'"(you can choose from AIR or MINDIR) # (7) Check the "data storage location" on the website UI interface and set the "Dataset path" path (This step is useless, but necessary.). # (8) Set the "Output file path" and "Job log path" to your path on the website UI interface. # (9) Under the item "resource pool selection", select the specification of a single card. diff --git a/research/recommend/autodis/README.md b/research/recommend/autodis/README.md index d5eb29509d676629b82fbc9da5fb88d0b79ab0f3..bf17069f56ef5e469bccab0f82f836247978fe12 100644 --- a/research/recommend/autodis/README.md +++ b/research/recommend/autodis/README.md @@ -201,7 +201,7 @@ Parameters for both training and evaluation can be set in `default_config.yaml` batch_size: 16000 # batch_size for exported model. ckpt_file: '' # the path of the weight file to be exported relative to the file `export.py`, and the weight file must be included in the code directory. file_name: "autodis" # output file name. - file_format: "MINDIR" # output file format, you can choose from AIR or MINDIR, default is AIR" + file_format: "MINDIR" # output file format, you can choose from AIR or MINDIR, default is MINDIR" ``` ## [Training Process](#contents) @@ -293,7 +293,7 @@ Parameters for both training and evaluation can be set in `default_config.yaml` # 1. Add ”enable_modelarts=True“ # 2. Add “ckpt_file=./{path}/*.ckpt”('ckpt_file' indicates the path of the weight file to be exported relative to the file `export.py`, and the weight file must be included in the code directory.) # 3. Add ”file_name=autodis“ - # 4. Add ”file_format=AIR“(you can choose from AIR or MINDIR) + # 4. Add ”file_format='MINDIR'“(you can choose from AIR or MINDIR) # (7) Check the "data storage location" on the website UI interface and set the "Dataset path" path (This step is useless, but necessary.). # (8) Set the "Output file path" and "Job log path" to your path on the website UI interface. # (9) Under the item "resource pool selection", select the specification of a single card. diff --git a/research/recommend/autodis/default_config.yaml b/research/recommend/autodis/default_config.yaml index 215e9302c38208628715a555341bddc20e653001..c81d06c6135048d4aa3de944d8f96d76b3bcd9c6 100644 --- a/research/recommend/autodis/default_config.yaml +++ b/research/recommend/autodis/default_config.yaml @@ -99,7 +99,7 @@ test_data_dir: "Test dataset dir, default is None" checkpoint_path: " Relative path of '*.ckpt' to be evaluated relative to the eval.py" ckpt_file: "Checkpoint file path." file_name: "Output file name." -file_format: "Output file format, you can choose from AIR or MINDIR, default is AIR" +file_format: "Output file format, you can choose from AIR or MINDIR, default is MINDIR" --- #Choices device_target: ["Ascend", "GPU"]