diff --git a/official/cv/FCN8s/README.md b/official/cv/FCN8s/README.md
index 3ede73caa0abeee8363685ea3a32860ae1179791..8cf037fe3eaa9d2c635597368e16b791a1dc76a2 100644
--- a/official/cv/FCN8s/README.md
+++ b/official/cv/FCN8s/README.md
@@ -1,592 +1,355 @@
-# Contents
-
-- [Contents](#contents)
-- [FCN Description](#fcn-description)
-- [Model Architecture](#model-architecture)
-- [Dataset](#dataset)
-- [Environment Requirements](#environment-requirements)
-- [Quick Start](#quick-start)
-- [Script Description](#script-description)
-    - [Script and Sample Code](#script-and-sample-code)
-    - [Script Parameters](#script-parameters)
-    - [Data Generation Steps](#data-generation-steps)
-        - [Training Data](#training-data)
-    - [Training Steps](#training-steps)
-        - [Training](#training)
-    - [Evaluation Steps](#evaluation-steps)
-        - [Evaluation](#evaluation)
-    - [Export Process](#export-process)
-        - [Export](#export)
-    - [Inference Process](#inference-process)
-        - [Inference](#inference)
-- [Model Description](#model-description)
-    - [Performance](#performance)
-        - [Evaluation Performance](#evaluation-performance)
-            - [FCN8s on PASCAL VOC 2012](#fcn8s-on-pascal-voc-2012)
-        - [Inference Performance](#inference-performance)
-            - [FCN8s on PASCAL VOC](#fcn8s-on-pascal-voc)
-    - [How to Use](#how-to-use)
-        - [Tutorial](#tutorial)
-- [Set context](#set-context)
-- [Load dataset](#load-dataset)
-- [Define model](#define-model)
-- [optimizer](#optimizer)
-- [loss scale](#loss-scale)
-- [callback for saving ckpts](#callback-for-saving-ckpts)
-- [Description of Random Situations](#description-of-random-situations)
-- [ModelZoo Homepage](#modelzoo-homepage)
-
-# [FCN Description](#contents)
-
-FCN is mainly used in the field of image segmentation and is an end-to-end segmentation method. FCN discards the fully connected layers, which allows it to process images of arbitrary size, reduces the number of model parameters, and speeds up segmentation. FCN uses the VGG structure in the encoder and deconvolution/upsampling operations in the decoder to restore the resolution of the image. FCN-8s finally uses an 8x deconvolution/upsampling operation to restore the output segmentation map to the same size as the input image.
-
-[Paper]: Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully convolutional networks for semantic segmentation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
-
-# [Model Architecture](#contents)
-
-FCN-8s uses VGG16 with the fully connected layers removed as the encoder, fuses the features of the 3rd, 4th, and 5th pooling layers of VGG16 respectively, and finally obtains the segmentation image with a stride-8 deconvolution.
-
-# [Dataset](#contents)
-
-Dataset used:
-
-[PASCAL VOC 2012](<http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html>)
-
-[SBD](<http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz>)
-
-# [Environment Requirements](#contents)
-
-- Hardware (Ascend/GPU/CPU)
-    - Prepare a hardware environment with Ascend, GPU, or CPU processors.
-- Framework
-    - [MindSpore](https://www.mindspore.cn/install/en)
-- For more information, please check the resources below:
-    - [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
-    - [MindSpore Python API](https://www.mindspore.cn/docs/zh-CN/master/index.html)
-
-# [Quick Start](#contents)
-
-After installing MindSpore via the official website, you can start training and evaluation as follows:
-
-```text
-The ckpt file of VGG16 trained on the ImageNet dataset serves as the backbone of FCN8s
-vgg16 network path: model_zoo/official/cv/vgg16
-The ckpt file from GPU training of the FCN8s network serves as the CKPT for FCN8s CPU training
-FCN8s network path: ./FCN8s_2-499_1322.ckpt
-```
-
-```yaml
-data_file: ./src/data/a.mindrecord
-ckpt_vgg16: ./vgg16_predtrained.ckpt
-data_root: ./src/data/path_to_data/fcn8s/VOCdevkit/VOC2012
-data_lst: ./src/data/path_to_data/fcn8s/VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt
-ckpt_file: ./FCN8s-39_254.ckpt
-
-# Modify the parameters according to your local data storage paths
-```
-
-- running on Ascend with default parameters
-
-  ```bash
-  # standalone training example on Ascend
-  python train.py --device_id device_id
-
-  # evaluation example on Ascend
-  python eval.py --device_id device_id
-  ```
-
-- running on GPU with default parameters
-
-  ```bash
-  # standalone training example on GPU
-  python train.py  \
-  --config_path=gpu_default_config.yaml  \
-  --device_target=GPU
-
-  # distributed training example on GPU
-  export RANK_SIZE=8
-  mpirun --allow-run-as-root -n $RANK_SIZE --output-filename log_output --merge-stderr-to-stdout  \
-  python train.py  \
-  --config_path=gpu_default_config.yaml \
-  --device_target=GPU
-
-  # evaluation example on GPU
-  python eval.py  \
-  --config_path=gpu_default_config.yaml  \
-  --device_target=GPU
-  ```
-
-- running on CPU with default parameters
-
-  ```bash
-  # training example on CPU
-  python train.py
-
-  # evaluation example on CPU
-  python eval.py
-  ```
-
-# [Script Description](#contents)
-
-## [Script and Sample Code](#contents)
-
-```text
-├── model_zoo
-    ├── README.md                     // descriptions about all the models
-    ├── FCN8s
-        ├── README.md                 // descriptions about FCN
-        ├── ascend310_infer           // source code of 310 inference
-        ├── scripts
-            ├── run_train.sh
-            ├── run_standalone_train.sh
-            ├── run_standalone_train_gpu.sh             // train in gpu with single device
-            ├── run_distribute_train_gpu.sh             // train in gpu with multi device
-            ├── run_eval.sh
-            ├── run_infer_310.sh         // shell script for Ascend inference
-            ├── build_data.sh
-        ├── src
-        │   ├── data
-        │       ├── path_to_data           // the path of dataset
-        │       ├── build_seg_data.py       // creating dataset
-        │       ├── dataset.py          // loading dataset
-        │   ├── nets
-        │       ├── FCN8s.py            // FCN-8s architecture
-        │   ├── loss
-        │       ├── loss.py            // loss function
-        │   ├── utils
-        │       ├── lr_scheduler.py            // getting learning rate
-        │   ├── model_utils
-        │       ├── config.py                     // getting config parameters
-        │       ├── device_adapter.py            // getting device info
-        │       ├── local_adapter.py            // getting device info
-        │       ├── moxing_adapter.py          // decorator
-        ├── default_config.yaml               // Ascend parameters config
-        ├── gpu_default_config.yaml           // GPU parameters config
-        ├── cpu_default_config.yaml           // CPU parameters config
-        ├── train.py                 // training script
-        ├── postprogress.py          // post-processing script for 310 inference
-        ├── quick_start.py          // quick start script
-        ├── export.py                // export checkpoint file to air/mindir
-        ├── eval.py                  //  evaluation script
-```
-
-## [Script Parameters](#contents)
-
-Parameters for both training and evaluation can be set in default_config.yaml.
-
-- config for FCN8s
-
-  ```text
-     # dataset
-    'data_file': '/data/workspace/mindspore_dataset/FCN/FCN/dataset/MINDRECORED_NAME.mindrecord', # path and name of one mindrecord file
-    'train_batch_size': 32,
-    'crop_size': 512,
-    'image_mean': [103.53, 116.28, 123.675],
-    'image_std': [57.375, 57.120, 58.395],
-    'min_scale': 0.5,
-    'max_scale': 2.0,
-    'ignore_label': 255,
-    'num_classes': 21,
-
-    # optimizer
-    'train_epochs': 500,
-    'base_lr': 0.015,
-    'loss_scale': 1024.0,
-
-    # model
-    'model': 'FCN8s',
-    'ckpt_vgg16': '',
-    'ckpt_pre_trained': '',
-
-    # train
-    'save_steps': 330,
-    'keep_checkpoint_max': 5,
-    'ckpt_dir': './ckpt',
-  ```
-
-For more information, see `default_config.yaml` for Ascend, `gpu_default_config.yaml` for GPU, and `cpu_default_config.yaml` for CPU.
-
-## [Data Generation Steps](#contents)
-
-### Training Data
-
-- build mindrecord training data
-
-Extract the downloaded benchmark.tgz and VOCtrainval_11-May-2012.tar files and place them in the /path_to_data/fcn8s_data directory.
-
-  ```bash
-  python src/data/get_dataset_list.py --data_dir=/path_to_data/fcn8s_data
-
-  bash build_data.sh
-  or
-  python src/data/build_seg_data.py  --data_root=/path_to_data/fcn8s_data/benchmark_RELEASE/dataset  \
-                                     --data_lst=/path_to_data/fcn8s_data/vocaug_train_lst.txt  \
-                                     --dst_path=dataset/MINDRECORED_NAME.mindrecord  \
-                                     --num_shards=1  \
-                                     --shuffle=True
-  # data_root: the root directory of the training dataset, containing the subdirectories img and cls_png; img stores the training images, cls_png stores the label mask images
-  # data_lst: a text file listing the names of the training samples, one sample per line
-  # dst_path: the destination path of the generated mindrecord data
-  ```
-
-## [Training Steps](#contents)
-
-### Training
-
-- running on Ascend with default parameters
-
-  ```bash
-  # standalone training example on Ascend
-  python train.py --device_id device_id
-  or
-  bash scripts/run_standalone_train.sh [DEVICE_ID]
-  # example: bash scripts/run_standalone_train.sh 0
-
-  # distributed training with 8 Ascend devices
-  bash scripts/run_train.sh [DEVICE_NUM] rank_table.json
-  # example: bash scripts/run_train.sh 8 ~/hccl_8p.json
-  ```
-
-  Distributed training requires an HCCL configuration file in JSON format to be created in advance. Please follow the instructions in this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
-
-- running on GPU with default parameters
-
-  ```bash
-  # standalone training example on GPU
-  python train.py  \
-  --config_path=gpu_default_config.yaml  \
-  --device_target=GPU
-  or
-  bash scripts/run_standalone_train_gpu.sh DEVICE_ID
-
-  # distributed training with 8 GPU devices
-  export RANK_SIZE=8
-  mpirun --allow-run-as-root -n $RANK_SIZE --output-filename log_output --merge-stderr-to-stdout  \
-  python train.py  \
-  --config_path=gpu_default_config.yaml \
-  --device_target=GPU
-  or
-  bash run_distribute_train_gpu.sh [RANK_SIZE] [TRAIN_DATA_DIR]
-
-  # evaluation example on GPU
-  python eval.py  \
-  --config_path=gpu_default_config.yaml \
-  --device_target=GPU
-  ```
-
-- running on CPU with default parameters
-
-  ```bash
-  # training example on CPU
-  python train.py  \
-  --config_path=cpu_default_config.yaml
-
-  # evaluation example on CPU
-  python eval.py
-  ```
-
-  During training, the epoch, step, loss, and accuracy are recorded in log.txt:
-
-  ```text
-  epoch: 1 step: 1, loss is 3.054
-  ...
-  ```
-
-  The checkpoint of this model is saved in the default path.
-
-- To train the model on ModelArts, refer to the ModelArts [official guide](https://support.huaweicloud.com/modelarts/) to start training and inference; the specific steps are as follows:
-
-```text
-#  Example of distributed training on ModelArts:
-#  Dataset storage layout
-
-#  ├── VOC2012                                                     # dir
-#    ├── VOCdevkit                                                 # VOCdevkit dir
-#      ├── Please refer to VOCdevkit structure
-#    ├── benchmark_RELEASE                                         # benchmark_RELEASE dir
-#      ├── Please refer to benchmark_RELEASE structure
-#    ├── backbone                                                  # backbone dir
-#      ├── vgg_predtrained.ckpt
-#    ├── predtrained                                               # predtrained dir
-#      ├── FCN8s_1-133_300.ckpt
-#    ├── checkpoint                                                # checkpoint dir
-#      ├── FCN8s_1-133_300.ckpt
-#    ├── vocaug_mindrecords                                        # train dataset dir
-#      ├── voctrain.mindrecords0
-#      ├── voctrain.mindrecords0.db
-#      ├── voctrain.mindrecords1
-#      ├── voctrain.mindrecords1.db
-#      ├── voctrain.mindrecords2
-#      ├── voctrain.mindrecords2.db
-#      ├── voctrain.mindrecords3
-#      ├── voctrain.mindrecords3.db
-#      ├── voctrain.mindrecords4
-#      ├── voctrain.mindrecords4.db
-#      ├── voctrain.mindrecords5
-#      ├── voctrain.mindrecords5.db
-#      ├── voctrain.mindrecords6
-#      ├── voctrain.mindrecords6.db
-#      ├── voctrain.mindrecords7
-#      ├── voctrain.mindrecords7.db
-
-# (1) Choose either a (modify the yaml file parameters) or b (create a ModelArts training job and modify the parameters there)
-#       a. Set "enable_modelarts=True"
-#          Set "ckpt_dir=/cache/train/outputs_FCN8s/"
-#          Set "ckpt_vgg16=/cache/data/backbone/vgg_predtrain file"; if there is no pretrained backbone, set ckpt_vgg16=""
-#          Set "ckpt_pre_trained=/cache/data/predtrained/pred file"; if you do not need to resume training, set ckpt_pre_trained=""
-#          Set "data_file=/cache/data/vocaug_mindrecords/voctrain.mindrecords0"
-
-#       b. Add the "enable_modelarts=True" parameter on the ModelArts web UI
-#          Set the parameters required by method a on the ModelArts web UI
-#          Note: path parameters do not need quotation marks
-
-# (2) Set the path of the network configuration file "_config_path=/The path of config in default_config.yaml/"
-# (3) Set the code path "/path/FCN8s" on the ModelArts web UI
-# (4) Set the model's startup file "train.py" on the ModelArts web UI
-# (5) Set the model's data path ".../VOC2012" (select the VOC2012 folder path) on the ModelArts web UI,
-# the model's "Output file path", and the model's "Job log path"
-# (6) Start training the model
-
-# Example of model inference on ModelArts
-# (1) Place the trained model in the corresponding location in the bucket
-# (2) Choose either a or b
-#       a. Set "enable_modelarts=True"
-#          Set "data_root=/cache/data/VOCdevkit/VOC2012/"
-#          Set "data_lst=./ImageSets/Segmentation/val.txt"
-#          Set "ckpt_file=/cache/data/checkpoint/ckpt file name"
-
-#       b. Add the "enable_modelarts=True" parameter on the ModelArts web UI
-#          Set the parameters required by method a on the ModelArts web UI
-#          Note: path parameters do not need quotation marks
-
-# (3) Set the path of the network configuration file "_config_path=/The path of config in default_config.yaml/"
-# (4) Set the code path "/path/FCN8s" on the ModelArts web UI
-# (5) Set the model's startup file "eval.py" on the ModelArts web UI
-# (6) Set the model's data path ".../VOC2012" (select the VOC2012 folder path) on the ModelArts web UI,
-# the model's "Output file path", and the model's "Job log path"
-# (7) Start model inference
-```
-
-## [Evaluation Steps](#contents)
-
-### Evaluation
-
-- Evaluate using the PASCAL VOC 2012 validation set on Ascend, GPU, or CPU
-
-  Before running the command, check the checkpoint path used for evaluation. Set it to an absolute path to the checkpoint, such as "./FCN8s-39_254.ckpt".
-
-- eval on Ascend
-
-  ```bash
-  python eval.py
-  ```
-
-  ```bash
-  bash scripts/run_eval.sh DATA_ROOT DATA_LST CKPT_PATH
-  # example: bash scripts/run_eval.sh /home/DataSet/voc2012/VOCdevkit/VOC2012 \
-  # /home/DataSet/voc2012/VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt /home/FCN8s/ckpt/FCN8s_1-133_300.ckpt
-  ```
-
-  The above python command runs in the terminal, where you can view the evaluation results. The accuracy on the test set is reported as follows:
-
-  ```text
-  mean IoU  0.6467
-  ```
-
-- eval on CPU
-
-  ```bash
-  python eval.py
-  ```
-
-  The above python command runs in the terminal, where you can view the evaluation results. The accuracy on the test set is reported as follows:
-
-  ```text
-  mean IoU  0.6238
-  ```
-
-## Export Process
-
-### Export
-
-Before exporting, modify the ckpt_file option in the default_config.yaml configuration file, and modify the file_name and file_format options as needed.
-
-```shell
-python export.py
-```
-
-- Export MindIR on ModelArts
-
-```text
-# Example of exporting MindIR on ModelArts
-# The dataset storage layout is the same as for ModelArts training
-# (1) Choose either a (modify the yaml file parameters) or b (create a ModelArts training job and modify the parameters there).
-#       a. Set "enable_modelarts=True"
-#          Set "file_name=fcn8s"
-#          Set "file_format=MINDIR"
-#          Set "ckpt_file=/cache/data/checkpoint file name"
-
-#       b. Add the "enable_modelarts=True" parameter on the ModelArts web UI.
-#          Set the parameters required by method a on the ModelArts web UI
-#          Note: path parameters do not need quotation marks
-# (2) Set the path of the network configuration file "_config_path=/The path of config in default_config.yaml/"
-# (3) Set the code path "/path/fcn8s" on the ModelArts web UI.
-# (4) Set the model's startup file "export.py" on the ModelArts web UI.
-# (5) Set the model's data path ".../VOC2012/checkpoint" (select the VOC2012/checkpoint folder path) on the ModelArts web UI,
-# the "Output file path" of the MindIR file, and the model's "Job log path".
-```
-
-## Inference Process
-
-### Inference
-
-Before running inference, we need to export the model first. The AIR model can only be exported in an Ascend 910 environment; MindIR can be exported in any environment. Only batch_size 1 is supported.
-
-  ```shell
-  # Ascend310 inference
-  bash run_infer_310.sh [MINDIR_PATH] [DATA_LIST_FILE] [IMAGE_PATH] [MASK_PATH] [DEVICE_ID]
-  ```
-
-The inference results are saved in the current directory; results similar to the following can be found in the acc.log file.
-
-  ```text
-  mean IoU  0.64519877
-  ```
-
-- eval on GPU
-
-  ```bash
-  python eval.py  \
-  --config_path=gpu_default_config.yaml  \
-  --device_target=GPU
-  ```
-
-  The above python command runs in the terminal, where you can view the evaluation results. The accuracy on the test set is reported as follows:
-
-  ```text
-  mean IoU  0.6472
-  ```
-
-# [Model Description](#contents)
-
-## [Performance](#contents)
-
-### Evaluation Performance
-
-#### FCN8s on PASCAL VOC 2012
-
-| Parameters                 | Ascend                                                      | GPU                                              | CPU                                             |
-| -------------------------- | ------------------------------------------------------------| -------------------------------------------------|-------------------------------------------------|
-| Model Version              | FCN-8s                                                      | FCN-8s                                           | FCN-8s                                          |
-| Resource                   | Ascend 910; CPU 2.60GHz, 192cores; Memory 755G; OS Euler2.8 | NV SMX2 V100-32G                                 | AMD Ryzen 7 5800X 8-Core Processor              |
-| uploaded Date              | 12/30/2020 (month/day/year)                                 | 06/11/2021 (month/day/year)                      | 09/21/2022(month/day/year)                      |
-| MindSpore Version          | 1.1.0                                                       | 1.2.0                                            | 1.8.1                                           |
-| Dataset                    | PASCAL VOC 2012 and SBD                                     | PASCAL VOC 2012 and SBD                          | PASCAL VOC 2012 and SBD                         |
-| Training Parameters        | epoch=500, steps=330, batch_size = 32, lr=0.015             | epoch=500, steps=330, batch_size = 8, lr=0.005   | epoch=500, steps=330, batch_size = 8, lr=0.0008 |
-| Optimizer                  | Momentum                                                    | Momentum                                         | Momentum                                        |
-| Loss Function              | Softmax Cross Entropy                                       | Softmax Cross Entropy                            | Softmax Cross Entropy                           |
-| outputs                    | probability                                                 | probability                                      | probability                                     |
-| Loss                       | 0.038                                                       | 0.036                                            | 0.041                                           |
-| Speed                      | 1pc: 564.652 ms/step;                                       | 1pc: 455.460 ms/step;                            | 1pc: 29041.912 ms/step;                         |
-| Scripts                    | [FCN script](https://gitee.com/mindspore/models/tree/master/official/cv/FCN8s) | | |
-
-### Inference Performance
-
-#### FCN8s on PASCAL VOC
-
-| Parameters          | Ascend                      | GPU                         | CPU                                |
-| ------------------- | --------------------------- |-----------------------------|------------------------------------|
-| Model Version       | FCN-8s                      | FCN-8s                      | FCN-8s                             |
-| Resource            | Ascend 910; OS Euler2.8     | NV SMX2 V100-32G            | AMD Ryzen 7 5800X 8-Core Processor |
-| Uploaded Date       | 10/29/2020 (month/day/year) | 06/11/2021 (month/day/year) | 09/21/2022(month/day/year)         |
-| MindSpore Version   | 1.1.0                       | 1.2.0                       | 1.8.1                              |
-| Dataset             | PASCAL VOC 2012             | PASCAL VOC 2012             | PASCAL VOC 2012                    |
-| batch_size          | 16                          | 16                          | 16                                 |
-| outputs             | probability                 | probability                 | probability                        |
-| mean IoU            | 64.67                       | 64.72      | 62.38                              |
-
-## [Quick Start](#contents)
-
-Runs prediction on eval_batch_size samples and visualizes the results.
-
-  ```bash
-  python quick_start.py
-  ```
-
-  The above python command runs in the terminal, where you can view the visualized results.
-
-## [How to Use](#contents)
-
-### Tutorial
-
-If you need to use the trained model on different hardware platforms (such as CPU, GPU, Ascend 910 or Ascend 310), you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html). Below is a simple example walkthrough:
-
-- Running on Ascend
-
-  ```python
-  # Set context
-  context.set_context(mode=context.GRAPH_MODE, device_target=args_opt.device_target, save_graphs=False)
-  context.set_auto_parallel_context(device_num=device_num,parallel_mode=ParallelMode.DATA_PARALLEL)
-  init()
-
-  # Load dataset
-  dataset = data_generator.SegDataset(image_mean=cfg.image_mean,
-                                      image_std=cfg.image_std,
-                                      data_file=cfg.data_file,
-                                      batch_size=cfg.batch_size,
-                                      crop_size=cfg.crop_size,
-                                      max_scale=cfg.max_scale,
-                                      min_scale=cfg.min_scale,
-                                      ignore_label=cfg.ignore_label,
-                                      num_classes=cfg.num_classes,
-                                      num_readers=2,
-                                      num_parallel_calls=4,
-                                      shard_id=args.rank,
-                                      shard_num=args.group_size)
-  dataset = dataset.get_dataset(repeat=1)
-
-  # Define model
-  net = FCN8s(n_class=cfg.num_classes)
-  loss_ = loss.SoftmaxCrossEntropyLoss(cfg.num_classes, cfg.ignore_label)
-
-  # optimizer
-  iters_per_epoch = dataset.get_dataset_size()
-  total_train_steps = iters_per_epoch * cfg.train_epochs
-
-  lr_scheduler = CosineAnnealingLR(cfg.base_lr,
-                                   cfg.train_epochs,
-                                   iters_per_epoch,
-                                   cfg.train_epochs,
-                                   warmup_epochs=0,
-                                   eta_min=0)
-  lr = Tensor(lr_scheduler.get_lr())
-
-  # loss scale
-  manager_loss_scale = FixedLossScaleManager(cfg.loss_scale, drop_overflow_update=False)
-
-  optimizer = nn.Momentum(params=net.trainable_params(), learning_rate=lr, momentum=0.9, weight_decay=0.0001,
-                          loss_scale=cfg.loss_scale)
-
-  model = Model(net, loss_fn=loss_, loss_scale_manager=manager_loss_scale, optimizer=optimizer, amp_level="O3")
-
-  # callback for saving ckpts
-  time_cb = TimeMonitor(data_size=iters_per_epoch)
-  loss_cb = LossMonitor()
-  cbs = [time_cb, loss_cb]
-
-  if args.rank == 0:
-      config_ck = CheckpointConfig(save_checkpoint_steps=cfg.save_steps,
-                                   keep_checkpoint_max=cfg.keep_checkpoint_max)
-      ckpoint_cb = ModelCheckpoint(prefix=cfg.model, directory=cfg.ckpt_dir, config=config_ck)
-      cbs.append(ckpoint_cb)
-
-  model.train(cfg.train_epochs, dataset, callbacks=cbs)
-  ```
-
-# [Description of Random Situations](#contents)
-
-We set the random seed in train.py.
-
-# [ModelZoo Homepage](#contents)
-
-Please check the official [homepage](https://gitee.com/mindspore/models).
-
+# Contents
+
+- [Contents](#contents)
+- [FCN8s Description](#fcn8s-description)
+- [Model Architecture](#model-architecture)
+- [Dataset](#dataset)
+- [Environment Requirements](#environment-requirements)
+- [Script Description](#script-description)
+    - [Script and Sample Code](#script-and-sample-code)
+    - [Script Parameters](#script-parameters)
+    - [Training Process](#training-process)
+        - [Launch](#launch)
+        - [Results](#results)
+    - [Evaluation Process](#evaluation-process)
+        - [Launch](#launch)
+        - [Results](#results)
+    - [Inference Process](#inference-process)
+        - [Export ONNX](#export-onnx)
+        - [Run ONNX Inference on GPU](#run-onnx-inference-on-gpu)
+        - [Results](#results)
+- [Model Description](#model-description)
+    - [Training Performance](#training-performance)
+- [Description of Random Situations](#description-of-random-situations)
+- [ModelZoo Homepage](#modelzoo-homepage)
+
+# FCN8s Description
+
+FCN is mainly used in the field of image segmentation and is an end-to-end segmentation method. FCN discards the fully connected layers, which allows it to process images of arbitrary size, reduces the number of model parameters, and speeds up segmentation. FCN uses the VGG structure in the encoder and deconvolution/upsampling operations in the decoder to restore the resolution of the image. FCN-8s finally uses an 8x deconvolution/upsampling operation to restore the output segmentation map to the same size as the input image.
+
+[Paper]: Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully convolutional networks for semantic segmentation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
+
+# Model Architecture
+
+FCN-8s uses VGG16 with the fully connected layers removed as the encoder, fuses the features of the 3rd, 4th, and 5th pooling layers of VGG16 respectively, and finally obtains the segmentation image with a stride-8 deconvolution.
+
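The stride-8 recovery described above can be sanity-checked with a quick spatial-size walkthrough. This is a minimal sketch that only tracks feature-map resolution; the function and variable names are illustrative, not the identifiers used in src/nets/FCN8s.py.

```python
def fcn8s_shapes(h, w):
    """Trace the feature-map sizes of the FCN-8s skip-connection scheme.

    Illustrative only: the real layers carry channels and learned weights;
    here we only follow spatial resolution (assumes h, w divisible by 32).
    """
    pool3 = (h // 8, w // 8)      # encoder output at stride 8
    pool4 = (h // 16, w // 16)    # encoder output at stride 16
    pool5 = (h // 32, w // 32)    # encoder output at stride 32
    # a 2x upsample of pool5 matches pool4 and the two are fused
    fuse4 = (pool5[0] * 2, pool5[1] * 2)
    assert fuse4 == pool4
    # a further 2x upsample matches pool3 for the second fusion
    fuse3 = (fuse4[0] * 2, fuse4[1] * 2)
    assert fuse3 == pool3
    # the final 8x upsample restores the input resolution
    return (fuse3[0] * 8, fuse3[1] * 8)

print(fcn8s_shapes(512, 512))  # -> (512, 512)
```

Doubling pool5 twice lines it up with pool4 and then pool3 for fusion, which is why the final upsampling factor is 8 rather than 32.
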
+# Dataset
+
+- Dataset used:
+
+    [PASCAL VOC 2012](<http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html>)
+
+# Environment Requirements
+
+- Hardware (Ascend/GPU)
+    - Prepare a hardware environment with Ascend or GPU processors.
+- Framework
+    - [MindSpore](https://www.mindspore.cn/install/en)
+- For more information, please check the resources below:
+    - [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/docs/zh-CN/master/index.html)
+
+# Script Description
+
+## Script and Sample Code
+
+```text
+├── model_zoo
+    ├── README.md                     // descriptions about all the models
+    ├── FCN8s
+        ├── README.md                 // descriptions about FCN
+        ├── ascend310_infer           // source code of 310 inference
+        ├── scripts
+            ├── run_train.sh
+            ├── run_standalone_train.sh
+            ├── run_standalone_train_gpu.sh             // train in gpu with single device
+            ├── run_distribute_train_gpu.sh             // train in gpu with multi device
+            ├── run_eval.sh
+            ├── run_eval_onnx.sh         // shell script for ONNX inference
+            ├── run_infer_310.sh         // shell script for Ascend inference
+            ├── build_data.sh
+        ├── src
+        │   ├── data
+        │       ├── build_seg_data.py       // creating dataset
+        │       ├── dataset.py          // loading dataset
+        │   ├── nets
+        │       ├── FCN8s.py            // FCN-8s architecture
+        │   ├── loss
+        │       ├── loss.py            // loss function
+        │   ├── utils
+        │       ├── lr_scheduler.py            // getting learning rate
+        │   ├── model_utils
+        │       ├── config.py                     // getting config parameters
+        │       ├── device_adapter.py            // getting device info
+        │       ├── local_adapter.py            // getting device info
+        │       ├── moxing_adapter.py          // decorator
+        ├── default_config.yaml               // Ascend parameters config
+        ├── gpu_default_config.yaml           // GPU parameters config
+        ├── train.py                 // training script
+        ├── postprogress.py          // post-processing script for 310 inference
+        ├── export.py                // export checkpoint file to air/mindir
+        ├── eval.py                  //  evaluation script
+        ├── eval_onnx.py             //  onnx evaluation script
+```
+
+## Script Parameters
+
+Parameters used for model training and evaluation can be set in config.py:
+
+```text
+  # dataset
+  'data_file': '/data/workspace/mindspore_dataset/FCN/FCN/dataset/MINDRECORED_NAME.mindrecord', # path and name of one mindrecord file
+  'train_batch_size': 32,
+  'crop_size': 512,
+  'image_mean': [103.53, 116.28, 123.675],
+  'image_std': [57.375, 57.120, 58.395],
+  'min_scale': 0.5,
+  'max_scale': 2.0,
+  'ignore_label': 255,
+  'num_classes': 21,
+
+  # optimizer
+  'train_epochs': 500,
+  'base_lr': 0.015,
+  'loss_scale': 1024.0,
+
+  # model
+  'model': 'FCN8s',
+  'ckpt_vgg16': '',
+  'ckpt_pre_trained': '',
+
+  # train
+  'save_steps': 330,
+  'keep_checkpoint_max': 5,
+  'ckpt_dir': './ckpt',
+```
+
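The image_mean and image_std values listed above are per-channel normalization constants (OpenCV-style BGR channel order). As a minimal sketch of how such values are typically applied during preprocessing (illustrative only; the actual pipeline in src/data/dataset.py also performs the crop and scale steps governed by crop_size, min_scale, and max_scale):

```python
# Per-channel normalization sketch using the config values shown above.
# The function name is illustrative, not part of the repo's API.
IMAGE_MEAN = [103.53, 116.28, 123.675]
IMAGE_STD = [57.375, 57.120, 58.395]

def normalize_pixel(bgr):
    """Shift one (B, G, R) pixel to zero mean and unit variance per channel."""
    return [(c - m) / s for c, m, s in zip(bgr, IMAGE_MEAN, IMAGE_STD)]

# A pixel equal to the dataset mean maps to zero in every channel
print(normalize_pixel([103.53, 116.28, 123.675]))  # -> [0.0, 0.0, 0.0]
```
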
+## Training Process
+
+### Launch
+
+You can use python or shell scripts for training.
+
+```bash
+# standalone training example on Ascend
+python train.py --device_id device_id
+or
+bash scripts/run_standalone_train.sh [DEVICE_ID]
+# example: bash scripts/run_standalone_train.sh 0
+
+# distributed training with 8 Ascend devices
+bash scripts/run_train.sh [DEVICE_NUM] rank_table.json
+# example: bash scripts/run_train.sh 8 ~/hccl_8p.json
+
+# standalone training example on GPU
+python train.py  \
+--config_path=gpu_default_config.yaml  \
+--device_target=GPU
+or
+bash scripts/run_standalone_train_gpu.sh DEVICE_ID
+
+# distributed training with 8 GPU devices
+export RANK_SIZE=8
+mpirun --allow-run-as-root -n $RANK_SIZE --output-filename log_output --merge-stderr-to-stdout  \
+python train.py  \
+--config_path=gpu_default_config.yaml \
+--device_target=GPU
+or
+bash run_distribute_train_gpu.sh [RANK_SIZE] [TRAIN_DATA_DIR]
+
+# evaluation example on GPU
+python eval.py  \
+--config_path=gpu_default_config.yaml \
+--device_target=GPU
+```
+
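The base_lr value used by training is the starting point of a decaying schedule generated by src/utils/lr_scheduler.py, not a constant. A rough sketch, assuming plain cosine annealing from base_lr down to eta_min (the repo's exact warmup and per-epoch handling may differ):

```python
import math

def cosine_annealing_lr(base_lr, total_steps, eta_min=0.0):
    """Sketch of a cosine-annealed schedule: decay base_lr to eta_min
    over total_steps, following half a cosine period."""
    return [
        eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * t / total_steps)) / 2
        for t in range(total_steps + 1)
    ]

# train_epochs=500 with roughly 330 steps per epoch (default_config.yaml)
lrs = cosine_annealing_lr(0.015, 500 * 330)
print(lrs[0], lrs[-1])  # starts at base_lr and decays to eta_min
```
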
+### Results
+
+During training, the epoch, step, loss, and accuracy are recorded in log.txt:
+
+```text
+epoch: * step: **, loss is ****
+...
+```
+
+The checkpoint of this model is saved in the default path.
+
+To train the model on ModelArts, refer to the ModelArts [official guide](https://support.huaweicloud.com/modelarts/) to start training and inference; the specific steps are as follows:
+
+```text
+#  Example of distributed training on ModelArts:
+#  Dataset storage layout
+
+#  ├── VOC2012                                                     # dir
+#    ├── VOCdevkit                                                 # VOCdevkit dir
+#      ├── Please refer to VOCdevkit structure
+#    ├── benchmark_RELEASE                                         # benchmark_RELEASE dir
+#      ├── Please refer to benchmark_RELEASE structure
+#    ├── backbone                                                  # backbone dir
+#      ├── vgg_predtrained.ckpt
+#    ├── predtrained                                               # predtrained dir
+#      ├── FCN8s_1-133_300.ckpt
+#    ├── checkpoint                                                # checkpoint dir
+#      ├── FCN8s_1-133_300.ckpt
+#    ├── vocaug_mindrecords                                        # train dataset dir
+#      ├── voctrain.mindrecords0
+#      ├── voctrain.mindrecords0.db
+#      ├── voctrain.mindrecords1
+#      ├── voctrain.mindrecords1.db
+#      ├── voctrain.mindrecords2
+#      ├── voctrain.mindrecords2.db
+#      ├── voctrain.mindrecords3
+#      ├── voctrain.mindrecords3.db
+#      ├── voctrain.mindrecords4
+#      ├── voctrain.mindrecords4.db
+#      ├── voctrain.mindrecords5
+#      ├── voctrain.mindrecords5.db
+#      ├── voctrain.mindrecords6
+#      ├── voctrain.mindrecords6.db
+#      ├── voctrain.mindrecords7
+#      ├── voctrain.mindrecords7.db
+
+# (1) Choose either a (modify the yaml file parameters) or b (create a ModelArts training job and modify the parameters there)
+#       a. Set "enable_modelarts=True"
+#          Set "ckpt_dir=/cache/train/outputs_FCN8s/"
+#          Set "ckpt_vgg16=/cache/data/backbone/vgg_predtrain file"; if there is no pretrained backbone, set ckpt_vgg16=""
+#          Set "ckpt_pre_trained=/cache/data/predtrained/pred file"; if you do not need to resume training, set ckpt_pre_trained=""
+#          Set "data_file=/cache/data/vocaug_mindrecords/voctrain.mindrecords0"
+
+#       b. Add the "enable_modelarts=True" parameter on the ModelArts web UI
+#          Set the parameters required by method a on the ModelArts web UI
+#          Note: path parameters do not need quotation marks
+
+# (2) Set the path of the network configuration file "_config_path=/The path of config in default_config.yaml/"
+# (3) Set the code path "/path/FCN8s" on the ModelArts web UI
+# (4) Set the model's startup file "train.py" on the ModelArts web UI
+# (5) Set the model's data path ".../VOC2012" (select the VOC2012 folder path) on the ModelArts web UI,
+# the model's "Output file path", and the model's "Job log path"
+# (6) Start training the model
+
+# Example of model inference on ModelArts
+# (1) Place the trained model in the corresponding location in the bucket
+# (2) Choose either a or b
+#       a. Set "enable_modelarts=True"
+#          Set "data_root=/cache/data/VOCdevkit/VOC2012/"
+#          Set "data_lst=./ImageSets/Segmentation/val.txt"
+#          Set "ckpt_file=/cache/data/checkpoint/ckpt file name"
+
+#       b. Add the "enable_modelarts=True" parameter on the ModelArts web UI
+#          Set the parameters required by method a on the ModelArts web UI
+#          Note: path parameters do not need quotation marks
+
+# (3) Set the path of the network configuration file "_config_path=/The path of config in default_config.yaml/"
+# (4) Set the code path "/path/FCN8s" on the ModelArts web UI
+# (5) Set the model's startup file "eval.py" on the ModelArts web UI
+# (6) Set the model's data path ".../VOC2012" (select the VOC2012 folder path) on the ModelArts web UI,
+# the model's "Output file path", and the model's "Job log path"
+# (7) Start model inference
+```
+
+## Evaluation Process
+
+### Launch
+
+Evaluate using the PASCAL VOC 2012 validation set on Ascend or GPU.
+
+Before running the command, check the checkpoint path used for evaluation. Set it to an absolute path to the checkpoint, such as "/data/workspace/mindspore_dataset/FCN/FCN/model_new/FCN8s-500_82.ckpt".
+
+```bash
+python eval.py
+```
+
+```bash
+bash scripts/run_eval.sh DATA_ROOT DATA_LST CKPT_PATH
+# example: bash scripts/run_eval.sh /home/DataSet/voc2012/VOCdevkit/VOC2012 \
+# /home/DataSet/voc2012/VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt /home/FCN8s/ckpt/fcn8s_ascend_v180_voc2012_official_cv_meanIoU62.7.ckpt
+```
+
+### Results
+
+The above python command runs in the terminal, where you can view the evaluation results. The accuracy on the test set is reported as follows:
+
+```text
+mean IoU 0.638887018016709
+```
+
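The mean IoU figure above is the standard semantic-segmentation metric: per-class intersection over union, averaged over the classes present, with ignore_label pixels excluded. A minimal pure-Python sketch (illustrative only; the helper name is ours, not part of eval.py):

```python
def mean_iou(preds, labels, num_classes, ignore_label=255):
    """Compute mean intersection-over-union from flat prediction/label lists.

    Pixels whose label equals ignore_label are skipped, matching the
    ignore_label=255 convention of the PASCAL VOC masks.
    """
    inter = [0] * num_classes
    union = [0] * num_classes
    for p, l in zip(preds, labels):
        if l == ignore_label:
            continue
        if p == l:
            inter[l] += 1
            union[l] += 1
        else:  # a mismatch enlarges the union of both classes
            union[l] += 1
            union[p] += 1
    ious = [i / u for i, u in zip(inter, union) if u > 0]
    return sum(ious) / len(ious)

# Two classes over 4 pixels: class 0 IoU = 2/3, class 1 IoU = 1/2
print(mean_iou([0, 0, 1, 0], [0, 0, 1, 1], num_classes=2))  # -> 0.5833...
```
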
+# Inference Process
+
+## Export ONNX
+
+```bash
+python export.py --ckpt_file [CKPT_PATH] --file_format [EXPORT_FORMAT] --config_path [CONFIG_PATH]
+```
+
+For example: `python export.py --ckpt_file /root/zj/models/official/cv/FCN8s/checkpoint/fcn8s_ascend_v180_voc2012_official_cv_meanIoU62.7.ckpt --file_format ONNX --config_path /root/zj/models/official/cv/FCN8s/default_config.yaml`
+
+The ckpt_file parameter is required, `EXPORT_FORMAT` must be one of ["AIR", "MINDIR", "ONNX"], and config_path is the corresponding configuration file.
+
+Export ONNX on ModelArts:
+
+```Modelarts
+Example of exporting ONNX on ModelArts
+The dataset is organized in the same way as for ModelArts training
+# (1) Choose either option a (modify the yaml file) or option b (modify the parameters when creating the ModelArts training job).
+#       a. Set "enable_modelarts=True"
+#          Set "file_name=fcn8s"
+#          Set "file_format=ONNX"
+#          Set "ckpt_file=/cache/data/checkpoint file name"
+
+#       b. Add the "enable_modelarts=True" parameter on the ModelArts UI.
+#          Set the remaining parameters required by option a on the ModelArts UI
+#          Note: path parameters do not need to be quoted
+# (2) Set the path of the network config file "_config_path=/The path of config in default_config.yaml/"
+# (3) On the ModelArts UI, set the code directory to "/path/fcn8s".
+# (4) On the ModelArts UI, set the boot file to "export.py".
+# (5) On the ModelArts UI, set the data path of the model to ".../VOC2012/checkpoint" (select the VOC2012/checkpoint folder),
+# the output path "Output file path" for the exported file, and the log path "Job log path".
+```
+
+## Run ONNX Inference on GPU
+
+Before running inference, the ONNX file must be exported via the `export.py` script. The following shows an example of running inference with the ONNX model.
+
+```bash
+# ONNX inference
+bash scripts/run_eval_onnx.sh [ONNX_PATH] [DATA_ROOT] [DATA_LST]
+```
+
+For example: bash scripts/run_eval_onnx.sh /root/zj/models/official/cv/FCN8s/fcn8s.onnx /root/zj/models/official/cv/FCN8s/dataset/VOC2012 /root/zj/models/official/cv/FCN8s/dataset/VOC2012/ImageSets/Segmentation/val.txt
+
+## Results
+
+- eval on GPU
+
+The above command runs in the terminal; you can view the result of this evaluation there. The accuracy on the test set is reported in a form similar to:
+
+```text
+mean IoU 0.6388868594659682
+```
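Before an image reaches the network, `eval_onnx.py` resizes its long side to `crop_size`, normalizes with the VOC mean/std, pads to a square, and transposes HWC to CHW. A numpy-only sketch of the normalize/pad/transpose steps (the cv2 resize is omitted, so the input is assumed to already fit within `crop_size`):

```python
import numpy as np

def preprocess(img, crop_size=512,
               mean=(103.53, 116.28, 123.675), std=(57.375, 57.120, 58.395)):
    # normalize per channel with the VOC statistics used by the scripts
    img = (img.astype(np.float32) - np.array(mean)) / np.array(std)
    # zero-pad bottom/right up to crop_size x crop_size
    pad_h, pad_w = crop_size - img.shape[0], crop_size - img.shape[1]
    img = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)))
    # HWC -> CHW, as expected by the network input
    return img.transpose((2, 0, 1)).astype(np.float32)

x = preprocess(np.zeros((400, 300, 3), dtype=np.uint8))
print(x.shape)  # (3, 512, 512)
```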
+
+# Model Description
+
+## Training Performance
+
+| Parameters | GPU |
+| -------- | ------------------------------------------------------------ |
+| Model name | FCN8s |
+| Environment | TITAN Xp 12G |
+| Upload date | 2022-09-22 |
+| Dataset | PASCAL VOC 2012 |
+| Training parameters | default_config.yaml |
+| Optimizer | Momentum |
+| Loss function | Softmax Cross Entropy |
+| Final loss | 0.036 |
+| Speed | 1pc: 455.460 ms/step |
+| mean IoU | 0.6388868594659682 |
+| Script | [link](https://gitee.com/mindspore/models/tree/master/official/cv/FCN8s) |
+
+# Description of Random Situation
+
+We set the random seed in the `train.py` script.
+
+# ModelZoo
+
+Please check the official [homepage](https://gitee.com/mindspore/models).
diff --git a/official/cv/FCN8s/eval_onnx.py b/official/cv/FCN8s/eval_onnx.py
new file mode 100644
index 0000000000000000000000000000000000000000..4d9d3c80519ca823c5ca03a29aab8c84011711d2
--- /dev/null
+++ b/official/cv/FCN8s/eval_onnx.py
@@ -0,0 +1,200 @@
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""eval FCN8s."""
+import argparse
+import onnxruntime
+import numpy as np
+import cv2
+from PIL import Image
+from mindspore import Tensor
+import mindspore.common.dtype as mstype
+import mindspore.nn as nn
+from mindspore import context
+parser = argparse.ArgumentParser(description='Eval ONNX')
+parser.add_argument('--onnx_path', type=str, default="", help='Path of the exported ONNX model')
+parser.add_argument('--device_target', type=str, default='GPU', help='Device target')
+parser.add_argument('--image_mean', default=[103.53, 116.28, 123.675], help='Image mean')
+parser.add_argument('--image_std', default=[57.375, 57.120, 58.395], help='Image std')
+parser.add_argument('--eval_batch_size', type=int, default=1, help='Eval batch size')
+parser.add_argument('--data_lst', default="", help='Path of the list of evaluation images')
+parser.add_argument('--data_root', default="", help='Root directory of the dataset')
+parser.add_argument('--num_classes', type=int, default=21, help='Number of classes')
+parser.add_argument('--scales', default=[1.0], help='Evaluation scales')
+parser.add_argument('--crop_size', type=int, default=512, help='Crop size')
+parser.add_argument('--flip', default=False, help='Also evaluate the horizontally flipped image')
+args = parser.parse_args()
+def cal_hist(a, b, n):
+    k = (a >= 0) & (a < n)
+    return np.bincount(n * a[k].astype(np.int32) + b[k], minlength=n ** 2).reshape(n, n)
+
+def resize_long(img, long_size=513):
+    h, w, _ = img.shape
+    if h > w:
+        new_h = long_size
+        new_w = int(1.0 * long_size * w / h)
+    else:
+        new_w = long_size
+        new_h = int(1.0 * long_size * h / w)
+    imo = cv2.resize(img, (new_w, new_h))
+    return imo
+
+
+class BuildEvalNetwork(nn.Cell):
+    def __init__(self, network):
+        super(BuildEvalNetwork, self).__init__()
+        self.network = network
+        self.softmax = nn.Softmax(axis=1)
+
+    def construct(self, input_data):
+        output = self.network(input_data)
+        output = self.softmax(output)
+        return output
+
+
+def pre_process(configs, img_, crop_size=512):
+    # resize
+    img_ = resize_long(img_, crop_size)
+    resize_h, resize_w, _ = img_.shape
+
+    # mean, std
+    image_mean = np.array(configs.image_mean)
+    image_std = np.array(configs.image_std)
+    img_ = (img_ - image_mean) / image_std
+
+    # pad to crop_size
+    pad_h = crop_size - img_.shape[0]
+    pad_w = crop_size - img_.shape[1]
+    if pad_h > 0 or pad_w > 0:
+        img_ = cv2.copyMakeBorder(img_, 0, pad_h, 0, pad_w, cv2.BORDER_CONSTANT, value=0)
+
+    # hwc to chw
+    img_ = img_.transpose((2, 0, 1))
+    return img_, resize_h, resize_w
+
+
+def eval_batch(configs, eval_net, img_lst, crop_size=512, flip=True):
+    result_lst = []
+    batch_size = len(img_lst)
+    batch_img = np.zeros((configs.eval_batch_size, 3, crop_size, crop_size), dtype=np.float32)
+    resize_hw = []
+    for l in range(batch_size):
+        img_ = img_lst[l]
+        img_, resize_h, resize_w = pre_process(configs, img_, crop_size)
+        batch_img[l] = img_
+        resize_hw.append([resize_h, resize_w])
+
+    inputs = {eval_net.get_inputs()[0].name: batch_img}
+    net_out = eval_net.run(None, inputs)
+    net_out = np.expand_dims(np.squeeze(net_out), axis=0)
+
+    if flip:
+        # second pass on the horizontally flipped batch; un-flip and sum the outputs
+        flip_img = np.ascontiguousarray(batch_img[:, :, :, ::-1])
+        inputs = {eval_net.get_inputs()[0].name: flip_img}
+        net_out_flip = np.expand_dims(np.squeeze(eval_net.run(None, inputs)), axis=0)
+        net_out += net_out_flip[:, :, :, ::-1]
+
+    for bs in range(batch_size):
+
+        probs_ = net_out[bs][:, :resize_hw[bs][0], :resize_hw[bs][1]].transpose((1, 2, 0))
+        ori_h, ori_w = img_lst[bs].shape[0], img_lst[bs].shape[1]
+        probs_ = cv2.resize(probs_.astype(np.float32), (ori_w, ori_h))
+        result_lst.append(probs_)
+
+    return result_lst
+
+
+def eval_batch_scales(configs, eval_net, img_lst, scales,
+                      base_crop_size=512, flip=True):
+    sizes_ = [int((base_crop_size - 1) * sc) + 1 for sc in scales]
+    probs_lst = eval_batch(configs, eval_net, img_lst, crop_size=sizes_[0], flip=flip)
+    print(sizes_)
+    for crop_size_ in sizes_[1:]:
+        probs_lst_tmp = eval_batch(configs, eval_net, img_lst, crop_size=crop_size_, flip=flip)
+        for pl, _ in enumerate(probs_lst):
+            probs_lst[pl] += probs_lst_tmp[pl]
+
+    result_msk = []
+    for i in probs_lst:
+        result_msk.append(i.argmax(axis=2))
+    return result_msk
+
+
+def net_eval():
+    if args.device_target == 'GPU':
+        providers = ['CUDAExecutionProvider']
+    elif args.device_target == 'CPU':
+        providers = ['CPUExecutionProvider']
+    else:
+        raise ValueError(
+            f'Unsupported target device {args.device_target}, '
+            f'Expected one of: "CPU", "GPU"'
+                )
+
+    context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target,
+                        save_graphs=False)
+
+    # data list
+    with open(args.data_lst) as f:
+        img_lst = f.readlines()
+
+    session = onnxruntime.InferenceSession(args.onnx_path, providers=providers)
+
+    # evaluate
+    hist = np.zeros((args.num_classes, args.num_classes))
+    batch_img_lst = []
+    batch_msk_lst = []
+    bi = 0
+    image_num = 0
+    for i, line in enumerate(img_lst):
+
+        img_name = line.strip('\n')
+        data_root = args.data_root
+        img_path = data_root + '/JPEGImages/' + str(img_name) + '.jpg'
+        msk_path = data_root + '/SegmentationClass/' + str(img_name) + '.png'
+
+        img_ = np.array(Image.open(img_path), dtype=np.uint8)
+        msk_ = np.array(Image.open(msk_path), dtype=np.uint8)
+
+        batch_img_lst.append(img_)
+        batch_msk_lst.append(msk_)
+        bi += 1
+        if bi == args.eval_batch_size:
+            batch_res = eval_batch_scales(args, session, batch_img_lst, scales=args.scales,
+                                          base_crop_size=args.crop_size, flip=args.flip)
+            for mi in range(args.eval_batch_size):
+                hist += cal_hist(batch_msk_lst[mi].flatten(), batch_res[mi].flatten(), args.num_classes)
+
+            bi = 0
+            batch_img_lst = []
+            batch_msk_lst = []
+            print('processed {} images'.format(i+1))
+        image_num = i
+
+    if bi > 0:
+        batch_res = eval_batch_scales(args, session, batch_img_lst, scales=args.scales,
+                                      base_crop_size=args.crop_size, flip=args.flip)
+        for mi in range(bi):
+            hist += cal_hist(batch_msk_lst[mi].flatten(), batch_res[mi].flatten(), args.num_classes)
+        print('processed {} images'.format(image_num + 1))
+
+    print(hist)
+    iu = np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))
+    print('per-class IoU', iu)
+    print('mean IoU', np.nanmean(iu))
+
+
+if __name__ == '__main__':
+    net_eval()
diff --git a/official/cv/FCN8s/scripts/run_eval_onnx.sh b/official/cv/FCN8s/scripts/run_eval_onnx.sh
new file mode 100644
index 0000000000000000000000000000000000000000..052ae0aeed0ae97dc3a613cde79b2c6fbba55137
--- /dev/null
+++ b/official/cv/FCN8s/scripts/run_eval_onnx.sh
@@ -0,0 +1,56 @@
+#!/bin/bash
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+
+echo "=============================================================================================================="
+echo "Please run the script as: "
+echo "sh run_distribute_eval.sh ONNX_PATH RANK_TABLE_FILE DATASET CONFIG_PATH "
+echo "for example: sh scripts/run_eval.sh path/to/onnx_path /path/to/dataroot /path/to/dataset"
+echo "It is better to use absolute path."
+echo "================================================================================================================="
+if [ $# -lt 3 ]; then
+    echo "Usage: bash ./scripts/run_eval_onnx.sh [ONNX_PATH] [DATA_ROOT] [DATA_PATH]"
+exit 1
+fi
+get_real_path(){
+  if [ "${1:0:1}" == "/" ]; then
+    echo "$1"
+  else
+    echo "$(realpath -m $PWD/$1)"
+  fi
+}
+export ONNX_PATH=$(get_real_path $1)
+export DATA_ROOT=$2
+export DATA_PATH=$3
+if [ ! -f "$ONNX_PATH" ]
+then
+    echo "error: ONNX_PATH=$ONNX_PATH is not a file"
+    exit 1
+fi
+rm -rf eval
+mkdir ./eval
+cp ./*.py ./eval
+cp ./*.yaml ./eval
+cp -r ./src ./eval
+cd ./eval || exit
+echo "start testing"
+env > env.log
+python eval_onnx.py  \
+--onnx_path=$ONNX_PATH \
+--data_root=$DATA_ROOT  \
+--data_lst=$DATA_PATH   \
+--device_target="GPU"  \
+--eval_batch_size=1 #> log.txt 2>&1 &