diff --git a/research/cv/EGnet/README_CN.md b/research/cv/EGnet/README_CN.md
index 7de4223ba18f5a00d3e3d9fd3a9dd5200342014d..b2c58680628389abe81b12d06d4bb92139709e95 100644
--- a/research/cv/EGnet/README_CN.md
+++ b/research/cv/EGnet/README_CN.md
@@ -1,73 +1,221 @@
-
 # Contents

 - [Contents](#contents)
-- [EGNet Description](#egnet-description)
-- [Model Architecture](#model-architecture)
-- [Dataset](#dataset)
-- [Environment Requirements](#environment-requirements)
-- [Quick Start](#quick-start)
-- [Script Description](#script-description)
-    - [Script and Sample Code](#script-and-sample-code)
-    - [Script Parameters](#script-parameters)
-- [Training Process](#training-process)
-    - [Training](#training)
-    - [Distributed Training](#distributed-training)
-- [Evaluation Process](#evaluation-process)
-    - [Evaluation](#evaluation)
-- [Export Process](#export-process)
-    - [Export](#export)
-- [Model Description](#model-description)
-    - [Performance](#performance)
-        - [Evaluation Performance](#evaluation-performance)
-            - [EGNet on DUTS-TR](#egnet-on-duts-tr)
-        - [Inference Performance](#inference-performance)
-            - [EGNet on Saliency Detection Datasets](#egnet-on-saliency-detection-datasets)
-- [ModelZoo Homepage](#modelzoo-homepage)
-
-# EGNet Description
+    - [EGNet Description](#egnet-description)
+    - [Model Architecture](#model-architecture)
+    - [Dataset](#dataset)
+        - [Dataset Preprocessing](#dataset-preprocessing)
+    - [Pretrained Models](#pretrained-models)
+    - [Environment Requirements](#environment-requirements)
+    - [Quick Start](#quick-start)
+    - [Script Description](#script-description)
+        - [Script and Sample Code](#script-and-sample-code)
+        - [Script Parameters](#script-parameters)
+    - [Training Process](#training-process)
+        - [Training](#training)
+        - [Distributed Training](#distributed-training)
+    - [Evaluation Process](#evaluation-process)
+        - [Evaluation](#evaluation)
+    - [Export Process](#export-process)
+        - [Export](#export)
+    - [Model Description](#model-description)
+        - [Performance](#performance)
+            - [Evaluation Performance](#evaluation-performance)
+                - [EGNet on DUTS-TR (Ascend)](#egnet-on-duts-tr-ascend)
+                - [EGNet on DUTS-TR (GPU)](#egnet-on-duts-tr-gpu)
+            - [Inference Performance](#inference-performance)
+                - [EGNet on Saliency Detection Datasets (Ascend)](#egnet-on-saliency-detection-datasets-ascend)
+                - [EGNet on Saliency Detection Datasets (GPU)](#egnet-on-saliency-detection-datasets-gpu)
+    - [ModelZoo Homepage](#modelzoo-homepage)
+
+## EGNet Description
 
 EGNet is designed for static salient object detection. It consists of three parts: an edge feature extraction branch, a salient object feature extraction branch, and a one-to-one guidance module. Edge features help localize salient objects and make their boundaries more accurate. Compared against 15 state-of-the-art methods on 6 different datasets, EGNet achieves the best performance.
 
+[PyTorch source code of EGNet](https://github.com/JXingZhao/EGNet), provided by the paper authors. It includes the run scripts, model files, and data processing files, together with download links for the datasets, initialization models, and pretrained models, so it can be used directly for training and testing.
+
 [Paper](https://arxiv.org/abs/1908.08297): Zhao J X, Liu J J, Fan D P, et al. EGNet: Edge guidance network for salient object detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 8779-8788.
 
-# Model Architecture
+## Model Architecture
 
 The EGNet network consists of three parts: NLSEM (edge extraction module), PSFEM (salient object feature extraction module), and O2OGM (one-to-one guidance module). The input image goes through two convolutions to produce edge information; at the same time, deeper convolutions are applied to the original image to extract the salient objects. The edge information is then fused (FF) with the salient objects extracted at different depths in the one-to-one guidance module, and each fused result passes through further convolutions to obtain saliency maps at different levels. The network finally outputs a single fused saliency detection image.
 
-# Dataset
+## Dataset
+
+All datasets are placed under a single root directory; the folders below are created relative to it.
+
+- 璁粌闆嗭細[DUTS-TR鏁版嵁闆哴(http://saliencydetection.net/duts/download/DUTS-TR.zip)锛�210MB锛屽叡10533寮犳渶澶ц竟闀夸负400鍍忕礌鐨勫僵鑹插浘鍍忥紝鍧囦粠ImageNet DET璁粌/楠岃瘉闆嗕腑鏀堕泦銆�
+
+鍒涘缓鍚嶄负鈥淒UTS-TR鈥濈殑鏂囦欢澶癸紝鏍规嵁浠ヤ笂閾炬帴涓嬭浇鏁版嵁闆嗘斁鍏ユ枃浠跺す锛屽苟瑙e帇鍒板綋鍓嶈矾寰勩€�
+
+```bash
+鈹溾攢鈹€DUTS-TR
+    鈹溾攢鈹€DUTS-TR-Image
+    鈹溾攢鈹€DUTS-TR-Mask
+```
+
+- Test set: [DUTS-TE dataset](http://saliencydetection.net/duts/download/DUTS-TE.zip), 32.3 MB, 5,019 color images with a maximum side length of 400 pixels, collected from the ImageNet DET test set and the SUN dataset.
+
+Create a folder named "DUTS-TE", download the dataset from the link above into it, and extract it in place.
+
+```bash
+├──DUTS-TE
+    ├──DUTS-TE-Image
+    ├──DUTS-TE-Mask
+```
+
+- Test set: [SOD dataset](https://www.elderlab.yorku.ca/?smd_process_download=1&download_id=8285), 21.2 MB, 300 color images with a maximum side length of 400 pixels; a collection of salient object boundaries based on the Berkeley Segmentation Dataset (BSD).
+
+Create a folder named "SOD", download the dataset from the link above into it, and extract it in place.
+
+```bash
+├──SOD
+    ├──Imgs
+
+- Test set: [ECSSD dataset](http://www.cse.cuhk.edu.hk/leojia/projects/hsaliency/data/ECSSD/images.zip, http://www.cse.cuhk.edu.hk/leojia/projects/hsaliency/data/ECSSD/ground_truth_mask.zip), 64.6 MB, 1,000 color images with a maximum side length of 400 pixels.
+
+Create a folder named "ECSSD", download the original images and ground truth from the links above into it, and extract them in place.
+
+```bash
+├──ECSSD
+    ├──ground_truth_mask
+    ├──images
+
+- Test set: [PASCAL-S dataset](https://academictorrents.com/download/6c49defd6f0e417c039637475cde638d1363037e.torrent), 175 MB, 850 32*32 color images in 10 classes. This dataset differs considerably from other salient object detection datasets: it has no very obvious salient objects and was annotated mainly from human eye fixations, so it is relatively difficult.
+
+Download the dataset from the link above and extract it in place. In the dataset root, create a folder named "PASCAL-S" containing an Imgs folder, and move datasets/imgs/pascal and datasets/masks/pascal into the Imgs folder.
+
+```bash
+├──PASCAL-S
+    ├──Imgs
+```
+
+- Test set: [DUTS-OMRON dataset](http://saliencydetection.net/dut-omron/download/DUT-OMRON-image.zip, http://saliencydetection.net/dut-omron/download/DUT-OMRON-gt-pixelwise.zip.zip), 107 MB, 5,168 color images with a maximum side length of 400 pixels. Images contain one or more salient objects and relatively complex backgrounds, with large-scale ground-truth annotations as eye fixations, bounding boxes, and pixel-wise masks.
+
+Create a folder named "DUTS-OMRON-image", download the dataset from the links above into it, and extract it in place.
 
-Dataset used: [saliency detection datasets](<https://blog.csdn.net/studyeboy/article/details/102383922?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522163031601316780274127035%2522%252C%2522scm%2522%253A%252220140713.130102334.pc%255Fall.%2522%257D&request_id=163031601316780274127035&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~all~first_rank_ecpm_v1~hot_rank-5-102383922.first_rank_v2_pc_rank_v29&utm_term=DUTS-TE%E6%95%B0%E6%8D%AE%E9%9B%86%E4%B8%8B%E8%BD%BD&spm=1018.2226.3001.4187>)
+```bash
+├──DUTS-OMRON-image
+    ├──DUTS-OMRON-image
+    ├──pixelwiseGT-new-PNG
+```
+
+- Test set: [HKU-IS dataset](https://i.cs.hku.hk/~gbli/deep_saliency.html), 893 MB, 4,447 color images with a maximum side length of 400 pixels. Each image satisfies at least one of the following three criteria: 1) multiple scattered salient objects; 2) at least one salient object touching the image boundary; 3) salient objects similar in appearance to the background.

-- Dataset size:
-    - Training set: DUTS-TR dataset, 210 MB, 10,533 color images with a maximum side length of 400 pixels, all collected from the ImageNet DET training/validation sets.
-    - Test set: SOD dataset, 21.2 MB, 300 color images with a maximum side length of 400 pixels; a collection of salient object boundaries based on the Berkeley Segmentation Dataset (BSD).
-    - Test set: ECSSD dataset, 64.6 MB, 1,000 color images with a maximum side length of 400 pixels.
-    - Test set: PASCAL-S dataset, 175 MB, 850 32*32 color images in 10 classes. This dataset differs considerably from other salient object detection datasets: it has no very obvious salient objects and was annotated mainly from human eye fixations, so it is relatively difficult.
-    - Test set: DUTS-OMRON dataset, 107 MB, 5,168 color images with a maximum side length of 400 pixels. Images contain one or more salient objects and relatively complex backgrounds, with large-scale ground-truth annotations as eye fixations, bounding boxes, and pixel-wise masks.
-    - Test set: HKU-IS dataset, 893 MB, 4,447 color images with a maximum side length of 400 pixels. Each image satisfies at least one of the following three criteria: 1) multiple scattered salient objects; 2) at least one salient object touching the image boundary; 3) salient objects similar in appearance to the background.
-- Data format: binary files
-    - Note: data is processed in src/dataset.py.
+Create a folder named "HKU-IS", download the dataset from the link above into it, and extract it in place.

-# Environment Requirements
+```bash
+├──HKU-IS
+    ├──imgs
+    ├──gt
+```
+
+### Dataset Preprocessing
+
+Run the dataset_preprocess.sh script to unify the dataset formats, crop the images, and generate the corresponding lst files. Each test set gets a test.lst; the training set gets both test.lst and train_pair_edge.lst.
+
+```shell
+# DATA_ROOT: root directory containing all datasets
+# OUTPUT_ROOT: output directory
+bash dataset_preprocess.sh [DATA_ROOT] [OUTPUT_ROOT]
+```
+
+1. The processed DUTS-TR dataset directory looks as follows. DUTS-TR-Mask holds the ground truth, DUTS-TR-Image holds the original images, test.lst lists the image files in the dataset, and train_pair_edge.lst lists the image, ground-truth, and edge-map files.
+
+```bash
+├──DUTS-TR
+    ├──DUTS-TR-Image
+    ├──DUTS-TR-Mask
+    ├──test.lst
+    ├──train_pair_edge.lst
+```
+
+2. The processed DUTS-TE dataset directory looks as follows. DUTS-TE-Mask holds the ground truth, DUTS-TE-Image holds the original images, and test.lst lists the image files in the dataset.
+
+```bash
+├──DUTS-TE
+    ├──DUTS-TE-Image
+    ├──DUTS-TE-Mask
+    ├──test.lst
+```
+
+3. After processing, the 5 test sets other than DUTS-TE share the following layout (HKU-IS shown as an example): ground_truth_mask holds the ground truth, images holds the original images, and test.lst lists the image files in the dataset.
+
+```bash
+├──HKU-IS
+    ├──ground_truth_mask
+    ├──images
+    ├──test.lst
+```
+
+4. test.lst lists the image files in a dataset; train_pair_edge.lst lists the image, ground-truth, and edge-map files.
+
+```bash
+test.lst format (HKU-IS as an example)
+
+    0004.png
+    0005.png
+    0006.png
+    ....
+    9056.png
+    9057.png
+```
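For reference, the test.lst layout above can be reproduced with a few lines of Python; this is a minimal sketch (the function name and directory arguments are illustrative), not the repository's dataset_preprocess.sh itself:

```python
import os

def make_test_lst(image_dir, output_path):
    """Write one image filename per line, mirroring the test.lst layout above."""
    names = sorted(n for n in os.listdir(image_dir)
                   if n.lower().endswith((".jpg", ".png")))
    with open(output_path, "w") as f:
        f.write("\n".join(names))
    return names
```

Run it after cropping so the list matches the processed images.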
 
-- 纭欢锛圓scend/GPU/CPU锛�
-    - 浣跨敤Ascend/GPU/CPU澶勭悊鍣ㄦ潵鎼缓纭欢鐜銆�
+```bash
+train_pair_edge.lst format (DUTS-TR)
+
+    DUTS-TR-Image/ILSVRC2012_test_00007606.jpg DUTS-TR-Mask/ILSVRC2012_test_00007606.png DUTS-TR-Mask/ILSVRC2012_test_00007606_edge.png
+    DUTS-TR-Image/n03770439_12912.jpg DUTS-TR-Mask/n03770439_12912.png DUTS-TR-Mask/n03770439_12912_edge.png
+    DUTS-TR-Image/ILSVRC2012_test_00062061.jpg DUTS-TR-Mask/ILSVRC2012_test_00062061.png DUTS-TR-Mask/ILSVRC2012_test_00062061_edge.png
+    ....
+    DUTS-TR-Image/n02398521_31039.jpg DUTS-TR-Mask/n02398521_31039.png DUTS-TR-Mask/n02398521_31039_edge.png
+    DUTS-TR-Image/n07768694_14708.jpg DUTS-TR-Mask/n07768694_14708.png DUTS-TR-Mask/n07768694_14708_edge.png
+```
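The triplet lines above pair each image with its mask and edge map. A minimal Python sketch of that pairing, assuming the `_edge.png` suffix convention shown above (the function name is illustrative):

```python
import os

def make_train_pair_edge_lst(image_dir, mask_dir, output_path):
    """Write "image mask edge" triplets, mirroring train_pair_edge.lst above."""
    lines = []
    for name in sorted(os.listdir(image_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() != ".jpg":
            continue  # only original .jpg images are listed
        lines.append("%s/%s %s/%s.png %s/%s_edge.png" % (
            os.path.basename(image_dir), name,
            os.path.basename(mask_dir), stem,
            os.path.basename(mask_dir), stem))
    with open(output_path, "w") as f:
        f.write("\n".join(lines))
    return lines
```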
+
+## 棰勮缁冩ā鍨�
+
+pytorch棰勮缁冩ā鍨嬶紙鍖呮嫭vgg16, resnet50)
+
+VGG涓诲共缃戠粶閫夌敤vgg16鐨勭粨鏋勶紝鍖呭惈13涓嵎绉眰鍜�3涓叏杩炴帴灞傦紝鏈ā鍨嬩笉浣跨敤鍏ㄨ繛鎺ュ眰銆�
+
+涓嬭浇 [VGG16棰勮缁冩ā鍨媇(https://download.mindspore.cn/thirdparty/vgg16_20M.pth)
+
+ResNet涓诲共缃戠粶閫夌敤resnet50鐨勭粨鏋勶紝鍖呭惈鍗风Н灞傚拰鍏ㄨ繛鎺ュ眰鍦ㄥ唴鍏辨湁50灞傦紝鏈ā鍨嬩笉浣跨敤鍏ㄨ繛鎺ュ眰銆傛暣浣撶敱5涓猄tage缁勬垚锛岀涓€涓猄tage瀵硅緭鍑鸿繘琛岄澶勭悊锛屽悗鍥涗釜Stage鍒嗗埆鍖呭惈3,4,6,3涓狟ottleneck銆�
+
+涓嬭浇 [ResNet50棰勮缁冩ā鍨媇(https://download.mindspore.cn/thirdparty/resnet50_caffe.pth)
+
+mindspore棰勮缁冩ā鍨�
+
+涓嬭浇pytorch棰勮缁冩ā鍨嬶紝鍐嶈繍琛屽涓嬭剼鏈紝寰楀埌瀵瑰簲鐨刴indspore妯″瀷銆傛敞锛氳繍琛岃鑴氭湰闇€瑕佸悓鏃跺畨瑁卲ytorch鐜(娴嬭瘯鐗堟湰鍙蜂负1.3锛孋PU 鎴� GPU)
+
+```bash
+# MODEL_NAME: model name, vgg or resnet
+# PTH_FILE: absolute path of the model file to convert
+# MSP_FILE: absolute path of the output model file
+bash convert_model.sh [MODEL_NAME] [PTH_FILE] [MSP_FILE]
+```
+
+## Environment Requirements
+
+- Hardware (Ascend/GPU)
+    - Set up the hardware environment with Ascend or GPU processors.
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For details, see the following resources:
     - [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
     - [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
 
-# Quick Start
+## Quick Start

 After installing MindSpore from the official website, you can follow the steps below to train and evaluate:

+Configure default_config.yaml first: train_path sets the training set path, base_model selects the backbone type (vgg or resnet), test_path sets the test set path, and the vgg and resnet entries set the pretrained model paths. Parameters can also be passed to the scripts under scripts/, overriding the values set in default_config.yaml. Note: all scripts must be run from inside the scripts directory.
+
 - Running on Ascend
 
 ```shell
-# crop the dataset
-python data_crop.py --data_name=[DATA_NAME]  --data_root=[DATA_ROOT] --output_path=[OUTPUT_PATH]

 # run the training example
 bash run_standalone_train.sh
@@ -79,76 +227,94 @@ bash run_distribute_train.sh 8 [RANK_TABLE_FILE]
 bash run_eval.sh
 ```
 
-璁粌闆嗚矾寰勫湪default_config.yaml涓殑data椤硅缃�
+- GPU澶勭悊鍣ㄧ幆澧冭繍琛�
 
-# 鑴氭湰璇存槑
+```shell
 
-## 鑴氭湰鍙婃牱渚嬩唬鐮�
+# 杩愯璁粌绀轰緥
+bash run_standalone_train_gpu.sh
+
+# 杩愯鍒嗗竷寮忚缁冪ず渚�
+# DEVICE_NUM: 浣跨敤鐨勬樉鍗℃暟閲忥紝濡�: 8
+# USED_DEVICES: 浣跨敤鐨勬樉鍗d鍒楄〃锛岄渶鍜屾樉鍗℃暟閲忓搴旓紝濡�: 0,1,2,3,4,5,6,7
+bash run_distribute_train_gpu.sh [DEVICE_NUM] [USED_DEVICES]
+
+# 杩愯璇勪及绀轰緥
+bash run_eval_gpu.sh
+```
+
+## Script Description
+
+### Script and Sample Code
 
 ```bash
 ├── model_zoo
     ├── EGNet
-        ├── README.md                     # EGNet documentation
-        ├── model_utils                   # config, modelarts, and other configuration scripts
-        │   ├──config.py                  # parses the parameter configuration file
+        ├── README_CN.md                    # EGNet README in Chinese
+        ├── model_utils                     # config, modelarts, and other configuration scripts
+        │   ├──config.py                    # parses the parameter configuration file
         ├── scripts
-        │   ├──run_train.sh               # launch standalone Ascend training (single device)
-        │   ├──run_distribute_train.sh    # launch distributed Ascend training (8 devices)
-        │   ├──run_eval.sh                # launch Ascend evaluation
+        │   ├──run_standalone_train.sh      # launch standalone Ascend training (single device)
+        │   ├──run_distribute_train.sh      # launch distributed Ascend training (8 devices)
+        │   ├──run_eval.sh                  # launch Ascend evaluation
+        │   ├──run_standalone_train_gpu.sh  # launch standalone GPU training (single device)
+        │   ├──run_distribute_train_gpu.sh  # launch distributed GPU training (multiple devices)
+        │   ├──run_eval_gpu.sh              # launch GPU evaluation
+        │   ├──dataset_preprocess.sh        # preprocess the datasets and generate lst files
+        │   ├──convert_model.sh             # convert pretrained models
         ├── src
-        │   ├──dataset.py                 # dataset loading
-        │   ├──egnet.py                   # EGNet network structure
-        │   ├──vgg.py                     # vgg network structure
-        │   ├──resnet.py                  # resnet network structure
-        │   ├──sal_edge_loss.py           # loss definition
-        │   ├──train_forward_backward.py  # forward and backward propagation definition
-        ├── sal2edge.py                   # preprocessing: convert saliency maps to edge maps
-        ├── data_crop.py                  # data cropping
-        ├── train.py                      # training script
-        ├── eval.py                       # evaluation script
-        ├── export.py                     # model export script
-        ├── default_config.yaml           # parameter configuration file
+        │   ├──dataset.py                   # dataset loading
+        │   ├──egnet.py                     # EGNet network structure
+        │   ├──vgg.py                       # vgg network structure (vgg16)
+        │   ├──resnet.py                    # resnet network structure (resnet50)
+        │   ├──sal_edge_loss.py             # loss definition
+        │   ├──train_forward_backward.py    # forward and backward propagation definition
+        ├── pretrained_model_convert        # convert PyTorch pretrained models to MindSpore
+        │   ├──pth_to_msp.py                # convert .pth files to .ckpt files
+        │   ├──resnet_msp.py                # resnet structure for the MindSpore pretrained model
+        │   ├──resnet_pth.py                # resnet structure for the PyTorch pretrained model
+        │   ├──vgg_msp.py                   # vgg structure for the MindSpore pretrained model
+        │   ├──vgg_pth.py                   # vgg structure for the PyTorch pretrained model
+        ├── sal2edge.py                     # preprocessing: convert saliency maps to edge maps
+        ├── data_crop.py                    # crop data and generate test.lst
+        ├── train.py                        # training script
+        ├── eval.py                         # evaluation script
+        ├── export.py                       # model export script
+        ├── default_config.yaml             # parameter configuration file
+        ├── requirements.txt                # additional dependencies
 ```
 
-## Script Parameters
+### Script Parameters

-Training and evaluation parameters can both be configured in config.py.
+Training and evaluation parameters can both be configured in default_config.yaml.

-- Configure EGNet and the DUTS-TR dataset.
+- Configure EGNet; some key parameters are listed here
 
 ```text
-dataset_name: "DUTS-TR"                      # dataset name
-name: "egnet"                                # network name
-pre_trained: True                            # whether to train from a pretrained model
-lr_init: 5e-5(resnet) or 2e-5(vgg)           # initial learning rate
-batch_size: 10                               # training batch size
-epoch_size: 30                               # total number of training epochs
-momentum: 0.1                                # momentum
-weight_decay: 5e-4                           # weight decay
-image_height: 200                            # height of images fed to the model
-image_width: 200                             # width of images fed to the model
-train_data_path: "./data/DUTS-TR/"           # relative path of the training dataset
-eval_data_path: "./data/SOD/"                # relative path of the evaluation dataset
-checkpoint_path: "./EGNet/run-nnet/models/"  # relative path for saving checkpoint files
+device_target: "Ascend"                                 # target device ["Ascend", "GPU"]
+base_model: "resnet"                                    # backbone network, ["vgg", "resnet"]
+batch_size: 1                                           # training batch size
+n_ave_grad: 10                                          # number of gradient accumulation steps
+epoch_size: 30                                          # total number of training epochs
+image_height: 200                                       # height of images fed to the model
+image_width: 200                                        # width of images fed to the model
+train_path: "./data/DUTS-TR/"                           # path of the training dataset
+test_path: "./data"                                     # root directory of the test datasets
+vgg: "/home/EGnet/EGnet/model/vgg16.ckpt"               # path of the vgg pretrained model
+resnet: "/home/EGnet/EGnet/model/resnet50.ckpt"         # path of the resnet pretrained model
+model: "EGNet/run-nnet/models/final_vgg_bone.ckpt"      # checkpoint file used for testing
 ```
 
-鏇村閰嶇疆缁嗚妭璇峰弬鑰� src/config.py銆�
+鏇村閰嶇疆缁嗚妭璇峰弬鑰� default_config.yaml銆�
 
-# 璁粌杩囩▼
+## 璁粌杩囩▼
 
-## 璁粌
-
-- 鏁版嵁闆嗚繘琛岃鍓細
-
-```bash
-python data_crop.py --data_name=[DATA_NAME]  --data_root=[DATA_ROOT] --output_path=[OUTPUT_PATH]
-```
+### 璁粌
 
 - Running on Ascend
 
 ```bash
-python train.py --mode=train --base_model=vgg --vgg=[PRETRAINED_PATH]
-python train.py --mode=train --base_model=resnet --resnet=[PRETRAINED_PATH]
+bash run_standalone_train.sh
 ```
 
 - Online training on ModelArts
@@ -159,10 +325,8 @@ online_train_path (storage path of the DUTS-TR training set in the OBS bucket)
 
 ```bash
 ├──DUTS-TR
-    ├──DUTS-TR-Image
+    ├──DUTS-TR-Image
     ├──DUTS-TR-Mask
-    ├──train_pair.lst
-    ├──train_pair_edge.lst
 ```
 
 online_pretrained_path (storage path of the pretrained models in the OBS bucket)
@@ -181,7 +345,13 @@ train_online = True (set for online training)

 After training, you can find the checkpoint files under the default ./EGNet/run-nnet/models/ folder.
 
-## Distributed Training
+- Running on GPU
+
+```bash
+bash run_standalone_train_gpu.sh
+```
+
+### Distributed Training
 
 - Running on Ascend
 
@@ -193,47 +363,55 @@ bash run_distribute_train.sh 8 [RANK_TABLE_FILE]
 
 - Online distributed training on ModelArts
 
-绾夸笂璁粌闇€瑕佺殑鍙傛暟閰嶇疆涓庡崟鍗¤缁冨熀鏈竴鑷达紝鍙渶瑕佹柊澧炲弬鏁癷s_distributed = True
+绾夸笂鍒嗗竷寮忚缁冮渶瑕佺殑鍙傛暟閰嶇疆涓庡崟鍗¤缁冨熀鏈竴鑷达紝鍙渶瑕佹柊澧炲弬鏁癷s_distributed = True
 
 涓婅堪shell鑴氭湰灏嗗湪鍚庡彴杩愯鍒嗗竷璁粌銆傛偍鍙互閫氳繃train/train.log鏂囦欢鏌ョ湅缁撴灉銆�
 
-# Evaluation Process
+- Running on GPU
+
+```bash
+# DEVICE_NUM: number of GPUs to use, e.g. 8
+# USED_DEVICES: list of GPU ids to use, must match DEVICE_NUM, e.g. 0,1,2,3,4,5,6,7
+bash run_distribute_train_gpu.sh [DEVICE_NUM] [USED_DEVICES]
+```
+
+## Evaluation Process

-## Evaluation
+### Evaluation

 - Running on Ascend
 
 ```bash
-python eval.py --model=[MODEL_PATH] --sal_mode=[DATA_NAME] --test_fold=[TEST_DATA_PATH] --base_model=vgg
-python eval.py --model=[MODEL_PATH] --sal_mode=[DATA_NAME] --test_fold=[TEST_DATA_PATH] --base_model=resnet
+bash run_eval.sh
 ```
 
-Dataset file structure
+- Running on GPU: set the model entry in default_config.yaml to the path of the checkpoint to be evaluated
+
+```text
+model: "EGNet/run-nnet/models/final_vgg_bone.ckpt"      # checkpoint file used for testing
+```
 
 ```bash
-├──NAME
-    ├──ground_truth_mask
-    ├──images
-    ├──test.lst
+bash run_eval_gpu.sh
 ```
 
-# Export Process
+## Export Process

-## Export
+### Export

-Before exporting, modify the ckpt_file entry in the default_config.yaml configuration file.
+Before exporting, set the ckpt_file entry in the default_config.yaml configuration file or pass the --ckpt_file argument.
 
 ```shell
 python export.py --ckpt_file=[CKPT_FILE]
 ```
 
-# Model Description
+## Model Description

-## Performance
+### Performance

-### Evaluation Performance
+#### Evaluation Performance

-#### EGNet on DUTS-TR
+##### EGNet on DUTS-TR (Ascend)

 | Parameters                 | Ascend                                                     | Ascend                    |
 | -------------------------- | ----------------------------------------------------------- | ---------------------- |
@@ -248,11 +426,28 @@ python export.py --ckpt_file=[CKPT_FILE]
 | Speed                      | 1p: 593.460 ms/step; 8p: 460.952 ms/step                         | 1p: 569.524 ms/step; 8p: 466.667 ms/step       |
 | Total time                 | 1p: 5h3m; 8p: 4h2m                         | 1p: 4h59m; 8p: 4h5m     |
 | Checkpoint for fine-tuning | 412M (.ckpt file)                                         | 426M (.ckpt file)    |
-| Scripts                    | [EGNet script]() | [EGNet script]() |
+| Scripts                    | [EGNet script](https://gitee.com/mindspore/models/tree/master/research/cv/EGnet) | [EGNet script](https://gitee.com/mindspore/models/tree/master/research/cv/EGnet) |
+
+##### EGNet on DUTS-TR (GPU)
+
+| Parameters                 | GPU                                                      | GPU                    |
+| -------------------------- | ----------------------------------------------------------- | ---------------------- |
+| Model version              | EGNet (VGG)                                                | EGNet (resnet)           |
+| Resource                   | GeForce RTX 2080 Ti (single device); V100 (multiple devices)             | GeForce RTX 2080 Ti (single device); V100 (multiple devices)               |
+| Upload date                | 2021-12-02                                 | 2021-12-02 |
+| MindSpore version          | 1.3.0                                                       | 1.3.0                  |
+| Dataset                    | DUTS-TR                                                    | DUTS-TR               |
+| Training parameters        | epoch=30, steps=1050, batch_size=10, lr=2e-5              | epoch=30, steps=1050, batch_size=10, lr=5e-5    |
+| Optimizer                  | Adam                                                    | Adam               |
+| Loss function              | Binary cross entropy                                       | Binary cross entropy  |
+| Speed                      | 1p: 1148.571 ms/step; 2p: 921.905 ms/step                          | 1p: 1323.810 ms/step; 2p: 1057.143 ms/step      |
+| Total time                 | 1p: 10h3m; 2p: 8h4m                          | 1p: 11h35m; 2p: 9h15m      |
+| Checkpoint for fine-tuning | 412M (.ckpt file)                                         | 426M (.ckpt file)    |
+| Scripts                    | [EGNet script](https://gitee.com/mindspore/models/tree/master/research/cv/EGnet) | [EGNet script](https://gitee.com/mindspore/models/tree/master/research/cv/EGnet) |
 
-### Inference Performance
+#### Inference Performance

-#### EGNet on Saliency Detection Datasets
+##### EGNet on Saliency Detection Datasets (Ascend)

 | Parameters          | Ascend                      | Ascend                         |
 | ------------------- | --------------------------- | --------------------------- |
@@ -261,21 +456,48 @@ python export.py --ckpt_file=[CKPT_FILE]
 | Upload date         | 2021-12-25 | 2021-12-25 |
 | MindSpore version   | 1.3.0                       | 1.3.0                       |
 | Dataset             | SOD, 300 images     | SOD, 300 images     |
-| Metrics (single device)            | MaxF:0.8659637 ; MAE:0.1540910 ; S:0.7317967 | MaxF:0.8763882 ; MAE:0.1453154 ; S:0.7388669  |
-| Metrics (multiple devices)            | MaxF:0.8667928 ; MAE:0.1532886 ; S:0.7360025 | MaxF:0.8798361 ; MAE:0.1448086 ; S:0.74030272  |
+| Metrics (single device)            | MaxF:0.865 ; MAE:0.154 ; S:0.731 | MaxF:0.876 ; MAE:0.145 ; S:0.738  |
+| Metrics (multiple devices)            | MaxF:0.866 ; MAE:0.153 ; S:0.736 | MaxF:0.879 ; MAE:0.144 ; S:0.740  |
+| Dataset             | ECSSD, 1000 images     | ECSSD, 1000 images     |
+| Metrics (single device)            | MaxF:0.936 ; MAE:0.074 ; S:0.863 | MaxF:0.947 ; MAE:0.064 ; S:0.876  |
+| Metrics (multiple devices)            | MaxF:0.935 ; MAE:0.080 ; S:0.859 | MaxF:0.945 ; MAE:0.068 ; S:0.873  |
+| Dataset             | PASCAL-S, 850 images     | PASCAL-S, 850 images     |
+| Metrics (single device)            | MaxF:0.877 ; MAE:0.118 ; S:0.765 | MaxF:0.886 ; MAE:0.106 ; S:0.779  |
+| Metrics (multiple devices)            | MaxF:0.878 ; MAE:0.119 ; S:0.765 | MaxF:0.888 ; MAE:0.108 ; S:0.778  |
+| Dataset             | DUTS-OMRON, 5168 images     | DUTS-OMRON, 5168 images     |
+| Metrics (single device)            | MaxF:0.782 ; MAE:0.142 ; S:0.752 | MaxF:0.799 ; MAE:0.133 ; S:0.767  |
+| Metrics (multiple devices)            | MaxF:0.781 ; MAE:0.145 ; S:0.749 | MaxF:0.799 ; MAE:0.133 ; S:0.764  |
+| Dataset             | HKU-IS, 4447 images     | HKU-IS, 4447 images     |
+| Metrics (single device)            | MaxF:0.919 ; MAE:0.073 ; S:0.867 | MaxF:0.929 ; MAE:0.063 ; S:0.881  |
+| Metrics (multiple devices)            | MaxF:0.914 ; MAE:0.079 ; S:0.860 | MaxF:0.925 ; MAE:0.068 ; S:0.876  |
+
+##### EGNet on Saliency Detection Datasets (GPU)
+
+| Parameters          | GPU                      | GPU                         |
+| ------------------- | --------------------------- | --------------------------- |
+| Model version       | EGNet (VGG)                | EGNet (resnet)               |
+| Resource            |  GeForce RTX 2080 Ti                  | GeForce RTX 2080 Ti                          |
+| Upload date         | 2021-12-02 | 2021-12-02 |
+| MindSpore version   | 1.3.0                       | 1.3.0                       |
+| Dataset             | DUTS-TE, 5019 images     | DUTS-TE, 5019 images     |
+| Metrics (single device)            | MaxF:0.852 ; MAE:0.094 ; S:0.819 | MaxF:0.862 ; MAE:0.089 ; S:0.829  |
+| Metrics (multiple devices)            | MaxF:0.853 ; MAE:0.098 ; S:0.816 | MaxF:0.862 ; MAE:0.095 ; S:0.825  |
+| Dataset             | SOD, 300 images     | SOD, 300 images     |
+| Metrics (single device)            | MaxF:0.877 ; MAE:0.149 ; S:0.739 | MaxF:0.876 ; MAE:0.150 ; S:0.732  |
+| Metrics (multiple devices)            | MaxF:0.876 ; MAE:0.158 ; S:0.734 | MaxF:0.874 ; MAE:0.153 ; S:0.736  |
 | Dataset             | ECSSD, 1000 images     | ECSSD, 1000 images     |
-| Metrics (single device)            | MaxF:0.9365406 ; MAE:0.0744784 ; S:0.8639620 | MaxF:0.9477927 ; MAE:0.0649923 ; S:0.8765208  |
-| Metrics (multiple devices)            | MaxF:0.9356243 ; MAE:0.0805953 ; S:0.8595030 | MaxF:0.9457578 ; MAE:0.0684581 ; S:0.8732929  |
+| Metrics (single device)            | MaxF:0.940 ; MAE:0.069 ; S:0.868 | MaxF:0.947 ; MAE:0.064 ; S:0.876  |
+| Metrics (multiple devices)            | MaxF:0.938 ; MAE:0.079 ; S:0.863 | MaxF:0.947 ; MAE:0.066 ; S:0.878  |
 | Dataset             | PASCAL-S, 850 images     | PASCAL-S, 850 images     |
-| Metrics (single device)            | MaxF:0.8777129 ; MAE:0.1188116 ; S:0.7653073 | MaxF:0.8861882 ; MAE:0.1061731 ; S:0.7792912  |
-| Metrics (multiple devices)            | MaxF:0.8787268 ; MAE:0.1192975 ; S:0.7657838 | MaxF:0.8883396 ; MAE:0.1081997 ; S:0.7786236  |
+| Metrics (single device)            | MaxF:0.881 ; MAE:0.110 ; S:0.771 | MaxF:0.879 ; MAE:0.112 ; S:0.772  |
+| Metrics (multiple devices)            | MaxF:0.883 ; MAE:0.116 ; S:0.772 | MaxF:0.882 ; MAE:0.115 ; S:0.774  |
 | Dataset             | DUTS-OMRON, 5168 images     | DUTS-OMRON, 5168 images     |
-| Metrics (single device)            | MaxF:0.7821059 ; MAE:0.1424146 ; S:0.7529001 | MaxF:0.7999835 ; MAE:0.1330678 ; S:0.7671095  |
-| Metrics (multiple devices)            | MaxF:0.7815770 ; MAE:0.1455649 ; S:0.7493499 | MaxF:0.7997979 ; MAE:0.1339806 ; S:0.7646356  |
+| Metrics (single device)            | MaxF:0.787 ; MAE:0.139 ; S:0.754 | MaxF:0.799 ; MAE:0.139 ; S:0.761  |
+| Metrics (multiple devices)            | MaxF:0.789 ; MAE:0.144 ; S:0.753 | MaxF:0.800 ; MAE:0.143 ; S:0.762  |
 | Dataset             | HKU-IS, 4447 images     | HKU-IS, 4447 images     |
-| Metrics (single device)            | MaxF:0.9193007 ; MAE:0.0732772 ; S:0.8674455 | MaxF:0.9299341 ; MAE:0.0631132 ; S:0.8817522  |
-| Metrics (multiple devices)            | MaxF:0.9145629 ; MAE:0.0793372 ; S:0.8608878 | MaxF:0.9254014 ; MAE:0.0685441 ; S:0.8762386  |
+| Metrics (single device)            | MaxF:0.923 ; MAE:0.067 ; S:0.873 | MaxF:0.928 ; MAE:0.063 ; S:0.878  |
+| Metrics (multiple devices)            | MaxF:0.921 ; MAE:0.074 ; S:0.868 | MaxF:0.928 ; MAE:0.067 ; S:0.878  |
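For readers unfamiliar with these metrics: MAE is the mean absolute error between the predicted saliency map and the ground truth, MaxF is the maximum F-measure over binarization thresholds (the Metric class in eval.py initializes beta = 0.3, the usual salient object detection convention), and S is the structure measure. A minimal MAE sketch over flat, [0, 1]-scaled maps (an illustration, not the repository's implementation):

```python
def mae(pred, gt):
    """Mean absolute error between two equally sized saliency maps in [0, 1]."""
    assert len(pred) == len(gt) and pred, "maps must be non-empty and equal-sized"
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)
```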
 
-# ModelZoo Homepage
+## ModelZoo Homepage

 Please visit the official [homepage](https://gitee.com/mindspore/models).
diff --git a/research/cv/EGnet/data_crop.py b/research/cv/EGnet/data_crop.py
index b9fd38c8255b9f74afc16d3575682be787b80f3e..cadca8667ada8ada905c3c11b2d06c472f13e17e 100644
--- a/research/cv/EGnet/data_crop.py
+++ b/research/cv/EGnet/data_crop.py
@@ -1,4 +1,4 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
+# Copyright 2022 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -21,6 +21,7 @@ from concurrent import futures
 import cv2
 import pandas as pd
 
+
 def crop_one(input_img_path, output_img_path):
     """
     center crop one image
@@ -43,7 +44,7 @@ def crop(data_root, output_path):
     crop all images with thread pool
     """
     if not os.path.exists(data_root):
-        raise FileNotFoundError("data root not exist")
+        raise FileNotFoundError("data root not exist: " + data_root)
     if not os.path.exists(output_path):
         os.makedirs(output_path)
 
@@ -58,6 +59,7 @@ def crop(data_root, output_path):
         futures.wait(all_task)
     print("all done!")
 
+
 def save(data_root, output_path):
     file_list = []
     for path in os.listdir(data_root):
@@ -66,6 +68,7 @@ def save(data_root, output_path):
     df = pd.DataFrame(file_list, columns=["one"])
     df.to_csv(os.path.join(output_path, "test.lst"), columns=["one"], index=False, header=False)
 
+
 if __name__ == "__main__":
     parser = argparse.ArgumentParser(description="Crop Image to 200*200")
     parser.add_argument("--data_name", type=str, help="dataset name", required=True,
@@ -76,13 +79,13 @@ if __name__ == "__main__":
                         default="/home/data")
     args = parser.parse_known_args()[0]
     if args.data_name == "DUTS-TE":
-        Mask = "DUTS-TE-MASK"
+        Mask = "DUTS-TE-Mask"
         Image = "DUTS-TE-Image"
     elif args.data_name == "DUTS-TR":
         Mask = "DUTS-TR-Mask"
         Image = "DUTS-TR-Image"
     else:
-        Mask = "groud_truth_mask"
+        Mask = "ground_truth_mask"
         Image = "images"
     crop(os.path.join(args.data_root, args.data_name, Mask),
          os.path.join(args.output_path, args.data_name, Mask))
diff --git a/research/cv/EGnet/default_config.yaml b/research/cv/EGnet/default_config.yaml
index 93b11163e6bf9a5b5949a0b6f30a6a7a91365a2b..bc48ed66b6796f6b1f2cf57ba6f370f69dafc7a4 100644
--- a/research/cv/EGnet/default_config.yaml
+++ b/research/cv/EGnet/default_config.yaml
@@ -1,50 +1,54 @@
 # ==============================================================================
-# Hyper-parameters
-n_color: 3
-device_target: "Ascend"
+# basic parameters
+n_color: 3                                                #  color channels of input images
+device_target: "Ascend"                                   #  device to run the model ["Ascend", "GPU"]
 
 # Dataset settings
-train_path: "data/DUTS-TR"
-test_path: "data"
+train_path: "/home/data2/egnet/DUTS-TR-10498"             #  training dataset dir;
+test_path: "/home/data2/egnet/data200"                    #  testing dataset root;
 
 # Training settings
-train_online: False
-online_train_path: ""
-online_pretrained_path: ""
-train_url: ""
-is_distributed: False
-base_model: "vgg" # ['resnet','vgg']
-pretrained_url: "pretrained"
-vgg: "pretrained/vgg_pretrained.ckpt"
-resnet: "pretrained/resnet_pretrained.ckpt"
-epoch: 30
+base_model: "resnet"                                      #  backbone network ["resnet", "vgg"], used for training and evaluation
+vgg: "/home/EGNet/EGNet/model/vgg16.ckpt"                 #  path of the VGG pretrained model
+resnet: "/home/EGNet/EGNet/model/resnet50.ckpt"           #  path of the ResNet pretrained model
+is_distributed: False                                     #  set distributed training
+epoch: 30                                                 #  epoch
 batch_size: 1
-num_thread: 4
-save_fold: "EGNet"
+n_ave_grad: 10                                            #  step size for gradient accumulation.
+num_thread: 4                                             #  thread num for dataset
+save_fold: "EGNet"                                        #  root directory for training information
 train_save_name: "nnet"
 epoch_save: 1
 epoch_show: 1
-pre_trained: ""
-start_epoch: 1
-n_ave_grad: 10
 show_every: 10
 save_tmp: 200
 loss_scale: 1
 
+# Training with checkpoint
+pre_trained: ""                                            # checkpoint file 
+start_epoch: 1                                             # start epoch for training
+
+
 # Testing settings
-eval_online: False
-online_eval_path: ""
-online_ckpt_path: ""
-model: "EGNet/run-nnet/models/final_vgg_bone.ckpt"
+model: "/home/data3/egnet_models/resnet/msp/final_bone_1128_1.ckpt"         #  model for evaluation
 test_fold: "result"
 test_save_name: "EGNet_"
 test_mode: 1
-sal_mode: "t" # ['e','t','d','h','p','s']
-test_batch_size: 1
+sal_mode: "t" # ['e','t','d','h','p','s']                  #  which dataset to evaluate
+test_batch_size: 1                                         # test batch, do not edit now!
 
-# Misc
-mode: "train" # ['train','test']
-visdom: False
+
+# Online training setting
+train_online: False
+online_train_path: ""
+online_pretrained_path: ""
+train_url: ""                                                       
+pretrained_url: "pretrained"                              #  used when train and eval;
+
+# Online testing setting
+eval_online: False
+online_eval_path: ""
+online_ckpt_path: ""
 
 # Export settings
 file_name: "EGNet"
diff --git a/research/cv/EGnet/eval.py b/research/cv/EGnet/eval.py
index ed29b6b147fa09c910ac6d7ff348751207ec7e13..8ed91cfda442bcb1658f2d7c6fb84acd10aed9a3 100644
--- a/research/cv/EGnet/eval.py
+++ b/research/cv/EGnet/eval.py
@@ -1,4 +1,4 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
+# Copyright 2022 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -26,6 +26,7 @@ from model_utils.config import base_config
 from src.dataset import create_dataset
 from src.egnet import build_model
 
+
 def main(config):
     if config.eval_online:
         import moxing as mox
@@ -43,8 +44,8 @@ def main(config):
         elif config.sal_mode == "e":
             Evalname = "ECSSD"
         config.test_path = os.path.join("/cache", config.test_path)
-        local_data_url = os.path.join(config.test_path, "%s"%(Evalname))
-        local_list_eval = os.path.join(config.test_path, "%s/test.lst"%(Evalname))
+        local_data_url = os.path.join(config.test_path, "%s" % (Evalname))
+        local_list_eval = os.path.join(config.test_path, "%s/test.lst" % (Evalname))
         mox.file.copy_parallel(config.online_eval_path, local_data_url)
         mox.file.copy_parallel(os.path.join(config.online_eval_path, "test.lst"), local_list_eval)
         ckpt_path = os.path.join("/cache", os.path.dirname(config.model))
@@ -64,6 +65,7 @@ class Metric:
     """
     for metric
     """
+
     def __init__(self):
         self.epsilon = 1e-4
         self.beta = 0.3
diff --git a/research/cv/EGnet/pretrained_model_convert/pth_to_msp.py b/research/cv/EGnet/pretrained_model_convert/pth_to_msp.py
new file mode 100644
index 0000000000000000000000000000000000000000..32bd77a90653aea82ac1fe1b5c9ff0fe0edd3458
--- /dev/null
+++ b/research/cv/EGnet/pretrained_model_convert/pth_to_msp.py
@@ -0,0 +1,83 @@
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+import os.path
+import argparse
+
+import vgg_pth
+import resnet_pth
+import torch.nn
+
+import vgg_msp
+import resnet_msp
+import mindspore.nn
+
+
+def convert_vgg(pretrained_file, result):
+    vgg16_pth = vgg_pth.vgg16()
+    if torch.cuda.is_available():
+        vgg16_pth.load_state_dict(torch.load(pretrained_file))
+    else:
+        vgg16_pth.load_state_dict(torch.load(pretrained_file, map_location=torch.device("cpu")))
+    vgg16_msp = vgg_msp.vgg16()
+    for p_pth, p_msp in zip(vgg16_pth.parameters(), vgg16_msp.get_parameters()):
+        p_msp.set_data(mindspore.Tensor(p_pth.detach().numpy()))
+    mindspore.save_checkpoint(vgg16_msp, result)
+
+
+def convert_resnet(pretrained_file, result):
+    resnet50_pth = resnet_pth.resnet50()
+    resnet50_msp = resnet_msp.resnet50()
+    if torch.cuda.is_available():
+        resnet50_pth.load_state_dict(torch.load(pretrained_file), strict=False)
+    else:
+        resnet50_pth.load_state_dict(torch.load(pretrained_file, map_location=torch.device("cpu")), strict=False)
+
+    p_pth_list = list()
+    for p_pth in resnet50_pth.parameters():
+        p_pth_list.append(p_pth.cpu().detach().numpy())
+
+    bn_list = list()
+    for m in resnet50_pth.modules():
+        if isinstance(m, torch.nn.BatchNorm2d):
+            bn_list.append(m.running_mean.cpu().numpy())
+            bn_list.append(m.running_var.cpu().numpy())
+    p_index = 0
+    bn_index = 0
+    for n_msp, p_msp in resnet50_msp.parameters_and_names():
+        if "moving_" not in n_msp:
+            p_msp.set_data(mindspore.Tensor(p_pth_list[p_index]))
+            p_index += 1
+        else:
+            p_msp.set_data(mindspore.Tensor(bn_list[bn_index]))
+            bn_index += 1
+    mindspore.save_checkpoint(resnet50_msp, result)
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--model", choices=["vgg", "resnet"], type=str)
+    parser.add_argument("--pth_file", type=str, default="vgg16_20M.pth", help="input pth file")
+    parser.add_argument("--msp_file", type=str, default="vgg16_pretrained.ckpt", help="output msp file")
+    args = parser.parse_args()
+    if not os.path.exists(args.pth_file):
+        raise FileNotFoundError(args.pth_file)
+    if args.model == "vgg":
+        convert_vgg(args.pth_file, args.msp_file)
+    elif args.model == "resnet":
+        convert_resnet(args.pth_file, args.msp_file)
+    else:
+        print("unknown model")
+    print("success")
diff --git a/research/cv/EGnet/pretrained_model_convert/resnet_msp.py b/research/cv/EGnet/pretrained_model_convert/resnet_msp.py
new file mode 100644
index 0000000000000000000000000000000000000000..ce90d8951e3d8fcf60b594ecfcca71ab2f9240d4
--- /dev/null
+++ b/research/cv/EGnet/pretrained_model_convert/resnet_msp.py
@@ -0,0 +1,163 @@
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Resnet model define"""
+
+import mindspore.nn as nn
+from mindspore import load_checkpoint
+
+affine_par = True
+
+
+def conv3x3(in_planes, out_planes, stride=1):
+    return nn.Conv2d(in_planes, out_planes, kernel_size=3, padding="same", stride=stride, has_bias=False)
+
+
+class Bottleneck(nn.Cell):
+    """
+    Bottleneck layer
+    """
+    expansion = 4
+
+    def __init__(self, in_planes, planes, stride=1, dilation_=1, downsample=None):
+        super(Bottleneck, self).__init__()
+        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride, has_bias=False)
+        self.bn1 = nn.BatchNorm2d(planes, affine=affine_par, use_batch_statistics=False)
+        for i in self.bn1.get_parameters():
+            i.requires_grad = False
+        padding = 1
+        if dilation_ == 2:
+            padding = 2
+        elif dilation_ == 4:
+            padding = 4
+        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=padding, pad_mode="pad", has_bias=False,
+                               dilation=dilation_)
+
+        self.bn2 = nn.BatchNorm2d(planes, affine=affine_par, use_batch_statistics=False)
+        for i in self.bn2.get_parameters():
+            i.requires_grad = False
+        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, has_bias=False)
+        self.bn3 = nn.BatchNorm2d(planes * 4, affine=affine_par, use_batch_statistics=False)
+        for i in self.bn3.get_parameters():
+            i.requires_grad = False
+        self.relu = nn.ReLU()
+        self.downsample = downsample
+        self.stride = stride
+
+    def construct(self, x):
+        """
+        forward
+        """
+        residual = x
+
+        out = self.conv1(x)
+        out = self.bn1(out)
+        out = self.relu(out)
+
+        out = self.conv2(out)
+        out = self.bn2(out)
+        out = self.relu(out)
+
+        out = self.conv3(out)
+        out = self.bn3(out)
+
+        if self.downsample is not None:
+            residual = self.downsample(x)
+
+        out += residual
+        out = self.relu(out)
+
+        return out
+
+
+class ResNet(nn.Cell):
+    """
+    resnet
+    """
+
+    def __init__(self, block, layers):
+        self.in_planes = 64
+        super(ResNet, self).__init__()
+        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, pad_mode="pad",
+                               has_bias=False)
+        self.bn1 = nn.BatchNorm2d(64, affine=affine_par, use_batch_statistics=False)
+        for i in self.bn1.get_parameters():
+            i.requires_grad = False
+        self.relu = nn.ReLU()
+        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode="same")  # change
+        self.layer1 = self._make_layer(block, 64, layers[0])
+        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
+        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
+        self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=2)
+
+    def _make_layer(self, block, planes, blocks, stride=1, dilation=1):
+        """
+        make layer
+        """
+        downsample = None
+        if stride != 1 or self.in_planes != planes * block.expansion or dilation == 2 or dilation == 4:
+            downsample = nn.SequentialCell(
+                nn.Conv2d(self.in_planes, planes * block.expansion,
+                          kernel_size=1, stride=stride, has_bias=False),
+                nn.BatchNorm2d(planes * block.expansion, affine=affine_par, use_batch_statistics=False),
+            )
+            for i in downsample[1].get_parameters():
+                i.requires_grad = False
+        layers = [block(self.in_planes, planes, stride, dilation_=dilation, downsample=downsample)]
+        self.in_planes = planes * block.expansion
+        for i in range(1, blocks):
+            layers.append(block(self.in_planes, planes, dilation_=dilation))
+
+        return nn.SequentialCell(*layers)
+
+    def load_pretrained_model(self, model_file):
+        """
+        load pretrained model
+        """
+        load_checkpoint(model_file, net=self)
+
+    def construct(self, x):
+        """
+        forward
+        """
+        tmp_x = []
+        x = self.conv1(x)
+        x = self.bn1(x)
+        x = self.relu(x)
+        tmp_x.append(x)
+        x = self.maxpool(x)
+
+        x = self.layer1(x)
+        tmp_x.append(x)
+        x = self.layer2(x)
+        tmp_x.append(x)
+        x = self.layer3(x)
+        tmp_x.append(x)
+        x = self.layer4(x)
+        tmp_x.append(x)
+
+        return tmp_x
+
+
+# adding prefix "base" to parameter names for load_checkpoint().
+class Tmp(nn.Cell):
+    def __init__(self, base):
+        super(Tmp, self).__init__()
+        self.base = base
+
+
+def resnet50():
+    base = ResNet(Bottleneck, [3, 4, 6, 3])
+    return Tmp(base)
diff --git a/research/cv/EGnet/pretrained_model_convert/resnet_pth.py b/research/cv/EGnet/pretrained_model_convert/resnet_pth.py
new file mode 100644
index 0000000000000000000000000000000000000000..c94ca4edfaa1c209b38d488ffadb409d7037b379
--- /dev/null
+++ b/research/cv/EGnet/pretrained_model_convert/resnet_pth.py
@@ -0,0 +1,140 @@
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+
+import torch.nn as nn
+
+affine_par = True
+
+
+def conv3x3(in_planes, out_planes, stride=1):
+    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
+                     padding=1, bias=False)
+
+
+class Bottleneck(nn.Module):
+    expansion = 4
+
+    def __init__(self, inplanes, planes, stride=1, dilation_=1, downsample=None):
+        super(Bottleneck, self).__init__()
+        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, stride=stride, bias=False)  # change
+        self.bn1 = nn.BatchNorm2d(planes, affine=affine_par)
+        for i in self.bn1.parameters():
+            i.requires_grad = False
+        padding = 1
+        if dilation_ == 2:
+            padding = 2
+        elif dilation_ == 4:
+            padding = 4
+        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1,  # change
+                               padding=padding, bias=False, dilation=dilation_)
+        self.bn2 = nn.BatchNorm2d(planes, affine=affine_par)
+        for i in self.bn2.parameters():
+            i.requires_grad = False
+        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
+        self.bn3 = nn.BatchNorm2d(planes * 4, affine=affine_par)
+        for i in self.bn3.parameters():
+            i.requires_grad = False
+        self.relu = nn.ReLU(inplace=True)
+        self.downsample = downsample
+        self.stride = stride
+
+    def forward(self, x):
+        residual = x
+
+        out = self.conv1(x)
+        out = self.bn1(out)
+        out = self.relu(out)
+
+        out = self.conv2(out)
+        out = self.bn2(out)
+        out = self.relu(out)
+
+        out = self.conv3(out)
+        out = self.bn3(out)
+
+        if self.downsample is not None:
+            residual = self.downsample(x)
+
+        out += residual
+        out = self.relu(out)
+
+        return out
+
+
+class ResNet(nn.Module):
+    def __init__(self, block, layers):
+        self.inplanes = 64
+        super(ResNet, self).__init__()
+        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
+                               bias=False)
+        self.bn1 = nn.BatchNorm2d(64, affine=affine_par)
+        for i in self.bn1.parameters():
+            i.requires_grad = False
+        self.relu = nn.ReLU(inplace=True)
+        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
+        self.layer1 = self._make_layer(block, 64, layers[0])
+        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
+        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
+        self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation__=2)
+
+        for m in self.modules():
+            if isinstance(m, nn.Conv2d):
+                m.weight.data.normal_(0, 0.01)
+            elif isinstance(m, nn.BatchNorm2d):
+                m.weight.data.fill_(1)
+                m.bias.data.zero_()
+
+    def _make_layer(self, block, planes, blocks, stride=1, dilation__=1):
+        downsample = None
+        if stride != 1 or self.inplanes != planes * block.expansion or dilation__ == 2 or dilation__ == 4:
+            downsample = nn.Sequential(
+                nn.Conv2d(self.inplanes, planes * block.expansion,
+                          kernel_size=1, stride=stride, bias=False),
+                nn.BatchNorm2d(planes * block.expansion, affine=affine_par),
+            )
+            for i in downsample[1].parameters():
+                i.requires_grad = False
+        layers = []
+        layers.append(block(self.inplanes, planes, stride, dilation_=dilation__, downsample=downsample))
+        self.inplanes = planes * block.expansion
+        for i in range(1, blocks):
+            layers.append(block(self.inplanes, planes, dilation_=dilation__))
+
+        return nn.Sequential(*layers)
+
+    def forward(self, x):
+        tmp_x = []
+        x = self.conv1(x)
+        x = self.bn1(x)
+        x = self.relu(x)
+        tmp_x.append(x)
+        x = self.maxpool(x)
+
+        x = self.layer1(x)
+        tmp_x.append(x)
+        x = self.layer2(x)
+        tmp_x.append(x)
+        x = self.layer3(x)
+        tmp_x.append(x)
+        x = self.layer4(x)
+        tmp_x.append(x)
+
+        return tmp_x
+
+
+def resnet50():
+    model = ResNet(Bottleneck, [3, 4, 6, 3])
+    return model
diff --git a/research/cv/EGnet/pretrained_model_convert/vgg_msp.py b/research/cv/EGnet/pretrained_model_convert/vgg_msp.py
new file mode 100644
index 0000000000000000000000000000000000000000..647538a553603d71808bba048697f323532adbf6
--- /dev/null
+++ b/research/cv/EGnet/pretrained_model_convert/vgg_msp.py
@@ -0,0 +1,53 @@
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+
+import mindspore.nn as nn
+import mindspore
+
+
+def vgg(cfg, i, batch_norm=False):
+    """Make stage network of VGG."""
+    layers = []
+    in_channels = i
+    stage = 1
+    pad = nn.Pad(((0, 0), (0, 0), (1, 1), (1, 1))).to_float(mindspore.dtype.float32)
+    for v in cfg:
+        if v == "M":
+            stage += 1
+            layers += [pad, nn.MaxPool2d(kernel_size=3, stride=2, pad_mode="valid")]
+        else:
+            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, pad_mode="pad", padding=1, has_bias=True)
+            if batch_norm:
+                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU()]
+            else:
+                layers += [conv2d, nn.ReLU()]
+            in_channels = v
+    return layers
+
+
+# adding prefix "base" to parameter names for load_checkpoint().
+class Tmp(nn.Cell):
+    def __init__(self, base):
+        super(Tmp, self).__init__()
+        self.base = base
+
+
+def vgg16():
+    cfg = {'tun': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
+           'tun_ex': [512, 512, 512]}
+    base = nn.CellList(vgg(cfg['tun'], 3))
+    base = Tmp(base)
+    return Tmp(base)
diff --git a/research/cv/EGnet/pretrained_model_convert/vgg_pth.py b/research/cv/EGnet/pretrained_model_convert/vgg_pth.py
new file mode 100644
index 0000000000000000000000000000000000000000..f383b6a23459544f3dfb3b7099063bb603bab82c
--- /dev/null
+++ b/research/cv/EGnet/pretrained_model_convert/vgg_pth.py
@@ -0,0 +1,45 @@
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+
+import torch
+import torch.nn as nn
+
+
+def vgg(cfg, i, batch_norm=False):
+    layers = []
+    in_channels = i
+    stage = 1
+    for v in cfg:
+        if v == 'M':
+            stage += 1
+            # both branches of the original stage-6 special case added the
+            # same pooling layer (pooling has no weights to convert), so a
+            # single statement suffices
+            layers += [nn.MaxPool2d(kernel_size=3, stride=2, padding=1)]
+        else:
+            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
+            if batch_norm:
+                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
+            else:
+                layers += [conv2d, nn.ReLU(inplace=True)]
+            in_channels = v
+    return layers
+
+
+def vgg16():
+    cfg = {'tun': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
+           'tun_ex': [512, 512, 512]}
+    return torch.nn.ModuleList(vgg(cfg['tun'], 3))
diff --git a/research/cv/EGnet/requirements.txt b/research/cv/EGnet/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..446b9c1be46aa41f10d969d84a1bec110bc6488a
--- /dev/null
+++ b/research/cv/EGnet/requirements.txt
@@ -0,0 +1,5 @@
+opencv-python
+pyyaml
+torch
+pandas
+Pillow
diff --git a/research/cv/EGnet/sal2edge.py b/research/cv/EGnet/sal2edge.py
index d50ad86d785989612ab2b639f82e6294f41981b3..f49c9b482fe272bdbec3945e67174dc2904d6dad 100644
--- a/research/cv/EGnet/sal2edge.py
+++ b/research/cv/EGnet/sal2edge.py
@@ -1,4 +1,4 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
+# Copyright 2022 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -27,8 +27,9 @@ def sal2edge_one(image_file, output_file):
     process one image
     """
     if not os.path.exists(image_file):
+        print("file not exist:", image_file)
         return
-    image = cv2.imread(image_file, cv2.IMREAD_UNCHANGED)
+    image = cv2.imread(image_file, cv2.IMREAD_GRAYSCALE)
     b_image = image > 128
     b_image = b_image.astype(np.float64)
     dx, dy = np.gradient(b_image)
@@ -52,16 +53,19 @@ def sal2edge(data_root, output_path, image_list_file):
         return
     image_list = np.loadtxt(image_list_file, str)
     file_list = []
-    ext = image_list[0][1][-4:]
+    ext = ".png"
     for image in image_list:
-        file_list.append(image[1][:-4])
+        file_list.append(image[:-4])
+    pair_file = open(os.path.join(data_root, "..", "train_pair_edge.lst"), "w")
     with futures.ThreadPoolExecutor(max_workers=os.cpu_count()) as tp:
         all_task = []
         for file in file_list:
             img_path = os.path.join(data_root, file + ext)
             result_path = os.path.join(output_path, file + "_edge" + ext)
             all_task.append(tp.submit(sal2edge_one, img_path, result_path))
+            pair_file.write(f"DUTS-TR-Image/{file}.jpg DUTS-TR-Mask/{file}.png DUTS-TR-Mask/{file}_edge.png\n")
         futures.wait(all_task)
+    pair_file.close()
     print("all done!")
 
 
diff --git a/research/cv/EGnet/scripts/convert_model.sh b/research/cv/EGnet/scripts/convert_model.sh
new file mode 100644
index 0000000000000000000000000000000000000000..85717f033293bf477c8e4406caab73d55fad6e1f
--- /dev/null
+++ b/research/cv/EGnet/scripts/convert_model.sh
@@ -0,0 +1,48 @@
+#!/bin/bash
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+# Print usage information if the number of arguments is not as required
+
+if [ $# != 3 ]
+then 
+    echo "=============================================================================================================="
+    echo "Please run the script as: "
+    echo "bash convert_model.sh [MODEL_NAME] [PTH_FILE] [MSP_FILE]"
+    echo "for example: bash convert_model.sh vgg ./weights/vgg16.pth ./weights/vgg16.ckpt"
+    echo "================================================================================================================="
+exit 1
+fi
+
+# Get absolute path
+get_real_path(){
+  if [ "${1:0:1}" == "/" ]; then
+    echo "$1"
+  else
+    echo "$(realpath -m $PWD/$1)"
+  fi
+}
+
+# Get current script path
+BASE_PATH=$(cd "`dirname $0`" || exit; pwd)
+MODEL_NAME=$1
+PTH_FILE=$(get_real_path $2)
+MSP_FILE=$(get_real_path $3)
+
+cd $BASE_PATH/..
+python pretrained_model_convert/pth_to_msp.py \
+    --model=$MODEL_NAME \
+    --pth_file="$PTH_FILE" \
+    --msp_file="$MSP_FILE"
diff --git a/research/cv/EGnet/scripts/dataset_preprocess.sh b/research/cv/EGnet/scripts/dataset_preprocess.sh
new file mode 100644
index 0000000000000000000000000000000000000000..55e7460b36c8f17e079662962a3139a0dfddef51
--- /dev/null
+++ b/research/cv/EGnet/scripts/dataset_preprocess.sh
@@ -0,0 +1,86 @@
+#!/bin/bash
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+# Print usage information if the number of arguments is not as required
+if [ $# != 2 ]
+then 
+    echo "=============================================================================================================="
+    echo "Please run the script as: "
+    echo "bash dataset_preprocess.sh [DATA_ROOT] [OUTPUT_ROOT]"
+    echo "for example: bash dataset_preprocess.sh /data/ ./data_crop/"
+    echo "================================================================================================================="
+exit 1
+fi
+
+# Get absolute path
+get_real_path(){
+  if [ "${1:0:1}" == "/" ]; then
+    echo "$1"
+  else
+    echo "$(realpath -m $PWD/$1)"
+  fi
+}
+
+# Get current script path
+BASE_PATH=$(cd "`dirname $0`" || exit; pwd)
+DATA_ROOT=$(get_real_path $1)
+OUTPUT_ROOT=$(get_real_path $2)
+
+cd $DATA_ROOT
+mkdir tmp
+TMP_ROOT=$DATA_ROOT/tmp
+
+# DUT-OMRON
+mkdir $TMP_ROOT/DUT-OMRON
+cp -r DUT-OMRON-image/DUT-OMRON-image $TMP_ROOT/DUT-OMRON/images
+# ground_truth_mask 
+cp -r DUT-OMRON-image/pixelwiseGT-new-PNG $TMP_ROOT/DUT-OMRON/ground_truth_mask
+# ECSSD nothing
+
+#HKU-IS
+mkdir $TMP_ROOT/HKU-IS
+cp -r HKU-IS/imgs $TMP_ROOT/HKU-IS/images
+cp -r HKU-IS/gt $TMP_ROOT/HKU-IS/ground_truth_mask
+
+#PASCAL-S
+mkdir $TMP_ROOT/PASCAL-S
+mkdir $TMP_ROOT/PASCAL-S/ground_truth_mask
+mkdir $TMP_ROOT/PASCAL-S/images
+cp PASCAL-S/Imgs/*.png $TMP_ROOT/PASCAL-S/ground_truth_mask
+cp PASCAL-S/Imgs/*.jpg $TMP_ROOT/PASCAL-S/images
+
+# SOD
+mkdir $TMP_ROOT/SOD
+mkdir $TMP_ROOT/SOD/ground_truth_mask
+mkdir $TMP_ROOT/SOD/images
+cp SOD/Imgs/*.png $TMP_ROOT/SOD/ground_truth_mask/
+cp SOD/Imgs/*.jpg $TMP_ROOT/SOD/images/
+
+
+cd $BASE_PATH/..
+python data_crop.py --data_name=ECSSD  --data_root="$DATA_ROOT" --output_path="$OUTPUT_ROOT"
+python data_crop.py --data_name=SOD  --data_root="$TMP_ROOT" --output_path="$OUTPUT_ROOT"
+python data_crop.py --data_name=DUT-OMRON  --data_root="$TMP_ROOT" --output_path="$OUTPUT_ROOT"
+python data_crop.py --data_name=PASCAL-S  --data_root="$TMP_ROOT" --output_path="$OUTPUT_ROOT"
+python data_crop.py --data_name=HKU-IS  --data_root="$TMP_ROOT" --output_path="$OUTPUT_ROOT"
+python data_crop.py --data_name=DUTS-TE  --data_root="$DATA_ROOT" --output_path="$OUTPUT_ROOT"
+python data_crop.py --data_name=DUTS-TR  --data_root="$DATA_ROOT" --output_path="$OUTPUT_ROOT"
+
+# prevent wrong path
+if [ -d $TMP_ROOT/SOD ]; then
+  rm -rf $TMP_ROOT
+fi
+python sal2edge.py --data_root="$OUTPUT_ROOT/DUTS-TR/DUTS-TR-Mask/" --output_path="$OUTPUT_ROOT/DUTS-TR/DUTS-TR-Mask/" --image_list_file="$OUTPUT_ROOT/DUTS-TR/test.lst"
diff --git a/research/cv/EGnet/scripts/run_distribute_train.sh b/research/cv/EGnet/scripts/run_distribute_train.sh
index 86d246e5cc5a54a193af389ea404f6ac564f5aaa..b206f586feef335bcdd9210145b7d2c7a7dd0f72 100644
--- a/research/cv/EGnet/scripts/run_distribute_train.sh
+++ b/research/cv/EGnet/scripts/run_distribute_train.sh
@@ -43,4 +43,3 @@ do
   python -u ./train.py  --is_distributed True    > train.log 2>&1 &
   cd ../
 done
-
diff --git a/research/cv/EGnet/scripts/run_distribute_train_gpu.sh b/research/cv/EGnet/scripts/run_distribute_train_gpu.sh
new file mode 100644
index 0000000000000000000000000000000000000000..b04621ac44a25a02ced47fd95aa5e5c4444d6bae
--- /dev/null
+++ b/research/cv/EGnet/scripts/run_distribute_train_gpu.sh
@@ -0,0 +1,50 @@
+#!/bin/bash
+
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+# Print usage information if the number of arguments is not as required
+if [ $# != 2 ]
+then 
+    echo "=============================================================================================================="
+    echo "Please run the script as: "
+    echo "bash run_distribute_train_gpu.sh [DEVICE_NUM] [USED_DEVICES]"
+    echo "for example: bash run_distribute_train_gpu.sh 8 0,1,2,3,4,5,6,7"
+    echo "================================================================================================================="
+exit 1
+fi
+
+# Get absolute path
+get_real_path(){
+  if [ "${1:0:1}" == "/" ]; then
+    echo "$1"
+  else
+    echo "$(realpath -m $PWD/$1)"
+  fi
+}
+
+# Get current script path
+BASE_PATH=$(cd "`dirname $0`" || exit; pwd)
+
+export RANK_SIZE=$1
+
+export CUDA_VISIBLE_DEVICES=$2
+
+cd $BASE_PATH/..
+
+mpirun -n $RANK_SIZE --allow-run-as-root \
+python -u train.py --device_target=GPU  --is_distributed True &> distribute_train.log &
+
+echo "The train log is at ../distribute_train.log."
diff --git a/research/cv/EGnet/scripts/run_eval.sh b/research/cv/EGnet/scripts/run_eval.sh
index 0ef0f42f12ea3db80ddf47dab8d4369f50c52304..8bd31fb3cb8985ec90c8bd70e7bf58439858aeab 100644
--- a/research/cv/EGnet/scripts/run_eval.sh
+++ b/research/cv/EGnet/scripts/run_eval.sh
@@ -14,27 +14,33 @@
 # limitations under the License.
 # ============================================================================
 cd ..
-python eval.py --test_fold='./result/ECSSD'   \
+python eval.py --device_target=Ascend   \
+      --test_fold='./result/ECSSD'      \
       --model='./EGNet/run-nnet/models/final_resnet_bone.ckpt'  \
       --sal_mode=e  \
       --base_model=resnet >test_e.log
-python eval.py --test_fold='./result/PASCAL-S'  \
+python eval.py --device_target=Ascend    \
+      --test_fold='./result/PASCAL-S'  \
       --model='./EGNet/run-nnet/models/final_resnet_bone.ckpt'  \
       --sal_mode=p  \
       --base_model=resnet >test_p.log
-python eval.py --test_fold='./result/DUT-OMRON'  \
+python eval.py --device_target=Ascend      \
+      --test_fold='./result/DUT-OMRON'  \
       --model='./EGNet/run-nnet/models/final_resnet_bone.ckpt'  \
       --sal_mode=d  \
       --base_model=resnet >test_d.log
-python eval.py --test_fold='./result/HKU-IS'  \
+python eval.py --device_target=Ascend   \
+      --test_fold='./result/HKU-IS'     \
       --model='./EGNet/run-nnet/models/final_resnet_bone.ckpt'  \
       --sal_mode=h  \
       --base_model=resnet >test_h.log
-python eval.py --test_fold='./result/SOD'  \
+python eval.py --device_target=Ascend   \
+      --test_fold='./result/SOD'           \
       --model='./EGNet/run-nnet/models/final_resnet_bone.ckpt'  \
       --sal_mode=s  \
       --base_model=resnet >test_s.log
-python eval.py --test_fold='./result/DUTS-TE'  \
+python eval.py --device_target=Ascend   \
+      --test_fold='./result/DUTS-TE'  \
       --model='./EGNet/run-nnet/models/final_resnet_bone.ckpt'  \
       --sal_mode=t  \
       --base_model=resnet >test_t.log
\ No newline at end of file
diff --git a/research/cv/EGnet/scripts/run_eval_gpu.sh b/research/cv/EGnet/scripts/run_eval_gpu.sh
new file mode 100644
index 0000000000000000000000000000000000000000..b60f460c1411fcb97c5310b0d54078d19049faf1
--- /dev/null
+++ b/research/cv/EGnet/scripts/run_eval_gpu.sh
@@ -0,0 +1,59 @@
+#!/bin/bash
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+# Get absolute path
+get_real_path(){
+  if [ "${1:0:1}" == "/" ]; then
+    echo "$1"
+  else
+    echo "$(realpath -m "$PWD/$1")"
+  fi
+}
+
+# Get current script path
+BASE_PATH=$(cd "$(dirname "$0")" || exit; pwd)
+
+cd "$BASE_PATH/.." || exit
+
+echo "evaluating ECSSD"
+python eval.py --device_target=GPU      \
+      --test_fold='./result/ECSSD'   \
+      --sal_mode=e >test_e.log
+
+echo "evaluating PASCAL-S"
+python eval.py --device_target=GPU          \
+      --test_fold='./result/PASCAL-S'  \
+      --sal_mode=p >test_p.log
+
+echo "evaluating DUT-OMRON"
+python eval.py --device_target=GPU            \
+      --test_fold='./result/DUT-OMRON'  \
+      --sal_mode=d >test_d.log
+
+echo "evaluating HKU-IS"
+python eval.py --device_target=GPU      \
+      --test_fold='./result/HKU-IS'  \
+      --sal_mode=h >test_h.log
+
+echo "evaluating SOD"
+python eval.py --device_target=GPU \
+      --test_fold='./result/SOD'   \
+      --sal_mode=s >test_s.log
+
+echo "evaluating DUTS-TE"
+python eval.py --device_target=GPU        \
+      --test_fold='./result/DUTS-TE'  \
+      --sal_mode=t >test_t.log
diff --git a/research/cv/EGnet/scripts/run_standalone_train.sh b/research/cv/EGnet/scripts/run_standalone_train.sh
index 2f9a83c6cd42761fd87fe5abbd787de6e722270a..380fb84c4e8cf24ab1ec211b7e58bb082e2a3b62 100644
--- a/research/cv/EGnet/scripts/run_standalone_train.sh
+++ b/research/cv/EGnet/scripts/run_standalone_train.sh
@@ -13,4 +13,5 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # ============================================================================
-python train.py --base_model=vgg >train.log
+cd ..
+python train.py --device_target=Ascend --base_model=vgg >train.log
diff --git a/research/cv/EGnet/scripts/run_standalone_train_gpu.sh b/research/cv/EGnet/scripts/run_standalone_train_gpu.sh
new file mode 100644
index 0000000000000000000000000000000000000000..1eb4c599a603ac0dc3dd5f692e2828afbeeb1b74
--- /dev/null
+++ b/research/cv/EGnet/scripts/run_standalone_train_gpu.sh
@@ -0,0 +1,33 @@
+#!/bin/bash
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+# Get absolute path
+get_real_path(){
+  if [ "${1:0:1}" == "/" ]; then
+    echo "$1"
+  else
+    echo "$(realpath -m "$PWD/$1")"
+  fi
+}
+
+# Get current script path
+BASE_PATH=$(cd "$(dirname "$0")" || exit; pwd)
+
+cd "$BASE_PATH/.." || exit
+
+python train.py --device_target=GPU &>train.log &
+
+echo "The train log is at ../train.log."
diff --git a/research/cv/EGnet/src/dataset.py b/research/cv/EGnet/src/dataset.py
index 49eb0a7a44c6288c0d2127b675852aae5d6d9aff..ab08b34459119473f523e284172f77d5c6445614 100644
--- a/research/cv/EGnet/src/dataset.py
+++ b/research/cv/EGnet/src/dataset.py
@@ -1,4 +1,4 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
+# Copyright 2022 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -24,14 +24,18 @@ import numpy as np
 from model_utils.config import base_config
 from mindspore.dataset import GeneratorDataset
 from mindspore.communication.management import get_rank, get_group_size
+
 if base_config.train_online:
     import moxing as mox
+
     mox.file.shift('os', 'mox')
 
+
 class ImageDataTrain:
     """
     training dataset
     """
+
     def __init__(self, train_path=""):
         self.sal_root = train_path
         self.sal_source = os.path.join(train_path, "train_pair_edge.lst")
@@ -54,6 +58,7 @@ class ImageDataTest:
     """
     test dataset
     """
+
     def __init__(self, test_mode=1, sal_mode="e", test_path="", test_fold=""):
         if test_mode == 1:
             if sal_mode == "e":
@@ -97,7 +102,7 @@ class ImageDataTest:
 
     def __getitem__(self, item):
         image, _ = load_image_test(os.path.join(self.image_root, self.image_list[item]))
-        label = load_sal_label(os.path.join(self.test_root, self.image_list[item][0:-4]+".png"))
+        label = load_sal_label(os.path.join(self.test_root, self.image_list[item][0:-4] + ".png"))
         return image, label, item % self.image_num
 
     def save_folder(self):
@@ -109,7 +114,7 @@ class ImageDataTest:
 
 # get the dataloader (Note: without data augmentation, except saliency with random flip)
 def create_dataset(batch_size, mode="train", num_thread=1, test_mode=1, sal_mode="e", train_path="", test_path="",
-                   test_fold="", is_distributed=False):
+                   test_fold="", is_distributed=False, rank_id=0, rank_size=1):
     """
     create dataset
     """
@@ -135,7 +140,10 @@ def create_dataset(batch_size, mode="train", num_thread=1, test_mode=1, sal_mode
     return ds.batch(batch_size, drop_remainder=drop_remainder, num_parallel_workers=num_thread), dataset
 
 
-def save_img(img, path):
+def save_img(img, path, is_distributed=False):
+
+    if is_distributed and get_rank() != 0:
+        return
     range_ = np.max(img) - np.min(img)
     img = (img - np.min(img)) / range_
     img = img * 255 + 0.5
@@ -145,8 +153,7 @@ def save_img(img, path):
 
 def load_image(pah):
     if not os.path.exists(pah):
-        print("File Not Exists")
-        print(pah)
+        print("File Not Exists:", pah)
     im = cv2.imread(pah)
     in_ = np.array(im, dtype=np.float32)
     in_ -= np.array((104.00699, 116.66877, 122.67892))
@@ -163,7 +170,6 @@ def load_image_test(pah):
         pah = pah + ".png"
     else:
         pah = pah + ".jpg"
-    print("--------", pah)
     if not os.path.exists(pah):
         print("File Not Exists")
     im = cv2.imread(pah)
diff --git a/research/cv/EGnet/src/egnet.py b/research/cv/EGnet/src/egnet.py
index 524d840c7395205c1d99e34abbc5cc2e1b43e5c0..9679d68341e9bf0d3165420151a1999d218db353 100644
--- a/research/cv/EGnet/src/egnet.py
+++ b/research/cv/EGnet/src/egnet.py
@@ -1,4 +1,4 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
+# Copyright 2022 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -61,13 +61,14 @@ class ConvertLayer(nn.Cell):
         return tuple_resl
 
 
-class MergeLayer1(nn.Cell):  # list_k: [[64, 512, 64], [128, 512, 128], [256, 0, 256] ... ]
+class MergeLayer1(nn.Cell):
     """
     merge layer 1
     """
     def __init__(self, list_k):
         """
         initialize merge layer 1
+        @param list_k: [[64, 512, 64], [128, 512, 128], [256, 0, 256] ... ]
         """
         super(MergeLayer1, self).__init__()
         self.list_k = list_k
@@ -75,12 +76,10 @@ class MergeLayer1(nn.Cell):  # list_k: [[64, 512, 64], [128, 512, 128], [256, 0,
         for ik in list_k:
             if ik[1] > 0:
                 trans.append(nn.SequentialCell([nn.Conv2d(ik[1], ik[0], 1, 1, has_bias=False), nn.ReLU()]))
-            # Conv
             up.append(nn.SequentialCell(
                 [nn.Conv2d(ik[0], ik[2], ik[3], 1, has_bias=True, pad_mode="pad", padding=ik[4]), nn.ReLU(),
                  nn.Conv2d(ik[2], ik[2], ik[3], 1, has_bias=True, pad_mode="pad", padding=ik[4]), nn.ReLU(),
                  nn.Conv2d(ik[2], ik[2], ik[3], 1, has_bias=True, pad_mode="pad", padding=ik[4]), nn.ReLU()]))
-            # Conv |
             score.append(nn.Conv2d(ik[2], 1, 3, 1, pad_mode="pad", padding=1, has_bias=True))
         trans.append(nn.SequentialCell([nn.Conv2d(512, 128, 1, 1, has_bias=False), nn.ReLU()]))
         self.trans, self.up, self.score = nn.CellList(trans), nn.CellList(up), nn.CellList(score)
@@ -229,8 +228,6 @@ class EGNet(nn.Cell):
         for i in up_sal_final:
             tuple_up_sal_final += (i,)
 
-        # only can work in dynamic graph
-        # return tuple(up_edge), tuple(up_sal), tuple(up_sal_final)
         return tuple_up_edge, tuple_up_sal, tuple_up_sal_final
 
     def load_pretrained_model(self, model_file):
diff --git a/research/cv/EGnet/src/train_forward_backward.py b/research/cv/EGnet/src/train_forward_backward.py
index 12a50d65b3728a45f9a4ff3ebc0c9786cec1bf4d..ff795bd0945d1734017e6ed3ab3b3bf6d9897e5b 100644
--- a/research/cv/EGnet/src/train_forward_backward.py
+++ b/research/cv/EGnet/src/train_forward_backward.py
@@ -1,4 +1,4 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
+# Copyright 2022 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -15,8 +15,8 @@
 
 """Train forward and backward define"""
 
-from mindspore import ops, ParameterTuple
-from mindspore.nn import Cell
+from mindspore import ops
+from mindspore.nn import Cell, TrainOneStepCell
 
 _sum_op = ops.MultitypeFuncGraph("grad_sum_op")
 _clear_op = ops.MultitypeFuncGraph("clear_op")
@@ -37,19 +37,13 @@ def _clear_grad_sum(grad_sum, zero):
     return success
 
 
-class TrainForwardBackward(Cell):
+class TrainForwardBackward(TrainOneStepCell):
     """
     cell for step train
     """
     def __init__(self, network, optimizer, grad_sum, sens=1.0):
-        super(TrainForwardBackward, self).__init__(auto_prefix=False)
-        self.network = network
-        self.network.set_grad()
-        self.network.add_flags(defer_inline=True)
-        self.weights = ParameterTuple(network.trainable_params())
-        self.optimizer = optimizer
+        super(TrainForwardBackward, self).__init__(network, optimizer, sens)
         self.grad_sum = grad_sum
-        self.grad = ops.GradOperation(get_by_list=True, sens_param=True)
         self.sens = sens
         self.hyper_map = ops.HyperMap()
 
@@ -61,6 +55,7 @@ class TrainForwardBackward(Cell):
         loss = self.network(*inputs)
         sens = ops.Fill()(ops.DType()(loss), ops.Shape()(loss), self.sens)
         grads = self.grad(self.network, weights)(*inputs, sens)
+        grads = self.grad_reducer(grads)
         return ops.depend(loss, self.hyper_map(ops.partial(_sum_op), self.grad_sum, grads))
 
 
@@ -68,6 +63,7 @@ class TrainOptimize(Cell):
     """
     optimize cell
     """
+
     def __init__(self, optimizer, grad_sum):
         super(TrainOptimize, self).__init__(auto_prefix=False)
         self.optimizer = optimizer
@@ -85,6 +81,7 @@ class TrainClear(Cell):
     """
     clear cell
     """
+
     def __init__(self, grad_sum, zeros):
         super(TrainClear, self).__init__(auto_prefix=False)
         self.grad_sum = grad_sum
diff --git a/research/cv/EGnet/src/vgg.py b/research/cv/EGnet/src/vgg.py
index f45a54f0ed4801f60bbace4cb6458de0f386c245..df3f310bc533332c66ef42a4dbbefb9f1018bde9 100644
--- a/research/cv/EGnet/src/vgg.py
+++ b/research/cv/EGnet/src/vgg.py
@@ -20,7 +20,6 @@ from mindspore.train import load_checkpoint
 import mindspore
 
 
-
 def vgg(cfg, i, batch_norm=False):
     """Make stage network of VGG."""
     layers = []
diff --git a/research/cv/EGnet/train.py b/research/cv/EGnet/train.py
index b81d59ae35dbb62fb936f805c290457ef34a4625..dc7f895f3a60d8adb4d023d9527a8c51602fcdaa 100644
--- a/research/cv/EGnet/train.py
+++ b/research/cv/EGnet/train.py
@@ -1,4 +1,4 @@
-# Copyright 2021 Huawei Technologies Co., Ltd
+# Copyright 2022 Huawei Technologies Co., Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -21,7 +21,7 @@ from collections import OrderedDict
 from mindspore import set_seed
 from mindspore import context
 from mindspore import load_checkpoint, save_checkpoint, DatasetHelper
-from mindspore.communication import init
+from mindspore.communication import init, get_rank, get_group_size
 from mindspore.context import ParallelMode
 from mindspore.nn import Sigmoid
 from mindspore.nn.optim import Adam
@@ -31,10 +31,13 @@ from src.dataset import create_dataset, save_img
 from src.egnet import build_model, init_weights
 from src.sal_edge_loss import SalEdgeLoss, WithLossCell
 from src.train_forward_backward import TrainClear, TrainOptimize, TrainForwardBackward
+
 if base_config.train_online:
     import moxing as mox
+
     mox.file.shift('os', 'mox')
 
+
 def main(config):
     if config.train_online:
         local_data_url = os.path.join("/cache", config.train_path)
@@ -56,24 +59,31 @@ def main(config):
                 config.pre_trained = os.path.join("/cache", config.pre_trained)
                 mox.file.copy_parallel(os.path.join(config.online_pretrained_path,
                                                     os.path.basename(config.pre_trained)), config.pre_trained)
+    id_str = os.getenv("DEVICE_ID", "0")
+    if id_str.isdigit():
+        dev_id = int(id_str)
+    else:
+        dev_id = 0
     context.set_context(mode=context.GRAPH_MODE,
                         device_target=config.device_target,
                         reserve_class_name_in_scope=False,
-                        device_id=os.getenv('DEVICE_ID', '0'))
+                        device_id=dev_id)
+
     if config.is_distributed:
-        config.epoch = config.epoch * 6
-        set_seed(1234)
         context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL, gradients_mean=True)
         init()
+    if config.is_distributed and config.device_target == "Ascend":
+        config.epoch = config.epoch * 6
+        set_seed(1234)
     train_dataset, _ = create_dataset(config.batch_size, num_thread=config.num_thread, train_path=config.train_path,
                                       is_distributed=config.is_distributed)
     run = config.train_save_name
     if not os.path.exists(config.save_fold):
-        os.mkdir(config.save_fold)
+        os.makedirs(config.save_fold, exist_ok=True)
     if not os.path.exists("%s/run-%s" % (config.save_fold, run)):
-        os.mkdir("%s/run-%s" % (config.save_fold, run))
-        os.mkdir("%s/run-%s/logs" % (config.save_fold, run))
-        os.mkdir("%s/run-%s/models" % (config.save_fold, run))
+        os.makedirs("%s/run-%s" % (config.save_fold, run), exist_ok=True)
+        os.makedirs("%s/run-%s/logs" % (config.save_fold, run), exist_ok=True)
+        os.makedirs("%s/run-%s/models" % (config.save_fold, run), exist_ok=True)
     config.save_fold = "%s/run-%s" % (config.save_fold, run)
     train = Solver(train_dataset, config)
     train.train()
@@ -90,11 +100,11 @@ class Solver:
             if config.base_model == "vgg":
                 if os.path.exists(self.config.vgg):
                     self.network.base.load_pretrained_model(self.config.vgg)
-                    print("Load VGG pretrained model")
+                    print("Load VGG pretrained model from: ", self.config.vgg)
             elif config.base_model == "resnet":
                 if os.path.exists(self.config.resnet):
                     self.network.base.load_pretrained_model(self.config.resnet)
-                    print("Load ResNet pretrained model")
+                    print("Load ResNet pretrained model from: ", self.config.resnet)
             else:
                 raise ValueError("unknown base model")
         else:
@@ -104,13 +114,16 @@ class Solver:
 
         """some hyper params"""
         p = OrderedDict()
-        if self.config.base_model == "vgg":  # Learning rate resnet:5e-5, vgg:2e-5(begin with 2e-8, warm up to 2e-5 in epoch 3)
-            p["lr_bone"] = 2e-8
-            if self.config.is_distributed:
-                p["lr_bone"] = 2e-9
+        # Learning rate resnet:5e-5, vgg:2e-5(begin with 2e-8, warm up to 2e-5 in epoch 3)
+        if self.config.base_model == "vgg":
+            p["lr_bone"] = 2e-5
+            if self.config.device_target == "Ascend":
+                p["lr_bone"] = 2e-8
+                if self.config.is_distributed:
+                    p["lr_bone"] = 2e-9
         elif self.config.base_model == "resnet":
             p["lr_bone"] = 5e-5
-            if self.config.is_distributed:
+            if self.config.is_distributed and self.config.device_target == "Ascend":
                 p["lr_bone"] = 5e-9
         else:
             raise ValueError("unknown base model")
@@ -119,15 +132,20 @@ class Solver:
         p["momentum"] = 0.90  # Momentum
         self.p = p
         self.lr_decay_epoch = [15, 24]
-        if config.is_distributed:
-            self.lr_decay_epoch = [15*6, 24*6]
+        if config.is_distributed and self.config.device_target == "Ascend":
+            self.lr_decay_epoch = [15 * 6, 24 * 6]
+        if config.is_distributed and self.config.device_target == "GPU":
+            ave = int(round(10/get_group_size()))
+            if ave == 0:
+                ave = 1
+            self.config.n_ave_grad = ave
+            print(f"n_ave_grad change to {self.config.n_ave_grad} for distributed training")
         self.tmp_path = "tmp_see"
 
         self.lr_bone = p["lr_bone"]
         self.lr_branch = p["lr_branch"]
         self.optimizer = Adam(self.network.trainable_params(), learning_rate=self.lr_bone,
                               weight_decay=p["wd"], loss_scale=self.config.loss_scale)
-        self.print_network()
         self.loss_fn = SalEdgeLoss(config.n_ave_grad, config.batch_size)
         params = self.optimizer.parameters
         self.grad_sum = params.clone(prefix="grad_sum", init="zeros")
@@ -177,7 +195,7 @@ class Solver:
         iter_num = self.train_ds.get_dataset_size()
         dataset_helper = DatasetHelper(self.train_ds, dataset_sink_mode=False, epoch_num=self.config.epoch)
         if not os.path.exists(self.tmp_path):
-            os.mkdir(self.tmp_path)
+            os.makedirs(self.tmp_path, exist_ok=True)
         for epoch in range(self.config.epoch):
             r_edge_loss, r_sal_loss, r_sum_loss = 0, 0, 0
             for i, data_batch in enumerate(dataset_helper):
@@ -211,12 +229,14 @@ class Solver:
 
                 if (i + 1) % self.config.save_tmp == 0:
                     _, _, up_sal_final = self.network(sal_image)
-                    sal = self.sigmoid((up_sal_final[-1])).asnumpy().squeeze()
-                    sal_image = sal_image.asnumpy().squeeze().transpose((1, 2, 0))
-                    sal_label = sal_label.asnumpy().squeeze()
-                    save_img(sal, os.path.join(self.tmp_path, f"iter{i}-sal-0.jpg"))
-                    save_img(sal_image, os.path.join(self.tmp_path, f"iter{i}-sal-data.jpg"))
-                    save_img(sal_label, os.path.join(self.tmp_path, f"iter{i}-sal-target.jpg"))
+                    sal = self.sigmoid(up_sal_final[-1]).asnumpy()[0].squeeze()
+                    sal_image = sal_image.asnumpy()[0].squeeze().transpose((1, 2, 0))
+                    sal_label = sal_label.asnumpy()[0].squeeze()
+                    save_img(sal, os.path.join(self.tmp_path, f"iter{i}-sal-0.jpg"), self.config.is_distributed)
+                    save_img(sal_image, os.path.join(self.tmp_path, f"iter{i}-sal-data.jpg"),
+                             self.config.is_distributed)
+                    save_img(sal_label, os.path.join(self.tmp_path, f"iter{i}-sal-target.jpg"),
+                             self.config.is_distributed)
 
             if (epoch + 1) % self.config.epoch_save == 0:
                 if self.config.train_online:
@@ -227,9 +247,11 @@ class Solver:
                                            os.path.join(self.config.train_url, "epoch_%d_%s_bone.ckpt" %
                                                         (epoch + 1, self.config.base_model)))
                 else:
-                    save_checkpoint(self.network, "%s/models/epoch_%d_%s_bone.ckpt" %
-                                    (self.config.save_fold, epoch + 1, self.config.base_model))
-            if self.config.base_model == "vgg" or self.config.is_distributed:
+                    self.save_ckpt(os.path.join(self.config.save_fold, "models/epoch_%d_%s_bone.ckpt" %
+                                                (epoch + 1, self.config.base_model)))
+
+            if self.config.device_target == "Ascend" and \
+                    (self.config.base_model == "vgg" or self.config.is_distributed):
                 if self.config.is_distributed:
                     lr_rise_epoch = [3, 6, 9, 12]
                 else:
@@ -245,14 +267,16 @@ class Solver:
                                       learning_rate=self.lr_bone, weight_decay=self.p["wd"])
                 self.train_optimize = self.build_train_optimize()
         if self.config.train_online:
-            save_checkpoint(self.network, "final_%s_bone.ckpt"% (self.config.base_model))
-            mox.file.copy_parallel("final_%s_bone.ckpt"% (self.config.base_model),
-                                   os.path.join(self.config.train_url, "final_%s_bone.ckpt"% (self.config.base_model)))
+            save_checkpoint(self.network, "final_%s_bone.ckpt" % self.config.base_model)
+            mox.file.copy_parallel("final_%s_bone.ckpt" % self.config.base_model,
+                                   os.path.join(self.config.train_url, "final_%s_bone.ckpt" % self.config.base_model))
         else:
-            save_checkpoint(self.network,
-                            "%s/models/final_%s_bone.ckpt" % (self.config.save_fold, self.config.base_model))
+            self.save_ckpt("%s/models/final_%s_bone.ckpt" % (self.config.save_fold, self.config.base_model))
+
+    def save_ckpt(self, ckpt_file):
+        if not self.config.is_distributed or get_rank() == 0:
+            save_checkpoint(self.network, ckpt_file)
 
 
 if __name__ == "__main__":
     main(base_config)
-    
\ No newline at end of file