diff --git a/research/gnn/sdne/README.md b/research/gnn/sdne/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..46e27e3d0c83725a7185249b304f609e4e096ac3
--- /dev/null
+++ b/research/gnn/sdne/README.md
@@ -0,0 +1,337 @@
+# Contents
+
+[查看中文](./README_CN.md)
+
+<!-- TOC -->
+
+- [Contents](#contents)
+- [SDNE Description](#sdne-description)
+- [Dataset](#dataset)
+- [Environment Requirements](#environment-requirements)
+- [Quick start](#quick-start)
+    - [Script Description](#script-description)
+        - [Script and sample code](#script-and-sample-code)
+        - [Script parameters](#script-parameters)
+        - [Training process](#training-process)
+            - [Training](#training)
+        - [Evaluation process](#evaluation-process)
+            - [Evaluation](#evaluation)
+        - [Export MindIR model](#export-mindir-model)
+            - [Usage](#usage)
+            - [Result](#result)
+- [Model description](#model-description)
+    - [Performance](#performance)
+        - [Training Performance](#training-performance)
+        - [Evaluation Performance](#evaluation-performance)
+- [Random state description](#description-of-random-state)
+- [ModelZoo Homepage](#modelzoo-homepage)
+
+<!-- /TOC -->
+
+# SDNE Description
+
+Network embedding is an important technique for learning low-dimensional representations of network vertices that capture and preserve the network structure.
+Unlike existing network embedding methods, the paper presents a new deep architecture, SDNE, which effectively captures the highly nonlinear network structure while preserving both the global and the local structure of the original network.
+The work makes three main contributions:
+(1) It proposes a structural deep network embedding method that can map the data into a highly nonlinear latent space.
+(2) It proposes a novel semi-supervised learning architecture that jointly learns the global and local structure of sparse networks.
+(3) It evaluates the method on five datasets and applies it to four application scenarios with remarkable results.
+
+[Paper](https://dl.acm.org/doi/10.1145/2939672.2939753): Wang D., Cui P., Zhu W. Structural Deep Network Embedding[C]// Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 2016.
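+
+For orientation, the SDNE objective couples a weighted reconstruction loss on adjacency rows (second-order proximity, with observed edges up-weighted by a factor beta) with a Laplacian-style penalty that pulls the embeddings of connected vertices together (first-order proximity). The NumPy sketch below is illustrative only: it follows the paper's formulation, uses placeholder default values for `alpha` and `beta`, omits the L2 weight regularization term, and is not the loss implemented in `src/loss.py`.
+
+```python
+import numpy as np
+
+def sdne_objective(S, Y, X_hat, alpha=1e-5, beta=5.0):
+    """Illustrative SDNE objective (second-order + first-order terms).
+
+    S     : (n, n) adjacency matrix; each row is the input x_i
+    Y     : (n, d) embeddings produced by the encoder
+    X_hat : (n, n) reconstructions produced by the decoder
+    """
+    # Second-order proximity: reconstruction error with observed edges weighted by beta.
+    B = np.where(S > 0, beta, 1.0)
+    loss_2nd = np.sum(((X_hat - S) * B) ** 2)
+
+    # First-order proximity: sum_ij s_ij * ||y_i - y_j||^2 = 2 * tr(Y^T L Y), with L = D - S.
+    L = np.diag(S.sum(axis=1)) - S
+    loss_1st = 2.0 * np.trace(Y.T @ L @ Y)
+
+    return loss_2nd + alpha * loss_1st
+```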
+
+# Dataset
+
+Datasets used (a quick loading check is sketched after the list):
+
+- [WIKI](https://github.com/shenweichen/GraphEmbedding/tree/master/data/wiki/): nodes: 2405, edges: 17981
+
+- [CA-GRQC](https://github.com/suanrong/SDNE/tree/master/GraphData/): nodes: 5242, edges: 11496
+
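+A minimal sanity check with `networkx` (listed under "Other" requirements below) confirms that the edge lists were downloaded correctly. The file names come from `src/config.py`; the directories are placeholders, and the graph construction actually used for training and evaluation is handled by `src/dataset.py`, which also decides whether each graph is treated as directed.
+
+```python
+import networkx as nx
+
+# Placeholder directories -- point them at your local copies of the edge lists.
+wiki = nx.read_edgelist("/path/wiki/Wiki_edgelist.txt", nodetype=str)
+grqc = nx.read_edgelist("/path/grqc/ca-Grqc.txt", nodetype=str)
+
+for name, graph in (("WIKI", wiki), ("GRQC", grqc)):
+    print(name, "nodes:", graph.number_of_nodes(), "edges:", graph.number_of_edges())
+```
+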
+# Environment Requirements
+
+- Hardware (Ascend/GPU)
+    - Prepare hardware environment with Ascend or GPU.
+- Framework
+    - [MindSpore](https://www.mindspore.cn/install)
+- For more information about MindSpore, please check the resources below:
+    - [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
+- Other
+    - networkx
+    - numpy
+    - tqdm
+
+# Quick start
+
+After installing MindSpore through the official website, you can follow the steps below for training and evaluation:
+
+```bash
+# training on Ascend
+bash scripts/run_standalone_train.sh dataset_name /path/data /path/ckpt epoch_num 0
+# training on GPU
+bash scripts/run_standalone_train_gpu.sh [dataset_name] [data_path] [ckpt_path] [epoch_num] [device_id]
+
+# evaluation on Ascend
+bash scripts/run_eval.sh dataset_name /path/data /path/ckpt 0
+# evaluation on GPU
+bash scripts/run_eval_gpu.sh [dataset_name] [data_path] [ckpt_path] [device_id]
+```
+
+## Script Description
+
+## Script and Sample Code
+
+```text
+└── SDNE  
+ ├── README_CN.md
+ ├── scripts
+  ├── run_310_infer.sh
+  ├── run_standalone_train.sh
+  ├── run_eval.sh
+  ├── run_standalone_train_gpu.sh
+  └── run_eval_gpu.sh
+ ├── ascend310_infer
+  ├── inc
+   └── utils.h
+  ├── build.sh
+  ├── convert_data.py                // data conversion script
+  └── CMakeLists.txt
+ ├── src
+  ├── __init__.py
+  ├── loss.py
+  ├── config.py
+  ├── dataset.py
+  ├── sdne.py
+  ├── initializer.py
+  ├── optimizer.py
+  └── utils.py
+ ├── export.py
+ ├── eval.py
+ └── train.py
+```
+
+## Script Parameters
+
+```text
+The main parameters of train.py are as follows:
+
+-- device_id: device ID used for training or evaluation. This parameter is ignored when train.sh is used for distributed training.
+-- device_target: target device, chosen from ['Ascend', 'GPU'].
+-- data_url: dataset path.
+-- ckpt_url: path for storing checkpoints.
+-- dataset: dataset to use, chosen from ['WIKI', 'GRQC'].
+-- epochs: number of training epochs.
+-- pretrained: whether to use pretrained parameters.
+```
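+
+The sketch below mirrors how `train.py` resolves the data location: the per-dataset default file name comes from `src/config.py` and is joined onto `--data_url` unless `--data_path` is given explicitly. The directory value is a placeholder; run it from the repository root so `from src import cfg` resolves.
+
+```python
+import os
+
+from src import cfg  # per-dataset defaults defined in src/config.py
+
+config = cfg['WIKI']        # selected by --dataset
+data_url = '/path/wiki'     # --data_url (placeholder)
+data_path = ''              # --data_path left empty
+
+# Same fallback rule as train.py: use the config's default file name when --data_path is empty.
+resolved = os.path.join(data_url, config['data_path'] if data_path == '' else data_path)
+print(resolved)             # /path/wiki/Wiki_edgelist.txt
+```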
+
+## Training process
+
+### Training
+
+- Ascend
+
+```shell
+bash scripts/run_standalone_train.sh WIKI /path/wiki /path/ckpt 40 0
+```
+
+- GPU
+
+```shell
+bash scripts/run_standalone_train_gpu.sh WIKI /path/wiki /path/ckpt 40 0
+```
+
+The above shell script will run the training. The results can be viewed in the `train.log` file.
+The loss value is as follows:
+
+```text
+...
+epoch: 36 step: 1, loss is 31.026050567626953
+epoch time: 1121.593 ms, per step time: 1121.593 ms
+epoch: 37 step: 1, loss is 29.539968490600586
+epoch time: 1121.818 ms, per step time: 1121.818 ms
+epoch: 38 step: 1, loss is 27.804513931274414
+epoch time: 1120.751 ms, per step time: 1120.751 ms
+epoch: 39 step: 1, loss is 26.283227920532227
+epoch time: 1121.551 ms, per step time: 1121.551 ms
+epoch: 40 step: 1, loss is 24.820133209228516
+epoch time: 1123.054 ms, per step time: 1123.054 ms
+```
+
+- Ascend
+
+```shell
+bash scripts/run_standalone_train.sh GRQC /path/grqc /path/ckpt 2 0
+```
+
+- GPU
+
+```shell
+bash scripts/run_standalone_train_gpu.sh GRQC /path/grqc /path/ckpt 2 0
+```
+
+The above shell script will run the training. The results can be viewed in the `train.log` file.
+The loss value is as follows:
+
+```text
+...
+epoch: 2 step: 157, loss is 607002.3125
+epoch: 2 step: 158, loss is 638598.0625
+epoch: 2 step: 159, loss is 485911.40625
+epoch: 2 step: 160, loss is 774514.1875
+epoch: 2 step: 161, loss is 733589.0625
+epoch: 2 step: 162, loss is 504986.1875
+epoch: 2 step: 163, loss is 416679.625
+epoch: 2 step: 164, loss is 524830.75
+epoch time: 14036.608 ms, per step time: 85.589 ms
+```
+
+## Evaluation process
+
+### Evaluation
+
+- Ascend
+
+```bash
+bash scripts/run_eval.sh WIKI /path/wiki /path/ckpt 0
+```
+
+- GPU
+
+```bash
+bash scripts/run_eval_gpu.sh WIKI /path/wiki /path/ckpt 0
+```
+
+The above command will run in the background and you can view the result through the `eval.log` file.
+The accuracy of the test dataset is as follows:
+
+```text
+Reconstruction Precision K  [1, 10, 20, 100, 200, 1000, 2000, 6000, 8000, 10000]
+Precision@K(1)= 1.0
+Precision@K(10)=        1.0
+Precision@K(20)=        1.0
+Precision@K(100)=       1.0
+Precision@K(200)=       1.0
+Precision@K(1000)=      1.0
+Precision@K(2000)=      1.0
+Precision@K(6000)=      0.9986666666666667
+Precision@K(8000)=      0.991375
+Precision@K(10000)=     0.966
+MAP :  0.6673926856547066
+```
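+
+For reference, reconstruction Precision@K ranks candidate vertex pairs by their predicted edge weight and reports the fraction of true edges among the top K pairs, while MAP averages the per-vertex ranking precision. The snippet below is a simplified, undirected illustration of the Precision@K idea only; it is not the repository's `reconstruction_precision_k`/`check_reconstruction` implementation in `src/`.
+
+```python
+import numpy as np
+
+def precision_at_k(scores, adjacency, ks):
+    """Simplified Precision@K over unordered vertex pairs (i < j)."""
+    iu = np.triu_indices(scores.shape[0], k=1)   # candidate pairs
+    order = np.argsort(-scores[iu])              # highest predicted weight first
+    hits = (adjacency[iu][order] > 0).astype(float)
+    return {k: hits[:k].mean() for k in ks}
+```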
+
+- Ascend
+
+```bash
+bash scripts/run_eval.sh GRQC /path/grqc /path/ckpt 0
+```
+
+- GPU
+
+```bash
+bash scripts/run_eval_gpu.sh GRQC /path/grqc /path/ckpt 0
+```
+
+The above command will run in the background and you can view the result through the `eval.log` file.
+The accuracy of the test dataset is as follows:
+
+```text
+Reconstruction Precision K  [10, 100]
+getting similarity...
+Precision@K(10)=        1.0
+Precision@K(100)=       1.0
+```
+
+## Export MindIR model
+
+```bash
+python export.py --dataset [NAME] --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
+```
+
+The argument `ckpt_file` is required, `dataset` must be chosen from [`WIKI`, `GRQC`], and `FILE_FORMAT` must be chosen from ["AIR", "MINDIR"].
+
+### Usage
+
+Before running inference, export the `mindir` file with `export.py`.
+
+```bash
+# Ascend 310 inference
+bash run_310_infer.sh [MINDIR_PATH] [DATASET_NAME] [DATASET_PATH] [DEVICE_ID]
+```
+
+`MINDIR_PATH` is the path of the `mindir` file, `DATASET_NAME` is the dataset name, and `DATASET_PATH` is the path of the dataset file (for example, `/datapath/sdne_wiki_dataset/WIKI/Wiki_edgelist.txt`).
+
+### Result
+
+The inference results are saved in the current directory, and the final accuracy can be found in `acc.log`.
+
+# Model description
+
+## Performance
+
+### Training performance
+
+| Parameters        | Ascend                                        | GPU |
+| ----------------- | --------------------------------------------- | ---- |
+| Model             | SDNE                                          | SDNE |
+| Environment       | Ascend 910; CPU: 2.60GHz, 192 cores, RAM 755 GB | Ubuntu 18.04.6, GeForce RTX 3090, CPU 2.90GHz, 64 cores, RAM 252 GB |
+| Upload date       | 2021-12-31                                    | 2022-01-30 |
+| MindSpore version | 1.5.0                                         | 1.5.0 |
+| dataset           | wiki                                          | wiki |
+| parameters        | lr=0.002, epoch=40                            | lr=0.002, epoch=40 |
+| Optimizer         | Adam                                          | Adam |
+| loss function     | SDNE Loss Function                            | SDNE Loss Function |
+| output            | probability                                   | probability |
+| loss              | 24.82                                         | 24.87 |
+| speed             | 1p: 1105 ms/step                             | 1p: 15 ms/step |
+| total time        | 1p: 44 sec                                   | 1p: 44 sec |
+| Parameters(M)     | 1.30                                         | 1.30 |
+| Checkpoints       | 15M (.ckpt file)                             | 15M (.ckpt file) |
+| Scripts           | [SDNE script](https://gitee.com/mindspore/models/tree/master/research/gnn/sdne) | [SDNE script](https://gitee.com/mindspore/models/tree/master/research/gnn/sdne) |
+
+| Parameters          | Ascend                                         |
+| ------------- | ----------------------------------------------- |
+| Model      | SDNE                                  |
+| Environment          | Ascend 910; CPU: 2.60GHz, 192 cores, RAM 755 GB |
+| Upload date      | 2022-4-7                                     |
+| MindSpore version | 1.5.0                          |
+| dataset        | CA-GRQC                                       |
+| parameters      | lr=0.01                     |
+| Optimizer        | RMSProp                                             |
+| loss function      | SDNE Loss Function                       |
+| output          | probability                                            |
+| loss          | 736119.18                                            |
+| speed | 1p: 86 ms/step |
+| total time | 1p: 28 sec |
+| Parameters(M) | 1.05 |
+| Checkpoints | 13M (.ckpt file) |
+| Scripts | [SDNE script](https://gitee.com/mindspore/models/tree/master/research/gnn/sdne) |
+
+### Evaluation performance
+
+| Parameters        | Ascend              | GPU |
+| ----------------- | ------------------- | ------------------ |
+| Model             | SDNE                | SDNE     |
+| Environment       | Ascend 910          | Ubuntu 18.04.6, GeForce RTX 3090, CPU 2.90GHz, 64 cores, RAM 252 GB |
+| Upload date       | 2021-12-31          | 2022-01-30 |
+| MindSpore version | MindSpore-1.3.0-c78 | 1.5.0 |
+| dataset           | wiki                | wiki |
+| output            | probability         | probability |
+| MAP               | 1p: 66.74%          | 1p: 66.73% |
+
+| Parameters          | Ascend            |
+| ------------- | ------------------ |
+| Model      | SDNE     |
+| Environment          | Ascend 910         |
+| Upload date      | 2022/4/7        |
+| MindSpore version | MindSpore-1.3.0-c78      |
+| dataset        | CA-GRQC          |
+| output          | probability               |
+| MAP        | 1p: 1 |
+
+# Description of Random State
+
+The random seed is fixed in `train.py` for Python, NumPy, and MindSpore.
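+
+The relevant lines sit at the bottom of `train.py` and run once before training starts (the seed value is 1):
+
+```python
+import random
+
+import numpy as np
+import mindspore
+
+# fix all random seed (as done in train.py before run_train() is called)
+mindspore.set_seed(1)
+np.random.seed(1)
+random.seed(1)
+```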
+
+# ModelZoo Homepage
+
+Please check the official [homepage](https://gitee.com/mindspore/models).
diff --git a/research/gnn/sdne/README_CN.md b/research/gnn/sdne/README_CN.md
index ce082b0e5e4e7be42c647f22c3601fc18fe076a3..0e0c7e946b0a8399cabee3b490a0dc02c1645533 100644
--- a/research/gnn/sdne/README_CN.md
+++ b/research/gnn/sdne/README_CN.md
@@ -1,5 +1,10 @@
+
 # 目录
 
+[View English](./README.md)
+
+<!-- TOC -->
+
 - [目录](#目录)
 - [SDNE概述](#IBN-Net概述)
 - [SDNE示例](#IBN-Net示例)
@@ -14,7 +19,6 @@
         - [评估过程](#评估过程)
             - [评估](#评估)
         - [导出mindir模型](#导出mindir模型)
-        - [推理过程](#推理过程)
             - [用法](#用法)
             - [结果](#结果)
 - [模型描述](#模型描述)
@@ -30,7 +34,7 @@
 
 网络嵌入(Network Embedding)是学习网络顶点低维表示的一种重要方法,其目的是捕获和保存网络结构。与现有的网络嵌入不同,论文给出了一种新的深度学习网络架构SDNE,它可以有效的捕获高度非线性网络结构,同时也可以保留原先网络的全局以及局部结构。这项工作主要有三个贡献。(1)作者提出了结构化深度网络嵌入方法,可以将数据映射到高度非线性潜在空间中。(2)作者提出了一种新的半监督学习架构,可以同时学习到稀疏网络的全局和局部结构。(3)作者使用该方法在5个数据集上进行了评估,并将其应用到4个应用场景中,效果显著。
 
-[论文](https://dl.acm.org/doi/10.1145/2939672.2939753): Wang D ,  Cui P ,  Zhu W. Structural Deep Network Embedding[C]// Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. August 2016.
+[论文](https://dl.acm.org/doi/10.1145/2939672.2939753) : Wang D ,  Cui P ,  Zhu W. Structural Deep Network Embedding[C]// Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. August 2016.
 
 # SDNE示例
 
@@ -52,30 +56,42 @@
 - 如需查看详情,请参见如下资源:
     - [MindSpore教程](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
     - [MindSpore Python API](https://www.mindspore.cn/docs/zh-CN/master/index.html)
+- Other
+    - networkx
+    - numpy
+    - tqdm
 
 # 快速入门
 
 通过官方网站安装MindSpore后,您可以按照如下步骤进行训练和评估:
 
-```python
+```bash
 # 单机训练运行示例
 bash scripts/run_standalone_train.sh dataset_name /path/data /path/ckpt epoch_num 0
 
+# GPU单机训练运行示例
+bash scripts/run_standalone_train_gpu.sh [dataset_name] [data_path] [ckpt_path] [epoch_num] [device_id]
+
 # 运行评估示例
 bash scripts/run_eval.sh dataset_name /path/data /path/ckpt 0
+
+# GPU运行评估示例
+bash scripts/run_eval_gpu.sh [dataset_name] [data_path] [ckpt_path] [device_id]
 ```
 
 ## 脚本说明
 
 ## 脚本和样例代码
 
-```path
+```text
 └── SDNE  
  ├── README_CN.md                    // SDNE相关描述
  ├── scripts
   ├── run_310_infer.sh               // 310推理脚本
   ├── run_standalone_train.sh        // 用于单机训练的shell脚本
-  └── run_eval.sh                    // 用于评估的shell脚本
+  ├── run_eval.sh                    // 用于评估的shell脚本
+  ├── run_standalone_train_gpu.sh    // 在GPU中用于单机训练的shell脚本
+  └── run_eval_gpu.sh                // 在GPU中评估的shell脚本
  ├── ascend310_infer                 // 310推理相关代码
   ├── inc
    └── utils.h
@@ -97,21 +113,20 @@ bash scripts/run_eval.sh dataset_name /path/data /path/ckpt 0
  ├── export.py
  ├── eval.py                         // 测试脚本
  └── train.py                        // 训练脚本
-
 ```
 
 ## 脚本参数
 
-```python
+```text
 train.py中主要参数如下:
 
 -- device_id:用于训练或评估数据集的设备ID。当使用train.sh进行分布式训练时,忽略此参数。
+-- device_target:['Ascend', 'GPU']
 -- data_url:数据集路径。
 -- ckpt_url:存放checkpoint的路径。
--- dataset:使用的数据集。
+-- dataset:使用的数据集。['WIKI', 'GRQC']
 -- epochs:迭代数。
 -- pretrained:是否要使用预训练参数。
-
 ```
 
 ## 训练过程
@@ -124,10 +139,16 @@ train.py中主要参数如下:
 bash scripts/run_standalone_train.sh WIKI /path/wiki /path/ckpt 40 0
 ```
 
+- 在GPU环境训练
+
+```shell
+bash scripts/run_standalone_train_gpu.sh WIKI /path/wiki /path/ckpt 40 0
+```
+
 上述shell脚本将运行训练。可以通过`train.log`文件查看结果。
 采用以下方式达到损失值:
 
-```log
+```text
 ...
 epoch: 36 step: 1, loss is 31.026050567626953
 epoch time: 1121.593 ms, per step time: 1121.593 ms
@@ -147,10 +168,16 @@ epoch time: 1123.054 ms, per step time: 1123.054 ms
 bash scripts/run_standalone_train.sh GRQC /path/grqc /path/ckpt 2 0
 ```
 
+- 在GPU环境训练
+
+```shell
+bash scripts/run_standalone_train_gpu.sh GRQC /path/grqc /path/ckpt 2 0
+```
+
 上述shell脚本将运行训练。可以通过`train.log`文件查看结果。
 采用以下方式达到损失值:
 
-```log
+```text
 ...
 epoch: 2 step: 157, loss is 607002.3125
 epoch: 2 step: 158, loss is 638598.0625
@@ -173,9 +200,15 @@ epoch time: 14036.608 ms, per step time: 85.589 ms
 bash scripts/run_eval.sh WIKI /path/wiki /path/ckpt 0
 ```
 
-上述命令将在后台运行,您可以通过eval.log文件查看结果。测试数据集的准确性如下:
+- 在GPU环境运行评估
 
 ```bash
+bash scripts/run_eval_gpu.sh WIKI /path/wiki /path/ckpt 0
+```
+
+上述命令将在后台运行,您可以通过eval.log文件查看结果。测试数据集的准确性如下:
+
+```text
 Reconstruction Precision K  [1, 10, 20, 100, 200, 1000, 2000, 6000, 8000, 10000]
 Precision@K(1)= 1.0
 Precision@K(10)=        1.0
@@ -196,9 +229,15 @@ MAP :  0.6673926856547066
 bash scripts/run_eval.sh GRQC /path/grqc /path/ckpt 0
 ```
 
-上述命令将在后台运行,您可以通过eval.log文件查看结果。测试数据集的准确性如下:
+- 在GPU环境运行评估
 
 ```bash
+bash scripts/run_eval_gpu.sh GRQC /path/grqc /path/ckpt 0
+```
+
+上述命令将在后台运行,您可以通过eval.log文件查看结果。测试数据集的准确性如下:
+
+```text
 Reconstruction Precision K  [10, 100]
 getting similarity...
 Precision@K(10)=        1.0
@@ -207,14 +246,12 @@ Precision@K(100)=       1.0
 
 ## 导出mindir模型
 
-```python
+```bash
 python export.py --dataset [NAME] --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
 ```
 
 参数`ckpt_file` 是必需的,`dataset`是数据集名称,例如`GRQC`,FILE_FORMAT` 必须在 ["AIR", "MINDIR"]中进行选择。
 
-## 推理过程
-
 ### 用法
 
 在执行推理之前,需要通过`export.py`导出`mindir`文件。
@@ -236,23 +273,23 @@ bash run_310_infer.sh [MINDIR_PATH] [DATASET_NAME] [DATASET_PATH] [DEVICE_ID]
 
 ### 训练性能
 
-| 参数          | SDNE                                         |
-| ------------- | ----------------------------------------------- |
-| 模型版本      | SDNE                                  |
-| 资源          | Ascend 910; CPU: 2.60GHz,192内核;内存,755G |
-| 上传日期      | 2021-12-31                                     |
-| MindSpore版本 | 1.5.0                          |
-| 数据集        | WIKI                                       |
-| 训练参数      | lr=0.002                     |
-| 优化器        | Adam                                             |
-| 损失函数      | SDNE Loss Function                       |
-| 输出          | 概率                                            |
-| 损失          | 24.8                                          |
-| 速度 | 1卡:1105毫秒/步 |
-| 总时间 | 1卡:44秒 |
-| 参数(M) | 1.30 |
-| 微调检查点 | 15M (.ckpt file) |
-| 脚本 | [脚本路径](https://gitee.com/mindspore/models/tree/master/research/gnn/sdne) |
+| 参数          | SDNE                                         | GPU     |
+| ------------- | ----------------------------------------------- | ------- |
+| 模型版本      | SDNE                                  | SDNE     |
+| 资源          | Ascend 910; CPU: 2.60GHz,192内核;内存,755G | Ubuntu 18.04.6, GF RTX3090, CPU 2.90GHz, 64cores, RAM 252GB |
+| 上传日期      | 2021-12-31                                     | 2022-02-18        |
+| MindSpore版本 | 1.5.0                          | 1.5.0 |
+| 数据集        | WIKI                               | WIKI                               |
+| 训练参数      | lr=0.002                     | lr=0.002, epoch=40 |
+| 优化器        | Adam                                             | Adam        |
+| 损失函数      | SDNE Loss Function                       | SDNE Loss Function          |
+| 输出          | 概率                                            | 概率                       |
+| 损失          | 24.8                                          | 24.87                      |
+| 速度 | 1卡:1105毫秒/步 | 1卡:15毫秒/步 |
+| 总时间 | 1卡:44秒 | 1卡:44秒 |
+| 参数(M) | 1.30 | 1.30 |
+| 微调检查点 | 15M (.ckpt file) | 15M (.ckpt file) |
+| 脚本 | [脚本路径](https://gitee.com/mindspore/models/tree/master/research/gnn/sdne) | [脚本路径](https://gitee.com/mindspore/models/tree/master/research/gnn/sdne) |
 
 | 参数          | SDNE                                         |
 | ------------- | ----------------------------------------------- |
@@ -274,15 +311,15 @@ bash run_310_infer.sh [MINDIR_PATH] [DATASET_NAME] [DATASET_PATH] [DEVICE_ID]
 
 ### 评估性能
 
-| 参数          | SDNE            |
-| ------------- | ------------------ |
-| 模型版本      | SDNE     |
-| 资源          | Ascend 910         |
-| 上传日期      | 2021/12/31        |
-| MindSpore版本 | MindSpore-1.3.0-c78      |
-| 数据集        | WIKI          |
-| 输出          | 概率               |
-| 准确性        | 1卡:66.74% |
+| 参数          | SDNE                | GPU            |
+| ------------- | ------------------ | ------------------ |
+| 模型版本      | SDNE                | SDNE     |
+| 资源          | Ascend 910         | Ubuntu 18.04.6, GF RTX3090, CPU 2.90GHz, 64cores, RAM 252GB |
+| 上传日期      | 2021/12/31        | 2022/02/18       |
+| MindSpore版本 | MindSpore-1.3.0-c78      | 1.5.0      |
+| 数据集        | WIKI          | WIKI          |
+| 输出          | 概率               | 概率          |
+| 准确性        | 1卡:66.74% | 1卡:66.73% |
 
 | 参数          | SDNE            |
 | ------------- | ------------------ |
diff --git a/research/gnn/sdne/eval.py b/research/gnn/sdne/eval.py
index 940ffdc44a50f9b96ff9b26c2c6be1bcb5602065..1c6f3e09c7c96c855f49b2c3666363790ff8e3a3 100644
--- a/research/gnn/sdne/eval.py
+++ b/research/gnn/sdne/eval.py
@@ -15,6 +15,7 @@
 """
 python eval.py
 """
+
 import argparse
 
 from mindspore import context
@@ -26,37 +27,35 @@ from src import check_reconstruction
 from src import reconstruction_precision_k
 from src import cfg
 
+
 parser = argparse.ArgumentParser(description='Mindspore SDNE Training')
 
 # Datasets
 parser.add_argument('--data_url', type=str, default='', help='dataset path')
 parser.add_argument('--data_path', type=str, default='', help='data path')
 parser.add_argument('--label_path', type=str, default='', help='label path')
-parser.add_argument('--dataset', type=str, default='WIKI',
-                    choices=['WIKI', 'GRQC'])
+parser.add_argument('--dataset', type=str, default='WIKI', choices=['WIKI', 'GRQC'])
 
 # Checkpoints
 parser.add_argument('-c', '--checkpoint', required=True, type=str, metavar='PATH',
                     help='path to save checkpoint (default: checkpoint)')
 
 # Device options
+parser.add_argument("--device_target", type=str, choices=["Ascend", "GPU", "CPU"], default="Ascend")
 parser.add_argument('--device_id', type=int, default=0)
 
-args = parser.parse_args()
 
-if __name__ == "__main__":
-    context.set_context(mode=context.GRAPH_MODE, device_target='Ascend', device_id=args.device_id)
+def run_eval():
+    args = parser.parse_args()
+    context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target, device_id=args.device_id)
 
     config = cfg[args.dataset]
 
     data_path = ''
-    label_path = ''
     if args.data_url == '':
         data_path = args.data_path
-        label_path = args.label_path
     else:
         data_path = args.data_url + (config['data_path'] if args.data_path == '' else args.data_path)
-        label_path = args.data_url + (config['label_path'] if args.label_path == '' else args.label_path)
 
     dataset = GraphDataset(args.dataset, data_path, batch=config['batch'], delimiter=config['delimiter'])
     net = SDNEWithLossCell(SDNE(dataset.get_node_size(), hidden_size=config['hidden_size']), SDNELoss1())
@@ -75,3 +74,7 @@ if __name__ == "__main__":
         else:
             check_reconstruction(embeddings, dataset.get_graph(), idx2node,
                                  config['reconstruction']['k_query'])
+
+
+if __name__ == "__main__":
+    run_eval()
diff --git a/research/gnn/sdne/requirements.txt b/research/gnn/sdne/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d85e6df16ebe130b6fba5df8ad08fc841f46b7d4
--- /dev/null
+++ b/research/gnn/sdne/requirements.txt
@@ -0,0 +1,3 @@
+networkx
+tqdm
+numpy
\ No newline at end of file
diff --git a/research/gnn/sdne/scripts/run_eval_gpu.sh b/research/gnn/sdne/scripts/run_eval_gpu.sh
new file mode 100644
index 0000000000000000000000000000000000000000..56b464c5fbfa8ba03a80063f0c1daf4c1d976c2d
--- /dev/null
+++ b/research/gnn/sdne/scripts/run_eval_gpu.sh
@@ -0,0 +1,39 @@
+#!/bin/bash
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+DATASET_NAME=$1
+DATA_URL=$2
+CHECKPOINT=$3
+DEVICE_ID=$4
+
+if [ $# -ne 4 ]; then
+  echo "=============================================================================================================="
+  echo "Please run the script as: "
+  echo "bash scripts/run_eval_gpu.sh DATASET_NAME DATA_URL CHECKPOINT DEVICE_ID"
+  echo "For example: bash scripts/run_eval_gpu.sh WIKI ./dataset ./ckpt/SDNE-40_1.ckpt 0"
+  echo "It is better to use the absolute path."
+  echo "=============================================================================================================="
+  exit 1
+fi
+
+python eval.py  \
+    --device_target "GPU" \
+    --device_id $DEVICE_ID \
+    --dataset "$DATASET_NAME" \
+    --data_url "$DATA_URL" \
+    --checkpoint "$CHECKPOINT" \
+    > eval.log 2>&1 &
+echo "start evaluation"
diff --git a/research/gnn/sdne/scripts/run_standalone_train_gpu.sh b/research/gnn/sdne/scripts/run_standalone_train_gpu.sh
new file mode 100644
index 0000000000000000000000000000000000000000..05ba7d8ec4cfefcab3c60e8bceb56f163d7a0c29
--- /dev/null
+++ b/research/gnn/sdne/scripts/run_standalone_train_gpu.sh
@@ -0,0 +1,42 @@
+#!/bin/bash
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+DATASET_NAME=$1
+DATA_URL=$2
+CKPT_URL=$3
+EPOCH_NUM=$4
+DEVICE_ID=$5
+
+if [ $# -ne 5 ]; then
+  echo "=============================================================================================================="
+  echo "Please run the script as: "
+  echo "bash scripts/run_standalone_train_gpu.sh DATASET_NAME DATA_URL CKPT_URL EPOCH_NUM DEVICE_ID"
+  echo "For example: bash scripts/run_standalone_train_gpu.sh WIKI ./dataset ./ckpt 40 0"
+  echo "It is better to use the absolute path."
+  echo "=============================================================================================================="
+  exit 1
+fi
+
+python train.py  \
+    --device_target "GPU" \
+    --device_id $DEVICE_ID \
+    --epochs $EPOCH_NUM \
+    --dataset "$DATASET_NAME" \
+    --data_url "$DATA_URL" \
+    --ckpt_url "$CKPT_URL" \
+    > train.log 2>&1 &
+echo "start training"
+cd ../
diff --git a/research/gnn/sdne/src/config.py b/research/gnn/sdne/src/config.py
index b1c4aeaba3843495b4eef30bb1404be777f09e72..d15afad037e420fe29c830f2836063c1b251d35b 100644
--- a/research/gnn/sdne/src/config.py
+++ b/research/gnn/sdne/src/config.py
@@ -39,8 +39,8 @@ cfg = EasyDict({
         'ckpt_step': 32,
         'ckpt_max': 10,
         'generate_emb': False,
-        'data_path': './Wiki_edgelist.txt',
-        'label_path': './wiki_labels.txt',
+        'data_path': 'Wiki_edgelist.txt',
+        'label_path': 'wiki_labels.txt',
         'reconstruction': {
             'check': True,
             'k_query': [1, 10, 20, 100, 200, 1000, 2000, 6000, 8000, 10000],
@@ -71,7 +71,7 @@ cfg = EasyDict({
         'ckpt_step': 32,
         'ckpt_max': 1,
         'generate_emb': False,
-        'data_path': './ca-Grqc.txt',
+        'data_path': 'ca-Grqc.txt',
         'label_path': '',
         'reconstruction': {
             'check': True,
diff --git a/research/gnn/sdne/train.py b/research/gnn/sdne/train.py
index 57c8b826a627b7fe86b9fa8f12c5941dca151e3e..d2b560f213711bfa01d15ee3456ef4e6addf9b8c 100644
--- a/research/gnn/sdne/train.py
+++ b/research/gnn/sdne/train.py
@@ -35,10 +35,9 @@ from src import cfg
 
 if cfg.is_modelarts:
     import moxing as mox
-
-DATA_URL = "/cache/data/"
-CKPT_URL = "/cache/ckpt/"
-TMP_URL = "/cache/tmp/"
+    DATA_URL = "/cache/data/"
+    CKPT_URL = "/cache/ckpt/"
+    TMP_URL = "/cache/tmp/"
 
 parser = argparse.ArgumentParser(description='Mindspore SDNE Training')
 
@@ -60,16 +59,16 @@ parser.add_argument('-c', '--checkpoint', default='checkpoint', type=str, metava
 parser.add_argument('--pretrained', type=bool, default=False, help='use pre-trained model')
 
 # Device options
+parser.add_argument("--device_target", type=str, choices=["Ascend", "GPU", "CPU"], default="Ascend")
 parser.add_argument('--device_id', type=int, default=0)
 
-args = parser.parse_args()
 
 class EvalCallBack(Callback):
     """
     Precision verification using callback function.
     """
     # define the operator required and config
-    def __init__(self, ds, c, lp='', tmp_dir='./tmp/'):
+    def __init__(self, ds, c, args, lp='', tmp_dir='./tmp/'):
         super(EvalCallBack, self).__init__()
         self.ds = ds
         self.gemb = c['generate_emb']
@@ -78,6 +77,7 @@ class EvalCallBack(Callback):
         self.batch = c['batch']
         self.label_path = lp
         self.tmp_dir = tmp_dir
+        self.args = args
 
     # define operator function after finishing train
     def end(self, run_context):
@@ -91,7 +91,7 @@ class EvalCallBack(Callback):
         idx2node = self.ds.get_idx2node()
         embeddings = None
         if self.rec['check']:
-            if args.dataset == 'WIKI':
+            if self.args.dataset == 'WIKI':
                 reconstructions, vertices = backbone.get_reconstructions(data, idx2node)
                 reconstruction_precision_k(reconstructions, vertices, graph, self.rec['k_query'])
             else:
@@ -113,15 +113,11 @@ def count_params(n):
         total_param += np.prod(param.shape)
     return total_param
 
-if __name__ == "__main__":
-    context.set_context(mode=context.GRAPH_MODE, device_target='Ascend', device_id=args.device_id)
-
+def run_train():
+    args = parser.parse_args()
     config = cfg[args.dataset]
 
-    # fix all random seed
-    mindspore.set_seed(1)
-    np.random.seed(1)
-    random.seed(1)
+    context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target, device_id=args.device_id)
 
     # set all paths
     ckpt_url = ''
@@ -133,8 +129,8 @@ if __name__ == "__main__":
             os.makedirs(DATA_URL, 0o755)
         mox.file.copy_parallel(args.data_url, DATA_URL)
         print("data finish copy to %s." % DATA_URL)
-        data_path = DATA_URL + (config['data_path'] if args.data_path == '' else args.data_path)
-        label_path = DATA_URL + (config['label_path'] if args.label_path == '' else args.label_path)
+        data_path = os.path.join(DATA_URL, (config['data_path'] if args.data_path == '' else args.data_path))
+        label_path = os.path.join(DATA_URL, (config['label_path'] if args.label_path == '' else args.label_path))
         if not os.path.exists(CKPT_URL):
             os.makedirs(CKPT_URL, 0o755)
         ckpt_url = CKPT_URL
@@ -146,8 +142,9 @@ if __name__ == "__main__":
             data_path = args.data_path
             label_path = args.label_path
         else:
-            data_path = args.data_url + (config['data_path'] if args.data_path == '' else args.data_path)
-            label_path = args.data_url + (config['label_path'] if args.label_path == '' else args.label_path)
+            data_path = os.path.join(args.data_url, (config['data_path'] if args.data_path == '' else args.data_path))
+            label_path = os.path.join(args.data_url,
+                                      (config['label_path'] if args.label_path == '' else args.label_path))
         tmp_url = args.train_url
         ckpt_url = args.ckpt_url
 
@@ -173,7 +170,7 @@ if __name__ == "__main__":
     ckpoint_cb = ModelCheckpoint(prefix="SDNE_" + args.dataset, config=config_ck, directory=ckpt_url)
     time_cb = TimeMonitor(data_size=dataset.get_node_size())
     loss_cb = LossMonitor()
-    eval_cb = EvalCallBack(dataset, config, label_path, tmp_url)
+    eval_cb = EvalCallBack(dataset, config, args, label_path, tmp_url)
     cb = [ckpoint_cb, time_cb, loss_cb, eval_cb]
 
     model.train(args.epochs, dataset.get_dataset(), callbacks=cb, dataset_sink_mode=False)
@@ -181,3 +178,12 @@ if __name__ == "__main__":
     if cfg.is_modelarts:
         mox.file.copy_parallel(CKPT_URL, args.train_url)
         mox.file.copy_parallel(TMP_URL, args.train_url)
+
+
+if __name__ == "__main__":
+    # fix all random seed
+    mindspore.set_seed(1)
+    np.random.seed(1)
+    random.seed(1)
+
+    run_train()