diff --git a/research/cv/ssd_resnet34/README.md b/research/cv/ssd_resnet34/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..0a3e94a16d9270288c9000dc0c79df45ab42ff58
--- /dev/null
+++ b/research/cv/ssd_resnet34/README.md
@@ -0,0 +1,384 @@
+# Contents
+
+- [Contents](#contents)
+    - [SSD Description](#ssd-description)
+    - [Model Architecture](#model-architecture)
+    - [Dataset](#dataset)
+    - [Environment Requirements](#environment-requirements)
+    - [Quick Start](#quick-start)
+        - [Prepare the model](#prepare-the-model)
+        - [Run the scripts](#run-the-scripts)
+    - [Script Description](#script-description)
+        - [Script and Sample Code](#script-and-sample-code)
+        - [Script Parameters](#script-parameters)
+        - [Training Process](#training-process)
+            - [Training on Ascend](#training-on-ascend)
+        - [Evaluation Process](#evaluation-process)
+            - [Evaluation on Ascend](#evaluation-on-ascend)
+            - [Performance](#performance)
+        - [Export Process](#export-process)
+            - [Export](#export)
+        - [Inference Process](#inference-process)
+            - [Inference](#inference)
+    - [Description of Random Situation](#description-of-random-situation)
+    - [ModelZoo Homepage](#modelzoo-homepage)
+
+## [SSD Description](#contents)
+
+SSD discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes.
+
+[Paper](https://arxiv.org/abs/1512.02325):   Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg. SSD: Single Shot MultiBox Detector. European Conference on Computer Vision (ECCV), 2016.
+
+## [Model Architecture](#contents)
+
+The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification, which is called the base network. Auxiliary structure is then added to the network to produce detections.
+
+We present five different base architectures.
+
+- **ssd300**, referenced from the paper, using mobilenetv2 as the backbone and the same bbox predictor as the paper presents.
+- **ssd-mobilenet-v1-fpn**, using mobilenet-v1 and FPN as the feature extractor with weight-shared box predictors.
+- **ssd-resnet50-fpn**, using resnet50 and FPN as the feature extractor with weight-shared box predictors.
+- **ssd-vgg16**, referenced from the paper, using vgg16 as the backbone and the same bbox predictor as the paper presents.
+- **ssd-resnet34**, referenced from the paper, using resnet34 as the backbone and the same bbox predictor as the paper presents.
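+
+As used elsewhere in this repository (for example in `eval.py`), the ssd-resnet34 variant is constructed roughly as follows:
+
+```python
+# Sketch: build the SSD-ResNet34 network the way eval.py in this repository does.
+from mindspore import Tensor
+from src.ssd import SsdInferWithDecoder, ssd_resnet34
+from src.config import config
+from src.box_utils import default_boxes
+
+net = ssd_resnet34(config=config)                              # resnet34 backbone + SSD head
+net = SsdInferWithDecoder(net, Tensor(default_boxes), config)  # wraps box decoding for inference
+```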
+
+## [Dataset](#contents)
+
+Note that you can run the scripts with the dataset mentioned in the original paper or one widely used for this network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below.
+
+Dataset used: [COCO2017](<http://images.cocodataset.org/>)
+
+- Dataset size: 19G
+    - Train: 18G, 118000 images
+    - Val: 1G, 5000 images
+    - Annotations: 241M, instances, captions, person_keypoints etc.
+- Data format: image and json files
+    - Note: Data will be processed in dataset.py
+
+## [Environment Requirements](#contents)
+
+- Install [MindSpore](https://www.mindspore.cn/install/en).
+
+- Download the dataset COCO2017.
+
+- We use COCO2017 as the training dataset in this example by default, and you can also use your own datasets.
+  First, install Cython, pycocotools and opencv to process data and to get the evaluation result.
+
+    ```shell
+    pip install Cython
+    pip install pycocotools
+    pip install opencv-python
+    ```
+
+    1. If the COCO dataset is used. **Select dataset `coco` when running the script.**
+
+       Change the `coco_root` and other settings you need in `src/config_xxx.py`. The directory structure is as follows:
+
+       ```shell
+       .
+       └─coco_dataset
+         ├─annotations
+           ├─instances_train2017.json
+           └─instances_val2017.json
+         ├─val2017
+         └─train2017
+       ```
+
+    2. If the VOC dataset is used. **Select dataset `voc` when running the script.**
+       Change `classes`, `num_classes`, `voc_json` and `voc_root` in `src/config_xxx.py`. `voc_json` is the path of the json file in COCO format for evaluation, and `voc_root` is the path of the VOC dataset. The directory structure is as follows:
+
+       ```shell
+       .
+       └─voc_dataset
+         ├─train
+           ├─0001.jpg
+           └─0001.xml
+           ...
+           ├─xxxx.jpg
+           └─xxxx.xml
+         └─eval
+           ├─0001.jpg
+           └─0001.xml
+           ...
+           ├─xxxx.jpg
+           └─xxxx.xml
+       ```
+
+    3. If your own dataset is used. **Select dataset `other` when running the script.**
+       Organize the dataset information into a TXT file; each row in the file is as follows:
+
+       ```shell
+       train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2
+       ```
+
+       Each row is an image annotation split by spaces: the first column is the relative path of the image, and the others are boxes and class information in the format [xmin,ymin,xmax,ymax,class]. We read images from the path obtained by joining `image_dir` (the dataset directory) with the relative path in `anno_path` (the TXT file path); `image_dir` and `anno_path` are set in `src/config_xxx.py`. A minimal parsing sketch is shown below.
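+
+       As a rough illustration (not part of the repository), a line in this format can be parsed like this:
+
+       ```python
+       # Parse one annotation line of the "other" dataset format described above.
+       line = "train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2"
+       fields = line.split(" ")
+       image_rel_path = fields[0]  # joined with image_dir when reading the image
+       boxes = []
+       for box_str in fields[1:]:
+           xmin, ymin, xmax, ymax, cls = map(int, box_str.split(","))
+           boxes.append([xmin, ymin, xmax, ymax, cls])
+       print(image_rel_path, boxes)
+       ```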
+
+## [Quick Start](#contents)
+
+### Prepare the model
+
+1. Choose the model by changing `using_model` in `src/config.py`. The optional models are `ssd300`, `ssd_mobilenet_v1_fpn`, `ssd_vgg16`, `ssd_resnet50_fpn` and `ssd_resnet34`.
+2. Change the dataset config in the corresponding config file `src/config_xxx.py`, where `xxx` is the corresponding backbone network name.
+3. If you are running with `ssd_mobilenet_v1_fpn`, `ssd_resnet50_fpn` or `ssd_resnet34`, you need a pretrained model for `mobilenet_v1`, `resnet50` or `resnet34`. Set the checkpoint path to `feature_extractor_base_param` in `src/config_xxx.py` (see the sketch below). For more detail about training the pre-trained model, please refer to the corresponding backbone network.
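+
+A minimal sketch of the settings these steps touch; the key names are taken from the steps above:
+
+```python
+# Hypothetical sketch: the config values changed in steps 1-3.
+# src/config.py (step 1): select the model variant.
+using_model = "ssd_resnet34"
+# src/config_ssd_resnet34.py (step 3): point to the pretrained resnet34 checkpoint.
+feature_extractor_base_param = "/path/to/resnet34_pretrained.ckpt"
+```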
+
+### Run the scripts
+
+After installing MindSpore via the official website, you can start training and evaluation as follows:
+
+- running on Ascend
+
+```shell
+# distributed training on Ascend
+sh scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH] [PRE_TRAINED_PATH](optional)
+
+# run eval on Ascend
+sh scripts/run_eval.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [CHECKPOINT_PATH] [MINDRECORD_PATH]
+
+# run inference on Ascend310, MINDIR_PATH is the mindir model which you can export from checkpoint using export.py
+bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
+```
+
+## [Script Description](#contents)
+
+### [Script and Sample Code](#contents)
+
+```shell
+  └─ ssd_resnet34
+    ├─ ascend310_infer
+    ├─ scripts
+      ├─ run_distribute_train.sh      ## shell script for distributed training on Ascend 910
+      ├─ run_eval.sh                  ## shell script for eval on Ascend 910
+      ├─ run_infer_310.sh             ## shell script for eval on Ascend 310
+      └─ run_standalone_train.sh      ## shell script for standalone training on Ascend 910
+    ├─ src
+      ├─ __init__.py                  ## init file
+      ├─ anchor_generator.py          ## anchor generator
+      ├─ box_util.py                  ## bbox utils
+      ├─ callback.py                  ## callback for training and evaluation
+      ├─ config.py                    ## total config
+      ├─ config_ssd_resnet34.py       ## ssd_resnet34 config
+      ├─ dataset.py                   ## create dataset and process dataset
+      ├─ eval_utils.py                ## eval utils
+      ├─ lr_schedule.py               ## learning rate generator
+      ├─ init_params.py               ## parameter utils
+      ├─ resnet34.py                  ## resnet34 architecture
+      ├─ ssd.py                       ## ssd architecture
+      └─ ssd_resnet34.py              ## ssd_resnet34 architecture
+    ├─ eval.py                        ## eval script
+    ├─ export.py                      ## export mindir script
+    ├─ postprocess.py                 ## eval on Ascend 310
+    ├─ README.md                      ## English descriptions about SSD
+    ├─ README_CN.md                   ## Chinese descriptions about SSD
+    ├─ requirements.txt               ## requirements
+    └─ train.py                       ## train script
+
+### [Script Parameters](#contents)
+
+  ```shell
+  Major parameters in train.py and config.py are as follows:
+
+    "device_num": 1                                  # Use device nums
+    "lr": 0.075                                      # Learning rate init value
+    "dataset": coco                                  # Dataset name
+    "epoch_size": 500                                # Epoch size
+    "batch_size": 32                                 # Batch size of input tensor
+    "pre_trained": None                              # Pretrained checkpoint file path
+    "pre_trained_epoch_size": 0                      # Pretrained epoch size
+    "save_checkpoint_epochs": 10                     # The epoch interval between two checkpoints. By default, the checkpoint will be saved per 10 epochs
+    "loss_scale": 1024                               # Loss scale
+    "filter_weight": False                           # Load parameters in head layer or not. If the class numbers of train dataset is different from the class numbers in pre_trained checkpoint, please set True.
+    "freeze_layer": "none"                           # Freeze the backbone parameters or not, support none and backbone.
+
+    "class_num": 81                                  # Dataset class number
+    "image_shape": [300, 300]                        # Image height and width used as input to the model
+    "mindrecord_dir": "/data/MindRecord_COCO"        # MindRecord path
+    "coco_root": "/data/coco2017"                    # COCO2017 dataset path
+    "voc_root": "/data/voc_dataset"                  # VOC original dataset path
+    "voc_json": "annotations/voc_instances_val.json" # is the path of json file with coco format for evaluation
+    "image_dir": ""                                  # Other dataset image path, if coco or voc used, it will be useless
+    "anno_path": ""                                  # Other dataset annotation path, if coco or voc used, it will be useless
+
+  ```
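+
+These fields are plain attributes of the `config` object, so scripts in this repository override them at runtime (as `eval.py` does); for example:
+
+```python
+# Sketch: override dataset paths on the config object before running, as eval.py does.
+from src.config import config
+
+config.coco_root = "/data/coco2017"
+config.mindrecord_dir = "/data/MindRecord_COCO"
+```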
+
+### [Training Process](#contents)
+
+To train the model, run `train.py`. If `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/convert_dataset.html) files from `coco_root` (coco dataset), `voc_root` (voc dataset) or `image_dir` and `anno_path` (own dataset). **Note that if mindrecord_dir isn't empty, the files in mindrecord_dir are used instead of raw images.**
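+
+For reference, a sketch of how the conversion can be invoked directly, using the helper that `eval.py` in this repository calls (the meaning of the third argument as a training toggle is an assumption):
+
+```python
+# Sketch: generate the MindRecord files outside of train.py.
+from src.config import config
+from src.dataset import create_mindrecord
+
+config.coco_root = "/data/coco2017"
+config.mindrecord_dir = "/data/MindRecord_COCO"
+mindrecord_file = create_mindrecord("coco", "ssd.mindrecord", True)
+print("mindrecord file:", mindrecord_file)
+```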
+
+#### Training on Ascend
+
+- Distribute mode
+
+```shell
+     sh scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH] [PRE_TRAINED_PATH](optional)
+```
+
+- Standalone training
+
+```shell
+     sh scripts/run_standalone_train.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH] [PRE_TRAINED_PATH](optional)
+```
+
+These scripts require five or six parameters.
+
+- `RANK_TABLE_FILE`: the path of [rank_table.json](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools), it is better to use an absolute path.
+- `DATASET`: the dataset mode for distributed training.
+- `DATASET_PATH`: the dataset path for distributed training.
+- `MINDRECORD_PATH`: the mindrecord path for distributed training.
+- `TRAIN_OUTPUT_PATH`: the training output path for distributed training.
+- `PRE_TRAINED_PATH`: the path of the pretrained checkpoint file, it is better to use an absolute path.
+
+Training results are stored in the train output path, in a folder named "log". There you can find checkpoint files together with results like the following.
+
+```shell
+epoch: 1 step: 458, loss is 4.185711
+epoch time: 138740.569 ms, per step time: 302.927 ms
+epoch: 2 step: 458, loss is 4.3121023
+epoch time: 47116.166 ms, per step time: 102.874 ms
+epoch: 3 step: 458, loss is 3.2209284
+epoch time: 47149.108 ms, per step time: 102.946 ms
+epoch: 4 step: 458, loss is 3.5159926
+epoch time: 47174.645 ms, per step time: 103.001 ms
+...
+epoch: 497 step: 458, loss is 1.0916114
+epoch time: 47164.002 ms, per step time: 102.978 ms
+epoch: 498 step: 458, loss is 1.157409
+epoch time: 47172.836 ms, per step time: 102.997 ms
+epoch: 499 step: 458, loss is 1.2065268
+epoch time: 47155.245 ms, per step time: 102.959 ms
+epoch: 500 step: 458, loss is 1.1856415
+epoch time: 47666.430 ms, per step time: 104.075 ms
+```
+
+#### Transfer Training
+
+You can train your own model based on either a pretrained classification model or a pretrained detection model. You can perform transfer training by the following steps.
+
+1. Convert your own dataset to COCO or VOC style. Otherwise you have to add your own data preprocessing code.
+2. Change `config_xxx.py` according to your own dataset, especially the `num_classes`.
+3. Prepare a pretrained checkpoint. You can load it via the `pre_trained` argument. Transfer training means a new training job, so just keep `pre_trained_epoch_size` at its default value `0`.
+4. Set the argument `filter_weight` to `True` while calling `train.py`; this filters the final detection box weights out of the pretrained model (see the sketch after this list).
+5. Build your own bash scripts using the new config and arguments for further convenience.
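+
+A rough sketch of what the weight filtering in step 4 amounts to; the head-parameter name fragments are assumptions about this repository's naming:
+
+```python
+# Hypothetical sketch: load a pretrained checkpoint while dropping the
+# class-number-dependent detection head weights (what filter_weight enables).
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+from src.ssd import ssd_resnet34
+from src.config import config
+
+net = ssd_resnet34(config=config)
+param_dict = load_checkpoint("/path/to/pretrained.ckpt")
+filtered = {k: v for k, v in param_dict.items()
+            if "multi_loc_layers" not in k and "multi_cls_layers" not in k}  # assumed head names
+load_param_into_net(net, filtered)
+```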
+
+### [Evaluation Process](#contents)
+
+#### Evaluation on Ascend
+
+```shell
+sh scripts/run_eval.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [CHECKPOINT_PATH] [MINDRECORD_PATH]
+```
+
+This script requires five parameters.
+
+- `DEVICE_ID`: the device id for eval.
+- `DATASET`: the dataset mode of the evaluation dataset.
+- `DATASET_PATH`: the dataset path for evaluation.
+- `CHECKPOINT_PATH`: the absolute path of the checkpoint file.
+- `MINDRECORD_PATH`: the mindrecord path for evaluation.
+
+> The checkpoint can be produced during the training process.
+
+Inference results are stored in the eval path, in a folder named "log". There you can find results like the following in the log.
+
+```shell
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.240
+ Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.360
+ Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.258
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.016
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.229
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.446
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.256
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.389
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.427
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.077
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.439
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.734
+
+========================================
+
+mAP: 0.24011857000302622
+
+```
+
+## Inference Process
+
+### [Export MindIR](#contents)
+
+```shell
+python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
+```
+
+The `ckpt_file` parameter is required.
+`FILE_FORMAT` should be one of ["AIR", "MINDIR"].
+
+### Infer on Ascend310
+
+Before performing inference, the MindIR file must be exported by the `export.py` script. We only provide an example of inference using the MINDIR model.
+Currently batch_size can only be set to 1. The precision calculation process needs about 70G+ of memory, otherwise the process will be killed for exceeding the memory limit.
+
+```shell
+# Ascend310 inference
+bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [ANNO_FILE] [DEVICE_ID]
+```
+
+- `DVPP` is mandatory, and must be chosen from ["DVPP", "CPU"]; it is case-insensitive. Note that the image shape for ssd_vgg16 inference is [300, 300]; since the DVPP hardware restricts the width to multiples of 16 and the height to multiples of 2, this network needs CPU operators to preprocess the images.
+- `DEVICE_ID` is optional, default value is 0.
+
+### Result
+
+Inference results are saved in the current path; you can find results like the following in the acc.log file.
+
+```bash
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.250
+ Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.374
+ Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.266
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.018
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.241
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.462
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.260
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.399
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.435
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.090
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.449
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.739
+0.249879750926743
+```
+
+## [Model Description](#contents)
+
+### [Performance](#contents)
+
+#### Train Performance
+
+| Parameters          | Ascend                      |
+| ------------------- | ----------------------------|
+| Model Version       | SSD_ResNet34                |
+| Resource            | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G|
+| Uploaded Date       | 08/31/2021 (month/day/year) |
+| MindSpore Version   | 1.3                       |
+| Dataset             | COCO2017                    |
+| Training Parameters | epoch = 500, batch_size = 32|
+| Optimizer           | Momentum                    |
+| Loss Function       | Sigmoid Cross Entropy, SmoothL1Loss|
+| Speed               | 8pcs: 101ms/step            |
+| Total time          | 8pcs: 8.34h                 |
+
+#### Inference Performance
+
+| Parameters          | Ascend                      |
+| ------------------- | --------------------------- |
+| Model Version       | SSD_ResNet34                   |
+| Resource            | Ascend 910                  |
+| Uploaded Date       | 08/31/2021 (month/day/year) |
+| MindSpore Version   | 1.3                       |
+| Dataset             | COCO2017                    |
+| outputs             | mAP                         |
+| Accuracy            | IoU=0.50: 24.0%             |
+| Model for inference | 98.77M (.ckpt file)            |
+
+## [Description of Random Situation](#contents)
+
+In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
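+
+A minimal sketch of this kind of seeding (the exact call sites live in dataset.py and train.py of this repository):
+
+```python
+# Fix randomness for reproducibility: dataset shuffling and global operations.
+import mindspore.dataset as ds
+from mindspore.common import set_seed
+
+ds.config.set_seed(1)  # deterministic dataset shuffling
+set_seed(1)            # global seed for weight initialization etc.
+```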
+
+## [ModelZoo Homepage](#contents)
+
+ Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).  
\ No newline at end of file
diff --git a/research/cv/ssd_resnet34/README_CN.md b/research/cv/ssd_resnet34/README_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..ffc2f47c0e9d3aeae74e4a92a939e05875f6c856
--- /dev/null
+++ b/research/cv/ssd_resnet34/README_CN.md
@@ -0,0 +1,338 @@
+# 鐩綍
+
+<!-- TOC -->
+
+- [鐩綍](#鐩綍)
+- [SSD璇存槑](#ssd璇存槑)
+- [妯″瀷鏋舵瀯](#妯″瀷鏋舵瀯)
+- [鏁版嵁闆哴(#鏁版嵁闆�)
+- [鐜瑕佹眰](#鐜瑕佹眰)
+- [蹇€熷叆闂╙(#蹇€熷叆闂�)
+- [鑴氭湰璇存槑](#鑴氭湰璇存槑)
+    - [鑴氭湰鍙婃牱渚嬩唬鐮乚(#鑴氭湰鍙婃牱渚嬩唬鐮�)
+    - [鑴氭湰鍙傛暟](#鑴氭湰鍙傛暟)
+    - [璁粌杩囩▼](#璁粌杩囩▼)
+        - [Ascend涓婅缁僝(#ascend涓婅缁�)
+    - [璇勪及杩囩▼](#璇勪及杩囩▼)
+        - [Ascend澶勭悊鍣ㄧ幆澧冭瘎浼癩(#ascend澶勭悊鍣ㄧ幆澧冭瘎浼�)
+        - [鎬ц兘](#鎬ц兘)
+    - [瀵煎嚭杩囩▼](#瀵煎嚭杩囩▼)
+        - [瀵煎嚭](#瀵煎嚭)
+    - [鎺ㄧ悊杩囩▼](#鎺ㄧ悊杩囩▼)
+        - [鎺ㄧ悊](#鎺ㄧ悊)
+- [闅忔満鎯呭喌璇存槑](#闅忔満鎯呭喌璇存槑)
+- [ModelZoo涓婚〉](#modelzoo涓婚〉)
+
+<!-- /TOC -->
+
+# SSD Description
+
+SSD discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes.
+
+[Paper](https://arxiv.org/abs/1512.02325): Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg. SSD: Single Shot MultiBox Detector. European Conference on Computer Vision (ECCV), 2016.
+
+# Model Architecture
+
+The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification, which is called the base network. Auxiliary structure is then added to the network to produce detections.
+
+# Dataset
+
+Dataset used: [COCO2017](<http://images.cocodataset.org/>)
+
+- Dataset size: 19 GB
+    - Train: 18 GB, 118000 images
+    - Val: 1 GB, 5000 images
+    - Annotations: 241 MB, instances, captions, person_keypoints etc.
+- Data format: image and json files
+    - Note: Data will be processed in dataset.py
+
+# Environment Requirements
+
+- Install [MindSpore](https://www.mindspore.cn/install).
+
+- Download the dataset COCO2017.
+
+- This example uses COCO2017 as the training dataset by default, and you can also use your own datasets.
+
+  1. If the COCO dataset is used. **Select dataset `coco` when running the script.**
+     Install Cython and pycocotools; you can also install mmcv for data processing.
+
+     ```python
+     pip install Cython
+     pip install pycocotools
+     ```
+
+     And change `COCO_ROOT` and other settings you need in `config.py`. The directory structure is as follows:
+
+     ```text
+     .
+     └─cocodataset
+       ├─annotations
+         ├─instances_train2017.json
+         └─instances_val2017.json
+       ├─val2017
+       └─train2017
+     ```
+
+  2. If your own dataset is used. **Select dataset `other` when running the script.**
+     Organize the dataset information into a TXT file; each row is as follows:
+
+     ```text
+     train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2
+     ```
+
+     Each row is an image annotation split by spaces: the first column is the relative path of the image, and the rest are boxes and class information in the format [xmin,ymin,xmax,ymax,class]. We read images from the path obtained by joining `IMAGE_DIR` (the dataset directory) with the relative path in `ANNO_PATH` (the TXT file path). Set `IMAGE_DIR` and `ANNO_PATH` in `config_ssd_resnet34.py`.
+
+# 蹇€熷叆闂�
+
+閫氳繃瀹樻柟缃戠珯瀹夎MindSpore鍚庯紝鎮ㄥ彲浠ユ寜鐓у涓嬫楠よ繘琛岃缁冨拰璇勪及锛�
+
+- Ascend澶勭悊鍣ㄧ幆澧冭繍琛�
+
+```shell
+# Ascend鍒嗗竷寮忚缁�
+sh scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH][PRE_TRAINED_PATH](optional)
+
+```
+
+```shell
+# Ascend鍗曞崱璁粌
+sh scripts/run_standalone_train.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH][PRE_TRAINED_PATH](optional)
+
+```
+
+```shell
+# Ascend澶勭悊鍣ㄧ幆澧冭繍琛宔val
+sh scripts/run_eval.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [CHECKPOINT_PATH] [MINDRECORD_PATH]
+
+```
+
+# Script Description
+
+## Script and Sample Code
+
+```text
+  └─ ssd_resnet34
+    ├─ ascend310_infer
+    ├─ scripts
+      ├─ run_distribute_train.sh      ## shell script for distributed training on Ascend 910
+      ├─ run_standalone_train.sh      ## shell script for standalone training on Ascend 910
+      ├─ run_infer_310.sh             ## shell script for evaluation on Ascend 310
+      └─ run_eval.sh                  ## shell script for evaluation on Ascend 910
+    ├─ src
+      ├─ __init__.py                  ## init file
+      ├─ anchor_generator.py          ## anchor generator
+      ├─ box_util.py                  ## bbox utils
+      ├─ callback.py                  ## callback for training and evaluation
+      ├─ config.py                    ## total config
+      ├─ config_ssd_resnet34.py       ## ssd_resnet34 config
+      ├─ dataset.py                   ## create and process dataset
+      ├─ eval_utils.py                ## eval utils
+      ├─ init_params.py               ## parameter utils
+      ├─ lr_schedule.py               ## learning rate generator
+      ├─ resnet34.py                  ## resnet34 architecture
+      ├─ ssd.py                       ## SSD architecture
+      └─ ssd_resnet34.py              ## ssd_resnet34 architecture
+    ├─ eval.py                        ## evaluation script
+    ├─ export.py                      ## export checkpoint to MindIR for 310 inference
+    ├─ postprocess.py                 ## evaluation on Ascend 310
+    ├─ README.md                      ## English description of SSD
+    ├─ README_CN.md                   ## Chinese description of SSD
+    ├─ requirements.txt               ## requirements
+    └─ train.py                       ## training script
+```
+
+## Script Parameters
+
+  ```text
+  Major parameters in train.py and config_ssd_resnet34.py are as follows:
+
+    "device_num": 1                            # Number of devices used
+    "lr": 0.075                                # Initial learning rate
+    "dataset": coco                            # Dataset name
+    "epoch_size": 500                          # Epoch size
+    "batch_size": 32                           # Batch size of input tensor
+    "pre_trained": None                        # Pretrained checkpoint file path
+    "pre_trained_epoch_size": 0                # Pretrained epoch size
+    "save_checkpoint_epochs": 10               # The epoch interval between two checkpoints. By default, a checkpoint is saved every 10 epochs.
+    "loss_scale": 1024                         # Loss scale
+
+    "class_num": 81                            # Number of dataset classes
+    "image_shape": [300, 300]                  # Image height and width used as model input
+    "mindrecord_dir": "/data/MindRecord_COCO"  # MindRecord path
+    "coco_root": "/data/coco2017"              # COCO2017 dataset path
+    "voc_root": ""                             # VOC original dataset path
+    "image_dir": ""                            # Other dataset image path; invalid if coco or voc is used
+    "anno_path": ""                            # Other dataset annotation path; invalid if coco or voc is used
+
+  ```
+
+## 璁粌杩囩▼
+
+杩愯`train.py`璁粌妯″瀷銆傚鏋渀mindrecord_dir`涓虹┖锛屽垯浼氶€氳繃`coco_root`锛坈oco鏁版嵁闆嗭級鎴朻image_dir`鍜宍anno_path`锛堣嚜宸辩殑鏁版嵁闆嗭級鐢熸垚[MindRecord](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/convert_dataset.html)鏂囦欢銆�**娉ㄦ剰锛屽鏋渕indrecord_dir涓嶄负绌猴紝灏嗕娇鐢╩indrecord_dir浠f浛鍘熷鍥惧儚銆�**
+
+### Ascend涓婅缁�
+
+- 鍒嗗竷寮�
+
+```shell script
+   sh scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH][PRE_TRAINED_PATH](optional)
+```
+
+姝よ剼鏈渶瑕佷簲鎴栧叚涓弬鏁般€�
+
+- `RANK_TABLE_FILE`锛歔rank_table.json](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools)鐨勮矾寰勩€傛渶濂戒娇鐢ㄧ粷瀵硅矾寰勩€�
+
+- `DATASET`锛氬垎甯冨紡璁粌鐨勬暟鎹泦妯″紡銆�
+
+- `DATASET_PATH`锛氬垎甯冨紡璁粌鐨勬暟鎹泦璺緞銆�
+
+- `MINDRECORD_PATH`锛氬垎甯冨紡璁粌鐨刴indrecord鏂囦欢銆�
+
+- `TRAIN_OUTPUT_PATH`锛氳缁冭緭鍑虹殑妫€鏌ョ偣鏂囦欢璺緞銆傛渶濂戒娇鐢ㄧ粷瀵硅矾寰勩€�
+
+- `PRE_TRAINED_PATH`锛氶璁粌妫€鏌ョ偣鏂囦欢鐨勮矾寰勩€傛渶濂戒娇鐢ㄧ粷瀵硅矾寰勩€�
+
+  璁粌缁撴灉淇濆瓨鍦╰rain璺緞涓紝鏂囦欢澶瑰悕涓�"log"銆�  鎮ㄥ彲鍦ㄦ鏂囦欢澶逛腑鎵惧埌妫€鏌ョ偣鏂囦欢浠ュ強缁撴灉锛屽涓嬫墍绀恒€�
+
+```text
+epoch: 1 step: 458, loss is 4.185711
+epoch time: 138740.569 ms, per step time: 302.927 ms
+epoch: 2 step: 458, loss is 4.3121023
+epoch time: 47116.166 ms, per step time: 102.874 ms
+epoch: 3 step: 458, loss is 3.2209284
+epoch time: 47149.108 ms, per step time: 102.946 ms
+epoch: 4 step: 458, loss is 3.5159926
+epoch time: 47174.645 ms, per step time: 103.001 ms
+...
+epoch: 497 step: 458, loss is 1.0916114
+epoch time: 47164.002 ms, per step time: 102.978 ms
+epoch: 498 step: 458, loss is 1.157409
+epoch time: 47172.836 ms, per step time: 102.997 ms
+epoch: 499 step: 458, loss is 1.2065268
+epoch time: 47155.245 ms, per step time: 102.959 ms
+epoch: 500 step: 458, loss is 1.1856415
+epoch time: 47666.430 ms, per step time: 104.075 ms
+```
+
+## Evaluation Process
+
+### Evaluation on Ascend
+
+```shell script
+sh scripts/run_eval.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [CHECKPOINT_PATH] [MINDRECORD_PATH]
+```
+
+This script requires five parameters.
+
+- `DEVICE_ID`: the device ID for evaluation.
+- `DATASET`: the dataset mode of the evaluation dataset.
+- `DATASET_PATH`: the dataset path for evaluation.
+- `CHECKPOINT_PATH`: the absolute path of the checkpoint file.
+- `MINDRECORD_PATH`: the mindrecord file for evaluation.
+
+Inference results are stored in the eval path, in a folder named "log". You can find results like the following in the log.
+
+```text
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.240
+ Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.360
+ Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.258
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.016
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.229
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.446
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.256
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.389
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.427
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.077
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.439
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.734
+
+========================================
+
+mAP: 0.24011857000302622
+
+```
+
+## Inference Process
+
+### [Export MindIR](#contents)
+
+```shell
+python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
+```
+
+The `ckpt_file` parameter is required.
+`FILE_FORMAT` must be chosen from ["AIR", "MINDIR"].
+
+### Infer on Ascend310
+
+Before performing inference, the MindIR file must be exported by the `export.py` script. The following shows an example of inference using a MindIR model.
+Currently only batch_size 1 is supported. The precision calculation process needs about 70G+ of memory, otherwise the process will be killed for exceeding the memory limit.
+
+```shell
+# Ascend310 inference
+bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [ANNO_FILE] [DEVICE_ID]
+```
+
+- `DVPP` is mandatory, and must be chosen from ["DVPP", "CPU"]; it is case-insensitive. Note that the image shape for ssd_vgg16 inference is [300, 300]; since the DVPP hardware restricts the width to multiples of 16 and the height to multiples of 2, this network needs CPU operators to preprocess the images.
+- `DEVICE_ID` is optional, default value is 0.
+
+### Result
+
+Inference results are saved in the current execution path of the script; you can find the following accuracy results in the acc.log file.
+
+```bash
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.250
+ Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.374
+ Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.266
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.018
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.241
+ Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.462
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.260
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.399
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.435
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.090
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.449
+ Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.739
+0.249879750926743
+```
+
+## Model Description
+
+### Performance
+
+#### Training Performance
+
+| Parameters        | Ascend           |
+| ----------------- | ---------------- |
+| Model Version     | SSD_ResNet34     |
+| Resource          | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755 G |
+| Uploaded Date     | 2021-08-31       |
+| MindSpore Version | 1.3              |
+| Dataset           | COCO2017         |
+| Training Parameters | epoch = 500, batch_size = 32 |
+| Optimizer         | Momentum         |
+| Loss Function     | Sigmoid Cross Entropy, SmoothL1Loss |
+| Speed             | 101 ms/step      |
+| Total time        | 8.34 hours       |
+
+#### Inference Performance
+
+| Parameters        | Ascend           |
+| ----------------- | ---------------- |
+| Model Version     | SSD_ResNet34     |
+| Resource          | Ascend 910       |
+| Uploaded Date     | 2021-08-31       |
+| MindSpore Version | 1.3              |
+| Dataset           | COCO2017         |
+| Outputs           | mAP              |
+| Accuracy          | IoU=0.50: 24.0%  |
+| Model for inference | 98.77M (.ckpt file) |
+
+## Description of Random Situation
+
+In dataset.py, the seed is set inside the "create_dataset" function, and a random seed is also used in train.py.
+
+## ModelZoo Homepage
+
+ Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
\ No newline at end of file
diff --git a/research/cv/ssd_resnet34/ascend310_infer/CMakeLists.txt b/research/cv/ssd_resnet34/ascend310_infer/CMakeLists.txt
new file mode 100644
index 0000000000000000000000000000000000000000..cfe65d2c9689c2a174f290a170132c01bbf6cd78
--- /dev/null
+++ b/research/cv/ssd_resnet34/ascend310_infer/CMakeLists.txt
@@ -0,0 +1,14 @@
+cmake_minimum_required(VERSION 3.14.1)
+project(Ascend310Infer)
+add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)
+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -g -std=c++17 -Werror -Wall -fPIE -Wl,--allow-shlib-undefined")
+set(PROJECT_SRC_ROOT ${CMAKE_CURRENT_LIST_DIR}/)
+option(MINDSPORE_PATH "mindspore install path" "")
+include_directories(${MINDSPORE_PATH})
+include_directories(${MINDSPORE_PATH}/include)
+include_directories(${PROJECT_SRC_ROOT})
+find_library(MS_LIB libmindspore.so ${MINDSPORE_PATH}/lib)
+file(GLOB_RECURSE MD_LIB ${MINDSPORE_PATH}/_c_dataengine*)
+find_package(gflags REQUIRED)
+add_executable(main src/main.cc src/utils.cc)
+target_link_libraries(main ${MS_LIB} ${MD_LIB} gflags)
\ No newline at end of file
diff --git a/research/cv/ssd_resnet34/ascend310_infer/aipp.cfg b/research/cv/ssd_resnet34/ascend310_infer/aipp.cfg
new file mode 100644
index 0000000000000000000000000000000000000000..dc2f2aebb7233a22c8072dcc33f0f115b691cac8
--- /dev/null
+++ b/research/cv/ssd_resnet34/ascend310_infer/aipp.cfg
@@ -0,0 +1,26 @@
+aipp_op {
+    aipp_mode : static
+    input_format : YUV420SP_U8
+    related_input_rank : 0
+    csc_switch : true
+    rbuv_swap_switch : false
+    matrix_r0c0 : 256
+    matrix_r0c1 : 0
+    matrix_r0c2 : 359
+    matrix_r1c0 : 256
+    matrix_r1c1 : -88
+    matrix_r1c2 : -183
+    matrix_r2c0 : 256
+    matrix_r2c1 : 454
+    matrix_r2c2 : 0
+    input_bias_0 : 0
+    input_bias_1 : 128
+    input_bias_2 : 128
+
+    mean_chn_0 : 124
+    mean_chn_1 : 117
+    mean_chn_2 : 104
+    var_reci_chn_0 : 0.0171247538316637
+    var_reci_chn_1 : 0.0175070028011204
+    var_reci_chn_2 : 0.0174291938997821
+}
\ No newline at end of file
diff --git a/research/cv/ssd_resnet34/ascend310_infer/build.sh b/research/cv/ssd_resnet34/ascend310_infer/build.sh
new file mode 100644
index 0000000000000000000000000000000000000000..d8ea19ff828dc682c646ef37bf0f513a2523a808
--- /dev/null
+++ b/research/cv/ssd_resnet34/ascend310_infer/build.sh
@@ -0,0 +1,29 @@
+#!/bin/bash
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+if [ -d out ]; then
+    rm -rf out
+fi
+
+mkdir out
+cd out || exit
+
+if [ -f "Makefile" ]; then
+  make clean
+fi
+
+cmake .. \
+    -DMINDSPORE_PATH="`pip3.7 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`"
+make
\ No newline at end of file
diff --git a/research/cv/ssd_resnet34/ascend310_infer/inc/utils.h b/research/cv/ssd_resnet34/ascend310_infer/inc/utils.h
new file mode 100644
index 0000000000000000000000000000000000000000..abeb8fcbf11a042e6fefafa5868166d975e44dfb
--- /dev/null
+++ b/research/cv/ssd_resnet34/ascend310_infer/inc/utils.h
@@ -0,0 +1,32 @@
+/**
+ * Copyright 2021 Huawei Technologies Co., Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef MINDSPORE_INFERENCE_UTILS_H_
+#define MINDSPORE_INFERENCE_UTILS_H_
+
+#include <sys/stat.h>
+#include <dirent.h>
+#include <vector>
+#include <string>
+#include <memory>
+#include "include/api/types.h"
+
+std::vector<std::string> GetAllFiles(std::string_view dirName);
+DIR *OpenDir(std::string_view dirName);
+std::string RealPath(std::string_view path);
+mindspore::MSTensor ReadFileToTensor(const std::string &file);
+int WriteResult(const std::string& imageFile, const std::vector<mindspore::MSTensor> &outputs);
+#endif
diff --git a/research/cv/ssd_resnet34/ascend310_infer/src/main.cc b/research/cv/ssd_resnet34/ascend310_infer/src/main.cc
new file mode 100644
index 0000000000000000000000000000000000000000..ba0efbbd7edc288f805b701205627499496964d9
--- /dev/null
+++ b/research/cv/ssd_resnet34/ascend310_infer/src/main.cc
@@ -0,0 +1,165 @@
+/**
+ * Copyright 2021 Huawei Technologies Co., Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <sys/time.h>
+#include <gflags/gflags.h>
+#include <dirent.h>
+#include <iostream>
+#include <string>
+#include <algorithm>
+#include <iosfwd>
+#include <vector>
+#include <fstream>
+#include <sstream>
+
+#include "include/api/model.h"
+#include "include/api/context.h"
+#include "include/api/types.h"
+#include "include/api/serialization.h"
+#include "include/dataset/vision_ascend.h"
+#include "include/dataset/execute.h"
+#include "include/dataset/vision.h"
+#include "inc/utils.h"
+
+using mindspore::Context;
+using mindspore::Serialization;
+using mindspore::Model;
+using mindspore::Status;
+using mindspore::ModelType;
+using mindspore::GraphCell;
+using mindspore::kSuccess;
+using mindspore::MSTensor;
+using mindspore::dataset::Execute;
+using mindspore::dataset::TensorTransform;
+using mindspore::dataset::vision::DvppDecodeResizeJpeg;
+using mindspore::dataset::vision::Resize;
+using mindspore::dataset::vision::HWC2CHW;
+using mindspore::dataset::vision::Normalize;
+using mindspore::dataset::vision::Decode;
+
+DEFINE_string(mindir_path, "", "mindir path");
+DEFINE_string(dataset_path, ".", "dataset path");
+DEFINE_int32(device_id, 0, "device id");
+DEFINE_string(aipp_path, "./aipp.cfg", "aipp path");
+DEFINE_string(cpu_dvpp, "DVPP", "cpu or dvpp process");
+DEFINE_int32(image_height, 300, "image height");
+DEFINE_int32(image_width, 300, "image width");
+
+int main(int argc, char **argv) {
+  gflags::ParseCommandLineFlags(&argc, &argv, true);
+  if (RealPath(FLAGS_mindir_path).empty()) {
+    std::cout << "Invalid mindir" << std::endl;
+    return 1;
+  }
+
+  auto context = std::make_shared<Context>();
+  auto ascend310 = std::make_shared<mindspore::Ascend310DeviceInfo>();
+  ascend310->SetDeviceID(FLAGS_device_id);
+  ascend310->SetBufferOptimizeMode("off_optimize");
+  context->MutableDeviceInfo().push_back(ascend310);
+  mindspore::Graph graph;
+  Serialization::Load(FLAGS_mindir_path, ModelType::kMindIR, &graph);
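+  // When DVPP preprocessing is selected, JPEG decode/resize run on Ascend
+  // hardware, and the AIPP config (aipp.cfg) inserts the YUV->RGB conversion
+  // matrix plus mean/variance normalization in front of the model.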
+  if (FLAGS_cpu_dvpp == "DVPP") {
+    if (RealPath(FLAGS_aipp_path).empty()) {
+      std::cout << "Invalid aipp path" << std::endl;
+      return 1;
+    } else {
+      ascend310->SetInsertOpConfigPath(FLAGS_aipp_path);
+    }
+  }
+
+  Model model;
+  Status ret = model.Build(GraphCell(graph), context);
+  if (ret != kSuccess) {
+    std::cout << "ERROR: Build failed." << std::endl;
+    return 1;
+  }
+
+  auto all_files = GetAllFiles(FLAGS_dataset_path);
+  if (all_files.empty()) {
+    std::cout << "ERROR: no input data." << std::endl;
+    return 1;
+  }
+
+  std::map<double, double> costTime_map;
+  size_t size = all_files.size();
+
+  for (size_t i = 0; i < size; ++i) {
+    struct timeval start = {0};
+    struct timeval end = {0};
+    double startTimeMs;
+    double endTimeMs;
+    std::vector<MSTensor> inputs;
+    std::vector<MSTensor> outputs;
+    std::cout << "Start predict input files:" << all_files[i] << std::endl;
+    if (FLAGS_cpu_dvpp == "DVPP") {
+      auto resizeShape = {static_cast <uint32_t>(FLAGS_image_height), static_cast <uint32_t>(FLAGS_image_width)};
+      Execute resize_op(std::shared_ptr<DvppDecodeResizeJpeg>(new DvppDecodeResizeJpeg(resizeShape)));
+      auto imgDvpp = std::make_shared<MSTensor>();
+      resize_op(ReadFileToTensor(all_files[i]), imgDvpp.get());
+      inputs.emplace_back(imgDvpp->Name(), imgDvpp->DataType(), imgDvpp->Shape(),
+                        imgDvpp->Data().get(), imgDvpp->DataSize());
+    } else {
+      std::shared_ptr<TensorTransform> decode(new Decode());
+      std::shared_ptr<TensorTransform> hwc2chw(new HWC2CHW());
+      std::shared_ptr<TensorTransform> normalize(
+      new Normalize({123.675, 116.28, 103.53}, {58.395, 57.120, 57.375}));
+      auto resizeShape = {FLAGS_image_height, FLAGS_image_width};
+      std::shared_ptr<TensorTransform> resize(new Resize(resizeShape));
+      Execute composeDecode({decode, resize, normalize, hwc2chw});
+      auto img = MSTensor();
+      auto image = ReadFileToTensor(all_files[i]);
+      composeDecode(image, &img);
+      std::vector<MSTensor> model_inputs = model.GetInputs();
+      if (model_inputs.empty()) {
+        std::cout << "Invalid model, inputs is empty." << std::endl;
+        return 1;
+      }
+      inputs.emplace_back(model_inputs[0].Name(), model_inputs[0].DataType(), model_inputs[0].Shape(),
+                       img.Data().get(), img.DataSize());
+    }
+
+    gettimeofday(&start, nullptr);
+    ret = model.Predict(inputs, &outputs);
+    gettimeofday(&end, nullptr);
+    if (ret != kSuccess) {
+      std::cout << "Predict " << all_files[i] << " failed." << std::endl;
+      return 1;
+    }
+    startTimeMs = (1.0 * start.tv_sec * 1000000 + start.tv_usec) / 1000;
+    endTimeMs = (1.0 * end.tv_sec * 1000000 + end.tv_usec) / 1000;
+    costTime_map.insert(std::pair<double, double>(startTimeMs, endTimeMs));
+    WriteResult(all_files[i], outputs);
+  }
+  double average = 0.0;
+  int inferCount = 0;
+
+  for (auto iter = costTime_map.begin(); iter != costTime_map.end(); iter++) {
+    double diff = 0.0;
+    diff = iter->second - iter->first;
+    average += diff;
+    inferCount++;
+  }
+  average = average / inferCount;
+  std::stringstream timeCost;
+  timeCost << "NN inference cost average time: "<< average << " ms of infer_count " << inferCount << std::endl;
+  std::cout << "NN inference cost average time: "<< average << "ms of infer_count " << inferCount << std::endl;
+  std::string fileName = "./time_Result" + std::string("/test_perform_static.txt");
+  std::ofstream fileStream(fileName.c_str(), std::ios::trunc);
+  fileStream << timeCost.str();
+  fileStream.close();
+  costTime_map.clear();
+  return 0;
+}
diff --git a/research/cv/ssd_resnet34/ascend310_infer/src/utils.cc b/research/cv/ssd_resnet34/ascend310_infer/src/utils.cc
new file mode 100644
index 0000000000000000000000000000000000000000..02101598b77dea87ca60fd1e08f8a32a65740dfa
--- /dev/null
+++ b/research/cv/ssd_resnet34/ascend310_infer/src/utils.cc
@@ -0,0 +1,129 @@
+/**
+ * Copyright 2021 Huawei Technologies Co., Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <fstream>
+#include <algorithm>
+#include <iostream>
+#include "inc/utils.h"
+
+using mindspore::MSTensor;
+using mindspore::DataType;
+
+std::vector<std::string> GetAllFiles(std::string_view dirName) {
+  struct dirent *filename;
+  DIR *dir = OpenDir(dirName);
+  if (dir == nullptr) {
+    return {};
+  }
+  std::vector<std::string> res;
+  while ((filename = readdir(dir)) != nullptr) {
+    std::string dName = std::string(filename->d_name);
+    if (dName == "." || dName == ".." || filename->d_type != DT_REG) {
+      continue;
+    }
+    res.emplace_back(std::string(dirName) + "/" + filename->d_name);
+  }
+  std::sort(res.begin(), res.end());
+  for (auto &f : res) {
+    std::cout << "image file: " << f << std::endl;
+  }
+  return res;
+}
+
+int WriteResult(const std::string& imageFile, const std::vector<MSTensor> &outputs) {
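+  // Each output tensor is written to ./result_Files as <image name>_<index>.bin;
+  // postprocess.py reads these back as *_0.bin (boxes) and *_1.bin (box scores).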
+  std::string homePath = "./result_Files";
+  for (size_t i = 0; i < outputs.size(); ++i) {
+    size_t outputSize;
+    std::shared_ptr<const void> netOutput;
+    netOutput = outputs[i].Data();
+    outputSize = outputs[i].DataSize();
+    int pos = imageFile.rfind('/');
+    std::string fileName(imageFile, pos + 1);
+    fileName.replace(fileName.find('.'), fileName.size() - fileName.find('.'), '_' + std::to_string(i) + ".bin");
+    std::string outFileName = homePath + "/" + fileName;
+    FILE * outputFile = fopen(outFileName.c_str(), "wb");
+    fwrite(netOutput.get(), outputSize, sizeof(char), outputFile);
+    fclose(outputFile);
+    outputFile = nullptr;
+  }
+  return 0;
+}
+
+mindspore::MSTensor ReadFileToTensor(const std::string &file) {
+  if (file.empty()) {
+    std::cout << "Pointer file is nullptr" << std::endl;
+    return mindspore::MSTensor();
+  }
+
+  std::ifstream ifs(file);
+  if (!ifs.good()) {
+    std::cout << "File: " << file << " is not exist" << std::endl;
+    return mindspore::MSTensor();
+  }
+
+  if (!ifs.is_open()) {
+    std::cout << "File: " << file << "open failed" << std::endl;
+    return mindspore::MSTensor();
+  }
+
+  ifs.seekg(0, std::ios::end);
+  size_t size = ifs.tellg();
+  mindspore::MSTensor buffer(file, mindspore::DataType::kNumberTypeUInt8, {static_cast<int64_t>(size)}, nullptr, size);
+
+  ifs.seekg(0, std::ios::beg);
+  ifs.read(reinterpret_cast<char *>(buffer.MutableData()), size);
+  ifs.close();
+
+  return buffer;
+}
+
+
+DIR *OpenDir(std::string_view dirName) {
+  if (dirName.empty()) {
+    std::cout << " dirName is null ! " << std::endl;
+    return nullptr;
+  }
+  std::string realPath = RealPath(dirName);
+  struct stat s;
+  lstat(realPath.c_str(), &s);
+  if (!S_ISDIR(s.st_mode)) {
+    std::cout << "dirName is not a valid directory !" << std::endl;
+    return nullptr;
+  }
+  DIR *dir;
+  dir = opendir(realPath.c_str());
+  if (dir == nullptr) {
+    std::cout << "Can not open dir " << dirName << std::endl;
+    return nullptr;
+  }
+  std::cout << "Successfully opened the dir " << dirName << std::endl;
+  return dir;
+}
+
+std::string RealPath(std::string_view path) {
+  char realPathMem[PATH_MAX] = {0};
+  char *realPathRet = nullptr;
+  realPathRet = realpath(path.data(), realPathMem);
+
+  if (realPathRet == nullptr) {
+    std::cout << "File: " << path << " is not exist.";
+    return "";
+  }
+
+  std::string realPath(realPathMem);
+  std::cout << path << " realpath is: " << realPath << std::endl;
+  return realPath;
+}
diff --git a/research/cv/ssd_resnet34/eval.py b/research/cv/ssd_resnet34/eval.py
new file mode 100644
index 0000000000000000000000000000000000000000..15c2edcbab1c6d57fe9f56ef3c1a01f57f790b44
--- /dev/null
+++ b/research/cv/ssd_resnet34/eval.py
@@ -0,0 +1,124 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Evaluation for SSD"""
+
+import os
+import ast
+import argparse
+import time
+import numpy as np
+from mindspore import context, Tensor
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+from src.ssd import SsdInferWithDecoder, ssd_resnet34
+from src.dataset import create_ssd_dataset, create_mindrecord
+from src.config import config
+from src.eval_utils import metrics
+from src.box_utils import default_boxes
+
+
+def ssd_eval(dataset_path, ckpt_path, anno_json):
+    """SSD evaluation."""
+    batch_size = 1
+    ds = create_ssd_dataset(dataset_path, batch_size=batch_size, repeat_num=1,
+                            is_training=False, use_multiprocessing=False)
+    if config.model == "ssd_resnet34":
+        net = ssd_resnet34(config=config)
+    else:
+        raise ValueError(f'config.model: {config.model} is not supported')
+    net = SsdInferWithDecoder(net, Tensor(default_boxes), config)
+
+    print("Load Checkpoint!")
+    param_dict = load_checkpoint(ckpt_path)
+    net.init_parameters_data()
+    load_param_into_net(net, param_dict)
+
+    net.set_train(False)
+    i = batch_size
+    total = ds.get_dataset_size() * batch_size
+    start = time.time()
+    pred_data = []
+    print("\n========================================\n")
+    print("total images num: ", total)
+    print("Processing, please wait a moment.")
+    for data in ds.create_dict_iterator(output_numpy=True, num_epochs=1):
+        img_id = data['img_id']
+        img_np = data['image']
+        image_shape = data['image_shape']
+
+        output = net(Tensor(img_np))
+        for batch_idx in range(img_np.shape[0]):
+            pred_data.append({"boxes": output[0].asnumpy()[batch_idx],
+                              "box_scores": output[1].asnumpy()[batch_idx],
+                              "img_id": int(np.squeeze(img_id[batch_idx])),
+                              "image_shape": image_shape[batch_idx]})
+        percent = round(i / total * 100., 2)
+
+        print(f'    {str(percent)} [{i}/{total}]', end='\r')
+        i += batch_size
+    cost_time = int((time.time() - start) * 1000)
+    print(f'    100% [{total}/{total}] cost {cost_time} ms')
+    mAP = metrics(pred_data, anno_json)
+    print("\n========================================\n")
+    print(f"mAP: {mAP}")
+
+
+def get_eval_args():
+    """Get eval args"""
+    parser = argparse.ArgumentParser(description='SSD evaluation')
+    parser.add_argument("--data_url", type=str)
+    parser.add_argument("--train_url", type=str, default="")
+    parser.add_argument("--mindrecord", type=str)
+    parser.add_argument("--run_online", type=ast.literal_eval, default=False)
+    parser.add_argument("--device_id", type=int, default=0, help="Device id, default is 0.")
+    parser.add_argument("--dataset", type=str, default="coco", help="Dataset, default is coco.")
+    parser.add_argument("--checkpoint_path", type=str, required=True, help="Checkpoint file path.")
+    parser.add_argument("--run_platform", type=str, default="Ascend", choices=("Ascend", "GPU", "CPU"),
+                        help="run platform, support Ascend ,GPU and CPU.")
+    return parser.parse_args()
+
+
+if __name__ == '__main__':
+    args_opt = get_eval_args()
+    if args_opt.run_online:
+        import moxing as mox
+
+        config.checkpoint_path = "/cache/checkpoint_path/checkpoint.ckpt"
+        config.coco_root = "/cache/data_url"
+        config.mindrecord_dir = "/cache/mindrecord_url"
+        mox.file.copy_parallel(args_opt.data_url, config.coco_root)
+        mox.file.copy_parallel(args_opt.checkpoint_path, config.checkpoint_path)
+    else:
+        config.checkpoint_path = args_opt.checkpoint_path
+        config.coco_root = args_opt.data_url
+        config.mindrecord_dir = args_opt.mindrecord
+
+    if args_opt.dataset == "coco":
+        json_path = os.path.join(config.coco_root, config.instances_set.format(config.val_data_type))
+    elif args_opt.dataset == "voc":
+        json_path = os.path.join(config.voc_root, config.voc_json)
+    else:
+        raise ValueError('SSD eval only supports dataset modes coco and voc!')
+
+    context.set_context(mode=context.GRAPH_MODE, device_target=args_opt.run_platform, device_id=args_opt.device_id)
+    mindrecord_file = create_mindrecord(args_opt.dataset, "ssd_eval.mindrecord", False)
+
+    if args_opt.run_online:
+        import moxing as mox
+
+        mox.file.copy_parallel(mindrecord_file, args_opt.mindrecord)
+
+    print("Start Eval!")
+    ssd_eval(mindrecord_file, config.checkpoint_path, json_path)
diff --git a/research/cv/ssd_resnet34/export.py b/research/cv/ssd_resnet34/export.py
new file mode 100644
index 0000000000000000000000000000000000000000..5ba73de32319f64863c98ab84bbedbfa2371ce7d
--- /dev/null
+++ b/research/cv/ssd_resnet34/export.py
@@ -0,0 +1,56 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Transfer data format"""
+
+import argparse
+import numpy as np
+
+import mindspore
+from mindspore import context, Tensor
+from mindspore.train.serialization import load_checkpoint, load_param_into_net, export
+from src.ssd import SsdInferWithDecoder, ssd_resnet34
+from src.config import config
+from src.box_utils import default_boxes
+
+parser = argparse.ArgumentParser(description='SSD export')
+parser.add_argument("--device_id", type=int, default=0, help="Device id")
+parser.add_argument("--batch_size", type=int, default=1, help="batch size")
+parser.add_argument("--ckpt_file", type=str, required=True, help="Checkpoint file path.")
+parser.add_argument("--file_name", type=str, default="ssd", help="output file name.")
+parser.add_argument('--file_format', type=str, choices=["AIR", "MINDIR"], default='AIR', help='file format')
+parser.add_argument("--device_target", type=str, choices=["Ascend", "GPU", "CPU"], default="Ascend",
+                    help="device target")
+args = parser.parse_args()
+
+context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target)
+if args.device_target == "Ascend":
+    context.set_context(device_id=args.device_id)
+
+if __name__ == '__main__':
+    if config.model == "ssd_resnet34":
+        net = ssd_resnet34(config=config)
+    else:
+        raise ValueError(f'config.model: {config.model} is not supported')
+    net = SsdInferWithDecoder(net, Tensor(default_boxes), config)
+
+    param_dict = load_checkpoint(args.ckpt_file)
+    net.init_parameters_data()
+    load_param_into_net(net, param_dict)
+    net.set_train(False)
+
+    input_shp = [args.batch_size, 3] + config.img_shape
+    input_array = Tensor(np.random.uniform(-1.0, 1.0, size=input_shp), mindspore.float32)
+    export(net, input_array, file_name=args.file_name, file_format=args.file_format)
diff --git a/research/cv/ssd_resnet34/postprocess.py b/research/cv/ssd_resnet34/postprocess.py
new file mode 100644
index 0000000000000000000000000000000000000000..1dd9886e8e83ace14190695840fcc83dde0bd132
--- /dev/null
+++ b/research/cv/ssd_resnet34/postprocess.py
@@ -0,0 +1,91 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""post process for 310 inference"""
+
+import os
+import argparse
+import numpy as np
+from PIL import Image
+
+from src.config import config
+from src.eval_utils import metrics
+
+batch_size = 1
+parser = argparse.ArgumentParser(description="ssd acc calculation")
+parser.add_argument("--result_path", type=str, required=True, help="result files path.")
+parser.add_argument("--img_path", type=str, required=True, help="image file path.")
+parser.add_argument("--anno_file", type=str, required=True, help="annotation file.")
+parser.add_argument("--drop", action="store_true", help="drop iscrowd images or not.")
+args = parser.parse_args()
+
+def get_img_size(file_name):
+    """Return the (width, height) of an image."""
+    with Image.open(file_name) as img:
+        return img.size
+
+def get_result(result_path, img_id_file_path):
+    """print the mAP"""
+    if args.drop:
+        from pycocotools.coco import COCO
+        train_cls = config.classes
+        train_cls_dict = {}
+        for i, cls in enumerate(train_cls):
+            train_cls_dict[cls] = i
+        coco = COCO(args.anno_file)
+        classs_dict = {}
+        cat_ids = coco.loadCats(coco.getCatIds())
+        for cat in cat_ids:
+            classs_dict[cat["id"]] = cat["name"]
+
+    files = os.listdir(img_id_file_path)
+    pred_data = []
+
+    for file in files:
+        img_ids_name = file.split('.')[0]
+        img_id = int(img_ids_name)
+        if args.drop:
+            anno_ids = coco.getAnnIds(imgIds=img_id, iscrowd=None)
+            anno = coco.loadAnns(anno_ids)
+            annos = []
+            iscrowd = False
+            for label in anno:
+                bbox = label["bbox"]
+                class_name = classs_dict[label["category_id"]]
+                iscrowd = iscrowd or label["iscrowd"]
+                if class_name in train_cls:
+                    x_min, x_max = bbox[0], bbox[0] + bbox[2]
+                    y_min, y_max = bbox[1], bbox[1] + bbox[3]
+                    annos.append(list(map(round, [y_min, x_min, y_max, x_max])) + [train_cls_dict[class_name]])
+            if iscrowd or (not annos):
+                continue
+
+        img_size = get_img_size(os.path.join(img_id_file_path, file))
+        image_shape = np.array([img_size[1], img_size[0]])
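+        # The 310 inference app dumps two .bin files per image:
+        # <img_id>_0.bin holds the box coordinates, <img_id>_1.bin the class scores.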
+        result_path_0 = os.path.join(result_path, img_ids_name + "_0.bin")
+        result_path_1 = os.path.join(result_path, img_ids_name + "_1.bin")
+        boxes = np.fromfile(result_path_0, dtype=np.float32).reshape(config.num_ssd_boxes, 4)
+        box_scores = np.fromfile(result_path_1, dtype=np.float32).reshape(config.num_ssd_boxes, config.num_classes)
+        pred_data.append({
+            "boxes": boxes,
+            "box_scores": box_scores,
+            "img_id": img_id,
+            "image_shape": image_shape
+        })
+    mAP = metrics(pred_data, args.anno_file)
+    print("mAP:{}".format(mAP))
+
+
+if __name__ == '__main__':
+    get_result(args.result_path, args.img_path)
diff --git a/research/cv/ssd_resnet34/requirements.txt b/research/cv/ssd_resnet34/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..15287919a73f91639a67c249ea5f4db8f80c880c
--- /dev/null
+++ b/research/cv/ssd_resnet34/requirements.txt
@@ -0,0 +1,5 @@
+pycocotools
+opencv-python
+xml-python
+Pillow
+numpy
+easydict
diff --git a/research/cv/ssd_resnet34/scripts/run_distribute_train.sh b/research/cv/ssd_resnet34/scripts/run_distribute_train.sh
new file mode 100644
index 0000000000000000000000000000000000000000..082f64a07e22042a5d32ff3968810b98b03090b2
--- /dev/null
+++ b/research/cv/ssd_resnet34/scripts/run_distribute_train.sh
@@ -0,0 +1,104 @@
+#!/bin/bash
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+echo "=============================================================================================================="
+echo "Please run the script as: "
+echo "sh scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH] [PRE_TRAINED_PATH](optional)"
+echo "for example: sh scripts/run_distribute_train.sh /home/neu/hrnet_final/rank_table_file_path.json coco /home/neu/ssd-coco /home/neu/coco-mindrecord .train_out /home/neu/ssdresnet34lj/resnet34.ckpt(optional)"
+echo "It is better to use absolute path."
+echo "================================================================================================================="
+
+if [ $# != 5 ] && [ $# != 6 ]
+then
+    echo "Using: sh scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH]"
+    echo "or"
+    echo "Using: sh scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH] [PRE_TRAINED_PATH]"
+    exit 1
+fi
+
+get_real_path(){
+    if [ "${1:0:1}" == "/" ]; then
+        echo "$1"
+    else
+        echo "$(realpath -m $PWD/$1)"
+    fi
+}
+
+PATH1=$(get_real_path $3)    # dataset_path
+PATH2=$(get_real_path $4)    # mindrecord_path
+PATH3=$(get_real_path $5)    # train_output_path
+if [ $# == 6 ]; then
+    PATH4=$(get_real_path $6)    # pre_trained_path
+fi
+PATH5=$(get_real_path $1)    # rank_table_file_path
+
+
+if [ ! -d $PATH1 ]
+then
+    echo "error: DATASET_PATH=$PATH1 is not a directory."
+    exit 1
+fi
+
+if [ ! -d $PATH2 ]
+then
+    echo "error: MINDRECORD_PATH=$PATH2 is not a directory."
+    exit 1
+fi
+
+if [ ! -d $PATH3 ]
+then
+    echo "error: TRAIN_OUTPUT_PATH=$PATH3 is not a directory."
+    exit 1
+fi
+
+if [ ! -f $PATH4 ] && [ $# == 6 ]
+then
+    echo "error: PRE_TRAINED_PATH=$PATH4 is not a file."
+    exit 1
+fi
+
+if [ ! -f $PATH5 ]
+then
+    echo "error: RANK_TABLE_FILE_PATH=$PATH5 is not a file."
+    exit 1
+fi
+
+ulimit -u unlimited
+export DEVICE_NUM=8
+export RANK_SIZE=8
+export RANK_TABLE_FILE=$PATH5
+
+export SERVER_ID=0
+rank_start=$((DEVICE_NUM * SERVER_ID))
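+# On a single machine SERVER_ID stays 0, so RANK_ID below equals the device
+# index; multi-machine setups would offset rank_start per server.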
+
+
+for((i=0; i<${DEVICE_NUM}; i++))
+do
+    export DEVICE_ID=${i}
+    export RANK_ID=$((rank_start + i))
+    rm -rf ./train_parallel$i
+    mkdir ./train_parallel$i
+    echo ./train_parallel$i
+    cp ./train.py ./train_parallel$i
+    cp -r ./src ./train_parallel$i
+    cd ./train_parallel$i || exit
+    echo "Start training for rank $RANK_ID, device $DEVICE_ID."
+    env > env.log
+    if [ $# == 5 ]
+    then
+    python train.py --data_url $PATH1 --mindrecord_url $PATH2 --train_url $PATH3 --run_platform Ascend --lr 0.075 --epoch_size 500 --dataset $2 --distribute True --device_num 8 &> log &
+    else
+    python train.py --data_url $PATH1 --mindrecord_url $PATH2 --train_url $PATH3 --run_platform Ascend --lr 0.075 --epoch_size 500 --dataset $2 --pre_trained $PATH4 --distribute True --device_num 8 &> log &
+    fi
+    cd ..
+done
\ No newline at end of file
diff --git a/research/cv/ssd_resnet34/scripts/run_eval.sh b/research/cv/ssd_resnet34/scripts/run_eval.sh
new file mode 100644
index 0000000000000000000000000000000000000000..9d6e383be7a9d875473d2e3a79f41da294b482eb
--- /dev/null
+++ b/research/cv/ssd_resnet34/scripts/run_eval.sh
@@ -0,0 +1,77 @@
+#!/bin/bash
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+echo "=============================================================================================================="
+echo "Please run the script as: "
+echo "sh scripts/run_eval.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [CHECKPOINT_PATH] [MINDRECORD_PATH]"
+echo "for example: sh scripts/run_eval.sh 0 coco /home/neu/ssd-coco /home/neu/ssdresnet34lj/ckpt0/ssd-990_458.ckpt /home/neu/coco-mindrecord"
+echo "It is better to use absolute path."
+echo "================================================================================================================="
+
+if [ $# != 5 ]
+then
+    echo "Using: sh scripts/run_eval.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [CHECKPOINT_PATH] [MINDRECORD_PATH]"
+    exit 1
+fi
+
+get_real_path(){
+    if [ "${1:0:1}" == "/" ]; then
+        echo "$1"
+    else
+        echo "$(realpath -m $PWD/$1)"
+    fi
+}
+
+PATH1=$(get_real_path $3)
+PATH2=$(get_real_path $4)
+PATH3=$(get_real_path $5)
+
+if [ ! -d $PATH1 ]
+then
+    echo "error: DATASET_PATH=$PATH1 is not a dictionary."
+    exit 1
+fi
+
+if [ ! -f $PATH2 ]
+then
+    echo "error: CHECKPOINT_PATH=$PATH2 is not a file."
+    exit 1
+fi
+
+if [ ! -d $PATH3 ]
+then
+    echo "error: MINDRECORD_PATH=$PATH3 is not a dictionary."
+    exit 1
+fi
+
+ulimit -u unlimited
+export DEVICE_NUM=1
+export DEVICE_ID=$1
+export RANK_SIZE=$DEVICE_NUM
+export RANK_ID=0
+
+if [ -d "eval" ];
+then
+    rm -rf ./eval
+fi
+mkdir ./eval
+cp ./eval.py ./eval
+cp -r ./src ./eval
+cd ./eval || exit
+env > env.log
+echo "start evaluation for device $DEVICE_ID"
+python eval.py --data_url $PATH1 --dataset $2 --device_id $1 --run_platform Ascend --checkpoint_path $PATH2 --mindrecord $PATH3 &> log &
+cd ..
\ No newline at end of file
diff --git a/research/cv/ssd_resnet34/scripts/run_infer_310.sh b/research/cv/ssd_resnet34/scripts/run_infer_310.sh
new file mode 100644
index 0000000000000000000000000000000000000000..7a8997c40ec66d91ed418829630855768b347df6
--- /dev/null
+++ b/research/cv/ssd_resnet34/scripts/run_infer_310.sh
@@ -0,0 +1,108 @@
+#!/bin/bash
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+if [[ $# -lt 4 || $# -gt 5 ]]; then
+    echo "Usage: bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [ANNO_FILE] [DEVICE_ID]
+    DVPP is mandatory, and must choose from [DVPP|CPU], it's case-insensitive
+    ANNO_PATH is mandatory, and should specify annotation file path of your data including file name.
+    DEVICE_ID is optional, it can be set by environment variable device_id, otherwise the value is zero"
+exit 1
+fi
+
+get_real_path(){
+    if [ "${1:0:1}" == "/" ]; then
+        echo "$1"
+    else
+        echo "$(realpath -m $PWD/$1)"
+    fi
+}
+model=$(get_real_path $1)
+data_path=$(get_real_path $2)
+DVPP=${3^^}
+anno=$(get_real_path $4)
+
+device_id=0
+if [ $# == 5 ]; then
+    device_id=$5
+fi
+
+echo "mindir name: "$model
+echo "dataset path: "$data_path
+echo "image process mode: "$DVPP
+echo "anno file: "$anno
+echo "device id: "$device_id
+
+export ASCEND_HOME=/usr/local/Ascend/
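+# Newer CANN packages install under ascend-toolkit/; otherwise fall back to the
+# flat fwkacllib/atc/acllib layout.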
+if [ -d ${ASCEND_HOME}/ascend-toolkit ]; then
+    export PATH=$ASCEND_HOME/fwkacllib/bin:$ASCEND_HOME/fwkacllib/ccec_compiler/bin:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin:$ASCEND_HOME/ascend-toolkit/latest/atc/bin:$PATH
+    export LD_LIBRARY_PATH=$ASCEND_HOME/fwkacllib/lib64:/usr/local/lib:$ASCEND_HOME/ascend-toolkit/latest/atc/lib64:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/lib64:$ASCEND_HOME/driver/lib64:$ASCEND_HOME/add-ons:$LD_LIBRARY_PATH
+    export TBE_IMPL_PATH=$ASCEND_HOME/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe
+    export PYTHONPATH=$ASCEND_HOME/fwkacllib/python/site-packages:${TBE_IMPL_PATH}:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/python/site-packages:$PYTHONPATH
+    export ASCEND_OPP_PATH=$ASCEND_HOME/ascend-toolkit/latest/opp
+else
+    export PATH=$ASCEND_HOME/fwkacllib/bin:$ASCEND_HOME/fwkacllib/ccec_compiler/bin:$ASCEND_HOME/atc/ccec_compiler/bin:$ASCEND_HOME/atc/bin:$PATH
+    export LD_LIBRARY_PATH=$ASCEND_HOME/fwkacllib/lib64:/usr/local/lib:$ASCEND_HOME/atc/lib64:$ASCEND_HOME/acllib/lib64:$ASCEND_HOME/driver/lib64:$ASCEND_HOME/add-ons:$LD_LIBRARY_PATH
+    export PYTHONPATH=$ASCEND_HOME/fwkacllib/python/site-packages:$ASCEND_HOME/atc/python/site-packages:$PYTHONPATH
+    export ASCEND_OPP_PATH=$ASCEND_HOME/opp
+fi
+
+function compile_app()
+{
+    cd ../ascend310_infer || exit
+    bash build.sh &> build.log
+}
+
+function infer()
+{
+    cd - || exit
+    if [ -d result_Files ]; then
+        rm -rf ./result_Files
+    fi
+    if [ -d time_Result ]; then
+        rm -rf ./time_Result
+    fi
+    mkdir result_Files
+    mkdir time_Result
+    if [ "$DVPP" == "DVPP" ];then
+      ../ascend310_infer/out/main --mindir_path=$model --dataset_path=$data_path --device_id=$device_id --cpu_dvpp=$DVPP --aipp_path=../ascend310_infer/aipp.cfg --image_height=640 --image_width=640 &> infer.log
+    elif [ "$DVPP" == "CPU"  ]; then
+      ../ascend310_infer/out/main --mindir_path=$model --dataset_path=$data_path --cpu_dvpp=$DVPP --device_id=$device_id --image_height=300 --image_width=300 &> infer.log
+    else
+      echo "image process mode must be in [DVPP|CPU]"
+      exit 1
+    fi
+}
+
+function cal_acc()
+{
+    python3.7 ../postprocess.py --result_path=./result_Files --img_path=$data_path --anno_file=$anno --drop &> acc.log
+}
+
+compile_app
+if [ $? -ne 0 ]; then
+    echo "compile app code failed"
+    exit 1
+fi
+infer
+if [ $? -ne 0 ]; then
+    echo " execute inference failed"
+    exit 1
+fi
+cal_acc
+if [ $? -ne 0 ]; then
+    echo "calculate accuracy failed"
+    exit 1
+fi
diff --git a/research/cv/ssd_resnet34/scripts/run_standalone_train.sh b/research/cv/ssd_resnet34/scripts/run_standalone_train.sh
new file mode 100644
index 0000000000000000000000000000000000000000..481fe7aeb0d551abd1dac5b9aaab1072c53e19de
--- /dev/null
+++ b/research/cv/ssd_resnet34/scripts/run_standalone_train.sh
@@ -0,0 +1,95 @@
+#!/bin/bash
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+echo "=============================================================================================================="
+echo "Please run the script as: "
+echo "sh scripts/run_standalone_train.sh  [DEVICE_ID] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH] [PRE_TRAINED_PATH]"
+echo "for example: sh scripts/run_standalone_train.sh 0 coco /home/neu/ssd-coco /home/neu/coco-mindrecord ./train_out /home/neu/ssdresnet34lj/resnet34.ckpt(optional)"
+echo "It is better to use absolute path."
+echo "================================================================================================================="
+
+if [ $# != 5 ] && [ $# != 6 ]
+then
+    echo "Using: sh scripts/run_standalone_train.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH]"
+    echo "or"
+    echo "Using: sh scripts/run_standalone_train.sh  [DEVICE_ID] [DATASET] [DATASET_PATH] [MINDRECORD_PATH] [TRAIN_OUTPUT_PATH] [PRE_TRAINED_PATH]"
+    exit 1
+fi
+
+get_real_path(){
+    if [ "${1:0:1}" == "/" ]; then
+        echo "$1"
+    else
+        echo "$(realpath -m $PWD/$1)"
+    fi
+}
+
+PATH1=$(get_real_path $3)    # dataset_path
+PATH2=$(get_real_path $4)    # mindrecord_path
+PATH3=$(get_real_path $5)    # train_output_path
+if [ $# == 6 ]; then
+    PATH4=$(get_real_path $6)    # pre_trained_path
+fi
+
+
+if [ ! -d $PATH1 ]
+then
+    echo "error: DATASET_PATH=$PATH1 is not a directory."
+    exit 1
+fi
+
+if [ ! -d $PATH2 ]
+then
+    echo "error: MINDRECORD_PATH=$PATH2 is not a directory."
+    exit 1
+fi
+
+if [ ! -d $PATH3 ]
+then
+    echo "error: TRAIN_OUTPUT_PATH=$PATH3 is not a directory."
+    exit 1
+fi
+
+if [ ! -f $PATH4 ] && [ $# == 6 ]
+then
+    echo "error: PRE_TRAINED_PATH=$PATH4 is not a file."
+    exit 1
+fi
+
+ulimit -u unlimited
+export DEVICE_NUM=1
+export DEVICE_ID=$1
+export RANK_ID=0
+export RANK_SIZE=1
+
+if [ -d "./train" ]
+then
+    rm -rf ./train
+    echo "Remove dir ./train."
+fi
+mkdir ./train
+echo "Create a dir ./train."
+cp ./train.py ./train
+cp -r ./src ./train
+cd ./train || exit
+echo "Start training for device $DEVICE_ID"
+env > env.log
+
+if [ $# == 5 ]
+then
+    python train.py --data_url $PATH1 --mindrecord_url $PATH2 --train_url $PATH3 --run_platform Ascend --lr 0.075 --epoch_size 1000 --dataset $2 &> log &
+else
+    python train.py --data_url $PATH1 --mindrecord_url $PATH2 --train_url $PATH3 --run_platform Ascend --lr 0.075 --epoch_size 1000 --dataset $2 --pre_trained $PATH4 &> log &
+fi
+cd ..
+
diff --git a/research/cv/ssd_resnet34/src/__init__.py b/research/cv/ssd_resnet34/src/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/research/cv/ssd_resnet34/src/anchor_generator.py b/research/cv/ssd_resnet34/src/anchor_generator.py
new file mode 100644
index 0000000000000000000000000000000000000000..6cc27e5a20d6556f06046d54a0ca83a5d79743b8
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/anchor_generator.py
@@ -0,0 +1,94 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Anchor Generator"""
+
+import numpy as np
+
+
+class GridAnchorGenerator:
+    """
+    Anchor Generator
+    """
+    def __init__(self, image_shape, scale, scales_per_octave, aspect_ratios):
+        super(GridAnchorGenerator, self).__init__()
+        self.scale = scale
+        self.scales_per_octave = scales_per_octave
+        self.aspect_ratios = aspect_ratios
+        self.image_shape = image_shape
+
+
+    def generate(self, step):
+        """Generate anchor"""
+        scales = np.array([2**(float(scale) / self.scales_per_octave)
+                           for scale in range(self.scales_per_octave)]).astype(np.float32)
+        aspects = np.array(list(self.aspect_ratios)).astype(np.float32)
+
+        scales_grid, aspect_ratios_grid = np.meshgrid(scales, aspects)
+        scales_grid = scales_grid.reshape([-1])
+        aspect_ratios_grid = aspect_ratios_grid.reshape([-1])
+
+        feature_size = [self.image_shape[0] / step, self.image_shape[1] / step]
+        grid_height, grid_width = feature_size
+
+        base_size = np.array([self.scale * step, self.scale * step]).astype(np.float32)
+        anchor_offset = step / 2.0
+
+        ratio_sqrt = np.sqrt(aspect_ratios_grid)
+        heights = scales_grid / ratio_sqrt * base_size[0]
+        widths = scales_grid * ratio_sqrt * base_size[1]
+
+        y_centers = np.arange(grid_height).astype(np.float32)
+        y_centers = y_centers * step + anchor_offset
+        x_centers = np.arange(grid_width).astype(np.float32)
+        x_centers = x_centers * step + anchor_offset
+        x_centers, y_centers = np.meshgrid(x_centers, y_centers)
+
+        x_centers_shape = x_centers.shape
+        y_centers_shape = y_centers.shape
+
+        widths_grid, x_centers_grid = np.meshgrid(widths, x_centers.reshape([-1]))
+        heights_grid, y_centers_grid = np.meshgrid(heights, y_centers.reshape([-1]))
+
+        x_centers_grid = x_centers_grid.reshape(*x_centers_shape, -1)
+        y_centers_grid = y_centers_grid.reshape(*y_centers_shape, -1)
+        widths_grid = widths_grid.reshape(-1, *x_centers_shape)
+        heights_grid = heights_grid.reshape(-1, *y_centers_shape)
+
+
+        bbox_centers = np.stack([y_centers_grid, x_centers_grid], axis=3)
+        bbox_sizes = np.stack([heights_grid, widths_grid], axis=3)
+        bbox_centers = bbox_centers.reshape([-1, 2])
+        bbox_sizes = bbox_sizes.reshape([-1, 2])
+        bbox_corners = np.concatenate([bbox_centers - 0.5 * bbox_sizes, bbox_centers + 0.5 * bbox_sizes], axis=1)
+        self.bbox_corners = bbox_corners / np.array([*self.image_shape, *self.image_shape]).astype(np.float32)
+        self.bbox_centers = np.concatenate([bbox_centers, bbox_sizes], axis=1)
+        self.bbox_centers = self.bbox_centers / np.array([*self.image_shape, *self.image_shape]).astype(np.float32)
+
+        return self.bbox_centers, self.bbox_corners
+
+    def generate_multi_levels(self, steps):
+        """Gennerate multi levels"""
+        bbox_centers_list = []
+        bbox_corners_list = []
+        for step in steps:
+            bbox_centers, bbox_corners = self.generate(step)
+            bbox_centers_list.append(bbox_centers)
+            bbox_corners_list.append(bbox_corners)
+
+        self.bbox_centers = np.concatenate(bbox_centers_list, axis=0)
+        self.bbox_corners = np.concatenate(bbox_corners_list, axis=0)
+        return self.bbox_centers, self.bbox_corners
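+
+# A minimal usage sketch (the arguments mirror those used in src/box_utils.py):
+#   gen = GridAnchorGenerator([300, 300], 4, 2, [1.0, 2.0, 0.5])
+#   centers, corners = gen.generate_multi_levels([7, 15, 30, 60, 100, 300])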
diff --git a/research/cv/ssd_resnet34/src/box_utils.py b/research/cv/ssd_resnet34/src/box_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..0e6544055f16c00dbd87cef95da40fa2049a5e6d
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/box_utils.py
@@ -0,0 +1,170 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Bbox utils"""
+
+import math
+import itertools as it
+import numpy as np
+from .config import config
+from .anchor_generator import GridAnchorGenerator
+
+
+class GenerateDefaultBoxes:
+    """
+    Generate default boxes for SSD, following the order of (W, H, anchor_sizes).
+    `self.default_boxes` has a shape of [anchor_sizes, H, W, 4]; the last dimension is [y, x, h, w].
+    `self.default_boxes_tlbr` has the same shape as `self.default_boxes`; the last dimension is [y1, x1, y2, x2].
+    """
+    def __init__(self):
+        fk = config.img_shape[0] / np.array(config.steps)
+        scale_rate = (config.max_scale - config.min_scale) / (len(config.num_default) - 1)
+        scales = [config.min_scale + scale_rate * i for i in range(len(config.num_default))] + [1.0]
+        self.default_boxes = []
+        for idex, feature_size in enumerate(config.feature_size):
+            sk1 = scales[idex]
+            sk2 = scales[idex + 1]
+            sk3 = math.sqrt(sk1 * sk2)
+            if idex == 0 and not config.aspect_ratios[idex]:
+                w, h = sk1 * math.sqrt(2), sk1 / math.sqrt(2)
+                all_sizes = [(0.1, 0.1), (w, h), (h, w)]
+            else:
+                all_sizes = [(sk1, sk1)]
+                for aspect_ratio in config.aspect_ratios[idex]:
+                    w, h = sk1 * math.sqrt(aspect_ratio), sk1 / math.sqrt(aspect_ratio)
+                    all_sizes.append((w, h))
+                    all_sizes.append((h, w))
+                all_sizes.append((sk3, sk3))
+
+            assert len(all_sizes) == config.num_default[idex]
+
+            for i, j in it.product(range(feature_size), repeat=2):
+                for w, h in all_sizes:
+                    cx, cy = (j + 0.5) / fk[idex], (i + 0.5) / fk[idex]
+                    self.default_boxes.append([cy, cx, h, w])
+
+        def to_tlbr(cy, cx, h, w):
+            return cy - h / 2, cx - w / 2, cy + h / 2, cx + w / 2
+
+        # For IoU calculation
+        self.default_boxes_tlbr = np.array(tuple(to_tlbr(*i) for i in self.default_boxes), dtype='float32')
+        self.default_boxes = np.array(self.default_boxes, dtype='float32')
+
+if 'use_anchor_generator' in config and config.use_anchor_generator:
+    generator = GridAnchorGenerator(config.img_shape, 4, 2, [1.0, 2.0, 0.5])
+    default_boxes, default_boxes_tlbr = generator.generate_multi_levels(config.steps)
+else:
+    generator = GenerateDefaultBoxes()
+    default_boxes_tlbr = generator.default_boxes_tlbr
+    default_boxes = generator.default_boxes
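+# Precompute per-anchor corner coordinates and areas, reused by the IoU
+# computations below.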
+y1, x1, y2, x2 = np.split(default_boxes_tlbr[:, :4], 4, axis=-1)
+vol_anchors = (x2 - x1) * (y2 - y1)
+matching_threshold = config.match_threshold
+
+
+def ssd_bboxes_encode(boxes):
+    """
+    Labels anchors with ground truth inputs.
+
+    Args:
+        boxes: ground truth with shape [N, 5]; each row stores [y, x, h, w, cls].
+
+    Returns:
+        gt_loc: location ground truth with shape [num_anchors, 4].
+        gt_label: class ground truth with shape [num_anchors, 1].
+        num_matched_boxes: number of positives in an image.
+    """
+
+    def jaccard_with_anchors(bbox):
+        """Compute jaccard score a box and the anchors."""
+        # Intersection bbox and volume.
+        ymin = np.maximum(y1, bbox[0])
+        xmin = np.maximum(x1, bbox[1])
+        ymax = np.minimum(y2, bbox[2])
+        xmax = np.minimum(x2, bbox[3])
+        w = np.maximum(xmax - xmin, 0.)
+        h = np.maximum(ymax - ymin, 0.)
+
+        # Volumes.
+        inter_vol = h * w
+        union_vol = vol_anchors + (bbox[2] - bbox[0]) * (bbox[3] - bbox[1]) - inter_vol
+        jaccard = inter_vol / union_vol
+        return np.squeeze(jaccard)
+
+    pre_scores = np.zeros((config.num_ssd_boxes), dtype=np.float32)
+    t_boxes = np.zeros((config.num_ssd_boxes, 4), dtype=np.float32)
+    t_label = np.zeros((config.num_ssd_boxes), dtype=np.int64)
+    for bbox in boxes:
+        label = int(bbox[4])
+        scores = jaccard_with_anchors(bbox)
+        idx = np.argmax(scores)
+        scores[idx] = 2.0
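+        # Force the best-matching anchor above the threshold so every ground
+        # truth box is assigned at least one anchor.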
+        mask = (scores > matching_threshold)
+        mask = mask & (scores > pre_scores)
+        pre_scores = np.maximum(pre_scores, scores * mask)
+        t_label = mask * label + (1 - mask) * t_label
+        for i in range(4):
+            t_boxes[:, i] = mask * bbox[i] + (1 - mask) * t_boxes[:, i]
+
+    index = np.nonzero(t_label)
+
+    # Transform to tlbr.
+    bboxes = np.zeros((config.num_ssd_boxes, 4), dtype=np.float32)
+    bboxes[:, [0, 1]] = (t_boxes[:, [0, 1]] + t_boxes[:, [2, 3]]) / 2
+    bboxes[:, [2, 3]] = t_boxes[:, [2, 3]] - t_boxes[:, [0, 1]]
+
+    # Encode features.
+    bboxes_t = bboxes[index]
+    default_boxes_t = default_boxes[index]
+    bboxes_t[:, :2] = (bboxes_t[:, :2] - default_boxes_t[:, :2]) / (default_boxes_t[:, 2:] * config.prior_scaling[0])
+    tmp = np.maximum(bboxes_t[:, 2:4] / default_boxes_t[:, 2:4], 0.000001)
+    bboxes_t[:, 2:4] = np.log(tmp) / config.prior_scaling[1]
+    bboxes[index] = bboxes_t
+
+    num_match = np.array([len(np.nonzero(t_label)[0])], dtype=np.int32)
+    return bboxes, t_label.astype(np.int32), num_match
+
+
+def ssd_bboxes_decode(boxes):
+    """Decode predict boxes to [y, x, h, w]"""
+    boxes_t = boxes.copy()
+    default_boxes_t = default_boxes.copy()
+    boxes_t[:, :2] = boxes_t[:, :2] * config.prior_scaling[0] * default_boxes_t[:, 2:] + default_boxes_t[:, :2]
+    boxes_t[:, 2:4] = np.exp(boxes_t[:, 2:4] * config.prior_scaling[1]) * default_boxes_t[:, 2:4]
+
+    bboxes = np.zeros((len(boxes_t), 4), dtype=np.float32)
+
+    bboxes[:, [0, 1]] = boxes_t[:, [0, 1]] - boxes_t[:, [2, 3]] / 2
+    bboxes[:, [2, 3]] = boxes_t[:, [0, 1]] + boxes_t[:, [2, 3]] / 2
+
+    return np.clip(bboxes, 0, 1)
+
+
+def intersect(box_a, box_b):
+    """Compute the intersect of two sets of boxes."""
+    max_yx = np.minimum(box_a[:, 2:4], box_b[2:4])
+    min_yx = np.maximum(box_a[:, :2], box_b[:2])
+    inter = np.clip((max_yx - min_yx), a_min=0, a_max=np.inf)
+    return inter[:, 0] * inter[:, 1]
+
+
+def jaccard_numpy(box_a, box_b):
+    """Compute the jaccard overlap of two sets of boxes."""
+    inter = intersect(box_a, box_b)
+    area_a = ((box_a[:, 2] - box_a[:, 0]) *
+              (box_a[:, 3] - box_a[:, 1]))
+    area_b = ((box_b[2] - box_b[0]) *
+              (box_b[3] - box_b[1]))
+    union = area_a + area_b - inter
+    return inter / union
diff --git a/research/cv/ssd_resnet34/src/callback.py b/research/cv/ssd_resnet34/src/callback.py
new file mode 100644
index 0000000000000000000000000000000000000000..2b616ac622786a955ad573a9f69e0003ff68af5f
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/callback.py
@@ -0,0 +1,78 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Train and evaluation"""
+
+import os
+import numpy as np
+
+from mindspore import Tensor
+from mindspore.train.callback import Callback
+
+from src.dataset import create_ssd_dataset
+from src.eval_utils import metrics
+from src.box_utils import default_boxes
+from src.config import config
+from src.ssd import SsdInferWithDecoder
+
+
+class EvalCallBack(Callback):
+    """EvalCallBack"""
+    def __init__(self, eval_dataset, net, eval_per_epoch, train_url, json_path):
+        self.net = net
+        self.eval_dataset = eval_dataset
+        self.eval_per_epoch = eval_per_epoch
+        self.train_url = train_url
+        self.json_path = json_path
+        self.best_top_mAP = 0
+        self.best_top_epoch = 0
+
+    def epoch_end(self, run_context):
+        """Epoch_end"""
+        cb_param = run_context.original_args()
+        cur_epoch = cb_param.cur_epoch_num
+        if cur_epoch % self.eval_per_epoch == 0:
+            net = SsdInferWithDecoder(self.net, Tensor(default_boxes), config)
+            net.set_train(False)
+            pred_data = []
+            for data in self.eval_dataset.create_dict_iterator(output_numpy=True, num_epochs=1):
+                img_id = data['img_id']
+                img_np = data['image']
+                image_shape = data['image_shape']
+                output = net(Tensor(img_np))
+                for batch_idx in range(img_np.shape[0]):
+                    pred_data.append({"boxes": output[0].asnumpy()[batch_idx],
+                                      "box_scores": output[1].asnumpy()[batch_idx],
+                                      "img_id": int(np.squeeze(img_id[batch_idx])),
+                                      "image_shape": image_shape[batch_idx]})
+            mAP = metrics(pred_data, self.json_path)
+            if mAP > self.best_top_mAP:
+                self.best_top_mAP = mAP
+                self.best_top_epoch = cur_epoch
+            print(f"mAP: {mAP}" + f"   best_top_mAP: {self.best_top_mAP}" + f"   best_top_epochs:{self.best_top_epoch}")
+            net.set_train(True)
+
+
+def eval_callback(val_data_url, model, json_path, train_url, eval_per_epoch):
+    """Eval_callback"""
+    val_data_list = []
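+    # The eval set is assumed to be written as 8 mindrecord shards
+    # (ssd_eval.mindrecord0 ... 7), matching the default file_num=8 in dataset.py.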
+    for i in range(8):
+        val_data = os.path.join(val_data_url, "ssd_eval.mindrecord" + str(i))
+        val_data_list.append(val_data)
+
+    dataset = create_ssd_dataset(val_data_list, batch_size=32, repeat_num=1,
+                                 is_training=False, use_multiprocessing=False)
+    callback = EvalCallBack(dataset, model, eval_per_epoch, train_url, json_path)
+    return callback
diff --git a/research/cv/ssd_resnet34/src/config.py b/research/cv/ssd_resnet34/src/config.py
new file mode 100644
index 0000000000000000000000000000000000000000..0bc988f1396ac1838176dcf803864433c73a22ab
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/config.py
@@ -0,0 +1,33 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Config parameters for SSD models."""
+
+from .config_ssd_resnet34 import config as config_ssd_resnet34
+using_model = "ssd_resnet34"
+
+config_map = {
+    "ssd_resnet34": config_ssd_resnet34
+}
+
+config = config_map[using_model]
+
+if config.num_ssd_boxes == -1:
+    num = 0
+    h, w = config.img_shape
+    for i in range(len(config.steps)):
+        num += (h // config.steps[i]) * (w // config.steps[i]) * config.num_default[i]
+    config.num_ssd_boxes = num
+    print("num_ssd_boxes: ", num)
diff --git a/research/cv/ssd_resnet34/src/config_ssd_resnet34.py b/research/cv/ssd_resnet34/src/config_ssd_resnet34.py
new file mode 100644
index 0000000000000000000000000000000000000000..32e402b649b806fc72e90aa05f239fe0a48ffc4e
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/config_ssd_resnet34.py
@@ -0,0 +1,83 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Config parameters for SSD_ResNet34 models."""
+
+from easydict import EasyDict as ed
+
+config = ed({
+    "model": "ssd_resnet34",
+    "img_shape": [300, 300],
+    "num_ssd_boxes": 8732,
+    "match_threshold": 0.5,
+    "nms_threshold": 0.6,
+    "min_score": 0.1,
+    "max_boxes": 100,
+
+    "global_step": 0,
+    "lr_init": 0.001,
+    "lr_end_rate": 0.001,
+    "warmup_epochs": 2,
+    "weight_decay": 4e-5,
+    "momentum": 0.9,
+
+    # network
+    "num_default": [4, 6, 6, 6, 4, 4],
+    "extras_in_channels": [],
+    "extras_out_channels": [256, 512, 512, 256, 256, 256],
+    "extras_strides": [1, 1, 2, 2, 2, 1],
+    "extras_ratios": [0.2, 0.2, 0.2, 0.25, 0.5, 0.25],
+    "feature_size": [38, 19, 10, 5, 3, 1],
+    "min_scale": 0.2,
+    "max_scale": 0.95,
+    "aspect_ratios": [[2], [2, 3], [2, 3], [2, 3], [2], [2]],
+    "steps": (7, 15, 30, 60, 100, 300),
+    "prior_scaling": (0.1, 0.2),
+    "gamma": 2.0,  # modify
+    "alpha": 0.75,  # modify
+
+    # `mindrecord_dir` and `coco_root` are better to use absolute path.
+    "feature_extractor_base_param": "",
+    "checkpoint_filter_list": ['multi_loc_layers', 'multi_cls_layers'],
+    "mindrecord_dir": "/home/neu/coco-mindrecord",
+    "coco_root": "/home/neu/ssd-coco",
+    "train_data_type": "train2017",
+    "val_data_type": "val2017",
+    "instances_set": "annotations/instances_{}.json",
+    "classes": ('background', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
+                'train', 'truck', 'boat', 'traffic light', 'fire hydrant',
+                'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
+                'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra',
+                'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
+                'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
+                'kite', 'baseball bat', 'baseball glove', 'skateboard',
+                'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
+                'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
+                'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
+                'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
+                'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
+                'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink',
+                'refrigerator', 'book', 'clock', 'vase', 'scissors',
+                'teddy bear', 'hair drier', 'toothbrush'),
+    "num_classes": 81,
+    # The annotation.json position of voc validation dataset.
+    "voc_json": "annotations/voc_instances_val.json",
+    # voc original dataset.
+    "voc_root": "/data/voc_dataset",
+    # if coco or voc used, `image_dir` and `anno_path` are useless.
+    "image_dir": "",
+    "anno_path": ""
+
+})
diff --git a/research/cv/ssd_resnet34/src/dataset.py b/research/cv/ssd_resnet34/src/dataset.py
new file mode 100644
index 0000000000000000000000000000000000000000..5b2a642557be69456ec1993967ee3148e8e267e8
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/dataset.py
@@ -0,0 +1,453 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""SSD dataset"""
+
+from __future__ import division
+
+import os
+import json
+import xml.etree.ElementTree as et
+import numpy as np
+import cv2
+
+import mindspore.dataset as de
+import mindspore.dataset.vision.c_transforms as C
+from mindspore.mindrecord import FileWriter
+from .config import config
+from .box_utils import jaccard_numpy, ssd_bboxes_encode
+
+
+def _rand(a=0., b=1.):
+    """Generate random."""
+    return np.random.rand() * (b - a) + a
+
+
+def get_imageId_from_fileName(filename, id_iter):
+    """Get imageID from fileName if fileName is int, else return id_iter."""
+    filename = os.path.splitext(filename)[0]
+    if filename.isdigit():
+        return int(filename)
+    return id_iter
+
+
+def random_sample_crop(image, boxes):
+    """Random Crop the image and boxes"""
+    height, width, _ = image.shape
+    min_iou = np.random.choice([None, 0.1, 0.3, 0.5, 0.7, 0.9])
+
+    if min_iou is None:
+        return image, boxes
+
+    # at most 50 trials
+    for _ in range(50):
+        image_t = image
+
+        w = _rand(0.3, 1.0) * width
+        h = _rand(0.3, 1.0) * height
+
+        # aspect ratio constraint between 0.5 and 2
+        if h / w < 0.5 or h / w > 2:
+            continue
+
+        left = _rand() * (width - w)
+        top = _rand() * (height - h)
+
+        rect = np.array([int(top), int(left), int(top + h), int(left + w)])
+        overlap = jaccard_numpy(boxes, rect)
+
+        # dropout some boxes
+        drop_mask = overlap > 0
+        if not drop_mask.any():
+            continue
+
+        if overlap[drop_mask].min() < min_iou and overlap[drop_mask].max() > (min_iou + 0.2):
+            continue
+
+        image_t = image_t[rect[0]:rect[2], rect[1]:rect[3], :]
+
+        centers = (boxes[:, :2] + boxes[:, 2:4]) / 2.0
+
+        m1 = (rect[0] < centers[:, 0]) * (rect[1] < centers[:, 1])
+        m2 = (rect[2] > centers[:, 0]) * (rect[3] > centers[:, 1])
+
+        # keep only boxes whose centers lie inside the crop and passed drop_mask
+        mask = m1 * m2 * drop_mask
+
+        # have any valid boxes? try again if not
+        if not mask.any():
+            continue
+
+        # take only matching gt boxes
+        boxes_t = boxes[mask, :].copy()
+
+        boxes_t[:, :2] = np.maximum(boxes_t[:, :2], rect[:2])
+        boxes_t[:, :2] -= rect[:2]
+        boxes_t[:, 2:4] = np.minimum(boxes_t[:, 2:4], rect[2:4])
+        boxes_t[:, 2:4] -= rect[:2]
+
+        return image_t, boxes_t
+    return image, boxes
+
+
+def preprocess_fn(img_id, image, box, is_training):
+    """Preprocess function for dataset."""
+    cv2.setNumThreads(2)
+
+    def _infer_data(image, input_shape):
+        img_h, img_w, _ = image.shape
+        input_h, input_w = input_shape
+
+        image = cv2.resize(image, (input_w, input_h))
+
+        # If the image is single-channel, replicate it to three channels
+        if len(image.shape) == 2:
+            image = np.expand_dims(image, axis=-1)
+            image = np.concatenate([image, image, image], axis=-1)
+
+        return img_id, image, np.array((img_h, img_w), np.float32)
+
+    def _data_aug(image, box, is_training, image_size=(300, 300)):
+        """Data augmentation function."""
+        ih, iw, _ = image.shape
+        h, w = image_size
+
+        if not is_training:
+            return _infer_data(image, image_size)
+
+        # Random crop
+        box = box.astype(np.float32)
+        image, box = random_sample_crop(image, box)
+        ih, iw, _ = image.shape
+
+        # Resize image
+        image = cv2.resize(image, (w, h))
+
+        # Flip image or not
+        flip = _rand() < .5
+        if flip:
+            image = cv2.flip(image, 1, dst=None)
+
+        # If the image is single-channel, replicate it to three channels
+        if len(image.shape) == 2:
+            image = np.expand_dims(image, axis=-1)
+            image = np.concatenate([image, image, image], axis=-1)
+
+        box[:, [0, 2]] = box[:, [0, 2]] / ih
+        box[:, [1, 3]] = box[:, [1, 3]] / iw
+
+        if flip:
+            box[:, [1, 3]] = 1 - box[:, [3, 1]]
+
+        box, label, num_match = ssd_bboxes_encode(box)
+        return image, box, label, num_match
+
+    return _data_aug(image, box, is_training, image_size=config.img_shape)
+
+
+def create_voc_label(is_training):
+    """Get image path and annotation from VOC."""
+    voc_root = config.voc_root
+    cls_map = {name: i for i, name in enumerate(config.classes)}
+    sub_dir = 'train' if is_training else 'eval'
+    voc_dir = os.path.join(voc_root, sub_dir)
+    if not os.path.isdir(voc_dir):
+        raise ValueError(f'Cannot find {sub_dir} dataset path.')
+
+    image_dir = anno_dir = voc_dir
+    if os.path.isdir(os.path.join(voc_dir, 'Images')):
+        image_dir = os.path.join(voc_dir, 'Images')
+    if os.path.isdir(os.path.join(voc_dir, 'Annotations')):
+        anno_dir = os.path.join(voc_dir, 'Annotations')
+
+    if not is_training:
+        json_file = os.path.join(config.voc_root, config.voc_json)
+        file_dir = os.path.split(json_file)[0]
+        if not os.path.isdir(file_dir):
+            os.makedirs(file_dir)
+        json_dict = {"images": [], "type": "instances", "annotations": [],
+                     "categories": []}
+        bnd_id = 1
+
+    image_files_dict = {}
+    image_anno_dict = {}
+    images = []
+    id_iter = 0
+    for anno_file in os.listdir(anno_dir):
+        if not anno_file.endswith('xml'):
+            continue
+        tree = et.parse(os.path.join(anno_dir, anno_file))
+        root_node = tree.getroot()
+        file_name = root_node.find('filename').text
+        img_id = get_imageId_from_fileName(file_name, id_iter)
+        id_iter += 1
+        image_path = os.path.join(image_dir, file_name)
+        if not os.path.isfile(image_path):
+            print(f'Cannot find image {file_name} according to annotations.')
+            continue
+
+        labels = []
+        for obj in root_node.iter('object'):
+            cls_name = obj.find('name').text
+            if cls_name not in cls_map:
+                print(f'Label "{cls_name}" not in "{config.classes}"')
+                continue
+            bnd_box = obj.find('bndbox')
+            x_min = int(float(bnd_box.find('xmin').text)) - 1
+            y_min = int(float(bnd_box.find('ymin').text)) - 1
+            x_max = int(float(bnd_box.find('xmax').text)) - 1
+            y_max = int(float(bnd_box.find('ymax').text)) - 1
+            labels.append([y_min, x_min, y_max, x_max, cls_map[cls_name]])
+
+            if not is_training:
+                o_width = abs(x_max - x_min)
+                o_height = abs(y_max - y_min)
+                ann = {'area': o_width * o_height, 'iscrowd': 0, 'image_id': \
+                    img_id, 'bbox': [x_min, y_min, o_width, o_height], \
+                       'category_id': cls_map[cls_name], 'id': bnd_id, \
+                       'ignore': 0, \
+                       'segmentation': []}
+                json_dict['annotations'].append(ann)
+                bnd_id = bnd_id + 1
+
+        if labels:
+            images.append(img_id)
+            image_files_dict[img_id] = image_path
+            image_anno_dict[img_id] = np.array(labels)
+
+        if not is_training:
+            size = root_node.find("size")
+            width = int(size.find('width').text)
+            height = int(size.find('height').text)
+            image = {'file_name': file_name, 'height': height, 'width': width,
+                     'id': img_id}
+            json_dict['images'].append(image)
+
+    if not is_training:
+        for cls_name, cid in cls_map.items():
+            cat = {'supercategory': 'none', 'id': cid, 'name': cls_name}
+            json_dict['categories'].append(cat)
+        with open(json_file, 'w') as json_fp:
+            json.dump(json_dict, json_fp)
+
+    return images, image_files_dict, image_anno_dict
+
+
+def create_coco_label(is_training):
+    """Get image path and annotation from COCO."""
+    from pycocotools.coco import COCO
+
+    coco_root = config.coco_root
+    data_type = config.val_data_type
+    if is_training:
+        data_type = config.train_data_type
+
+    # Classes need to train or test.
+    train_cls = config.classes
+    train_cls_dict = {}
+    for i, cls in enumerate(train_cls):
+        train_cls_dict[cls] = i
+
+    anno_json = os.path.join(coco_root, config.instances_set.format(data_type))
+
+    coco = COCO(anno_json)
+    classs_dict = {}
+    cat_ids = coco.loadCats(coco.getCatIds())
+    for cat in cat_ids:
+        classs_dict[cat["id"]] = cat["name"]
+
+    image_ids = coco.getImgIds()
+    images = []
+    image_path_dict = {}
+    image_anno_dict = {}
+
+    for img_id in image_ids:
+        image_info = coco.loadImgs(img_id)
+        file_name = image_info[0]["file_name"]
+        anno_ids = coco.getAnnIds(imgIds=img_id, iscrowd=None)
+        anno = coco.loadAnns(anno_ids)
+        image_path = os.path.join(coco_root, data_type, file_name)
+        annos = []
+        iscrowd = False
+        for label in anno:
+            bbox = label["bbox"]
+            class_name = classs_dict[label["category_id"]]
+            iscrowd = iscrowd or label["iscrowd"]
+            if class_name in train_cls:
+                x_min, x_max = bbox[0], bbox[0] + bbox[2]
+                y_min, y_max = bbox[1], bbox[1] + bbox[3]
+                annos.append(list(map(round, [y_min, x_min, y_max, x_max])) + [train_cls_dict[class_name]])
+
+        if not is_training and iscrowd:
+            continue
+        if len(annos) >= 1:
+            images.append(img_id)
+            image_path_dict[img_id] = image_path
+            image_anno_dict[img_id] = np.array(annos)
+
+    return images, image_path_dict, image_anno_dict
+
+
+def anno_parser(annos_str):
+    """Parse annotation from string to list."""
+    annos = []
+    for anno_str in annos_str:
+        anno = list(map(int, anno_str.strip().split(',')))
+        annos.append(anno)
+    return annos
+
+
+def filter_valid_data(image_dir, anno_path):
+    """Filter valid image file, which both in image_dir and anno_path."""
+    images = []
+    image_path_dict = {}
+    image_anno_dict = {}
+    if not os.path.isdir(image_dir):
+        raise RuntimeError("Path given is not valid.")
+    if not os.path.isfile(anno_path):
+        raise RuntimeError("Annotation file is not valid.")
+
+    with open(anno_path, "rb") as f:
+        lines = f.readlines()
+    for img_id, line in enumerate(lines):
+        line_str = line.decode("utf-8").strip()
+        line_split = str(line_str).split(' ')
+        file_name = line_split[0]
+        image_path = os.path.join(image_dir, file_name)
+        if os.path.isfile(image_path):
+            images.append(img_id)
+            image_path_dict[img_id] = image_path
+            image_anno_dict[img_id] = anno_parser(line_split[1:])
+
+    return images, image_path_dict, image_anno_dict
+
+
+def voc_data_to_mindrecord(mindrecord_dir, is_training, prefix="ssd.mindrecord", file_num=8):
+    """Create MindRecord file by image_dir and anno_path."""
+    mindrecord_path = os.path.join(mindrecord_dir, prefix)
+    writer = FileWriter(mindrecord_path, file_num)
+    images, image_path_dict, image_anno_dict = create_voc_label(is_training)
+
+    ssd_json = {
+        "img_id": {"type": "int32", "shape": [1]},
+        "image": {"type": "bytes"},
+        "annotation": {"type": "int32", "shape": [-1, 5]},
+    }
+    writer.add_schema(ssd_json, "ssd_json")
+
+    for img_id in images:
+        image_path = image_path_dict[img_id]
+        with open(image_path, 'rb') as f:
+            img = f.read()
+        annos = np.array(image_anno_dict[img_id], dtype=np.int32)
+        img_id = np.array([img_id], dtype=np.int32)
+        row = {"img_id": img_id, "image": img, "annotation": annos}
+        writer.write_raw_data([row])
+    writer.commit()
+
+
+def data_to_mindrecord_byte_image(dataset="coco", is_training=True, prefix="ssd.mindrecord", file_num=8):
+    """Create MindRecord file."""
+    mindrecord_dir = config.mindrecord_dir
+    mindrecord_path = os.path.join(mindrecord_dir, prefix)
+    writer = FileWriter(mindrecord_path, file_num)
+    if dataset == "coco":
+        images, image_path_dict, image_anno_dict = create_coco_label(is_training)
+    else:
+        images, image_path_dict, image_anno_dict = filter_valid_data(config.image_dir, config.anno_path)
+
+    ssd_json = {
+        "img_id": {"type": "int32", "shape": [1]},
+        "image": {"type": "bytes"},
+        "annotation": {"type": "int32", "shape": [-1, 5]},
+    }
+    writer.add_schema(ssd_json, "ssd_json")
+
+    for img_id in images:
+        image_path = image_path_dict[img_id]
+        with open(image_path, 'rb') as f:
+            img = f.read()
+        annos = np.array(image_anno_dict[img_id], dtype=np.int32)
+        img_id = np.array([img_id], dtype=np.int32)
+        row = {"img_id": img_id, "image": img, "annotation": annos}
+        writer.write_raw_data([row])
+    writer.commit()
+
+
+def create_ssd_dataset(mindrecord_file, batch_size=32, repeat_num=10, device_num=1, rank=0,
+                       is_training=True, num_parallel_workers=6, use_multiprocessing=True):
+    """Create SSD dataset with MindDataset."""
+    ds = de.MindDataset(mindrecord_file, columns_list=["img_id", "image", "annotation"], num_shards=device_num,
+                        shard_id=rank, num_parallel_workers=num_parallel_workers, shuffle=is_training)
+    decode = C.Decode()
+    ds = ds.map(operations=decode, input_columns=["image"])
+    change_swap_op = C.HWC2CHW()
+    normalize_op = C.Normalize(mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
+                               std=[0.229 * 255, 0.224 * 255, 0.225 * 255])
+    color_adjust_op = C.RandomColorAdjust(brightness=0.4, contrast=0.4, saturation=0.4)
+    compose_map_func = (lambda img_id, image, annotation: preprocess_fn(img_id, image, annotation, is_training))
+    if is_training:
+        output_columns = ["image", "box", "label", "num_match"]
+        trans = [color_adjust_op, normalize_op, change_swap_op]
+    else:
+        output_columns = ["img_id", "image", "image_shape"]
+        trans = [normalize_op, change_swap_op]
+    ds = ds.map(operations=compose_map_func, input_columns=["img_id", "image", "annotation"],
+                output_columns=output_columns, column_order=output_columns,
+                python_multiprocessing=use_multiprocessing,
+                num_parallel_workers=num_parallel_workers)
+    ds = ds.map(operations=trans, input_columns=["image"], python_multiprocessing=use_multiprocessing,
+                num_parallel_workers=num_parallel_workers)
+    ds = ds.batch(batch_size, drop_remainder=True)
+    ds = ds.repeat(repeat_num)
+    return ds
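+
+# Minimal usage sketch (paths are illustrative):
+#   files = [f"/path/to/ssd.mindrecord{i}" for i in range(8)]
+#   ds = create_ssd_dataset(files, batch_size=32, repeat_num=1)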
+
+
+def create_mindrecord(dataset="coco", prefix="ssd.mindrecord", is_training=True):
+    """Start create mindrecord"""
+    print("Start create dataset!")
+    # It will generate mindrecord file in config.mindrecord_dir,
+    # and the file name is ssd.mindrecord0, 1, ... file_num.
+    mindrecord_dir = config.mindrecord_dir
+    mindrecord_file = os.path.join(mindrecord_dir, prefix + "0")
+    if not os.path.exists(mindrecord_file):
+        if not os.path.isdir(mindrecord_dir):
+            os.makedirs(mindrecord_dir)
+        if dataset == "coco":
+            if os.path.isdir(config.coco_root):
+                print("Create Mindrecord.")
+                data_to_mindrecord_byte_image("coco", is_training, prefix)
+                print("Create Mindrecord Done, at {}".format(mindrecord_dir))
+            else:
+                print("coco_root not exits.")
+        elif dataset == "voc":
+            if os.path.isdir(config.voc_root):
+                print("Create Mindrecord.")
+                voc_data_to_mindrecord(mindrecord_dir, is_training, prefix)
+                print("Create Mindrecord Done, at {}".format(mindrecord_dir))
+            else:
+                print("voc_root not exits.")
+        else:
+            if os.path.isdir(config.image_dir) and os.path.exists(config.anno_path):
+                print("Create Mindrecord.")
+                data_to_mindrecord_byte_image("other", is_training, prefix)
+                print("Create Mindrecord Done, at {}".format(mindrecord_dir))
+            else:
+                print("image_dir or anno_path not exits.")
+    return mindrecord_file
diff --git a/research/cv/ssd_resnet34/src/eval_utils.py b/research/cv/ssd_resnet34/src/eval_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..fd2590ebdbdfa562be1370add55301c7d096d9de
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/eval_utils.py
@@ -0,0 +1,119 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""Coco metrics utils"""
+
+import json
+import numpy as np
+from .config import config
+
+
+def apply_nms(all_boxes, all_scores, thres, max_boxes):
+    """Apply NMS to bboxes."""
+    y1 = all_boxes[:, 0]
+    x1 = all_boxes[:, 1]
+    y2 = all_boxes[:, 2]
+    x2 = all_boxes[:, 3]
+    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
+
+    order = all_scores.argsort()[::-1]
+    keep = []
+
+    while order.size > 0:
+        i = order[0]
+        keep.append(i)
+
+        if len(keep) >= max_boxes:
+            break
+
+        xx1 = np.maximum(x1[i], x1[order[1:]])
+        yy1 = np.maximum(y1[i], y1[order[1:]])
+        xx2 = np.minimum(x2[i], x2[order[1:]])
+        yy2 = np.minimum(y2[i], y2[order[1:]])
+
+        w = np.maximum(0.0, xx2 - xx1 + 1)
+        h = np.maximum(0.0, yy2 - yy1 + 1)
+        inter = w * h
+
+        ovr = inter / (areas[i] + areas[order[1:]] - inter)
+
+        inds = np.where(ovr <= thres)[0]
+
+        order = order[inds + 1]
+    return keep
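+
+# A worked toy example of apply_nms (hand-checked; boxes are (y1, x1, y2, x2)):
+#   boxes  = np.array([[0, 0, 10, 10], [1, 1, 9, 9], [20, 20, 30, 30]], dtype=np.float32)
+#   scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)
+#   apply_nms(boxes, scores, thres=0.5, max_boxes=10)  # -> [0, 2]
+# Box 1 overlaps box 0 with IoU ~0.67 and is suppressed; box 2 does not overlap box 0.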
+
+
+def metrics(pred_data, anno_json):
+    """Calculate mAP of predicted bboxes."""
+    from pycocotools.coco import COCO
+    from pycocotools.cocoeval import COCOeval
+    num_classes = config.num_classes
+
+    # Classes needed for training or testing.
+    val_cls = config.classes
+    val_cls_dict = {}
+    for i, cls in enumerate(val_cls):
+        val_cls_dict[i] = cls
+    coco_gt = COCO(anno_json)
+    class_dict = {}
+    cat_ids = coco_gt.loadCats(coco_gt.getCatIds())
+    for cat in cat_ids:
+        classs_dict[cat["name"]] = cat["id"]
+
+    predictions = []
+    img_ids = []
+
+    for sample in pred_data:
+        pred_boxes = sample['boxes']
+        box_scores = sample['box_scores']
+        img_id = sample['img_id']
+        h, w = sample['image_shape']
+
+        final_boxes = []
+        final_label = []
+        final_score = []
+        img_ids.append(img_id)
+
+        for c in range(1, num_classes):
+            class_box_scores = box_scores[:, c]
+            score_mask = class_box_scores > config.min_score
+            class_box_scores = class_box_scores[score_mask]
+            class_boxes = pred_boxes[score_mask] * [h, w, h, w]
+
+            if score_mask.any():
+                nms_index = apply_nms(class_boxes, class_box_scores, config.nms_threshold, config.max_boxes)
+                class_boxes = class_boxes[nms_index]
+                class_box_scores = class_box_scores[nms_index]
+
+                final_boxes += class_boxes.tolist()
+                final_score += class_box_scores.tolist()
+                final_label += [class_dict[val_cls_dict[c]]] * len(class_box_scores)
+
+        for loc, label, score in zip(final_boxes, final_label, final_score):
+            res = {}
+            res['image_id'] = img_id
+            res['bbox'] = [loc[1], loc[0], loc[3] - loc[1], loc[2] - loc[0]]
+            res['score'] = score
+            res['category_id'] = label
+            predictions.append(res)
+    with open('predictions.json', 'w') as f:
+        json.dump(predictions, f)
+
+    coco_dt = coco_gt.loadRes('predictions.json')
+    E = COCOeval(coco_gt, coco_dt, iouType='bbox')
+    E.params.imgIds = img_ids
+    E.evaluate()
+    E.accumulate()
+    E.summarize()
+    return E.stats[0]
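+
+
+# Expected input layout for metrics(), inferred from the loop above (a sketch,
+# not authoritative API documentation; the annotation path is a placeholder):
+#   pred_data = [{"boxes": N x 4 array of normalized (y0, x0, y1, x1),
+#                 "box_scores": N x num_classes array,
+#                 "img_id": COCO image id,
+#                 "image_shape": (height, width)}, ...]
+#   mAP = metrics(pred_data, "/path/to/instances_val2017.json")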
diff --git a/research/cv/ssd_resnet34/src/init_params.py b/research/cv/ssd_resnet34/src/init_params.py
new file mode 100644
index 0000000000000000000000000000000000000000..64833e798657d6c11a49f80f62be9f78646174bc
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/init_params.py
@@ -0,0 +1,50 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""Parameters utils"""
+
+from mindspore.common.initializer import initializer, TruncatedNormal
+
+def init_net_param(network, initialize_mode='TruncatedNormal'):
+    """Init the parameters in net."""
+    params = network.trainable_params()
+    for p in params:
+        if 'beta' not in p.name and 'gamma' not in p.name and 'bias' not in p.name:
+            if initialize_mode == 'TruncatedNormal':
+                p.set_data(initializer(TruncatedNormal(0.02), p.data.shape, p.data.dtype))
+            else:
+                p.set_data(initializer(initialize_mode, p.data.shape, p.data.dtype))
+
+
+def load_backbone_params(network, param_dict):
+    """Init the parameters from pre-train model, default is mobilenetv2."""
+    for _, param in network.parameters_and_names():
+        param_name = param.name.replace('network.backbone.', '')
+        name_split = param_name.split('.')
+        if 'features_1' in param_name:
+            param_name = param_name.replace('features_1', 'features')
+        if 'features_2' in param_name:
+            param_name = '.'.join(['features', str(int(name_split[1]) + 14)] + name_split[2:])
+        if param_name in param_dict:
+            param.set_data(param_dict[param_name].data)
+
+
+def filter_checkpoint_parameter_by_list(param_dict, filter_list):
+    """remove useless parameters according to filter_list"""
+    for key in list(param_dict.keys()):
+        for name in filter_list:
+            if name in key:
+                print("Delete parameter from checkpoint: ", key)
+                del param_dict[key]
+                break
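+
+
+# A minimal fine-tuning sketch using the helpers above (the filter_list entries
+# are hypothetical examples; the real list comes from config.checkpoint_filter_list,
+# and load_checkpoint/load_param_into_net come from mindspore.train.serialization):
+#   param_dict = load_checkpoint("ssd_pretrained.ckpt")
+#   filter_checkpoint_parameter_by_list(param_dict, ["multi_loc_layers", "multi_cls_layers"])
+#   load_param_into_net(network, param_dict)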
diff --git a/research/cv/ssd_resnet34/src/lr_schedule.py b/research/cv/ssd_resnet34/src/lr_schedule.py
new file mode 100644
index 0000000000000000000000000000000000000000..e88234d89a1825e1ba27638033b897bc941b5a8b
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/lr_schedule.py
@@ -0,0 +1,56 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Learning rate schedule"""
+
+import math
+import numpy as np
+
+
+def get_lr(global_step, lr_init, lr_end, lr_max, warmup_epochs, total_epochs, steps_per_epoch):
+    """
+    Generate the learning rate array.
+
+    Args:
+       global_step(int): the step from which training resumes; earlier entries are trimmed
+       lr_init(float): initial learning rate
+       lr_end(float): final learning rate
+       lr_max(float): maximum learning rate
+       warmup_epochs(float): number of warmup epochs
+       total_epochs(int): total number of training epochs
+       steps_per_epoch(int): number of steps per epoch
+
+    Returns:
+       np.array, learning rate array
+    """
+    lr_each_step = []
+    total_steps = steps_per_epoch * total_epochs
+    warmup_steps = steps_per_epoch * warmup_epochs
+    for i in range(total_steps):
+        if i < warmup_steps:
+            lr = lr_init + (lr_max - lr_init) * i / warmup_steps
+        else:
+            lr = lr_end + \
+                 (lr_max - lr_end) * \
+                 (1. + math.cos(math.pi * (i - warmup_steps) / (total_steps - warmup_steps))) / 2.
+        if lr < 0.0:
+            lr = 0.0
+        lr_each_step.append(lr)
+
+    current_step = global_step
+    lr_each_step = np.array(lr_each_step).astype(np.float32)
+    learning_rate = lr_each_step[current_step:]
+
+    return learning_rate
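+
+
+# A small hand-computed example of the schedule above: with lr_init=0.0, lr_end=0.0,
+# lr_max=0.1, warmup_epochs=1, total_epochs=2, steps_per_epoch=2 and global_step=0,
+# get_lr returns four values: 0.0 and 0.05 (linear warmup), then 0.1 and 0.05
+# (cosine decay from cos(0) to cos(pi/2)).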
diff --git a/research/cv/ssd_resnet34/src/resnet34.py b/research/cv/ssd_resnet34/src/resnet34.py
new file mode 100644
index 0000000000000000000000000000000000000000..7b623b8f6dd9509fe8ea0ff0966037e068dfe17a
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/resnet34.py
@@ -0,0 +1,225 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Build resnet34"""
+
+import mindspore.nn as nn
+from mindspore.ops import operations as P
+
+def _conv3x3(in_channel, out_channel, stride=1):
+    return nn.Conv2d(in_channel, out_channel, kernel_size=3, stride=stride, padding=1, pad_mode='pad')
+
+def _conv1x1(in_channel, out_channel, stride=1):
+    return nn.Conv2d(in_channel, out_channel, kernel_size=1, stride=stride, padding=0, pad_mode='pad')
+
+def _conv7x7(in_channel, out_channel, stride=1):
+    return nn.Conv2d(in_channel, out_channel, kernel_size=7, stride=stride, padding=3, pad_mode='pad')
+
+def _bn(channel):
+    return nn.BatchNorm2d(channel, eps=1e-5, momentum=0.9,
+                          gamma_init=1, beta_init=0, moving_mean_init=0, moving_var_init=1)
+
+
+def _bn_last(channel):
+    return nn.BatchNorm2d(channel, eps=1e-3, momentum=0.997,
+                          gamma_init=0, beta_init=0, moving_mean_init=0, moving_var_init=1)
+
+
+def _ModifyConvStrideDilation(conv, stride=(1, 1), padding=None):
+    conv.stride = stride
+
+    if padding is not None:
+        conv.padding = padding
+
+
+def _ModifyBlock(block, bottleneck=False, **kwargs):
+    for cell in block:
+        if bottleneck:
+            _ModifyConvStrideDilation(cell.conv2, **kwargs)
+        else:
+            _ModifyConvStrideDilation(cell.conv1, **kwargs)
+        if cell.down_sample_layer is not None:
+            # need to make sure no padding for the 1x1 residual connection
+            _ModifyConvStrideDilation(list(cell.down_sample_layer)[0], **kwargs)
+
+
+class BasicBlock(nn.Cell):
+    """
+    ResNet V1 residual block definition.
+
+    Args:
+        in_channel (int): Input channel.
+        out_channel (int): Output channel.
+        stride (int): Stride size for the first convolutional layer. Default: 1.
+
+    Returns:
+        Tensor, output tensor.
+
+    Examples:
+        >>> BasicBlock(3, 64, stride=2)
+    """
+    expansion = 1
+
+    def __init__(self,
+                 in_channel,
+                 out_channel,
+                 stride=1):
+        super(BasicBlock, self).__init__()
+        self.stride = stride
+        channel = out_channel // self.expansion
+        self.conv1 = _conv3x3(in_channel, channel, stride=stride)
+        self.bn1 = _bn(channel)
+        self.relu = nn.ReLU()
+        self.conv2 = _conv3x3(channel, channel, stride=1)
+        self.bn2 = _bn(channel)
+
+        self.down_sample = False
+
+        if stride != 1 or in_channel != out_channel:
+            self.down_sample = True
+        self.down_sample_layer = None
+
+        if self.down_sample:
+            self.down_sample_layer = \
+                nn.SequentialCell([nn.Conv2d(in_channel,
+                                             out_channel,
+                                             kernel_size=1,
+                                             stride=stride,
+                                             pad_mode='valid'),
+                                   _bn(out_channel)])
+        self.add = P.Add()
+
+    def construct(self, x):
+        """Construct net"""
+        identity = x
+        out = self.conv1(x)
+        out = self.bn1(out)
+        out = self.relu(out)
+        out = self.conv2(out)
+        out = self.bn2(out)
+
+        if self.down_sample:
+            identity = self.down_sample_layer(identity)
+        out = self.add(out, identity)
+        out = self.relu(out)
+
+        return out
+
+class ResNet34(nn.Cell):
+    """
+    ResNet architecture.
+
+    Args:
+        block (Cell): Block for network.
+        layer_nums (list): Numbers of blocks in different layers.
+        in_channels (list): Input channel in each layer.
+        out_channels (list): Output channel in each layer.
+        strides (list): Stride size in each layer.
+    Returns:
+        Tensor, output tensor.
+
+    Examples:
+        >>> ResNet34(BasicBlock,
+        >>>          [3, 4, 6],
+        >>>          [64, 64, 128],
+        >>>          [64, 128, 256],
+        >>>          [1, 2, 1])
+    """
+
+    def __init__(self,
+                 block,
+                 layer_nums,
+                 in_channels,
+                 out_channels,
+                 strides):
+        super(ResNet34, self).__init__()
+
+        if not len(layer_nums) == len(in_channels) == len(out_channels) == 3:
+            raise ValueError("the length of layer_num, in_channels, out_channels list must be 3!")
+        self.conv1 = _conv7x7(3, 64, stride=2)
+        self.bn1 = _bn(64)
+        self.relu = nn.ReLU()
+        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode='same')
+        self.layer1 = self._make_layer(block,
+                                       layer_nums[0],
+                                       in_channel=in_channels[0],
+                                       out_channel=out_channels[0],
+                                       stride=strides[0])
+        self.layer2 = self._make_layer(block,
+                                       layer_nums[1],
+                                       in_channel=in_channels[1],
+                                       out_channel=out_channels[1],
+                                       stride=strides[1])
+        self.layer3 = self._make_layer(block,
+                                       layer_nums[2],
+                                       in_channel=in_channels[2],
+                                       out_channel=out_channels[2],
+                                       stride=strides[2])
+
+        _ModifyBlock(list(self.layer3), stride=(1, 1))
+
+
+    def _make_layer(self, block, layer_num, in_channel, out_channel, stride):
+        """
+        Make stage network of ResNet.
+
+        Args:
+            block (Cell): Resnet block.
+            layer_num (int): Layer number.
+            in_channel (int): Input channel.
+            out_channel (int): Output channel.
+            stride (int): Stride size for the first convolutional layer.
+        Returns:
+            SequentialCell, the output layer.
+
+        Examples:
+            >>> _make_layer(BasicBlock, 3, 128, 256, 2)
+        """
+        layers = []
+
+        resnet_block = block(in_channel, out_channel, stride=stride)
+        layers.append(resnet_block)
+        for _ in range(1, layer_num):
+            resnet_block = block(out_channel, out_channel, stride=1)
+            layers.append(resnet_block)
+        return nn.SequentialCell(layers)
+
+    def construct(self, x):
+        """
+        Forward
+        """
+        x = self.conv1(x)
+        x = self.bn1(x)
+        x = self.relu(x)
+        c1 = self.maxpool(x)
+        c2 = self.layer1(c1)
+        c3 = self.layer2(c2)
+        c4 = self.layer3(c3)
+
+        return [c4]
+
+def resnet34():
+    """
+    Get ResNet34 neural network.
+
+    Returns:
+        Cell, cell instance of ResNet34 neural network.
+
+    Examples:
+        >>> net = resnet34()
+    """
+    return ResNet34(BasicBlock, [3, 4, 6], [64, 64, 128], [64, 128, 256], [1, 2, 1])
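+
+
+# A quick shape sketch for a 300x300 SSD input (hand-derived, not asserted by the code):
+# conv1 (stride 2) gives 150x150, maxpool gives 75x75, layer2 halves that to 38x38, and
+# layer3 keeps 38x38 because _ModifyBlock resets its stride, so
+#   resnet34()(Tensor(np.zeros((1, 3, 300, 300), np.float32)))
+# returns a list with a single (1, 256, 38, 38) feature map.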
diff --git a/research/cv/ssd_resnet34/src/ssd.py b/research/cv/ssd_resnet34/src/ssd.py
new file mode 100644
index 0000000000000000000000000000000000000000..6de7fd67dea79911add4abb5eb385c8e27689e8d
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/ssd.py
@@ -0,0 +1,562 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""SSD net based resnet34."""
+
+import mindspore.common.dtype as mstype
+import mindspore as ms
+import mindspore.nn as nn
+from mindspore import context, Tensor
+from mindspore.context import ParallelMode
+from mindspore.parallel._auto_parallel_context import auto_parallel_context
+from mindspore.communication.management import get_group_size
+from mindspore.ops import operations as P
+from mindspore.ops import functional as F
+from mindspore.ops import composite as C
+from src.ssd_resnet34 import SSD_ResNet34
+
+
+def _make_divisible(v, divisor, min_value=None):
+    """nsures that all layers have a channel number that is divisible by 8."""
+    if min_value is None:
+        min_value = divisor
+    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
+    # Make sure that round down does not go down by more than 10%.
+    if new_v < 0.9 * v:
+        new_v += divisor
+    return new_v
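+
+# Worked examples of the rounding rule above (hand-checked): _make_divisible(24, 8) -> 24,
+# _make_divisible(30, 8) -> 32, and _make_divisible(9, 8) -> 16, because rounding 9 down
+# to 8 would fall below 90% of the input value, so one extra divisor step is added.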
+
+
+def _conv2d(in_channel, out_channel, kernel_size=3, stride=1, pad_mod='same'):
+    return nn.Conv2d(in_channel, out_channel, kernel_size=kernel_size, stride=stride,
+                     padding=0, pad_mode=pad_mod, has_bias=True)
+
+
+def _bn(channel):
+    return nn.BatchNorm2d(channel, eps=1e-3, momentum=0.97,
+                          gamma_init=1, beta_init=0, moving_mean_init=0, moving_var_init=1)
+
+
+def _last_conv2d(in_channel, out_channel, kernel_size=3, stride=1, pad_mod='same', pad=0):
+    in_channels = in_channel
+    out_channels = in_channel
+    depthwise_conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad_mode='same',
+                               padding=pad, group=in_channels)
+    conv = _conv2d(in_channel, out_channel, kernel_size=1)
+    return nn.SequentialCell([depthwise_conv, _bn(in_channel), nn.ReLU6(), conv])
+
+class ConvBNReLU(nn.Cell):
+    """
+    Convolution/Depthwise fused with Batchnorm and ReLU block definition.
+
+    Args:
+        in_planes (int): Input channel.
+        out_planes (int): Output channel.
+        kernel_size (int): Input kernel size.
+        stride (int): Stride size for the first convolutional layer. Default: 1.
+        groups (int): Channel group; 1 for a regular convolution, the input channel count for depthwise. Default: 1.
+        shared_conv (Cell): Use the weight-shared conv. Default: None.
+
+    Returns:
+        Tensor, output tensor.
+
+    Examples:
+        >>> ConvBNReLU(16, 256, kernel_size=1, stride=1, groups=1)
+    """
+
+    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1, shared_conv=None):
+        super(ConvBNReLU, self).__init__()
+        padding = 0
+        in_channels = in_planes
+        out_channels = out_planes
+        if shared_conv is None:
+            if groups == 1:
+                conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad_mode='same', padding=padding)
+            else:
+                out_channels = in_planes
+                conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad_mode='same',
+                                 padding=padding, group=in_channels)
+            layers = [conv, _bn(out_planes), nn.ReLU6()]
+        else:
+            layers = [shared_conv, _bn(out_planes), nn.ReLU6()]
+        self.features = nn.SequentialCell(layers)
+
+    def construct(self, x):
+        output = self.features(x)
+        return output
+
+class InvertedResidual(nn.Cell):
+    """
+    Residual block definition.
+
+    Args:
+        inp (int): Input channel.
+        oup (int): Output channel.
+        stride (int): Stride size for the first convolutional layer. Default: 1.
+        expand_ratio (int): expand ratio of the input channel
+
+    Returns:
+        Tensor, output tensor.
+
+    Examples:
+        >>> ResidualBlock(3, 256, 1, 1)
+    """
+
+    def __init__(self, inp, oup, stride, expand_ratio, last_relu=False):
+        super(InvertedResidual, self).__init__()
+        assert stride in [1, 2]
+
+        hidden_dim = int(round(inp * expand_ratio))
+        self.use_res_connect = stride == 1 and inp == oup
+
+        layers = []
+        if expand_ratio != 1:
+            layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1))
+        layers.extend([
+            # dw
+            ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim),
+            # pw-linear
+            nn.Conv2d(hidden_dim, oup, kernel_size=1, stride=1, has_bias=False),
+            _bn(oup),
+        ])
+        self.conv = nn.SequentialCell(layers)
+        self.cast = P.Cast()
+        self.last_relu = last_relu
+        self.relu = nn.ReLU6()
+
+    def construct(self, x):
+        identity = x
+        x = self.conv(x)
+        if self.use_res_connect:
+            x = identity + x
+        if self.last_relu:
+            x = self.relu(x)
+        return x
+
+class FlattenConcat(nn.Cell):
+    """
+    Concatenate predictions into a single tensor.
+
+    Args:
+        config (dict): The default config of SSD.
+
+    Returns:
+        Tensor, flatten predictions.
+    """
+
+    def __init__(self, config):
+        super(FlattenConcat, self).__init__()
+        self.num_ssd_boxes = config.num_ssd_boxes
+        self.concat = P.Concat(axis=1)
+        self.transpose = P.Transpose()
+
+    def construct(self, inputs):
+        """Construct FlattenConcat"""
+        output = ()
+        batch_size = F.shape(inputs[0])[0]
+        for x in inputs:
+            x = self.transpose(x, (0, 2, 3, 1))
+            output += (F.reshape(x, (batch_size, -1)),)
+        res = self.concat(output)
+        return F.reshape(res, (batch_size, self.num_ssd_boxes, -1))
+
+class MultiBox(nn.Cell):
+    """
+    Multibox conv layers. Each multibox layer contains class conf scores and localization predictions.
+
+    Args:
+        config (dict): The default config of SSD.
+
+    Returns:
+        Tensor, localization predictions.
+        Tensor, class conf scores.
+    """
+
+    def __init__(self, config):
+        super(MultiBox, self).__init__()
+        num_classes = config.num_classes
+        out_channels = config.extras_out_channels
+        num_default = config.num_default
+
+        loc_layers = []
+        cls_layers = []
+        for k, out_channel in enumerate(out_channels):
+            loc_layers += [_last_conv2d(out_channel, 4 * num_default[k],
+                                        kernel_size=3, stride=1, pad_mod='same', pad=0)]
+            cls_layers += [_last_conv2d(out_channel, num_classes * num_default[k],
+                                        kernel_size=3, stride=1, pad_mod='same', pad=0)]
+
+        self.multi_loc_layers = nn.layer.CellList(loc_layers)
+        self.multi_cls_layers = nn.layer.CellList(cls_layers)
+        self.flatten_concat = FlattenConcat(config)
+
+    def construct(self, inputs):
+        loc_outputs = ()
+        cls_outputs = ()
+        for i in range(len(self.multi_loc_layers)):
+            loc_outputs += (self.multi_loc_layers[i](inputs[i]),)
+
+            cls_outputs += (self.multi_cls_layers[i](inputs[i]),)
+
+        return self.flatten_concat(loc_outputs), self.flatten_concat(cls_outputs)
+
+class WeightSharedMultiBox(nn.Cell):
+    """
+    Weight shared Multi-box conv layers. Each multi-box layer contains class conf scores and localization predictions.
+    All box predictors share the same conv weights across the different feature maps.
+
+    Args:
+        config (dict): The default config of SSD.
+        loc_cls_shared_addition(bool): Whether the location predictor and classifier prediction share the
+                                       same addition layer.
+    Returns:
+        Tensor, localization predictions.
+        Tensor, class conf scores.
+    """
+
+    def __init__(self, config, loc_cls_shared_addition=False):
+        super(WeightSharedMultiBox, self).__init__()
+        num_classes = config.num_classes
+        out_channels = config.extras_out_channels[0]
+        num_default = config.num_default[0]
+        num_features = len(config.feature_size)
+        num_addition_layers = config.num_addition_layers
+        self.loc_cls_shared_addition = loc_cls_shared_addition
+
+        if not loc_cls_shared_addition:
+            loc_convs = [
+                _conv2d(out_channels, out_channels, 3, 1) for x in range(num_addition_layers)
+            ]
+            cls_convs = [
+                _conv2d(out_channels, out_channels, 3, 1) for x in range(num_addition_layers)
+            ]
+            addition_loc_layer_list = []
+            addition_cls_layer_list = []
+            for _ in range(num_features):
+                addition_loc_layer = [
+                    ConvBNReLU(out_channels, out_channels, 3, 1, 1, loc_convs[x]) for x in range(num_addition_layers)
+                ]
+                addition_cls_layer = [
+                    ConvBNReLU(out_channels, out_channels, 3, 1, 1, cls_convs[x]) for x in range(num_addition_layers)
+                ]
+                addition_loc_layer_list.append(nn.SequentialCell(addition_loc_layer))
+                addition_cls_layer_list.append(nn.SequentialCell(addition_cls_layer))
+            self.addition_layer_loc = nn.CellList(addition_loc_layer_list)
+            self.addition_layer_cls = nn.CellList(addition_cls_layer_list)
+        else:
+            convs = [
+                _conv2d(out_channels, out_channels, 3, 1) for x in range(num_addition_layers)
+            ]
+            addition_layer_list = []
+            for _ in range(num_features):
+                addition_layers = [
+                    ConvBNReLU(out_channels, out_channels, 3, 1, 1, convs[x]) for x in range(num_addition_layers)
+                ]
+                addition_layer_list.append(nn.SequentialCell(addition_layers))
+            self.addition_layer = nn.SequentialCell(addition_layer_list)
+
+        loc_layers = [_conv2d(out_channels, 4 * num_default,
+                              kernel_size=3, stride=1, pad_mod='same')]
+        cls_layers = [_conv2d(out_channels, num_classes * num_default,
+                              kernel_size=3, stride=1, pad_mod='same')]
+
+        self.loc_layers = nn.SequentialCell(loc_layers)
+        self.cls_layers = nn.SequentialCell(cls_layers)
+        self.flatten_concat = FlattenConcat(config)
+
+    def construct(self, inputs):
+        """Construct WeightSharedMultiBox"""
+        loc_outputs = ()
+        cls_outputs = ()
+        num_heads = len(inputs)
+        for i in range(num_heads):
+            if self.loc_cls_shared_addition:
+                features = self.addition_layer[i](inputs[i])
+                loc_outputs += (self.loc_layers(features),)
+                cls_outputs += (self.cls_layers(features),)
+            else:
+                features = self.addition_layer_loc[i](inputs[i])
+                loc_outputs += (self.loc_layers(features),)
+                features = self.addition_layer_cls[i](inputs[i])
+                cls_outputs += (self.cls_layers(features),)
+        return self.flatten_concat(loc_outputs), self.flatten_concat(cls_outputs)
+
+class SsdResnet34(nn.Cell):
+    """
+    SSD network with a ResNet-34 backbone (feature extractor plus multi-box head).
+
+    Args:
+        config (dict): The default config of SSD.
+
+    Returns:
+        Tensor, the localization predictions (pred_loc).
+        Tensor, the class confidence scores (pred_label).
+    """
+    def __init__(self, config):
+        super(SsdResnet34, self).__init__()
+        self.multi_box = MultiBox(config)
+        self.activation = P.Sigmoid()
+        self.feature_extractor = SSD_ResNet34(config)
+
+    def construct(self, x):
+        features = self.feature_extractor(x)
+        pred_loc, pred_label = self.multi_box(features)
+        if not self.training:
+            pred_label = self.activation(pred_label)
+        pred_loc = F.cast(pred_loc, mstype.float32)
+        pred_label = F.cast(pred_label, mstype.float32)
+        return pred_loc, pred_label
+
+class SigmoidFocalClassificationLoss(nn.Cell):
+    """"
+    Sigmoid focal-loss for classification.
+
+    Args:
+        gamma (float): Hyper-parameter to balance the easy and hard examples. Default: 2.0
+        alpha (float): Hyper-parameter to balance the positive and negative example. Default: 0.25
+
+    Returns:
+        Tensor, the focal loss.
+    """
+
+    def __init__(self, gamma=2.0, alpha=0.25):
+        super(SigmoidFocalClassificationLoss, self).__init__()
+        self.sigmoid_cross_entropy = P.SigmoidCrossEntropyWithLogits()
+        self.sigmoid = P.Sigmoid()
+        self.pow = P.Pow()
+        self.onehot = P.OneHot()
+        self.on_value = Tensor(1.0, mstype.float32)
+        self.off_value = Tensor(0.0, mstype.float32)
+        self.gamma = gamma
+        self.alpha = alpha
+
+    def construct(self, logits, label):
+        label = self.onehot(label, F.shape(logits)[-1], self.on_value, self.off_value)
+        sigmoid_cross_entropy = self.sigmoid_cross_entropy(logits, label)
+        sigmoid = self.sigmoid(logits)
+        label = F.cast(label, mstype.float32)
+        p_t = label * sigmoid + (1 - label) * (1 - sigmoid)
+        modulating_factor = self.pow(1 - p_t, self.gamma)
+        alpha_weight_factor = label * self.alpha + (1 - label) * (1 - self.alpha)
+        focal_loss = modulating_factor * alpha_weight_factor * sigmoid_cross_entropy
+        return focal_loss
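+
+# In formula form, the loss above is the standard sigmoid focal loss applied per class:
+#   FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t)
+# where p_t is the predicted probability of the true class, alpha_t is alpha for
+# positive labels and (1 - alpha) for negatives, and sigmoid_cross_entropy supplies
+# the -log(p_t) term.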
+
+class SSDWithLossCell(nn.Cell):
+    """"
+    Provide SSD training loss through network.
+
+    Args:
+        network (Cell): The training network.
+        config (dict): SSD config.
+
+    Returns:
+        Tensor, the loss of the network.
+    """
+
+    def __init__(self, network, config):
+        super(SSDWithLossCell, self).__init__()
+        self.network = network
+        self.less = P.Less()
+        self.tile = P.Tile()
+        self.reduce_sum = P.ReduceSum()
+        self.expand_dims = P.ExpandDims()
+        self.class_loss = SigmoidFocalClassificationLoss(config.gamma, config.alpha)
+        self.loc_loss = nn.SmoothL1Loss()
+
+    def construct(self, x, gt_loc, gt_label, num_matched_boxes):
+        """Construct SSDWithLossCell"""
+        pred_loc, pred_label = self.network(x)
+        mask = F.cast(self.less(0, gt_label), mstype.float32)
+        num_matched_boxes = self.reduce_sum(F.cast(num_matched_boxes, mstype.float32))
+
+        # Localization Loss
+        mask_loc = self.tile(self.expand_dims(mask, -1), (1, 1, 4))
+        smooth_l1 = self.loc_loss(pred_loc, gt_loc) * mask_loc
+        loss_loc = self.reduce_sum(self.reduce_sum(smooth_l1, -1), -1)
+
+        # Classification Loss
+        loss_cls = self.class_loss(pred_label, gt_label)
+        loss_cls = self.reduce_sum(loss_cls, (1, 2))
+
+        return self.reduce_sum((loss_cls + loss_loc) / num_matched_boxes)
+
+
+grad_scale = C.MultitypeFuncGraph("grad_scale")
+
+
+@grad_scale.register("Tensor", "Tensor")
+def tensor_grad_scale(scale, grad):
+    return grad * P.Reciprocal()(scale)
+
+
+class TrainingWrapper(nn.Cell):
+    """
+    Encapsulation class of SSD network training.
+
+    Append an optimizer to the training network. After that, the construct
+    function can be called to create the backward graph.
+
+    Args:
+        network (Cell): The training network. Note that loss function should have been added.
+        optimizer (Optimizer): Optimizer for updating the weights.
+        sens (Number): The adjust parameter. Default: 1.0.
+        use_global_norm (bool): Whether to apply global norm before the optimizer. Default: False
+    """
+
+    def __init__(self, network, optimizer, sens=1.0, use_global_norm=False):
+        super(TrainingWrapper, self).__init__(auto_prefix=False)
+        self.network = network
+        self.network.set_grad()
+        self.weights = ms.ParameterTuple(network.trainable_params())
+        self.optimizer = optimizer
+        self.grad = C.GradOperation(get_by_list=True, sens_param=True)
+        self.sens = sens
+        self.reducer_flag = False
+        self.grad_reducer = None
+        self.use_global_norm = use_global_norm
+        self.parallel_mode = context.get_auto_parallel_context("parallel_mode")
+        if self.parallel_mode in [ParallelMode.DATA_PARALLEL, ParallelMode.HYBRID_PARALLEL]:
+            self.reducer_flag = True
+        if self.reducer_flag:
+            mean = context.get_auto_parallel_context("gradients_mean")
+            if auto_parallel_context().get_device_num_is_set():
+                degree = context.get_auto_parallel_context("device_num")
+            else:
+                degree = get_group_size()
+            self.grad_reducer = nn.DistributedGradReducer(optimizer.parameters, mean, degree)
+        self.hyper_map = C.HyperMap()
+
+    def construct(self, *args):
+        """Construct TrainingWrapper"""
+        weights = self.weights
+        loss = self.network(*args)
+        sens = P.Fill()(P.DType()(loss), P.Shape()(loss), self.sens)
+        grads = self.grad(self.network, weights)(*args, sens)
+        if self.reducer_flag:
+            # apply grad reducer on grads
+            grads = self.grad_reducer(grads)
+        if self.use_global_norm:
+            grads = self.hyper_map(F.partial(grad_scale, F.scalar_to_array(self.sens)), grads)
+            grads = C.clip_by_global_norm(grads)
+        return F.depend(loss, self.optimizer(grads))
+
+
+class SSDWithMobileNetV2(nn.Cell):
+    """
+    MobileNetV2 architecture for SSD backbone.
+
+    Args:
+        width_mult (float): Channel multiplier; channel counts are rounded to multiples of round_nearest. Default: 1.0.
+        inverted_residual_setting (list): Inverted residual settings. Default: None.
+        round_nearest (int): Round channel counts to a multiple of this value. Default: 8.
+    Returns:
+        Tensor, the 13th feature after ConvBNReLU in MobileNetV2.
+        Tensor, the last feature in MobileNetV2.
+
+    Examples:
+        >>> SSDWithMobileNetV2()
+    """
+
+    def __init__(self, width_mult=1.0, inverted_residual_setting=None, round_nearest=8):
+        super(SSDWithMobileNetV2, self).__init__()
+        block = InvertedResidual
+        input_channel = 32
+        last_channel = 1280
+
+        if inverted_residual_setting is None:
+            inverted_residual_setting = [
+                # t, c, n, s
+                [1, 16, 1, 1],
+                [6, 24, 2, 2],
+                [6, 32, 3, 2],
+                [6, 64, 4, 2],
+                [6, 96, 3, 1],
+                [6, 160, 3, 2],
+                [6, 320, 1, 1],
+            ]
+        if len(inverted_residual_setting[0]) != 4:
+            raise ValueError("inverted_residual_setting should be non-empty "
+                             "or a 4-element list, got {}".format(inverted_residual_setting))
+
+        # building first layer
+        input_channel = _make_divisible(input_channel * width_mult, round_nearest)
+        self.last_channel = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)
+        features = [ConvBNReLU(3, input_channel, stride=2)]
+        # building inverted residual blocks
+        layer_index = 0
+        for t, c, n, s in inverted_residual_setting:
+            output_channel = _make_divisible(c * width_mult, round_nearest)
+            for i in range(n):
+                if layer_index == 13:
+                    hidden_dim = int(round(input_channel * t))
+                    self.expand_layer_conv_13 = ConvBNReLU(input_channel, hidden_dim, kernel_size=1)
+                stride = s if i == 0 else 1
+                features.append(block(input_channel, output_channel, stride, expand_ratio=t))
+                input_channel = output_channel
+                layer_index += 1
+        # building last several layers
+        features.append(ConvBNReLU(input_channel, self.last_channel, kernel_size=1))
+
+        self.features_1 = nn.SequentialCell(features[:14])
+        self.features_2 = nn.SequentialCell(features[14:])
+
+    def construct(self, x):
+        out = self.features_1(x)
+        expand_layer_conv_13 = self.expand_layer_conv_13(out)
+        out = self.features_2(out)
+        return expand_layer_conv_13, out
+
+    def get_out_channels(self):
+        return self.last_channel
+
+
+class SsdInferWithDecoder(nn.Cell):
+    """
+    SSD Infer wrapper to decode the bbox locations.
+
+    Args:
+        network (Cell): the origin ssd infer network without bbox decoder.
+        default_boxes (Tensor): the default_boxes from anchor generator
+        config (dict): ssd config
+    Returns:
+        Tensor, the locations for bbox after decoder representing (y0,x0,y1,x1)
+        Tensor, the prediction labels.
+
+    """
+
+    def __init__(self, network, default_boxes, config):
+        super(SsdInferWithDecoder, self).__init__()
+        self.network = network
+        self.default_boxes = default_boxes
+        self.prior_scaling_xy = config.prior_scaling[0]
+        self.prior_scaling_wh = config.prior_scaling[1]
+
+    def construct(self, x):
+        """Construct SsdInferWithDecoder"""
+        pred_loc, pred_label = self.network(x)
+
+        default_bbox_xy = self.default_boxes[..., :2]
+        default_bbox_wh = self.default_boxes[..., 2:]
+        pred_xy = pred_loc[..., :2] * self.prior_scaling_xy * default_bbox_wh + default_bbox_xy
+        pred_wh = P.Exp()(pred_loc[..., 2:] * self.prior_scaling_wh) * default_bbox_wh
+
+        pred_xy_0 = pred_xy - pred_wh / 2.0
+        pred_xy_1 = pred_xy + pred_wh / 2.0
+        pred_xy = P.Concat(-1)((pred_xy_0, pred_xy_1))
+        pred_xy = P.Maximum()(pred_xy, 0)
+        pred_xy = P.Minimum()(pred_xy, 1)
+        return pred_xy, pred_label
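+
+    # Decoding in scalar form (a restatement of the ops above, not extra behavior):
+    # for a default box with center (cy, cx) and size (h, w) and a regression
+    # output t = (ty, tx, th, tw),
+    #   center = t[:2] * prior_scaling_xy * (h, w) + (cy, cx)
+    #   size   = exp(t[2:] * prior_scaling_wh) * (h, w)
+    # and the corners center -/+ size / 2 are clipped to the [0, 1] range.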
+
+def ssd_resnet34(**kwargs):
+    return SsdResnet34(**kwargs)
diff --git a/research/cv/ssd_resnet34/src/ssd_resnet34.py b/research/cv/ssd_resnet34/src/ssd_resnet34.py
new file mode 100644
index 0000000000000000000000000000000000000000..121c9944c6185a439ae12e34cea2a8d9190e6cc2
--- /dev/null
+++ b/research/cv/ssd_resnet34/src/ssd_resnet34.py
@@ -0,0 +1,264 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""ssd_resnet34"""
+
+from src.resnet34 import resnet34
+import mindspore.common.dtype as mstype
+import mindspore as ms
+import mindspore.nn as nn
+from mindspore import context, Tensor
+from mindspore.context import ParallelMode
+from mindspore.parallel._auto_parallel_context import auto_parallel_context
+from mindspore.communication.management import get_group_size
+import mindspore.ops.operations as P
+import mindspore.ops.functional as F
+import mindspore.ops.composite as C
+
+
+class SSD_ResNet34(nn.Cell):
+    """
+        Build an SSD module that takes a 300x300 image as input
+        and outputs 8732 bounding boxes per class.
+
+        Args:
+            config (dict): The default config of SSD; provides the
+                extras_out_channels used by the additional feature blocks.
+    """
+
+    def __init__(self, config):
+        super(SSD_ResNet34, self).__init__()
+        self.strides = [1, 1, 2, 2, 2, 1]
+        self.module = resnet34()
+        out_size = 38
+        out_channels = config.extras_out_channels
+        self._build_additional_features(out_size, out_channels)
+        # init_net_param()
+
+    def _build_additional_features(self, input_size, input_channels):
+        """
+        Build additional features
+        """
+        idx = 0
+        if input_size == 38:
+            idx = 0
+        elif input_size == 19:
+            idx = 1
+        elif input_size == 10:
+            idx = 2
+
+        self.additional_blocks = []
+
+        if input_size == 38:
+            self.additional_blocks.append(nn.SequentialCell(
+                nn.Conv2d(input_channels[idx], 256, kernel_size=1),
+                nn.ReLU(),
+                nn.Conv2d(256, input_channels[idx + 1], kernel_size=3, pad_mode='pad', padding=1,
+                          stride=self.strides[2]),
+                nn.ReLU(),
+            ))
+            idx += 1
+
+        self.additional_blocks.append(nn.SequentialCell(
+            nn.Conv2d(input_channels[idx], 256, kernel_size=1),
+            nn.ReLU(),
+            nn.Conv2d(256, input_channels[idx + 1], kernel_size=3, pad_mode='pad', padding=1, stride=self.strides[3]),
+            nn.ReLU(),
+        ))
+        idx += 1
+
+        # conv9_1, conv9_2
+        self.additional_blocks.append(nn.SequentialCell(
+            nn.Conv2d(input_channels[idx], 128, kernel_size=1),
+            nn.ReLU(),
+            nn.Conv2d(128, input_channels[idx + 1], kernel_size=3, pad_mode='pad', padding=1, stride=self.strides[4]),
+            nn.ReLU(),
+        ))
+        idx += 1
+
+        # conv10_1, conv10_2
+        self.additional_blocks.append(nn.SequentialCell(
+            nn.Conv2d(input_channels[idx], 128, kernel_size=1),
+            nn.ReLU(),
+            nn.Conv2d(128, input_channels[idx + 1], kernel_size=3, pad_mode='valid', stride=self.strides[5]),
+            nn.ReLU(),
+        ))
+        idx += 1
+
+        # Only necessary in VGG for now
+        if input_size >= 19:
+            # conv11_1, conv11_2
+            self.additional_blocks.append(nn.SequentialCell(
+                nn.Conv2d(input_channels[idx], 128, kernel_size=1),
+                nn.ReLU(),
+                nn.Conv2d(128, input_channels[idx + 1], kernel_size=3, pad_mode='valid'),
+                nn.ReLU(),
+            ))
+
+        self.additional_blocks = nn.CellList(self.additional_blocks)
+
+    def construct(self, x):
+        """
+        Construct SSD_ResNet34
+        """
+        layers = self.module(x)
+        # last result from network goes into additional blocks
+        layer0 = layers[-1]
+        # additional_results = []
+        layer1 = self.additional_blocks[0](layer0)
+        layers.append(layer1)
+        layer2 = self.additional_blocks[1](layer1)
+        layers.append(layer2)
+        layer3 = self.additional_blocks[2](layer2)
+        layers.append(layer3)
+        layer4 = self.additional_blocks[3](layer3)
+        layers.append(layer4)
+        layer5 = self.additional_blocks[4](layer4)
+        layers.append(layer5)
+        return layers
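+
+    # Feature-map sizes for a 300x300 input (a hand-derived sketch; channel counts come
+    # from config.extras_out_channels): the backbone yields 38x38 and the five additional
+    # blocks yield 19x19, 10x10, 5x5, 3x3 and 1x1. Assuming the standard SSD300 default-box
+    # counts (4, 6, 6, 6, 4, 4), the head sees 38*38*4 + 19*19*6 + 10*10*6 + 5*5*6
+    # + 3*3*4 + 1*1*4 = 8732 boxes, matching the docstring above.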
+
+
+class SigmoidFocalClassificationLoss(nn.Cell):
+    """"
+    Sigmoid focal-loss for classification.
+
+    Args:
+        gamma (float): Hyper-parameter to balance the easy and hard examples. Default: 2.0
+        alpha (float): Hyper-parameter to balance the positive and negative example. Default: 0.25
+
+    Returns:
+        Tensor, the focal loss.
+    """
+
+    def __init__(self, gamma=2.0, alpha=0.25):
+        super(SigmoidFocalClassificationLoss, self).__init__()
+        self.sigmoid_cross_entropy = P.SigmoidCrossEntropyWithLogits()
+        self.sigmoid = P.Sigmoid()
+        self.pow = P.Pow()
+        self.onehot = P.OneHot()
+        self.on_value = Tensor(1.0, mstype.float32)
+        self.off_value = Tensor(0.0, mstype.float32)
+        self.gamma = gamma
+        self.alpha = alpha
+
+    def construct(self, logits, label):
+        label = self.onehot(label, F.shape(logits)[-1], self.on_value, self.off_value)
+        sigmoid_cross_entropy = self.sigmoid_cross_entropy(logits, label)
+        sigmoid = self.sigmoid(logits)
+        label = F.cast(label, mstype.float32)
+        p_t = label * sigmoid + (1 - label) * (1 - sigmoid)
+        modulating_factor = self.pow(1 - p_t, self.gamma)
+        alpha_weight_factor = label * self.alpha + (1 - label) * (1 - self.alpha)
+        focal_loss = modulating_factor * alpha_weight_factor * sigmoid_cross_entropy
+        return focal_loss
+
+
+class SSDWithLossCell(nn.Cell):
+    """"
+    Provide SSD training loss through network.
+
+    Args:
+        network (Cell): The training network.
+        config (dict): SSD config.
+
+    Returns:
+        Tensor, the loss of the network.
+    """
+
+    def __init__(self, network, config):
+        super(SSDWithLossCell, self).__init__()
+        self.network = network
+        self.less = P.Less()
+        self.tile = P.Tile()
+        self.reduce_sum = P.ReduceSum()
+        self.expand_dims = P.ExpandDims()
+        self.class_loss = SigmoidFocalClassificationLoss(config.gamma, config.alpha)
+        self.loc_loss = nn.SmoothL1Loss()
+
+    def construct(self, x, gt_loc, gt_label, num_matched_boxes):
+        """Construct SSDWithLossCell"""
+        pred_loc, pred_label = self.network(x)
+        mask = F.cast(self.less(0, gt_label), mstype.float32)
+        num_matched_boxes = self.reduce_sum(F.cast(num_matched_boxes, mstype.float32))
+
+        # Localization Loss
+        mask_loc = self.tile(self.expand_dims(mask, -1), (1, 1, 4))
+        smooth_l1 = self.loc_loss(pred_loc, gt_loc) * mask_loc
+        loss_loc = self.reduce_sum(self.reduce_sum(smooth_l1, -1), -1)
+
+        # Classification Loss
+        loss_cls = self.class_loss(pred_label, gt_label)
+        loss_cls = self.reduce_sum(loss_cls, (1, 2))
+
+        return self.reduce_sum((loss_cls + loss_loc) / num_matched_boxes)
+
+
+grad_scale = C.MultitypeFuncGraph("grad_scale")
+
+
+@grad_scale.register("Tensor", "Tensor")
+def tensor_grad_scale(scale, grad):
+    return grad * P.Reciprocal()(scale)
+
+
+class TrainingWrapper(nn.Cell):
+    """
+    Encapsulation class of SSD network training.
+
+    Append an optimizer to the training network. After that, the construct
+    function can be called to create the backward graph.
+
+    Args:
+        network (Cell): The training network. Note that loss function should have been added.
+        optimizer (Optimizer): Optimizer for updating the weights.
+        sens (Number): The adjust parameter. Default: 1.0.
+        use_global_norm (bool): Whether to apply global norm before the optimizer. Default: False
+    """
+
+    def __init__(self, network, optimizer, sens=1.0, use_global_norm=False):
+        super(TrainingWrapper, self).__init__(auto_prefix=False)
+        self.network = network
+        self.network.set_grad()
+        self.weights = ms.ParameterTuple(network.trainable_params())
+        self.optimizer = optimizer
+        self.grad = C.GradOperation(get_by_list=True, sens_param=True)
+        self.sens = sens
+        self.reducer_flag = False
+        self.grad_reducer = None
+        self.use_global_norm = use_global_norm
+        self.parallel_mode = context.get_auto_parallel_context("parallel_mode")
+        if self.parallel_mode in [ParallelMode.DATA_PARALLEL, ParallelMode.HYBRID_PARALLEL]:
+            self.reducer_flag = True
+        if self.reducer_flag:
+            mean = context.get_auto_parallel_context("gradients_mean")
+            if auto_parallel_context().get_device_num_is_set():
+                degree = context.get_auto_parallel_context("device_num")
+            else:
+                degree = get_group_size()
+            self.grad_reducer = nn.DistributedGradReducer(optimizer.parameters, mean, degree)
+        self.hyper_map = C.HyperMap()
+
+    def construct(self, *args):
+        """Construct TrainingWrapper"""
+        weights = self.weights
+        loss = self.network(*args)
+        sens = P.Fill()(P.DType()(loss), P.Shape()(loss), self.sens)
+        grads = self.grad(self.network, weights)(*args, sens)
+        if self.reducer_flag:
+            # apply grad reducer on grads
+            grads = self.grad_reducer(grads)
+        if self.use_global_norm:
+            grads = self.hyper_map(F.partial(grad_scale, F.scalar_to_array(self.sens)), grads)
+            grads = C.clip_by_global_norm(grads)
+        return F.depend(loss, self.optimizer(grads))
diff --git a/research/cv/ssd_resnet34/train.py b/research/cv/ssd_resnet34/train.py
new file mode 100644
index 0000000000000000000000000000000000000000..a1b7daa2856d036830d5f0f05f45087134b3e508
--- /dev/null
+++ b/research/cv/ssd_resnet34/train.py
@@ -0,0 +1,175 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+"""Train SSD and get checkpoint files."""
+
+import argparse
+import ast
+import os
+import mindspore.nn as nn
+from mindspore import context, Tensor
+from mindspore.communication.management import init, get_rank
+from mindspore.train.callback import CheckpointConfig, ModelCheckpoint, LossMonitor, TimeMonitor
+from mindspore.train import Model
+from mindspore.context import ParallelMode
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+from mindspore.common import set_seed, dtype
+from src.ssd import SSDWithLossCell, TrainingWrapper, ssd_resnet34
+from src.config import config
+from src.dataset import create_ssd_dataset, create_mindrecord
+from src.lr_schedule import get_lr
+from src.init_params import init_net_param, filter_checkpoint_parameter_by_list
+
+
+set_seed(1)
+
+
+def get_args():
+    """
+    Get args
+    """
+    parser = argparse.ArgumentParser(description="SSD_ResNet34 training")
+    parser.add_argument("--data_url", type=str)
+    parser.add_argument("--train_url", type=str)
+    parser.add_argument("--mindrecord_url", type=str)
+    parser.add_argument("--run_platform", type=str, default="Ascend", choices=("Ascend", "GPU", "CPU"),
+                        help="run platform, support Ascend, GPU and CPU.")
+    parser.add_argument("--only_create_dataset", type=ast.literal_eval, default=False,
+                        help="If set it true, only create Mindrecord, default is False.")
+    parser.add_argument("--distribute", type=ast.literal_eval, default=False,
+                        help="Run distribute, default is False.")
+    parser.add_argument("--device_id", type=int, default=1, help="Device id, default is 0.")
+    parser.add_argument("--device_num", type=int, default=1, help="Use device nums, default is 1.")
+    parser.add_argument("--lr", type=float, default=0.05, help="Learning rate, default is 0.05.")
+    parser.add_argument("--mode", type=str, default="sink", help="Run sink mode or not, default is sink.")
+    parser.add_argument("--dataset", type=str, default="coco", help="Dataset, default is coco.")
+    parser.add_argument("--epoch_size", type=int, default=500, help="Epoch size, default is 500.")
+    parser.add_argument("--batch_size", type=int, default=32, help="Batch size, default is 32.")
+    parser.add_argument("--pre_trained", type=str, default=None, help="Pretrained Checkpoint file path.")
+    parser.add_argument("--pre_trained_epoch_size", type=int, default=0, help="Pretrained epoch size.")
+    parser.add_argument("--save_checkpoint_epochs", type=int, default=10, help="Save checkpoint epochs, default is 10.")
+    parser.add_argument("--loss_scale", type=int, default=1024, help="Loss scale, default is 1024.")
+    parser.add_argument("--filter_weight", type=ast.literal_eval, default=False,
+                        help="Filter head weight parameters, default is False.")
+    parser.add_argument('--freeze_layer', type=str, default="none", choices=["none", "backbone"],
+                        help="Freeze the weights of the network; supports freezing the backbone's "
+                             "weights. Default is no freezing.")
+    args_opt = parser.parse_args()
+    return args_opt
+
+
+def ssd_model_build(args_opt):
+    """
+    Build ssd_resnet34.
+    """
+    if config.model == "ssd_resnet34":
+        ssd = ssd_resnet34(config=config)
+        init_net_param(ssd)
+        if config.feature_extractor_base_param != "":
+            param_dict = load_checkpoint(config.feature_extractor_base_param)
+            for x in list(param_dict.keys()):
+                param_dict["network.feature_extractor.resnet." + x] = param_dict[x]
+                del param_dict[x]
+            load_param_into_net(ssd.feature_extractor.resnet, param_dict)
+    else:
+        raise ValueError(f'config.model: {config.model} is not supported')
+    return ssd
+
+
+def main():
+    """
+    Execute training process!
+    """
+    args_opt = get_args()
+    rank = 0
+    device_num = 1
+    config.coco_root = args_opt.data_url
+    config.mindrecord_dir = args_opt.mindrecord_url
+    local_train_url = args_opt.train_url
+
+    if args_opt.run_platform == "CPU":
+        context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
+    else:
+        context.set_context(mode=context.GRAPH_MODE, device_target=args_opt.run_platform)
+
+    if args_opt.distribute:
+        init()
+        device_num = int(os.getenv("RANK_SIZE"))
+        context.reset_auto_parallel_context()
+        context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL, gradients_mean=True,
+                                          device_num=device_num)
+        context.set_auto_parallel_context(all_reduce_fusion_config=[29, 58, 89])
+        rank = get_rank()
+
+    mindrecord_file = create_mindrecord(args_opt.dataset, "ssd.mindrecord", True)
+
+    if args_opt.only_create_dataset:
+        return
+
+    loss_scale = float(args_opt.loss_scale)
+    if args_opt.run_platform == "CPU":
+        loss_scale = 1.0
+
+    # When creating the MindDataset, use the first mindrecord file, such as ssd.mindrecord0.
+    use_multiprocessing = (args_opt.run_platform != "CPU")
+    dataset = create_ssd_dataset(mindrecord_file, repeat_num=1, batch_size=args_opt.batch_size,
+                                 device_num=device_num, rank=rank, use_multiprocessing=use_multiprocessing)
+    dataset_size = dataset.get_dataset_size()
+    print(f"Create dataset done! dataset size is {dataset_size}")
+    ssd = ssd_model_build(args_opt)
+
+    if ("use_float16" in config and config.use_float16) or args_opt.run_platform == "GPU":
+        ssd.to_float(dtype.float16)
+    net = SSDWithLossCell(ssd, config)
+
+    # checkpoint
+    ckpt_config = CheckpointConfig(save_checkpoint_steps=dataset_size * args_opt.save_checkpoint_epochs,
+                                   keep_checkpoint_max=20)
+    ckpoint_cb = ModelCheckpoint(prefix="ssd", directory=local_train_url + "/card{}".format(rank), config=ckpt_config)
+
+    if args_opt.pre_trained:
+        local_pre_train = args_opt.pre_trained
+        param_dict = load_checkpoint(local_pre_train)
+        if args_opt.filter_weight:
+            filter_checkpoint_parameter_by_list(param_dict, config.checkpoint_filter_list)
+        load_param_into_net(net, param_dict, True)
+
+    lr = Tensor(get_lr(global_step=args_opt.pre_trained_epoch_size * dataset_size,
+                       lr_init=config.lr_init, lr_end=config.lr_end_rate * args_opt.lr, lr_max=args_opt.lr,
+                       warmup_epochs=config.warmup_epochs,
+                       total_epochs=args_opt.epoch_size,
+                       steps_per_epoch=dataset_size))
+
+    if "use_global_norm" in config and config.use_global_norm:
+        opt = nn.Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), lr,
+                          config.momentum, config.weight_decay, 1.0)
+        net = TrainingWrapper(net, opt, loss_scale, True)
+    else:
+        opt = nn.Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), lr,
+                          config.momentum, config.weight_decay, loss_scale)
+        net = TrainingWrapper(net, opt, loss_scale)
+
+    callback = [TimeMonitor(data_size=dataset_size), LossMonitor(), ckpoint_cb]
+    model = Model(net)
+
+    dataset_sink_mode = False
+    if args_opt.mode == "sink" and args_opt.run_platform != "CPU":
+        print("In sink mode, one epoch return a loss.")
+        dataset_sink_mode = True
+    print("Start train SSD, the first epoch will be slower because of the graph compilation.")
+    model.train(args_opt.epoch_size, dataset, callbacks=callback, dataset_sink_mode=dataset_sink_mode)
+
+if __name__ == '__main__':
+    main()