diff --git a/research/cv/CLIFF/README.md b/research/cv/CLIFF/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..ececc1093e10e053b703b940b4aa356f405b64fc
--- /dev/null
+++ b/research/cv/CLIFF/README.md
@@ -0,0 +1,82 @@
+
+# Contents
+
+- [Introduction](#introduction)
+- [Dataset](#dataset)
+- [Requirements](#requirements)
+- [Quick Start](#quick-start)
+- [ModelZoo Homepage](#modelzoo-homepage)
+
+## Introduction
+
+<img src="assets/teaser.gif" width="100%">
+
+*(This test video is from the 3DPW test set, processed frame by frame without temporal smoothing.)*
+
+This repo contains the CLIFF demo code (implemented in MindSpore) for the following paper.
+
+> CLIFF: Carrying Location Information in Full Frames into Human Pose and Shape Estimation. \
+> Zhihao Li, Jianzhuang Liu, Zhensong Zhang, Songcen Xu, and Youliang Yan ⋆ \
+> ECCV 2022 Oral
+
+<img src="assets/arch.png" width="100%">
+
+## Dataset
+
+Not applicable; this repository provides only the demo (inference) code, so no training dataset is needed.
+
+## Requirements
+
+```bash
+conda create -n cliff python=3.9
+conda activate cliff
+pip install -r requirements.txt
+```
+
+Download the pretrained checkpoints and the test sample to run the demo.
+[[Baidu Pan](https://pan.baidu.com/s/15v0jnoyEpKIXWhh2AjAZeQ?pwd=7777)]
+[[Google Drive](https://drive.google.com/drive/folders/1_d12Q8Yj13TEvB_4vopAbMdwJ1-KVR0R?usp=sharing)]
+
+Finally, place these files according to the directory structure below:
+
+```text
+${ROOT}
+|-- ckpt
+    |-- cliff-hr48-PA43.0_MJE69.0_MVE81.2_3dpw.ckpt
+    |-- cliff-res50-PA45.7_MJE72.0_MVE85.3_3dpw.ckpt
+|-- data
+    |-- im07937.png
+    |-- smpl_mean_params.npz
+```
+
+## Quick Start
+
+```bash
+python demo.py --input_path PATH --ckpt CKPT
+```
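+
+For example, to run the ResNet-50 demo on the bundled test image (these match the script defaults):
+
+```bash
+python demo.py --input_path data/im07937.png --ckpt ckpt/cliff-res50-PA45.7_MJE72.0_MVE85.3_3dpw.ckpt
+```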
+
+<img src="assets/im08036/im08036.png" width="24%">
+<img src="assets/im08036/im08036_bbox.jpg" width="24%">
+<img src="assets/im08036/im08036_front_view_cliff_hr48.jpg" width="24%">
+<img src="assets/im08036/im08036_side_view_cliff_hr48.jpg" width="24%">
+
+<img src="assets/im00492/im00492.png" width="24%">
+<img src="assets/im00492/im00492_bbox.jpg" width="24%">
+<img src="assets/im00492/im00492_front_view_cliff_hr48.jpg" width="24%">
+<img src="assets/im00492/im00492_side_view_cliff_hr48.jpg" width="24%">
+
+One can change the demo options on the command line; see the argument descriptions at the bottom of `demo.py`.
+
+## ModelZoo Homepage
+
+Please check the official [homepage](https://gitee.com/mindspore/models).
\ No newline at end of file
diff --git a/research/cv/CLIFF/README_CN.md b/research/cv/CLIFF/README_CN.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b8c7133801aba1e09871ad2dacc7adf4c0ee721
--- /dev/null
+++ b/research/cv/CLIFF/README_CN.md
@@ -0,0 +1,84 @@
+
+# 鐩綍
+
+- [Model Introduction](#model-introduction)
+- [Dataset](#dataset)
+- [Requirements](#requirements)
+- [Quick Start](#quick-start)
+- [ModelZoo Homepage](#modelzoo-homepage)
+
+## Model Introduction
+
+<img src="assets/teaser.gif" width="100%">
+
+*(璇ユ祴璇曡棰戞潵鑷�3DPW鐨勬祴璇曢泦锛屽鐞嗘椂鏄竴甯т竴甯у湴澶勭悊鐨勶紝骞舵病鏈夊姞涓婃椂鍩熷钩婊�.)*
+
+CLIFF锛圗CCV 2022 Oral锛夋槸涓€绉嶅熀浜庡崟鐩浘鍍忕殑浜轰綋鍔ㄤ綔鎹曟崏绠楁硶锛屽湪澶氫釜鍏紑鏁版嵁闆嗕笂鍙栧緱浜嗕紭寮傜殑鏁堟灉銆�
+
+> CLIFF: Carrying Location Information in Full Frames into Human Pose and Shape Estimation. \
+> Zhihao Li, Jianzhuang Liu, Zhensong Zhang, Songcen Xu, and Youliang Yan ⋆ \
+> ECCV 2022 Oral
+
+<img src="assets/arch.png" width="100%">
+
+## Dataset
+
+Not applicable.
+
+## Requirements
+
+```bash
+conda create -n cliff python=3.9
+conda activate cliff
+pip install -r requirements.txt
+```
+
+涓嬭浇棰勮缁冩ā鍨嬪拰娴嬭瘯鏍蜂緥锛屼互杩愯鎺ㄧ悊浠g爜銆�
+[[鐧惧害缃戠洏](https://pan.baidu.com/s/15v0jnoyEpKIXWhh2AjAZeQ?pwd=7777)]
+[[Google Drive](https://drive.google.com/drive/folders/1_d12Q8Yj13TEvB_4vopAbMdwJ1-KVR0R?usp=sharing)]
+
+璇锋妸棰勮缁冩ā鍨嬫斁鍦╜ckpt`鐩綍涓嬶紝娴嬭瘯鏍蜂緥鏀惧湪`data`鐩綍涓嬶紝褰㈡垚濡備笅鐨勭洰褰曠粨鏋勶細
+
+```text
+${ROOT}
+|-- ckpt
+    |-- cliff-hr48-PA43.0_MJE69.0_MVE81.2_3dpw.ckpt
+    |-- cliff-res50-PA45.7_MJE72.0_MVE85.3_3dpw.ckpt
+|-- data
+    |-- im07937.png
+    |-- smpl_mean_params.npz
+```
+
+## 蹇€熷叆闂�
+
+杩愯鑴氭湰`demo.py`鍗冲彲鎺ㄧ悊銆�
+
+```bash
+python demo.py --input_path PATH --ckpt CKPT
+```
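+
+For example, to run the ResNet-50 demo on the bundled test image (these match the script defaults):
+
+```bash
+python demo.py --input_path data/im07937.png --ckpt ckpt/cliff-res50-PA45.7_MJE72.0_MVE85.3_3dpw.ckpt
+```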
+
+<p float="left">
+    <img src="assets/im08036/im08036.png" width="24%">
+    <img src="assets/im08036/im08036_bbox.jpg" width="24%">
+    <img src="assets/im08036/im08036_front_view_cliff_hr48.jpg" width="24%">
+    <img src="assets/im08036/im08036_side_view_cliff_hr48.jpg" width="24%">
+</p>
+
+<p float="left">
+    <img src="assets/im00492/im00492.png" width="24%">
+    <img src="assets/im00492/im00492_bbox.jpg" width="24%">
+    <img src="assets/im00492/im00492_front_view_cliff_hr48.jpg" width="24%">
+    <img src="assets/im00492/im00492_side_view_cliff_hr48.jpg" width="24%">
+</p>
+
+demo鐨勭浉鍏冲弬鏁板彲浠ヤ慨鏀癸紝鍏充簬杩欎簺鍙傛暟鐨勮鏄庤鐪媊demo.py`鏂囦欢鐨勪笅鏂广€�
+
+## ModelZoo Homepage
+
+Please visit the official [homepage](https://gitee.com/mindspore/models).
\ No newline at end of file
diff --git a/research/cv/CLIFF/assets/arch.png b/research/cv/CLIFF/assets/arch.png
new file mode 100644
index 0000000000000000000000000000000000000000..c86d9d04e2783033289f825fb48fb161b096167a
Binary files /dev/null and b/research/cv/CLIFF/assets/arch.png differ
diff --git a/research/cv/CLIFF/assets/im00492/im00492.png b/research/cv/CLIFF/assets/im00492/im00492.png
new file mode 100644
index 0000000000000000000000000000000000000000..eef536c8f6f2b2ae31c52c3a4ae5616573b5a92f
Binary files /dev/null and b/research/cv/CLIFF/assets/im00492/im00492.png differ
diff --git a/research/cv/CLIFF/assets/im00492/im00492_bbox.jpg b/research/cv/CLIFF/assets/im00492/im00492_bbox.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..eebd856ad0b1b65e13f0ec30f95687c4e3925395
Binary files /dev/null and b/research/cv/CLIFF/assets/im00492/im00492_bbox.jpg differ
diff --git a/research/cv/CLIFF/assets/im00492/im00492_front_view_cliff_hr48.jpg b/research/cv/CLIFF/assets/im00492/im00492_front_view_cliff_hr48.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..c9da7faf21ae22c445b70217e5e735cbdc9266b2
Binary files /dev/null and b/research/cv/CLIFF/assets/im00492/im00492_front_view_cliff_hr48.jpg differ
diff --git a/research/cv/CLIFF/assets/im00492/im00492_side_view_cliff_hr48.jpg b/research/cv/CLIFF/assets/im00492/im00492_side_view_cliff_hr48.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b189522181aed3e4cb1c1b6f32c7c8348fee6da0
Binary files /dev/null and b/research/cv/CLIFF/assets/im00492/im00492_side_view_cliff_hr48.jpg differ
diff --git a/research/cv/CLIFF/assets/im08036/im08036.png b/research/cv/CLIFF/assets/im08036/im08036.png
new file mode 100644
index 0000000000000000000000000000000000000000..778cfa3a2e42e5b8abb08044d486154660268233
Binary files /dev/null and b/research/cv/CLIFF/assets/im08036/im08036.png differ
diff --git a/research/cv/CLIFF/assets/im08036/im08036_bbox.jpg b/research/cv/CLIFF/assets/im08036/im08036_bbox.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..a437c795f2ae01aae91eb619c135dc30decbf721
Binary files /dev/null and b/research/cv/CLIFF/assets/im08036/im08036_bbox.jpg differ
diff --git a/research/cv/CLIFF/assets/im08036/im08036_front_view_cliff_hr48.jpg b/research/cv/CLIFF/assets/im08036/im08036_front_view_cliff_hr48.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..6dba0093ed2cea39b1d2458bfcb26c10ee32f92c
Binary files /dev/null and b/research/cv/CLIFF/assets/im08036/im08036_front_view_cliff_hr48.jpg differ
diff --git a/research/cv/CLIFF/assets/im08036/im08036_side_view_cliff_hr48.jpg b/research/cv/CLIFF/assets/im08036/im08036_side_view_cliff_hr48.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..4e035ebf79a5a3a4ed2d7a861026f25722b0382c
Binary files /dev/null and b/research/cv/CLIFF/assets/im08036/im08036_side_view_cliff_hr48.jpg differ
diff --git a/research/cv/CLIFF/assets/teaser.gif b/research/cv/CLIFF/assets/teaser.gif
new file mode 100644
index 0000000000000000000000000000000000000000..2d0319aef936a397a336304838a7a67486382c4f
Binary files /dev/null and b/research/cv/CLIFF/assets/teaser.gif differ
diff --git a/research/cv/CLIFF/common/constants.py b/research/cv/CLIFF/common/constants.py
new file mode 100644
index 0000000000000000000000000000000000000000..fcdd17b6e5248f64f61a2cb5c48435defe9c060d
--- /dev/null
+++ b/research/cv/CLIFF/common/constants.py
@@ -0,0 +1,27 @@
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+from os.path import join
+
+curr_dir = os.path.dirname(os.path.abspath(__file__))
+SMPL_MEAN_PARAMS = join(curr_dir, '../data/smpl_mean_params.npz')
+
+CROP_IMG_HEIGHT = 256
+CROP_IMG_WIDTH = 192
+CROP_ASPECT_RATIO = CROP_IMG_HEIGHT / float(CROP_IMG_WIDTH)
+
+# Mean and standard deviation for normalizing the input image (ImageNet statistics)
+IMG_NORM_MEAN = [0.485, 0.456, 0.406]
+IMG_NORM_STD = [0.229, 0.224, 0.225]
diff --git a/research/cv/CLIFF/common/imutils.py b/research/cv/CLIFF/common/imutils.py
new file mode 100644
index 0000000000000000000000000000000000000000..a62d851fff691802ff870690fb6f52fd52fb9e7d
--- /dev/null
+++ b/research/cv/CLIFF/common/imutils.py
@@ -0,0 +1,123 @@
+# Copyright (c) 2019, University of Pennsylvania, Max Planck Institute for Intelligent Systems
+# This script is borrowed and extended from SPIN
+
+import cv2
+import numpy as np
+
+from common import constants
+
+
+def get_transform(center, scale, res, rot=0):
+    """Generate transformation matrix."""
+    # res: (height, width), (rows, cols)
+    crop_aspect_ratio = res[0] / float(res[1])
+    h = 200 * scale
+    w = h / crop_aspect_ratio
+    t = np.zeros((3, 3))
+    t[0, 0] = float(res[1]) / w
+    t[1, 1] = float(res[0]) / h
+    t[0, 2] = res[1] * (-float(center[0]) / w + .5)
+    t[1, 2] = res[0] * (-float(center[1]) / h + .5)
+    t[2, 2] = 1
+    if rot != 0:
+        rot = -rot  # To match direction of rotation from cropping
+        rot_mat = np.zeros((3, 3))
+        rot_rad = rot * np.pi / 180
+        sn, cs = np.sin(rot_rad), np.cos(rot_rad)
+        rot_mat[0, :2] = [cs, -sn]
+        rot_mat[1, :2] = [sn, cs]
+        rot_mat[2, 2] = 1
+        # Need to rotate around center
+        t_mat = np.eye(3)
+        t_mat[0, 2] = -res[1] / 2
+        t_mat[1, 2] = -res[0] / 2
+        t_inv = t_mat.copy()
+        t_inv[:2, 2] *= -1
+        t = np.dot(t_inv, np.dot(rot_mat, np.dot(t_mat, t)))
+    return t
+
+
+def transform(pt, center, scale, res, invert=0, rot=0):
+    """Transform pixel location to different reference."""
+    t = get_transform(center, scale, res, rot=rot)
+    if invert:
+        t = np.linalg.inv(t)
+    new_pt = np.array([pt[0] - 1, pt[1] - 1, 1.]).T
+    new_pt = np.dot(t, new_pt)
+    return np.array([round(new_pt[0]), round(new_pt[1])], dtype=int) + 1
+
+
+def crop(img, center, scale, res):
+    """
+    Crop image according to the supplied bounding box.
+    res: [rows, cols]
+    """
+    # Upper left point
+    ul = np.array(transform([1, 1], center, scale, res, invert=1)) - 1
+    # Bottom right point
+    br = np.array(transform([res[1] + 1, res[0] + 1], center, scale, res, invert=1)) - 1
+
+    new_shape = [br[1] - ul[1], br[0] - ul[0]]
+    if len(img.shape) > 2:
+        new_shape += [img.shape[2]]
+    new_img = np.zeros(new_shape, dtype=np.float32)
+
+    # Range to fill new array
+    new_x = max(0, -ul[0]), min(br[0], len(img[0])) - ul[0]
+    new_y = max(0, -ul[1]), min(br[1], len(img)) - ul[1]
+    # Range to sample from original image
+    old_x = max(0, ul[0]), min(len(img[0]), br[0])
+    old_y = max(0, ul[1]), min(len(img), br[1])
+    new_img[new_y[0]:new_y[1], new_x[0]:new_x[1]] = img[old_y[0]:old_y[1], old_x[0]:old_x[1]]
+
+    new_img = cv2.resize(new_img, (res[1], res[0]))  # (cols, rows)
+
+    return new_img
+
+
+def bbox_from_detector(bbox, rescale=1.1):
+    """
+    Get the person center and scale from a detector bounding box.
+    The expected format is [min_x, min_y, max_x, max_y].
+    """
+    # center
+    center_x = (bbox[0] + bbox[2]) / 2.0
+    center_y = (bbox[1] + bbox[3]) / 2.0
+    center = np.array([center_x, center_y])
+
+    # scale
+    bbox_w = bbox[2] - bbox[0]
+    bbox_h = bbox[3] - bbox[1]
+    bbox_size = max(bbox_w * constants.CROP_ASPECT_RATIO, bbox_h)
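+    # "scale" follows the SPIN convention: box size expressed in units of 200 pixels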
+    scale = bbox_size / 200.0
+    # adjust bounding box tightness
+    scale *= rescale
+    return center, scale
+
+
+def process_image(orig_img_rgb, bbox,
+                  crop_height=constants.CROP_IMG_HEIGHT,
+                  crop_width=constants.CROP_IMG_WIDTH):
+    """
+    Preprocess the image and crop it according to the bounding box.
+    If a bounding box is given, use it to crop the image;
+    otherwise, assume the person is centered in the image and derive the crop from the image size.
+    """
+    if bbox is not None:
+        center, scale = bbox_from_detector(bbox)
+    else:
+        # Assume that the person is centered in the image
+        height = orig_img_rgb.shape[0]
+        width = orig_img_rgb.shape[1]
+        center = np.array([width // 2, height // 2])
+        scale = max(height, width * crop_height / float(crop_width)) / 200.
+
+    img = crop(orig_img_rgb, center, scale, (crop_height, crop_width))
+    img = img / 255.
+    mean = np.array(constants.IMG_NORM_MEAN, dtype=np.float32)
+    std = np.array(constants.IMG_NORM_STD, dtype=np.float32)
+    norm_img = (img - mean) / std
+    norm_img = np.transpose(norm_img, (2, 0, 1))
+
+    return norm_img, center, scale
diff --git a/research/cv/CLIFF/demo.py b/research/cv/CLIFF/demo.py
new file mode 100644
index 0000000000000000000000000000000000000000..91dfb7fbf529d39d1e20767d6e40799d2390bfcd
--- /dev/null
+++ b/research/cv/CLIFF/demo.py
@@ -0,0 +1,73 @@
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import cv2
+import mindspore
+from mindspore import Tensor
+import numpy as np
+
+from models.cliff_res50 import MindSporeModel as cliff_res50
+from common.imutils import process_image
+from common import constants
+
+
+def main(args):
+    # load the model
+    print("ckpt:", args.ckpt)
+    cliff = cliff_res50()
+    param_dict = mindspore.load_checkpoint(args.ckpt)
+    mindspore.load_param_into_net(cliff, param_dict)
+
+    # load and pre-process the image
+    print("input_path:", args.input_path)
+    img_bgr = cv2.imread(args.input_path)
+    img_rgb = img_bgr[:, :, ::-1]
+    norm_img, center, scale = process_image(img_rgb, bbox=None)
+    norm_img = norm_img[np.newaxis, :, :, :]
+
+    # calculate the bbox info
+    cx, cy, b = center[0], center[1], scale * 200
+    img_h, img_w, _ = img_rgb.shape
+    focal_length = (img_w * img_w + img_h * img_h) ** 0.5  # image diagonal as focal length, i.e. a ~55 degree diagonal FoV
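+    # CLIFF bbox information: box-center offset from the image center plus box size, normalized by the focal length below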
+    bbox_info = np.array([cx - img_w / 2., cy - img_h / 2., b], dtype=np.float32)
+    bbox_info = bbox_info[np.newaxis, :]
+    bbox_info[:, :2] = bbox_info[:, :2] / focal_length * 2.8  # [-1, 1]
+    bbox_info[:, 2] = (bbox_info[:, 2] - 0.24 * focal_length) / (0.06 * focal_length)  # [-1, 1]
+
+    # load the initial parameter
+    mean_params = np.load(constants.SMPL_MEAN_PARAMS)
+    init_pose = mean_params['pose'][np.newaxis, :].astype('float32')
+    init_shape = mean_params['shape'][np.newaxis, :].astype('float32')
+    init_cam = mean_params['cam'][np.newaxis, :].astype('float32')
+
+    # feed-forward
+    pred_rotmat_6d, pred_betas, pred_cam_crop = cliff(Tensor(norm_img), Tensor(bbox_info),
+                                                      Tensor(init_pose), Tensor(init_shape), Tensor(init_cam))
+    print("pred_rotmat_6d", pred_rotmat_6d)
+    print("pred_betas", pred_betas)
+    print("pred_cam_crop", pred_cam_crop)
+    print("Inference finished successfully!")
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser()
+
+    parser.add_argument('--input_path', default='data/im07937.png', help='path to the input data')
+    parser.add_argument('--ckpt', default="ckpt/cliff-res50-PA45.7_MJE72.0_MVE85.3_3dpw.ckpt",
+                        help='path to the pretrained checkpoint')
+
+    arguments = parser.parse_args()
+    main(arguments)
diff --git a/research/cv/CLIFF/models/cliff_res50.py b/research/cv/CLIFF/models/cliff_res50.py
new file mode 100644
index 0000000000000000000000000000000000000000..247a2e3fb5253601d3054955ee1d229a02a93541
--- /dev/null
+++ b/research/cv/CLIFF/models/cliff_res50.py
@@ -0,0 +1,470 @@
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import mindspore.ops as P
+from mindspore import nn
+
+
+class Module9(nn.Cell):
+
+    def __init__(self, conv2d_0_in_channels, conv2d_0_out_channels, conv2d_0_kernel_size, conv2d_0_stride,
+                 conv2d_0_padding, conv2d_0_pad_mode):
+        super(Module9, self).__init__()
+        self.conv2d_0 = nn.Conv2d(in_channels=conv2d_0_in_channels,
+                                  out_channels=conv2d_0_out_channels,
+                                  kernel_size=conv2d_0_kernel_size,
+                                  stride=conv2d_0_stride,
+                                  padding=conv2d_0_padding,
+                                  pad_mode=conv2d_0_pad_mode,
+                                  dilation=(1, 1),
+                                  group=1,
+                                  has_bias=True)
+        self.relu_1 = nn.ReLU()
+
+    def construct(self, x):
+        opt_conv2d_0 = self.conv2d_0(x)
+        opt_relu_1 = self.relu_1(opt_conv2d_0)
+        return opt_relu_1
+
+
+class Module18(nn.Cell):
+
+    def __init__(self, conv2d_0_in_channels, conv2d_0_out_channels, module9_0_conv2d_0_in_channels,
+                 module9_0_conv2d_0_out_channels, module9_0_conv2d_0_kernel_size, module9_0_conv2d_0_stride,
+                 module9_0_conv2d_0_padding, module9_0_conv2d_0_pad_mode, module9_1_conv2d_0_in_channels,
+                 module9_1_conv2d_0_out_channels, module9_1_conv2d_0_kernel_size, module9_1_conv2d_0_stride,
+                 module9_1_conv2d_0_padding, module9_1_conv2d_0_pad_mode):
+        super(Module18, self).__init__()
+        self.module9_0 = Module9(conv2d_0_in_channels=module9_0_conv2d_0_in_channels,
+                                 conv2d_0_out_channels=module9_0_conv2d_0_out_channels,
+                                 conv2d_0_kernel_size=module9_0_conv2d_0_kernel_size,
+                                 conv2d_0_stride=module9_0_conv2d_0_stride,
+                                 conv2d_0_padding=module9_0_conv2d_0_padding,
+                                 conv2d_0_pad_mode=module9_0_conv2d_0_pad_mode)
+        self.module9_1 = Module9(conv2d_0_in_channels=module9_1_conv2d_0_in_channels,
+                                 conv2d_0_out_channels=module9_1_conv2d_0_out_channels,
+                                 conv2d_0_kernel_size=module9_1_conv2d_0_kernel_size,
+                                 conv2d_0_stride=module9_1_conv2d_0_stride,
+                                 conv2d_0_padding=module9_1_conv2d_0_padding,
+                                 conv2d_0_pad_mode=module9_1_conv2d_0_pad_mode)
+        self.conv2d_0 = nn.Conv2d(in_channels=conv2d_0_in_channels,
+                                  out_channels=conv2d_0_out_channels,
+                                  kernel_size=(1, 1),
+                                  stride=(1, 1),
+                                  padding=0,
+                                  pad_mode="valid",
+                                  dilation=(1, 1),
+                                  group=1,
+                                  has_bias=True)
+
+    def construct(self, x):
+        module9_0_opt = self.module9_0(x)
+        module9_1_opt = self.module9_1(module9_0_opt)
+        opt_conv2d_0 = self.conv2d_0(module9_1_opt)
+        return opt_conv2d_0
+
+
+class Module0(nn.Cell):
+
+    def __init__(self, conv2d_0_in_channels, conv2d_0_out_channels, conv2d_0_stride, conv2d_3_in_channels,
+                 conv2d_3_out_channels, conv2d_5_in_channels, conv2d_5_out_channels, conv2d_7_in_channels,
+                 conv2d_7_out_channels):
+        super(Module0, self).__init__()
+        self.conv2d_0 = nn.Conv2d(in_channels=conv2d_0_in_channels,
+                                  out_channels=conv2d_0_out_channels,
+                                  kernel_size=(1, 1),
+                                  stride=conv2d_0_stride,
+                                  padding=0,
+                                  pad_mode="valid",
+                                  dilation=(1, 1),
+                                  group=1,
+                                  has_bias=True)
+        self.relu_2 = nn.ReLU()
+        self.conv2d_3 = nn.Conv2d(in_channels=conv2d_3_in_channels,
+                                  out_channels=conv2d_3_out_channels,
+                                  kernel_size=(1, 1),
+                                  stride=(1, 1),
+                                  padding=0,
+                                  pad_mode="valid",
+                                  dilation=(1, 1),
+                                  group=1,
+                                  has_bias=True)
+        self.relu_4 = nn.ReLU()
+        self.conv2d_5 = nn.Conv2d(in_channels=conv2d_5_in_channels,
+                                  out_channels=conv2d_5_out_channels,
+                                  kernel_size=(3, 3),
+                                  stride=(1, 1),
+                                  padding=(1, 1, 1, 1),
+                                  pad_mode="pad",
+                                  dilation=(1, 1),
+                                  group=1,
+                                  has_bias=True)
+        self.relu_6 = nn.ReLU()
+        self.conv2d_7 = nn.Conv2d(in_channels=conv2d_7_in_channels,
+                                  out_channels=conv2d_7_out_channels,
+                                  kernel_size=(1, 1),
+                                  stride=(1, 1),
+                                  padding=0,
+                                  pad_mode="valid",
+                                  dilation=(1, 1),
+                                  group=1,
+                                  has_bias=True)
+        self.relu_9 = nn.ReLU()
+
+    def construct(self, x, x0):
+        opt_conv2d_0 = self.conv2d_0(x)
+        opt_add_1 = P.Add()(x0, opt_conv2d_0)
+        opt_relu_2 = self.relu_2(opt_add_1)
+        opt_conv2d_3 = self.conv2d_3(opt_relu_2)
+        opt_relu_4 = self.relu_4(opt_conv2d_3)
+        opt_conv2d_5 = self.conv2d_5(opt_relu_4)
+        opt_relu_6 = self.relu_6(opt_conv2d_5)
+        opt_conv2d_7 = self.conv2d_7(opt_relu_6)
+        opt_add_8 = P.Add()(opt_conv2d_7, opt_relu_2)
+        opt_relu_9 = self.relu_9(opt_add_8)
+        return opt_relu_9
+
+
+class Module25(nn.Cell):
+
+    def __init__(self, conv2d_0_in_channels, conv2d_0_out_channels, module9_0_conv2d_0_in_channels,
+                 module9_0_conv2d_0_out_channels, module9_0_conv2d_0_kernel_size, module9_0_conv2d_0_stride,
+                 module9_0_conv2d_0_padding, module9_0_conv2d_0_pad_mode, module9_1_conv2d_0_in_channels,
+                 module9_1_conv2d_0_out_channels, module9_1_conv2d_0_kernel_size, module9_1_conv2d_0_stride,
+                 module9_1_conv2d_0_padding, module9_1_conv2d_0_pad_mode):
+        super(Module25, self).__init__()
+        self.module9_0 = Module9(conv2d_0_in_channels=module9_0_conv2d_0_in_channels,
+                                 conv2d_0_out_channels=module9_0_conv2d_0_out_channels,
+                                 conv2d_0_kernel_size=module9_0_conv2d_0_kernel_size,
+                                 conv2d_0_stride=module9_0_conv2d_0_stride,
+                                 conv2d_0_padding=module9_0_conv2d_0_padding,
+                                 conv2d_0_pad_mode=module9_0_conv2d_0_pad_mode)
+        self.module9_1 = Module9(conv2d_0_in_channels=module9_1_conv2d_0_in_channels,
+                                 conv2d_0_out_channels=module9_1_conv2d_0_out_channels,
+                                 conv2d_0_kernel_size=module9_1_conv2d_0_kernel_size,
+                                 conv2d_0_stride=module9_1_conv2d_0_stride,
+                                 conv2d_0_padding=module9_1_conv2d_0_padding,
+                                 conv2d_0_pad_mode=module9_1_conv2d_0_pad_mode)
+        self.conv2d_0 = nn.Conv2d(in_channels=conv2d_0_in_channels,
+                                  out_channels=conv2d_0_out_channels,
+                                  kernel_size=(1, 1),
+                                  stride=(1, 1),
+                                  padding=0,
+                                  pad_mode="valid",
+                                  dilation=(1, 1),
+                                  group=1,
+                                  has_bias=True)
+        self.relu_2 = nn.ReLU()
+
+    def construct(self, x):
+        module9_0_opt = self.module9_0(x)
+        module9_1_opt = self.module9_1(module9_0_opt)
+        opt_conv2d_0 = self.conv2d_0(module9_1_opt)
+        opt_add_1 = P.Add()(opt_conv2d_0, x)
+        opt_relu_2 = self.relu_2(opt_add_1)
+        return opt_relu_2
+
+
+class Module36(nn.Cell):
+
+    def __init__(self, conv2d_0_in_channels, conv2d_0_out_channels, conv2d_2_in_channels, conv2d_2_out_channels,
+                 conv2d_4_in_channels, conv2d_4_out_channels, module0_0_conv2d_0_in_channels,
+                 module0_0_conv2d_0_out_channels, module0_0_conv2d_0_stride, module0_0_conv2d_3_in_channels,
+                 module0_0_conv2d_3_out_channels, module0_0_conv2d_5_in_channels, module0_0_conv2d_5_out_channels,
+                 module0_0_conv2d_7_in_channels, module0_0_conv2d_7_out_channels, module9_0_conv2d_0_in_channels,
+                 module9_0_conv2d_0_out_channels, module9_0_conv2d_0_kernel_size, module9_0_conv2d_0_stride,
+                 module9_0_conv2d_0_padding, module9_0_conv2d_0_pad_mode, module9_1_conv2d_0_in_channels,
+                 module9_1_conv2d_0_out_channels, module9_1_conv2d_0_kernel_size, module9_1_conv2d_0_stride,
+                 module9_1_conv2d_0_padding, module9_1_conv2d_0_pad_mode, module0_1_conv2d_0_in_channels,
+                 module0_1_conv2d_0_out_channels, module0_1_conv2d_0_stride, module0_1_conv2d_3_in_channels,
+                 module0_1_conv2d_3_out_channels, module0_1_conv2d_5_in_channels, module0_1_conv2d_5_out_channels,
+                 module0_1_conv2d_7_in_channels, module0_1_conv2d_7_out_channels, module0_2_conv2d_0_in_channels,
+                 module0_2_conv2d_0_out_channels, module0_2_conv2d_0_stride, module0_2_conv2d_3_in_channels,
+                 module0_2_conv2d_3_out_channels, module0_2_conv2d_5_in_channels, module0_2_conv2d_5_out_channels,
+                 module0_2_conv2d_7_in_channels, module0_2_conv2d_7_out_channels):
+        super(Module36, self).__init__()
+        self.module0_0 = Module0(conv2d_0_in_channels=module0_0_conv2d_0_in_channels,
+                                 conv2d_0_out_channels=module0_0_conv2d_0_out_channels,
+                                 conv2d_0_stride=module0_0_conv2d_0_stride,
+                                 conv2d_3_in_channels=module0_0_conv2d_3_in_channels,
+                                 conv2d_3_out_channels=module0_0_conv2d_3_out_channels,
+                                 conv2d_5_in_channels=module0_0_conv2d_5_in_channels,
+                                 conv2d_5_out_channels=module0_0_conv2d_5_out_channels,
+                                 conv2d_7_in_channels=module0_0_conv2d_7_in_channels,
+                                 conv2d_7_out_channels=module0_0_conv2d_7_out_channels)
+        self.module9_0 = Module9(conv2d_0_in_channels=module9_0_conv2d_0_in_channels,
+                                 conv2d_0_out_channels=module9_0_conv2d_0_out_channels,
+                                 conv2d_0_kernel_size=module9_0_conv2d_0_kernel_size,
+                                 conv2d_0_stride=module9_0_conv2d_0_stride,
+                                 conv2d_0_padding=module9_0_conv2d_0_padding,
+                                 conv2d_0_pad_mode=module9_0_conv2d_0_pad_mode)
+        self.module9_1 = Module9(conv2d_0_in_channels=module9_1_conv2d_0_in_channels,
+                                 conv2d_0_out_channels=module9_1_conv2d_0_out_channels,
+                                 conv2d_0_kernel_size=module9_1_conv2d_0_kernel_size,
+                                 conv2d_0_stride=module9_1_conv2d_0_stride,
+                                 conv2d_0_padding=module9_1_conv2d_0_padding,
+                                 conv2d_0_pad_mode=module9_1_conv2d_0_pad_mode)
+        self.module0_1 = Module0(conv2d_0_in_channels=module0_1_conv2d_0_in_channels,
+                                 conv2d_0_out_channels=module0_1_conv2d_0_out_channels,
+                                 conv2d_0_stride=module0_1_conv2d_0_stride,
+                                 conv2d_3_in_channels=module0_1_conv2d_3_in_channels,
+                                 conv2d_3_out_channels=module0_1_conv2d_3_out_channels,
+                                 conv2d_5_in_channels=module0_1_conv2d_5_in_channels,
+                                 conv2d_5_out_channels=module0_1_conv2d_5_out_channels,
+                                 conv2d_7_in_channels=module0_1_conv2d_7_in_channels,
+                                 conv2d_7_out_channels=module0_1_conv2d_7_out_channels)
+        self.conv2d_0 = nn.Conv2d(in_channels=conv2d_0_in_channels,
+                                  out_channels=conv2d_0_out_channels,
+                                  kernel_size=(1, 1),
+                                  stride=(1, 1),
+                                  padding=0,
+                                  pad_mode="valid",
+                                  dilation=(1, 1),
+                                  group=1,
+                                  has_bias=True)
+        self.relu_1 = nn.ReLU()
+        self.conv2d_2 = nn.Conv2d(in_channels=conv2d_2_in_channels,
+                                  out_channels=conv2d_2_out_channels,
+                                  kernel_size=(3, 3),
+                                  stride=(2, 2),
+                                  padding=(1, 1, 1, 1),
+                                  pad_mode="pad",
+                                  dilation=(1, 1),
+                                  group=1,
+                                  has_bias=True)
+        self.relu_3 = nn.ReLU()
+        self.conv2d_4 = nn.Conv2d(in_channels=conv2d_4_in_channels,
+                                  out_channels=conv2d_4_out_channels,
+                                  kernel_size=(1, 1),
+                                  stride=(1, 1),
+                                  padding=0,
+                                  pad_mode="valid",
+                                  dilation=(1, 1),
+                                  group=1,
+                                  has_bias=True)
+        self.module0_2 = Module0(conv2d_0_in_channels=module0_2_conv2d_0_in_channels,
+                                 conv2d_0_out_channels=module0_2_conv2d_0_out_channels,
+                                 conv2d_0_stride=module0_2_conv2d_0_stride,
+                                 conv2d_3_in_channels=module0_2_conv2d_3_in_channels,
+                                 conv2d_3_out_channels=module0_2_conv2d_3_out_channels,
+                                 conv2d_5_in_channels=module0_2_conv2d_5_in_channels,
+                                 conv2d_5_out_channels=module0_2_conv2d_5_out_channels,
+                                 conv2d_7_in_channels=module0_2_conv2d_7_in_channels,
+                                 conv2d_7_out_channels=module0_2_conv2d_7_out_channels)
+
+    def construct(self, x, x0):
+        module0_0_opt = self.module0_0(x, x0)
+        module9_0_opt = self.module9_0(module0_0_opt)
+        module9_1_opt = self.module9_1(module9_0_opt)
+        module0_1_opt = self.module0_1(module9_1_opt, module0_0_opt)
+        opt_conv2d_0 = self.conv2d_0(module0_1_opt)
+        opt_relu_1 = self.relu_1(opt_conv2d_0)
+        opt_conv2d_2 = self.conv2d_2(opt_relu_1)
+        opt_relu_3 = self.relu_3(opt_conv2d_2)
+        opt_conv2d_4 = self.conv2d_4(opt_relu_3)
+        module0_2_opt = self.module0_2(module0_1_opt, opt_conv2d_4)
+        return module0_2_opt
+
+
+class Module13(nn.Cell):
+
+    def __init__(self):
+        super(Module13, self).__init__()
+        self.module9_0 = Module9(conv2d_0_in_channels=1024,
+                                 conv2d_0_out_channels=256,
+                                 conv2d_0_kernel_size=(1, 1),
+                                 conv2d_0_stride=(1, 1),
+                                 conv2d_0_padding=0,
+                                 conv2d_0_pad_mode="valid")
+        self.module9_1 = Module9(conv2d_0_in_channels=256,
+                                 conv2d_0_out_channels=256,
+                                 conv2d_0_kernel_size=(3, 3),
+                                 conv2d_0_stride=(1, 1),
+                                 conv2d_0_padding=(1, 1, 1, 1),
+                                 conv2d_0_pad_mode="pad")
+
+    def construct(self, x):
+        module9_0_opt = self.module9_0(x)
+        module9_1_opt = self.module9_1(module9_0_opt)
+        return module9_1_opt
+
+
+class Module29(nn.Cell):
+
+    def __init__(self):
+        super(Module29, self).__init__()
+        self.dense_0 = nn.Dense(in_channels=2208, out_channels=1024, has_bias=True)
+        self.dense_1 = nn.Dense(in_channels=1024, out_channels=1024, has_bias=True)
+
+    def construct(self, x):
+        opt_dense_0 = self.dense_0(x)
+        opt_dense_1 = self.dense_1(opt_dense_0)
+        return opt_dense_1
+
+
+class Module35(nn.Cell):
+
+    def __init__(self):
+        super(Module35, self).__init__()
+        self.concat_0 = P.Concat(axis=1)
+        self.module29_0 = Module29()
+
+    def construct(self, x, x0, x1, x2, x3):
+        opt_concat_0 = self.concat_0((x, x0, x1, x2, x3))
+        module29_0_opt = self.module29_0(opt_concat_0)
+        return module29_0_opt
+
+
+class MindSporeModel(nn.Cell):
+
+    def __init__(self):
+        super(MindSporeModel, self).__init__()
+        self.conv2d_0 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=(7, 7), stride=(2, 2),
+                                  padding=(3, 3, 3, 3), pad_mode="pad", dilation=(1, 1), group=1, has_bias=True)
+        self.relu_1 = nn.ReLU()
+        self.pad_maxpool2d_2 = nn.Pad(paddings=((0, 0), (0, 0), (1, 0), (1, 0)))
+        self.maxpool2d_2 = nn.MaxPool2d(kernel_size=(3, 3), stride=(2, 2))
+        self.module18_0 = Module18(conv2d_0_in_channels=64, conv2d_0_out_channels=256,
+                                   module9_0_conv2d_0_in_channels=64, module9_0_conv2d_0_out_channels=64,
+                                   module9_0_conv2d_0_kernel_size=(1, 1), module9_0_conv2d_0_stride=(1, 1),
+                                   module9_0_conv2d_0_padding=0, module9_0_conv2d_0_pad_mode="valid",
+                                   module9_1_conv2d_0_in_channels=64, module9_1_conv2d_0_out_channels=64,
+                                   module9_1_conv2d_0_kernel_size=(3, 3), module9_1_conv2d_0_stride=(1, 1),
+                                   module9_1_conv2d_0_padding=(1, 1, 1, 1), module9_1_conv2d_0_pad_mode="pad")
+        self.module0_0 = Module0(conv2d_0_in_channels=64, conv2d_0_out_channels=256, conv2d_0_stride=(1, 1),
+                                 conv2d_3_in_channels=256, conv2d_3_out_channels=64, conv2d_5_in_channels=64,
+                                 conv2d_5_out_channels=64, conv2d_7_in_channels=64, conv2d_7_out_channels=256)
+        self.module25_0 = Module25(conv2d_0_in_channels=64, conv2d_0_out_channels=256,
+                                   module9_0_conv2d_0_in_channels=256, module9_0_conv2d_0_out_channels=64,
+                                   module9_0_conv2d_0_kernel_size=(1, 1), module9_0_conv2d_0_stride=(1, 1),
+                                   module9_0_conv2d_0_padding=0, module9_0_conv2d_0_pad_mode="valid",
+                                   module9_1_conv2d_0_in_channels=64, module9_1_conv2d_0_out_channels=64,
+                                   module9_1_conv2d_0_kernel_size=(3, 3), module9_1_conv2d_0_stride=(1, 1),
+                                   module9_1_conv2d_0_padding=(1, 1, 1, 1), module9_1_conv2d_0_pad_mode="pad")
+        self.module18_1 = Module18(conv2d_0_in_channels=128, conv2d_0_out_channels=512,
+                                   module9_0_conv2d_0_in_channels=256, module9_0_conv2d_0_out_channels=128,
+                                   module9_0_conv2d_0_kernel_size=(1, 1), module9_0_conv2d_0_stride=(1, 1),
+                                   module9_0_conv2d_0_padding=0, module9_0_conv2d_0_pad_mode="valid",
+                                   module9_1_conv2d_0_in_channels=128, module9_1_conv2d_0_out_channels=128,
+                                   module9_1_conv2d_0_kernel_size=(3, 3), module9_1_conv2d_0_stride=(2, 2),
+                                   module9_1_conv2d_0_padding=(1, 1, 1, 1), module9_1_conv2d_0_pad_mode="pad")
+        self.module36_0 = Module36(conv2d_0_in_channels=512, conv2d_0_out_channels=256, conv2d_2_in_channels=256,
+                                   conv2d_2_out_channels=256, conv2d_4_in_channels=256, conv2d_4_out_channels=1024,
+                                   module0_0_conv2d_0_in_channels=256, module0_0_conv2d_0_out_channels=512,
+                                   module0_0_conv2d_0_stride=(2, 2), module0_0_conv2d_3_in_channels=512,
+                                   module0_0_conv2d_3_out_channels=128, module0_0_conv2d_5_in_channels=128,
+                                   module0_0_conv2d_5_out_channels=128, module0_0_conv2d_7_in_channels=128,
+                                   module0_0_conv2d_7_out_channels=512, module9_0_conv2d_0_in_channels=512,
+                                   module9_0_conv2d_0_out_channels=128, module9_0_conv2d_0_kernel_size=(1, 1),
+                                   module9_0_conv2d_0_stride=(1, 1), module9_0_conv2d_0_padding=0,
+                                   module9_0_conv2d_0_pad_mode="valid", module9_1_conv2d_0_in_channels=128,
+                                   module9_1_conv2d_0_out_channels=128, module9_1_conv2d_0_kernel_size=(3, 3),
+                                   module9_1_conv2d_0_stride=(1, 1), module9_1_conv2d_0_padding=(1, 1, 1, 1),
+                                   module9_1_conv2d_0_pad_mode="pad", module0_1_conv2d_0_in_channels=128,
+                                   module0_1_conv2d_0_out_channels=512, module0_1_conv2d_0_stride=(1, 1),
+                                   module0_1_conv2d_3_in_channels=512, module0_1_conv2d_3_out_channels=128,
+                                   module0_1_conv2d_5_in_channels=128, module0_1_conv2d_5_out_channels=128,
+                                   module0_1_conv2d_7_in_channels=128, module0_1_conv2d_7_out_channels=512,
+                                   module0_2_conv2d_0_in_channels=512, module0_2_conv2d_0_out_channels=1024,
+                                   module0_2_conv2d_0_stride=(2, 2), module0_2_conv2d_3_in_channels=1024,
+                                   module0_2_conv2d_3_out_channels=256, module0_2_conv2d_5_in_channels=256,
+                                   module0_2_conv2d_5_out_channels=256, module0_2_conv2d_7_in_channels=256,
+                                   module0_2_conv2d_7_out_channels=1024)
+        self.module13_0 = Module13()
+        self.module36_1 = Module36(conv2d_0_in_channels=1024, conv2d_0_out_channels=512, conv2d_2_in_channels=512,
+                                   conv2d_2_out_channels=512, conv2d_4_in_channels=512, conv2d_4_out_channels=2048,
+                                   module0_0_conv2d_0_in_channels=256, module0_0_conv2d_0_out_channels=1024,
+                                   module0_0_conv2d_0_stride=(1, 1), module0_0_conv2d_3_in_channels=1024,
+                                   module0_0_conv2d_3_out_channels=256, module0_0_conv2d_5_in_channels=256,
+                                   module0_0_conv2d_5_out_channels=256, module0_0_conv2d_7_in_channels=256,
+                                   module0_0_conv2d_7_out_channels=1024, module9_0_conv2d_0_in_channels=1024,
+                                   module9_0_conv2d_0_out_channels=256, module9_0_conv2d_0_kernel_size=(1, 1),
+                                   module9_0_conv2d_0_stride=(1, 1), module9_0_conv2d_0_padding=0,
+                                   module9_0_conv2d_0_pad_mode="valid", module9_1_conv2d_0_in_channels=256,
+                                   module9_1_conv2d_0_out_channels=256, module9_1_conv2d_0_kernel_size=(3, 3),
+                                   module9_1_conv2d_0_stride=(1, 1), module9_1_conv2d_0_padding=(1, 1, 1, 1),
+                                   module9_1_conv2d_0_pad_mode="pad", module0_1_conv2d_0_in_channels=256,
+                                   module0_1_conv2d_0_out_channels=1024, module0_1_conv2d_0_stride=(1, 1),
+                                   module0_1_conv2d_3_in_channels=1024, module0_1_conv2d_3_out_channels=256,
+                                   module0_1_conv2d_5_in_channels=256, module0_1_conv2d_5_out_channels=256,
+                                   module0_1_conv2d_7_in_channels=256, module0_1_conv2d_7_out_channels=1024,
+                                   module0_2_conv2d_0_in_channels=1024, module0_2_conv2d_0_out_channels=2048,
+                                   module0_2_conv2d_0_stride=(2, 2), module0_2_conv2d_3_in_channels=2048,
+                                   module0_2_conv2d_3_out_channels=512, module0_2_conv2d_5_in_channels=512,
+                                   module0_2_conv2d_5_out_channels=512, module0_2_conv2d_7_in_channels=512,
+                                   module0_2_conv2d_7_out_channels=2048)
+        self.module25_1 = Module25(conv2d_0_in_channels=512, conv2d_0_out_channels=2048,
+                                   module9_0_conv2d_0_in_channels=2048, module9_0_conv2d_0_out_channels=512,
+                                   module9_0_conv2d_0_kernel_size=(1, 1), module9_0_conv2d_0_stride=(1, 1),
+                                   module9_0_conv2d_0_padding=0, module9_0_conv2d_0_pad_mode="valid",
+                                   module9_1_conv2d_0_in_channels=512, module9_1_conv2d_0_out_channels=512,
+                                   module9_1_conv2d_0_kernel_size=(3, 3), module9_1_conv2d_0_stride=(1, 1),
+                                   module9_1_conv2d_0_padding=(1, 1, 1, 1), module9_1_conv2d_0_pad_mode="pad")
+        self.avgpool2d_119 = nn.AvgPool2d(kernel_size=(8, 6))
+        self.flatten_120 = nn.Flatten()
+        self.concat_121 = P.Concat(axis=1)
+        self.module29_0 = Module29()
+        self.dense_124 = nn.Dense(in_channels=1024, out_channels=144, has_bias=True)
+        self.dense_125 = nn.Dense(in_channels=1024, out_channels=10, has_bias=True)
+        self.dense_126 = nn.Dense(in_channels=1024, out_channels=3, has_bias=True)
+        self.module35_0 = Module35()
+        self.dense_133 = nn.Dense(in_channels=1024, out_channels=144, has_bias=True)
+        self.dense_134 = nn.Dense(in_channels=1024, out_channels=10, has_bias=True)
+        self.dense_135 = nn.Dense(in_channels=1024, out_channels=3, has_bias=True)
+        self.module35_1 = Module35()
+        self.dense_142 = nn.Dense(in_channels=1024, out_channels=144, has_bias=True)
+        self.dense_143 = nn.Dense(in_channels=1024, out_channels=10, has_bias=True)
+        self.dense_144 = nn.Dense(in_channels=1024, out_channels=3, has_bias=True)
+
+    def construct(self, inp, x0, x1, x2, x3):
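+        # inp: normalized image crop; x0: bbox info; x1/x2/x3: initial SMPL pose/shape/camera.
+        # The dense heads below perform HMR-style iterative regression: each of the three
+        # stages predicts residuals that are added to the previous pose/shape/camera estimates.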
+        opt_conv2d_0 = self.conv2d_0(inp)
+        opt_relu_1 = self.relu_1(opt_conv2d_0)
+        opt_maxpool2d_2 = self.pad_maxpool2d_2(opt_relu_1)
+        opt_maxpool2d_2 = self.maxpool2d_2(opt_maxpool2d_2)
+        module18_0_opt = self.module18_0(opt_maxpool2d_2)
+        module0_0_opt = self.module0_0(opt_maxpool2d_2, module18_0_opt)
+        module25_0_opt = self.module25_0(module0_0_opt)
+        module18_1_opt = self.module18_1(module25_0_opt)
+        module36_0_opt = self.module36_0(module25_0_opt, module18_1_opt)
+        module13_0_opt = self.module13_0(module36_0_opt)
+        module36_1_opt = self.module36_1(module13_0_opt, module36_0_opt)
+        module25_1_opt = self.module25_1(module36_1_opt)
+        opt_avgpool2d_119 = self.avgpool2d_119(module25_1_opt)
+        opt_flatten_120 = self.flatten_120(opt_avgpool2d_119)
+        opt_concat_121 = self.concat_121((opt_flatten_120, x0, x1, x2, x3))
+        module29_0_opt = self.module29_0(opt_concat_121)
+        opt_dense_124 = self.dense_124(module29_0_opt)
+        opt_add_127 = P.Add()(opt_dense_124, x1)
+        opt_dense_125 = self.dense_125(module29_0_opt)
+        opt_add_128 = P.Add()(opt_dense_125, x2)
+        opt_dense_126 = self.dense_126(module29_0_opt)
+        opt_add_129 = P.Add()(opt_dense_126, x3)
+        module35_0_opt = self.module35_0(opt_flatten_120, x0, opt_add_127, opt_add_128, opt_add_129)
+        opt_dense_133 = self.dense_133(module35_0_opt)
+        opt_add_136 = P.Add()(opt_dense_133, opt_add_127)
+        opt_dense_134 = self.dense_134(module35_0_opt)
+        opt_add_137 = P.Add()(opt_dense_134, opt_add_128)
+        opt_dense_135 = self.dense_135(module35_0_opt)
+        opt_add_138 = P.Add()(opt_dense_135, opt_add_129)
+        module35_1_opt = self.module35_1(opt_flatten_120, x0, opt_add_136, opt_add_137, opt_add_138)
+        opt_dense_142 = self.dense_142(module35_1_opt)
+        opt_add_145 = P.Add()(opt_dense_142, opt_add_136)
+        opt_dense_143 = self.dense_143(module35_1_opt)
+        opt_add_146 = P.Add()(opt_dense_143, opt_add_137)
+        opt_dense_144 = self.dense_144(module35_1_opt)
+        opt_add_147 = P.Add()(opt_dense_144, opt_add_138)
+        return opt_add_145, opt_add_146, opt_add_147
diff --git a/research/cv/CLIFF/requirements.txt b/research/cv/CLIFF/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..9b720736c931c23642b2fe3b18e688715aeb89e2
--- /dev/null
+++ b/research/cv/CLIFF/requirements.txt
@@ -0,0 +1,2 @@
+opencv-python>=4.6.0.66
+numpy>=1.23.1
\ No newline at end of file