diff --git a/research/cv/pointnet/README.md b/research/cv/pointnet/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..c79c027254444badded12fc6b69aece9b12f1681
--- /dev/null
+++ b/research/cv/pointnet/README.md
@@ -0,0 +1,258 @@
+# Contents
+
+- [Contents](#contents)
+- [PointNet Description](#pointnet-description)
+- [Model Architecture](#model-architecture)
+- [Dataset](#dataset)
+- [Environment Requirements](#environment-requirements)
+- [Quick Start](#quick-start)
+- [Script Description](#script-description)
+    - [Script and Sample Code](#script-and-sample-code)
+    - [Script Parameters](#script-parameters)
+    - [Training Process](#training-process)
+        - [Training](#training)
+    - [Evaluation Process](#evaluation-process)
+        - [Evaluation](#evaluation)
+    - [310 Inference Process](#310-inference-process)
+        - [Export MindIR](#export-mindir)
+- [Model Description](#model-description)
+    - [Performance](#performance)
+        - [Training Performance](#training-performance)
+        - [Inference Performance](#inference-performance)
+- [Description of Random Situation](#description-of-random-situation)
+- [ModelZoo Homepage](#modelzoo-homepage)
+
+# [PointNet Description](#contents)
+
+PointNet was proposed in 2017. It applies a deep learning model directly to raw point cloud data and provides a unified architecture for 3D classification and segmentation.
+
+[Paper](https://arxiv.org/abs/1612.00593): Qi, Charles R., et al. "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
+ arXiv preprint arXiv:1612.00593 (2017).
+
+# [Model Architecture](#contents)
+
+For each n × 3 point cloud input, the network first aligns it spatially with a T-Net (rotating it to a canonical pose), maps each point into a 64-dimensional space through an MLP, aligns the features again, and finally maps them into a 1024-dimensional space. Each point then has a 1024-dimensional vector representation, which is clearly redundant for a 3-dimensional point cloud, so a max-pooling operation is applied to keep only the maximum over all points on each of the 1024 channels. The resulting 1 × 1024 vector is the global feature of the n input points.
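+
+Below is a minimal MindSpore sketch of this shape flow. It is an illustration only, not the network defined in src/network.py, and the T-Net alignment blocks are omitted for brevity:
+
+```python
+# Minimal sketch of the PointNet global-feature pipeline (T-Nets omitted).
+import numpy as np
+import mindspore.nn as nn
+import mindspore.ops as ops
+from mindspore import Tensor
+
+class GlobalFeature(nn.Cell):
+    """Map a (B, 3, n) point cloud to a (B, 1024) global feature."""
+    def __init__(self):
+        super(GlobalFeature, self).__init__()
+        self.mlp1 = nn.Conv1d(3, 64, 1)      # per-point MLP: 3 -> 64
+        self.mlp2 = nn.Conv1d(64, 1024, 1)   # per-point MLP: 64 -> 1024
+        self.relu = nn.ReLU()
+        self.reduce_max = ops.ReduceMax()    # symmetric function over the n points
+
+    def construct(self, x):                  # x: (B, 3, n)
+        x = self.relu(self.mlp1(x))          # (B, 64, n)
+        x = self.relu(self.mlp2(x))          # (B, 1024, n)
+        return self.reduce_max(x, 2)         # (B, 1024) global feature
+
+net = GlobalFeature()
+points = Tensor(np.random.rand(2, 3, 2500).astype(np.float32))
+print(net(points).shape)  # (2, 1024)
+```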
+
+# [Dataset](#contents)
+
+Dataset used: a segmentation subset of [ShapeNet](<https://shapenet.cs.stanford.edu/ericyi/shapenetcore_partanno_segmentation_benchmark_v0.zip>)
+
+- Data format: txt files
+    - Note: data will be processed in src/dataset.py
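+
+As a quick illustration of the on-disk sample format (the `<category>`/`<uuid>` names below are placeholders; the loading mirrors the np.loadtxt calls in src/dataset.py):
+
+```python
+# Each .pts file stores one "x y z" line per point; the matching .seg file
+# stores one integer part label per line.
+import numpy as np
+
+points = np.loadtxt('<category>/points/<uuid>.pts').astype(np.float32)      # (n, 3)
+labels = np.loadtxt('<category>/points_label/<uuid>.seg').astype(np.int64)  # (n,)
+assert points.shape[0] == labels.shape[0]
+```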
+
+# [Environment Requirements](#contents)
+
+- Hardware (Ascend)
+    - Prepare hardware environment with Ascend processor.
+- Framework
+    - [MindSpore](https://www.mindspore.cn/install)
+- For more information, please check the resources below:
+    - [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
+
+# [Quick Start](#contents)
+
+After installing MindSpore via the official website, you can start training and evaluation as follows:
+
+```shell
+# Run stand-alone training
+bash scripts/run_standalone_train.sh [DATA_PATH] [CKPT_PATH]
+# example:
+bash scripts/run_standalone_train.sh '/home/pointnet/shapenetcore_partanno_segmentation_benchmark_v0' '../results'
+
+# Run distributed training
+bash scripts/run_distribution.sh [RANK_TABLE_FILE] [CKPTS_DIR] [DATA_PATH]
+# example:
+bash scripts/run_distribution.sh hccl_8p_01234567_127.0.0.1.json '../results' '/home/pointnet/shapenetcore_partanno_segmentation_benchmark_v0'
+
+# Evaluate
+bash scripts/run_standalone_eval.sh [DATA_PATH] [MODEL_PATH]
+# example:
+bash scripts/run_standalone_eval.sh '/home/pointnet/shapenetcore_partanno_segmentation_benchmark_v0' '../results/pointnet_network_epoch_10.ckpt'
+```
+
+# [Script Description](#contents)
+
+# [Script and Sample Code](#contents)
+
+```bash
+├── .
+    ├── pointnet
+        ├── ascend310_infer
+        │   ├── inc
+        │   │   ├── utils.h
+        │   ├── src
+        │   │   ├── main.cc
+        │   │   ├── utils.cc
+        │   ├── build.sh
+        ├── scripts
+        │   ├── run_distribution.sh       # launch distributed training on Ascend (8p)
+        │   ├── run_standalone_eval.sh    # launch evaluation on Ascend
+        │   ├── run_infer_310.sh          # launch 310 inference
+        │   └── run_standalone_train.sh   # launch standalone training on Ascend (1p)
+        ├── src
+        │   ├── misc                     # dataset part
+        │   ├── dataset.py               # data preprocessing
+        │   ├── export.py                # export model
+        │   ├── loss.py                  # pointnet loss
+        │   ├── network.py               # network definition
+        │   └── preprocess.py            # data preprocessing for training
+        ├── eval.py                      # eval net
+        ├── postprocess.py               # 310 postprocess
+        ├── preprocess.py                # 310 preprocess
+        ├── README.md
+        ├── requirements.txt
+        └── train.py                     # train net
+```
+
+# [Script Parameters](#contents)
+
+```bash
+Major parameters in train.py are as follows:
+--batchSize        # Training batch size.
+--nepoch           # Total number of training epochs.
+--learning_rate    # Training learning rate.
+--device_id        # Device on which to train.
+--data_url         # Path to the training and evaluation datasets.
+--loss_per_epoch   # Number of times to print the loss value per epoch.
+--train_url        # Path to save files generated during training.
+--model            # File path from which to load a checkpoint.
+--enable_modelarts # Whether to run on ModelArts.
+```
+
+# [Training Process](#contents)
+
+## Training
+
+- running on Ascend
+
+```shell
+# Run stand-alone training
+bash scripts/run_standalone_train.sh [DATA_PATH] [CKPT_PATH]
+# example:
+bash scripts/run_standalone_train.sh '/home/pointnet/shapenetcore_partanno_segmentation_benchmark_v0' '../results'
+
+# Run distributed training
+bash scripts/run_distribution.sh [RANK_TABLE_FILE] [CKPTS_DIR] [DATA_PATH]
+# example:
+bash scripts/run_distribution.sh hccl_8p_01234567_127.0.0.1.json '../results' '/home/pointnet/shapenetcore_partanno_segmentation_benchmark_v0'
+```
+
+Distributed training requires the creation of an HCCL configuration file in JSON format in advance. For specific
+operations, see the instructions
+in [hccl_tools](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
+
+During training, the loss values are printed as follows:
+
+```bash
+# train log
+Epoch : 1/25  episode : 1/40   Loss : 1.3433  Accuracy : 0.489538 step_time: 1.4269
+Epoch : 1/25  episode : 2/40   Loss : 1.2932  Accuracy : 0.541544 step_time: 1.4238
+Epoch : 1/25  episode : 3/40   Loss : 1.2558  Accuracy : 0.567900 step_time: 1.4397
+Epoch : 1/25  episode : 4/40   Loss : 1.1843  Accuracy : 0.654681 step_time: 1.4235
+Epoch : 1/25  episode : 5/40   Loss : 1.1262  Accuracy : 0.726756 step_time: 1.4206
+Epoch : 1/25  episode : 6/40   Loss : 1.1000  Accuracy : 0.736225 step_time: 1.4363
+Epoch : 1/25  episode : 7/40   Loss : 1.0487  Accuracy : 0.814338 step_time: 1.4457
+Epoch : 1/25  episode : 8/40   Loss : 1.0271  Accuracy : 0.782350 step_time: 1.4183
+Epoch : 1/25  episode : 9/40   Loss : 0.9777  Accuracy : 0.831025 step_time: 1.4289
+
+...
+```
+
+The model checkpoint will be saved in the specified checkpoint directory ([CKPT_PATH]).
+
+# [Evaluation Process](#contents)
+
+## Evaluation
+
+Before running the command below, please check the checkpoint path used for evaluation.
+
+- running on Ascend
+
+```shell
+# Evaluate
+bash scripts/run_standalone_eval.sh [DATA_PATH] [MODEL_PATH]
+# example:
+bash scripts/run_standalone_eval.sh '/home/pointnet/shapenetcore_partanno_segmentation_benchmark_v0' '../results/pointnet_network_epoch_10.ckpt'
+```
+
+You can view the results through the file "eval.log". The accuracy of the test dataset will be as follows:
+
+```bash
+# grep "mIOU " eval.log
+'mIOU for class Chair: 0.869'
+```
+
+# [310 Inference Process](#contents)
+
+## [Export MindIR](#contents)
+
+```bash
+python src/export.py --model [CKPT_PATH] --file_format [FILE_FORMAT]
+```
+
+FILE_FORMAT should be one of ['AIR','MINDIR'].
+
+The MindIR model will be exported to './mindir/pointnet.mindir'.
+
+## [310 Infer](#contents)
+
+Before running inference on Ascend 310, the MindIR model must be exported first. Then run the commands below to infer:
+
+```bash
+bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [LABEL_PATH] [DVPP] [DEVICE_ID]
+# example:
+bash run_infer_310.sh ./mindir/pointnet.mindir ../shapenetcore_partanno_segmentation_benchmark_v0 [LABEL_PATH] N 2
+```
+
+Note that for this model, DVPP must be set to 'N'.
+
+## [Result](#contents)
+
+```bash
+'mIOU : 0.869 '
+```
+
+# [Model Description](#contents)
+
+## [Performance](#contents)
+
+## Training Performance
+
+| Parameters                 | Ascend                                                      |
+| -------------------------- | ----------------------------------------------------------- |
+| Model Version              | PointNet                                                  |
+| Resource                   | Ascend 910; CPU 24cores; Memory 256G; OS Euler2.8           |
+| Uploaded Date              | 11/30/2021 (month/day/year)                                 |
+| MindSpore Version          | 1.3.0                                                       |
+| Dataset                    | A subset of ShapeNet                                                  |
+| Training Parameters        | epoch=25, steps=83, batch_size=64, lr=0.005             |
+| Optimizer                  | Adam                                                        |
+| Loss Function              | NLLLoss                                                     |
+| outputs                    | probability                                                 |
+| Loss                       | 0.01                                                        |
+| Speed                      | 1.5 s/step (1p)                                             |
+| Total time                 | 0.3 h (1p)                                                 |
+| Checkpoint for Fine tuning | 17 MB (.ckpt file)                                          |
+
+## Inference Performance
+
+| Parameters          | Ascend                      |
+| ------------------- | --------------------------- |
+| Model Version       | PointNet                  |
+| Resource            | Ascend 910; CPU 24cores; Memory 256G; OS Euler2.8 |
+| Uploaded Date       | 11/30/2021 (month/day/year) |
+| MindSpore Version   | 1.3.0                       |
+| Dataset             | A subset of ShapeNet                 |
+| Batch_size          | 64                          |
+| Outputs             | probability                 |
+| mIOU                | 86.3% (1p)                  |
+| Total time          | 1 min                     |
+
+# [Description of Random Situation](#contents)
+
+We use a random seed in train.py.
+
+# [ModelZoo Homepage](#contents)
+
+Please check the official [homepage](https://gitee.com/mindspore/models).
diff --git a/research/cv/pointnet/ascend310_infer/CMakeLists.txt b/research/cv/pointnet/ascend310_infer/CMakeLists.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f936989f21405ae4ba068d1284d62f4e1b8613a5
--- /dev/null
+++ b/research/cv/pointnet/ascend310_infer/CMakeLists.txt
@@ -0,0 +1,15 @@
+cmake_minimum_required(VERSION 3.14.1)
+project(Ascend310Infer)
+add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)
+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O2 -g -std=c++17 -Werror -Wall -fPIE -Wl,--allow-shlib-undefined")
+set(PROJECT_SRC_ROOT ${CMAKE_CURRENT_LIST_DIR}/)
+option(MINDSPORE_PATH "mindspore install path" "")
+include_directories(${MINDSPORE_PATH})
+include_directories(${MINDSPORE_PATH}/include)
+include_directories(${PROJECT_SRC_ROOT})
+find_library(MS_LIB libmindspore.so ${MINDSPORE_PATH}/lib)
+file(GLOB_RECURSE MD_LIB ${MINDSPORE_PATH}/_c_dataengine*)
+
+find_package(gflags REQUIRED)
+add_executable(main src/main.cc src/utils.cc)
+target_link_libraries(main ${MS_LIB} ${MD_LIB} gflags)
\ No newline at end of file
diff --git a/research/cv/pointnet/ascend310_infer/build.sh b/research/cv/pointnet/ascend310_infer/build.sh
new file mode 100644
index 0000000000000000000000000000000000000000..713d7f657ddfa5f75b069351c55f8447f77c72d0
--- /dev/null
+++ b/research/cv/pointnet/ascend310_infer/build.sh
@@ -0,0 +1,29 @@
+#!/bin/bash
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+if [ -d out ]; then
+    rm -rf out
+fi
+
+mkdir out
+cd out || exit
+
+if [ -f "Makefile" ]; then
+  make clean
+fi
+
+cmake .. \
+    -DMINDSPORE_PATH="`pip show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`"
+make
diff --git a/research/cv/pointnet/ascend310_infer/inc/utils.h b/research/cv/pointnet/ascend310_infer/inc/utils.h
new file mode 100644
index 0000000000000000000000000000000000000000..efebe03a8c1179f5a1f9d5f7ee07e0352a9937c6
--- /dev/null
+++ b/research/cv/pointnet/ascend310_infer/inc/utils.h
@@ -0,0 +1,32 @@
+/**
+ * Copyright 2021 Huawei Technologies Co., Ltd
+ * 
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef MINDSPORE_INFERENCE_UTILS_H_
+#define MINDSPORE_INFERENCE_UTILS_H_
+
+#include <sys/stat.h>
+#include <dirent.h>
+#include <vector>
+#include <string>
+#include <memory>
+#include "include/api/types.h"
+
+std::vector<std::string> GetAllFiles(std::string_view dirName);
+DIR *OpenDir(std::string_view dirName);
+std::string RealPath(std::string_view path);
+mindspore::MSTensor ReadFileToTensor(const std::string &file);
+int WriteResult(const std::string& imageFile, const std::vector<mindspore::MSTensor> &outputs);
+#endif
diff --git a/research/cv/pointnet/ascend310_infer/src/main.cc b/research/cv/pointnet/ascend310_infer/src/main.cc
new file mode 100644
index 0000000000000000000000000000000000000000..d15f4ef16da9c57f54183ec4d7bbd796f0625c26
--- /dev/null
+++ b/research/cv/pointnet/ascend310_infer/src/main.cc
@@ -0,0 +1,163 @@
+/**
+ * Copyright 2021 Huawei Technologies Co., Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <sys/time.h>
+#include <gflags/gflags.h>
+#include <dirent.h>
+#include <iostream>
+#include <string>
+#include <algorithm>
+#include <iosfwd>
+#include <vector>
+#include <fstream>
+#include <sstream>
+
+#include "include/api/model.h"
+#include "include/api/context.h"
+#include "include/api/types.h"
+#include "include/api/serialization.h"
+#include "include/dataset/vision_ascend.h"
+#include "include/dataset/execute.h"
+#include "include/dataset/vision.h"
+#include "inc/utils.h"
+
+using mindspore::Context;
+using mindspore::Serialization;
+using mindspore::Model;
+using mindspore::Status;
+using mindspore::ModelType;
+using mindspore::GraphCell;
+using mindspore::kSuccess;
+using mindspore::MSTensor;
+using mindspore::dataset::Execute;
+using mindspore::dataset::TensorTransform;
+using mindspore::dataset::vision::Resize;
+using mindspore::dataset::vision::HWC2CHW;
+using mindspore::dataset::vision::Normalize;
+using mindspore::dataset::vision::Decode;
+
+DEFINE_string(mindir_path, "", "mindir path");
+DEFINE_string(dataset_path, ".", "dataset path");
+DEFINE_int32(device_id, 0, "device id");
+DEFINE_string(aipp_path, "", "aipp path");
+DEFINE_string(cpu_dvpp, "", "cpu or dvpp process");
+DEFINE_int32(image_height, 32, "image height");
+DEFINE_int32(image_width, 32, "image width");
+
+int main(int argc, char **argv) {
+  gflags::ParseCommandLineFlags(&argc, &argv, true);
+  if (RealPath(FLAGS_mindir_path).empty()) {
+    std::cout << "Invalid mindir" << std::endl;
+    return 1;
+  }
+
+  auto context = std::make_shared<Context>();
+  auto ascend310 = std::make_shared<mindspore::Ascend310DeviceInfo>();
+  ascend310->SetDeviceID(FLAGS_device_id);
+  ascend310->SetBufferOptimizeMode("off_optimize");
+  context->MutableDeviceInfo().push_back(ascend310);
+  mindspore::Graph graph;
+  Serialization::Load(FLAGS_mindir_path, ModelType::kMindIR, &graph);
+  if (FLAGS_cpu_dvpp == "DVPP") {
+    if (RealPath(FLAGS_aipp_path).empty()) {
+      std::cout << "Invalid aipp path" << std::endl;
+      return 1;
+    } else {
+      ascend310->SetInsertOpConfigPath(FLAGS_aipp_path);
+    }
+  }
+
+  Model model;
+  Status ret = model.Build(GraphCell(graph), context);
+  if (ret != kSuccess) {
+    std::cout << "ERROR: Build failed." << std::endl;
+    return 1;
+  }
+
+  auto all_files = GetAllFiles(FLAGS_dataset_path);
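+  // costTime_map records, for each input, inference start time (ms) -> end time (ms).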
+  std::map<double, double> costTime_map;
+  size_t size = all_files.size();
+
+  for (size_t i = 0; i < size; ++i) {
+    struct timeval start = {0};
+    struct timeval end = {0};
+    double startTimeMs;
+    double endTimeMs;
+    std::vector<MSTensor> inputs;
+    std::vector<MSTensor> outputs;
+    std::cout << "Start predict input files:" << all_files[i] << std::endl;
+    if (FLAGS_cpu_dvpp == "DVPP") {
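+      // Note: this DVPP branch is not exercised for PointNet; run_infer_310.sh
+      // requires DVPP=N, which takes the raw-file branch below.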
+      std::shared_ptr<TensorTransform> decode(new Decode());
+      auto resizeShape = {FLAGS_image_height, FLAGS_image_width};
+      std::shared_ptr<TensorTransform> resize(new Resize(resizeShape));
+      // Execute composeDecode({decode, resize});
+      Execute composeDecode({});
+      auto imgDvpp = std::make_shared<MSTensor>();
+      inputs.emplace_back(imgDvpp->Name(), imgDvpp->DataType(), imgDvpp->Shape(),
+                        imgDvpp->Data().get(), imgDvpp->DataSize());
+    } else if (FLAGS_cpu_dvpp == "CPU")  {
+      std::shared_ptr<TensorTransform> decode(new Decode());
+      std::shared_ptr<TensorTransform> hwc2chw(new HWC2CHW());
+      std::shared_ptr<TensorTransform> normalize(
+      new Normalize({123.675, 116.28, 103.53}, {58.395, 57.120, 57.375}));
+      auto resizeShape = {FLAGS_image_height, FLAGS_image_width};
+      std::shared_ptr<TensorTransform> resize(new Resize(resizeShape));
+      auto resizeShape1 = {1, FLAGS_image_height};
+      std::shared_ptr<TensorTransform> reshape_one_channel(new Resize(resizeShape1));
+      Execute composeDecode({decode, resize, normalize, hwc2chw, reshape_one_channel});
+      auto img = MSTensor();
+      auto image = ReadFileToTensor(all_files[i]);
+      composeDecode(image, &img);
+      std::vector<MSTensor> model_inputs = model.GetInputs();
+      inputs.emplace_back(model_inputs[0].Name(), model_inputs[0].DataType(), model_inputs[0].Shape(),
+                       img.Data().get(), img.DataSize());
+    } else  {
+      auto image = ReadFileToTensor(all_files[i]);
+      inputs.emplace_back(image.Name(), image.DataType(), image.Shape(),
+                        image.Data().get(), image.DataSize());
+    }
+
+    gettimeofday(&start, nullptr);
+    ret = model.Predict(inputs, &outputs);
+    gettimeofday(&end, nullptr);
+    if (ret != kSuccess) {
+      std::cout << "Predict " << all_files[i] << " failed." << std::endl;
+      return 1;
+    }
+    startTimeMs = (1.0 * start.tv_sec * 1000000 + start.tv_usec) / 1000;
+    endTimeMs = (1.0 * end.tv_sec * 1000000 + end.tv_usec) / 1000;
+    costTime_map.insert(std::pair<double, double>(startTimeMs, endTimeMs));
+    WriteResult(all_files[i], outputs);
+  }
+  double average = 0.0;
+  int inferCount = 0;
+
+  for (auto iter = costTime_map.begin(); iter != costTime_map.end(); iter++) {
+    double diff = 0.0;
+    diff = iter->second - iter->first;
+    average += diff;
+    inferCount++;
+  }
+  average = average / inferCount;
+  std::stringstream timeCost;
+  timeCost << "NN inference cost average time: "<< average << " ms of infer_count " << inferCount << std::endl;
+  std::cout << "NN inference cost average time: "<< average << "ms of infer_count " << inferCount << std::endl;
+  std::string fileName = "./time_Result" + std::string("/test_perform_static.txt");
+  std::ofstream fileStream(fileName.c_str(), std::ios::trunc);
+  fileStream << timeCost.str();
+  fileStream.close();
+  costTime_map.clear();
+  return 0;
+}
diff --git a/research/cv/pointnet/ascend310_infer/src/utils.cc b/research/cv/pointnet/ascend310_infer/src/utils.cc
new file mode 100644
index 0000000000000000000000000000000000000000..cc5e872a9377d9028dc6092fa726ab4dc964a9d6
--- /dev/null
+++ b/research/cv/pointnet/ascend310_infer/src/utils.cc
@@ -0,0 +1,130 @@
+/**
+ * Copyright 2021 Huawei Technologies Co., Ltd
+ * 
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "inc/utils.h"
+
+#include <fstream>
+#include <algorithm>
+#include <iostream>
+
+using mindspore::MSTensor;
+using mindspore::DataType;
+
+std::vector<std::string> GetAllFiles(std::string_view dirName) {
+    struct dirent *filename;
+    DIR *dir = OpenDir(dirName);
+    if (dir == nullptr) {
+        return {};
+    }
+    std::vector<std::string> res;
+    while ((filename = readdir(dir)) != nullptr) {
+        std::string dName = std::string(filename->d_name);
+        if (dName == "." || dName == ".." || filename->d_type != DT_REG) {
+            continue;
+        }
+        res.emplace_back(std::string(dirName) + "/" + filename->d_name);
+    }
+    std::sort(res.begin(), res.end());
+    for (auto &f : res) {
+        std::cout << "image file: " << f << std::endl;
+    }
+    return res;
+}
+
+int WriteResult(const std::string& imageFile, const std::vector<MSTensor> &outputs) {
+    std::string homePath = "./result_Files";
+    for (size_t i = 0; i < outputs.size(); ++i) {
+        size_t outputSize;
+        std::shared_ptr<const void> netOutput;
+        netOutput = outputs[i].Data();
+        outputSize = outputs[i].DataSize();
+        int pos = imageFile.rfind('/');
+        std::string fileName(imageFile, pos + 1);
+        fileName.replace(fileName.find('.'), fileName.size() - fileName.find('.'), ".bin");
+        std::string outFileName = homePath + "/" + fileName;
+        FILE * outputFile = fopen(outFileName.c_str(), "wb");
+        fwrite(netOutput.get(), outputSize, sizeof(char), outputFile);
+        fclose(outputFile);
+        outputFile = nullptr;
+    }
+    return 0;
+}
+
+mindspore::MSTensor ReadFileToTensor(const std::string &file) {
+  if (file.empty()) {
+    std::cout << "Pointer file is nullptr" << std::endl;
+    return mindspore::MSTensor();
+  }
+
+  std::ifstream ifs(file);
+  if (!ifs.good()) {
+    std::cout << "File: " << file << " is not exist" << std::endl;
+    return mindspore::MSTensor();
+  }
+
+  if (!ifs.is_open()) {
+    std::cout << "File: " << file << "open failed" << std::endl;
+    return mindspore::MSTensor();
+  }
+
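+  // Read the whole file into a one-dimensional UInt8 tensor sized to the file length.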
+  ifs.seekg(0, std::ios::end);
+  size_t size = ifs.tellg();
+  mindspore::MSTensor buffer(file, mindspore::DataType::kNumberTypeUInt8, {static_cast<int64_t>(size)}, nullptr, size);
+
+  ifs.seekg(0, std::ios::beg);
+  ifs.read(reinterpret_cast<char *>(buffer.MutableData()), size);
+  ifs.close();
+
+  return buffer;
+}
+
+
+DIR *OpenDir(std::string_view dirName) {
+    if (dirName.empty()) {
+        std::cout << " dirName is null ! " << std::endl;
+        return nullptr;
+    }
+    std::string realPath = RealPath(dirName);
+    struct stat s;
+    lstat(realPath.c_str(), &s);
+    if (!S_ISDIR(s.st_mode)) {
+        std::cout << "dirName is not a valid directory !" << std::endl;
+        return nullptr;
+    }
+    DIR *dir;
+    dir = opendir(realPath.c_str());
+    if (dir == nullptr) {
+        std::cout << "Can not open dir " << dirName << std::endl;
+        return nullptr;
+    }
+    std::cout << "Successfully opened the dir " << dirName << std::endl;
+    return dir;
+}
+
+std::string RealPath(std::string_view path) {
+    char realPathMem[PATH_MAX] = {0};
+    char *realPathRet = nullptr;
+    realPathRet = realpath(path.data(), realPathMem);
+
+    if (realPathRet == nullptr) {
+        std::cout << "File: " << path << " is not exist.";
+        return "";
+    }
+
+    std::string realPath(realPathMem);
+    std::cout << path << " realpath is: " << realPath << std::endl;
+    return realPath;
+}
diff --git a/research/cv/pointnet/eval.py b/research/cv/pointnet/eval.py
new file mode 100644
index 0000000000000000000000000000000000000000..033a039a68f387de1af01aba3807cd731230b356
--- /dev/null
+++ b/research/cv/pointnet/eval.py
@@ -0,0 +1,161 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""eval model"""
+from __future__ import print_function
+import argparse
+import os
+import random
+import math
+import numpy as np
+import mindspore
+from mindspore import load_checkpoint, load_param_into_net, context
+import mindspore.dataset as ds
+import mindspore.ops as ops
+from mindspore.communication.management import init, get_rank
+from src.dataset import ShapeNetDataset
+from src.network import PointNetDenseCls
+from tqdm import tqdm
+
+parser = argparse.ArgumentParser(description='MindSpore Pointnet Segmentation')
+parser.add_argument(
+    '--batchSize', type=int, default=32, help='input batch size')
+parser.add_argument(
+    '--nepoch', type=int, default=100, help='number of epochs to train for')
+parser.add_argument('--device_id', type=int, default=0, help='device id')
+parser.add_argument('--device_target', default='Ascend', help='device target')
+parser.add_argument('--data_path', type=str, default='/home/pointnet/shapenetcore_partanno_segmentation_benchmark_v0'
+                    , help="dataset path")
+parser.add_argument('--model_path', type=str, default=''
+                    , help="checkpoint path")
+parser.add_argument('--ckpt_dir', type=str, default='./ckpts'
+                    , help="ckpts path")
+parser.add_argument('--class_choice', type=str, default='Chair', help="class_choice")
+parser.add_argument('--feature_transform', action='store_true', help="use feature transform")
+parser.add_argument('--enable_modelarts', default=False, help="whether to run on ModelArts")
+
+args = parser.parse_args()
+print(args)
+
+def test_net(test_dataset, network, data_path, class_choice, model=None):
+    """test model"""
+    print("============== Starting Testing ==============")
+    if model:
+        param_dict = load_checkpoint(model)
+        load_param_into_net(network, param_dict)
+        print('successfully load model')
+
+    print(type(test_dataset))
+
+    print('batchSize', test_dataset.get_batch_size())
+    print('num_batch', test_dataset.get_dataset_size())
+    print('shapes2', test_dataset.output_shapes())
+
+    print('test_dataset_size', test_dataset.get_dataset_size())
+    network.set_train(False)
+    shape_ious = []
+    for _, data in tqdm(enumerate(test_dataset.create_dict_iterator(), 0)):
+        points, target = data['point'], data['label']
+        pred = network(points)  # pred.shape=[80000,4]
+        pred_choice = ops.ArgMaxWithValue(axis=2)(pred)[0]
+        pred_np = pred_choice.asnumpy()
+        target_np = target.asnumpy() - 1
+
+        for shape_idx in range(target_np.shape[0]):
+            parts = range(num_classes)
+            part_ious = []
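+            # Per-part IoU on this shape: intersection / union of the predicted and
+            # ground-truth point sets; a part absent from both (U == 0) counts as 1.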
+            for part in parts:
+                I = np.sum(np.logical_and(pred_np[shape_idx] == part, target_np[shape_idx] == part))
+                U = np.sum(np.logical_or(pred_np[shape_idx] == part, target_np[shape_idx] == part))
+                if U == 0:
+                    iou = 1
+                else:
+                    iou = I / float(U)
+                part_ious.append(iou)
+            shape_ious.append(np.mean(part_ious))
+            print(np.mean(part_ious))
+
+    print("mIOU for class {}: {}".format(args.class_choice, np.mean(shape_ious)))
+
+
+if __name__ == "__main__":
+    blue = lambda x: '\033[94m' + x + '\033[0m'
+    local_data_url = args.data_path
+    local_train_url = args.ckpt_dir
+    device_num = int(os.getenv("RANK_SIZE", "1"))
+    if args.enable_modelarts:
+        device_id = int(os.getenv("DEVICE_ID"))
+        import moxing as mox
+
+        local_data_url = './cache/data'
+        local_train_url = './cache/ckpt'
+        device_target = args.device_target
+        context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target)
+        context.set_context(save_graphs=False)
+        if device_target == "Ascend":
+            context.set_context(device_id=device_id)
+            if device_num > 1:
+                context.reset_auto_parallel_context()
+                context.set_auto_parallel_context(device_num=device_num,
+                                                  parallel_mode=context.ParallelMode.DATA_PARALLEL,
+                                                  gradients_mean=True)
+                init()
+                local_data_url = os.path.join(local_data_url, str(device_id))
+                local_train_url = os.path.join(local_train_url, "_" + str(get_rank()))
+        else:
+            raise ValueError("Unsupported platform.")
+
+        mox.file.copy_parallel(src_url=args.data_path, dst_url=local_data_url)
+    else:
+        context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target, device_id=args.device_id)
+        context.set_context(save_graphs=False)
+        if device_num > 1:
+            context.reset_auto_parallel_context()
+            context.set_auto_parallel_context(device_num=device_num,
+                                              parallel_mode=context.ParallelMode.DATA_PARALLEL,
+                                              gradients_mean=True)
+            init()
+
+    if not os.path.exists(local_train_url):
+        os.makedirs(local_train_url)
+
+    args.manualSeed = random.randint(1, 10000)
+    print("Random Seed: ", args.manualSeed)
+    random.seed(args.manualSeed)
+    mindspore.set_seed(args.manualSeed)
+    dataset_sink_mode = False
+    context.set_context(mode=context.GRAPH_MODE, device_target="Ascend", device_id=args.device_id)
+
+    dataset_generator = ShapeNetDataset(
+        root=local_data_url,
+        classification=False,
+        class_choice=[args.class_choice])
+    test_dataset_generator = ShapeNetDataset(
+        root=local_data_url,
+        classification=False,
+        class_choice=[args.class_choice],
+        split='test',
+        data_augmentation=False)
+
+    test_dataloader = ds.GeneratorDataset(test_dataset_generator, ["point", "label"], shuffle=True)
+    test_dataset1 = test_dataloader.batch(args.batchSize)
+    num_classes = dataset_generator.num_seg_classes
+    classifier = PointNetDenseCls(k=num_classes, feature_transform=args.feature_transform)
+    classifier.set_train(False)
+    num_batch = math.ceil(len(dataset_generator) / args.batchSize)
+
+    test_net(test_dataset1, classifier, args.data_path, args.class_choice, args.model_path)
diff --git a/research/cv/pointnet/postprocess.py b/research/cv/pointnet/postprocess.py
new file mode 100644
index 0000000000000000000000000000000000000000..ecd4e93a42ad36793c0ae0ac56be052ba0ae84b5
--- /dev/null
+++ b/research/cv/pointnet/postprocess.py
@@ -0,0 +1,57 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""postprocess for 310 inference"""
+import glob
+import time
+import argparse
+import numpy as np
+
+parser = argparse.ArgumentParser(description="lenet preprocess data")
+parser.add_argument("--result_path", type=str, required=True, help="result path.")
+parser.add_argument("--label_path", type=str, required=True, help="label path.")
+args = parser.parse_args()
+num_classes = 4
+shape_ious = []
+file_list = []
+file_list1 = glob.glob(args.result_path+'/*')
+for i in range(len(file_list1)):
+    file_list.append(args.result_path+'/shapenet_data_bs1_%03d'%i+'.bin')
+for i, file_name in enumerate(file_list):
+    print("calaccuracy of ", file_name)
+    data = np.fromfile(file_name, dtype=np.float32)
+    label = np.load(args.label_path)
+    label = label[i]
+    start_time = time.time()
+    pred = data.reshape(1, 2500, -1)
+    pred_np = np.argmax(pred, axis=2)
+    target_np = label - 1
+
+    for shape_idx in range(target_np.shape[0]):
+        parts = range(num_classes)
+        part_ious = []
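+        # Per-part IoU on this shape: intersection / union of the predicted and
+        # ground-truth point sets; a part absent from both (U == 0) counts as 1.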
+        for part in parts:
+            I = np.sum(np.logical_and(pred_np[shape_idx] == part, target_np[shape_idx] == part))
+            U = np.sum(np.logical_or(pred_np[shape_idx] == part, target_np[shape_idx] == part))
+            if U == 0:
+                iou = 1
+            else:
+                iou = I / float(U)
+            part_ious.append(iou)
+        shape_ious.append(np.mean(part_ious))
+        print('='*50)
+        print(np.mean(part_ious))
+        print('='*50)
+
+print("Final Miou: {}".format(np.mean(shape_ious)))
diff --git a/research/cv/pointnet/preprocess.py b/research/cv/pointnet/preprocess.py
new file mode 100644
index 0000000000000000000000000000000000000000..8e71214ac77737b8395dca905340d929631c4422
--- /dev/null
+++ b/research/cv/pointnet/preprocess.py
@@ -0,0 +1,59 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""pre process for 310 inference"""
+import os
+import argparse
+from mindspore import context
+import mindspore.dataset as ds
+import numpy as np
+from src.dataset import ShapeNetDataset
+
+parser = argparse.ArgumentParser(description="lenet preprocess data")
+parser.add_argument("--dataset_path", type=str, default=
+                    '/home/pointnet/shapenetcore_partanno_segmentation_benchmark_v0', help="dataset path.")
+parser.add_argument("--output_path", type=str, default='./datapath_BS1/', help="output path.")
+parser.add_argument("--device_target", type=str, default='Ascend', help="output path.")
+parser.add_argument("--device_id", default=4, help="output path.")
+parser.add_argument('--class_choice', type=str, default='Chair', help="class_choice")
+parser.add_argument(
+    '--batchSize', type=int, default=1, help='input batch size')
+args = parser.parse_args()
+
+context.set_context(mode=context.PYNATIVE_MODE, device_target=args.device_target, device_id=args.device_id)
+if __name__ == '__main__':
+    dataset_generator = ShapeNetDataset(
+        root=args.dataset_path,
+        classification=False,
+        split='test',
+        class_choice=[args.class_choice])
+    dataset = ds.GeneratorDataset(dataset_generator, column_names=["point", "label"])
+    dataset = dataset.batch(args.batchSize)
+
+    data_path = os.path.join(args.output_path, '00_data')
+    if not os.path.exists(data_path):
+        os.makedirs(data_path)
+    label_list = []
+    for i, data in enumerate(dataset.create_dict_iterator()):
+        print(data['label'].shape)
+        file_name = 'shapenet_data_bs'+str(args.batchSize)+'_%03d'%i+'.bin'
+        file_path = os.path.join(data_path, file_name)
+        data['point'].asnumpy().tofile(file_path)
+
+        label_list.append(data['label'].asnumpy())
+        print('loading ', i)
+    print('begin saving label')
+    print(len(label_list))
+    np.save(args.output_path+'labels_ids.npy', np.array(label_list))
+    print('='*20, 'export bin file finished', '='*20)
diff --git a/research/cv/pointnet/requirements.txt b/research/cv/pointnet/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..55fec2d19d0d2e1be4e9226d07d51aea828cf232
--- /dev/null
+++ b/research/cv/pointnet/requirements.txt
@@ -0,0 +1 @@
+plyfile == 0.7.4
\ No newline at end of file
diff --git a/research/cv/pointnet/scripts/run_distribution.sh b/research/cv/pointnet/scripts/run_distribution.sh
new file mode 100644
index 0000000000000000000000000000000000000000..09b0235e4fb6128bf59f025631d8729743df7d05
--- /dev/null
+++ b/research/cv/pointnet/scripts/run_distribution.sh
@@ -0,0 +1,62 @@
+#!/bin/bash
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+# A simple usage example follows; more parameters can be set.
+
+if [ $# != 3 ]
+then
+    echo "Usage: sh run_distribution_ascend.sh [RANK_TABLE_FILE] [CKPTS_DIR] [DATA_PATH]"
+exit 1
+fi
+
+if [ ! -f $1 ]
+then
+    echo "error: RANK_TABLE_FILE=$1 is not a file"
+exit 1
+fi
+
+get_real_path(){
+  if [ "${1:0:1}" == "/" ]; then
+    echo "$1"
+  else
+    echo "$(realpath -m $PWD/$1)"
+  fi
+}
+
+ulimit -u unlimited
+export DEVICE_NUM=8
+export RANK_SIZE=8
+RANK_TABLE_FILE=$(realpath $1)
+export RANK_TABLE_FILE
+export CKPTS_DIR=$(get_real_path $2)
+export DATA_PATH=$(get_real_path $3)
+echo "RANK_TABLE_FILE=${RANK_TABLE_FILE}"
+
+export SERVER_ID=0
+rank_start=$((DEVICE_NUM * SERVER_ID))
+for((i=0; i<${DEVICE_NUM}; i++))
+do
+    export DEVICE_ID=$i
+    export RANK_ID=$((rank_start + i))
+    rm -rf ./train_parallel$i
+    mkdir ./train_parallel$i
+    cp -r ../src ./train_parallel$i
+    cp ../train.py ./train_parallel$i
+    echo "start training for rank $RANK_ID, device $DEVICE_ID"
+    cd ./train_parallel$i ||exit
+    env > env.log
+    python -u train.py --device_id=$i --train_url=$CKPTS_DIR --data_url=$DATA_PATH > log 2>&1 &
+    cd ..
+done
diff --git a/research/cv/pointnet/scripts/run_infer_310.sh b/research/cv/pointnet/scripts/run_infer_310.sh
new file mode 100644
index 0000000000000000000000000000000000000000..23c1ccf0001c78a3b489ec63849937c8b1c72dfd
--- /dev/null
+++ b/research/cv/pointnet/scripts/run_infer_310.sh
@@ -0,0 +1,130 @@
+#!/bin/bash
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+if [[ $# -lt 4 || $# -gt 5 ]]; then
+    echo "Usage: bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [LABEL_PATH] [DVPP] [DEVICE_ID]
+    DVPP is mandatory, and must choose from [DVPP|CPU], it's case-insensitive
+    DEVICE_ID is optional, it can be set by environment variable device_id, otherwise the value is zero"
+exit 1
+fi
+
+get_real_path(){
+    if [ "${1:0:1}" == "/" ]; then
+        echo "$1"
+    else
+        echo "$(realpath -m $PWD/$1)"
+    fi
+}
+model=$(get_real_path $1)
+data_path=$(get_real_path $2)
+label_path=$(get_real_path $3)
+DVPP=${4^^}
+
+device_id=0
+if [ $# == 5 ]; then
+    device_id=$5
+fi
+
+echo "mindir name: "$model
+echo "dataset path: "$data_path
+echo "label path: "$label_path
+echo "image process mode: "$DVPP
+echo "device id: "$device_id
+
+export ASCEND_HOME=/usr/local/Ascend/
+if [ -d ${ASCEND_HOME}/ascend-toolkit ]; then
+    export PATH=$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin:$ASCEND_HOME/ascend-toolkit/latest/atc/bin:$PATH
+    export LD_LIBRARY_PATH=/usr/local/lib:$ASCEND_HOME/ascend-toolkit/latest/atc/lib64:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/lib64:$ASCEND_HOME/driver/lib64:$ASCEND_HOME/add-ons:$LD_LIBRARY_PATH
+    export TBE_IMPL_PATH=$ASCEND_HOME/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe
+    export PYTHONPATH=${TBE_IMPL_PATH}:$ASCEND_HOME/ascend-toolkit/latest/fwkacllib/python/site-packages:$PYTHONPATH
+    export ASCEND_OPP_PATH=$ASCEND_HOME/ascend-toolkit/latest/opp
+else
+    export PATH=$ASCEND_HOME/atc/ccec_compiler/bin:$ASCEND_HOME/atc/bin:$PATH
+    export LD_LIBRARY_PATH=/usr/local/lib:$ASCEND_HOME/atc/lib64:$ASCEND_HOME/acllib/lib64:$ASCEND_HOME/driver/lib64:$ASCEND_HOME/add-ons:$LD_LIBRARY_PATH
+    export PYTHONPATH=$ASCEND_HOME/atc/python/site-packages:$PYTHONPATH
+    export ASCEND_OPP_PATH=$ASCEND_HOME/opp
+fi
+
+
+function preprocess_data()
+{
+    if [ -d preprocess_Result ]; then
+        rm -rf ./preprocess_Result
+    fi
+    mkdir preprocess_Result
+    python ../preprocess.py --dataset_path=$data_path --output_path=./preprocess_Result &> preprocess.log
+    data_path=./preprocess_Result
+}
+
+function compile_app()
+{
+    cd ../ascend310_infer || exit
+    bash build.sh &> build.log
+}
+
+function infer()
+{
+    cd - || exit
+    if [ -d result_Files ]; then
+        rm -rf ./result_Files
+    fi
+    if [ -d time_Result ]; then
+        rm -rf ./time_Result
+    fi
+    mkdir result_Files
+    mkdir time_Result
+    if [ "$DVPP" == "DVPP" ];then
+      ../ascend310_infer/out/main --mindir_path=$model --dataset_path=$data_path --device_id=$device_id --cpu_dvpp=$DVPP --aipp_path=../src/aipp.cfg --image_height=32 --image_width=32 &> infer.log
+    elif [ "$DVPP" == "CPU"  ]; then
+      ../ascend310_infer/out/main --mindir_path=$model --dataset_path=$data_path --cpu_dvpp=$DVPP --device_id=$device_id --image_height=32 --image_width=32 &> infer.log
+    elif [ "$DVPP" == "N" ];then
+      ../ascend310_infer/out/main --mindir_path=$model --dataset_path=$data_path --device_id=$device_id --cpu_dvpp=$DVPP --image_height=32 --image_width=32 &> infer.log
+    else
+      echo "image process mode must be in [DVPP|CPU|N]"
+      exit 1
+    fi
+}
+
+function cal_acc()
+{
+    python ../postprocess.py --result_path=./result_Files --label_path=$label_path &> acc.log
+}
+
+# preprocess_data
+echo "compiling app"
+compile_app
+if [ $? -ne 0 ]; then
+    echo "compile app code failed"
+    exit 1
+fi
+echo "successfully compiled app"
+echo "inferring"
+infer
+if [ $? -ne 0 ]; then
+    echo "execute inference failed"
+    exit 1
+fi
+echo "successfully inferred"
+echo "calculating acc"
+cal_acc
+if [ $? -ne 0 ]; then
+    echo "calculate accuracy failed"
+    exit 1
+fi
\ No newline at end of file
diff --git a/research/cv/pointnet/scripts/run_standalone_eval.sh b/research/cv/pointnet/scripts/run_standalone_eval.sh
new file mode 100644
index 0000000000000000000000000000000000000000..6162fdaed18b21e30fddb64e27189f752f68c5c9
--- /dev/null
+++ b/research/cv/pointnet/scripts/run_standalone_eval.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+# Copyright 2020 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+# A simple usage example follows; more parameters can be set.
+# script_self=$(readlink -f "$0")
+# self_path=$(dirname "${script_self}")
+DATA_PATH=$1
+MODEL_PATH=$2
+python -s ../eval.py --data_path=$DATA_PATH --device_target="Ascend" --model_path=$MODEL_PATH > log_eval 2>&1 &
diff --git a/research/cv/pointnet/scripts/run_standalone_train.sh b/research/cv/pointnet/scripts/run_standalone_train.sh
new file mode 100644
index 0000000000000000000000000000000000000000..628fcf43196f2ba4faa263304a6a0a46eb4220db
--- /dev/null
+++ b/research/cv/pointnet/scripts/run_standalone_train.sh
@@ -0,0 +1,21 @@
+#!/bin/bash
+# Copyright 2020 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+
+# A simple usage example follows; more parameters can be set.
+# script_self=$(readlink -f "$0")
+DATA_PATH=$1
+CKPT_PATH=$2
+python -s ../train.py --data_url=$DATA_PATH --device_target="Ascend" --train_url=$CKPT_PATH > log 2>&1 &
diff --git a/research/cv/pointnet/src/dataset.py b/research/cv/pointnet/src/dataset.py
new file mode 100644
index 0000000000000000000000000000000000000000..a85db9e5a2781e36ec6fd2548ac1c37404b1e821
--- /dev/null
+++ b/research/cv/pointnet/src/dataset.py
@@ -0,0 +1,201 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""dataset"""
+import os
+import os.path
+import json
+import numpy as np
+from tqdm import tqdm
+from plyfile import PlyData
+
+def get_segmentation_classes(root):
+    """get_segmentation_classes"""
+    catfile = os.path.join(root, 'synsetoffset2category.txt')
+    cat = {}
+    meta = {}
+
+    with open(catfile, 'r') as f:
+        for line in f:
+            ls = line.strip().split()
+            cat[ls[0]] = ls[1]
+
+    for item in cat:
+        dir_seg = os.path.join(root, cat[item], 'points_label')
+        dir_point = os.path.join(root, cat[item], 'points')
+        fns = sorted(os.listdir(dir_point))
+        meta[item] = []
+        for fn in fns:
+            token = (os.path.splitext(os.path.basename(fn))[0])
+            meta[item].append((os.path.join(dir_point, token + '.pts'), os.path.join(dir_seg, token + '.seg')))
+
+    with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), './misc/num_seg_classes.txt'), 'w') as f:
+        for item in cat:
+            datapath = []
+            num_seg_classes = 0
+            for fn in meta[item]:
+                datapath.append((item, fn[0], fn[1]))
+
+            for i in tqdm(range(len(datapath))):
+                l = len(np.unique(np.loadtxt(datapath[i][-1]).astype(np.uint8)))
+                if l > num_seg_classes:
+                    num_seg_classes = l
+
+            print("category {} num segmentation classes {}".format(item, num_seg_classes))
+            f.write("{} {}\n".format(item, num_seg_classes))
+
+
+def gen_modelnet_id(root):
+    classes = []
+    with open(os.path.join(root, 'train.txt'), 'r') as f:
+        for line in f:
+            classes.append(line.strip().split('/')[0])
+    classes = np.unique(classes)
+    with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../misc/modelnet_id.txt'), 'w') as f:
+        for i in range(len(classes)):
+            f.write('{} {}\n'.format(classes[i], i))
+
+class ShapeNetDataset():
+    """ShapeNetDataset"""
+    def __init__(self,
+                 root,
+                 npoints=2500,
+                 classification=False,
+                 class_choice=None,
+                 split='train',
+                 data_augmentation=True):
+        super(ShapeNetDataset, self).__init__()
+        self.npoints = npoints
+        self.root = root
+        self.catfile = os.path.join(self.root, 'synsetoffset2category.txt')
+        self.cat = {}
+        self.data_augmentation = data_augmentation
+        self.classification = classification
+        self.seg_classes = {}
+
+        with open(self.catfile, 'r') as f:
+            for line in f:
+                ls = line.strip().split()
+                self.cat[ls[0]] = ls[1]
+        if class_choice is not None:
+            self.cat = {k: v for k, v in self.cat.items() if k in class_choice}
+
+        self.id2cat = {v: k for k, v in self.cat.items()}
+
+        self.meta = {}
+        splitfile = os.path.join(self.root, 'train_test_split', 'shuffled_{}_file_list.json'.format(split))
+        filelist = json.load(open(splitfile, 'r'))
+        for item in self.cat:
+            self.meta[item] = []
+
+        for file in filelist:
+            _, category, uuid = file.split('/')
+            if category in self.cat.values():
+                self.meta[self.id2cat[category]].append((os.path.join(self.root, category, 'points', uuid + '.pts'),
+                                                         os.path.join(self.root, category, 'points_label',
+                                                                      uuid + '.seg')))
+
+        self.datapath = []
+        for item in self.cat:
+            for fn in self.meta[item]:
+                self.datapath.append((item, fn[0], fn[1]))
+
+        self.classes = dict(zip(sorted(self.cat), range(len(self.cat))))
+        with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), './misc/num_seg_classes.txt'), 'r') as f:
+            for line in f:
+                ls = line.strip().split()
+                self.seg_classes[ls[0]] = int(ls[1])
+        self.num_seg_classes = self.seg_classes[list(self.cat.keys())[0]]
+
+    def __getitem__(self, index):
+        fn = self.datapath[index]
+        cls = self.classes[self.datapath[index][0]]
+        point_set = np.loadtxt(fn[1]).astype(np.float32)
+        seg = np.loadtxt(fn[2]).astype(np.int64)
+
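+        # Resample a fixed number of points (npoints, with replacement) so every sample has the same shape.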
+        choice = np.random.choice(len(seg), self.npoints, replace=True)
+        point_set = point_set[choice, :]
+
+        point_set = point_set - np.expand_dims(np.mean(point_set, axis=0), 0)  # center
+        dist = np.max(np.sqrt(np.sum(point_set ** 2, axis=1)), 0)
+        point_set = point_set / dist  # scale
+
+        if self.data_augmentation:
+            theta = np.random.uniform(0, np.pi * 2)
+            rotation_matrix = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
+            point_set[:, [0, 2]] = point_set[:, [0, 2]].dot(rotation_matrix)  # random rotation
+            point_set += np.random.normal(0, 0.02, size=point_set.shape)  # random jitter
+
+        seg = seg[choice]
+        point_set = np.array(point_set)
+        seg = np.array(seg)
+        cls = np.array(cls).astype(np.int64)
+        if self.classification:
+            return point_set.transpose(1, 0), cls
+        return point_set.transpose(1, 0), seg
+
+    def __len__(self):
+        return len(self.datapath)
+
+class ModelNetDataset:
+    """ModelNetDataset"""
+    def __init__(self,
+                 root,
+                 npoints=2500,
+                 split='train',
+                 data_augmentation=True):
+        self.npoints = npoints
+        self.root = root
+        self.split = split
+        self.data_augmentation = data_augmentation
+        self.fns = []
+        with open(os.path.join(root, '{}.txt'.format(self.split)), 'r') as f:
+            for line in f:
+                self.fns.append(line.strip())
+
+        self.cat = {}
+        with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../misc/modelnet_id.txt'), 'r') as f:
+            for line in f:
+                ls = line.strip().split()
+                self.cat[ls[0]] = int(ls[1])
+
+        print(self.cat)
+        self.classes = list(self.cat.keys())
+
+    def __getitem__(self, index):
+        """getitem"""
+        fn = self.fns[index]
+        cls = self.cat[fn.split('/')[0]]
+        with open(os.path.join(self.root, fn), 'rb') as f:
+            plydata = PlyData.read(f)
+        pts = np.vstack([plydata['vertex']['x'], plydata['vertex']['y'], plydata['vertex']['z']]).T
+        choice = np.random.choice(len(pts), self.npoints, replace=True)
+        point_set = pts[choice, :]
+
+        point_set = point_set - np.expand_dims(np.mean(point_set, axis=0), 0)  # center
+        dist = np.max(np.sqrt(np.sum(point_set ** 2, axis=1)), 0)
+        point_set = point_set / dist  # scale
+
+        if self.data_augmentation:
+            theta = np.random.uniform(0, np.pi * 2)
+            rotation_matrix = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
+            point_set[:, [0, 2]] = point_set[:, [0, 2]].dot(rotation_matrix)  # random rotation
+            point_set += np.random.normal(0, 0.02, size=point_set.shape)  # random jitter
+
+        point_set = point_set.astype(np.float32)
+        cls = np.array([cls]).astype(np.int64)
+        return point_set.transpose(1, 0), cls
+
+    def __len__(self):
+        return len(self.fns)
diff --git a/research/cv/pointnet/src/export.py b/research/cv/pointnet/src/export.py
new file mode 100644
index 0000000000000000000000000000000000000000..05c9d521322485cffdf1dda5f61c99338ed0fb9a
--- /dev/null
+++ b/research/cv/pointnet/src/export.py
@@ -0,0 +1,39 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""export"""
+import os
+import argparse
+import numpy as np
+from mindspore import Tensor, export, load_checkpoint, context
+from src.network import PointNetDenseCls
+parser = argparse.ArgumentParser(description='MindSpore Pointnet Segmentation')
+parser.add_argument(
+    '--batchSize', type=int, default=32, help='input batch size')
+parser.add_argument('--model', type=str, default='', help='model path')
+parser.add_argument('--device_id', type=int, default=4, help='device id')
+parser.add_argument('--device_target', default='Ascend', help='device id')
+parser.add_argument('--file_format', type=str, default='MINDIR', help="export file format")
+parser.add_argument('--feature_transform', action='store_true', help="use feature transform")
+
+args = parser.parse_args()
+context.set_context(mode=context.PYNATIVE_MODE, device_target=args.device_target)
+num_classes = 4
+classifier = PointNetDenseCls(k=num_classes, feature_transform=args.feature_transform)
+if not os.path.exists('./mindir'):
+    os.mkdir('./mindir')
+load_checkpoint(args.model, net=classifier)
+input_data = np.random.uniform(0.0, 1.0, size=[1, 3, 2500]).astype(np.float32)
+export(classifier, Tensor(input_data), file_name='./mindir/pointnet', file_format=args.file_format)
+print("successfully export model")
diff --git a/research/cv/pointnet/src/loss.py b/research/cv/pointnet/src/loss.py
new file mode 100644
index 0000000000000000000000000000000000000000..13a6a61bd7d744aea04c48599e50b23bc6a66469
--- /dev/null
+++ b/research/cv/pointnet/src/loss.py
@@ -0,0 +1,57 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""Custom losses."""
+import mindspore.nn as nn
+import mindspore.ops as ops
+import mindspore.ops.operations as P
+from mindspore.nn.loss.loss import LossBase
+from mindspore.ops import functional as F
+__all__ = ['PointnetLoss']
+
+
+class PointnetLoss(nn.Cell):
+    """Cross Entropy Loss for ICNet"""
+
+    def __init__(self, feature_transform, num_class=2):
+        super(PointnetLoss, self).__init__()
+
+        self.base_loss = NLLLoss()
+        self.reshape = ops.Reshape()
+        self.num_class = num_class
+        self.trans_feat_loss = 0
+        self.feature_transform = feature_transform
+
+    def construct(self, *inputs):
+        """construct"""
+        preds, target = inputs
+        preds = self.reshape(preds, (-1, self.num_class))
+        target = self.reshape(target, (-1, 1))[:, 0] - 1
+        target = target.astype('int32')
+        loss = self.base_loss(preds, target)
+        return loss
+
+class NLLLoss(LossBase):
+    '''
+       NLLLoss function
+    '''
+    def __init__(self, reduction='mean'):
+        super(NLLLoss, self).__init__(reduction)
+        self.one_hot = P.OneHot()
+        self.reduce_sum = P.ReduceSum()
+
+    def construct(self, logits, label):
+        label_one_hot = self.one_hot(label, F.shape(logits)[-1], F.scalar_to_array(1.0), F.scalar_to_array(0.0))
+        loss = self.reduce_sum(-1.0 * logits * label_one_hot, (1,))
+        return self.get_loss(loss)
diff --git a/research/cv/pointnet/src/misc/modelnet_id.txt b/research/cv/pointnet/src/misc/modelnet_id.txt
new file mode 100644
index 0000000000000000000000000000000000000000..5a21dbb21a100e8e73a55691b5db7f59535312db
--- /dev/null
+++ b/research/cv/pointnet/src/misc/modelnet_id.txt
@@ -0,0 +1,40 @@
+airplane 0
+bathtub 1
+bed 2
+bench 3
+bookshelf 4
+bottle 5
+bowl 6
+car 7
+chair 8
+cone 9
+cup 10
+curtain 11
+desk 12
+door 13
+dresser 14
+flower_pot 15
+glass_box 16
+guitar 17
+keyboard 18
+lamp 19
+laptop 20
+mantel 21
+monitor 22
+night_stand 23
+person 24
+piano 25
+plant 26
+radio 27
+range_hood 28
+sink 29
+sofa 30
+stairs 31
+stool 32
+table 33
+tent 34
+toilet 35
+tv_stand 36
+vase 37
+wardrobe 38
+xbox 39
diff --git a/research/cv/pointnet/src/misc/num_seg_classes.txt b/research/cv/pointnet/src/misc/num_seg_classes.txt
new file mode 100644
index 0000000000000000000000000000000000000000..b871ced7ff6a5328bff2a15d330f895385b61823
--- /dev/null
+++ b/research/cv/pointnet/src/misc/num_seg_classes.txt
@@ -0,0 +1,16 @@
+Airplane 4
+Bag 2
+Cap 2
+Car 4
+Chair 4
+Earphone 3
+Guitar 3
+Knife 2
+Lamp 4
+Laptop 2
+Motorbike 6
+Mug 2
+Pistol 3
+Rocket 3
+Skateboard 3
+Table 3
diff --git a/research/cv/pointnet/src/misc/show3d.png b/research/cv/pointnet/src/misc/show3d.png
new file mode 100644
index 0000000000000000000000000000000000000000..69f5b050c656c727f729643c496726c68e8aa23a
Binary files /dev/null and b/research/cv/pointnet/src/misc/show3d.png differ
diff --git a/research/cv/pointnet/src/network.py b/research/cv/pointnet/src/network.py
new file mode 100644
index 0000000000000000000000000000000000000000..4619b93189fc72ebdfd3ff8813da808e88dcddbd
--- /dev/null
+++ b/research/cv/pointnet/src/network.py
@@ -0,0 +1,273 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""model"""
+import numpy as np
+import mindspore
+from mindspore import Tensor
+import mindspore.nn as nn
+import mindspore.ops as ops
+
+class STN3d(nn.Cell):
+    """STN3d"""
+    def __init__(self):
+        super(STN3d, self).__init__()
+        self.conv1 = mindspore.nn.Conv1d(3, 64, 1, has_bias=True, bias_init='normal')  # in_channels, out_channels, kernel_size
+        self.conv2 = mindspore.nn.Conv1d(64, 128, 1, has_bias=True, bias_init='normal')
+        self.conv3 = mindspore.nn.Conv1d(128, 1024, 1, has_bias=True, bias_init='normal')
+        self.fc1 = nn.Dense(1024, 512)  # in_channels, out_channels
+        self.fc2 = nn.Dense(512, 256)
+        self.fc3 = nn.Dense(256, 9)
+        self.relu = ops.ReLU()
+        self.reshape = ops.Reshape()
+        self.tile = ops.Tile()
+        self.bn1 = nn.BatchNorm2d(64)  # num_features
+        self.bn2 = nn.BatchNorm2d(128)
+        self.bn3 = nn.BatchNorm2d(1024)
+        self.bn4 = nn.BatchNorm1d(512)
+        self.bn5 = nn.BatchNorm1d(256)
+        self.argmaxwithvalue = ops.ArgMaxWithValue(axis=2, keep_dims=True)
+        self.s1 = Tensor([1, 0, 0, 0, 1, 0, 0, 0, 1], mindspore.float32)
+    def construct(self, x):
+        """construct"""
+        batchsize = x.shape[0]
+
+        x = self.conv1(x)
+        x = ops.ExpandDims()(x, -1)
+        x = self.bn1(x)
+        x = ops.Squeeze(-1)(x)
+        x = self.relu(x)
+        x = self.relu(ops.Squeeze(-1)(self.bn2(ops.ExpandDims()(self.conv2(x), -1))))
+        x = self.relu(ops.Squeeze(-1)(self.bn3(ops.ExpandDims()(self.conv3(x), -1))))
+        x = self.argmaxwithvalue(x)[1]
+
+        x = self.reshape(x, (-1, 1024))
+
+        x = self.relu(self.bn4(self.fc1(x)))
+        x = self.relu(self.bn5(self.fc2(x)))
+        x = self.fc3(x)
+        multiples = (batchsize, 1)
+
+        iden = self.tile(self.s1.view(1, 9), multiples)
+
+        x = x + iden
+        x = self.reshape(x, (-1, 3, 3))
+        return x
+
+
+class STNkd(nn.Cell):
+    """STNkd"""
+    def __init__(self, k=64):
+        super(STNkd, self).__init__()
+        self.conv1 = mindspore.nn.Conv1d(k, 64, 1, has_bias=True, bias_init='normal')
+        self.conv2 = mindspore.nn.Conv1d(64, 128, 1, has_bias=True, bias_init='normal')
+        self.conv3 = mindspore.nn.Conv1d(128, 1024, 1, has_bias=True, bias_init='normal')
+        self.fc1 = nn.Dense(1024, 512)
+        self.fc2 = nn.Dense(512, 256)
+        self.fc3 = nn.Dense(256, k * k)
+        self.relu = ops.ReLU()
+        self.flatten = nn.Flatten()
+
+        self.bn1 = nn.BatchNorm2d(64)
+        self.bn2 = nn.BatchNorm2d(128)
+        self.bn3 = nn.BatchNorm2d(1024)
+        self.bn4 = nn.BatchNorm1d(512)
+        self.bn5 = nn.BatchNorm1d(256)
+
+        self.k = k
+
+    def construct(self, x):
+        """construct"""
+        batchsize = x.shape[0]
+        x = self.relu(ops.Squeeze(-1)(self.bn1(ops.ExpandDims()(self.conv1(x), -1))))
+        x = self.relu(ops.Squeeze(-1)(self.bn2(ops.ExpandDims()(self.conv2(x), -1))))
+        x = self.relu(ops.Squeeze(-1)(self.bn3(ops.ExpandDims()(self.conv3(x), -1))))
+        x = ops.ExpandDims()(Tensor(np.max(x.asnumpy(), axis=2)), -1)
+        reshape = ops.Reshape()
+        x = reshape(x, (-1, 1024))
+
+        x = self.relu(self.bn4(self.fc1(x)))
+        x = self.relu(self.bn5(self.fc2(x)))
+        x = self.fc3(x)
+
+        tile = ops.Tile()
+        multiples = (batchsize, 1)
+
+        iden = tile(Tensor(np.eye(self.k).flatten().astype(np.float32)).view(1, self.k * self.k), multiples)
+        x = x + iden
+        x = reshape(x, (-1, self.k, self.k))
+        return x
+
+
+class PointNetfeat(nn.Cell):
+    """PointNetfeat"""
+    def __init__(self, global_feat=True, feature_transform=False):
+        super(PointNetfeat, self).__init__()
+        self.stn = STN3d()
+        self.conv1 = mindspore.nn.Conv1d(3, 64, 1, has_bias=True, bias_init='normal')
+        self.conv2 = mindspore.nn.Conv1d(64, 128, 1, has_bias=True, bias_init='normal')
+        self.conv3 = mindspore.nn.Conv1d(128, 1024, 1, has_bias=True, bias_init='normal')
+        self.bn1 = nn.BatchNorm2d(64)
+        self.bn2 = nn.BatchNorm2d(128)
+        self.bn3 = nn.BatchNorm2d(1024)
+        self.global_feat = global_feat
+        self.feature_transform = feature_transform
+        self.relu = ops.ReLU()
+        self.cat = ops.Concat(axis=1)
+        self.argmaxwithvalue = ops.ArgMaxWithValue(axis=2, keep_dims=True)
+        self.squeeze = ops.Squeeze(-1)
+        self.expanddims = ops.ExpandDims()
+        self.transpose = ops.Transpose()
+        self.batmatmul = ops.BatchMatMul()
+        self.reshape = ops.Reshape()
+        self.tile = ops.Tile()
+        if self.feature_transform:
+            self.fstn = STNkd(k=64)
+
+    def construct(self, x):
+        """construct"""
+        n_pts = x.shape[2]
+        transf = self.stn(x)
+
+        x = self.transpose(x, (0, 2, 1))
+
+        x = self.batmatmul(x, transf)
+        x = self.transpose(x, (0, 2, 1))
+        x = self.relu(ops.Squeeze(-1)(self.bn1(ops.ExpandDims()(self.conv1(x), -1))))
+
+        if self.feature_transform:
+            trans_feat = self.fstn(x)
+            x = self.transpose(x, (0, 2, 1))
+            x = self.batmatmul(x, transf)
+            x = self.transpose(x, (0, 2, 1))
+        else:
+            trans_feat = None
+
+        pointfeats = x
+        x = self.relu(
+            self.squeeze(self.bn2(self.expanddims(self.conv2(x), -1))))
+        x = self.squeeze(self.bn3(self.expanddims(self.conv3(x), -1)))
+        x = self.argmaxwithvalue(x)[1]
+
+
+        x = self.reshape(x, (-1, 1024))
+        multiples = (1, 1, n_pts)
+        x = self.tile(self.reshape(x, (-1, 1024, 1)), multiples)
+
+        return self.cat((x, pointfeats)), transf, trans_feat
+
+class PointNetCls(nn.Cell):
+    """PointNetCls"""
+    def __init__(self, k=2, feature_transform=False):
+        super(PointNetCls, self).__init__()
+        self.feature_transform = feature_transform
+        self.feat = PointNetfeat(global_feat=True, feature_transform=feature_transform)
+        self.fc1 = nn.Dense(1024, 512)
+        self.fc2 = nn.Dense(512, 256)
+        self.fc3 = nn.Dense(256, k)
+        self.dropout = nn.Dropout(keep_prob=0.7)
+        self.bn1 = nn.BatchNorm1d(512)
+        self.bn2 = nn.BatchNorm1d(256)
+        self.relu = ops.ReLU()
+        self.logsoftmax = nn.LogSoftmax(axis=1)
+
+    def construct(self, x):
+        """construct"""
+        x, transf, trans_feat = self.feat(x)
+        x = self.relu(self.bn1(self.fc1(x)))
+        x = self.relu(self.bn2(self.dropout(self.fc2(x))))
+        x = self.fc3(x)
+        return self.logsoftmax(x), transf, trans_feat
+
+class PointNetDenseCls(nn.Cell):
+    """PointNetDenseCls"""
+    def __init__(self, k=2, feature_transform=False):
+        super(PointNetDenseCls, self).__init__()
+        self.k = k
+        self.feature_transform = feature_transform
+        self.feat = PointNetfeat(global_feat=False, feature_transform=feature_transform)
+        self.conv1 = mindspore.nn.Conv1d(1088, 512, 1, has_bias=True, bias_init='normal')
+        self.conv2 = mindspore.nn.Conv1d(512, 256, 1, has_bias=True, bias_init='normal')
+        self.conv3 = mindspore.nn.Conv1d(256, 128, 1, has_bias=True, bias_init='normal')
+        self.conv4 = mindspore.nn.Conv1d(128, self.k, 1, has_bias=True, bias_init='normal')
+        self.bn1 = nn.BatchNorm2d(512)
+        self.bn2 = nn.BatchNorm2d(256)
+        self.bn3 = nn.BatchNorm2d(128)
+        self.logsoftmax = nn.LogSoftmax(axis=-1)
+        self.squeeze = ops.Squeeze(-1)
+        self.expanddims = ops.ExpandDims()
+        self.relu = ops.ReLU()
+        self.train = True
+
+    def construct(self, x):
+        """construct"""
+        batchsize = x.shape[0]
+        n_pts = x.shape[2]
+        x, _, _ = self.feat(x)
+        x = self.relu(ops.Squeeze(-1)(self.bn1(ops.ExpandDims()(self.conv1(x), -1))))
+        x = self.relu(ops.Squeeze(-1)(self.bn2(ops.ExpandDims()(self.conv2(x), -1))))
+        x = self.relu(ops.Squeeze(-1)(self.bn3(ops.ExpandDims()(self.conv3(x), -1))))
+        x = self.conv4(x)
+        transpose = ops.Transpose()
+        x = transpose(x, (0, 2, 1))
+        x = self.logsoftmax(x.view(-1, self.k))
+        x = x.view(batchsize, n_pts, self.k)
+
+        return x
+
+
+
+def feature_transform_regularizer(_trans):
+    """feature_transform_regularizer"""
+    d = _trans.shape[1]
+
+    eye = ops.Eye()
+    I = eye(d, d, mindspore.float32)[None, :, :]
+    norm = nn.Norm(axis=(1, 2))
+    transpose = ops.Transpose()
+    reduce = ops.ReduceMean()
+    loss = reduce(norm(np.matmul(_trans.asnumpy(), transpose(_trans, (0, 2, 1).asnumpy()) - I)))
+    return loss
+
+if __name__ == '__main__':
+
+    shape1 = (32, 3, 2500)
+    uniformreal = ops.UniformReal()
+    sim_data = uniformreal(shape1)
+    trans = STN3d()
+    out = trans(sim_data)
+    print('stn', out.shape)
+
+    shape2 = (32, 64, 2500)
+    sim_data_64d = uniformreal(shape2)
+    trans = STNkd(k=64)
+    out = trans(sim_data_64d)
+    print('stn64d', out.shape)
+
+    pointfeat = PointNetfeat(global_feat=True)
+    out, _, _ = pointfeat(sim_data)
+    print('global feat', out.shape)
+
+    pointfeat = PointNetfeat(global_feat=False)
+    out, _, _ = pointfeat(sim_data)
+    print('point feat', out.shape)
+
+    cls = PointNetCls(k=5)
+    out, _, _ = cls(sim_data)
+    print('class', out.shape)
+
+    seg = PointNetDenseCls(k=3)
+    out, _, _ = seg(sim_data)
+    print('seg', out.shape)
+    
\ No newline at end of file
diff --git a/research/cv/pointnet/src/preprocess.py b/research/cv/pointnet/src/preprocess.py
new file mode 100644
index 0000000000000000000000000000000000000000..e44cbb1a4dd22d73d13767b02b79e3ca56d3a366
--- /dev/null
+++ b/research/cv/pointnet/src/preprocess.py
@@ -0,0 +1,60 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""pre process for 310 inference"""
+import os
+import argparse
+from mindspore import context
+import mindspore.dataset as ds
+import numpy as np
+from src.dataset import ShapeNetDataset
+
+batch_size = 1
+parser = argparse.ArgumentParser(description="lenet preprocess data")
+parser.add_argument("--dataset_path", type=str, default=
+                    '/home/pointnet/shapenetcore_partanno_segmentation_benchmark_v0', help="dataset path.")
+parser.add_argument("--output_path", type=str, default='../datapath_BS1/', help="output path.")
+parser.add_argument("--device_target", type=str, default='Ascend', help="output path.")
+parser.add_argument("--device_id", default=4, help="output path.")
+parser.add_argument('--class_choice', type=str, default='Chair', help="class_choice")
+parser.add_argument(
+    '--batchSize', type=int, default=1, help='input batch size')
+args = parser.parse_args()
+
+context.set_context(mode=context.PYNATIVE_MODE, device_target="Ascend", device_id=args.device_id)
+if __name__ == '__main__':
+    dataset_generator = ShapeNetDataset(
+        root=args.dataset_path,
+        classification=False,
+        split='test',
+        class_choice=[args.class_choice])
+    dataset = ds.GeneratorDataset(dataset_generator, column_names=["point", "label"])
+    dataset = dataset.batch(args.batchSize)
+
+    data_path = os.path.join(args.output_path, '00_data')
+    if not os.path.exists(data_path):
+        os.makedirs(data_path)
+    label_list = []
+    for i, data in enumerate(dataset.create_dict_iterator()):
+        print(data['label'].shape)
+        file_name = 'shapenet_data_bs'+str(args.batchSize)+'_%03d'%i+'.bin'
+        file_path = os.path.join(data_path, file_name)
+        data['point'].asnumpy().tofile(file_path)
+
+        label_list.append(data['label'].asnumpy())
+        print('loading ', i)
+    print('begin saving label')
+    print(len(label_list))
+    np.save(args.output_path+'labels_ids.npy', np.array(label_list))
+    print('='*20, 'export bin file finished', '='*20)
diff --git a/research/cv/pointnet/train.py b/research/cv/pointnet/train.py
new file mode 100644
index 0000000000000000000000000000000000000000..04fc7e9759b28d96a2a913ba073fd9927b5426b6
--- /dev/null
+++ b/research/cv/pointnet/train.py
@@ -0,0 +1,210 @@
+# Copyright 2021 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+"""train model"""
+import argparse
+import os
+import random
+import time
+import math
+import numpy as np
+import mindspore
+import mindspore.nn as nn
+from mindspore import context
+from mindspore.context import ParallelMode
+import mindspore.dataset as ds
+from mindspore import save_checkpoint
+import mindspore.ops as ops
+from mindspore.communication.management import init, get_rank
+from src.dataset import ShapeNetDataset
+from src.network import PointNetDenseCls
+from src.loss import PointnetLoss
+
+manualSeed = 2
+random.seed(manualSeed)
+np.random.seed(manualSeed)
+mindspore.set_seed(manualSeed)
+
+parser = argparse.ArgumentParser(description='MindSpore Pointnet Segmentation')
+parser.add_argument(
+    '--batchSize', type=int, default=64, help='input batch size')
+parser.add_argument(
+    '--nepoch', type=int, default=25, help='number of epochs to train for')
+parser.add_argument('--model', type=str, default='', help='model path')
+parser.add_argument('--device_id', type=int, default=5, help='device id')
+parser.add_argument('--learning_rate', type=float, default=0.0005, help='device id')
+parser.add_argument('--device_target', default='Ascend', help='device target')
+parser.add_argument('--data_url', type=str, default='/home/pointnet/shapenetcore_partanno_segmentation_benchmark_v0'
+                    , help="dataset path")
+parser.add_argument('--train_url', type=str, default='./ckpts'
+                    , help="ckpts path")
+parser.add_argument('--class_choice', type=str, default='Chair', help="class_choice")
+parser.add_argument('--feature_transform', action='store_true', help="use feature transform")
+parser.add_argument('--enable_modelarts', default=False, help="use feature transform")
+
+args = parser.parse_args()
+
+reshape = ops.Reshape()
+print(args)
+
+
+def train_model(_net_train, network, _dataset, _test_dataset, _num_classes):
+    """train_model"""
+    print('loading data')
+    print(time.strftime("%Y-%m-%d  %H:%M:%S", time.localtime()))
+
+    steps_per_epoch = _dataset.get_dataset_size() - 1
+    print((time.strftime("%Y-%m-%d  %H:%M:%S", time.localtime())), 'dataset output shape', _dataset.output_shapes())
+    print("============== Starting Training ==============")
+    best_accuracy = 0
+    save_time = 0
+
+    for epoch in range(1, args.nepoch + 1):
+        test_dataset_iterator = _test_dataset.create_dict_iterator()
+        next(test_dataset_iterator)
+        valid_data = next(test_dataset_iterator)
+        for batch_id, data in enumerate(_dataset.create_dict_iterator()):
+            t_0 = time.time()
+            points = data['data']
+            label = data['label']
+            network.set_train(True)
+
+            pred = network(points)
+            pred = ops.Reshape()(pred, (-1, _num_classes))
+            pred_choice = ops.Argmax(axis=1, output_type=mindspore.int32)(pred)
+
+            pred_np = pred_choice.asnumpy()
+            target = ops.Reshape()(label, (-1, 1))[:, 0] - 1
+            target_np = target.asnumpy()
+            correct = np.equal(pred_np, target_np).sum()
+            loss = _net_train(points, label)
+            print('Epoch : %d/%d  episode : %d/%d   Loss : %.4f  Accuracy : %f step_time: %.4f' %
+                  (epoch, args.nepoch, batch_id, steps_per_epoch, np.mean(loss.asnumpy())
+                   , correct.item() / float(args.batchSize * 2500), (time.time() - t_0)))
+            if batch_id % 9 == 0:
+                data = valid_data
+                points, label = data['point'], data['label']
+                network.set_train(False)
+                pred = network(points)
+
+                pred = reshape(pred, (-1, _num_classes))
+                pred_choice = ops.Argmax(axis=1, output_type=mindspore.int32)(pred)
+                pred_np = pred_choice.asnumpy()
+                target = reshape(label, (-1, 1))
+                target = target[:, 0] - 1
+                target_np = target.asnumpy()
+                loss = net_loss(pred, label)
+                correct = np.equal(pred_np, target_np).sum()
+                accuracy = correct.item() / float(args.batchSize * 2500)
+                print('[%d: %d/%d] %s  loss: %f accuracy: %.4f  best_accuracy: %f' %
+                      (epoch, batch_id, steps_per_epoch, blue('test'), np.mean(loss.asnumpy())
+                       , accuracy, best_accuracy))
+                if accuracy > best_accuracy or accuracy > 0.93:
+                    save_time += 1
+                    if accuracy > best_accuracy:
+                        best_accuracy = accuracy
+                    save_checkpoint(network, os.path.join(local_train_url
+                                                          , f"pointnet_network_epoch_{save_time}.ckpt"))
+
+                    if args.enable_modelarts:
+                        mox.file.copy_parallel(src_url=local_train_url, dst_url=args.train_url)
+                    print(blue('save best model for epoch %d  accuracy : %f' % (epoch, accuracy)))
+
+
+if __name__ == "__main__":
+    blue = lambda x: '\033[94m' + x + '\033[0m'
+    local_data_url = args.data_url
+    local_train_url = args.train_url
+    device_num = int(os.getenv("RANK_SIZE", "1"))
+    shard_id = None
+    num_shards = None
+    if args.enable_modelarts:
+        device_id = int(os.getenv("DEVICE_ID"))
+        import moxing as mox
+
+        local_data_url = './cache/data'
+        local_train_url = './cache/ckpt'
+        device_target = args.device_target
+        num_shards = int(os.getenv("RANK_SIZE"))
+        shard_id = int(os.getenv("DEVICE_ID"))
+        context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target)
+        context.set_context(save_graphs=False)
+        if device_target == "Ascend":
+            context.set_context(device_id=device_id)
+            if device_num > 1:
+                args.learning_rate *= 2
+                context.reset_auto_parallel_context()
+                context.set_auto_parallel_context(device_num=device_num, parallel_mode=ParallelMode.DATA_PARALLEL,
+                                                  gradients_mean=True)
+                init()
+                local_data_url = os.path.join(local_data_url, str(device_id))
+                local_train_url = os.path.join(local_train_url, "_" + str(get_rank()))
+        else:
+            raise ValueError("Unsupported platform.")
+        import moxing as mox
+
+        mox.file.copy_parallel(src_url=args.data_url, dst_url=local_data_url)
+    else:
+        # run on the local server
+        context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target, device_id=args.device_id)
+        context.set_context(save_graphs=False)
+        if device_num > 1:
+
+            args.learning_rate = args.learning_rate * 2
+            context.reset_auto_parallel_context()
+            context.set_auto_parallel_context(device_num=device_num, parallel_mode=ParallelMode.DATA_PARALLEL,
+                                              gradients_mean=True)
+            init()
+
+    if not os.path.exists(local_train_url):
+        os.makedirs(local_train_url)
+
+    dataset_sink_mode = False
+
+    dataset_generator = ShapeNetDataset(
+        root=local_data_url,
+        classification=False,
+        class_choice=[args.class_choice])
+    test_dataset_generator = ShapeNetDataset(
+        root=local_data_url,
+        classification=False,
+        class_choice=[args.class_choice],
+        split='test',
+        data_augmentation=False)
+
+    dataset = ds.GeneratorDataset(dataset_generator, column_names=["data", "label"]
+                                  , shuffle=True, num_shards=num_shards, shard_id=shard_id)
+    dataset = dataset.batch(args.batchSize, drop_remainder=True)
+
+    test_dataset = ds.GeneratorDataset(test_dataset_generator, ["point", "label"], shuffle=True
+                                       , num_shards=num_shards, shard_id=shard_id)
+    test_dataset = test_dataset.batch(args.batchSize, drop_remainder=True)
+
+    num_classes = dataset_generator.num_seg_classes
+    classifier = PointNetDenseCls(k=num_classes, feature_transform=args.feature_transform)
+    classifier.set_train(True)
+
+    num_batch = math.ceil(len(dataset_generator) / args.batchSize)
+
+    milestone = list(range(80, 20000, 80))
+    lr_rate = [args.learning_rate * 0.5 ** x for x in range(249)]
+    learning_rates = nn.piecewise_constant_lr(milestone, lr_rate)
+    optim = nn.Adam(params=classifier.trainable_params(), learning_rate=learning_rates
+                    , beta1=0.9, beta2=0.999, loss_scale=1024)
+    net_loss = PointnetLoss(num_class=num_classes, feature_transform=args.feature_transform)
+    net_with_loss = nn.WithLossCell(classifier, net_loss)
+    net_train = nn.TrainOneStepCell(net_with_loss, optim, sens=1024)
+
+    train_model(_net_train=net_train, network=classifier, _dataset=dataset
+                , _test_dataset=test_dataset, _num_classes=num_classes)