diff --git a/official/cv/essay-recogination/README_CN.md b/official/cv/essay-recogination/README_CN.md
index 685ea2a4209d3365c0a7d43a148598ceb2865014..b2c8633cb1e15676814fa32ca688617611f9d8a4 100644
--- a/official/cv/essay-recogination/README_CN.md
+++ b/official/cv/essay-recogination/README_CN.md
@@ -3,15 +3,13 @@
 <!-- TOC -->
 
 - [目录](#目录)
-    - [Eassay-Recognition描述](#essay-recognition描述)
-    - [模型架构](#模型架构)
-    - [数据集](#数据集)
-    - [环境要求](#环境要求)
+    - [Eassay-Recognition描述](#eassay-recognition描述)
+- [模型架构](#模型架构)
+- [数据集](#数据集)
+- [环境要求](#环境要求)
 - [快速入门](#快速入门)
 - [脚本说明](#脚本说明)
     - [脚本及样例代码](#脚本及样例代码)
-    - [脚本参数](#脚本参数)
-        - [训练脚本参数](#训练脚本参数)
     - [参数配置](#参数配置)
 - [训练过程](#训练过程)
     - [训练](#训练)
@@ -21,8 +19,6 @@
     - [性能](#性能)
         - [训练性能](#训练性能)
         - [评估性能](#评估性能)
-        - [推理性能](#推理性能)
-    - [ModelZoo主页](#modelzoo主页)
 
 <!-- /TOC -->
 
@@ -42,7 +38,7 @@
 
 # 数据集
 
-[训练样例数据及字符集文件](链接:https://pan.baidu.com/s/1_Nv3lMxZpfxUjRoqoDs8Tghwdb)
+[训练样例数据及字符集文件](https://pan.baidu.com/s/1_Nv3lMxZpfxUjRoqoDs8Tghwdb)
 
 提取码:hwdb
 
diff --git a/official/cv/predrnn++/README.md b/official/cv/predrnn++/README.md
index 973b85dfc98011eb86e8640d3f3b9dbd74ebe6e8..f2d640494f9931910865ebfa35fbaa8709031e02 100644
--- a/official/cv/predrnn++/README.md
+++ b/official/cv/predrnn++/README.md
@@ -1,7 +1,7 @@
 # Contents
 
 - [Contents](#contents)
-    - [Predrnn++ Description](#Predrnn++-description)
+    - [Predrnn++ Description](#predrnn-description)
     - [Model Architecture](#model-architecture)
     - [Dataset](#dataset)
         - [Dataset Prepare](#dataset-prepare)
@@ -11,8 +11,6 @@
     - [Script and Sample Code](#script-and-sample-code)
     - [Script Parameters](#script-parameters)
         - [Training Script Parameters](#training-script-parameters)
-        - [Parameters Configuration](#parameters-configuration)
-    - [Dataset Preparation](#dataset-preparation)
     - [Training Process](#training-process)
         - [Training](#training)
     - [Evaluation Process](#evaluation-process)
@@ -21,7 +19,6 @@
     - [Performance](#performance)
         - [Training Performance](#training-performance)
         - [Evaluation Performance](#evaluation-performance)
-    - [Description of MindSpore Version](#description-of-mindspore-version)
     - [ModelZoo Homepage](#modelzoo-homepage)
 
 ## [Predrnn++ Description](#contents)
@@ -206,7 +203,7 @@ mse per seq: 478.58854093653633
 | Speed | 983ms/step(1pcs) |
 | Total time | 22h |
 | Checkpoint for Fine tuning | 177.27M (.ckpt file) |
-| Scripts | [Link](https://gitee.com/mindspore/models/tree/master/official/cv/Predrnn++) |
+| Scripts | [Link](https://gitee.com/mindspore/models/tree/master/official/cv/predrnn++) |
 
 #### [Evaluation Performance](#contents)
 
diff --git a/official/cv/pwcnet/README.md b/official/cv/pwcnet/README.md
index 393771241b42df97b7823f1faeac2fc32e831649..c5bd293fbfa2665f6084f990d2abd04000370704 100644
--- a/official/cv/pwcnet/README.md
+++ b/official/cv/pwcnet/README.md
@@ -1,14 +1,20 @@
 # Contents
 
-- [PWCnet Description](#PWCnet-description)
+- [Contents](#contents)
+- [PWCnet Description](#pwcnet-description)
 - [Model Architecture](#model-architecture)
 - [Dataset](#dataset)
+- [pretrained](#pretrained)
 - [Environment Requirements](#environment-requirements)
 - [Script Description](#script-description)
     - [Script and Sample Code](#script-and-sample-code)
     - [Running Example](#running-example)
+        - [Train](#train)
+        - [Evaluation](#evaluation)
 - [Model Description](#model-description)
     - [Performance](#performance)
+        - [Training Performance](#training-performance)
+        - [Evaluation Performance](#evaluation-performance)
 - [ModelZoo Homepage](#modelzoo-homepage)
 
 # [PWCnet Description](#contents)
@@ -95,8 +101,8 @@ bash scripts/run_ckpt_convert.sh [PYTORCH_FILE_PATH] [MINDSPORE_FILE_PATH]
 - Framework
     - [MindSpore](https://www.mindspore.cn/install/en)
 - For more information, please check the resources below:
-    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
-    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
+    - [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
 
 # [Script Description](#contents)
 
@@ -230,4 +236,4 @@ EPE: 6.9049
 
 # [ModelZoo Homepage](#contents)
 
-Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
+Please check the official [homepage](https://gitee.com/mindspore/models).
diff --git a/official/nlp/gru/README.md b/official/nlp/gru/README.md
index 3d238c9879e2dc1819f7c4bcac5e7bce57d9ec6b..bde69f18c4eca0a24ac8657efa83399747857568 100644
--- a/official/nlp/gru/README.md
+++ b/official/nlp/gru/README.md
@@ -2,9 +2,11 @@
 <!-- TOC -->
 
 - [GRU](#gru)
+    - [Paper](#paper)
     - [Model Structure](#model-structure)
     - [Dataset](#dataset)
     - [Environment Requirements](#environment-requirements)
+        - [Requirements](#requirements)
     - [Quick Start](#quick-start)
     - [Script Description](#script-description)
         - [Dataset Preparation](#dataset-preparation)
@@ -12,7 +14,9 @@
         - [Training Process](#training-process)
         - [Inference Process](#inference-process)
             - [Export MindIR](#export-mindir)
-            - [Inference Process](#inference-process)
+            - [Inference Process](#inference-process-1)
+                - [Usage](#usage)
+                - [result](#result)
     - [Model Description](#model-description)
         - [Performance](#performance)
             - [Training Performance](#training-performance)
@@ -29,7 +33,7 @@ GRU(Gate Recurrent Unit) is a kind of recurrent neural network algorithm, just l
 
 ## Paper
 
-1.[Paper](https://arxiv.org/pdf/1607.01759.pdf): "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", 2014, Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio
+1.[Paper](https://arxiv.org/abs/1406.1078): "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", 2014, Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio
 
 2.[Paper](https://arxiv.org/pdf/1409.3215.pdf): "Sequence to Sequence Learning with Neural Networks", 2014, Ilya Sutskever, Oriol Vinyals, Quoc V. Le
 
diff --git a/research/cv/PDarts/README_CN.md b/research/cv/PDarts/README_CN.md
index 80d522f5ca9b309e3e50c12cba91eb4bc122fe49..c7d948d405fbd418cb54607c99861d100a78843d 100644
--- a/research/cv/PDarts/README_CN.md
+++ b/research/cv/PDarts/README_CN.md
@@ -3,7 +3,7 @@
 <!-- TOC -->
 
 - [目录](#目录)
-- [PDarts描述](#PDarts描述)
+- [PDarts描述](#pdarts描述)
 - [模型架构](#模型架构)
 - [数据集](#数据集)
 - [特性](#特性)
@@ -15,12 +15,10 @@
     - [脚本参数](#脚本参数)
 - [训练过程](#训练过程)
     - [训练](#训练)
-    - [分布式训练](#分布式训练)
-- [评估过程](#评估过程)
     - [评估](#评估)
 - [推理过程](#推理过程)
-    - [导出MindIR](#导出MindIR)
-    - [在Ascend310执行推理](#在Ascend310执行推理)
+    - [导出MindIR](#导出mindir)
+    - [在Ascend310执行推理](#在ascend310执行推理)
 - [模型描述](#模型描述)
     - [性能](#性能)
         - [训练准确率结果](#训练准确率结果)
@@ -59,17 +57,16 @@
 
 ## 混合精度
 
-采用[混合精度](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/r1.6/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
 
 # 环境要求
 
 - 硬件(Ascend910)
-- 准备Ascend AI处理器搭建硬件环境。如需试用昇腾处理器,请发送[申请表](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx)至ascend@huawei.com,审核通过即可获得资源。
 - 框架
     - [MindSpore](https://www.mindspore.cn/install)
 - 如需查看详情,请参见如下资源:
-- [MindSpore教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
-- [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
+- [MindSpore教程](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+- [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
 
 # 快速入门
 
diff --git a/research/cv/augvit/readme.md b/research/cv/augvit/readme.md
index 92d0b0308f1e1046f9c4050c1adc90d1141c7692..00de908e788708ea7ecc7eda61bca299585c6600 100644
--- a/research/cv/augvit/readme.md
+++ b/research/cv/augvit/readme.md
@@ -1,34 +1,32 @@
 # Contents
 
-- [Aug-ViT Description](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#AugViT-description)
-- [Model Architecture](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#model-architecture)
-- [Dataset](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#dataset)
-- [Environment Requirements](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#environment-requirements)
-- Script Description
-    - Script and Sample Code
-    - [Training Process](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#training-process)
-    - Evaluation Process
-        - [Evaluation](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#evaluation)
-- Model Description
-    - Performance
-        - [Training Performance](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#evaluation-performance)
-        - [Inference Performance](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#evaluation-performance)
-- [Description of Random Situation](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#description-of-random-situation)
-- [ModelZoo Homepage](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#modelzoo-homepage)
-
-## [Aug-ViT Description](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#contents)
+- [Contents](#contents)
+    - [Aug-ViT Description](#aug-vit-description)
+    - [Model architecture](#model-architecture)
+    - [Dataset](#dataset)
+    - [Environment Requirements](#environment-requirements)
+    - [Script description](#script-description)
+        - [Script and sample code](#script-and-sample-code)
+    - [Eval process](#eval-process)
+        - [Usage](#usage)
+        - [Launch](#launch)
+        - [Result](#result)
+    - [Description of Random Situation](#description-of-random-situation)
+    - [ModelZoo Homepage](#modelzoo-homepage)
+
+## [Aug-ViT Description](#contents)
 
 Aug-ViT inserts additional paths with learnable parameters in parallel on the original shortcuts for alleviating the feature collapse. The block-circulant projection is used to implement augmented shortcut, which brings negligible increase of computational cost.
 
 [Paper](https://arxiv.org/abs/2106.15941): Yehui Tang, Kai Han, Chang Xu, An Xiao, Yiping Deng, Chao Xu, Yunhe Wang. Augmented Shortcuts for Vision Transformers. NeurIPS 2021.
 
-## [Model architecture](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#contents)
+## [Model architecture](#contents)
 
 A block of Aug-ViT is show below:
 
 
-## [Dataset](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#contents)
+## [Dataset](#contents)
 
 Dataset used: [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html)
 
@@ -37,7 +35,7 @@ Dataset used: [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html)
 - Test: 10000 images
 - Data format: RGB images.
 
-## [Environment Requirements](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#contents)
+## [Environment Requirements](#contents)
 
 - Hardware(Ascend/GPU)
     - Prepare hardware environment with Ascend or GPU.
@@ -47,9 +45,9 @@ Dataset used: [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html)
 - [MindSpore Tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
 - [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
 
-## [Script description](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#contents)
-
-### [Script and sample code](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#contents)
+## [Script description](#contents)
+
+### [Script and sample code](#contents)
 
 ```bash
 AugViT
@@ -63,7 +61,7 @@ AugViT
 └── augvit.py # augvit network
 ```
 
-## [Eval process](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#contents)
+## [Eval process](#contents)
 
 ### Usage
 
@@ -84,10 +82,10 @@ After installing MindSpore via the official website, you can start evaluation as
 result: {'acc': 0.98} ckpt= ./augvit_c10.ckpt
 ```
 
-## [Description of Random Situation](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#contents)
+## [Description of Random Situation](#contents)
 
 In dataset.py, we set the seed inside "create_dataset" function. We also use random seed in train.py.
 
-## [ModelZoo Homepage](https://gitee.com/mindspore/models/tree/master/research/cv/AugViT#contents)
+## [ModelZoo Homepage](#contents)
 
 Please check the official [homepage](https://gitee.com/mindspore/models).
\ No newline at end of file
diff --git a/research/cv/inception_resnet_v2/README_CN.md b/research/cv/inception_resnet_v2/README_CN.md
index 345ac9418f6c1ad4b5e6161d29f589db67d990af..9fadab9ea7b505c4da2ffc320e5a0fd61c841a4e 100644
--- a/research/cv/inception_resnet_v2/README_CN.md
+++ b/research/cv/inception_resnet_v2/README_CN.md
@@ -3,7 +3,7 @@
 <!-- TOC -->
 
 - [目录](#目录)
-- [Inception_ResNet_v2描述](#Inception_ResNet_v2描述)
+- [Inception_ResNet_v2描述](#inception_resnet_v2描述)
 - [模型架构](#模型架构)
 - [数据集](#数据集)
 - [特性](#特性)
@@ -14,12 +14,13 @@
     - [脚本参数](#脚本参数)
 - [训练过程](#训练过程)
     - [用法](#用法)
-    - [启动](#启动)
     - [结果](#结果)
+        - [Ascend](#ascend)
+        - [GPU](#gpu)
 - [评估过程](#评估过程)
     - [用法](#用法-1)
-    - [启动](#启动-1)
     - [结果](#结果-1)
+    - [模型导出](#模型导出)
 - [模型描述](#模型描述)
     - [性能](#性能)
         - [训练性能](#训练性能)
diff --git a/research/cv/stgcn/README_CN.md b/research/cv/stgcn/README_CN.md
index 47986085ee117201cb74471cc1e62abf93ef51b7..9c4cab7a4cdaef7348cb293953e582bd4798bb42 100644
--- a/research/cv/stgcn/README_CN.md
+++ b/research/cv/stgcn/README_CN.md
@@ -1,6 +1,7 @@
 # Contents
 
-- [STGCN 介绍](#STGCN-介绍)
+- [Contents](#contents)
+- [STGCN 介绍](#stgcn-介绍)
 - [模型架构](#模型架构)
 - [数据集](#数据集)
 - [环境要求](#环境要求)
@@ -17,10 +18,13 @@
     - [用法](#用法)
     - [结果](#结果)
 - [模型介绍](#模型介绍)
-  - [性能](#性能)
+    - [性能](#性能)
         - [评估性能](#评估性能)
+            - [STGCN on PeMSD7-m (Cheb,n_pred=9)](#stgcn-on-pemsd7-m-chebn_pred9)
+        - [Inference Performance](#inference-performance)
+            - [STGCN on PeMSD7-m (Cheb,n_pred=9)](#stgcn-on-pemsd7-m-chebn_pred9-1)
 - [随机事件介绍](#随机事件介绍)
-- [ModelZoo 主页](#ModelZoo-主页)
+- [ModelZoo 主页](#modelzoo-主页)
 
 # [STGCN 介绍](#contents)
 
@@ -40,7 +44,7 @@
 
 Dataset used: PeMED7(PeMSD7-m、PeMSD7-L) BJER4
 
-由于数据集下载原因,只找到了[PeMSD7-M](https://github.com/hazdzz/STGCN/tree/main/data/train/road_traffic/pemsd7-m)数据集。
+由于数据集下载原因,只找到了[PeMSD7-M](https://github.com/hazdzz/STGCN/tree/main/data/pemsd7-m)数据集。
 
 # [环境要求](#contents)