From a1e2217e3ba75b202dd36d2b2fee7703cd0f221b Mon Sep 17 00:00:00 2001
From: lvmingfu <lvmingfu@huawei.com>
Date: Thu, 14 Apr 2022 14:09:02 +0800
Subject: [PATCH] modify urls to adapt to the docs repository structure

---
 .jenkins/check/config/filter_linklint.txt | 2 ++
 benchmark/ascend/bert/README.md | 2 --
 benchmark/ascend/resnet/README.md | 2 +-
 benchmark/ascend/resnet/README_CN.md | 2 +-
 official/audio/melgan/README.md | 2 +-
 official/audio/melgan/README_CN.md | 2 +-
 official/cv/FCN8s/README.md | 2 +-
 official/cv/c3d/README.md | 2 +-
 official/cv/cnnctc/README.md | 6 +++---
 official/cv/cnnctc/README_CN.md | 6 +++---
 official/cv/crnn/README.md | 2 +-
 official/cv/crnn_seq2seq_ocr/README.md | 2 +-
 official/cv/cspdarknet53/README.md | 4 ++--
 official/cv/ctpn/README.md | 2 +-
 official/cv/darknet53/README.md | 2 +-
 official/cv/deeplabv3/README.md | 2 +-
 official/cv/deeplabv3/README_CN.md | 2 +-
 official/cv/deeplabv3plus/README_CN.md | 2 +-
 official/cv/deeptext/README.md | 2 +-
 official/cv/densenet/README.md | 2 +-
 official/cv/densenet/README_CN.md | 2 +-
 official/cv/depthnet/README.md | 2 +-
 official/cv/dpn/README.md | 2 +-
 official/cv/east/README.md | 2 +-
 official/cv/essay-recogination/README_CN.md | 2 +-
 official/cv/googlenet/README.md | 6 +++---
 official/cv/googlenet/README_CN.md | 4 ++--
 official/cv/inceptionv3/README.md | 2 +-
 official/cv/inceptionv3/README_CN.md | 2 +-
 official/cv/inceptionv4/README.md | 4 ++--
 official/cv/maskrcnn/README.md | 2 +-
 official/cv/maskrcnn/README_CN.md | 2 +-
 official/cv/maskrcnn_mobilenetv1/README.md | 4 ++--
 official/cv/mobilenetv1/README.md | 2 +-
 official/cv/mobilenetv2/README.md | 2 +-
 official/cv/mobilenetv2/README_CN.md | 2 +-
 official/cv/nima/README.md | 2 +-
 official/cv/openpose/README.md | 2 +-
 official/cv/patchcore/README_CN.md | 2 +-
 official/cv/predrnn++/README.md | 2 +-
 official/cv/psenet/README.md | 2 +-
 official/cv/psenet/README_CN.md | 2 +-
 official/cv/pvnet/README.md | 2 +-
 official/cv/resnet/README.md | 4 ++--
 official/cv/resnet/README_CN.md | 2 +-
 official/cv/resnext/README.md | 2 +-
 official/cv/resnext/README_CN.md | 2 +-
 official/cv/retinanet/README_CN.md | 2 +-
 official/cv/semantic_human_matting/README.md | 2 +-
 official/cv/simple_pose/README.md | 2 +-
 official/cv/squeezenet/README.md | 4 ++--
 official/cv/squeezenet/modelarts/README.md | 4 ++--
 official/cv/srcnn/README_CN.md | 2 +-
 official/cv/ssd/README.md | 2 +-
 official/cv/ssd/README_CN.md | 2 +-
 official/cv/ssim-ae/README_CN.md | 2 +-
 official/cv/tinydarknet/README_CN.md | 2 +-
 official/cv/unet/README.md | 2 +-
 official/cv/unet/README_CN.md | 2 +-
 official/cv/unet3d/README.md | 2 +-
 official/cv/vgg16/README.md | 4 ++--
 official/cv/vgg16/README_CN.md | 4 ++--
 official/cv/vit/README.md | 4 ++--
 official/cv/vit/README_CN.md | 4 ++--
 official/cv/warpctc/README.md | 2 +-
 official/cv/warpctc/README_CN.md | 2 +-
 official/cv/xception/README.md | 4 ++--
 official/cv/yolov3_resnet18/README.md | 4 ++--
 official/cv/yolov3_resnet18/README_CN.md | 4 ++--
 official/nlp/bert/README.md | 2 --
 official/nlp/cpm/README.md | 2 +-
 official/nlp/cpm/README_CN.md | 2 +-
 official/nlp/duconv/README_CN.md | 2 +-
 official/nlp/mass/README.md | 4 ++--
 official/nlp/mass/README_CN.md | 4 ++--
 official/nlp/pangu_alpha/README.md | 12 ++++++------
 official/nlp/transformer/README.md | 2 +-
 official/nlp/transformer/README_CN.md | 2 +-
 official/recommend/ncf/README.md | 6 +++---
 research/audio/fcn-4/README.md | 2 +-
 research/audio/speech_transformer/README.md | 2 +-
 research/cv/3D_DenseNet/README.md | 2 +-
 research/cv/3D_DenseNet/README_CN.md | 4 +---
 research/cv/APDrawingGAN/README_CN.md | 2 +-
 research/cv/AlignedReID++/README_CN.md | 4 ++--
 research/cv/AlphaPose/README_CN.md | 2 +-
 research/cv/DDRNet/README_CN.md | 2 +-
 research/cv/EDSR/README_CN.md | 2 +-
 research/cv/EGnet/README_CN.md | 2 +-
 research/cv/GENet_Res50/README_CN.md | 2 +-
 research/cv/LightCNN/README.md | 6 +++---
 research/cv/LightCNN/README_CN.md | 4 ++--
 research/cv/ManiDP/Readme.md | 2 +-
 research/cv/NFNet/README_CN.md | 2 +-
 research/cv/RefineDet/README_CN.md | 2 +-
 research/cv/RefineNet/README.md | 2 +-
 research/cv/SE-Net/README.md | 2 +-
 research/cv/SE_ResNeXt50/README_CN.md | 2 +-
 research/cv/TNT/README_CN.md | 2 +-
 research/cv/cct/README_CN.md | 2 +-
 research/cv/convnext/README_CN.md | 2 +-
 research/cv/dcgan/README.md | 2 +-
 research/cv/deeplabv3plus/README_CN.md | 2 +-
 research/cv/dlinknet/README.md | 2 +-
 research/cv/dlinknet/README_CN.md | 2 +-
 research/cv/efficientnetv2/README_CN.md | 2 +-
 research/cv/fairmot/README.md | 2 +-
 research/cv/fishnet99/README_CN.md | 2 +-
 research/cv/glore_res/README_CN.md | 2 +-
 research/cv/glore_res200/README_CN.md | 2 +-
 research/cv/glore_res50/README.md | 2 +-
 research/cv/hardnet/README_CN.md | 6 +++---
 research/cv/inception_resnet_v2/README.md | 4 ++--
 research/cv/inception_resnet_v2/README_CN.md | 4 ++--
 research/cv/mae/README_CN.md | 4 ++--
 research/cv/metric_learn/README_CN.md | 2 +-
 research/cv/midas/README.md | 2 +-
 research/cv/nas-fpn/README_CN.md | 2 +-
 research/cv/ntsnet/README.md | 2 +-
 research/cv/osnet/README.md | 2 +-
 research/cv/ras/README.md | 2 +-
 research/cv/renas/Readme.md | 2 +-
 research/cv/res2net/README.md | 2 +-
 research/cv/res2net_deeplabv3/README.md | 2 +-
 research/cv/resnet3d/README_CN.md | 2 +-
 research/cv/resnet50_bam/README.md | 2 +-
 research/cv/resnet50_bam/README_CN.md | 2 +-
 research/cv/resnext152_64x4d/README.md | 2 +-
 research/cv/resnext152_64x4d/README_CN.md | 2 +-
 research/cv/retinanet_resnet101/README.md | 2 +-
 research/cv/retinanet_resnet101/README_CN.md | 2 +-
 research/cv/retinanet_resnet152/README.md | 2 +-
 research/cv/retinanet_resnet152/README_CN.md | 2 +-
 research/cv/siamRPN/README_CN.md | 2 +-
 research/cv/simple_baselines/README_CN.md | 2 +-
 research/cv/single_path_nas/README.md | 2 +-
 research/cv/single_path_nas/README_CN.md | 2 +-
 research/cv/sknet/README.md | 2 +-
 research/cv/squeezenet/README.md | 4 ++--
 research/cv/squeezenet1_1/README.md | 2 +-
 research/cv/ssd_ghostnet/README.md | 2 +-
 research/cv/ssd_inception_v2/README.md | 2 +-
 research/cv/ssd_inceptionv2/README_CN.md | 2 +-
 research/cv/ssd_mobilenetV2/README.md | 2 +-
 research/cv/ssd_mobilenetV2_FPNlite/README.md | 2 +-
 research/cv/ssd_resnet34/README.md | 2 +-
 research/cv/ssd_resnet34/README_CN.md | 2 +-
 research/cv/ssd_resnet50/README.md | 2 +-
 research/cv/ssd_resnet50/README_CN.md | 2 +-
 research/cv/ssd_resnet_34/README.md | 2 +-
 research/cv/swin_transformer/README_CN.md | 2 +-
 research/cv/tsm/README_CN.md | 2 +-
 research/cv/vgg19/README.md | 2 +-
 research/cv/vgg19/README_CN.md | 4 ++--
 research/cv/vnet/README_CN.md | 2 +-
 research/cv/wideresnet/README.md | 2 +-
 research/cv/wideresnet/README_CN.md | 2 +-
 research/hpc/pinns/README.md | 4 ++--
 research/hpc/pinns/README_CN.md | 2 +-
 research/nlp/albert/README.md | 5 ++---
 research/nlp/atae_lstm/README.md | 2 +-
 research/nlp/rotate/README_CN.md | 2 +-
 research/nlp/seq2seq/README_CN.md | 2 +-
 163 files changed, 204 insertions(+), 209 deletions(-)
 create mode 100644 .jenkins/check/config/filter_linklint.txt

diff --git a/.jenkins/check/config/filter_linklint.txt b/.jenkins/check/config/filter_linklint.txt
new file mode 100644
index 000000000..bbd5911cd
--- /dev/null
+++ b/.jenkins/check/config/filter_linklint.txt
@@ -0,0 +1,2 @@
+http://www.vision.caltech.edu/visipedia/CUB-200-2011.html
+http://dl.yf.io/dla/models/imagenet/dla34-ba72cf86.pth
\ No newline at end of file
diff --git a/benchmark/ascend/bert/README.md b/benchmark/ascend/bert/README.md
index 5d9cceade..8c46ef6a6 100644
--- a/benchmark/ascend/bert/README.md
+++ b/benchmark/ascend/bert/README.md
@@ -209,8 +209,6 @@ Please follow the instructions in the link below to create an hccl.json file in
 For distributed training among multiple machines, training command should be executed on each machine in a small time interval. Thus, an hccl.json is needed on each machine.
 [merge_hccl](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools#merge_hccl) is a tool to create hccl.json for multi-machine case.
 
-For dataset, if you want to set the format and parameters, a schema configuration file with JSON format needs to be created, please refer to [tfrecord](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_loading.html#tfrecord) format.
-
 ```text
 For pretraining, schema file contains ["input_ids", "input_mask", "segment_ids", "next_sentence_labels", "masked_lm_positions", "masked_lm_ids", "masked_lm_weights"].
 
diff --git a/benchmark/ascend/resnet/README.md b/benchmark/ascend/resnet/README.md
index c1e12fc6e..f8d916c77 100644
--- a/benchmark/ascend/resnet/README.md
+++ b/benchmark/ascend/resnet/README.md
@@ -97,7 +97,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
 ## Mixed Precision
 
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
 
 # [Environment Requirements](#contents)
diff --git a/benchmark/ascend/resnet/README_CN.md b/benchmark/ascend/resnet/README_CN.md
index 9663c783e..3df9996ec 100644
--- a/benchmark/ascend/resnet/README_CN.md
+++ b/benchmark/ascend/resnet/README_CN.md
@@ -103,7 +103,7 @@ ResNet的总体网络架构如下：
 ## 混合精度
 
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。
 
 以FP16算子为例，如果输入数据类型为FP32，MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志，搜索“reduce precision”查看精度降低的算子。
 
 # 环境要求
diff --git a/official/audio/melgan/README.md b/official/audio/melgan/README.md
index fbe180bef..edab4f65f 100644
--- a/official/audio/melgan/README.md
+++ b/official/audio/melgan/README.md
@@ -73,7 +73,7 @@ Dataset used: [LJ Speech](<https://keithito.com/LJ-Speech-Dataset/>)
 ## Mixed Precision
 
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
 
 # [Environment Requirements](#contents)
diff --git a/official/audio/melgan/README_CN.md b/official/audio/melgan/README_CN.md
index 6e5727b3f..ee2279296 100644
--- a/official/audio/melgan/README_CN.md
+++ b/official/audio/melgan/README_CN.md
@@ -70,7 +70,7 @@ MelGAN模型是非自回归全卷积模型。它的参数比同类模型少得
 ## 混合精度
 
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。
 
 以FP16算子为例，如果输入数据类型为FP32，MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志，搜索“reduce precision”查看精度降低的算子。
 
 # 环境要求
diff --git a/official/cv/FCN8s/README.md b/official/cv/FCN8s/README.md
index 1ce8edd2a..8e5bfc381 100644
--- a/official/cv/FCN8s/README.md
+++ b/official/cv/FCN8s/README.md
@@ -471,7 +471,7 @@ python export.py
 ### 教程
 
-如果你需要在不同硬件平台（如GPU，Ascend 910 或者 Ascend 310）使用训练好的模型，你可以参考这个 [Link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。以下是一个简单例子的步骤介绍：
+如果你需要在不同硬件平台（如GPU，Ascend 910 或者 Ascend 310）使用训练好的模型，你可以参考这个 [Link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。以下是一个简单例子的步骤介绍：
 
 - Running on Ascend
diff --git a/official/cv/c3d/README.md b/official/cv/c3d/README.md
index fb6a298ec..dbee0dabb 100644
--- a/official/cv/c3d/README.md
+++ b/official/cv/c3d/README.md
@@ -324,7 +324,7 @@ epoch time: 150760.797 ms, per step time: 252.954 ms
 #### Distributed training on Ascend
 
 > Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
 > ```text
 
diff --git a/official/cv/cnnctc/README.md b/official/cv/cnnctc/README.md
index 7606795b7..f05825fed 100644
--- a/official/cv/cnnctc/README.md
+++ b/official/cv/cnnctc/README.md
@@ -1,4 +1,4 @@
-锘�# Contents
+# Contents
 
 - [CNNCTC Description](#CNNCTC-description)
 - [Model Architecture](#model-architecture)
@@ -94,7 +94,7 @@ This takes around 75 minutes.
 ## Mixed Precision
 
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
 
 # [Environment Requirements](#contents)
@@ -517,7 +517,7 @@ accuracy: 0.8533
 ### Inference
 
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example:
 
 - Running on Ascend
diff --git a/official/cv/cnnctc/README_CN.md b/official/cv/cnnctc/README_CN.md
index 31b0e6257..4125a7ec3 100644
--- a/official/cv/cnnctc/README_CN.md
+++ b/official/cv/cnnctc/README_CN.md
@@ -95,7 +95,7 @@ python src/preprocess_dataset.py
 ## 混合精度
 
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。
 
 以FP16算子为例，如果输入数据类型为FP32，MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志，搜索“reduce precision”查看精度降低的算子。
 
 # 环境要求
@@ -250,7 +250,7 @@ bash scripts/run_distribute_train_ascend.sh [RANK_TABLE_FILE] [PRETRAINED_CKPT(o
 > 注意:
 
- RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
+ RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
 
 ### 训练结果
 
@@ -449,7 +449,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [DEVICE_ID]
 ### 推理
 
-如果您需要在GPU、Ascend 910、Ascend 310等多个硬件平台上使用训练好的模型进行推理，请参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。以下为简单示例：
+如果您需要在GPU、Ascend 910、Ascend 310等多个硬件平台上使用训练好的模型进行推理，请参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。以下为简单示例：
 
 - Ascend处理器环境运行
diff --git a/official/cv/crnn/README.md b/official/cv/crnn/README.md
index b76d1730b..237142ab1 100644
--- a/official/cv/crnn/README.md
+++ b/official/cv/crnn/README.md
@@ -238,7 +238,7 @@ Parameters for both training and evaluation can be set in default_config.yaml.
 ## [Training Process](#contents)
 
-- Set options in `config.py`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `config.py`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about dataset.
 
 ### [Training](#contents)
diff --git a/official/cv/crnn_seq2seq_ocr/README.md b/official/cv/crnn_seq2seq_ocr/README.md
index 1707ba078..01d056e7b 100644
--- a/official/cv/crnn_seq2seq_ocr/README.md
+++ b/official/cv/crnn_seq2seq_ocr/README.md
@@ -229,7 +229,7 @@ Parameters for both training and evaluation can be set in config.py.
 ## [Training Process](#contents)
 
-- Set options in `default_config.yaml`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset.
+- Set options in `default_config.yaml`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about dataset.
 
 ### [Training](#contents)
diff --git a/official/cv/cspdarknet53/README.md b/official/cv/cspdarknet53/README.md
index cbfd2038d..e8670cb12 100644
--- a/official/cv/cspdarknet53/README.md
+++ b/official/cv/cspdarknet53/README.md
@@ -49,7 +49,7 @@ Dataset used can refer to paper.
 ## [Mixed Precision(Ascend)](#contents)
 
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
@@ -206,7 +206,7 @@ bash run_distribute_train.sh [RANK_TABLE_FILE] [DATA_DIR] (option)[PATH_CHECKPOI
 bash run_standalone_train.sh [DEVICE_ID] [DATA_DIR] (option)[PATH_CHECKPOINT]
 ```
 
-> Notes: RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html), and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV3, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> Notes: RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV3, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
 >
 > This is processor cores binding operation regarding the `device_num` and total processor numbers. If you are not expect to do it, remove the operations `taskset` in `scripts/run_distribute_train.sh`
 
diff --git a/official/cv/ctpn/README.md b/official/cv/ctpn/README.md
index a89a39c8f..58be8b7ba 100644
--- a/official/cv/ctpn/README.md
+++ b/official/cv/ctpn/README.md
@@ -231,7 +231,7 @@ imagenet_cfg = edict({
 Then you can train it with ImageNet2012.
 
 > Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
+> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
 >
 > This is processor cores binding operation regarding the `device_num` and total processor numbers. If you are not expect to do it, remove the operations `taskset` in `scripts/run_distribute_train.sh`
 >
diff --git a/official/cv/darknet53/README.md b/official/cv/darknet53/README.md
index 804488515..168662486 100644
--- a/official/cv/darknet53/README.md
+++ b/official/cv/darknet53/README.md
@@ -58,7 +58,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
 ## Mixed Precision
 
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
 
 # [Environment Requirements](#contents)
diff --git a/official/cv/deeplabv3/README.md b/official/cv/deeplabv3/README.md
index e8dc4d9f9..185eda20a 100644
--- a/official/cv/deeplabv3/README.md
+++ b/official/cv/deeplabv3/README.md
@@ -86,7 +86,7 @@ You can also generate the list file automatically by run script: `python get_dat
 ## Mixed Precision
 
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
 
 # [Environment Requirements](#contents)
diff --git a/official/cv/deeplabv3/README_CN.md b/official/cv/deeplabv3/README_CN.md
index 742956f10..8017d3f9c 100644
--- a/official/cv/deeplabv3/README_CN.md
+++ b/official/cv/deeplabv3/README_CN.md
@@ -93,7 +93,7 @@ Pascal VOC数据集和语义边界数据集（Semantic Boundaries Dataset，SBD
 ## 混合精度
 
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。
 
 以FP16算子为例，如果输入数据类型为FP32，MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志，搜索“reduce precision”查看精度降低的算子。
 
 # 环境要求
diff --git a/official/cv/deeplabv3plus/README_CN.md b/official/cv/deeplabv3plus/README_CN.md
index 9985407b6..29c133a23 100644
--- a/official/cv/deeplabv3plus/README_CN.md
+++ b/official/cv/deeplabv3plus/README_CN.md
@@ -83,7 +83,7 @@ Pascal VOC数据集和语义边界数据集（Semantic Boundaries Dataset，SBD
 ## 混合精度
 
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。
+閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� # 鐜瑕佹眰 diff --git a/official/cv/deeptext/README.md b/official/cv/deeptext/README.md index d30cccba7..dfc9863ac 100644 --- a/official/cv/deeptext/README.md +++ b/official/cv/deeptext/README.md @@ -133,7 +133,7 @@ sh run_eval_gpu.sh [IMGS_PATH] [ANNOS_PATH] [CHECKPOINT_PATH] [COCO_TEXT_PARSER_ ``` > Notes: -> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size. +> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size. > > This is processor cores binding operation regarding the `device_num` and total processor numbers. 
If you do not expect to do it, remove the `taskset` operations in `scripts/run_distribute_train.sh`
>
diff --git a/official/cv/densenet/README.md b/official/cv/densenet/README.md
index 97dca3522..6e7f3532c 100644
--- a/official/cv/densenet/README.md
+++ b/official/cv/densenet/README.md
@@ -79,7 +79,7 @@ The default configuration of the Dataset is as follows:
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
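The trade-off these READMEs describe can be seen with nothing but the Python standard library: `struct` understands the IEEE 754 half-precision format code `e`, so a small sketch (illustrative only, not MindSpore's implementation) shows what storing an FP32 value in FP16 costs:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision (FP16)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

fp32_value = 1.0 / 3.0            # ~7 significant decimal digits in FP32
fp16_value = to_fp16(fp32_value)  # only ~3 significant decimal digits in FP16
print(f"FP32: {fp32_value:.10f}")
print(f"FP16: {fp16_value:.10f}")  # the low digits are lost

# FP16 also has a far smaller dynamic range: tiny values flush to zero,
# which is why frameworks keep some computations in FP32.
print(to_fp16(1e-8))  # 0.0
```

This is why the backend keeps FP32 inputs usable by handling such operators with reduced precision rather than failing.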
diff --git a/official/cv/densenet/README_CN.md b/official/cv/densenet/README_CN.md
index 74c86403e..8e6e8b9f8 100644
--- a/official/cv/densenet/README_CN.md
+++ b/official/cv/densenet/README_CN.md
@@ -83,7 +83,7 @@ DenseNet-100使用的数据集: Cifar-10
 ## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
 以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
 # 环境要求
diff --git a/official/cv/depthnet/README.md b/official/cv/depthnet/README.md
index 5af2a58f5..d1868a4e7 100644
--- a/official/cv/depthnet/README.md
+++ b/official/cv/depthnet/README.md
@@ -74,7 +74,7 @@
 ## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
 以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
 # 环境要求
diff --git a/official/cv/dpn/README.md b/official/cv/dpn/README.md
index 106d98cd2..f2ec942bb 100644
--- a/official/cv/dpn/README.md
+++ b/official/cv/dpn/README.md
@@ -67,7 +67,7 @@ All the models in this repository are trained and validated on ImageNet-1K.
The
 ## [Mixed Precision](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)

diff --git a/official/cv/east/README.md b/official/cv/east/README.md
index 3686d8fdf..76c1e00e6 100644
--- a/official/cv/east/README.md
+++ b/official/cv/east/README.md
@@ -130,7 +130,7 @@ bash run_eval_gpu.sh [DATASET_PATH] [CKPT_PATH] [DEVICE_ID]
 ```
 > Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html), and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out, since compiling time increases with the growth of model size.
+> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out, since compiling time increases with the growth of model size.
 >
 > This is a processor-core binding operation regarding the `device_num` and total processor numbers.
If you do not expect to do it, remove the `taskset` operations in `scripts/run_distribute_train.sh`
>
diff --git a/official/cv/essay-recogination/README_CN.md b/official/cv/essay-recogination/README_CN.md
index 6456047af..064425d29 100644
--- a/official/cv/essay-recogination/README_CN.md
+++ b/official/cv/essay-recogination/README_CN.md
@@ -111,7 +111,7 @@ train.valInterval = 100 # 边训练边推
 ## 训练过程
-- 在`parameters/hwdb.gin`中设置选项,包括学习率和网络超参数。单击[MindSpore加载数据集教程](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_sample.html),了解更多信息。
+- 在`parameters/hwdb.gin`中设置选项,包括学习率和网络超参数。单击[MindSpore加载数据集教程](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html),了解更多信息。
 ### 训练
diff --git a/official/cv/googlenet/README.md b/official/cv/googlenet/README.md
index 3fb862acb..04708b39f 100644
--- a/official/cv/googlenet/README.md
+++ b/official/cv/googlenet/README.md
@@ -1,4 +1,4 @@
-锘�# Contents
+# Contents
@@ -71,7 +71,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time.
Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.

# [Environment Requirements](#contents)

@@ -595,7 +595,7 @@ Current batch_size can only be set to 1.
 ### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html).
Following the steps below, this is a simple example:
 - Running on Ascend
diff --git a/official/cv/googlenet/README_CN.md b/official/cv/googlenet/README_CN.md
index d8f0e8885..569ed28fa 100644
--- a/official/cv/googlenet/README_CN.md
+++ b/official/cv/googlenet/README_CN.md
@@ -73,7 +73,7 @@ GoogleNet由多个inception模块串联起来,可以更加深入。 降维的
 ## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
 以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
 # 环境要求
@@ -596,7 +596,7 @@ python export.py --config_path [CONFIG_PATH]
 ### 推理
-如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。下面是操作步骤示例:
+如果您需要使用此训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考此[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。下面是操作步骤示例:
 - Ascend处理器环境运行
diff --git a/official/cv/inceptionv3/README.md b/official/cv/inceptionv3/README.md
index 3c5bd47fd..e445e7022 100644
--- a/official/cv/inceptionv3/README.md
+++ b/official/cv/inceptionv3/README.md
@@ -65,7 +65,7 @@ Dataset used: [CIFAR-10](http://www.cs.toronto.edu/~kriz/cifar.html)
 ## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains
the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
diff --git a/official/cv/inceptionv3/README_CN.md b/official/cv/inceptionv3/README_CN.md
index ff3189a67..cb1910b17 100644
--- a/official/cv/inceptionv3/README_CN.md
+++ b/official/cv/inceptionv3/README_CN.md
@@ -69,7 +69,7 @@ InceptionV3的总体网络架构如下:
 ## 混合精度(Ascend)
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
 以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
diff --git a/official/cv/inceptionv4/README.md b/official/cv/inceptionv4/README.md
index 4cf8250d4..7cf88d279 100644
--- a/official/cv/inceptionv4/README.md
+++ b/official/cv/inceptionv4/README.md
@@ -44,7 +44,7 @@ Dataset used can refer to paper.
 ## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
@@ -263,7 +263,7 @@ bash scripts/run_standalone_train_ascend.sh [DEVICE_ID] [DATA_DIR]
 ```
 > Notes:
-> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html), and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the hccl connection checking time from the default 120 seconds to 600 seconds.
Otherwise, the connection could time out, since compiling time increases with the growth of model size.
+> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out, since compiling time increases with the growth of model size.
 >
 > This is a processor-core binding operation regarding the `device_num` and total processor numbers. If you do not expect to do it, remove the `taskset` operations in `scripts/run_distribute_train.sh`
diff --git a/official/cv/maskrcnn/README.md b/official/cv/maskrcnn/README.md
index 3689e43e1..cc3be8102 100644
--- a/official/cv/maskrcnn/README.md
+++ b/official/cv/maskrcnn/README.md
@@ -544,7 +544,7 @@ Usage: bash run_standalone_train.sh [PRETRAINED_MODEL] [DATA_PATH]
 ## [Training Process](#contents)
-- Set options in `config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about the dataset.
+- Set options in `config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
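The `loss_scale` option these training configs expose exists because small FP16 gradients underflow to zero; scaling the loss (and hence the gradients) by a power of two keeps them representable, and the update is unscaled afterwards. A minimal illustration in plain Python, with a gradient value and scale factor chosen purely for demonstration:

```python
import struct

def to_fp16(x: float) -> float:
    # Round-trip through IEEE 754 half precision, as FP16 storage would.
    return struct.unpack("<e", struct.pack("<e", x))[0]

gradient = 2e-8       # representable in FP32, below FP16's smallest subnormal step
loss_scale = 1024.0   # a power-of-two scale, as a loss_scale setting might choose

naive = to_fp16(gradient)                # underflows to 0.0: the update is lost
scaled = to_fp16(gradient * loss_scale)  # ~2.05e-5 still fits in FP16
recovered = scaled / loss_scale          # unscale in FP32 before applying the update

print(naive)      # 0.0
print(recovered)  # close to the original 2e-8
```

Powers of two are preferred for the scale because multiplying and dividing by them changes only the exponent, never the mantissa.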
### [Training](#content)

diff --git a/official/cv/maskrcnn/README_CN.md b/official/cv/maskrcnn/README_CN.md
index cbe608e85..fcca9b9d0 100644
--- a/official/cv/maskrcnn/README_CN.md
+++ b/official/cv/maskrcnn/README_CN.md
@@ -526,7 +526,7 @@ bash run_eval.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH] [DATA_PATH]
 ## 训练过程
-- 在`config.py`中设置配置项,包括loss_scale、学习率和网络超参。单击[此处](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_sample.html)获取更多数据集相关信息。
+- 在`config.py`中设置配置项,包括loss_scale、学习率和网络超参。单击[此处](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html)获取更多数据集相关信息。
 ### 训练
diff --git a/official/cv/maskrcnn_mobilenetv1/README.md b/official/cv/maskrcnn_mobilenetv1/README.md
index 57e5ecd32..3d8b11269 100644
--- a/official/cv/maskrcnn_mobilenetv1/README.md
+++ b/official/cv/maskrcnn_mobilenetv1/README.md
@@ -1,4 +1,4 @@
-锘�# Contents
+# Contents
 - [MaskRCNN Description](#maskrcnn-description)
 - [Model Architecture](#model-architecture)
@@ -521,7 +521,7 @@ Usage: bash run_distribute_train_gpu.sh [DATA_PATH] [PRETRAINED_PATH] (optional)
 ## [Training Process](#contents)
-- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about the dataset.
+- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
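The distributed-training notes in these READMEs recommend raising `HCCL_CONNECT_TIMEOUT` and binding each rank to its own CPU cores with `taskset`. A sketch of what a launcher might do; the `train.py` invocation and the 8-device layout are illustrative assumptions, not the repository's actual script:

```python
import os

# Extend the HCCL connection-check window from the default 120 s to 600 s so
# that slow-compiling large models do not time out while ranks connect.
os.environ["HCCL_CONNECT_TIMEOUT"] = "600"

# Plan a taskset-style binding: give each of device_num ranks its own
# contiguous slice of CPU cores (drop this if core binding is unwanted).
device_num = 8
cores = os.cpu_count() or device_num
per_rank = max(1, cores // device_num)
for rank in range(device_num):
    start = rank * per_rank
    cmd = (f"taskset -c {start}-{start + per_rank - 1} "
           f"python train.py --device_id={rank}")
    print(cmd)  # a real launcher would execute this instead of printing it
```

Removing the `taskset` prefix from each command reproduces the "no binding" variant the notes mention.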
### [Training](#content)

diff --git a/official/cv/mobilenetv1/README.md b/official/cv/mobilenetv1/README.md
index ce1a3c4b0..5f0771154 100644
--- a/official/cv/mobilenetv1/README.md
+++ b/official/cv/mobilenetv1/README.md
@@ -73,7 +73,7 @@ Dataset used: [CIFAR-10](http://www.cs.toronto.edu/~kriz/cifar.html)
 ### Mixed Precision(Ascend)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
## Environment Requirements

diff --git a/official/cv/mobilenetv2/README.md b/official/cv/mobilenetv2/README.md
index 454cce4bb..e7b2b046a 100644
--- a/official/cv/mobilenetv2/README.md
+++ b/official/cv/mobilenetv2/README.md
@@ -59,7 +59,7 @@ Dataset used: [imagenet](http://www.image-net.org/)
 ## [Mixed Precision(Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
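To actually perform the "enable INFO log and search" step these paragraphs describe, MindSpore's log level is controlled by the `GLOG_v` environment variable; treat the exact value (1 for INFO) and the sample log line below as assumptions about the message format, since only the `reduce precision` marker is specified by the READMEs:

```python
import os

# GLOG_v selects MindSpore's log level (0=DEBUG, 1=INFO, 2=WARNING, 3=ERROR);
# it must be set before launching training so kernel messages are emitted.
os.environ["GLOG_v"] = "1"

# Illustrative lines standing in for a captured training log.
log_lines = [
    "[INFO] Select kernel: Conv2D reduce precision from float32 to float16",
    "[INFO] epoch: 1 step: 100, loss is 2.31",
]
reduced = [line for line in log_lines if "reduce precision" in line]
for line in reduced:
    print(line)  # each hit names an operator that ran with reduced precision
```

With a real run, the equivalent is capturing stdout/stderr to a file and grepping it for `reduce precision`.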
# [Environment Requirements](#contents)

diff --git a/official/cv/mobilenetv2/README_CN.md b/official/cv/mobilenetv2/README_CN.md
index 35af3e3d4..88caa2261 100644
--- a/official/cv/mobilenetv2/README_CN.md
+++ b/official/cv/mobilenetv2/README_CN.md
@@ -55,7 +55,7 @@ MobileNetV2总体网络架构如下:
 ## 混合精度(Ascend)
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
 以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
 # 环境要求
diff --git a/official/cv/nima/README.md b/official/cv/nima/README.md
index 0ddce6559..47485334c 100644
--- a/official/cv/nima/README.md
+++ b/official/cv/nima/README.md
@@ -84,7 +84,7 @@ python ./src/dividing_label.py --config_path=~/config.yaml
 ## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
 以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
 # 环境要求
diff --git a/official/cv/openpose/README.md b/official/cv/openpose/README.md
index 3bf5319a6..43387c844 100644
--- a/official/cv/openpose/README.md
+++ b/official/cv/openpose/README.md
@@ -69,7 +69,7 @@
In the currently provided training script, the coco2017 data set is used as an e
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)

diff --git a/official/cv/patchcore/README_CN.md b/official/cv/patchcore/README_CN.md
index 98018773c..353c7beff 100644
--- a/official/cv/patchcore/README_CN.md
+++ b/official/cv/patchcore/README_CN.md
@@ -93,7 +93,7 @@ PatchCore使用预训练的WideResNet50作为Encoder, 并去除layer3之后的
 ## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
 以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
 # 环境要求
diff --git a/official/cv/predrnn++/README.md b/official/cv/predrnn++/README.md
index 0018d700f..6e569bc90 100644
--- a/official/cv/predrnn++/README.md
+++ b/official/cv/predrnn++/README.md
@@ -140,7 +140,7 @@ device_id: 0 # id of NPU used
 ## [Training Process](#contents)
-- Set options in `config.py`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about the dataset.
+- Set options in `config.py`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
 ### [Training](#contents)
diff --git a/official/cv/psenet/README.md b/official/cv/psenet/README.md
index 87e844ece..c297a2637 100644
--- a/official/cv/psenet/README.md
+++ b/official/cv/psenet/README.md
@@ -427,7 +427,7 @@ The `res` folder is generated in the upper-level directory.
For details about th
 ### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example:
 ```python
 # Load unseen dataset for inference
diff --git a/official/cv/psenet/README_CN.md b/official/cv/psenet/README_CN.md
index 9a3061b8e..9225e8127 100644
--- a/official/cv/psenet/README_CN.md
+++ b/official/cv/psenet/README_CN.md
@@ -364,7 +364,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
 ### 推理
-如果您需要使用已训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考[此处](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)。操作示例如下:
+如果您需要使用已训练模型在GPU、Ascend 910、Ascend 310等多个硬件平台上进行推理,可参考[此处](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)。操作示例如下:
 ```python
 # 加载未知数据集进行推理
diff --git a/official/cv/pvnet/README.md b/official/cv/pvnet/README.md
index 7b5579ebb..c462410d3 100644
--- a/official/cv/pvnet/README.md
+++ b/official/cv/pvnet/README.md
@@ -62,7 +62,7 @@ PvNet是一种Encode-Decode的网络结构,通过输入一张rgb图,输出
 ## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
 以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
 # 环境要求
diff --git a/official/cv/resnet/README.md b/official/cv/resnet/README.md
index 116a9cc2a..a88ab183a 100644
--- a/official/cv/resnet/README.md
+++ b/official/cv/resnet/README.md
@@ -107,7 +107,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/)
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
# [Environment Requirements](#contents)

@@ -456,7 +456,7 @@ bash run_eval_gpu_resnet_benchmark.sh [DATASET_PATH] [CKPT_PATH] [BATCH_SIZE](op
 For distributed training, a hostfile configuration needs to be created in advance.
-Please follow the instructions in the link [GPU-Multi-Host](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_gpu.html).
+Please follow the instructions in the link [GPU-Multi-Host](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html).
 #### Running parameter server mode training
diff --git a/official/cv/resnet/README_CN.md b/official/cv/resnet/README_CN.md
index 9663c783e..3df9996ec 100644
--- a/official/cv/resnet/README_CN.md
+++ b/official/cv/resnet/README_CN.md
@@ -103,7 +103,7 @@ ResNet的总体网络架构如下:
 ## 混合精度
-采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
+采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度,同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时,支持在特定硬件上训练更大的模型或实现更大批次的训练。
 以FP16算子为例,如果输入数据类型为FP32,MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志,搜索“reduce precision”查看精度降低的算子。
 # 环境要求
diff --git a/official/cv/resnext/README.md b/official/cv/resnext/README.md
index 6c5a0985f..d2e356b76 100644
--- a/official/cv/resnext/README.md
+++ b/official/cv/resnext/README.md
@@ -54,7 +54,7 @@ Dataset used: [imagenet](http://www.image-net.org/)
 ## [Mixed Precision](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved
by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'. diff --git a/official/cv/resnext/README_CN.md b/official/cv/resnext/README_CN.md index 09699a0a9..fce417685 100644 --- a/official/cv/resnext/README_CN.md +++ b/official/cv/resnext/README_CN.md @@ -54,7 +54,7 @@ The overall network architecture of ResNeXt is as follows: ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching "reduce precision". diff --git a/official/cv/retinanet/README_CN.md b/official/cv/retinanet/README_CN.md index 7e6d09379..36b50b7db 100644 --- a/official/cv/retinanet/README_CN.md +++
b/official/cv/retinanet/README_CN.md @@ -189,7 +189,7 @@ bash scripts/run_single_train.sh DEVICE_ID MINDRECORD_DIR PRE_TRAINED(optional) > Note: - For RANK_TABLE_FILE, see the reference material at this [link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html); for how to get the device_ip, see this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). + For RANK_TABLE_FILE, see the reference material at this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html); for how to get the device_ip, see this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). #### Running diff --git a/official/cv/semantic_human_matting/README.md b/official/cv/semantic_human_matting/README.md index 68ca05edd..4649c946b 100644 --- a/official/cv/semantic_human_matting/README.md +++ b/official/cv/semantic_human_matting/README.md @@ -78,7 +78,7 @@ ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to speed up the training of deep learning neural networks. Mixed precision training accelerates the computation process and reduces memory usage, while enabling a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching `reduce precision`. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses single-precision and half-precision data to speed up the training of deep learning neural networks. Mixed precision training accelerates the computation process and reduces memory usage, while enabling a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching `reduce precision`. # Environment Requirements diff --git a/official/cv/simple_pose/README.md b/official/cv/simple_pose/README.md index 9a78c5ca0..f22d647e2 100644 --- a/official/cv/simple_pose/README.md +++ b/official/cv/simple_pose/README.md @@ -57,7 +57,7 @@ Dataset used: COCO2017 ## [Mixed Precision](#contents) -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the
single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'. # [Environment Requirements](#contents) diff --git a/official/cv/squeezenet/README.md b/official/cv/squeezenet/README.md index 6b405c2cc..8b4637b8d 100644 --- a/official/cv/squeezenet/README.md +++ b/official/cv/squeezenet/README.md @@ -62,7 +62,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/) ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time.
Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'. # [Environment Requirements](#contents) @@ -687,7 +687,7 @@ Inference result is saved in current path, you can find result like this in acc. ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html).
Following the steps below, this is a simple example: - Running on Ascend diff --git a/official/cv/squeezenet/modelarts/README.md b/official/cv/squeezenet/modelarts/README.md index d8136687b..ddb66f9e2 100644 --- a/official/cv/squeezenet/modelarts/README.md +++ b/official/cv/squeezenet/modelarts/README.md @@ -62,7 +62,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/) ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'. # [Environment Requirements](#contents) @@ -687,7 +687,7 @@ Inference result is saved in current path, you can find result like this in acc.
### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example: - Running on Ascend diff --git a/official/cv/srcnn/README_CN.md b/official/cv/srcnn/README_CN.md index 564fa8855..3fb5fd375 100644 --- a/official/cv/srcnn/README_CN.md +++ b/official/cv/srcnn/README_CN.md @@ -71,7 +71,7 @@ SRCNN first uses bicubic interpolation to enlarge the low-resolution image to the target size ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching "reduce precision". # Environment Requirements diff --git a/official/cv/ssd/README.md b/official/cv/ssd/README.md index 7b4bee4da..3ab719b6b 100644 --- a/official/cv/ssd/README.md +++ b/official/cv/ssd/README.md @@ -306,7 +306,7 @@ Then you can run everything just like on ascend. ### [Training Process](#contents) -To train the model, run `train.py`.
If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** +To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** #### Training on Ascend diff --git a/official/cv/ssd/README_CN.md b/official/cv/ssd/README_CN.md index fdbdd254b..40fed4347 100644 --- a/official/cv/ssd/README_CN.md +++ b/official/cv/ssd/README_CN.md @@ -246,7 +246,7 @@ bash run_eval_gpu.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID] [CONFIG_PATH] ## Training Process -Run `train.py` to train the model. If `mindrecord_dir` is empty, [MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html) files will be generated from `coco_root` (COCO dataset) or `image_dir` and `anno_path` (your own dataset). **Note: if mindrecord_dir is not empty, mindrecord_dir will be used instead of the raw images.** +Run `train.py` to train the model. If `mindrecord_dir` is empty, [MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html) files will be generated from `coco_root` (COCO dataset) or `image_dir` and `anno_path` (your own dataset). **Note: if mindrecord_dir is not empty, mindrecord_dir will be used instead of the raw images.** ### Training on Ascend diff --git a/official/cv/ssim-ae/README_CN.md b/official/cv/ssim-ae/README_CN.md index 1e954bf4c..6a34cca87 100644 --- a/official/cv/ssim-ae/README_CN.md +++ b/official/cv/ssim-ae/README_CN.md @@ -108,7 +108,7 @@ MVTec AD dataset ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching "reduce precision". # Environment Requirements diff --git a/official/cv/tinydarknet/README_CN.md b/official/cv/tinydarknet/README_CN.md index 12023a750..c648fd1a1 100644 --- a/official/cv/tinydarknet/README_CN.md +++ b/official/cv/tinydarknet/README_CN.md @@ -64,7 +64,7 @@ Tiny-DarkNet is a 16-layer network proposed by Joseph Chet Redmon et al. for the classic <!-- Multiple machines hold replicas of the same model; each machine is assigned different data, and the computation results of all machines are then merged in some way --> -<!-- In deep learning, as the scale of datasets and parameter counts grows, the time and hardware resources required for training increase accordingly and eventually become a bottleneck. [Distributed parallel training](<https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training.html>) can reduce the demand on hardware such as memory and computing performance, and is an important optimization for training. This model uses the AUTO_PARALLEL automatic parallel mode provided by MindSpore: a distributed parallel mode that fuses data parallelism, model parallelism and hybrid parallelism; it can automatically build a cost model and find a parallel strategy with a shorter training time, selecting one parallel mode for the user. --> +<!-- In deep learning, as the scale of datasets and parameter counts grows, the time and hardware resources required for training increase accordingly and eventually become a bottleneck. [Distributed parallel training](<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/introduction.html>) can reduce the demand on hardware such as memory and computing performance, and is an important optimization for training. This model uses the AUTO_PARALLEL automatic parallel mode provided by MindSpore: a distributed parallel mode that fuses data parallelism, model parallelism and hybrid parallelism; it can automatically build a cost model and find a parallel strategy with a shorter training time, selecting one parallel mode for the user. --> # [Environment Requirements](#contents) diff --git a/official/cv/unet/README.md b/official/cv/unet/README.md index 627c4c714..3093c54e5 100644 --- a/official/cv/unet/README.md +++ b/official/cv/unet/README.md @@ -504,7 +504,7 @@ The above python command will run in the background.
You can view the results th ### Inference If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you -can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following +can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example: #### Running on Ascend 310 diff --git a/official/cv/unet/README_CN.md b/official/cv/unet/README_CN.md index fce8e75dd..1bba434d8 100644 --- a/official/cv/unet/README_CN.md +++ b/official/cv/unet/README_CN.md @@ -503,7 +503,7 @@ bash scripts/run_distribute_train_gpu.sh [RANKSIZE] [DATASET] [CONFIG_PATH] #### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this [link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html). The following is a simple example of the steps: +If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html). The following is a simple example of the steps: ##### Running on Ascend 310 diff --git a/official/cv/unet3d/README.md b/official/cv/unet3d/README.md index 49968f868..ecd8796e5 100644 --- a/official/cv/unet3d/README.md +++ b/official/cv/unet3d/README.md @@ -288,7 +288,7 @@ After training, you'll get some checkpoint files under the `train_parallel_fp[32 #### Distributed training on Ascend > Notes: -> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds.
Otherwise, the connection could time out since compiling time increases with the growth of model size. +> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out since compiling time increases with the growth of model size. > ```shell diff --git a/official/cv/vgg16/README.md b/official/cv/vgg16/README.md index ea8971d89..e47a112fb 100644 --- a/official/cv/vgg16/README.md +++ b/official/cv/vgg16/README.md @@ -94,7 +94,7 @@ Note that you can run the scripts based on the dataset mentioned in original pap ### Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
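The timeout advice above can also be applied from a launcher script before the training processes start; a minimal sketch, assuming only the `HCCL_CONNECT_TIMEOUT` variable and the 600-second value quoted in the note:

```python
import os

# Extend the HCCL connection-checking timeout (in seconds) before launching
# distributed training, as recommended for large models whose graph
# compilation can exceed the default 120 seconds.
os.environ["HCCL_CONNECT_TIMEOUT"] = "600"
print(os.environ["HCCL_CONNECT_TIMEOUT"])
```

Any process spawned after this point (for example via `subprocess`) inherits the variable, which is equivalent to the `export` shown in the note.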
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'. @@ -462,7 +462,7 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579 ... ``` -> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training.html). +> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorials/experts/en/master/parallel/introduction.html). > **Attention** This will bind the processor cores according to the `device_num` and total processor numbers. If you don't expect to run pretraining with binding processor cores, remove the operations about `taskset` in `scripts/run_distribute_train.sh` ##### Run vgg16 on GPU diff --git a/official/cv/vgg16/README_CN.md b/official/cv/vgg16/README_CN.md index d1423e1e1..62a469525 100644 --- a/official/cv/vgg16/README_CN.md +++ b/official/cv/vgg16/README_CN.md @@ -95,7 +95,7 @@ The VGG16 network mainly consists of several basic modules (including convolutional and pooling layers) and three ### Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching "reduce precision". @@ -462,7 +462,7 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579 ...
``` -> About rank_table.json, you can refer to [distributed parallel training](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training.html). +> About rank_table.json, you can refer to [distributed parallel training](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/introduction.html). > **Attention** This will bind the processor cores according to `device_num` and the total number of processors. If you do not want to bind processor cores during pretraining, remove the `taskset`-related operations in the `scripts/run_distribute_train.sh` script. ##### Run VGG16 on GPU diff --git a/official/cv/vit/README.md b/official/cv/vit/README.md index 7da8ba2bf..0b304c75c 100644 --- a/official/cv/vit/README.md +++ b/official/cv/vit/README.md @@ -65,7 +65,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/) ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'.
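The "reduce precision" behaviour these paragraphs describe can be illustrated outside MindSpore with a plain NumPy cast, assuming only that an FP16 operator effectively rounds its FP32 input to half precision:

```python
import numpy as np

# An FP32 value that has no exact FP16 representation: casting down rounds
# it to the nearest half-precision value, which is the precision loss the
# "reduce precision" log entries refer to.
x32 = np.float32(1.0001)
x16 = np.float16(x32)  # near 1.0 the FP16 spacing is ~0.000977, so 1.0001 rounds to 1.0
print(x32, x16)
```

This is only a conceptual sketch; in MindSpore the cast happens inside the backend, and the INFO log is the authoritative record of which operators ran at reduced precision.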
# [Environment Requirements](#contents) @@ -444,7 +444,7 @@ Current batch_ Size can only be set to 1. ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example: - Running on Ascend diff --git a/official/cv/vit/README_CN.md b/official/cv/vit/README_CN.md index 12b1b79d1..db969068f 100644 --- a/official/cv/vit/README_CN.md +++ b/official/cv/vit/README_CN.md @@ -68,7 +68,7 @@ ViT is built by chaining multiple transformer encoder modules in series, consisting of multiple inception mod ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching "reduce precision". # Environment Requirements @@ -450,7 +450,7 @@ python export.py --config_path=[CONFIG_PATH] ### Inference -If you need to use this trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html). The following is an example of the steps: +If you need to use this trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html). The following is an example of the steps: - Running on Ascend diff --git a/official/cv/warpctc/README.md b/official/cv/warpctc/README.md index d8554d92e..783c2596b 100644 --- a/official/cv/warpctc/README.md +++ b/official/cv/warpctc/README.md @@ -254,7 +254,7 @@ save_checkpoint_path: "./checkpoint" # path to save checkpoint ### [Training Process](#contents) -- Set options in `default_config.yaml`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset. +- Set options in `default_config.yaml`, including learning rate and other network hyperparameters. Click [MindSpore dataset preparation tutorial](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about dataset. #### [Training](#contents) diff --git a/official/cv/warpctc/README_CN.md b/official/cv/warpctc/README_CN.md index 4e8750d10..6ead399ac 100644 --- a/official/cv/warpctc/README_CN.md +++ b/official/cv/warpctc/README_CN.md @@ -257,7 +257,7 @@ save_checkpoint_path: "./checkpoints" # path to save checkpoints, relative to t ## Training Process -- Set options in `default_config.yaml`, including learning rate and network hyperparameters. Click the [MindSpore dataset loading tutorial](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_sample.html) for more information. +- Set options in `default_config.yaml`, including learning rate and network hyperparameters. Click the [MindSpore dataset loading tutorial](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html) for more information. ### Training diff --git a/official/cv/xception/README.md b/official/cv/xception/README.md index 5ae40e616..6dc790198 100644 --- a/official/cv/xception/README.md +++ b/official/cv/xception/README.md @@ -54,7 +54,7 @@ Dataset used can refer to paper.
## [Mixed Precision](#contents) -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'. @@ -193,7 +193,7 @@ bash run_eval_gpu.sh DEVICE_ID DATASET_PATH CHECKPOINT_PATH bash run_infer_310.sh MINDIR_PATH DATA_PATH LABEL_FILE DEVICE_ID ``` -> Notes: RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html), and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). +> Notes: RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
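For readers who have never seen a rank table, the following sketch shows the rough shape of such a file; the field names and values here are assumptions based on common hccl.json examples, and the hccl_tools utility linked in the note above is the authoritative way to generate a real one:

```python
import json

# Hypothetical rank table for one server with 8 Ascend devices.  Real files
# are produced by utils/hccl_tools, and field names may vary across driver
# versions -- treat this only as an illustration of the structure.
rank_table = {
    "version": "1.0",
    "server_count": "1",
    "server_list": [{
        "server_id": "10.155.111.140",  # assumed host IP
        "device": [
            {"device_id": str(i), "device_ip": f"192.98.92.{i}", "rank_id": str(i)}
            for i in range(8)
        ],
    }],
    "status": "completed",
}
print(json.dumps(rank_table, indent=2)[:120])
```

The training scripts then receive the path to this file through the `RANK_TABLE_FILE` argument or environment variable.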
### Launch diff --git a/official/cv/yolov3_resnet18/README.md b/official/cv/yolov3_resnet18/README.md index 46281a40f..14e442557 100644 --- a/official/cv/yolov3_resnet18/README.md +++ b/official/cv/yolov3_resnet18/README.md @@ -270,7 +270,7 @@ After installing MindSpore via the official website, you can start training and ### Training on Ascend -To train the model, run `train.py` with the dataset `image_dir`, `anno_path` and `mindrecord_dir`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) file by `image_dir` and `anno_path`(the absolute image path is joined by the `image_dir` and the relative path in `anno_path`). **Note if `mindrecord_dir` isn't empty, it will use `mindrecord_dir` rather than `image_dir` and `anno_path`.** +To train the model, run `train.py` with the dataset `image_dir`, `anno_path` and `mindrecord_dir`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) file by `image_dir` and `anno_path`(the absolute image path is joined by the `image_dir` and the relative path in `anno_path`). **Note if `mindrecord_dir` isn't empty, it will use `mindrecord_dir` rather than `image_dir` and `anno_path`.** - Stand alone mode @@ -311,7 +311,7 @@ Note the results is two-classification(person and face) used our own annotations ### Evaluation on Ascend -To eval, run `eval.py` with the dataset `image_dir`, `anno_path`(eval txt), `mindrecord_dir` and `ckpt_path`. `ckpt_path` is the path of [checkpoint](https://www.mindspore.cn/docs/programming_guide/en/master/save_model.html) file. +To eval, run `eval.py` with the dataset `image_dir`, `anno_path`(eval txt), `mindrecord_dir` and `ckpt_path`. `ckpt_path` is the path of [checkpoint](https://www.mindspore.cn/tutorials/en/master/advanced/train/save.html) file.
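The `mindrecord_dir` precedence described in this hunk can be sketched as plain Python; `resolve_data_source` is a hypothetical helper for illustration, not the repository's actual API:

```python
def resolve_data_source(mindrecord_dir, image_dir, anno_path):
    # A non-empty mindrecord_dir takes precedence over the raw images
    # and annotations.
    if mindrecord_dir:
        return ("mindrecord", mindrecord_dir)
    # Otherwise MindRecord files would be generated from image_dir plus the
    # relative image paths listed in anno_path.
    return ("generate", image_dir, anno_path)

print(resolve_data_source("", "./dataset", "./dataset/train.txt"))
print(resolve_data_source("./Mindrecord_train", "./dataset", "./dataset/train.txt"))
```

In the real scripts the "generate" branch is where the MindRecord conversion linked above takes place before training starts.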
```bash bash run_eval.sh 0 yolo.ckpt ./Mindrecord_eval ./dataset ./dataset/eval.txt diff --git a/official/cv/yolov3_resnet18/README_CN.md b/official/cv/yolov3_resnet18/README_CN.md index 6b0719df8..6dd798f1a 100644 --- a/official/cv/yolov3_resnet18/README_CN.md +++ b/official/cv/yolov3_resnet18/README_CN.md @@ -269,7 +269,7 @@ The overall network architecture of YOLOv3 is as follows: ### Training on Ascend -To train the model, run `train.py` with the dataset `image_dir`, `anno_path` and `mindrecord_dir`. If `mindrecord_dir` is empty, [MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html) files will be generated from `image_dir` and `anno_path` (the absolute image path is joined from `image_dir` and the relative path in `anno_path`). **Note: if `mindrecord_dir` is not empty, `mindrecord_dir` will be used rather than `image_dir` and `anno_path`.** +To train the model, run `train.py` with the dataset `image_dir`, `anno_path` and `mindrecord_dir`. If `mindrecord_dir` is empty, [MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html) files will be generated from `image_dir` and `anno_path` (the absolute image path is joined from `image_dir` and the relative path in `anno_path`). **Note: if `mindrecord_dir` is not empty, `mindrecord_dir` will be used rather than `image_dir` and `anno_path`.** - Standalone mode @@ -310,7 +310,7 @@ The overall network architecture of YOLOv3 is as follows: ### Evaluation on Ascend -To evaluate, run `eval.py` with the dataset `image_dir`, `anno_path` (evaluation TXT), `mindrecord_dir` and `ckpt_path`. `ckpt_path` is the path of the [checkpoint](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/save_model.html) file. +To evaluate, run `eval.py` with the dataset `image_dir`, `anno_path` (evaluation TXT), `mindrecord_dir` and `ckpt_path`. `ckpt_path` is the path of the [checkpoint](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/train/save.html) file. ```shell script bash run_eval.sh 0 yolo.ckpt ./Mindrecord_eval ./dataset ./dataset/eval.txt diff --git a/official/nlp/bert/README.md b/official/nlp/bert/README.md index e0f4f38e1..72f6bb9d5 100644 --- a/official/nlp/bert/README.md +++ b/official/nlp/bert/README.md @@ -209,8 +209,6 @@ Please follow the instructions in the link below to create an hccl.json file in For distributed training among multiple machines, training command should be executed on each machine in a small time interval.
Thus, an hccl.json is needed on each machine. [merge_hccl](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools#merge_hccl) is a tool to create hccl.json for the multi-machine case.
-For the dataset, if you want to set the format and parameters, a schema configuration file in JSON format needs to be created; please refer to the [tfrecord](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_loading.html#tfrecord) format.
-
 ```text
 For pretraining, the schema file contains ["input_ids", "input_mask", "segment_ids", "next_sentence_labels", "masked_lm_positions", "masked_lm_ids", "masked_lm_weights"].

diff --git a/official/nlp/cpm/README.md b/official/nlp/cpm/README.md
index a309cd6a6..33afd01d0 100644
--- a/official/nlp/cpm/README.md
+++ b/official/nlp/cpm/README.md
@@ -309,7 +309,7 @@ After processing, the MindRecord files for training and inference are generated in
 ### Finetune Training Process
-- Set options in `src/config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about the dataset.
+- Set options in `src/config.py`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
 - Run `run_distribute_train_ascend_single_machine.sh` for distributed and single-machine training of the CPM model.
diff --git a/official/nlp/cpm/README_CN.md b/official/nlp/cpm/README_CN.md
index bfa87f8ad..f6bc6ad1b 100644
--- a/official/nlp/cpm/README_CN.md
+++ b/official/nlp/cpm/README_CN.md
@@ -309,7 +309,7 @@ Parameters for dataset and network (Training/Evaluation):
 ### Finetune Training Process
-- Set options in `src/config.py`, including model parallelism, batch size, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_sample.html) for more information about the dataset.
+- Set options in `src/config.py`, including model parallelism, batch size, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html) for more information about the dataset.
 - Run `run_distribute_train_ascend_single_machine.sh` for single-machine 8-device distributed training of the CPM model.

diff --git a/official/nlp/duconv/README_CN.md b/official/nlp/duconv/README_CN.md
index 95047b1b3..afb773f9c 100644
--- a/official/nlp/duconv/README_CN.md
+++ b/official/nlp/duconv/README_CN.md
@@ -85,7 +85,7 @@ The Proactive Conversation model consists of four parts:
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
 Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces precision to process the data. Users can enable the INFO log and search for "reduce precision" to view the operators whose precision was reduced.
 # Environment Requirements

diff --git a/official/nlp/mass/README.md b/official/nlp/mass/README.md
index 98e0c045d..d421207e6 100644
--- a/official/nlp/mass/README.md
+++ b/official/nlp/mass/README.md
@@ -501,7 +501,7 @@ subword-nmt
 rouge
 ```
-<https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html>
+<https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html>
 # Get started
@@ -563,7 +563,7 @@ Get the log and output files under the path
`./train_mass_*/`, and the model fil
 ## Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html).
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html).
 For inference, first configure the options in `default_config.yaml`:
 - Set the `data_path` node in `default_config.yaml` to the dataset path.

diff --git a/official/nlp/mass/README_CN.md b/official/nlp/mass/README_CN.md
index 3020cf77c..fc8f203e7 100644
--- a/official/nlp/mass/README_CN.md
+++ b/official/nlp/mass/README_CN.md
@@ -505,7 +505,7 @@ subword-nmt
 rouge
 ```
-<https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html>
+<https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html>
 # Quick Start
@@ -567,7 +567,7 @@ bash run_gpu.sh -t t -n 1 -i 1
 ## Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, refer to this [Link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html).
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, refer to this [Link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html).
 For inference, first configure the options in `config.json`:
 - Set the `data_path` node in `default_config.yaml` to the dataset path.

diff --git a/official/nlp/pangu_alpha/README.md b/official/nlp/pangu_alpha/README.md
index d885a1fea..bed9c6d0e 100644
--- a/official/nlp/pangu_alpha/README.md
+++ b/official/nlp/pangu_alpha/README.md
@@ -1,4 +1,4 @@
-锘�# Contents
+# Contents
 - [Contents](#contents)
 - [PanGu-Alpha Description](#pangu-alpha-description)
@@ -45,7 +45,7 @@ with our parallel setting. We summarized the training tricks as follows:
 2. Pipeline Model Parallelism
 3.
Optimizer Model Parallelism
-The above features can be found [here](https://www.mindspore.cn/docs/programming_guide/en/master/auto_parallel.html).
+The above features can be found [here](https://www.mindspore.cn/tutorials/experts/en/master/parallel/introduction.html).
 More amazing features are still under development.
 The technical report and checkpoint file can be found [here](https://git.openi.org.cn/PCL-Platform.Intelligence/PanGu-AIpha).
@@ -151,7 +151,7 @@ bash scripts/run_distribute_train.sh /data/pangu_30_step_ba64/ /root/hccl_8p.jso
 The above command involves some `args` described below:
 - DATASET: The path to the MindRecord files' parent directory. For example: `/home/work/mindrecord/`.
-- RANK_TABLE: The details of the rank table can be found [here](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html). It's a JSON file that describes the `device id`, `service ip` and `rank`.
+- RANK_TABLE: The details of the rank table can be found [here](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html). It's a JSON file that describes the `device id`, `service ip` and `rank`.
 - RANK_SIZE: The device number. This can be your total number of devices. For example, 8, 16, 32 ...
 - TYPE: The parameter init type. The parameters will be initialized with float32. Or you can replace it with `fp16`. This will save a little memory used on the device.
 - MODE: The configure mode. This mode will set the `hidden size` and `layers` to make the parameter number near 2.6 billion. The other modes can be `13B` (`hidden size` 5120 and `layers` 40, which needs at least 16 cards to train) and `200B`.
@@ -189,7 +189,7 @@ device0/log0.log).
 The script will launch the GPU training through `mpirun`; the user can run the following command on any machine to start training.
 Note that when starting multi-node training, the variables `NCCL_SOCKET_IFNAME` `NCCL_IB_HCA` may be different on some servers. If you meet some errors and
If you meet some errors and -strange phenomenon, please unset or set the NCCL variables. Details can be checked on this [link](https://www.mindspore.cn/docs/faq/zh-CN/master/distributed_configure.html). +strange phenomenon, please unset or set the NCCL variables. Details can be checked on this [link](https://www.mindspore.cn/docs/zh-CN/master/faq/distributed_configure.html). ```bash # The following variables are optional. @@ -200,7 +200,7 @@ bash scripts/run_distributed_train_gpu.sh RANK_SIZE HOSTFILE DATASET PER_BATCH M ``` - RANK_SIZE: The device number. This can be your total device numbers. For example, 8, 16, 32 ... -- HOSTFILE: It's a text file describes the host ip and its devices. Please see our [tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_gpu.html) or [OpenMPI](https://www.open-mpi.org/) for more details. +- HOSTFILE: It's a text file describes the host ip and its devices. Please see our [tutorial](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html) or [OpenMPI](https://www.open-mpi.org/) for more details. - DATASET: The path to the mindrecord files's parent directory . For example: `/home/work/mindrecord/`. - PER_BATCH: The batch size for each data parallel-way. - MODE: Can be `1.3B` `2.6B`, `13B` and `200B`. @@ -222,7 +222,7 @@ bash scripts/run_distribute_train_moe_host_device.sh DATASET RANK_TABLE RANK_SIZ The above command involves some `args` described below: - DATASET: The path to the mindrecord files's parent directory . For example: `/home/work/mindrecord/`. -- RANK_TABLE: The details of the rank table can be found [here](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html). It's a json file describes the `device id`, `service ip` and `rank`. +- RANK_TABLE: The details of the rank table can be found [here](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html). 
It's a JSON file that describes the `device id`, `service ip` and `rank`.
 - RANK_SIZE: The device number. This can be your total number of devices. For example, 8, 16, 32 ...
 - TYPE: The parameter init type. The parameters will be initialized with float32. Or you can replace it with `fp16`. This will save a little memory used on the device.
 - MODE: The configure mode. This mode will set the `hidden size` and `layers` to make the parameter number near 2.6 billion. The other modes can be `13B` (`hidden size` 5120 and `layers` 40, which needs at least 16 cards to train) and `200B`.

diff --git a/official/nlp/transformer/README.md b/official/nlp/transformer/README.md
index 3e35c3784..4fec4896d 100644
--- a/official/nlp/transformer/README.md
+++ b/official/nlp/transformer/README.md
@@ -342,7 +342,7 @@ Parameters for learning rate:
 ## [Training Process](#contents)
-- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about the dataset.
+- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
 - Run `run_standalone_train.sh` for non-distributed training of the Transformer model.
diff --git a/official/nlp/transformer/README_CN.md b/official/nlp/transformer/README_CN.md
index be21a0a8c..913aafe56 100644
--- a/official/nlp/transformer/README_CN.md
+++ b/official/nlp/transformer/README_CN.md
@@ -341,7 +341,7 @@ Parameters for learning rate:
 ### Training Process
-- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_sample.html) for more information about the dataset.
+- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset.html) for more information about the dataset.
 - Run `run_standalone_train.sh` for non-distributed training of the Transformer model.

diff --git a/official/recommend/ncf/README.md b/official/recommend/ncf/README.md
index f12d20935..72b829054 100644
--- a/official/recommend/ncf/README.md
+++ b/official/recommend/ncf/README.md
@@ -73,7 +73,7 @@ In both datasets, the timestamp is represented in seconds since midnight Coordin
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for 'reduce precision'.
 # [Environment Requirements](#contents)
@@ -335,9 +335,9 @@ Inference result is saved in the current path; you can find results like this in acc.
 ### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Follow the steps below for a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Follow the steps below for a simple example:
-<https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html>
+<https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html>
 ```python
 # Load unseen dataset for inference

diff --git a/research/audio/fcn-4/README.md b/research/audio/fcn-4/README.md
index 663a5e305..34e07d0c6 100644
--- a/research/audio/fcn-4/README.md
+++ b/research/audio/fcn-4/README.md
@@ -41,7 +41,7 @@ FCN-4 is a convolutional neural network architecture, its name FCN-4 comes from
 ### Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time.
Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for 'reduce precision'.
 ## [Environment Requirements](#contents)

diff --git a/research/audio/speech_transformer/README.md b/research/audio/speech_transformer/README.md
index 246ff4370..e665fba09 100644
--- a/research/audio/speech_transformer/README.md
+++ b/research/audio/speech_transformer/README.md
@@ -187,7 +187,7 @@ Dataset is preprocessed using `Kaldi` and converts kaldi binaries into Python pi
 ## [Training Process](#contents)
-- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about the dataset.
+- Set options in `default_config.yaml`, including loss_scale, learning rate and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about the dataset.
 - Run `run_standalone_train_gpu.sh` for non-distributed training of the Transformer model.
diff --git a/research/cv/3D_DenseNet/README.md b/research/cv/3D_DenseNet/README.md
index e3ad419a9..68a648a46 100644
--- a/research/cv/3D_DenseNet/README.md
+++ b/research/cv/3D_DenseNet/README.md
@@ -222,7 +222,7 @@ Dice Coefficient (DC) for 9th subject (9 subjects for training and 1 subject for
 |-------------------|:-------------------:|:---------------------:|:-----:|:--------------:|
 |3D-SkipDenseSeg | 93.66| 90.80 | 90.65 | 91.70 |
-Notes: RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html), and the device_ip can be obtained as shown in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it is better to export the environment variable HCCL_CONNECT_TIMEOUT=600 to extend the HCCL connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out, since compilation time increases with model size. To avoid operator errors, you should change the code as below:
+Notes: RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and the device_ip can be obtained as shown in [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it is better to export the environment variable HCCL_CONNECT_TIMEOUT=600 to extend the HCCL connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out, since compilation time increases with model size.
To avoid operator errors, you should change the code as below:
 in train.py:

diff --git a/research/cv/3D_DenseNet/README_CN.md b/research/cv/3D_DenseNet/README_CN.md
index 2f81477c8..e9d5bd111 100644
--- a/research/cv/3D_DenseNet/README_CN.md
+++ b/research/cv/3D_DenseNet/README_CN.md
@@ -1,5 +1,3 @@
-锘�
-
 # Contents
 [View English](./README.md)
@@ -214,7 +212,7 @@ bash run_eval.sh 3D-DenseSeg-20000_36.ckpt data/data_val
 |-------------------|:-------------------:|:---------------------:|:-----:|:--------------:|
 |3D-SkipDenseSeg | 93.66| 90.80 | 90.65 | 91.70 |
-Notes: Distributed training requires a RANK_TABLE_FILE; for the file, refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html), and for setting device_ip refer to this [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it is better to export the environment variable HCCL_CONNECT_TIMEOUT=600 to extend the HCCL connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection may time out, because compilation time increases with model size. Under version 1.3.0, 3D operators may have some issues, and you may need to modify part of the code in context.set_auto_parallel_context:
+Notes: Distributed training requires a RANK_TABLE_FILE; for the file, refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), and for setting device_ip refer to this [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it is better to export the environment variable HCCL_CONNECT_TIMEOUT=600 to extend the HCCL connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection may time out, because compilation time increases with model size. Under version 1.3.0, 3D operators may have some issues, and you may need to modify part of the code in context.set_auto_parallel_context:
 in train.py:

diff --git a/research/cv/APDrawingGAN/README_CN.md b/research/cv/APDrawingGAN/README_CN.md
index 292a446ab..15d62ce12 100644
--- a/research/cv/APDrawingGAN/README_CN.md
+++ b/research/cv/APDrawingGAN/README_CN.md
@@ -86,7 +86,7 @@ Obtain the auxiliary.ckpt file: from https://cg.cs.tsinghua.edu.cn/people/~Yongjin/A
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
 Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces precision to process the data. Users can enable the INFO log and search for "reduce precision" to view the operators whose precision was reduced.
 # Environment Requirements

diff --git a/research/cv/AlignedReID++/README_CN.md b/research/cv/AlignedReID++/README_CN.md
index b44ae7167..53bdce3e0 100644
--- a/research/cv/AlignedReID++/README_CN.md
+++ b/research/cv/AlignedReID++/README_CN.md
@@ -61,7 +61,7 @@ AlignedReID++ adopts resnet50 as the backbone and renames what was proposed in AlignedReID
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
 Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces precision to process the data. Users can enable the INFO log and search for "reduce precision" to view the operators whose precision was reduced.
 # Environment Requirements
@@ -403,7 +403,7 @@ Evaluating AlignedReID++ on market1501
 ### Inference
-If you need to use the trained model to perform inference on multiple hardware platforms such as GPU, Ascend 910 or Ascend 310, refer to this [Link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html). The following is an example of the procedure:
+If you need to use the trained model to perform inference on multiple hardware platforms such as GPU, Ascend 910 or Ascend 310, refer to this [Link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html). The following is an example of the procedure:
 Before inference, we need to export the model first; MINDIR can be exported in a local environment. batch_size defaults to 1.
diff --git a/research/cv/AlphaPose/README_CN.md b/research/cv/AlphaPose/README_CN.md
index eb2809997..39c521465 100644
--- a/research/cv/AlphaPose/README_CN.md
+++ b/research/cv/AlphaPose/README_CN.md
@@ -55,7 +55,7 @@ The overall network architecture of AlphaPose is as follows:
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
 Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces precision to process the data. Users can enable the INFO log and search for "reduce precision" to view the operators whose precision was reduced.
 # Environment Requirements

diff --git a/research/cv/DDRNet/README_CN.md b/research/cv/DDRNet/README_CN.md
index e0ef16c49..0723bcef7 100644
--- a/research/cv/DDRNet/README_CN.md
+++ b/research/cv/DDRNet/README_CN.md
@@ -53,7 +53,7 @@
 ## Mixed Precision
-Using [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+Using [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)
 training, which uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
 # [Environment Requirements](#contents)

diff --git a/research/cv/EDSR/README_CN.md b/research/cv/EDSR/README_CN.md
index 1af53d936..cec64c247 100644
--- a/research/cv/EDSR/README_CN.md
+++ b/research/cv/EDSR/README_CN.md
@@ -97,7 +97,7 @@ EDSR consists of multiple optimized residual blocks connected in series; compared with the original r
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html?highlight=%E6%B7%B7%E5%90%88%E7%B2%BE%E5%BA%A6) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html?highlight=%E6%B7%B7%E5%90%88%E7%B2%BE%E5%BA%A6) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
 Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces precision to process the data. Users can enable the INFO log and search for "reduce precision" to view the operators whose precision was reduced.
 # Environment Requirements

diff --git a/research/cv/EGnet/README_CN.md b/research/cv/EGnet/README_CN.md
index e945d0a66..8f9c9f0e5 100644
--- a/research/cv/EGnet/README_CN.md
+++ b/research/cv/EGnet/README_CN.md
@@ -359,7 +359,7 @@ bash run_standalone_train_gpu.sh
 bash run_distribute_train.sh 8 [RANK_TABLE_FILE]
 ```
-For offline distributed training, refer to the [MindSpore distributed parallel training basic example (Ascend)](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html)
+For offline distributed training, refer to the [MindSpore distributed parallel training basic example (Ascend)](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html)
 - Online ModelArts distributed training

diff --git a/research/cv/GENet_Res50/README_CN.md b/research/cv/GENet_Res50/README_CN.md
index 726cf0d2f..0a7736873 100644
--- a/research/cv/GENet_Res50/README_CN.md
+++ b/research/cv/GENet_Res50/README_CN.md
@@ -64,7 +64,7 @@ The ImageNet 2017 and ImageNet 2012 datasets are consistent
 ## Mixed Precision
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
 Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces precision to process the data. Users can enable the INFO log and search for "reduce precision" to view the operators whose precision was reduced.
 # Environment Requirements

diff --git a/research/cv/LightCNN/README.md b/research/cv/LightCNN/README.md
index c2d524a5a..21f59b7fb 100644
--- a/research/cv/LightCNN/README.md
+++ b/research/cv/LightCNN/README.md
@@ -119,7 +119,7 @@ Dataset structure:
 ## [Mixed Precision](#mixedprecision)
-The [mixed-precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training
+The [mixed-precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training
 method uses single-precision and half-precision data to improve the training speed of deep learning neural networks,
 while maintaining the network accuracy that can be achieved by single-precision training. Mixed-precision training
 increases computing speed and reduces memory usage, while supporting training larger models or achieving larger batches
@@ -139,7 +139,7 @@ reduce precision" to view the operators with reduced precision.
 - Generate a config json file for 8-card training
   - [Simple tutorial](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools)
   - For the detailed configuration method, please refer to
-    the [official website tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html#configuring-distributed-environment-variables).
+    the [official website tutorial](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html#configuring-distributed-environment-variables).
 # [Quick start](#Quickstart)
@@ -637,7 +637,7 @@ Please check the official [homepage](https://gitee.com/mindspore/models).
 [5]: https://pan.baidu.com/s/1eR6vHFO
-[6]: https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html
+[6]: https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html
 [7]: http://www.cbsr.ia.ac.cn/users/scliao/projects/blufr/BLUFR.zip

diff --git a/research/cv/LightCNN/README_CN.md b/research/cv/LightCNN/README_CN.md
index 97e91e010..4866de2f2 100644
--- a/research/cv/LightCNN/README_CN.md
+++ b/research/cv/LightCNN/README_CN.md
@@ -107,7 +107,7 @@ LightCNN is suitable for face recognition datasets with heavy noise, and proposes the maxout
 - [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
 - Generate a config json file for 8-card training.
   - [Simple tutorial](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools)
-  - For the detailed configuration method, please refer to the [official website tutorial](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html#配置分布式环境变量).
+  - For the detailed configuration method, please refer to the [official website tutorial](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html#配置分布式环境变量).
 # Quick Start
@@ -516,7 +516,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [DEVICE_ID]
 [3]: https://drive.google.com/file/d/0ByNaVHFekDPRbFg1YTNiMUxNYXc/view?usp=sharing
 [4]: https://hyper.ai/datasets/5543
 [5]: https://pan.baidu.com/s/1eR6vHFO
-[6]: https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html
+[6]: https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html
 [7]: http://www.cbsr.ia.ac.cn/users/scliao/projects/blufr/BLUFR.zip
 [8]: https://github.com/AlfredXiangWu/face_verification_experiment/blob/master/code/lfw_pairs.mat
 [9]: https://github.com/AlfredXiangWu/face_verification_experiment/blob/master/results/LightenedCNN_B_lfw.mat

diff --git a/research/cv/ManiDP/Readme.md b/research/cv/ManiDP/Readme.md
index 2f4302712..403094c0f 100644
--- a/research/cv/ManiDP/Readme.md
+++ b/research/cv/ManiDP/Readme.md
@@ -40,7 +40,7 @@ Dataset used: [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html)
 ## [Mixed
Precision (Ascend)](#contents)
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for 'reduce precision'.
# [Environment Requirements](#contents)

diff --git a/research/cv/NFNet/README_CN.md b/research/cv/NFNet/README_CN.md
index d46125b2b..ee4fdad5e 100644
--- a/research/cv/NFNet/README_CN.md
+++ b/research/cv/NFNet/README_CN.md
@@ -57,7 +57,7 @@
 ## Mixed Precision
-Using [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+Using [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)
 training, which uses single-precision and half-precision data to improve the training speed of deep learning neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
 # [Environment Requirements](#contents)

diff --git a/research/cv/RefineDet/README_CN.md b/research/cv/RefineDet/README_CN.md
index 3645326d5..92353c907 100644
--- a/research/cv/RefineDet/README_CN.md
+++ b/research/cv/RefineDet/README_CN.md
@@ -211,7 +211,7 @@ sh run_eval_gpu.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
 ## Training Process
-Run `train.py` to train the model. If `mindrecord_dir` is empty, [MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html) files are generated from `coco_root` (the COCO dataset) or `image_dir` and `anno_path` (your own dataset). **Note that if mindrecord_dir is not empty, mindrecord_dir will be used instead of the original images.**
+Run `train.py` to train the model. If `mindrecord_dir` is empty, [MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html) files are generated from `coco_root` (the COCO dataset) or `image_dir` and `anno_path` (your own dataset). **Note that if mindrecord_dir is not empty, mindrecord_dir will be used instead of the original images.**
 ### Training on Ascend

diff --git a/research/cv/RefineNet/README.md b/research/cv/RefineNet/README.md
index 413b7e363..fb8c1e4db 100644
--- a/research/cv/RefineNet/README.md
+++ b/research/cv/RefineNet/README.md
@@ -84,7 +84,7 @@ The Pascal VOC dataset and the Semantic Boundaries Dataset (SBD
 ## Mixed Precision
-Using [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)
+Using [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)
鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� diff --git a/research/cv/SE-Net/README.md b/research/cv/SE-Net/README.md index dd8993efe..6c981e56c 100644 --- a/research/cv/SE-Net/README.md +++ b/research/cv/SE-Net/README.md @@ -67,7 +67,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/) ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 鈥榬educe precision鈥�. 
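The MindRecord note in the RefineDet hunk above describes a selection rule for `train.py`: a non-empty `mindrecord_dir` takes precedence over raw images; otherwise MindRecord files are generated from `coco_root` (COCO) or from `image_dir` plus `anno_path` (a custom dataset). A hypothetical helper (not the actual `train.py` code) makes the precedence explicit:

```python
def resolve_data_source(mindrecord_dir, coco_root=None, image_dir=None, anno_path=None):
    """Sketch of the dataset-selection rule described for RefineDet's train.py."""
    if mindrecord_dir:
        # Note: a non-empty mindrecord_dir replaces the raw images entirely.
        return ("use_existing_mindrecord", mindrecord_dir)
    if coco_root:
        return ("generate_mindrecord_from_coco", coco_root)
    if image_dir and anno_path:
        return ("generate_mindrecord_from_custom", (image_dir, anno_path))
    raise ValueError("no dataset source configured")

print(resolve_data_source("", coco_root="/data/coco"))
```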
# [Environment Requirements](#contents) diff --git a/research/cv/SE_ResNeXt50/README_CN.md b/research/cv/SE_ResNeXt50/README_CN.md index e4d3136ab..e9e54a23c 100644 --- a/research/cv/SE_ResNeXt50/README_CN.md +++ b/research/cv/SE_ResNeXt50/README_CN.md @@ -56,7 +56,7 @@ SE-ResNeXt鐨勬€讳綋缃戠粶鏋舵瀯濡備笅锛� [閾炬帴](https://arxiv.org/abs/1709.015 ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 鐨勮缁冩柟娉曪紝浣跨敤鏀寔鍗曠簿搴﹀拰鍗婄簿搴︽暟鎹潵鎻愰珮娣卞害瀛︿範绁炵粡缃戠粶鐨勮缁冮€熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 鐨勮缁冩柟娉曪紝浣跨敤鏀寔鍗曠簿搴﹀拰鍗婄簿搴︽暟鎹潵鎻愰珮娣卞害瀛︿範绁炵粡缃戠粶鐨勮缁冮€熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� # 鐜瑕佹眰 diff --git a/research/cv/TNT/README_CN.md b/research/cv/TNT/README_CN.md index bf21f0efc..cf8699f34 100644 --- a/research/cv/TNT/README_CN.md +++ b/research/cv/TNT/README_CN.md @@ -53,7 +53,7 @@ Transformer鏄竴绉嶆渶鍒濈敤浜嶯LP浠诲姟鐨勫熀浜庤嚜娉ㄦ剰鍔涚殑绁炵粡缃戠粶銆� ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 鐨勮缁冩柟娉曪紝浣跨敤鏀寔鍗曠簿搴﹀拰鍗婄簿搴︽暟鎹潵鎻愰珮娣卞害瀛︿範绁炵粡缃戠粶鐨勮缁冮€熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� # [鐜瑕佹眰](#鐩綍) diff --git a/research/cv/cct/README_CN.md b/research/cv/cct/README_CN.md index f67e02896..b61064ab0 100644 --- a/research/cv/cct/README_CN.md +++ b/research/cv/cct/README_CN.md @@ -51,7 +51,7 @@ ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 
鐨勮缁冩柟娉曪紝浣跨敤鏀寔鍗曠簿搴﹀拰鍗婄簿搴︽暟鎹潵鎻愰珮娣卞害瀛︿範绁炵粡缃戠粶鐨勮缁冮€熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� # [鐜瑕佹眰](#鐩綍) diff --git a/research/cv/convnext/README_CN.md b/research/cv/convnext/README_CN.md index eec99f773..09a902628 100644 --- a/research/cv/convnext/README_CN.md +++ b/research/cv/convnext/README_CN.md @@ -53,7 +53,7 @@ ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 鐨勮缁冩柟娉曪紝浣跨敤鏀寔鍗曠簿搴﹀拰鍗婄簿搴︽暟鎹潵鎻愰珮娣卞害瀛︿範绁炵粡缃戠粶鐨勮缁冮€熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� # [鐜瑕佹眰](#鐩綍) diff --git a/research/cv/dcgan/README.md b/research/cv/dcgan/README.md index 5fd8c0e66..cca854467 100644 --- a/research/cv/dcgan/README.md +++ b/research/cv/dcgan/README.md @@ -137,7 +137,7 @@ dcgan_cifar10_cfg { ## [Training Process](#contents) -- Set options in `config.py`, including learning rate, output filename and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset. +- Set options in `config.py`, including learning rate, output filename and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about dataset. 
### [Training](#content) diff --git a/research/cv/deeplabv3plus/README_CN.md b/research/cv/deeplabv3plus/README_CN.md index 38a80416f..a404b5c28 100644 --- a/research/cv/deeplabv3plus/README_CN.md +++ b/research/cv/deeplabv3plus/README_CN.md @@ -85,7 +85,7 @@ Pascal VOC鏁版嵁闆嗗拰璇箟杈圭晫鏁版嵁闆嗭紙Semantic Boundaries Dataset锛孲BD ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� # 鐜瑕佹眰 diff --git a/research/cv/dlinknet/README.md b/research/cv/dlinknet/README.md index 06cf9bded..d272c06d9 100644 --- a/research/cv/dlinknet/README.md +++ b/research/cv/dlinknet/README.md @@ -316,7 +316,7 @@ bash scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [CONFIG_PATH] #### inference If you need to use the trained model to perform inference on multiple hardware platforms, such as Ascend 910 or Ascend 310, you -can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following +can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). 
Following the steps below, this is a simple example: ##### running-on-ascend-310 diff --git a/research/cv/dlinknet/README_CN.md b/research/cv/dlinknet/README_CN.md index 2cdb3ed0c..2e43b9cb7 100644 --- a/research/cv/dlinknet/README_CN.md +++ b/research/cv/dlinknet/README_CN.md @@ -320,7 +320,7 @@ bash scripts/run_distribute_train.sh [RANK_TABLE_FILE] [DATASET] [CONFIG_PATH] #### 鎺ㄧ悊 -濡傛灉鎮ㄩ渶瑕佷娇鐢ㄨ缁冨ソ鐨勬ā鍨嬪湪Ascend 910銆丄scend 310绛夊涓‖浠跺钩鍙颁笂杩涜鎺ㄧ悊涓婅繘琛屾帹鐞嗭紝鍙弬鑰冩[閾炬帴](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)銆備笅闈㈡槸涓€涓畝鍗曠殑鎿嶄綔姝ラ绀轰緥锛� +濡傛灉鎮ㄩ渶瑕佷娇鐢ㄨ缁冨ソ鐨勬ā鍨嬪湪Ascend 910銆丄scend 310绛夊涓‖浠跺钩鍙颁笂杩涜鎺ㄧ悊涓婅繘琛屾帹鐞嗭紝鍙弬鑰冩[閾炬帴](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)銆備笅闈㈡槸涓€涓畝鍗曠殑鎿嶄綔姝ラ绀轰緥锛� ##### Ascend 310鐜杩愯 diff --git a/research/cv/efficientnetv2/README_CN.md b/research/cv/efficientnetv2/README_CN.md index 9e90c4a99..75dea2a67 100644 --- a/research/cv/efficientnetv2/README_CN.md +++ b/research/cv/efficientnetv2/README_CN.md @@ -51,7 +51,7 @@ ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 鐨勮缁冩柟娉曪紝浣跨敤鏀寔鍗曠簿搴﹀拰鍗婄簿搴︽暟鎹潵鎻愰珮娣卞害瀛︿範绁炵粡缃戠粶鐨勮缁冮€熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� # [鐜瑕佹眰](#鐩綍) diff --git a/research/cv/fairmot/README.md b/research/cv/fairmot/README.md index ff3f565fa..c75f79a65 100644 --- a/research/cv/fairmot/README.md +++ b/research/cv/fairmot/README.md @@ -46,7 +46,7 @@ Dataset used: ETH, CalTech, MOT17, CUHK-SYSU, PRW, CityPerson ## [Mixed Precision](#contents) -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the 
single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'.
# [Environment Requirements](#contents) diff --git a/research/cv/fishnet99/README_CN.md b/research/cv/fishnet99/README_CN.md index 7129785a4..86aae8c34 100644 --- a/research/cv/fishnet99/README_CN.md +++ b/research/cv/fishnet99/README_CN.md @@ -63,7 +63,7 @@ ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 鐨勮缁冩柟娉曪紝浣跨敤鏀寔鍗曠簿搴﹀拰鍗婄簿搴︽暟鎹潵鎻愰珮娣卞害瀛︿範绁炵粡缃戠粶鐨勮缁冮€熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 鐨勮缁冩柟娉曪紝浣跨敤鏀寔鍗曠簿搴﹀拰鍗婄簿搴︽暟鎹潵鎻愰珮娣卞害瀛︿範绁炵粡缃戠粶鐨勮缁冮€熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� # 鐜瑕佹眰 diff --git a/research/cv/glore_res/README_CN.md b/research/cv/glore_res/README_CN.md index ead07cb53..4a7afb8ba 100644 --- a/research/cv/glore_res/README_CN.md +++ b/research/cv/glore_res/README_CN.md @@ -81,7 +81,7 @@ glore_res200缃戠粶妯″瀷鐨刡ackbone鏄疪esNet200, 鍦⊿tage2, Stage3涓垎鍒潎 ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� # 鐜瑕佹眰 diff --git a/research/cv/glore_res200/README_CN.md b/research/cv/glore_res200/README_CN.md index 6c81a1be6..cd0bf75fa 100644 --- a/research/cv/glore_res200/README_CN.md +++ b/research/cv/glore_res200/README_CN.md @@ -72,7 +72,7 @@ ## 娣峰悎绮惧害 
-閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� # 鐜瑕佹眰 diff --git a/research/cv/glore_res50/README.md b/research/cv/glore_res50/README.md index 39e47cae9..bc80ce7d1 100644 --- a/research/cv/glore_res50/README.md +++ b/research/cv/glore_res50/README.md @@ -61,7 +61,7 @@ glore_res鐨勬€讳綋缃戠粶鏋舵瀯濡備笅锛� ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� # 鐜瑕佹眰 diff --git a/research/cv/hardnet/README_CN.md b/research/cv/hardnet/README_CN.md index d1b901770..7b44eef55 100644 --- a/research/cv/hardnet/README_CN.md +++ b/research/cv/hardnet/README_CN.md @@ -60,7 +60,7 @@ HarDNet鎸囩殑鏄疕armonic DenseNet: A low memory traffic network锛屽叾绐佸嚭鐨� ## 娣峰悎绮惧害 
-閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� # 鐜瑕佹眰 @@ -419,7 +419,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [DEVICE_ID] ### 鎺ㄧ悊 -濡傛灉鎮ㄩ渶瑕佷娇鐢ㄦ璁粌妯″瀷鍦ˋscend 910涓婅繘琛屾帹鐞嗭紝鍙弬鑰冩[閾炬帴](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)銆備笅闈㈡槸鎿嶄綔姝ラ绀轰緥锛� +濡傛灉鎮ㄩ渶瑕佷娇鐢ㄦ璁粌妯″瀷鍦ˋscend 910涓婅繘琛屾帹鐞嗭紝鍙弬鑰冩[閾炬帴](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)銆備笅闈㈡槸鎿嶄綔姝ラ绀轰緥锛� - Ascend澶勭悊鍣ㄧ幆澧冭繍琛� @@ -456,7 +456,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [DEVICE_ID] print("==============Acc: {} ==============".format(acc)) ``` -濡傛灉鎮ㄩ渶瑕佷娇鐢ㄦ璁粌妯″瀷鍦℅PU涓婅繘琛屾帹鐞嗭紝鍙弬鑰冩[閾炬帴](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)銆備笅闈㈡槸鎿嶄綔姝ラ绀轰緥锛� +濡傛灉鎮ㄩ渶瑕佷娇鐢ㄦ璁粌妯″瀷鍦℅PU涓婅繘琛屾帹鐞嗭紝鍙弬鑰冩[閾炬帴](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)銆備笅闈㈡槸鎿嶄綔姝ラ绀轰緥锛� - GPU澶勭悊鍣ㄧ幆澧冭繍琛� diff --git a/research/cv/inception_resnet_v2/README.md b/research/cv/inception_resnet_v2/README.md index cf199c606..3852562de 100644 --- a/research/cv/inception_resnet_v2/README.md +++ b/research/cv/inception_resnet_v2/README.md @@ -44,7 +44,7 @@ The dataset used is [ImageNet](https://image-net.org/download.php). 
## [Mixed Precision(Ascend)](#contents) -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'. @@ -122,7 +122,7 @@ bash scripts/run_standalone_train_ascend.sh DEVICE_ID DATA_DIR ``` > Notes: -> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size.
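The `HCCL_CONNECT_TIMEOUT` advice above amounts to setting one environment variable before launching the distributed script. A minimal launcher sketch (the `subprocess` call is commented out and purely illustrative; the script name and arguments are taken from the README's own usage lines):

```python
import os
import subprocess  # used only in the commented-out launch below

env = os.environ.copy()
# Extend HCCL connection checking from the default 120 s to 600 s,
# so graph compilation of a large model does not outlast the handshake window.
env["HCCL_CONNECT_TIMEOUT"] = "600"

# A real launch would then run, e.g.:
# subprocess.run(["bash", "scripts/run_distribute_train_ascend.sh",
#                 "RANK_TABLE_FILE", "DATA_DIR"], env=env, check=True)
print("HCCL_CONNECT_TIMEOUT =", env["HCCL_CONNECT_TIMEOUT"])
```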
+> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size. > > This is processor cores binding operation regarding the `device_num` and total processor numbers. If you are not expect to do it, remove the operations `taskset` in `scripts/run_distribute_train.sh` diff --git a/research/cv/inception_resnet_v2/README_CN.md b/research/cv/inception_resnet_v2/README_CN.md index 9fadab9ea..8be29bd5c 100644 --- a/research/cv/inception_resnet_v2/README_CN.md +++ b/research/cv/inception_resnet_v2/README_CN.md @@ -56,7 +56,7 @@ Inception_ResNet_v2鐨勬€讳綋缃戠粶鏋舵瀯濡備笅锛� ## 娣峰悎绮惧害锛圓scend锛� -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� @@ -133,7 +133,7 @@ bash scripts/run_distribute_train_ascend.sh RANK_TABLE_FILE DATA_DIR bash scripts/run_standalone_train_ascend.sh DEVICE_ID DATA_DIR ``` -> 娉細RANK_TABLE_FILE鍙弬鑰僛閾炬帴]( 
https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html)銆俤evice_ip鍙互閫氳繃[閾炬帴](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools)鑾峰彇 +> 娉細RANK_TABLE_FILE鍙弬鑰僛閾炬帴]( https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html)銆俤evice_ip鍙互閫氳繃[閾炬帴](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools)鑾峰彇 - GPU: diff --git a/research/cv/mae/README_CN.md b/research/cv/mae/README_CN.md index 5c8f9a266..ee67f0afa 100644 --- a/research/cv/mae/README_CN.md +++ b/research/cv/mae/README_CN.md @@ -63,7 +63,7 @@ This is a MindSpore/NPU re-implementation of the paper [Masked Autoencoders Are ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� # 鐜瑕佹眰 @@ -390,7 +390,7 @@ This is a MindSpore/NPU re-implementation of the paper [Masked Autoencoders Are ### 鎺ㄧ悊 -濡傛灉鎮ㄩ渶瑕佷娇鐢ㄦ璁粌妯″瀷鍦℅PU銆丄scend 910銆丄scend 310绛夊涓‖浠跺钩鍙颁笂杩涜鎺ㄧ悊锛屽彲鍙傝€冩[閾炬帴](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html)銆備笅闈㈡槸鎿嶄綔姝ラ绀轰緥锛� +濡傛灉鎮ㄩ渶瑕佷娇鐢ㄦ璁粌妯″瀷鍦℅PU銆丄scend 910銆丄scend 310绛夊涓‖浠跺钩鍙颁笂杩涜鎺ㄧ悊锛屽彲鍙傝€冩[閾炬帴](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html)銆備笅闈㈡槸鎿嶄綔姝ラ绀轰緥锛� - Ascend澶勭悊鍣ㄧ幆澧冭繍琛� diff --git a/research/cv/metric_learn/README_CN.md b/research/cv/metric_learn/README_CN.md index 6c95794fd..1588e0afa 100644 --- a/research/cv/metric_learn/README_CN.md +++ b/research/cv/metric_learn/README_CN.md @@ -80,7 +80,7 @@ cd 
Stanford_Online_Products && head -n 1048 test.txt > test_tiny.txt ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� # 鐜瑕佹眰 diff --git a/research/cv/midas/README.md b/research/cv/midas/README.md index 4ce4c8ffd..b55353383 100644 --- a/research/cv/midas/README.md +++ b/research/cv/midas/README.md @@ -55,7 +55,7 @@ Midas鐨勬€讳綋缃戠粶鏋舵瀯濡備笅锛� ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� # 鐜瑕佹眰 diff --git a/research/cv/nas-fpn/README_CN.md b/research/cv/nas-fpn/README_CN.md index 1fd7d580f..8706b3600 100644 --- a/research/cv/nas-fpn/README_CN.md +++ b/research/cv/nas-fpn/README_CN.md @@ -161,7 +161,7 @@ bash scripts/run_single_train.sh DEVICE_ID MINDRECORD_DIR PRE_TRAINED(optional) ``` > 娉ㄦ剰: -RANK_TABLE_FILE鐩稿叧鍙傝€冭祫鏂欒[閾炬帴](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html), 
鑾峰彇device_ip鏂规硶璇﹁[閾炬帴](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). +RANK_TABLE_FILE鐩稿叧鍙傝€冭祫鏂欒[閾炬帴](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html), 鑾峰彇device_ip鏂规硶璇﹁[閾炬帴](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). #### 杩愯 diff --git a/research/cv/ntsnet/README.md b/research/cv/ntsnet/README.md index 20a42e7c3..f53854d33 100644 --- a/research/cv/ntsnet/README.md +++ b/research/cv/ntsnet/README.md @@ -133,7 +133,7 @@ Usage: bash run_standalone_train_ascend.sh [DATA_URL] [TRAIN_URL] ## [Training Process](#contents) -- Set options in `config.py`, including learning rate, output filename and network hyperparameters. Click [here](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_sample.html) for more information about dataset. +- Set options in `config.py`, including learning rate, output filename and network hyperparameters. Click [here](https://www.mindspore.cn/tutorials/en/master/advanced/dataset.html) for more information about dataset. - Get ResNet50 pretrained model from [Mindspore Hub](https://www.mindspore.cn/resources/hub/details?MindSpore/ascend/v1.2/resnet50_v1.2_imagenet2012) ### [Training](#content) diff --git a/research/cv/osnet/README.md b/research/cv/osnet/README.md index 15c7ea6a6..f449b8b59 100644 --- a/research/cv/osnet/README.md +++ b/research/cv/osnet/README.md @@ -155,7 +155,7 @@ bash run_eval_ascend.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID] ``` > Notes: -> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. 
Otherwise, the connection could be timeout since compiling time increases with the growth of model size. +> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html) , and the device_ip can be got as [Link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could be timeout since compiling time increases with the growth of model size. > > This is processor cores binding operation regarding the `device_num` and total processor numbers. If you are not expect to do it, remove the operations `taskset` in `scripts/run_train_distribute_ascend.sh` > diff --git a/research/cv/ras/README.md b/research/cv/ras/README.md index c2a18eb1b..1dc300f6e 100644 --- a/research/cv/ras/README.md +++ b/research/cv/ras/README.md @@ -73,7 +73,7 @@ RAS鎬讳綋缃戠粶鏋舵瀯濡備笅: ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� # 鐜瑕佹眰 diff --git a/research/cv/renas/Readme.md b/research/cv/renas/Readme.md index 3a862f987..f76c2c8ed 100644 --- a/research/cv/renas/Readme.md +++ b/research/cv/renas/Readme.md @@ -39,7 +39,7 @@ An effective and efficient architecture performance evaluation scheme is essenti ## [Mixed Precision(Ascend)](#contents) 
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'. # [Environment Requirements](#contents) diff --git a/research/cv/res2net/README.md b/research/cv/res2net/README.md index d971e0ee4..199f6fa24 100644 --- a/research/cv/res2net/README.md +++ b/research/cv/res2net/README.md @@ -82,7 +82,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/) ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time.
Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'. # [Environment Requirements](#contents) diff --git a/research/cv/res2net_deeplabv3/README.md b/research/cv/res2net_deeplabv3/README.md index 4632c1d4f..478034d00 100644 --- a/research/cv/res2net_deeplabv3/README.md +++ b/research/cv/res2net_deeplabv3/README.md @@ -85,7 +85,7 @@ You can also generate the list file automatically by run script: `python get_dat ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 鈥榬educe precision鈥�. # [Environment Requirements](#contents) diff --git a/research/cv/resnet3d/README_CN.md b/research/cv/resnet3d/README_CN.md index 3410ec5ca..5ed6d25aa 100644 --- a/research/cv/resnet3d/README_CN.md +++ b/research/cv/resnet3d/README_CN.md @@ -105,7 +105,7 @@ python3 generate_video_jpgs.py --video_path ~/dataset/hmdb51/videos/ --target_pa ## 娣峰悎绮惧害 -閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� +閲囩敤[娣峰悎绮惧害](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)鐨勮缁冩柟娉曚娇鐢ㄦ敮鎸佸崟绮惧害鍜屽崐绮惧害鏁版嵁鏉ユ彁楂樻繁搴﹀涔犵缁忕綉缁滅殑璁粌閫熷害锛屽悓鏃朵繚鎸佸崟绮惧害璁粌鎵€鑳借揪鍒扮殑缃戠粶绮惧害銆傛贩鍚堢簿搴﹁缁冩彁楂樿绠楅€熷害銆佸噺灏戝唴瀛樹娇鐢ㄧ殑鍚屾椂锛屾敮鎸佸湪鐗瑰畾纭欢涓婅缁冩洿澶х殑妯″瀷鎴栧疄鐜版洿澶ф壒娆$殑璁粌銆� 浠P16绠楀瓙涓轰緥锛屽鏋滆緭鍏ユ暟鎹被鍨嬩负FP32锛孧indSpore鍚庡彴浼氳嚜鍔ㄩ檷浣庣簿搴︽潵澶勭悊鏁版嵁銆傜敤鎴峰彲鎵撳紑INFO鏃ュ織锛屾悳绱⑩€渞educe precision鈥濇煡鐪嬬簿搴﹂檷浣庣殑绠楀瓙銆� # 鐜瑕佹眰 diff --git a/research/cv/resnet50_bam/README.md b/research/cv/resnet50_bam/README.md index 170f2124c..9367f89f3 100644 --- a/research/cv/resnet50_bam/README.md +++ b/research/cv/resnet50_bam/README.md @@ -56,7 +56,7 @@ Data set used: [ImageNet2012](http://www.image-net.org/) ## Mixed 
precision -The [mixed-precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks, while maintaining the network accuracy that can be achieved by single-precision training. Mixed-precision training increases computing speed and reduces memory usage, while supporting training larger models or achieving larger batches of training on specific hardware. +The [mixed-precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks, while maintaining the network accuracy that can be achieved by single-precision training. Mixed-precision training increases computing speed and reduces memory usage, while supporting training larger models or achieving larger batches of training on specific hardware. 
# Environmental requirements diff --git a/research/cv/resnet50_bam/README_CN.md b/research/cv/resnet50_bam/README_CN.md index 5b7ea5b26..d4a8c28f6 100644 --- a/research/cv/resnet50_bam/README_CN.md +++ b/research/cv/resnet50_bam/README_CN.md @@ -56,7 +56,7 @@ resnet50_bam的作者提出了一个简单但是有效的Attention模型——BA ## 混合精度 -采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 的训练方法，使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 +采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 的训练方法，使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 # 环境要求 diff --git a/research/cv/resnext152_64x4d/README.md b/research/cv/resnext152_64x4d/README.md index 3320bcf35..61fb3a324 100644 --- a/research/cv/resnext152_64x4d/README.md +++ b/research/cv/resnext152_64x4d/README.md @@ -54,7 +54,7 @@ Dataset used: [imagenet](http://www.image-net.org/) ## [Mixed Precision](#contents) -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. diff --git a/research/cv/resnext152_64x4d/README_CN.md b/research/cv/resnext152_64x4d/README_CN.md index 8b6a05b0e..2a6e58096 100644 --- a/research/cv/resnext152_64x4d/README_CN.md +++ b/research/cv/resnext152_64x4d/README_CN.md @@ -54,7 +54,7 @@ ResNeXt整体网络架构如下： ## 混合精度 -采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 +采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 以FP16算子为例，如果输入数据类型为FP32，MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志，搜索“reduce precision”查看精度降低的算子。 diff --git a/research/cv/retinanet_resnet101/README.md b/research/cv/retinanet_resnet101/README.md index d20bef0b3..c2e896644 100644 --- a/research/cv/retinanet_resnet101/README.md +++ b/research/cv/retinanet_resnet101/README.md @@ -287,7 +287,7 @@ bash run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABL bash run_single_train.sh
[DEVICE_ID] [EPOCH_SIZE] [LR] [DATASET] [PRE_TRAINED](optional) [PRE_TRAINED_EPOCH_SIZE](optional) ``` -> Note: RANK_TABLE_FILE related reference materials see in this [link](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_ascend.html), for details on how to get device_ip check this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). +> Note: RANK_TABLE_FILE related reference materials see in this [link](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_ascend.html), for details on how to get device_ip check this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). - GPU diff --git a/research/cv/retinanet_resnet101/README_CN.md b/research/cv/retinanet_resnet101/README_CN.md index 8f86a05d8..d62df255a 100644 --- a/research/cv/retinanet_resnet101/README_CN.md +++ b/research/cv/retinanet_resnet101/README_CN.md @@ -292,7 +292,7 @@ bash run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABL bash run_single_train.sh [DEVICE_ID] [EPOCH_SIZE] [LR] [DATASET] [PRE_TRAINED](optional) [PRE_TRAINED_EPOCH_SIZE](optional) ``` -> 注意: RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). +> 注意: RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html), 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
- GPU diff --git a/research/cv/retinanet_resnet152/README.md b/research/cv/retinanet_resnet152/README.md index 23e04a27d..d1be441cb 100644 --- a/research/cv/retinanet_resnet152/README.md +++ b/research/cv/retinanet_resnet152/README.md @@ -291,7 +291,7 @@ bash run_distribute_train.sh DEVICE_NUM EPOCH_SIZE LR DATASET RANK_TABLE_FILE PR bash run_distribute_train.sh DEVICE_ID EPOCH_SIZE LR DATASET PRE_TRAINED(optional) PRE_TRAINED_EPOCH_SIZE(optional) ``` -> Note: RANK_TABLE_FILE related reference materials see in this [link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html), +> Note: RANK_TABLE_FILE related reference materials see in this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html), > for details on how to get device_ip check this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools). - GPU: diff --git a/research/cv/retinanet_resnet152/README_CN.md b/research/cv/retinanet_resnet152/README_CN.md index 1dda1c52d..f3c709496 100644 --- a/research/cv/retinanet_resnet152/README_CN.md +++ b/research/cv/retinanet_resnet152/README_CN.md @@ -285,7 +285,7 @@ bash run_distribute_train.sh DEVICE_NUM EPOCH_SIZE LR DATASET RANK_TABLE_FILE PR bash run_distribute_train.sh DEVICE_ID EPOCH_SIZE LR DATASET PRE_TRAINED(optional) PRE_TRAINED_EPOCH_SIZE(optional) ``` -> 注意: RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html), +> 注意: RANK_TABLE_FILE相关参考资料见[链接](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html), > 获取device_ip方法详见[链接](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
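The RANK_TABLE_FILE notes in these hunks all point readers at the hccl_tools script. For orientation, a generated rank table for one server with two Ascend devices looks roughly like the sketch below; the field layout follows the hccl_tools output format, while `server_id` and the `device_ip` values are placeholders, not real addresses:

```json
{
  "version": "1.0",
  "server_count": "1",
  "server_list": [
    {
      "server_id": "10.0.0.1",
      "device": [
        {"device_id": "0", "device_ip": "192.1.27.6", "rank_id": "0"},
        {"device_id": "1", "device_ip": "192.2.27.6", "rank_id": "1"}
      ],
      "host_nic_ip": "reserve"
    }
  ],
  "status": "completed"
}
```

The path of this file is what the training scripts above expect in the RANK_TABLE_FILE argument.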
- GPU: diff --git a/research/cv/siamRPN/README_CN.md b/research/cv/siamRPN/README_CN.md index d7937fa68..3376c0cd3 100644 --- a/research/cv/siamRPN/README_CN.md +++ b/research/cv/siamRPN/README_CN.md @@ -51,7 +51,7 @@ Siam-RPN提出了一种基于RPN的孪生网络结构。由孪生子网络和RPN ## 混合精度 -采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 +采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 以FP16算子为例，如果输入数据类型为FP32，MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志，搜索“reduce precision”查看精度降低的算子。 # 环境要求 diff --git a/research/cv/simple_baselines/README_CN.md b/research/cv/simple_baselines/README_CN.md index 1a228c7e7..3eb48ea04 100644 --- a/research/cv/simple_baselines/README_CN.md +++ b/research/cv/simple_baselines/README_CN.md @@ -53,7 +53,7 @@ simple_baselines的总体网络架构如下： ## 混合精度 -采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html))的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 +采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html))的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 以FP16算子为例，如果输入数据类型为FP32，MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志，搜索“reduce precision”查看精度降低的算子。 # 环境要求 diff --git a/research/cv/single_path_nas/README.md b/research/cv/single_path_nas/README.md index f4899b7ae..ae660b649 100644 --- a/research/cv/single_path_nas/README.md +++
b/research/cv/single_path_nas/README.md @@ -70,7 +70,7 @@ Dataset used：[ImageNet2012](http://www.image-net.org/) ## Mixed Precision -The [mixed-precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) +The [mixed-precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses single-precision and half-precision data to improve the training speed of deep learning neural networks, while maintaining the network accuracy that can be achieved by single-precision training. Mixed-precision training increases computing speed and reduces memory usage, while supporting training larger models or diff --git a/research/cv/single_path_nas/README_CN.md b/research/cv/single_path_nas/README_CN.md index 3c71cfe53..62c6d04c6 100644 --- a/research/cv/single_path_nas/README_CN.md +++ b/research/cv/single_path_nas/README_CN.md @@ -57,7 +57,7 @@ single-path-nas的作者用一个7x7的大卷积，来代表3x3、5x5和7x7的 ## 混合精度 -采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) 的训练方法，使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 +采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 的训练方法，使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 # 环境要求 diff --git a/research/cv/sknet/README.md b/research/cv/sknet/README.md index 60f7315da..6e581761a 100644 --- a/research/cv/sknet/README.md +++ b/research/cv/sknet/README.md @@ -74,7 +74,7 @@ Dataset used: [CIFAR10](https://www.kaggle.com/c/cifar-10) ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and
half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. # [Environment Requirements](#contents) diff --git a/research/cv/squeezenet/README.md b/research/cv/squeezenet/README.md index 7a045e948..de1f902c6 100644 --- a/research/cv/squeezenet/README.md +++ b/research/cv/squeezenet/README.md @@ -74,7 +74,7 @@ Dataset used: [ImageNet2012](http://www.image-net.org/) ## Mixed Precision -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’. # [Environment Requirements](#contents) @@ -512,7 +512,7 @@ result: {'top_1_accuracy': 0.6094950384122919, 'top_5_accuracy': 0.8263244238156 ### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example: - Running on Ascend diff --git a/research/cv/squeezenet1_1/README.md b/research/cv/squeezenet1_1/README.md index 5042d64b3..ee112140d 100644 --- a/research/cv/squeezenet1_1/README.md +++ b/research/cv/squeezenet1_1/README.md @@ -304,7 +304,7 @@ Inference result is saved in current path, you can find result like this in acc.
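The mixed-precision paragraphs repeated across these READMEs describe one mechanism: run most operators in FP16, keep precision-sensitive work in FP32, and guard the optimizer step against FP16 overflow. A minimal pure-Python sketch of the dynamic loss-scaling idea behind this (hypothetical helper names, not MindSpore's actual API, which configures this when the model is compiled with a mixed-precision level) is:

```python
# Illustrative sketch of dynamic loss scaling in mixed-precision training.
# Hypothetical helper, not MindSpore code: real implementations typically
# grow the scale only after a window of stable steps, not every step.
import math

def update_loss_scale(grads, scale, growth_factor=2.0, backoff_factor=0.5):
    """Return (apply_step, new_scale).

    If any gradient overflowed in FP16 (inf/nan), the step is skipped and
    the loss scale is reduced; otherwise the step is applied and the scale
    is allowed to grow so small gradients stay representable in FP16.
    """
    overflow = any(math.isinf(g) or math.isnan(g) for g in grads)
    if overflow:
        return False, scale * backoff_factor  # skip update, back off
    return True, scale * growth_factor        # apply update, try larger scale

# An FP16-style overflow: one gradient exceeded the representable range.
apply_step, new_scale = update_loss_scale([0.1, float("inf")], scale=1024.0)
print(apply_step, new_scale)  # False 512.0
```

The ‘reduce precision’ INFO-log note in the hunks above is the complementary mechanism on the operator side: FP32 inputs to FP16-only operators are handled at reduced precision automatically.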
### Inference -If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example: +If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example: - Running on Ascend diff --git a/research/cv/ssd_ghostnet/README.md b/research/cv/ssd_ghostnet/README.md index cbc408763..1e8b82af2 100644 --- a/research/cv/ssd_ghostnet/README.md +++ b/research/cv/ssd_ghostnet/README.md @@ -210,7 +210,7 @@ If you want to run in modelarts, please check the official documentation of [mod ### Training on Ascend -To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset) or `iamge_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** +To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset) or `iamge_dir` and `anno_path`(own dataset). 
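Several hunks above change only the MindRecord link inside the same sentence about `train.py`'s data-source fallback. That documented rule can be sketched as a small hypothetical helper (an illustration of the described behaviour, not the model-zoo code; the return strings are made up for the example):

```python
# Sketch of the data-source selection rule the SSD READMEs describe:
# reuse an existing mindrecord_dir if it already holds files, otherwise
# convert COCO or a custom image_dir/anno_path dataset into MindRecord.
import os

def pick_data_source(mindrecord_dir, coco_root=None, image_dir=None, anno_path=None):
    if mindrecord_dir and os.path.isdir(mindrecord_dir) and os.listdir(mindrecord_dir):
        return "use_existing_mindrecord"   # mindrecord_dir wins over raw images
    if coco_root:
        return "generate_from_coco"
    if image_dir and anno_path:
        return "generate_from_own_dataset"
    raise ValueError("no dataset configured")

print(pick_data_source("", coco_root="/data/coco"))  # generate_from_coco
```

This mirrors the bolded note in those paragraphs: when `mindrecord_dir` is non-empty it is used instead of the raw images.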
**Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** - Distribute mode diff --git a/research/cv/ssd_inception_v2/README.md b/research/cv/ssd_inception_v2/README.md index 0a55f1663..cd0916115 100644 --- a/research/cv/ssd_inception_v2/README.md +++ b/research/cv/ssd_inception_v2/README.md @@ -213,7 +213,7 @@ bash scripts/docker_start.sh ssd:20.1.0 [DATA_DIR] [MODEL_DIR] ### [Training Process](#contents) -To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** +To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** #### Training on GPU diff --git a/research/cv/ssd_inceptionv2/README_CN.md b/research/cv/ssd_inceptionv2/README_CN.md index f1b0298ee..fcf4a26d8 100644 --- a/research/cv/ssd_inceptionv2/README_CN.md +++ b/research/cv/ssd_inceptionv2/README_CN.md @@ -171,7 +171,7 @@ bash run_eval.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [CHECKPOINT_PATH] [MINDREC ## 训练过程 -运行`train.py`训练模型。如果`mindrecord_dir`为空，则会通过`coco_root`（coco数据集）或`image_dir`和`anno_path`（自己的数据集）生成[MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html)文件。**注意，如果mindrecord_dir不为空，将使用mindrecord_dir代替原始图像。** +运行`train.py`训练模型。如果`mindrecord_dir`为空，则会通过`coco_root`（coco数据集）或`image_dir`和`anno_path`（自己的数据集）生成[MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html)文件。**注意，如果mindrecord_dir不为空，将使用mindrecord_dir代替原始图像。** ### Ascend上训练 diff --git
a/research/cv/ssd_mobilenetV2/README.md b/research/cv/ssd_mobilenetV2/README.md index 3987cbddd..7b2ca8caf 100644 --- a/research/cv/ssd_mobilenetV2/README.md +++ b/research/cv/ssd_mobilenetV2/README.md @@ -221,7 +221,7 @@ bash scripts/run_eval_gpu.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID] ### [Training Process](#contents) -To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** +To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** #### Training on Ascend diff --git a/research/cv/ssd_mobilenetV2_FPNlite/README.md b/research/cv/ssd_mobilenetV2_FPNlite/README.md index 0190650aa..6f2cdd299 100644 --- a/research/cv/ssd_mobilenetV2_FPNlite/README.md +++ b/research/cv/ssd_mobilenetV2_FPNlite/README.md @@ -233,7 +233,7 @@ bash run_eval_gpu.sh [CONFIG_FILE] [DATASET] [CHECKPOINT_PATH] [DEVICE_ID] ### [Training Process](#contents) -To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** +To train the model, run `train.py`. 
If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** #### Training on Ascend diff --git a/research/cv/ssd_resnet34/README.md b/research/cv/ssd_resnet34/README.md index e3938abec..8ce22a5ff 100644 --- a/research/cv/ssd_resnet34/README.md +++ b/research/cv/ssd_resnet34/README.md @@ -202,7 +202,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID] ### [Training Process](#contents) -To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** +To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). 
**Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** #### Training on Ascend diff --git a/research/cv/ssd_resnet34/README_CN.md b/research/cv/ssd_resnet34/README_CN.md index 963267753..2aab91733 100644 --- a/research/cv/ssd_resnet34/README_CN.md +++ b/research/cv/ssd_resnet34/README_CN.md @@ -169,7 +169,7 @@ sh scripts/run_eval.sh [DEVICE_ID] [DATASET] [DATASET_PATH] [CHECKPOINT_PATH] [M ## 训练过程 -运行`train.py`训练模型。如果`mindrecord_dir`为空，则会通过`coco_root`（coco数据集）或`image_dir`和`anno_path`（自己的数据集）生成[MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html)文件。**注意，如果mindrecord_dir不为空，将使用mindrecord_dir代替原始图像。** +运行`train.py`训练模型。如果`mindrecord_dir`为空，则会通过`coco_root`（coco数据集）或`image_dir`和`anno_path`（自己的数据集）生成[MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html)文件。**注意，如果mindrecord_dir不为空，将使用mindrecord_dir代替原始图像。** ### Ascend上训练 diff --git a/research/cv/ssd_resnet50/README.md b/research/cv/ssd_resnet50/README.md index 116c1abb0..9075c7e95 100644 --- a/research/cv/ssd_resnet50/README.md +++ b/research/cv/ssd_resnet50/README.md @@ -204,7 +204,7 @@ Then you can run everything just like on ascend. ### [Training Process](#contents) -To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/en/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** +To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/en/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset).
**Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** #### Training on Ascend diff --git a/research/cv/ssd_resnet50/README_CN.md b/research/cv/ssd_resnet50/README_CN.md index 0f2d4067e..4f7d5d167 100644 --- a/research/cv/ssd_resnet50/README_CN.md +++ b/research/cv/ssd_resnet50/README_CN.md @@ -163,7 +163,7 @@ bash run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID] ## 训练过程 -运行`train.py`训练模型。如果`mindrecord_dir`为空，则会通过`coco_root`（coco数据集）或`image_dir`和`anno_path`（自己的数据集）生成[MindRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html)文件。**注意，如果mindrecord_dir不为空，将使用mindrecord_dir代替原始图像。** +运行`train.py`训练模型。如果`mindrecord_dir`为空，则会通过`coco_root`（coco数据集）或`image_dir`和`anno_path`（自己的数据集）生成[MindRecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html)文件。**注意，如果mindrecord_dir不为空，将使用mindrecord_dir代替原始图像。** ### Ascend上训练 diff --git a/research/cv/ssd_resnet_34/README.md b/research/cv/ssd_resnet_34/README.md index 6fde21e6f..1704cb8eb 100644 --- a/research/cv/ssd_resnet_34/README.md +++ b/research/cv/ssd_resnet_34/README.md @@ -204,7 +204,7 @@ Major parameters in train.py and config.py for Multi GPU train: ### [Training Process](#contents) -To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/convert_dataset.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset). **Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** +To train the model, run `train.py`. If the `mindrecord_dir` is empty, it will generate [mindrecord](https://www.mindspore.cn/tutorials/zh-CN/master/advanced/dataset/record.html) files by `coco_root`(coco dataset), `voc_root`(voc dataset) or `image_dir` and `anno_path`(own dataset).
**Note if mindrecord_dir isn't empty, it will use mindrecord_dir instead of raw images.** #### Training on GPU diff --git a/research/cv/swin_transformer/README_CN.md b/research/cv/swin_transformer/README_CN.md index 7d0a842c9..23ed2d54c 100644 --- a/research/cv/swin_transformer/README_CN.md +++ b/research/cv/swin_transformer/README_CN.md @@ -53,7 +53,7 @@ SwinTransformer是新型的视觉Transformer，它可以用作计算机视觉的 ## 混合精度 -采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) +采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) 的训练方法，使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 # [环境要求](#目录) diff --git a/research/cv/tsm/README_CN.md b/research/cv/tsm/README_CN.md index 1df30c8db..9f66040d2 100644 --- a/research/cv/tsm/README_CN.md +++ b/research/cv/tsm/README_CN.md @@ -59,7 +59,7 @@ TSM应用了一种通用而有效的时间转移模块。 时间转移模块将 ## 混合精度 -采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 +采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 以FP16算子为例，如果输入数据类型为FP32，MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志，搜索“reduce precision”查看精度降低的算子。 # 环境要求 diff --git a/research/cv/vgg19/README.md b/research/cv/vgg19/README.md index 35bad1b67..b48fc9c79 100644 --- a/research/cv/vgg19/README.md +++ b/research/cv/vgg19/README.md @@ -440,7 +440,7 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579 ...
``` -> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training.html). +> About rank_table.json, you can refer to the [distributed training tutorial](https://www.mindspore.cn/tutorials/experts/en/master/parallel/introduction.html). > **Attention** This will bind the processor cores according to the `device_num` and total processor numbers. If you don't expect to run pretraining with binding processor cores, remove the operations about `taskset` in `scripts/run_distribute_train.sh` ##### Run vgg19 on GPU diff --git a/research/cv/vgg19/README_CN.md b/research/cv/vgg19/README_CN.md index b4afd312f..9c99d010a 100644 --- a/research/cv/vgg19/README_CN.md +++ b/research/cv/vgg19/README_CN.md @@ -87,7 +87,7 @@ VGG 19网络主要由几个基本模块（包括卷积层和池化层）和三 ### 混合精度 -采用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 +采用[混合精度](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html)的训练方法使用支持单精度和半精度数据来提高深度学习神经网络的训练速度，同时保持单精度训练所能达到的网络精度。混合精度训练提高计算速度、减少内存使用的同时，支持在特定硬件上训练更大的模型或实现更大批次的训练。 以FP16算子为例，如果输入数据类型为FP32，MindSpore后台会自动降低精度来处理数据。用户可打开INFO日志，搜索“reduce precision”查看精度降低的算子。 @@ -459,7 +459,7 @@ train_parallel1/log:epcoh: 2 step: 97, loss is 1.7133579 ...
``` -> 关于rank_table.json，可以参考[分布式并行训练](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training.html)。 +> 关于rank_table.json，可以参考[分布式并行训练](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/introduction.html)。 > **注意** 将根据`device_num`和处理器总数绑定处理器核。如果您不希望预训练中绑定处理器内核，请在`scripts/run_distribute_train.sh`脚本中移除`taskset`相关操作。 ##### GPU处理器环境运行VGG19 diff --git a/research/cv/vnet/README_CN.md b/research/cv/vnet/README_CN.md index 4e8c4148c..dd25398ea 100644 --- a/research/cv/vnet/README_CN.md +++ b/research/cv/vnet/README_CN.md @@ -101,7 +101,7 @@ VNet适用于医学图像分割，使用3D卷积，能够处理3D MR图像数据 - [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html) - 生成config json文件用于多卡训练。 - [简易教程](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools) - - 详细配置方法请参照[官网教程](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html#配置分布式环境变量)。 + - 详细配置方法请参照[官网教程](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html#配置分布式环境变量)。 # 快速入门 diff --git a/research/cv/wideresnet/README.md b/research/cv/wideresnet/README.md index 80b40d4bb..f6defab27 100644 --- a/research/cv/wideresnet/README.md +++ b/research/cv/wideresnet/README.md @@ -208,7 +208,7 @@ bash run_standalone_train_gpu.sh [DATASET_PATH] [CONFIG_PATH] [EXPERIMENT_LABEL] For distributed training, a hostfile configuration needs to be created in advance. -Please follow the instructions in the link [GPU-Multi-Host](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_gpu.html). +Please follow the instructions in the link [GPU-Multi-Host](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html).
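For the GPU multi-host setup referenced here, the linked tutorial has the reader prepare an OpenMPI-style hostfile before launching the job. As a rough illustration (hostnames and slot counts are placeholders, and the exact launch command is the one given in each model's own scripts), such a file might look like:

```text
# hypothetical hostfile for a 2-node run with 8 GPUs per node,
# passed to mpirun via --hostfile
node1 slots=8
node2 slots=8
```

The total process count given to `mpirun` should match the sum of the slots.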
##### Evaluation while training diff --git a/research/cv/wideresnet/README_CN.md b/research/cv/wideresnet/README_CN.md index 634064756..12e2ea27e 100644 --- a/research/cv/wideresnet/README_CN.md +++ b/research/cv/wideresnet/README_CN.md @@ -211,7 +211,7 @@ bash run_standalone_train_gpu.sh [DATASET_PATH] [CONFIG_PATH] [EXPERIMENT_LABEL] 对于分布式培训，需要提前创建主机文件配置。 -请按照链接中的说明操作 [GPU-Multi-Host](https://www.mindspore.cn/docs/programming_guide/en/master/distributed_training_gpu.html). +请按照链接中的说明操作 [GPU-Multi-Host](https://www.mindspore.cn/tutorials/experts/en/master/parallel/train_gpu.html). ## 培训时的评估 diff --git a/research/hpc/pinns/README.md b/research/hpc/pinns/README.md index 9ad24330a..6ea8de4f7 100644 --- a/research/hpc/pinns/README.md +++ b/research/hpc/pinns/README.md @@ -1,4 +1,4 @@ -锘# Contents +# Contents [查看中文](./README_CN.md) @@ -72,7 +72,7 @@ Dataset used：[cylinder nektar wake](https://github.com/maziarraissi/PINNs/tree ## [Mixed Precision](#Contents) -The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware. +The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time.
Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.

For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for 'reduce precision'.

# [Environment Requirements](#contents)

diff --git a/research/hpc/pinns/README_CN.md b/research/hpc/pinns/README_CN.md
index d080e0adf..79cf1a900 100644
--- a/research/hpc/pinns/README_CN.md
+++ b/research/hpc/pinns/README_CN.md
@@ -70,7 +70,7 @@ The Navier-Stokes equations describe viscous Newtonian fluids in fluid mechanics. For N

## [Mixed Precision](#目录)

-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses both single-precision and half-precision data to speed up the training of deep neural networks while preserving the network accuracy that single-precision training achieves. Mixed precision training speeds up computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses both single-precision and half-precision data to speed up the training of deep neural networks while preserving the network accuracy that single-precision training achieves. Mixed precision training speeds up computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.

Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces precision to process the data. Users can enable the INFO log and search for 'reduce precision' to see which operators had their precision reduced.

# Environment Requirements

diff --git a/research/nlp/albert/README.md b/research/nlp/albert/README.md
index 303e363ba..4943392e7 100644
--- a/research/nlp/albert/README.md
+++ b/research/nlp/albert/README.md
@@ -181,10 +181,9 @@ If you want to run in modelarts, please check the official documentation of [mod

```

For distributed training, an hccl configuration file with JSON format needs to be created in advance.
-Please follow the instructions in the link below:
-https:gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools.
-For dataset, if you want to set the format and parameters, a schema configuration file with JSON format needs to be created, please refer to the [tfrecord](https://www.mindspore.cn/docs/programming_guide/en/master/dataset_loading.html#tfrecord) format.
+Please follow the instructions in the link below:
+[https://gitee.com/mindspore/models/tree/master/utils/hccl_tools](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).

```text
For pretraining, schema file contains ["input_ids", "input_mask", "segment_ids", "next_sentence_labels", "masked_lm_positions", "masked_lm_ids", "masked_lm_weights"].

diff --git a/research/nlp/atae_lstm/README.md b/research/nlp/atae_lstm/README.md
index 34aadc780..59a313be5 100644
--- a/research/nlp/atae_lstm/README.md
+++ b/research/nlp/atae_lstm/README.md
@@ -54,7 +54,7 @@ The input of the AttentionLSTM model consists of aspect and word vectors; the input part

## Mixed Precision

-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses both single-precision and half-precision data to speed up the training of deep neural networks while preserving the network accuracy that single-precision training achieves. Mixed precision training speeds up computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses both single-precision and half-precision data to speed up the training of deep neural networks while preserving the network accuracy that single-precision training achieves. Mixed precision training speeds up computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.

Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces precision to process the data. Users can enable the INFO log and search for 'reduce precision' to see which operators had their precision reduced.

# Environment Requirements

diff --git a/research/nlp/rotate/README_CN.md b/research/nlp/rotate/README_CN.md
index 2c8ae6dd1..dc5e24da7 100644
--- a/research/nlp/rotate/README_CN.md
+++ b/research/nlp/rotate/README_CN.md
@@ -86,7 +86,7 @@ bash run_infer_310.sh [MINDIR_HEAD_PATH] [MINDIR_TAIL_PATH] [DATASET_PATH] [NEED

When running distributed training on bare metal (with local Ascend 910 AI processors), you need to configure the networking information file for the current multi-card environment.
Please follow the instructions in the link below to create the JSON file:
-<https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html#配置分布式环境变量>
+<https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html#配置分布式环境变量>

- Running in a GPU environment

diff --git a/research/nlp/seq2seq/README_CN.md b/research/nlp/seq2seq/README_CN.md
index 99c995590..45dc01a07 100644
--- a/research/nlp/seq2seq/README_CN.md
+++ b/research/nlp/seq2seq/README_CN.md
@@ -33,7 +33,7 @@ bash wmt14_en_fr.sh

## Mixed Precision

-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses both single-precision and half-precision data to speed up the training of deep neural networks while preserving the network accuracy that single-precision training achieves. Mixed precision training speeds up computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses both single-precision and half-precision data to speed up the training of deep neural networks while preserving the network accuracy that single-precision training achieves. Mixed precision training speeds up computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.

Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces precision to process the data. Users can enable the INFO log and search for 'reduce precision' to see which operators had their precision reduced.

# Environment Requirements

--
GitLab
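Several of the READMEs touched by this patch point to the `hccl_tools` utility for generating the `rank_table.json` that multi-card Ascend training expects. As a hedged sketch of what such a generator produces (the field names follow the commonly documented single-server rank-table schema, but the exact layout can differ across CANN/MindSpore versions, and the IPs below are made-up placeholders, not values from this repository):

```python
import json

def build_rank_table(server_ip, device_ips):
    """Assemble a minimal single-server rank table for the given Ascend device NIC IPs."""
    # One entry per Ascend card on this host; rank_id doubles as the global rank
    # because there is only one server in this sketch.
    devices = [
        {"device_id": str(i), "device_ip": ip, "rank_id": str(i)}
        for i, ip in enumerate(device_ips)
    ]
    return {
        "version": "1.0",
        "server_count": "1",
        "server_list": [
            {"server_id": server_ip, "device": devices, "host_nic_ip": "reserve"}
        ],
        "status": "completed",
    }

# Hypothetical host with two cards -> a 2-card rank table, serialized as JSON.
table = build_rank_table("10.0.0.1", ["192.98.92.10", "192.98.93.10"])
print(json.dumps(table, indent=2))
```

The real `hccl_tools` script additionally discovers the device NIC IPs from the local environment; this sketch only shows the shape of the file it writes.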
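The mixed-precision paragraphs repeated across these READMEs hinge on the trade-off between FP16 speed/memory and FP32 accuracy. A small framework-free illustration of why training frameworks keep FP32 master weights (this uses Python's stdlib half-precision `struct` format `'e'`, not MindSpore itself, and the step size is an arbitrary example value):

```python
import struct

def to_fp16(x):
    # Round-trip a Python float through IEEE 754 half precision.
    return struct.unpack("e", struct.pack("e", x))[0]

step = 1e-4                 # a small weight update, as gradients often are
w32 = 1.0 + step            # accumulated in FP32: the update survives
w16 = to_fp16(to_fp16(1.0) + to_fp16(step))  # accumulated in FP16

# FP16 has ~11 bits of mantissa, so the spacing between representable
# values near 1.0 is about 0.001 -- the 1e-4 update is rounded away.
print(w32)  # 1.0001
print(w16)  # 1.0
```

This is the loss that "maintains the network precision achieved by the single-precision training" refers to avoiding: forward/backward math runs in FP16, but small updates are accumulated into an FP32 copy of the weights.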