diff --git a/benchmark/ascend/bert/README_CN.md b/benchmark/ascend/bert/README_CN.md
index 6a74ff96bdaae7fdf065161d1837d3c7f176ea35..7160cdb7a1398a7c12d756f9c5d222a54fc99fd4 100644
--- a/benchmark/ascend/bert/README_CN.md
+++ b/benchmark/ascend/bert/README_CN.md
@@ -212,8 +212,6 @@ bash scripts/run_distributed_pretrain_for_gpu.sh 8 40 /path/cn-wiki-128
 
 在Ascend设备上做多机分布式训练时，训练命令需要在很短的时间间隔内在各台设备上执行。因此，每台设备上都需要准备HCCL配置文件。请参考[merge_hccl](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools#merge_hccl)创建多机的HCCL配置文件。
 
-如需设置数据集格式和参数，请创建JSON格式的schema配置文件，详见[TFRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_loading.html#tfrecord)格式。
-
 ```text
 For pretraining, schema file contains ["input_ids", "input_mask", "segment_ids", "next_sentence_labels", "masked_lm_positions", "masked_lm_ids", "masked_lm_weights"].
 
diff --git a/official/nlp/bert/README_CN.md b/official/nlp/bert/README_CN.md
index 9e066bcb02a33584e3009940a7ba05d431adb977..5dfc513215e4fbde6faa8ff8a0cfa58f240beb90 100644
--- a/official/nlp/bert/README_CN.md
+++ b/official/nlp/bert/README_CN.md
@@ -212,8 +212,6 @@ bash scripts/run_distributed_pretrain_for_gpu.sh 8 40 /path/cn-wiki-128
 
 在Ascend设备上做多机分布式训练时，训练命令需要在很短的时间间隔内在各台设备上执行。因此，每台设备上都需要准备HCCL配置文件。请参考[merge_hccl](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools#merge_hccl)创建多机的HCCL配置文件。
 
-如需设置数据集格式和参数，请创建JSON格式的schema配置文件，详见[TFRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_loading.html#tfrecord)格式。
-
 ```text
 For pretraining, schema file contains ["input_ids", "input_mask", "segment_ids", "next_sentence_labels", "masked_lm_positions", "masked_lm_ids", "masked_lm_weights"].
 
diff --git a/official/nlp/dgu/README_CN.md b/official/nlp/dgu/README_CN.md
index 25fbcdaded8f95c5f163de8fc52688d06dbcbbff..3eca51492cb4324a36562a0454cb7e16bf22fe58 100644
--- a/official/nlp/dgu/README_CN.md
+++ b/official/nlp/dgu/README_CN.md
@@ -100,8 +100,6 @@ BERT的主干结构为Transformer。对于BERT_base，Transformer包含12个编
 
 在Ascend设备上做多机分布式训练时，训练命令需要在很短的时间间隔内在各台设备上执行。因此，每台设备上都需要准备HCCL配置文件。请参考[here](https://gitee.com/mindspore/mindspore/tree/master/config/hccl_multi_machine_multi_rank.json)创建多机的HCCL配置文件。
 
-如需设置数据集格式和参数，请创建JSON格式的模式配置文件，详见[TFRecord](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_loading.html#tfrecord)格式。
-
 ```text
 For pretraining, schema file contains ["input_ids", "input_mask", "segment_ids", "next_sentence_labels", "masked_lm_positions", "masked_lm_ids", "masked_lm_weights"].
 
diff --git a/research/cv/AutoSlim/README.md b/research/cv/AutoSlim/README.md
index d669041f723e5b0e36dee345fa90fd80f085ee09..501ca531244f115dd2571128617f4be7a8dbd012 100644
--- a/research/cv/AutoSlim/README.md
+++ b/research/cv/AutoSlim/README.md
@@ -245,11 +245,11 @@ In train.py, we use "dataset.Generator(shuffle=True)" to shuffle dataset.
 
 ## [ModelZoo Homepage](#contents)
 
-Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
+Please check the official [homepage](https://gitee.com/mindspore/models).
 
 ## FAQ
 
-Please refer to [ModelZoo FAQ](https://gitee.com/mindspore/mindspore/tree/master/model_zoo#FAQ) to get some common FAQ.
+Please refer to [ModelZoo FAQ](https://gitee.com/mindspore/models/blob/master/README.md#faq) to get some common FAQ.
 
 - **Q**: Get "out of memory" error in PYNATIVE_MODE.
 
diff --git a/research/cv/AutoSlim/README_CN.md b/research/cv/AutoSlim/README_CN.md
index 3d2f6ab00f38a69dce979028ccf6ebbb70becde3..eaaab1ba0edf979b5db0dbdcdd3e50252d58b959 100644
--- a/research/cv/AutoSlim/README_CN.md
+++ b/research/cv/AutoSlim/README_CN.md
@@ -260,7 +260,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATASET_PATH] [DEVICE_ID]
 
 ## FAQ
 
-优先参考[ModelZoo FAQ](https://gitee.com/mindspore/mindspore/tree/master/model_zoo#FAQ)来查找一些常见的公共问题。
+优先参考[ModelZoo FAQ](https://gitee.com/mindspore/models/blob/master/README_CN.md#faq)来查找一些常见的公共问题。
 
 - **Q**：使用PYNATIVE_MODE发生内存溢出。
 