diff --git a/README.md b/README.md
index 07ec2ca6600b00bf7518ac5e13b41b3d07ddd4f6..9569934e8dc660651bb23e3094c20b37126cfc49 100644
--- a/README.md
+++ b/README.md
@@ -135,6 +135,8 @@ MindSpore is Apache 2.0 licensed. Please see the LICENSE file.
 
 ## FAQ
 
+For more information about the `MindSpore` framework itself, please refer to the [FAQ](https://www.mindspore.cn/docs/faq/en/master/index.html).
+
 - **Q: How to resolve the lack of memory while using the model directly under "models" with errors such as *Failed to alloc memory pool memory*?**
 
  **A**: Insufficient memory when directly using models under "models" is typically caused by differences in the operating mode (`PYNATIVE_MODE`), the operating environment configuration, or the license control (AI-TOKEN). `PYNATIVE_MODE` usually uses more memory than `GRAPH_MODE`, especially in training graphs that require back-propagation, so try a smaller batch size. The operating environment can cause similar problems because NPU core counts, memory, and other configurations differ. Different tiers of license control (AI-TOKEN) also incur different memory overhead during execution; here too, a smaller batch size may help.
diff --git a/README_CN.md b/README_CN.md
index 6ee8a92ea9468ad9b3b3978d4fec77f409af4463..be8183e862dcc9c64ff23989633230f306c69d37 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -135,6 +135,8 @@ MindSpore已获得Apache 2.0许可,请参见LICENSE文件。
 
 ## FAQ
 
+想要获取更多关于`MindSpore`框架使用本身的FAQ问题,可以参考[官网FAQ](https://www.mindspore.cn/docs/faq/zh-CN/master/index.html)。
+
 - **Q: 直接使用models下的模型出现内存不足错误,例如*Failed to alloc memory pool memory*, 该怎么处理?**
 
   **A**: 直接使用models下的模型出现内存不足的典型原因是由于运行模式(`PYNATIVE_MODE`)、运行环境配置、License控制(AI-TOKEN)的不同造成的:`PYNATIVE_MODE`通常比`GRAPH_MODE`使用更多内存,尤其是在需要进行反向传播计算的训练图中,你可以尝试使用一些更小的batch size;运行环境由于NPU的核数、内存等配置不同也会产生类似问题;License控制(AI-TOKEN)的不同档位会造成执行过程中内存开销不同,也可以尝试使用一些更小的batch size。