diff --git a/README.md b/README.md
index 4d1842a5d0ddf55e7373f6e1d03a2bd2464df3f4..71b3c4a62c87da8d2f661e55f572506a649a52bf 100644
--- a/README.md
+++ b/README.md
@@ -141,7 +141,13 @@ For more information about `MindSpore` framework, please refer to [FAQ](https://
 
 - **Q: How to resolve the lack of memory while using the model directly under "models" with errors such as *Failed to alloc memory pool memory*?**
 
-  **A**: The typical reason for insufficient memory when directly using models under "models" is due to differences in operating mode (`PYNATIVE_MODE`), operating environment configuration, and license control (AI-TOKEN). `PYNATIVE_MODE` usually uses more memory than `GRAPH_MODE` , especially in the training graph that needs back propagation calculation, you can try to use some smaller batch size; the operating environment will also cause similar problems due to the different configurations of NPU cores, memory, etc.; different gears of License control (AI-TOKEN ) will cause different memory overhead during execution. You can also try to use some smaller batch sizes.
+  **A**: The typical reason for insufficient memory when directly using models under "models" is due to differences in operating mode (`PYNATIVE_MODE`), operating environment configuration, and license control (AI-TOKEN).
+    - `PYNATIVE_MODE` usually uses more memory than `GRAPH_MODE`, especially in training graphs that require back-propagation. There are currently two ways to try to resolve this:
+        Method 1: Use a smaller batch size.
+        Method 2: Add `context.set_context(mempool_block_size="XXGB")`, where the current maximum effective value of "XX" is "31".
+        Combining method 1 with method 2 works even better.
+    - The operating environment can also cause similar problems, since NPU core count, memory, and other configurations differ across environments.
+    - Different tiers of license control (AI-TOKEN) incur different memory overhead during execution; trying a smaller batch size can also help here.
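+
+  For example, a minimal sketch combining both methods (assuming an Ascend target and MindSpore 1.6+, where `mempool_block_size` is available; exact values depend on your environment):
+
+  ```python
+  import mindspore.context as context
+
+  # GRAPH_MODE generally uses less memory than PYNATIVE_MODE for training;
+  # mempool_block_size caps the device memory pool block size (max "31GB").
+  context.set_context(mode=context.GRAPH_MODE,
+                      device_target="Ascend",
+                      mempool_block_size="31GB")
+  ```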
 
 - **Q: How to resolve the error about the interface are not supported in some network operations, such as `cann not import`?**
 
diff --git a/README_CN.md b/README_CN.md
index 8347ccc806b21d367c0d9512e4543eb6544b7ece..0e29f02f5431c2a18da5b01236391cd64b47a7aa 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -141,7 +141,13 @@ MindSpore已获得Apache 2.0许可,请参见LICENSE文件。
 
 - **Q: 直接使用models下的模型出现内存不足错误,例如*Failed to alloc memory pool memory*, 该怎么处理?**
 
-  **A**: 直接使用models下的模型出现内存不足的典型原因是由于运行模式(`PYNATIVE_MODE`)、运行环境配置、License控制(AI-TOKEN)的不同造成的:`PYNATIVE_MODE`通常比`GRAPH_MODE`使用更多内存,尤其是在需要进行反向传播计算的训练图中,你可以尝试使用一些更小的batch size;运行环境由于NPU的核数、内存等配置不同也会产生类似问题;License控制(AI-TOKEN)的不同档位会造成执行过程中内存开销不同,也可以尝试使用一些更小的batch size。
+  **A**: 直接使用models下的模型出现内存不足的典型原因是由于运行模式(`PYNATIVE_MODE`)、运行环境配置、License控制(AI-TOKEN)的不同造成的:
+    - `PYNATIVE_MODE`通常比`GRAPH_MODE`使用更多内存,尤其是在需要进行反向传播计算的训练图中,当前有2种方法可以尝试解决该问题。
+        方法1:你可以尝试使用一些更小的batch size;
+        方法2:添加`context.set_context(mempool_block_size="XXGB")`,其中,“XX”当前最大有效值可设置为“31”。
+        如果将方法1与方法2结合使用,效果会更好。
+    - 运行环境由于NPU的核数、内存等配置不同也会产生类似问题。
+    - License控制(AI-TOKEN)的不同档位会造成执行过程中内存开销不同,也可以尝试使用一些更小的batch size。
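+
+  例如,一个结合方法1与方法2的最小示例(假设目标设备为Ascend、MindSpore 1.6及以上版本提供`mempool_block_size`参数,具体取值取决于运行环境):
+
+  ```python
+  import mindspore.context as context
+
+  # 训练场景下 GRAPH_MODE 通常比 PYNATIVE_MODE 占用更少内存;
+  # mempool_block_size 限制设备内存池块大小(当前最大可设为 "31GB")。
+  context.set_context(mode=context.GRAPH_MODE,
+                      device_target="Ascend",
+                      mempool_block_size="31GB")
+  ```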
 
 - **Q: 一些网络运行中报错接口不存在,例如cannot import,该怎么处理?**