diff --git a/README.md b/README.md
index 890756d..fe3c9a3 100644
--- a/README.md
+++ b/README.md
@@ -305,7 +305,7 @@
 问题10:会出34B或者70B级别的模型吗?
 问题11:为什么长上下文版模型是16K,不是32K或者100K?
 问题12:为什么Alpaca模型会回复说自己是ChatGPT?
-问题13:为什么pt_lora_mdoel或者sft_lora_model下的adapter_model.bin只有几百k?
+问题13:为什么pt_lora_model或者sft_lora_model下的adapter_model.bin只有几百k?
 ```
diff --git a/README_EN.md b/README_EN.md
index 89ab6db..ad991ff 100644
--- a/README_EN.md
+++ b/README_EN.md
@@ -288,7 +288,7 @@
 Question 9: How to interprete the results of third-party benchmarks?
 Question 10: Will you release 34B or 70B models?
 Question 11: Why the long-context model is 16K context, not 32K or 100K?
 Question 12: Why does the Alpaca model reply that it is ChatGPT?
-Question 13: Why is the adapter_model.bin in the pt_lora_mdoel or sft_lora_model folder only a few hundred kb?
+Question 13: Why is the adapter_model.bin in the pt_lora_model or sft_lora_model folder only a few hundred kb?
 ```
 For specific questions and answers, please refer to the project >>> [📚 GitHub Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/faq_en)