diff --git a/README.md b/README.md
index 352e7215e..7c763fa46 100644
--- a/README.md
+++ b/README.md
@@ -24,6 +24,7 @@ English | [简体中文](README_zh-CN.md)
 
 ## 🎉 News
 
+- **\[2024/04\]** Support Sequence Parallel for enabling highly efficient and scalable LLM training with extremely long sequence lengths! \[[Usage](https://github.com/InternLM/xtuner/blob/docs/docs/zh_cn/acceleration/train_extreme_long_sequence.rst)\] \[[Speed Benchmark](https://github.com/InternLM/xtuner/blob/docs/docs/zh_cn/acceleration/benchmark.rst)\]
 - **\[2024/02\]** Support [Gemma](xtuner/configs/gemma) models!
 - **\[2024/02\]** Support [Qwen1.5](xtuner/configs/qwen/qwen1_5) models!
 - **\[2024/01\]** Support [InternLM2](xtuner/configs/internlm) models! The latest VLM [LLaVA-Internlm2-7B](https://huggingface.co/xtuner/llava-internlm2-7b) / [20B](https://huggingface.co/xtuner/llava-internlm2-20b) models are released, with impressive performance!
diff --git a/README_zh-CN.md b/README_zh-CN.md
index c247be985..83664c308 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -23,6 +23,7 @@
 
 ## 🎉 更新
 
+- **\[2024/04\]** 支持序列并行训练策略以实现语言模型超长上下文训练！\[[文档](https://github.com/InternLM/xtuner/blob/docs/docs/zh_cn/acceleration/train_extreme_long_sequence.rst)\] \[[速度基准](https://github.com/InternLM/xtuner/blob/docs/docs/zh_cn/acceleration/benchmark.rst)\]
 - **\[2024/02\]** 支持 [Gemma](xtuner/configs/gemma) 模型！
 - **\[2024/02\]** 支持 [Qwen1.5](xtuner/configs/qwen/qwen1_5) 模型！
 - **\[2024/01\]** 支持 [InternLM2](xtuner/configs/internlm) 模型！同时，最新版的多模态大模型 [LLaVA-Internlm2-7B](https://huggingface.co/xtuner/llava-internlm2-7b) / [20B](https://huggingface.co/xtuner/llava-internlm2-20b) 发布，其表现出强大的性能！