Curated collection of papers and resources on how to build an efficient KV Cache system for LLM inference serving.
The template is derived from Awesome-LLM-Reasoning. This list is still a work in progress.
- Long-Context Language Modeling with Parallel Context Encoding
  ACL 2024
- GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
  EMNLP 2023
  Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, Sumit Sanghai [Paper], 2023.5
- Reducing Transformer Key-Value Cache Size with Cross-Layer Attention
  Preprint
  William Brandon, Mayank Mishra, Aniruddha Nrusimha, Rameswar Panda, Jonathan Ragan-Kelley [Paper], 2024.5
- You Only Cache Once: Decoder-Decoder Architectures for Language Models
  Preprint
  Yutao Sun, Li Dong, Yi Zhu, Shaohan Huang, Wenhui Wang, Shuming Ma, Quanlu Zhang, Jianyong Wang, Furu Wei [Paper] [Code], 2024.5
- GoldFinch: High Performance RWKV/Transformer Hybrid with Linear Pre-Fill and Extreme KV-Cache Compression
  Preprint
  Daniel Goldstein, Fares Obeid, Eric Alcaide, Guangyu Song, Eugene Cheah [Paper] [Code], 2024.7
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
  Preprint
  DeepSeek-AI Team [Paper], 2024.5
- Efficient Memory Management for Large Language Model Serving with PagedAttention
  SOSP 2023
  Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica [Paper] [Code], 2023.10
- ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition
  ACL 2024
- FastDecode: High-Throughput GPU-Efficient LLM Serving using Heterogeneous Pipelines
  Preprint
  Jiaao He, Jidong Zhai [Paper], 2024.3
- Infinite-LLM: Efficient LLM Service for Long Context with DistAttention and Distributed KVCache
  Preprint
  Bin Lin, Chen Zhang, Tao Peng, Hanyu Zhao, Wencong Xiao, Minmin Sun, Anmin Liu, Zhipeng Zhang, Lanbo Li, Xiafei Qiu, Shen Li, Zhigang Ji, Tao Xie, Yong Li, Wei Lin [Paper], 2024.1
- Mooncake: A KVCache-centric Disaggregated Architecture for LLM Serving
  Preprint
  Ruoyu Qin, Zheming Li, Weiran He, Mingxing Zhang, Yongwei Wu, Weimin Zheng, Xinran Xu [Paper], 2024.6
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management
  OSDI 2024
  Wonbeom Lee, Jungi Lee, Junghwan Seo, Jaewoong Sim [Paper], 2024.6
- Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention
  ATC 2024
  Bin Gao, Zhuomin He, Puru Sharma, Qingxuan Kang, Djordje Jevdjic, Junbo Deng, Xingkun Yang, Zhou Yu, Pengfei Zuo [Paper], 2024.3
- InfLLM: Training-Free Long-Context Extrapolation for LLMs with an Efficient Context Memory
  Preprint
  Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Maosong Sun [Paper], 2024.2
- Mnemosyne: Parallelization Strategies for Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations
  Preprint
  Amey Agrawal, Junda Chen, Íñigo Goiri, Ramachandran Ramjee, Chaojie Zhang, Alexey Tumanov, Esha Choukse [Paper], 2024.9
- Post-Training Sparse Attention with Double Sparsity
  Preprint
  Shuo Yang, Ying Sheng, Joseph E. Gonzalez, Ion Stoica, Lianmin Zheng [Paper], 2024.8
- Longformer: The Long-Document Transformer
  Preprint
  Iz Beltagy, Matthew E. Peters, Arman Cohan [Paper], 2020.4
- Efficient Streaming Language Models with Attention Sinks
  ICLR 2024
  Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis [Paper], 2023.9
- LM-Infinite: Zero-Shot Extreme Length Generalization for Large Language Models
  NAACL 2024
  Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, Sinong Wang [Paper], 2023.12
- RazorAttention: Efficient KV Cache Compression Through Retrieval Heads
  Preprint
  Hanlin Tang, Yang Lin, Jing Lin, Qingsen Han, Shikuan Hong, Yiwu Yao, Gongyi Wang [Paper], 2024.7
- H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
  NeurIPS 2023
  Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark Barrett, Zhangyang "Atlas" Wang, Beidi Chen [Paper], 2023.4
- Scissorhands: Exploiting the Persistence of Importance Hypothesis for LLM KV Cache Compression at Test Time
  NeurIPS 2023
  Zichang Liu, Aditya Desai, Fangshuo Liao, Weitao Wang, Victor Xie, Zhaozhuo Xu, Anastasios Kyrillidis, Anshumali Shrivastava [Paper], 2023.4
- PyramidInfer: Pyramid KV Cache Compression for High-throughput LLM Inference
  ACL 2024
  Dongjie Yang, XiaoDong Han, Yan Gao, Yao Hu, Shilin Zhang, Hai Zhao [Paper], 2024.2
- Keyformer: KV Cache Reduction through Key Tokens Selection for Efficient Generative Inference
  MLSys 2024
  Muhammad Adnan, Akhil Arunkumar, Gaurav Jain, Prashant Nair, Ilya Soloveychik, Purushotham Kamath [Paper], 2024.3
- Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs
  ICLR 2024 Oral
  Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao [Paper], 2023.10
- SparQ Attention: Bandwidth-Efficient LLM Inference
  Preprint
  Luka Ribar, Ivan Chelombiev, Luke Hudlass-Galley, Charlie Blake, Carlo Luschi, Douglas Orr [Paper], 2023.12
- Finch: Prompt-guided Key-Value Cache Compression
  TACL 2024
  Giulio Corallo, Paolo Papotti [Paper], 2024.8
- A2SF: Accumulative Attention Scoring with Forgetting Factor for Token Pruning in Transformer Decoder
  Preprint
  Hyun-rae Jo, Dongkun Shin [Paper], 2024.7
- ThinK: Thinner Key Cache by Query-Driven Pruning
  Preprint
  Yuhui Xu, Zhanming Jie, Hanze Dong, Lei Wang, Xudong Lu, Aojun Zhou, Amrita Saha, Caiming Xiong, Doyen Sahoo [Paper], 2024.7
- LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference
  Preprint
  Qichen Fu, Minsik Cho, Thomas Merth, Sachin Mehta, Mohammad Rastegari, Mahyar Najibi [Paper], 2024.7
- SirLLM: Streaming Infinite Retentive LLM
  ACL 2024
  Yao Yao, Zuchao Li, Hai Zhao [Paper], 2024.2
- A Simple and Effective $L_2$ Norm-Based Strategy for KV Cache Compression
  ACL 2024
  Alessio Devoto, Yu Zhao, Simone Scardapane, Pasquale Minervini [Paper], 2024.6
- Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference
  ICML 2024
  Piotr Nawrot, Adrian Łańcucki, Marcin Chochowski, David Tarjan, Edoardo M. Ponti [Paper], 2024.1
- Effectively Compress KV Heads for LLM
  Preprint
  Hao Yu, Zelan Yang, Shen Li, Yong Li, Jianxin Wu [Paper], 2024.6
- D2O: Dynamic Discriminative Operations for Efficient Generative Inference of Large Language Models
  Preprint
  Yuxin Zhang, Yuxuan Du, Gen Luo, Yunshan Zhong, Zhenyu Zhang, Shiwei Liu, Rongrong Ji [Paper], 2024.6
- CaM: Cache Merging for Memory-efficient LLMs Inference
  ICML 2024
  Yuxin Zhang, Yuxuan Du, Gen Luo, Yunshan Zhong, Zhenyu Zhang, Shiwei Liu, Rongrong Ji [Paper], 2024.1
- Model Tells You Where to Merge: Adaptive KV Cache Merging for LLMs on Long-Context Tasks
  Preprint
  Zheng Wang, Boxiao Jin, Zhongzhi Yu, Minjia Zhang [Paper], 2024.7
- MiniCache: KV Cache Compression in Depth Dimension for Large Language Models
  Preprint
  Akide Liu, Jing Liu, Zizheng Pan, Yefei He, Gholamreza Haffari, Bohan Zhuang [Paper], 2024.5
- Anchor-based Large Language Models
  ACL 2024
  Jianhui Pang, Fanghua Ye, Derek Fai Wong, Xin He, Wanshun Chen, Longyue Wang [Paper], 2024.2
- KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
  Preprint
  Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W. Mahoney, Yakun Sophia Shao, Kurt Keutzer, Amir Gholami [Paper], 2024.1
- No Token Left Behind: Reliable KV Cache Compression via Importance-Aware Mixed Precision Quantization
  Preprint
  June Yong Yang, Byeongwook Kim, Jeongin Bae, Beomseok Kwon, Gunho Park, Eunho Yang, Se Jung Kwon, Dongsoo Lee [Paper], 2024.2
- QAQ: Quality Adaptive Quantization for LLM KV Cache
  Preprint
  Shichen Dong, Wen Cheng, Jiayu Qin, Wei Wang [Paper], 2024.3
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM
  Preprint
  Hao Kang, Qingru Zhang, Souvik Kundu, Geonhwa Jeong, Zaoxing Liu, Tushar Krishna, Tuo Zhao [Paper], 2024.3
- FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU
  ICML 2023
  Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Beidi Chen, Percy Liang, Christopher Re, Ion Stoica, Ce Zhang [Paper], 2023.3
- WKVQuant: Quantizing Weight and Key/Value Cache for Large Language Models Gains More
  Preprint
  Yuxuan Yue, Zhihang Yuan, Haojie Duanmu, Sifan Zhou, Jianlong Wu, Liqiang Nie [Paper], 2024.2
- SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models
  COLM 2024
  Haojie Duanmu, Zhihang Yuan, Xiuhong Li, Jiangfei Duan, Xingcheng Zhang, Dahua Lin [Paper], 2024.5
Work in progress.
| Field | Benchmarks |
| --- | --- |
| Efficiency | |
| Retrieval | |
| Reasoning | |
- Awesome-LLM-Reasoning Curated collection of papers and resources on how to unlock the reasoning ability of LLMs and MLLMs.
- Awesome-Controllable-Generation Collection of papers and resources on Controllable Generation using Diffusion Models.
- Chain-of-ThoughtsPapers A trend starts from "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models".
- LM-reasoning Collection of papers and resources on Reasoning in Large Language Models.
- Prompt4ReasoningPapers Repository for the paper "Reasoning with Language Model Prompting: A Survey".
- ReasoningNLP Paper list on reasoning in NLP.
- Awesome-LLM Curated list of Large Language Model.
- Awesome LLM Self-Consistency Curated list of Self-consistency in Large Language Models.
- Deep-Reasoning-Papers Recent Papers including Neural-Symbolic Reasoning, Logical Reasoning, and Visual Reasoning.
- Add a new paper or update an existing paper, and consider which category the work should belong to.
- Use the same format as existing entries to describe the work (see the example below).
- Add the abstract link of the paper (the /abs/ format if it is an arXiv publication).
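For instance, a new entry following the list format above might look like this (the title, authors, and arXiv identifier here are placeholders, not a real paper):

```markdown
- An Illustrative KV Cache Compression Paper Title
  Preprint
  First Author, Second Author [Paper](https://arxiv.org/abs/XXXX.XXXXX), 2024.9
```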
Don't worry if you do something wrong; it will be fixed for you!