Home
(Image credit: Steve Johnson, Unsplash.com)
Large language models (LLMs) are a category of foundation models trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content across a wide range of tasks.
LLMs can be used for a variety of tasks, including generating and translating text, recognizing speech, performing natural language processing (NLP) tasks, creating chatbots, summarizing text, and answering questions.
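As a quick, hands-on illustration of two of these tasks, the sketch below uses the Hugging Face `transformers` pipeline API. This assumes `transformers` (and a backend such as PyTorch) is installed, e.g. via `pip install transformers torch`; the default models each pipeline downloads are only illustrative.

```python
# Minimal sketch of two of the tasks listed above, using the Hugging Face
# `transformers` pipeline API. The default models chosen by each pipeline
# are illustrative; any suitable model from the Hub could be substituted.
from transformers import pipeline

# Text summarization: condense a passage into a shorter version.
summarizer = pipeline("summarization")
text = ("Large language models are trained on immense amounts of text and "
        "can generate, translate, and summarize natural language.")
print(summarizer(text, max_length=30, min_length=5)[0]["summary_text"])

# Question answering: extract an answer span from a given context.
qa = pipeline("question-answering")
result = qa(question="When was the Transformer introduced?",
            context="The Transformer architecture was introduced by Google in 2017.")
print(result["answer"])
```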
LLMs are typically based on deep learning architectures, such as the Transformer model developed by Google in 2017. The efficacy of an LLM depends largely on its training process, which includes pre-training the model on massive amounts of text data, such as books, articles, or web pages.
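To make the idea of a pre-trained Transformer concrete, the sketch below loads a small pre-trained causal language model and lets it continue a prompt. GPT-2 is used here only because it is small and freely available; it is an assumed example checkpoint, not a recommendation, and `transformers` plus `torch` are assumed to be installed.

```python
# Minimal sketch: load a small pre-trained Transformer language model and
# generate a continuation of a prompt. GPT-2 is only an example checkpoint;
# any causal LM from the Hugging Face Hub could be used instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Encode a prompt, sample a continuation, and decode it back to text.
inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30,
                         do_sample=True, top_p=0.9,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```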
Here you will find a collection of learning resources on Large Language Models and their applications.
- A LLM Reading List. Evan Miller. GitHub. 2023.
- Chatbot Arena Leaderboard: LLM ratings & performance. LMSYS.
- GPT-4 Technical Report. OpenAI. Mar 27, 2023.
- HuggingFace arXiv Daily Papers. A. Khaliq.
- HuggingFace Models.
- Ollama. Running LLMs locally. Downloadable models.
- Papers with Code: State of the Art.
- Sparks of Artificial General Intelligence: Early experiments with GPT-4. Sébastien Bubeck et al. Apr 13, 2023.
- State of GPT. Andrej Karpathy. OpenAI. May 23, 2023.
- The Practical Guide for Large Language Models. (Based on the arXiv paper: Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond.)
Created: 06/10/2024 (C. Lizárraga)
Updated: 06/10/2024 (C. Lizárraga)
UArizona DataLab, Data Science Institute, University of Arizona, 2024.