
Welcome to the LLM-Models wiki!

(Image credit: Steve Johnson, Unsplash.com)


Large language models (LLMs) are a category of foundation language models trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content, and of performing a wide range of tasks.

LLMs can be used for a variety of tasks, including generating and translating text, recognizing speech, performing natural language processing (NLP) tasks, creating chatbots, summarizing text, and answering questions.
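
As a concrete illustration of two of these tasks, here is a minimal sketch using the Hugging Face `transformers` library and small pretrained models. The library and model names are assumptions for illustration only; the wiki does not prescribe a specific toolkit.

```python
from transformers import pipeline

# Text generation: continue a prompt with a small pretrained model (GPT-2).
generator = pipeline("text-generation", model="gpt2")
result = generator("Large language models are", max_new_tokens=30)
print(result[0]["generated_text"])

# Summarization: condense a longer passage with an encoder-decoder model.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
passage = (
    "Large language models are trained on vast collections of text such as "
    "books, articles, and web pages. After pre-training, they can be adapted "
    "to tasks like translation, question answering, and summarization."
)
summary = summarizer(passage, max_length=40, min_length=10)
print(summary[0]["summary_text"])
```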

LLMs are typically based on deep learning architectures, such as the Transformer architecture introduced by Google researchers in 2017. The efficacy of an LLM depends largely on its training process, which includes pre-training the model on massive amounts of text data, such as books, articles, and web pages.
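
To make the pre-training idea concrete, the sketch below loads a small pretrained decoder-style Transformer (GPT-2, chosen purely as an example) and computes its next-token prediction (causal language-modeling) loss on a sentence. The model choice and library are assumptions, not part of the original page.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small pretrained Transformer language model (illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer(
    "Large language models learn from massive text corpora.",
    return_tensors="pt",
)

with torch.no_grad():
    # Passing `labels` makes the model return the next-token prediction loss,
    # the same objective used during pre-training.
    outputs = model(**inputs, labels=inputs["input_ids"])

print(f"Cross-entropy loss: {outputs.loss.item():.3f}")
print(f"Perplexity: {torch.exp(outputs.loss).item():.1f}")
```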


Here you will find a collection of learning resources on Large Language Models and their applications.

General


Topics


General References


Created: 06/10/2024 (C. Lizárraga)

Updated: 06/10/2024 (C. Lizárraga)

Data Lab, Data Science Institute, University of Arizona.

CC BY-NC-SA 4.0