Guestrin Lab at Stanford University

Welcome to the Carlos Guestrin Lab

We are the Guestrin Lab at Stanford University, Department of Computer Science. We focus on practical and impactful research in machine learning and artificial intelligence, creating tools and systems that solve real-world problems and make AI more trustworthy.

Major Open Source Projects

A significant focus of our research has been building and releasing ML systems that work in the real world, with the aim of gaining broad adoption, impacting industry, and fundamentally influencing the design and architecture of such systems. Here are some key projects we have co-created:

  • XGBoost: scalable, portable and distributed gradient boosting library.
  • LIME: explaining the predictions of any machine learning classifier.
  • TextGrad: self-optimization of prompts and outputs of LLM programs.
  • AlpacaFarm: small and cheap (<$600) instruction-following large language model.
  • Apache TVM: end-to-end deep learning compiler stack for CPUs, GPUs and specialized accelerators.

See here for a longer list of projects.

Browse our repositories, open issues, join the discussions, and contribute!

Thank you for your interest in our work!

Popular repositories

  1. lotus — LOTUS: A semantic query engine for fast and easy LLM-powered data processing (Python, 900 stars, 70 forks)

  2. ACORN — State-of-the-art search over vector embeddings and structured data (SIGMOD '24) (C++, 61 stars, 12 forks)

  3. extractive-abstractive-spectrum (Jupyter Notebook, 1 star)

  4. textgrad (forked from zou-group/textgrad) — TextGrad: Automatic "Differentiation" via Text, using large language models to backpropagate textual gradients (Python)

  5. .github

  6. ttt-lm-pytorch (forked from test-time-training/ttt-lm-pytorch) — Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States (Python)

