GPTSwarm is a graph-based framework for LLM-based agents, providing two high-level features:
- It lets you build LLM-based agents from graphs.
- It enables the customized and automatic self-organization of agent swarms with self-improvement capabilities.
Feb 27, 2024: Our academic paper, *Language Agents as Optimizable Graphs*, is released.
Here is the edge optimization process, which updates edge probabilities to improve the benchmark score. Note that within an agent the edges are fixed, whereas the inter-agent connections are optimized toward either edge pruning (value 0, blue) or edge creation (value 1, red).
At a granular level, GPTSwarm is a library that includes the following components:
| Module | Description |
|---|---|
| `swarm.environment` | Domain-specific operations, agents, tools, and tasks |
| `swarm.graph` | Graph-related functions for creating and executing agent graphs and swarm composite graphs |
| `swarm.llm` | Interface for selecting LLM backends and calculating their operational costs |
| `swarm.memory` | Index-based memory |
| `swarm.optimizer` | Optimization algorithms designed to enhance agent performance and overall swarm efficiency |
Clone the repo:

```shell
git clone https://github.com/metauto-ai/GPTSwarm.git
cd GPTSwarm/
```
Install packages:

```shell
conda create -n swarm python=3.10
conda activate swarm
pip install -r requirements_py310_<linux|macos>.txt
```
Add your API keys to `.env.template` and rename it to `.env`:

```shell
OPENAI_API_KEY="" # for the OpenAI LLM backend
SEARCHAPI_API_KEY="" # for web search
```
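For reference, `.env` files like the one above are simple `KEY="value"` lines. A minimal sketch of how such a file can be parsed (the `load_env` helper is hypothetical, for illustration only; in practice a library such as python-dotenv typically handles this):

```python
from pathlib import Path


def load_env(path: str = ".env") -> dict:
    """Parse simple KEY="value" lines from a .env-style file (hypothetical helper)."""
    env = {}
    p = Path(path)
    if not p.exists():
        return env
    for line in p.read_text().splitlines():
        # Drop trailing comments and surrounding whitespace.
        line = line.split("#", 1)[0].strip()
        if "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip('"')
    return env
```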
Getting started with GPTSwarm is easy. Quickly run a predefined swarm:

```python
from swarm.graph.swarm import Swarm

swarm = Swarm(["IO", "IO", "IO"], "gaia")
task = "What is the capital of Jordan?"
inputs = {"task": task}
answer = await swarm.arun(inputs)
```
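Because `arun` is a coroutine, the `await` call above must run inside an async context (e.g. a notebook cell or an `async def` entry point driven by `asyncio.run`). A minimal sketch of the pattern, using a hypothetical `StubSwarm` stand-in so it runs without the library installed:

```python
import asyncio


class StubSwarm:
    """Stand-in for swarm.graph.swarm.Swarm, only to illustrate the async call pattern."""

    async def arun(self, inputs):
        await asyncio.sleep(0)  # simulate asynchronous work
        return f"answer for: {inputs['task']}"


async def main():
    swarm = StubSwarm()
    # Same shape as the README example: a dict with a "task" key.
    return await swarm.arun({"task": "What is the capital of Jordan?"})


result = asyncio.run(main())
```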
or make use of tools, such as the file analyzer:

```python
from swarm.graph.swarm import Swarm

swarm = Swarm(["IO", "TOT"], "gaia")
task = "Tell me more about this image and summarize it in 3 sentences."
files = ["./datasets/demos/js.png"]
inputs = {"task": task, "files": files}
answer = swarm.run(inputs)
```
Check out the minimal Swarm example in Colab here.
See how to create a custom Agent and run a Swarm with it here.
Here is a YouTube video on how to run the demo notebooks:
See our experiments for more advanced uses of our framework.
We support local LM inference via LM Studio. Download their desktop app for Mac or Windows, choose a model from the Hugging Face repository, and start the server. Then use `model_name='lmstudio'` in GPTSwarm code to run with the local LLM.
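A minimal sketch of the relevant settings (the `model_name` value comes from the README above; the base URL is LM Studio's documented default local endpoint and may differ on your machine):

```python
# Hypothetical configuration for pointing an OpenAI-compatible client at
# LM Studio's local server instead of the OpenAI API.
config = {
    "model_name": "lmstudio",              # selects the local backend in GPTSwarm
    "base_url": "http://localhost:1234/v1",  # LM Studio's default local endpoint
}
```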
- Mingchen Zhuge (PhD Student, KAUST; Project Initiator)
- Wenyi Wang (PhD Student, KAUST; Initial Participant)
- Dmitrii Khizbullin (Research Engineer Lead, KAUST; Project Engineer Lead)
- Louis Kirsch (PhD Student, IDSIA)
- Francesco Faccio (PostDoc, IDSIA; Visiting Researcher, KAUST)
- Jürgen Schmidhuber (Director, KAUST AI Initiative; Scientific Director, IDSIA)
Please read our developer document if you are interested in contributing.
Please cite our paper if you find the library useful or interesting.
```bibtex
@article{zhuge2024language,
  title={Language Agents as Optimizable Graphs},
  author={Zhuge, Mingchen and Wang, Wenyi and Kirsch, Louis and Faccio, Francesco and Khizbullin, Dmitrii and Schmidhuber, Jurgen},
  journal={arXiv preprint arXiv:2402.16823},
  year={2024}
}
```