Documentation - API Reference - Changelog - Bug reports - Discord
⚠️ Cortex.cpp is currently under active development. This document describes the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
Cortex.cpp is a local AI engine used to run and customize LLMs. Cortex can be deployed as a standalone server or integrated into apps like Jan.ai.
Cortex.cpp is a multi-engine platform that uses llama.cpp as the default engine but also supports other engines such as TensorRT and ONNXRuntime (see the model table below):
This Local Installer packages all required dependencies, so you don't need an internet connection during installation.
Alternatively, Cortex is available with a Network Installer, which downloads the necessary dependencies from the internet during installation.
Download the installer for your platform (`cortex-local-installer.exe`, `cortex-local-installer.pkg`, or `cortex-local-installer.deb`). On Linux, run the following command in a terminal:
sudo apt install ./cortex-local-installer.deb
# or
sudo apt install ./cortex-network-installer.deb
The binary will be installed in the `/usr/bin/` directory.
After installation, you can run Cortex.cpp from the command line by typing `cortex --help`. For the Beta preview, run `cortex-beta --help`.
Cortex.cpp supports various models available on the Cortex Hub. Once downloaded, all model source files are stored in `~\cortexcpp\models`.
Example models:
| Model | llama.cpp `:gguf` | TensorRT `:tensorrt` | ONNXRuntime `:onnx` | Command |
|---|---|---|---|---|
| llama3.1 | ✅ | | ✅ | cortex run llama3.1:gguf |
| llama3 | ✅ | ✅ | ✅ | cortex run llama3 |
| mistral | ✅ | ✅ | ✅ | cortex run mistral |
| qwen2 | ✅ | | | cortex run qwen2:7b-gguf |
| codestral | ✅ | | | cortex run codestral:22b-gguf |
| command-r | ✅ | | | cortex run command-r:35b-gguf |
| gemma | ✅ | | ✅ | cortex run gemma |
| mixtral | ✅ | | | cortex run mixtral:7x8b-gguf |
| openhermes-2.5 | ✅ | ✅ | ✅ | cortex run openhermes-2.5 |
| phi3 (medium) | ✅ | | ✅ | cortex run phi3:medium |
| phi3 (mini) | ✅ | | ✅ | cortex run phi3:mini |
| tinyllama | ✅ | | | cortex run tinyllama:1b-gguf |
Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
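The RAM guideline above follows from a common rule of thumb (ours, not from the Cortex docs): a quantized model's weights take roughly `params × bits_per_weight / 8` bytes, and the runtime needs extra headroom for the KV cache and buffers. A rough sketch:

```python
def estimated_ram_gb(params_billions: float, bits_per_weight: int = 4,
                     overhead_factor: float = 1.5) -> float:
    """Very rough RAM estimate (GB) for running a quantized LLM.

    Rule of thumb only: weight bytes = params * bits / 8, then scale
    by an overhead factor for the KV cache and runtime buffers.
    """
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead_factor / 1e9

# A 7B model at 4-bit quantization: ~3.5 GB of weights, ~5.25 GB with
# overhead -- comfortably inside the 8 GB guideline above.
print(round(estimated_ram_gb(7), 2))
```

Lower-bit quantization or a smaller context window reduces the footprint; higher-precision formats increase it well beyond these figures.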
For complete details on CLI commands, please refer to our CLI documentation.
Cortex.cpp includes a REST API accessible at `localhost:39281`. For a complete list of endpoints and their usage, visit our API documentation.
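As a minimal sketch of calling the server, the snippet below builds a chat request against `localhost:39281`. It assumes an OpenAI-style `/v1/chat/completions` endpoint and uses `tinyllama:1b-gguf` from the model table; verify both against the API documentation before relying on them.

```python
import json
from urllib import request

# Assumption: the server listens on localhost:39281 and exposes an
# OpenAI-compatible chat completions endpoint (check the API docs).
URL = "http://localhost:39281/v1/chat/completions"

payload = {
    "model": "tinyllama:1b-gguf",  # any model started via `cortex run`
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With a running server, send it with:
#     response = request.urlopen(req)
#     print(json.load(response))
print(req.full_url)
```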
Beta is an early preview for new versions of Cortex. It is for users who want to try new features early - we appreciate your feedback.
Nightly is our development version of Cortex. It is released every night and may contain bugs and experimental features.
| Version Type | Windows | MacOS | Linux |
|---|---|---|---|
| Stable (Recommended) | cortex-local-installer.exe | cortex-local-installer.pkg | cortex-local-installer.deb |
| Beta (Preview) | cortex-local-installer.exe | cortex-local-installer.pkg | cortex-local-installer.deb |
| Nightly Build (Experimental) | cortex-local-installer.exe | cortex-local-installer.pkg | cortex-local-installer.deb |
Cortex.cpp is also available with a Network Installer, which is smaller but requires an internet connection during installation to download the necessary dependencies.
| Version Type | Windows | MacOS | Linux |
|---|---|---|---|
| Stable (Recommended) | cortex-network-installer.exe | cortex-network-installer.pkg | cortex-network-installer.deb |
| Beta (Preview) | cortex-network-installer.exe | cortex-network-installer.pkg | cortex-network-installer.deb |
| Nightly Build (Experimental) | cortex-network-installer.exe | cortex-network-installer.pkg | cortex-network-installer.deb |
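The installer matrices above follow a simple naming pattern, which this small illustration captures (the helper function is ours, not part of Cortex):

```python
# Platform -> installer file extension, taken from the tables above.
EXTENSIONS = {"windows": "exe", "macos": "pkg", "linux": "deb"}

def installer_name(platform: str, kind: str = "local") -> str:
    """Return the installer filename for a platform.

    kind is "local" (dependencies bundled) or "network"
    (dependencies downloaded during installation).
    """
    return f"cortex-{kind}-installer.{EXTENSIONS[platform.lower()]}"

print(installer_name("linux"))              # cortex-local-installer.deb
print(installer_name("windows", "network")) # cortex-network-installer.exe
```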
- Clone the Cortex.cpp repository here.
- Navigate to the `engine > vcpkg` folder.
- Configure vcpkg:

```bash
cd vcpkg
./bootstrap-vcpkg.bat
vcpkg install
```

- Build Cortex.cpp inside the `build` folder:

```bash
mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static
```

- Use Visual Studio with the C++ development kit to build the project using the files generated in the `build` folder.
- Verify that Cortex.cpp is installed correctly by getting help information:

```bash
# Get the help information
cortex -h
```
- Clone the Cortex.cpp repository here.
- Navigate to the `engine > vcpkg` folder.
- Configure vcpkg:

```bash
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

- Build Cortex.cpp inside the `build` folder; the `make -j4` step compiles the project using the files generated by CMake:

```bash
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

- Verify that Cortex.cpp is installed correctly by getting help information:

```bash
# Get the help information
cortex -h
```
- Clone the Cortex.cpp repository here.
- Navigate to the `engine > vcpkg` folder.
- Configure vcpkg:

```bash
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

- Build Cortex.cpp inside the `build` folder; the `make -j4` step compiles the project using the files generated by CMake:

```bash
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

- Verify that Cortex.cpp is installed correctly by getting help information:

```bash
# Get help
cortex -h
```
- Open the Windows Control Panel.
- Navigate to `Add or Remove Programs`.
- Search for `cortexcpp` and double-click to uninstall. (For beta and nightly builds, search for `cortexcpp-beta` and `cortexcpp-nightly` respectively.)
Run the uninstaller script:

```bash
sudo sh cortex-uninstall.sh
```

For MacOS, an uninstaller script comes with the binary and is added to the `/usr/local/bin/` directory. The script is named `cortex-uninstall.sh` for stable builds, `cortex-beta-uninstall.sh` for beta builds, and `cortex-nightly-uninstall.sh` for nightly builds.
```bash
# For stable builds
sudo apt remove cortexcpp
```
- For support, please file a GitHub ticket.
- For questions, join our Discord here.
- For long-form inquiries, please email [email protected].