
Releases: TheR1D/shell_gpt

1.4.4

10 Aug 23:32
b087d73

Full Changelog: 1.4.3...1.4.4

1.4.3

06 Apr 16:55

What's Changed

  • Fixed a bug when parsing .sgptrc config entries that contain multiple "=" symbols #504
  • Added the option to interrupt the LLM with Ctrl + C while it is actively generating (streaming) a response in REPL mode #319
  • Fixed a bug where function calls did not work properly due to caching #485
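The .sgptrc fix comes down to splitting each config line only on the first "=" so the value keeps any "=" signs of its own. A minimal sketch of that behavior in plain shell (the line content is illustrative):

```shell
# Hypothetical .sgptrc line whose value itself contains "=" characters
line='API_BASE_URL=http://localhost:8080/v1?key=value'

# Split on the FIRST "=" only: everything before it is the key,
# everything after it (including further "=" signs) is the value
key="${line%%=*}"
value="${line#*=}"

echo "$key"
echo "$value"
```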

Shoutout to all contributors: @keiththomps, @artsparkAI, @save196.

1.4.0

22 Feb 02:56
7c3617a

What's Changed

  • Added new options --md and --no-md to enable or disable markdown output.
  • Added new config variable PRETTIFY_MARKDOWN to enable or disable markdown output by default.
  • Added new config variable USE_LITELLM to enforce usage of the LiteLLM library.

OpenAI and LiteLLM

Because LiteLLM facilitates requests to numerous other LLM backends, it is a heavy import that adds 1-2 seconds to startup time. By default, ShellGPT uses OpenAI's library, which is suitable for most users. Optionally, ShellGPT can be installed with LiteLLM by running pip install shell-gpt[litellm]. To enforce LiteLLM usage, set USE_LITELLM to true in the config file ~/.config/shell_gpt/.sgptrc.
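Flipping the new variable is a one-line edit of the config file; a sketch against a temporary copy (the real file lives at ~/.config/shell_gpt/.sgptrc; the other entry is illustrative):

```shell
# Work on a temporary copy instead of the real ~/.config/shell_gpt/.sgptrc
cfg=$(mktemp)
printf 'DEFAULT_MODEL=gpt-4-1106-preview\nUSE_LITELLM=false\n' > "$cfg"

# Flip USE_LITELLM to true (GNU sed; on macOS use: sed -i '' ...)
sed -i 's/^USE_LITELLM=.*/USE_LITELLM=true/' "$cfg"

grep USE_LITELLM "$cfg"
```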

1.3.1

17 Feb 02:06
ecb7b26

What's Changed

  • Fix #422: Markdown formatting for chat history by @jeanlucthumm in #444
  • New config variable API_BASE_URL #473 and fixing REQUEST_TIMEOUT by @TheR1D in #477
  • Minor code optimisations.

Full Changelog: 1.3.0...1.3.1

1.3.0

09 Feb 22:48
1cb61de

What's Changed

  • Ollama and other LLM backends.
  • Markdown formatting now depends on role description.
  • Code refactoring and optimisation.

Multiple LLM backends

ShellGPT can now work with multiple backends using LiteLLM. You can use locally hosted open-source models, which are available for free. To use local models, you will need to run your own LLM backend server, such as Ollama. To set up ShellGPT with Ollama, please follow this comprehensive guide. A full list of supported models and providers is available here. Note that ShellGPT is not optimized for local models and may not work as expected❗️
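For reference, pointing ShellGPT at a local Ollama server typically comes down to a few entries in ~/.config/shell_gpt/.sgptrc. The model tag below is illustrative, and 11434 is Ollama's default port; see the linked guide for the exact values:

```
DEFAULT_MODEL=ollama/mistral
API_BASE_URL=http://localhost:11434
```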

Markdown formatting

Markdown formatting now depends on the role description. For instance, if the role includes "APPLY MARKDOWN" in its description, the output for this role will be Markdown-formatted. This applies to both default and custom roles. If you would like to disable Markdown formatting, edit the default role description in ~/.config/shell_gpt/roles.
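To see which roles request markdown, you can simply search the role files for the marker. A sketch using a temporary directory standing in for ~/.config/shell_gpt/roles; the file names and role text here are illustrative, only the "APPLY MARKDOWN" marker comes from the release notes:

```shell
# Temporary stand-in for ~/.config/shell_gpt/roles
roles=$(mktemp -d)
printf '{"name": "default", "role": "You are ShellGPT APPLY MARKDOWN"}\n' > "$roles/default.json"
printf '{"name": "shell", "role": "Provide only plain shell commands"}\n' > "$roles/shell.json"

# List role files whose description requests markdown output
grep -l "APPLY MARKDOWN" "$roles"/*.json
```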

Full Changelog: 1.2.0...1.3.0

1.2.0

28 Jan 04:33
c48926a
  • Added --interaction / --no-interaction option that works with the --shell option, e.g. sgpt --shell --no-interaction will output the suggested command to stdout. This is useful when you want to redirect the output elsewhere, for instance sgpt -s "say hi" | pbcopy.
  • Fixed an issue with stdin and --shell not switching to interactive input mode.
  • REPL mode can now accept stdin, a PROMPT argument, or both.
  • Changed shell integrations to use the new --no-interaction option to generate shell commands.
  • Moved shell integrations into a dedicated file, integration.py.
  • Changed --install-integration logic; it no longer downloads an sh script.
  • Removed validation for the PROMPT argument; it now defaults to an empty string.
  • Fixed an issue when sgpt is called from non-interactive shell environments such as cron.
  • Fixed and optimised Dockerfile.
  • GitHub codespaces setup.
  • Improved tests.
  • README.md improvements.
  • New demo video 🐴.

❗️The shell integration logic has been updated and will not work with previous versions of the integration function in ~/.bashrc or ~/.zshrc. Run sgpt --install-integration to apply the new changes, and remove the old integration function from your shell profile if you were using it before.

(Demo video: ShellGPT.mp4)

REPL stdin

REPL mode can now accept stdin, a PROMPT argument, or even both. This is useful when you want to provide some initial context for your prompt.

sgpt --repl temp < my_app.py
Entering REPL mode, press Ctrl+C to exit.
──────────────────────────────────── Input ────────────────────────────────────
name = input("What is your name?")
print(f"Hello {name}")
───────────────────────────────────────────────────────────────────────────────
>>> What is this code about?
The snippet of code you've provided is written in Python. It prompts the user...
>>> Follow up questions...

It is also possible to pass a PROMPT argument to REPL mode: sgpt --repl temp "some initial prompt", or even both: sgpt --repl temp "initial arg prompt" < text.txt.

Full Changelog: 1.1.0...1.2.0

1.1.0

09 Jan 02:10
20ff0f2
(Demo video: main_h264_high_bitrate.mov)

OpenAI Library

ShellGPT has now integrated the OpenAI Python library for handling API requests. This integration simplifies the development and maintenance of the ShellGPT code base. Additionally, it enhances user experience by providing more user-friendly error messages, complete with descriptions and potential solutions.

Function Calling

Function calling is a powerful feature OpenAI provides. It allows the LLM to execute functions on your system, which can be used to accomplish a variety of tasks. ShellGPT offers a convenient way to define and use functions. To install the default functions, run:

sgpt --install-functions

This will add functions that let the LLM execute shell commands and run AppleScripts (on macOS). More details in the demo video and README.md.

Options

  • Shortcut option -c for --code.
  • Shortcut option -lc for --list-chats.
  • Shortcut option -lr for --list-roles.
  • New --functions option, enables/disables function calling.
  • New --install-functions option, installs default functions.

Config

  • New config variable OPENAI_FUNCTIONS_PATH
  • New config variable OPENAI_USE_FUNCTIONS
  • New config variable SHOW_FUNCTIONS_OUTPUT

Minor Changes

  • Code optimisation
  • Cache optimisations for function calls

1.0.1

22 Dec 00:50
482ec9d
  • Fixed a bug in REPL mode, which had not been working properly since the last release.
  • Minor optimisations and bug fixes in default roles.
  • Minor code optimisations.

1.0.0

20 Dec 05:18
7ac1f98

ShellGPT v1.0.0 release includes multiple significant changes:

  • Default model has been changed to gpt-4-1106-preview (a.k.a. GPT-4 Turbo).
  • ShellGPT roles (prompts) optimised for OpenAI GPT-4 models.
  • Using system roles when calling the OpenAI API with messages.
  • Rendering markdown for default and describe-shell-command outputs.
  • New config variable CODE_THEME which sets the theme for markdown (default is dracula).
  • Multiline input in REPL mode is now possible with """ triple quotes.
  • New --version option that prints the installed ShellGPT version.
  • Fixed an issue with the home directory in the Dockerfile which led to a container crash.
  • Code optimisations, minor bug fixes.
(Demo video: sgpt_demo_h264.mp4)
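The triple-quote multiline input mentioned above looks roughly like this in a REPL session (illustrative transcript; the prompt content is made up):

```text
sgpt --repl temp
Entering REPL mode, press Ctrl+C to exit.
>>> """
What does this Python snippet do?
name = input("What is your name?")
print(f"Hello {name}")
"""
```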

For users upgrading from previous versions, it is recommended to change DEFAULT_MODEL to gpt-4-1106-preview in the config file ~/.config/shell_gpt/.sgptrc, since older models might not perform well with system roles. Because of significant changes in roles, previously created custom roles and chats will unfortunately not work with ShellGPT v1.0.0, and you will need to re-create them using the new version.

Shoutout to all contributors: @jaycenhorton @arafatsyed @th3happybit @Navidur1 @moritz-t-w @Ismail-Ben

0.9.4

19 Jul 01:20
1c58566

By default, ShellGPT leverages OpenAI's large language models. However, this release provides the flexibility to use locally hosted models, which can be a cost-effective alternative. To use local models, you will need to run your own API server. You can accomplish this by using LocalAI, a self-hosted, OpenAI-compatible API. Setting up LocalAI allows you to run language models on your own hardware, potentially without the need for an internet connection, depending on your usage. To set up your LocalAI, please follow this comprehensive guide. Remember that the performance of your local models may depend on the specifications of your hardware and the specific language model you choose to deploy.

  • --model parameter is now a string (was an enum before).
  • Added LocalAI information to README.md.
  • Created a guide on the wiki page.