
batchai - A supplement to Copilot and Cursor - utilizes AI for batch processing of project code

Chinese

I rely heavily on ChatGPT and GitHub Copilot, but constantly copying and pasting between the Copilot chat window and my open code files is frustrating. Why not update the files directly? I also tried Cursor, which solves the first problem, but I still have to open each file individually to add it to the AI's context.

That's why I created batchai. The idea is simple: less copy-pasting, fewer clicks on 'Add to Chat' or 'Apply'. batchai traverses the project files and processes each of them. Since AI isn't always perfect, I designed it to run only inside a Git directory, so we can easily diff the changes and choose to either commit or revert them.

Currently, batchai only supports code checking and fixing common issues (think of it as a local AI-driven SonarQube). The next feature in progress is generating unit test code in batches, which I plan to use on a few of my personal projects (including batchai itself), since they have very few unit tests. Other planned features include code explanation, comment generation, and refactoring, all handled in batches. Additionally, I'm working on giving batchai an overall insight into the project's code, such as building a cross-file code symbol index, which should help the AI perform better.

Here are some interesting findings from testing batchai on my personal projects over the past two weeks:

  • It can identify issues that traditional tools, such as SonarQube, tend to miss.
  • It may not report all issues in one go, so I need to run it multiple times.
  • Due to outdated LLM training data and hallucinations, it's crucial to verify the changes for accuracy myself - that's why batchai only works on clean Git repository directories.
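The clean-repository requirement makes review cheap: every AI edit shows up in git diff and can be reverted wholesale. A minimal sketch of that review loop (a throwaway repository stands in for a real project, and the AI edit is simulated rather than produced by batchai):

```shell
# Demo of the diff-then-commit-or-revert workflow batchai's design enables.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo 'original line' > Example.java
git add Example.java
git -c user.email=demo@example.com -c user.name=demo commit -qm 'baseline'

echo 'ai-modified line' > Example.java   # pretend `batchai check --fix` did this

git diff --stat                          # review what changed
git checkout -- Example.java             # revert a bad fix...
# git commit -am 'apply batchai fixes'   # ...or commit a good one
```

Because the working tree started clean, everything the tool touched is visible to `git diff`, and nothing else can be lost by the revert.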

I used spring-petclinic (cloned from https://github.com/spring-projects/spring-petclinic) for demonstration.

Here are some examples of correct checks:

And here is an example of an incorrect fix:

More details:

Features

  • Batch Code Check : Reports issues to the console, saves them as a check report, and optionally fixes the code directly.
  • Batch Test Code Generation
  • Customized Prompts
  • File Ignoring : Specifies files to ignore, respecting both .gitignore and an additional .batchai_ignore file.
  • Target Specification : Allows specifying target directories and files within the Git repository.
  • Implemented in Go : Ships as a single executable binary that works on macOS, Linux, and Windows.
  • Diff : Displays colorized diffs in the console.
  • LLM Support : Supports OpenAI-compatible LLMs, including Ollama.
  • I18N : Supports internationalized comment/explanation generation.

Planned features

  • Explain, Comment Generation, Test Generation, Refactoring.
  • Rejected Changes Tracking : Tracks rejected changes to avoid redundant modifications.
  • Language-Specific Prompts : Different prompts for various programming languages.
  • LLM Usage Metrics : Implements metrics for tracking LLM usage.

Getting Started

  1. Download the latest executable binary from the GitHub releases page and add it to your $PATH. For Linux and macOS, remember to run chmod +x on the downloaded binary to make it executable.

  2. Clone the demo project. The following steps assume the cloned project directory is /data/spring-petclinic

    cd /data
    git clone https://github.com/spring-projects/spring-petclinic
    cd spring-petclinic

    In this directory, create a .env file. In the .env file, set the OPENAI_API_KEY. Below is an example:

    # OpenAI
    OPENAI_API_KEY=change-it
    #OPENAI_PROXY_URL=
    #OPENAI_PROXY_USER=
    #OPENAI_PROXY_PASS=
    #BATCHAI_CHECK_MODEL=openai/gpt-4o-mini
    
    # Ali TONGYI qwen
    #QWEN_API_KEY=change-it
    #BATCHAI_CHECK_MODEL=tongyi/qwen2.5-coder-7b-instruct
    
    # local Ollama
    #OLLAMA_BASE_URL=http://localhost:11434/v1/
    #BATCHAI_CHECK_MODEL=ollama/qwen2.5-coder:7b-instruct-fp16

    For Ollama, you can refer to my example docker-compose.yml.
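The referenced compose file is not reproduced here, but a minimal sketch of such a setup might look like the following (the service name, volume name, and published port are assumptions, based only on Ollama's default port 11434 used in the .env example above):

```yaml
# Hypothetical minimal docker-compose.yml for a local Ollama server.
services:
  ollama:
    image: ollama/ollama              # official Ollama image
    ports:
      - "11434:11434"                 # matches OLLAMA_BASE_URL above
    volumes:
      - ollama-models:/root/.ollama   # persist downloaded models across restarts
volumes:
  ollama-models:
```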

  3. CLI Examples:

    • Report issues to the console (also saved to build/batchai):

      cd /data/spring-petclinic
      batchai check . src/main/java/org/springframework/samples/petclinic/vet/Vets.java

    • Directly fix the target files via the --fix option:

      cd /data/spring-petclinic
      batchai check --fix . src/main/java/org/springframework/samples/petclinic/vet/Vets.java

    • Run batchai on the main Java code only:

      cd /data/spring-petclinic
      batchai check . src/main/java/

    • Run batchai on the entire project:

      cd /data/spring-petclinic
      batchai check .

CLI Usage

  • To view the global help menu and available commands, run:

    batchai -h
    NAME:
    batchai - utilizes AI for batch processing of project codes
    
    USAGE:
      batchai [global options] command [command options] <repository directory>  [target files/directories in the repository]
    
    VERSION:
      0.1.2 (5eeb081)
    
    COMMANDS:
      check            Scans project code to check for issues. The report is output to the console and also saved to 'build/batchai'
      list             Lists files to process
      test             Generates unit test code
      explain (TODO)   Explains the code, output result to console or as comment
      comment (TODO)   Comments the code
      refactor (TODO)  Refactors the code
      help, h          Shows a list of commands or help for one command
    
    GLOBAL OPTIONS:
      --enable-symbol-reference  Enables symbol collection to examine code references across the entire project (default: false)
      --force                    Ignores the cache (default: false)
      --num value, -n value      Limits the number of files to process (default: 0)
      --concurrent               Whether to process files concurrently (default: false)
      --lang value, -l value     language for generated text (default: en_US.UTF-8) [$LANG]
      --help, -h                 show help
      --version, -v              print the version
  • To see detailed help for the check command, run:

    batchai check -h
    NAME:
      batchai check - Report issues to console, also saved to 'build/batchai'
    
    USAGE:
      batchai check [command options]
    
    OPTIONS:
      --fix, -f   Replaces the target files (default: false)
      --help, -h  show help

Supported LLMs

Tested and supported models:

  • OpenAI series:

    • openai/gpt-4o

    • openai/gpt-4o-mini

    Other OpenAI models should work too.

  • Ali TONGYI Qwen series:

    • qwen2.5-coder-7b-instruct (also available via Ollama)

    Other Qwen models should work too.

To add more LLMs, simply follow the configuration in res/static/batchai.yaml, as long as the LLM exposes an OpenAI-compatible API.

Configuration

  • Optional configuration file:

    You can provide an optional configuration file: ${HOME}/batchai/batchai.yaml. For a full example, refer to res/static/batchai.yaml

  • Environment file:

    You can also configure batchai via an environment file .env located in the target Git repository directory. Refer to res/static/batchai.yaml for all available environment variables, and res/static/batchai.env for their default values.

  • Ignore specific files:

    batchai ignores directories and files according to .gitignore files. This is usually sufficient, but if there are additional files or directories that Git cannot ignore yet batchai should not process, you can specify them in .batchai_ignore files. The rules are written the same way as in .gitignore.
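    For example, a hypothetical .batchai_ignore that keeps generated and vendored code away from the LLM (the patterns below are illustrative, not defaults):

```gitignore
# .batchai_ignore — same pattern syntax as .gitignore
build/
target/
*.min.js
vendor/
```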

  • Customized Prompts :

    Refer to BATCHAI_CHECK_RULE_* and MY_CHECK_RULE_* in res/static/batchai.yaml

License

MIT

