Automatically detect and send files that are mentioned in the prompt #41

Merged: 15 commits, May 29, 2023
41 changes: 17 additions & 24 deletions README.md
@@ -1,41 +1,33 @@
# Promptr

## TLDR
Promptr is a CLI tool that makes it easy to apply GPT's code change recommendations with a single command. With Promptr, you can quickly refactor code, implement classes to pass tests, and experiment with LLMs. No more copying code from the ChatGPT window into your editor.

<br />

[This PR](https://github.com/ferrislucas/promptr/pull/38) has some good examples of what can be accomplished using Promptr. You can find links to the individual commits and the prompts that created them in the PR description.
Promptr is a CLI tool that lets you use plain English to instruct GPT3 or GPT4 to make changes to your codebase. This is most effective with GPT4 because of its larger context window, but GPT3 is still useful for smaller scopes.
<br /><br />

The PRs below are good examples of what can be accomplished using Promptr. You can find links to the individual commits and the prompts that created them in the PR descriptions.
- https://github.com/ferrislucas/promptr/pull/38
- https://github.com/ferrislucas/promptr/pull/41
<br /><br />

## Introduction
Promptr automates the process of providing ChatGPT with source code and a prompt, and then applying ChatGPT's response to the filesystem. This allows you to apply plain English instructions to your codebase. This is most effective with GPT4 because of its larger context window, but GPT3 is still useful for smaller scopes.
<br />
I've found this to be a good workflow:
- Commit any changes, so you have a clean working area.
- Author your prompt in a text file. The prompt should contain specific, clear instructions.
- Make sure your prompt contains the relative paths of any files that are relevant to your instructions.
- Use Promptr to execute your prompt. Provide the path to your prompt file using the `-p` option:
`promptr -p my_prompt.txt`
*If you have access to GPT4, use the `-m gpt4` option to get the best results.*

I've found this to be a good workflow:
- Commit any changes, so you have a clean working area
- Author your prompt in a text file. Work with the prompt in your favorite editor - mold it into clear instructions almost as if it's a task for an inexperienced co-worker.
- Use promptr to send your prompt __and the relevant files__ to GPT. It's critical to send the relevant files with your request. Think about what files your inexperienced co-worker would need to know about in order to fulfill the request.
- Complex requests can take a while (or time out). When the response is ready, promptr applies the changes to your filesystem. Use your favorite git UI to inspect the results.
Complex requests can take a while. If a task is too complex, the request will time out; try breaking the task down into smaller units of work when this happens. When the response is ready, promptr applies the changes to your filesystem. Use your favorite git UI to inspect the results.
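
A minimal sketch of that workflow (the prompt wording and file names here are hypothetical):
```bash
# Start from a clean working tree
git status

# Write clear instructions, mentioning the relevant files by relative path
echo "Extract the validation logic in src/index.js into src/validator.js" > my_prompt.txt

# Send the prompt; files referenced in it are detected and included automatically
promptr -m gpt4 -p my_prompt.txt

# Inspect the changes Promptr applied
git diff
```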

<br /><br />


## Examples
__Cleanup the code in a file__
Promptr recognizes that the file `src/index.js` is referenced in the prompt, so the contents of `src/index.js` are automatically sent to the model along with the prompt.
```bash
$ promptr -p "Cleanup the code in this file" index.js
```
<br />

__Cleanup the code in two files__
<br />
The following example uses GPT4 to clean up the code in two files by passing the file paths as arguments:
```bash
$ promptr -m gpt4 -p "Cleanup the code in these files" app/index.js app.js
$ promptr -p "Cleanup the code in src/index.js"
```
<br />
<br /><br />

__Alphabetize the methods in all of the JavaScript files__
<br />
@@ -91,6 +83,7 @@ Sit back... Relaaxxxxxx... let Promptr carry you on a gentle cruise through a li
- `-x` Optional boolean flag. Promptr attempts to parse the model's response and apply the resulting operations to the current directory tree when using the "refactor" template. You only need to pass the `-x` flag if you've created your own template, and you want Promptr to parse the output of your template in the same way that the built-in "refactor" template is parsed.
- `-o, --output-path <outputPath>`: Optional string flag that specifies the path to the output file. If this flag is not set, the output will be printed to stdout.
- `-v, --verbose`: Optional boolean flag that enables verbose output, providing more detailed information during execution.
- `-dac, --disable-auto-context`: Prevents files referenced in the prompt from being automatically included in the context sent to the model.
- `--version`: Display the version and exit

Additional parameters can specify the paths to files that will be included as context in the prompt. The parameters should be separated by a space.
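
As an illustration, a full invocation might look like the following (the prompt file and source file names are hypothetical):
```bash
# Use GPT4, load the prompt from a file, and pass two extra files as context.
# Auto-context is left enabled, so any paths mentioned inside the prompt are included as well.
promptr -m gpt4 -p my_prompt.txt app/models/user.js app/models/account.js
```
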
11 changes: 8 additions & 3 deletions src/cliState.js
@@ -7,13 +7,14 @@ export default class CliState {
static init(_args, version) {
this.program = new Command();
this.program.option('-d, --dry-run', 'Dry run only: just display the prompt')
this.program.option('-i, --interactive', 'Interactive mode');
this.program.option('-x, --execute', 'Apply changes suggested by GPT to the local filesystem. The "refactor" template automatically applies the changes. You would only use this option if you create your own templates.');
this.program.option('-p, --prompt <prompt>', 'Prompt to use in non-interactive mode');
this.program.option('-i, --interactive', 'Interactive mode')
this.program.option('-x, --execute', 'Apply changes suggested by GPT to the local filesystem. The "refactor" template automatically applies the changes. You would only use this option if you create your own templates.')
this.program.option('-p, --prompt <prompt>', 'Prompt to use in non-interactive mode')
this.program.option('-t, --template <template>', 'Template name, template path, or a url for a template file')
this.program.option('-o, --output-path <outputPath>', 'Path to output file. If no path is specified, output will be printed to stdout.')
this.program.option('-v, --verbose', 'Verbose output')
this.program.option('-m, --model <model>', 'Specify the model: (gpt3|gpt4)', 'gpt3')
this.program.option('-dac, --disable-auto-context', 'Prevents files referenced in the prompt from being automatically included in the context sent to the model.');

this.program.version(version, '--version', 'Display the current version')

@@ -66,4 +67,8 @@ Example call:
return this.program.opts().interactive
}

static disableAutoContext() {
return !!this.program.opts().disableAutoContext
}

}
19 changes: 19 additions & 0 deletions src/services/AutoContext.js
@@ -0,0 +1,19 @@
export default class AutoContext {
/**
* Extracts file paths from the given prompt.
*
* @param {string} prompt - The input prompt containing file paths.
* @returns {string[]} An array of extracted file paths.
*/
static call(prompt) {
const filePaths = [];
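// Matches path-like tokens preceded by start-of-string, whitespace, or a double quote:
// an optional leading slash, path characters, then a dot and a file extension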
const regex = /(?:^|[\s"])(\/?[\w\.\-\/]+\.\w+)/g;
let match;

while ((match = regex.exec(prompt)) !== null) {
filePaths.push(match[1]);
}

return filePaths;
}
}
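
A quick usage sketch of the extractor above (the prompt text is made up):
```javascript
import AutoContext from './src/services/AutoContext.js'

const prompt = 'Refactor src/index.js and update test/index.test.js to match'
console.log(AutoContext.call(prompt))
// => [ 'src/index.js', 'test/index.test.js' ]
```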
4 changes: 2 additions & 2 deletions src/services/fileService.js
@@ -7,14 +7,14 @@ export class FileService {

static async load(filePath) {
const fileDoesNotExist = !await this.fileExists(filePath)
if (fileDoesNotExist) return("")
if (fileDoesNotExist) return null
try {
// Use the fs module to read the file
const data = await fs.promises.readFile(filePath, "utf-8")
return data
} catch (err) {
this.log(err)
return("")
return null
}
}

6 changes: 4 additions & 2 deletions src/services/pluginService.js
@@ -1,12 +1,12 @@
import { fileURLToPath } from 'url'
import { dirname } from 'path'
import { encode } from "gpt-3-encoder"
import { FileService } from './fileService.js'
import CliState from '../cliState.js'
import OpenAiGptService from './OpenAiGptService.js'
import RefactorResultProcessor from './refactorResultProcessor.js'
import TemplateLoader from './templateLoaderService.js'
import PromptContext from './promptContext.js'
import AutoContext from './AutoContext.js'
import { extractOperationsFromOutput } from './extractOperationsFromOutput.js'

export default class PluginService {
@@ -21,7 +21,9 @@ export default class PluginService {
return 1
}
if (CliState.getModel() != "execute") {
let context = await PromptContext.call(CliState.args)
let args = CliState.args
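// Unless auto-context is disabled, append any file paths detected in the prompt to the context arguments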
if (!CliState.disableAutoContext()) args = args.concat(AutoContext.call(userInput))
let context = await PromptContext.call(args)
const __filename = fileURLToPath(import.meta.url)

let templatePath = "refactor"
11 changes: 7 additions & 4 deletions src/services/promptContext.js
@@ -6,10 +6,13 @@ export default class PromptContext {
files: [],
}
for (let n = 0; n < args.length; n++) {
context.files.push({
filename: args[n],
content: await FileService.load(args[n]),
})
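// Skip paths that could not be loaded (FileService.load returns null for missing files)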
const fileContent = await FileService.load(args[n]);
if (fileContent !== null) {
context.files.push({
filename: args[n],
content: fileContent,
})
}
}
return context
}
45 changes: 45 additions & 0 deletions test/AutoContext.test.js
@@ -0,0 +1,45 @@
import assert from 'assert';
import AutoContext from '../src/services/AutoContext.js';


describe('AutoContext.call', () => {
let prompt = 'test prompt'

it('returns empty array if there are no paths mentioned in the prompt', () => {
const result = AutoContext.call(prompt)
assert.deepStrictEqual([], result)
})

describe('when the prompt mentions a path', () => {
let prompt = 'add a new method to the class in src/services/AutoContext.js'

it('returns an array of paths mentioned in the prompt', () => {
const result = AutoContext.call(prompt)
assert.deepStrictEqual(['src/services/AutoContext.js'], result)
})
})

describe('when the prompt mentions multiple relative paths', () => {
let prompt = 'add a new method to the class in src/services/AutoContext.js - also, do the same for the class in src/services/AnotherClass.js'

it('returns an array of paths mentioned in the prompt', () => {
const result = AutoContext.call(prompt)
assert.deepStrictEqual([
'src/services/AutoContext.js',
'src/services/AnotherClass.js',
], result)
})
})

describe('when the prompt mentions multiple absolute paths', () => {
let prompt = 'add a new method to the class in /src/services/AutoContext.js - also, do the same for the class in /src/services/AnotherClass.js'

it('returns an array of paths mentioned in the prompt', () => {
const result = AutoContext.call(prompt)
assert.deepStrictEqual([
'/src/services/AutoContext.js',
'/src/services/AnotherClass.js',
], result)
})
})
})
106 changes: 106 additions & 0 deletions test/PluginService_call.test.js
@@ -0,0 +1,106 @@
import assert from 'assert';
import sinon from 'sinon';
import PluginService from '../src/services/pluginService.js';
import CliState from '../src/cliState.js';
import RefactorResultProcessor from '../src/services/refactorResultProcessor.js';
import TemplateLoader from '../src/services/templateLoaderService.js';
import PromptContext from '../src/services/promptContext.js';
import AutoContext from '../src/services/AutoContext.js';

describe('PluginService', () => {

beforeEach(() => {
CliState.init([], '')
})

describe('call method', () => {
let executeModeStub
let loadTemplateStub
let buildContextStub

beforeEach(() => {
loadTemplateStub = sinon.stub(TemplateLoader, 'loadTemplate')
buildContextStub = sinon.stub(PromptContext, 'call')
executeModeStub = sinon.stub(PluginService, 'executeMode')
});

afterEach(() => {
if (loadTemplateStub) loadTemplateStub.restore()
if (buildContextStub) buildContextStub.restore()
if (executeModeStub) executeModeStub.restore()
sinon.restore()
});

it('should use refactor.txt as default template', async () => {
loadTemplateStub.resolves('Test content');
buildContextStub.resolves({ files: [] });
executeModeStub.resolves('{ "operations": [] }');

await PluginService.call('Test input');

assert(loadTemplateStub.calledWith('Test input', { files: [] }, sinon.match(/refactor$/)));
});

describe('when AutoContext.call() returns some paths', () => {
let autoContextPaths = ['test/path1', 'test/path2']
let autoContextStub
const prompt = "Test prompt"

beforeEach(() => {
executeModeStub.resolves('{ "operations": [] }')
autoContextStub = sinon.stub(AutoContext, 'call')
autoContextStub.returns(autoContextPaths)
})

it('should pass the paths from AutoContext.call into PromptContext.call', async () => {
await PluginService.call(prompt)

assert(buildContextStub.calledWith(autoContextPaths))
})

it('passes the prompt to AutoContext.call', async () => {
await PluginService.call(prompt)

assert(autoContextStub.calledWith(prompt))
})

describe('when AutoContext is disabled', () => {
beforeEach(() => {
const args = ['node', 'index.js', '-m', 'gpt3', '-p', prompt, '--disable-auto-context']
CliState.init(args)
})

it('should not pass the paths from AutoContext.call into PromptContext.call', async () => {
await PluginService.call(prompt)

assert(buildContextStub.calledWith([]))
})
})
})

it('should pass RefactorResultProcessor.call the operations', async () => {
loadTemplateStub.resolves('Test content');
buildContextStub.resolves({ files: [] });
executeModeStub.resolves('{ "operations": [{ "thing": 1 }] }');
const refactorResultProcessorStub = sinon.stub(RefactorResultProcessor, 'call').resolves();

await PluginService.call('Test input');

assert(refactorResultProcessorStub.calledWith({ operations: [{ thing: 1 }] }))

refactorResultProcessorStub.restore();
});

it('should call loadTemplate with default templatePath when CliState.getTemplatePath() is empty or undefined', async () => {
loadTemplateStub.resolves('Test content');
buildContextStub.resolves({ files: [] });
executeModeStub.resolves('{ "operations": [] }');
sinon.stub(CliState, 'getExecuteFlag').returns('')

await PluginService.call('Test input');

assert(loadTemplateStub.calledWith(sinon.match.any, { files: [] }, "refactor"))
});
});

});