
add full file_edits_to_llm_message + tests #481

Merged · 6 commits · Jan 17, 2024
Conversation

granawkins (Member) commented Jan 13, 2024

This adds the missing GitParser.file_edits_to_llm_message method, which we'll use to convert file edits from your conversation history into git diffs for Sampler v0.2.0.
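The method body itself isn't shown in this excerpt, so here is a minimal sketch of what converting file edits into a git-diff-style message can look like. The `FileEdit` field names (`file_path`, `previous_file_lines`, `new_file_lines`) are assumptions based on the attributes visible in the diff below; the real GitParser.file_edits_to_llm_message may differ.

```python
# Hypothetical sketch only: render each recorded file edit as a unified
# diff and concatenate them into one LLM-facing message. Field names are
# assumptions, not Mentat's actual API.
import difflib
from dataclasses import dataclass, field


@dataclass
class FileEdit:
    file_path: str
    previous_file_lines: list[str] = field(default_factory=list)
    new_file_lines: list[str] = field(default_factory=list)


def file_edits_to_llm_message(file_edits: list[FileEdit]) -> str:
    """Render each edit as a unified diff, joined into one message."""
    parts = []
    for edit in file_edits:
        diff = difflib.unified_diff(
            edit.previous_file_lines,
            edit.new_file_lines,
            fromfile=f"a/{edit.file_path}",
            tofile=f"b/{edit.file_path}",
            lineterm="",
        )
        parts.append("\n".join(diff))
    return "\n".join(parts)


message = file_edits_to_llm_message(
    [FileEdit("hello.py", ["print('hi')"], ["print('hello')"])]
)
```

The key idea matching the PR description: because each FileEdit carries the file's prior contents, a diff can be reconstructed later without re-reading the working tree.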

Pull Request Checklist

  • Documentation has been updated, or this change doesn't require that

biobootloader (Member)

@PCSwingle can you take a look at this?

@@ -62,6 +100,12 @@ def parse_string(self, git_diff: str) -> ParsedLLMResponse:
                is_deletion=is_deletion,
                rename_file_path=new_name,
            )
            if not is_creation:
                file_edit.previous_file_lines = (
This is used specifically for undos (when the FileEdit is applied, it sets this so that the FileEdit can be undone). It feels a bit weird using it here, but I guess it works.
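For context, the undo mechanism this comment describes can be pictured as follows. This is an illustrative sketch, not Mentat's actual code: the class shape and the dict-of-files stand-in are assumptions; only the idea that applying an edit records `previous_file_lines` so it can be reversed comes from the comment.

```python
# Hypothetical sketch of the undo path: applying a FileEdit saves the
# file's prior contents, which later allows the edit to be reversed.
# Names and the dict-based "filesystem" are illustrative only.

class FileEdit:
    def __init__(self, file_path: str, new_lines: list[str]):
        self.file_path = file_path
        self.new_lines = new_lines
        self.previous_file_lines: list[str] | None = None

    def apply(self, files: dict[str, list[str]]) -> None:
        # Save current contents before overwriting, enabling undo.
        self.previous_file_lines = files.get(self.file_path, [])
        files[self.file_path] = self.new_lines

    def undo(self, files: dict[str, list[str]]) -> None:
        if self.previous_file_lines is not None:
            files[self.file_path] = self.previous_file_lines


files = {"a.py": ["old"]}
edit = FileEdit("a.py", ["new"])
edit.apply(files)
edit.undo(files)
```

This is why the same field is usable here: the snapshot taken for undo doubles as the "before" side of a git diff.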

granawkins (Member, Author) — Jan 17, 2024

In the next PR, the ParsedLLMResponse will be saved to conversation history, and the Sampler will use these lines to generate a git diff for the conversation history.

I made a class MentatAssistantMessageParam that lets us attach the ParsedLLMResponse, as I referenced in an old PR.

async for chunk in response:
    for content in chunk_to_lines(chunk):
        string += content
return self.parse_string(string)
# Wait for a full line
Why are we parsing it this way? If we aren't streaming it, this just adds a bunch of complexity for no reason.
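The simpler non-streaming shape the reviewer seems to be suggesting might look like the sketch below. The async response iterator and `parse_string` are stubbed here purely for illustration; the real parser is GitParser.parse_string.

```python
# Sketch of the simpler version the reviewer suggests: accumulate the
# whole response first, then parse once, with no per-line bookkeeping.
# parse_string and the response iterator are stand-ins, not Mentat code.
import asyncio


def parse_string(s: str) -> str:
    # Stand-in for the real GitParser.parse_string.
    return s.upper()


async def fake_response():
    for chunk in ["@@ -1 ", "+1 @@\n", "+hello\n"]:
        yield chunk


async def stream_and_parse(response) -> str:
    # Join all chunks, then parse the complete string in one pass.
    string = "".join([chunk async for chunk in response])
    return parse_string(string)


result = asyncio.run(stream_and_parse(fake_response()))
```

Since nothing is consumed mid-stream, waiting for full lines buys nothing here, which is the reviewer's point.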

if conversation:
    while not conversation.endswith("\n\n"):
        conversation += "\n"
parsedLLMResponse.full_response = (
Why don't we just have parse_string return a complete parsedLLMResponse (complete with conversation and full_response) instead of tacking it on up here? I think that would make more sense
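One way to read this suggestion: parse_string fills in conversation and full_response itself rather than the caller patching them on afterwards. A hypothetical sketch, assuming a much-reduced ParsedLLMResponse (the real class has more fields):

```python
# Hypothetical sketch of the reviewer's suggestion: parse_string returns
# a fully populated response object. Only the two fields visible in the
# diff are modeled; the real ParsedLLMResponse is richer.
from dataclasses import dataclass


@dataclass
class ParsedLLMResponse:
    conversation: str
    full_response: str


def parse_string(git_diff: str, conversation: str = "") -> ParsedLLMResponse:
    # Normalize the conversation so it ends with a blank line, mirroring
    # the endswith("\n\n") loop from the diff above.
    if conversation:
        while not conversation.endswith("\n\n"):
            conversation += "\n"
    return ParsedLLMResponse(
        conversation=conversation,
        full_response=conversation + git_diff,
    )


resp = parse_string("+hello\n", conversation="Here is the change:")
```

The benefit is that every caller gets a complete object, instead of each call site remembering to tack the extra fields on.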

granawkins merged commit 830ebdf into main on Jan 17, 2024
16 checks passed