
Use chat completions for llama.cpp #75

Merged · 1 commit merged into radareorg:master on Oct 22, 2024
Conversation

@dnakov (Collaborator) commented Oct 22, 2024

Checklist

  • Closing issues: #issue
  • Mark this if you consider it ready to merge
  • I've added tests (optional)
  • I wrote some documentation

Description

I don't think we need the custom formats anymore; they're handled within llama.cpp. But you've played with the local models more than I have, so let me know.
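
For context, this is roughly the difference in question — a minimal sketch assuming the llama-cpp-python bindings (the model path and prompt text are placeholders): instead of hand-building a model-specific prompt string, the caller passes role/content messages and llama.cpp applies the chat template that ships with the model.

```python
# Minimal sketch of the change under discussion, assuming the
# llama-cpp-python bindings; model path and prompt are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="./model.gguf", n_ctx=4096)

# Before: the caller hand-builds a model-specific prompt string
# (here a Llama-2-style [INST] template) and uses plain completion.
prompt = "[INST] Summarize this function. [/INST]"
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])

# After: pass role/content messages and let llama.cpp apply the
# chat template bundled with the model.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this function."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```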

@trufae (Contributor) commented Oct 22, 2024

Sounds like a reasonable default, but I would like to keep control over it. Can you move the original code into a separate file so we can keep playing with custom chat formattings for hacking purposes? We could have an r2ai -e variable to select which one to use.

@dnakov (Collaborator, Author) commented Oct 22, 2024

-e chat.use_completion=true
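
That is, the old path stays reachable behind an eval variable. A hedged sketch of how that branch might look — only the variable name chat.use_completion comes from this thread; the function and the stand-in legacy formatter are hypothetical, not r2ai's real API:

```python
def custom_format(messages):
    # Stand-in for the legacy per-model formatter being moved aside.
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

def ask(llm, messages, env):
    # Hypothetical toggle: default to llama.cpp's chat completions,
    # fall back to hand-rolled formatting when the user opts out.
    if env.get("chat.use_completion", "true") == "true":
        # llama.cpp applies the model's own chat template.
        return llm.create_chat_completion(messages=messages)
    return llm(custom_format(messages))
```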

@trufae merged commit 9d83ec3 into radareorg:master on Oct 22, 2024
1 check passed