EPIC: Auto mode #5
Comments
I've added doc/auto/*.txt and integrated the indexer context message, so the local functionary model knows which commands to run and how to achieve certain things with r2. This way we can improve the local-model auto interaction quite a lot. Please give it a try, because thanks to this I was able to do several more things properly without GPT :) Also, I've updated some models, and mistral2 supports 32K contexts, so maybe we can use that to compress large disassemblies or decompilation outputs and get improved output processing. The max-tokens and context-window size is another issue we face with local models; until llama gets improved to handle larger contexts properly, we must find ways to work around those limitations. Unfortunately I can't use some 3rd-party APIs because I'm in Europe and some AI vendors don't give us access :( so, once again, another reason to use local models.
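For anyone curious how the doc/auto context injection could look, here is a minimal sketch. The doc/auto/*.txt layout comes from the comment above, but the function names, prompt wording, and truncation limit are my own assumptions for illustration, not r2ai's actual implementation:

```python
import glob

def build_auto_context(doc_dir="doc/auto"):
    """Concatenate the doc/auto/*.txt hint files into one context blob.

    The directory name comes from the comment above; everything else
    (prompt wording, truncation) is a hypothetical sketch.
    """
    chunks = []
    for path in sorted(glob.glob(f"{doc_dir}/*.txt")):
        with open(path, encoding="utf-8") as fh:
            chunks.append(fh.read().strip())
    return "\n\n".join(chunks)

def make_system_prompt(max_chars=8000):
    # Keep the injected context small: local models have tight
    # context windows, so trim rather than overflow.
    context = build_auto_context()[:max_chars]
    return (
        "You are assisting inside radare2. Use the following notes on "
        "which r2 commands to run:\n\n" + context
    )
```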
That's awesome, just tried functionary and it's actually usable with these simple instructions! I thought it'd be a bigger effort :)
Something to take into consideration for another approach to auto mode, without function calling: https://github.com/user1342/Monocle
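As a rough illustration of what a no-function-calling auto mode could look like, here is a hypothetical sketch: instruct the model to emit each r2 command on its own marked line and parse them out of the plain-text reply. The `r2cmd:` marker convention and helper name are made up for illustration, not taken from Monocle or r2ai:

```python
import re

# Hypothetical convention: the model is instructed to emit commands
# as lines of the form `r2cmd: <command>` instead of using tool calls.
CMD_PATTERN = re.compile(r"^r2cmd:\s*(.+)$", re.MULTILINE)

def extract_commands(model_reply: str) -> list[str]:
    """Pull r2 commands out of a plain-text model reply."""
    return [m.group(1).strip() for m in CMD_PATTERN.finditer(model_reply)]

reply = "Let's inspect the entry point.\nr2cmd: aaa\nr2cmd: pdf @ entry0"
print(extract_commands(reply))  # ['aaa', 'pdf @ entry0']
```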
If everything looks good to you, I would like to release 0.6 and publish it on pip, so we can start messing with other stuff from this stable point once again :) Sound good to you?
sounds great!
On this topic, what do you guys think of this? https://docs.phidata.com/introduction I'm more interested in this: https://docs.phidata.com/knowledge/introduction The thought of having a way to help the assistant know how to use radare excites me 😂
@nitanmarcel there's already a RAG in r2ai that can be used for that, I think. I remember pancake got one working a few months ago.
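For context, a retrieval step like the one mentioned here usually boils down to scoring doc chunks against the query and injecting the closest ones into the prompt. The sketch below is a generic illustration, not r2ai's actual RAG code; it uses a toy bag-of-words similarity where a real setup would use a proper embedding model:

```python
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real RAG would use an
    # embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k doc chunks most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

docs = [
    "pdf disassembles the current function",
    "afl lists all functions found by analysis",
    "iz lists strings in the data section",
]
print(retrieve("how do I list functions?", docs, k=1))
```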
Tracked items:
- :auto
- r2lang.cmd
- support chatml-function-calling via llama-cpp (#4)
- functionary v2 via llama-cpp
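Since the list above mentions chatml-function-calling via llama-cpp, here is a minimal sketch of what a tool-calling round trip can look like with llama-cpp-python. The model path and the `r2cmd` tool definition are assumptions for illustration, not r2ai's actual wiring:

```python
from llama_cpp import Llama

# Model path is a placeholder; chat_format="chatml-function-calling"
# is the llama-cpp-python chat handler this issue refers to.
llm = Llama(
    model_path="./models/functionary-small-v2.2.q4_0.gguf",
    chat_format="chatml-function-calling",
    n_ctx=4096,
)

tools = [{
    "type": "function",
    "function": {
        "name": "r2cmd",  # hypothetical tool: run an r2 command
        "description": "Run a radare2 command and return its output",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List all functions."}],
    tools=tools,
)
# An OpenAI-style tool call, if any, shows up on the reply message.
print(resp["choices"][0]["message"].get("tool_calls"))
```

The nice part of this shape is that the host loop just executes whatever `r2cmd` arguments come back (e.g. via r2lang.cmd when running inside r2), appends the output as a tool message, and asks the model again until it answers in plain text.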