Problem
The endpoints "/chat/completions" and "/api/generate" are well-suited for writing test cases or generating complete code snippets.
Writing tests:
write a unit test for this function: $(cat example.py)
Code completions:
# A simple python function to remove whitespace from a string:
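For illustration, a completion prompt like the one above maps directly onto a plain "/api/generate" request. This is a minimal sketch only; the model name "codellama" is an example placeholder, and the request is built but not sent:

```python
import json

# Build a plain /api/generate request body for an ordinary
# completion-style prompt (model name is only an example).
prompt = "# A simple python function to remove whitespace from a string:"
payload = {
    "model": "codellama",  # any locally pulled completion model
    "prompt": prompt,
    "stream": False,
}

# This body would be POSTed to http://localhost:11434/api/generate;
# here we only print the serialized request.
print(json.dumps(payload, indent=2))
```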
However, these endpoints are much less effective for non-standard or incomplete code, where the model must fill a gap in the middle rather than continue from the end.
FIM (Fill-in-the-Middle) is a specialized prompting format supported by code completion models, allowing completion of code between two pre-written code segments.
<PRE> def compute_gcd(x, y): <SUF>return result <MID>
<PRE>, <SUF> and <MID> are special tokens that guide the model.
The challenge is that each model family uses different special tokens for this purpose. The developers of llama.cpp and ollama have already identified this issue.
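To make the incompatibility concrete, here is a sketch of the per-family token mapping such a feature would have to hide from the caller. The CodeLlama tokens match the example above; the StarCoder tokens are taken from its published model card, and the exact strings should be treated as assumptions to verify against each model's tokenizer:

```python
# Per-family FIM prompt templates (assumed from public model cards;
# verify against the tokenizer of the exact model you run).
FIM_TEMPLATES = {
    "codellama": "<PRE> {prefix} <SUF>{suffix} <MID>",
    "starcoder": "<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>",
}

def build_fim_prompt(family: str, prefix: str, suffix: str) -> str:
    """Wrap a prefix/suffix pair in the family's FIM special tokens."""
    return FIM_TEMPLATES[family].format(prefix=prefix, suffix=suffix)

# Reproduces the CodeLlama example from the text above.
print(build_fim_prompt("codellama", "def compute_gcd(x, y):", "return result"))
```

A unified endpoint could select the template from the model's metadata, so clients send only prefix and suffix instead of hard-coding tokens per model.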
Links:
Solution
No response