LlamaIndex Open Source Roadmap #9888
Replies: 12 comments 15 replies
-
Looks like 2024 will be amazing 🙏🏻
-
I feel like many of the improvements planned here are pretty similar to Haystack v2's design; we can learn a lot from it. It was super easy for me to create a custom component in Haystack v2, and they separated all integrations into one repo, where each integration has its own package and version.
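For reference, a custom component in Haystack 2.x is just a decorated class with typed outputs and a `run` method; a minimal sketch (the component name and logic here are made up for illustration):
```python
from haystack import component


@component
class WhitespaceCleaner:
    """Toy component: collapses runs of whitespace in the input text."""

    @component.output_types(text=str)
    def run(self, text: str):
        # Components return a dict keyed by their declared output names.
        return {"text": " ".join(text.split())}
```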
-
niiiiice
-
Building in public 🚀
-
Congratulations on this great roadmap! 🍾 From the perspective of robust pipelines and accessibility for all users, it would be great to have a language-agnostic implementation for prompts and query engines. For example, in my latest notebook for a Japanese LLM, I could not use them as-is. I think spaCy did a great job with language support via a simple language code. If you put that in place, I would be happy to help translate into French and maybe Japanese. 🙇
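As a sketch of what spaCy-style language codes could look like for prompts, here is a hypothetical registry built on top of LlamaIndex's existing PromptTemplate class (the QA_PROMPTS dict and get_qa_prompt helper are illustrative, not existing APIs):
```python
from llama_index.prompts import PromptTemplate

# Hypothetical registry keyed by ISO 639-1 codes, spaCy-style.
QA_PROMPTS = {
    "en": PromptTemplate("Answer the question: {query_str}"),
    "fr": PromptTemplate("Répondez à la question : {query_str}"),
    "ja": PromptTemplate("次の質問に答えてください: {query_str}"),
}


def get_qa_prompt(lang: str = "en") -> PromptTemplate:
    # Fall back to English when a language is not covered yet.
    return QA_PROMPTS.get(lang, QA_PROMPTS["en"])
```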
-
Another suggestion: having more consistent naming and usage across the non-stream/stream and async/sync variants. For example, in this code https://github.com/run-llama/llama_index/blob/main/llama_index/chat_engine/types.py#L157-L168, we have [...]. Also, in many places we call [...].
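As an illustration, LlamaIndex's chat engines already follow a chat/stream_chat/achat/astream_chat naming quartet; a consistent convention could be pinned down with a Protocol like this sketch (ChatEngineLike is a made-up name, and return types are simplified to str):
```python
from typing import AsyncGenerator, Generator, Protocol


class ChatEngineLike(Protocol):
    """One verb, four variants: sync, streaming, async, async-streaming."""

    def chat(self, message: str) -> str: ...
    def stream_chat(self, message: str) -> Generator[str, None, None]: ...
    async def achat(self, message: str) -> str: ...
    async def astream_chat(self, message: str) -> AsyncGenerator[str, None]: ...
```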
-
Is there any sort of fluent interface for prompt building that also knows about "placement" in the prompt? I often find myself making prompt templates with triple-quoted strings and manually accounting for the U-shaped attention given to different parts of the prompt. Maybe this is outside the scope of this discussion, but it would be really useful (and unique) if LlamaIndex had a lightweight, fluent abstraction for prompt building:
```python
>>> prompt_template = PromptTemplate()
>>> prompt_template.append_intro("I am an overview.")
>>> prompt_template.append_positive_example("I am a desirable example.")  # Gets added in an examples section
>>> prompt_template.schema(PydanticModel)  # Auto-infers to use JSON
>>> prompt_template.append_schema_info("Field XYZ is case sensitive.")  # Gets added after schema
>>> prompt_template.append_intro("And someone later reused me and expanded upon the intro.")
>>> print(prompt_template)
I am an overview. And someone later reused me and expanded upon the intro.
Good examples include:
- I am a desirable example.
Please format the response in JSON according to this schema <pydantic JSON schema>. Field XYZ is case sensitive.
```
I only have about 10 prompts in use, and I hand-crafted all of them. I think the LlamaIndex team has much greater insight into the variance of prompts, so maybe they can come up with a better abstraction than this.
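As a rough proof of concept, such a builder does not need framework support to prototype; here is a standalone sketch of the proposed API (every name is hypothetical, and the section ordering is hard-coded in __str__):
```python
import json
from typing import List, Optional, Type

from pydantic import BaseModel


class FluentPrompt:
    """Toy fluent builder: sections keep their placement regardless of call order."""

    def __init__(self) -> None:
        self._intro: List[str] = []
        self._examples: List[str] = []
        self._schema: Optional[Type[BaseModel]] = None
        self._schema_notes: List[str] = []

    def append_intro(self, text: str) -> "FluentPrompt":
        self._intro.append(text)
        return self

    def append_positive_example(self, text: str) -> "FluentPrompt":
        self._examples.append(text)
        return self

    def schema(self, model: Type[BaseModel]) -> "FluentPrompt":
        self._schema = model
        return self

    def append_schema_info(self, text: str) -> "FluentPrompt":
        self._schema_notes.append(text)
        return self

    def __str__(self) -> str:
        parts = [" ".join(self._intro)]
        if self._examples:
            parts.append("Good examples include:")
            parts += [f"- {e}" for e in self._examples]
        if self._schema is not None:
            # .schema() is the pydantic v1 spelling; use .model_json_schema() on v2.
            parts.append(
                "Please format the response in JSON according to this schema "
                f"{json.dumps(self._schema.schema())}. "
                + " ".join(self._schema_notes)
            )
        return "\n".join(parts)
```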
-
Great roadmap, thank you for your contributions @logan-markewich! If I might add my 2 cents as a user: in order to achieve "[P0] Every Core Component is Easy to Subclass", I think pydantic v1 has to go. Is that implicit in your roadmap?
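A made-up example of the friction (not LlamaIndex code): once pydantic v2 is installed, user models written against v2 cannot be used as fields of framework classes still built on the bundled v1 shim:
```python
from pydantic import BaseModel                      # native v2 model
from pydantic.v1 import BaseModel as V1BaseModel    # legacy shim bundled with v2


class MyRetrieverConfig(BaseModel):  # user code written against v2
    top_k: int = 5


class CoreComponent(V1BaseModel):    # stand-in for a v1-based framework class
    config: MyRetrieverConfig        # raises at class creation: v1 has no validator for a v2 model
```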
-
Do you plan to enable metadata filtering on Azure Cognitive Search?
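For context, LlamaIndex already exposes a store-agnostic filter interface; the question is whether the Azure Cognitive Search integration honors it. A sketch of the usage, assuming `index` is an existing VectorStoreIndex backed by your vector store:
```python
from llama_index.vector_stores.types import ExactMatchFilter, MetadataFilters

# Restrict retrieval to nodes whose metadata matches exactly.
filters = MetadataFilters(
    filters=[ExactMatchFilter(key="category", value="finance")]
)

# `index` is assumed to be an existing VectorStoreIndex over your store.
query_engine = index.as_query_engine(filters=filters)
response = query_engine.query("What were Q4 revenues?")
```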
-
I would suggest that an integration with DSPy would be great.
-
Hi, is there any update to the current roadmap?
-
Last Updated: January 7, 2024
This is a living document presenting our 3-6 month roadmap for LlamaIndex. It should give you a sense of where the framework is evolving and the use cases you can plug it into!
We first define our high-level goals, then outline our tasks and relative priorities, categorizing which goal(s) each one fits.
Additional Notes:
High-Level Goals
Priorities
Given these goals, here is our set of priorities, outlined below with tags (P0, P1, etc.).