Merge branch 'run-llama:main' into master

raghavdixit99 authored May 24, 2024
2 parents eeb1dac + 4eab119 commit 1acf42e
Showing 263 changed files with 14,177 additions and 3,839 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -97,7 +97,7 @@ repos:
args:
[
"--ignore-words-list",
"astroid,gallary,momento,narl,ot,rouge,nin,gere",
"astroid,gallary,momento,narl,ot,rouge,nin,gere,asend",
]
- repo: https://github.com/srstevenson/nb-clean
rev: 3.1.0
113 changes: 113 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,118 @@
# ChangeLog

## [2024-05-21]

### `llama-index-core` [0.10.38]

- Enabling streaming in BaseSQLTableQueryEngine (#13599)
- Fix nonetype errors in relational node parsers (#13615)
- feat(instrumentation): new spans for ALL llms (#13565)
- Properly Limit the number of generated questions (#13596)
- Pass 'exclude_llm_metadata_keys' and 'exclude_embed_metadata_keys' in element Node Parsers (#13567)
- Add batch mode to QueryPipeline (#13203)
- Improve SentenceEmbeddingOptimizer to respect Settings.embed_model (#13514)
- ReAct output parser robustness changes (#13459)
- fix for pydantic tool calling with a single argument (#13522)
- Avoid unexpected error when stream chat doesn't yield (#13422)
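
For the stream-chat fix above (#13422), a minimal sketch of the streaming path it hardens; the model choice is illustrative and an `OPENAI_API_KEY` is assumed to be set:

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")  # illustrative model choice
messages = [ChatMessage(role="user", content="Tell me a joke.")]

# stream_chat yields ChatResponse chunks; the fix guards against errors
# when the underlying stream yields nothing at all.
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="", flush=True)
```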

### `llama-index-embeddings-nomic` [0.2.0]

- Implement local Nomic Embed with the inference_mode parameter (#13607)
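
A minimal sketch of the new `inference_mode` parameter; the model name is an assumption, not confirmed by this entry:

```python
from llama_index.embeddings.nomic import NomicEmbedding

# inference_mode="local" runs Nomic Embed on-device instead of via the
# hosted API; the model name below is an assumed example.
embed_model = NomicEmbedding(
    model_name="nomic-embed-text-v1.5",
    inference_mode="local",
)
vector = embed_model.get_text_embedding("Hello, world!")
```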

### `llama-index-embeddings-nvidia` [0.1.3]

- Deprecate `mode()` in favor of `__init__(base_url=...)` (#13572)
- add snowflake/arctic-embed-l support (#13555)
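
A hedged sketch combining both changes; the local endpoint URL is an illustrative assumption:

```python
from llama_index.embeddings.nvidia import NVIDIAEmbedding

# Passing base_url at construction replaces the deprecated mode() switch;
# the localhost URL below is illustrative only.
embed_model = NVIDIAEmbedding(
    model="snowflake/arctic-embed-l",
    base_url="http://localhost:8080/v1",
)
vector = embed_model.get_text_embedding("Hello, world!")
```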

### `llama-index-embeddings-openai` [0.1.10]

- update how retries get triggered for openai (#13608)

### `llama-index-embeddings-upstage` [0.1.0]

- Integrations: upstage LLM and Embeddings (#13193)

### `llama-index-llms-gemini` [0.1.8]

- feat: add gemini new models to multimodal LLM and regular (#13539)

### `llama-index-llms-groq` [0.1.4]

- fix: enable tool use (#13566)

### `llama-index-llms-lmstudio` [0.1.0]

- Add support for lmstudio integration (#13557)
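
A minimal sketch of the new integration, assuming LM Studio's local server is running on its default port; the model name is illustrative:

```python
from llama_index.llms.lmstudio import LMStudio

llm = LMStudio(
    model_name="Hermes-2-Pro-Llama-3-8B",  # whichever model is loaded locally
    base_url="http://localhost:1234/v1",  # LM Studio's default server address
)
print(llm.complete("What is the capital of France?"))
```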

### `llama-index-llms-nvidia` [0.1.3]

- Deprecate `mode()` in favor of `__init__(base_url=...)` (#13572)

### `llama-index-llms-openai` [0.1.20]

- update how retries get triggered for openai (#13608)

### `llama-index-llms-unify` [0.1.0]

- Add Unify LLM Support (#12921)
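
A hedged sketch of the new integration; the `model@provider` endpoint string and key handling are assumptions based on Unify's routing convention:

```python
from llama_index.llms.unify import Unify

# Unify routes a request to a provider via a "model@provider" endpoint
# string; the endpoint below is an assumed example.
llm = Unify(model="llama-2-13b-chat@anyscale", api_key="<UNIFY_API_KEY>")
print(llm.complete("Hello!"))
```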

### `llama-index-llms-upstage` [0.1.0]

- Integrations: upstage LLM and Embeddings (#13193)

### `llama-index-llms-vertex` [0.1.6]

- Adding Support for MedLM Models (#11911)

### `llama_index.postprocessor.dashscope_rerank` [0.1.0]

- Add dashscope rerank for postprocessor (#13353)

### `llama-index-postprocessor-nvidia-rerank` [0.1.2]

- Deprecate `mode()` in favor of `__init__(base_url=...)` (#13572)

### `llama-index-readers-mongodb` [0.1.5]

- SimpleMongoReader should allow optional fields in metadata (#13575)
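
A sketch of the behavior this enables; database, collection, and field names are illustrative:

```python
from llama_index.readers.mongodb import SimpleMongoReader

reader = SimpleMongoReader(uri="mongodb://localhost:27017")
# With this change, names in metadata_names that are missing from a record
# are treated as optional instead of raising an error.
documents = reader.load_data(
    db_name="mydb",
    collection_name="articles",
    field_names=["body"],
    metadata_names=["author", "tags"],
)
```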

### `llama-index-readers-papers` [0.1.5]

- fix: (ArxivReader) set exclude_hidden to False when reading data from hidden directory (#13578)

### `llama-index-readers-sec-filings` [0.1.5]

- fix: sec_filings header when making requests to sec.gov (#13548)

### `llama-index-readers-web` [0.1.16]

- Added firecrawl search mode (#13560)
- Updated Browserbase web reader (#13535)

### `llama-index-tools-cassandra` [0.1.0]

- added Cassandra database tool spec for agents (#13423)
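
A hedged sketch of wiring the new tool spec into an agent; the database wrapper object is hypothetical, since its construction is not shown here:

```python
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.cassandra import CassandraDatabaseToolSpec

# my_cassandra_db is a hypothetical, already-configured database wrapper.
tool_spec = CassandraDatabaseToolSpec(db=my_cassandra_db)
agent = ReActAgent.from_tools(tool_spec.to_tool_list(), llm=OpenAI(), verbose=True)
print(agent.chat("What tables exist in the keyspace 'inventory'?"))
```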

### `llama-index-vector-stores-azureaisearch` [0.1.7]

- Allow querying AzureAISearch without non-null metadata field (#13531)

### `llama-index-vector-stores-elasticsearch` [0.2.0]

- Integrate VectorStore from Elasticsearch client (#13291)

### `llama-index-vector-stores-milvus` [0.1.14]

- Fix the filter expression construction of Milvus vector store (#13591)
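
The fix concerns how metadata filters are translated into Milvus boolean expressions; below is a sketch of the core filter API whose output the store must translate (`index` is assumed to be backed by `MilvusVectorStore`):

```python
from llama_index.core.vector_stores import (
    FilterOperator,
    MetadataFilter,
    MetadataFilters,
)

# These filters are compiled by the vector store into a Milvus expression.
filters = MetadataFilters(
    filters=[MetadataFilter(key="author", value="paul", operator=FilterOperator.EQ)]
)
retriever = index.as_retriever(filters=filters)
nodes = retriever.retrieve("What did the author do growing up?")
```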

### `llama-index-vector-stores-supabase` [0.1.4]

- Disconnect when deleted (#13611)

### `llama-index-vector-stores-wordlift` [0.1.0]

- Added the WordLift Vector Store (#13028)
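
A hedged sketch of plugging the new store into an index; the constructor argument is an assumption:

```python
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.wordlift import WordliftVectorStore

# The `key` argument is an assumed placeholder for a WordLift API key.
vector_store = WordliftVectorStore(key="<WORDLIFT_KEY>")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```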

## [2024-05-14]

### `llama-index-core` [0.10.37]
113 changes: 113 additions & 0 deletions docs/docs/CHANGELOG.md
(Content identical to the `CHANGELOG.md` changes above.)
4 changes: 4 additions & 0 deletions docs/docs/api_reference/llms/lmstudio.md
@@ -0,0 +1,4 @@
::: llama_index.llms.lmstudio
options:
members:
- LMStudio
4 changes: 4 additions & 0 deletions docs/docs/api_reference/llms/unify.md
@@ -0,0 +1,4 @@
::: llama_index.llms.unify
options:
members:
- Unify
4 changes: 4 additions & 0 deletions docs/docs/api_reference/storage/vector_store/wordlift.md
@@ -0,0 +1,4 @@
::: llama_index.vector_stores.wordlift
options:
members:
- WordliftVectorStore
4 changes: 4 additions & 0 deletions docs/docs/api_reference/tools/cassandra.md
@@ -0,0 +1,4 @@
::: llama_index.tools.cassandra
options:
members:
- CassandraDatabaseToolSpec
38 changes: 21 additions & 17 deletions docs/docs/community/integrations/managed_indices.md
@@ -51,41 +51,45 @@ from llama_index.indices.managed.vectara import VectaraIndex
vectara_customer_id = os.environ.get("VECTARA_CUSTOMER_ID")
vectara_corpus_id = os.environ.get("VECTARA_CORPUS_ID")
vectara_api_key = os.environ.get("VECTARA_API_KEY")

documents = SimpleDirectoryReader("../paul_graham_essay/data").load_data()
index = VectaraIndex.from_documents(
documents,
vectara_customer_id=vectara_customer_id,
vectara_corpus_id=vectara_corpus_id,
vectara_api_key=vectara_api_key,
)

# Query index
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
```

Notes:

* If the environment variables `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY` are already set in the environment, you do not have to specify them explicitly in your call; the `VectaraIndex` class will read them from the environment.
* To connect to multiple Vectara corpora, set `VECTARA_CORPUS_ID` to a comma-separated list; for example, `12,51` connects to corpus `12` and corpus `51`.

If you already have documents in your corpus, you can access the data directly by constructing the `VectaraIndex` as follows:

```python
from llama_index.indices.managed.vectara import VectaraIndex

index = VectaraIndex()
```

The index will connect to the existing corpus without loading any new documents.

To query the index, simply construct a query engine as follows:

```python
query_engine = index.as_query_engine(summary_enabled=True)
print(query_engine.query("What did the author do growing up?"))
```

Or you can use the chat functionality:

```python
chat_engine = index.as_chat_engine()
print(chat_engine.chat("What did the author do growing up?").response)
```

Chat works as you would expect: subsequent `chat` calls maintain the conversation history. All of this is done on the Vectara platform, so you don't have to add any additional logic.

For more examples, please see below:

- [Vectara Demo](../../examples/managed/vectaraDemo.ipynb)
- [Vectara AutoRetriever](../../examples/retrievers/vectara_auto_retriever.ipynb)
2 changes: 1 addition & 1 deletion docs/docs/examples/agent/llm_compiler.ipynb
@@ -7,7 +7,7 @@
"source": [
"# LLM Compiler Agent Cookbook\n",
"\n",
"<a href=\"https://colab.research.google.com/github/run-llama/llama-hub/blob/main/llama_hub/llama_packs/agents/llm_compiler/llm_compiler.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
"<a href=\"https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/agent/llm_compiler.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
"\n",
"**NOTE**: Full credits to the [source repo for LLMCompiler](https://github.com/SqueezeAILab/LLMCompiler). A lot of our implementation was lifted from this repo (and adapted with LlamaIndex modules).\n",
"\n",