Mend Renovate 01ac3134c0 chore(deps): update dependency llama-index to v0.14.6 (#1785)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.14.4` -> `==0.14.6` | [![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index/0.14.6?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index/0.14.4/0.14.6?slim=true)](https://docs.renovatebot.com/merge-confidence/) |

---

### Release Notes

<details>
<summary>run-llama/llama_index (llama-index)</summary>

### [`v0.14.6`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-10-26)

[Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.5...v0.14.6)

##### llama-index-core \[0.14.6]

- Add allow\_parallel\_tool\_calls for non-streaming
([#&#8203;20117](https://redirect.github.com/run-llama/llama_index/pull/20117))
- Fix invalid use of field-specific metadata
([#&#8203;20122](https://redirect.github.com/run-llama/llama_index/pull/20122))
- update doc for SemanticSplitterNodeParser
([#&#8203;20125](https://redirect.github.com/run-llama/llama_index/pull/20125))
- fix rare cases when sentence splits are larger than chunk size
([#&#8203;20147](https://redirect.github.com/run-llama/llama_index/pull/20147))
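
The sentence-split fix above targets a bug class that is easy to reproduce outside llama-index: a splitter that trusts sentence boundaries can emit a "chunk" far larger than `chunk_size` whenever one sentence alone exceeds the limit. A minimal sketch of the guard, with hypothetical helper names rather than the library's actual implementation:

```python
def split_oversized(sentences, chunk_size):
    """Re-split any sentence longer than chunk_size into fixed-width
    pieces so no emitted chunk can exceed the limit (character-naive
    sketch; a real splitter would respect token/word boundaries)."""
    out = []
    for s in sentences:
        if len(s) <= chunk_size:
            out.append(s)
        else:
            # Slice the oversized sentence into chunk_size-sized windows.
            out.extend(s[i:i + chunk_size] for i in range(0, len(s), chunk_size))
    return out

chunks = split_oversized(["Short one.", "x" * 25], chunk_size=10)
assert all(len(c) <= 10 for c in chunks)
```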

##### llama-index-embeddings-bedrock \[0.7.0]

- Fix BedrockEmbedding to support Cohere v4 response format
([#&#8203;20094](https://redirect.github.com/run-llama/llama_index/pull/20094))

##### llama-index-embeddings-isaacus \[0.1.0]

- feat: Isaacus embeddings integration
([#&#8203;20124](https://redirect.github.com/run-llama/llama_index/pull/20124))

##### llama-index-embeddings-oci-genai \[0.4.2]

- Update OCI GenAI cohere models
([#&#8203;20146](https://redirect.github.com/run-llama/llama_index/pull/20146))

##### llama-index-llms-anthropic \[0.9.7]

- Fix double token stream in anthropic llm
([#&#8203;20108](https://redirect.github.com/run-llama/llama_index/pull/20108))
- Ensure anthropic content delta only has user facing response
([#&#8203;20113](https://redirect.github.com/run-llama/llama_index/pull/20113))

##### llama-index-llms-baseten \[0.1.7]

- add GLM
([#&#8203;20121](https://redirect.github.com/run-llama/llama_index/pull/20121))

##### llama-index-llms-helicone \[0.1.0]

- integrate helicone to llama-index
([#&#8203;20131](https://redirect.github.com/run-llama/llama_index/pull/20131))

##### llama-index-llms-oci-genai \[0.6.4]

- Update OCI GenAI cohere models
([#&#8203;20146](https://redirect.github.com/run-llama/llama_index/pull/20146))

##### llama-index-llms-openai \[0.6.5]

- chore: openai vbump
([#&#8203;20095](https://redirect.github.com/run-llama/llama_index/pull/20095))

##### llama-index-readers-imdb-review \[0.4.2]

- chore: Update selenium dependency in imdb-review reader
([#&#8203;20105](https://redirect.github.com/run-llama/llama_index/pull/20105))

##### llama-index-retrievers-bedrock \[0.5.0]

- feat(bedrock): add async support for AmazonKnowledgeBasesRetriever
([#&#8203;20114](https://redirect.github.com/run-llama/llama_index/pull/20114))

##### llama-index-retrievers-superlinked \[0.1.3]

- Update README.md
([#&#8203;19829](https://redirect.github.com/run-llama/llama_index/pull/19829))

##### llama-index-storage-kvstore-postgres \[0.4.2]

- fix: Replace raw SQL string interpolation with proper SQLAlchemy
parameterized APIs in PostgresKVStore
([#&#8203;20104](https://redirect.github.com/run-llama/llama_index/pull/20104))
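
The PostgresKVStore fix above replaces raw SQL string interpolation with bound parameters. The same principle, demonstrated with the stdlib `sqlite3` driver rather than SQLAlchemy/PostgreSQL (table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")

key, value = "user", "alice'); DROP TABLE kv; --"

# Unsafe pattern (what the fix removes): interpolating values into the
# SQL string itself, which invites injection and quoting bugs:
#   conn.execute(f"INSERT INTO kv VALUES ('{key}', '{value}')")

# Safe pattern: placeholders let the driver bind values out-of-band.
conn.execute("INSERT INTO kv VALUES (?, ?)", (key, value))

row = conn.execute("SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
assert row[0] == value  # hostile string stored verbatim, no injection
```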

##### llama-index-tools-mcp \[0.4.3]

- Fix BasicMCPClient resource signatures
([#&#8203;20118](https://redirect.github.com/run-llama/llama_index/pull/20118))

##### llama-index-vector-stores-postgres \[0.7.1]

- Add GIN index support for text array metadata in PostgreSQL vector
store
([#&#8203;20130](https://redirect.github.com/run-llama/llama_index/pull/20130))

### [`v0.14.5`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-10-15)

[Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.4...v0.14.5)

##### llama-index-core \[0.14.5]

- Remove debug print
([#&#8203;20000](https://redirect.github.com/run-llama/llama_index/pull/20000))
- safely initialize RefDocInfo in Docstore
([#&#8203;20031](https://redirect.github.com/run-llama/llama_index/pull/20031))
- Add progress bar for multiprocess loading
([#&#8203;20048](https://redirect.github.com/run-llama/llama_index/pull/20048))
- Fix duplicate node positions when identical text appears multiple
times in document
([#&#8203;20050](https://redirect.github.com/run-llama/llama_index/pull/20050))
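
The duplicate-position fix above addresses a common offset bug: locating each chunk with `text.find(chunk)` always returns the *first* occurrence, so identical chunks all receive the same start index. Tracking a moving cursor resolves it; the sketch below illustrates the bug class, not llama-index's actual code:

```python
def chunk_offsets(text, chunks):
    """Return (start, end) for each chunk, advancing a cursor so that
    repeated chunks map to successive occurrences, not the first one."""
    offsets, cursor = [], 0
    for chunk in chunks:
        start = text.find(chunk, cursor)   # search from the cursor onward
        offsets.append((start, start + len(chunk)))
        cursor = start + len(chunk)        # never match behind this point again
    return offsets

# Without the cursor, both "abc" chunks would report (0, 3).
assert chunk_offsets("abc abc", ["abc", "abc"]) == [(0, 3), (4, 7)]
```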
- chore: tool call block - part 1
([#&#8203;20074](https://redirect.github.com/run-llama/llama_index/pull/20074))

##### llama-index-instrumentation \[0.4.2]

- update instrumentation package metadata
([#&#8203;20079](https://redirect.github.com/run-llama/llama_index/pull/20079))

##### llama-index-llms-anthropic \[0.9.5]

- feat(anthropic): add prompt caching model validation utilities
([#&#8203;20069](https://redirect.github.com/run-llama/llama_index/pull/20069))
- fix streaming thinking/tool calling with anthropic
([#&#8203;20077](https://redirect.github.com/run-llama/llama_index/pull/20077))
- Add haiku 4.5 support
([#&#8203;20092](https://redirect.github.com/run-llama/llama_index/pull/20092))

##### llama-index-llms-baseten \[0.1.6]

- Baseten provider Kimi K2 0711, Llama 4 Maverick and Llama 4 Scout
Model APIs deprecation
([#&#8203;20042](https://redirect.github.com/run-llama/llama_index/pull/20042))

##### llama-index-llms-bedrock-converse \[0.10.5]

- feat: List Claude Sonnet 4.5 as a reasoning model
([#&#8203;20022](https://redirect.github.com/run-llama/llama_index/pull/20022))
- feat: Support global cross-region inference profile prefix
([#&#8203;20064](https://redirect.github.com/run-llama/llama_index/pull/20064))
- Update utils.py for opus 4.1
([#&#8203;20076](https://redirect.github.com/run-llama/llama_index/pull/20076))
- 4.1 opus bedrockconverse missing in function-calling models
([#&#8203;20084](https://redirect.github.com/run-llama/llama_index/pull/20084))
- Add haiku 4.5 support
([#&#8203;20092](https://redirect.github.com/run-llama/llama_index/pull/20092))

##### llama-index-llms-fireworks \[0.4.4]

- Add Support for Custom Models in Fireworks LLM
([#&#8203;20023](https://redirect.github.com/run-llama/llama_index/pull/20023))
- fix(llms/fireworks): Cannot use Fireworks Deepseek V3.1-20006 issue
([#&#8203;20028](https://redirect.github.com/run-llama/llama_index/pull/20028))

##### llama-index-llms-oci-genai \[0.6.3]

- Add support for xAI models in OCI GenAI
([#&#8203;20089](https://redirect.github.com/run-llama/llama_index/pull/20089))

##### llama-index-llms-openai \[0.6.4]

- Gpt 5 pro addition
([#&#8203;20029](https://redirect.github.com/run-llama/llama_index/pull/20029))
- fix collecting final response with openai responses streaming
([#&#8203;20037](https://redirect.github.com/run-llama/llama_index/pull/20037))
- Add support for GPT-5 models in utils.py (JSON\_SCHEMA\_MODELS)
([#&#8203;20045](https://redirect.github.com/run-llama/llama_index/pull/20045))
- chore: tool call block - part 1
([#&#8203;20074](https://redirect.github.com/run-llama/llama_index/pull/20074))

##### llama-index-llms-sglang \[0.1.0]

- Added Sglang llm integration
([#&#8203;20020](https://redirect.github.com/run-llama/llama_index/pull/20020))

##### llama-index-readers-gitlab \[0.5.1]

- feat(gitlab): add pagination params for repository tree and issues
([#&#8203;20052](https://redirect.github.com/run-llama/llama_index/pull/20052))

##### llama-index-readers-json \[0.4.2]

- vbump the JSON reader
([#&#8203;20039](https://redirect.github.com/run-llama/llama_index/pull/20039))

##### llama-index-readers-web \[0.5.5]

- fix: ScrapflyReader Pydantic validation error
([#&#8203;19999](https://redirect.github.com/run-llama/llama_index/pull/19999))

##### llama-index-storage-chat-store-dynamodb \[0.4.2]

- bump dynamodb chat store deps
([#&#8203;20078](https://redirect.github.com/run-llama/llama_index/pull/20078))

##### llama-index-tools-mcp \[0.4.2]

- 🐛 fix(tools/mcp): Fix dict type handling and reference resolution in …
([#&#8203;20082](https://redirect.github.com/run-llama/llama_index/pull/20082))

##### llama-index-tools-signnow \[0.1.0]

- feat(signnow): SignNow mcp tools integration
([#&#8203;20057](https://redirect.github.com/run-llama/llama_index/pull/20057))

##### llama-index-tools-tavily-research \[0.4.2]

- feat: Add Tavily extract function for URL content extraction
([#&#8203;20038](https://redirect.github.com/run-llama/llama_index/pull/20038))

##### llama-index-vector-stores-azurepostgresql \[0.2.0]

- Add hybrid search to Azure PostgreSQL integration
([#&#8203;20027](https://redirect.github.com/run-llama/llama_index/pull/20027))

##### llama-index-vector-stores-milvus \[0.9.3]

- fix: Milvus get\_field\_kwargs()
([#&#8203;20086](https://redirect.github.com/run-llama/llama_index/pull/20086))

##### llama-index-vector-stores-opensearch \[0.6.2]

- fix(opensearch): Correct version check for efficient filtering
([#&#8203;20067](https://redirect.github.com/run-llama/llama_index/pull/20067))

##### llama-index-vector-stores-qdrant \[0.8.6]

- fix(qdrant): Allow async-only initialization with hybrid search
([#&#8203;20005](https://redirect.github.com/run-llama/llama_index/pull/20005))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTYuMSIsInVwZGF0ZWRJblZlciI6IjQxLjE1Ni4xIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->

Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-10-27 21:26:51 +00:00