chore(deps): update dependency llama-index to v0.14.10 (#2092)
Commit d08dd144ad by Mend Renovate, in genai-toolbox/docs/en/getting-started/quickstart/python
This PR contains the following updates:

| Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) |
|---|---|---|---|
| [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.14.8` -> `==0.14.10` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index/0.14.10?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index/0.14.8/0.14.10?slim=true) |

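To reproduce the upgrade locally before merging, the standard route is `pip install "llama-index==0.14.10"` (or bumping the pin in the quickstart's requirements file). A minimal post-install check, using only the standard library:

```python
# Verify the installed distribution matches the new pin after running
# `pip install "llama-index==0.14.10"`.
from importlib.metadata import version

print(version("llama-index"))  # expect "0.14.10"
```
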
---

### Release Notes

<details>
<summary>run-llama/llama_index (llama-index)</summary>

### [`v0.14.10`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-12-04)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.9...v0.14.10)

##### llama-index-core \[0.14.10]

- feat: add mock function calling llm
([#&#8203;20331](https://redirect.github.com/run-llama/llama_index/pull/20331))
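
Reviewer note on the mock function-calling LLM: llama-index has long shipped a `MockLLM` for offline, deterministic tests, and the addition in #20331 presumably provides a function-calling counterpart with a similar shape. The sketch below only exercises the existing `MockLLM` API; the new class's exact name and import path are not guessed here, see the linked PR for them.

```python
# Offline smoke test with the long-standing MockLLM (no API key needed).
# The new function-calling mock from #20331 is expected to drop into tests
# in a similar way; its actual name/import should be taken from the PR.
from llama_index.core.llms import MockLLM

llm = MockLLM(max_tokens=16)  # deterministic placeholder output
print(llm.complete("ping").text)
```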

##### llama-index-llms-qianfan \[0.4.1]

- test: fix typo 'reponse' to 'response' in variable names
([#&#8203;20329](https://redirect.github.com/run-llama/llama_index/pull/20329))

##### llama-index-tools-airweave \[0.1.0]

- feat: add Airweave tool integration with advanced search features
([#&#8203;20111](https://redirect.github.com/run-llama/llama_index/pull/20111))

##### llama-index-utils-qianfan \[0.4.1]

- test: fix typo 'reponse' to 'response' in variable names
([#&#8203;20329](https://redirect.github.com/run-llama/llama_index/pull/20329))

### [`v0.14.9`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-12-02)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.8...v0.14.9)

##### llama-index-agent-azure \[0.2.1]

- fix: Pin azure-ai-projects version to prevent breaking changes
([#&#8203;20255](https://redirect.github.com/run-llama/llama_index/pull/20255))

##### llama-index-core \[0.14.9]

- MultiModalVectorStoreIndex now returns a multi-modal
ContextChatEngine.
([#&#8203;20265](https://redirect.github.com/run-llama/llama_index/pull/20265))
- Ingestion to vector store now ensures that \_node-content is readable
([#&#8203;20266](https://redirect.github.com/run-llama/llama_index/pull/20266))
- fix: ensure context is copied with async utils run\_async
([#&#8203;20286](https://redirect.github.com/run-llama/llama_index/pull/20286))
- fix(memory): ensure first message in queue is always a user message
after flush
([#&#8203;20310](https://redirect.github.com/run-llama/llama_index/pull/20310))

##### llama-index-embeddings-bedrock \[0.7.2]

- feat(embeddings-bedrock): Add support for Amazon Bedrock Application
Inference Profiles
([#&#8203;20267](https://redirect.github.com/run-llama/llama_index/pull/20267))
- fix:(embeddings-bedrock) correct extraction of provider from
model\_name
([#&#8203;20295](https://redirect.github.com/run-llama/llama_index/pull/20295))
- Bump version of bedrock-embedding
([#&#8203;20304](https://redirect.github.com/run-llama/llama_index/pull/20304))
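
For the Bedrock embeddings change: per #20267, an Application Inference Profile can reportedly be referenced where a model identifier goes. The sketch below uses a placeholder ARN and assumes AWS credentials and region are already configured; treat it as an illustration, not the integration's documented usage.

```python
# Sketch only: the ARN is a placeholder; profile support is per #20267.
from llama_index.embeddings.bedrock import BedrockEmbedding

embed_model = BedrockEmbedding(
    model_name="arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/example",
    region_name="us-east-1",
)
vector = embed_model.get_text_embedding("hello world")
print(len(vector))
```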

##### llama-index-embeddings-voyageai \[0.5.1]

- VoyageAI correction and documentation
([#&#8203;20251](https://redirect.github.com/run-llama/llama_index/pull/20251))

##### llama-index-llms-anthropic \[0.10.3]

- feat: add anthropic opus 4.5
([#&#8203;20306](https://redirect.github.com/run-llama/llama_index/pull/20306))
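
Using the new Opus 4.5 support is just a matter of passing the model id; the exact string below is an assumption (check Anthropic's current model list), and `ANTHROPIC_API_KEY` is assumed to be set.

```python
# Sketch: model id assumed; requires ANTHROPIC_API_KEY in the environment.
from llama_index.llms.anthropic import Anthropic

llm = Anthropic(model="claude-opus-4-5")  # model id assumed
print(llm.complete("Say hello in one word.").text)
```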

##### llama-index-llms-bedrock-converse \[0.12.2]

- fix(bedrock-converse): Only use guardrail\_stream\_processing\_mode in
streaming functions
([#&#8203;20289](https://redirect.github.com/run-llama/llama_index/pull/20289))
- feat: add anthropic opus 4.5
([#&#8203;20306](https://redirect.github.com/run-llama/llama_index/pull/20306))
- feat(bedrock-converse): Additional support for Claude Opus 4.5
([#&#8203;20317](https://redirect.github.com/run-llama/llama_index/pull/20317))

##### llama-index-llms-google-genai \[0.7.4]

- Fix gemini-3 support and gemini function call support
([#&#8203;20315](https://redirect.github.com/run-llama/llama_index/pull/20315))
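
A quick way to exercise the fixed Gemini function-calling path is to pair `GoogleGenAI` with a `FunctionTool`. The model id below is an assumption, and `GOOGLE_API_KEY` is assumed to be set.

```python
# Sketch: Gemini 3 model id assumed; exercises the function-calling path
# touched by #20315.
from llama_index.core.tools import FunctionTool
from llama_index.llms.google_genai import GoogleGenAI


def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


llm = GoogleGenAI(model="gemini-3-pro-preview")  # model id assumed
tool = FunctionTool.from_defaults(fn=add)
print(llm.predict_and_call([tool], "What is 2 + 3?"))
```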

##### llama-index-llms-helicone \[0.1.1]

- update helicone docs + examples
([#&#8203;20208](https://redirect.github.com/run-llama/llama_index/pull/20208))

##### llama-index-llms-openai \[0.6.10]

- Smallest Nit
([#&#8203;20252](https://redirect.github.com/run-llama/llama_index/pull/20252))
- Feat: Add gpt-5.1-chat model support
([#&#8203;20311](https://redirect.github.com/run-llama/llama_index/pull/20311))
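
The new model support is opt-in by name; the string below is taken from the release note title, though the exact id exposed by the API may differ. `OPENAI_API_KEY` is assumed to be set.

```python
# Sketch: model name taken from the note above; requires OPENAI_API_KEY.
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-5.1-chat")
print(llm.complete("Reply with 'ok'.").text)
```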

##### llama-index-llms-ovhcloud \[0.1.0]

- Add OVHcloud AI Endpoints provider
([#&#8203;20288](https://redirect.github.com/run-llama/llama_index/pull/20288))

##### llama-index-llms-siliconflow \[0.4.2]

- \[Bugfix] None check on content in delta in siliconflow LLM
([#&#8203;20327](https://redirect.github.com/run-llama/llama_index/pull/20327))

##### llama-index-node-parser-docling \[0.4.2]

- Relax docling Python constraints
([#&#8203;20322](https://redirect.github.com/run-llama/llama_index/pull/20322))

##### llama-index-packs-resume-screener \[0.9.3]

- feat: Update pypdf to latest version
([#&#8203;20285](https://redirect.github.com/run-llama/llama_index/pull/20285))

##### llama-index-postprocessor-voyageai-rerank \[0.4.1]

- VoyageAI correction and documentation
([#&#8203;20251](https://redirect.github.com/run-llama/llama_index/pull/20251))

##### llama-index-protocols-ag-ui \[0.2.3]

- fix: correct order of ag-ui events to avoid event conflicts
([#&#8203;20296](https://redirect.github.com/run-llama/llama_index/pull/20296))

##### llama-index-readers-confluence \[0.6.0]

- Refactor Confluence integration: Update license to MIT, remove
requirements.txt, and implement HtmlTextParser for HTML to Markdown
conversion. Update dependencies and tests accordingly.
([#&#8203;20262](https://redirect.github.com/run-llama/llama_index/pull/20262))

##### llama-index-readers-docling \[0.4.2]

- Relax docling Python constraints
([#&#8203;20322](https://redirect.github.com/run-llama/llama_index/pull/20322))

##### llama-index-readers-file \[0.5.5]

- feat: Update pypdf to latest version
([#&#8203;20285](https://redirect.github.com/run-llama/llama_index/pull/20285))

##### llama-index-readers-reddit \[0.4.1]

- Fix typo in README.md for Reddit integration
([#&#8203;20283](https://redirect.github.com/run-llama/llama_index/pull/20283))

##### llama-index-storage-chat-store-postgres \[0.3.2]

- \[FIX] Postgres ChatStore automatically prefix table name with
"data\_"
([#&#8203;20241](https://redirect.github.com/run-llama/llama_index/pull/20241))
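
If you rely on this chat store, note that the effective table name may carry an automatic `data_` prefix per #20241, so verify the created table rather than assuming `table_name` is used verbatim. The connection string below is a placeholder.

```python
# Sketch: placeholder connection string; the created table may be prefixed
# with "data_" per #20241.
from llama_index.storage.chat_store.postgres import PostgresChatStore

chat_store = PostgresChatStore.from_uri(
    uri="postgresql+asyncpg://user:password@localhost:5432/chats",
    table_name="chat_history",
)
```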

##### llama-index-vector-stores-azureaisearch \[0.4.4]

- `vector-azureaisearch`: check if user agent already in policy before
add it to azure client
([#&#8203;20243](https://redirect.github.com/run-llama/llama_index/pull/20243))
- fix(azureaisearch): Add close/aclose methods to fix unclosed client
session warnings
([#&#8203;20309](https://redirect.github.com/run-llama/llama_index/pull/20309))

##### llama-index-vector-stores-milvus \[0.9.4]

- Fix/consistency level param for milvus
([#&#8203;20268](https://redirect.github.com/run-llama/llama_index/pull/20268))
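
The Milvus fix concerns how the consistency level is passed through. A local Milvus Lite sketch, with the embedding dimension and level chosen only for illustration:

```python
# Sketch: Milvus Lite local file, no server needed; consistency_level is the
# parameter touched by #20268.
from llama_index.vector_stores.milvus import MilvusVectorStore

vector_store = MilvusVectorStore(
    uri="./milvus_demo.db",
    collection_name="demo",
    dim=1536,
    overwrite=True,
    consistency_level="Strong",
)
```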

##### llama-index-vector-stores-postgres \[0.7.2]

- Fix postgresql dispose
([#&#8203;20312](https://redirect.github.com/run-llama/llama_index/pull/20312))

##### llama-index-vector-stores-qdrant \[0.9.0]

- fix: Update qdrant-client version constraints
([#&#8203;20280](https://redirect.github.com/run-llama/llama_index/pull/20280))
- Feat: update Qdrant client to 1.16.0
([#&#8203;20287](https://redirect.github.com/run-llama/llama_index/pull/20287))
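
Since 0.9.0 mainly tracks the qdrant-client constraint bump to 1.16.x, pin both packages together when upgrading. A minimal in-memory sketch (no Qdrant server required):

```python
# Sketch: in-memory Qdrant client, suitable for a quick compatibility check.
import qdrant_client
from llama_index.vector_stores.qdrant import QdrantVectorStore

client = qdrant_client.QdrantClient(location=":memory:")
vector_store = QdrantVectorStore(client=client, collection_name="demo")
```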

##### llama-index-vector-stores-vertexaivectorsearch \[0.3.2]

- fix: update blob path in batch\_update\_index
([#&#8203;20281](https://redirect.github.com/run-llama/llama_index/pull/20281))

##### llama-index-voice-agents-openai \[0.2.2]

- Smallest Nit
([#&#8203;20252](https://redirect.github.com/run-llama/llama_index/pull/20252))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).


Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
2025-12-11 20:29:47 -05:00