chore(deps): update dependency llama-index to v0.14.2 (#1487)

Affected path: `genai-toolbox/docs/en/getting-started/quickstart/python/llamaindex`

This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.13.6` -> `==0.14.2` | [![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index/0.14.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index/0.13.6/0.14.2?slim=true)](https://docs.renovatebot.com/merge-confidence/) |
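
For the quickstart under `docs/en/getting-started/quickstart/python/llamaindex`, the change amounts to moving the pin from `llama-index==0.13.6` to `llama-index==0.14.2`. A minimal sketch for sanity-checking an upgraded environment (the check itself is illustrative and not part of this PR):

```python
# Confirm the environment resolved the new pin before running the quickstart.
from importlib.metadata import version

installed = version("llama-index")
assert installed == "0.14.2", f"expected llama-index 0.14.2, got {installed}"
print(f"llama-index {installed} is installed")
```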

---

### Release Notes

<details>
<summary>run-llama/llama_index (llama-index)</summary>

### [`v0.14.2`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-09-15)

[Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.1...v0.14.2)

##### `llama-index-core` \[0.14.2]

- fix: handle data URLs in ImageBlock ([#19856](https://redirect.github.com/run-llama/llama_index/issues/19856))
- fix: Move IngestionPipeline docstore document insertion after transformations ([#19849](https://redirect.github.com/run-llama/llama_index/issues/19849)); see the sketch after this list
- fix: Update IngestionPipeline async document store insertion ([#19868](https://redirect.github.com/run-llama/llama_index/issues/19868))
- chore: remove stepwise usage of workflows from code ([#19877](https://redirect.github.com/run-llama/llama_index/issues/19877))
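
The two IngestionPipeline fixes above change when input documents are written into an attached docstore (now after the transformations have run). A minimal sketch of the docstore-backed pipeline that code path covers, using only public `llama_index.core` APIs; the sample document and chunk size are illustrative:

```python
from llama_index.core import Document
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.storage.docstore import SimpleDocumentStore

# A pipeline with a document store attached; the 0.14.2 fixes adjust when the
# input documents are inserted into this docstore relative to the transformations.
pipeline = IngestionPipeline(
    transformations=[SentenceSplitter(chunk_size=512)],
    docstore=SimpleDocumentStore(),
)

nodes = pipeline.run(documents=[Document(text="Toolbox quickstart sample text.", id_="doc-1")])
print(f"{len(nodes)} nodes produced, {len(pipeline.docstore.docs)} documents tracked")
```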

##### `llama-index-embeddings-fastembed` \[0.5.0]

- feat: make fastembed cpu or gpu optional ([#19878](https://redirect.github.com/run-llama/llama_index/issues/19878))

##### `llama-index-llms-deepseek` \[0.2.2]

- feat: pass `context_window` to super in deepseek llm ([#19876](https://redirect.github.com/run-llama/llama_index/issues/19876))

##### `llama-index-llms-google-genai` \[0.5.0]

- feat: Add GoogleGenAI FileAPI support for large files ([#19853](https://redirect.github.com/run-llama/llama_index/issues/19853))

##### `llama-index-readers-solr` \[0.1.0]

- feat: Add Solr reader integration ([#19843](https://redirect.github.com/run-llama/llama_index/issues/19843))

##### `llama-index-retrievers-alletra-x10000-retriever` \[0.1.0]

- feat: add AlletraX10000Retriever integration ([#19798](https://redirect.github.com/run-llama/llama_index/issues/19798))

##### `llama-index-vector-stores-oracledb` \[0.3.2]

- feat: OraLlamaVS Connection Pool Support + Filtering ([#19412](https://redirect.github.com/run-llama/llama_index/issues/19412))

##### `llama-index-vector-stores-postgres` \[0.6.8]

- feat: Add `customize_query_fn` to PGVectorStore ([#19847](https://redirect.github.com/run-llama/llama_index/issues/19847))

### [`v0.14.1`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-09-14)

[Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.0...v0.14.1)

##### `llama-index-core` \[0.14.1]

- feat: add verbose option to RetrieverQueryEngine for detailed output ([#19807](https://redirect.github.com/run-llama/llama_index/issues/19807))
- feat: add support for additional kwargs in the `aget_text_embedding_batch` method ([#19808](https://redirect.github.com/run-llama/llama_index/issues/19808))
- feat: add `thinking_delta` field to AgentStream events to expose LLM reasoning ([#19785](https://redirect.github.com/run-llama/llama_index/issues/19785)); see the sketch after this list
- fix: Pydantic validation for the agent streaming `thinking_delta` ([#19828](https://redirect.github.com/run-llama/llama_index/issues/19828))
- fix: handle both positional args and kwargs in tool calling ([#19822](https://redirect.github.com/run-llama/llama_index/issues/19822))
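
The `thinking_delta` addition surfaces model reasoning on streamed agent events. A minimal consumption sketch, assuming a `FunctionAgent` backed by an OpenAI model with credentials in the environment; whether `thinking_delta` is populated depends on the underlying model, and the model name and prompt are illustrative:

```python
import asyncio

from llama_index.core.agent.workflow import AgentStream, FunctionAgent
from llama_index.llms.openai import OpenAI


async def main() -> None:
    agent = FunctionAgent(tools=[], llm=OpenAI(model="gpt-4o-mini"))
    handler = agent.run("Briefly explain what a vector store is.")
    async for ev in handler.stream_events():
        if isinstance(ev, AgentStream):
            # `delta` carries the streamed answer text; `thinking_delta` (added in
            # #19785) carries the model's reasoning stream when the LLM emits one.
            print(ev.delta, end="", flush=True)
            if getattr(ev, "thinking_delta", None):
                print(f"\n[thinking] {ev.thinking_delta}")
    await handler


asyncio.run(main())
```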

##### `llama-index-instrumentation` \[0.4.1]

- feat: add sync event/handler support ([#19825](https://redirect.github.com/run-llama/llama_index/issues/19825))

##### `llama-index-llms-google-genai` \[0.4.0]

- feat: Add VideoBlock and GoogleGenAI video input support ([#19823](https://redirect.github.com/run-llama/llama_index/issues/19823))

##### `llama-index-llms-ollama` \[0.7.3]

- fix: handle `None` tool_calls in the final message when using Ollama with agents ([#19844](https://redirect.github.com/run-llama/llama_index/issues/19844))

##### `llama-index-llms-vertex` \[0.6.1]

- fix: align complete/acomplete responses ([#19806](https://redirect.github.com/run-llama/llama_index/issues/19806))

##### `llama-index-readers-confluence` \[0.4.3]

- chore: Bump version constraint for atlassian-python-api to include 4.x ([#19824](https://redirect.github.com/run-llama/llama_index/issues/19824))

##### `llama-index-readers-github` \[0.6.2]

- fix: Make `url` optional ([#19851](https://redirect.github.com/run-llama/llama_index/issues/19851))

##### `llama-index-readers-web` \[0.5.3]

- feat: Add OlostepWebReader integration ([#19821](https://redirect.github.com/run-llama/llama_index/issues/19821))

##### `llama-index-tools-google` \[0.6.2]

- feat: Add optional creds argument to GoogleCalendarToolSpec ([#19826](https://redirect.github.com/run-llama/llama_index/issues/19826))

##### `llama-index-vector-stores-vectorx` \[0.1.0]

- feat: Add vectorx vectorstore ([#19758](https://redirect.github.com/run-llama/llama_index/issues/19758))

### [`v0.14.0`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-09-08)

[Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.13.6...v0.14.0)

**NOTE:** All packages have been bumped to handle the latest
llama-index-core version.

##### `llama-index-core` \[0.14.0]

- breaking: bumped `llama-index-workflows` dependency to 2.0
  - Improve stacktrace clarity by no longer wrapping errors in WorkflowRuntimeError
  - Remove deprecated checkpointer feature
  - Remove deprecated sub-workflows feature
  - Remove deprecated `send_event` method from the Workflow class (still available on the Context class)
  - Remove deprecated `stream_events()` method from the Workflow class (still available on the Context class); see the migration sketch after this list
  - Remove deprecated support for stepwise execution
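
With `llama-index-workflows` 2.0, streaming and event sending go through the run handler and the `Context` rather than the removed `Workflow` methods. A minimal migration-style sketch using only public `llama_index.core.workflow` APIs; the toy workflow itself is not code from this PR:

```python
import asyncio

from llama_index.core.workflow import (
    Context,
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)


class ProgressEvent(Event):
    msg: str


class EchoFlow(Workflow):
    @step
    async def echo(self, ctx: Context, ev: StartEvent) -> StopEvent:
        # Stream progress through the Context; Workflow.stream_events() and
        # Workflow.send_event() are gone in workflows 2.0.
        ctx.write_event_to_stream(ProgressEvent(msg="working"))
        return StopEvent(result=f"echo: {ev.topic}")


async def main() -> None:
    handler = EchoFlow(timeout=60).run(topic="workflows 2.0")
    async for ev in handler.stream_events():
        if isinstance(ev, ProgressEvent):
            print(ev.msg)
    print(await handler)


asyncio.run(main())
```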

##### `llama-index-llms-openai` \[0.5.6]

- feat: add support for document blocks in openai chat completions ([#19809](https://redirect.github.com/run-llama/llama_index/issues/19809))
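
The bullet above refers to the content-block API on `ChatMessage`. A heavily hedged sketch of how a document block might be passed to the OpenAI LLM, assuming `DocumentBlock` is exported alongside the other block types and accepts a local file path; neither detail is documented in this PR:

```python
from llama_index.core.llms import ChatMessage, DocumentBlock, TextBlock
from llama_index.llms.openai import OpenAI

# Assumption: DocumentBlock takes a path to a local file, mirroring ImageBlock;
# check the llama-index-core release notes if the constructor differs.
message = ChatMessage(
    role="user",
    blocks=[
        DocumentBlock(path="report.pdf"),
        TextBlock(text="Summarize this document in two sentences."),
    ],
)

print(OpenAI(model="gpt-4o-mini").chat([message]))
```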

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever the PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
