Commit Graph

20 Commits

Mend Renovate · `f87ed05aac` · chore(deps): update pip (#2215)
This PR contains the following updates:

| Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) |
|---|---|---|---|
| [google-adk](https://redirect.github.com/google/adk-python) ([changelog](https://redirect.github.com/google/adk-python/blob/main/CHANGELOG.md)) | `==1.19.0` → `==1.21.0` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/google-adk/1.21.0?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/google-adk/1.19.0/1.21.0?slim=true) |
| [google-genai](https://redirect.github.com/googleapis/python-genai) | `==1.52.0` → `==1.56.0` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/google-genai/1.56.0?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/google-genai/1.52.0/1.56.0?slim=true) |
| [langchain](https://redirect.github.com/langchain-ai/langchain) ([source](https://redirect.github.com/langchain-ai/langchain/tree/HEAD/libs/langchain), [changelog](https://redirect.github.com/langchain-ai/langchain/releases?q=tag%3A%22langchain%3D%3D1%22)) | `==1.1.0` → `==1.2.0` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/langchain/1.2.0?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/langchain/1.1.0/1.2.0?slim=true) |
| [langchain-google-vertexai](https://redirect.github.com/langchain-ai/langchain-google) ([source](https://redirect.github.com/langchain-ai/langchain-google/tree/HEAD/libs/vertexai), [changelog](https://redirect.github.com/langchain-ai/langchain-google/releases?q=%22vertexai%22)) | `==3.1.0` → `==3.2.0` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/langchain-google-vertexai/3.2.0?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/langchain-google-vertexai/3.1.0/3.2.0?slim=true) |
| [langgraph](https://redirect.github.com/langchain-ai/langgraph) ([source](https://redirect.github.com/langchain-ai/langgraph/tree/HEAD/libs/langgraph), [changelog](https://redirect.github.com/langchain-ai/langgraph/releases)) | `==1.0.4` → `==1.0.5` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/langgraph/1.0.5?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/langgraph/1.0.4/1.0.5?slim=true) |
| [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.14.10` → `==0.14.12` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index/0.14.12?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index/0.14.10/0.14.12?slim=true) |
| llama-index-llms-google-genai | `==0.7.3` → `==0.8.3` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index-llms-google-genai/0.8.3?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index-llms-google-genai/0.7.3/0.8.3?slim=true) |
| [pytest](https://redirect.github.com/pytest-dev/pytest) ([changelog](https://docs.pytest.org/en/stable/changelog.html)) | `==9.0.1` → `==9.0.2` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/pytest/9.0.2?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/pytest/9.0.1/9.0.2?slim=true) |
| [toolbox-core](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python) ([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/CHANGELOG.md)) | `==0.5.3` → `==0.5.4` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/toolbox-core/0.5.4?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/toolbox-core/0.5.3/0.5.4?slim=true) |
| [toolbox-langchain](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python) ([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-langchain/CHANGELOG.md)) | `==0.5.3` → `==0.5.4` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/toolbox-langchain/0.5.4?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/toolbox-langchain/0.5.3/0.5.4?slim=true) |
| [toolbox-llamaindex](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python) ([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-llamaindex/CHANGELOG.md)) | `==0.5.3` → `==0.5.4` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/toolbox-llamaindex/0.5.4?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/toolbox-llamaindex/0.5.3/0.5.4?slim=true) |

---

### Release Notes

<details>
<summary>google/adk-python (google-adk)</summary>

### [`v1.21.0`](https://redirect.github.com/google/adk-python/blob/HEAD/CHANGELOG.md#1210-2025-12-11)

[Compare
Source](https://redirect.github.com/google/adk-python/compare/v1.20.0...v1.21.0)

##### Features

- **\[Interactions API Support]**
- The newly released Gemini [Interactions
API](https://ai.google.dev/gemini-api/docs/interactions) is now supported
in ADK. To use it:
  ```python
  Agent(
    model=Gemini(
        model="gemini-3-pro-preview",
        use_interactions_api=True,
    ),
    name="...",
    description="...",
    instruction="...",
  )
  ```
  See the
  [samples](https://redirect.github.com/google/adk-python/tree/main/contributing/samples/interactions_api)
  for details.

- **\[Services]**
- Add `add_session_to_memory` to `CallbackContext` and `ToolContext` to
explicitly save the current session to memory
([7b356dd](7b356ddc1b))

- **\[Plugins]**
- Add location for table in agent events in plugin
BigQueryAgentAnalytics
([507424a](507424acb9))
- Upgrade BigQueryAgentAnalyticsPlugin to v2.0 with improved
performance, multimodal support, and reliability
([7b2fe14](7b2fe14dab))

- **\[A2A]**
- Adds ADK EventActions to A2A response
([32e87f6](32e87f6381))

- **\[Tools]**
- Add `header_provider` to `OpenAPIToolset` and `RestApiTool`
([e1a7593](e1a7593ae8))
- Allow overriding connection template
([cde7f7c](cde7f7c243))
- Add SSL certificate verification configuration to OpenAPI tools using
the `verify` parameter
([9d2388a](9d2388a46f))
- Use json schema for function tool declaration when feature enabled
([cb3244b](cb3244bb58))

- **\[Models]**
- Add Gemma3Ollama model integration and a sample
([e9182e5](e9182e5eb4))
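
Of the service changes above, the new `add_session_to_memory` hook is the most API-visible: per the changelog, it lets a callback or tool explicitly save the current session to memory. A minimal sketch of that call pattern, using a hypothetical stub in place of ADK's real `ToolContext` (the method name comes from the changelog; the async no-argument signature is an assumption):

```python
import asyncio

class StubToolContext:
    """Hypothetical stand-in for ADK's ToolContext; the real method would
    persist the current session to the configured memory service."""
    def __init__(self):
        self.memory = []

    async def add_session_to_memory(self):
        # The stub just records that the session was saved.
        self.memory.append("current-session")

async def summarize_and_save(tool_context):
    # A tool can now explicitly checkpoint the session instead of
    # relying on an implicit save at the end of the run.
    await tool_context.add_session_to_memory()
    return {"status": "saved"}

ctx = StubToolContext()
result = asyncio.run(summarize_and_save(ctx))
print(result["status"], len(ctx.memory))  # prints: saved 1
```

In real ADK code the context would be the `tool_context` parameter ADK passes to a function tool; the stub only exists to make the pattern runnable here.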

##### Bug Fixes

- Install dependencies for py 3.10
([9cccab4](9cccab4537))
- Refactor LiteLLM response schema formatting for different models
([894d8c6](894d8c6c26))
- Resolve project and credentials before creating Spanner client
([99f893a](99f893ae28))
- Avoid false positive "App name mismatch" warnings in Runner
([6388ba3](6388ba3b20))
- Update the code to work with either one event or more than one event
([4f54660](4f54660d6d))
- OpenAPI schema generation by skipping JSON schema for
judge\_model\_config
([56775af](56775afc48))
- Add tool\_name\_prefix support to OpenAPIToolset
([82e6623](82e6623fa9))
- Pass context to client interceptors
([143ad44](143ad44f8c))
- Yield event with error code when agent run raised A2AClientHTTPError
([b7ce5e1](b7ce5e17b6))
- Handle string function responses in LiteLLM conversion
([2b64715](2b64715505))
- ApigeeLLM support for Built-in tools like GoogleSearch,
BuiltInCodeExecutor when calling Gemini models through Apigee
([a9b853f](a9b853fe36))
- Extract and propagate task\_id in RemoteA2aAgent
([82bd4f3](82bd4f380b))
- Update FastAPI and Starlette to fix CVE-2025-62727 (ReDoS
vulnerability)
([c557b0a](c557b0a1f2))
- Add client id to token exchange
([f273517](f2735177f1))

##### Improvements

- Normalize multipart content for LiteLLM's ollama\_chat provider
([055dfc7](055dfc7974))
- Update adk web, fixes image not rendering, state not updating, update
drop down box width and trace icons
([df86847](df8684734b))
- Add sample agent for interaction api integration
([68d7048](68d70488b9))
- Update genAI SDK version
([f0bdcab](f0bdcaba44))
- Introduce `build_function_declaration_with_json_schema` to use
pydantic to generate json schema for FunctionTool
([51a638b](51a638b6b8))
- Update component definition for triaging agent
([ee743bd](ee743bd19a))
- Migrate Google tools to use the new feature decorator
([bab5729](bab57296d5))
- Migrate computer to use the new feature decorator
([1ae944b](1ae944b39d))
- Add Spanner execute sql query result mode using list of dictionaries
([f22bac0](f22bac0b20))
- Improve error message for missing `invocation_id` and `new_message` in
`run_async`
([de841a4](de841a4a09))

### [`v1.20.0`](https://redirect.github.com/google/adk-python/blob/HEAD/CHANGELOG.md#1200-2025-12-01)

[Compare
Source](https://redirect.github.com/google/adk-python/compare/v1.19.0...v1.20.0)

##### Features

- **\[Core]**
- Add enum constraint to `agent_name` for `transfer_to_agent`
([4a42d0d](4a42d0d9d8))
- Add validation for unique sub-agent names
([#&#8203;3557](https://redirect.github.com/google/adk-python/issues/3557))
([2247a45](2247a45922))
- Support streaming function call arguments in progressive SSE streaming
feature
([786aaed](786aaed335))

- **\[Models]**
- Enable multi-provider support for Claude and LiteLLM
([d29261a](d29261a3dc))

- **\[Tools]**
- Create APIRegistryToolset to add tools from Cloud API registry to
agent
([ec4ccd7](ec4ccd718f))
- Add an option to disallow propagating runner plugins to AgentTool
runner
([777dba3](777dba3033))

- **\[Web]**
- Added an endpoint to list apps with details
([b57fe5f](b57fe5f459))

##### Bug Fixes

- Allow image parts in user messages for Anthropic Claude
([5453b5b](5453b5bfde))
- Mark the Content as non-empty if its first part contains text or
inline\_data or file\_data or func call/response
([631b583](631b58336d))
- Fixes double response processing issue in `base_llm_flow.py` where, in
Bidi-streaming (live) mode, the multi-agent structure causes duplicated
responses after tool calling.
([cf21ca3](cf21ca3584))
- Fix out of bounds error in \_run\_async\_impl
([8fc6128](8fc6128b62))
- Fix paths for public docs
([cd54f48](cd54f48fed))
- Ensure request bodies without explicit names are named 'body'
([084c2de](084c2de0da)),
closes
[#&#8203;2213](https://redirect.github.com/google/adk-python/issues/2213)
- Optimize Stale Agent with GraphQL and Search API to resolve 429 Quota
errors
([cb19d07](cb19d0714c))
- Update AgentTool to use Agent's description when input\_schema is
provided in FunctionDeclaration
([52674e7](52674e7fac))
- Update LiteLLM system instruction role from "developer" to "system"
([2e1f730](2e1f730c3b)),
closes
[#&#8203;3657](https://redirect.github.com/google/adk-python/issues/3657)
- Update session last update time when appending events
([a3e4ad3](a3e4ad3cd1)),
closes
[#&#8203;2721](https://redirect.github.com/google/adk-python/issues/2721)
- Update the retry\_on\_closed\_resource decorator to retry on all
errors
([a3aa077](a3aa07722a))
- Windows Path Handling and Normalize Cross-Platform Path Resolution in
AgentLoader
([a1c09b7](a1c09b724b))

##### Documentation

- Add Code Wiki badge to README
([caf23ac](caf23ac49f))

</details>

<details>
<summary>googleapis/python-genai (google-genai)</summary>

### [`v1.56.0`](https://redirect.github.com/googleapis/python-genai/blob/HEAD/CHANGELOG.md#1560-2025-12-16)

[Compare
Source](https://redirect.github.com/googleapis/python-genai/compare/v1.55.0...v1.56.0)

##### Features

- Add minimal and medium thinking levels.
([96d644c](96d644cd52))
- Add support for Struct in ToolResult Content.
([8fd4886](8fd4886a04))
- Add ultra high resolution to the media resolution in Parts.
([356c320](356c320566))
- Add ULTRA\_HIGH MediaResolution and new ThinkingLevel enums
([336b823](336b8236c0))
- Define and use DocumentMimeType for DocumentContent
([dc7f00f](dc7f00f78b))
- Support multi speaker for Vertex AI
([ecb00c2](ecb00c2241))

##### Bug Fixes

- Api version handling for interactions.
([436ca2e](436ca2e1d5))

##### Documentation

- Add documentation for the new Interactions API (Preview).
([e28a69c](e28a69c92a))
- Update and restructure codegen\_instructions
([00422de](00422de07b))
- Update docs for 1.55
([1cc43e7](1cc43e7d06))

### [`v1.55.0`](https://redirect.github.com/googleapis/python-genai/blob/HEAD/CHANGELOG.md#1550-2025-12-11)

[Compare
Source](https://redirect.github.com/googleapis/python-genai/compare/v1.54.0...v1.55.0)

##### Features

- Add the Interactions API
([836a3](836a33c93f))
- Add enableEnhancedCivicAnswers feature in GenerateContentConfig
([15d1ea9](15d1ea9fbb))
- Add IMAGE\_RECITATION and IMAGE\_OTHER enum values to FinishReason
([8bb4b9a](8bb4b9a8b7))
- Add voice activity detection signal.
([feae46d](feae46dd76))

##### Bug Fixes

- Replicated voice config bytes handling
([c9f8668](c9f8668cea))

##### Documentation

- Regenerate docs for 1.54.0
([8bac8d2](8bac8d2d92))

### [`v1.54.0`](https://redirect.github.com/googleapis/python-genai/blob/HEAD/CHANGELOG.md#1540-2025-12-08)

[Compare
Source](https://redirect.github.com/googleapis/python-genai/compare/v1.53.0...v1.54.0)

##### Features

- Support ReplicatedVoiceConfig
([07c74dd](07c74dd120))

##### Bug Fixes

- Apply timeout to the total request duration in aiohttp
([a4f4205](a4f4205dd9))
- Make APIError class picklable (fixes
[#&#8203;1144](https://redirect.github.com/googleapis/python-genai/issues/1144))
([e3d5712](e3d5712d9f))

##### Documentation

- Regenerate docs for 1.53.0
([3a2b970](3a2b9702ec))

### [`v1.53.0`](https://redirect.github.com/googleapis/python-genai/blob/HEAD/CHANGELOG.md#1530-2025-12-03)

[Compare
Source](https://redirect.github.com/googleapis/python-genai/compare/v1.52.0...v1.53.0)

##### Features

- Add empty response for tunings.cancel()
([97cc7e4](97cc7e4eaf))

##### Bug Fixes

- Convert 'citationSources' key in CitationMetadata to 'citations' when
present (fixes
[#&#8203;1222](https://redirect.github.com/googleapis/python-genai/issues/1222))
([2f28b02](2f28b02517))
- Fix google.auth.transport.requests import error in Live API
([a842721](a842721cb1))

##### Documentation

- Improve docs for google.genai.types
([5b50adc](5b50adce2a))
- Recommend using response\_json\_schema in error messages and
docstrings.
([c0b175a](c0b175a0ca))
- Updating codegen instructions to use gemini 3 pro and nano banana pro
([060f015](060f015d7e))

</details>

<details>
<summary>langchain-ai/langgraph (langgraph)</summary>

### [`v1.0.5`](https://redirect.github.com/langchain-ai/langgraph/releases/tag/1.0.5): langgraph==1.0.5

[Compare
Source](https://redirect.github.com/langchain-ai/langgraph/compare/1.0.4...1.0.5)

Changes since 1.0.4

- release(langgraph): bump to 1.0.5
([#&#8203;6582](https://redirect.github.com/langchain-ai/langgraph/issues/6582))
- feat(sdk-py): emit id as part of stream events
([#&#8203;6581](https://redirect.github.com/langchain-ai/langgraph/issues/6581))
- fix: update readme
([#&#8203;6570](https://redirect.github.com/langchain-ai/langgraph/issues/6570))
- release(checkpoint-postgres): 3.0.1
([#&#8203;6568](https://redirect.github.com/langchain-ai/langgraph/issues/6568))
- release(checkpoint-sqlite): 3.0.1
([#&#8203;6566](https://redirect.github.com/langchain-ai/langgraph/issues/6566))
- chore(cli): Pass through webhook configuration in dev server
([#&#8203;6557](https://redirect.github.com/langchain-ai/langgraph/issues/6557))
- feat: custom encryption at rest
([#&#8203;6482](https://redirect.github.com/langchain-ai/langgraph/issues/6482))
- chore: fix links for docs
([#&#8203;6538](https://redirect.github.com/langchain-ai/langgraph/issues/6538))
- chore: Bump lockfile
([#&#8203;6537](https://redirect.github.com/langchain-ai/langgraph/issues/6537))
- feat: Include pagination in assistants search response
([#&#8203;6526](https://redirect.github.com/langchain-ai/langgraph/issues/6526))

</details>

<details>
<summary>run-llama/llama_index (llama-index)</summary>

### [`v0.14.12`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-12-30)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.10...v0.14.12)

##### llama-index-callbacks-agentops \[0.4.1]

- Feat/async tool spec support
([#&#8203;20338](https://redirect.github.com/run-llama/llama_index/pull/20338))

##### llama-index-core \[0.14.12]

- Feat/async tool spec support
([#&#8203;20338](https://redirect.github.com/run-llama/llama_index/pull/20338))
- Improve `MockFunctionCallingLLM`
([#&#8203;20356](https://redirect.github.com/run-llama/llama_index/pull/20356))
- fix(openai): sanitize generic Pydantic model schema names
([#&#8203;20371](https://redirect.github.com/run-llama/llama_index/pull/20371))
- Element node parser
([#&#8203;20399](https://redirect.github.com/run-llama/llama_index/pull/20399))
- improve llama dev logging
([#&#8203;20411](https://redirect.github.com/run-llama/llama_index/pull/20411))
- test(node\_parser): add unit tests for Java CodeSplitter
([#&#8203;20423](https://redirect.github.com/run-llama/llama_index/pull/20423))
- fix: crash in log\_vector\_store\_query\_result when result.ids is
None
([#&#8203;20427](https://redirect.github.com/run-llama/llama_index/pull/20427))

##### llama-index-embeddings-litellm \[0.4.1]

- Add docstring to LiteLLM embedding class
([#&#8203;20336](https://redirect.github.com/run-llama/llama_index/pull/20336))

##### llama-index-embeddings-ollama \[0.8.5]

- feat(llama-index-embeddings-ollama): Add keep\_alive parameter
([#&#8203;20395](https://redirect.github.com/run-llama/llama_index/pull/20395))
- docs: improve Ollama embeddings README with comprehensive
documentation
([#&#8203;20414](https://redirect.github.com/run-llama/llama_index/pull/20414))

##### llama-index-embeddings-voyageai \[0.5.2]

- Voyage multimodal 35
([#&#8203;20398](https://redirect.github.com/run-llama/llama_index/pull/20398))

##### llama-index-graph-stores-nebula \[0.5.1]

- feat(nebula): add MENTIONS edge to property graph store
([#&#8203;20401](https://redirect.github.com/run-llama/llama_index/pull/20401))

##### llama-index-llms-aibadgr \[0.1.0]

- feat(llama-index-llms-aibadgr): Add AI Badgr OpenAI‑compatible LLM
integration
([#&#8203;20365](https://redirect.github.com/run-llama/llama_index/pull/20365))

##### llama-index-llms-anthropic \[0.10.4]

- add back haiku-3 support
([#&#8203;20408](https://redirect.github.com/run-llama/llama_index/pull/20408))

##### llama-index-llms-bedrock-converse \[0.12.3]

- fix: bedrock converse thinking block issue
([#&#8203;20355](https://redirect.github.com/run-llama/llama_index/pull/20355))

##### llama-index-llms-google-genai \[0.8.3]

- Switch use\_file\_api to Flexible file\_mode; Improve File Upload
Handling & Bump google-genai to v1.52.0
([#&#8203;20347](https://redirect.github.com/run-llama/llama_index/pull/20347))
- Fix missing role from Google-GenAI
([#&#8203;20357](https://redirect.github.com/run-llama/llama_index/pull/20357))
- Add signature index fix
([#&#8203;20362](https://redirect.github.com/run-llama/llama_index/pull/20362))
- Add positional thought signature for thoughts
([#&#8203;20418](https://redirect.github.com/run-llama/llama_index/pull/20418))

##### llama-index-llms-ollama \[0.9.1]

- feature: pydantic no longer complains if you pass 'low', 'medium', 'h…
([#&#8203;20394](https://redirect.github.com/run-llama/llama_index/pull/20394))

##### llama-index-llms-openai \[0.6.12]

- fix: Handle tools=None in OpenAIResponses.\_get\_model\_kwargs
([#&#8203;20358](https://redirect.github.com/run-llama/llama_index/pull/20358))
- feat: add support for gpt-5.2 and 5.2 pro
([#&#8203;20361](https://redirect.github.com/run-llama/llama_index/pull/20361))

##### llama-index-readers-confluence \[0.6.1]

- fix(confluence): support Python 3.14
([#&#8203;20370](https://redirect.github.com/run-llama/llama_index/pull/20370))

##### llama-index-readers-file \[0.5.6]

- Loosen constraint on `pandas` version
([#&#8203;20387](https://redirect.github.com/run-llama/llama_index/pull/20387))

##### llama-index-readers-service-now \[0.2.2]

- chore(deps): bump urllib3 from 2.5.0 to 2.6.0 in
/llama-index-integrations/readers/llama-index-readers-service-now in the
pip group across 1 directory
([#&#8203;20341](https://redirect.github.com/run-llama/llama_index/pull/20341))

##### llama-index-tools-mcp \[0.4.5]

- fix: pass timeout parameters to transport clients in BasicMCPClient
([#&#8203;20340](https://redirect.github.com/run-llama/llama_index/pull/20340))
- feature: Permit to pass a custom httpx.AsyncClient when creating a
BasicMcpClient
([#&#8203;20368](https://redirect.github.com/run-llama/llama_index/pull/20368))

##### llama-index-tools-typecast \[0.1.0]

- feat: add Typecast tool integration with text to speech features
([#&#8203;20343](https://redirect.github.com/run-llama/llama_index/pull/20343))

##### llama-index-vector-stores-azurepostgresql \[0.2.0]

- Feat/async tool spec support
([#&#8203;20338](https://redirect.github.com/run-llama/llama_index/pull/20338))

##### llama-index-vector-stores-chroma \[0.5.5]

- Fix chroma nested metadata filters
([#&#8203;20424](https://redirect.github.com/run-llama/llama_index/pull/20424))
- fix(chroma): support multimodal results
([#&#8203;20426](https://redirect.github.com/run-llama/llama_index/pull/20426))

##### llama-index-vector-stores-couchbase \[0.6.0]

- Update FTS & GSI reference docs for Couchbase vector-store
([#&#8203;20346](https://redirect.github.com/run-llama/llama_index/pull/20346))

##### llama-index-vector-stores-faiss \[0.5.2]

- fix(faiss): pass numpy array instead of int to add\_with\_ids
([#&#8203;20384](https://redirect.github.com/run-llama/llama_index/pull/20384))

##### llama-index-vector-stores-lancedb \[0.4.4]

- Feat/async tool spec support
([#&#8203;20338](https://redirect.github.com/run-llama/llama_index/pull/20338))
- fix(vector\_stores/lancedb): add missing '<' filter operator
([#&#8203;20364](https://redirect.github.com/run-llama/llama_index/pull/20364))
- fix(lancedb): fix metadata filtering logic and list value SQL
generation
([#&#8203;20374](https://redirect.github.com/run-llama/llama_index/pull/20374))

##### llama-index-vector-stores-mongodb \[0.9.0]

- Update mongo vector store to initialize without list permissions
([#&#8203;20354](https://redirect.github.com/run-llama/llama_index/pull/20354))
- add mongodb delete index
([#&#8203;20429](https://redirect.github.com/run-llama/llama_index/pull/20429))
- async mongodb atlas support
([#&#8203;20430](https://redirect.github.com/run-llama/llama_index/pull/20430))

##### llama-index-vector-stores-redis \[0.6.2]

- Redis metadata filter fix
([#&#8203;20359](https://redirect.github.com/run-llama/llama_index/pull/20359))

##### llama-index-vector-stores-vertexaivectorsearch \[0.3.3]

- feat(vertex-vector-search): Add Google Vertex AI Vector Search v2.0
support
([#&#8203;20351](https://redirect.github.com/run-llama/llama_index/pull/20351))

</details>

<details>
<summary>pytest-dev/pytest (pytest)</summary>

### [`v9.0.2`](https://redirect.github.com/pytest-dev/pytest/releases/tag/9.0.2)

[Compare
Source](https://redirect.github.com/pytest-dev/pytest/compare/9.0.1...9.0.2)

### pytest 9.0.2 (2025-12-06)

#### Bug fixes

- [#&#8203;13896](https://redirect.github.com/pytest-dev/pytest/issues/13896):
  The terminal progress feature added in pytest 9.0.0 has been disabled by
  default, except on Windows, due to compatibility issues with some
  terminal emulators.

  You may enable it again by passing `-p terminalprogress`. We may enable
  it by default again once compatibility improves in the future.

  Additionally, when the environment variable `TERM` is `dumb`, the escape
  codes are no longer emitted, even if the plugin is enabled.

- [#&#8203;13904](https://redirect.github.com/pytest-dev/pytest/issues/13904):
  Fixed the TOML type of the `tmp_path_retention_count` setting in the API
  reference from number to string.
- [#&#8203;13946](https://redirect.github.com/pytest-dev/pytest/issues/13946):
  The private `config.inicfg` attribute was changed in a breaking manner in
  pytest 9.0.0. Due to its usage in the ecosystem, it is now restored to
  working order using a compatibility shim. It will be deprecated in
  pytest 9.1 and removed in pytest 10.

- [#&#8203;13965](https://redirect.github.com/pytest-dev/pytest/issues/13965):
  Fixed quadratic-time behavior when handling `unittest` subtests in
  Python 3.10.
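
The terminal-progress note above gives the one-off flag; a project that wants the plugin back on for every run can put the same `-p terminalprogress` opt-in into its pytest configuration (a sketch; `pytest.ini` is one of several config locations pytest supports):

```ini
# pytest.ini — re-enable the terminal progress plugin,
# which pytest 9.0.2 disables by default (except on Windows)
[pytest]
addopts = -p terminalprogress
```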

#### Improved documentation

- [#&#8203;4492](https://redirect.github.com/pytest-dev/pytest/issues/4492):
  The API Reference now contains cross-referenceable documentation of
  pytest's command-line flags.

</details>

<details>
<summary>googleapis/mcp-toolbox-sdk-python (toolbox-core)</summary>

### [`v0.5.4`](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/releases/tag/toolbox-llamaindex-v0.5.4): toolbox-llamaindex v0.5.4

[Compare
Source](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/compare/toolbox-core-v0.5.3...toolbox-core-v0.5.4)

##### Features

- **toolbox-llamaindex:** add protocol toggle to llamaindex clients
([#&#8203;453](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/453))
([d5eece0](d5eece0d84))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

👻 **Immortal**: This PR will be recreated if closed unmerged. Get
[config
help](https://redirect.github.com/renovatebot/renovate/discussions) if
that's undesired.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).

2025-12-30 10:00:49 -08:00
Mend Renovate · `d08dd144ad` · chore(deps): update dependency llama-index to v0.14.10 (#2092)
This PR contains the following updates:

| Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) |
|---|---|---|---|
| [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.14.8` → `==0.14.10` | ![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index/0.14.10?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index/0.14.8/0.14.10?slim=true) |

---

### Release Notes

<details>
<summary>run-llama/llama_index (llama-index)</summary>

### [`v0.14.10`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-12-04)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.9...v0.14.10)

##### llama-index-core \[0.14.10]

- feat: add mock function calling llm
([#&#8203;20331](https://redirect.github.com/run-llama/llama_index/pull/20331))

##### llama-index-llms-qianfan \[0.4.1]

- test: fix typo 'reponse' to 'response' in variable names
([#&#8203;20329](https://redirect.github.com/run-llama/llama_index/pull/20329))

##### llama-index-tools-airweave \[0.1.0]

- feat: add Airweave tool integration with advanced search features
([#&#8203;20111](https://redirect.github.com/run-llama/llama_index/pull/20111))

##### llama-index-utils-qianfan \[0.4.1]

- test: fix typo 'reponse' to 'response' in variable names
([#&#8203;20329](https://redirect.github.com/run-llama/llama_index/pull/20329))

### [`v0.14.9`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-12-02)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.8...v0.14.9)

##### llama-index-agent-azure \[0.2.1]

- fix: Pin azure-ai-projects version to prevent breaking changes
([#&#8203;20255](https://redirect.github.com/run-llama/llama_index/pull/20255))

##### llama-index-core \[0.14.9]

- MultiModalVectorStoreIndex now returns a multi-modal
ContextChatEngine.
([#&#8203;20265](https://redirect.github.com/run-llama/llama_index/pull/20265))
- Ingestion to vector store now ensures that \_node-content is readable
([#&#8203;20266](https://redirect.github.com/run-llama/llama_index/pull/20266))
- fix: ensure context is copied with async utils run\_async
([#&#8203;20286](https://redirect.github.com/run-llama/llama_index/pull/20286))
- fix(memory): ensure first message in queue is always a user message
after flush
([#&#8203;20310](https://redirect.github.com/run-llama/llama_index/pull/20310))

##### llama-index-embeddings-bedrock \[0.7.2]

- feat(embeddings-bedrock): Add support for Amazon Bedrock Application
Inference Profiles
([#&#8203;20267](https://redirect.github.com/run-llama/llama_index/pull/20267))
- fix:(embeddings-bedrock) correct extraction of provider from
model\_name
([#&#8203;20295](https://redirect.github.com/run-llama/llama_index/pull/20295))
- Bump version of bedrock-embedding
([#&#8203;20304](https://redirect.github.com/run-llama/llama_index/pull/20304))

##### llama-index-embeddings-voyageai \[0.5.1]

- VoyageAI correction and documentation
([#&#8203;20251](https://redirect.github.com/run-llama/llama_index/pull/20251))

##### llama-index-llms-anthropic \[0.10.3]

- feat: add anthropic opus 4.5
([#&#8203;20306](https://redirect.github.com/run-llama/llama_index/pull/20306))

##### llama-index-llms-bedrock-converse \[0.12.2]

- fix(bedrock-converse): Only use guardrail\_stream\_processing\_mode in
streaming functions
([#&#8203;20289](https://redirect.github.com/run-llama/llama_index/pull/20289))
- feat: add anthropic opus 4.5
([#&#8203;20306](https://redirect.github.com/run-llama/llama_index/pull/20306))
- feat(bedrock-converse): Additional support for Claude Opus 4.5
([#&#8203;20317](https://redirect.github.com/run-llama/llama_index/pull/20317))

##### llama-index-llms-google-genai \[0.7.4]

- Fix gemini-3 support and gemini function call support
([#&#8203;20315](https://redirect.github.com/run-llama/llama_index/pull/20315))

##### llama-index-llms-helicone \[0.1.1]

- update helicone docs + examples
([#&#8203;20208](https://redirect.github.com/run-llama/llama_index/pull/20208))

##### llama-index-llms-openai \[0.6.10]

- Smallest Nit
([#&#8203;20252](https://redirect.github.com/run-llama/llama_index/pull/20252))
- Feat: Add gpt-5.1-chat model support
([#&#8203;20311](https://redirect.github.com/run-llama/llama_index/pull/20311))

##### llama-index-llms-ovhcloud \[0.1.0]

- Add OVHcloud AI Endpoints provider
([#&#8203;20288](https://redirect.github.com/run-llama/llama_index/pull/20288))

##### llama-index-llms-siliconflow \[0.4.2]

- \[Bugfix] None check on content in delta in siliconflow LLM
([#&#8203;20327](https://redirect.github.com/run-llama/llama_index/pull/20327))

##### llama-index-node-parser-docling \[0.4.2]

- Relax docling Python constraints
([#&#8203;20322](https://redirect.github.com/run-llama/llama_index/pull/20322))

##### llama-index-packs-resume-screener \[0.9.3]

- feat: Update pypdf to latest version
([#&#8203;20285](https://redirect.github.com/run-llama/llama_index/pull/20285))

##### llama-index-postprocessor-voyageai-rerank \[0.4.1]

- VoyageAI correction and documentation
([#&#8203;20251](https://redirect.github.com/run-llama/llama_index/pull/20251))

##### llama-index-protocols-ag-ui \[0.2.3]

- fix: correct order of ag-ui events to avoid event conflicts
([#&#8203;20296](https://redirect.github.com/run-llama/llama_index/pull/20296))

##### llama-index-readers-confluence \[0.6.0]

- Refactor Confluence integration: Update license to MIT, remove
requirements.txt, and implement HtmlTextParser for HTML to Markdown
conversion. Update dependencies and tests accordingly.
([#&#8203;20262](https://redirect.github.com/run-llama/llama_index/pull/20262))

##### llama-index-readers-docling \[0.4.2]

- Relax docling Python constraints
([#&#8203;20322](https://redirect.github.com/run-llama/llama_index/pull/20322))

##### llama-index-readers-file \[0.5.5]

- feat: Update pypdf to latest version
([#&#8203;20285](https://redirect.github.com/run-llama/llama_index/pull/20285))

##### llama-index-readers-reddit \[0.4.1]

- Fix typo in README.md for Reddit integration
([#&#8203;20283](https://redirect.github.com/run-llama/llama_index/pull/20283))

##### llama-index-storage-chat-store-postgres \[0.3.2]

- \[FIX] Postgres ChatStore automatically prefix table name with
"data\_"
([#&#8203;20241](https://redirect.github.com/run-llama/llama_index/pull/20241))

##### llama-index-vector-stores-azureaisearch \[0.4.4]

- `vector-azureaisearch`: check if user agent already in policy before
add it to azure client
([#&#8203;20243](https://redirect.github.com/run-llama/llama_index/pull/20243))
- fix(azureaisearch): Add close/aclose methods to fix unclosed client
session warnings
([#&#8203;20309](https://redirect.github.com/run-llama/llama_index/pull/20309))

##### llama-index-vector-stores-milvus \[0.9.4]

- Fix/consistency level param for milvus
([#&#8203;20268](https://redirect.github.com/run-llama/llama_index/pull/20268))

##### llama-index-vector-stores-postgres \[0.7.2]

- Fix postgresql dispose
([#&#8203;20312](https://redirect.github.com/run-llama/llama_index/pull/20312))

##### llama-index-vector-stores-qdrant \[0.9.0]

- fix: Update qdrant-client version constraints
([#&#8203;20280](https://redirect.github.com/run-llama/llama_index/pull/20280))
- Feat: update Qdrant client to 1.16.0
([#&#8203;20287](https://redirect.github.com/run-llama/llama_index/pull/20287))

##### llama-index-vector-stores-vertexaivectorsearch \[0.3.2]

- fix: update blob path in batch\_update\_index
([#&#8203;20281](https://redirect.github.com/run-llama/llama_index/pull/20281))

##### llama-index-voice-agents-openai \[0.2.2]

- Smallest Nit
([#&#8203;20252](https://redirect.github.com/run-llama/llama_index/pull/20252))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0Mi4xOS45IiwidXBkYXRlZEluVmVyIjoiNDIuMzIuMiIsInRhcmdldEJyYW5jaCI6Im1haW4iLCJsYWJlbHMiOltdfQ==-->

Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
2025-12-11 20:29:47 -05:00
Mend Renovate
baf1bd1a97 chore(deps): update dependency llama-index to v0.14.8 (#1831)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [llama-index](https://redirect.github.com/run-llama/llama_index) |
`==0.14.6` -> `==0.14.8` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index/0.14.8?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index/0.14.6/0.14.8?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>run-llama/llama_index (llama-index)</summary>

###
[`v0.14.8`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-11-10)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.7...v0.14.8)

##### llama-index-core \[0.14.8]

- Fix ReActOutputParser getting stuck when "Answer:" contains "Action:"
([#&#8203;20098](https://redirect.github.com/run-llama/llama_index/pull/20098))
- Add buffer to image, audio, video and document blocks
([#&#8203;20153](https://redirect.github.com/run-llama/llama_index/pull/20153))
- fix(agent): Handle multi-block ChatMessage in ReActAgent
([#&#8203;20196](https://redirect.github.com/run-llama/llama_index/pull/20196))
- Fix/20209
([#&#8203;20214](https://redirect.github.com/run-llama/llama_index/pull/20214))
- Preserve Exception in ToolOutput
([#&#8203;20231](https://redirect.github.com/run-llama/llama_index/pull/20231))
- fix weird pydantic warning
([#&#8203;20235](https://redirect.github.com/run-llama/llama_index/pull/20235))

##### llama-index-embeddings-nvidia \[0.4.2]

- docs: Edit pass and update example model
([#&#8203;20198](https://redirect.github.com/run-llama/llama_index/pull/20198))

##### llama-index-embeddings-ollama \[0.8.4]

- Added a test case (no code change) to check the embedding through an
actual connection to an Ollama server (after checking that the Ollama
server exists)
([#&#8203;20230](https://redirect.github.com/run-llama/llama_index/pull/20230))

##### llama-index-llms-anthropic \[0.10.2]

- feat(llms/anthropic): Add support for RawMessageDeltaEvent in
streaming
([#&#8203;20206](https://redirect.github.com/run-llama/llama_index/pull/20206))
- chore: remove unsupported models
([#&#8203;20211](https://redirect.github.com/run-llama/llama_index/pull/20211))

##### llama-index-llms-bedrock-converse \[0.11.1]

- feat: integrate bedrock converse with tool call block
([#&#8203;20099](https://redirect.github.com/run-llama/llama_index/pull/20099))
- feat: Update model name extraction to include 'jp' region prefix and …
([#&#8203;20233](https://redirect.github.com/run-llama/llama_index/pull/20233))

##### llama-index-llms-google-genai \[0.7.3]

- feat: google genai integration with tool block
([#&#8203;20096](https://redirect.github.com/run-llama/llama_index/pull/20096))
- fix: non-streaming gemini tool calling
([#&#8203;20207](https://redirect.github.com/run-llama/llama_index/pull/20207))
- Add token usage information in GoogleGenAI chat additional\_kwargs
([#&#8203;20219](https://redirect.github.com/run-llama/llama_index/pull/20219))
- bug fix google genai stream\_complete
([#&#8203;20220](https://redirect.github.com/run-llama/llama_index/pull/20220))

##### llama-index-llms-nvidia \[0.4.4]

- docs: Edit pass and code example updates
([#&#8203;20200](https://redirect.github.com/run-llama/llama_index/pull/20200))

##### llama-index-llms-openai \[0.6.8]

- FixV2: Correct DocumentBlock type for OpenAI from 'input\_file' to
'file'
([#&#8203;20203](https://redirect.github.com/run-llama/llama_index/pull/20203))
- OpenAI v2 sdk support
([#&#8203;20234](https://redirect.github.com/run-llama/llama_index/pull/20234))

##### llama-index-llms-upstage \[0.6.5]

- OpenAI v2 sdk support
([#&#8203;20234](https://redirect.github.com/run-llama/llama_index/pull/20234))

##### llama-index-packs-streamlit-chatbot \[0.5.2]

- OpenAI v2 sdk support
([#&#8203;20234](https://redirect.github.com/run-llama/llama_index/pull/20234))

##### llama-index-packs-voyage-query-engine \[0.5.2]

- OpenAI v2 sdk support
([#&#8203;20234](https://redirect.github.com/run-llama/llama_index/pull/20234))

##### llama-index-postprocessor-nvidia-rerank \[0.5.1]

- docs: Edit pass
([#&#8203;20199](https://redirect.github.com/run-llama/llama_index/pull/20199))

##### llama-index-readers-web \[0.5.6]

- feat: Add ScrapyWebReader Integration
([#&#8203;20212](https://redirect.github.com/run-llama/llama_index/pull/20212))
- Update Scrapy dependency to 2.13.3
([#&#8203;20228](https://redirect.github.com/run-llama/llama_index/pull/20228))

##### llama-index-readers-whisper \[0.3.0]

- OpenAI v2 sdk support
([#&#8203;20234](https://redirect.github.com/run-llama/llama_index/pull/20234))

##### llama-index-storage-kvstore-postgres \[0.4.3]

- fix: Ensure schema creation only occurs if it doesn't already exist
([#&#8203;20225](https://redirect.github.com/run-llama/llama_index/pull/20225))

##### llama-index-tools-brightdata \[0.2.1]

- docs: add api key claim instructions
([#&#8203;20204](https://redirect.github.com/run-llama/llama_index/pull/20204))

##### llama-index-tools-mcp \[0.4.3]

- Added test case for issue 19211. No code change
([#&#8203;20201](https://redirect.github.com/run-llama/llama_index/pull/20201))

##### llama-index-utils-oracleai \[0.3.1]

- Update llama-index-core dependency to 0.12.45
([#&#8203;20227](https://redirect.github.com/run-llama/llama_index/pull/20227))

##### llama-index-vector-stores-lancedb \[0.4.2]

- fix: FTS index recreation bug on every LanceDB query
([#&#8203;20213](https://redirect.github.com/run-llama/llama_index/pull/20213))

###
[`v0.14.7`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-10-30)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.6...v0.14.7)

##### llama-index-core \[0.14.7]

- Feat/serpex tool integration
([#&#8203;20141](https://redirect.github.com/run-llama/llama_index/pull/20141))
- Fix outdated error message about setting LLM
([#&#8203;20157](https://redirect.github.com/run-llama/llama_index/pull/20157))
- Fixing some recently failing tests
([#&#8203;20165](https://redirect.github.com/run-llama/llama_index/pull/20165))
- Fix: update lock to latest workflow and fix issues
([#&#8203;20173](https://redirect.github.com/run-llama/llama_index/pull/20173))
- fix: ensure full docstring is used in FunctionTool
([#&#8203;20175](https://redirect.github.com/run-llama/llama_index/pull/20175))
- fix api docs build
([#&#8203;20180](https://redirect.github.com/run-llama/llama_index/pull/20180))

##### llama-index-embeddings-voyageai \[0.5.0]

- Updating the VoyageAI integration
([#&#8203;20073](https://redirect.github.com/run-llama/llama_index/pull/20073))

##### llama-index-llms-anthropic \[0.10.0]

- feat: integrate anthropic with tool call block
([#&#8203;20100](https://redirect.github.com/run-llama/llama_index/pull/20100))

##### llama-index-llms-bedrock-converse \[0.10.7]

- feat: Add support for Bedrock Guardrails streamProcessingMode
([#&#8203;20150](https://redirect.github.com/run-llama/llama_index/pull/20150))
- bedrock structured output optional force
([#&#8203;20158](https://redirect.github.com/run-llama/llama_index/pull/20158))

##### llama-index-llms-fireworks \[0.4.5]

- Update FireworksAI models
([#&#8203;20169](https://redirect.github.com/run-llama/llama_index/pull/20169))

##### llama-index-llms-mistralai \[0.9.0]

- feat: mistralai integration with tool call block
([#&#8203;20103](https://redirect.github.com/run-llama/llama_index/pull/20103))

##### llama-index-llms-ollama \[0.9.0]

- feat: integrate ollama with tool call block
([#&#8203;20097](https://redirect.github.com/run-llama/llama_index/pull/20097))

##### llama-index-llms-openai \[0.6.6]

- Allow setting temp of gpt-5-chat
([#&#8203;20156](https://redirect.github.com/run-llama/llama_index/pull/20156))

##### llama-index-readers-confluence \[0.5.0]

- feat(confluence): make SVG processing optional to fix pycairo install…
([#&#8203;20115](https://redirect.github.com/run-llama/llama_index/pull/20115))

##### llama-index-readers-github \[0.9.0]

- Add GitHub App authentication support
([#&#8203;20106](https://redirect.github.com/run-llama/llama_index/pull/20106))

##### llama-index-retrievers-bedrock \[0.5.1]

- Fixing some recently failing tests
([#&#8203;20165](https://redirect.github.com/run-llama/llama_index/pull/20165))

##### llama-index-tools-serpex \[0.1.0]

- Feat/serpex tool integration
([#&#8203;20141](https://redirect.github.com/run-llama/llama_index/pull/20141))
- add missing toml info
([#&#8203;20186](https://redirect.github.com/run-llama/llama_index/pull/20186))

##### llama-index-vector-stores-couchbase \[0.6.0]

- Add Hyperscale and Composite Vector Indexes support for Couchbase
vector-store
([#&#8203;20170](https://redirect.github.com/run-llama/llama_index/pull/20170))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE3My4xIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->

Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com>
Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com>
2025-11-21 09:32:39 +00:00
Mend Renovate
ee10723480 chore(deps): update dependency toolbox-llamaindex to v0.5.3 (#1979)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
|
[toolbox-llamaindex](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python)
([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-llamaindex/CHANGELOG.md))
| `==0.5.2` -> `==0.5.3` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/toolbox-llamaindex/0.5.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/toolbox-llamaindex/0.5.2/0.5.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>googleapis/mcp-toolbox-sdk-python
(toolbox-llamaindex)</summary>

###
[`v0.5.3`](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/releases/tag/toolbox-core-v0.5.3):
toolbox-core: v0.5.3

[Compare
Source](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/compare/toolbox-llamaindex-v0.5.2...toolbox-llamaindex-v0.5.3)

##### Miscellaneous Chores

- **ci:** Updated the toolbox server version for CI and integration
tests
([#&#8203;388](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/388)),
([#&#8203;414](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/414)),
([#&#8203;421](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/421),
[#&#8203;395](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/395)).
- **deps:** Updated dependencies: `aiohttp` to v3.13.0
([#&#8203;389](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/389)),
`google-auth` to v2.41.1
([#&#8203;383](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/383)),
`isort` to v7
([#&#8203;393](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/393)),
`pytest` to v9
([#&#8203;416](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/416)),
and other non-major Python dependencies
([#&#8203;386](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/386)),
([#&#8203;387](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/387)),
([#&#8203;427](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/427)).

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0Mi4xMy41IiwidXBkYXRlZEluVmVyIjoiNDIuMTMuNSIsInRhcmdldEJyYW5jaCI6Im1haW4iLCJsYWJlbHMiOltdfQ==-->

Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com>
Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com>
2025-11-21 13:42:33 +05:30
Mend Renovate
b2ea4b7b8f chore(deps): update dependency pytest to v9.0.1 (#1938)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [pytest](https://redirect.github.com/pytest-dev/pytest)
([changelog](https://docs.pytest.org/en/stable/changelog.html)) |
`==9.0.0` -> `==9.0.1` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/pytest/9.0.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/pytest/9.0.0/9.0.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>pytest-dev/pytest (pytest)</summary>

###
[`v9.0.1`](https://redirect.github.com/pytest-dev/pytest/releases/tag/9.0.1)

[Compare
Source](https://redirect.github.com/pytest-dev/pytest/compare/9.0.0...9.0.1)

### pytest 9.0.1 (2025-11-12)

#### Bug fixes

-
[#&#8203;13895](https://redirect.github.com/pytest-dev/pytest/issues/13895):
Restore support for skipping tests via `raise unittest.SkipTest`.
-
[#&#8203;13896](https://redirect.github.com/pytest-dev/pytest/issues/13896):
The terminal progress plugin added in pytest 9.0 is now automatically
disabled when iTerm2 is detected, as it generated desktop notifications
instead of the desired progress display.
-
[#&#8203;13904](https://redirect.github.com/pytest-dev/pytest/issues/13904):
Fixed the TOML type of the verbosity settings in the API reference from
number to string.
-
[#&#8203;13910](https://redirect.github.com/pytest-dev/pytest/issues/13910):
Fixed `UserWarning: Do not expect file_or_dir` on some earlier Python
3.12 and 3.13 point versions.

#### Packaging updates and notes for downstreams

-
[#&#8203;13933](https://redirect.github.com/pytest-dev/pytest/issues/13933):
The tox configuration has been adjusted to make sure the desired
  version string can be passed into its `package_env` through
  the `SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYTEST` environment
  variable as a part of the release process -- by `webknjaz`.

#### Contributor-facing changes

-
[#&#8203;13891](https://redirect.github.com/pytest-dev/pytest/issues/13891),
[#&#8203;13942](https://redirect.github.com/pytest-dev/pytest/issues/13942):
The CI/CD part of the release automation is now capable of
  creating GitHub Releases without having a Git checkout on
  disk -- by `bluetech` and `webknjaz`.
-
[#&#8203;13933](https://redirect.github.com/pytest-dev/pytest/issues/13933):
The tox configuration has been adjusted to make sure the desired
  version string can be passed into its `package_env` through
  the `SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYTEST` environment
  variable as a part of the release process -- by `webknjaz`.

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNzMuMSIsInVwZGF0ZWRJblZlciI6IjQxLjE3My4xIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->
2025-11-12 10:08:41 -08:00
Mend Renovate
61739300be chore(deps): update dependency llama-index-llms-google-genai to v0.7.3 (#1886)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| llama-index-llms-google-genai | `==0.7.1` -> `==0.7.3` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index-llms-google-genai/0.7.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index-llms-google-genai/0.7.1/0.7.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE1OS40IiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->
2025-11-10 14:01:45 -08:00
Mend Renovate
edd739c490 chore(deps): update dependency pytest to v9 (#1911)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [pytest](https://redirect.github.com/pytest-dev/pytest)
([changelog](https://docs.pytest.org/en/stable/changelog.html)) |
`==8.4.2` -> `==9.0.0` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/pytest/9.0.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/pytest/8.4.2/9.0.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>pytest-dev/pytest (pytest)</summary>

###
[`v9.0.0`](https://redirect.github.com/pytest-dev/pytest/releases/tag/9.0.0)

[Compare
Source](https://redirect.github.com/pytest-dev/pytest/compare/8.4.2...9.0.0)

### pytest 9.0.0 (2025-11-05)

#### New features

-
[#&#8203;1367](https://redirect.github.com/pytest-dev/pytest/issues/1367):
**Support for subtests** has been added.

Subtests are an alternative to parametrization, useful in situations
where the parametrization values are not all known at collection time.

  Example:

  ```python
  def contains_docstring(p: Path) -> bool:
      """Return True if the given Python file contains a top-level docstring."""
      ...


  def test_py_files_contain_docstring(subtests: pytest.Subtests) -> None:
      for path in Path.cwd().glob("*.py"):
          with subtests.test(path=str(path)):
              assert contains_docstring(path)
  ```

Each assert failure or error is caught by the context manager and
reported individually, giving a clear picture of all files that are
missing a docstring.

  In addition, `unittest.TestCase.subTest` is now also supported.

This feature was originally implemented as a separate plugin in
[pytest-subtests](https://redirect.github.com/pytest-dev/pytest-subtests),
but since then has been merged into the core.

  > [!NOTE]
> This feature is experimental and will likely evolve in future
releases. By that we mean that we might change how subtests are reported
on failure, but the functionality and how to use it are stable.

-
[#&#8203;13743](https://redirect.github.com/pytest-dev/pytest/issues/13743):
Added support for **native TOML configuration files**.

While pytest, since version 6, supports configuration in
`pyproject.toml` files under `[tool.pytest.ini_options]`,
it does so in an "INI compatibility mode", where all configuration
values are treated as strings or lists of strings.
  Now, pytest supports the native TOML data model.

In `pyproject.toml`, the native TOML configuration is under the
`[tool.pytest]` table.

  ```toml
  # pyproject.toml
  [tool.pytest]
  minversion = "9.0"
  addopts = ["-ra", "-q"]
  testpaths = [
      "tests",
      "integration",
  ]
  ```

The `[tool.pytest.ini_options]` table remains supported, but both tables
cannot be used at the same time.

If you prefer to use a separate configuration file, or don't use
`pyproject.toml`, you can use `pytest.toml` or `.pytest.toml`:

  ```toml
  # pytest.toml or .pytest.toml
  [pytest]
  minversion = "9.0"
  addopts = ["-ra", "-q"]
  testpaths = [
      "tests",
      "integration",
  ]
  ```

The documentation now (sometimes) shows configuration snippets in both
TOML and INI formats, in a tabbed interface.

  See `config file formats` for full details.

-
[#&#8203;13823](https://redirect.github.com/pytest-dev/pytest/issues/13823):
Added a **"strict mode"** enabled by the `strict` configuration option.

  When set to `true`, the `strict` option currently enables

  - `strict_config`
  - `strict_markers`
  - `strict_parametrization_ids`
  - `strict_xfail`

The individual strictness options can be explicitly set to override the
global `strict` setting.

The previously-deprecated `--strict` command-line flag now enables
strict mode.

If pytest adds new strictness options in the future, they will also be
enabled in strict mode.
Therefore, you should only enable strict mode if you use a pinned/locked
version of pytest,
or if you want to proactively adopt new strictness options as they are
added.

  See `strict mode` for more details.
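
As a sketch of the behavior described above (a hypothetical project
configuration, assuming the native TOML `pytest.toml` format introduced
earlier in these notes):

```toml
# pytest.toml -- hypothetical example, not from the pytest docs
[pytest]
strict = true            # enables strict_config, strict_markers, etc.
strict_markers = false   # an individual option overrides the global setting
```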

-
[#&#8203;13737](https://redirect.github.com/pytest-dev/pytest/issues/13737):
Added the `strict_parametrization_ids` configuration option.

When set, pytest emits an error if it detects non-unique parameter set
IDs, rather than automatically making the IDs unique by adding `0`,
`1`, ... to them.
  This can be particularly useful for catching unintended duplicates.

-
[#&#8203;13072](https://redirect.github.com/pytest-dev/pytest/issues/13072):
Added support for displaying test session **progress in the terminal
tab** using the [OSC
9;4;](https://conemu.github.io/en/AnsiEscapeCodes.html#ConEmu_specific_OSC)
ANSI sequence.
When pytest runs in a supported terminal emulator like ConEmu, Gnome
Terminal, Ptyxis, Windows Terminal, Kitty or Ghostty,
  you'll see the progress in the terminal tab or window,
  allowing you to monitor pytest's progress at a glance.

This feature is automatically enabled when running in a TTY. It is
implemented as an internal plugin. If needed, it can be disabled as
follows:

- On a user level, using `-p no:terminalprogress` on the command line or
via an environment variable `PYTEST_ADDOPTS='-p no:terminalprogress'`.
- On a project configuration level, using `addopts = "-p
no:terminalprogress"`.
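
The OSC 9;4 sequence referenced above can be sketched in a few lines.
This is a minimal illustration of the ConEmu-style escape format, not
pytest's internal implementation; the function name is ours, and using
BEL as the terminator is an assumption (some terminals also accept ST):

```python
def osc_9_4(state: int, progress: int) -> str:
    """Build a ConEmu-style OSC 9;4 terminal-progress escape sequence.

    state 1 = show progress at the given percentage, state 0 = clear it.
    The sequence is ESC ] 9 ; 4 ; state ; progress, terminated with
    BEL (0x07) here as an assumption.
    """
    return f"\x1b]9;4;{state};{progress}\x07"


# Report 50% progress in the terminal tab, then clear it.
print(repr(osc_9_4(1, 50)))
print(repr(osc_9_4(0, 0)))
```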

-
[#&#8203;478](https://redirect.github.com/pytest-dev/pytest/issues/478):
Support PEP 420 (implicit namespace packages) as a `--pyargs` target
when `consider_namespace_packages` is `true` in the config.

Previously, this option only affected package imports; now it also
affects test discovery.

-
[#&#8203;13678](https://redirect.github.com/pytest-dev/pytest/issues/13678):
Added a new `faulthandler_exit_on_timeout` configuration option, set to
"false" by default, to let `faulthandler` interrupt the `pytest` process
after a timeout in case of deadlock.

Previously, a `faulthandler` timeout would only dump the traceback of
all threads to stderr, but would not interrupt the `pytest` process.

  \-- by `ogrisel`.

-
[#&#8203;13829](https://redirect.github.com/pytest-dev/pytest/issues/13829):
Added support for configuration option aliases via the `aliases`
parameter in `Parser.addini() <pytest.Parser.addini>`.

  Plugins can now register alternative names for configuration options,
allowing for more flexibility in configuration naming and supporting
backward compatibility when renaming options.
The canonical name always takes precedence if both the canonical name
and an alias are specified in the configuration file.

#### Improvements in existing functionality

-
[#&#8203;13330](https://redirect.github.com/pytest-dev/pytest/issues/13330):
Having pytest configuration spread over more than one file (for example
having both a `pytest.ini` file and `pyproject.toml` with a
`[tool.pytest.ini_options]` table) will now print a warning to make it
clearer to the user that only one of them is actually used.

  \-- by `sgaist`

-
[#&#8203;13574](https://redirect.github.com/pytest-dev/pytest/issues/13574):
The single argument `--version` no longer loads the entire plugin
infrastructure, making it faster and more reliable when displaying only
the pytest version.

Passing `--version` twice (e.g., `pytest --version --version`) retains
the original behavior, showing both the pytest version and plugin
information.

  > [!NOTE]
> Since `--version` is now processed early, it only takes effect when
passed directly via the command line. It will not work if set through
other mechanisms, such as `PYTEST_ADDOPTS` or `addopts`.

-
[#&#8203;13823](https://redirect.github.com/pytest-dev/pytest/issues/13823):
Added `strict_xfail` as an alias to the `xfail_strict` option,
  `strict_config` as an alias to the `--strict-config` flag,
  and `strict_markers` as an alias to the `--strict-markers` flag.
This makes all strictness options consistently have configuration
options with the prefix `strict_`.

-
[#&#8203;13700](https://redirect.github.com/pytest-dev/pytest/issues/13700):
`--junitxml` no longer prints the `generated xml file` summary at the
end of the pytest session when `--quiet` is given.

-
[#&#8203;13732](https://redirect.github.com/pytest-dev/pytest/issues/13732):
Previously, when filtering warnings, pytest would fail if the filter
referenced a class that could not be imported. Now, this only outputs a
message indicating the problem.

-
[#&#8203;13859](https://redirect.github.com/pytest-dev/pytest/issues/13859):
Clarify the error message for `pytest.raises()` when a regex `match`
fails.

-
[#&#8203;13861](https://redirect.github.com/pytest-dev/pytest/issues/13861):
Better sentence structure in a test's expected error message.
Previously, the error message would be "expected exception must be
\<expected>, but got \<actual>". Now, it is "Expected \<expected>, but
got \<actual>".

#### Removals and backward incompatible breaking changes

-
[#&#8203;12083](https://redirect.github.com/pytest-dev/pytest/issues/12083):
Fixed a bug where an invocation such as `pytest a/ a/b` would cause only
tests from `a/b` to run, and not other tests under `a/`.

The fix entails a few breaking changes to how such overlapping arguments
and duplicates are handled:

1. `pytest a/b a/` or `pytest a/ a/b` are equivalent to `pytest a`; if
an argument overlaps another argument, only the prefix remains.
2. `pytest x.py x.py` is equivalent to `pytest x.py`; previously such an
invocation was taken as an explicit request to run the tests from the
file twice.

If you rely on these behaviors, consider using `--keep-duplicates`,
which retains its existing behavior (including the bug).

-
[#&#8203;13719](https://redirect.github.com/pytest-dev/pytest/issues/13719):
Support for Python 3.9 is dropped following its end of life.

-
[#&#8203;13766](https://redirect.github.com/pytest-dev/pytest/issues/13766):
Previously, pytest would assume it was running in a CI/CD environment if
either of the environment variables `$CI` or `$BUILD_NUMBER` was
defined; now, CI mode is only activated if at least one of those
variables is defined and set to a *non-empty* value.

-
[#&#8203;13779](https://redirect.github.com/pytest-dev/pytest/issues/13779):
**PytestRemovedIn9Warning deprecation warnings are now errors by
default.**

Following our plan to remove deprecated features with as little
disruption as possible, all warnings of type `PytestRemovedIn9Warning`
now generate errors instead of warning messages by default.

**The affected features will be effectively removed in pytest 9.1**, so
please consult the `deprecations` section in the docs for directions on
how to update existing code.

In the pytest `9.0.X` series, it is possible to change the errors back
into warnings as a stopgap measure by adding this to your `pytest.ini`
file:

```ini
[pytest]
filterwarnings =
    ignore::pytest.PytestRemovedIn9Warning
```

But this will stop working when pytest `9.1` is released.

**If you have concerns** about the removal of a specific feature, please
add a comment to `13779`.

#### Deprecations (removal in next major release)

-
[#&#8203;13807](https://redirect.github.com/pytest-dev/pytest/issues/13807):
`monkeypatch.syspath_prepend() <pytest.MonkeyPatch.syspath_prepend>` now
issues a deprecation warning when the prepended path contains legacy
namespace packages (those using `pkg_resources.declare_namespace()`).
  Users should migrate to native namespace packages (`420`).
  See `monkeypatch-fixup-namespace-packages` for details.

#### Bug fixes

-
[#&#8203;13445](https://redirect.github.com/pytest-dev/pytest/issues/13445):
Made the type annotations of `pytest.skip` and friends more
spec-compliant so that they work across more type checkers.

-
[#&#8203;13537](https://redirect.github.com/pytest-dev/pytest/issues/13537):
Fixed a bug in which an `ExceptionGroup` containing only `Skipped`
exceptions in teardown was not handled correctly and was reported as an
error.

-
[#&#8203;13598](https://redirect.github.com/pytest-dev/pytest/issues/13598):
Fixed possible collection confusion on Windows when short paths and
symlinks are involved.

-
[#&#8203;13716](https://redirect.github.com/pytest-dev/pytest/issues/13716):
Fixed a bug where a nonsensical invocation like `pytest x.py[a]` (a file
cannot be parametrized) was silently treated as `pytest x.py`. This is
now a usage error.

-
[#&#8203;13722](https://redirect.github.com/pytest-dev/pytest/issues/13722):
Fixed a misleading assertion failure message when using `pytest.approx`
on mappings with differing lengths.

-
[#&#8203;13773](https://redirect.github.com/pytest-dev/pytest/issues/13773):
Fixed the static fixture closure calculation to properly consider
transitive dependencies requested by overridden fixtures.

-
[#&#8203;13816](https://redirect.github.com/pytest-dev/pytest/issues/13816):
`pytest.approx` now produces a clearer error message when comparing
mappings with different keys.

-
[#&#8203;13849](https://redirect.github.com/pytest-dev/pytest/issues/13849):
Hidden `.pytest.ini` files are now picked up as the config file even if
empty, fixing an inconsistency with non-hidden `pytest.ini`.

-
[#&#8203;13865](https://redirect.github.com/pytest-dev/pytest/issues/13865):
Fixed <span class="title-ref">--show-capture</span> with <span
class="title-ref">--tb=line</span>.

-
[#&#8203;13522](https://redirect.github.com/pytest-dev/pytest/issues/13522):
Fixed `pytester` in subprocess mode ignoring all `pytester.plugins
<pytest.Pytester.plugins>` except the first.

Also fixed `pytester` in subprocess mode silently ignoring non-str
`pytester.plugins <pytest.Pytester.plugins>`; now it errors instead. If
you are affected by this, specify the plugin by name, or switch the
affected tests to use `pytester.runpytest_inprocess
<pytest.Pytester.runpytest_inprocess>` explicitly instead.

#### Packaging updates and notes for downstreams

-
[#&#8203;13791](https://redirect.github.com/pytest-dev/pytest/issues/13791):
Minimum requirements on `iniconfig` and `packaging` were bumped to
`1.0.1` and `22.0.0`, respectively.

#### Contributor-facing changes

-
[#&#8203;12244](https://redirect.github.com/pytest-dev/pytest/issues/12244):
Fixed self-test failures when <span class="title-ref">TERM=dumb</span>.
-
[#&#8203;12474](https://redirect.github.com/pytest-dev/pytest/issues/12474):
Added scheduled GitHub Action Workflow to run Sphinx linkchecks in repo
documentation.
-
[#&#8203;13621](https://redirect.github.com/pytest-dev/pytest/issues/13621):
pytest's own testsuite now handles the `lsof` command hanging (e.g. due
to unreachable network filesystems), with the affected selftests being
skipped after 10 seconds.
-
[#&#8203;13638](https://redirect.github.com/pytest-dev/pytest/issues/13638):
Fixed deprecated `gh pr new` command in `scripts/prepare-release-pr.py`.
The script now uses `gh pr create` which is compatible with GitHub CLI
v2.0+.
-
[#&#8203;13695](https://redirect.github.com/pytest-dev/pytest/issues/13695):
Flush <span class="title-ref">stdout</span> and <span
class="title-ref">stderr</span> in <span
class="title-ref">Pytester.run</span> to avoid truncated outputs in
<span class="title-ref">test\_faulthandler.py::test\_timeout</span> on
CI -- by `ogrisel`.
-
[#&#8203;13771](https://redirect.github.com/pytest-dev/pytest/issues/13771):
Skip <span
class="title-ref">test\_do\_not\_collect\_symlink\_siblings</span> on
Windows environments without symlink support to avoid false negatives.
-
[#&#8203;13841](https://redirect.github.com/pytest-dev/pytest/issues/13841):
`tox>=4` is now required when contributing to pytest.
-
[#&#8203;13625](https://redirect.github.com/pytest-dev/pytest/issues/13625):
Added missing docstrings to `pytest_addoption()`, `pytest_configure()`,
and `cacheshow()` functions in `cacheprovider.py`.

#### Miscellaneous internal changes

-
[#&#8203;13830](https://redirect.github.com/pytest-dev/pytest/issues/13830):
Configuration overrides (`-o`/`--override-ini`) are now processed during
startup rather than during `config.getini() <pytest.Config.getini>`.

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).

2025-11-10 18:59:34 +00:00
Mend Renovate
98e3f6abe4 chore(deps): update dependency llama-index-llms-google-genai to v0.7.1 (#1841)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| llama-index-llms-google-genai | `==0.6.2` -> `==0.7.1` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index-llms-google-genai/0.7.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index-llms-google-genai/0.6.2/0.7.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

2025-11-03 15:46:45 -08:00
Mend Renovate
fdca92cefb chore(deps): update dependency llama-index-llms-google-genai to v0.6.2 (#1725)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| llama-index-llms-google-genai | `==0.6.1` -> `==0.6.2` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index-llms-google-genai/0.6.2?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index-llms-google-genai/0.6.1/0.6.2?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|


Co-authored-by: Averi Kitsch <akitsch@google.com>
2025-10-29 21:58:44 +00:00
Mend Renovate
01ac3134c0 chore(deps): update dependency llama-index to v0.14.6 (#1785)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [llama-index](https://redirect.github.com/run-llama/llama_index) |
`==0.14.4` -> `==0.14.6` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index/0.14.6?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index/0.14.4/0.14.6?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>run-llama/llama_index (llama-index)</summary>

###
[`v0.14.6`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-10-26)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.5...v0.14.6)

##### llama-index-core \[0.14.6]

- Add allow\_parallel\_tool\_calls for non-streaming
([#&#8203;20117](https://redirect.github.com/run-llama/llama_index/pull/20117))
- Fix invalid use of field-specific metadata
([#&#8203;20122](https://redirect.github.com/run-llama/llama_index/pull/20122))
- update doc for SemanticSplitterNodeParser
([#&#8203;20125](https://redirect.github.com/run-llama/llama_index/pull/20125))
- fix rare cases when sentence splits are larger than chunk size
([#&#8203;20147](https://redirect.github.com/run-llama/llama_index/pull/20147))

##### llama-index-embeddings-bedrock \[0.7.0]

- Fix BedrockEmbedding to support Cohere v4 response format
([#&#8203;20094](https://redirect.github.com/run-llama/llama_index/pull/20094))

##### llama-index-embeddings-isaacus \[0.1.0]

- feat: Isaacus embeddings integration
([#&#8203;20124](https://redirect.github.com/run-llama/llama_index/pull/20124))

##### llama-index-embeddings-oci-genai \[0.4.2]

- Update OCI GenAI cohere models
([#&#8203;20146](https://redirect.github.com/run-llama/llama_index/pull/20146))

##### llama-index-llms-anthropic \[0.9.7]

- Fix double token stream in anthropic llm
([#&#8203;20108](https://redirect.github.com/run-llama/llama_index/pull/20108))
- Ensure anthropic content delta only has user facing response
([#&#8203;20113](https://redirect.github.com/run-llama/llama_index/pull/20113))

##### llama-index-llms-baseten \[0.1.7]

- add GLM
([#&#8203;20121](https://redirect.github.com/run-llama/llama_index/pull/20121))

##### llama-index-llms-helicone \[0.1.0]

- integrate helicone to llama-index
([#&#8203;20131](https://redirect.github.com/run-llama/llama_index/pull/20131))

##### llama-index-llms-oci-genai \[0.6.4]

- Update OCI GenAI cohere models
([#&#8203;20146](https://redirect.github.com/run-llama/llama_index/pull/20146))

##### llama-index-llms-openai \[0.6.5]

- chore: openai vbump
([#&#8203;20095](https://redirect.github.com/run-llama/llama_index/pull/20095))

##### llama-index-readers-imdb-review \[0.4.2]

- chore: Update selenium dependency in imdb-review reader
([#&#8203;20105](https://redirect.github.com/run-llama/llama_index/pull/20105))

##### llama-index-retrievers-bedrock \[0.5.0]

- feat(bedrock): add async support for AmazonKnowledgeBasesRetriever
([#&#8203;20114](https://redirect.github.com/run-llama/llama_index/pull/20114))

##### llama-index-retrievers-superlinked \[0.1.3]

- Update README.md
([#&#8203;19829](https://redirect.github.com/run-llama/llama_index/pull/19829))

##### llama-index-storage-kvstore-postgres \[0.4.2]

- fix: Replace raw SQL string interpolation with proper SQLAlchemy
parameterized APIs in PostgresKVStore
([#&#8203;20104](https://redirect.github.com/run-llama/llama_index/pull/20104))

##### llama-index-tools-mcp \[0.4.3]

- Fix BasicMCPClient resource signatures
([#&#8203;20118](https://redirect.github.com/run-llama/llama_index/pull/20118))

##### llama-index-vector-stores-postgres \[0.7.1]

- Add GIN index support for text array metadata in PostgreSQL vector
store
([#&#8203;20130](https://redirect.github.com/run-llama/llama_index/pull/20130))

###
[`v0.14.5`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-10-15)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.4...v0.14.5)

##### llama-index-core \[0.14.5]

- Remove debug print
([#&#8203;20000](https://redirect.github.com/run-llama/llama_index/pull/20000))
- safely initialize RefDocInfo in Docstore
([#&#8203;20031](https://redirect.github.com/run-llama/llama_index/pull/20031))
- Add progress bar for multiprocess loading
([#&#8203;20048](https://redirect.github.com/run-llama/llama_index/pull/20048))
- Fix duplicate node positions when identical text appears multiple
times in document
([#&#8203;20050](https://redirect.github.com/run-llama/llama_index/pull/20050))
- chore: tool call block - part 1
([#&#8203;20074](https://redirect.github.com/run-llama/llama_index/pull/20074))

##### llama-index-instrumentation \[0.4.2]

- update instrumentation package metadata
([#&#8203;20079](https://redirect.github.com/run-llama/llama_index/pull/20079))

##### llama-index-llms-anthropic \[0.9.5]

-  feat(anthropic): add prompt caching model validation utilities
([#&#8203;20069](https://redirect.github.com/run-llama/llama_index/pull/20069))
- fix streaming thinking/tool calling with anthropic
([#&#8203;20077](https://redirect.github.com/run-llama/llama_index/pull/20077))
- Add haiku 4.5 support
([#&#8203;20092](https://redirect.github.com/run-llama/llama_index/pull/20092))

##### llama-index-llms-baseten \[0.1.6]

- Baseten provider Kimi K2 0711, Llama 4 Maverick and Llama 4 Scout
Model APIs deprecation
([#&#8203;20042](https://redirect.github.com/run-llama/llama_index/pull/20042))

##### llama-index-llms-bedrock-converse \[0.10.5]

- feat: List Claude Sonnet 4.5 as a reasoning model
([#&#8203;20022](https://redirect.github.com/run-llama/llama_index/pull/20022))
- feat: Support global cross-region inference profile prefix
([#&#8203;20064](https://redirect.github.com/run-llama/llama_index/pull/20064))
- Update utils.py for opus 4.1
([#&#8203;20076](https://redirect.github.com/run-llama/llama_index/pull/20076))
- 4.1 opus bedrockconverse missing in funcitoncalling models
([#&#8203;20084](https://redirect.github.com/run-llama/llama_index/pull/20084))
- Add haiku 4.5 support
([#&#8203;20092](https://redirect.github.com/run-llama/llama_index/pull/20092))

##### llama-index-llms-fireworks \[0.4.4]

- Add Support for Custom Models in Fireworks LLM
([#&#8203;20023](https://redirect.github.com/run-llama/llama_index/pull/20023))
- fix(llms/fireworks): Cannot use Fireworks Deepseek V3.1-20006 issue
([#&#8203;20028](https://redirect.github.com/run-llama/llama_index/pull/20028))

##### llama-index-llms-oci-genai \[0.6.3]

- Add support for xAI models in OCI GenAI
([#&#8203;20089](https://redirect.github.com/run-llama/llama_index/pull/20089))

##### llama-index-llms-openai \[0.6.4]

- Gpt 5 pro addition
([#&#8203;20029](https://redirect.github.com/run-llama/llama_index/pull/20029))
- fix collecting final response with openai responses streaming
([#&#8203;20037](https://redirect.github.com/run-llama/llama_index/pull/20037))
- Add support for GPT-5 models in utils.py (JSON\_SCHEMA\_MODELS)
([#&#8203;20045](https://redirect.github.com/run-llama/llama_index/pull/20045))
- chore: tool call block - part 1
([#&#8203;20074](https://redirect.github.com/run-llama/llama_index/pull/20074))

##### llama-index-llms-sglang \[0.1.0]

- Added Sglang llm integration
([#&#8203;20020](https://redirect.github.com/run-llama/llama_index/pull/20020))

##### llama-index-readers-gitlab \[0.5.1]

- feat(gitlab): add pagination params for repository tree and issues
([#&#8203;20052](https://redirect.github.com/run-llama/llama_index/pull/20052))

##### llama-index-readers-json \[0.4.2]

- vbump the JSON reader
([#&#8203;20039](https://redirect.github.com/run-llama/llama_index/pull/20039))

##### llama-index-readers-web \[0.5.5]

- fix: ScrapflyReader Pydantic validation error
([#&#8203;19999](https://redirect.github.com/run-llama/llama_index/pull/19999))

##### llama-index-storage-chat-store-dynamodb \[0.4.2]

- bump dynamodb chat store deps
([#&#8203;20078](https://redirect.github.com/run-llama/llama_index/pull/20078))

##### llama-index-tools-mcp \[0.4.2]

- 🐛 fix(tools/mcp): Fix dict type handling and reference resolution in …
([#&#8203;20082](https://redirect.github.com/run-llama/llama_index/pull/20082))

##### llama-index-tools-signnow \[0.1.0]

- feat(signnow): SignNow mcp tools integration
([#&#8203;20057](https://redirect.github.com/run-llama/llama_index/pull/20057))

##### llama-index-tools-tavily-research \[0.4.2]

- feat: Add Tavily extract function for URL content extraction
([#&#8203;20038](https://redirect.github.com/run-llama/llama_index/pull/20038))

##### llama-index-vector-stores-azurepostgresql \[0.2.0]

- Add hybrid search to Azure PostgreSQL integration
([#&#8203;20027](https://redirect.github.com/run-llama/llama_index/pull/20027))

##### llama-index-vector-stores-milvus \[0.9.3]

- fix: Milvus get\_field\_kwargs()
([#&#8203;20086](https://redirect.github.com/run-llama/llama_index/pull/20086))

##### llama-index-vector-stores-opensearch \[0.6.2]

- fix(opensearch): Correct version check for efficient filtering
([#&#8203;20067](https://redirect.github.com/run-llama/llama_index/pull/20067))

##### llama-index-vector-stores-qdrant \[0.8.6]

- fix(qdrant): Allow async-only initialization with hybrid search
([#&#8203;20005](https://redirect.github.com/run-llama/llama_index/pull/20005))

</details>


Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-10-27 21:26:51 +00:00
Mend Renovate
012d7de67e chore(deps): update dependency llama-index to v0.14.4 (#1626)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [llama-index](https://redirect.github.com/run-llama/llama_index) |
`==0.14.3` -> `==0.14.4` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index/0.14.4?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index/0.14.3/0.14.4?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>run-llama/llama_index (llama-index)</summary>

###
[`v0.14.4`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-10-03)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.3...v0.14.4)

##### llama-index-core \[0.14.4]

- fix pre-release installs
([#&#8203;20010](https://redirect.github.com/run-llama/llama_index/pull/20010))

##### llama-index-embeddings-anyscale \[0.4.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-embeddings-baseten \[0.1.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-embeddings-fireworks \[0.4.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-embeddings-opea \[0.2.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-embeddings-text-embeddings-inference \[0.4.2]

- Fix authorization header setup logic in text embeddings inference
([#&#8203;19979](https://redirect.github.com/run-llama/llama_index/pull/19979))

##### llama-index-llms-anthropic \[0.9.3]

- feat: add anthropic sonnet 4.5
([#&#8203;19977](https://redirect.github.com/run-llama/llama_index/pull/19977))

##### llama-index-llms-anyscale \[0.4.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-azure-openai \[0.4.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-baseten \[0.1.5]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-bedrock-converse \[0.9.5]

- feat: Additional support for Claude Sonnet 4.5
([#&#8203;19980](https://redirect.github.com/run-llama/llama_index/pull/19980))

##### llama-index-llms-deepinfra \[0.5.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-everlyai \[0.4.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-fireworks \[0.4.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-google-genai \[0.6.2]

- Fix for ValueError: ChatMessage contains multiple blocks, use 'ChatMe…
([#&#8203;19954](https://redirect.github.com/run-llama/llama_index/pull/19954))

##### llama-index-llms-keywordsai \[1.1.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-localai \[0.5.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-mistralai \[0.8.2]

- Update list of MistralAI LLMs
([#&#8203;19981](https://redirect.github.com/run-llama/llama_index/pull/19981))

##### llama-index-llms-monsterapi \[0.4.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-nvidia \[0.4.4]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-ollama \[0.7.4]

- Fix `TypeError: unhashable type: 'dict'` in Ollama stream chat with
tools
([#&#8203;19938](https://redirect.github.com/run-llama/llama_index/pull/19938))

##### llama-index-llms-openai \[0.6.1]

- feat(OpenAILike): support structured outputs
([#&#8203;19967](https://redirect.github.com/run-llama/llama_index/pull/19967))

##### llama-index-llms-openai-like \[0.5.3]

- feat(OpenAILike): support structured outputs
([#&#8203;19967](https://redirect.github.com/run-llama/llama_index/pull/19967))

##### llama-index-llms-openrouter \[0.4.2]

- chore(openrouter,anthropic): add py.typed
([#&#8203;19966](https://redirect.github.com/run-llama/llama_index/pull/19966))

##### llama-index-llms-perplexity \[0.4.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-portkey \[0.4.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-sarvam \[0.2.1]

- fixed Sarvam Integration and Typos (Fixes
[#&#8203;19931](https://redirect.github.com/run-llama/llama_index/issues/19931))
([#&#8203;19932](https://redirect.github.com/run-llama/llama_index/pull/19932))

##### llama-index-llms-upstage \[0.6.4]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-llms-yi \[0.4.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-memory-bedrock-agentcore \[0.1.0]

- feat: Bedrock AgentCore Memory integration
([#&#8203;19953](https://redirect.github.com/run-llama/llama_index/pull/19953))

##### llama-index-multi-modal-llms-openai \[0.6.2]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-readers-confluence \[0.4.4]

- Fix: Respect cloud parameter when fetching child pages in ConfluenceR…
([#&#8203;19983](https://redirect.github.com/run-llama/llama_index/pull/19983))

##### llama-index-readers-service-now \[0.2.2]

- Bug Fix :- Not Able to Fetch Page whose latest is empty or null
([#&#8203;19916](https://redirect.github.com/run-llama/llama_index/pull/19916))

##### llama-index-selectors-notdiamond \[0.4.0]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-tools-agentql \[1.2.0]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-tools-playwright \[0.3.1]

- chore: fix playwright tests
([#&#8203;19946](https://redirect.github.com/run-llama/llama_index/pull/19946))

##### llama-index-tools-scrapegraph \[0.2.2]

- feat: update scrapegraphai
([#&#8203;19974](https://redirect.github.com/run-llama/llama_index/pull/19974))

##### llama-index-vector-stores-chroma \[0.5.3]

- docs: fix query method docstring in ChromaVectorStore Fixes
[#&#8203;19969](https://redirect.github.com/run-llama/llama_index/issues/19969)
([#&#8203;19973](https://redirect.github.com/run-llama/llama_index/pull/19973))

##### llama-index-vector-stores-mongodb \[0.8.1]

- fix llm deps for openai
([#&#8203;19944](https://redirect.github.com/run-llama/llama_index/pull/19944))

##### llama-index-vector-stores-postgres \[0.7.0]

- fix index creation in postgres vector store
([#&#8203;19955](https://redirect.github.com/run-llama/llama_index/pull/19955))

##### llama-index-vector-stores-solr \[0.1.0]

- Add ApacheSolrVectorStore Integration
([#&#8203;19933](https://redirect.github.com/run-llama/llama_index/pull/19933))

</details>


Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com>
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
2025-10-24 14:31:39 -04:00
Mend Renovate
530f1cc406 chore(deps): update dependency llama-index-llms-google-genai to v0.6.1 (#1562)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| llama-index-llms-google-genai | `==0.6.0` -> `==0.6.1` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index-llms-google-genai/0.6.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index-llms-google-genai/0.6.0/0.6.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|


Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com>
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
2025-10-16 16:55:22 -04:00
Mend Renovate
e5f643f929 chore(deps): update dependency llama-index-llms-google-genai to v0.6.0 (#1547)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| llama-index-llms-google-genai | `==0.5.1` -> `==0.6.0` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index-llms-google-genai/0.6.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index-llms-google-genai/0.5.1/0.6.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|


Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com>
2025-09-25 14:41:38 +00:00
Mend Renovate
785be3d8a4 chore(deps): update dependency llama-index to v0.14.3 (#1548)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [llama-index](https://redirect.github.com/run-llama/llama_index) |
`==0.14.2` -> `==0.14.3` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index/0.14.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index/0.14.2/0.14.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>run-llama/llama_index (llama-index)</summary>

###
[`v0.14.3`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-09-24)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.2...v0.14.3)

##### llama-index-core \[0.14.3]

- Fix Gemini thought signature serialization
([#&#8203;19891](https://redirect.github.com/run-llama/llama_index/pull/19891))
- Adding a ThinkingBlock among content blocks
([#&#8203;19919](https://redirect.github.com/run-llama/llama_index/pull/19919))

##### llama-index-llms-anthropic \[0.9.0]

- Adding a ThinkingBlock among content blocks
([#&#8203;19919](https://redirect.github.com/run-llama/llama_index/pull/19919))

##### llama-index-llms-baseten \[0.1.4]

- added kimik2 0905 and reordered list for validation
([#&#8203;19892](https://redirect.github.com/run-llama/llama_index/pull/19892))
- Baseten Dynamic Model APIs Validation
([#&#8203;19893](https://redirect.github.com/run-llama/llama_index/pull/19893))

##### llama-index-llms-google-genai \[0.6.0]

- Add missing FileAPI support for documents
([#&#8203;19897](https://redirect.github.com/run-llama/llama_index/pull/19897))
- Adding a ThinkingBlock among content blocks
([#&#8203;19919](https://redirect.github.com/run-llama/llama_index/pull/19919))

##### llama-index-llms-mistralai \[0.8.0]

- Adding a ThinkingBlock among content blocks
([#&#8203;19919](https://redirect.github.com/run-llama/llama_index/pull/19919))

##### llama-index-llms-openai \[0.6.0]

- Adding a ThinkingBlock among content blocks
([#&#8203;19919](https://redirect.github.com/run-llama/llama_index/pull/19919))

##### llama-index-protocols-ag-ui \[0.2.2]

- improve how state snapshotting works in AG-UI
([#&#8203;19934](https://redirect.github.com/run-llama/llama_index/pull/19934))

##### llama-index-readers-mongodb \[0.5.0]

- Use PyMongo Asynchronous API instead of Motor
([#&#8203;19875](https://redirect.github.com/run-llama/llama_index/pull/19875))

##### llama-index-readers-paddle-ocr \[0.1.0]

- \[New Package] Add PaddleOCR Reader for extracting text from images in
PDFs
([#&#8203;19827](https://redirect.github.com/run-llama/llama_index/pull/19827))

##### llama-index-readers-web \[0.5.4]

- feat(readers/web-firecrawl): migrate to Firecrawl v2 SDK
([#&#8203;19773](https://redirect.github.com/run-llama/llama_index/pull/19773))

##### llama-index-storage-chat-store-mongo \[0.3.0]

- Use PyMongo Asynchronous API instead of Motor
([#&#8203;19875](https://redirect.github.com/run-llama/llama_index/pull/19875))

##### llama-index-storage-kvstore-mongodb \[0.5.0]

- Use PyMongo Asynchronous API instead of Motor
([#&#8203;19875](https://redirect.github.com/run-llama/llama_index/pull/19875))

##### llama-index-tools-valyu \[0.5.0]

- Add Valyu Extractor and Fast mode
([#&#8203;19915](https://redirect.github.com/run-llama/llama_index/pull/19915))

##### llama-index-vector-stores-azureaisearch \[0.4.2]

- Fix/llama index vector stores azureaisearch fix
([#&#8203;19800](https://redirect.github.com/run-llama/llama_index/pull/19800))

##### llama-index-vector-stores-azurepostgresql \[0.1.0]

- Add support for Azure PostgreSQL
([#&#8203;19709](https://redirect.github.com/run-llama/llama_index/pull/19709))

##### llama-index-vector-stores-qdrant \[0.8.5]

- Add proper compat for old sparse vectors
([#&#8203;19882](https://redirect.github.com/run-llama/llama_index/pull/19882))

##### llama-index-vector-stores-singlestoredb \[0.4.2]

- Fix SQLi Vulnerability in SingleStore Db
([#&#8203;19914](https://redirect.github.com/run-llama/llama_index/pull/19914))
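The SingleStoreDB fix above is upstream code, but the general class of fix it names can be illustrated with a generic parameterized-query sketch (sqlite3 stands in here; this is not SingleStoreDB's actual patch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, body TEXT)")
conn.execute("INSERT INTO docs VALUES (1, 'hello')")

user_input = "1 OR 1=1"  # hostile input

# Vulnerable pattern: string interpolation lets the input rewrite the query.
# rows = conn.execute(f"SELECT body FROM docs WHERE id = {user_input}")

# Safe pattern: placeholder binding treats the input as a single opaque value,
# so the injected "OR 1=1" never becomes SQL.
rows = conn.execute(
    "SELECT body FROM docs WHERE id = ?", (user_input,)
).fetchall()
```

With binding, the hostile string matches nothing, while a legitimate id still works.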

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).


Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com>
2025-09-25 14:25:27 +00:00
Mend Renovate
a5ef166fcb chore(deps): update dependency llama-index-llms-google-genai to v0.5.1 (#1529)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| llama-index-llms-google-genai | `==0.5.0` -> `==0.5.1` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index-llms-google-genai/0.5.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index-llms-google-genai/0.5.0/0.5.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).


Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com>
Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com>
2025-09-23 11:04:08 +00:00
Mend Renovate
8c4e6f88b7 chore(deps): update dependency toolbox-llamaindex to v0.5.2 (#1532)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
|
[toolbox-llamaindex](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python)
([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-llamaindex/CHANGELOG.md))
| `==0.5.1` -> `==0.5.2` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/toolbox-llamaindex/0.5.2?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/toolbox-llamaindex/0.5.1/0.5.2?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>googleapis/mcp-toolbox-sdk-python
(toolbox-llamaindex)</summary>

###
[`v0.5.2`](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/releases/tag/toolbox-core-v0.5.2):
toolbox-core: v0.5.2

[Compare
Source](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/compare/toolbox-llamaindex-v0.5.1...toolbox-llamaindex-v0.5.2)

##### Miscellaneous Chores

- **deps:** update python-nonmajor
([#&#8203;372](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/372))
([d915624](d9156246fd))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).


Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com>
2025-09-23 14:32:30 +05:30
Mend Renovate
bae94285a6 chore(deps): update dependency toolbox-llamaindex to v0.5.1 (#1510)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
|
[toolbox-llamaindex](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python)
([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-llamaindex/CHANGELOG.md))
| `==0.5.0` -> `==0.5.1` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/toolbox-llamaindex/0.5.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/toolbox-llamaindex/0.5.0/0.5.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>googleapis/mcp-toolbox-sdk-python
(toolbox-llamaindex)</summary>

###
[`v0.5.1`](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/releases/tag/toolbox-core-v0.5.1):
toolbox-core: v0.5.1

[Compare
Source](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/compare/toolbox-llamaindex-v0.5.0...toolbox-llamaindex-v0.5.1)

##### Bug Fixes

- **toolbox-core:** Use typing.Annotated for reliable parameter
descriptions instead of docstrings
([#&#8203;371](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/371))
([eb76680](eb76680d24))
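The `typing.Annotated` fix noted above can be sketched generically (hypothetical tool function and names, not the SDK's actual code):

```python
from typing import Annotated, get_type_hints

def search_hotels(
    location: Annotated[str, "City or region to search in"],
    max_price: Annotated[float, "Upper price bound per night"] = 500.0,
) -> list[str]:
    # Parameter descriptions live in Annotated metadata rather than in the
    # docstring, so tooling can read them reliably regardless of how the
    # docstring is formatted or reflowed.
    return [f"hotel in {location} under {max_price}"]

# How an SDK might recover the descriptions:
hints = get_type_hints(search_hotels, include_extras=True)
location_desc = hints["location"].__metadata__[0]
```

Because the description is attached to the type itself, it survives docstring edits that would break a docstring-parsing approach.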

##### Documentation

- Update langgraph sample in toolbox-core
([#&#8203;366](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/366))
([fe35082](fe35082104))

##### Miscellaneous Chores

- Remove redundant test for
test\_add\_auth\_token\_getter\_unused\_token
([#&#8203;347](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/347))
([dccaf1b](dccaf1bd70))
- Remove duplicate header check during initialization
([#&#8203;357](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/357))
([888170b](888170b3c3))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).


Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com>
2025-09-19 05:55:34 +00:00
Mend Renovate
10a0c09c1f chore(deps): update dependency llama-index to v0.14.2 (#1487)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [llama-index](https://redirect.github.com/run-llama/llama_index) |
`==0.13.6` -> `==0.14.2` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index/0.14.2?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index/0.13.6/0.14.2?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>run-llama/llama_index (llama-index)</summary>

###
[`v0.14.2`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-09-15)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.1...v0.14.2)

##### `llama-index-core` \[0.14.2]

- fix: handle data urls in ImageBlock
([#&#8203;19856](https://redirect.github.com/run-llama/llama_index/issues/19856))
- fix: Move IngestionPipeline docstore document insertion after
transformations
([#&#8203;19849](https://redirect.github.com/run-llama/llama_index/issues/19849))
- fix: Update IngestionPipeline async document store insertion
([#&#8203;19868](https://redirect.github.com/run-llama/llama_index/issues/19868))
- chore: remove stepwise usage of workflows from code
([#&#8203;19877](https://redirect.github.com/run-llama/llama_index/issues/19877))

##### `llama-index-embeddings-fastembed` \[0.5.0]

- feat: make fastembed cpu or gpu optional
([#&#8203;19878](https://redirect.github.com/run-llama/llama_index/issues/19878))

##### `llama-index-llms-deepseek` \[0.2.2]

- feat: pass context\_window to super in deepseek llm
([#&#8203;19876](https://redirect.github.com/run-llama/llama_index/issues/19876))

##### `llama-index-llms-google-genai` \[0.5.0]

- feat: Add GoogleGenAI FileAPI support for large files
([#&#8203;19853](https://redirect.github.com/run-llama/llama_index/issues/19853))

##### `llama-index-readers-solr` \[0.1.0]

- feat: Add Solr reader integration
([#&#8203;19843](https://redirect.github.com/run-llama/llama_index/issues/19843))

##### `llama-index-retrievers-alletra-x10000-retriever` \[0.1.0]

- feat: add AlletraX10000Retriever integration
([#&#8203;19798](https://redirect.github.com/run-llama/llama_index/issues/19798))

##### `llama-index-vector-stores-oracledb` \[0.3.2]

- feat: OraLlamaVS Connection Pool Support + Filtering
([#&#8203;19412](https://redirect.github.com/run-llama/llama_index/issues/19412))

##### `llama-index-vector-stores-postgres` \[0.6.8]

- feat: Add `customize_query_fn` to PGVectorStore
([#&#8203;19847](https://redirect.github.com/run-llama/llama_index/issues/19847))

###
[`v0.14.1`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-09-14)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.0...v0.14.1)

##### `llama-index-core` \[0.14.1]

- feat: add verbose option to RetrieverQueryEngine for detailed output
([#&#8203;19807](https://redirect.github.com/run-llama/llama_index/issues/19807))
- feat: add support for additional kwargs in the
`aget_text_embedding_batch` method
([#&#8203;19808](https://redirect.github.com/run-llama/llama_index/issues/19808))
- feat: add `thinking_delta` field to AgentStream events to expose llm
reasoning
([#&#8203;19785](https://redirect.github.com/run-llama/llama_index/issues/19785))
- fix: Bug fix agent streaming thinking delta pydantic validation
([#&#8203;19828](https://redirect.github.com/run-llama/llama_index/issues/19828))
- fix: handle positional args and kwargs both in tool calling
([#&#8203;19822](https://redirect.github.com/run-llama/llama_index/issues/19822))

##### `llama-index-instrumentation` \[0.4.1]

- feat: add sync event/handler support
([#&#8203;19825](https://redirect.github.com/run-llama/llama_index/issues/19825))

##### `llama-index-llms-google-genai` \[0.4.0]

- feat: Add VideoBlock and GoogleGenAI video input support
([#&#8203;19823](https://redirect.github.com/run-llama/llama_index/issues/19823))

##### `llama-index-llms-ollama` \[0.7.3]

- fix: Fix bug using Ollama with Agents and None tool\_calls in final
message
([#&#8203;19844](https://redirect.github.com/run-llama/llama_index/issues/19844))

##### `llama-index-llms-vertex` \[0.6.1]

- fix: align complete/acomplete responses
([#&#8203;19806](https://redirect.github.com/run-llama/llama_index/issues/19806))

##### `llama-index-readers-confluence` \[0.4.3]

- chore: Bump version constraint for atlassian-python-api to include 4.x
([#&#8203;19824](https://redirect.github.com/run-llama/llama_index/issues/19824))

##### `llama-index-readers-github` \[0.6.2]

- fix: Make url optional
([#&#8203;19851](https://redirect.github.com/run-llama/llama_index/issues/19851))

##### `llama-index-readers-web` \[0.5.3]

- feat: Add OlostepWebReader Integration
([#&#8203;19821](https://redirect.github.com/run-llama/llama_index/issues/19821))

##### `llama-index-tools-google` \[0.6.2]

- feat: Add optional creds argument to GoogleCalendarToolSpec
([#&#8203;19826](https://redirect.github.com/run-llama/llama_index/issues/19826))

##### `llama-index-vector-stores-vectorx` \[0.1.0]

- feat: Add vectorx vectorstore
([#&#8203;19758](https://redirect.github.com/run-llama/llama_index/issues/19758))

###
[`v0.14.0`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-09-08)

[Compare
Source](https://redirect.github.com/run-llama/llama_index/compare/v0.13.6...v0.14.0)

**NOTE:** All packages have been bumped to handle the latest
llama-index-core version.

##### `llama-index-core` \[0.14.0]

- breaking: bumped `llama-index-workflows` dependency to 2.0
  - Improve stacktrace clarity by avoiding wrapping errors in
    WorkflowRuntimeError
  - Remove deprecated checkpointer feature
  - Remove deprecated sub-workflows feature
  - Remove deprecated `send_event` method from Workflow class (still
    existing on the Context class)
  - Remove deprecated `stream_events()` methods from Workflow class
    (still existing on the Context class)
  - Remove deprecated support for stepwise execution

##### `llama-index-llms-openai` \[0.5.6]

- feat: add support for document blocks in openai chat completions
([#&#8203;19809](https://redirect.github.com/run-llama/llama_index/issues/19809))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS45Ny4xMCIsInVwZGF0ZWRJblZlciI6IjQxLjk3LjEwIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->
2025-09-18 07:06:07 +00:00
Mend Renovate
cf65ba1d31 chore(deps): update dependency llama-index-llms-google-genai to v0.5.0 (#1488)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| llama-index-llms-google-genai | `==0.3.0` -> `==0.5.0` |
[![age](https://developer.mend.io/api/mc/badges/age/pypi/llama-index-llms-google-genai/0.5.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/pypi/llama-index-llms-google-genai/0.3.0/0.5.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).


Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com>
2025-09-18 05:37:21 +00:00
Harsh Jha
00e1c4c3c6 test: added tests for python quickstart (#1196)
Added quickstart_test.py files for each Python sample, which compile and
run the agent as a standalone application to validate its end-to-end
functionality.
The test ensures each sample runs to completion and produces output,
confirming the agent is not broken. Additionally, I introduced a
secondary check for essential keywords from a golden.txt file, logging
their presence without failing the test.

To run the test file, execute this command from the terminal:
```
ORCH_NAME=adk pytest
```

---------
2025-09-17 06:05:03 +00:00
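A minimal sketch of the pattern this commit describes, assuming hypothetical helper names (the real quickstart_test.py files live in the repo): run a sample as a subprocess, fail the test only if it crashes, then log any golden keywords missing from its output without failing.

```python
import subprocess
import sys
from pathlib import Path

def run_quickstart(script: str, timeout: int = 300) -> str:
    """Run a sample as a standalone process; fail the test if it crashes."""
    result = subprocess.run(
        [sys.executable, script],
        capture_output=True, text=True, timeout=timeout,
    )
    assert result.returncode == 0, f"{script} failed:\n{result.stderr}"
    return result.stdout

def check_golden_keywords(output: str, golden: Path) -> list[str]:
    """Log (but do not fail on) golden keywords missing from the output."""
    if not golden.exists():
        return []
    keywords = [ln.strip() for ln in golden.read_text().splitlines() if ln.strip()]
    missing = [kw for kw in keywords if kw not in output]
    for kw in missing:
        print(f"warning: golden keyword not found: {kw!r}")  # logged, not fatal
    return missing
```

The hard assertion covers only "the agent runs to completion"; keyword drift in model output is surfaced as a warning so flaky LLM phrasing does not break CI.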