Mirror of https://github.com/googleapis/genai-toolbox.git, synced 2026-01-09 15:38:08 -05:00
Latest commit: f87ed05aacfe552cec4722c04ac014b588c78b5b
20 Commits
f87ed05aac
chore(deps): update pip (#2215)
This PR contains the following updates: | Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) | |---|---|---|---| | [google-adk](https://redirect.github.com/google/adk-python) ([changelog](https://redirect.github.com/google/adk-python/blob/main/CHANGELOG.md)) | `==1.19.0` → `==1.21.0` |  |  | | [google-genai](https://redirect.github.com/googleapis/python-genai) | `==1.52.0` → `==1.56.0` |  |  | | [langchain](https://redirect.github.com/langchain-ai/langchain) ([source](https://redirect.github.com/langchain-ai/langchain/tree/HEAD/libs/langchain), [changelog](https://redirect.github.com/langchain-ai/langchain/releases?q=tag%3A%22langchain%3D%3D1%22)) | `==1.1.0` → `==1.2.0` |  |  | | [langchain-google-vertexai](https://redirect.github.com/langchain-ai/langchain-google) ([source](https://redirect.github.com/langchain-ai/langchain-google/tree/HEAD/libs/vertexai), [changelog](https://redirect.github.com/langchain-ai/langchain-google/releases?q=%22vertexai%22)) | `==3.1.0` → `==3.2.0` |  |  | | [langgraph](https://redirect.github.com/langchain-ai/langgraph) ([source](https://redirect.github.com/langchain-ai/langgraph/tree/HEAD/libs/langgraph), [changelog](https://redirect.github.com/langchain-ai/langgraph/releases)) | `==1.0.4` → `==1.0.5` |  |  | | [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.14.10` → `==0.14.12` |  |  | | llama-index-llms-google-genai | `==0.7.3` → `==0.8.3` |  |  | | [pytest](https://redirect.github.com/pytest-dev/pytest) ([changelog](https://docs.pytest.org/en/stable/changelog.html)) | `==9.0.1` → `==9.0.2` |  |  | | [toolbox-core](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python) ([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/CHANGELOG.md)) | `==0.5.3` → `==0.5.4` |  |  | | [toolbox-langchain](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python) ([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-langchain/CHANGELOG.md)) | `==0.5.3` → `==0.5.4` |  |  | | [toolbox-llamaindex](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python) ([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-llamaindex/CHANGELOG.md)) | `==0.5.3` → `==0.5.4` |  |  | --- ### Release Notes <details> <summary>google/adk-python (google-adk)</summary> ### [`v1.21.0`](https://redirect.github.com/google/adk-python/blob/HEAD/CHANGELOG.md#1210-2025-12-11) [Compare Source](https://redirect.github.com/google/adk-python/compare/v1.20.0...v1.21.0) ##### Features - **\[Interactions API Support]** - The newly released Gemini [Interactions API](https://ai.google.dev/gemini-api/docs/interactions) is supported in ADK now. To use it: ```Python Agent( model=Gemini( model="gemini-3-pro-preview", use_interactions_api=True, ), name="...", description="...", instruction="...", ) ``` see [samples](https://redirect.github.com/google/adk-python/tree/main/contributing/samples/interactions_api) for details - **\[Services]** - Add `add_session_to_memory` to `CallbackContext` and `ToolContext` to explicitly save the current session to memory ([7b356dd]( |
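The ADK note above only names the new `add_session_to_memory` hook on `CallbackContext` and `ToolContext`. As a rough sketch of how a tool might use it (the import path and the exact call signature are assumptions, not taken from the changelog):

```python
# Sketch only, based on the google-adk 1.21.0 note quoted above. The method
# name add_session_to_memory comes from the changelog; the import path and
# whether the call is sync or async are assumptions.
from google.adk.tools import ToolContext  # assumed import path


def save_conversation(tool_context: ToolContext) -> dict:
    """Tool that explicitly saves the current session to memory."""
    # Per the changelog, ToolContext (and CallbackContext) expose
    # add_session_to_memory(); in the released API this may need awaiting.
    tool_context.add_session_to_memory()
    return {"status": "session saved to memory"}
```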
d08dd144ad
chore(deps): update dependency llama-index to v0.14.10 (#2092)
This PR contains the following updates: | Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) | |---|---|---|---| | [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.14.8` -> `==0.14.10` |  |  | --- ### Release Notes <details> <summary>run-llama/llama_index (llama-index)</summary> ### [`v0.14.10`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-12-04) [Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.9...v0.14.10) ##### llama-index-core \[0.14.10] - feat: add mock function calling llm ([#​20331](https://redirect.github.com/run-llama/llama_index/pull/20331)) ##### llama-index-llms-qianfan \[0.4.1] - test: fix typo 'reponse' to 'response' in variable names ([#​20329](https://redirect.github.com/run-llama/llama_index/pull/20329)) ##### llama-index-tools-airweave \[0.1.0] - feat: add Airweave tool integration with advanced search features ([#​20111](https://redirect.github.com/run-llama/llama_index/pull/20111)) ##### llama-index-utils-qianfan \[0.4.1] - test: fix typo 'reponse' to 'response' in variable names ([#​20329](https://redirect.github.com/run-llama/llama_index/pull/20329)) ### [`v0.14.9`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-12-02) [Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.8...v0.14.9) ##### llama-index-agent-azure \[0.2.1] - fix: Pin azure-ai-projects version to prevent breaking changes ([#​20255](https://redirect.github.com/run-llama/llama_index/pull/20255)) ##### llama-index-core \[0.14.9] - MultiModalVectorStoreIndex now returns a multi-modal ContextChatEngine. ([#​20265](https://redirect.github.com/run-llama/llama_index/pull/20265)) - Ingestion to vector store now ensures that \_node-content is readable ([#​20266](https://redirect.github.com/run-llama/llama_index/pull/20266)) - fix: ensure context is copied with async utils run\_async ([#​20286](https://redirect.github.com/run-llama/llama_index/pull/20286)) - fix(memory): ensure first message in queue is always a user message after flush ([#​20310](https://redirect.github.com/run-llama/llama_index/pull/20310)) ##### llama-index-embeddings-bedrock \[0.7.2] - feat(embeddings-bedrock): Add support for Amazon Bedrock Application Inference Profiles ([#​20267](https://redirect.github.com/run-llama/llama_index/pull/20267)) - fix:(embeddings-bedrock) correct extraction of provider from model\_name ([#​20295](https://redirect.github.com/run-llama/llama_index/pull/20295)) - Bump version of bedrock-embedding ([#​20304](https://redirect.github.com/run-llama/llama_index/pull/20304)) ##### llama-index-embeddings-voyageai \[0.5.1] - VoyageAI correction and documentation ([#​20251](https://redirect.github.com/run-llama/llama_index/pull/20251)) ##### llama-index-llms-anthropic \[0.10.3] - feat: add anthropic opus 4.5 ([#​20306](https://redirect.github.com/run-llama/llama_index/pull/20306)) ##### llama-index-llms-bedrock-converse \[0.12.2] - fix(bedrock-converse): Only use guardrail\_stream\_processing\_mode in streaming functions ([#​20289](https://redirect.github.com/run-llama/llama_index/pull/20289)) - feat: add anthropic opus 4.5 ([#​20306](https://redirect.github.com/run-llama/llama_index/pull/20306)) - feat(bedrock-converse): Additional support for Claude Opus 4.5 ([#​20317](https://redirect.github.com/run-llama/llama_index/pull/20317)) ##### llama-index-llms-google-genai \[0.7.4] - Fix gemini-3 
support and gemini function call support ([#​20315](https://redirect.github.com/run-llama/llama_index/pull/20315)) ##### llama-index-llms-helicone \[0.1.1] - update helicone docs + examples ([#​20208](https://redirect.github.com/run-llama/llama_index/pull/20208)) ##### llama-index-llms-openai \[0.6.10] - Smallest Nit ([#​20252](https://redirect.github.com/run-llama/llama_index/pull/20252)) - Feat: Add gpt-5.1-chat model support ([#​20311](https://redirect.github.com/run-llama/llama_index/pull/20311)) ##### llama-index-llms-ovhcloud \[0.1.0] - Add OVHcloud AI Endpoints provider ([#​20288](https://redirect.github.com/run-llama/llama_index/pull/20288)) ##### llama-index-llms-siliconflow \[0.4.2] - \[Bugfix] None check on content in delta in siliconflow LLM ([#​20327](https://redirect.github.com/run-llama/llama_index/pull/20327)) ##### llama-index-node-parser-docling \[0.4.2] - Relax docling Python constraints ([#​20322](https://redirect.github.com/run-llama/llama_index/pull/20322)) ##### llama-index-packs-resume-screener \[0.9.3] - feat: Update pypdf to latest version ([#​20285](https://redirect.github.com/run-llama/llama_index/pull/20285)) ##### llama-index-postprocessor-voyageai-rerank \[0.4.1] - VoyageAI correction and documentation ([#​20251](https://redirect.github.com/run-llama/llama_index/pull/20251)) ##### llama-index-protocols-ag-ui \[0.2.3] - fix: correct order of ag-ui events to avoid event conflicts ([#​20296](https://redirect.github.com/run-llama/llama_index/pull/20296)) ##### llama-index-readers-confluence \[0.6.0] - Refactor Confluence integration: Update license to MIT, remove requirements.txt, and implement HtmlTextParser for HTML to Markdown conversion. Update dependencies and tests accordingly. ([#​20262](https://redirect.github.com/run-llama/llama_index/pull/20262)) ##### llama-index-readers-docling \[0.4.2] - Relax docling Python constraints ([#​20322](https://redirect.github.com/run-llama/llama_index/pull/20322)) ##### llama-index-readers-file \[0.5.5] - feat: Update pypdf to latest version ([#​20285](https://redirect.github.com/run-llama/llama_index/pull/20285)) ##### llama-index-readers-reddit \[0.4.1] - Fix typo in README.md for Reddit integration ([#​20283](https://redirect.github.com/run-llama/llama_index/pull/20283)) ##### llama-index-storage-chat-store-postgres \[0.3.2] - \[FIX] Postgres ChatStore automatically prefix table name with "data\_" ([#​20241](https://redirect.github.com/run-llama/llama_index/pull/20241)) ##### llama-index-vector-stores-azureaisearch \[0.4.4] - `vector-azureaisearch`: check if user agent already in policy before add it to azure client ([#​20243](https://redirect.github.com/run-llama/llama_index/pull/20243)) - fix(azureaisearch): Add close/aclose methods to fix unclosed client session warnings ([#​20309](https://redirect.github.com/run-llama/llama_index/pull/20309)) ##### llama-index-vector-stores-milvus \[0.9.4] - Fix/consistency level param for milvus ([#​20268](https://redirect.github.com/run-llama/llama_index/pull/20268)) ##### llama-index-vector-stores-postgres \[0.7.2] - Fix postgresql dispose ([#​20312](https://redirect.github.com/run-llama/llama_index/pull/20312)) ##### llama-index-vector-stores-qdrant \[0.9.0] - fix: Update qdrant-client version constraints ([#​20280](https://redirect.github.com/run-llama/llama_index/pull/20280)) - Feat: update Qdrant client to 1.16.0 ([#​20287](https://redirect.github.com/run-llama/llama_index/pull/20287)) ##### llama-index-vector-stores-vertexaivectorsearch \[0.3.2] - fix: update blob path in 
batch\_update\_index ([#​20281](https://redirect.github.com/run-llama/llama_index/pull/20281)) ##### llama-index-voice-agents-openai \[0.2.2] - Smallest Nit ([#​20252](https://redirect.github.com/run-llama/llama_index/pull/20252)) </details> --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0Mi4xOS45IiwidXBkYXRlZEluVmVyIjoiNDIuMzIuMiIsInRhcmdldEJyYW5jaCI6Im1haW4iLCJsYWJlbHMiOltdfQ==--> Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com> |
baf1bd1a97
chore(deps): update dependency llama-index to v0.14.8 (#1831)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.14.6` -> `==0.14.8` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Release Notes <details> <summary>run-llama/llama_index (llama-index)</summary> ### [`v0.14.8`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-11-10) [Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.7...v0.14.8) ##### llama-index-core \[0.14.8] - Fix ReActOutputParser getting stuck when "Answer:" contains "Action:" ([#​20098](https://redirect.github.com/run-llama/llama_index/pull/20098)) - Add buffer to image, audio, video and document blocks ([#​20153](https://redirect.github.com/run-llama/llama_index/pull/20153)) - fix(agent): Handle multi-block ChatMessage in ReActAgent ([#​20196](https://redirect.github.com/run-llama/llama_index/pull/20196)) - Fix/20209 ([#​20214](https://redirect.github.com/run-llama/llama_index/pull/20214)) - Preserve Exception in ToolOutput ([#​20231](https://redirect.github.com/run-llama/llama_index/pull/20231)) - fix weird pydantic warning ([#​20235](https://redirect.github.com/run-llama/llama_index/pull/20235)) ##### llama-index-embeddings-nvidia \[0.4.2] - docs: Edit pass and update example model ([#​20198](https://redirect.github.com/run-llama/llama_index/pull/20198)) ##### llama-index-embeddings-ollama \[0.8.4] - Added a test case (no code) to check the embedding through an actual connection to a Ollama server (after checking that the ollama server exists) ([#​20230](https://redirect.github.com/run-llama/llama_index/pull/20230)) ##### llama-index-llms-anthropic \[0.10.2] - feat(llms/anthropic): Add support for RawMessageDeltaEvent in streaming ([#​20206](https://redirect.github.com/run-llama/llama_index/pull/20206)) - chore: remove unsupported models ([#​20211](https://redirect.github.com/run-llama/llama_index/pull/20211)) ##### llama-index-llms-bedrock-converse \[0.11.1] - feat: integrate bedrock converse with tool call block ([#​20099](https://redirect.github.com/run-llama/llama_index/pull/20099)) - feat: Update model name extraction to include 'jp' region prefix and … ([#​20233](https://redirect.github.com/run-llama/llama_index/pull/20233)) ##### llama-index-llms-google-genai \[0.7.3] - feat: google genai integration with tool block ([#​20096](https://redirect.github.com/run-llama/llama_index/pull/20096)) - fix: non-streaming gemini tool calling ([#​20207](https://redirect.github.com/run-llama/llama_index/pull/20207)) - Add token usage information in GoogleGenAI chat additional\_kwargs ([#​20219](https://redirect.github.com/run-llama/llama_index/pull/20219)) - bug fix google genai stream\_complete ([#​20220](https://redirect.github.com/run-llama/llama_index/pull/20220)) ##### llama-index-llms-nvidia \[0.4.4] - docs: Edit pass and code example updates ([#​20200](https://redirect.github.com/run-llama/llama_index/pull/20200)) ##### llama-index-llms-openai \[0.6.8] - FixV2: Correct DocumentBlock type for OpenAI from 'input\_file' to 'file' ([#​20203](https://redirect.github.com/run-llama/llama_index/pull/20203)) - OpenAI v2 sdk support ([#​20234](https://redirect.github.com/run-llama/llama_index/pull/20234)) ##### llama-index-llms-upstage \[0.6.5] - OpenAI v2 sdk support ([#​20234](https://redirect.github.com/run-llama/llama_index/pull/20234)) ##### llama-index-packs-streamlit-chatbot \[0.5.2] - 
OpenAI v2 sdk support ([#​20234](https://redirect.github.com/run-llama/llama_index/pull/20234)) ##### llama-index-packs-voyage-query-engine \[0.5.2] - OpenAI v2 sdk support ([#​20234](https://redirect.github.com/run-llama/llama_index/pull/20234)) ##### llama-index-postprocessor-nvidia-rerank \[0.5.1] - docs: Edit pass ([#​20199](https://redirect.github.com/run-llama/llama_index/pull/20199)) ##### llama-index-readers-web \[0.5.6] - feat: Add ScrapyWebReader Integration ([#​20212](https://redirect.github.com/run-llama/llama_index/pull/20212)) - Update Scrapy dependency to 2.13.3 ([#​20228](https://redirect.github.com/run-llama/llama_index/pull/20228)) ##### llama-index-readers-whisper \[0.3.0] - OpenAI v2 sdk support ([#​20234](https://redirect.github.com/run-llama/llama_index/pull/20234)) ##### llama-index-storage-kvstore-postgres \[0.4.3] - fix: Ensure schema creation only occurs if it doesn't already exist ([#​20225](https://redirect.github.com/run-llama/llama_index/pull/20225)) ##### llama-index-tools-brightdata \[0.2.1] - docs: add api key claim instructions ([#​20204](https://redirect.github.com/run-llama/llama_index/pull/20204)) ##### llama-index-tools-mcp \[0.4.3] - Added test case for issue 19211. No code change ([#​20201](https://redirect.github.com/run-llama/llama_index/pull/20201)) ##### llama-index-utils-oracleai \[0.3.1] - Update llama-index-core dependency to 0.12.45 ([#​20227](https://redirect.github.com/run-llama/llama_index/pull/20227)) ##### llama-index-vector-stores-lancedb \[0.4.2] - fix: FTS index recreation bug on every LanceDB query ([#​20213](https://redirect.github.com/run-llama/llama_index/pull/20213)) ### [`v0.14.7`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-10-30) [Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.6...v0.14.7) ##### llama-index-core \[0.14.7] - Feat/serpex tool integration ([#​20141](https://redirect.github.com/run-llama/llama_index/pull/20141)) - Fix outdated error message about setting LLM ([#​20157](https://redirect.github.com/run-llama/llama_index/pull/20157)) - Fixing some recently failing tests ([#​20165](https://redirect.github.com/run-llama/llama_index/pull/20165)) - Fix: update lock to latest workflow and fix issues ([#​20173](https://redirect.github.com/run-llama/llama_index/pull/20173)) - fix: ensure full docstring is used in FunctionTool ([#​20175](https://redirect.github.com/run-llama/llama_index/pull/20175)) - fix api docs build ([#​20180](https://redirect.github.com/run-llama/llama_index/pull/20180)) ##### llama-index-embeddings-voyageai \[0.5.0] - Updating the VoyageAI integration ([#​20073](https://redirect.github.com/run-llama/llama_index/pull/20073)) ##### llama-index-llms-anthropic \[0.10.0] - feat: integrate anthropic with tool call block ([#​20100](https://redirect.github.com/run-llama/llama_index/pull/20100)) ##### llama-index-llms-bedrock-converse \[0.10.7] - feat: Add support for Bedrock Guardrails streamProcessingMode ([#​20150](https://redirect.github.com/run-llama/llama_index/pull/20150)) - bedrock structured output optional force ([#​20158](https://redirect.github.com/run-llama/llama_index/pull/20158)) ##### llama-index-llms-fireworks \[0.4.5] - Update FireworksAI models ([#​20169](https://redirect.github.com/run-llama/llama_index/pull/20169)) ##### llama-index-llms-mistralai \[0.9.0] - feat: mistralai integration with tool call block ([#​20103](https://redirect.github.com/run-llama/llama_index/pull/20103)) ##### llama-index-llms-ollama \[0.9.0] - feat: 
integrate ollama with tool call block ([#​20097](https://redirect.github.com/run-llama/llama_index/pull/20097)) ##### llama-index-llms-openai \[0.6.6] - Allow setting temp of gpt-5-chat ([#​20156](https://redirect.github.com/run-llama/llama_index/pull/20156)) ##### llama-index-readers-confluence \[0.5.0] - feat(confluence): make SVG processing optional to fix pycairo install… ([#​20115](https://redirect.github.com/run-llama/llama_index/pull/20115)) ##### llama-index-readers-github \[0.9.0] - Add GitHub App authentication support ([#​20106](https://redirect.github.com/run-llama/llama_index/pull/20106)) ##### llama-index-retrievers-bedrock \[0.5.1] - Fixing some recently failing tests ([#​20165](https://redirect.github.com/run-llama/llama_index/pull/20165)) ##### llama-index-tools-serpex \[0.1.0] - Feat/serpex tool integration ([#​20141](https://redirect.github.com/run-llama/llama_index/pull/20141)) - add missing toml info ([#​20186](https://redirect.github.com/run-llama/llama_index/pull/20186)) ##### llama-index-vector-stores-couchbase \[0.6.0] - Add Hyperscale and Composite Vector Indexes support for Couchbase vector-store ([#​20170](https://redirect.github.com/run-llama/llama_index/pull/20170)) </details> --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE3My4xIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com> Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com> |
ee10723480
chore(deps): update dependency toolbox-llamaindex to v0.5.3 (#1979)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | [toolbox-llamaindex](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python) ([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-llamaindex/CHANGELOG.md)) | `==0.5.2` -> `==0.5.3` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Release Notes <details> <summary>googleapis/mcp-toolbox-sdk-python (toolbox-llamaindex)</summary> ### [`v0.5.3`](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/releases/tag/toolbox-core-v0.5.3): toolbox-core: v0.5.3 [Compare Source](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/compare/toolbox-llamaindex-v0.5.2...toolbox-llamaindex-v0.5.3) ##### Miscellaneous Chores - **ci:** Updated the toolbox server version for CI and integration tests ([#​388](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/388)), ([#​414](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/414)), ([#​421](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/421), [#​395](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/395)). - **deps:** Updated dependencies: `aiohttp` to v3.13.0 ([#​389](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/389)), `google-auth` to v2.41.1 ([#​383](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/383)), `isort` to v7 ([#​393](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/393)), `pytest` to v9 ([#​416](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/416)), and other non-major Python dependencies ([#​386](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/386)), ([#​387](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/387)), ([#​427](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/427)). </details> --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0Mi4xMy41IiwidXBkYXRlZEluVmVyIjoiNDIuMTMuNSIsInRhcmdldEJyYW5jaCI6Im1haW4iLCJsYWJlbHMiOltdfQ==--> Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com> Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com> |
b2ea4b7b8f
chore(deps): update dependency pytest to v9.0.1 (#1938)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | [pytest](https://redirect.github.com/pytest-dev/pytest) ([changelog](https://docs.pytest.org/en/stable/changelog.html)) | `==9.0.0` -> `==9.0.1` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Release Notes <details> <summary>pytest-dev/pytest (pytest)</summary> ### [`v9.0.1`](https://redirect.github.com/pytest-dev/pytest/releases/tag/9.0.1) [Compare Source](https://redirect.github.com/pytest-dev/pytest/compare/9.0.0...9.0.1) ### pytest 9.0.1 (2025-11-12) #### Bug fixes - [#​13895](https://redirect.github.com/pytest-dev/pytest/issues/13895): Restore support for skipping tests via `raise unittest.SkipTest`. - [#​13896](https://redirect.github.com/pytest-dev/pytest/issues/13896): The terminal progress plugin added in pytest 9.0 is now automatically disabled when iTerm2 is detected, it generated desktop notifications instead of the desired functionality. - [#​13904](https://redirect.github.com/pytest-dev/pytest/issues/13904): Fixed the TOML type of the verbosity settings in the API reference from number to string. - [#​13910](https://redirect.github.com/pytest-dev/pytest/issues/13910): Fixed <span class="title-ref">UserWarning: Do not expect file\_or\_dir</span> on some earlier Python 3.12 and 3.13 point versions. #### Packaging updates and notes for downstreams - [#​13933](https://redirect.github.com/pytest-dev/pytest/issues/13933): The tox configuration has been adjusted to make sure the desired version string can be passed into its `package_env` through the `SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYTEST` environment variable as a part of the release process -- by `webknjaz`. #### Contributor-facing changes - [#​13891](https://redirect.github.com/pytest-dev/pytest/issues/13891), [#​13942](https://redirect.github.com/pytest-dev/pytest/issues/13942): The CI/CD part of the release automation is now capable of creating GitHub Releases without having a Git checkout on disk -- by `bluetech` and `webknjaz`. - [#​13933](https://redirect.github.com/pytest-dev/pytest/issues/13933): The tox configuration has been adjusted to make sure the desired version string can be passed into its `package_env` through the `SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYTEST` environment variable as a part of the release process -- by `webknjaz`. </details> --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNzMuMSIsInVwZGF0ZWRJblZlciI6IjQxLjE3My4xIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> |
61739300be
chore(deps): update dependency llama-index-llms-google-genai to v0.7.3 (#1886)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | llama-index-llms-google-genai | `==0.7.1` -> `==0.7.3` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE1OS40IiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> |
edd739c490
chore(deps): update dependency pytest to v9 (#1911)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | [pytest](https://redirect.github.com/pytest-dev/pytest) ([changelog](https://docs.pytest.org/en/stable/changelog.html)) | `==8.4.2` -> `==9.0.0` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Release Notes <details> <summary>pytest-dev/pytest (pytest)</summary> ### [`v9.0.0`](https://redirect.github.com/pytest-dev/pytest/releases/tag/9.0.0) [Compare Source](https://redirect.github.com/pytest-dev/pytest/compare/8.4.2...9.0.0) ### pytest 9.0.0 (2025-11-05) #### New features - [#​1367](https://redirect.github.com/pytest-dev/pytest/issues/1367): **Support for subtests** has been added. `subtests <subtests>` are an alternative to parametrization, useful in situations where the parametrization values are not all known at collection time. Example: ```python def contains_docstring(p: Path) -> bool: """Return True if the given Python file contains a top-level docstring.""" ... def test_py_files_contain_docstring(subtests: pytest.Subtests) -> None: for path in Path.cwd().glob("*.py"): with subtests.test(path=str(path)): assert contains_docstring(path) ``` Each assert failure or error is caught by the context manager and reported individually, giving a clear picture of all files that are missing a docstring. In addition, `unittest.TestCase.subTest` is now also supported. This feature was originally implemented as a separate plugin in [pytest-subtests](https://redirect.github.com/pytest-dev/pytest-subtests), but since then has been merged into the core. > \[!NOTE] > This feature is experimental and will likely evolve in future releases. By that we mean that we might change how subtests are reported on failure, but the functionality and how to use it are stable. - [#​13743](https://redirect.github.com/pytest-dev/pytest/issues/13743): Added support for **native TOML configuration files**. While pytest, since version 6, supports configuration in `pyproject.toml` files under `[tool.pytest.ini_options]`, it does so in an "INI compatibility mode", where all configuration values are treated as strings or list of strings. Now, pytest supports the native TOML data model. In `pyproject.toml`, the native TOML configuration is under the `[tool.pytest]` table. ```toml # pyproject.toml [tool.pytest] minversion = "9.0" addopts = ["-ra", "-q"] testpaths = [ "tests", "integration", ] ``` The `[tool.pytest.ini_options]` table remains supported, but both tables cannot be used at the same time. If you prefer to use a separate configuration file, or don't use `pyproject.toml`, you can use `pytest.toml` or `.pytest.toml`: ```toml # pytest.toml or .pytest.toml [pytest] minversion = "9.0" addopts = ["-ra", "-q"] testpaths = [ "tests", "integration", ] ``` The documentation now (sometimes) shows configuration snippets in both TOML and INI formats, in a tabbed interface. See `config file formats` for full details. - [#​13823](https://redirect.github.com/pytest-dev/pytest/issues/13823): Added a **"strict mode"** enabled by the `strict` configuration option. When set to `true`, the `strict` option currently enables - `strict_config` - `strict_markers` - `strict_parametrization_ids` - `strict_xfail` The individual strictness options can be explicitly set to override the global `strict` setting. The previously-deprecated `--strict` command-line flag now enables strict mode. 
If pytest adds new strictness options in the future, they will also be enabled in strict mode. Therefore, you should only enable strict mode if you use a pinned/locked version of pytest, or if you want to proactively adopt new strictness options as they are added. See `strict mode` for more details. - [#​13737](https://redirect.github.com/pytest-dev/pytest/issues/13737): Added the `strict_parametrization_ids` configuration option. When set, pytest emits an error if it detects non-unique parameter set IDs, rather than automatically making the IDs unique by adding <span class="title-ref">0</span>, <span class="title-ref">1</span>, ... to them. This can be particularly useful for catching unintended duplicates. - [#​13072](https://redirect.github.com/pytest-dev/pytest/issues/13072): Added support for displaying test session **progress in the terminal tab** using the [OSC 9;4;](https://conemu.github.io/en/AnsiEscapeCodes.html#ConEmu_specific_OSC) ANSI sequence. When pytest runs in a supported terminal emulator like ConEmu, Gnome Terminal, Ptyxis, Windows Terminal, Kitty or Ghostty, you'll see the progress in the terminal tab or window, allowing you to monitor pytest's progress at a glance. This feature is automatically enabled when running in a TTY. It is implemented as an internal plugin. If needed, it can be disabled as follows: - On a user level, using `-p no:terminalprogress` on the command line or via an environment variable `PYTEST_ADDOPTS='-p no:terminalprogress'`. - On a project configuration level, using `addopts = "-p no:terminalprogress"`. - [#​478](https://redirect.github.com/pytest-dev/pytest/issues/478): Support PEP420 (implicit namespace packages) as <span class="title-ref">--pyargs</span> target when `consider_namespace_packages` is <span class="title-ref">true</span> in the config. Previously, this option only impacted package imports, now it also impacts tests discovery. - [#​13678](https://redirect.github.com/pytest-dev/pytest/issues/13678): Added a new `faulthandler_exit_on_timeout` configuration option set to "false" by default to let <span class="title-ref">faulthandler</span> interrupt the <span class="title-ref">pytest</span> process after a timeout in case of deadlock. Previously, a <span class="title-ref">faulthandler</span> timeout would only dump the traceback of all threads to stderr, but would not interrupt the <span class="title-ref">pytest</span> process. \-- by `ogrisel`. - [#​13829](https://redirect.github.com/pytest-dev/pytest/issues/13829): Added support for configuration option aliases via the `aliases` parameter in `Parser.addini() <pytest.Parser.addini>`. Plugins can now register alternative names for configuration options, allowing for more flexibility in configuration naming and supporting backward compatibility when renaming options. The canonical name always takes precedence if both the canonical name and an alias are specified in the configuration file. #### Improvements in existing functionality - [#​13330](https://redirect.github.com/pytest-dev/pytest/issues/13330): Having pytest configuration spread over more than one file (for example having both a `pytest.ini` file and `pyproject.toml` with a `[tool.pytest.ini_options]` table) will now print a warning to make it clearer to the user that only one of them is actually used. 
\-- by `sgaist` - [#​13574](https://redirect.github.com/pytest-dev/pytest/issues/13574): The single argument `--version` no longer loads the entire plugin infrastructure, making it faster and more reliable when displaying only the pytest version. Passing `--version` twice (e.g., `pytest --version --version`) retains the original behavior, showing both the pytest version and plugin information. > \[!NOTE] > Since `--version` is now processed early, it only takes effect when passed directly via the command line. It will not work if set through other mechanisms, such as `PYTEST_ADDOPTS` or `addopts`. - [#​13823](https://redirect.github.com/pytest-dev/pytest/issues/13823): Added `strict_xfail` as an alias to the `xfail_strict` option, `strict_config` as an alias to the `--strict-config` flag, and `strict_markers` as an alias to the `--strict-markers` flag. This makes all strictness options consistently have configuration options with the prefix `strict_`. - [#​13700](https://redirect.github.com/pytest-dev/pytest/issues/13700): <span class="title-ref">--junitxml</span> no longer prints the <span class="title-ref">generated xml file</span> summary at the end of the pytest session when <span class="title-ref">--quiet</span> is given. - [#​13732](https://redirect.github.com/pytest-dev/pytest/issues/13732): Previously, when filtering warnings, pytest would fail if the filter referenced a class that could not be imported. Now, this only outputs a message indicating the problem. - [#​13859](https://redirect.github.com/pytest-dev/pytest/issues/13859): Clarify the error message for <span class="title-ref">pytest.raises()</span> when a regex <span class="title-ref">match</span> fails. - [#​13861](https://redirect.github.com/pytest-dev/pytest/issues/13861): Better sentence structure in a test's expected error message. Previously, the error message would be "expected exception must be \<expected>, but got \<actual>". Now, it is "Expected \<expected>, but got \<actual>". #### Removals and backward incompatible breaking changes - [#​12083](https://redirect.github.com/pytest-dev/pytest/issues/12083): Fixed a bug where an invocation such as <span class="title-ref">pytest a/ a/b</span> would cause only tests from <span class="title-ref">a/b</span> to run, and not other tests under <span class="title-ref">a/</span>. The fix entails a few breaking changes to how such overlapping arguments and duplicates are handled: 1. <span class="title-ref">pytest a/b a/</span> or <span class="title-ref">pytest a/ a/b</span> are equivalent to <span class="title-ref">pytest a</span>; if an argument overlaps another arguments, only the prefix remains. 2. <span class="title-ref">pytest x.py x.py</span> is equivalent to <span class="title-ref">pytest x.py</span>; previously such an invocation was taken as an explicit request to run the tests from the file twice. If you rely on these behaviors, consider using `--keep-duplicates <duplicate-paths>`, which retains its existing behavior (including the bug). - [#​13719](https://redirect.github.com/pytest-dev/pytest/issues/13719): Support for Python 3.9 is dropped following its end of life. - [#​13766](https://redirect.github.com/pytest-dev/pytest/issues/13766): Previously, pytest would assume it was running in a CI/CD environment if either of the environment variables <span class="title-ref">$CI</span> or <span class="title-ref">$BUILD\_NUMBER</span> was defined; now, CI mode is only activated if at least one of those variables is defined and set to a *non-empty* value. 
- [#​13779](https://redirect.github.com/pytest-dev/pytest/issues/13779): **PytestRemovedIn9Warning deprecation warnings are now errors by default.** Following our plan to remove deprecated features with as little disruption as possible, all warnings of type `PytestRemovedIn9Warning` now generate errors instead of warning messages by default. **The affected features will be effectively removed in pytest 9.1**, so please consult the `deprecations` section in the docs for directions on how to update existing code. In the pytest `9.0.X` series, it is possible to change the errors back into warnings as a stopgap measure by adding this to your `pytest.ini` file: ```ini [pytest] filterwarnings = ignore::pytest.PytestRemovedIn9Warning ``` But this will stop working when pytest `9.1` is released. **If you have concerns** about the removal of a specific feature, please add a comment to `13779`. #### Deprecations (removal in next major release) - [#​13807](https://redirect.github.com/pytest-dev/pytest/issues/13807): `monkeypatch.syspath_prepend() <pytest.MonkeyPatch.syspath_prepend>` now issues a deprecation warning when the prepended path contains legacy namespace packages (those using `pkg_resources.declare_namespace()`). Users should migrate to native namespace packages (`420`). See `monkeypatch-fixup-namespace-packages` for details. #### Bug fixes - [#​13445](https://redirect.github.com/pytest-dev/pytest/issues/13445): Made the type annotations of `pytest.skip` and friends more spec-complaint to have them work across more type checkers. - [#​13537](https://redirect.github.com/pytest-dev/pytest/issues/13537): Fixed a bug in which `ExceptionGroup` with only `Skipped` exceptions in teardown was not handled correctly and showed as error. - [#​13598](https://redirect.github.com/pytest-dev/pytest/issues/13598): Fixed possible collection confusion on Windows when short paths and symlinks are involved. - [#​13716](https://redirect.github.com/pytest-dev/pytest/issues/13716): Fixed a bug where a nonsensical invocation like `pytest x.py[a]` (a file cannot be parametrized) was silently treated as `pytest x.py`. This is now a usage error. - [#​13722](https://redirect.github.com/pytest-dev/pytest/issues/13722): Fixed a misleading assertion failure message when using `pytest.approx` on mappings with differing lengths. - [#​13773](https://redirect.github.com/pytest-dev/pytest/issues/13773): Fixed the static fixture closure calculation to properly consider transitive dependencies requested by overridden fixtures. - [#​13816](https://redirect.github.com/pytest-dev/pytest/issues/13816): Fixed `pytest.approx` which now returns a clearer error message when comparing mappings with different keys. - [#​13849](https://redirect.github.com/pytest-dev/pytest/issues/13849): Hidden `.pytest.ini` files are now picked up as the config file even if empty. This was an inconsistency with non-hidden `pytest.ini`. - [#​13865](https://redirect.github.com/pytest-dev/pytest/issues/13865): Fixed <span class="title-ref">--show-capture</span> with <span class="title-ref">--tb=line</span>. - [#​13522](https://redirect.github.com/pytest-dev/pytest/issues/13522): Fixed `pytester` in subprocess mode ignored all :attr\`pytester.plugins \<pytest.Pytester.plugins>\` except the first. Fixed `pytester` in subprocess mode silently ignored non-str `pytester.plugins <pytest.Pytester.plugins>`. Now it errors instead. 
If you are affected by this, specify the plugin by name, or switch the affected tests to use `pytester.runpytest_inprocess <pytest.Pytester.runpytest_inprocess>` explicitly instead. #### Packaging updates and notes for downstreams - [#​13791](https://redirect.github.com/pytest-dev/pytest/issues/13791): Minimum requirements on `iniconfig` and `packaging` were bumped to `1.0.1` and `22.0.0`, respectively. #### Contributor-facing changes - [#​12244](https://redirect.github.com/pytest-dev/pytest/issues/12244): Fixed self-test failures when <span class="title-ref">TERM=dumb</span>. - [#​12474](https://redirect.github.com/pytest-dev/pytest/issues/12474): Added scheduled GitHub Action Workflow to run Sphinx linkchecks in repo documentation. - [#​13621](https://redirect.github.com/pytest-dev/pytest/issues/13621): pytest's own testsuite now handles the `lsof` command hanging (e.g. due to unreachable network filesystems), with the affected selftests being skipped after 10 seconds. - [#​13638](https://redirect.github.com/pytest-dev/pytest/issues/13638): Fixed deprecated `gh pr new` command in `scripts/prepare-release-pr.py`. The script now uses `gh pr create` which is compatible with GitHub CLI v2.0+. - [#​13695](https://redirect.github.com/pytest-dev/pytest/issues/13695): Flush <span class="title-ref">stdout</span> and <span class="title-ref">stderr</span> in <span class="title-ref">Pytester.run</span> to avoid truncated outputs in <span class="title-ref">test\_faulthandler.py::test\_timeout</span> on CI -- by `ogrisel`. - [#​13771](https://redirect.github.com/pytest-dev/pytest/issues/13771): Skip <span class="title-ref">test\_do\_not\_collect\_symlink\_siblings</span> on Windows environments without symlink support to avoid false negatives. - [#​13841](https://redirect.github.com/pytest-dev/pytest/issues/13841): `tox>=4` is now required when contributing to pytest. - [#​13625](https://redirect.github.com/pytest-dev/pytest/issues/13625): Added missing docstrings to `pytest_addoption()`, `pytest_configure()`, and `cacheshow()` functions in `cacheprovider.py`. #### Miscellaneous internal changes - [#​13830](https://redirect.github.com/pytest-dev/pytest/issues/13830): Configuration overrides (`-o`/`--override-ini`) are now processed during startup rather than during `config.getini() <pytest.Config.getini>`. </details> --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE1OS40IiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> |
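Of the pytest 9.0.0 items above, the new `aliases` parameter on `Parser.addini()` is described only in prose. A minimal `conftest.py` sketch is shown below; the option name `report_style` and its alias `report_format` are hypothetical, not options pytest itself defines.

```python
# conftest.py: minimal sketch of the `aliases` parameter added to
# Parser.addini() in pytest 9.0 (see the release note above). The option
# name "report_style" and its alias "report_format" are made-up examples.
import pytest


def pytest_addoption(parser: pytest.Parser) -> None:
    parser.addini(
        "report_style",               # canonical name; takes precedence
        help="example option registered with an alternative name",
        default="short",
        aliases=["report_format"],    # alternative name accepted in config
    )
```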
98e3f6abe4
chore(deps): update dependency llama-index-llms-google-genai to v0.7.1 (#1841)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | llama-index-llms-google-genai | `==0.6.2` -> `==0.7.1` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- > [!WARNING] > Some dependencies could not be looked up. Check the Dependency Dashboard for more information. --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE1OS40IiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> |
fdca92cefb
chore(deps): update dependency llama-index-llms-google-genai to v0.6.2 (#1725)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | llama-index-llms-google-genai | `==0.6.1` -> `==0.6.2` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNDMuMSIsInVwZGF0ZWRJblZlciI6IjQxLjE0My4xIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> Co-authored-by: Averi Kitsch <akitsch@google.com> |
01ac3134c0
chore(deps): update dependency llama-index to v0.14.6 (#1785)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.14.4` -> `==0.14.6` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Release Notes <details> <summary>run-llama/llama_index (llama-index)</summary> ### [`v0.14.6`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-10-26) [Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.5...v0.14.6) ##### llama-index-core \[0.14.6] - Add allow\_parallel\_tool\_calls for non-streaming ([#​20117](https://redirect.github.com/run-llama/llama_index/pull/20117)) - Fix invalid use of field-specific metadata ([#​20122](https://redirect.github.com/run-llama/llama_index/pull/20122)) - update doc for SemanticSplitterNodeParser ([#​20125](https://redirect.github.com/run-llama/llama_index/pull/20125)) - fix rare cases when sentence splits are larger than chunk size ([#​20147](https://redirect.github.com/run-llama/llama_index/pull/20147)) ##### llama-index-embeddings-bedrock \[0.7.0] - Fix BedrockEmbedding to support Cohere v4 response format ([#​20094](https://redirect.github.com/run-llama/llama_index/pull/20094)) ##### llama-index-embeddings-isaacus \[0.1.0] - feat: Isaacus embeddings integration ([#​20124](https://redirect.github.com/run-llama/llama_index/pull/20124)) ##### llama-index-embeddings-oci-genai \[0.4.2] - Update OCI GenAI cohere models ([#​20146](https://redirect.github.com/run-llama/llama_index/pull/20146)) ##### llama-index-llms-anthropic \[0.9.7] - Fix double token stream in anthropic llm ([#​20108](https://redirect.github.com/run-llama/llama_index/pull/20108)) - Ensure anthropic content delta only has user facing response ([#​20113](https://redirect.github.com/run-llama/llama_index/pull/20113)) ##### llama-index-llms-baseten \[0.1.7] - add GLM ([#​20121](https://redirect.github.com/run-llama/llama_index/pull/20121)) ##### llama-index-llms-helicone \[0.1.0] - integrate helicone to llama-index ([#​20131](https://redirect.github.com/run-llama/llama_index/pull/20131)) ##### llama-index-llms-oci-genai \[0.6.4] - Update OCI GenAI cohere models ([#​20146](https://redirect.github.com/run-llama/llama_index/pull/20146)) ##### llama-index-llms-openai \[0.6.5] - chore: openai vbump ([#​20095](https://redirect.github.com/run-llama/llama_index/pull/20095)) ##### llama-index-readers-imdb-review \[0.4.2] - chore: Update selenium dependency in imdb-review reader ([#​20105](https://redirect.github.com/run-llama/llama_index/pull/20105)) ##### llama-index-retrievers-bedrock \[0.5.0] - feat(bedrock): add async support for AmazonKnowledgeBasesRetriever ([#​20114](https://redirect.github.com/run-llama/llama_index/pull/20114)) ##### llama-index-retrievers-superlinked \[0.1.3] - Update README.md ([#​19829](https://redirect.github.com/run-llama/llama_index/pull/19829)) ##### llama-index-storage-kvstore-postgres \[0.4.2] - fix: Replace raw SQL string interpolation with proper SQLAlchemy parameterized APIs in PostgresKVStore ([#​20104](https://redirect.github.com/run-llama/llama_index/pull/20104)) ##### llama-index-tools-mcp \[0.4.3] - Fix BasicMCPClient resource signatures ([#​20118](https://redirect.github.com/run-llama/llama_index/pull/20118)) ##### llama-index-vector-stores-postgres \[0.7.1] - Add GIN index support for text array metadata in PostgreSQL vector store 
([#​20130](https://redirect.github.com/run-llama/llama_index/pull/20130)) ### [`v0.14.5`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-10-15) [Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.4...v0.14.5) ##### llama-index-core \[0.14.5] - Remove debug print ([#​20000](https://redirect.github.com/run-llama/llama_index/pull/20000)) - safely initialize RefDocInfo in Docstore ([#​20031](https://redirect.github.com/run-llama/llama_index/pull/20031)) - Add progress bar for multiprocess loading ([#​20048](https://redirect.github.com/run-llama/llama_index/pull/20048)) - Fix duplicate node positions when identical text appears multiple times in document ([#​20050](https://redirect.github.com/run-llama/llama_index/pull/20050)) - chore: tool call block - part 1 ([#​20074](https://redirect.github.com/run-llama/llama_index/pull/20074)) ##### llama-index-instrumentation \[0.4.2] - update instrumentation package metadata ([#​20079](https://redirect.github.com/run-llama/llama_index/pull/20079)) ##### llama-index-llms-anthropic \[0.9.5] - ✨ feat(anthropic): add prompt caching model validation utilities ([#​20069](https://redirect.github.com/run-llama/llama_index/pull/20069)) - fix streaming thinking/tool calling with anthropic ([#​20077](https://redirect.github.com/run-llama/llama_index/pull/20077)) - Add haiku 4.5 support ([#​20092](https://redirect.github.com/run-llama/llama_index/pull/20092)) ##### llama-index-llms-baseten \[0.1.6] - Baseten provider Kimi K2 0711, Llama 4 Maverick and Llama 4 Scout Model APIs deprecation ([#​20042](https://redirect.github.com/run-llama/llama_index/pull/20042)) ##### llama-index-llms-bedrock-converse \[0.10.5] - feat: List Claude Sonnet 4.5 as a reasoning model ([#​20022](https://redirect.github.com/run-llama/llama_index/pull/20022)) - feat: Support global cross-region inference profile prefix ([#​20064](https://redirect.github.com/run-llama/llama_index/pull/20064)) - Update utils.py for opus 4.1 ([#​20076](https://redirect.github.com/run-llama/llama_index/pull/20076)) - 4.1 opus bedrockconverse missing in funcitoncalling models ([#​20084](https://redirect.github.com/run-llama/llama_index/pull/20084)) - Add haiku 4.5 support ([#​20092](https://redirect.github.com/run-llama/llama_index/pull/20092)) ##### llama-index-llms-fireworks \[0.4.4] - Add Support for Custom Models in Fireworks LLM ([#​20023](https://redirect.github.com/run-llama/llama_index/pull/20023)) - fix(llms/fireworks): Cannot use Fireworks Deepseek V3.1-20006 issue ([#​20028](https://redirect.github.com/run-llama/llama_index/pull/20028)) ##### llama-index-llms-oci-genai \[0.6.3] - Add support for xAI models in OCI GenAI ([#​20089](https://redirect.github.com/run-llama/llama_index/pull/20089)) ##### llama-index-llms-openai \[0.6.4] - Gpt 5 pro addition ([#​20029](https://redirect.github.com/run-llama/llama_index/pull/20029)) - fix collecting final response with openai responses streaming ([#​20037](https://redirect.github.com/run-llama/llama_index/pull/20037)) - Add support for GPT-5 models in utils.py (JSON\_SCHEMA\_MODELS) ([#​20045](https://redirect.github.com/run-llama/llama_index/pull/20045)) - chore: tool call block - part 1 ([#​20074](https://redirect.github.com/run-llama/llama_index/pull/20074)) ##### llama-index-llms-sglang \[0.1.0] - Added Sglang llm integration ([#​20020](https://redirect.github.com/run-llama/llama_index/pull/20020)) ##### llama-index-readers-gitlab \[0.5.1] - feat(gitlab): add pagination params for repository tree 
and issues ([#​20052](https://redirect.github.com/run-llama/llama_index/pull/20052)) ##### llama-index-readers-json \[0.4.2] - vbump the JSON reader ([#​20039](https://redirect.github.com/run-llama/llama_index/pull/20039)) ##### llama-index-readers-web \[0.5.5] - fix: ScrapflyReader Pydantic validation error ([#​19999](https://redirect.github.com/run-llama/llama_index/pull/19999)) ##### llama-index-storage-chat-store-dynamodb \[0.4.2] - bump dynamodb chat store deps ([#​20078](https://redirect.github.com/run-llama/llama_index/pull/20078)) ##### llama-index-tools-mcp \[0.4.2] - 🐛 fix(tools/mcp): Fix dict type handling and reference resolution in … ([#​20082](https://redirect.github.com/run-llama/llama_index/pull/20082)) ##### llama-index-tools-signnow \[0.1.0] - feat(signnow): SignNow mcp tools integration ([#​20057](https://redirect.github.com/run-llama/llama_index/pull/20057)) ##### llama-index-tools-tavily-research \[0.4.2] - feat: Add Tavily extract function for URL content extraction ([#​20038](https://redirect.github.com/run-llama/llama_index/pull/20038)) ##### llama-index-vector-stores-azurepostgresql \[0.2.0] - Add hybrid search to Azure PostgreSQL integration ([#​20027](https://redirect.github.com/run-llama/llama_index/pull/20027)) ##### llama-index-vector-stores-milvus \[0.9.3] - fix: Milvus get\_field\_kwargs() ([#​20086](https://redirect.github.com/run-llama/llama_index/pull/20086)) ##### llama-index-vector-stores-opensearch \[0.6.2] - fix(opensearch): Correct version check for efficient filtering ([#​20067](https://redirect.github.com/run-llama/llama_index/pull/20067)) ##### llama-index-vector-stores-qdrant \[0.8.6] - fix(qdrant): Allow async-only initialization with hybrid search ([#​20005](https://redirect.github.com/run-llama/llama_index/pull/20005)) </details> --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTYuMSIsInVwZGF0ZWRJblZlciI6IjQxLjE1Ni4xIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com> |
||
|
|
012d7de67e |
chore(deps): update dependency llama-index to v0.14.4 (#1626)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.14.3` -> `==0.14.4` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Release Notes <details> <summary>run-llama/llama_index (llama-index)</summary> ### [`v0.14.4`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-10-03) [Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.3...v0.14.4) ##### llama-index-core \[0.14.4] - fix pre-release installs ([#​20010](https://redirect.github.com/run-llama/llama_index/pull/20010)) ##### llama-index-embeddings-anyscale \[0.4.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-embeddings-baseten \[0.1.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-embeddings-fireworks \[0.4.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-embeddings-opea \[0.2.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-embeddings-text-embeddings-inference \[0.4.2] - Fix authorization header setup logic in text embeddings inference ([#​19979](https://redirect.github.com/run-llama/llama_index/pull/19979)) ##### llama-index-llms-anthropic \[0.9.3] - feat: add anthropic sonnet 4.5 ([#​19977](https://redirect.github.com/run-llama/llama_index/pull/19977)) ##### llama-index-llms-anyscale \[0.4.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-azure-openai \[0.4.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-baseten \[0.1.5] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-bedrock-converse \[0.9.5] - feat: Additional support for Claude Sonnet 4.5 ([#​19980](https://redirect.github.com/run-llama/llama_index/pull/19980)) ##### llama-index-llms-deepinfra \[0.5.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-everlyai \[0.4.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-fireworks \[0.4.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-google-genai \[0.6.2] - Fix for ValueError: ChatMessage contains multiple blocks, use 'ChatMe… ([#​19954](https://redirect.github.com/run-llama/llama_index/pull/19954)) ##### llama-index-llms-keywordsai \[1.1.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-localai \[0.5.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-mistralai \[0.8.2] - Update list of MistralAI LLMs ([#​19981](https://redirect.github.com/run-llama/llama_index/pull/19981)) ##### llama-index-llms-monsterapi \[0.4.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-nvidia \[0.4.4] - fix llm deps for openai 
([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-ollama \[0.7.4] - Fix `TypeError: unhashable type: 'dict'` in Ollama stream chat with tools ([#​19938](https://redirect.github.com/run-llama/llama_index/pull/19938)) ##### llama-index-llms-openai \[0.6.1] - feat(OpenAILike): support structured outputs ([#​19967](https://redirect.github.com/run-llama/llama_index/pull/19967)) ##### llama-index-llms-openai-like \[0.5.3] - feat(OpenAILike): support structured outputs ([#​19967](https://redirect.github.com/run-llama/llama_index/pull/19967)) ##### llama-index-llms-openrouter \[0.4.2] - chore(openrouter,anthropic): add py.typed ([#​19966](https://redirect.github.com/run-llama/llama_index/pull/19966)) ##### llama-index-llms-perplexity \[0.4.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-portkey \[0.4.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-sarvam \[0.2.1] - fixed Sarvam Integration and Typos (Fixes [#​19931](https://redirect.github.com/run-llama/llama_index/issues/19931)) ([#​19932](https://redirect.github.com/run-llama/llama_index/pull/19932)) ##### llama-index-llms-upstage \[0.6.4] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-llms-yi \[0.4.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-memory-bedrock-agentcore \[0.1.0] - feat: Bedrock AgentCore Memory integration ([#​19953](https://redirect.github.com/run-llama/llama_index/pull/19953)) ##### llama-index-multi-modal-llms-openai \[0.6.2] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-readers-confluence \[0.4.4] - Fix: Respect cloud parameter when fetching child pages in ConfluenceR… ([#​19983](https://redirect.github.com/run-llama/llama_index/pull/19983)) ##### llama-index-readers-service-now \[0.2.2] - Bug Fix :- Not Able to Fetch Page whose latest is empty or null ([#​19916](https://redirect.github.com/run-llama/llama_index/pull/19916)) ##### llama-index-selectors-notdiamond \[0.4.0] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-tools-agentql \[1.2.0] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-tools-playwright \[0.3.1] - chore: fix playwright tests ([#​19946](https://redirect.github.com/run-llama/llama_index/pull/19946)) ##### llama-index-tools-scrapegraph \[0.2.2] - feat: update scrapegraphai ([#​19974](https://redirect.github.com/run-llama/llama_index/pull/19974)) ##### llama-index-vector-stores-chroma \[0.5.3] - docs: fix query method docstring in ChromaVectorStore Fixes [#​19969](https://redirect.github.com/run-llama/llama_index/issues/19969) ([#​19973](https://redirect.github.com/run-llama/llama_index/pull/19973)) ##### llama-index-vector-stores-mongodb \[0.8.1] - fix llm deps for openai ([#​19944](https://redirect.github.com/run-llama/llama_index/pull/19944)) ##### llama-index-vector-stores-postgres \[0.7.0] - fix index creation in postgres vector store ([#​19955](https://redirect.github.com/run-llama/llama_index/pull/19955)) ##### llama-index-vector-stores-solr \[0.1.0] - Add ApacheSolrVectorStore Integration 
([#​19933](https://redirect.github.com/run-llama/llama_index/pull/19933)) </details> --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xMzEuOSIsInVwZGF0ZWRJblZlciI6IjQxLjEzMS45IiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com> Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com> |
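One item in the llama-index 0.14.4 notes above, `feat(OpenAILike): support structured outputs`, is easier to picture with a short sketch. The snippet below is illustrative only: the model name, endpoint, and API key are placeholders, and it assumes the standard LlamaIndex `structured_predict` interface now works through `OpenAILike`, as the changelog entry suggests.

```python
from pydantic import BaseModel

from llama_index.core.prompts import PromptTemplate
from llama_index.llms.openai_like import OpenAILike  # llama-index-llms-openai-like


class Invoice(BaseModel):
    vendor: str
    total_usd: float


# Placeholder endpoint and model: any OpenAI-compatible server would do here.
llm = OpenAILike(
    model="my-local-model",
    api_base="http://localhost:8000/v1",
    api_key="not-needed",
    is_chat_model=True,
)

# structured_predict returns a validated pydantic object rather than raw text.
invoice = llm.structured_predict(
    Invoice,
    PromptTemplate("Extract the vendor and total (USD) from: {text}"),
    text="ACME Corp billed $1,234.50 for cloud usage.",
)
print(invoice.vendor, invoice.total_usd)
```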
||
|
|
530f1cc406 |
chore(deps): update dependency llama-index-llms-google-genai to v0.6.1 (#1562)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | llama-index-llms-google-genai | `==0.6.0` -> `==0.6.1` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xMzAuMSIsInVwZGF0ZWRJblZlciI6IjQxLjEzMC4xIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com> Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com> |
||
|
|
e5f643f929 |
chore(deps): update dependency llama-index-llms-google-genai to v0.6.0 (#1547)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | llama-index-llms-google-genai | `==0.5.1` -> `==0.6.0` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- > [!WARNING] > Some dependencies could not be looked up. Check the Dependency Dashboard for more information. --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS45Ny4xMCIsInVwZGF0ZWRJblZlciI6IjQxLjk3LjEwIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com> |
||
|
|
785be3d8a4 |
chore(deps): update dependency llama-index to v0.14.3 (#1548)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.14.2` -> `==0.14.3` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- > [!WARNING] > Some dependencies could not be looked up. Check the Dependency Dashboard for more information. --- ### Release Notes <details> <summary>run-llama/llama_index (llama-index)</summary> ### [`v0.14.3`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-09-24) [Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.2...v0.14.3) ##### llama-index-core \[0.14.3] - Fix Gemini thought signature serialization ([#​19891](https://redirect.github.com/run-llama/llama_index/pull/19891)) - Adding a ThinkingBlock among content blocks ([#​19919](https://redirect.github.com/run-llama/llama_index/pull/19919)) ##### llama-index-llms-anthropic \[0.9.0] - Adding a ThinkingBlock among content blocks ([#​19919](https://redirect.github.com/run-llama/llama_index/pull/19919)) ##### llama-index-llms-baseten \[0.1.4] - added kimik2 0905 and reordered list for validation ([#​19892](https://redirect.github.com/run-llama/llama_index/pull/19892)) - Baseten Dynamic Model APIs Validation ([#​19893](https://redirect.github.com/run-llama/llama_index/pull/19893)) ##### llama-index-llms-google-genai \[0.6.0] - Add missing FileAPI support for documents ([#​19897](https://redirect.github.com/run-llama/llama_index/pull/19897)) - Adding a ThinkingBlock among content blocks ([#​19919](https://redirect.github.com/run-llama/llama_index/pull/19919)) ##### llama-index-llms-mistralai \[0.8.0] - Adding a ThinkingBlock among content blocks ([#​19919](https://redirect.github.com/run-llama/llama_index/pull/19919)) ##### llama-index-llms-openai \[0.6.0] - Adding a ThinkingBlock among content blocks ([#​19919](https://redirect.github.com/run-llama/llama_index/pull/19919)) ##### llama-index-protocols-ag-ui \[0.2.2] - improve how state snapshotting works in AG-UI ([#​19934](https://redirect.github.com/run-llama/llama_index/pull/19934)) ##### llama-index-readers-mongodb \[0.5.0] - Use PyMongo Asynchronous API instead of Motor ([#​19875](https://redirect.github.com/run-llama/llama_index/pull/19875)) ##### llama-index-readers-paddle-ocr \[0.1.0] - \[New Package] Add PaddleOCR Reader for extracting text from images in PDFs ([#​19827](https://redirect.github.com/run-llama/llama_index/pull/19827)) ##### llama-index-readers-web \[0.5.4] - feat(readers/web-firecrawl): migrate to Firecrawl v2 SDK ([#​19773](https://redirect.github.com/run-llama/llama_index/pull/19773)) ##### llama-index-storage-chat-store-mongo \[0.3.0] - Use PyMongo Asynchronous API instead of Motor ([#​19875](https://redirect.github.com/run-llama/llama_index/pull/19875)) ##### llama-index-storage-kvstore-mongodb \[0.5.0] - Use PyMongo Asynchronous API instead of Motor ([#​19875](https://redirect.github.com/run-llama/llama_index/pull/19875)) ##### llama-index-tools-valyu \[0.5.0] - Add Valyu Extractor and Fast mode ([#​19915](https://redirect.github.com/run-llama/llama_index/pull/19915)) ##### llama-index-vector-stores-azureaisearch \[0.4.2] - Fix/llama index vector stores azureaisearch fix ([#​19800](https://redirect.github.com/run-llama/llama_index/pull/19800)) ##### llama-index-vector-stores-azurepostgresql \[0.1.0] - Add support for Azure PostgreSQL 
([#​19709](https://redirect.github.com/run-llama/llama_index/pull/19709)) ##### llama-index-vector-stores-qdrant \[0.8.5] - Add proper compat for old sparse vectors ([#​19882](https://redirect.github.com/run-llama/llama_index/pull/19882)) ##### llama-index-vector-stores-singlestoredb \[0.4.2] - Fix SQLi Vulnerability in SingleStore Db ([#​19914](https://redirect.github.com/run-llama/llama_index/pull/19914)) </details> --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS45Ny4xMCIsInVwZGF0ZWRJblZlciI6IjQxLjk3LjEwIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com> |
||
|
|
a5ef166fcb |
chore(deps): update dependency llama-index-llms-google-genai to v0.5.1 (#1529)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | llama-index-llms-google-genai | `==0.5.0` -> `==0.5.1` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS45Ny4xMCIsInVwZGF0ZWRJblZlciI6IjQxLjk3LjEwIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com> Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com> |
||
|
|
8c4e6f88b7 |
chore(deps): update dependency toolbox-llamaindex to v0.5.2 (#1532)
This PR contains the following updates:
| Package | Change | Age | Confidence |
|---|---|---|---|
| [toolbox-llamaindex](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python) ([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-llamaindex/CHANGELOG.md)) | `==0.5.1` -> `==0.5.2` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) |
---
> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
---
### Release Notes
<details>
<summary>googleapis/mcp-toolbox-sdk-python (toolbox-llamaindex)</summary>
### [`v0.5.2`](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/releases/tag/toolbox-core-v0.5.2): toolbox-core: v0.5.2
[Compare Source](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/compare/toolbox-llamaindex-v0.5.1...toolbox-llamaindex-v0.5.2)
##### Miscellaneous Chores
- **deps:** update python-nonmajor ([#​372](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/372)) ([d915624](
|
||
|
|
bae94285a6 |
chore(deps): update dependency toolbox-llamaindex to v0.5.1 (#1510)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | [toolbox-llamaindex](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python) ([changelog](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-llamaindex/CHANGELOG.md)) | `==0.5.0` -> `==0.5.1` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Release Notes <details> <summary>googleapis/mcp-toolbox-sdk-python (toolbox-llamaindex)</summary> ### [`v0.5.1`](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/releases/tag/toolbox-core-v0.5.1): toolbox-core: v0.5.1 [Compare Source](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/compare/toolbox-llamaindex-v0.5.0...toolbox-llamaindex-v0.5.1) ##### Bug Fixes - **toolbox-core:** Use typing.Annotated for reliable parameter descriptions instead of docstrings ([#​371](https://redirect.github.com/googleapis/mcp-toolbox-sdk-python/issues/371)) ([eb76680]( |
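The toolbox-core 0.5.1 fix above replaces docstring parsing with `typing.Annotated` metadata for parameter descriptions. The sketch below shows only the general Python pattern involved (the function and parameter names are made up for illustration); it is not the SDK's internal implementation.

```python
import inspect
from typing import Annotated


# Hypothetical tool-style function: each parameter carries its description as
# Annotated metadata instead of relying on a docstring that has to be parsed.
def get_weather(
    city: Annotated[str, "Name of the city to look up"],
    units: Annotated[str, "Either 'metric' or 'imperial'"] = "metric",
) -> str:
    """Return a short weather summary."""
    return f"Weather for {city} ({units})"


# A framework can read the descriptions straight off the signature.
for name, param in inspect.signature(get_weather).parameters.items():
    ann = param.annotation
    if hasattr(ann, "__metadata__"):  # Annotated[...] exposes its extras here
        print(f"{name}: {ann.__origin__.__name__} - {ann.__metadata__[0]}")
```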
||
|
|
10a0c09c1f |
chore(deps): update dependency llama-index to v0.14.2 (#1487)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.13.6` -> `==0.14.2` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Release Notes <details> <summary>run-llama/llama_index (llama-index)</summary> ### [`v0.14.2`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-09-15) [Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.1...v0.14.2) ##### `llama-index-core` \[0.14.2] - fix: handle data urls in ImageBlock ([#​19856](https://redirect.github.com/run-llama/llama_index/issues/19856)) - fix: Move IngestionPipeline docstore document insertion after transformations ([#​19849](https://redirect.github.com/run-llama/llama_index/issues/19849)) - fix: Update IngestionPipeline async document store insertion ([#​19868](https://redirect.github.com/run-llama/llama_index/issues/19868)) - chore: remove stepwise usage of workflows from code ([#​19877](https://redirect.github.com/run-llama/llama_index/issues/19877)) ##### `llama-index-embeddings-fastembed` \[0.5.0] - feat: make fastembed cpu or gpu optional ([#​19878](https://redirect.github.com/run-llama/llama_index/issues/19878)) ##### `llama-index-llms-deepseek` \[0.2.2] - feat: pass context\_window to super in deepseek llm ([#​19876](https://redirect.github.com/run-llama/llama_index/issues/19876)) ##### `llama-index-llms-google-genai` \[0.5.0] - feat: Add GoogleGenAI FileAPI support for large files ([#​19853](https://redirect.github.com/run-llama/llama_index/issues/19853)) ##### `llama-index-readers-solr` \[0.1.0] - feat: Add Solr reader integration ([#​19843](https://redirect.github.com/run-llama/llama_index/issues/19843)) ##### `llama-index-retrievers-alletra-x10000-retriever` \[0.1.0] - feat: add AlletraX10000Retriever integration ([#​19798](https://redirect.github.com/run-llama/llama_index/issues/19798)) ##### `llama-index-vector-stores-oracledb` \[0.3.2] - feat: OraLlamaVS Connection Pool Support + Filtering ([#​19412](https://redirect.github.com/run-llama/llama_index/issues/19412)) ##### `llama-index-vector-stores-postgres` \[0.6.8] - feat: Add `customize_query_fn` to PGVectorStore ([#​19847](https://redirect.github.com/run-llama/llama_index/issues/19847)) ### [`v0.14.1`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-09-14) [Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.0...v0.14.1) ##### `llama-index-core` \[0.14.1] - feat: add verbose option to RetrieverQueryEngine for detailed output ([#​19807](https://redirect.github.com/run-llama/llama_index/issues/19807)) - feat: feat: add support for additional kwargs in `aget_text_embedding_batch` method ([#​19808](https://redirect.github.com/run-llama/llama_index/issues/19808)) - feat: add `thinking_delta` field to AgentStream events to expose llm reasoning ([#​19785](https://redirect.github.com/run-llama/llama_index/issues/19785)) - fix: Bug fix agent streaming thinking delta pydantic validation ([#​19828](https://redirect.github.com/run-llama/llama_index/issues/19828)) - fix: handle positional args and kwargs both in tool calling ([#​19822](https://redirect.github.com/run-llama/llama_index/issues/19822)) ##### `llama-index-instrumentation` \[0.4.1] - feat: add sync event/handler support ([#​19825](https://redirect.github.com/run-llama/llama_index/issues/19825)) ##### `llama-index-llms-google-genai` 
\[0.4.0] - feat: Add VideoBlock and GoogleGenAI video input support ([#​19823](https://redirect.github.com/run-llama/llama_index/issues/19823)) ##### `llama-index-llms-ollama` \[0.7.3] - fix: Fix bug using Ollama with Agents and None tool\_calls in final message ([#​19844](https://redirect.github.com/run-llama/llama_index/issues/19844)) ##### `llama-index-llms-vertex` \[0.6.1] - fix: align complete/acomplete responses ([#​19806](https://redirect.github.com/run-llama/llama_index/issues/19806)) ##### `llama-index-readers-confluence` \[0.4.3] - chore: Bump version constraint for atlassian-python-api to include 4.x ([#​19824](https://redirect.github.com/run-llama/llama_index/issues/19824)) ##### `llama-index-readers-github` \[0.6.2] - fix: Make url optional ([#​19851](https://redirect.github.com/run-llama/llama_index/issues/19851)) ##### `llama-index-readers-web` \[0.5.3] - feat: Add OlostepWebReader Integration ([#​19821](https://redirect.github.com/run-llama/llama_index/issues/19821)) ##### `llama-index-tools-google` \[0.6.2] - feat: Add optional creds argument to GoogleCalendarToolSpec ([#​19826](https://redirect.github.com/run-llama/llama_index/issues/19826)) ##### `llama-index-vector-stores-vectorx` \[0.1.0] - feat: Add vectorx vectorstore ([#​19758](https://redirect.github.com/run-llama/llama_index/issues/19758)) ### [`v0.14.0`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-09-08) [Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.13.6...v0.14.0) **NOTE:** All packages have been bumped to handle the latest llama-index-core version. ##### `llama-index-core` \[0.14.0] - breaking: bumped `llama-index-workflows` dependency to 2.0 - Improve stacktraces clarity by avoiding wrapping errors in WorkflowRuntimeError - Remove deprecated checkpointer feature - Remove deprecated sub-workflows feature - Remove deprecated `send_event` method from Workflow class (still existing on the Context class) - Remove deprecated `stream_events()` methods from Workflow class (still existing on the Context class) - Remove deprecated support for stepwise execution ##### `llama-index-llms-openai` \[0.5.6] - feat: add support for document blocks in openai chat completions ([#​19809](https://redirect.github.com/run-llama/llama_index/issues/19809)) </details> --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS45Ny4xMCIsInVwZGF0ZWRJblZlciI6IjQxLjk3LjEwIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> |
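The 0.14.0 breaking changes listed above remove `send_event` from the `Workflow` class while keeping it on `Context`. Below is a minimal sketch of what that looks like in user code, assuming the current `llama_index.core.workflow` API; the event and class names are invented for illustration.

```python
import asyncio

from llama_index.core.workflow import (
    Context,
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)


class WorkItem(Event):
    payload: str


class FanOut(Workflow):
    @step
    async def start(self, ctx: Context, ev: StartEvent) -> WorkItem:
        # Extra events are emitted through the Context handle, not the Workflow.
        ctx.send_event(WorkItem(payload="second item"))
        return WorkItem(payload="first item")

    @step
    async def handle(self, ctx: Context, ev: WorkItem) -> StopEvent:
        return StopEvent(result=ev.payload)


async def main() -> None:
    print(await FanOut(timeout=30).run())


asyncio.run(main())
```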
||
|
|
cf65ba1d31 |
chore(deps): update dependency llama-index-llms-google-genai to v0.5.0 (#1488)
This PR contains the following updates: | Package | Change | Age | Confidence | |---|---|---|---| | llama-index-llms-google-genai | `==0.3.0` -> `==0.5.0` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) | --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS45Ny4xMCIsInVwZGF0ZWRJblZlciI6IjQxLjk3LjEwIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119--> Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com> |
||
|
|
00e1c4c3c6 |
test: added tests for python quickstart (#1196)
Added quickstart_test.py files for each Python sample, which compile and run the agent as a standalone application to validate its end-to-end functionality. The test condition ensures the sample runs to completion and produces output confirming the agent does not break. Additionally, I introduced a secondary check for essential keywords from a golden.txt file, logging their presence without failing the test. To run the test file, execute this command from the terminal: ``` ORCH_NAME=adk pytest ``` --------- |
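For context on how a quickstart test like the one described above can be structured, here is a rough sketch; the script name, golden.txt location, and keyword handling are assumptions for illustration, not the exact contents of the repository's quickstart_test.py.

```python
# quickstart_test.py (illustrative sketch; the real test in the repo may differ)
import os
import pathlib
import subprocess

GOLDEN_FILE = pathlib.Path(__file__).parent / "golden.txt"  # assumed location


def test_quickstart_runs_to_completion():
    orch = os.environ.get("ORCH_NAME", "adk")
    # Run the sample end to end as a standalone process (script name assumed).
    result = subprocess.run(
        ["python", f"{orch}_quickstart.py"],
        capture_output=True,
        text=True,
        timeout=300,
    )

    # Hard requirement: the agent must finish cleanly and produce some output.
    assert result.returncode == 0, result.stderr
    assert result.stdout.strip(), "quickstart produced no output"

    # Soft check: log missing golden keywords without failing the test.
    if GOLDEN_FILE.exists():
        for keyword in GOLDEN_FILE.read_text().splitlines():
            if keyword and keyword.lower() not in result.stdout.lower():
                print(f"warning: keyword not found in output: {keyword!r}")
```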
Added quickstart_test.py files for each Python sample, which compile and run the agent as a standalone application to validate its end-to-end functionality. The test condition ensures the sample runs to completion and produces an output which confirms the agent is not breaking. Additionally, i introduced a secondary check for essential keywords from a golden.txt file, logging their presence without failing the test. Running test file: execute this cmd from terminal ``` ORCH_NAME=adk pytest ``` --------- |