Mirror of https://github.com/googleapis/genai-toolbox.git, synced 2026-02-02 03:05:17 -05:00
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [llama-index](https://redirect.github.com/run-llama/llama_index) | `==0.14.6` -> `==0.14.8` | [](https://docs.renovatebot.com/merge-confidence/) | [](https://docs.renovatebot.com/merge-confidence/) |

---

### Release Notes

<details>
<summary>run-llama/llama_index (llama-index)</summary>

### [`v0.14.8`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-11-10)

[Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.7...v0.14.8)

##### llama-index-core [0.14.8]

- Fix ReActOutputParser getting stuck when "Answer:" contains "Action:" ([#20098](https://redirect.github.com/run-llama/llama_index/pull/20098))
- Add buffer to image, audio, video and document blocks ([#20153](https://redirect.github.com/run-llama/llama_index/pull/20153))
- fix(agent): Handle multi-block ChatMessage in ReActAgent ([#20196](https://redirect.github.com/run-llama/llama_index/pull/20196))
- Fix/20209 ([#20214](https://redirect.github.com/run-llama/llama_index/pull/20214))
- Preserve Exception in ToolOutput ([#20231](https://redirect.github.com/run-llama/llama_index/pull/20231))
- fix weird pydantic warning ([#20235](https://redirect.github.com/run-llama/llama_index/pull/20235))

##### llama-index-embeddings-nvidia [0.4.2]

- docs: Edit pass and update example model ([#20198](https://redirect.github.com/run-llama/llama_index/pull/20198))

##### llama-index-embeddings-ollama [0.8.4]

- Added a test case (no code) to check the embedding through an actual connection to an Ollama server (after checking that the Ollama server exists) ([#20230](https://redirect.github.com/run-llama/llama_index/pull/20230))

##### llama-index-llms-anthropic [0.10.2]

- feat(llms/anthropic): Add support for RawMessageDeltaEvent in streaming ([#20206](https://redirect.github.com/run-llama/llama_index/pull/20206))
- chore: remove unsupported models ([#20211](https://redirect.github.com/run-llama/llama_index/pull/20211))

##### llama-index-llms-bedrock-converse [0.11.1]

- feat: integrate bedrock converse with tool call block ([#20099](https://redirect.github.com/run-llama/llama_index/pull/20099))
- feat: Update model name extraction to include 'jp' region prefix and … ([#20233](https://redirect.github.com/run-llama/llama_index/pull/20233))

##### llama-index-llms-google-genai [0.7.3]

- feat: google genai integration with tool block ([#20096](https://redirect.github.com/run-llama/llama_index/pull/20096))
- fix: non-streaming gemini tool calling ([#20207](https://redirect.github.com/run-llama/llama_index/pull/20207))
- Add token usage information in GoogleGenAI chat additional_kwargs ([#20219](https://redirect.github.com/run-llama/llama_index/pull/20219))
- bug fix google genai stream_complete ([#20220](https://redirect.github.com/run-llama/llama_index/pull/20220))

##### llama-index-llms-nvidia [0.4.4]

- docs: Edit pass and code example updates ([#20200](https://redirect.github.com/run-llama/llama_index/pull/20200))

##### llama-index-llms-openai [0.6.8]

- FixV2: Correct DocumentBlock type for OpenAI from 'input_file' to 'file' ([#20203](https://redirect.github.com/run-llama/llama_index/pull/20203))
- OpenAI v2 sdk support ([#20234](https://redirect.github.com/run-llama/llama_index/pull/20234))

##### llama-index-llms-upstage [0.6.5]

- OpenAI v2 sdk support ([#20234](https://redirect.github.com/run-llama/llama_index/pull/20234))

##### llama-index-packs-streamlit-chatbot [0.5.2]

- OpenAI v2 sdk support ([#20234](https://redirect.github.com/run-llama/llama_index/pull/20234))

##### llama-index-packs-voyage-query-engine [0.5.2]

- OpenAI v2 sdk support ([#20234](https://redirect.github.com/run-llama/llama_index/pull/20234))

##### llama-index-postprocessor-nvidia-rerank [0.5.1]

- docs: Edit pass ([#20199](https://redirect.github.com/run-llama/llama_index/pull/20199))

##### llama-index-readers-web [0.5.6]

- feat: Add ScrapyWebReader Integration ([#20212](https://redirect.github.com/run-llama/llama_index/pull/20212))
- Update Scrapy dependency to 2.13.3 ([#20228](https://redirect.github.com/run-llama/llama_index/pull/20228))

##### llama-index-readers-whisper [0.3.0]

- OpenAI v2 sdk support ([#20234](https://redirect.github.com/run-llama/llama_index/pull/20234))

##### llama-index-storage-kvstore-postgres [0.4.3]

- fix: Ensure schema creation only occurs if it doesn't already exist ([#20225](https://redirect.github.com/run-llama/llama_index/pull/20225))

##### llama-index-tools-brightdata [0.2.1]

- docs: add api key claim instructions ([#20204](https://redirect.github.com/run-llama/llama_index/pull/20204))

##### llama-index-tools-mcp [0.4.3]

- Added test case for issue 19211. No code change ([#20201](https://redirect.github.com/run-llama/llama_index/pull/20201))

##### llama-index-utils-oracleai [0.3.1]

- Update llama-index-core dependency to 0.12.45 ([#20227](https://redirect.github.com/run-llama/llama_index/pull/20227))

##### llama-index-vector-stores-lancedb [0.4.2]

- fix: FTS index recreation bug on every LanceDB query ([#20213](https://redirect.github.com/run-llama/llama_index/pull/20213))

### [`v0.14.7`](https://redirect.github.com/run-llama/llama_index/blob/HEAD/CHANGELOG.md#2025-10-30)

[Compare Source](https://redirect.github.com/run-llama/llama_index/compare/v0.14.6...v0.14.7)

##### llama-index-core [0.14.7]

- Feat/serpex tool integration ([#20141](https://redirect.github.com/run-llama/llama_index/pull/20141))
- Fix outdated error message about setting LLM ([#20157](https://redirect.github.com/run-llama/llama_index/pull/20157))
- Fixing some recently failing tests ([#20165](https://redirect.github.com/run-llama/llama_index/pull/20165))
- Fix: update lock to latest workflow and fix issues ([#20173](https://redirect.github.com/run-llama/llama_index/pull/20173))
- fix: ensure full docstring is used in FunctionTool ([#20175](https://redirect.github.com/run-llama/llama_index/pull/20175))
- fix api docs build ([#20180](https://redirect.github.com/run-llama/llama_index/pull/20180))

##### llama-index-embeddings-voyageai [0.5.0]

- Updating the VoyageAI integration ([#20073](https://redirect.github.com/run-llama/llama_index/pull/20073))

##### llama-index-llms-anthropic [0.10.0]

- feat: integrate anthropic with tool call block ([#20100](https://redirect.github.com/run-llama/llama_index/pull/20100))

##### llama-index-llms-bedrock-converse [0.10.7]

- feat: Add support for Bedrock Guardrails streamProcessingMode ([#20150](https://redirect.github.com/run-llama/llama_index/pull/20150))
- bedrock structured output optional force ([#20158](https://redirect.github.com/run-llama/llama_index/pull/20158))

##### llama-index-llms-fireworks [0.4.5]

- Update FireworksAI models ([#20169](https://redirect.github.com/run-llama/llama_index/pull/20169))

##### llama-index-llms-mistralai [0.9.0]

- feat: mistralai integration with tool call block ([#20103](https://redirect.github.com/run-llama/llama_index/pull/20103))

##### llama-index-llms-ollama [0.9.0]

- feat: integrate ollama with tool call block ([#20097](https://redirect.github.com/run-llama/llama_index/pull/20097))

##### llama-index-llms-openai [0.6.6]

- Allow setting temp of gpt-5-chat ([#20156](https://redirect.github.com/run-llama/llama_index/pull/20156))

##### llama-index-readers-confluence [0.5.0]

- feat(confluence): make SVG processing optional to fix pycairo install… ([#20115](https://redirect.github.com/run-llama/llama_index/pull/20115))

##### llama-index-readers-github [0.9.0]

- Add GitHub App authentication support ([#20106](https://redirect.github.com/run-llama/llama_index/pull/20106))

##### llama-index-retrievers-bedrock [0.5.1]

- Fixing some recently failing tests ([#20165](https://redirect.github.com/run-llama/llama_index/pull/20165))

##### llama-index-tools-serpex [0.1.0]

- Feat/serpex tool integration ([#20141](https://redirect.github.com/run-llama/llama_index/pull/20141))
- add missing toml info ([#20186](https://redirect.github.com/run-llama/llama_index/pull/20186))

##### llama-index-vector-stores-couchbase [0.6.0]

- Add Hyperscale and Composite Vector Indexes support for Couchbase vector-store ([#20170](https://redirect.github.com/run-llama/llama_index/pull/20170))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox).

Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com>
Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com>
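The whole change in this PR amounts to rewriting an exact `==` pin in a requirements file from `0.14.6` to `0.14.8`, the same edit Renovate performs automatically. As a minimal sketch of that operation (the helper name `bump_pin` and the requirements content are illustrative, not from this repo):

```python
import re

def bump_pin(requirements_text: str, package: str, new_version: str) -> str:
    """Rewrite an exact `package==version` pin to point at new_version.

    Only lines that start with the package name followed by `==` are
    touched; other requirements lines pass through unchanged.
    """
    pattern = rf"^({re.escape(package)}\s*==\s*)[\w.\-]+"
    return re.sub(pattern, rf"\g<1>{new_version}", requirements_text, flags=re.MULTILINE)

before = "llama-index==0.14.6\nrequests==2.32.3\n"
after = bump_pin(before, "llama-index", "0.14.8")
print(after)
```

After editing the pin, reinstalling with `pip install -r requirements.txt` picks up the new release; the changelog above lists what changes between 0.14.6 and 0.14.8.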