- Add `generate_block_docs.py` script that introspects block code to generate markdown
- Support manual content preservation via `<!-- MANUAL: -->` markers (see the sketch after this list)
- Add `migrate_block_docs.py` to preserve existing manual content from git HEAD
- Add CI workflow (`docs-block-sync.yml`) to fail if docs drift from code
- Add Claude PR review workflow (`docs-claude-review.yml`) for doc changes
- Add manual LLM enhancement workflow (`docs-enhance.yml`)
- Add GitBook configuration (`.gitbook.yaml`, `SUMMARY.md`)
- Fix non-deterministic category ordering (categories is a set)
- Add comprehensive test suite (32 tests)
- Generate docs for 444 blocks with 66 preserved manual sections
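A minimal sketch of how marker-based preservation can work (the marker grammar, regex, and helper names here are illustrative assumptions, not the script's actual code):

```python
import re

# Assumed marker format: <!-- MANUAL: section-name --> ... <!-- /MANUAL -->
MANUAL_RE = re.compile(
    r"<!--\s*MANUAL:\s*(?P<name>[\w-]+)\s*-->(?P<body>.*?)<!--\s*/MANUAL\s*-->",
    re.DOTALL,
)

def extract_manual_sections(markdown: str) -> dict[str, str]:
    """Collect hand-written sections from an existing doc, keyed by marker name."""
    return {m["name"]: m["body"] for m in MANUAL_RE.finditer(markdown)}

def merge_manual_sections(generated: str, preserved: dict[str, str]) -> str:
    """Re-insert preserved bodies into a freshly generated doc."""
    def _replace(m: re.Match) -> str:
        body = preserved.get(m["name"], m["body"])
        return f"<!-- MANUAL: {m['name']} -->{body}<!-- /MANUAL -->"
    return MANUAL_RE.sub(_replace, generated)
```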
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request: -->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Extensively test code generation for the docs pages
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Introduces an automated documentation pipeline for blocks and integrates it into CI.
>
> - Adds `scripts/generate_block_docs.py` (+ tests) to introspect blocks and generate `docs/integrations/**`, preserving `<!-- MANUAL: -->` sections
> - New CI workflows: **docs-block-sync** (fails if docs drift), **docs-claude-review** (AI review for block/docs PRs), and **docs-enhance** (optional LLM improvements)
> - Updates existing Claude workflows to use `CLAUDE_CODE_OAUTH_TOKEN` instead of `ANTHROPIC_API_KEY`
> - Improves numerous block descriptions/typos and links across backend blocks to standardize docs output
> - Commits initial generated docs including `docs/integrations/README.md` and many provider/category pages
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 631e53e0f6. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Read CSV

### What it is
A block that reads and processes CSV (Comma-Separated Values) files.

### What it does
This block takes CSV content as input, processes it, and outputs the data both as individual rows and as a complete dataset.

### How it works
The Read CSV block takes the contents of a CSV file and splits it into rows and columns. It supports formatting options such as custom delimiters, quote characters, and escape characters. As it processes the CSV data, the block outputs each row individually and then emits the complete dataset.
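As a rough illustration, this behaviour maps onto Python's standard `csv` module along these lines (a minimal sketch; the block's real parameter names and implementation may differ):

```python
import csv
import io

def read_csv(contents: str, delimiter: str = ",", quotechar: str = '"',
             escapechar: str = "\\", has_header: bool = True,
             skip_rows: int = 0, strip: bool = True,
             skip_columns: list[str] | None = None):
    """Yield one dict per CSV row, mirroring the block's Row output."""
    skip_columns = skip_columns or []
    reader = csv.reader(io.StringIO(contents), delimiter=delimiter,
                        quotechar=quotechar, escapechar=escapechar)
    rows = list(reader)[skip_rows:]
    if not rows:
        return

    if has_header:
        header, rows = rows[0], rows[1:]
    else:
        # Without a header row, fall back to positional column names
        header = [str(i) for i in range(len(rows[0]))]

    for values in rows:
        if strip:
            values = [v.strip() for v in values]
        yield {key: value for key, value in zip(header, values)
               if key not in skip_columns}
```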
### Inputs
| Input | Description |
|---|---|
| Contents | The CSV data as a string |
| Delimiter | The character used to separate values in the CSV (default is comma ",") |
| Quotechar | The character used to enclose fields containing special characters (default is double quote '"') |
| Escapechar | The character used to escape special characters (default is backslash `\`) |
| Has_header | Indicates whether the CSV has a header row (default is true) |
| Skip_rows | The number of rows to skip at the beginning of the CSV (default is 0) |
| Strip | Whether to remove leading and trailing whitespace from values (default is true) |
| Skip_columns | A list of column names to exclude from the output (default is an empty list) |
### Outputs
| Output | Description |
|---|---|
| Row | A dictionary representing a single row of the CSV, with column names as keys and cell values as values |
| All_data | A list of dictionaries containing all rows from the CSV |
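Using the hypothetical `read_csv` sketch above, a small input illustrates both outputs (values are illustrative only):

```python
contents = "name,age\nAlice, 30\nBob, 25"

rows = list(read_csv(contents))
# Row outputs, emitted one per iteration:
#   {"name": "Alice", "age": "30"}   # whitespace stripped (Strip = true)
#   {"name": "Bob", "age": "25"}
all_data = rows  # All_data: every row collected into a single list
```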
### Possible use case
This block fits a data analysis pipeline that imports customer information from a CSV file: the individual rows can feed real-time processing, while the complete dataset supports batch analysis or reporting.