## Read CSV

### What it is
A block that reads and processes CSV (Comma-Separated Values) files.

### What it does
This block takes CSV content as input, processes it, and outputs the data as individual rows and a complete dataset.

### How it works
The Read CSV block takes the contents of a CSV file and splits it into rows and columns. It can handle different formatting options, such as custom delimiters and quote characters. The block processes the CSV data and outputs each row individually, as well as the complete dataset.
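
For illustration, the behavior described above can be sketched with Python's standard `csv` module. This is a simplified stand-in, not the block's actual implementation, and the sample data is made up:

```python
import csv
import io

# Simplified sketch of the behavior described above -- not the block's real code.
contents = 'name,age\n"Smith, Alice",30\nBob,25\n'

reader = csv.DictReader(
    io.StringIO(contents),
    delimiter=",",    # "Delimiter" input
    quotechar='"',    # "Quotechar" input
    escapechar="\\",  # "Escapechar" input
)

all_data = [dict(row) for row in reader]  # the complete dataset ("All_data")
for row in all_data:                      # each row is also emitted on its own ("Row")
    print(row)
# {'name': 'Smith, Alice', 'age': '30'}
# {'name': 'Bob', 'age': '25'}
```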

### Inputs
| Input | Description |
|---|---|
| Contents | The CSV data as a string |
| Delimiter | The character used to separate values in the CSV (default is comma ",") |
| Quotechar | The character used to enclose fields containing special characters (default is double quote '"') |
| Escapechar | The character used to escape special characters (default is backslash "\\") |
| Has_header | Indicates whether the CSV has a header row (default is true) |
| Skip_rows | The number of rows to skip at the beginning of the CSV (default is 0) |
| Strip | Whether to remove leading and trailing whitespace from values (default is true) |
| Skip_columns | A list of column names to exclude from the output (default is an empty list) |

### Outputs
| Output | Description |
|---|---|
| Row | A dictionary representing a single row of the CSV, with column names as keys and cell values as values |
| All_data | A list of dictionaries containing all rows from the CSV |
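
As a rough sketch of how the remaining inputs (Has_header, Skip_rows, Strip, Skip_columns) shape these outputs, assuming made-up sample data and variable names rather than the block's real code:

```python
import csv
import io

# Rough sketch of the header/skip/strip options -- names and data are illustrative only.
contents = "# export header line\nname, email ,notes\n Alice , a@example.com , vip \n"
skip_rows, has_header, strip, skip_columns = 1, True, True, ["notes"]

lines = contents.splitlines()[skip_rows:]               # "Skip_rows": drop leading rows
rows = list(csv.reader(io.StringIO("\n".join(lines))))

header = rows[0] if has_header else [str(i) for i in range(len(rows[0]))]
if strip:
    header = [h.strip() for h in header]                # "Strip": trim whitespace

all_data = []
for raw in (rows[1:] if has_header else rows):
    values = [v.strip() for v in raw] if strip else raw
    all_data.append({k: v for k, v in zip(header, values)
                     if k not in skip_columns})         # "Skip_columns": drop listed columns

print(all_data)  # [{'name': 'Alice', 'email': 'a@example.com'}]
```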

### Possible use case
This block could be used in a data analysis pipeline to import and process customer information from a CSV file. The individual rows could be used for real-time processing, while the complete dataset could be used for batch analysis or reporting.