Update requirements.txt

commit 6f2171acec (parent f368c3dbba)
Author: CRD716
Date: 2023-10-22 23:24:30 -05:00
Committed by: Senko Rasic
3 changed files with 17 additions and 18 deletions


@@ -58,10 +58,9 @@ https://github.com/Pythagora-io/gpt-pilot/assets/10895136/0495631b-511e-451b-93d
 # 🔌 Requirements
-- **Python 3.9+**
-- **PostgreSQL** (optional, projects default is SQLite)
-- DB is needed for multiple reasons like continuing app development. If you have to stop at any point or the app crashes, go back to a specific step so that you can change some later steps in development, and easier debugging, in future we will add functionality to update project (change some things in existing project or add new features to the project and so on..)
+- **Python 3.9-3.12**
+- **PostgreSQL** (Optional, default database is SQLite)
+- DB is needed for multiple reasons like continuing app development. If you have to stop at any point or the app crashes, go back to a specific step so that you can change some later steps in development, and easier debugging, in future we will add functionality to update project (change some things in existing project or add new features to the project and so on).
 # 🚦How to start using gpt-pilot?
@@ -88,7 +87,7 @@ All generated code will be stored in the folder `workspace` inside the folder na
 ## 🐳 How to start gpt-pilot in docker?
 1. `git clone https://github.com/Pythagora-io/gpt-pilot.git` (clone the repo)
-2. Update the `docker-compose.yml` environment variables, which can be done via `docker compose config` . if you use local model, please go to [https://localai.io/basics/getting_started/](https://localai.io/basics/getting_started/) start.
+2. Update the `docker-compose.yml` environment variables, which can be done via `docker compose config`. If you wish to use a local model, please go to [https://localai.io/basics/getting_started/](https://localai.io/basics/getting_started/).
 3. By default, GPT Pilot will read & write to `~/gpt-pilot-workspace` on your machine, you can also edit this in `docker-compose.yml`
 4. run `docker compose build`. this will build a gpt-pilot container for you.
 5. run `docker compose up`.
@@ -139,7 +138,7 @@ See also [What's the purpose of arguments.password / User.password?](https://git
 ## `advanced`
 The Architect, by default, favors certain technologies, including:
 - Node.JS
 - MongoDB
@@ -225,7 +224,7 @@ Here are a couple of example apps GPT Pilot created by itself:
 2. **The app needs to be written step by step as a developer would write it** - Let's say you want to create a simple app, know everything you need to code, and have the entire architecture in your head. Even then, you won't code it out entirely, then run it for the first time and debug all the issues simultaneously. Instead, you will implement something simple, like add routes, run it, see how it works, and then move on to the next task. This way, you can debug issues as they arise. The same should be the case when AI codes. It will make mistakes for sure, so in order for it to have an easier time debugging issues and for the developer to understand what is happening, the AI shouldn't just spit out the entire codebase at once. Instead, the app should be developed step by step just like a developer would code it - e.g. setup routes, add database connection, etc. <br><br>
 3. **The approach needs to be scalable** so that AI can create a production-ready app:
    1. **Context rewinding** - for solving each development task, the context size of the first message to the LLM has to be relatively the same. For example, the context size of the first LLM message while implementing development task #5 has to be more or less the same as the first message while developing task #50. Because of this, the conversation needs to be rewound to the first message upon each task. [See the diagram here](https://blogpythagora.files.wordpress.com/2023/08/pythagora-product-development-frame-3-1.jpg?w=1714).
    2. **Recursive conversations** are LLM conversations set up to be used “recursively”. For example, if GPT Pilot detects an error, it needs to debug it, but lets say that another error happens during the debugging process. Then, GPT Pilot needs to stop debugging the first issue, fix the second one, and get back to fixing the first issue. This is a very important concept that, I believe, needs to work to make AI build large and scalable apps by itself. It works by rewinding the context and explaining each error in the recursion separately. Once the deepest level error is fixed, we move up in the recursion and continue fixing that error. We do this until the entire recursion is completed.
    3. **TDD (Test Driven Development)** - for GPT Pilot to be able to scale the codebase, it will need to be able to create new code without breaking previously written code. There is no better way to do this than working with TDD methodology. For each code that GPT Pilot writes, it needs to write tests that check if the code works as intended so that all previous tests can be run whenever new changes are made.
 The idea is that AI won't be able to (at least in the near future) create apps from scratch without the developer being involved. That's why we created an interactive tool that generates code but also requires the developer to check each step so that they can understand what's going on and so that the AI can have a better overview of the entire codebase.
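The "recursive conversations" idea in the README context above can be sketched with a toy driver. Here `attempt_fix` is a stand-in for a single LLM debugging step (both function names are hypothetical, not from the gpt-pilot codebase): working on an error may surface one nested error, which is resolved depth-first before resuming the original.

```python
def attempt_fix(error, log):
    # Toy stand-in for one LLM debugging step: "fixing" an error may
    # reveal one nested error, which must be resolved first.
    log.append(f"fixing {error['name']}")
    return error["causes"].pop() if error["causes"] else None

def fix_error(error, log, depth=0):
    # Rewind-and-recurse loop: descend into nested errors, then come
    # back up and resume the original, with a safety depth limit.
    if depth > 10:
        raise RuntimeError("giving up: error recursion too deep")
    while True:
        nested = attempt_fix(error, log)
        if nested is None:
            return
        fix_error(nested, log, depth + 1)

log = []
err = {"name": "E1", "causes": [{"name": "E2", "causes": [{"name": "E3", "causes": []}]}]}
fix_error(err, log)
print(log)  # ['fixing E1', 'fixing E2', 'fixing E3', 'fixing E2', 'fixing E1']
```

The log shows the pattern the README describes: the deepest error (E3) is fixed first, then each outer error is resumed on the way back up.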


@@ -116,7 +116,7 @@ def get_os_info():
     }
     if os_info["OS"] == "Linux":
-        os_info["Distribution"] = ' '.join(distro.linux_distribution(full_distribution_name=True))
+        os_info["Distribution"] = distro.name(pretty=True)
     elif os_info["OS"] == "Windows":
         os_info["Win32 Version"] = ' '.join(platform.win32_ver())
     elif os_info["OS"] == "Mac":
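`distro.linux_distribution()` is deprecated in the third-party `distro` package (and later removed), which is why the diff switches to `distro.name(pretty=True)`. A minimal sketch of the resulting pattern — the surrounding function body is assumed from the diff context, and the import is guarded so the sketch also runs where `distro` is not installed:

```python
import platform

try:
    import distro  # third-party; pinned as distro==1.8.0 in requirements.txt
except ImportError:
    distro = None  # keep the sketch runnable without the dependency

def get_os_info():
    # Assumed shape of the surrounding function; only keys visible in
    # the diff context are reproduced here.
    os_info = {"OS": platform.system()}
    if os_info["OS"] == "Linux" and distro is not None:
        # distro.name(pretty=True) returns e.g. "Ubuntu 22.04.3 LTS",
        # replacing the deprecated distro.linux_distribution()
        os_info["Distribution"] = distro.name(pretty=True)
    elif os_info["OS"] == "Windows":
        os_info["Win32 Version"] = " ".join(platform.win32_ver())
    return os_info

info = get_os_info()
```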


@@ -1,14 +1,14 @@
 blessed==1.20.0
-certifi==2023.5.7
-charset-normalizer==3.2.0
+certifi==2023.7.22
+charset-normalizer==3.3.2
 colorama==0.4.6
 distro==1.8.0
 idna==3.4
-jsonschema==4.19.1
+jsonschema==4.19.2
 Jinja2==3.1.2
 MarkupSafe==2.1.3
-peewee==3.16.2
-prompt-toolkit==3.0.39
+peewee==3.16.3
+prompt-toolkit==3.0.40
 psutil==5.9.6
 psycopg2-binary==2.9.9
 python-dotenv==1.0.0
@@ -17,11 +17,11 @@ pytest==7.4.2
 pyyaml==6.0.1
 questionary==1.10.0
 readchar==4.0.5
-regex==2023.6.3
+regex==2023.10.3
 requests==2.31.0
 six==1.16.0
 termcolor==2.3.0
-tiktoken==0.4.0
-urllib3==1.26.6
-wcwidth==0.2.6
-yaspin==2.4.0
+tiktoken==0.5.1
+urllib3==1.26.7
+wcwidth==0.2.8
+yaspin==2.5.0
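After bumping pins like these, it can be useful to check that the installed environment actually matches. A small standard-library sketch — the package names and versions come from the diff above, but only a sample of the pins is checked, and the status strings are this sketch's own convention:

```python
from importlib.metadata import version, PackageNotFoundError

# A sample of the pins updated in this commit
pins = {"certifi": "2023.7.22", "tiktoken": "0.5.1", "yaspin": "2.5.0"}

results = {}
for name, pinned in pins.items():
    try:
        installed = version(name)
        results[name] = "ok" if installed == pinned else f"mismatch ({installed})"
    except PackageNotFoundError:
        results[name] = "not installed"

for name, status in results.items():
    print(f"{name}=={pins[name]}: {status}")
```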