Command for pre-seeding missing #3042

Closed
1 task done
psvensson opened this issue Apr 23, 2023 · 7 comments
Labels
docker, setup (Issues with getting Auto-GPT setup on local machines), Stale

Comments

@psvensson
Contributor

⚠️ Search for existing issues first ⚠️

  • I have searched the existing issues, and there is no existing issue for my problem

Which Operating System are you using?

Docker

Which version of Auto-GPT are you using?

Master (branch)

GPT-3 or GPT-4?

GPT-3.5

Steps to reproduce 🕹

The README (https://github.com/Significant-Gravitas/Auto-GPT/blob/master/docs/configuration/memory.md) says that pre-seeding can be done by running the command python data_ingestion.py. However, this does not work in Docker.

The file data_ingestion.py is not there.

On Discord, Bill Schumacher thought that the functionality had now moved into the autogpt module, but it was not possible to start it from there either (running exec inside a Docker container).

Current behavior 😯

python -m autogpt.data_ingestion
/usr/local/bin/python: No module named autogpt.data_ingestion

Expected behavior 🤔

Being able to pre-seed memory, and having the README reflect how the system works, please.

Your prompt 📝

# no prompt

Your Logs 📒

python -m autogpt:data_ingestion
/usr/local/bin/python: No module named autogpt:data_ingestion

@mikehnta00

OK, so the file didn't make it into your Docker container. You have to mount that file location into the Docker container (or something like that). Ask ChatGPT how to bring files into your Docker container.

I used Docker Compose, and in that .yaml file you will see other file locations that are set up to be brought into the container when it's created.
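A volume mount along these lines would bring the file into the container. This is only a sketch: the service name, image, and host/container paths here are assumptions for illustration, not taken from the actual Auto-GPT docker-compose.yml.

```yaml
# Sketch only: service/image names and paths are illustrative assumptions.
services:
  auto-gpt:
    image: significantgravitas/auto-gpt
    volumes:
      # Mount the host copy of data_ingestion.py into the app home inside the container
      - ./data_ingestion.py:/home/appuser/data_ingestion.py
```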

@psvensson
Contributor Author

> OK, so the file didn't make it into your Docker container. You have to mount that file location into the Docker container (or something like that). Ask ChatGPT how to bring files into your Docker container.
>
> I used Docker Compose, and in that .yaml file you will see other file locations that are set up to be brought into the container when it's created.

I did try that, actually. I copied the file into auto_gpt_workspace and tried to execute it from different directories, but I always got a message that the module autogpt could not be found.

But wait, oh wow.

I then set the PYTHONPATH env var (which was not set in the container at all) while trying to run the command as if it were inside the autogpt module.

Anyway, when I then tried again to just run python ./auto_gpt_workspace/data_ingestion.py,

it worked.

So two things are needed, then:

  1. Make sure that the Docker container has correct PYTHONPATH set (to the home directory)
  2. Copy over data_ingestion.py
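The PYTHONPATH half of the fix can be sanity-checked outside Docker. This is a minimal sketch (the /tmp/appuser layout and the one-line data_ingestion.py stub are made up for illustration) of why an unset PYTHONPATH breaks `python -m`-style invocation from an unrelated directory:

```shell
# Hypothetical layout mirroring the container: an autogpt package under an app home.
mkdir -p /tmp/appuser/autogpt
touch /tmp/appuser/autogpt/__init__.py
echo 'print("ingestion ok")' > /tmp/appuser/autogpt/data_ingestion.py

# Without PYTHONPATH, running from an unrelated directory cannot find the module:
cd /tmp
python3 -m autogpt.data_ingestion 2>/dev/null || echo "module not found"

# Pointing PYTHONPATH at the app home fixes the module lookup:
PYTHONPATH=/tmp/appuser python3 -m autogpt.data_ingestion
```

The same mechanism explains the original symptom: the interpreter resolves `-m package.module` against sys.path, which PYTHONPATH extends.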

@k-boikov added the docker and setup (Issues with getting Auto-GPT setup on local machines) labels Apr 23, 2023
@boings

boings commented Apr 24, 2023

I created a pull request which copies over data_ingestion.py into the container. I did not, however, run into any issues with python not being on the path...

@psvensson
Contributor Author

I don't know Python, but the problem was not that the python program was not on the path; the problem was that an apparently important Python environment variable called PYTHONPATH was not set.

Could you add

export PYTHONPATH=/home/appuser

to the PR?
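If the fix belongs in the image itself rather than in a compose volume, the two changes could be a couple of Dockerfile lines. This is only a sketch, assuming /home/appuser is the app home (as reported above) and that data_ingestion.py sits at the build-context root; neither is verified against the repo's actual Dockerfile.

```dockerfile
# Sketch: paths assumed from this thread, not verified against the repo's Dockerfile.
COPY data_ingestion.py /home/appuser/data_ingestion.py
ENV PYTHONPATH=/home/appuser
```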

@Boostrix
Contributor

Boostrix commented May 10, 2023

What is the status of this?
There is a handful of related requests here suggesting introducing an "inspect workspace" command (see #528, a portion of which I implemented to at least partially recover a crashed agent's workspace as part of PR #4063).

I think it would be a worthwhile addition to expose some sort of "inspect" command, equivalent to list_files + read_files, so that the agent can form a hypothesis based on path/location, file names, extensions, creation timestamps, etc. A dedicated command for "pre-seeding" would not seem like such a bad idea, actually.

@github-actions
Contributor

github-actions bot commented Sep 6, 2023

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

@github-actions github-actions bot added the Stale label Sep 6, 2023
@github-actions
Contributor

This issue was closed automatically because it has been stale for 10 days with no activity.

@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Sep 18, 2023
5 participants