Command for pre-seeding missing #3042
Comments
Ok, so the file didn't make it into your Docker container. You have to mount that file location into the container (or something like that). Ask ChatGPT how to bring files into your Docker container. I used Docker Compose, and in that .yaml file you will see other file locations that are set up to be brought into the container when it's created.
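The suggestion above can be sketched as a Docker Compose bind mount. This is only a sketch, assuming a service named `auto-gpt` and an `/app` working directory inside the container; match the service name and paths to what your actual compose file already uses:

```yaml
# docker-compose.yml (fragment) -- service name and container paths
# are assumptions; copy the pattern from the existing volume entries.
services:
  auto-gpt:
    volumes:
      - ./data_ingestion.py:/app/data_ingestion.py
      - ./auto_gpt_workspace:/app/auto_gpt_workspace
```

After recreating the container, the file should be visible at the mounted path.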
I did try that actually. I copied the file into the auto_gpt_workspace and tried to execute it while standing in different directories, but I always got a message that the module autogpt could not be found. But wait, oh wow. I then set the PYTHONPATH env var (which was not set in the container at all), while trying to run the command as if it was inside the autogpt module. Anyway, when I then again tried to just run python ./auto_gpt_workspace/data_ingestion.py, it worked. So two things are needed then:
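The PYTHONPATH discovery above comes down to how Python resolves imports: `import autogpt` only succeeds if the directory containing the `autogpt` package is on `sys.path`, which is exactly what PYTHONPATH seeds at interpreter startup. A small self-contained demonstration (using a throwaway package name, not the real `autogpt`):

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package to stand in for the real 'autogpt' package.
pkg_root = tempfile.mkdtemp()
os.makedirs(os.path.join(pkg_root, "demopkg"))
with open(os.path.join(pkg_root, "demopkg", "__init__.py"), "w") as f:
    f.write("VALUE = 42\n")

# Without the package root on sys.path, the import fails with the same
# "No module named ..." error seen in the container.
try:
    importlib.import_module("demopkg")
    found_before = True
except ModuleNotFoundError:
    found_before = False

# Adding the root directory (which is what setting PYTHONPATH does at
# interpreter startup) makes the import succeed.
sys.path.insert(0, pkg_root)
mod = importlib.import_module("demopkg")
print(found_before, mod.VALUE)  # False 42
```

Setting `PYTHONPATH=/home/appuser` in the container has the same effect as the `sys.path.insert` line, but for every Python process started there.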
I created a pull request which copies data_ingestion.py into the container. I did not, however, run into any issues with Python not being on the path...
I don't know Python, but the problem was not that the Python program was not on the path; the problem was that an apparently important Python environment variable called PYTHONPATH was not set. Could you add export PYTHONPATH=/home/appuser to the PR?
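One way to bake that into the image rather than exporting it per shell session is a Dockerfile `ENV` instruction. A sketch, assuming `/home/appuser` (taken from the comment above) is where the `autogpt` package lives in the image; adjust to the actual location:

```dockerfile
# Dockerfile fragment: make the autogpt package importable for every
# process in the container. The path is an assumption from this thread.
ENV PYTHONPATH=/home/appuser
```

Alternatively, the same variable can be set under `environment:` in the Compose service definition.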
What is the status of this? I think it would be a worthwhile addition to expose some sort of "inspect" command, which would be the equivalent of list_files + read_files, so that the agent can form a hypothesis based on path/location, file names, extensions, creation timestamps, etc. A dedicated command for "pre-seeding" would not seem like such a bad idea, actually.
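The proposed "inspect" command (list_files + read_files combined) could look roughly like the sketch below. This is a hypothetical helper, not anything that exists in Auto-GPT; the function name, return shape, and preview size are all assumptions:

```python
import os
import time


def inspect_path(root, max_bytes=2048):
    """Hypothetical 'inspect' command: walk `root` and return, for each
    file, the metadata the comment above mentions (path, extension, size,
    modification time) plus a short content preview."""
    results = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            with open(path, "rb") as f:
                preview = f.read(max_bytes)
            results.append({
                "path": path,
                "ext": os.path.splitext(name)[1],
                "size": stat.st_size,
                "mtime": time.ctime(stat.st_mtime),
                "preview": preview.decode("utf-8", errors="replace"),
            })
    return results
```

An agent could call this once on the workspace and reason over the returned metadata before deciding which files to read in full.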
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity. |
Which Operating System are you using?
Docker
Which version of Auto-GPT are you using?
Master (branch)
GPT-3 or GPT-4?
GPT-3.5
Steps to reproduce 🕹
The README (https://github.com/Significant-Gravitas/Auto-GPT/blob/master/docs/configuration/memory.md) says that pre-seeding can be done by running the command python data_ingestion.py. However, this does not work in Docker.
The file data_ingestion.py is not there.
On Discord, Bill Schumacher thought that the functionality had now moved into the autogpt module, but it was not possible to start it from there either (running exec inside a Docker container).
Current behavior 😯
python -m autogpt.data_ingestion
/usr/local/bin/python: No module named autogpt.data_ingestion
Expected behavior 🤔
Being able to pre-seed memory, and having the README reflect how the system works, please.
Your prompt 📝
# no prompt
Your Logs 📒