Not creating pinecone index #711
Comments
Pinecone seems to take a while to clear the index. Check everything and try it again, if you would? I had a similar experience, but it cleared after a while and I was able to reinitialize a new index.
I just started with this script today, with a brand-new Pinecone account, and it never created the Pinecone index.
I figured it out: there is a new env var that needs to be configured. It defaults to using local cache for memory, which gets a little scattered, I've noticed. You can configure the memory backend to use local cache, Redis, or Pinecone. Redis is giving me errors currently, but Pinecone and LocalCache work fine.
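For reference, the backend selection goes in the `.env` file. A minimal sketch of the relevant lines, assuming the variable names Auto-GPT used at the time (`MEMORY_BACKEND`, `PINECONE_API_KEY`, `PINECONE_ENV`); the values shown are placeholders:

```shell
# Which memory backend Auto-GPT should use: local, redis, or pinecone
MEMORY_BACKEND=pinecone

# Only needed when MEMORY_BACKEND=pinecone (placeholder values)
PINECONE_API_KEY=your-pinecone-api-key
PINECONE_ENV=your-pinecone-region   # e.g. us-east4-gcp
```

Leaving `MEMORY_BACKEND` unset falls back to the local cache, which is the behavior described above.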
I created a PR to solve this: #794
I'm on Windows 11 and I don't have the memory backend variable in my env. I would show you, but I'm a super noob and don't know how to insert it in any fancy way. :)

Off-topic: I'm super confused when it comes to Pinecone. I've followed all the instructions to the letter, but I can't see any changes in my Pinecone index on pinecone.io (I'm really not sure what index type to use either; I used the recommendations of another person who got it to work), and I set up the API key and region correctly. I also re-ran requirements.txt in Python and followed all the other tips. Can anyone reassure me that using LocalCache isn't harmful to my hardware? Will using Pinecone extend its memory? Its memory is super short-term, and I'm having trouble instructing it to take notes before it forgets what it has done. But yeah, I really haven't understood what Pinecone or Redis will actually improve for Auto-GPT, since LocalCache works fine.

I'm also having the usual issues: invalid JSON, and it tries to use GPT-4 when I don't have access and have instructed it to use GPT-3/3.5, which really is a problem because it happens every time it tries to evaluate code. The API rate-limit problems are, I think, server-related, since they only happen sometimes. Sorry for all the off-topic info. :)
@sjnt1 you can just add the env var to the .env file and it'll work. You don't need to configure anything in Pinecone; let Auto-GPT do it. I found LocalCache gets a little scattered when starting/stopping, and Pinecone has been much better for memory across multiple agents. I'm currently getting errors with Redis; it does work, but the index errors out. I haven't bothered to diagnose it because I'm not going to use it 🤷♂️
Thanks, that worked. But now I get this problem: "The index exceeds the project quota of 1 pods by 1 pods. Upgrade your account or change the project settings to increase the quota." I have a cosine metric, a p1.x1 pod, and 1536 dimensions. It's Latin to me, but apparently 1 pod isn't sufficient. I don't want to have to pay for Pinecone, at least not before I've tried it, so what settings should one go for when creating a Pinecone index for it to work best with Auto-GPT?
@sjnt1 As I said in my previous comment, you are not supposed to create the index yourself, because Auto-GPT does it for you. The free tier only allows one pod, so your manually created index is what's using up the quota. Delete it, then re-run Auto-GPT and it will create what it needs.
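The settings reported above (1536 dimensions, cosine metric) match what Auto-GPT provisions on its own at startup. A rough sketch of that create-if-missing logic, assuming the default index name `auto-gpt` and the pinecone-client v2 API; the `needs_index` helper is hypothetical, not Auto-GPT's actual source:

```python
# Sketch of the startup check; constants are assumptions based on the thread.
INDEX_NAME = "auto-gpt"  # assumed default index name
DIMENSION = 1536         # OpenAI ada-002 embedding size
METRIC = "cosine"

def needs_index(existing_indexes):
    """Return True when the index is missing and must be created."""
    return INDEX_NAME not in existing_indexes

# With the real client this would be roughly:
#   import pinecone
#   pinecone.init(api_key=..., environment=...)
#   if needs_index(pinecone.list_indexes()):
#       pinecone.create_index(INDEX_NAME, dimension=DIMENSION, metric=METRIC)
```

On the free tier only one pod is available, so a manually created index blocks this `create_index` call with exactly the quota error quoted above.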
Sorry. I'm kinda overwhelmed with information and other stuff, so I get tunnel vision. I had to re-run it a couple of times after deleting the index for it to work, but now it does, so thank you again. I can't tell the difference yet, though; if anything, it has a harder time summarizing a lot of chunks at a time than before. Can I ask if you or anyone else knows why it thinks so fast in some YouTube videos? For me it's like 10x slower. Could it be that they have GPT-4 API access? I've read that ChatGPT Plus and the API key don't have anything to do with each other, and I have an okay computer, so it shouldn't take that long for it to think.
@sjnt1 it runs slow for me too, and I have an M1 Pro with 32GB of memory. I'll be rewriting this in Crystal for better performance anyway. This code is super simple and in its infancy.
Infancy is perfect for me :) But I wish it didn't have to be this slow. I'm not sure if it's Pinecone-related, but my AI can't even read the Wikipedia page on the meaning of life without shutting down on 5 or 8 out of 17 chunks because of model overload.
Duplicates
Steps to reproduce 🕹
I deleted my Pinecone index by mistake. When I fire up Auto-GPT it does not make me a new one, but otherwise functions correctly. I have tried starting fresh, but the same issue: it works fine but never triggers Pinecone.
Current behavior 😯
I download a fresh copy and fill in the .env; it starts, but does not create a Pinecone index.
Expected behavior 🤔
Every previous time I've loaded it, it created the index and then communicated with it.
Your prompt 📝
# Paste your prompt here