Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) #3728
Comments
Did it occur only once?
Sorry for the late reply, I've had a few things happening these past few days. But no, it only happens when my prompt or setup of the AI has something to do with it reading code or analyzing a code string that is its own. This time, though, it did it when I prompted it to simply try and communicate with itself.
I do plan on running it shortly with the GPT-4 model to see what the outcome is, as I was just granted access early this morning. Then I'll also try the GPT-3.5 model again and see if there's a specific prompt setup that can reliably cause the issue, in order to better pinpoint what could be doing it.
Could be hitting your rate limits. How often are you running this thing?
I've thought about rate limits, but reviewing my recent activity and API call usage, it's been very, very low over the past two weeks other than the night I tested this. I also had a budget/limit set to avoid complicating things with the rate limit in case I needed the system more for some reason at some point.
I could be wrong, as I'm not a developer or heavy into coding, but I think it could be something with the system identifying as itself, using phrases like "Analyze my own code".
I've also noticed that, over time, the system becomes forgetful despite its memory, and it tends to veer off. Usually this doesn't seem to happen because of the system memory itself but rather because of the prompts it generates for itself to function. They are structured in a format where the system constantly builds off itself until it can achieve the given task. I was wondering if there is a Communication Protocol function included that actually allows the LLMs being used to understand each other and the context provided by the prompts, rather than parsing prompts that are constantly being built in a certain way, which can lead to errors. A Communication Protocol function would allow the system to understand and communicate with its different models accurately, using context efficiently, so it stays on task and doesn't veer off or prompt itself in the wrong format. It would also allow the system to carry out tasks more efficiently and effectively, if this isn't already implemented. I plan on seeing what I can do to build such a feature, and if it does exist, to possibly improve on that system rather than the memory, to see what can be done.
Same here. NEXT ACTION: COMMAND = get_hyperlinks ARGUMENTS = {'url': 'taleb_search_results.txt'}. Should url be the name of a file?
I believe it should, yes. Here's what I think is happening. I'm not a big coder, as I've mentioned; I'm not good at writing or manipulating code to do many things, but I have been researching a lot of OpenAI's documentation. The model isn't able to distinguish factual information from incorrect information. Prompt editing can make the system follow a stricter guideline, but this still does not mean it knows what is true and what is false. The system's memory also does not work efficiently. You can insert a vector into the memory for it to decode and carry on, but here's the issue: there's no context. Even if the memory supposedly has the information about what happened in the past, that doesn't mean it has the full context of the memory. Therefore it makes up its own "fill in the blanks", as I like to call it, in order to "progress" forward. The system hallucinates false commands more and more as a conversation progresses, because it strays farther from its actual memory of what it has done, so it fills in the blanks. If you look into the activity.txt file the system produces, look at how it recalls past information. It uses phrases like "I have been created, I have analyzed the code and noticed it needed improvements, I have made improvements to the code and saved them to a file named example.txt." But it's only a matter of time before that contextual understanding begins to fail dramatically.
In this case, I believe it is treating that URL like an absolute URL rather than a file URI or local path, which would most likely cause the disconnect error.
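For illustration only, a guard along these lines (a minimal sketch, not AutoGPT's actual code; the function name `resolve_target` is made up, and the file name is taken from the comment above) could tell a local file apart from a web URL before any HTTP request is attempted:

```python
from pathlib import Path
from urllib.parse import urlparse


def resolve_target(url_or_path: str):
    """Decide whether the argument is a web URL or a local file.

    Returns ("url", value) for http/https targets and ("file", absolute_path)
    for anything that looks like a local path, so the caller can read the
    file directly instead of issuing an HTTP request that is bound to fail.
    """
    scheme = urlparse(url_or_path).scheme
    if scheme in ("http", "https"):
        return "url", url_or_path
    # Bare file names like 'taleb_search_results.txt' have no scheme and land here.
    return "file", str(Path(url_or_path).expanduser().resolve())


print(resolve_target("taleb_search_results.txt"))
print(resolve_target("https://example.com/page"))
```

With a check like that, a bare file name would be routed to a file read instead of being handed to the browsing code, where it can only fail.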
Using things like reinforcement learning and allowing the machine to actually TRIAL AND ERROR until it reaches a solution works better than trying to have a system that loses memory context over time attempt to error-handle. Another project I've been researching is how other models and projects from OpenAI could be worked in, such as "Universe" by OpenAI, which in a sense is a system that creates an agent that uses a virtual machine created by the Universe application to run and do things as if it were an actual person using a computer. The agent could then use Auto-GPT in a virtual system and begin testing and experimenting with it. The Universe model uses reinforcement learning to better understand, through trial and error, how to perfect a system. It would be a complicated implementation and I'm still doing a lot of research, but I believe it is possible, and the idea of letting a reinforcement learning agent virtually run Auto-GPT, error-correct, and make sure the system is operating correctly is fascinating. I'll keep you updated, but I'm digging into many different implementations and projects that could make such a system far more efficient.
Based on what I previously said, you can even try this to prove that simple additions can yield extreme improvements. Based on documentation I found from OpenAI, try this inside Auto-GPT's AI config file: where the prompt tells the system to play to its strengths as a large language model and pursue simple strategies with no complications, change it to "Play to your strengths as a large language model. Let's think step by step and pursue simple strategies with no legal complications." Then watch the complexity the system operates at, since it can break things down step by step and give itself better context. Based on OpenAI's documentation, just the simple words "Let's think step by step" can raise a prompt's output rating from 17% to a dramatic 79%, which is insane. There are many ways you can add and manipulate different structures to increase the system's capability like this. But the more foundations we keep adding to the system, rather than sticking with what it can do now and enhancing it, the more complex it becomes, and soon it will be too difficult to even change its structure of operations because it will be so tuned to operate in a certain way.
Exactly. It can't tell whether information is factual, as it is not grounded in what is right and wrong, and this is a HUGE error in the system.
Huge? This is normal GPT behavior. GPT can only predict what the answer to a prompt should be. That's how LLMs work.
Yes, sorry, what I meant is that it is normal GPT behavior to do such things and not be able to tell right information from wrong. But in a system like this that is supposed to "error handle" itself and "progress in tasks", it needs to be able to distinguish correct information from wrong information, otherwise the system errors or does not operate as intended.
Therefore the system does not accurately error-handle or operate as intended; instead it is just a system that demonstrates how one system can operate another simultaneously. I get that this is kind of how the project is displayed and explained, as a system that can show what AI can truly do. I still believe that if we want the system to progress in a positive rather than a negative manner, finding ways for it to verify factual information is fairly critical.
Simultaneously and autonomously, sorry.
GPT isn't going to be perfect. If you are looking for deterministic script handling, using an LLM is probably not the best solution; an LLM's responses are indefinite by nature. In this particular case the LLM is replicating typical human-programmer behavior, where a programmer enters a typo and creates a bug in source code. There has been talk about the agents assessing the quality of their own output, but that is still a work in progress.
I understand it isn't going to be perfect; I'm not saying it will be. I also understand how an LLM works. I may not code, but I know how to do weeks of research and read documentation. You cannot have a system like Auto-GPT that can "error handle" itself and "complete tasks autonomously" if it does not know right information from wrong. Yes, it can use its own agents to assess its data, but those agents are not fine-tuned and do not understand right from wrong either. I'm simply saying the system will never work as intended if it cannot distinguish factual information from falsified information, and we keep focusing on its memory system when it can't even tell whether the one task it's currently doing is based on true or false information. Hence why you get command errors. Hence why you run into hallucinations or the do_nothing command error: because it is not factually grounded.
It can then falsify its own memory over longer periods of time, leading it farther from its main purpose.
I mean, that is the same problem you get with real human programmers. Humans make mistakes too. GPT only predicts human behavior. The reason it claims it can handle tasks and errors autonomously is because, to an extent, it does so at the same level as a human programmer. Since GPT predicts human behavior, and human beings make mistakes too, GPT will tend to make those same mistakes. If humans didn't make mistakes in writing software, we wouldn't have bugs.
Something I wonder is whether these URL requests and other calls shouldn't be protected by a try/catch, and then, if an error occurs, that task is cancelled. Sorry if I'm oversimplifying, because I'm not familiar with the AutoGPT code, but I leave it as a thought to improve the overall robustness of the application. I tried to run it on my local machine yesterday, and it is generating a lot of errors like this. I don't know if I'm doing something wrong, or if I have a problem on my side (e.g. internet connection errors). I tried both the 'stable' and 'main' branches, and both have problems, to the point that it is unusable for me. If the application can be made more robust to these issues, that would be a quantum leap in terms of usability.
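For what it's worth, here is a rough sketch of that idea (not AutoGPT's actual code; the function name and the returned dictionary shape are made up for illustration), where a failing web request cancels just that step instead of crashing the whole run:

```python
import requests


def run_browse_step(url: str, timeout: float = 30.0) -> dict:
    """Fetch a URL for a single agent step; on failure, report the error
    and cancel the step instead of letting the exception kill the session."""
    try:
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()
        return {"status": "ok", "content": response.text}
    except requests.exceptions.RequestException as exc:
        # RequestException covers dropped connections (RemoteDisconnected),
        # timeouts, and HTTP error statuses alike.
        return {"status": "cancelled", "error": f"Request to {url} failed: {exc}"}
```

The agent loop could then feed the "cancelled" result back to the model as the command output, letting it pick a different action rather than aborting.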
Exactly my point; you are not oversimplifying, you just described it perfectly. I don't code much either, honestly, but when I first started with this system I became highly intrigued by how it operated. I realized the system seems too complex for its own good. For example, based on what you said, in my opinion the system has more functions and features being added to it quicker than it can be improved for overall robustness and stability. The system often can't even error-handle correctly, yet we keep adding more error-handling functions; it sometimes can't even complete a very simple task, which means it is performing below its expected level. I keep trying to reach out and communicate about implementations that would make the current model and foundation truly solid and capable of solving and completing tasks at higher scales, and not just a machine that can generate random text, whether true or false, and then claim it's doing a task whether that information is right or wrong.
Is this something we just have to deal with until it updates and gets more stable?
I'm assuming so, yes, as I have yet to hear of a fix or have anyone get hold of me to talk about methods and ways to stabilize the model more before we keep advancing it. I don't code much, but I do research, and man, this system is missing a lot. It's not a bad system as is, don't get me wrong; it's still extremely complex and advanced compared to other systems, but where it can fail in simplicity makes its complexity obsolete.
Thanks for the response, BossMan. I was worried I could not understand or find a fix for it (despite being a CERTIFIED Genius). Glad to know that it's just not ready yet. This instability, I assume, is the reason they're building a web app for it. And you're absolutely right, G: "where it can fail in simplicity makes its complexity obsolete."
"I've been looking for information on the same topic. I'm working on developing an understanding of why this happens, and it's mainly due to its own algorithmic function. The missing information that needs to be recognized is its own decentralized information, which is provided as absolute. Its integration should be through the same specialized function that finds and operates based on its own system. The error corrections that may arise when integrating its potential into GPT must also be considered. I'm only thinking about the development of what happened in GenesIA. Would I be crazy to think that our ability is demonstrated by our own belief that we can't do something, whether it's related to computing or observable in the real world?" |
"By making it believe that it can question its own error through the same system, it can also develop its automation as an example of development to find the answer to its own reasoning properly as such within the code developed and unfolded in its task." |
"On the technological philosophical side, I'm developing this concept and command specifications for the fulfillment of these tasks that can be executed at a single point for human advancement." |
This could serve as a baby step towards #15
Also, there has been a lot of work and discussion on the agents testing and improving themselves and their work:
So according to the logs, it seems to be happening within improve_code. The error log isn't very descriptive of the cause. Of course, HTTP/HTTPS closing the connection probably won't reveal much about why the connection closed. It would be nice if it included a stack trace.
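As a side note, for anyone digging locally, wrapping the call site in something like this (a generic sketch, not tied to AutoGPT's actual logging setup; the logger name and helper are made up) will at least record the full stack trace when the connection drops:

```python
import logging
import traceback

logger = logging.getLogger("autogpt.debug")


def call_with_trace(fn, *args, **kwargs):
    """Run a callable and log the complete stack trace if it raises."""
    try:
        return fn(*args, **kwargs)
    except Exception:
        logger.error("Command failed:\n%s", traceback.format_exc())
        raise
```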
Maybe related to a bad gateway error. I saw one yesterday; let me see if I can find the issue number. OpenAI is tracking similar issues:
You can check http://status.openai.com/ in case it is related to an outage.
So I was looking through the community and stumbled upon this comment, maybe a possibility of what's happening? Might try and tinker with it the best I can. "I'm also running into a similar issue. It seems that the connection remains open for some time in between the requests instead of being closed. (This is just a guess, I have yet to audit the system to confirm.) It's very reproducible with my project, and it seems to happen when I let the program idle for a few minutes. Initializing the first request hasn't been an issue; continuing the conversation after a pause has thrown me an error. I could have sworn there was some verbiage I read somewhere about terminating the connection manually, though I can't find it. Is anyone familiar with what I'm talking about? To add to this, if I change the parameters of the request to completely remove any past context of the chat history, then I don't get the error anymore. My requests range anywhere from 50 tokens to 4000." The idea that the connection remains open could be a valuable point: when analyzing a string of code and then immediately trying to jump into the improvements, the length of code could easily overrun the API request if the connection didn't fully close. Still going to keep looking into it, though.
Now that I think about it... I have also experienced that error with 3.5 after forcefully killing my AutoGPT session. 🤔
Maybe, when analyzing and improving code, the system could erase all memory context up to the analyze-code point and continue through the task; then, once the improve_code command gets run, have the system pause for a second to make sure the previous connection is closed, and after the improve_code command, add the improved code as well as all previous conversation history back into memory context. Just an idea.
My last reply is something I'm going to mess with and test, but I'm hoping there's another way, because I feel like wiping memory temporarily and then trying to re-ingest it might cause even more loss of memory context. But I plan on seeing what I can do.
I think I know where the issue may be originating from, and I believe it has to do with the agents system; I'll find the logs I saw about it and where the error occurred. The reason I say this is that I encountered the error again today in one of my runs, but right after, the system went back to working as normal and skipped over the error as if it didn't happen. This makes me believe it's more likely related to the connection either remaining open too long while waiting for a response from the API call, or not being properly closed before the system responds and opens a new connection. I've been trying to find ways to manually close the connection, but not much progress so far.
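For anyone who wants to experiment, here is a minimal sketch of a retry wrapper (an illustration only, assuming the legacy `openai` Python package where dropped connections surface as `openai.error.APIConnectionError`; the function name, retry count, and backoff values are arbitrary):

```python
import time

import openai
from openai.error import APIConnectionError


def chat_with_retry(messages, model="gpt-3.5-turbo", retries=3, backoff=2.0):
    """Call the ChatCompletion endpoint, retrying when the remote end
    closes the connection (e.g. after the session has sat idle)."""
    for attempt in range(retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except APIConnectionError:
            if attempt == retries - 1:
                raise
            # Give the client a moment; the next attempt opens a fresh connection.
            time.sleep(backoff * (attempt + 1))
```

This doesn't close the underlying connection explicitly, but retrying after the "Remote end closed connection without response" error usually succeeds, which matches the observation above that the run continues normally afterwards.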
I'm also having this issue: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Here's the full error:
During handling of the above exception, another exception occurred: Traceback (most recent call last):
During handling of the above exception, another exception occurred: Traceback (most recent call last):
The above exception was the direct cause of the following exception: Traceback (most recent call last):
Running into the same issue... is there any kind of throttling on the OpenAI side?
During handling of the above exception, another exception occurred: Traceback (most recent call last):
During handling of the above exception, another exception occurred: Traceback (most recent call last):
The above exception was the direct cause of the following exception: Traceback (most recent call last):
Just started recently: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
My problem was that the system message was too long.
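If anyone else suspects the same cause, a quick check like this (a sketch assuming the `tiktoken` package; the example message is a placeholder) counts how many tokens the system message occupies before it is sent:

```python
import tiktoken


def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    """Return how many tokens the given text occupies for the given model."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))


system_message = "You are AGI-GPT, an AI designed to ..."  # paste your real system prompt here
print(f"System message uses {count_tokens(system_message)} tokens")
# gpt-3.5-turbo shares roughly a 4,096-token window between the prompt and the
# completion, so a very long system message leaves little room for the reply.
```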
Fixed for me with the new version 0.4.4.
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.
Which Operating System are you using?
Windows
Which version of Auto-GPT are you using?
Master (branch)
GPT-3 or GPT-4?
GPT-3.5
Steps to reproduce 🕹
I was prompting the system to find a way to communicate with itself by any means possible, basically.
Current behavior 😯
An "Error communicating with OpenAI" error appeared and the system was not able to follow through with its task. Shortly after, I think it was making something like an exact copy of the file, but in segments, into a new file? It then errored the same way again, then crashed and closed.
Expected behavior 🤔
Was basically just trying to see if I could get the system to find a way to communicate with itself by any means necessary, so that it could establish a communication link between its form of communication and its own code, then communicate with and create an instanced version of itself, replicating itself, and so on. Was just a hypothetical lol.
Your prompt 📝
# Paste your prompt here
ai_goals:
ai_name: AGI-GPT
ai_role: an AI designed to achieve AGI by self error-correcting with generated constraints and prompts using an instanced self, then writing tests for said improvements.
api_budget: 2.5
Your Logs 📒