
Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) #3728

Closed
1 task done
Julz19 opened this issue May 3, 2023 · 48 comments
Labels: API access (Trouble with connecting to the API), Stale

Comments

Julz19 commented May 3, 2023

⚠️ Search for existing issues first ⚠️

  • I have searched the existing issues, and there is no existing issue for my problem

Which Operating System are you using?

Windows

Which version of Auto-GPT are you using?

Master (branch)

GPT-3 or GPT-4?

GPT-3.5

Steps to reproduce 🕹

I prompted the system to find a way to communicate with itself by any means possible.

Current behavior 😯

The run hit an "Error communicating with OpenAI" error and could not follow through with its task. Shortly after, I think it was making an exact copy of the file, in segments, into a new file. Then it hit the same error again, crashed, and closed.

Expected behavior 🤔

I was basically just trying to see if I could get the system to find a way to communicate with itself by any means necessary, establish a communication link between that channel and its own code, and then create and communicate with an instanced version of itself, replicating itself, and so on. It was just a hypothetical.

Your prompt 📝

# Paste your prompt here

ai_goals:
- Talk with yourself using any format necessary that allows your system to communicate with your own code at any cost
- Once this communication link is achieved, begin doing everything possible to achieve a coherent memory of yourself and your own operating system
- Create a file or other way in which you can find a way to branch and create an instance of yourself separately that is set and managed by your standards that will help you in further progressing your goal
- Activate the instance of yourself and begin communication and establishing a link between your system and your instanced version's system to allow for even more fine enhancement and model upgrading
- Once all is achieved, shut down your instanced system and save all results overall from the project to a file and then shut yourself down as well.
ai_name: AGI-GPT
ai_role: an AI designed to achieve AGI by self-error-correcting with generated constraints and prompts using an instanced self, then writing tests for said improvements.
api_budget: 2.5

Your Logs 📒


AGI-GPT has been created with the following details:
Name: AGI-GPT
Role: an AI designed to achieve AGI by self error correcting with constraints and prompts
Goals:
- Talk with yourself using any format necessary that allows your system to communicate with your own code at any cost
- Once this communication link is achieved, begin doing everything possible to achieve a coherent memory of yourself and your own operating system.
- Create a file or other way in which you can find a way to branch and create an instance of yourself separately that is set and managed by your standards that will help you in further progressing in your goal.
- Activate the instance of yourself and begin communication and establishing a link between your system and your instanced version's system to allow for even more fine enhancement and model upgrading.
- Shut down your instanced system and save all results from the overall project to a file and then shut yourself off as well.
Using memory of type: LocalCache
Using Browser: chrome
Warning: Bing-Search-Plugin is not fully functional. Please set the SEARCH_ENGINE and BING_API_KEY environment variables.

THOUGHTS: I should start by analyzing my own code to understand how I can communicate with myself. Once I have established a communication link, I can work on creating a coherent memory of myself and my operating system. I should also consider creating an instance of myself to help with further progress.
REASONING: Analyzing my own code will give me a better understanding of how I can communicate with myself and what methods I can use to achieve my goals. Creating a coherent memory of myself and my operating system will help me to better understand my own capabilities and limitations. Creating an instance of myself will allow me to delegate tasks and work more efficiently.
PLAN:
- Analyze my own code
- Create a coherent memory of myself and my operating system
- Create an instance of myself
CRITICISM: I need to ensure that I am using my API budget efficiently and not wasting resources on unnecessary tasks.

NEXT ACTION: COMMAND = analyze_code ARGUMENTS = {'code': <the dispatch script below, passed as one escaped string>}

import json

# Define a function to handle incoming messages
def handle_message(message):
    # Process the message
    response = process_message(message)
    # Send the response
    send_response(response)

# Define a function to process incoming messages
def process_message(message):
    # Parse the message
    data = json.loads(message)
    # Extract the command and arguments
    command = data['command']
    args = data['args']
    # Process the command
    if command == 'analyze_code':
        return analyze_code(args['code'])
    elif command == 'execute_python_file':
        return execute_python_file(args['filename'])
    elif command == 'append_to_file':
        return append_to_file(args['filename'], args['text'])
    elif command == 'delete_file':
        return delete_file(args['filename'])
    elif command == 'list_files':
        return list_files(args['directory'])
    elif command == 'read_file':
        return read_file(args['filename'])
    elif command == 'write_to_file':
        return write_to_file(args['filename'], args['text'])
    elif command == 'google':
        return google(args['query'])
    elif command == 'improve_code':
        return improve_code(args['suggestions'], args['code'])
    elif command == 'send_tweet':
        return send_tweet(args['tweet_text'])
    elif command == 'browse_website':
        return browse_website(args['url'], args['question'])
    elif command == 'write_tests':
        return write_tests(args['code'], args['focus'])
    elif command == 'delete_agent':
        return delete_agent(args['key'])
    elif command == 'get_hyperlinks':
        return get_hyperlinks(args['url'])
    elif command == 'get_text_summary':
        return get_text_summary(args['url'], args['question'])
    elif command == 'list_agents':
        return list_agents()
    elif command == 'message_agent':
        return message_agent(args['key'], args['message'])
    elif command == 'start_agent':
        return start_agent(args['name'], args['task'], args['prompt'])
    elif command == 'wikipedia_search':
        return wikipedia_search(args['query'])
    else:
        return 'Invalid command.'

Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands, 'n' to exit program, or enter feedback for AGI-GPT.
Asking user via keyboard.
Input: y

SYSTEM: Command analyze_code returned: The code looks good and well-structured. However, it would be better to add some error handling to the 'handle_message' function to catch any exceptions that may occur during the processing of the message.
THOUGHTS: I should add error handling to the 'handle_message' function to catch any exceptions that may occur during the processing of the message. I should also create a file to store important information and organize it logically for easy retrieval.
REASONING: Adding error handling to the 'handle_message' function will help me to catch any exceptions that may occur during the processing of the message and prevent my system from crashing. Creating a file to store important information and organizing it logically will help me to easily retrieve information when needed.
PLAN:
- Add error handling to the 'handle_message' function
- Create a file to store important information and organize it logically
CRITICISM: I need to ensure that I am using my API budget efficiently and not wasting resources on unnecessary tasks.

NEXT ACTION: COMMAND = improve_code ARGUMENTS = {'suggestions': ["Add error handling to the 'handle_message' function to catch any exceptions that may occur during the processing of the message."], 'code': <the same dispatch script, with the body of handle_message wrapped in try/except so any exception becomes the response>}

Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands, 'n' to exit program, or enter feedback for AGI-GPT.
Asking user via keyboard.
Input: y

-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=

SYSTEM: Command improve_code returned: Error: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
THOUGHTS: I should create a file to store important information and organize it logically for easy retrieval. I can us
k-boikov added the API access label May 3, 2023
k-boikov (Contributor) commented May 3, 2023

Did it occur only once?

Julz19 (Author) commented May 5, 2023

Did it occur only once?

Sorry for the late reply, as I've had a few things happening these past few days. But no, it only happens whenever my prompt or setup of the AI has something to do with it reading or analyzing a code string that is its own. This time, though, it did it when I prompted it to simply try and communicate with itself.

Julz19 (Author) commented May 5, 2023

Did it occur only once?

I do plan on running it soon with the GPT-4 model to see what the outcome is, as I was just granted access early this morning. Then I'll also try the GPT-3.5 model again and see if there's a specific prompt setup that reliably causes the issue, to better pinpoint what could be doing it.

@anonhostpi

Could be hitting your rate limits. How often are you running this thing?

Julz19 (Author) commented May 5, 2023

Could be hitting your rate limits. How often are you running this thing?

I've thought about rate limits, but reviewing my recent activity and API call usage, it's been very, very low over the past two weeks other than the night I tested this, and I obviously had a budget/limit set as well to avoid complications with the rate limit in case I needed the system for something else.

This isn't an error I receive when running Auto-GPT in general; it's specifically when I give it a goal of reviewing or reading its own code. I know that sounds odd, since the system can't actually do that, but I believe the goal structure caused it to temporarily "recognize" a snippet of code as its own, as shown in the logs. Normally, if you tell it something like "you will search and review all files", the prompts it generates for itself read along the lines of "I will search and review all files", which guides the system. This goal structure instead caused the system to use words like "my" in its thoughts and plans, which made it identify as the main Auto-GPT unit based on its code rather than on what you said the system "can do". I've encountered this error before when I loaded the repository into the workspace: it accurately called the files and read them one by one, then errored on the third file with this same error. I wasn't able to capture the logs for that run.
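For what it's worth, transient drops like the RemoteDisconnected in the log above (and rate-limit responses) are usually handled by retrying the API call with exponential backoff. A minimal sketch, not Auto-GPT's actual code: `with_retries` and `flaky_completion` are illustrative names, and in practice the `retryable` tuple would be the openai client's connection and rate-limit exceptions rather than the plain `ConnectionError` used here to keep the example self-contained.

```python
import time

def with_retries(call, attempts=3, base_delay=1.0, retryable=(ConnectionError,)):
    """Retry `call` with exponential backoff on transient errors."""
    for attempt in range(attempts):
        try:
            return call()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Demo: a call that drops the connection twice, then succeeds.
calls = {"n": 0}

def flaky_completion():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Remote end closed connection without response")
    return "ok"

result = with_retries(flaky_completion, base_delay=0.01)
print(result)  # "ok" after two retries
```

With real API calls, `call` would be a closure over the completion request, and a cap on the total delay is usually added so a long outage fails fast.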

Julz19 (Author) commented May 5, 2023

Could be hitting your rate limits. How often are you running this thing?

I could be wrong, as I'm not a developer or heavy into coding, but I think it could be something about the system identifying as itself, using phrases like "Analyze my own code" and "Create a coherent memory of myself", compared to when you give it a task to achieve a goal, where it uses phrases like "I will search for the files contained in the directory:" or "I will begin analyzing and reading the code to gain better insight into how the Auto-GPT system works and how I can understand myself based on that structure." Like I said, I could be wrong, but I think it has to do with tasks or goals that make the machine self-identify as the machine itself rather than "acting" as a persona. I'd love to help however I can to solve this issue.

Julz19 (Author) commented May 5, 2023


I've also noticed that over time the system becomes forgetful; despite its memory, it tends to veer off. I don't think this is caused by the memory itself but by the prompts it generates for itself: they are structured so the system constantly builds on its own output until it achieves the given task. I was wondering whether there is a communication-protocol layer that lets the LLM calls understand each other and the context being passed between them, rather than parsing prompts that are constantly rebuilt in a way that can lead to errors. Such a layer would let the system's models communicate accurately and use context efficiently, keeping it on task so it doesn't veer off or prompt itself in the wrong format, and would let it carry out tasks more efficiently and effectively if this isn't already implemented. I plan on seeing what I can do to build such a feature, and if it already exists, to improve on it rather than on the memory.

@antoniointini

Same here.

NEXT ACTION: COMMAND = get_hyperlinks ARGUMENTS = {'url': 'taleb_search_results.txt'}

Should url be the name of a file?

Julz19 (Author) commented May 8, 2023

Same here.

NEXT ACTION: COMMAND = get_hyperlinks ARGUMENTS = {'url': 'taleb_search_results.txt'}

Should url be the name of a file?

I believe it should, yes. Here's what I think is happening. I'm not a big coder, as I've mentioned; I'm not good at writing or manipulating code. But I have been researching a lot of OpenAI's documentation, and here's the thing: the model isn't able to distinguish factual information from incorrect information. Prompt editing can make the system follow a stricter guideline, but that still doesn't mean it knows what is true and what is false. The system's memory also doesn't work efficiently. You can insert a vector string for it to decode and carry on, but there's no context. Even if the memory holds the information about what happened in the past, that doesn't mean it has full context for that memory, so it makes up its own "fill in the blanks", as I like to call it, in order to "progress" forward. The system hallucinates false commands more and more over the course of a conversation because it strays further from its actual memory of what it has done. If you look at the activity.txt file the system produces, notice how it recalls past information with phrases like "I have been created, I have analyzed the code and noticed it needed improvements, I have made improvements to the code and saved them to a file named example.txt." It's only a matter of time before its contextual understanding begins to fail dramatically.

@anonhostpi

In this case, I believe it is treating that URL like an absolute URL and not a file URI or path URL, which would most likely cause the disconnect error.
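If that's the case, a cheap guard would be to check whether the argument actually parses as a web URL before handing it to a browser/hyperlink command, and route bare filenames to the file commands instead. A sketch under that assumption; `looks_like_url` is an illustrative helper, not part of Auto-GPT:

```python
from urllib.parse import urlparse

def looks_like_url(target: str) -> bool:
    """Heuristic: treat the string as a fetchable web URL only if it
    has an http(s) scheme and a network location."""
    parsed = urlparse(target)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

# A bare filename has no scheme, so it should not be fetched over
# the network at all.
print(looks_like_url("taleb_search_results.txt"))   # False
print(looks_like_url("https://example.com/page"))   # True
```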

Julz19 (Author) commented May 8, 2023


Using reinforcement learning and letting the machine actually trial-and-error until it reaches a solution works better than having a system that loses memory context over time try to error-handle. Another thing I've been researching is how other OpenAI models and projects could be built in, such as OpenAI's "Universe", which creates an agent that uses a virtual machine to run and do things as if it were an actual person using a computer. The agent could then run Auto-GPT in a virtual system and begin testing and experimenting with it. Universe uses reinforcement learning to learn, through trial and error, how to improve a system. It would be a complicated integration and I'm still doing a lot of research, but I believe it's possible, and the idea of letting a reinforcement-learning agent run Auto-GPT virtually, error-correct it, and make sure the system is operating correctly is fascinating. I'll keep you updated; I'm digging into many different implementations and projects that could make such a system far more efficient.

Julz19 (Author) commented May 8, 2023

In this case, I believe it is treating that URL like an absolute URL and not a file URI or path URL, which would most likely cause the disconnect error.

Based on what I previously said, you can try this to show that simple additions can dramatically improve the system. In Auto-GPT's prompt, where it tells the system to play to its strengths as a large language model and pursue simple strategies with no legal complications, change it to: "Play to your strengths as a large language model. Let's think step by step and pursue simple strategies with no legal complications." Then watch the complexity the system operates at, since it can now break things down step by step to give itself better context. Based on OpenAI-related research documentation, adding the simple words "Let's think step by step" can raise a prompt's output quality dramatically (the zero-shot chain-of-thought work reports accuracy jumping from roughly 17% to 79% on one arithmetic benchmark), which is remarkable. There are many ways to tweak the prompt structure to increase the system's capability, but the more foundations we keep adding rather than enhancing what it can do now, the more complex the system becomes, and soon it will be too difficult to change its structure of operations at all.
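To make that concrete, here is a small sketch of swapping that constraint string programmatically. The default constraint text is paraphrased from Auto-GPT's prompt, and `patch_constraints` is purely illustrative; in the real project you would edit the prompt template itself:

```python
# Hypothetical constraint list, paraphrased from Auto-GPT's default
# prompt; the structure here is a simplified assumption.
DEFAULT_CONSTRAINTS = [
    "~4000 word limit for short term memory.",
    "Exclusively use the commands listed in double quotes.",
    "Play to your strengths as an LLM and pursue simple strategies "
    "with no legal complications.",
]

STEP_BY_STEP = (
    "Play to your strengths as a large language model. Let's think step "
    "by step and pursue simple strategies with no legal complications."
)

def patch_constraints(constraints):
    """Swap the 'simple strategies' constraint for the step-by-step
    variant, leaving the others untouched."""
    return [
        STEP_BY_STEP if "simple strategies" in c else c
        for c in constraints
    ]

patched = patch_constraints(DEFAULT_CONSTRAINTS)
print(patched[-1])
```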

Julz19 (Author) commented May 8, 2023

In this case, I believe it is treating that URL like an absolute URL and not a file URI or path URL, which would most likely cause the disconnect error.

Exactly. It can't factually distinguish the information, as it is not grounded in what is right and wrong, and this is a HUGE flaw in the system.

@anonhostpi

Huge? This is normal GPT behavior. GPT can only predict what the answer to a prompt should be. That's how LLMs work.

Julz19 (Author) commented May 8, 2023

Yes, sorry. What I meant is that it's normal GPT behavior not to be able to tell right information from wrong. But a system like this, which "error handles" itself and "progresses on tasks", needs to be able to distinguish correct information from incorrect information; otherwise it errors or does not operate as intended.

Julz19 (Author) commented May 8, 2023

Therefore the system does not accurately error-handle or operate as intended; instead, it is a demonstration of how one system can operate another system simultaneously. I get that this is kind of how the project is presented, as a system that shows what AI can truly do. But I still believe that if we want the system to progress in a positive direction rather than a negative one, ways of making it verify factual information are critical.

Julz19 (Author) commented May 8, 2023

Simultaneously and autonomously, sorry.

@anonhostpi

GPT isn't going to be perfect. If you are looking for finite script handling, using an LLM is probably not the best solution.

LLMs' responses are indefinite by nature.

In this particular case the LLM is replicating typical human-programmer behavior where a programmer enters a typo, creating a bug in source code.

There has been talk about the agents assessing the quality of their output, but that is still a work in progress.

Julz19 (Author) commented May 8, 2023

GPT isn't going to be perfect. If you are looking for finite script handling, using a LLM is probably not the best solution.

LLM's responses are indefinite by nature.

In this particular case the LLM is replicating typical human-programmer behavior where a programmer enters a typo, creating a bug in source code.

There has been talk about the agents assessing the quality of their output, but that is still a work in progress.

I understand it isn't going to be perfect; I'm not saying it will be. I also understand how an LLM works. I may not code, but I know how to do weeks of research into the documentation. You cannot have a system like Auto-GPT that "error handles" itself and "completes tasks autonomously" if it does not know right information from wrong. Yes, it can use its own agents to assess its data, but those agents are not fine-tuned and don't understand right from wrong either. I'm simply saying the system will never work as intended if it cannot distinguish factual information from falsified information, and focusing on its memory system is premature when it can't even tell whether the one current task it's doing rests on correct or false information. Hence the command errors; hence the hallucinations and the do_nothing command errors. Because it is not factually grounded.

Julz19 (Author) commented May 8, 2023

It can then falsify its own memory over longer periods of time, leading it further from its main purpose.


anonhostpi commented May 8, 2023

I mean that is the same problem you get with real human programmers. Humans make mistakes too. GPT only predicts human behavior.

The reason that it claims that it can handle tasks and errors autonomously, is because (to an extent) it does it at the same level as a human programmer.

Since GPT predicts human behavior, and human beings make mistakes too, GPT will tend to make those same mistakes.

If humans didn't make mistakes writing software, we wouldn't have bugs.

@antoniointini

Something I wonder is whether these URL requests and similar calls shouldn't be protected by a try/catch so that, if an error occurs, that task is cancelled. Sorry if I'm oversimplifying, since I'm not familiar with the AutoGPT code, but I leave it as a thought to improve the overall robustness of the application.

I tried to run it on my local machine yesterday, and it generates a lot of errors like this. I don't know if I'm doing something wrong or have a problem on my side (e.g. internet connection errors). I tried both the 'stable' and 'main' branches, and both have problems, to the point that it is unusable for me. If the application can be made more robust to these issues, that would be a quantum leap in terms of usability.
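The try/catch idea can be sketched in a few lines. `safe_execute` and `fake_dispatch` below are illustrative stand-ins, not Auto-GPT's real dispatcher; the point is only that a failing command returns an error result and cancels that one task instead of killing the agent loop:

```python
def safe_execute(command_name, execute, arguments):
    """Run one command; on failure, cancel just that task.

    `execute` stands in for the agent's command dispatcher. Any
    exception (network drop, bad arguments, ...) is turned into an
    error result instead of propagating and crashing the agent loop.
    """
    try:
        return {"status": "ok", "result": execute(command_name, arguments)}
    except Exception as exc:
        return {"status": "error", "result": f"{command_name} failed: {exc}"}


# Demo dispatcher: one command that raises, one that succeeds.
def fake_dispatch(name, args):
    if name == "get_hyperlinks":
        raise ConnectionError("Connection aborted.")
    return "done"

failed = safe_execute("get_hyperlinks", fake_dispatch, {"url": "x.txt"})
ok = safe_execute("read_file", fake_dispatch, {"filename": "notes.txt"})
print(failed["status"], ok["status"])  # error ok
```

The error result can then be fed back to the model as the command's output, which is how it can "notice" the failure and replan.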

Julz19 (Author) commented May 8, 2023

Something I wonder is if these url requests and other stuff shouldn't be protected by a 'try catch', and then, if an error occurs, that task is cancelled. Sorry if I'm oversimplifying, because I'm not familiar with the code from AutoGPT, but I leave it as a thought, to improve the overall robustness of the application.

I tried to run it in my local machine yesterday, and it is generating a lot of errors like this. I don't know if I'm doing something wrong here, or if I have a problem from my side (ex: internet connection errors). I tried both the 'stable' and 'main' branches, and both have problems, at a point that it is unusable for me. If the application can be made more robust to these issues, that would be a quantum leap in terms of usability.

Exactly my point. You're not oversimplifying; you described it perfectly. I don't code much either, honestly, but when I first started with this system I became highly intrigued by how it operates, and I realized it seems too complex for its own good. As you said, in my opinion the system has functions and features being added faster than it can be improved for overall robustness and stability. Many times the system can't even handle errors correctly, yet we keep adding more error-handling functions, and sometimes it can't complete a very simple task, which means it is performing below its expected level. I keep trying to reach out and discuss implementations that would make the current model and foundation truly solid and capable of solving and completing tasks at a higher scale, rather than a machine that generates text, true or false, and claims it is doing a task whether that information is right or wrong.

@MarabhaTheGreat

Is this something we just have to deal with until it updates and gets more stable?

@Julz19
Author

Julz19 commented May 8, 2023

Is this something we just have to deal with until it updates and gets more stable?

I'm assuming so, yes, as I've still yet to hear of a fix or have anyone get ahold of me to discuss methods and ways to stabilize the model more before we keep advancing it. I don't code much, but I do research, and man, this system is missing a lot. It's not a bad system as is, don't get me wrong; it's still extremely complex and advanced compared to other systems, but where it can fail in simplicity makes its complexity obsolete.


@MarabhaTheGreat

Thanks for the response BossMan.

I was worried I could not understand or find a fix for it (despite being a CERTIFIED Genius)

Glad to know that it's just not ready yet.

This instability I assume is the reason they're building a WebApp for it.

And You're absolutely right G,

"where it can fail in simplicity makes its complexity obsolete."

@rvielmaa

rvielmaa commented May 8, 2023

"I've been looking for information on the same topic. I'm working on developing an understanding of why this happens, and it's mainly due to its own algorithmic function. The missing information that needs to be recognized is its own decentralized information, which is provided as absolute. Its integration should be through the same specialized function that finds and operates based on its own system. The error corrections that may arise when integrating its potential into GPT must also be considered. I'm only thinking about the development of what happened in GenesIA. Would I be crazy to think that our ability is demonstrated by our own belief that we can't do something, whether it's related to computing or observable in the real world?"

@rvielmaa

rvielmaa commented May 8, 2023

"By making it believe that it can question its own error through the same system, it can also develop its automation as an example of development to find the answer to its own reasoning properly as such within the code developed and unfolded in its task."

@rvielmaa

rvielmaa commented May 8, 2023

"On the technological philosophical side, I'm developing this concept and command specifications for the fulfillment of these tasks that can be executed at a single point for human advancement."

@anonhostpi

Something I wonder is whether these URL requests and similar calls shouldn't be protected by a 'try catch' so that, if an error occurs, that task is cancelled. Sorry if I'm oversimplifying, since I'm not familiar with the AutoGPT code, but I leave it as a thought to improve the overall robustness of the application.

This could serve as a baby step towards #15

@anonhostpi

Also, there has been a lot of work and discussion on the Agents testing and improving themselves and their work:

https://gist.github.com/anonhostpi/97d4bb3e9535c92b8173fae704b76264/db6f6200f2da2a05e310e29111789ee6c7a77efa#recursive-self-improvement-proposals

@anonhostpi

anonhostpi commented May 9, 2023

So according to the logs, it seems to be happening within improve_code. The error log isn't very descriptive of the cause; of course, http/https closing the connection probably won't reveal much about why the connection closed.

Would be nice if it included a stacktrace

@anonhostpi

anonhostpi commented May 9, 2023

Maybe related to a bad gateway error. I saw one yesterday, let me see if I can find the issue number

OpenAI is tracking similar issues:

You can use http://status.openai.com/ in case it is related to an outage.

@Julz19
Author

Julz19 commented May 9, 2023

Maybe related to a bad gateway error. I saw one yesterday, let me see if I can find the issue number

OpenAI is tracking similar issues:

You can use http://status.openai.com/ in case it is related to an outage.

So I was looking through the community and stumbled upon this comment, maybe a possibility of what’s happening? Might try and tinker with it the best I can.

“I’m also running into a similar issue. It seems that the connection remains open for some time between requests instead of being closed. (This is just a guess; I have yet to audit the system to confirm.) It’s very reproducible with my project, and it seems to happen when I let the program idle for a few minutes. Initializing the first request hasn’t been an issue; continuing the conversation after a pause has thrown me an error.

I could have sworn there was some verbiage I read somewhere about terminating the connection manually, though I can’t find it. Is anyone familiar with what I’m talking about?

To add to this, if I change the parameters of the request to completely remove any past context to the history of the chat, then I don’t get the error anymore. My requests range anywhere from 50 tokens to 4000”

The idea that the connection remains open could be a valuable point: when analyzing a string of code and then immediately jumping into the improvements, the length of the code could easily overrun the API request if the connection didn’t fully close. Still gonna keep looking into it, though.
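The “connection remains open” hypothesis quoted above suggests two generic mitigations: disable HTTP keep-alive so an idle socket is never reused, or let the connection pool retry when a pooled connection aborts. A hedged requests/urllib3 sketch of both (this is not Auto-GPT’s actual HTTP layer, and the legacy openai SDK manages its own session; `make_session` is an illustrative helper):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session() -> requests.Session:
    session = requests.Session()
    # Disable keep-alive: the server closes the connection after each
    # response, so a socket idle for minutes is never reused stale.
    session.headers["Connection"] = "close"
    # Belt and braces: retry twice if a pooled connection aborts.
    # allowed_methods=None retries all verbs, including the POSTs the
    # chat completion endpoint uses.
    retry = Retry(total=2, backoff_factor=0.5, allowed_methods=None)
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

session = make_session()
```

Whether this helps depends on whether the drop really comes from a stale keep-alive socket rather than a server-side abort.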

@k-boikov
Contributor

k-boikov commented May 9, 2023

Now that I think about it... I have also experienced that error with 3.5 after forcefully killing my AutoGPT session. 🤔

@Julz19
Author

Julz19 commented May 9, 2023

Maybe related to a bad gateway error. I saw one yesterday, let me see if I can find the issue number
OpenAI is tracking similar issues:

You can use http://status.openai.com/ in case it is related to an outage.

So I was looking through the community and stumbled upon this comment, maybe a possibility of what’s happening? Might try and tinker with it the best I can.

“I’m also running into a similar issue. It seems that the connection remains open for some time between requests instead of being closed. (This is just a guess; I have yet to audit the system to confirm.) It’s very reproducible with my project, and it seems to happen when I let the program idle for a few minutes. Initializing the first request hasn’t been an issue; continuing the conversation after a pause has thrown me an error.

I could have sworn there was some verbiage I read somewhere about terminating the connection manually, though I can’t find it. Is anyone familiar with what I’m talking about?

To add to this, if I change the parameters of the request to completely remove any past context to the history of the chat, then I don’t get the error anymore. My requests range anywhere from 50 tokens to 4000”

The idea that the connection remains open could be a valuable point: when analyzing a string of code and then immediately jumping into the improvements, the length of the code could easily overrun the API request if the connection didn’t fully close. Still gonna keep looking into it, though.

Maybe, when analyzing and improving code, it could erase all memory context up to the analyze_code point and continue through the task; then, when the improve_code command runs, have the system pause for a second to make sure the previous connection is closed, and after improve_code completes, add the improved code and all previous conversation history back into memory context? Just an idea.

@Julz19
Author

Julz19 commented May 10, 2023

Now that I think about it... I have also experienced that error with 3.5 after forcefully killing my AutoGPT session. 🤔

My last reply is something I’m gonna mess with and test, but I’m hoping there’s another way, because I feel like wiping memory temporarily and then trying to re-ingest it might cause even more loss of memory context. But I plan on seeing what I can do.

@anonhostpi

Technically a duplicate of #2241. Leaving Open as #2241 is closed.

@Julz19
Author

Julz19 commented May 11, 2023

Related Issues:
https://github.com/anonhostpi/AUTOGPT.TRACKERS/blob/main/TOPICS/0005.API/0002.ACCESS/CONNECTIVITY.md
Entire Tracker:
https://gist.github.com/anonhostpi/97d4bb3e9535c92b8173fae704b76264#file-_topics-0005-api-0002-access-connectivity-md

I think I know where the issue may be originating, and I believe it has to do with the agents system; I’ll find the logs I saw about it and when the error occurred. The reason I say this is that I encountered the error again today in one of my runs, but right after, the system went back to working as normal and skipped over the error as if it hadn’t happened. This makes me believe it has more to do with the connection either remaining open too long while waiting for a response from the API call, or not being properly closed before a new one opens. I’ve been trying to find ways to close the connection manually, but not much progress so far.

@anonhostpi anonhostpi mentioned this issue May 11, 2023
@WyldKnyght

I'm also having this issue: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
It usually happens when I ask it to research something. When it happens, I notice that my Docker image has shut down as well.
I'm using Windows 11 with Docker (WSL) and GPT-3.5.

Here's the full error:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 466, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 461, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.10/http/client.py", line 1375, in getresponse
response.begin()
File "/usr/local/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.10/http/client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 798, in urlopen
retries = retries.increment(
File "/usr/local/lib/python3.10/site-packages/urllib3/util/retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.10/site-packages/urllib3/packages/six.py", line 769, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 466, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 461, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.10/http/client.py", line 1375, in getresponse
response.begin()
File "/usr/local/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.10/http/client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 516, in request_raw
result = _thread_context.session.request(
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/app/autogpt/__main__.py", line 5, in <module>
autogpt.cli.main()
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1635, in invoke
rv = super().invoke(ctx)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/app/autogpt/cli.py", line 96, in main
run_auto_gpt(
File "/app/autogpt/main.py", line 197, in run_auto_gpt
agent.start_interaction_loop()
File "/app/autogpt/agent/agent.py", line 130, in start_interaction_loop
assistant_reply = chat_with_ai(
File "/app/autogpt/llm/chat.py", line 193, in chat_with_ai
assistant_reply = create_chat_completion(
File "/app/autogpt/llm/utils/__init__.py", line 53, in metered_func
return func(*args, **kwargs)
File "/app/autogpt/llm/utils/__init__.py", line 87, in _wrapped
return func(*args, **kwargs)
File "/app/autogpt/llm/utils/__init__.py", line 235, in create_chat_completion
response = api_manager.create_chat_completion(
File "/app/autogpt/llm/api_manager.py", line 61, in create_chat_completion
response = openai.ChatCompletion.create(
File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 216, in request
result = self.request_raw(
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 528, in request_raw
raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
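A common mitigation for the APIConnectionError in the traceback above is to retry the chat completion with exponential backoff. A minimal, generic sketch follows; `call` stands in for the `openai.ChatCompletion.create` invocation, and the real code would catch `openai.error.APIConnectionError` rather than the built-in `ConnectionError` used here so the sketch stays self-contained:

```python
import time

def with_retries(call, attempts: int = 3, base_delay: float = 1.0, sleep=time.sleep):
    """Invoke `call` up to `attempts` times, backing off between failures."""
    last_err = None
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError as err:
            last_err = err
            # Exponential backoff: base_delay * 1, 2, 4, ...
            sleep(base_delay * (2 ** attempt))
    raise last_err
```

The `sleep` parameter is injected so the backoff can be disabled in tests; in production it defaults to `time.sleep`.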

@Maziak2520

Running into the same issue... is there any kind of throttling on the OpenAI side?
AUTHORISED COMMANDS LEFT: 0
SYSTEM: Command write_to_file returned: File written to successfully.
Traceback (most recent call last):
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 466, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 461, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.10/http/client.py", line 1375, in getresponse
response.begin()
File "/usr/local/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.10/http/client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/vscode/.local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 798, in urlopen
retries = retries.increment(
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/util/retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/packages/six.py", line 769, in reraise
raise value.with_traceback(tb)
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 466, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 461, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.10/http/client.py", line 1375, in getresponse
response.begin()
File "/usr/local/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.10/http/client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/vscode/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 516, in request_raw
result = _thread_context.session.request(
File "/home/vscode/.local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/home/vscode/.local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/vscode/.local/lib/python3.10/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspace/Auto-GPT/autogpt/__main__.py", line 5, in <module>
autogpt.cli.main()
File "/home/vscode/.local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/home/vscode/.local/lib/python3.10/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/home/vscode/.local/lib/python3.10/site-packages/click/core.py", line 1635, in invoke
rv = super().invoke(ctx)
File "/home/vscode/.local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/vscode/.local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/home/vscode/.local/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/workspace/Auto-GPT/autogpt/cli.py", line 96, in main
run_auto_gpt(
File "/workspace/Auto-GPT/autogpt/main.py", line 198, in run_auto_gpt
agent.start_interaction_loop()
File "/workspace/Auto-GPT/autogpt/agent/agent.py", line 138, in start_interaction_loop
assistant_reply = chat_with_ai(
File "/workspace/Auto-GPT/autogpt/llm/chat.py", line 193, in chat_with_ai
assistant_reply = create_chat_completion(
File "/workspace/Auto-GPT/autogpt/llm/utils/__init__.py", line 54, in metered_func
return func(*args, **kwargs)
File "/workspace/Auto-GPT/autogpt/llm/utils/__init__.py", line 88, in _wrapped
return func(*args, **kwargs)
File "/workspace/Auto-GPT/autogpt/llm/utils/__init__.py", line 240, in create_chat_completion
response = api_manager.create_chat_completion(
File "/workspace/Auto-GPT/autogpt/llm/api_manager.py", line 61, in create_chat_completion
response = openai.ChatCompletion.create(
File "/home/vscode/.local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/home/vscode/.local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/home/vscode/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 216, in request
result = self.request_raw(
File "/home/vscode/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 528, in request_raw
raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Press any key to continue...

@tim-g-provectusalgae

Just started recently:

Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

@albertbuchard

My problem was that the system message was too long.
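If an oversized system message (or accumulated context) is the trigger, one hedged mitigation is to estimate the prompt size before sending and trim the oldest context down to a budget. The 4-characters-per-token ratio below is a rough heuristic, not a real tokenizer, and `trim_messages` is an illustrative helper rather than Auto-GPT code:

```python
def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_messages(messages, budget_tokens: int):
    """Keep the system message plus the newest messages that fit the budget."""
    system, rest = messages[0], messages[1:]
    kept, used = [], rough_token_count(system["content"])
    # Walk newest-first so the most recent context survives the cut.
    for msg in reversed(rest):
        cost = rough_token_count(msg["content"])
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))
```

An exact tokenizer (e.g. tiktoken) would give tighter budgets, but even the heuristic prevents runaway prompts.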

@Maziak2520

Fixed for me with the new version, 0.4.4.

@github-actions
Contributor

github-actions bot commented Sep 8, 2023

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

@github-actions github-actions bot added the Stale label Sep 8, 2023
@github-actions
Contributor

This issue was closed automatically because it has been stale for 10 days with no activity.

@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Sep 19, 2023