
error: HTTP Error 401: Unauthorized #1

Open
cary-rowen opened this issue Nov 17, 2023 · 15 comments

@cary-rowen

Hi @cartertemm

I created a new key. I have a paid GPT-4 account, and when I try to identify the current navigation object I get the following error:
error: HTTP Error 401: Unauthorized

Is visual description only available to paid accounts? I also have a key from a free account and see the same problem there, so I don't know what to do now.

Thanks
Cary

@cartertemm
Owner

cartertemm commented Nov 17, 2023

Hello Cary,

This is an interesting observation.

From the OpenAI vision API documentation:

GPT-4 with vision is currently available to all developers who have access to GPT-4 via the gpt-4-vision-preview model and the Chat Completions API which has been updated to support image inputs. Note that the Assistants API does not currently support image inputs.

My assumption was that all OpenAI platform accounts with a balance > $0 and an API key had access; this excludes ChatGPT Plus subscriptions, which are separate from the API.

  • Is this issue occurring every time you try to get information for an object? Sometimes 401 is returned after multiple requests as a means of rate limiting. When this happens, I usually wait around a minute, then it begins working again.
  • If so, have you double-checked that the API key for your paid account is valid?

If neither turns out to be the case, here is the Python script I use to troubleshoot image completions. Are you getting the same result after saving and running it?

import base64
import requests


def encode_image(image_path):
	with open(image_path, "rb") as image_file:
		return base64.b64encode(image_file.read()).decode('utf-8')


def image_complete(api_key, image_path, prompt="Describe this image in as much detail as possible"):
	base64_image = encode_image(image_path)
	headers = {
		"Content-Type": "application/json",
		"Authorization": f"Bearer {api_key}"
	}
	payload = {
		"model": "gpt-4-vision-preview",
		"messages": [{
			"role": "user",
			"content": [
				{
					"type": "text",
					"text": prompt
				},
				{
					"type": "image_url",
					"image_url": {
						"url": f"data:image/jpeg;base64,{base64_image}"
					}
				}
			]
		}],
		"max_tokens": 300
	}
	response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
	return response.json()

if __name__ == "__main__":
	api_key = input("Please enter your API key: ")
	image_path = input("Please enter the path to your image file: ")
	print(image_complete(api_key, image_path))

@cary-rowen
Author

Hi @cartertemm
The code you provided is useful; it gives output like this:

{'error': {'message': 'The model `gpt-4-vision-preview` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

According to the output above, being able to use GPT-4 on chat.openai.com doesn't mean I can use the GPT-4 API.
Is that so?

@cartertemm
Owner

@cary-rowen

This OpenAI forum topic seems to suggest that in addition to maintaining a balance in your account, topping up on credits (e.g. $5) might trigger availability of the vision preview API.

You mention ChatGPT. Purchasing ChatGPT Plus does not automatically provide platform credits. Check whether you have an account balance by going to https://platform.openai.com/usage and let me know. Here are the steps I'd recommend to troubleshoot further:

  • Ensure your account balance is > $0
  • Delete and re-generate your API key
  • If you are able, purchase credits in any amount and wait 5 to 10 minutes to see whether requests return the expected response

Either way, the add-on should have announced the error rather than failing silently, so I'll get that working in the next release.
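For what it's worth, the second and third steps can be sanity-checked by asking the API which models the key can actually use. This is a minimal sketch against the standard `/v1/models` listing endpoint; the helper names here are mine, not part of the add-on:

```python
import json
import urllib.request

MODELS_URL = "https://api.openai.com/v1/models"  # standard model-listing endpoint

def list_model_ids(api_key):
	"""Fetch the model IDs this API key can access (a 401 here means a bad key)."""
	request = urllib.request.Request(
		MODELS_URL, headers={"Authorization": f"Bearer {api_key}"})
	with urllib.request.urlopen(request, timeout=30) as response:
		data = json.load(response)
	return [model["id"] for model in data.get("data", [])]

def has_model(model_ids, wanted="gpt-4-vision-preview"):
	"""True when the account's model list includes the model the add-on needs."""
	return wanted in model_ids

# Usage (requires a valid key and network access):
# print(has_model(list_model_ids("sk-...")))
```

If `gpt-4-vision-preview` is missing from the list, that would match the `model_not_found` error rather than a key problem.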

@cary-rowen
Author

When I make a request with my API key using the code you gave, I do see the call show up on this page.

https://platform.openai.com/docs/api-reference

{'error': {'message': 'The model `gpt-4-vision-preview` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}} 

@cary-rowen
Author

I went to the following page and saw: Credit remaining $4.52
https://platform.openai.com/account/billing/overview

@cartertemm
Owner

If you go to the billing overview page again: https://platform.openai.com/account/billing/overview

And then click on billing history, what does it show as the date for your last invoice? From the link in the error response:

If you've made a successful payment of $1 or more, you'll be able to access the GPT-4 API (8k).

I'm not sure whether that includes the vision preview, but it would be good information to have.

@aaronr7734

My guess is the credit on the account isn't money that @cary-rowen added, but trial credits that they have yet to use up.
You need to have a billing agreement set up with OpenAI before these models become available. Trial credits don't count.

@cartertemm
Owner

@cary-rowen were you able to get this working with @aaronr7734's suggestions and the steps provided above?

I've gotten emails from a couple others who are getting the same message, so once we nail down a solution it would be good to add a snippet explaining it in the readme.

@mcsedmak

I was having the same problem. I also assumed my ChatGPT Plus account would suffice.

  • First I created my key, but it did not work.
  • I used the steps above to add a payment method to my API account.
  • I then added $10 to my account.
  • It still didn't work.
  • I waited 5 minutes.
  • Now it works.

@cary-rowen
Author

Thanks @cartertemm @aaronr7734 @mcsedmak
I think this problem is indeed caused by our accounts not having permission to access GPT-4 with vision.

Since I am in China and have to submit requests through a mirror, I took @cartertemm's code sample above and replaced the URL with my mirror address, which returned the results as expected.
But when I tried to replace the URL in description_service.py line 157 of the add-on, the expected data was not returned.

IO - speech.speech.speak (10:15:47.164) - MainThread (14520):
Speaking ['Log fragment start point marked; press again to copy to clipboard']
IO - inputCore.InputManager.executeGesture (10:15:48.549) - winInputHook (10816):
Input: kb(laptop):shift+NVDA+u
IO - tones.beep (10:15:48.887) - Thread-12 (14440):
Beep at pitch 300, for 200 ms, left volume 50, right volume 50
IO - speech.speech.speak (10:15:48.889) - Thread-12 (14440):
Speaking ['Retrieving description...']
IO - tones.beep (10:15:49.965) - Thread-12 (14440):
Beep at pitch 150, for 200 ms, left volume 50, right volume 50
ERROR - stderr (10:15:49.968) - Thread-12 (14440):
Exception in thread Thread-12:
Traceback (most recent call last):
  File "threading.pyc", line 926, in _bootstrap_inner
  File "threading.pyc", line 870, in run
  File "C:\Users\cary\AppData\Roaming\nvda\addons\AIContentDescriber\globalPlugins\AIContentDescriber\__init__.py", line 208, in describe_image
    message = service.process(file, **ch.config[service.name])
  File "C:\Users\cary\AppData\Roaming\nvda\addons\AIContentDescriber\globalPlugins\AIContentDescriber\description_service.py", line 157, in process
    response = post(url="https://1aee1707-a7dc-48e6-bcc3-3043433f1c29.api.agi.dreamforest.net/v1/chat/completions", headers=headers, data=json.dumps(payload).encode('utf-8'), timeout=timeout)
  File "C:\Users\cary\AppData\Roaming\nvda\addons\AIContentDescriber\globalPlugins\AIContentDescriber\description_service.py", line 84, in post
    error_text = json.loads(error_text)
  File "json\__init__.pyc", line 348, in loads
  File "json\decoder.pyc", line 337, in decode
  File "json\decoder.pyc", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
IO - inputCore.InputManager.executeGesture (10:15:50.912) - winInputHook (10816):
Input: kb(laptop):shift+control+NVDA+f1

Expected json data:

{'id': 'chatcmpl-8NtPSj8zLqNuFc6T3h3ehijXDrg2Y', 'object': 'chat.completion', 'created': 1700705230, 'model': 'gpt-4-1106-vision-preview', 'usage': {'prompt_tokens': 1138, 'completion_tokens': 224, 'total_tokens': 1362}, 'choices': [{'message': {'role': 'assistant', 'content': 'This image displays the front and back of a bank card, specifically from China Construction Bank. The card features chip technology, as seen by the gold chip on the front left side, and is branded as a UnionPay card with ATM capability, suggested by the logos present.\n\nOn the front, the card has a colorful design with what appears to be a landscape scene including grass and trees. The card number, card holder\'s name (which is censored for privacy), and expiration date (12/27) are clearly visible. The cardholder\'s name field displays only "****," concealing the actual name.\n\nTurning to the back, there\'s a magnetic stripe at the top, a space for an authorized signature with a signature present (which is blurred for privacy), and printed text in Chinese which likely includes bank information and customer service details. The "QuickPass" logo, indicating contactless payment technology, is at the bottom right. Additionally, there\'s a printed URL and hotline number at the bottom. The back also contains a card security code, usually required for online transactions, which is censored in this image.'}, 'finish_details': {'type': 'stop', 'stop': '<|fim_suffix|>'}, 'index': 0}]}
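Reading the traceback, the failure is inside the add-on's error handler: `json.loads(error_text)` at description_service.py line 84 assumes the error body is JSON, but the mirror appears to return something else (HTML or plain text), hence the `JSONDecodeError`. A tolerant fallback might look like this sketch; the function name is mine, not the add-on's actual code:

```python
import json

def parse_error_body(error_text):
	"""Decode an API error body as JSON, falling back to the raw text.

	Mirrors and proxies sometimes return HTML or plain-text error pages,
	which would crash a bare json.loads() with JSONDecodeError.
	"""
	try:
		return json.loads(error_text)
	except json.JSONDecodeError:
		# Non-JSON body (e.g. an HTML error page from the mirror): wrap it
		# so callers still receive the familiar {"error": {...}} shape.
		return {"error": {"message": error_text.strip(), "type": "non_json_response"}}
```

With something like this, the add-on could at least report what the mirror actually sent back instead of crashing the worker thread.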

@cary-rowen
Author

Based on the above:

  • @cartertemm, is it possible to add an option for specifying a mirror URL?
  • Due to time constraints, I haven't debugged this error carefully. Can you take a look?

Thanks
Cary

@cartertemm
Owner

I've been thinking about how we wish to address this problem for others that might want to use the add-on in China.

Do you know if the API is unavailable across the entirety of China, or just your region? I've searched for information on this and am unfortunately coming up short.

It would appear that the mirror you are using is working slightly differently, triggering an IOError of some kind. I am hesitant to modify the error handling logic for this specifically.

request body: "The log fragment starts with a mark, then copy it once to the clipboard"

Instead of supporting different mirrors (each of which could inject other information into the response), would it help to allow for an HTTP or SOCKS proxy?
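With the standard library, an HTTP proxy could be wired in roughly like this (SOCKS would need an extra dependency such as PySocks, and the proxy address below is a placeholder, not a real endpoint):

```python
import urllib.request

def build_proxied_opener(proxy_url):
	"""Build a urllib opener that routes HTTP(S) traffic through the given proxy."""
	handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
	return urllib.request.build_opener(handler)

# Placeholder address -- a user would supply their own proxy here.
opener = build_proxied_opener("http://127.0.0.1:8080")
# opener.open("https://api.openai.com/v1/chat/completions", ...) would then
# tunnel through the proxy instead of connecting directly.
```

The appeal of this approach is that the response still comes from api.openai.com unmodified, so none of the parsing logic has to change.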

@cary-rowen
Author

Hi @cartertemm

Do you know if the API is unavailable across the entirety of China, or just your region? I've searched for information on this and am unfortunately coming up short.

From what I've read, it's unavailable across the entirety of China.


It would appear that the mirror you are using is working slightly differently, triggering an IOError of some kind. I am hesitant to modify the error handling logic for this specifically.

I might want you to do this if it doesn't have negative consequences, but ultimately it's up to you.


Instead of supporting different mirrors (each of which could inject other information into the response), would it help to allow for an HTTP or SOCKS proxy?

But proxy servers may not be easy to obtain.

@mzanm
Contributor

mzanm commented Apr 16, 2024

I think the issue could be fixed if the add-on used the regular gpt-4-turbo model on the API, as it's now out of preview and supports vision. It might be helpful to add a selector for which OpenAI model to use, in case of any regressions in the newer model for some use cases. As for the HTTP error, the model might be usable on free credit now that it's no longer in preview.
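The model selector idea could be sketched roughly like this, reusing the payload shape from the script earlier in the thread (the labels and helper are illustrative, not the add-on's actual code, and the model names may change over time):

```python
# Illustrative label -> API model name mapping.
MODELS = {
	"GPT-4 turbo": "gpt-4-turbo",
	"GPT-4 vision preview": "gpt-4-vision-preview",
}

def build_payload(model_label, prompt, base64_image, max_tokens=300):
	"""Build a chat-completions payload for whichever model the user selected."""
	return {
		"model": MODELS[model_label],
		"messages": [{
			"role": "user",
			"content": [
				{"type": "text", "text": prompt},
				{"type": "image_url",
				 "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}},
			],
		}],
		"max_tokens": max_tokens,
	}
```

Only the `model` field changes between the two; the image content format is the same for both.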

@cartertemm
Owner

You can now choose between GPT-4 turbo and GPT-4 vision preview in the manage models dialog, so this would be an easy test. But I think the region-based limitation has more to do with a choice on the part of OpenAI itself, or perhaps it's being blocked by the firewall.
