Update AI Foundry branding (#38947)
* update azure ai foundry branding

* restore AI projects files
rohit-ganguly authored Dec 19, 2024
1 parent 51870e8 commit 0a9f368
Showing 5 changed files with 9 additions and 10 deletions.
@@ -168,7 +168,7 @@ def evaluate(
:paramtype data_mapping: Optional[Dict[str, str]]
:keyword output_path: The local folder path to save evaluation artifacts to if set
:paramtype output_path: Optional[str]
-:keyword tracking_uri: Tracking uri to log evaluation results to AI Studio
+:keyword tracking_uri: Tracking uri to log evaluation results to AI Foundry
:paramtype tracking_uri: Optional[str]
:return: An EvaluationResult object.
:rtype: ~azure.ai.generative.evaluate.EvaluationResult
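The documented keywords can be pictured as a plain keyword-argument dictionary. This is a hedged sketch only: `evaluate` does live in `azure.ai.generative.evaluate`, but the mapping values, path, and tracking URI below are hypothetical placeholders, and the function takes other required arguments not shown in this excerpt.

```python
# Hypothetical keyword arguments mirroring only the parameters documented
# in the docstring above; all values are placeholders, not real resources.
eval_kwargs = {
    "data_mapping": {"question": "input", "answer": "prediction"},  # Optional[Dict[str, str]]
    "output_path": "./eval-artifacts",                              # Optional[str]
    "tracking_uri": "https://example-tracking-uri",                 # logs results to AI Foundry
}

# The call itself would then look like (not executed here, and it would
# also need the required positional arguments omitted from this excerpt):
# result = evaluate(**eval_kwargs)  # returns an EvaluationResult
```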
9 changes: 4 additions & 5 deletions sdk/ai/azure-ai-inference/README.md
@@ -11,8 +11,8 @@ Use the Inference client library (in preview) to:
The Inference client library supports AI models deployed to the following services:

* [GitHub Models](https://github.com/marketplace/models) - Free-tier endpoint for AI models from different providers
-* Serverless API endpoints and Managed Compute endpoints - AI models from different providers deployed from [Azure AI Studio](https://ai.azure.com). See [Overview: Deploy models, flows, and web apps with Azure AI Studio](https://learn.microsoft.com/azure/ai-studio/concepts/deployments-overview).
-* Azure OpenAI Service - OpenAI models deployed from [Azure OpenAI Studio](https://oai.azure.com/). See [What is Azure OpenAI Service?](https://learn.microsoft.com/azure/ai-services/openai/overview). Although we recommend you use the official [OpenAI client library](https://pypi.org/project/openai/) in your production code for this service, you can use the Azure AI Inference client library to easily compare the performance of OpenAI models to other models, using the same client library and Python code.
+* Serverless API endpoints and Managed Compute endpoints - AI models from different providers deployed from [Azure AI Foundry](https://ai.azure.com). See [Overview: Deploy models, flows, and web apps with Azure AI Foundry](https://learn.microsoft.com/azure/ai-studio/concepts/deployments-overview).
+* Azure OpenAI Service - OpenAI models deployed from [Azure AI Foundry](https://oai.azure.com/). See [What is Azure OpenAI Service?](https://learn.microsoft.com/azure/ai-services/openai/overview). Although we recommend you use the official [OpenAI client library](https://pypi.org/project/openai/) in your production code for this service, you can use the Azure AI Inference client library to easily compare the performance of OpenAI models to other models, using the same client library and Python code.

The Inference client library makes service calls using REST API version `2024-05-01-preview`, as documented in [Azure AI Model Inference API](https://aka.ms/azureai/modelinference).
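Because the library targets that REST API, the shape of a chat completions request can be sketched with the standard library alone. This is a hedged sketch: the host and key are placeholders, the payload follows the OpenAI-compatible schema used by the Azure AI Model Inference API, and no request is actually sent.

```python
import json
import urllib.request

# Placeholder endpoint and key -- substitute your own deployment's values.
endpoint = "https://example-host.eastus2.models.ai.azure.com"
api_key = "<your-32-character-api-key>"

# Chat completions payload in the OpenAI-compatible shape used by the
# Azure AI Model Inference API (REST version 2024-05-01-preview).
body = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How many feet are in a mile?"},
    ]
}

request = urllib.request.Request(
    url=f"{endpoint}/chat/completions?api-version=2024-05-01-preview",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Serverless endpoints take the key as a bearer token; Azure OpenAI
        # deployments use an "api-key" header instead.
        "Authorization": f"Bearer {api_key}",
    },
    method="POST",
)
# urllib.request.urlopen(request)  # not executed in this sketch
```

In practice the `azure-ai-inference` package wraps this request for you; the sketch only shows the wire-level shape the version string above refers to.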

@@ -27,18 +27,17 @@ The Inference client library makes service calls using REST API version `2024-0
### Prerequisites

* [Python 3.8](https://www.python.org/) or later installed, including [pip](https://pip.pypa.io/en/stable/).
-Studio.
* For GitHub models
* The AI model name, such as "gpt-4o" or "mistral-large"
* A GitHub personal access token. [Create one here](https://github.com/settings/tokens). You do not need to give any permissions to the token. The token is a string that starts with `github_pat_`.
* For Serverless API endpoints or Managed Compute endpoints
* An [Azure subscription](https://azure.microsoft.com/free).
-* An [AI Model from the catalog](https://ai.azure.com/explore/models) deployed through Azure AI Studio.
+* An [AI Model from the catalog](https://ai.azure.com/explore/models) deployed through Azure AI Foundry.
* The endpoint URL of your model, in the form `https://<your-host-name>.<your-azure-region>.models.ai.azure.com`, where `your-host-name` is your unique model deployment host name and `your-azure-region` is the Azure region where the model is deployed (e.g. `eastus2`).
* Depending on your authentication preference, you either need an API key to authenticate against the service, or Entra ID credentials. The API key is a 32-character string.
* For Azure OpenAI (AOAI) service
* An [Azure subscription](https://azure.microsoft.com/free).
-* An [OpenAI Model from the catalog](https://oai.azure.com/resource/models) deployed through Azure OpenAI Studio.
+* An [OpenAI Model from the catalog](https://oai.azure.com/resource/models) deployed through Azure AI Foundry.
* The endpoint URL of your model, in the form `https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>`, where `your-resource-name` is your globally unique AOAI resource name, and `your-deployment-name` is your AI Model deployment name.
* Depending on your authentication preference, you either need an API key to authenticate against the service, or Entra ID credentials. The API key is a 32-character string.
* An api-version. Use the latest preview or GA version listed in the `Data plane - inference` row in [the API Specs table](https://aka.ms/azsdk/azure-ai-inference/azure-openai-api-versions). At the time of writing, the latest GA version was "2024-06-01".
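The two endpoint URL shapes in the list above differ only in their fixed parts, so they can be composed from the pieces the prerequisites name. A minimal sketch, with made-up host, resource, and deployment names:

```python
def serverless_endpoint(host_name: str, azure_region: str) -> str:
    """Serverless API / Managed Compute endpoint form."""
    return f"https://{host_name}.{azure_region}.models.ai.azure.com"

def aoai_endpoint(resource_name: str, deployment_name: str) -> str:
    """Azure OpenAI (AOAI) endpoint form."""
    return f"https://{resource_name}.openai.azure.com/openai/deployments/{deployment_name}"

# Example values are placeholders, not real deployments.
print(serverless_endpoint("my-model-host", "eastus2"))
# https://my-model-host.eastus2.models.ai.azure.com
print(aoai_endpoint("my-aoai-resource", "my-gpt-4o"))
# https://my-aoai-resource.openai.azure.com/openai/deployments/my-gpt-4o
```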
4 changes: 2 additions & 2 deletions sdk/ai/azure-ai-inference/tests/README.md
@@ -4,13 +4,13 @@ The instructions below are for running tests locally, on a Windows machine, agai

## Prerequisites

-The live tests were written against the AI models mentioned below. You will need to deploy these two in [Azure AI Studio](https://ai.azure.com/) and have the endpoint and key for each one of them.
+The live tests were written against the AI models mentioned below. You will need to deploy these two in [Azure AI Foundry](https://ai.azure.com/) and have the endpoint and key for each one of them.

- `Mistral-Large` for chat completion tests, including tool tests
- `Cohere-embed-v3-english` for embedding tests
<!-- - `TBD` for image generation tests -->

-In addition, you will need to deploy a gpt-4o model in the Azure OpenAI Studio, and have the endpoint and key for it:
+In addition, you will need to deploy a gpt-4o model in the Azure AI Foundry, and have the endpoint and key for it:

- `gpt-4o` on Azure OpenAI (AOAI), for chat completions tests with image input
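
One way to wire up those endpoints and keys for a local run is through environment variables set before the tests are invoked. A hedged sketch: the variable names and values below are illustrative placeholders, not necessarily the names this test suite actually reads.

```python
import os

# Illustrative variable names and placeholder values -- substitute the
# endpoints and keys of your own Mistral-Large, Cohere-embed-v3-english,
# and gpt-4o deployments.
deployments = {
    "CHAT_COMPLETIONS_ENDPOINT": "https://my-mistral-host.eastus2.models.ai.azure.com",
    "CHAT_COMPLETIONS_KEY": "<mistral-large-key>",
    "EMBEDDINGS_ENDPOINT": "https://my-cohere-host.eastus2.models.ai.azure.com",
    "EMBEDDINGS_KEY": "<cohere-embed-key>",
    "AOAI_CHAT_ENDPOINT": "https://my-aoai.openai.azure.com/openai/deployments/my-gpt-4o",
    "AOAI_CHAT_KEY": "<gpt-4o-key>",
}
os.environ.update(deployments)

# The live tests could then be run from this directory with, e.g.:
# python -m pytest   (not executed in this sketch)
```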

2 changes: 1 addition & 1 deletion sdk/ai/azure-ai-resources/README.md
@@ -20,7 +20,7 @@ For a more complete set of Azure libraries, see https://aka.ms/azsdk/python/all.
- Python 3.7 or later is required to use this package.
- You must have an [Azure subscription][azure_subscription].
- An [Azure Machine Learning Workspace][workspace].
-- An [Azure AI Studio project][ai_project].
+- An [Azure AI Foundry project][ai_project].

### Install the package
Install the Azure AI generative package for Python with pip:
@@ -160,7 +160,7 @@ def from_config(
) -> "AIClient":
"""Returns a client from an existing Azure AI project using a file configuration.
-To get the required details, you can go to the Project Overview page in the AI Studio.
+To get the required details, you can go to the Project Overview page in AI Foundry.
You can save a project's details in a JSON configuration file using this format:
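The JSON layout itself is truncated in this excerpt. As an illustration only, writing such a configuration file and reading it back could look like the sketch below; the key names are assumptions, not the schema `from_config` actually documents.

```python
import json
import os
import tempfile

# Hypothetical project details -- the key names are illustrative, not the
# schema required by from_config.
config = {
    "subscription_id": "00000000-0000-0000-0000-000000000000",
    "resource_group": "my-resource-group",
    "project_name": "my-ai-project",
}

path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)

# A client would then load these details from the file, e.g.:
# client = AIClient.from_config(path=path)  # illustrative call only
with open(path) as f:
    loaded = json.load(f)
print(loaded["project_name"])  # my-ai-project
```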
