
Commit

Merge branch 'main' into main
holtskinner authored Dec 13, 2024
2 parents a1d2738 + 96e567f commit b6e7e5d
Showing 9 changed files with 7,902 additions and 7,958 deletions.
5,152 changes: 2,578 additions & 2,574 deletions gemini/agents/research-multi-agents/intro_research_multi_agents_gemini_2_0.ipynb

Large diffs are not rendered by default.

3,336 changes: 1,647 additions & 1,689 deletions gemini/code-execution/intro_code_execution.ipynb

Large diffs are not rendered by default.

1,115 changes: 558 additions & 557 deletions gemini/multimodal-live-api/real_time_rag_bank_loans_gemini_2_0.ipynb

Large diffs are not rendered by default.

3,654 changes: 1,829 additions & 1,825 deletions gemini/multimodal-live-api/real_time_rag_retail_gemini_2_0.ipynb

Large diffs are not rendered by default.

2,258 changes: 1,129 additions & 1,129 deletions gemini/reasoning-engine/tutorial_langgraph_rag_agent.ipynb

Large diffs are not rendered by default.

Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
streamlit
google-cloud-spanner
-itables
+itables==2.1.5
streamlit-navigation-bar
streamlit-extras
streamlit-agraph

Large diffs are not rendered by default.

183 changes: 87 additions & 96 deletions gemini/tuning/gen_ai_sdk_supervised_finetuning_using_gemini_qa.ipynb
@@ -33,22 +33,22 @@
"\n",
"<table align=\"left\">\n",
" <td style=\"text-align: center\">\n",
-" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/supervised_finetuning_using_gemini_qa.ipynb\">\n",
+" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/gen_ai_sdk_supervised_finetuning_using_gemini_qa.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
-" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Ftuning%2Fsupervised_finetuning_using_gemini_qa.ipynb\">\n",
+" <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Ftuning%2Fgen_ai_sdk_supervised_finetuning_using_gemini_qa.ipynb\">\n",
" <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
" </a>\n",
" </td> \n",
" <td style=\"text-align: center\">\n",
-" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/tuning/supervised_finetuning_using_gemini_qa.ipynb\">\n",
+" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/tuning/gen_ai_sdk_supervised_finetuning_using_gemini_qa.ipynb\">\n",
" <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Workbench\n",
" </a>\n",
" </td>\n",
" <td style=\"text-align: center\">\n",
-" <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/supervised_finetuning_using_gemini_qa.ipynb\">\n",
+" <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/gen_ai_sdk_supervised_finetuning_using_gemini_qa.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n",
" </a>\n",
" </td>\n",
@@ -58,23 +58,23 @@
"\n",
"<b>Share to:</b>\n",
"\n",
-"<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/supervised_finetuning_using_gemini_qa.ipynb\" target=\"_blank\">\n",
+"<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/gen_ai_sdk_supervised_finetuning_using_gemini_qa.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
"</a>\n",
"\n",
-"<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/supervised_finetuning_using_gemini_qa.ipynb\" target=\"_blank\">\n",
+"<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/gen_ai_sdk_supervised_finetuning_using_gemini_qa.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
"</a>\n",
"\n",
-"<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/supervised_finetuning_using_gemini_qa.ipynb\" target=\"_blank\">\n",
+"<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/gen_ai_sdk_supervised_finetuning_using_gemini_qa.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/53/X_logo_2023_original.svg\" alt=\"X logo\">\n",
"</a>\n",
"\n",
-"<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/supervised_finetuning_using_gemini_qa.ipynb\" target=\"_blank\">\n",
+"<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/gen_ai_sdk_supervised_finetuning_using_gemini_qa.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
"</a>\n",
"\n",
-"<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/supervised_finetuning_using_gemini_qa.ipynb\" target=\"_blank\">\n",
+"<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/tuning/gen_ai_sdk_supervised_finetuning_using_gemini_qa.ipynb\" target=\"_blank\">\n",
" <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
"</a> "
]
@@ -160,8 +160,7 @@
"source": [
"### Install the Google GenAI SDK and other required packages\n",
"\n",
"The new Google Gen AI SDK provides a unified interface to Gemini through both the Gemini Developer API and the Gemini API on Vertex AI. With a few exceptions, code that runs on one platform will run on both. This means that you can prototype an application using the Developer API and then migrate the application to Vertex AI without rewriting your code.\n",
"\n"
"The new Google Gen AI SDK provides a unified interface to Gemini through both the Gemini Developer API and the Gemini API on Vertex AI. With a few exceptions, code that runs on one platform will run on both. This means that you can prototype an application using the Developer API and then migrate the application to Vertex AI without rewriting your code.\n"
]
},
{
@@ -257,8 +256,8 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Nqwi-5ufWp_B",
"cellView": "code"
"cellView": "code",
"id": "Nqwi-5ufWp_B"
},
"outputs": [],
"source": [
@@ -274,9 +273,7 @@
"\n",
"LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
"\n",
"client = genai.Client(\n",
" vertexai=True, project=PROJECT_ID, location=LOCATION\n",
")"
"client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
]
},
{
@@ -298,7 +295,6 @@
"source": [
"from collections import Counter\n",
"import json\n",
"import time\n",
"import random\n",
"\n",
"# Vertex AI SDK\n",
@@ -309,9 +305,8 @@
"import pandas as pd\n",
"import plotly.graph_objects as go\n",
"from plotly.subplots import make_subplots\n",
"from IPython.display import Markdown, display\n",
"\n",
"import vertexai\n",
"\n",
"vertexai.init(project=PROJECT_ID, location=LOCATION)\n",
"\n",
"from google.cloud import aiplatform\n",
@@ -612,43 +607,43 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "udTxzY8mpGYf"
},
"outputs": [],
"source": [
"def get_predictions(question: str, model_version: str) -> str:\n",
"\n",
" prompt = question\n",
" base_model = model_version\n",
" prompt = question\n",
" base_model = model_version\n",
"\n",
" response = client.models.generate_content(\n",
" model = base_model,\n",
" contents = prompt,\n",
" config={\n",
" 'system_instruction': systemInstruct,\n",
" 'temperature': 0.3,\n",
" },\n",
" )\n",
" response = client.models.generate_content(\n",
" model=base_model,\n",
" contents=prompt,\n",
" config={\n",
" \"system_instruction\": systemInstruct,\n",
" \"temperature\": 0.3,\n",
" },\n",
" )\n",
"\n",
" return response.text"
],
"metadata": {
"id": "udTxzY8mpGYf"
},
"execution_count": null,
"outputs": []
" return response.text"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PFvwmGll3MIv"
},
"outputs": [],
"source": [
"test_answer = test_df[\"answers\"].iloc[row_dataset]\n",
"response = get_predictions(test_question, base_model)\n",
"\n",
"print(f\"Gemini response: {response}\")\n",
"print(f\"Actual answer: {test_answer}\")"
],
"metadata": {
"id": "PFvwmGll3MIv"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
@@ -938,41 +933,41 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gdcy4umfpGZE"
},
"outputs": [],
"source": [
"train_dataset = f\"\"\"{BUCKET_URI}/squad_train.jsonl\"\"\"\n",
"validation_dataset = f\"\"\"{BUCKET_URI}/squad_train.jsonl\"\"\"\n",
"\n",
"training_dataset= {\n",
" 'gcs_uri': train_dataset,\n",
"training_dataset = {\n",
" \"gcs_uri\": train_dataset,\n",
"}\n",
"\n",
"validation_dataset = types.TuningValidationDataset(gcs_uri=validation_dataset)"
],
"metadata": {
"id": "gdcy4umfpGZE"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NkboVUkoqWSp"
},
"outputs": [],
"source": [
"sft_tuning_job = client.tunings.tune(\n",
" base_model=base_model,\n",
" training_dataset=training_dataset,\n",
" config=types.CreateTuningJobConfig(\n",
" adapter_size = 'ADAPTER_SIZE_EIGHT',\n",
" epoch_count = 1, # set to one to keep time and cost low\n",
" tuned_model_display_name=\"gemini-flash-1.5-qa\"\n",
")\n",
" adapter_size=\"ADAPTER_SIZE_EIGHT\",\n",
" epoch_count=1, # set to one to keep time and cost low\n",
" tuned_model_display_name=\"gemini-flash-1.5-qa\",\n",
" ),\n",
")\n",
"sft_tuning_job"
],
"metadata": {
"id": "NkboVUkoqWSp"
},
"execution_count": null,
"outputs": []
]
},
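Tuning runs asynchronously, so after submitting the job the usual pattern is to poll its state until it reaches a terminal value. The generic helper below is a sketch: in the notebook you would wire `fetch_state` to something like `lambda: client.tunings.get(name=sft_tuning_job.name).state` (hypothetical wiring), and the state names are assumptions based on the Vertex AI `JobState` enum.

```python
import time
from typing import Callable

# Assumed terminal states; check the SDK's JobState enum for the full list.
TERMINAL_STATES = {"JOB_STATE_SUCCEEDED", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"}


def wait_for_job(
    fetch_state: Callable[[], str],
    poll_seconds: float = 60.0,
    sleep: Callable[[float], None] = time.sleep,
) -> str:
    """Poll fetch_state() until it returns a terminal state, then return it."""
    while True:
        state = fetch_state()
        if state in TERMINAL_STATES:
            return state
        sleep(poll_seconds)


# Simulated run: the job reports RUNNING twice, then succeeds.
states = iter(["JOB_STATE_RUNNING", "JOB_STATE_RUNNING", "JOB_STATE_SUCCEEDED"])
print(wait_for_job(lambda: next(states), sleep=lambda _: None))  # JOB_STATE_SUCCEEDED
```

Injecting `sleep` keeps the loop testable and avoids blocking when experimenting; in a real session the default `time.sleep` with a 60-second interval is typical.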
{
"cell_type": "markdown",
@@ -996,26 +991,26 @@
},
{
"cell_type": "code",
"source": [
"sft_tuning_job.state"
],
"execution_count": null,
"metadata": {
"id": "WECSLyPRth6M"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"sft_tuning_job.state"
]
},
{
"cell_type": "code",
"source": [
"tuning_job = client.tunings.get(name=sft_tuning_job.name)\n",
"tuning_job"
],
"execution_count": null,
"metadata": {
"id": "_iwz4lhUDC_f"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"tuning_job = client.tunings.get(name=sft_tuning_job.name)\n",
"tuning_job"
]
},
{
"cell_type": "markdown",
Expand All @@ -1040,15 +1035,15 @@
},
{
"cell_type": "code",
"source": [
"experiment_name = tuning_job.experiment\n",
"experiment_name"
],
"execution_count": null,
"metadata": {
"id": "_IoiiRH5Lhpf"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"experiment_name = tuning_job.experiment\n",
"experiment_name"
]
},
{
"cell_type": "code",
@@ -1125,16 +1120,10 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "DL07j7u__iZx",
"outputId": "c31ad64a-cf9e-45d7-b625-e4a7dbf49cc9",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 542
}
"id": "DL07j7u__iZx"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/html": [
"<html>\n",
@@ -1170,7 +1159,8 @@
"</html>"
]
},
"metadata": {}
"metadata": {},
"output_type": "display_data"
}
],
"source": [
@@ -1248,14 +1238,14 @@
},
{
"cell_type": "code",
"source": [
"get_predictions(prompt, tuned_model)"
],
"execution_count": null,
"metadata": {
"id": "ifhRboiCOBje"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"get_predictions(prompt, tuned_model)"
]
},
{
"cell_type": "code",
@@ -1283,12 +1273,12 @@
},
{
"cell_type": "markdown",
"source": [
"After running the evaluation, you can see that the model generally performs better on our use case after fine-tuning. Of course, performance will vary depending on factors such as the use case and data quality."
],
"metadata": {
"id": "kBawjkvKQ_Q-"
}
},
"source": [
"After running the evaluation, you can see that the model generally performs better on our use case after fine-tuning. Of course, performance will vary depending on factors such as the use case and data quality."
]
},
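"Performs better" can be quantified on a QA set with SQuAD-style metrics: exact match and token-level F1 between the model's answer and the reference. The sketch below is a simplified version of those metrics (the official SQuAD script additionally strips punctuation and articles), not the notebook's actual evaluation code:

```python
from collections import Counter


def normalize(text: str) -> list[str]:
    """Lowercase and split on whitespace (simplified normalization)."""
    return text.lower().split()


def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized answers are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))


def f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between prediction and reference."""
    pred, ref = normalize(prediction), normalize(reference)
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


print(exact_match("The Eiffel Tower", "the eiffel tower"))  # 1.0
print(f1("in Paris France", "Paris"))  # 0.5
```

Averaging these scores over the test set before and after tuning gives a concrete before/after comparison for the claim above.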
{
"cell_type": "code",
@@ -1306,7 +1296,8 @@
],
"metadata": {
"colab": {
"provenance": []
"name": "gen_ai_sdk_supervised_finetuning_using_gemini_qa.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
@@ -1315,4 +1306,4 @@
},
"nbformat": 4,
"nbformat_minor": 0
}
}
3 changes: 2 additions & 1 deletion vision/use-cases/hey_llm/src/main.ts
@@ -19,8 +19,9 @@ import type {GenerateContentResponse} from '@google-cloud/vertexai';

/**
* Vertex AI location. Change this const if you want to use another location.
* us-central1 is chosen as default to currently provide the most model availability. See [Vertex AI locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations) for more details.
*/
-const LOCATION = 'asia-northeast1';
+const LOCATION = 'us-central1';

/**
* Default Gemini model to use.
