Google AI Python SDK


The Google AI Python SDK enables developers to use Google's state-of-the-art generative AI models (like Gemini and PaLM) to build AI-powered features and applications. This SDK supports use cases like:

  • Generate text from text-only input
  • Generate text from text-and-images input (multimodal; Gemini only)
  • Build multi-turn conversations (chat)
  • Generate embeddings

For example, with just a few lines of code, you can access Gemini's multimodal capabilities to generate text from text-and-image input:

# Imports assumed by this snippet.
from pathlib import Path
import google.generativeai as genai

# Assumes genai.configure(api_key=...) has already been called (see below).
model = genai.GenerativeModel('gemini-pro-vision')

# An image part is a dict with a MIME type and the raw image bytes.
cookie_picture = {
    'mime_type': 'image/png',
    'data': Path('cookie.png').read_bytes()
}
prompt = "Give me a recipe for this:"

response = model.generate_content(
    contents=[prompt, cookie_picture]
)
print(response.text)
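The SDK also covers the embedding use case listed above. A minimal sketch, assuming the 'models/embedding-001' embedding model is available to your API key:

result = genai.embed_content(
    model='models/embedding-001',   # embedding model name; adjust if yours differs
    content='What is the meaning of life?'
)
print(result['embedding'][:5])  # first few values of the embedding vector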

Try out the API

Install from PyPI.

pip install google-generativeai

Obtain an API key from Google AI Studio, then pass it to genai.configure as shown below.

Import the SDK and load a model.

import os

import google.generativeai as genai

genai.configure(api_key=os.environ["API_KEY"])

model = genai.GenerativeModel('gemini-pro')
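If you're not sure which model name to use, you can list the models available to your key. A quick sketch; filtering on supported_generation_methods is just one way to narrow the list:

for m in genai.list_models():
    # Keep only models that support the generate_content call used below.
    if 'generateContent' in m.supported_generation_methods:
        print(m.name)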

Use GenerativeModel.generate_content to have the model complete some initial text.

response = model.generate_content("The opposite of hot is")
print(response.text)  # cold.
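You can also tune a request, for example by lowering the temperature or capping the output length. A sketch using the SDK's GenerationConfig; the parameter values are only illustrative:

response = model.generate_content(
    "Write a one-line slogan for a bakery.",
    generation_config=genai.types.GenerationConfig(
        temperature=0.2,        # more deterministic output
        max_output_tokens=40,   # cap the response length
    ),
)
print(response.text)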

Use GenerativeModel.start_chat to have a discussion with a model.

chat = model.start_chat()
response = chat.send_message('Hello, what should I have for dinner?')
print(response.text) #  'Here are some suggestions...'
response = chat.send_message("How do I cook the first one?")
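The chat object keeps the running conversation, so you can review it afterwards. A small sketch, assuming each history entry exposes a role and text parts:

for message in chat.history:
    # Each entry records who spoke ('user' or 'model') and the text that was sent.
    print(message.role, ':', message.parts[0].text)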

Installation and usage

Run pip install google-generativeai.

For detailed instructions, you can find a quickstart for the Google AI Python SDK in the Google documentation.

This quickstart describes how to add your API key and install the SDK in your app, initialize the model, and then call the API to access the model. It also describes some additional use cases and features, like streaming, embedding, counting tokens, and controlling responses.
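As a taste of two of those features, here is a sketch (reusing the gemini-pro model from above) that streams a response chunk by chunk and counts the tokens in a prompt:

# Stream the response as it is generated instead of waiting for the full text.
response = model.generate_content("Tell me a short story.", stream=True)
for chunk in response:
    print(chunk.text, end="")

# Count tokens before sending a prompt, e.g. to stay within the model's limit.
print(model.count_tokens("The quick brown fox jumps over the lazy dog."))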

Documentation

Find complete documentation for the Google AI SDKs and the Gemini model in the Google documentation: https://ai.google.dev/docs

Contributing

See Contributing for more information on contributing to the Google AI Python SDK.

Developers who use the PaLM API

Migrate to use the Gemini API

Check our migration guide in the Google documentation.

Installation and usage for the PaLM API

Install from PyPI.

pip install google-generativeai

Obtain an API key from Google AI Studio, then pass it to palm.configure as shown below.

import os

import google.generativeai as palm

palm.configure(api_key=os.environ["PALM_API_KEY"])

Use palm.generate_text to have the model complete some initial text.

response = palm.generate_text(prompt="The opposite of hot is")
print(response.result)  # cold.
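palm.generate_text also accepts sampling parameters; the values below are illustrative, and response.candidates is assumed to hold one dict per returned completion:

response = palm.generate_text(
    prompt="Write a two-line poem about the sea.",
    temperature=0.7,      # higher values give more varied completions
    candidate_count=2,    # ask for two alternative completions
)
for candidate in response.candidates:
    print(candidate['output'])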

Use palm.chat to have a discussion with a model.

response = palm.chat(messages=["Hello."])
print(response.last)  # 'Hello! What can I help you with?'
response = response.reply("Can you tell me a joke?")
print(response.last)
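palm.chat can also be primed with a context string (and optional examples) before the first message. A small sketch, with the context text being purely illustrative:

response = palm.chat(
    context="You are a helpful cooking assistant.",  # illustrative priming text
    messages=["What goes well with pasta?"],
)
print(response.last)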

Documentation for the PaLM API

Colab magics

Once installed, use the Python client via the %%palm Colab magic. Read the full guide.
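The magic needs to be loaded into the notebook first; assuming the extension ships as google.generativeai.notebook (as described in the magics guide), that looks like:

%load_ext google.generativeai.notebook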

%%palm
The best thing since sliced bread is

License

The contents of this repository are licensed under the Apache License, version 2.0.