Fixed validation method for AzureOpenAI
valentinfrlch committed Dec 23, 2024
1 parent 147821b commit fd035d0
Showing 4 changed files with 102 additions and 85 deletions.
71 changes: 56 additions & 15 deletions blueprints/event_summary.yaml
@@ -2,7 +2,7 @@ blueprint:
name: AI Event Summary (LLM Vision v1.3.1)
author: valentinfrlch
description: >
AI-powered security event summaries for frigate or camera entities.
AI-powered security event summaries for frigate or camera entities.
Sends a notification with a preview to your phone that is updated dynamically when the AI summary is available.
domain: automation
source_url: https://github.com/valentinfrlch/ha-llmvision/blob/main/blueprints/event_summary.yaml
@@ -42,23 +42,43 @@ blueprint:
integration: mobile_app
camera_entities:
name: Camera Entities
description: (Camera and Frigate mode) List of camera entities to monitor
description: >-
(Camera and Frigate mode)
List of camera entities to monitor
default: []
selector:
entity:
multiple: true
filter:
domain: camera
object_type:
name: Included Object Type(s)
description: >-
(Frigate mode only)
Only run if Frigate labels the object as one of these (person, dog, bird, etc.)
default: []
selector:
text:
multiline: false
multiple: true
trigger_state:
name: Trigger State
description: (Camera mode only) Trigger the automation when your cameras change to this state.
description: >-
(Camera mode only)
Trigger the automation when your cameras change to this state.
default: 'recording'
selector:
text:
multiline: false
motion_sensors:
name: Motion Sensor
description: (Camera mode only) Set if your cameras don't change state. Use the same order used for camera entities.
description: >-
(Camera mode only)
Set if your cameras don't change state. Use the same order used for camera entities.
default: []
selector:
entity:
@@ -67,7 +87,10 @@
domain: binary_sensor
preview_mode:
name: Preview Mode
description: (Camera mode only) Choose between a live preview or a snapshot of the event
description: >-
(Camera mode only)
Choose between a live preview or a snapshot of the event
default: 'Live Preview'
selector:
select:
@@ -84,22 +107,32 @@
max: 60
tap_navigate:
name: Tap Navigate
description: Path to navigate to when notification is opened (e.g. /lovelace/cameras)
description: >-
Path to navigate to when notification is opened (e.g. /lovelace/cameras).
To use the same input that was sent to the AI engine, use
`{{video if video != '''' else image}}`
default: "/lovelace/0"
selector:
text:
multiline: false
duration:
name: Duration
description: (Camera mode only) How long to record before analyzing (in seconds)
description: >-
(Camera mode only)
How long to record before analyzing (in seconds)
default: 5
selector:
number:
min: 1
max: 60
max_frames:
name: Max Frames
description: (Camera and Frigate mode) How many frames to analyze. Picks frames with the most movement.
description: >-
(Camera and Frigate mode)
How many frames to analyze. Picks frames with the most movement.
default: 3
selector:
number:
@@ -170,11 +203,12 @@ variables:
{% set ns = namespace(device_names=[]) %}
{% for device_id in notify_devices %}
{% set device_name = device_attr(device_id, "name") %}
{% set sanitized_name = "mobile_app_" + device_name | lower | regex_replace("[^a-z0-9 ]", "") | replace(" ", "_") %}
{% set sanitized_name = "mobile_app_" + device_name | lower | regex_replace("[' -]", "_") | regex_replace("[^a-z0-9_]", "") %}
{% set ns.device_names = ns.device_names + [sanitized_name] %}
{% endfor %}
{{ ns.device_names }}
camera_entities_list: !input camera_entities
object_types_list: !input object_type
motion_sensors_list: !input motion_sensors
camera_entity: >
{% if mode == 'Camera' %}
@@ -230,6 +264,10 @@ variables:
Use "critical" only for possible burglaries and similar events. "time-sensitive" could be a courier at the front door or an event of similar importance.
Reply with these replies exactly.
max_exceeded: silent

mode: single

trigger:
- platform: mqtt
topic: "frigate/events"
@@ -247,9 +285,10 @@ condition:
- condition: template
value_template: >
{% if mode == 'Frigate' %}
{{ trigger.payload_json["type"] == "end" and (state_attr(this.entity_id, 'last_triggered') is none or (now() - state_attr(this.entity_id, 'last_triggered')).total_seconds() / 60 > cooldown) and ('camera.' + trigger.payload_json['after']['camera']|lower) in camera_entities_list }}
{% else %}
{{ state_attr(this.entity_id, 'last_triggered') is none or (now() - state_attr(this.entity_id, 'last_triggered')).total_seconds() / 60 > cooldown }}
{{ trigger.payload_json["type"] == "end"
and ('camera.' + trigger.payload_json['after']['camera']|lower) in camera_entities_list
and ((object_types_list|length) == 0 or ((trigger.payload_json['after']['label']|lower) in object_types_list))
}}
{% endif %}
@@ -293,7 +332,7 @@ action:
max_tokens: 3
temperature: 0.1
response_variable: importance

# Cancel automation if event not deemed important
- choose:
- conditions:
@@ -365,7 +404,7 @@ action:
temperature: !input temperature
expose_images: "{{true if preview_mode == 'Snapshot'}}"
response_variable: response


- choose:
- conditions:
@@ -388,4 +427,6 @@ action:
clickAction: !input tap_navigate #Android
tag: "{{tag}}"
group: "{{group}}"
interruption-level: passive
interruption-level: passive

- delay: '00:{{cooldown|int}}:00'
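
The sanitized device-name template in the variables block above changed its filter chain: apostrophes, spaces, and hyphens are now mapped to underscores before any remaining non-alphanumeric characters are dropped. A minimal Python sketch of the equivalent logic, with a made-up device name for illustration:

```python
import re


def mobile_app_service(device_name: str) -> str:
    """Mirror of the blueprint's updated sanitization filter chain (a sketch)."""
    name = device_name.lower()
    # Apostrophes, spaces and hyphens become underscores...
    name = re.sub(r"[' -]", "_", name)
    # ...then anything that is not a-z, 0-9 or underscore is dropped.
    name = re.sub(r"[^a-z0-9_]", "", name)
    return "mobile_app_" + name


print(mobile_app_service("John's Pixel-8"))  # -> mobile_app_john_s_pixel_8
```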
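The rewritten Frigate-mode condition above also gates on the new object_type input: the automation proceeds only when the MQTT event has ended, the event's camera is in the monitored list, and, if any object types were configured, the Frigate label matches one of them (the inline cooldown check appears to be replaced by `mode: single` plus the final cooldown delay). A rough Python equivalent of that template, with an illustrative payload:

```python
def should_run(payload: dict, camera_entities: list[str], object_types: list[str]) -> bool:
    """Sketch of the blueprint's new Frigate-mode condition template."""
    if payload["type"] != "end":  # only react once the Frigate event has finished
        return False
    camera = "camera." + payload["after"]["camera"].lower()
    if camera not in camera_entities:  # camera must be one of the monitored entities
        return False
    # An empty object_types list means "accept everything"; otherwise the label must match.
    return not object_types or payload["after"]["label"].lower() in object_types


# Illustrative Frigate event payload:
event = {"type": "end", "after": {"camera": "Front_Door", "label": "Person"}}
print(should_run(event, ["camera.front_door"], ["person"]))  # -> True
```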
4 changes: 3 additions & 1 deletion custom_components/llmvision/config_flow.py
@@ -17,6 +17,7 @@
CONF_AZURE_BASE_URL,
CONF_AZURE_DEPLOYMENT,
CONF_AZURE_VERSION,
ENDPOINT_AZURE,
CONF_ANTHROPIC_API_KEY,
CONF_GOOGLE_API_KEY,
CONF_GROQ_API_KEY,
@@ -188,7 +189,8 @@ async def async_step_azure(self, user_input=None):
user_input["provider"] = self.init_info["provider"]
try:
azure = AzureOpenAI(self.hass, api_key=user_input[CONF_AZURE_API_KEY], endpoint={
'base_url': user_input[CONF_AZURE_BASE_URL],
'base_url': ENDPOINT_AZURE,
'endpoint': user_input[CONF_AZURE_BASE_URL],
'deployment': user_input[CONF_AZURE_DEPLOYMENT],
'api_version': user_input[CONF_AZURE_VERSION]
})
2 changes: 1 addition & 1 deletion custom_components/llmvision/const.py
@@ -59,4 +59,4 @@
ENDPOINT_GROQ = "https://api.groq.com/openai/v1/chat/completions"
ENDPOINT_LOCALAI = "{protocol}://{ip_address}:{port}/v1/chat/completions"
ENDPOINT_OLLAMA = "{protocol}://{ip_address}:{port}/api/chat"
ENDPOINT_AZURE = "https://{base_url}/openai/deployments/{deployment}/chat/completions?api-version={api_version}"
ENDPOINT_AZURE = "{base_url}openai/deployments/{deployment}/chat/completions?api-version={api_version}"