Create python-integration-tests.yml (#763)
### Motivation and Context
Run the Python integration tests on every push to main, and every 12 hours.

These integration tests will run on every push to main as well as twice
a day. At the time of this PR, we are seeing ~3 PRs to the Python SK per
day. Each integration test run makes ~32 AI requests, and each push to
main triggers 12 runs: one per OS (Ubuntu, Windows, macOS) for each of
Python 3.8, 3.9, 3.10, and 3.11, for a total of roughly 12 × 32 ≈ 384
(~400) calls per push to main.

We should re-evaluate the push-to-main trigger if the volume of PRs to
the Python SK increases significantly.
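
For reference, a minimal sketch of the new trigger configuration, mirroring the workflow added in the diff below:

```yaml
# Trigger sketch for python-integration-tests.yml: manual runs, pushes to
# main that touch Python code, and a scheduled run every 12 hours.
on:
  workflow_dispatch:
  push:
    branches: ["main"]
    paths:
      - 'python/**'
  schedule:
    - cron: '0 */12 * * *' # midnight UTC and noon UTC
```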

### Description
- Add a workflow definition for the Python integration tests
- Example run:
https://github.com/microsoft/semantic-kernel/actions/runs/4874609026/jobs/8695771345?pr=763
- For push triggers, add `paths` checks for `dotnet/**` or `python/**`,
depending on the type of workflow (the PR-triggered workflows instead
gate their jobs on a change-detection job; see the sketch after this
list)
- Pin the requirements.txt dependencies to the versions pinned in
pyproject.toml, to keep the pip package dependencies in parity with the
local code
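
A minimal sketch of the change-detection pattern the PR-triggered workflows use in this change; the job and output names mirror the diff below, and the final echo step is a placeholder for the real test steps:

```yaml
jobs:
  check-for-python-changes:
    runs-on: ubuntu-latest
    outputs:
      output1: ${{ steps.filter.outputs.python }}
    steps:
      - uses: actions/checkout@v3
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            python:
              - 'python/**'

  python-unit-tests:
    runs-on: ubuntu-latest
    needs: check-for-python-changes
    # Downstream jobs run only when Python files actually changed
    if: needs.check-for-python-changes.outputs.output1 == 'true'
    steps:
      - run: echo "Python files changed" # placeholder for the real test steps
```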

**Note**: Integration tests are currently failing on Windows with
Python 3.8; a task has already been captured to address the problem.
awharrison-28 authored May 4, 2023
1 parent 46e4440 commit 05c586b
Showing 18 changed files with 165 additions and 32 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/dotnet-ci.yml
@@ -13,7 +13,7 @@ permissions:
contents: read

jobs:
build:
dotnet-build-and-test:
strategy:
matrix:
os: [ubuntu-latest, windows-latest]
24 changes: 24 additions & 0 deletions .github/workflows/dotnet-format.yml
@@ -10,7 +10,31 @@ on:
branches: [ "main", "feature*" ]

jobs:
check-for-dotnet-changes:
runs-on: ubuntu-latest
outputs:
output1: ${{ steps.filter.outputs.dotnet}}
steps:
- uses: dorny/paths-filter@v2
id: filter
with:
filters: |
dotnet:
- 'dotnet/**'
- 'samples/dotnet/**'
- uses: actions/checkout@v3
# run only if 'dotnet' files were changed
- name: dotnet changes found
if: steps.filter.outputs.dotnet == 'true'
run: echo "dotnet file"
# run only if not 'dotnet' files were changed
- name: no dotnet changes found
if: steps.filter.outputs.dotnet != 'true'
run: echo "NOT dotnet file"

check-format:
needs: check-for-dotnet-changes
if: needs.check-for-dotnet-changes.outputs.output1 == 'true'
runs-on: ubuntu-latest

steps:
3 changes: 3 additions & 0 deletions .github/workflows/dotnet-integration-tests.yml
@@ -8,6 +8,9 @@ on:
workflow_dispatch:
push:
branches: ["main", "feature*"]
paths:
- 'dotnet/**'
- 'samples/dotnet/**'

permissions:
contents: read
30 changes: 30 additions & 0 deletions .github/workflows/dotnet-pr.yml
@@ -13,12 +13,35 @@ permissions:
contents: read

jobs:
check-for-dotnet-changes:
runs-on: ubuntu-latest
outputs:
output1: ${{ steps.filter.outputs.dotnet}}
steps:
- uses: dorny/paths-filter@v2
id: filter
with:
filters: |
dotnet:
- 'dotnet/**'
- 'samples/dotnet/**'
- uses: actions/checkout@v3
# run only if 'dotnet' files were changed
- name: dotnet changes found
if: steps.filter.outputs.dotnet == 'true'
run: echo "dotnet file"
# run only if not 'dotnet' files were changed
- name: no dotnet changes found
if: steps.filter.outputs.dotnet != 'true'
run: echo "NOT dotnet file"

build:
strategy:
matrix:
os: [ubuntu-latest]
configuration: [Release, Debug]
runs-on: ${{ matrix.os }}
needs: check-for-dotnet-changes
env:
NUGET_CERT_REVOCATION_MODE: offline
steps:
@@ -27,13 +50,15 @@ jobs:
clean: true

- name: Setup .NET
if: needs.check-for-dotnet-changes.outputs.output1 == 'true'
uses: actions/setup-dotnet@v3
with:
dotnet-version: 6.0.x
env:
NUGET_AUTH_TOKEN: ${{ secrets.GPR_READ_TOKEN }}

- uses: actions/cache@v3
if: needs.check-for-dotnet-changes.outputs.output1 == 'true'
with:
path: ~/.nuget/packages
# Look to see if there is a cache hit for the corresponding requirements file
@@ -43,28 +68,33 @@
- name: Find solutions
shell: bash
if: needs.check-for-dotnet-changes.outputs.output1 == 'true'
run: echo "solutions=$(find ./ -type f -name "*.sln" | tr '\n' ' ')" >> $GITHUB_ENV

- name: Restore dependencies
shell: bash
if: needs.check-for-dotnet-changes.outputs.output1 == 'true'
run: |
for solution in ${{ env.solutions }}; do
dotnet restore $solution
done
- name: Build
shell: bash
if: needs.check-for-dotnet-changes.outputs.output1 == 'true'
run: |
for solution in ${{ env.solutions }}; do
dotnet build $solution --no-restore --configuration ${{ matrix.configuration }}
done
- name: Find unit test projects
shell: bash
if: needs.check-for-dotnet-changes.outputs.output1 == 'true'
run: echo "projects=$(find ./dotnet -type f -name "*.UnitTests.csproj" | tr '\n' ' ')" >> $GITHUB_ENV

- name: Test
shell: bash
if: needs.check-for-dotnet-changes.outputs.output1 == 'true'
run: |
for project in ${{ env.projects }}; do
dotnet test $project --no-build --verbosity normal --logger trx --results-directory ./TestResults --configuration ${{ matrix.configuration }}
12 changes: 6 additions & 6 deletions .github/workflows/lint.yml
@@ -5,24 +5,24 @@ on:
branches: [ "main", "feature*" ]

jobs:
paths-filter:
check-for-python-changes:
runs-on: ubuntu-latest
outputs:
output1: ${{ steps.filter.outputs.python}}
steps:
- uses: actions/checkout@v3
- uses: dorny/paths-filter@v2
id: filter
with:
filters: |
python:
- 'python/**'
- uses: actions/checkout@v3
# run only if 'python' files were changed
- name: python tests
- name: python changes found
if: steps.filter.outputs.python == 'true'
run: echo "Python file"
# run only if not 'python' files were changed
- name: not python tests
- name: no python changes found
if: steps.filter.outputs.python != 'true'
run: echo "NOT python file"

@@ -32,8 +32,8 @@ jobs:
matrix:
python-version: ["3.8"]
runs-on: ubuntu-latest
needs: paths-filter
if: needs.paths-filter.outputs.output1 == 'true'
needs: check-for-python-changes
if: needs.check-for-python-changes.outputs.output1 == 'true'
timeout-minutes: 5
steps:
- run: echo "/root/.local/bin" >> $GITHUB_PATH
2 changes: 1 addition & 1 deletion .github/workflows/node-pr.yml
@@ -8,7 +8,7 @@ on:
pull_request:
branches: ["main"]
paths:
- "samples/"
- 'samples/**'

jobs:
build:
58 changes: 58 additions & 0 deletions .github/workflows/python-integration-tests.yml
@@ -0,0 +1,58 @@
#
# This workflow will run all python integrations tests.
#

name: Python Integration Tests

on:
workflow_dispatch:
push:
branches: [ "main"]
paths:
- 'python/**'
schedule:
- cron: '0 */12 * * *' # Run every 12 hours: midnight UTC and noon UTC


permissions:
contents: read

jobs:
python-integration-tests:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: ["3.8", "3.9", "3.10", "3.11"]
os: [ ubuntu-latest, windows-latest, macos-latest ]

steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install poetry pytest
cd python && poetry install
- name: Run Integration Tests
shell: bash
env: # Set Azure credentials secret as an input
Python_Integration_Tests: Python_Integration_Tests
AzureOpenAI__Label: azure-text-davinci-003
AzureOpenAIEmbedding__Label: azure-text-embedding-ada-002
AzureOpenAI__DeploymentName: ${{ vars.AZUREOPENAI__DEPLOYMENTNAME }}
AzureOpenAIChat__DeploymentName: ${{ vars.AZUREOPENAI__CHAT__DEPLOYMENTNAME }}
AzureOpenAIEmbeddings__DeploymentName: ${{ vars.AZUREOPENAIEMBEDDING__DEPLOYMENTNAME }}
AzureOpenAI__Endpoint: ${{ secrets.AZUREOPENAI__ENDPOINT }}
AzureOpenAIEmbeddings__Endpoint: ${{ secrets.AZUREOPENAI__ENDPOINT }}
AzureOpenAI__ApiKey: ${{ secrets.AZUREOPENAI__APIKEY }}
AzureOpenAIEmbeddings__ApiKey: ${{ secrets.AZUREOPENAI__APIKEY }}
Bing__ApiKey: ${{ secrets.BING__APIKEY }}
OpenAI__ApiKey: ${{ secrets.OPENAI__APIKEY }}
run: |
cd python
poetry run pytest ./tests/integration
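
For local debugging, a rough equivalent of the job above, assuming the credential environment variables it lists (AzureOpenAI__*, OpenAI__ApiKey, Bing__ApiKey) are already exported in your shell:

```bash
# Hypothetical local run mirroring the workflow steps above.
export Python_Integration_Tests=Python_Integration_Tests  # same flag the workflow sets for its test step
cd python
python -m pip install --upgrade pip
python -m pip install poetry pytest
poetry install
poetry run pytest ./tests/integration
```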
16 changes: 8 additions & 8 deletions .github/workflows/python-unit-tests.yml
@@ -1,36 +1,36 @@
name: Python tests
name: Python Unit Tests

on:
workflow_dispatch:
pull_request:
branches: [ "main", "feature*" ]

jobs:
paths-filter:
check-for-python-changes:
runs-on: ubuntu-latest
outputs:
output1: ${{ steps.filter.outputs.python}}
steps:
- uses: actions/checkout@v3
- uses: dorny/paths-filter@v2
id: filter
with:
filters: |
python:
- 'python/**'
- uses: actions/checkout@v3
# run only if 'python' files were changed
- name: python tests
- name: python changes found
if: steps.filter.outputs.python == 'true'
run: echo "Python file"
# run only if not 'python' files were changed
- name: not python tests
- name: no python changes found
if: steps.filter.outputs.python != 'true'
run: echo "NOT python file"

build:
python-unit-tests:
runs-on: ${{ matrix.os }}
needs: paths-filter
if: needs.paths-filter.outputs.output1 == 'true'
needs: check-for-python-changes
if: needs.check-for-python-changes.outputs.output1 == 'true'
strategy:
fail-fast: false
matrix:
6 changes: 6 additions & 0 deletions python/DEV_SETUP.md
@@ -99,6 +99,12 @@ You can run the integration tests under the [tests/integration](tests/integratio
poetry install
poetry run pytest tests/integration

You can also run all the tests together under the [tests](tests/) folder.

cd python
poetry install
poetry run pytest tests

# Tools and scripts

## Pipeline checks
12 changes: 6 additions & 6 deletions python/requirements.txt
@@ -1,6 +1,6 @@
openai==0.27.*
numpy==1.24.*
aiofiles>=23.1.0
transformers>=4.28.0
sentence-transformers>=2.2.2
torch>=2.0.0
openai==0.27.0
numpy==1.24.2
aiofiles==23.1.0
transformers==4.28.0
sentence-transformers==2.2.2
torch==2.0.0
9 changes: 7 additions & 2 deletions python/semantic_kernel/kernel_config.py
@@ -199,7 +199,10 @@ def get_chat_service_service_id(self, service_id: Optional[str] = None) -> str:
def get_text_embedding_generation_service_id(
self, service_id: Optional[str] = None
) -> str:
if service_id is None or service_id not in self._text_embedding_generation_services:
if (
service_id is None
or service_id not in self._text_embedding_generation_services
):
if self._default_text_embedding_generation_service is None:
raise ValueError("No default embedding service is set")
return self._default_text_embedding_generation_service
@@ -230,7 +233,9 @@ def remove_chat_service(self, service_id: str) -> "KernelConfig":
self._default_chat_service = next(iter(self._chat_services), None)
return self

def remove_text_embedding_generation_service(self, service_id: str) -> "KernelConfig":
def remove_text_embedding_generation_service(
self, service_id: str
) -> "KernelConfig":
if service_id not in self._text_embedding_generation_services:
raise ValueError(
f"AI service with service_id '{service_id}' does not exist"
@@ -7,6 +7,7 @@ async def summarize_function_test(kernel: sk.Kernel):
{{$input}}
{{$input2}}
(hyphenated words count as 1 word)
Give me the TLDR in exactly 5 words:
"""

@@ -11,6 +11,10 @@


@pytest.mark.asyncio
@pytest.mark.xfail(
raises=AssertionError,
reason="Azure OpenAI may throttle requests, preventing this test from passing",
)
async def test_azure_chat_completion_with_skills():
kernel = sk.Kernel()

@@ -21,11 +25,12 @@ async def test_azure_chat_completion_with_skills():
else:
# Load credentials from .env file
deployment_name, api_key, endpoint = sk.azure_openai_settings_from_dot_env()
deployment_name = "gpt-35-turbo"
deployment_name = "gpt-4"

# Configure LLM service
kernel.config.add_text_completion_service(
"text_completion", sk_oai.AzureChatCompletion(deployment_name, endpoint, api_key)
kernel.config.add_chat_service(
"chat_completion",
sk_oai.AzureChatCompletion(deployment_name, endpoint, api_key),
)

await e2e_text_completion.summarize_function_test(kernel)
@@ -25,7 +25,8 @@ async def test_azure_text_completion_with_skills():

# Configure LLM service
kernel.config.add_text_completion_service(
"text_completion", sk_oai.AzureTextCompletion(deployment_name, endpoint, api_key)
"text_completion",
sk_oai.AzureTextCompletion(deployment_name, endpoint, api_key),
)

await e2e_text_completion.summarize_function_test(kernel)
@@ -13,7 +13,7 @@
@pytest.mark.asyncio
@pytest.mark.xfail(
raises=AssertionError,
reason="OpenAI may throtle requests, preventing this test from passing",
reason="OpenAI may throttle requests, preventing this test from passing",
)
async def test_oai_chat_service_with_skills():
kernel = sk.Kernel()