
Add 1ES CI for DTFx.AS v2 #1139

Merged
merged 35 commits into azure-storage-v12 on Jul 18, 2024

Conversation

@davidmrdavid (Collaborator) commented Jul 16, 2024

The primary goal of this PR is to add a public 1ES ADO testing CI for DTFx.AS v2. This pipeline's results (and a messy development history) can be found here: https://dev.azure.com/azfunc/public/_build?definitionId=881&_a=summary

While migrating our CI pipeline, I found that previously stable tests had become flaky, that some tests had not been running at all (and failed once enabled), and that there were other 1ES-specific idiosyncrasies. This PR aims to resolve all of them.

To help with reviews, I'll add comments throughout the diff to clarify why some part of the code had to be changed. I feel comfortable about most changes except for one: the addition of System.Runtime.CompilerServices.Unsafe v6.0.0 as an explicit dependency in DTFx.Core.

Without this change, some DTFx.Core tests would fail with the following exception trace:

Could not load file or assembly 'System.Runtime.CompilerServices.Unsafe, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. Could not find or load a specific file. (0x80131621)
File name: 'System.Runtime.CompilerServices.Unsafe, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
   at System.Diagnostics.ActivityTraceId.SetToRandomBytes(Span`1 outBytes)
   at System.Diagnostics.ActivitySpanId.CreateRandom()
   at DurableTask.Core.TaskOrchestrationDispatcher.ProcessScheduleTaskDecision(ScheduleTaskOrchestratorAction scheduleTaskOrchestratorAction, OrchestrationRuntimeState runtimeState, Boolean includeParameters, Activity parentTraceActivity) in /_/src/DurableTask.Core/TaskOrchestrationDispatcher.cs:line 1016
   at DurableTask.Core.TaskOrchestrationDispatcher.OnProcessWorkItemAsync(TaskOrchestrationWorkItem workItem) in /_/src/DurableTask.Core/TaskOrchestrationDispatcher.cs:line 412
   at DurableTask.Core.TaskOrchestrationDispatcher.OnProcessWorkItemAsync(TaskOrchestrationWorkItem workItem) in /_/src/DurableTask.Core/TaskOrchestrationDispatcher.cs:line 598
   at DurableTask.Core.TaskOrchestrationDispatcher.OnProcessWorkItemSessionAsync(TaskOrchestrationWorkItem workItem) in /_/src/DurableTask.Core/TaskOrchestrationDispatcher.cs:line 211
   at DurableTask.Core.WorkItemDispatcher`1.ProcessWorkItemAsync(WorkItemDispatcherContext context, Object workItemObj) in /_/src/DurableTask.Core/WorkItemDispatcher.cs:line 374

This exception occurs in the Distributed Tracing path, as you can see by the mention of System.Diagnostics.ActivitySpanId in the stack, which is a type we get from our System.Diagnostics.DiagnosticSource dependency.

I tried researching this, and found this GH thread with the following excerpt:

Unsafe package isn't referenced anymore on netstandard2.0 or net462 which means that consumers of this package now get the transitive Unsafe dependency of System.Memory which is super old (4.5.3 vs 6.0.0).

Since we recently changed DTFx.Core to use only netstandard2.0 (as per https://github.com/Azure/durabletask/pull/1125/files), it's possible this is the reason why this error is only showing up now. Unfortunately, I'm unable to reproduce this locally, but it reproduces pretty consistently on the 1ES CI.

As a relevant aside: Durable Functions v2 has been incompatible with Azure Functions v1 for a while now. We never got to the bottom of that investigation, but I have a rough recollection that the reason for it was an exception like this being thrown in Functions V1. Perhaps that's worth revisiting.

In any case, I need some .NET expertise to help me validate what the problem is here and whether the right solution is indeed to add an explicit dependency on CompilerServices.Unsafe v6.x.

Other than that, I'll annotate the PR below. Thanks

See the CI passing for the latest commit here: https://dev.azure.com/azfunc/public/_build/results?buildId=172877&view=results

@@ -2252,6 +2252,7 @@ public async Task ScheduledStart_Activity_GetStatus_Returns_ScheduledStart(bool
/// End-to-end test which validates a simple orchestrator function that calls an activity function
/// and checks the OpenTelemetry trace information
/// </summary>
[TestCategory("DisabledInCI")]
davidmrdavid (Collaborator Author)

These disabled tests have not been running for a long time, and they're failing in the main branch as well. They weren't running because they're conditionally compiled out of the net462 build, while our previous testing task in ADO was configured to run only the tests found in the net462 dll.

Now the test task does discover these tests, but I've disabled them because they aren't passing. @bachuv, can you please add this as a to-fix before Distributed Tracing GA?
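For illustration, here is a rough sketch of the exclusion mechanism described above; the class and test names are made up, but the attribute and the TestCategory!=DisabledInCI filter mirror what the pipeline uses:

using Microsoft.VisualStudio.TestTools.UnitTesting;

#if !NET462 // these tests are compiled out of the net462 build entirely
[TestClass]
public class OpenTelemetryEndToEndTests // illustrative name
{
    // "DisabledInCI" matches the pipeline's VSTest filter 'TestCategory!=DisabledInCI',
    // so the test still runs locally but is skipped on 1ES until it is fixed.
    [TestMethod]
    [TestCategory("DisabledInCI")]
    public void Orchestration_Emits_OpenTelemetry_Trace()
    {
        // end-to-end assertions elided
    }
}
#endif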

Collaborator

Yes, I'll add this task to the list.

<PackageReference Include="Microsoft.WindowsAzure.ConfigurationManager" version="3.2.1" />
<PackageReference Include="WindowsAzure.Storage" version="8.7.0" />
</ItemGroup>

<ItemGroup Condition="'$(TargetFramework)' != 'net462'">
<PackageReference Include="Azure.Monitor.OpenTelemetry.Exporter" Version="1.0.0-beta.3" />
<PackageReference Include="Microsoft.Extensions.Logging" Version="3.1.6" />
<PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
davidmrdavid (Collaborator Author)

Newtonsoft.Json is inherited from the DTFx.Core project.

Comment on lines +21 to +22
<PackageReference Include="Microsoft.Extensions.Logging" Version="2.2.0" />
<PackageReference Include="Microsoft.Extensions.Logging.Console" Version="2.2.0" />
davidmrdavid (Collaborator Author)

I added this Logging.Console dependency because I've configured the tests to emit traces to STDOUT, which makes debugging failed ADO tests much easier. I decided to unify the dependencies on version 2.2.0 for simplicity's sake and to avoid conditional compilation.

Collaborator

2.2.0 is quite old. Can we standardize to 6.0.0 as part of the v3 change?

davidmrdavid (Collaborator Author)

I had some issues getting the code to build on 1ES with more recent versions. I'm open to upgrading, but since this is the test project, I'd like to push that to be a separate PR.

Comment on lines -51 to -53
<None Update="testhost.dll.config" Condition="'$(TargetFramework)' == 'netcoreapp2.1'">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
davidmrdavid (Collaborator Author)

we don't have a netcoreapp2.1 TFM

Comment on lines -43 to +46
// TODO: Add a logger provider so we can collect these logs in memory.
LoggerFactory = new LoggerFactory(),
LoggerFactory = new LoggerFactory().AddConsole(LogLevel.Trace),
};
#pragma warning restore CS0618 // Type or member is obsolete
davidmrdavid (Collaborator Author)

as mentioned, I've added logging to STDOUT on our tests to facilitate debugging. The current logging library we use is pretty old, which is why we have to disable the warning here. I tried using a newer library, but then had issues with the NET462 TFM. For speed, I decided to use the old library instead, which works.
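For reference, if the test project is later moved to a newer Microsoft.Extensions.Logging (3.x or 6.x, which is only an assumption at this point and not part of this PR), the non-obsolete way to get the same console output would look roughly like this; it still needs the Microsoft.Extensions.Logging.Console package:

using Microsoft.Extensions.Logging;

// Equivalent console logging without the obsolete AddConsole(LogLevel) extension:
ILoggerFactory loggerFactory = LoggerFactory.Create(builder =>
    builder
        .SetMinimumLevel(LogLevel.Trace) // emit everything, like AddConsole(LogLevel.Trace)
        .AddConsole());                  // write traces to STDOUT so ADO captures them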

Comment on lines -8 to -14
<ItemGroup Condition="'$(TargetFramework)' == 'net462'">
<PackageReference Include="EnterpriseLibrary.SemanticLogging.EventSourceAnalyzer" Version="2.0.1406.1" />
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.9.0" />
<PackageReference Include="MSTest.TestAdapter" Version="1.4.0" />
<PackageReference Include="MSTest.TestFramework" Version="1.4.0" />
</ItemGroup>

davidmrdavid (Collaborator Author)

there was no reason for separate dependencies anymore

[TestMethod]
[DataTestMethod]
davidmrdavid (Collaborator Author)

This was throwing a warning: a [TestMethod] cannot have [DataRow]s; for that you need [DataTestMethod] instead. The VSTest task is forgiving enough here, but I changed this to remove the warning.
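A minimal sketch of the distinction (the test class, name, and data rows are illustrative):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DataRowExampleTests
{
    // [TestMethod] runs once and should not carry [DataRow]s;
    // [DataTestMethod] runs once per [DataRow], binding each row to the parameters.
    [DataTestMethod]
    [DataRow(true)]
    [DataRow(false)]
    public void RunsOncePerDataRow(bool enableExtendedSessions)
    {
        Assert.IsNotNull(enableExtendedSessions.ToString());
    }
}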

Comment on lines 10 to 11
<PackageReference Include="MSTest.TestAdapter" Version="3.1.1" />
<PackageReference Include="MSTest.TestFramework" Version="3.1.1" />
<PackageReference Include="MSTest.TestAdapter" Version="1.4.0" />
<PackageReference Include="MSTest.TestFramework" Version="1.4.0" />
davidmrdavid (Collaborator Author)

We have inconsistent use of MSTest: some projects use version 1.4.0 and others use version 3.1.1. The mix of versions led to warnings. I've downgraded to the common denominator, as I encountered issues upgrading them all to 3.1.1 (I can't remember which issues, unfortunately).

Comment on lines +412 to +416
// before sending the event, wait a few seconds to ensure the sub-orchestrator exists
// otherwise, we risk a race condition where the event is discarded because the instances table
// does not yet have the sub-orchestrator instance in it.
await context.CreateTimer<object>(context.CurrentUtcDateTime.AddSeconds(10), state: null);

davidmrdavid (Collaborator Author)

This test suddenly became flaky, so I added this delay to make it less flaky.

Member

This is the right thing to do given the timeline we're working with, but I'm curious to better understand why this particular pattern isn't working reliably.

davidmrdavid (Collaborator Author)

I think I know the root cause - if the ExternalEvent arrives before the ExecutionStarted event, then the IsExecutableInstance check will return false. We do that here:

bool IsExecutableInstance(OrchestrationRuntimeState runtimeState, IList<TaskMessage> newMessages, out string message)
{
    if (runtimeState.ExecutionStartedEvent == null && !newMessages.Any(msg => msg.Event is ExecutionStartedEvent))
    {
        var instanceId = newMessages[0].OrchestrationInstance.InstanceId;
        if (DurableTask.Core.Common.Entities.AutoStart(instanceId, newMessages))
        {
            message = null;
            return true;
        }
        else
        {
            // A non-zero event count usually happens when an instance's history is overwritten by a
            // new instance or by a ContinueAsNew. When history is overwritten by new instances, we
            // overwrite the old history with new history (with a new execution ID), but this is done
            // gradually as we build up the new history over time. If we haven't yet overwritten *all*
            // the old history and we receive a message from the old instance (this happens frequently
            // with canceled durable timer messages) we'll end up loading just the history that hasn't
            // been fully overwritten. We know it's invalid because it's missing the ExecutionStartedEvent.
            message = runtimeState.Events.Count == 0 ? "No such instance" : "Invalid history (may have been overwritten by a newer instance)";
            return false;
        }

So we return "No such instance" as seen above, and we end up discarding the message since it is for an instance ID that never existed. That code is here:

if (!this.IsExecutableInstance(session.RuntimeState, orchestrationWorkItem.NewMessages, out string warningMessage))
{
    // If all messages belong to the same execution ID, then all of them need to be discarded.
    // However, it's also possible to have messages for *any* execution ID batched together with messages
    // to a *specific* (non-executable) execution ID. Those messages should *not* be discarded since
    // they might be consumable by another orchestration with the same instance id but different execution ID.
    var messagesToDiscard = new List<MessageData>();
    var messagesToAbandon = new List<MessageData>();
    foreach (MessageData msg in session.CurrentMessageBatch)
    {
        if (msg.TaskMessage.OrchestrationInstance.ExecutionId == session.Instance.ExecutionId)
        {
            messagesToDiscard.Add(msg);
        }
        else
        {
            messagesToAbandon.Add(msg);
        }
    }

    // If no messages have a matching execution ID, then delete all of them. This means all the
    // messages are external (external events, termination, etc.) and were sent to an instance that
    // doesn't exist or is no longer in a running state.
    if (messagesToDiscard.Count == 0)
    {
        messagesToDiscard.AddRange(messagesToAbandon);
        messagesToAbandon.Clear();
    }

    // Add all abandoned messages to the deferred list. These messages will not be deleted right now.
    // If they can be matched with another orchestration, then great. Otherwise they will be deleted
    // the next time they are picked up.
    messagesToAbandon.ForEach(session.DeferMessage);

    var eventListBuilder = new StringBuilder(orchestrationWorkItem.NewMessages.Count * 40);
    foreach (MessageData msg in messagesToDiscard)
    {
        eventListBuilder.Append(msg.TaskMessage.Event.EventType.ToString()).Append(',');
    }

    this.settings.Logger.DiscardingWorkItem(
        this.azureStorageClient.QueueAccountName,
        this.settings.TaskHubName,
        session.Instance.InstanceId,
        session.Instance.ExecutionId,
        orchestrationWorkItem.NewMessages.Count,
        session.RuntimeState.Events.Count,
        eventListBuilder.ToString(0, eventListBuilder.Length - 1) /* remove trailing comma */,
        warningMessage);

    // The instance has already completed or never existed. Delete this message batch.
    await this.DeleteMessageBatchAsync(session, messagesToDiscard);
    await this.ReleaseTaskOrchestrationWorkItemAsync(orchestrationWorkItem);
    return null;
}

To be clear, the piece where we end up sending external events to the discard list is this particular section:

// If no messages have a matching execution ID, then delete all of them. This means all the
// messages are external (external events, termination, etc.) and were sent to an instance that
// doesn't exist or is no longer in a running state.
if (messagesToDiscard.Count == 0)
{
    messagesToDiscard.AddRange(messagesToAbandon);
    messagesToAbandon.Clear();
}

So it's an ordering issue. In theory, I think we could be more conservative and not discard events immediately, but that can create poison messages. Please let me know if I'm missing something.
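To make the ordering concrete, here is a rough sketch of the pattern the timer works around; the orchestration types, instance ID, and event name are illustrative, not the actual test code:

using System.Threading.Tasks;
using DurableTask.Core;

// Illustrative child orchestration that waits for an external event named "proceed".
public class ChildOrchestration : TaskOrchestration<string, string>
{
    readonly TaskCompletionSource<string> proceed = new TaskCompletionSource<string>();

    public override Task<string> RunTask(OrchestrationContext context, string input) => this.proceed.Task;

    public override void OnEvent(OrchestrationContext context, string name, string input) => this.proceed.TrySetResult(input);
}

// Illustrative parent that schedules the child and then raises an event to it.
public class ParentOrchestration : TaskOrchestration<string, string>
{
    public override async Task<string> RunTask(OrchestrationContext context, string input)
    {
        var child = new OrchestrationInstance { InstanceId = "child-" + input };

        // Scheduling the sub-orchestration only enqueues its ExecutionStarted message;
        // the Instances table row may not exist yet when the next lines run.
        Task<string> childTask = context.CreateSubOrchestrationInstance<string>(
            typeof(ChildOrchestration), child.InstanceId, input);

        // Without this timer, the "proceed" event can be dequeued before the child's
        // ExecutionStarted event; IsExecutableInstance() then reports "No such instance"
        // and the event is discarded, leaving the child (and the test) hanging.
        await context.CreateTimer<object>(context.CurrentUtcDateTime.AddSeconds(10), null);

        context.SendEvent(child, "proceed", input);
        return await childTask;
    }
}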

Comment on lines 70 to 71
testAssembly: '**\bin\**\netcoreapp3.1\DurableTask.Core.Tests.dll'
# TODO: for some reason we're unable to run the net462 tests in the CI
testAssembly: '**\bin\Debug\netcoreapp3.1\DurableTask.Core.Tests.dll'
davidmrdavid (Collaborator Author)

In theory, I want to have the pattern \**\bin\**\DurableTask.Core.Tests.dll here. However, when I use that, 1ES complains that there are no tests to be found in the net462 dll. This makes no sense to me since I can run those tests locally. If I may, I'd like to push for this to remain a "TODO".

Member

Is the netcoreapp3.1 part of the path necessary? I see you didn't include it in lines 90 and 109, further below.

davidmrdavid (Collaborator Author)

I had netcoreapp3.1 simply because there is a subset of tests in net462 that fail to run on 1ES (again, not locally). It's the tests in test/DurableTask.Core.Tests/DispatcherMiddlewareTests.cs.

I've now changed the pattern back to '**\bin\**\DurableTask.Core.Tests.dll' and simply excluded the tests in test/DurableTask.Core.Tests/DispatcherMiddlewareTests.cs through an #if !NET462 exclusion. It's hacky, but otherwise the tests fail with the errors seen here: https://dev.azure.com/azfunc/public/_build/results?buildId=173021&view=ms.vss-test-web.build-test-results-tab

@cgillum (Member) left a comment

Overall LGTM. Left a few comments.

I don't know how to explain the need for explicitly adding System.Runtime.CompilerServices.Unsafe, unless perhaps it's related to differences in how binplacing works on these machines. When running locally, do you see this assembly anywhere in your build output on the file system?
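One way to answer that locally is a small probe that prints which Unsafe assembly the process actually binds to; this is a diagnostic sketch (not part of the PR), and its output can be compared against what is binplaced in the bin folder:

using System;
using System.Runtime.CompilerServices;

class UnsafeVersionProbe
{
    static void Main()
    {
        // Forces the System.Runtime.CompilerServices.Unsafe assembly to resolve and
        // prints the version and on-disk location the runtime actually picked.
        var assembly = typeof(Unsafe).Assembly;
        Console.WriteLine(assembly.GetName().FullName);
        Console.WriteLine(assembly.Location);
    }
}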

// configure logging so traces are emitted during tests.
// This facilitates debugging when tests fail.

#pragma warning disable CS0618 // Type or member is obsolete
Member

Can we add comments here with reminders to update these dependencies?

davidmrdavid (Collaborator Author)

Yes, just did here -> bc9a5ea

Comment on lines 70 to 71
testAssembly: '**\bin\**\netcoreapp3.1\DurableTask.Core.Tests.dll'
# TODO: for some reason we're unable to run the net462 tests in the CI
testAssembly: '**\bin\Debug\netcoreapp3.1\DurableTask.Core.Tests.dll'

Collaborator

Can you explain the changes that were made in this file?

davidmrdavid (Collaborator Author) commented Jul 17, 2024

Yes. So previously I had:

  - task: VSTest@2
    displayName: 'Run tests'
    inputs:
      testAssemblyVer2: |
        ${{ parameters.testAssembly }}
        !**\obj\**
        testFiltercriteria: 'TestCategory!=DisabledInCI'
        vsTestVersion: 17.0
        distributionBatchType: basedOnExecutionTime
        platform: 'any cpu'
        configuration: 'Debug'
        diagnosticsEnabled: True
        collectDumpOn: always
        rerunFailedTests: true
        rerunFailedThreshold: 30
        rerunMaxAttempts: 3

As you can see, everything below testAssemblyVer2 is at the same indentation level. That's an issue: testFiltercriteria and the parameters below it are not part of the testAssemblyVer2 input; they are sibling properties that belong one level up, directly under inputs. So I've de-indented them so that they're parsed correctly by 1ES.

So now I have something equivalent to this (note the indentation):

  - task: VSTest@2
    displayName: 'Run tests'
    inputs:
      testAssemblyVer2: |
        ${{ parameters.testAssembly }}
        !**\obj\**
      testFiltercriteria: 'TestCategory!=DisabledInCI'
      vsTestVersion: 17.0
      distributionBatchType: basedOnExecutionTime
      platform: 'any cpu'
      configuration: 'Debug'
      diagnosticsEnabled: True
      collectDumpOn: always
      rerunFailedTests: true
      rerunFailedThreshold: 30
      rerunMaxAttempts: 3

@jviau (Collaborator) left a comment

I believe your assembly load issue is a combination of targeting netstandard2.0 and the fact that we reference System.Diagnostics.DiagnosticSource 5.0.0 while Azure Storage brings in a higher version. That package depends on the Unsafe package, so I guess that dependency is not being transitively upleveled?

Either way, I don't know if referencing Unsafe directly is the answer vs. taking this chance to upgrade the DiagnosticSource package.

@@ -39,6 +39,7 @@
<PackageReference Include="Microsoft.Extensions.Logging.Abstractions" Version="2.2.0" />
<PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
<PackageReference Include="System.Diagnostics.DiagnosticSource" Version="5.0.1" />
<PackageReference Include="System.Runtime.CompilerServices.Unsafe" Version="6.0.0" />
Collaborator

If we are going v3 with DTFx, I think we should try and just upgrade some packages here:

Microsoft.Extensions.Logging.Abstractions -> 6.0.0
System.Diagnostics.DiagnosticSource -> 6.0.0

See if that fixes your problem

davidmrdavid (Collaborator Author)

Ah, maybe that's it. I'll try the DiagnosticSource change first; that could be it.

davidmrdavid (Collaborator Author)

No luck with just upgrading DiagnosticSource -> https://dev.azure.com/azfunc/public/_build/results?buildId=173059&view=ms.vss-test-web.build-test-results-tab .

Let me try to rev the logging abstractions package as well next.

davidmrdavid (Collaborator Author)

Still no luck; that failed too -> https://dev.azure.com/azfunc/public/_build/results?buildId=173069&view=ms.vss-test-web.build-test-results-tab

I'll keep the packages on the 6.x versions and also include the CompilerServices.Unsafe package.

davidmrdavid (Collaborator Author)

I increased the System.Diagnostics.DiagnosticSource package to 6.0.1 and that indeed fixed the issue :-).
I did opt against increasing the logging abstractions package, just because I want to avoid too many transitive dependency changes that may create issues unifying with the rest of the bundles. If we have reason to think this is fine, I'm happy to add that in a follow-up PR. For now, though, I'd like to proceed as-is to unblock a few PRs.


this.client = new TaskHubClient(service);
}

[TestCleanup]
public async Task TestCleanup()
[TestCleanup()]
Collaborator

nit: undo this change

davidmrdavid (Collaborator Author)

You caught this PR as I was testing some weird stuff here. It's been reverted.

{
await this.worker!.StopAsync(true);
this.worker!.StopAsync(true).RunSynchronously();
Collaborator

Where is RunSynchronously from? Typically you sync wait for tasks with .GetAwaiter().GetResult();

davidmrdavid (Collaborator Author)

As mentioned - You caught this PR as I was testing some weird stuff here. It's been reverted.

@@ -23,6 +23,7 @@ namespace DurableTask.Core
using DurableTask.Core.Logging;
using DurableTask.Core.Middleware;
using DurableTask.Core.Tracing;
using ActivityStatusCode = Tracing.ActivityStatusCode;
davidmrdavid (Collaborator Author)

After upgrading DTFx.Core's System.Diagnostics.DiagnosticSource to 6.x, there are some ambiguous type references here, so I needed to add this alias to disambiguate.

name: activityName,
activityName,
davidmrdavid (Collaborator Author)

After upgrading DTFx.Core's System.Diagnostics.DiagnosticSource to 6.x, these named arguments create ambiguous method invocations here (there are multiple overloads of this method with the same parameters in different orders). Using a positional argument disambiguates the call.
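To illustrate the kind of ambiguity, here is a contrived sketch (these overloads are illustrative, not the actual DiagnosticSource signatures):

using System;

static class OverloadAmbiguityDemo
{
    // Two overloads that take the same parameters in different orders.
    static void Start(string name, int kind) => Console.WriteLine($"by name: {name}/{kind}");
    static void Start(int kind, string name = "") => Console.WriteLine($"by kind: {kind}/{name}");

    static void Main()
    {
        // Start(name: "op", kind: 1);  // ambiguous: the named arguments fit both overloads
        Start("op", 1);                 // positional arguments bind to exactly one overload
    }
}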

@@ -10,6 +10,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// ----------------------------------------------------------------------------------
#if !NET462 // for some reason these tests are not discoverable on 1ES, leading to the test run getting aborted. TODO: Needs investigation
Member

Do you mean the net462 flavor of the tests is not discoverable? The .NET Core flavor is working?

davidmrdavid (Collaborator Author)

As mentioned here, upgrading System.Diagnostics.DiagnosticSource to 6.0.1 did fix the DLL loading issue.

I'm going to take a bit of optimistic liberty here to speed up some blocked PRs and merge on the basis of the previous approval. It goes without saying that I'm happy to incorporate any follow-up feedback that may arise after this; I just would like to unblock some of the PRs that are waiting for this CI to be in place.

So merging now, but please reach out for feedback / follow up requests. Thanks!

@davidmrdavid davidmrdavid merged commit e73e75f into azure-storage-v12 Jul 18, 2024
42 checks passed
@cgillum (Member) commented Jul 18, 2024

The latest changes look good to me. ✅

I had one clarifying question about some tests that were disabled, but it's non-blocking.
