
Allow opt-out of optimistic concurrency/rows affected check in the model #10443

Open
Tracked by #28479
jsacapdev opened this issue Nov 30, 2017 · 27 comments

@jsacapdev
Contributor

A trigger is created on the database to delete a row if it meets some condition.

An entity is added to the table that matches that condition.

The call to SaveChanges() throws.

Is this by design? Is there a workaround? (other than using a third-party library)

Exception message:

"Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=527962 for information on understanding and handling optimistic concurrency exceptions."

Stack trace:

"   at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.ThrowAggregateUpdateConcurrencyException(Int32 commandIndex, Int32 expectedRowsAffected, Int32 rowsAffected)\r\n   at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.ConsumeResultSetWithPropagation(Int32 commandIndex, RelationalDataReader reader)\r\n   at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.Consume(RelationalDataReader reader)\r\n   at Microsoft.EntityFrameworkCore.Update.ReaderModificationCommandBatch.Execute(IRelationalConnection connection)\r\n   at Microsoft.EntityFrameworkCore.Update.Internal.BatchExecutor.Execute(Tuple`2 parameters)\r\n   at Microsoft.EntityFrameworkCore.Storage.Internal.SqlServerExecutionStrategy.Execute[TState,TResult](TState state, Func`3 operation, Func`3 verifySucceeded)\r\n   at Microsoft.EntityFrameworkCore.ExecutionStrategyExtensions.Execute[TState,TResult](IExecutionStrategy strategy, TState state, Func`2 operation)\r\n   at Microsoft.EntityFrameworkCore.Update.Internal.BatchExecutor.Execute(IEnumerable`1 commandBatches, IRelationalConnection connection)\r\n   at Microsoft.EntityFrameworkCore.Storage.RelationalDatabase.SaveChanges(IReadOnlyList`1 entries)\r\n   at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(IReadOnlyList`1 entriesToSave)\r\n   at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(Boolean acceptAllChangesOnSuccess)\r\n   at Microsoft.EntityFrameworkCore.DbContext.SaveChanges(Boolean acceptAllChangesOnSuccess)\r\n   at Microsoft.EntityFrameworkCore.DbContext.SaveChanges()\r\n   at AlrightTriggerConsole.Program.Main(String[] args) in C:\\playground\\AlrightTrigger\\AlrightTriggerConsole\\Program.cs:line 30"

Steps to reproduce

  1. Ran the program to create a Blog.

  2. Ran the program to create a post.

  3. Manually created the trigger.

  4. Ran the program again to create a post.

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

namespace AlrightTriggerConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                using (var db = new BloggingContext())
                {
                    // if (!db.Blogs.Any())
                    // {
                    //     db.Blogs.Add(new Blog { Url = "http://blogs.msdn.com/" });
                    // }

                    var blog = db.Blogs.FirstOrDefault();

                    if (blog.Posts == null)
                    {
                        blog.Posts = new List<Post> { };
                    }

                    blog.Posts.Add(new Post { Content = "test1", Title = "test1" });

                    var count = db.SaveChanges();
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }
    }


    public class BloggingContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            optionsBuilder.UseSqlServer("Server=localhost;Database=BlogDb;User Id=sa;Password=Password12345;");
        }
    }

    public class Blog
    {
        public int BlogId { get; set; }
        public string Url { get; set; }

        public List<Post> Posts { get; set; }
    }

    public class Post
    {
        public int PostId { get; set; }
        public string Title { get; set; }
        public string Content { get; set; }

        public int BlogId { get; set; }
        public Blog Blog { get; set; }
    }
}

CREATE TRIGGER RemoveTestPosts
ON [dbo].[Posts]
FOR INSERT
AS
DELETE FROM [dbo].[Posts] WHERE Content = 'test1'
GO

Further technical details

EF Core version:
Microsoft.AspNetCore.All 2.0.3
Microsoft.EntityFrameworkCore 2.0.1

Database Provider:
Microsoft.EntityFrameworkCore.SqlServer
SQL Server is running in docker

Operating system:
Visual Studio Code

Dotnet:
PS C:\playground\AlrightTrigger\AlrightTriggerConsole> dotnet --info
.NET Command Line Tools (2.0.3)

Product Information:
Version: 2.0.3
Commit SHA-1 hash: 12f0c7efcc

Runtime Environment:
OS Name: Windows
OS Version: 10.0.14393
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\2.0.3\

Microsoft .NET Core Shared Framework Host

Version : 2.0.3
Build : a9190d4a75f4a982ae4b4fa8d1a24526566c69df

@ajcvickers changed the title from "Trigger causes Database operation expected to affect 1 row(s) but actually affected 0 row(s)" to "Allow opt-out of rows affected check" on Dec 1, 2017
@ajcvickers ajcvickers added this to the Backlog milestone Dec 1, 2017
@ajcvickers
Contributor

@jsacapdev EF Core currently expects the rows affected reported by the database to match what is expected to happen based on the model and the conceptual operation being performed. In cases like this, where a trigger is used and the conceptual operation is being changed, it would be useful to be able to opt out of this checking. For Adds specifically, see also #7038.

@Suchiman
Contributor

Suchiman commented Dec 2, 2017

@jsacapdev does it work if you follow the recommendation to add SET NOCOUNT ON in the trigger?
e.g.

CREATE TRIGGER RemoveTestPosts
ON [dbo].[Posts]
FOR INSERT
AS
SET NOCOUNT ON;
DELETE FROM [dbo].[Posts] WHERE Content = 'test1';
GO

@jsacapdev
Contributor Author

@Suchiman setting NOCOUNT ON does not work.

@ajcvickers would you be open to accepting a PR to get this fixed? I ask because you may need to consider the impact of the change required. If you are open to accepting a PR, we could start discussing your thoughts on a design.

@ajcvickers
Contributor

@jsacapdev Yes, we would certainly consider a PR for this. I'll get back to you on a potential approach.

@ajcvickers ajcvickers removed this from the Backlog milestone Dec 4, 2017
@ajcvickers
Contributor

We discussed this and came to the following initial conclusions:

  • Any API that changes this must both cause the state manager to ignore the rows affected and also update the generated SQL to not collect the information
    • This is because collecting the rows affected can add both complexity and perf hits to the generated SQL and if we're not going to use the results, then this SQL should not be generated at all
  • Issue PERF: Investigate optimized rows affected round-tripping when batching #7038 is about doing this for just inserts, where we don't "use" rows affected anyway
    • This means the change can be made for inserts without any API surface--just never collect rows affected for inserts and never check it in the state manager
  • For updates and deletes, we need to decide on an API surface. Possibilities:
    • Configured in the model for per-entity type or for the whole model
    • A context-bound option that disables it for all entities in any model
    • A flag to SaveChanges that can turn the behavior on and off (sketched after this comment)
    • Some combination of the above

As of now, we haven't settled on the API questions, so marking this issue as needs-design.

@jsacapdev If you only need this for inserts, then I would suggest tackling #7038 since this doesn't require API changes. If you need this for updates and deletes as well, then we're going to have to do more design work, and this may take some time. Also, we are surprised that using SET NOCOUNT ON does not work--can you provide any more details on why this doesn't work?
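To make the "flag to SaveChanges" possibility concrete, here is a purely illustrative sketch; nothing below exists in EF Core today, and the names are made up:

using Microsoft.EntityFrameworkCore;

// Hypothetical API sketch only -- it just illustrates what a SaveChanges-level
// opt-out could look like from the caller's perspective.
public static class SaveChangesExtensionsSketch
{
    public static int SaveChangesWithoutRowsAffectedCheck(this DbContext context)
    {
        // A real implementation would flow this option into the update pipeline so
        // the generated SQL doesn't collect rows affected at all (see the first
        // bullet above); this placeholder simply delegates to SaveChanges.
        return context.SaveChanges();
    }
}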

@ajcvickers ajcvickers added this to the Backlog milestone Dec 4, 2017
@jsacapdev
Contributor Author

jsacapdev commented Dec 6, 2017

@ajcvickers What I have found is that with both triggers (with and without NOCOUNT) I don't get a PostId returned when the generated query is executed against the database. So the logic in ConsumeResultSetWithPropagation() that checks for a populated data reader fails, and it throws with the same behaviour.

INSERT INTO [Posts] ([BlogId], [Content], [Title])
VALUES (1, 'test1', 'test1');
SELECT [PostId]
FROM [Posts]
WHERE @@ROWCOUNT = 1 AND [PostId] = scope_identity();

@ajcvickers ajcvickers removed this from the Backlog milestone Dec 6, 2017
@ajcvickers
Contributor

@jsacapdev Can you explain more about what the scenarios are for using the triggers? (If, for example, the result of doing the insert is that the record doesn't exist after the insert, then this is going to be very hard for EF to reason about. However, it seems likely that the real reason for using the trigger is something different, so we'd like to understand more.)

@jsacapdev
Contributor Author

@ajcvickers If I am honest, it's not a very general use case; it was just about understanding why, once a trigger has fired, I get an exception thrown back. I appreciate your time looking into this and the information you have provided, and I have learnt a little bit more about the product.

@ajcvickers ajcvickers added this to the Backlog milestone Dec 7, 2017
@justin1291

@ajcvickers I am having the same issue with my "UPSERT" trigger. Weirdly enough, my error is:

An unhandled exception occurred while processing the request.

DbUpdateConcurrencyException: Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities 

Odd note: inserting test data from VS Code returns 1 row affected, with the trigger catching and updating it.

Trigger code:

IF EXISTS (SELECT * FROM sysobjects WHERE TYPE = 'TR'
    AND NAME = 'VISITOR_IF_EXISTS_UPDATE_TIME')
    DROP TRIGGER VISITOR_IF_EXISTS_UPDATE_TIME
GO
CREATE TRIGGER VISITOR_IF_EXISTS_UPDATE_TIME
    ON Visitor
    INSTEAD OF INSERT
AS
BEGIN
    IF NOT EXISTS (SELECT V.IP_ADDRESS FROM Visitor V, INSERTED I
                   WHERE V.IP_ADDRESS = I.IP_ADDRESS)
        INSERT INTO VISITOR (IP_ADDRESS, COUNTRY_NAME, REGION_NAME, CITY, LATITUDE, LONGITUDE)
        SELECT IP_ADDRESS, COUNTRY_NAME, REGION_NAME, CITY, LATITUDE, LONGITUDE
        FROM INSERTED
    ELSE
        UPDATE Visitor
        SET Visitor.UPDATE_DATE = (SYSDATETIME()),
            Visitor.UPDATE_COUNT = V.UPDATE_COUNT + 1
        FROM Visitor V, INSERTED I
        WHERE V.IP_ADDRESS = I.IP_ADDRESS
END

@ajcvickers
Contributor

@AndriySvyryd @divega Should we document what the update pipeline expects back for the various cases? For example, if EF is doing an insert in batch mode and there are store-generated values coming back.

@saluce65

The issue I'm having with INSTEAD OF INSERT triggers is that Entity Framework is batching the insert statement with a select statement using scope_identity() ... since the INSTEAD OF trigger has its own scope, the select statement fails.

Perhaps an option to have inserts use @@IDENTITY instead of scope_identity() would allow users to use INSTEAD OF triggers without resorting to returning result sets, which is deprecated in SQL Server per https://docs.microsoft.com/en-us/sql/t-sql/statements/create-trigger-transact-sql?view=sql-server-2017#returning-results. That option could be added to the SqlServerDbContextOptionsBuilder class.
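As a stopgap, on EF Core 3.0 or later one could rewrite the generated SQL with a command interceptor. A rough sketch (the class name is made up, and rewriting SQL this way has the usual @@IDENTITY caveat: a trigger that also inserts into another identity table would return the wrong value):

using System.Data.Common;
using Microsoft.EntityFrameworkCore.Diagnostics;

// Sketch only: replaces scope_identity() with @@IDENTITY so the key generated via
// an INSTEAD OF INSERT trigger is still read back. ReaderExecutingAsync would need
// the same treatment for async SaveChanges.
public class IdentityRewritingInterceptor : DbCommandInterceptor
{
    public override InterceptionResult<DbDataReader> ReaderExecuting(
        DbCommand command, CommandEventData eventData, InterceptionResult<DbDataReader> result)
    {
        command.CommandText = command.CommandText.Replace("scope_identity()", "@@IDENTITY");
        return result;
    }
}

// Registration, e.g. in OnConfiguring:
// optionsBuilder.UseSqlServer(connectionString).AddInterceptors(new IdentityRewritingInterceptor());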

@PetrMotlicek

PetrMotlicek commented Apr 1, 2019

(Quoting @saluce65's comment above about INSTEAD OF INSERT triggers, scope_identity(), and @@IDENTITY.)

I have an INSTEAD OF INSERT trigger too. It collects some third-party ad-hoc data, and the trigger contains logic that avoids duplicates via a unique key, effectively performing a data merge (insert or update).

In fact there is no transactional concurrency: the third-party data are collected just-in-time, storing only the rows that are needed rather than pre-importing all the third-party entities before running the application.

I thought using an alternate (composite) key would be the way around this, but it is not: the generated SQL still contains

WHERE @@ROWCOUNT = 1 AND [Id] = scope_identity();

@@ROWCOUNT is 1, but scope_identity() is NULL.

Maybe using the alternate key like this:

WHERE @@ROWCOUNT = 1 AND ([Id] = scope_identity() OR (ALTERNATE_KEY_COLUMN1 = xyz AND ALTERNATE_KEY_COLUMN2 = 123456789))

would be easy to implement?

@ajcvickers ajcvickers added this to the Backlog milestone Jul 7, 2022
@roji changed the title from "Allow opt-out of rows affected check" to "Allow opt-out of optimistic concurrency/rows affected check" on Oct 2, 2022
@roji
Member

roji commented Oct 2, 2022

Note an important perf aspect here: optimistic concurrency checks are a blocker for #27532. We can't insert the COMMIT into the batch, since we need to throw and roll back if there's a concurrency exception, but with the COMMIT in the batch we would only learn about the violation on the client after the transaction had already been committed. So we must do an additional round trip, which has a cost.

@roji
Member

roji commented Oct 2, 2022

See Twitter poll specifically about the deletion question (also discussed in #27562).

Some reasons why concurrency checks around deletion can be useful:

  • In non-concurrent situations (single-user), failure to delete a row indicates a program bug, which you may want surfaced.
  • Various users expressed that they don't want an exception, but would like to be able to know how many rows were affected. SaveChanges does return this, but if there are several batched updates, you can't correlate rows affected per change (users may opt out of batching in such cases, doing just one delete per SaveChanges; see the sketch after this list). Note that this does remove the obstacle to Allow managing transactions via SQL commands in the update pipeline #27532 (no exception means no rollback).
  • See other reasons in Concurrency detection/exceptions and delete #27562 (comment)
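A rough sketch of the batching point above, assuming SQL Server: MaxBatchSize(1) sends one statement per command, so when a single change is saved at a time its rows-affected count is unambiguous, at the cost of extra round trips (VisitorContext is just an illustrative name):

using Microsoft.EntityFrameworkCore;

// Sketch only: disable statement batching so each SaveChanges command maps to a
// single change.
public class VisitorContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseSqlServer(
            "Server=localhost;Database=VisitorDb;Trusted_Connection=True;",
            sqlServerOptions => sqlServerOptions.MaxBatchSize(1));
}

// Usage, assuming the opt-out discussed in this issue existed (today a zero-row
// delete still throws DbUpdateConcurrencyException instead of returning 0):
// int rowsAffected = db.SaveChanges();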

@roji changed the title from "Allow opt-out of optimistic concurrency/rows affected check" to "Allow opt-out of optimistic concurrency/rows affected check in the model" on Oct 7, 2022
@roji
Member

roji commented Oct 7, 2022

We've had various discussions around this recently; here's a summary.

  • When a concurrency token isn't defined, there's little value in optimistic concurrency checks:
    • When Update is racing against Update, the concurrency checks don't do anything (since the row is always there), and the user already isn't protected against concurrent changes that make the row inconsistent. A concurrency token is always necessary for that. The one exception is if the user is trying to detect whether the row was deleted and recreated, but that seems very niche.
    • When Update is racing against Delete, we currently throw a concurrency exception if the Delete occurs first. However, this is just a normal race condition, and the Update could just as well occur before the Delete, in which case no exception is thrown. So there's little value in the check, and removing it probably wouldn't qualify as a breaking change.
    • When Insert is racing against Update or Delete, we have the same situation - just a normal race condition: the Update or Delete fail if happening before the Insert, but not after.
    • When Delete is racing against Delete, the database ends up in the same state regardless of which Delete occurred first (the row is gone); we think that in the majority of cases, raising a concurrency exception here provides very little value, and in fact is detrimental by failing operations, showing uninteresting error popups, etc. However, there are some rare scenarios where the exception could be useful:
      • Scenarios where actual removal of the row is important within a larger transaction. For example, a transaction that closes an account (removing its row) and at the same time transfers its outstanding balance somewhere else should only succeed if the row is actually deleted. Note that if updates could be racing as well, a concurrency token must typically be defined for this to be safe (since the outstanding balance could change after the time it's loaded). So without a token, the check is useful mainly for "immutable row" scenarios, where the row can't be updated since it was loaded, only removed.
      • Somewhat similarly, the transaction may contain auditing, where another row is inserted into an audit table only when the main row is actually deleted. The whole transaction should fail if the main row was not actually deleted.
      • See this comment.
  • Because of the above, in an ideal world it may be better to only do concurrency checks if a concurrency token is defined (i.e. defining a concurrency token acts as an opt-in). However, changing that behavior now would be a silent breaking change for scenarios which rely on it.
  • A non-breaking change would be to add metadata to configure whether concurrency checks should be performed (a) never, (b) on update only, (c) on delete only, or (d) always (can be a flags enum). A user convention can then disable the check for all entities without a concurrency token, etc. (a purely illustrative sketch appears after this comment).
  • Perf-wise, the big advantage of disabling the concurrency check is that it would allow batching the transaction management (BEGIN/COMMIT), removing 1-2 network roundtrips. This is because the concurrency check requires us to read back the rows affected and possibly roll the transaction back, but if we batch the COMMIT it's already too late.
  • Note that there's also Add option to disable concurrency checking at the SaveChanges level #19056 for disabling the checks for a particular SaveChanges (for all entities involved?).

Thanks @divega for valuable conversation (here)!
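A purely illustrative sketch of the metadata-plus-convention idea above; none of these names exist in EF Core, they only make the bullet concrete:

using System;

// Hypothetical flags enum matching options (a)-(d) above.
[Flags]
public enum ConcurrencyCheckBehavior
{
    Never = 0,
    OnUpdate = 1,
    OnDelete = 2,
    Always = OnUpdate | OnDelete
}

// Hypothetical model configuration in OnModelCreating: disable the check for all
// entity types that don't define a concurrency token ("ConcurrencyCheckBehavior"
// is an invented annotation name).
// foreach (var entityType in modelBuilder.Model.GetEntityTypes())
// {
//     if (!entityType.GetProperties().Any(p => p.IsConcurrencyToken))
//     {
//         entityType.SetAnnotation("ConcurrencyCheckBehavior", ConcurrencyCheckBehavior.Never);
//     }
// }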

@roji
Member

roji commented Oct 19, 2022

Another good reason to opt out of EF's optimistic concurrency is if you're using repeatable read transactions (or serializable/snapshot).

Repeatable read transactions are somewhat similar to optimistic concurrency, but managed by the database instead of by EF: the unit of work only succeeds if there was no conflict; otherwise an error is raised and the application has to retry. Serializable/snapshot transactions have overhead costs which vary across databases (e.g. they may introduce additional locking), and also provide more than just checking that rows haven't changed (e.g. they guarantee a stable view of the database throughout the transaction).
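For reference, a minimal sketch of running a unit of work under a stricter isolation level, assuming the BloggingContext from the original report (snapshot isolation must be enabled on the database, and retry logic is omitted):

using System.Data;
using System.Linq;
using Microsoft.EntityFrameworkCore;

// Sketch only: let the database detect conflicts via the isolation level instead
// of relying on EF's rows-affected check.
using (var db = new BloggingContext())
using (var transaction = db.Database.BeginTransaction(IsolationLevel.Snapshot))
{
    var blog = db.Blogs.First();
    blog.Url = "http://example.com/";

    // Under snapshot isolation, SQL Server raises an update conflict error here if
    // another transaction modified the same row; the application should retry.
    db.SaveChanges();

    transaction.Commit();
}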


@WycoffJohn

Another use case that is affected by this DbUpdateConcurrencyException is updating a disconnected entity that does not exist.
i.e. the Update method receives a disconnected entity with a primary key and attaches it to the context. The primary key shouldn't have been modified, but if it was (very possible in a disconnected scenario such as an API or web app), the call to SaveChanges will result in 0 rows updated and will throw a DbUpdateConcurrencyException.

Opting out of this behavior and reading the rows updated would be a much better solution than catching every DbUpdateConcurrencyException.
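A sketch of the guard many apps use for that disconnected case today, reusing the Post and BloggingContext types from the original repro (TryUpdatePost is just an illustrative name, and a race between the existence check and SaveChanges is still possible):

using System.Linq;
using Microsoft.EntityFrameworkCore;

// Sketch only: verify the disconnected entity still exists before attaching it,
// instead of letting SaveChanges throw DbUpdateConcurrencyException.
static bool TryUpdatePost(BloggingContext db, Post incoming)
{
    if (!db.Posts.Any(p => p.PostId == incoming.PostId))
    {
        return false; // the row is gone, or the key was tampered with
    }

    db.Update(incoming);  // attach and mark all properties as modified
    db.SaveChanges();
    return true;
}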
