[QUESTION] Performance drop SQL Server compatibility #2471
Comments
Please try to add
Thanks for the update! The execution time has gotten a bit better but is still slower compared to compatibility level 110.
New connection string: Do you have any documentation on this property? CSV files are generated for this program every day with different sizes, and I'm concerned that larger files will result in significantly longer execution times. Thanks again.
You can read about this property here. It enables use of the bulk copy API when a prepared statement is used. If you have that possibility, you can try using the bulk copy API directly and check whether its batch size property makes a difference.
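For readers finding this thread later: the linked property is not named in the extracted text, but based on the description (bulk copy API used under the hood for prepared-statement batches) it appears to be the driver's `useBulkCopyForBatchInsert` connection property — treat that name as an assumption here. If so, it is appended to the connection string like any other property:

```
jdbc:sqlserver://localhost:1433;databaseName=XXX;selectMethod=cursor;encrypt=true;trustServerCertificate=true;useBulkCopyForBatchInsert=true;
```

Note that this switch only affects batched INSERTs; batched DELETEs are not routed through the bulk copy API.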
I can't explain the performance impact between the SQL Server versions; have you checked with the SQL Server team? Can you give us the results comparing SQL Server 2012 to 2019, using the same driver version and the same compatibility level? Additionally, can you supply the execution plans from the two runs? We can then see whether there are any major differences that would explain the performance impact. We could also take a look at the driver logs (though the execution plans and test results above are more helpful). For improving performance in general, here are our tips:
Thanks for the feedback! I'm currently working on some tests based on your suggestions and will get back to you as soon as possible.
Hi! Here are the results on different driver versions and compatibility levels:
As we can see, all driver versions at compatibility level 150 are affected. While running SQL Server Profiler on both compatibility levels, I observed that for a specific code and date ('7985', '20160701') the duration of the DELETEs got significantly worse.

Since Jeffery requested the execution plans for the two runs, I remembered that I had once enabled Query Store on SQL Server 2019 (at compatibility level 150) while the program was running and noticed that the plan was changing mid-run. At compatibility level 110, the plan did not change. While researching, I found that starting with SQL Server 2016 the Database Engine enables Trace Flag 2371 behavior by default, which changes the threshold used by the auto-update statistics process to a dynamic one. So I decided to check whether statistics were changing mid-run, which would explain why the DELETEs' duration got worse and why the plan was changing. And voilà.

So it has nothing to do with the driver but with changes in the database engine. Anyway, thank you!
Very interesting, thank you for looking into this. We'll go ahead and close this issue; I'm hoping that if anyone else runs into this in the future, they come across this thread.
Question
I have a Java program that reads a CSV file and updates a table (performing deletes and inserts) in batches using `java.sql.PreparedStatement.addBatch` and `java.sql.PreparedStatement.executeBatch`. I noticed a significant performance drop when migrating from SQL Server 2012 to SQL Server 2019 and changing the database compatibility level from 110 to 150. It appears that at compatibility level 110 batches are truly being generated, but at compatibility level 150 deletes/inserts are being executed row by row. Does that make sense?
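For context, the batching pattern described here typically looks like the sketch below. Table and column names are hypothetical, and the JDBC part assumes a live SQL Server connection; the `flushCount` helper is pure and just illustrates how many round trips a batch size produces:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class CsvBatchLoader {
    static final int BATCH_SIZE = 1000; // assumed flush threshold

    // Pure helper: how many executeBatch() round trips N rows need.
    static int flushCount(int rows, int batchSize) {
        return (rows + batchSize - 1) / batchSize;
    }

    // Sketch of a batched DELETE keyed on code and date, as in the
    // question; dbo.MyTable and its columns are placeholders.
    static void deleteRows(Connection conn, List<String[]> csvRows) throws SQLException {
        String sql = "DELETE FROM dbo.MyTable WHERE code = ? AND load_date = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int pending = 0;
            for (String[] row : csvRows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++pending == BATCH_SIZE) {
                    ps.executeBatch(); // one round trip per BATCH_SIZE rows
                    pending = 0;
                }
            }
            if (pending > 0) {
                ps.executeBatch(); // flush the remainder
            }
        }
    }

    public static void main(String[] args) {
        // 2500 rows with a batch size of 1000 take 3 executeBatch() calls
        System.out.println(flushCount(2500, BATCH_SIZE));
    }
}
```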
Since this Java program is a little old, we were still using sqljdbc4 4.2. We tried migrating to mssql-jdbc 12.6.2.jre8, but the problem persists.
The connection string is:
jdbc:sqlserver://localhost:1433;databaseName=XXX;selectMethod=cursor;encrypt=true;trustServerCertificate=true;
Is there anything missing from the connection string? Is this a known problem?