PostgreSQL error in trigger function #19
I've narrowed the problem down now. A pretty minimal setup to reproduce it would be the following:
Two transactions that change different columns of the same entry out of order result in the trigger function attempting to create a log entry with a negative interval, which obviously fails:
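A rough sketch of the interleaving (the accounts table, its columns, and the timestamps T1/T2 are made up for illustration, not the actual schema from my setup):

```sql
-- t1:
BEGIN;                                           -- now() = T1 for this whole transaction
-- t2:
BEGIN;                                           -- now() = T2, with T2 > T1
UPDATE accounts SET col_a = 'A2' WHERE id = 1;   -- takes the row lock; t2's trigger
                                                 -- starts a new log entry at T2
-- t1:
UPDATE accounts SET col_b = 'B2' WHERE id = 1;   -- blocks on t2's row lock
-- t2:
COMMIT;                                          -- releases the lock
-- t1 resumes, sees t2's committed row version, and its trigger tries to close
-- the open log entry (started at T2) at now() = T1, i.e. a negative interval.
COMMIT;
```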
contents of the data table:
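Something like this, with purely made-up values continuing the sketch above (both updates applied):

```
 id | col_a | col_b | ts
----+-------+-------+-----
  1 | A2    | B2    | ...
```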
what pg_recall tries to write to the log table:
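Roughly like this (again made up; _log_start/_log_end are my shorthand column names and may not match pg_recall's actual log schema):

```
 id | col_a | col_b | _log_start | _log_end
----+-------+-------+------------+----------
  1 | A2    | B1    | T2         | T1        <- T1 < T2: negative interval
  1 | A2    | B2    | T1         |           <- the new entry t1 tries to insert
```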
The problem (or one of them) is that t1's trigger function sees (and tries to update) data that wasn't there when the transaction started (because there was a lock on that record).

Possible fix 1: if the update in the trigger function fails, use … (works the same for more than two transactions).
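If the failing update is caught and retried with a clamped timestamp, the general shape might look something like this (a sketch only: accounts_log, _log_start and _log_end are illustrative names, and I'm assuming the failure surfaces as a check_violation):

```sql
-- Inside the trigger function: close the open log entry at now(); if that
-- would produce a negative interval, clamp to the newest existing _log_start
-- instead, yielding a zero-length interval (works for any number of
-- out-of-order transactions).
BEGIN
  UPDATE accounts_log SET _log_end = now()
    WHERE id = NEW.id AND _log_end IS NULL;
EXCEPTION WHEN check_violation THEN
  UPDATE accounts_log SET _log_end = (
      SELECT max(_log_start) FROM accounts_log WHERE id = NEW.id)
    WHERE id = NEW.id AND _log_end IS NULL;
END;
```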
Possible fix 2: if the update in the trigger function fails, reorder the existing log entries.
The logical order issue is a real strong disadvantage that (in my opinion) disqualifies that attempt: with it, it's possible to call recall.at('tblName', now()) and get different results from what's in the data table.

Possible fix 3: log only the final change (i.e. in case of that error, put the new data values in the current log entry instead of setting its end time and creating a new one).
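Fix 3 in the same illustrative terms (same naming caveats as above) would skip the close-and-insert on error and just overwrite the open entry:

```sql
-- Sketch: on the negative-interval error, keep the current (open) log entry
-- and replace its data columns with the final values, losing the
-- intermediate change.
EXCEPTION WHEN check_violation THEN
  UPDATE accounts_log
    SET col_a = NEW.col_a,
        col_b = NEW.col_b
    WHERE id = NEW.id AND _log_end IS NULL;
```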
My favorite right now is fix 1, but before deciding on a fix I'll take a few more days to investigate a little further.
Apparently when there's high load, it's possible that the trigger function isn't as bulletproof as expected.
I (very rarely, but still) get the following error on a table that gets ~100-200 updates every 30 seconds (pretty much all at the same time):
The table in question looks like this (although I think that should be irrelevant for this issue):
The log table's at ~570k entries right now and the error's been triggered maybe 30 times so far.
As I said, changes do tend to occur all roughly at the same time every 30 seconds, and it's possible that the same record in the data table is updated by two different sources at roughly the same time (so maybe it's a transactional atomicity issue, although AFAIR trigger functions should be pretty much atomic).
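One thing worth keeping in mind here: within a transaction, now() stays fixed at the transaction's start time; only clock_timestamp() advances. For example:

```sql
BEGIN;
SELECT now(), clock_timestamp();  -- both roughly the transaction start
SELECT pg_sleep(2);
SELECT now(), clock_timestamp();  -- now() unchanged, clock_timestamp() ~2s later
COMMIT;
```

So two concurrent transactions can easily commit in the opposite order of their now() timestamps.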
Anyway, that's definitely something that needs to be sorted out before a v1.0 release (and before pushing the extension to PGXN for that matter...).
PS: I know I'm storing the timestamp twice for each log entry (because of the `ts` column in the data table), but the overhead's pretty tiny.