MySQL connector fails to load all records after "Timed out waiting to flush EmbeddedEngine" #4010
Labels: duplicate, priority/high, type/bug
Expected Behavior
The MySQL connector should load the entire table. Instead, the job eventually ends in "success" status even though not all records are ingested.
Current Behavior
Table -> 2.2M records
Size -> 1 GB
Schema ->
BorrowersTableSchema.pdf
Running on EC2 Docker with 16 GB RAM
The sync was doing great until it got to about 1,290,000 of 2,221,620 records loaded, and now I just get:
ERROR i.d.e.EmbeddedEngine(commitOffsets):973 - {} - Timed out waiting to flush EmbeddedEngine{id=optimus} offsets to storage
(ScreenShots)
So after the run above finished, only 1.29M of the 2.2M records had been imported. It seems like everything after the error above was excluded.
Run #1 (Run 15) Logs ->
logs-15-0 (1).txt
So I tried a full reset of the data and ran the sync again. This time it only imported 366,844 records of the total 2.2M.
Run #2 (Run 17)
logs-17-0 (2).txt
So I ran it a third time (this time without resetting the data) and I saw about the same results as before with 355,179 records synced.
The final log entries of the MySQL connector show the following errors:
2021-06-09 19:24:06 ERROR i.d.e.EmbeddedEngine(commitOffsets):973 - {} - Timed out waiting to flush EmbeddedEngine{id=optimus} offsets to storage
2021-06-09 19:24:35 INFO i.d.e.EmbeddedEngine(stop):996 - {} - Stopping the embedded engine
2021-06-09 19:24:47 INFO i.d.c.c.BaseSourceTask(stop):192 - {} - Stopping down connector
2021-06-09 19:24:53 INFO i.d.c.m.MySqlConnectorTask(doStop):453 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Stopping MySQL connector task
2021-06-09 19:24:56 INFO i.d.c.m.ChainedReader(stop):121 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - ChainedReader: Stopping the snapshot reader
2021-06-09 19:24:56 INFO i.d.c.m.AbstractReader(stop):140 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Discarding 67 unsent record(s) due to the connector shutting down
2021-06-09 19:24:56 INFO i.d.c.m.AbstractReader(stop):140 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Discarding 7 unsent record(s) due to the connector shutting down
2021-06-09 19:25:02 INFO i.d.e.EmbeddedEngine(stop):1004 - {} - Waiting for PT5M for connector to stop
2021-06-09 19:25:52 WARN i.d.j.JdbcConnection(doClose):961 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Failed to close database connection by calling close(), attempting abort()
2021-06-09 19:26:01 INFO i.d.c.m.MySqlConnectorTask(completeReaders):491 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Connector task finished all work and is now shutdown
2021-06-09 19:26:07 INFO i.d.j.JdbcConnection(lambda$doClose$3):945 - {} - Connection gracefully closed
2021-06-09 19:26:07 INFO i.d.c.m.SnapshotReader(execute):754 - {dbz.connectorContext=snapshot, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Step 7: rolling back transaction after abort
2021-06-09 19:26:10 ERROR i.d.e.EmbeddedEngine(commitOffsets):973 - {} - Timed out waiting to flush EmbeddedEngine{id=optimus} offsets to storage
2021-06-09 19:26:16 INFO i.a.i.s.m.DebeziumRecordPublisher(lambda$start$1):93 - {} - Debezium engine shutdown.
So the jobs show as successfully finished, but not all of the table's records were synced.
Final Logs
logs-18-0 (2).txt
Please note that while these errors occurred, the instance still had 8 GB (50%) of its memory available, so this does not look like a straightforward out-of-memory condition.
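For context on the error itself: the "Timed out waiting to flush ... offsets to storage" message is logged by Debezium's embedded engine when the offset store does not commit within `offset.flush.timeout.ms` (a standard Kafka Connect worker setting, default 5000 ms). A minimal sketch of the relevant properties is below; whether Airbyte 0.24.7 exposes a way to pass these through to the MySQL source connector is an assumption on my part, not something confirmed here.

```properties
# Sketch only: standard Kafka Connect / Debezium embedded-engine offset settings.
# Passing these through the Airbyte MySQL source is assumed, not verified.
offset.flush.interval.ms=60000   # how often the engine tries to flush offsets
offset.flush.timeout.ms=60000    # how long a flush may take before the timeout above is logged (default 5000)
```

If the flush repeatedly times out, offsets may lag far behind the records actually read, which would be consistent with large chunks of the snapshot being discarded on shutdown as seen in the logs.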
Steps to Reproduce
Severity of the bug for you
Very HIGH
Airbyte Version
0.24.7
Connector Version (if applicable)
MySQL 0.3.7
Snowflake 0.3.9