MySQL connector fails to load any records after "Timed out waiting to flush EmbeddedEngine" #4010

Closed
tylerdelange opened this issue Jun 9, 2021 · 1 comment
Labels
duplicate · priority/high · type/bug

Comments


tylerdelange commented Jun 9, 2021

Expected Behavior

The MySQL connector should load the entire table. Instead, the job eventually ends in a "success" status but not all records are ingested.

Current Behavior

Table -> 2.2M records
Size -> 1 GB
Schema -> BorrowersTableSchema.pdf (attached)

Running in Docker on EC2 with 16 GB RAM.
The sync was progressing well until it reached about 1,290,000 of 2,221,620 records loaded, and now I just get:
ERROR i.d.e.EmbeddedEngine(commitOffsets):973 - {} - Timed out waiting to flush EmbeddedEngine{id=optimus} offsets to storage

(Screenshots attached: image 1, image 2)

After the run above finished, it had only imported 1.29M of the 2.2M records. It seems everything after that error was excluded.
Run #1 (Run 15) Logs ->
logs-15-0 (1).txt

I then tried a full reset of the data and ran the sync again. This time it only imported 366,844 of the 2.2M records.
Run #2 (Run 17)
logs-17-0 (2).txt

I ran it a third time (this time without resetting the data) and saw roughly the same result as before, with 355,179 records synced.
The final log entries of the MySQL connector show the following errors:

2021-06-09 19:24:06 ERROR i.d.e.EmbeddedEngine(commitOffsets):973 - {} - Timed out waiting to flush EmbeddedEngine{id=optimus} offsets to storage
2021-06-09 19:24:35 INFO i.d.e.EmbeddedEngine(stop):996 - {} - Stopping the embedded engine
2021-06-09 19:24:47 INFO i.d.c.c.BaseSourceTask(stop):192 - {} - Stopping down connector
2021-06-09 19:24:53 INFO i.d.c.m.MySqlConnectorTask(doStop):453 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Stopping MySQL connector task
2021-06-09 19:24:56 INFO i.d.c.m.ChainedReader(stop):121 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - ChainedReader: Stopping the snapshot reader
2021-06-09 19:24:56 INFO i.d.c.m.AbstractReader(stop):140 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Discarding 67 unsent record(s) due to the connector shutting down
2021-06-09 19:24:56 INFO i.d.c.m.AbstractReader(stop):140 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Discarding 7 unsent record(s) due to the connector shutting down
2021-06-09 19:25:02 INFO i.d.e.EmbeddedEngine(stop):1004 - {} - Waiting for PT5M for connector to stop
2021-06-09 19:25:52 WARN i.d.j.JdbcConnection(doClose):961 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Failed to close database connection by calling close(), attempting abort()
2021-06-09 19:26:01 INFO i.d.c.m.MySqlConnectorTask(completeReaders):491 - {dbz.connectorContext=task, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Connector task finished all work and is now shutdown
2021-06-09 19:26:07 INFO i.d.j.JdbcConnection(lambda$doClose$3):945 - {} - Connection gracefully closed
2021-06-09 19:26:07 INFO i.d.c.m.SnapshotReader(execute):754 - {dbz.connectorContext=snapshot, dbz.connectorName=optimus, dbz.connectorType=MySQL} - Step 7: rolling back transaction after abort
2021-06-09 19:26:10 ERROR i.d.e.EmbeddedEngine(commitOffsets):973 - {} - Timed out waiting to flush EmbeddedEngine{id=optimus} offsets to storage
2021-06-09 19:26:16 INFO i.a.i.s.m.DebeziumRecordPublisher(lambda$start$1):93 - {} - Debezium engine shutdown.
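For context, the "Timed out waiting to flush ... offsets to storage" message comes from Debezium's embedded engine when offsets cannot be committed to the offset store within offset.flush.timeout.ms (5 s by default). Below is a minimal, hedged sketch of an embedded MySQL engine with that timeout raised; the wiring is not Airbyte's actual code, and the hostnames, file paths, and values are placeholders.

```java
// Minimal sketch, not Airbyte's actual wiring: an embedded Debezium engine for
// MySQL with a longer offset-flush timeout. Property names come from Debezium's
// embedded-engine configuration; hostnames, paths, and values are placeholders.
import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

import java.util.Properties;

public class OffsetFlushSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("name", "optimus");
        props.setProperty("connector.class", "io.debezium.connector.mysql.MySqlConnector");
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");
        // The error in the logs fires when offsets cannot be flushed within this
        // window; raising it (and flushing more often) is one possible mitigation.
        props.setProperty("offset.flush.timeout.ms", "60000");
        props.setProperty("offset.flush.interval.ms", "10000");
        props.setProperty("database.hostname", "mysql.example.com");   // placeholder
        props.setProperty("database.port", "3306");
        props.setProperty("database.user", "replication_user");        // placeholder
        props.setProperty("database.password", "********");
        props.setProperty("database.server.id", "5400");               // placeholder
        props.setProperty("database.server.name", "optimus");
        props.setProperty("database.history", "io.debezium.relational.history.FileDatabaseHistory");
        props.setProperty("database.history.file.filename", "/tmp/dbhistory.dat");

        try (DebeziumEngine<ChangeEvent<String, String>> engine =
                     DebeziumEngine.create(Json.class)
                             .using(props)
                             .notifying(record -> System.out.println(record.value()))
                             .build()) {
            // Run on a dedicated executor in real code; inline here for brevity.
            engine.run();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

Whether a longer timeout actually helps here depends on why the flush is slow, but it shows the knob the error message refers to.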

So the jobs are reported as successfully finished, but not all of the tables' records were synced.
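One way to confirm the gap independently of the job status is to compare row counts on the source table and the destination table directly. The JDBC sketch below is purely illustrative; the URLs, credentials, and table names are hypothetical placeholders, and the MySQL and Snowflake JDBC drivers are assumed to be on the classpath.

```java
// Hedged sketch: compare source vs. destination row counts to confirm the gap.
// JDBC URLs, credentials, and table names are hypothetical placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class RowCountCheck {

    static long count(String url, String user, String password, String table) throws SQLException {
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table)) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void main(String[] args) throws SQLException {
        long source = count("jdbc:mysql://mysql.example.com:3306/optimus", "user", "****", "borrowers");
        long dest = count("jdbc:snowflake://account.snowflakecomputing.com/?db=RAW&schema=PUBLIC",
                "user", "****", "BORROWERS");
        System.out.printf("source=%d destination=%d missing=%d%n", source, dest, source - dest);
    }
}
```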

Final Logs
logs-18-0 (2).txt

Please note that while these possible memory errors occurred, we still had 8 GB (50%) of memory available on the instance.
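Note that free memory on the instance does not necessarily mean the connector's JVM had heap headroom, since the worker process is capped by its own -Xmx or container-aware defaults. The tiny sketch below, purely illustrative, prints the limits the JVM itself sees inside the container.

```java
// Tiny illustrative sketch: print the JVM's own heap limits, which can be far
// below the host's free RAM if -Xmx (or container-aware defaults) caps the heap.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.printf("max heap:   %d MB%n", rt.maxMemory() / mb);
        System.out.printf("total heap: %d MB%n", rt.totalMemory() / mb);
        System.out.printf("free heap:  %d MB%n", rt.freeMemory() / mb);
    }
}
```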

Steps to Reproduce

  1. MySQL connector (source)
  2. Snowflake connector (destination)

Severity of the bug for you

Very HIGH

Airbyte Version

0.24.7

Connector Version (if applicable)

MySQL 0.3.7
Snowflake 0.3.9

@tylerdelange added the type/bug label Jun 9, 2021
@marcosmarxm added the priority/high label Jun 10, 2021
@subodh1810 added the duplicate label Jun 23, 2021
@subodh1810 (Contributor) commented:

Marking this as duplicate of #3969 (comment)
