
Freshdesk returns limited records amount from contacts source #7207

Closed
d0vyda5 opened this issue Oct 20, 2021 · 6 comments · Fixed by #8017

Comments

@d0vyda5

d0vyda5 commented Oct 20, 2021

Environment

  • Airbyte version: 0.30.20-alpha
  • OS Version / Instance: MacOS BigSur
  • Deployment: Docker
  • Source Connector and version: Freshdesk 0.2.7
  • Destination Connector and version: Google Cloud Storage 0.1.2
  • Severity: High
  • Step where error happened: Sync job

Current Behavior

When syncing contacts from Freshdesk with the start_date parameter set to retrieve historical data, only a limited number of records is returned. Each time, only 49,000 records are synced from contacts to the destination folder.

Expected Behavior

All records in the contacts table from the start_date parameter onward should be returned.

I noticed that pull request #6442 mentions that the added start_date parameter can be used with the listed incremental streams:

upon discussion with @vitaliizazmic we agreed to add start_date to the following incremental streams:

contacts
tickets
companies
satisfaction ratings
the default value is now - 30 days

we will always use updated_since parameters in tickets streams (because old logic rely on created tickets, not updated, which break dependant streams - conversations)

Logs

Syncing completed without any errors; however, records were missing.

LOG

2021-10-14 11:20:57 INFO () WorkerRun(call):47 - Executing worker wrapper. Airbyte version: 0.30.20-alpha
2021-10-14 11:20:57 INFO () TemporalAttemptExecution(get):94 - Executing worker wrapper. Airbyte version: 0.30.20-alpha
2021-10-14 11:20:57 WARN () Databases(createPostgresDatabaseWithRetry):38 - Waiting for database to become available...
2021-10-14 11:20:57 INFO () JobsDatabaseInstance(lambda$static$2):25 - Testing if jobs database is ready...
2021-10-14 11:20:57 INFO () Databases(createPostgresDatabaseWithRetry):55 - Database available!
2021-10-14 11:20:57 INFO () DefaultReplicationWorker(run):82 - start sync worker. job id: 77 attempt id: 0
2021-10-14 11:20:57 INFO () DefaultReplicationWorker(run):91 - configured sync modes: {null.contacts=incremental - append}
2021-10-14 11:20:57 INFO () DefaultAirbyteDestination(start):58 - Running destination...
2021-10-14 11:20:57 INFO () LineGobbler(voidCall):65 - Checking if airbyte/destination-gcs:0.1.2 exists...
2021-10-14 11:20:57 INFO () LineGobbler(voidCall):65 - airbyte/destination-gcs:0.1.2 was found locally.
2021-10-14 11:20:57 INFO () DockerProcessFactory(create):127 - Preparing command: docker run --rm --init -i -v airbyte_workspace:/data -v /tmp/airbyte_local:/local -w /data/77/0 --network host --log-driver none airbyte/destination-gcs:0.1.2 write --config destination_config.json --catalog destination_catalog.json
2021-10-14 11:20:57 INFO () LineGobbler(voidCall):65 - Checking if airbyte/source-freshdesk:0.2.7 exists...
2021-10-14 11:20:57 INFO () DockerProcessFactory(create):127 - Preparing command: docker run --rm --init -i -v airbyte_workspace:/data -v /tmp/airbyte_local:/local -w /data/77/0 --network host --log-driver none airbyte/source-freshdesk:0.2.7 read --config source_config.json --catalog source_catalog.json
2021-10-14 11:20:57 INFO () LineGobbler(voidCall):65 - airbyte/source-freshdesk:0.2.7 was found locally.
2021-10-14 11:20:57 INFO () DefaultReplicationWorker(lambda$getDestinationOutputRunnable$3):226 - Destination output thread started.
2021-10-14 11:20:57 INFO () DefaultReplicationWorker(run):119 - Waiting for source thread to join.
2021-10-14 11:20:57 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):190 - Replication thread started.
2021-10-14 11:20:59 INFO () DefaultAirbyteStreamFactory(internalLog):90 - Starting syncing SourceFreshdesk
2021-10-14 11:20:59 INFO () DefaultAirbyteStreamFactory(internalLog):90 - Syncing contacts stream
2021-10-14 11:20:59 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:20:59 INFO i.a.i.b.IntegrationRunner(run):96 - {} - Running integration: io.airbyte.integrations.destination.gcs.GcsDestination
2021-10-14 11:20:59 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:20:59 INFO i.a.i.b.IntegrationCliParser(parseOptions):135 - {} - integration args: {catalog=destination_catalog.json, write=null, config=destination_config.json}
2021-10-14 11:20:59 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:20:59 INFO i.a.i.b.IntegrationRunner(run):100 - {} - Command: WRITE
2021-10-14 11:20:59 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:20:59 INFO i.a.i.b.IntegrationRunner(run):101 - {} - Integration config: IntegrationConfig{command=WRITE, configPath='destination_config.json', catalogPath='destination_catalog.json', statePath='null'}
2021-10-14 11:20:59 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:20:59 WARN c.n.s.JsonMetaSchema(newValidator):338 - {} - Unknown keyword examples - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2021-10-14 11:20:59 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:20:59 WARN c.n.s.JsonMetaSchema(newValidator):338 - {} - Unknown keyword airbyte_secret - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2021-10-14 11:21:00 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:21:00 INFO i.a.i.d.s.S3FormatConfigs(getS3FormatConfig):42 - {} - S3 format config: {"flattening":"Root level flattening","format_type":"CSV","part_size_mb":5}
2021-10-14 11:21:00 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:21:00 INFO i.a.i.d.g.c.GcsCsvWriter(<init>):74 - {} - Full GCS path for stream 'contacts': airbyte-freshdesk/data_sync/test4/contacts/2021_10_14_1634210460912_0.csv
2021-10-14 11:21:00 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:21:00 INFO i.a.i.d.s.u.S3StreamTransferManagerHelper(getDefault):75 - {} - PartSize arg is set to 5 MB
2021-10-14 11:21:01 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:21:01 INFO a.m.s.StreamTransferManager(getMultiPartOutputStreams):329 - {} - Initiated multipart upload to airbyte-freshdesk/data_sync/test4/contacts/2021_10_14_1634210460912_0.csv with full ID ABPnzm7i5ot_6slHQHVtKNqqY4J1U4TN6lxnMQu1b9K1nLQKVPukgIUbSg8Q1RfT638t3tR_
2021-10-14 11:21:02 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 1000
2021-10-14 11:21:05 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 2000
2021-10-14 11:21:08 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 3000
2021-10-14 11:21:11 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 4000
2021-10-14 11:21:58 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 5000
2021-10-14 11:22:02 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 6000
2021-10-14 11:22:05 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 7000
2021-10-14 11:22:08 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 8000
2021-10-14 11:22:12 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 9000
2021-10-14 11:22:59 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 10000
2021-10-14 11:23:02 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 11000
2021-10-14 11:23:06 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 12000
2021-10-14 11:23:09 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 13000
2021-10-14 11:23:13 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 14000
2021-10-14 11:23:59 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 15000
2021-10-14 11:24:03 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 16000
2021-10-14 11:24:06 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 17000
2021-10-14 11:24:10 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 18000
2021-10-14 11:24:14 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 19000
2021-10-14 11:25:00 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 20000
2021-10-14 11:25:04 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 21000
2021-10-14 11:25:08 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 22000
2021-10-14 11:25:12 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 23000
2021-10-14 11:25:15 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 24000
2021-10-14 11:26:00 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 25000
2021-10-14 11:26:04 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 26000
2021-10-14 11:26:08 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 27000
2021-10-14 11:26:12 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 28000
2021-10-14 11:26:16 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 29000
2021-10-14 11:27:01 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 30000
2021-10-14 11:27:05 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 31000
2021-10-14 11:27:09 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 32000
2021-10-14 11:27:13 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 33000
2021-10-14 11:27:18 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 34000
2021-10-14 11:28:01 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 35000
2021-10-14 11:28:06 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 36000
2021-10-14 11:28:10 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 37000
2021-10-14 11:28:14 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 38000
2021-10-14 11:28:19 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 39000
2021-10-14 11:29:02 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 40000
2021-10-14 11:29:06 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 41000
2021-10-14 11:29:08 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:29:08 INFO a.m.s.StreamTransferManager(uploadStreamPart):558 - {} - [Manager uploading to airbyte-freshdesk/data_sync/test4/contacts/2021_10_14_1634210460912_0.csv with id ABPnzm7i5...T638t3tR_]: Finished uploading [Part number 1 containing 5.01 MB]
2021-10-14 11:29:11 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 42000
2021-10-14 11:29:16 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 43000
2021-10-14 11:29:20 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 44000
2021-10-14 11:30:03 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 45000
2021-10-14 11:30:08 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 46000
2021-10-14 11:30:13 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 47000
2021-10-14 11:30:18 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 48000
2021-10-14 11:30:59 INFO () DefaultReplicationWorker(lambda$getReplicationRunnable$2):203 - Records read: 49000
2021-10-14 11:31:02 INFO () DefaultAirbyteStreamFactory(internalLog):90 - Advancing bookmark for Contacts stream from None to 2021-10-14T11:03:17+00:00
2021-10-14 11:31:02 INFO () DefaultAirbyteStreamFactory(internalLog):90 - Finished syncing SourceFreshdesk
2021-10-14 11:31:03 INFO () DefaultReplicationWorker(run):121 - Source thread complete.
2021-10-14 11:31:03 INFO () DefaultReplicationWorker(run):122 - Waiting for destination thread to join.
2021-10-14 11:31:03 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:31:03 INFO i.a.i.b.FailureTrackingAirbyteMessageConsumer(close):80 - {} - Airbyte message consumer: succeeded.
2021-10-14 11:31:03 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:31:03 INFO i.a.i.d.g.w.BaseGcsWriter(close):129 - {} - Uploading remaining data for stream 'contacts'.
2021-10-14 11:31:03 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:31:03 INFO a.m.s.MultiPartOutputStream(close):158 - {} - Called close() on [MultipartOutputStream for parts 1 - 10000]
2021-10-14 11:31:03 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:31:03 INFO a.m.s.MultiPartOutputStream(close):158 - {} - Called close() on [MultipartOutputStream for parts 1 - 10000]
2021-10-14 11:31:03 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:31:03 WARN a.m.s.MultiPartOutputStream(close):160 - {} - [MultipartOutputStream for parts 1 - 10000] is already closed
2021-10-14 11:31:03 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:31:03 INFO a.m.s.StreamTransferManager(uploadStreamPart):558 - {} - [Manager uploading to airbyte-freshdesk/data_sync/test4/contacts/2021_10_14_1634210460912_0.csv with id ABPnzm7i5...T638t3tR_]: Finished uploading [Part number 2 containing 7.08 MB]
2021-10-14 11:31:03 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:31:03 INFO a.m.s.StreamTransferManager(complete):395 - {} - [Manager uploading to airbyte-freshdesk/data_sync/test4/contacts/2021_10_14_1634210460912_0.csv with id ABPnzm7i5...T638t3tR_]: Completed
2021-10-14 11:31:03 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:31:03 INFO i.a.i.d.g.w.BaseGcsWriter(close):131 - {} - Upload completed for stream 'contacts'.
2021-10-14 11:31:03 INFO () DefaultReplicationWorker(lambda$getDestinationOutputRunnable$3):231 - state in DefaultReplicationWorker from Destination: io.airbyte.protocol.models.AirbyteMessage@32f21399[type=STATE,log=<null>,spec=<null>,connectionStatus=<null>,catalog=<null>,record=<null>,state=io.airbyte.protocol.models.AirbyteStateMessage@3c35ebb7[data={"contacts":{"updated_at":"2021-10-14T11:03:17+00:00"}},additionalProperties={}],additionalProperties={}]
2021-10-14 11:31:03 INFO () DefaultAirbyteStreamFactory(lambda$create$0):53 - 2021-10-14 11:31:03 INFO i.a.i.b.IntegrationRunner(run):153 - {} - Completed integration: io.airbyte.integrations.destination.gcs.GcsDestination
2021-10-14 11:31:03 INFO () DefaultReplicationWorker(run):124 - Destination thread complete.
2021-10-14 11:31:03 INFO () DefaultReplicationWorker(run):152 - sync summary: io.airbyte.config.ReplicationAttemptSummary@53001acb[status=completed,recordsSynced=49900,bytesSynced=23756357,startTime=1634210457277,endTime=1634211063661]
2021-10-14 11:31:03 INFO () DefaultReplicationWorker(run):159 - Source output at least one state message
2021-10-14 11:31:03 INFO () DefaultReplicationWorker(run):165 - State capture: Updated state to: Optional[io.airbyte.config.State@7cedbf39[state={"contacts":{"updated_at":"2021-10-14T11:03:17+00:00"}}]]
2021-10-14 11:31:03 INFO () TemporalAttemptExecution(get):115 - Stopping cancellation check scheduling...
2021-10-14 11:31:03 INFO () SyncWorkflow$ReplicationActivityImpl(replicate):178 - sync summary: io.airbyte.config.StandardSyncOutput@2528c6dd[standardSyncSummary=io.airbyte.config.StandardSyncSummary@40a3d535[status=completed,recordsSynced=49900,bytesSynced=23756357,startTime=1634210457277,endTime=1634211063661],state=io.airbyte.config.State@7cedbf39[state={"contacts":{"updated_at":"2021-10-14T11:03:17+00:00"}}],outputCatalog=io.airbyte.protocol.models.ConfiguredAirbyteCatalog@2d46cdaa[streams=[io.airbyte.protocol.models.ConfiguredAirbyteStream@4d19641a[stream=io.airbyte.protocol.models.AirbyteStream@2e644d4a[name=contacts,jsonSchema={"type":"object","$schema":"http://json-schema.org/draft-07/schema#","properties":{"id":{"type":"integer"},"name":{"type":"string"},"email":{"type":"string"},"phone":{"type":["string","integer","null"]},"active":{"type":"boolean"},"mobile":{"type":["string","integer","null"]},"address":{"type":["string","null"]},"language":{"type":"string"},"job_title":{"type":["string","null"]},"time_zone":{"type":"string"},"company_id":{"type":["integer","null"]},"created_at":{"type":"string"},"twitter_id":{"type":["integer","null"]},"updated_at":{"type":"string"},"csat_rating":{"type":["integer","null"]},"description":{"type":["string","null"]},"facebook_id":{"type":["integer","null"]},"custom_fields":{"type":"object"},"preferred_source":{"type":"string"},"unique_external_id":{"type":["string","null"]}}},supportedSyncModes=[incremental],sourceDefinedCursor=true,defaultCursorField=[updated_at],sourceDefinedPrimaryKey=[],namespace=<null>,additionalProperties={}],syncMode=incremental,cursorField=[updated_at],destinationSyncMode=append,primaryKey=[],additionalProperties={}]],additionalProperties={}]]

Steps to Reproduce

  1. Create a Freshdesk connector on an account that has contacts older than 30 days.
  2. In the start_date parameter, enter a date older than 30 days.
  3. Run the sync job.
  4. Only a limited number of records appears in the destination folder.

Are you willing to submit a PR?

No

d0vyda5 added the type/bug label on Oct 20, 2021
@harshithmullapudi
Contributor

Hey @d0vyda5 thanks for reporting this. I added this to our connector-roadmap so the team can look at it.

@augan-rymkhan
Contributor

augan-rymkhan commented Nov 12, 2021

Hey, @d0vyda5

In the log provided here we can see: recordsSynced=49900

  • Freshdesk recommends avoiding calls that reference page numbers over 500 (deep pagination). These are performance-intensive calls on Freshdesk's side, and you may suffer from extremely long response times.

  • The maximum number of records that can be retrieved per page is 100.

You can check the current implementation here,
where maximum_page = 500.

So we are limited to 49,900 records (499 × 100).
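
For reference, here is a minimal sketch (not the connector's actual code) of how such a page-limited read ends up capped at 49,900 records. The per_page, page and _updated_since parameters follow the public List Contacts API, MAX_PAGE mirrors the connector's maximum_page constant, and the basic-auth style (API key as username) is the documented Freshdesk convention:

import requests

PER_PAGE = 100   # Freshdesk's maximum page size
MAX_PAGE = 500   # deep-pagination ceiling; pages 500+ are discouraged/rejected

def read_contacts(domain: str, api_key: str, updated_since: str):
    """Yield contacts page by page; caps out at 499 pages x 100 records = 49,900."""
    url = f"https://{domain}.freshdesk.com/api/v2/contacts"
    for page in range(1, MAX_PAGE):  # pages 1..499
        resp = requests.get(
            url,
            params={"per_page": PER_PAGE, "page": page, "_updated_since": updated_since},
            auth=(api_key, "X"),
        )
        resp.raise_for_status()
        records = resp.json()
        if not records:
            break  # ran out of data before hitting the page ceiling
        yield from records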

A potential solution could be:
if the contacts/ endpoint could return records sorted by updated_at in ascending order, we could take the updated_at value from the last record on page=499 and then send requests with _updated_since=<last_record_updated_at>&page=1 (a sketch of the idea follows below).
But the contacts/ endpoint does not support the order_by feature.
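
Purely as an illustration of that restart-on-cursor idea (not viable for contacts/ as long as ordering is unsupported), with fetch_page as a hypothetical helper:

def read_beyond_page_limit(fetch_page, page_limit: int = 500):
    """fetch_page(page, updated_since) -> list of records (hypothetical helper).

    Only correct if records come back sorted by updated_at ascending: once the
    page ceiling is reached, restart from page 1 using the last seen updated_at
    as the new _updated_since cursor. Boundary duplicates would need de-duping.
    """
    updated_since = None
    while True:
        last = None
        for page in range(1, page_limit):
            records = fetch_page(page, updated_since)
            if not records:
                return
            yield from records
            last = records[-1]
        updated_since = last["updated_at"]  # advance cursor, start over at page 1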

I wrote to Freshdesk support about this issue. They answered that there is no option to pull records beyond 500 pages.
order_by works with tickets/ alone and not with contacts/.
Adding an ordering feature to endpoints other than tickets/ is not on their roadmap for now.

@sherifnada Can we use the Contact Export API instead?
Freshdesk support wrote that all the present contacts would be exported as CSV by this API.

@sherifnada
Contributor

@augan-rymkhan thanks for the great write up.

It seems that we should only expect this issue to happen when doing the initial sync, as all further syncs can be filtered using the updated_since parameter.

Therefore, I suggest the following UX:

  1. The first time a user syncs the Freshdesk connector, we should use the export endpoint, keeping track of the maximum updated_at field across all the records. (I suspect this will also be much faster than using the list endpoint.)
  2. Afterwards, every time the user syncs the contacts stream, we use the /list endpoint with the since_updated_at parameter to filter incrementally as normal.

WDYT?
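
A rough sketch of what this two-phase flow could look like, assuming hypothetical export_contacts() and list_contacts_since() helpers (not real connector methods); the state shape mirrors the {"contacts": {"updated_at": ...}} state message visible in the log above:

def sync_contacts(state: dict):
    """First sync: full export; later syncs: incremental reads filtered by the cursor."""
    cursor = state.get("contacts", {}).get("updated_at")
    if cursor is None:
        # Initial sync: read everything from the export endpoint and track
        # the newest updated_at seen across all records.
        for record in export_contacts():             # hypothetical helper
            ts = record["updated_at"]
            cursor = ts if cursor is None or ts > cursor else cursor
            yield record
    else:
        # Subsequent syncs: list endpoint filtered by the saved cursor.
        for record in list_contacts_since(cursor):   # hypothetical helper
            # ISO-8601 timestamps in a fixed offset compare correctly as strings.
            cursor = max(cursor, record["updated_at"])
            yield record
    if cursor is not None:
        state["contacts"] = {"updated_at": cursor}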

@augan-rymkhan
Contributor

(quoting @sherifnada's proposed UX above)

@sherifnada I think it's a good solution. I'll implement this approach then. Thanks!

@augan-rymkhan
Contributor

augan-rymkhan commented Nov 15, 2021

@sherifnada Unfortunately, not all contact fields are available for exporting; updated_at, created_at and id are among those missing.
Only the fields below are available for exporting:

  • name,
  • job_title,
  • email,
  • phone,
  • mobile,
  • twitter_id,
  • company_name,
  • address,
  • time_zone,
  • language,
  • tag_names,
  • description,
  • client_manager,
  • unique_external_id,
  • twitter_profile_status,
  • twitter_followers_count
There is another endpoint, Filter Contacts:

GET 'https://domain.freshdesk.com/api/v2/search/contacts?query="updated_at:>2020-10-22 AND updated_at:<2020-10-22"'

It allows filtering by date: ?query="updated_at:>2020-10-22 AND updated_at:<2020-10-22"
We could implement slicing based on date (see the sketch at the end of this comment).

But it also has limitations:
records per page = 30
max_page = 10
So we can get only 300 records per slice (day).

Also, its response doesn't include these fields:

  • csat_rating
  • preferred_source
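
For completeness, a hedged sketch of what day-sliced reads against this search endpoint could look like, reusing the query syntax shown above. The "results" response key, the fixed 30-records-per-page size and the 10-page cap are assumptions based on the public Filter Contacts documentation, and a day with more than 300 updates would still lose records:

from datetime import date, timedelta
import requests

def search_contacts_by_day(domain: str, api_key: str, start: date, end: date):
    """Yield contacts one day-slice at a time; each slice is capped at 10 * 30 = 300."""
    url = f"https://{domain}.freshdesk.com/api/v2/search/contacts"
    day = start
    while day < end:
        nxt = day + timedelta(days=1)
        query = f'"updated_at:>{day.isoformat()} AND updated_at:<{nxt.isoformat()}"'
        for page in range(1, 11):  # the search endpoint serves at most 10 pages
            resp = requests.get(url, params={"query": query, "page": page},
                                auth=(api_key, "X"))
            resp.raise_for_status()
            results = resp.json().get("results", [])
            if not results:
                break
            yield from results
        day = nxt  # note: the strict >/< comparisons gloss over boundary handling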

@augan-rymkhan
Contributor

@d0vyda5

We merged a fix for this issue into master and released a new version of the connector.

Upgrade your connector to version 0.2.9 and get started. To upgrade, go to the admin panel on the left-hand side of the UI, find this connector in the list, and input the latest connector version.

Please let us know if you have any further questions.

Enjoy!
