
Failing ES Promotion: dashboard Reporting Download CSV Field Formatters and Scripted Fields Download CSV export of a saved search panel #96000

Closed
spalger opened this issue Mar 31, 2021 · 6 comments · Fixed by #96097
Labels
blocker (Deprecated) Feature:Reporting Use Reporting:Screenshot, Reporting:CSV, or Reporting:Framework instead failed-es-promotion PR sent skipped-test Team:Presentation Presentation Team for Dashboard, Input Controls, and Canvas v7.13.0 v8.0.0

Comments


spalger commented Mar 31, 2021

The latest 8.0 and 7.x ES snapshots are failing when trying to create CSV exports with the following:

https://kibana-ci.elastic.co/job/elasticsearch+snapshots+verify/2533/execution/node/539/log/

[00:14:17]                   │ proc [kibana]   log   [20:53:50.132] [error][csv_searchsource_immediate][csv_searchsource_immediate][execute-job][plugins][reporting] KbnServerError: search_phase_execution_exception
[00:14:17]                   │ proc [kibana]     at getKbnServerError (/dev/shm/workspace/kibana-build-xpack-23/src/plugins/kibana_utils/server/report_server_error.js:39:10)
[00:14:17]                   │ proc [kibana]     at search (/dev/shm/workspace/kibana-build-xpack-23/src/plugins/data/server/search/es_search/es_search_strategy.js:62:45)
[00:14:17]                   │ proc [kibana]     at runMicrotasks (<anonymous>)
[00:14:17]                   │ proc [kibana]     at processTicksAndRejections (internal/process/task_queues.js:93:5) {
[00:14:17]                   │ proc [kibana]   statusCode: 500,
[00:14:17]                   │ proc [kibana]   errBody: {
[00:14:17]                   │ proc [kibana]     error: {
[00:14:17]                   │ proc [kibana]       root_cause: [Array],
[00:14:17]                   │ proc [kibana]       type: 'search_phase_execution_exception',
[00:14:17]                   │ proc [kibana]       reason: 'all shards failed',
[00:14:17]                   │ proc [kibana]       phase: 'query',
[00:14:17]                   │ proc [kibana]       grouped: true,
[00:14:17]                   │ proc [kibana]       failed_shards: [Array],
[00:14:17]                   │ proc [kibana]       caused_by: [Object]
[00:14:17]                   │ proc [kibana]     },
[00:14:17]                   │ proc [kibana]     status: 500
[00:14:17]                   │ proc [kibana]   }
[00:14:17]                   │ proc [kibana] }
[00:14:17]                   │ proc [kibana]   log   [20:53:50.134] [warning][csv_searchsource_immediate][csv_searchsource_immediate][execute-job][plugins][reporting] No scrollId to clear!
[00:14:17]                   │ proc [kibana]   log   [20:53:50.135] [error][csv_searchsource_immediate][plugins][reporting] {"root_cause":[{"type":"unsupported_operation_exception","reason":"Cannot fetch values for internal field [_id]."}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"babynames","node":"bZwEE_drSSCw4MQs9a0HHg","reason":{"type":"unsupported_operation_exception","reason":"Cannot fetch values for internal field [_id]."}}],"caused_by":{"type":"unsupported_operation_exception","reason":"Cannot fetch values for internal field [_id].","caused_by":{"type":"unsupported_operation_exception","reason":"Cannot fetch values for internal field [_id]."}}}
[00:14:17]                   │ proc [kibana]   log   [20:53:50.135] [error][http] {"root_cause":[{"type":"unsupported_operation_exception","reason":"Cannot fetch values for internal field [_id]."}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"babynames","node":"bZwEE_drSSCw4MQs9a0HHg","reason":{"type":"unsupported_operation_exception","reason":"Cannot fetch values for internal field [_id]."}}],"caused_by":{"type":"unsupported_operation_exception","reason":"Cannot fetch values for internal field [_id].","caused_by":{"type":"unsupported_operation_exception","reason":"Cannot fetch values for internal field [_id]."}}}
[00:14:17]                   │ proc [kibana] Sending error to Elastic APM { id: '5b4b81450e2ea5e89f5b54af73bda3b6' }
[00:14:17]                   │ proc [kibana]  error  [20:53:50.014]  Error: Internal Server Error
[00:14:17]                   │ proc [kibana]     at HapiResponseAdapter.toInternalError (/dev/shm/workspace/kibana-build-xpack-23/src/core/server/http/router/response_adapter.js:61:19)
[00:14:17]                   │ proc [kibana]     at Router.handle (/dev/shm/workspace/kibana-build-xpack-23/src/core/server/http/router/router.js:177:34)
[00:14:17]                   │ proc [kibana]     at runMicrotasks (<anonymous>)
[00:14:17]                   │ proc [kibana]     at processTicksAndRejections (internal/process/task_queues.js:93:5)
[00:14:17]                   │ proc [kibana]     at handler (/dev/shm/workspace/kibana-build-xpack-23/src/core/server/http/router/router.js:124:50)
[00:14:17]                   │ proc [kibana]     at exports.Manager.execute (/dev/shm/workspace/kibana-build-xpack-23/node_modules/@hapi/hapi/lib/toolkit.js:60:28)
[00:14:17]                   │ proc [kibana]     at Object.internals.handler (/dev/shm/workspace/kibana-build-xpack-23/node_modules/@hapi/hapi/lib/handler.js:46:20)
[00:14:17]                   │ proc [kibana]     at exports.execute (/dev/shm/workspace/kibana-build-xpack-23/node_modules/@hapi/hapi/lib/handler.js:31:20)
[00:14:17]                   │ proc [kibana]     at Request._lifecycle (/dev/shm/workspace/kibana-build-xpack-23/node_modules/@hapi/hapi/lib/request.js:370:32)
[00:14:17]                   │ proc [kibana]     at Request._execute (/dev/shm/workspace/kibana-build-xpack-23/node_modules/@hapi/hapi/lib/request.js:279:9)
[00:14:17]                   │ proc [kibana] Sending error to Elastic APM { id: '545f180f37e55716b988c7b00706fc2a' }

It's unclear which change in ES is causing this error, but it's consistent across both versions, so I'm pretty sure it's an ES change.

Skipped

master/8.0: aa81dc5 + 0a6851c
7.x/7.13: e1f8c81 + 1999332

@spalger spalger added blocker (Deprecated) Feature:Reporting Use Reporting:Screenshot, Reporting:CSV, or Reporting:Framework instead Team:Presentation Presentation Team for Dashboard, Input Controls, and Canvas v8.0.0 failed-es-promotion v7.13.0 labels Mar 31, 2021
@elasticmachine

Pinging @elastic/kibana-presentation (Team:Presentation)

spalger added a commit that referenced this issue Mar 31, 2021
spalger added a commit that referenced this issue Mar 31, 2021

spalger commented Apr 1, 2021

From what I can tell the root cause here is also causing "Reporting APIs CSV Generation from SearchSource non-timebased Handle _id and _index columns" to fail, so I've skipped it here with a reference to this issue, though that test only seems to be failing in master.

cc @tsullivan since it seems you worked on both these tests in #88303


tsullivan commented Apr 2, 2021

These might have begun failing because of: elastic/elasticsearch#70575

With this change, we allow this if the field is explicitly
queried for using its name, but won't include metadata fields when e.g.
requesting all fields via "*".

With this change, not all metadata fields will be retrievable by using their
names, but support for "_size" and "_doc_count" (which is fetched from source) is
added. Support for other metadata field types will need to be decided case by
case and an appropriate ValueFetcher needs to be supplied.
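To illustrate the two behaviors that PR describes (a sketch using a hypothetical index name `mydata`, not output from the failing CI run):

```
// Requesting all fields: succeeds, but metadata fields like _id are
// simply omitted from each hit's "fields" section.
POST /mydata/_search
{
  "fields": [ "*" ]
}

// Requesting _id by name: rejected with
// "Cannot fetch values for internal field [_id]."
POST /mydata/_search
{
  "fields": [ "_id" ]
}
```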

An example of the error:

The search that is posted (simplified example):

POST /mydata/_search
{
  "fields": [
    "_id",
    "name",
    "role",
    "value"
  ]
}

The response from Elasticsearch is:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "unsupported_operation_exception",
        "reason" : "Cannot fetch values for internal field [_id]."
      }
    ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [
      {
        "shard" : 0,
        "index" : "babynames",
        "node" : "_aQJrH3ER2uQXCmgN_07_w",
        "reason" : {
          "type" : "unsupported_operation_exception",
          "reason" : "Cannot fetch values for internal field [_id]."
        }
      }
    ],
    "caused_by" : {
      "type" : "unsupported_operation_exception",
      "reason" : "Cannot fetch values for internal field [_id].",
      "caused_by" : {
        "type" : "unsupported_operation_exception",
        "reason" : "Cannot fetch values for internal field [_id]."
      }
    }
  },
  "status" : 500
}
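One way a client can cope with the new behavior is to leave metadata fields out of the search body's `fields` list and read `_id`/`_index` from the hit envelope instead. A minimal sketch (the helper names `buildSearchFields` and `rowFromHit` are hypothetical, not the actual fix in #96097):

```javascript
// Metadata fields that ES now refuses to serve via the search `fields`
// option; they are still present on the hit envelope itself.
const METADATA_FIELDS = new Set(['_id', '_index']);

// Build the `fields` array for the search body, leaving metadata fields out.
function buildSearchFields(columns) {
  return columns.filter((name) => !METADATA_FIELDS.has(name));
}

// Assemble one CSV row from a search hit: metadata columns come from the
// hit envelope (hit._id, hit._index), everything else from hit.fields.
function rowFromHit(hit, columns) {
  return columns.map((name) => {
    if (METADATA_FIELDS.has(name)) return hit[name];
    const values = (hit.fields && hit.fields[name]) || [];
    return values[0];
  });
}
```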

Update:

I've confirmed with the developers of elastic/elasticsearch#70575 that it was the culprit.


spalger commented Apr 5, 2021

Thanks for looking into this @tsullivan. Do you have an idea of how we're going to fix this?

@tsullivan

@spalger I have a fix in progress for this: #96097

CSV export was rewritten in #88303 (targeted for 7.13) for runtime fields compatibility, and that rewrite exposed the code to this failure. I will try to get this fix into 7.13 as well.

spalger added a commit that referenced this issue Apr 7, 2021

spalger commented Apr 7, 2021

Backporting the skip for "non-timebased Handle _id and _index columns" to 7.x (1999332), as it's now failing there too.
