Problem
The dataframe is destroyed on source change. This becomes problematic once we store the session ID in the dataframe's meta info: if the user switches from an async query result to another source and then back to the same async query source, we shouldn't need to short-poll for 30 seconds again, because we should still have the same session ID.
Another problem is that the current code expects the source to be an index. Part of the virtual index pattern creation fetches the mappings for that source, and if the underlying source is not an index, that API call fails. We should address that as well.
⚠️ Blocker for async queries
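To make the session ID concern concrete, here is a minimal sketch, assuming a dataframe that survives the source switch and carries the session ID in its meta. The names (`DataFrame`, `DataFrameMeta`, `resolveSessionId`, `startAsyncSession`) are placeholders for illustration, not the actual plugin APIs.

```ts
// Hypothetical shapes for illustration only; the real interfaces live under
// src/plugins/data/common/data_frames and may differ.
interface DataFrameMeta {
  sessionId?: string;
}

interface DataFrame {
  name: string;
  schema: unknown | null;
  meta?: DataFrameMeta;
}

// Only start the ~30-second short-polling loop when the dataframe for this
// source does not already carry a session ID.
async function resolveSessionId(
  df: DataFrame | undefined,
  startAsyncSession: () => Promise<string> // assumed helper that does the polling
): Promise<string> {
  if (df?.meta?.sessionId) {
    // The source was queried before and its dataframe survived the source
    // switch, so reuse the existing session instead of polling again.
    return df.meta.sessionId;
  }
  return startAsyncSession();
}
```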
Expected Result
Keep a cache of dataframes instead of a single dataframe. If the dataframe for a source is null, create one with that name and a null schema.
Don't destroy the dataframe when switching sources.
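A rough sketch of what the cached-dataframes idea could look like, under the assumption of a simple map keyed by source name; `DataFramesCache` and the `DataFrame` shape here are illustrative, not the actual service API (see @sejli's `_df_cache.ts` linked under Additional info for the real starting point):

```ts
interface DataFrame {
  name: string;
  schema: unknown | null;
  meta?: Record<string, unknown>;
}

// Cache of dataframes keyed by source name. Switching sources only changes
// which entry is read; nothing gets destroyed.
class DataFramesCache {
  private readonly frames = new Map<string, DataFrame>();

  // Return the dataframe for a source, creating one with the source as its
  // name and a null schema if none exists yet.
  public getOrCreate(source: string): DataFrame {
    let df = this.frames.get(source);
    if (!df) {
      df = { name: source, schema: null };
      this.frames.set(source, df);
    }
    return df;
  }

  public set(source: string, df: DataFrame): void {
    this.frames.set(source, df);
  }

  public clear(): void {
    this.frames.clear();
  }
}
```

With something like this in place, switching from an async query result to another source and back would return the same cached dataframe, including any session ID stored in its meta.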
Requirements
Convert the dataframe service into a dataframes service.
Create a dataframe whose name is the source if no dataframe for that source exists in the cache.
No errors from the temp index pattern creation (or generalize it into a schema call that plugins can also replace); see the sketch below.
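One hedged sketch of that last requirement: instead of always hitting the mappings API during temp index pattern creation, resolve the schema through a provider that plugins can register or replace. `SchemaProvider` and `SchemaProviderRegistry` are assumed names for illustration, not existing OpenSearch Dashboards APIs.

```ts
type DataFrameSchema = Array<{ name: string; type: string }> | null;

// A schema provider answers "what fields does this source have?" without
// assuming the source is an index.
interface SchemaProvider {
  canHandle(source: string): boolean;
  getSchema(source: string): Promise<DataFrameSchema>;
}

class SchemaProviderRegistry {
  private readonly providers: SchemaProvider[] = [];

  // Plugins register their own provider, e.g. one for async query results.
  public register(provider: SchemaProvider): void {
    this.providers.push(provider);
  }

  // Fall back to a null schema instead of throwing when no provider matches,
  // so temp index pattern creation never fails for non-index sources.
  public async resolve(source: string): Promise<DataFrameSchema> {
    const provider = this.providers.find((p) => p.canHandle(source));
    return provider ? provider.getSchema(source) : null;
  }
}
```

An index-backed provider would wrap the existing mappings call, while an async-query provider could derive the schema from the dataframe itself.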
kavilla changed the title from "Convert dataframe service to a dataframes service" to "[Discover-Next] support dataframes, not just a single dataframe" on Jun 14, 2024
Additional info
@sejli implemented a solution on a temp branch: https://github.com/sejli/OpenSearch-Dashboards/blob/0771fc900e877a79afe73b5b6d593be036d2e0f7/src/plugins/data/common/data_frames/_df_cache.ts#L37
It's great and solves the problem for async queries, but since we have time we should take a more holistic approach.