[Proposal] Unified Data Exploration in OpenSearch Dashboards #5507
Comments
@anirudha @kgcreative @dagneyb @ashwin-pc Would love to get your thoughts on the above proposal.
Hey @ahopp. Quick question: do we mean DQL? I also don't see Lucene in the proposal. Should we call it out as well, since I see it in the screenshot?
Is this information available for public access? I am curious about the variety of users sampled, meaning whether novice through experienced users were included and whether all shared the same feelings. It also seems targeted at users who specifically have the Observability plugin installed. Community members who fork OpenSearch Dashboards might not have that installed. So has there been any consideration of pulling Observability into the core repos if the project intends to modify the existing core plugins to be consistent with external plugins? That would also add more weight to the consistency problem, since if I were just forking this repo for my org, my users would not have the same problem.

I would also like to see the context around the integration points, assuming that the new sections would depend on OpenSearch having ML-Commons, Observability, and Query Workbench installed to support the features here. Those do not come as part of base OpenSearch, which is a bit of a blocker since there seem to be compatibility issues between ML-Commons and OpenSearch > 2.11.

The new view does look powerful, but it adds a lot of features and a lot of entries, which could be a little overwhelming compared to the previous view. If the functional purpose of Discover is to quickly discover your data within your cluster, we can still do that in the north star, but I get a bit lost in the number of buttons and inputs available. One cool thing about Discover is the configurable coloring of fields (using Painless scripting, for example, if I wanted to color fields whose value is greater than a specific number) and unknown fields (fields not defined in index patterns but returned in the response). Are we sure the experienced user has feature parity with all query languages? I would like to understand whether these new mock-ups were also user researched to confirm they accomplish their task.

Another thing is that the filter bar and menus are global. I don't think it makes sense to focus on unifying the data explorer while retaining the old filter bar and menus in other parts of the application. I'd imagine that would be confusing for users, especially if they have a saved search on their dashboard with the old filter bar and then click into it to get a different experience. It would also probably be unnecessary (and more work) to overcomplicate the logic for the data explorer plugin by giving it its own query experience. I would like to see the filter bar and data selector experience thought about universally within OpenSearch Dashboards and OpenSearch Dashboards plugins, so we avoid moving the confusion from Discover and Log Analytics to confusion about why Data Explorer and dashboards/visualizations are different. Thank you!
@ahopp this is a great north star proposal! Thanks for all the detailed videos and summaries of each experience. Here are some of the questions I have with the approach:
One high-level comment on the proposal: it would be nice to differentiate between view-specific features and framework features. Most of the features specified here seem to be specific to the Document Explorer view. Many of them don't apply, for example, to the Visualization view (e.g., the AI assistant works very differently for the Visualization view and the Doc viewer). Another example of this is the contextual data explorer. This seems to be something that the framework needs to support, not a view.
Thank you all for the proposal. This is a huge transition. I would like to discuss several questions and concerns regarding this proposal.
To summarize my thoughts here: even with the initial design, clearer definitions and prerequisite technical research/investigation are essential to understand and navigate the complexities inherent in this proposal. This foundational work will shape the decision-making process, ensuring that any changes made are feasible, user-friendly, and enhance the overall functionality of OpenSearch Dashboards. This approach is not just about technical feasibility but also about aligning with user expectations and existing workflows.
@kavilla, @ashwin-pc, @ananzh thanks for the comments, questions, and recommendations. I'm going to try to respond to all in one comment, but feel free to ask for more clarity if I miss something.

In terms of research: I can work with our researcher to see how we can best share some of the studies we've conducted that informed the concepts presented above. Generally, we spoke with users of both OpenSearch and other similar tools - they were typically intermediate to expert. A key bias to mention is that the research focused on observability use cases - however, we supplemented this information with research from other parts of our product (alerting and general dashboards baselining), comparative analysis of other tools, and secondary research into data exploration for large data. Though the concepts show the bias toward observability use cases, we have created space for other experiences and functionality (view switcher + custom views, extension points through tabs, augmented layering, etc.). As we move from the observability use case to others, we aim to iterate and expand this North Star and leverage focused use case research to evolve holistically. @ashwin-pc I think this lightly addresses some of your specific questions around views and the broader application of the concepts outlined. Our next step is to put this iteration in front of users for feedback through a formal usability and desirability study. I'm sure that will lead to iterations - thanks for calling that out.

With regard to the more tactical and implementation questions: these are all great callouts and a reminder to think holistically when we get to implementation. The goal of these concepts isn't to illustrate exact things we should build, but rather to visualize ideas and experiences that we think users would find useful. As we start to break down the ideas outlined above, I'm sure we will be faced with the challenges raised in all your comments - but I don't think those limitations should stop us from ideating on what a good experience for users might look like without them. With that said, I think there are clear challenges with our foundational platform that make it hard to implement some of the ideas outlined. @ananzh's comment does a great job of outlining some of them, but I'm sure there are more and would love to have more of these discussions. Keep an eye on the issues @ahopp linked in the initial proposal; these are related to the incremental steps. Hope this provides some clarity.
I'd like to propose creating in-product documentation for both DQL and Lucene (query string query language). I've recently documented both, and it would be nice to have a version of those at the users' fingertips when they are in Dashboards. Happy to help with reformatting or otherwise adjusting the documentation as necessary. |
This would be a great improvement. More direct integration with documentation would be ideal in general, but internal to OpenSearch Dashboards would be the best UX IMO. |
To add, we have some of this on the OpenSearch blog. In particular this blogpost has some information on the findings. Some of the high-level findings that you may be interested in would be;
This research also informed this RFC (#4298), which might be interesting to those curious about the research. Finally, the blog post on community insights (here) highlights more findings that are relevant here: "While some participants called out the importance of setting up the index templates correctly for an improved downstream experience, others highlighted frustrations with this step in dashboard creation as being distinct from the experience of creating dashboards with other tools. Participants also noted the benefits of creating standards in the encoding and permissions processes." It goes on to highlight: "Participants in this study called out many inconsistencies and deficiencies in the visualization functionality in OpenSearch. This included color coding, interchangeability of charts, filter application, and zooming in and out of a chart. Given the range of dashboard creator needs, participants expect OpenSearch to streamline dashboard creation and, in addition, provide enhanced functionality in order to maximize usage of OpenSearch Dashboards."
I wanted to follow up on this and highlight some other inputs on why we might want to strive for a more unified view of the Dashboards experience. First, we should strive for consistency when possible. This highlights some of the reasons we should push for a more unified experience (e.g., removing the convoluted experience caused by overlapping applications with similar functionalities but differing user interfaces) and build a more intuitive, integrated solution for all data exploration tasks.
Updated UX for the following proposal can be found here: #6092
Overview
This proposal aims to address the challenges users face in data exploration within OpenSearch Dashboards (OSD). Users have reported a convoluted experience due to overlapping applications with similar functionalities but differing user interfaces (e.g., Discover and Log Analytics). This redundancy also complicates the process of querying data, as users must additionally decide on a query language (e.g., DSL, PPL, or SQL). Through our exploration of this issue, we've learned that users spend a significant amount of time exploring our tools rather than using them to explore their data.
Background
We explored how users explore data, what tools they use, and what they expect. We also audited our current tools to better understand how they are similar and different. Though this initial focus was on the Observability and Log Analytics portion, we feel the resulting framework can extend to other use cases (outlined below). Our research and user feedback have highlighted the need for simplicity in technology for effective long-term adoption. The existing redundancy in tools like Discover and Log Analytics, and the requirement to choose between different query languages (DSL, PPL, or SQL), have been identified as key pain points. Based on these explorations, we have iterated on a strategy that focuses on what is best for our users (agnostic of historical context or decisions), based on direct user feedback and user research.
Objective
The objective is to develop a unified Data Exploration environment within OpenSearch Dashboards, providing an intuitive, integrated solution for all data exploration tasks. This will reduce cognitive load, streamline the user experience, and enhance productivity. The cornerstone of our updated product strategy is the conviction that simplicity in technology is not just an advantage - it is a necessity for long-term adoption and usage. With this ethos at its core, we are proposing the development of a unified Data Exploration experience designed to break down the barriers posed by having multiple, segregated tools - aligned with a more cohesive strategy, e.g., all else being equal, features should look, feel, and behave consistently across experiences.
Proposal
To harness the full potential of data-driven insights, we are advocating for a transformative consolidation within the OpenSearch Dashboards offering. The proposed Data Explorer is not merely an enhancement but a reimagining of the data exploration process. This initiative will serve as a unifying interface framework integrated within OpenSearch Dashboards, streamlining the multitude of use-case-specific tools into coherent and adaptable views. This will come in two parts: 1/ creating a unified framework (Part 1: Framework) and 2/ unifying OSD use cases within that framework (Part 2).
Part 1: Framework
Data Explorer is built on a framework that aims to provide structure and consistency without being overly restrictive. For more details, refer to our original Engineering GitHub Proposal (Note: the term workspace is now replaced with the term canvas). The interface framework is made up of four key sections (see Figure 1 below):
Figure 1 - Interface Framework
Context Bar
The context bar is a sticky element that gives users access to the highest-level exploration tools, helping them set context and get context for their exploration. Users want to explore data with the right tools for that data. To support this need, the context bar houses two high-level context-setting features: a view selector (which allows users to switch between curated exploration tools for documents, visualizations, metrics, or a custom view they define) and a data selector (which allows users to select the data they would like to explore). Based on these selections, users gain access to tools relevant to what they are trying to do without being handed the kitchen sink. Moving right, users have options that allow them to configure the view to their needs (explore in split screen, or layer in additional data like alerts). Finally, on the far right, users have a time range selector and a run button that allow them to control the scope of an exploration and execute it.
Figure 2 - Context Bar
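To make the context bar's responsibilities a bit more concrete, here is a minimal sketch of the state it might manage. This is purely illustrative - none of these type or field names come from the proposal or from an existing OSD API.

```ts
// Purely illustrative sketch; all names are hypothetical, not an existing OSD contract.
interface ContextBarState {
  viewId: string;                           // active view: documents, visualizations, metrics, or a custom view
  dataSelection: {
    dataSourceId: string;                   // cluster or external connection being explored
    target: string;                         // index pattern, table, or other selectable unit
  };
  layers: string[];                         // optional augmentations layered in (e.g. alerts)
  splitScreen: boolean;                     // explore two contexts side by side
  timeRange: { from: string; to: string };  // scope of the exploration, controlled by the time range selector
}
```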
Explorer Actions
The explorer actions available to users are dictated by the view and what it enables. Once again, we are offering a structure so users know where to go to perform specific actions, while giving each view the flexibility to define its own experience rather than dictating one.
Figure 3 - Explorer Actions
Panel
The collapsible panel is contextual, depending on the data source/connection and the view. We see the panel being used for overviews (all the fields in a data source), for clarifying what is being explored (showing the scope of exploration), or as additional navigation space. Once again, we are leaving this space open for views to decide how they want to use it while offering a structure for how it could be used.
Figure 4 - Panel
Canvas
The Canvas is where the exploration happens and can house whatever a view needs to facilitate data exploration for its use case. This can include things like a query builder for log exploration, a chart builder for visualizations, or even search relevance functionality for search use cases. The canvas is also configurable by users, with the option to add exploration modules in the form of tabs. These modules allow users to tailor a view to their unique needs and give other applications integration points into existing views.
Figure 5 - Canvas
This framework provides an opinionated structure that helps view builders create unique experiences within a cohesive structure. It also allows users to explore in a familiar interface no matter which view they are exploring through. This way we maintain consistency in our product without being restrictive or hindering the flywheel.
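To ground the four sections in something tangible, below is a minimal TypeScript sketch of how a view might register itself with the framework, assuming a registration contract along the lines of the original engineering proposal. All interface and function names here are hypothetical and do not reflect the actual Data Explorer plugin API.

```ts
import type { ComponentType } from 'react';

// Hypothetical registration contract; the real Data Explorer plugin API may differ.
interface ExplorerAction {
  id: string;
  label: string;
  onClick: () => void;
}

interface CanvasTab {
  id: string;
  label: string;
  Content: ComponentType; // exploration module rendered as a tab inside the canvas
}

interface DataExplorerView {
  id: string;                 // e.g. 'documents', 'visualizations', 'metrics', or a custom view
  title: string;              // shown in the context bar's view selector
  Panel: ComponentType;       // collapsible panel content (fields, scope, navigation)
  Canvas: ComponentType;      // main exploration area for this view
  actions?: ExplorerAction[]; // view-specific explorer actions
  defaultTabs?: CanvasTab[];  // tabs the view ships with by default
}

// Registry the framework could expose so plugins can contribute views and tabs.
interface DataExplorerSetup {
  registerView(view: DataExplorerView): void;
  registerTab(viewId: string, tab: CanvasTab): void; // extension point for additional tabs
}
```

Under a contract like this, a view would inherit the shared context bar, time range, and data selection for free and supply only the pieces that differ.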
Part 2: Unify Use Cases Under Framework
While a framework aims to provide a streamlined experience for users by reducing the complexity of navigating between different tools or modules, it only works if we adopt the same framework for our current use cases. When use cases are integrated, users can move smoothly from one workflow to another without the cognitive load of reorienting themselves in a different interface or learning new commands. The second step of the data exploration updates is bringing current use cases into this framework. We've provided conceptual mockups for some of the primary OSD use cases below.
Note: The following mockups are NOT FINAL. They aim to be aspirational and help guide feature development. These concepts are subject to change as we learn and implement incrementally.
Use Case 1: Monitoring
As a user, I want to understand an interesting event I noticed in a chart on my dashboard. I am looking to understand the context and importance of the occurrence and use this information to decide on next steps if needed.
Contextual.Data.Explorer.mp4
A user is monitoring a Dashboard and sees an alert on a particular visualization. They hover on the alert to learn more.
They find this interesting and click on the alert to gather more context through a light and contextual data explorer that shows the documents for the event but also ones surrounding it.
From here they have multiple paths to continue the exploration - they can either continue the exploration in Data Explorer by carrying the context to the full tool OR they can enter other feature flows (alerting, traces, etc.) OR they can copy a query and take the context to another part of Dashboards (Dev Tools, Query Workbench, etc.).
Use Case 2: Root cause and impact analysis
As a user, I want to get to the root cause of an interesting occurrence and understand its impact. I am looking to dig deep on the occurrence, correlate across multiple data sets to assess scope, and compose my findings for sharing so decisions can be made.
Query.Stacking.mp4
Users can leverage PPL in either a builder (assistive experience for unfamiliar users) or a code experience (for power users with familiarity) to explore their data. They can also make use of other UI functionality like an LLM assistant, drag and drop or re-ordering to help present information in a way that supports their exploration needs.
As users explore, they may want to run multiple queries against the same data or even other data sources. Through this layering, users can view correlations and cause and effect, dive deeper, or see the bigger impact of an event.
Users can make use of a robust data selection experience that allows them to select data at multiple levels, while also allowing them to save data into various forms (index patterns, accelerated indexes, tables, etc.)
Use Case 3: Visualizations
As a user, I want to be able to transition from a log exploration into visualizations so I can better understand the data and create artifacts for communicating root cause and impact.
View.Switching.mp4
At any point in a user's exploration, they can translate their document exploration into other flows, for example visualizations. In this case, users select the visualization tab and are presented with recommended visualizations.
They can leverage the OpenSearch Assistant to help them form visualization recommendations. From here they can choose to preview or carry the context into the visualization exploration viewer of Data Explorer and make edits to the recommendation.
Once in the visualization explorer view, they can make use of a query or a GUI to form visualizations.
Use Case 4: Assistive exploration
As a user, I want to be able to make use of GenAI to help me explore my data and uncover insights faster.
GenAI.Insights.mp4
GenAi.Query.assistant.mp4
The OpenSearch Assistant's insights help users understand events faster, directing them to areas of interest for additional exploration.
The OpenSearch Assistant helps users form queries and improve them by appearing contextually next to the query bar.
The OpenSearch Assistant provides insights based on the data sources selected and can be used as a data exploration companion. It can also help answer questions as users explore, so they don't have to leave context to get context.
Additional Data Explorer Features
Language selector
A unified language selector allows users to select the query language they prefer for the data sources they want to explore. Users are able to set a default query language through the Dashboards advanced settings.
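As a sketch of what the default could look like programmatically, the snippet below reads and writes a default language through a uiSettings-style client. The setting key is an assumption for illustration; it is not a confirmed OpenSearch Dashboards advanced setting name.

```ts
// The setting key is an assumed, illustrative name, not a confirmed OSD advanced setting.
const DEFAULT_LANGUAGE_SETTING = 'discover:defaultQueryLanguage';

// Minimal shape of the uiSettings client this sketch relies on; the real client
// provided by core has a richer contract.
interface UiSettingsClientLike {
  get<T = string>(key: string, defaultOverride?: T): T;
  set(key: string, value: unknown): Promise<boolean>;
}

export async function setDefaultQueryLanguage(
  uiSettings: UiSettingsClientLike,
  language: 'DQL' | 'Lucene' | 'PPL' | 'SQL'
): Promise<void> {
  // Persist the preference as an advanced setting so it applies across sessions.
  await uiSettings.set(DEFAULT_LANGUAGE_SETTING, language);
}

export function getDefaultQueryLanguage(uiSettings: UiSettingsClientLike): string {
  // Fall back to DQL when the user has not chosen a default.
  return uiSettings.get(DEFAULT_LANGUAGE_SETTING, 'DQL');
}
```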
Query stacking
Users are able to stack queries to allow for iterative exploration, correlation, and insight extraction by exploring over multiple data sets in one view.
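One way to picture query stacking (a sketch only; the proposal does not define an implementation) is as an ordered set of independent layers, each carrying its own language and data selection, that the canvas executes and presents together:

```ts
// Hypothetical shape of a query stack; names are illustrative.
interface QueryLayer {
  id: string;
  language: 'PPL' | 'SQL' | 'DQL' | 'Lucene';
  query: string;          // query text for this layer
  dataSourceId: string;   // layers may target different data sources
  visible: boolean;       // users can toggle layers while comparing results
}

// The canvas would run each visible layer and render the results stacked in one
// view, so correlations and cause/effect can be inspected side by side.
type QueryStack = QueryLayer[];
```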
Extensible tabs
Users are able to further customize views by adding tabs that assist them in exploration. For example, a user can add a security analytics tab that lets them view security analytics features in the context of Data Explorer.
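Continuing the hypothetical registration sketch from Part 1, a security analytics plugin could contribute such a tab roughly as follows. Again, every name here is assumed for illustration and is not an existing OSD API.

```ts
import type { ComponentType } from 'react';

// Hypothetical contract, repeated from the Part 1 sketch; not a real OSD API.
interface CanvasTab {
  id: string;
  label: string;
  Content: ComponentType;
}
interface DataExplorerSetup {
  registerTab(viewId: string, tab: CanvasTab): void;
}

// Illustrative: a security analytics plugin contributing a tab to the documents view.
export function registerSecurityAnalyticsTab(
  dataExplorer: DataExplorerSetup,
  SecurityAnalyticsTab: ComponentType // hypothetical component rendering findings for the data currently in scope
): void {
  dataExplorer.registerTab('documents', {
    id: 'security-analytics',
    label: 'Security analytics',
    Content: SecurityAnalyticsTab,
  });
}
```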
Data layering
Users have options to layer in data to augment their exploration as needed.
Augmentation.mp4
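A rough sketch of how a layered augmentation could be described, again with purely hypothetical names, is shown below; the key idea is that another plugin supplies annotations scoped to the current time range, which the view overlays on the exploration.

```ts
// Hypothetical shape of an augmentation layer; illustrative only.
interface LayerAnnotation {
  timestamp: string;                          // where the annotation attaches on the time axis
  label: string;                              // e.g. the alert or anomaly name
  severity?: 'info' | 'warning' | 'critical';
}

interface AugmentationLayer {
  id: string;                                 // e.g. 'alerts', 'anomalies'
  sourcePlugin: string;                       // plugin supplying the layer (e.g. alerting)
  fetch(range: { from: string; to: string }): Promise<LayerAnnotation[]>;
}
```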
Request For Comments
I know this is a ton of information, but we appreciate any input. This issue is the culmination of extensive parallel thinking across multiple issues, proposals, and collaborative efforts. We encourage you to review the proposal in detail and share your thoughts, no matter how big or small. Let's work together to make this project not only a reflection of our team's efforts but also a testament to the strength and wisdom of our community.
Related Issues
#4165
#5407
#5251
#4991
#4482
#5504