Business Justification
80% of what end users perceive as "load time" comes from the processing, parsing, and rendering of hundreds of individual objects per page in their browser on their device. To offer a competitive observability solution, Elastic needs to be able to monitor end-user functionality, front-end UX, and performance metrics for single pages and complex multi-step user journeys.
The first step in delivering real-browser-based Synthetics is to align with our existing Uptime use case and focus on giving users the ability to script a multi-step journey through a website, capturing the up/down status and load time of each page (step), bringing that data into Elasticsearch in ECS-compliant fields, and displaying it within the existing Uptime UI.
Initially we will ship a limited subset of the granular information available from a real-browser-based check, supporting up/down checks, filmstrips, and basic performance metrics via a self-managed real-browser testing engine. In future milestones we plan to iterate, capturing and storing additional timing points, visuals, and device resource-impact metrics, as well as hosting agents in Cloud.
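As a rough illustration of the kind of per-step document this could produce (a sketch only: the `synthetics.*` namespace and its fields are assumptions, not a finalized schema; `monitor.*` and `url.*` are established Heartbeat/ECS fields):

```yaml
# Hypothetical sketch of one per-step event; field names under synthetics.*
# are assumptions for illustration, not a committed mapping.
monitor:
  status: up            # up/down result for this step
  duration:
    us: 1830000         # step load time in microseconds
url:
  full: https://example.com/checkout
synthetics:             # assumed namespace for journey/step metadata
  journey:
    name: buy-widget
  step:
    index: 2
    name: load the checkout page
```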
Personas / User Stories
As a Traditional Ops Engineer/SRE
I want to know when our multi-step journeys are not functioning as expected in production
So that I am aware of issues that may not be captured by a simple up/down check that doesn't load and interact with the contents of the page
And be shown a visual of the page in question to help reduce our MTTR
As an Elastic Product Manager
I want to have the ability to synthetically measure multi-step up/down scripted tests
So that I can start feeding this to interested customers (and the Cloud team) and gather meaningful feedback
This milestone will be a big improvement for Elastic Observability and a solid step towards our Client Side Monitoring solution. Given the early state of development, it is entirely possible that the underlying engine and scripting interface will be modified or swapped out entirely; however, our goal is to store metrics in ECS-compliant fields so the data will persist irrespective of the underlying real-browser harness.
ACs:
This release will be marked as experimental.
Users can specify journeys in the heartbeat.yml file, using a JS DSL based on Playwright (see the sketch after this list). See elastic/synthetic-monitoring for more info.
Design can be identical to the original PoC @andrewvc created, unless something better is introduced
Screenshot shipped at the end of each step
(stretch) filmstrip of full journey shipped
(stretch) request waterfall shipped with journeys
Node will be invoked by Heartbeat using `npx <node-packagename>`, with the script passed to stdin
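A minimal sketch of what such a journey might look like in heartbeat.yml, assuming an inline-script style of configuration (the exact keys, such as `type: browser` and `source.inline.script`, and the injected `page` object are assumptions based on the Playwright-style DSL described above, not a finalized interface):

```yaml
heartbeat.monitors:
  - type: browser                # assumed monitor type for real-browser checks
    id: example-checkout-journey
    schedule: "@every 10m"
    source:
      inline:
        # Each step() would yield one up/down status plus a load time;
        # page is assumed to be a Playwright page handle injected by the runner.
        script: |-
          step("load the homepage", async () => {
            await page.goto("https://example.com");
          });
          step("open the checkout page", async () => {
            await page.click("a#checkout");
          });
```

Each step would map to the per-step data point described above, with the last failing step determining the journey's overall up/down status.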
In a meeting with @urso and @andrewkroh we discussed a variety of options.
For the purposes of this MVP we'll try to release as a Docker container with the Node library bundled, using the forked-process model we currently use. We'll disable seccomp within the container and rely on the container itself as the security boundary. In the future we may need a different approach to get the mix of UX and security we want.
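For illustration, the container invocation could look roughly like the following (a sketch under the assumptions above: the image name is a placeholder, `<node-packagename>` is kept unresolved as in the AC, and the script is piped over stdin per the npx approach):

```sh
# Sketch only: <heartbeat-synthetics-image> and <node-packagename> are placeholders.
# --security-opt seccomp=unconfined disables seccomp inside the container,
# relying on the container boundary itself for isolation, as described above.
cat journey.js | docker run --rm -i \
  --security-opt seccomp=unconfined \
  <heartbeat-synthetics-image> \
  npx <node-packagename>
```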
We'll have to start a separate discussion about security for full test suites and similar workloads, which may change the threat model, and about how this works in the context of apt packages and other distribution channels.
Tasks/TODO