Add automatic analysis from JMC #85

Draft · wants to merge 2 commits into main

Conversation

@johnaohara (Author)

Allow assertions against JMC automatic analysis rules based on whether rule has fired and severity.

@johnaohara (Author)

@gunnarmorling I have finally found some time to open a PR for JMC analysis. This PR loads rules from the published JMC artefacts (org.openjdk.jmc.flightrecorder.rules and org.openjdk.jmc.flightrecorder.rules.jdk). There is still more that can be done around custom rules, configuring severity and expanding on assertions, but this initially demonstrates the scope and type of assertions that could be made against automated analysis.
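For context, once those two artifacts are on the classpath, the bundled rules can be enumerated. A minimal sketch, assuming JMC's RuleRegistry API (how this PR actually discovers rules may differ):

    // Enumerate every automatic-analysis rule shipped in the JMC rules artefacts.
    // Assumes org.openjdk.jmc.flightrecorder.rules.{IRule, RuleRegistry} are on the classpath.
    for (IRule rule : RuleRegistry.getRules()) {
        System.out.println(rule.getId() + " : " + rule.getName());
    }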

@phejl (Contributor) commented Aug 19, 2021

@johnaohara This looks great! I just added a couple of suggestions.

@gunnarmorling (Member) left a comment

Thanks a lot, @johnaohara! A few minor comments inline. Overall, I think it'd be great to see a really compelling example of the value of using the rule mechanism over simply asserting plain JFR events.

README.md:

// Inspect the score of a rule
assertThat(analysisResults)
    .contains(FullGcRule.class)
    .scoresLessThan(80);
@gunnarmorling (Member)

What does that score mean?

Taking a step back, I'm still trying to wrap my head around the capabilities of the JMC rules. Are there other real-world examples that come to mind which would show the power of that feature?

@johnaohara (Author)

Each rule can generate a score between 0 and 100. It roughly translates to the risk rating of a problem that has been detected: 0 being least likely to cause an issue, 100 being most likely to be a problem. The score implementation is specific to the rule that is triggered.

e.g. for the ExceptionRule it is a score based on the number of exceptions per second: https://github.com/openjdk/jmc/blob/master/core/org.openjdk.jmc.flightrecorder.rules.jdk/src/main/java/org/openjdk/jmc/flightrecorder/rules/jdk/exceptions/ExceptionRule.java#L109

For the SocketWriteRule, it is a score calculated from the maximum duration of all socket writes: https://github.com/openjdk/jmc/blob/master/core/org.openjdk.jmc.flightrecorder.rules.jdk/src/main/java/org/openjdk/jmc/flightrecorder/rules/jdk/io/SocketWriteRule.java#L135

This assertion allows users who are familiar with the rules to assert that risk scores do not cross a certain threshold.
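For instance, with the assertion API proposed in this PR, a test could guard against a high exception rate like so (a sketch; the choice of ExceptionRule and the threshold of 50 are illustrative only):

    List<IResult> analysisResults = jfrEvents.automaticAnalysis();

    // Fail if the exception-rate risk score crosses the illustrative threshold
    JmcAutomaticAnalysisAssert.assertThat(analysisResults)
            .contains(ExceptionRule.class)
            .scoresLessThan(50);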

@gunnarmorling (Member)

Ok, I see. Aren't we back to relying on characteristics of the environment then, though? E.g. exceptions per second might differ depending on the general performance and load of the machine running the test. So I'm wondering how stable/portable such a test would be.

@johnaohara (Author)

Maybe they were poor examples; I think some of the rules will be dependent on the environment and others not. At present all the rules available from JMC are loaded. Maybe I could curate a list of rules that do not rely on the environment and only load those?

* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.moditect.jfrunit;
@gunnarmorling (Member)

Is this class user-facing? Or should it be moved to the internal package?

@johnaohara (Author)

I am thinking about this now. At the moment it is not user-facing, but it is tightly coupled to JfrEvents. I think it depends on the workflow you would expect users to follow to generate the analysis, i.e.

        jfrEvents.awaitEvents();

        List<IResult> analysisResults = JfrAnalysis.analyse(jfrEvents);

or

        jfrEvents.awaitEvents();

        List<IResult> analysisResults = jfrEvents.automaticAnalysis();

or even, as @phejl mentioned previously

        List<IResult> analysisResults = jfrEvents.automaticAnalysis();

where jfrEvents.awaitEvents() is called inside jfrEvents.automaticAnalysis().

In the last two examples, JfrAnalysis would be an internal class; in the first, it would be user-facing.

What do you think is best in terms of user experience: to have the analysis exposed via the JfrEvents API, or via a separate, user-facing class that consumes JfrEvents?

@gunnarmorling (Member)

In terms of discoverability, I'd prefer the latter two, as you'll find the method more easily than if it were static and declared on a class which you'd need to know about in the first place.

src/test/java/org/moditect/jfrunit/JfrUnitTest.java (resolved review thread)
@phejl (Contributor) commented Aug 19, 2021

@johnaohara If I understand the code correctly, it dumps the events to the same file, which is later rewritten at the end of the test (or sooner, by another analysis if we allow that). Wouldn't it be better to have it isolated? The file used for analysis would then be suffixed with something like -analysis-1 and would be kept for a possible later check. WDYT @gunnarmorling?

@johnaohara (Author) commented Aug 20, 2021

@johnaohara If I understand the code correctly, it dumps the events to the same file which is later rewritten at the end of the test...

@phejl I had assumed that all historic events would be preserved in between writes to disk, i.e. any new events would be appended to the existing JFR file, although I need to validate this assumption.

I understand the concern: if the output file were overwritten, events that had previously been written to disk would be lost. I can also see the concern if a user were to run multiple automatic analyses during a test and wanted to inspect events after the test has completed. If the JFR recording is overwritten with each automatic analysis, a user would not be able to reload a JFR recording exactly as it was at the point the analysis was performed.

I did investigate converting JfrEvents.events from a Queue<RecordedEvent> to an IItemCollection to pass directly into org.openjdk.jmc.flightrecorder.rules.util.RulesToolkit.evaluateParallel(), but this at first appeared to be overly convoluted; it was much easier to write the recorded events to disk and reload them for analysis.

I can dump the recording used for analysis to a separate file for clarity, but will double-check that dumping the recording does not reset it, and that the JFR events generated by a single test are not split over multiple recordings.
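For reference, the disk round-trip looks roughly like this. A sketch only, assuming the JMC 8 rules API (JfrLoaderToolkit.loadEvents(), RuleRegistry.getRules(), IRule.createEvaluation()); the PR uses RulesToolkit.evaluateParallel() instead, and exact signatures vary between JMC versions:

    // Reload the dumped recording from disk as an IItemCollection...
    IItemCollection events = JfrLoaderToolkit.loadEvents(new File("recording.jfr"));

    // ...and evaluate each registered rule against it, synchronously.
    // Passing null for the dependency results is a simplification; a real
    // implementation would supply an IResultValueProvider.
    List<IResult> results = new ArrayList<>();
    for (IRule rule : RuleRegistry.getRules()) {
        RunnableFuture<IResult> evaluation =
                rule.createEvaluation(events, IPreferenceValueProvider.DEFAULT_VALUES, null);
        evaluation.run();
        results.add(evaluation.get());
    }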

@johnaohara (Author)

@gunnarmorling an example of using the analysis over individual events could be a generic catch-all: "I want to assert that there are no WARNINGs from the automated analysis", e.g.

    @Test
    @EnableConfiguration("profile")
    public void automatedExceptionAnalysis() throws Exception {

        // simulate some user code that silently discards a lot of exceptions
        for (int i = 0; i < 20_000; i++) {
            try {
                throw new MethodNotFoundException();
            } catch (MethodNotFoundException e) {
                // silently swallow the exception
            }
        }

        jfrEvents.awaitEvents();

        List<IResult> analysisResults = jfrEvents.automaticAnalysis();

        assertThat(analysisResults.size()).isGreaterThan(0);

        JmcAutomaticAnalysisAssert.assertThat(analysisResults)
                .haveSeverityLessThan(Severity.WARNING);
    }

In this example the test would fail, as org.openjdk.jmc.flightrecorder.rules.jdk.exceptions.ExceptionRule would generate a WARNING analysis result.

(N.B. the haveSeverityLessThan() method is not currently in the JfrAnalysisAssert API, but I have a working prototype in a new JmcAutomaticAnalysis API that I will push shortly.)

@johnaohara (Author) commented Aug 20, 2021

@johnaohara If I understand the code correctly it dumps the events to the same file which is later rewritten at the end of the test...

@phejl I checked the behaviour of JFR when events are dumped to a file multiple times: JFR writes all events out to disk with every dump, duplicating events. I will separate the analysis dump files so events are not duplicated in the final JFR files.
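A minimal sketch of the separated dump files, following the -analysis-1 naming suggested above (the helper and counter are illustrative, not this PR's actual implementation):

    // Give each analysis its own snapshot file, so earlier snapshots are never
    // overwritten and can still be inspected after the test completes.
    private final AtomicInteger analysisCounter = new AtomicInteger();

    private Path dumpForAnalysis(Recording recording, Path dir, String testName) throws IOException {
        Path target = dir.resolve(testName + "-analysis-" + analysisCounter.incrementAndGet() + ".jfr");
        recording.dump(target); // jdk.jfr.Recording.dump() writes all events recorded so far
        return target;
    }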

@gunnarmorling (Member)

an example of using the analysis over individual events could be a generic catch-all: "I want to assert that there are no WARNINGs from the automated analysis"

Ok, that could be useful, yes. On the exception case in particular, though, couldn't the same be achieved like so?

assertThat(jfrEvents).contains(
    event("jdk.ExceptionStatistics").with("throwables", 0));

Note I don't mean to push back; I think integration with the rules API can be useful. I'd just love to see some really compelling example or use case.

@gunnarmorling (Member)

There's a failure in the new test on CI btw:

JmcAutomaticAnalysisTest.automatedExceptionAnalysis:88 Expected to contain severity greater than: Warning

@johnaohara (Author)

There's a failure in the new test on CI btw:

JmcAutomaticAnalysisTest.automatedExceptionAnalysis:88 Expected to contain severity greater than: Warning

Yes, that is ironic; I added that test as an example, and it "passed on my machine"™

@gunnarmorling (Member)

Yes, that is ironic; I added that test as an example, and it "passed on my machine"™

LOL 🤣

@gunnarmorling (Member)

Hey @johnaohara, any further insight on the test failure here?

…rules that have fired and tests based on severity
- Renamed JfrAnalysis -> JmcAutomaticAnalysis
- Moved JmcAutomaticAnalysis to internal package
- Separated tests for Automated Analysis
- Add to JmcAutomaticAnalysisAssert API to assert based on IReport severity
- Generate new JFR dump for each analysis
- Removed JfrAnalysisResults, replaced with List<IResult>