
The diagram below is gradually revealed in this video with narrative text.

(Diagram: oss-perf-v20-wm)

NOTE: Different colors in the diagram represent "ownership" (who does what) within a particular organization. Other organizations will likely use different colors to indicate status.

This was created in response to the need for a more nimble, comprehensive, yet lower-cost approach to measuring and thus improving the speed, capacity, and reliability of high-traffic web and native mobile apps, a practice many call "performance engineering".

BTW, if you see a typo that needs fixing or an idea that should be considered, please fork this repo, edit the file, and send us a pull request. Better yet, join us to revolutionize the industry. For organizations not in the business of selling performance engineering software and services to the public, it is natural to develop this framework as open source on public GitHub repos, so we can share the benefits as well as the development costs, and also ensure continuity of skills and effort.

## Background: Why

Here is what even commercial vendors have not yet delivered:

A. Eliminate errors in hand-written source code by automatically generating programming code from specs. Although various attempts at generating UI code have not taken hold due to complexity, generating API code is a less complex undertaking.

B. Test immediately in the dev lifecycle through automatic generation of test automation scripts and API mock scripts. Making changes easy, fast, and safe enables fix-fast, which keeps systems more "correct" than a monolithic design.

C. Automatic alerts of slow execution speeds during automated functional testing, discovered by machine-learning routines rather than by tedious manual examination of logs.

D. Automatically cycle through variations of several configurations during a single manually initiated run. More important than being hands-free, this enables performance analysis to go beyond mere testing toward engineering.

The objective here is to reduce the amount of manual effort (and human error) in conducting tests, through automation.
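As a rough sketch of what item D could look like in practice: a small driver script can cycle one JMeter test plan through several load configurations in a single unattended run. The test plan name and the `threads`/`rampup` property names below are made-up examples; the `-n`, `-t`, `-l`, and `-J` flags are standard JMeter command-line options.

```python
#!/usr/bin/env python
"""Sketch: cycle a JMeter test plan through several configuration
variations in one unattended run. File names and property names
here are hypothetical examples."""
import subprocess

# Each variation overrides JMeter properties passed to the test plan via -J.
VARIATIONS = [
    {"threads": "50",  "rampup": "60"},
    {"threads": "100", "rampup": "60"},
    {"threads": "200", "rampup": "120"},
]

for i, v in enumerate(VARIATIONS, start=1):
    cmd = [
        "jmeter", "-n",                     # non-GUI mode
        "-t", "loadtest.jmx",               # hypothetical test plan
        "-l", "results-run%d.jtl" % i,      # separate results file per variation
    ]
    # Pass each setting as a JMeter property (-Jname=value)
    cmd += ["-J%s=%s" % (k, val) for k, val in v.items()]
    print("Run %d: %s" % (i, " ".join(cmd)))
    subprocess.check_call(cmd)
```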

E. Centralize data from various views of system behavior under stress (or not) so it can be analyzed together, providing the basis for identifying trends and other insights using both manual and "machine learning" techniques. Machine learning can identify more minute issues more comprehensively.

Sending JMeter results to the ELK stack means that listeners are not needed within JMeter, so Kibana would replace what BlazeMeter displays.
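A minimal sketch of that idea follows, assuming JMeter writes its default CSV .jtl columns (with the header row enabled) and that Elasticsearch is reachable at localhost:9200; the `jmeter-results` index name is made up here, and a production setup would more likely use Logstash or the bulk API.

```python
#!/usr/bin/env python
"""Sketch: push JMeter results (CSV .jtl file) into Elasticsearch so
Kibana can chart them, instead of using JMeter listeners.
The index name and field mapping are assumptions, not a standard."""
import csv
import json
import requests

ES_URL = "http://localhost:9200/jmeter-results/_doc"  # adjust host/index/type to your ELK version

with open("results.jtl") as f:            # CSV output from: jmeter -l results.jtl
    for row in csv.DictReader(f):         # assumes the CSV header row is present
        doc = {
            "timestamp": int(row["timeStamp"]),   # epoch millis written by JMeter
            "label": row["label"],                # sampler name
            "elapsed_ms": int(row["elapsed"]),    # response time
            "success": row["success"] == "true",
            "response_code": row["responseCode"],
        }
        requests.post(ES_URL, data=json.dumps(doc),
                      headers={"Content-Type": "application/json"})
```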

## Narrative of diagram
| Capabilities | Components |
|---|---|
| We have a typical web server responding to both native mobile and desktop browser traffic over the public internet. | app server |
| To provision servers and deploy apps we use open-source software. | Docker & Puppet |
| Having a quick way to bring up servers with different configurations | configs |
| enables us to tune settings (such as max threads) for the most throughput at the least cost. | run variations |
| Virtual user scripts that | JMeter code |
| run on servers which take the place of humans on real browsers and mobile devices. | master & slaves |
| These scripts reference sample (or sham) data | Data |
| generated for testing. | sham data-gen |
| We want runs to kick off automatically | Jenkins CI |
| when code is committed to a specific branch in a git repo. | GitHub |
| Our system depends on several vendor APIs being available all the time, | Vendor APIs |
| so we mock (or virtualize) those services to ensure constant access during testing. | WireMock |
| One of the benefits of a microservice architecture is that it simplifies API calls enough to be defined in a database | Swagger |
| from which client code can be generated automatically. | codegen |
| Generation of JMeter code enables us to create micro-benchmarking code DURING development | JMeter-gen |
| rather than manually coding load emulation scripts in some editor. | editor |
| Scanning code (using SonarQube) according to coding rules defined by the team ensures quality code rather than trying to test quality into code. | SonarQube |
During runs:
| Capabilities | Components |
|---|---|
| By centralizing and normalizing several sources of metrics, we can better correlate where bottlenecks occur across the landscape. | Logstash |
| Data collected include logs of how many virtual users are necessary to impose certain load levels, | run logs |
| log entries issued from within app code and the OS, | server logs |
| plus measurements such as garbage collection | monitor stream |
| obtained by monitoring agents, | agents |
| and patterns of network packets where applicable. | Network mon |
| To collect a large number of logs, intermediate servers (such as RabbitMQ) may be added. | Logstash scale |
As for analysis of run results:
| Capabilities | Components |
|---|---|
| The central repository is indexed into various dimensions | Elasticsearch |
| for visualizations over time and "sliced and diced" for insight. | Kibana |
| The visualizations include static objectives and targets to compare against live data. | ref. data |
To measure time taken by browsers to execute client application JavaScript:
| Capabilities | Components |
|---|---|
| Browser instances are controlled by | Selenium WebDriver |
| code that manipulates the browser UI like real people do, | Selenium code |
| just as native mobile app test automation code | Appium code |
| is controlled by | Appium Driver |
| so that timings are captured | BrowserMob Proxy |
| into files of specific resources by each user monitored. | HAR files |
| "Machine learning" programs scan the Elasticsearch server to | Python |
| identify the levels where | thresholds |
| alerts should be sent out to prioritize human review and action. | alerts |
## Components list

The remainder of this page describes the components mentioned in the [diagram and narrative above](#TheVision).

Jenkins CI builds and initiates the various programs listed below on a schedule or when a build is requested.

Selenium and JMeter load generators are "slave nodes" which Jenkins calls to do work.

Alternatives to Jenkins include the https://travis-ci.org/ cloud service, which many GitHub repos use. Travis is free for open source but charges a fee ($129/month and up) for private use on their cloud.

Jenkins is free to install on premises.
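Builds can also be kicked off programmatically (for example, from a git hook or another script) through Jenkins' remote build API. The sketch below assumes a Jenkins server at a placeholder URL, a hypothetical parameterized job named `perf-tests`, and a user/API-token pair for authentication.

```python
#!/usr/bin/env python
"""Sketch: trigger a Jenkins job remotely via its build API.
URL, job name, credentials, and parameters are placeholders."""
import requests

JENKINS = "http://jenkins.example.com:8080"   # placeholder server
JOB = "perf-tests"                            # hypothetical job name
AUTH = ("ci-user", "api-token")               # Jenkins user and API token

# buildWithParameters lets the load profile be chosen per run
r = requests.post(
    "%s/job/%s/buildWithParameters" % (JENKINS, JOB),
    params={"THREADS": "100", "BRANCH": "master"},   # hypothetical job parameters
    auth=AUTH,
)
r.raise_for_status()   # Jenkins queues the build on success
```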

JMeter Servers

JMeter scripts ramp up load on servers using fewer test-server resources than Selenium scripts because they do not maintain a copy of the DOM for each virtual user.

Cloud environment. Because it usually takes several servers to emulate enough load on an application server under test, JMeter is often run within a cloud environment such as Amazon EC2.

https://github.com/oliverlloyd/jmeter-ec2 automates running Apache JMeter on Amazon EC2.

https://github.com/flood-io/ruby-jmeter is a Ruby-based DSL for building JMeter test plans.

configs is the set of configuration settings controlling VM memory and other aspects of the server.

**Monitoring** via agents (or JMX) includes innovations such as **network monitoring**.

Selenium WebDriver controls desktop browsers as if humans were tapping on the keyboard and moving the mouse around a browser.
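As a sketch of how Selenium's Python bindings can capture browser-side timings, the snippet below reads values from the W3C Navigation Timing API after loading a page. The URL is a placeholder, and a matching driver binary (geckodriver, chromedriver, etc.) is assumed to be installed.

```python
#!/usr/bin/env python
"""Sketch: measure page-load timing with Selenium WebDriver (Python bindings).
The target URL is a placeholder."""
from selenium import webdriver

driver = webdriver.Firefox()                 # or webdriver.Chrome(), etc.
try:
    driver.get("http://www.example.com/")    # placeholder URL
    # Read individual Navigation Timing values (epoch milliseconds)
    nav_start = driver.execute_script("return window.performance.timing.navigationStart")
    resp_start = driver.execute_script("return window.performance.timing.responseStart")
    load_end = driver.execute_script("return window.performance.timing.loadEventEnd")
    print("backend: %d ms, full page load: %d ms"
          % (resp_start - nav_start, load_end - nav_start))
finally:
    driver.quit()
```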

HAR files capture the timings of each resource requested, for each user being monitored.
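Since HAR is plain JSON, summarizing one (for example, a file exported by BrowserMob Proxy) takes only a few lines; the file name below is a placeholder.

```python
#!/usr/bin/env python
"""Sketch: summarize a HAR file to find the slowest resources.
The file name is a placeholder."""
import json

with open("user1.har") as f:              # HAR files are JSON documents
    har = json.load(f)

entries = har["log"]["entries"]           # one entry per resource requested
slowest = sorted(entries, key=lambda e: e["time"], reverse=True)[:10]
for e in slowest:
    print("%6.0f ms  %s" % (e["time"], e["request"]["url"]))
```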

"Machine learning" programs written in Python scan the Elasticsearch server to identify thresholds at which alerts should be sent out, to prioritize human review and action.
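A very simple (non-ML) starting point is shown below: query Elasticsearch for the average response time per sampler label and flag any label over a target. The index name, field names, and threshold match the made-up mapping in the indexing sketch earlier and are assumptions, not a standard.

```python
#!/usr/bin/env python
"""Sketch: scan indexed JMeter results in Elasticsearch and flag
transactions whose average response time exceeds a threshold."""
import json
import requests

ES_SEARCH = "http://localhost:9200/jmeter-results/_search"
THRESHOLD_MS = 2000                       # hypothetical service-level target

query = {
    "size": 0,
    "aggs": {                             # average response time per sampler label
        "by_label": {
            "terms": {"field": "label"},  # assumes label is a keyword/not_analyzed field
            "aggs": {"avg_ms": {"avg": {"field": "elapsed_ms"}}},
        }
    },
}
resp = requests.post(ES_SEARCH, data=json.dumps(query),
                     headers={"Content-Type": "application/json"}).json()

for bucket in resp["aggregations"]["by_label"]["buckets"]:
    avg = bucket["avg_ms"]["value"]
    if avg and avg > THRESHOLD_MS:
        print("ALERT: %s averaging %.0f ms (> %d ms)"
              % (bucket["key"], avg, THRESHOLD_MS))
```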

Appium Controller controls native mobile smartphones as if humans were swiping and tapping the screen.

Appium Code is the test automation code that manipulates native mobile app UIs the way real people do.
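A small sketch using the Appium Python client is shown below; the device name, app package/activity, and the "Login" element are placeholders, and an Appium server is assumed to be listening at its default address.

```python
#!/usr/bin/env python
"""Sketch: drive a native Android app with the Appium Python client.
Capability values and element identifiers are placeholders."""
from appium import webdriver   # pip install Appium-Python-Client

caps = {
    "platformName": "Android",
    "deviceName": "Android Emulator",     # placeholder device
    "appPackage": "com.example.app",      # hypothetical app under test
    "appActivity": ".MainActivity",
}
driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)  # default Appium server URL
try:
    driver.find_element_by_accessibility_id("Login").click()     # hypothetical element
finally:
    driver.quit()
```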

Logstash collects data from JMeter, HAR files, web server logs, web app logs, etc. into a common location with a common date format.

Elasticsearch combines and indexes logs from several sources.

Kibana displays dashboards from filtered data indexed on several dimensions. See Analysis section below.

Swagger codegen at https://github.com/swagger-api/swagger-codegen generates client code from a repository of API definitions (Swagger).
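As an illustration, the command-line jar can be invoked from a script; the jar path and output directory below are placeholders, and the petstore spec is the public example that ships with Swagger.

```python
#!/usr/bin/env python
"""Sketch: generate an API client from a Swagger/OpenAPI spec using
the swagger-codegen command-line jar. Paths are placeholders."""
import subprocess

subprocess.check_call([
    "java", "-jar", "swagger-codegen-cli.jar",           # jar from the swagger-codegen releases
    "generate",
    "-i", "http://petstore.swagger.io/v2/swagger.json",  # example spec published by Swagger
    "-l", "python",                                      # target language for the generated client
    "-o", "./generated-client",                          # output directory
])
```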



## Authors

Contact information for authors of this repo:

Wilson Mar, @wilsonmar, wilsonmar at gmail, 310.320-7878 https://www.linkedin.com/in/wilsonmar Skype: wilsonmar4

Anil Mainali, @mainalidfw, mainalidfw at gmail https://www.linkedin.com/in/anilmainali
