
Should we consolidate our test suites? #305

Closed · GUI opened this issue Nov 12, 2016 · 2 comments

GUI (Member) commented Nov 12, 2016

For background, we currently have 2 somewhat distinct test suites in API Umbrella's code base:

  • Proxy/Gatekeeper/integration tests: Written in NodeJS, using Mocha. This is probably where a slight majority of our tests are. These run against the full API Umbrella stack and test the features in our custom proxy layer (e.g., API key validation, analytics logging, etc.), check for proper proxying behavior (URL length limits, cache control handling, etc.), make sure all our components work together, cover failure scenarios (what happens when the database goes down), and exercise other edge cases.
  • web-app and admin-ui tests: Written in Ruby, using RSpec. Within these tests, there are really 2 sub-suites:
    • Rails controller tests for all our admin CRUD APIs.
    • Browser-based tests using Capybara for our Ember.js admin interface (where we test filling out forms and clicking things in a real browser).

So why consider consolidating or changing any of this testing setup? The more immediate driver is the upgrade of the admin-ui component to Ember 2 (#257). This impacts our testing setup in several ways:

  • We need to change our Capybara tests. The Ember code is being split out of the web-app Rails component and becoming a standalone Ember CLI app. This means the current Capybara tests embedded within the web-app component need to be revisited in some fashion.
  • NodeJS compatibility issues. The current Mocha tests only run against NodeJS 0.10. The admin-ui Ember CLI app needs a newer version of NodeJS to build during development. This means either overhauling parts of the current Mocha test suite, or maintaining 2 versions of NodeJS on your development box (one version for running tests and one version for building the Ember app--certainly doable, just an extra hassle).
    • The current Mocha tests cannot run against newer versions of NodeJS due to compatibility issues with some of the packages we use. Most notably, the curl library we use, curler, is not compatible, and appears to be unmaintained. The only compatible curl library I found for newer Node versions, node-libcurl, is a lower-level library, so it would require some rewriting of our tests. node-libcurl also cannot be built against CentOS 6, due to GCC version incompatibilities, and CentOS 6 happens to be our current default development environment (certainly solvable by using a newer OS VM for development, but again, somewhat of a hassle).

So the Ember upgrade is what kicked off this process of revisiting our overall test setup, but there are also some longer-standing issues with our test environment that have been lurking in the back of my mind for a while and that might be nice to clean up:

  • Node's HTTP library isn't ideal for our type of low-level HTTP testing. Most of our Mocha test suite that tests various proxying behavior uses Node's default HTTP library for making requests. However, we've encountered bugs with Node's HTTP library (http: OPTIONS shouldn't default to chunked encoding nodejs/node-v0.x-archive#7725, Faulty browser url encoding emulation for slash flipping nodejs/node-v0.x-archive#8458), or we've run into issues where Node's HTTP library is more opinionated and can't make the types of requests we need to make (e.g., it won't allow requests to be made with invalid UTF-8 character sequences, but we need to test that, since other clients are capable of passing those in). In those cases, we have to fall back to using some type of lower-level curl library to make the HTTP requests, but as I discussed above, there aren't a lot of libcurl options for Node, and they aren't as well supported. (For a concrete illustration of the invalid UTF-8 case, see the sketch after this list.)
  • Overlapping tests between the suites. There's not a ton of this, but there are some overlapping tests between the Ruby and NodeJS test suites. Some of this stems from historical reasons, but some of it is needed to check behavior of the web-app pieces when running under the full API Umbrella stack (rather than in the fake Rails test environment). Similarly, we also jump through some hoops in the web-app test suite to ensure we're testing against the same versions of dependencies that API Umbrella's full stack bundles (e.g., Elasticsearch and MongoDB).
  • Flaky tests in the current suite. See Intermittent test failures #303 for more details. This is certainly solvable in our current suite, but it is something we need to dive into regardless.
  • Debugging awkwardness in Node tests. A number of our Node tests involve making parallel HTTP requests. We use the async library to help with this, but unless you're careful, any test failures can lead to strange backtraces and unintuitive errors (basically, if you perform assertions inside the parallel callbacks and one fails, other tests may still be running and fail in other strange ways before everything realizes there's been a failure). This is solvable if you're careful, but it makes debugging the flaky tests mentioned above a bit of a hassle.
  • No randomized ordering of Mocha tests. Generally, I like randomizing test order, to help uncover other potential bugs and make the test suite more robust. But Mocha has lacked this feature for a long time (🚀 Feature: --order option for random test order mochajs/mocha#902). It looks like there may have been some recent progress on this, but I've been waiting on it for a long time now.
  • No ability to run tests in parallel. This affects both the Mocha and RSpec suites. Our full integration test suite can take a long time to run, particularly since we have various tests that need to wait for certain things to occur (for example, waiting to see how HTTP server timeouts affect things, or waiting for the analytics to be logged after making a request). But in a lot of these cases, these tests could be parallelized, which I think would significantly speed things up. There is a mocha.parallel project that adds this capability to Mocha, but as the readme indicates, some of the hook behavior can become unintuitive, and I fear that might be an issue with how much we currently use hooks and nested describe blocks.
  • Asynchronous awkwardness in Node tests. This is more of a personal preference thing, but generally speaking, I find writing a lot of our tests in Node a bit cumbersome due to the asynchronous nature of everything. As noted above, Mocha doesn't actually run any of our tests in parallel, so we aren't really gaining any benefits from the asynchronous nature of the code. I don't universally dislike callbacks/promises/async stuff (and we use various libraries to help), but it still seems to make all our tests more verbose than necessary and more difficult to debug. In reality, we don't want any asynchronous behavior in probably 90% of our tests. JavaScript's newer async/await functionality should help greatly with this type of issue, but it's not yet available in a stable version of NodeJS.
  • Language split. Since we have half our tests in NodeJS and half in Ruby, there's obviously a split in languages. While these language choices used to correlate with the individual components, they might not make as much sense now. The NodeJS code is responsible for testing most of the Lua proxy functionality, while the Ruby code is responsible for testing the Ruby APIs and EmberJS admin app.
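
As a concrete illustration of the invalid UTF-8 point above, here's a minimal sketch (not from our actual suite; the endpoint, port, and API key are hypothetical) of how a libcurl-backed Ruby client like Typhoeus can send raw bytes that Node's default HTTP client refuses to emit:

```ruby
require "typhoeus"

# Build a request path containing an invalid UTF-8 byte sequence (0xC3 0x28).
# libcurl passes these raw bytes through to the server (exact behavior can
# vary by libcurl version), which is what we need in order to test how the
# proxy handles malformed input from arbitrary clients.
url = "http://127.0.0.1:9080/api/hello?q=".b + [0xC3, 0x28].pack("C*")
response = Typhoeus.get(url, headers: { "X-Api-Key" => "TEST_KEY" })
puts response.code
```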

So I've been wondering whether revisiting our overall test suite might be beneficial. We have to do something to address part of our tests due to the Ember upgrade anyway, but a larger revamp and cleanup of our test suites might have other benefits. Having a single unified test suite might be easier to both run and write new tests for. It might also give us an opportunity to revisit and clean some historical things up that don't make as much sense any more.

If we're going to unify our tests with a single suite, then what are our options? Here are some general ideas I've looked at:

NodeJS

Let's assume we'll continue to use Mocha. There are obviously lots of other testing options for NodeJS, but most of our tests are already in Mocha.

  • Pros
    • Already used by our tests. A little more than half our tests are currently written in NodeJS/Mocha.
    • Browser-based testing. There are options like Nightwatch.js for doing Selenium tests, similar to our current Capybara tests.
    • JavaScript will likely always be in our stack. As long as we have a client-side app, like the admin, JavaScript will likely always be used in some fashion within API Umbrella.
  • Cons
    • Node's HTTP library not ideal for testing. Discussed in more detail above, but curl-based testing is probably a better fit.
    • Browser-based testing. I'm definitely not as familiar with Nightwatch.js, but it seems primarily oriented around running tests against Selenium. Capybara is nice since it allows running against different drivers, like PhantomJS or Selenium. I think Nightwatch.js has the ability to run against PhantomJS, but it requires the use of GhostDriver, which looks like it might be unmaintained now, so this might be more of an unknown.
    • No random test ordering. Discussed in more detail above.
    • No parallel tests. Discussed in more detail above.
    • Async debugging and coding awkwardness. Discussed in more detail above.

Ruby

Currently, our Ruby-based tests are written in RSpec. But I'll also talk about some functionality that would be available if we were to use minitest (Ruby's built-in testing library) instead.

  • Pros
    • Already used by our tests. A little less than half our tests are currently written in Ruby/RSpec.
    • Good libcurl support. There are a ton of libcurl-based HTTP libraries in Ruby.
    • Parallel tests (minitest only). minitest supports test parallelization. As mentioned above, this could help speed up our test suite (see the sketch after this section).
    • Random test ordering.
    • Browser-based testing. We're already using Capybara for browser tests, and Capybara's ability to run against something headless like PhantomJS (which we're currently using), or against Selenium (if we ever wanted to do more cross-browser testing), is nice.
    • No callback/async awkwardness. In most of our tests where we don't need any asynchronous behavior, Ruby's code seems easier to read and maintain.
  • Cons
    • Ruby may not always be in our language stack. See Should we migrate the web-app component to Lua? #304. If we migrate our APIs to Lua, then it might be strange to have code consist of Lua (server-side) and JavaScript (client-side), but then have Ruby still running all our tests.
      • That being said, I don't necessarily hate this (but I'm obviously familiar with Ruby), and I think there is precedent for this kind of testing setup. I probably see it more often with C/C++ projects, but there are cases where the code is written in one language, and integration tests are maintained in a higher-level, "easier" language. For example, TrafficServer is a C++ project that's been using Python for their tests: https://github.com/apache/trafficserver/tree/6.2.0/ci/tsqa
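
To make the minitest parallelization mentioned above concrete, here's a minimal sketch (assuming only the minitest gem; the test names and sleeps are hypothetical stand-ins for our slower integration tests):

```ruby
require "minitest/autorun"

class SlowApiTest < Minitest::Test
  parallelize_me! # run this class's test methods concurrently on a thread pool

  def test_analytics_logging_delay
    sleep 1 # stand-in for waiting on analytics to be logged
    assert true
  end

  def test_server_timeout_behavior
    sleep 1 # stand-in for waiting on an HTTP server timeout
    assert true
  end
end
```

minitest also randomizes test order by default and prints the seed on each run, so ordering-related failures can be reproduced with the same seed.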

Lua

Since we're using Lua for the proxy component, what about Lua for our tests? I primarily looked at Busted, which seems like one of the most popular testing frameworks for Lua.

  • Pros
    • No callback/async awkwardness. Similar to Ruby, I find the more serial nature of coding for the majority of our tests easier to maintain in this type of language.
    • Random test ordering.
  • Cons
    • curl/general library support. There aren't a lot of libcurl options for Lua that seem well supported. While there are some other libraries out there, like lua-resty-http, that would utilize OpenResty, I still worry about running into various edge cases, like we've seen with NodeJS's HTTP library. libcurl is definitely a known quantity, which is nice to have. And while I think Lua has worked great for our proxy layer inside nginx, I worry we might need various other testing libraries that would be useful to have but might not currently exist in Lua's smaller ecosystem.
    • No browser-based testing. I'm not aware of any browser-based testing tools in Lua.
    • No parallel tests.

Test::Nginx

Test::Nginx is sort of OpenResty's informal standard for testing, as well as for nginx modules written in C.

  • Pros
    • Low level nginx tests: Test::Nginx has some interesting capabilities for testing lower-level nginx functionality that could be useful. For example, it has functionality to test for memory leaks, run against Valgrind, and check nginx's error log for unexpected output.
    • Random test ordering.
  • Cons
    • Testing can be awkward: I use Test::Nginx in a couple of other projects I maintain, and while its ability to integrate tightly with nginx is useful, I generally find how you write tests in it somewhat awkward (particularly for the type of tests we currently perform in API Umbrella). In Test::Nginx, your test essentially consists of hitting a single URL endpoint and then checking the response's output. So any complex testing logic must be wrapped up inside that URL endpoint in a way that's easily testable by simple string matching on the output.
    • Adds Perl into the mix: Test::Nginx is written in Perl, so it would add that to the language mix. Although, you don't actually see a lot of Perl, due to the way the tests have to be wrapped up in a request/response cycle inside nginx's config.
    • No browser-based testing.
    • No parallel tests.
GUI (Member, Author) commented Nov 12, 2016

Well, I had started writing up this issue many months ago, so I had intended to post this quite a while ago. In the intervening months, I've actually gone pretty far down this path of consolidating the tests, and I picked Ruby and minitest to do that. However, that work is still not complete (it's probably around 75% complete currently), and regardless, I'm still definitely interested in any feedback. So if you don't think Ruby and minitest are the right choice, or you don't think consolidating the tests makes sense at all, or you have any other feedback, I'm all ears.

The current work is in the ruby-integration-tests branch.

Personally, though, I have found writing the tests in Ruby and minitest to be a better fit (but this might just be my personal preference). The effort of translating tests is pretty monotonous, but it's given me an opportunity to clean things up as I go. Obviously, some of the benefits I'm seeing are due to simply redoing things with the benefit of hindsight and being able to rethink them. But in any case, here are a few examples of things I've liked in the new unified test suite:

  • I picked minitest over RSpec for the parallelization features (which are proving nice), but this was actually my first time using minitest. Generally, I'm appreciating the fact that it's just Ruby classes instead of a DSL. Tenderlove has a good writeup on this. I've found this particularly handy when dealing with the parallelization features, since things like class variables, instance variables, and mutexes behave as expected in Ruby.
  • In both Mocha and RSpec, I probably overuse before_all/before_each callbacks, describe/context blocks nested several levels deep, and "shared examples." Getting rid of context nesting and using classes to separate things has helped focus each test file (there are more files, but each contains less code). And by using standard Ruby methods to reuse functionality (instead of "shared examples"), I've found the code easier to read and trace through. Compare this RSpec example (buried multiple levels deep, using multiple layers of callbacks and also a shared example) to this new test (where the test setup is all more immediate, and it's using standard method calls for reusable tests). A minimal sketch of this pattern follows this list.
  • Not having to deal with all the NodeJS callbacks and error handling is nice. Compare this Mocha example to this new example. The test could probably still use some comments/explanation in both cases, but purely from a code standpoint, the Ruby code is much more succinct, and I personally find it easier to follow.
  • If we do decide to start migrating any of the web-app APIs to Lua (see Should we migrate the web-app component to Lua? #304), the API tests now exist outside of the Rails app, so the tests can remain the same. All the tests are essentially integration tests now, so they don't care about the implementation details of where the API lives in the code.
  • Using a single test suite also simplifies running everything in our CircleCI test environment. CircleCI will automatically distribute tests across multiple nodes, instead of the custom script we currently have to use to achieve something similar. This is also one of the reasons I've sort of been holding off on merging the Ember upgrades into master, since doing so now would mean we'd have to sort through getting multiple versions of Node.js running in the CI environment and dealing with how to distribute the tests.
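
As a small illustration of the "standard Ruby methods instead of shared examples" point above, here's a self-contained sketch of the pattern (not taken from the actual suite; the module, class, and stubbed HTTP call are hypothetical):

```ruby
require "minitest/autorun"

# Reusable assertions live in a plain Ruby module instead of an RSpec
# "shared example", so you can jump straight to the definition.
module ApiKeyBehaviorTests
  def assert_requires_api_key(path)
    response = make_request(path, api_key: nil)
    assert_equal 403, response.code
  end
end

class HelloApiTest < Minitest::Test
  include ApiKeyBehaviorTests

  Response = Struct.new(:code)

  # Hypothetical stand-in for a real libcurl-backed HTTP call.
  def make_request(path, api_key:)
    Response.new(api_key ? 200 : 403)
  end

  def test_rejects_missing_api_key
    assert_requires_api_key("/api/hello")
  end
end
```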

GUI (Member, Author) commented Dec 8, 2016

The new test suite has landed in master as part of #312.

GUI closed this as completed Dec 8, 2016