Commit

Add the new integration test framework (#12368)
This commit is a first draft of the revised integration test framework, which provides:
- A new directory, integration-tests-ex, that holds the new integration test structure. (For now, the existing integration-tests is left unchanged.)
- Maven module druid-it-tools to hold code placed into the Docker image.
- Maven module druid-it-image to build the Druid-only test image from the tarball produced in distribution. (Dependencies live in their "official" image.)
- Maven module druid-it-cases that holds the revised tests and the framework itself. The framework includes file-based test configuration, test-specific clients, test initialization and updated versions of some of the common test support classes.

The integration test setup is primarily a huge mass of details. This approach refactors many of those details: from how the image is built and configured, to how the Docker Compose scripts are structured, to how tests are configured. An extensive set of "readme" files explains those details. Rather than repeat that material here, please consult those files for explanations.
paul-rogers authored Aug 24, 2022
1 parent 0bc9f9f commit cfed036
Showing 205 changed files with 20,703 additions and 230 deletions.
2 changes: 1 addition & 1 deletion .gitignore
Original file line number Diff line number Diff line change
@@ -25,5 +25,5 @@ README
.pmdruleset.xml
.java-version
integration-tests/gen-scripts/
/bin/
bin/
*.hprof
52 changes: 27 additions & 25 deletions .travis.yml
@@ -46,7 +46,7 @@ addons:
# Add various options to make 'mvn install' fast and skip javascript compile (-Ddruid.console.skip=true) since it is not
# needed. Depending on network speeds, "mvn -q install" may take longer than the default 10 minute timeout to print any
# output. To compensate, use travis_wait to extend the timeout.
install: ./check_test_suite.py && travis_terminate 0 || echo 'Running Maven install...' && MAVEN_OPTS='-Xmx3000m' travis_wait 15 ${MVN} clean install -q -ff -pl '!distribution,!:it-tools,!:it-image' ${MAVEN_SKIP} ${MAVEN_SKIP_TESTS} -T1C && ${MVN} install -q -ff -pl 'distribution' ${MAVEN_SKIP} ${MAVEN_SKIP_TESTS}
install: ./check_test_suite.py && travis_terminate 0 || echo 'Running Maven install...' && MAVEN_OPTS='-Xmx3000m' travis_wait 15 ${MVN} clean install -q -ff -pl '!distribution,!:druid-it-tools,!:druid-it-image,!:druid-it-cases' ${MAVEN_SKIP} ${MAVEN_SKIP_TESTS} -T1C && ${MVN} install -q -ff -pl 'distribution' ${MAVEN_SKIP} ${MAVEN_SKIP_TESTS}

# There are 3 stages of tests
# 1. Tests - phase 1
@@ -72,7 +72,7 @@ jobs:
- name: "animal sniffer checks"
stage: Tests - phase 1
script: ${MVN} animal-sniffer:check --fail-at-end

- name: "checkstyle"
script: ${MVN} checkstyle:checkstyle --fail-at-end

@@ -347,7 +347,7 @@ jobs:
<<: *test_processing_module
name: "(openjdk8) other modules test"
env:
- MAVEN_PROJECTS='!processing,!indexing-hadoop,!indexing-service,!extensions-core/kafka-indexing-service,!extensions-core/kinesis-indexing-service,!server,!web-console,!integration-tests,!:it-image,!:it-tools'
- MAVEN_PROJECTS='!processing,!indexing-hadoop,!indexing-service,!extensions-core/kafka-indexing-service,!extensions-core/kinesis-indexing-service,!server,!web-console,!integration-tests,!:druid-it-tools,!:druid-it-image,!:druid-it-cases'

- <<: *test_other_modules
name: "(openjdk11) other modules test"
@@ -457,9 +457,9 @@ jobs:
docker exec -it druid-$v sh -c 'dmesg | tail -3' ;
done

- <<: *integration_batch_index
name: "(Compile=openjdk8, Run=openjdk8) batch index integration test with Indexer"
env: TESTNG_GROUPS='-Dgroups=batch-index' JVM_RUNTIME='-Djvm.runtime=8' USE_INDEXER='indexer'
#- <<: *integration_batch_index
# name: "(Compile=openjdk8, Run=openjdk8) batch index integration test with Indexer"
# env: TESTNG_GROUPS='-Dgroups=batch-index' JVM_RUNTIME='-Djvm.runtime=8' USE_INDEXER='indexer'

- &integration_input_format
name: "(Compile=openjdk8, Run=openjdk8) input format integration test"
@@ -666,16 +666,33 @@ jobs:
name: "(Compile=openjdk8, Run=openjdk8) other integration tests with Indexer"
env: TESTNG_GROUPS='-DexcludedGroups=batch-index,input-format,input-source,perfect-rollup-parallel-batch-index,kafka-index,query,query-retry,query-error,realtime-index,security,ldap-security,s3-deep-storage,gcs-deep-storage,azure-deep-storage,hdfs-deep-storage,s3-ingestion,kinesis-index,kinesis-data-format,kafka-transactional-index,kafka-index-slow,kafka-transactional-index-slow,kafka-data-format,hadoop-s3-to-s3-deep-storage,hadoop-s3-to-hdfs-deep-storage,hadoop-azure-to-azure-deep-storage,hadoop-azure-to-hdfs-deep-storage,hadoop-gcs-to-gcs-deep-storage,hadoop-gcs-to-hdfs-deep-storage,aliyun-oss-deep-storage,append-ingestion,compaction,high-availability,upgrade,shuffle-deep-store,custom-coordinator-duties' JVM_RUNTIME='-Djvm.runtime=8' USE_INDEXER='indexer'

- <<: *integration_tests
name: "(Compile=openjdk8, Run=openjdk8) leadership and high availability integration tests"
jdk: openjdk8
env: TESTNG_GROUPS='-Dgroups=high-availability' JVM_RUNTIME='-Djvm.runtime=8' USE_INDEXER='middleManager' OVERRIDE_CONFIG_PATH='./environment-configs/test-groups/prepopulated-data'
#- <<: *integration_tests
# name: "(Compile=openjdk8, Run=openjdk8) leadership and high availability integration tests"
# jdk: openjdk8
# env: TESTNG_GROUPS='-Dgroups=high-availability' JVM_RUNTIME='-Djvm.runtime=8' USE_INDEXER='middleManager' OVERRIDE_CONFIG_PATH='./environment-configs/test-groups/prepopulated-data'

- <<: *integration_query
name: "(Compile=openjdk8, Run=openjdk8) query integration test (mariaDB)"
jdk: openjdk8
env: TESTNG_GROUPS='-Dgroups=query' USE_INDEXER='middleManager' MYSQL_DRIVER_CLASSNAME='org.mariadb.jdbc.Driver' OVERRIDE_CONFIG_PATH='./environment-configs/test-groups/prepopulated-data'

# Revised ITs.
- &integration_tests_ex
name: "(Compile=openjdk8, Run=openjdk8) leadership and high availability integration tests (new)"
stage: Tests - phase 2
jdk: openjdk8
services: *integration_test_services
env: JVM_RUNTIME='-Djvm.runtime=8' USE_INDEXER='middleManager'
      # Uses the install defined above. Then, builds the test tools and Docker image,
      # and runs one IT. If tests fail, echoes log lines of any of
      # the Druid services that did not exit normally.
script: ./it.sh travis HighAvailability

- <<: *integration_tests_ex
name: "(Compile=openjdk8, Run=openjdk8) batch index integration test with Indexer (new)"
env: JVM_RUNTIME='-Djvm.runtime=8' USE_INDEXER='indexer'
script: ./it.sh travis BatchIndex

# END - Integration tests for Compile with Java 8 and Run with Java 8

# START - Integration tests for Compile with Java 8 and Run with Java 11
@@ -756,21 +756,6 @@ jobs:

# END - Integration tests for Compile with Java 8 and Run with Java 11

# BEGIN - Revised integration tests

# Experimental build of the revised integration test Docker image.
# Actual tests will come later.
- name: "experimental docker tests"
stage: Tests - phase 2
      # Uses the install defined above. Then, builds the test tools and Docker image,
      # and runs the various IT tests. If tests fail, echoes log lines of any of
      # the Druid services that did not exit normally.
      # Run through install to ensure the test tools are installed and the Docker
      # image is built. The tests only need the verify phase.
script: ${MVN} install -P dist,test-image -rf :distribution ${MAVEN_SKIP} -DskipUTs=true

# END - Revised integration tests

- &integration_batch_index_k8s
name: "(Compile=openjdk8, Run=openjdk8, Cluster Build On K8s) ITNestedQueryPushDownTest integration test"
stage: Tests - phase 2
19 changes: 10 additions & 9 deletions core/src/main/java/org/apache/druid/guice/PolyBind.java
@@ -40,12 +40,13 @@
import java.util.Properties;

/**
* Provides the ability to create "polymorphic" bindings. Where the polymorphism is actually just making a decision
* based on a value in a Properties.
*
* The workflow is that you first create a choice by calling createChoice(). Then you create options using the binder
* returned by the optionBinder() method. Multiple different modules can call optionBinder and all options will be
* reflected at injection time as long as equivalent interface Key objects are passed into the various methods.
* Provides the ability to create "polymorphic" bindings where the polymorphism is actually just making a decision
* based on a value in Properties.
* <p>
* The workflow is that you first create a choice by calling {@code createChoice()}. Then you create options using
* the binder returned by the {@code optionBinder()} method. Multiple different modules can call
* {@code optionBinder()} and all options will be reflected at injection time as long as equivalent interface
* {@code Key} objects are passed into the various methods.
*/
@PublicApi
public class PolyBind
@@ -110,10 +110,10 @@ public static <T> ScopedBindingBuilder createChoiceWithDefault(
}

/**
* Binds an option for a specific choice. The choice must already be registered on the injector for this to work.
* Binds an option for a specific choice. The choice must already be registered on the injector for this to work.
*
* @param binder the binder for the injector that is being configured
* @param interfaceKey the interface that will have an option added to it. This must equal the
* @param interfaceKey the interface that will have an option added to it. This must equal the
* Key provided to createChoice
* @param <T> interface type
* @return A MapBinder that can be used to create the actual option bindings.
@@ -195,7 +196,7 @@ public T get()

if (provider == null) {
throw new ProvisionException(
StringUtils.format("Unknown provider[%s] of %s, known options[%s]", implName, key, implsMap.keySet())
StringUtils.format("Unknown provider [%s] of %s, known options [%s]", implName, key, implsMap.keySet())
);
}

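The javadoc above describes PolyBind's workflow: register a choice keyed by a property, let modules add options, and resolve the selected option at injection time. As a rough, self-contained analogue of that decision logic (this is an illustrative sketch, not the real Guice-backed PolyBind; all names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Simplified analogue of PolyBind's injection-time decision: a "choice" is a
// property name, the "options" are named implementations, and the property's
// value at lookup time selects which implementation is provided.
class PolyChoice<T>
{
  private final String propertyName;
  private final Map<String, T> options = new HashMap<>();

  PolyChoice(String propertyName)
  {
    this.propertyName = propertyName;
  }

  // Mirrors optionBinder(): multiple modules may each register options.
  void addOption(String name, T impl)
  {
    options.put(name, impl);
  }

  // Mirrors the provider's get(): resolve the option named by the property,
  // falling back to a default when the property is unset.
  T get(Properties props, T defaultImpl)
  {
    final String implName = props.getProperty(propertyName);
    if (implName == null) {
      return defaultImpl;
    }
    final T impl = options.get(implName);
    if (impl == null) {
      throw new IllegalStateException(
          "Unknown provider [" + implName + "] of " + propertyName
          + ", known options " + options.keySet());
    }
    return impl;
  }
}
```

The real PolyBind does the same lookup through Guice `Key` objects and a `MapBinder`, which is why equivalent `Key` instances must be passed to `createChoice()` and `optionBinder()`.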
@@ -29,6 +29,8 @@
*/
public class MetadataStorageConnectorConfig
{
public static final String PROPERTY_BASE = "druid.metadata.storage.connector";

@JsonProperty
private boolean createTables = true;

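The new `PROPERTY_BASE` constant names the prefix under which connector settings are read. In a runtime properties file that prefix appears as, for example (values here are illustrative only):

```properties
druid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.createTables=true
```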
@@ -30,6 +30,8 @@
*/
public class MetadataStorageTablesConfig
{
public static final String PROPERTY_BASE = "druid.metadata.storage.tables";

public static MetadataStorageTablesConfig fromBase(String base)
{
return new MetadataStorageTablesConfig(base, null, null, null, null, null, null, null, null, null, null);
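Here the new `PROPERTY_BASE` constant names the prefix for table-name settings. A typical runtime configuration under that prefix (value illustrative) is:

```properties
druid.metadata.storage.tables.base=druid
```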
@@ -41,6 +41,7 @@
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
@@ -249,6 +250,7 @@ public void testWithCacheAndFetch() throws IOException
}

@Test
@Ignore("See issue #12638")
public void testWithLargeCacheAndSmallFetch() throws IOException
{
final TestPrefetchableTextFilesFirehoseFactory factory =
@@ -336,6 +338,7 @@ public void testTimeout() throws IOException
}

@Test
@Ignore("See issue #12638")
public void testReconnectWithCacheAndPrefetch() throws IOException
{
final TestPrefetchableTextFilesFirehoseFactory factory =
8 changes: 4 additions & 4 deletions core/src/test/java/org/apache/druid/guice/PolyBindTest.java
@@ -112,17 +112,17 @@ public void configure(Binder binder)
}
catch (Exception e) {
Assert.assertTrue(e instanceof ProvisionException);
Assert.assertTrue(e.getMessage().contains("Unknown provider[c] of Key[type=org.apache.druid.guice.PolyBindTest$Gogo"));
Assert.assertTrue(e.getMessage().contains("Unknown provider [c] of Key[type=org.apache.druid.guice.PolyBindTest$Gogo"));
}
try {
Assert.assertEquals("B", injector.getInstance(Key.get(Gogo.class, Names.named("reverse"))).go());
Assert.fail(); // should never be reached
}
catch (Exception e) {
Assert.assertTrue(e instanceof ProvisionException);
Assert.assertTrue(e.getMessage().contains("Unknown provider[c] of Key[type=org.apache.druid.guice.PolyBindTest$Gogo"));
Assert.assertTrue(e.getMessage().contains("Unknown provider [c] of Key[type=org.apache.druid.guice.PolyBindTest$Gogo"));
}

// test default property value
Assert.assertEquals("B", injector.getInstance(GogoSally.class).go());
props.setProperty("sally", "a");
@@ -136,7 +136,7 @@ public void configure(Binder binder)
}
catch (Exception e) {
Assert.assertTrue(e instanceof ProvisionException);
Assert.assertTrue(e.getMessage().contains("Unknown provider[c] of Key[type=org.apache.druid.guice.PolyBindTest$GogoSally"));
Assert.assertTrue(e.getMessage().contains("Unknown provider [c] of Key[type=org.apache.druid.guice.PolyBindTest$GogoSally"));
}
}

@@ -61,7 +61,7 @@ public MySQLConnector(
)
{
super(config, dbTables);
log.info("Loading \"MySQL\" metadata connector driver %s", driverConfig.getDriverClassName());
log.info("Loading MySQL metadata connector driver %s", driverConfig.getDriverClassName());
tryLoadDriverClass(driverConfig.getDriverClassName(), true);

if (driverConfig.getDriverClassName().contains("mysql")) {
@@ -25,8 +25,11 @@

public class MySQLConnectorDriverConfig
{
public static final String MYSQL_DRIVER = "com.mysql.jdbc.Driver";
public static final String MARIA_DB_DRIVER = "org.mariadb.jdbc.Driver";

@JsonProperty
private String driverClassName = "com.mysql.jdbc.Driver";
private String driverClassName = MYSQL_DRIVER;

@JsonProperty
public String getDriverClassName()
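With the driver class names extracted into constants, switching the metadata store to the MariaDB driver becomes a one-line runtime-property override. Property path per the MySQL metadata storage extension; shown here as an illustration:

```properties
druid.metadata.mysql.driver.driverClassName=org.mariadb.jdbc.Driver
```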
1 change: 1 addition & 0 deletions extensions-core/testing-tools/.gitignore
@@ -0,0 +1 @@
/bin/
115 changes: 107 additions & 8 deletions integration-tests-ex/README.md
@@ -19,17 +19,116 @@

# Revised Integration Tests

This directory builds a Docker image for Druid, then uses that image, along
with test configuration, to run tests. This version greatly evolves the
integration tests from the earlier form. See [History](docs/history.md)
for details.

## Shortcuts

Here are the most common commands you will use once you're familiar with
the framework. If you are new to the framework, see
[Quickstart](docs/quickstart.md) for an explanation.

### Build Druid

To make the text a bit simpler, define a variable for the standard settings:

```bash
export MAVEN_IGNORE="-P skip-static-checks,skip-tests -Dmaven.javadoc.skip=true"
```

Then build the distribution:

```bash
mvn clean package -P dist $MAVEN_IGNORE -T1.0C
```

### Build the Test Image

```bash
cd $DRUID_DEV/integration-tests-ex/image
mvn install -P test-image $MAVEN_IGNORE
```

### Run an IT from the Command Line

```bash
mvn verify -P IT-<category> -pl :druid-it-cases $MAVEN_IGNORE
```

Where `<category>` is one of the test categories.
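For example, to run the high availability category seen in the Travis configuration (the exact `IT-HighAvailability` profile name is an assumption based on the `IT-<category>` pattern above):

```bash
mvn verify -P IT-HighAvailability -pl :druid-it-cases $MAVEN_IGNORE
```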

Or

```bash
mvn $USUAL_CAVEATS -P test-image
cd $DRUID_DEV/integration-tests-ex/cases
mvn verify -P skip-static-checks,docker-tests,IT-<category> \
-Dmaven.javadoc.skip=true -DskipUTs=true \
-pl :druid-it-cases
```

Where `$USUAL_CAVEATS` are your favorite options to turn
off static checks, UTs, etc.

### Run an IT from the IDE

Start the cluster:

```bash
cd $DRUID_DEV/integration-tests-ex/cases
./cluster.sh <category> up
```

Where `<category>` is one of the test categories. Then launch the
test as a JUnit test.

## Contents

* [Goals](#goals)
* [Quickstart](docs/quickstart.md)
* [Create a new test](docs/guide.md)
* [Maven configuration](docs/maven.md)
* [Travis integration](docs/travis.md)
* [Docker image](docs/docker.md)
* [Druid configuration](docs/druid-config.md)
* [Docker Compose configuration](docs/compose.md)
* [Test configuration](docs/test-config.md)
* [Test structure](docs/tests.md)
* [Test runtime semantics](docs/runtime.md)
* [Scripts](docs/scripts.md)
* [Dependencies](docs/dependencies.md)
* [Debugging](docs/debugging.md)

Background information

* [Next steps](docs/next-steps.md)
* [Test conversion](docs/conversion.md) - How to convert existing tests.
* [History](docs/history.md) - Comparison with prior integration tests.

## Goals

The goal of the present version is to simplify development.

* Speed up the Druid test image build by avoiding download of
  dependencies. (Instead, any such dependencies are managed by
  Maven and reside in the local build cache.)
* Use official images for dependencies to avoid the need to
  download, install, and manage those dependencies.
* Make it easy to manually build the image, launch
  a cluster, and run a test against the cluster.
* Convert tests to JUnit so that they will easily run in your
  favorite IDE, just like other Druid tests.
* Use the actual Druid build from `distribution` so we know
  what is tested.
* Leverage, don't fight, Maven.
* Run the integration tests easily on a typical development machine.

By meeting these goals, you can quickly:

* Build the Druid distribution.
* Build the Druid image. (< 1 minute)
* Launch the cluster for the particular test. (a few seconds)
* Run the test any number of times in your debugger.
* Clean up the test artifacts.

The result is that the fastest path to develop a Druid patch or
feature is:

* Create a normal unit test and run it to verify your code.
* Create an integration test that double-checks the code in
  a live cluster.
1 change: 1 addition & 0 deletions integration-tests-ex/cases/.gitignore
@@ -0,0 +1 @@
/bin/
File renamed without changes.
