This repository has been archived by the owner on Sep 17, 2024. It is now read-only.

issues/146 - review Fleet test and sync with stand-alone test #150

Closed · wants to merge 1 commit

Conversation

EricDavisX
Contributor

#146

  • Per the issue, I am reviewing this and syncing it up with what we checked in for the stand-alone agent.

  • I also added 3 new scenarios. The last of them can be copy/pasted into a new feature file for package assessment, one per package we want to assess, though I don't think we can expect data to show up unless that software is actually running, so we'll have to drop the last step of the assessment.

@new-configuration-new-package
Scenario: Add a new config and a new package and assign an agent
Given an agent is deployed to Fleet
When a new configuration named "Test - custom logs" is created
And the "custom logs" package datasource is added to the "Test - custom logs" configuration
And the Agent is assigned to the configuration "Test - custom logs"
And the "Test - custom logs" configuration shows the "custom logs" datasource added
Then there is new data in the index from agent from "custom logs" stream

@apmmachine
Contributor

apmmachine commented Jun 30, 2020

💔 Tests Failed



Build stats

  • Build Cause: [Branch indexing]

  • Start Time: 2020-07-10T06:26:03.855+0000

  • Duration: 16 min 25 sec

Test stats 🧪

Test Results

  • Failed: 4
  • Passed: 26
  • Skipped: 13
  • Total: 43

Test errors


  • Name: Initializing / End-To-End Tests / ingest-manager_stand_alone_mode / Starting the agent starts backend processes – Stand-alone Agent Mode

    • Age: 1
    • Duration: 7.59435
    • Error Details: Step the "filebeat" process is "started" on the host
  • Name: Initializing / End-To-End Tests / ingest-manager_stand_alone_mode / Deploying a stand-alone agent – Stand-alone Agent Mode

    • Age: 1
    • Duration: 27.663292
    • Error Details: Step there is new data in the index from agent
  • Name: Initializing / End-To-End Tests / ingest-manager_stand_alone_mode / Stopping the agent container stops data going into ES – Stand-alone Agent Mode

    • Age: 1
    • Duration: 1.8269631
    • Error Details: Step the "elastic-agent" docker container is stopped
  • Name: Initializing / Tests / Sanity checks / checkgherkinlint – pre_commit.lint

    • Age: 3
    • Duration: 0
    • Error Details: error

Steps errors


  • Name: Run functional tests for ingest-manager:stand_alone_mode

    • Description:

    • Duration: 6 min 19 sec

    • Start Time: 2020-07-10T06:36:06.879+0000

    • log

  • Name: Error signal

    • Description:

    • Duration: 0 min 0 sec

    • Start Time: 2020-07-10T06:42:26.054+0000

    • log

  • Name: General Build Step

    • Description: [2020-07-10T06:42:26.522Z] Archiving artifacts
      hudson.AbortException: script returned exit code 1

    • Duration: 0 min 0 sec

    • Start Time: 2020-07-10T06:42:26.517+0000

    • log

Log output


[2020-07-10T06:40:32.265Z] time="2020-07-10T06:40:32Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:40:32.265Z] time="2020-07-10T06:40:32Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=1m41.730191951s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=26 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:40:36.477Z] time="2020-07-10T06:40:36Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:40:36.477Z] time="2020-07-10T06:40:36Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=1m45.716411462s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=27 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:40:43.067Z] time="2020-07-10T06:40:42Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:40:43.067Z] time="2020-07-10T06:40:42Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=1m51.979934444s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=28 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:40:46.373Z] time="2020-07-10T06:40:45Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:40:46.373Z] time="2020-07-10T06:40:45Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=1m55.513682726s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=29 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:40:52.963Z] time="2020-07-10T06:40:52Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:40:52.963Z] time="2020-07-10T06:40:52Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m2.341167928s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=30 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:40:59.553Z] time="2020-07-10T06:40:58Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:40:59.553Z] time="2020-07-10T06:40:58Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m8.325599351s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=31 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:04.842Z] time="2020-07-10T06:41:03Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:41:04.842Z] time="2020-07-10T06:41:03Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m13.445528186s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=32 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:06.755Z] time="2020-07-10T06:41:06Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:41:06.755Z] time="2020-07-10T06:41:06Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m16.0877664s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=33 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:10.061Z] time="2020-07-10T06:41:09Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:41:10.061Z] time="2020-07-10T06:41:09Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m19.380141933s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=34 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:15.351Z] time="2020-07-10T06:41:15Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:41:15.351Z] time="2020-07-10T06:41:15Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m24.91768401s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=35 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:23.653Z] time="2020-07-10T06:41:22Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:41:23.653Z] time="2020-07-10T06:41:22Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m32.294660765s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=36 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:26.206Z] time="2020-07-10T06:41:25Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:41:26.206Z] time="2020-07-10T06:41:25Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m35.192695878s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=37 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:31.504Z] time="2020-07-10T06:41:31Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:41:31.504Z] time="2020-07-10T06:41:31Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m40.667593737s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=38 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:34.052Z] time="2020-07-10T06:41:33Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:41:34.052Z] time="2020-07-10T06:41:33Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m43.463988522s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=39 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:40.640Z] time="2020-07-10T06:41:39Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:41:40.640Z] time="2020-07-10T06:41:39Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m49.424933119s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=40 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:43.940Z] time="2020-07-10T06:41:43Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:41:43.940Z] time="2020-07-10T06:41:43Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m53.433777071s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=41 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:47.242Z] time="2020-07-10T06:41:47Z" level=warning msg="Error executing request" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" method=GET url="http://localhost:5601/status"
[2020-07-10T06:41:47.243Z] time="2020-07-10T06:41:47Z" level=warning msg="The Kibana instance is not healthy yet" elapsedTime=2m56.80089432s error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" retry=42 statusEndpoint="http://localhost:5601/status"
[2020-07-10T06:41:47.243Z] time="2020-07-10T06:41:47Z" level=error msg="The Kibana instance could not get the healthy status" error="Get http://localhost:5601/status: dial tcp 127.0.0.1:5601: connect: connection refused" minutes=3m0s
[2020-07-10T06:41:48.186Z] Pulling elastic-agent (docker.elastic.co/beats/elastic-agent:8.0.0-SNAPSHOT)...
[2020-07-10T06:41:48.758Z] 8.0.0-SNAPSHOT: Pulling from beats/elastic-agent
[2020-07-10T06:41:52.987Z] Starting ingest-manager_package-registry_1 ... 
[2020-07-10T06:41:52.987Z] ingest-manager_elasticsearch_1 is up-to-date
[2020-07-10T06:41:54.949Z] 
Starting ingest-manager_package-registry_1 ... done
[2020-07-10T06:41:54.949Z] ERROR: for kibana  Container "efc096fe47b2" is unhealthy.
[2020-07-10T06:41:54.949Z] Encountered errors while bringing up the project.
[2020-07-10T06:41:54.949Z] time="2020-07-10T06:41:54Z" level=error msg="Could not deploy the elastic-agent"
[2020-07-10T06:41:55.524Z] Starting ingest-manager_package-registry_1 ... 
[2020-07-10T06:41:55.524Z] ingest-manager_elasticsearch_1 is up-to-date
[2020-07-10T06:41:56.731Z] 
Starting ingest-manager_package-registry_1 ... done
Starting ingest-manager_kibana_1           ... 
[2020-07-10T06:42:24.295Z] 
Starting ingest-manager_kibana_1           ... done
[2020-07-10T06:42:24.295Z] ERROR: for elastic-agent  Container "e667b1d8a238" is unhealthy.
[2020-07-10T06:42:24.295Z] Encountered errors while bringing up the project.
[2020-07-10T06:42:24.295Z] time="2020-07-10T06:42:22Z" level=error msg="Could not deploy the elastic-agent"
[2020-07-10T06:42:24.295Z] Starting ingest-manager_package-registry_1 ... 
[2020-07-10T06:42:24.295Z] ingest-manager_elasticsearch_1 is up-to-date
[2020-07-10T06:42:24.295Z] 
Starting ingest-manager_package-registry_1 ... done
[2020-07-10T06:42:24.295Z] ERROR: for kibana  Container "efc096fe47b2" is unhealthy.
[2020-07-10T06:42:24.295Z] Encountered errors while bringing up the project.
[2020-07-10T06:42:24.558Z] time="2020-07-10T06:42:24Z" level=error msg="Could not deploy the elastic-agent"
[2020-07-10T06:42:25.503Z] Stopping ingest-manager_elasticsearch_1 ... 
[2020-07-10T06:42:25.767Z] 
Stopping ingest-manager_elasticsearch_1 ... done
Removing ingest-manager_kibana_1           ... 
[2020-07-10T06:42:25.767Z] Removing ingest-manager_package-registry_1 ... 
[2020-07-10T06:42:25.767Z] Removing ingest-manager_elasticsearch_1    ... 
[2020-07-10T06:42:26.028Z] 
Removing ingest-manager_elasticsearch_1    ... done

Removing ingest-manager_package-registry_1 ... done

Removing ingest-manager_kibana_1           ... done
Removing network ingest-manager_default
[2020-07-10T06:42:26.028Z] <?xml version="1.0" encoding="UTF-8"?>
[2020-07-10T06:42:26.028Z] <testsuites name="main" tests="3" skipped="0" failures="3" errors="0" time="286.447594119">
[2020-07-10T06:42:26.028Z]   <testsuite name="Fleet Mode Agent" tests="0" skipped="0" failures="0" errors="0" time="0"></testsuite>
[2020-07-10T06:42:26.028Z]   <testsuite name="Stand-alone Agent Mode" tests="3" skipped="0" failures="3" errors="0" time="37.084611981">
[2020-07-10T06:42:26.028Z]     <testcase name="Starting the agent starts backend processes" status="failed" time="7.594349822">
[2020-07-10T06:42:26.028Z]       <failure message="Step a stand-alone agent is deployed: Could not run compose file: [/var/lib/jenkins/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/.op/compose/services/elastic-agent/docker-compose.yml] - Local Docker compose exited abnormally whilst running docker-compose: [up -d]. exit status 1"></failure>
[2020-07-10T06:42:26.028Z]       <error message="Step the &#34;filebeat&#34; process is &#34;started&#34; on the host" type="skipped"></error>
[2020-07-10T06:42:26.028Z]       <error message="Step the &#34;metricbeat&#34; process is &#34;started&#34; on the host" type="skipped"></error>
[2020-07-10T06:42:26.028Z]     </testcase>
[2020-07-10T06:42:26.028Z]     <testcase name="Deploying a stand-alone agent" status="failed" time="27.663292147">
[2020-07-10T06:42:26.028Z]       <failure message="Step a stand-alone agent is deployed: Could not run compose file: [/var/lib/jenkins/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/.op/compose/services/elastic-agent/docker-compose.yml] - Local Docker compose exited abnormally whilst running docker-compose: [up -d]. exit status 1"></failure>
[2020-07-10T06:42:26.028Z]       <error message="Step there is new data in the index from agent" type="skipped"></error>
[2020-07-10T06:42:26.028Z]     </testcase>
[2020-07-10T06:42:26.029Z]     <testcase name="Stopping the agent container stops data going into ES" status="failed" time="1.8269630989999999">
[2020-07-10T06:42:26.029Z]       <failure message="Step a stand-alone agent is deployed: Could not run compose file: [/var/lib/jenkins/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/.op/compose/services/elastic-agent/docker-compose.yml] - Local Docker compose exited abnormally whilst running docker-compose: [up -d]. exit status 1"></failure>
[2020-07-10T06:42:26.029Z]       <error message="Step the &#34;elastic-agent&#34; docker container is stopped" type="skipped"></error>
[2020-07-10T06:42:26.029Z]       <error message="Step there is no new data in the index after agent shuts down" type="skipped"></error>
[2020-07-10T06:42:26.029Z]     </testcase>
[2020-07-10T06:42:26.029Z]   </testsuite>
[2020-07-10T06:42:26.029Z] </testsuites>make: *** [functional-test] Error 1
[2020-07-10T06:42:26.029Z] Makefile:36: recipe for target 'functional-test' failed
[2020-07-10T06:42:26.029Z] + echo 'ERROR: functional-test failed'
[2020-07-10T06:42:26.029Z] ERROR: functional-test failed
[2020-07-10T06:42:26.029Z] + exit_status=1
[2020-07-10T06:42:26.029Z] + sed -e 's/^[ \t]*//; s#>.*failed$#>#g' outputs/TEST-ingest-manager-stand_alone_mode
[2020-07-10T06:42:26.029Z] + grep -E '^<.*>$'
[2020-07-10T06:42:26.029Z] + exit 1
[2020-07-10T06:42:26.066Z] Recording test results
[2020-07-10T06:42:26.522Z] Archiving artifacts
[2020-07-10T06:42:26.607Z] Failed in branch ingest-manager_stand_alone_mode
[2020-07-10T06:42:27.910Z] Stage "Release" skipped due to earlier failure(s)
[2020-07-10T06:42:28.719Z] Running on worker-854309 in /var/lib/jenkins/workspace/stack_e2e-testing-mbp_PR-150
[2020-07-10T06:42:28.752Z] [INFO] getVaultSecret: Getting secrets
[2020-07-10T06:42:28.815Z] Masking supported pattern matches of $VAULT_ADDR or $VAULT_ROLE_ID or $VAULT_SECRET_ID
[2020-07-10T06:42:30.643Z] + chmod 755 generate-build-data.sh
[2020-07-10T06:42:30.643Z] + ./generate-build-data.sh https://apm-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/stack/e2e-testing-mbp/PR-150/ https://apm-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/stack/e2e-testing-mbp/PR-150/runs/3 FAILURE 985386
[2020-07-10T06:42:30.643Z] INFO: curl https://apm-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/stack/e2e-testing-mbp/PR-150/runs/3/steps/?limit=10000 -o steps-info.json
[2020-07-10T06:42:34.764Z] INFO: curl https://apm-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/stack/e2e-testing-mbp/PR-150/runs/3/tests/?status=FAILED -o tests-errors.json
[2020-07-10T06:42:34.764Z] INFO: curl https://apm-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/stack/e2e-testing-mbp/PR-150/runs/3/log/ -o pipeline-log.txt


@revoke-token
Scenario: Revoking the enrollment token for an agent
Given an agent is deployed to Fleet
And the agent is un-enrolled
And the "agent" process is "stopped" on the host
When the enrollment token is revoked
Contributor


I believe that we are going to need the ability to create tokens using APIs, so that we can revoke it without affecting the default one.

Contributor Author


Discussed via chat; to confirm here, I agree, we should manage this better:

To create a new token (they are config-specific), POST to /api/ingest_manager/fleet/enrollment-api-keys with a body like:

{"name":"test-token","config_id":"99a28950-bae3-11ea-9dd9-f131eddc2919"}

where config_id is the id of the configuration and name is an arbitrary name.

To get the available config_ids, send a GET to /api/ingest_manager/agent_configs and walk the returned list; the config_id is listed for each entry.
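The token workflow above can be sketched as shell commands. This is a minimal sketch, not taken from this PR: the Kibana URL and the kbn-xsrf header are assumptions (the header is a standard requirement of Kibana HTTP APIs), and the config_id is just the example value from the comment.

```shell
# Hypothetical sketch of the two API calls described above. KIBANA_URL and the
# kbn-xsrf header are assumptions, not details from this PR; the config_id is
# the example value from the comment.
KIBANA_URL="http://localhost:5601"
PAYLOAD='{"name":"test-token","config_id":"99a28950-bae3-11ea-9dd9-f131eddc2919"}'

# Create a config-specific enrollment token (run against a live Kibana):
# curl -X POST "$KIBANA_URL/api/ingest_manager/fleet/enrollment-api-keys" \
#      -H 'Content-Type: application/json' -H 'kbn-xsrf: true' -d "$PAYLOAD"

# Discover the available config_ids:
# curl "$KIBANA_URL/api/ingest_manager/agent_configs"

# Sanity-check that the payload is well-formed JSON before sending it:
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"
```

The curl invocations are left commented out because they need a running Kibana; only the payload check executes locally.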

Contributor


got it working!

Contributor


More context on this: we decided to have each scenario prepare its own state (i.e. its enrollment token) so that each scenario is idempotent.

Comment on lines +48 to +72
@package-added-to-default-config
Scenario: Execute packages api calls
Given an agent is deployed to Fleet
And the package list api returns successfully
When the "Cisco" latest package version is installed successfully
And a "Cisco" package datasource is added to the "default" configuration
Then the "default" configuration shows the "Cisco" datasource added

@new-agent-configuration
Scenario: Assign an Agent to a new configuration
Given an agent is deployed to Fleet
And the agent is listed in Fleet as online
When a new configuration named "Test Fleet" is created
And the Agent is assigned to the configuration "Test Fleet"
Then a new enrollment token is created
And there is new data in the index from agent

@new-configuration-new-package
Scenario: Add a new config and a new package and assign an agent
Given an agent is deployed to Fleet
When a new configuration named "Test - custom logs" is created
And the "custom logs" package datasource is added to the "Test - custom logs" configuration
And the Agent is assigned to the configuration "Test - custom logs"
And the "Test - custom logs" configuration shows the "custom logs" datasource added
Then there is new data in the index from agent from "custom logs" stream
Contributor

@mdelapenya Jul 1, 2020


Wdyt about adding these three scenarios once we complete the above ones?

Contributor Author


I'd like to hustle through these as well; they are pertinent and are blocking us from expanding our packages testing here. But if we want to put them in a separate update, that's fine.

Contributor


Sure thing! Let me complete the existing ones and I'll add these. Let's keep this open until then. We are on the right (and fast) track!

@mdelapenya
Contributor

Hey Eric, I'm downloading this branch to resolve conflicts locally and push it again as a new PR (keeping your commits, of course). I'll close this one.

As a side note, I'd recommend using your fork to send PRs (the traditional open source flow), so that we do not add branches to this repo beyond the needed ones (master and releases). Otherwise the CI will pick up the branch and spend cloud resources building it, in addition to the resources used for the PR itself.

I also noticed you forked mdelapenya/e2e-testing; I suggest removing that fork and forking this repo instead. I make a practice of keeping my remote up to date with upstream, but that may not be the case once this project starts receiving contributions from more team folks.

@EricDavisX
Contributor Author

Indeed, I'll fork it right! Thanks, Manu.

@EricDavisX EricDavisX deleted the ingest-fleet-test-updates branch July 22, 2020 20:51