This repository has been archived by the owner on Sep 17, 2024. It is now read-only.

fix: remove references to the default policy #281

Merged
mdelapenya merged 1 commit into elastic:master from 279-fix-default-config
Sep 10, 2020

Conversation

mdelapenya
Contributor

What does this PR do?

This PR updates the specs/scenarios to remove any reference to the default policy, because since #279 we create a policy for each scenario.

This change does not affect the tests: internally we use the policy ID, so everything should work as expected.
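The policy-per-scenario lifecycle introduced in #279 can be sketched as follows. This is a minimal in-memory illustration, not the actual implementation (which lives in `e2e/_suites/ingest-manager/fleet.go` and talks to the Kibana API); all names here are hypothetical:

```go
package main

import "fmt"

// Policy models a Fleet agent policy. The fields are illustrative,
// not the real Kibana API shape.
type Policy struct {
	ID   string
	Name string
}

var nextID int

// createPolicyForScenario stands in for the before-scenario hook that
// creates a fresh policy for each Gherkin scenario.
func createPolicyForScenario(scenario string) Policy {
	nextID++
	return Policy{
		ID:   fmt.Sprintf("policy-%d", nextID),
		Name: scenario + "-policy",
	}
}

// deletePolicy stands in for the after-scenario teardown, mirroring the
// "The policy was deleted" debug line visible in the build logs.
func deletePolicy(p Policy) {
	fmt.Printf("deleted %s (%s)\n", p.Name, p.ID)
}

func main() {
	for _, scenario := range []string{"agent_endpoint_integration", "fleet_mode"} {
		p := createPolicyForScenario(scenario)
		fmt.Printf("scenario %q enrolls agents into policy %s\n", scenario, p.ID)
		deletePolicy(p)
	}
}
```

Because each scenario holds on to the ID of its own freshly created policy, no step needs to reference a "default" policy by name, which is why the spec wording could be removed without affecting the tests.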

Why is it important?

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have run the Unit tests for the CLI, and they are passing locally
  • I have run the End-2-End tests for the suite I'm working on, and they are passing locally
  • I have noticed new Go dependencies (run make notice in the proper directory)

How to test this PR locally

Run:

$ cd e2e/_suites/ingest-manager
$ OP_LOG_LEVEL=TRACE DEVELOPER_MODE=true godog -t "agent_endpoint_integration"

Related issues

Internally, we were using the ID, so it should work as expected, but the
specs were not in sync
@mdelapenya mdelapenya self-assigned this Sep 10, 2020
@mdelapenya mdelapenya requested a review from a team September 10, 2020 11:22
@mdelapenya mdelapenya marked this pull request as ready for review September 10, 2020 11:23
@elasticmachine
Contributor

elasticmachine commented Sep 10, 2020

💔 Tests Failed



Build stats

  • Build Cause: [Started by user Manuel de la Peña, Replayed #2]

  • Start Time: 2020-09-10T17:29:10.427+0000

  • Duration: 25 min 37 sec

Test stats 🧪

Test Results
Failed 5
Passed 17
Skipped 0
Total 22

Test errors


  • Name: Initializing / End-To-End Tests / ingest-manager_agent_endpoint_integration / Changing an Agent policy is reflected in the Security App – Agent Endpoint Integration

    • Age: 1
    • Duration: 273.35626
    • Error Details: Step the policy will reflect the change in the Security App: No endpoint-security events where found for the agent in the 64beddf0-f38d-11ea-8496-019efe90795f policy
  • Name: Initializing / End-To-End Tests / ingest-manager_agent_endpoint_integration / Adding the Endpoint Integration to an Agent makes the host to show in Security App – Agent Endpoint Integration

    • Age: 3
    • Duration: 160.78186
    • Error Details: Step the host name is shown in the Administration view in the Security App as "online": The host b62876916b47 is not listed in the Administration view in the Security App
  • Name: Initializing / End-To-End Tests / ingest-manager_agent_endpoint_integration / Deploying an Endpoint makes policies to appear in the Security App – Agent Endpoint Integration

    • Age: 3
    • Duration: 149.00931
    • Error Details: Step the policy response will be shown in the Security App: The policy response is not listed as 'success' in the Administration view in the Security App yet
  • Name: Initializing / End-To-End Tests / ingest-manager_fleet_mode / Restarting the centos host with persistent agent restarts backend processes – Fleet Mode Agent

    • Age: 3
    • Duration: 262.1662
    • Error Details: Step the "metricbeat" process is in the "started" state on the host
  • Name: Initializing / End-To-End Tests / ingest-manager_stand_alone_mode / Deploying a stand-alone agent – Stand-alone Agent

    • Age: 3
    • Duration: 178.56544
    • Error Details: Step there is new data in the index from agent: Not enough hits in the index yet. Current: 0, Desired: 50

Steps errors


  • Name: Docker login

    • Description:

    • Duration: 0 min 19 sec

    • Start Time: 2020-09-10T17:34:13.223+0000

    • log

  • Name: Run functional tests for ingest-manager:agent_endpoint_integration

    • Description:

    • Duration: 18 min 39 sec

    • Start Time: 2020-09-10T17:34:37.854+0000

    • log

  • Name: Error signal

    • Description:

    • Duration: 0 min 0 sec

    • Start Time: 2020-09-10T17:52:17.360+0000

    • log

  • Name: General Build Step

    • Description: [2020-09-10T17:52:17.992Z] Archiving artifacts
      hudson.AbortException: script returned exit code 1

    • Duration: 0 min 0 sec

    • Start Time: 2020-09-10T17:52:17.986+0000

    • log

  • Name: Docker login

    • Description:

    • Duration: 0 min 18 sec

    • Start Time: 2020-09-10T17:34:20.671+0000

    • log

  • Name: Run functional tests for ingest-manager:stand_alone_mode

    • Description:

    • Duration: 8 min 47 sec

    • Start Time: 2020-09-10T17:34:45.215+0000

    • log

  • Name: Error signal

    • Description:

    • Duration: 0 min 0 sec

    • Start Time: 2020-09-10T17:42:32.581+0000

    • log

  • Name: General Build Step

    • Description: [2020-09-10T17:42:33.006Z] Archiving artifacts
      hudson.AbortException: script returned exit code 1

    • Duration: 0 min 0 sec

    • Start Time: 2020-09-10T17:42:33.001+0000

    • log

  • Name: Docker login

    • Description:

    • Duration: 0 min 19 sec

    • Start Time: 2020-09-10T17:34:25.924+0000

    • log

  • Name: Run functional tests for ingest-manager:fleet_mode

    • Description:

    • Duration: 19 min 42 sec

    • Start Time: 2020-09-10T17:35:02.639+0000

    • log

  • Name: Error signal

    • Description:

    • Duration: 0 min 0 sec

    • Start Time: 2020-09-10T17:53:44.852+0000

    • log

  • Name: General Build Step

    • Description: [2020-09-10T17:53:45.418Z] Archiving artifacts
      hudson.AbortException: script returned exit code 1

    • Duration: 0 min 0 sec

    • Start Time: 2020-09-10T17:53:45.411+0000

    • log

Log output

Expand to view the last 100 lines of log output

[2020-09-10T17:53:15.639Z] time="2020-09-10T17:53:15Z" level=debug msg="Docker compose executed." cmd="[exec -T debian-systemd elastic-agent enroll http://kibana:5601 RHkwbWVYUUJ6M1RuMEdvVlhmdGM6UFBNSXp0dGVTTW1WQm9DcDJ4bHJ1UQ== -f --insecure]" composeFilePaths="[/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/services/debian-systemd/docker-compose.yml]" env="map[centos_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-x86_64.rpm577415167 centos_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.rpm centos_systemdContainerName:ingest-manager_centos-systemd_centos-systemd_2 centos_systemdTag:latest debian_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-amd64.deb342723922 debian_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-amd64.deb debian_systemdContainerName:ingest-manager_debian-systemd_elastic-agent_1 debian_systemdTag:stretch kibanaConfigPath:/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/src/github.com/elastic/e2e-testing/e2e/_suites/ingest-manager/configurations/kibana.config.yml stackVersion:8.0.0-SNAPSHOT]" profile=ingest-manager
[2020-09-10T17:53:16.591Z] time="2020-09-10T17:53:16Z" level=debug msg="Docker compose executed." cmd="[exec -T debian-systemd systemctl start elastic-agent]" composeFilePaths="[/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/services/debian-systemd/docker-compose.yml]" env="map[centos_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-x86_64.rpm577415167 centos_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.rpm centos_systemdContainerName:ingest-manager_centos-systemd_centos-systemd_2 centos_systemdTag:latest debian_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-amd64.deb342723922 debian_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-amd64.deb debian_systemdContainerName:ingest-manager_debian-systemd_elastic-agent_1 debian_systemdTag:stretch kibanaConfigPath:/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/src/github.com/elastic/e2e-testing/e2e/_suites/ingest-manager/configurations/kibana.config.yml stackVersion:8.0.0-SNAPSHOT]" profile=ingest-manager
[2020-09-10T17:53:17.545Z] time="2020-09-10T17:53:17Z" level=debug msg="The token was deleted" tokenID=ee0069af-faf7-4b2a-9d44-8c12d07b4473
[2020-09-10T17:53:17.545Z] time="2020-09-10T17:53:17Z" level=debug msg="Token was revoked" token="RHkwbWVYUUJ6M1RuMEdvVlhmdGM6UFBNSXp0dGVTTW1WQm9DcDJ4bHJ1UQ==" tokenID=ee0069af-faf7-4b2a-9d44-8c12d07b4473
[2020-09-10T17:53:18.494Z] ingest-manager_package-registry_1 is up-to-date
[2020-09-10T17:53:18.494Z] ingest-manager_elasticsearch_1 is up-to-date
[2020-09-10T17:53:18.494Z] Recreating ingest-manager_debian-systemd_elastic-agent_1 ... 
[2020-09-10T17:53:18.494Z] ingest-manager_kibana_1 is up-to-date
[2020-09-10T17:53:20.435Z] 
Recreating ingest-manager_debian-systemd_elastic-agent_1 ... done
time="2020-09-10T17:53:20Z" level=debug msg="Docker compose executed." cmd="[up -d]" composeFilePaths="[/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/services/debian-systemd/docker-compose.yml]" env="map[centos_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-x86_64.rpm577415167 centos_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.rpm centos_systemdContainerName:ingest-manager_centos-systemd_centos-systemd_2 centos_systemdTag:latest debian_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-amd64.deb342723922 debian_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-amd64.deb debian_systemdContainerName:ingest-manager_debian-systemd_debian-systemd_2 debian_systemdTag:stretch kibanaConfigPath:/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/src/github.com/elastic/e2e-testing/e2e/_suites/ingest-manager/configurations/kibana.config.yml stackVersion:8.0.0-SNAPSHOT]" profile=ingest-manager
[2020-09-10T17:53:21.386Z] 
[2020-09-10T17:53:21.386Z] WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
[2020-09-10T17:53:21.386Z] 
[2020-09-10T17:53:21.386Z] Reading package lists...
[2020-09-10T17:53:21.386Z] Building dependency tree...
[2020-09-10T17:53:21.386Z] Reading state information...
[2020-09-10T17:53:21.386Z] The following NEW packages will be installed:
[2020-09-10T17:53:21.386Z]   elastic-agent
[2020-09-10T17:53:23.313Z] 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
[2020-09-10T17:53:23.313Z] Need to get 0 B/110 MB of archives.
[2020-09-10T17:53:23.313Z] After this operation, 130 MB of additional disk space will be used.
[2020-09-10T17:53:23.313Z] Get:1 /elastic-agent-8.0.0-SNAPSHOT-linux-amd64.deb elastic-agent amd64 8.0.0 [110 MB]
[2020-09-10T17:53:23.313Z] debconf: delaying package configuration, since apt-utils is not installed
[2020-09-10T17:53:23.313Z] Selecting previously unselected package elastic-agent.
[2020-09-10T17:53:23.313Z] (Reading database ... 
(Reading database ... 5%
(Reading database ... 10%
(Reading database ... 15%
(Reading database ... 20%
(Reading database ... 25%
(Reading database ... 30%
(Reading database ... 35%
(Reading database ... 40%
(Reading database ... 45%
(Reading database ... 50%
(Reading database ... 55%
(Reading database ... 60%
(Reading database ... 65%
(Reading database ... 70%
(Reading database ... 75%
(Reading database ... 80%
(Reading database ... 85%
(Reading database ... 90%
(Reading database ... 95%
(Reading database ... 100%
(Reading database ... 7400 files and directories currently installed.)
[2020-09-10T17:53:23.313Z] Preparing to unpack .../elastic-agent-8.0.0-SNAPSHOT-linux-amd64.deb ...
[2020-09-10T17:53:23.313Z] Unpacking elastic-agent (8.0.0) ...
[2020-09-10T17:53:24.267Z] Setting up elastic-agent (8.0.0) ...
[2020-09-10T17:53:24.531Z] Processing triggers for systemd (232-25+deb9u12) ...
[2020-09-10T17:53:24.532Z] time="2020-09-10T17:53:24Z" level=debug msg="Docker compose executed." cmd="[exec -T debian-systemd apt install /elastic-agent-8.0.0-SNAPSHOT-linux-amd64.deb -y]" composeFilePaths="[/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/services/debian-systemd/docker-compose.yml]" env="map[centos_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-x86_64.rpm577415167 centos_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.rpm centos_systemdContainerName:ingest-manager_centos-systemd_centos-systemd_2 centos_systemdTag:latest debian_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-amd64.deb342723922 debian_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-amd64.deb debian_systemdContainerName:ingest-manager_debian-systemd_debian-systemd_2 debian_systemdTag:stretch kibanaConfigPath:/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/src/github.com/elastic/e2e-testing/e2e/_suites/ingest-manager/configurations/kibana.config.yml stackVersion:8.0.0-SNAPSHOT]" profile=ingest-manager
[2020-09-10T17:53:25.482Z] Synchronizing state of elastic-agent.service with SysV service script with /lib/systemd/systemd-sysv-install.
[2020-09-10T17:53:25.482Z] Executing: /lib/systemd/systemd-sysv-install enable elastic-agent
[2020-09-10T17:53:25.745Z] time="2020-09-10T17:53:25Z" level=debug msg="Docker compose executed." cmd="[exec -T debian-systemd systemctl enable elastic-agent]" composeFilePaths="[/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/services/debian-systemd/docker-compose.yml]" env="map[centos_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-x86_64.rpm577415167 centos_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.rpm centos_systemdContainerName:ingest-manager_centos-systemd_centos-systemd_2 centos_systemdTag:latest debian_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-amd64.deb342723922 debian_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-amd64.deb debian_systemdContainerName:ingest-manager_debian-systemd_debian-systemd_2 debian_systemdTag:stretch kibanaConfigPath:/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/src/github.com/elastic/e2e-testing/e2e/_suites/ingest-manager/configurations/kibana.config.yml stackVersion:8.0.0-SNAPSHOT]" profile=ingest-manager
[2020-09-10T17:53:26.344Z] The Elastic Agent is currently in BETA and should not be used in production
[2020-09-10T17:53:26.922Z] 2020-09-10T17:53:26.623Z	DEBUG	kibana/client.go:170	Request method: POST, path: /api/ingest_manager/fleet/agents/enroll
[2020-09-10T17:53:26.922Z] fail to enroll: fail to execute request to Kibana: Status code: 401, Kibana returned an error: Unauthorized, message: [security_exception] missing authentication credentials for REST request [/_security/_authenticate], with { header={ WWW-Authenticate={ 0="ApiKey" & 1="Basic realm=\"security\" charset=\"UTF-8\"" } } }
[2020-09-10T17:53:26.922Z] time="2020-09-10T17:53:26Z" level=error msg="Could not execute command in container" command="[elastic-agent enroll http://kibana:5601 RHkwbWVYUUJ6M1RuMEdvVlhmdGM6UFBNSXp0dGVTTW1WQm9DcDJ4bHJ1UQ== -f --insecure]" error="Could not run compose file: [/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/services/debian-systemd/docker-compose.yml] - Local Docker compose exited abnormally whilst running docker-compose: [exec -T debian-systemd elastic-agent enroll http://kibana:5601 RHkwbWVYUUJ6M1RuMEdvVlhmdGM6UFBNSXp0dGVTTW1WQm9DcDJ4bHJ1UQ== -f --insecure]. exit status 1" service=debian-systemd
[2020-09-10T17:53:26.922Z] time="2020-09-10T17:53:26Z" level=error msg="Could not enroll the agent with the token" command="[elastic-agent enroll http://kibana:5601 RHkwbWVYUUJ6M1RuMEdvVlhmdGM6UFBNSXp0dGVTTW1WQm9DcDJ4bHJ1UQ== -f --insecure]" error="Could not run compose file: [/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/services/debian-systemd/docker-compose.yml] - Local Docker compose exited abnormally whilst running docker-compose: [exec -T debian-systemd elastic-agent enroll http://kibana:5601 RHkwbWVYUUJ6M1RuMEdvVlhmdGM6UFBNSXp0dGVTTW1WQm9DcDJ4bHJ1UQ== -f --insecure]. exit status 1" image=debian-systemd service=debian-systemd tag=stretch token="RHkwbWVYUUJ6M1RuMEdvVlhmdGM6UFBNSXp0dGVTTW1WQm9DcDJ4bHJ1UQ=="
[2020-09-10T17:53:26.922Z] time="2020-09-10T17:53:26Z" level=debug msg="As expected, it's not possible to enroll an agent with a revoked token" err="Could not run compose file: [/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/services/debian-systemd/docker-compose.yml] - Local Docker compose exited abnormally whilst running docker-compose: [exec -T debian-systemd elastic-agent enroll http://kibana:5601 RHkwbWVYUUJ6M1RuMEdvVlhmdGM6UFBNSXp0dGVTTW1WQm9DcDJ4bHJ1UQ== -f --insecure]. exit status 1" token="RHkwbWVYUUJ6M1RuMEdvVlhmdGM6UFBNSXp0dGVTTW1WQm9DcDJ4bHJ1UQ=="
[2020-09-10T17:53:26.922Z] time="2020-09-10T17:53:26Z" level=debug msg="Un-enrolling agent in Fleet" agentID=d5ff5c87-3fd0-47e7-a637-841f5e1061aa hostname=238db1fa13f2
[2020-09-10T17:53:28.851Z] time="2020-09-10T17:53:28Z" level=debug msg="Fleet agent was unenrolled" agentID=d5ff5c87-3fd0-47e7-a637-841f5e1061aa
[2020-09-10T17:53:29.113Z] Stopping ingest-manager_debian-systemd_debian-systemd_2 ... 
[2020-09-10T17:53:30.510Z] 
Stopping ingest-manager_debian-systemd_debian-systemd_2 ... done
Removing ingest-manager_debian-systemd_debian-systemd_2 ... 
[2020-09-10T17:53:30.511Z] 
Removing ingest-manager_debian-systemd_debian-systemd_2 ... done
Going to remove ingest-manager_debian-systemd_debian-systemd_2
[2020-09-10T17:53:30.511Z] time="2020-09-10T17:53:30Z" level=debug msg="Docker compose executed." cmd="[rm -fvs debian-systemd]" composeFilePaths="[/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/profiles/ingest-manager/docker-compose.yml /var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/services/debian-systemd/docker-compose.yml]" env="map[centos_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-x86_64.rpm577415167 centos_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.rpm centos_systemdContainerName:ingest-manager_centos-systemd_centos-systemd_2 centos_systemdTag:latest debian_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-amd64.deb342723922 debian_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-amd64.deb debian_systemdContainerName:ingest-manager_debian-systemd_debian-systemd_2 debian_systemdTag:stretch kibanaConfigPath:/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/src/github.com/elastic/e2e-testing/e2e/_suites/ingest-manager/configurations/kibana.config.yml stackVersion:8.0.0-SNAPSHOT]" profile=ingest-manager
[2020-09-10T17:53:30.511Z] time="2020-09-10T17:53:30Z" level=debug msg="Service removed from compose" profile=ingest-manager service=debian-systemd
[2020-09-10T17:53:30.511Z] time="2020-09-10T17:53:30Z" level=debug msg="The token was deleted" tokenID=ee0069af-faf7-4b2a-9d44-8c12d07b4473
[2020-09-10T17:53:30.511Z] time="2020-09-10T17:53:30Z" level=info msg="Integration deleted from the configuration" integration= packageConfigId= policyID=776980d0-f38e-11ea-a3b5-63f51cb7a85a version=
[2020-09-10T17:53:38.686Z] time="2020-09-10T17:53:37Z" level=debug msg="The policy was deleted" policyID=776980d0-f38e-11ea-a3b5-63f51cb7a85a
[2020-09-10T17:53:38.686Z] time="2020-09-10T17:53:37Z" level=debug msg="Destroying ingest-manager runtime dependencies"
[2020-09-10T17:53:38.686Z] Stopping ingest-manager_kibana_1           ... 
[2020-09-10T17:53:38.686Z] Stopping ingest-manager_elasticsearch_1    ... 
[2020-09-10T17:53:38.686Z] Stopping ingest-manager_package-registry_1 ... 
[2020-09-10T17:53:44.272Z] 
Stopping ingest-manager_kibana_1           ... done

Stopping ingest-manager_package-registry_1 ... done

Stopping ingest-manager_elasticsearch_1    ... done
Removing ingest-manager_kibana_1           ... 
[2020-09-10T17:53:44.272Z] Removing ingest-manager_elasticsearch_1    ... 
[2020-09-10T17:53:44.272Z] Removing ingest-manager_package-registry_1 ... 
[2020-09-10T17:53:44.535Z] 
Removing ingest-manager_package-registry_1 ... done

Removing ingest-manager_elasticsearch_1    ... done

Removing ingest-manager_kibana_1           ... done
Removing network ingest-manager_default
[2020-09-10T17:53:44.536Z] time="2020-09-10T17:53:44Z" level=debug msg="Docker compose executed." cmd="[down]" composeFilePaths="[/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/.op/compose/profiles/ingest-manager/docker-compose.yml]" env="map[centos_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-x86_64.rpm577415167 centos_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-x86_64.rpm centos_systemdContainerName:ingest-manager_centos-systemd_centos-systemd_2 centos_systemdTag:latest debian_systemdAgentBinarySrcPath:/tmp/elastic-agent-8.0.0-SNAPSHOT-amd64.deb342723922 debian_systemdAgentBinaryTargetPath:/elastic-agent-8.0.0-SNAPSHOT-linux-amd64.deb debian_systemdContainerName:ingest-manager_debian-systemd_debian-systemd_2 debian_systemdTag:stretch kibanaConfigPath:/var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281/src/github.com/elastic/e2e-testing/e2e/_suites/ingest-manager/configurations/kibana.config.yml stackVersion:8.0.0-SNAPSHOT]" profile=ingest-manager
[2020-09-10T17:53:44.536Z] time="2020-09-10T17:53:44Z" level=debug msg="Elastic Agent binary was removed." installer=centos-systemd path=/tmp/elastic-agent-8.0.0-SNAPSHOT-x86_64.rpm577415167
[2020-09-10T17:53:44.797Z] time="2020-09-10T17:53:44Z" level=debug msg="Elastic Agent binary was removed." installer=debian-systemd path=/tmp/elastic-agent-8.0.0-SNAPSHOT-amd64.deb342723922
[2020-09-10T17:53:44.797Z] <?xml version="1.0" encoding="UTF-8"?>
[2020-09-10T17:53:44.797Z] <testsuites name="main" tests="14" skipped="0" failures="1" errors="0" time="1083.35759059">
[2020-09-10T17:53:44.797Z]   <testsuite name="Fleet Mode Agent" tests="14" skipped="0" failures="1" errors="0" time="850.171798086">
[2020-09-10T17:53:44.797Z]     <testcase name="Deploying the centos agent" status="passed" time="32.016687055"></testcase>
[2020-09-10T17:53:44.797Z]     <testcase name="Deploying the debian agent" status="passed" time="41.158641045"></testcase>
[2020-09-10T17:53:44.797Z]     <testcase name="Starting the centos agent starts backend processes" status="passed" time="19.209090998"></testcase>
[2020-09-10T17:53:44.797Z]     <testcase name="Starting the debian agent starts backend processes" status="passed" time="21.26653532"></testcase>
[2020-09-10T17:53:44.797Z]     <testcase name="Stopping the centos agent stops backend processes" status="passed" time="11.267057452"></testcase>
[2020-09-10T17:53:44.797Z]     <testcase name="Stopping the debian agent stops backend processes" status="passed" time="12.53042321"></testcase>
[2020-09-10T17:53:44.797Z]     <testcase name="Restarting the centos host with persistent agent restarts backend processes" status="failed" time="262.166194064">
[2020-09-10T17:53:44.797Z]       <failure message="Step the &#34;filebeat&#34; process is in the &#34;started&#34; state on the host: filebeat process is not running in the container yet"></failure>
[2020-09-10T17:53:44.797Z]       <error message="Step the &#34;metricbeat&#34; process is in the &#34;started&#34; state on the host" type="skipped"></error>
[2020-09-10T17:53:44.797Z]     </testcase>
[2020-09-10T17:53:44.798Z]     <testcase name="Restarting the debian host with persistent agent restarts backend processes" status="passed" time="21.724326863"></testcase>
[2020-09-10T17:53:44.798Z]     <testcase name="Un-enrolling the centos agent" status="passed" time="12.309169177"></testcase>
[2020-09-10T17:53:44.798Z]     <testcase name="Un-enrolling the debian agent" status="passed" time="14.333326935"></testcase>
[2020-09-10T17:53:44.798Z]     <testcase name="Re-enrolling the centos agent" status="passed" time="34.417371518"></testcase>
[2020-09-10T17:53:44.798Z]     <testcase name="Re-enrolling the debian agent" status="passed" time="33.076565128"></testcase>
[2020-09-10T17:53:44.798Z]     <testcase name="Revoking the enrollment token for the centos agent" status="passed" time="31.625092804"></testcase>
[2020-09-10T17:53:44.798Z]     <testcase name="Revoking the enrollment token for the debian agent" status="passed" time="22.476525472"></testcase>
[2020-09-10T17:53:44.798Z]   </testsuite>
[2020-09-10T17:53:44.798Z] </testsuites>make: *** [functional-test] Error 1
[2020-09-10T17:53:44.798Z] Makefile:45: recipe for target 'functional-test' failed
[2020-09-10T17:53:44.798Z] + echo 'ERROR: functional-test failed'
[2020-09-10T17:53:44.798Z] ERROR: functional-test failed
[2020-09-10T17:53:44.798Z] + exit_status=1
[2020-09-10T17:53:44.798Z] + sed -e 's/^[ \t]*//; s#>.*failed$#>#g' outputs/TEST-ingest-manager-fleet_mode
[2020-09-10T17:53:44.798Z] + grep -E '^<.*>$'
[2020-09-10T17:53:44.798Z] + exit 1
[2020-09-10T17:53:44.867Z] Recording test results
[2020-09-10T17:53:45.418Z] Archiving artifacts
[2020-09-10T17:53:45.525Z] Failed in branch ingest-manager_fleet_mode
[2020-09-10T17:53:46.600Z] Stage "Release" skipped due to earlier failure(s)
[2020-09-10T17:53:46.924Z] Running on worker-1244230 in /var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-281
[2020-09-10T17:53:46.969Z] [INFO] getVaultSecret: Getting secrets
[2020-09-10T17:53:47.039Z] Masking supported pattern matches of $VAULT_ADDR or $VAULT_ROLE_ID or $VAULT_SECRET_ID
[2020-09-10T17:53:48.997Z] + chmod 755 generate-build-data.sh
[2020-09-10T17:53:48.997Z] + ./generate-build-data.sh https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/e2e-tests/e2e-testing-mbp/PR-281/ https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/e2e-tests/e2e-testing-mbp/PR-281/runs/3 FAILURE 1477139
[2020-09-10T17:53:48.997Z] INFO: curl https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/e2e-tests/e2e-testing-mbp/PR-281/runs/3/steps/?limit=10000 -o steps-info.json
[2020-09-10T17:53:58.359Z] INFO: curl https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/e2e-tests/e2e-testing-mbp/PR-281/runs/3/tests/?status=FAILED -o tests-errors.json
[2020-09-10T17:53:59.073Z] INFO: curl https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/e2e-tests/e2e-testing-mbp/PR-281/runs/3/log/ -o pipeline-log.txt

@EricDavisX
Contributor

Hi, thanks Manu - so the implication here is that each scenario runs in a separate policy, newly created for that test? That is certain to avoid the failure and clean-up problems we may have had; it's a good addition.

I see 2 areas to discuss / confirm:

  • this will require us to implement a new step syntax and supporting implementation to allow testing the change of an Agent from one policy to another, which is a core test the Endpoint team (or I) should write soon. If we could keep the parameterized "policy name" usage, that might be wise and save us more work later (if it is already implemented — it may not be; maybe it's not hard to fix/change later). This isn't a must-have; I just wanted us to think about it.

  • this also precludes us from testing the default policy itself, which is a unique case. I actually don't know if it's created programmatically at Fleet setup time or if it's hard-coded in any way. If it's at all hard-coded, we'd want to implement some new specific test that uses it, with the assurance that no test scenario or code changes it in any way (which should be easy enough). @ph can you or someone help confirm that (about the default policy creation at Fleet setup time)?

@mdelapenya
Contributor Author

mdelapenya commented Sep 10, 2020

Hi, thanks Manu - so the implication here is that each scenario runs in a separate policy, newly created for that test? That is certain to avoid the failure and clean-up problems we may have had; it's a good addition.

Yes, the policy (on master) / configuration (on 7.9.x) will be created before each scenario, so each agent will be enrolled with this new policy.

I see 2 areas to discuss / confirm:

  • this will require us to implement a new step syntax and supporting implementation to allow testing the change of an Agent from one policy to another, which is a core test the Endpoint team (or I) should write soon. If we could keep the parameterized "policy name" usage, that might be wise and save us more work later (if it is already implemented — it may not be; maybe it's not hard to fix/change later). This isn't a must-have; I just wanted us to think about it.

I'd suggest working on this at the moment we need it, not now.

  • this also precludes us from testing the default policy itself, which is a unique case. I actually don't know if it's created programmatically at Fleet setup time or if it's hard-coded in any way. If it's at all hard-coded, we'd want to implement some new specific test that uses it, with the assurance that no test scenario or code changes it in any way (which should be easy enough). @ph can you or someone help confirm that (about the default policy creation at Fleet setup time)?

It's created at the beginning of the test execution, in the beforeSuite hook (before any scenario). We have a Fleet setup step, which calls a Kibana URL for that; I guess that is what creates the default policy. I'd ask what the implications of using the default policy are, to understand whether we need specific scenarios for it. As of today, a created policy seems to be exactly the same as the default one (or at least I cannot distinguish any difference), so I'd treat the default like any other policy and see no reason not to use the created ones, unless we find a specific use case for the default policy.

@EricDavisX
Contributor

  • this also precludes us from testing the default policy itself, which is a unique case. I actually don't know if it's created programmatically at Fleet setup time or if it's hard-coded in any way. If it's at all hard-coded, we'd want to implement some new specific test that uses it, with the assurance that no test scenario or code changes it in any way (which should be easy enough). @ph can you or someone help confirm that (about the default policy creation at Fleet setup time)?

[The policy the test creates is] created at the beginning of the test execution, in the beforeSuite (before any scenario) hook. We have a Fleet setup, which calls a kibana URL for that. I guess it creates the default policy. I'd ask about what the implications of using the default policy are...

@ph or @jen-huang maybe - Manu's question is exactly what I would like to confirm. The policies are supposed to be the same, and currently are; if there is any need to test the 'default' policy specifically, we need to know now so we can create a new test case for it.

@ph
Contributor

ph commented Sep 10, 2020

@EricDavisX @mdelapenya What you are describing is exactly what happens behind the scenes: when setup is run, the default policy is created and the necessary packages are installed for the system integration. Testing the default policy would mean testing that logs and metrics are collected on the machine, and I think we should have a test for that, just for sanity.

Note, I would keep the "assertion" as minimal as possible, because the system integration is in flux at the moment and we are iterating to reduce the scope of that integration: reducing the metrics and logs we collect, and dealing with the differences between operating systems.

@mdelapenya
Contributor Author

I think we can merge this and move the discussion about default policies to another issue, as we already merged the use of a new policy for each scenario in #279.

@mdelapenya
Contributor Author

As discussed on Slack, I'm gonna merge this little one. Thanks!

@mdelapenya mdelapenya merged commit adc7f7a into elastic:master Sep 10, 2020
@mdelapenya mdelapenya deleted the 279-fix-default-config branch September 10, 2020 18:16
mdelapenya added a commit to mdelapenya/e2e-testing that referenced this pull request Sep 10, 2020
Internally, we were using the ID, so it should work as expected, but the
specs were not in sync
# Conflicts:
#	e2e/_suites/ingest-manager/features/agent_endpoint_integration.feature
#	e2e/_suites/ingest-manager/fleet.go
mdelapenya added a commit that referenced this pull request Sep 10, 2020
… | fix: remove references to the default policy (#281) backport for 7.9.x (#283)

* feat: support creating (and removing) a policy for each scenario (#279)

* feat: support creating a policy per scenario, not reusing default one

* chore: extract tear down code to afterScenario methods

* fix: move creation of the policy to the before scenario hook

* fix: use proper URL after bad copy&paste

* chore: add debug logs for removals
# Conflicts:
#	e2e/_suites/ingest-manager/fleet.go
#	e2e/_suites/ingest-manager/ingest-manager_test.go

* fix: remove references to the default policy (#281)

Internally, we were using the ID, so it should work as expected, but the
specs were not in sync
# Conflicts:
#	e2e/_suites/ingest-manager/features/agent_endpoint_integration.feature
#	e2e/_suites/ingest-manager/fleet.go

* chore: remove dependency added by mistake
@EricDavisX
Contributor

Agree - thanks. And it seems (still confirming) that we need not re-build any specific test for the Ingest Manager-provided 'pre-built' policy; I believe it is programmatically created and is like any other policy, so this change was quite harmless to test coverage (it just needed to be confirmed). Also, indeed, we can continue tests of the System Integration at full speed. I found we didn't have a ticket for that, so I added one: https://github.com/elastic/e2e-testing/issues/284
