This repository has been archived by the owner on Sep 17, 2024. It is now read-only.

feat: add first scenario for Fleet Server #900

Merged
merged 30 commits into from
Apr 19, 2021

Conversation

mdelapenya
Contributor

@mdelapenya mdelapenya commented Mar 16, 2021

What does this PR do?

This PR adds a feature file for Fleet Server, with one scenario checking that when an agent is deployed in Fleet Server mode, Fleet Server is enabled.

To allow that, we had to do some refactors, explained below:

  • We enabled Fleet Server in Kibana config: xpack.fleet.agents.fleetServerEnabled: true
  • We added a more robust way to capture the Default Policy. In the past, we simply took the first item in the policies list, as we were sure there was only one policy. Now with Fleet Server, a new policy is created, and the way to distinguish them is to filter by the is_default and is_default_fleet_server attributes in the policy.
  • We added a struct to handle the Fleet config for both modes: Fleet mode and Fleet Server mode. This struct holds the configuration for the connection to Elasticsearch and the policy used to enroll the Fleet Server agent. A pointer to this config is returned each time an agent is deployed to Fleet, and its internals differ for Fleet Server: it requests the default Fleet Server policy from Kibana and stores its ID in the struct. That field determines which flags are used when enrolling/installing the agent, so there is a flags() method returning the array of flags to use in the install/enroll commands (see the sketch after this list). Finally, this structure is passed to the different commands so it can be reused, especially the EnrollmentToken.
  • We abstracted the deployAgentWithInstaller method, so that it's reusable for existing scenarios, and the new one for fleet server.
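For illustration, here is a minimal sketch of what such a FleetConfig with a flags() method could look like; apart from flags(), the field names and the credential placeholder are assumptions, not the exact code in this PR:

package deploy

import "fmt"

// FleetConfig holds the connection details and the policy used to enroll an
// agent, for both Fleet mode and Fleet Server mode (illustrative sketch).
type FleetConfig struct {
	EnrollmentToken       string
	ElasticsearchURL      string
	ElasticsearchPort     int
	ElasticsearchPassword string
	FleetServerPolicyID   string // only populated in Fleet Server mode
}

// flags builds the extra arguments appended to the install/enroll commands.
func (cfg *FleetConfig) flags() []string {
	if cfg.FleetServerPolicyID != "" {
		// Bootstrap an agent that also runs Fleet Server.
		return []string{
			"--fleet-server-es", fmt.Sprintf("http://elastic:%s@%s:%d", cfg.ElasticsearchPassword, cfg.ElasticsearchURL, cfg.ElasticsearchPort),
			"--fleet-server-policy", cfg.FleetServerPolicyID,
		}
	}
	// Regular agent enrolling into Fleet.
	return []string{"--enrollment-token", cfg.EnrollmentToken}
}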

It's important to note that the final step in the new scenario is not finished: it returns a Pending error. This is because, at the moment of sending this PR, it's not clear to me how to check that Fleet Server has been enabled/deployed successfully.

Why is it important?

It brings the first scenario for Fleet Server, and it also improves code-base health through the refactors.

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have run the Unit tests for the CLI, and they are passing locally
  • I have run the End-2-End tests for the suite I'm working on, and they are passing locally
  • I have noticed new Go dependencies (run make notice in the proper directory)

Author's Checklist

  • @blakerouse could you help out with the assertions to perform here in the scenario?

How to test this PR locally

SUITE="fleet" TAGS="fleet_server && centos" TIMEOUT_FACTOR=3 LOG_LEVEL=TRACE DEVELOPER_MODE=true make -C e2e functional-test

Related issues

@mdelapenya mdelapenya self-assigned this Mar 16, 2021
@mdelapenya mdelapenya requested review from a team, blakerouse and michalpristas March 16, 2021 10:15
@mdelapenya mdelapenya added the Team:Elastic-Agent label Mar 16, 2021
@elasticmachine
Contributor

elasticmachine commented Mar 16, 2021

💔 Tests Failed



Build stats

  • Build Cause: Pull request #900 updated

  • Start Time: 2021-04-19T18:01:04.662+0000

  • Duration: 46 min 36 sec

  • Commit: a557f88

Test stats 🧪

Test Results
Failed 5
Passed 118
Skipped 0
Total 123

Trends 🧪

Image of Build Times

Image of Tests

Test errors 5


Initializing / End-To-End Tests / ubuntu-18.04_fleet_fleet_mode_agent / [empty] – TEST-fleet.xml
  • no error details
  • Stacktrace:

     Test report file /var/lib/jenkins/workspace/PR-900-17-f824fc69-41b1-464b-ad04-e7fca0edaec5/src/github.com/elastic/e2e-testing/outputs/TEST-fleet.xml was length 0 
    

Initializing / End-To-End Tests / ubuntu-18.04_fleet_backend_processes / [empty] – TEST-fleet.xml
  • no error details
  • Stacktrace:

     Test report file /var/lib/jenkins/workspace/PR-900-17-3d90a921-ff4b-46af-b5d0-92dd0f023e65/src/github.com/elastic/e2e-testing/outputs/TEST-fleet.xml was length 0 
    

Initializing / End-To-End Tests / ubuntu-18.04_fleet_fleet_server / [empty] – TEST-fleet.xml
  • no error details
  • Stacktrace:

     Test report file /var/lib/jenkins/workspace/PR-900-17-1f6fe263-217c-4e2e-9d0c-3c02dad3a393/src/github.com/elastic/e2e-testing/outputs/TEST-fleet.xml was length 0 
    

Initializing / End-To-End Tests / ubuntu-18.04_fleet_agent_endpoint_integration / [empty] – TEST-fleet.xml
  • no error details
  • Stacktrace:

     Test report file /var/lib/jenkins/workspace/PR-900-17-3468503e-dd37-4b6e-b21a-8b1f70e24ac8/src/github.com/elastic/e2e-testing/outputs/TEST-fleet.xml was length 0 
    

Initializing / End-To-End Tests / ubuntu-18.04_fleet_stand_alone_agent / [empty] – TEST-fleet.xml
  • no error details
  • Stacktrace:

     Test report file /var/lib/jenkins/workspace/PR-900-17-805d70bb-08ab-4dd4-9b9a-d3f7fa772bd0/src/github.com/elastic/e2e-testing/outputs/TEST-fleet.xml was length 0 
    

Steps errors 5


Run functional tests for fleet:fleet_mode_agent && ~@nightly && ~debian
  • Took 6 min 13 sec
  • Description: .ci/scripts/functional-test.sh "fleet" "fleet_mode_agent && ~@nightly && ~debian" "8.0.0-SNAPSHOT" "8.0.0-SNAPSHOT"
Run functional tests for fleet:fleet_server && ~@nightly && ~debian
  • Took 6 min 7 sec
  • Description: .ci/scripts/functional-test.sh "fleet" "fleet_server && ~@nightly && ~debian" "8.0.0-SNAPSHOT" "8.0.0-SNAPSHOT"
Run functional tests for fleet:agent_endpoint_integration && ~@nightly && ~debian
  • Took 6 min 12 sec
  • Description: .ci/scripts/functional-test.sh "fleet" "agent_endpoint_integration && ~@nightly && ~debian" "8.0.0-SNAPSHOT" "8.0.0-SNAPSHOT"
Run functional tests for fleet:stand_alone_agent && ~@nightly && ~ubi8
  • Took 6 min 11 sec
  • Description: .ci/scripts/functional-test.sh "fleet" "stand_alone_agent && ~@nightly && ~ubi8" "8.0.0-SNAPSHOT" "8.0.0-SNAPSHOT"
Run functional tests for fleet:backend_processes && ~@nightly && ~debian
  • Took 6 min 13 sec
  • Description: .ci/scripts/functional-test.sh "fleet" "backend_processes && ~@nightly && ~debian" "8.0.0-SNAPSHOT" "8.0.0-SNAPSHOT"

Log output

Last 100 lines of log output:

[2021-04-19T18:45:21.342Z] [Checks API] No suitable checks publisher found.
[2021-04-19T18:45:21.362Z] Archiving artifacts
[2021-04-19T18:45:21.403Z] Running in /var/lib/jenkins/workspace/PR-900-17-1ed5655b-ee47-4d87-9df1-cae086da353c/src/github.com/elastic/e2e-testing
[2021-04-19T18:45:21.706Z] + go clean -modcache
[2021-04-19T18:45:22.351Z] time="2021-04-19T18:45:21Z" level=debug msg="Response information" hits=4 status="200 OK" took=2
[2021-04-19T18:45:22.351Z] time="2021-04-19T18:45:21Z" level=warning msg="Waiting for more hits in the index" currentHits=4 desiredHits=5 elapsedTime=38.090354771s index=metricbeat-8.0.0-mysql-mysql-5.7.12-oj6yo5nu retry=12
[2021-04-19T18:45:26.910Z] {"level":"debug","time":"2021-04-19T18:45:26Z","message":"sent request with 0 transactions, 2 spans, 0 errors, 0 metricsets"}
[2021-04-19T18:45:27.673Z] time="2021-04-19T18:45:26Z" level=debug msg="Response information" hits=4 status="200 OK" took=2
[2021-04-19T18:45:27.673Z] time="2021-04-19T18:45:26Z" level=warning msg="Waiting for more hits in the index" currentHits=4 desiredHits=5 elapsedTime=43.25124702s index=metricbeat-8.0.0-mysql-mysql-5.7.12-oj6yo5nu retry=13
[2021-04-19T18:45:28.306Z] {"level":"debug","time":"2021-04-19T18:45:27Z","message":"gathering metrics"}
[2021-04-19T18:45:31.003Z] time="2021-04-19T18:45:30Z" level=debug msg="Response information" hits=5 status="200 OK" took=3
[2021-04-19T18:45:31.003Z] time="2021-04-19T18:45:30Z" level=info msg="Hits number satisfied" currentHits=5 desiredHits=5 elapsedTime=47.028299884s retries=14
[2021-04-19T18:45:31.003Z] time="2021-04-19T18:45:30Z" level=debug msg="Response information" hits=5 status="200 OK" took=2
[2021-04-19T18:45:31.003Z] time="2021-04-19T18:45:30Z" level=info msg="Hits number satisfied" currentHits=5 desiredHits=5 elapsedTime=8.059593ms retries=1
[2021-04-19T18:45:31.269Z] Stopping metricbeat_metricbeat_1 ... 
[2021-04-19T18:45:31.850Z] 
Stopping metricbeat_metricbeat_1 ... done
Removing metricbeat_metricbeat_1 ... 
[2021-04-19T18:45:31.850Z] 
Removing metricbeat_metricbeat_1 ... done
Going to remove metricbeat_metricbeat_1
[2021-04-19T18:45:31.850Z] time="2021-04-19T18:45:31Z" level=debug msg="Docker compose executed." cmd="[rm -fvs metricbeat]" composeFilePaths="[/var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/profiles/metricbeat/docker-compose.yml /var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/services/metricbeat/docker-compose.yml /var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/services/mysql/docker-compose.yml]" env="map[BEAT_STRICT_PERMS:false MYSQL_PATH:/var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/services/mysql MYSQL_VARIANT:mysql MYSQL_VERSION:5.7.12 indexName:metricbeat-8.0.0-mysql-mysql-5.7.12-oj6yo5nu logLevel:debug metricbeatConfigFile:/var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/services/mysql/_meta/config.yml metricbeatDockerNamespace:beats metricbeatPlatform:linux/amd64 metricbeatTag:8.0.0-SNAPSHOT mysqlTag:5.7.12 serviceName:mysql stackVersion:8.0.0-SNAPSHOT]" profile=metricbeat
[2021-04-19T18:45:31.850Z] time="2021-04-19T18:45:31Z" level=debug msg="Service removed from compose" profile=metricbeat service=metricbeat
[2021-04-19T18:45:31.850Z] {"level":"debug","time":"2021-04-19T18:45:31Z","message":"sent request with 0 transactions, 6 spans, 0 errors, 0 metricsets"}
[2021-04-19T18:45:32.429Z] Stopping metricbeat_mysql_1 ... 
[2021-04-19T18:45:34.388Z] 
Stopping metricbeat_mysql_1 ... done
Removing metricbeat_mysql_1 ... 
[2021-04-19T18:45:34.388Z] 
Removing metricbeat_mysql_1 ... done
Going to remove metricbeat_mysql_1
[2021-04-19T18:45:34.389Z] time="2021-04-19T18:45:34Z" level=debug msg="Docker compose executed." cmd="[rm -fvs mysql]" composeFilePaths="[/var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/profiles/metricbeat/docker-compose.yml /var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/services/metricbeat/docker-compose.yml /var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/services/mysql/docker-compose.yml]" env="map[BEAT_STRICT_PERMS:false MYSQL_PATH:/var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/services/mysql MYSQL_VARIANT:mysql MYSQL_VERSION:5.7.12 indexName:metricbeat-8.0.0-mysql-mysql-5.7.12-oj6yo5nu logLevel:debug metricbeatConfigFile:/var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/services/mysql/_meta/config.yml metricbeatDockerNamespace:beats metricbeatPlatform:linux/amd64 metricbeatTag:8.0.0-SNAPSHOT mysqlTag:5.7.12 serviceName:mysql stackVersion:8.0.0-SNAPSHOT]" profile=metricbeat
[2021-04-19T18:45:34.389Z] time="2021-04-19T18:45:34Z" level=debug msg="Service removed from compose" profile=metricbeat service=mysql
[2021-04-19T18:45:34.389Z] time="2021-04-19T18:45:34Z" level=debug msg="Index deleted using Elasticsearch Go client" indexName=metricbeat-8.0.0-mysql-mysql-5.7.12-oj6yo5nu status="400 Bad Request"
[2021-04-19T18:45:34.389Z] time="2021-04-19T18:45:34Z" level=debug msg="Index Alias deleted using Elasticsearch Go client" indexAlias=metricbeat-8.0.0-mysql-mysql-5.7.12-oj6yo5nu status="400 Bad Request"
[2021-04-19T18:45:34.665Z] {"level":"debug","time":"2021-04-19T18:45:34Z","message":"sent request with 1 transaction, 2 spans, 0 errors, 0 metricsets"}
[2021-04-19T18:45:35.245Z] Stopping metricbeat_elasticsearch_1 ... 
[2021-04-19T18:45:36.642Z] 
Stopping metricbeat_elasticsearch_1 ... done
Removing metricbeat_elasticsearch_1 ... 
[2021-04-19T18:45:36.643Z] 
Removing metricbeat_elasticsearch_1 ... done
Removing network metricbeat_default
[2021-04-19T18:45:36.907Z] time="2021-04-19T18:45:36Z" level=debug msg="Docker compose executed." cmd="[down --remove-orphans]" composeFilePaths="[/var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/profiles/metricbeat/docker-compose.yml]" env="map[BEAT_STRICT_PERMS:false MYSQL_PATH:/var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/services/mysql MYSQL_VARIANT:mysql MYSQL_VERSION:5.7.12 indexName:metricbeat-8.0.0-mysql-mysql-5.7.12-oj6yo5nu logLevel:debug metricbeatConfigFile:/var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/.op/compose/services/mysql/_meta/config.yml metricbeatDockerNamespace:beats metricbeatPlatform:linux/amd64 metricbeatTag:8.0.0-SNAPSHOT mysqlTag:5.7.12 serviceName:mysql stackVersion:8.0.0-SNAPSHOT]" profile=metricbeat
[2021-04-19T18:45:36.907Z] {"level":"debug","time":"2021-04-19T18:45:36Z","message":"sent request with 1 transaction, 2 spans, 0 errors, 0 metricsets"}
[2021-04-19T18:45:37.071Z] [INFO] Stopping Filebeat Docker container
[2021-04-19T18:45:37.360Z] + docker exec -t 8905e0256f9b536ba7afe23e7832dcaf3eec08baf45efb6cfda683d24998aecb chmod -R ugo+rw /output
[2021-04-19T18:45:37.940Z] + docker stop --time 30 8905e0256f9b536ba7afe23e7832dcaf3eec08baf45efb6cfda683d24998aecb
[2021-04-19T18:45:38.206Z] 8905e0256f9b536ba7afe23e7832dcaf3eec08baf45efb6cfda683d24998aecb
[2021-04-19T18:45:38.230Z] Archiving artifacts
[2021-04-19T18:45:38.347Z] {"level":"debug","time":"2021-04-19T18:45:38Z","message":"sent request with 0 transactions, 0 spans, 0 errors, 6 metricsets"}
[2021-04-19T18:45:39.119Z] Recording test results
[2021-04-19T18:45:39.461Z] [Checks API] No suitable checks publisher found.
[2021-04-19T18:45:39.485Z] Archiving artifacts
[2021-04-19T18:45:39.537Z] Running in /var/lib/jenkins/workspace/PR-900-17-27f554c6-ce91-4b9f-8eeb-34700b169e2c/src/github.com/elastic/e2e-testing
[2021-04-19T18:45:39.856Z] + go clean -modcache
[2021-04-19T18:46:00.608Z] {"level":"debug","time":"2021-04-19T18:45:57Z","message":"gathering metrics"}
[2021-04-19T18:46:07.225Z] Found orphan containers (metricbeat_haproxy_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[2021-04-19T18:46:07.226Z] metricbeat_elasticsearch_1 is up-to-date
[2021-04-19T18:46:07.226Z] Creating metricbeat_metricbeat_1 ... 
[2021-04-19T18:46:08.201Z] 
Creating metricbeat_metricbeat_1 ... done
{"level":"debug","time":"2021-04-19T18:46:08Z","message":"sent request with 0 transactions, 0 spans, 0 errors, 1 metricset"}
[2021-04-19T18:46:08.201Z] time="2021-04-19T18:46:08Z" level=debug msg="Docker compose executed." cmd="[up -d]" composeFilePaths="[/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/profiles/metricbeat/docker-compose.yml /var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/metricbeat/docker-compose.yml]" env="map[BEAT_STRICT_PERMS:false HAPROXY_PATH:/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/haproxy HAPROXY_VERSION:1.6.15 haproxyTag:1.6.15 indexName:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv logLevel:debug metricbeatConfigFile:/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/haproxy/_meta/config.yml metricbeatDockerNamespace:beats metricbeatPlatform:linux/amd64 metricbeatTag:8.0.0-SNAPSHOT serviceName:haproxy stackVersion:8.0.0-SNAPSHOT]" profile=metricbeat
[2021-04-19T18:46:08.202Z] time="2021-04-19T18:46:08Z" level=info msg="Metricbeat is running configured for the service" metricbeatVersion=8.0.0-SNAPSHOT service=haproxy serviceVersion=1.6.15
[2021-04-19T18:46:08.202Z] time="2021-04-19T18:46:08Z" level=warning msg="There was an error executing the query" desiredHits=5 elapsedTime=37.295325ms error="Error getting response from Elasticsearch. Status: 404 Not Found, ResponseError: map[error:map[index:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv index_uuid:_na_ reason:no such index [metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv] resource.id:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv resource.type:index_or_alias root_cause:[map[index:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv index_uuid:_na_ reason:no such index [metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv] resource.id:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv resource.type:index_or_alias type:index_not_found_exception]] type:index_not_found_exception] status:404]" index=metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv retry=1
[2021-04-19T18:46:08.781Z] time="2021-04-19T18:46:08Z" level=warning msg="There was an error executing the query" desiredHits=5 elapsedTime=639.651745ms error="Error getting response from Elasticsearch. Status: 404 Not Found, ResponseError: map[error:map[index:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv index_uuid:_na_ reason:no such index [metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv] resource.id:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv resource.type:index_or_alias root_cause:[map[index:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv index_uuid:_na_ reason:no such index [metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv] resource.id:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv resource.type:index_or_alias type:index_not_found_exception]] type:index_not_found_exception] status:404]" index=metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv retry=2
[2021-04-19T18:46:10.184Z] time="2021-04-19T18:46:09Z" level=warning msg="There was an error executing the query" desiredHits=5 elapsedTime=1.667657971s error="Error getting response from Elasticsearch. Status: 404 Not Found, ResponseError: map[error:map[index:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv index_uuid:_na_ reason:no such index [metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv] resource.id:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv resource.type:index_or_alias root_cause:[map[index:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv index_uuid:_na_ reason:no such index [metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv] resource.id:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv resource.type:index_or_alias type:index_not_found_exception]] type:index_not_found_exception] status:404]" index=metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv retry=3
[2021-04-19T18:46:11.136Z] time="2021-04-19T18:46:10Z" level=debug msg="Response information" hits=0 status="200 OK" took=1
[2021-04-19T18:46:11.136Z] time="2021-04-19T18:46:10Z" level=warning msg="Waiting for more hits in the index" currentHits=0 desiredHits=5 elapsedTime=2.730419935s index=metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv retry=4
[2021-04-19T18:46:13.697Z] time="2021-04-19T18:46:13Z" level=debug msg="Response information" hits=0 status="200 OK" took=1
[2021-04-19T18:46:13.698Z] time="2021-04-19T18:46:13Z" level=warning msg="Waiting for more hits in the index" currentHits=0 desiredHits=5 elapsedTime=5.37024719s index=metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv retry=5
[2021-04-19T18:46:19.014Z] {"level":"debug","time":"2021-04-19T18:46:18Z","message":"sent request with 0 transactions, 7 spans, 0 errors, 0 metricsets"}
[2021-04-19T18:46:19.280Z] time="2021-04-19T18:46:19Z" level=debug msg="Response information" hits=10 status="200 OK" took=2
[2021-04-19T18:46:19.280Z] time="2021-04-19T18:46:19Z" level=info msg="Hits number satisfied" currentHits=10 desiredHits=5 elapsedTime=10.914608188s retries=6
[2021-04-19T18:46:19.280Z] time="2021-04-19T18:46:19Z" level=debug msg="Response information" hits=10 status="200 OK" took=2
[2021-04-19T18:46:19.280Z] time="2021-04-19T18:46:19Z" level=info msg="Hits number satisfied" currentHits=10 desiredHits=5 elapsedTime=8.46841ms retries=1
[2021-04-19T18:46:19.859Z] Stopping metricbeat_metricbeat_1 ... 
[2021-04-19T18:46:20.125Z] 
Stopping metricbeat_metricbeat_1 ... done
Removing metricbeat_metricbeat_1 ... 
[2021-04-19T18:46:20.390Z] 
Removing metricbeat_metricbeat_1 ... done
Going to remove metricbeat_metricbeat_1
[2021-04-19T18:46:20.390Z] time="2021-04-19T18:46:20Z" level=debug msg="Docker compose executed." cmd="[rm -fvs metricbeat]" composeFilePaths="[/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/profiles/metricbeat/docker-compose.yml /var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/metricbeat/docker-compose.yml /var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/haproxy/docker-compose.yml]" env="map[BEAT_STRICT_PERMS:false HAPROXY_PATH:/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/haproxy HAPROXY_VERSION:1.6.15 haproxyTag:1.6.15 indexName:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv logLevel:debug metricbeatConfigFile:/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/haproxy/_meta/config.yml metricbeatDockerNamespace:beats metricbeatPlatform:linux/amd64 metricbeatTag:8.0.0-SNAPSHOT serviceName:haproxy stackVersion:8.0.0-SNAPSHOT]" profile=metricbeat
[2021-04-19T18:46:20.391Z] time="2021-04-19T18:46:20Z" level=debug msg="Service removed from compose" profile=metricbeat service=metricbeat
[2021-04-19T18:46:20.967Z] Stopping metricbeat_haproxy_1 ... 
[2021-04-19T18:46:29.159Z] {"level":"debug","time":"2021-04-19T18:46:27Z","message":"gathering metrics"}
[2021-04-19T18:46:29.159Z] {"level":"debug","time":"2021-04-19T18:46:29Z","message":"sent request with 0 transactions, 4 spans, 0 errors, 1 metricset"}
[2021-04-19T18:46:32.503Z] 
Stopping metricbeat_haproxy_1 ... done
Removing metricbeat_haproxy_1 ... 
[2021-04-19T18:46:32.504Z] 
Removing metricbeat_haproxy_1 ... done
Going to remove metricbeat_haproxy_1
[2021-04-19T18:46:32.504Z] time="2021-04-19T18:46:31Z" level=debug msg="Docker compose executed." cmd="[rm -fvs haproxy]" composeFilePaths="[/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/profiles/metricbeat/docker-compose.yml /var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/metricbeat/docker-compose.yml /var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/haproxy/docker-compose.yml]" env="map[BEAT_STRICT_PERMS:false HAPROXY_PATH:/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/haproxy HAPROXY_VERSION:1.6.15 haproxyTag:1.6.15 indexName:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv logLevel:debug metricbeatConfigFile:/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/haproxy/_meta/config.yml metricbeatDockerNamespace:beats metricbeatPlatform:linux/amd64 metricbeatTag:8.0.0-SNAPSHOT serviceName:haproxy stackVersion:8.0.0-SNAPSHOT]" profile=metricbeat
[2021-04-19T18:46:32.504Z] time="2021-04-19T18:46:31Z" level=debug msg="Service removed from compose" profile=metricbeat service=haproxy
[2021-04-19T18:46:32.504Z] time="2021-04-19T18:46:31Z" level=debug msg="Index deleted using Elasticsearch Go client" indexName=metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv status="400 Bad Request"
[2021-04-19T18:46:32.504Z] time="2021-04-19T18:46:31Z" level=debug msg="Index Alias deleted using Elasticsearch Go client" indexAlias=metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv status="400 Bad Request"
[2021-04-19T18:46:32.504Z] {"level":"debug","time":"2021-04-19T18:46:32Z","message":"sent request with 1 transaction, 2 spans, 0 errors, 0 metricsets"}
[2021-04-19T18:46:32.766Z] Stopping metricbeat_elasticsearch_1 ... 
[2021-04-19T18:46:33.718Z] 
Stopping metricbeat_elasticsearch_1 ... done
Removing metricbeat_elasticsearch_1 ... 
[2021-04-19T18:46:33.718Z] 
Removing metricbeat_elasticsearch_1 ... done
Removing network metricbeat_default
[2021-04-19T18:46:33.982Z] time="2021-04-19T18:46:33Z" level=debug msg="Docker compose executed." cmd="[down --remove-orphans]" composeFilePaths="[/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/profiles/metricbeat/docker-compose.yml]" env="map[BEAT_STRICT_PERMS:false HAPROXY_PATH:/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/haproxy HAPROXY_VERSION:1.6.15 haproxyTag:1.6.15 indexName:metricbeat-8.0.0-haproxy-1.6.15-dlrugqvv logLevel:debug metricbeatConfigFile:/var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/.op/compose/services/haproxy/_meta/config.yml metricbeatDockerNamespace:beats metricbeatPlatform:linux/amd64 metricbeatTag:8.0.0-SNAPSHOT serviceName:haproxy stackVersion:8.0.0-SNAPSHOT]" profile=metricbeat
[2021-04-19T18:46:34.246Z] {"level":"debug","time":"2021-04-19T18:46:34Z","message":"sent request with 1 transaction, 2 spans, 0 errors, 0 metricsets"}
[2021-04-19T18:46:34.356Z] [INFO] Stopping Filebeat Docker container
[2021-04-19T18:46:34.644Z] + docker exec -t a3ec28e907f9673c51fc4bd7b291c1eb714142dbc425b73db40db8dc2dc66377 chmod -R ugo+rw /output
[2021-04-19T18:46:35.247Z] + docker stop --time 30 a3ec28e907f9673c51fc4bd7b291c1eb714142dbc425b73db40db8dc2dc66377
[2021-04-19T18:46:35.512Z] a3ec28e907f9673c51fc4bd7b291c1eb714142dbc425b73db40db8dc2dc66377
[2021-04-19T18:46:35.534Z] Archiving artifacts
[2021-04-19T18:46:36.440Z] Recording test results
[2021-04-19T18:46:36.789Z] [Checks API] No suitable checks publisher found.
[2021-04-19T18:46:36.816Z] Archiving artifacts
[2021-04-19T18:46:36.866Z] Running in /var/lib/jenkins/workspace/PR-900-17-2a8d7023-ebff-468e-8280-0f92962b2806/src/github.com/elastic/e2e-testing
[2021-04-19T18:46:37.170Z] + go clean -modcache
[2021-04-19T18:46:39.491Z] Stage "Release" skipped due to earlier failure(s)
[2021-04-19T18:46:40.044Z] Running on Jenkins in /var/lib/jenkins/workspace/e2e-tests_e2e-testing-mbp_PR-900
[2021-04-19T18:46:40.115Z] [INFO] getVaultSecret: Getting secrets
[2021-04-19T18:46:40.236Z] Masking supported pattern matches of $VAULT_ADDR or $VAULT_ROLE_ID or $VAULT_SECRET_ID
[2021-04-19T18:46:41.180Z] + chmod 755 generate-build-data.sh
[2021-04-19T18:46:41.180Z] + ./generate-build-data.sh https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/e2e-tests/e2e-testing-mbp/PR-900/ https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/e2e-tests/e2e-testing-mbp/PR-900/runs/17 FAILURE 2736250
[2021-04-19T18:46:41.431Z] INFO: curl https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/e2e-tests/e2e-testing-mbp/PR-900/runs/17/steps/?limit=10000 -o steps-info.json

🐛 Flaky test report

❕ There are test failures but not known flaky tests.


Test stats 🧪

Test Results
Failed 5
Passed 118
Skipped 0
Total 123

Genuine test errors 5

💔 There are test failures but not known flaky tests, most likely a genuine test failure.

  • Name: Initializing / End-To-End Tests / ubuntu-18.04_fleet_fleet_mode_agent / [empty] – TEST-fleet.xml
  • Name: Initializing / End-To-End Tests / ubuntu-18.04_fleet_backend_processes / [empty] – TEST-fleet.xml
  • Name: Initializing / End-To-End Tests / ubuntu-18.04_fleet_fleet_server / [empty] – TEST-fleet.xml
  • Name: Initializing / End-To-End Tests / ubuntu-18.04_fleet_agent_endpoint_integration / [empty] – TEST-fleet.xml
  • Name: Initializing / End-To-End Tests / ubuntu-18.04_fleet_stand_alone_agent / [empty] – TEST-fleet.xml

@start-fleet-server
Scenario Outline: Deploying the <os> fleet-server agent
When a "<os>" agent is deployed to Fleet with "tar" installer in fleet-server mode
Then Fleet server is enabled
Contributor Author

@mdelapenya mdelapenya Mar 16, 2021

@EricDavisX @blakerouse I wrote this Then clause, but I'd like to know if there is another assertion that should be done here. Something like: the elastic-agent process is started, or the Fleet app in Kibana shows "FooBar" in the Fleet page, or Elasticsearch contains THIS doc in THAT index. Preferably queried via an API call.

Could you help me here in writing the right expected behaviour?


I am not too familiar with the structure of these files so I might not be fully understanding the context.

I believe you're asking if it's possible to know whether the Fleet Server is running correctly. The simplest way is to check that the Agent is reported Healthy in Kibana. That might seem too simple, but the only way for the Agent running a Fleet Server to show as healthy in Kibana is if it can communicate with its local Fleet Server and that Fleet Server can write to Elasticsearch.
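As a rough sketch, that "reported healthy in Kibana" assertion could be queried via the Kibana Fleet agents API; the endpoint path, response shape and status value below are assumptions based on the 7.x Fleet API, not code from this PR, and authentication is omitted for brevity:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// agentsResponse mirrors only the part of the Kibana Fleet agents listing we care about.
type agentsResponse struct {
	List []struct {
		Status string `json:"status"` // e.g. "online"
	} `json:"list"`
}

// anyAgentOnline asks Kibana for the enrolled agents and reports whether at
// least one of them is online, which implies its Fleet Server path works.
// Auth and the kbn-xsrf header are omitted for brevity.
func anyAgentOnline(kibanaBaseURL string) (bool, error) {
	resp, err := http.Get(kibanaBaseURL + "/api/fleet/agents")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var agents agentsResponse
	if err := json.NewDecoder(resp.Body).Decode(&agents); err != nil {
		return false, err
	}
	for _, a := range agents.List {
		if a.Status == "online" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := anyAgentOnline("http://localhost:5601")
	fmt.Println("an agent is online:", ok, err)
}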

Contributor Author

Never mind about the code structure yet, I'm still interested in the behavior of the product without considering internal details/implementations.

With that in mind:

  • what ES query should we write to verify that?
  • is it enough to check the healthy status for the promoted-to-server agent?

Contributor

The check can be the same as for all Agents, that it is listed as 'healthy' in the Agents API list call. A secondary check could be to assess if the Fleet Server process is running on the host, as noted in this example:

edavis-mbp:elastic-agent-8.0.0-SNAPSHOT-darwin-x86_64-infra-build edavis$ ps ax | grep elastic
SNAPSHOT-darwin-x86_64/fleet-server --agent-mode -E logging.level=info -E http.enabled=true -E http.host=unix:///Library/Elastic/Agent/data/tmp/default/fleet-server/fleet-server.sock -E logging.json=true -E logging.ecs=true -E logging.files.path=/Library/Elastic/Agent/data/elastic-agent-53d75c/logs/default -E logging.files.name=fleet-server-json.log -E logging.files.keepfiles=7 -E logging.files.permission=0640 -E logging.files.interval=1h -E path.data=/Library/Elastic/Agent/data/elastic-agent-53d75c/run/default/fleet-server--8.0.0-SNAPSHOT

This example is from macOS, but the process name is the same.

@@ -374,8 +375,8 @@ func (i *TARPackage) Preinstall() error {

 	// simplify layout
 	cmds := [][]string{
-		[]string{"rm", "-fr", "/elastic-agent"},
-		[]string{"mv", fmt.Sprintf("/%s-%s-%s-%s", i.artifact, i.version, i.OS, i.arch), "/elastic-agent"},
+		{"rm", "-fr", "/elastic-agent"},
Contributor Author

Removed some leftovers: the type is automatically inferred by the Go compiler.

@EricDavisX
Contributor

This is wonderful Manu. Thank you. I will review expectations with the team if we don't hear feedback.

Contributor

@jalvz jalvz left a comment

This is looking good!
What is needed to land it?

so that the Agent is able to communicate with them

@start-fleet-server
Scenario Outline: Deploying the <os> fleet-server agent enables Fleet Server
Contributor

This is surely because of my lack of understanding of how fleet-server works, but I don't find this phrasing very clear: there is no such thing as a "fleet-server agent", right? There is an Elastic Agent that can start a Fleet Server process: shouldn't this be in the scenario definition?

Since "Fleet" is such a ubiquitous term, it might be worth to be more explicit every time is it used: Is it Fleet API, Fleet UI, etc.
Related: since this spec is meant to be semi-formal, and we put a lot of effort in standarisation elsewhere (linting, etc), It would be good to define components canonically. Eg., always "Elastic Agent", and not a combination of "Elastic Agent", "Agent", "the agent", etc.

Contributor Author

I strongly agree with this. 💯

What about sending the refactors in a follow-up? Let's create an issue to standardise the names across the Fleet test suite. Maybe @EricDavisX can help with the wording.

Contributor Author

Related to the name, yes. It could be rephrased. Let me add a suggestion.

Contributor

Another note about this PR vs a separate one... we have a new feature file, which is helpful, but to me it is mostly helpful in confirming we have the code right here, with a straightforward usage. We'll need to adapt (now or in a separate PR) all of the:
Given a "<os>" agent is deployed to Fleet with "tar" installer
to
When a "<os>" agent is deployed to Fleet with "tar" installer in fleet-server mode
...because any non-Fleet-Server usage (in Fleet) will not be supported. The only non-Fleet-Server usage will be stand-alone Agent mode.

So, all the following feature files will need review and update, at least in some way (listing them out explicitly):
e2e/_suites/fleet/features/agent_endpoint_integration.feature
e2e/_suites/fleet/features/backend_processes.feature
e2e/_suites/fleet/features/linux_integration.feature
e2e/_suites/fleet/features/fleet_mode_agent.feature

The stand_alone_agent file is the exception:
e2e/_suites/fleet/features/stand_alone_agent.feature

  • we have an opportunity there to improve the stand-alone test to include Docker usage with Fleet Server. And it may be a great and easy way to spin up a 2nd Agent to connect to the first (the first Agent also running the Fleet Server, and the 2nd being a 'normal' Agent)

So, knowing this now, I'm not sure if we want to keep this first PR small and merge it so we have some passing scenario knowing the others will fail, or if we'll want to push forward with slightly larger impact across more files after we see it working.

}

err = installer.InstallFn(containerName, token)
var fleetConfig *FleetConfig
Contributor

Didn't actually try it, but I think it would be a bit simpler to have just one kind of config and one way of creating it, e.g.: func NewAgentConfig(token, fleetServerMode bool) (*AgentConfig, error)

Credentials, URL and port can also be hardcoded in the only place they are used.
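As a sketch of that single-constructor idea (the type, field and helper names are placeholders, not this repo's actual API):

package deploy

// AgentConfig is a minimal stand-in for the single config type suggested above.
type AgentConfig struct {
	EnrollmentToken     string
	FleetServerPolicyID string
}

// getDefaultFleetServerPolicyID is a hypothetical helper standing in for the
// Kibana call that resolves the policy flagged as is_default_fleet_server.
func getDefaultFleetServerPolicyID() (string, error) {
	return "fleet-server-policy-id", nil // stubbed for illustration
}

// NewAgentConfig builds the enrollment configuration for an agent; when
// fleetServerMode is true it also resolves the default Fleet Server policy.
// Credentials, URL and port would be hardcoded in the only place they are used.
func NewAgentConfig(token string, fleetServerMode bool) (*AgentConfig, error) {
	cfg := &AgentConfig{EnrollmentToken: token}
	if fleetServerMode {
		policyID, err := getDefaultFleetServerPolicyID()
		if err != nil {
			return nil, err
		}
		cfg.FleetServerPolicyID = policyID
	}
	return cfg, nil
}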

Contributor Author

@mdelapenya mdelapenya Mar 24, 2021

Sure, let me send a follow-up commit with that, thanks!

Contributor Author

Implemented in f5a9f46

@mdelapenya
Contributor Author

mdelapenya commented Apr 6, 2021

@EricDavisX @adam-stokes @jalvz This PR is failing because the way we are bootstrapping the initial elastic-agent is wrong. We would need an example of the canonical command that does that:

When I want to bootstrap the fleet server in a host
Then I run the FOO command with FLAGS and VALUES

As a follow-up, once this is merged and passing, we'd like to add scenarios for another agent using the bootstrapped fleet-server. Something like:

Given a fleet-server already bootstrapped
When I enroll a second agent in Fleet using the fleet-server
Then the agent is "online" in Fleet

@adam-stokes
Contributor

According to the documentation we should be starting up the environment, copying over the elastic-agent.yml to the directory where we run elastic-agent install from:

https://www.elastic.co/guide/en/fleet/7.12/run-elastic-agent-standalone.html

That should do the bootstrapping and allow us to continue. We will also have to work around elastic/beats#24950 until that is fixed as well.

@blakerouse

@adam-stokes You should not need a custom elastic-agent.yml at all when running in Fleet mode. Elastic Agent will not even read it, it will just copy it to make a backup and write a new one.

@EricDavisX
Contributor

There is a lot to unpack here; I wanted to finally chime in with some thoughts. Regarding this comment from you, @mdelapenya, to @jalvz, I wanted to clarify some bits:

Not sure if the latter should be done at the unit level in libbeat.

I think the libbeat code is not the same as the Agent - and we can't test all or much of Agent by hitting libbeat itself or any libbeat / beat (only) installer.

Secondly: even if we maintain the install tests, why do the other tests (enrolling, ingestion, etc.) go through custom installers? Since it all runs on Docker, shouldn't we just run the stock images as they are?

I'm probably missing something about multiple platform support, but well, that's what the questions are for :)

There are two modes, and please correct me @EricDavisX if I'm wrong: stand-alone and fleet. While the stand-alone mode uses the stock image (docker.elastic.co/beats/elastic-agent:8.0.0-SNAPSHOT), which is automatically enrolled in Fleet when started, the fleet mode works differently, in a way that represents a valid use case: as a sysadmin, I have my CentOS machine and want to install the agent and enroll it. For that reason the fleet mode uses a plain CentOS container and performs the installation of the artifacts (TAR, RPM, DEB) first.

Indeed there are many different artifacts, all of which we want to test. Docker is just ONE way to run the stand-alone Agent, but any artifact can be used with a config that does NOT include Fleet usage, and it will be a 'stand-alone' mode Agent. We are using the Docker container as our stand-alone test because it was easy to do so (and we just happened to code it up first, as such). We could expand the Docker container tests to NOT be stand-alone mode, and that would be a nice enhancement, but we have so many priorities that it has not gotten high on the list yet.

Also wanted to call out my prior comment that all Agents (except stand-alone mode Agents) will need to connect to a Fleet Server to send data. That will be the requirement coming ASAP, so we'll have to consider making each test (except for stand-alone mode, if it's easier to leave it alone for now) a Fleet-Server-using test. The implication there too is that the 'real' test for Fleet Server is to have a second container or Agent connect to a separate Fleet Server (Agent), to include that communication part of the architecture (which was formerly in Kibana but is now on the edge, on the host, in Fleet Server, running as part of Agent).

@EricDavisX
Contributor

EricDavisX commented Apr 14, 2021

@EricDavisX @adam-stokes @jalvz This PR is failing because the way we are bootstrapping the initial elastic-agent is wrong. We would need an example of the canonical command that does that:

When I want to bootstrap the fleet server in a host
Then I run the FOO command with FLAGS and VALUES

As a follow-up, once this is merged and passing, we'd like to add scenarios for another agent using the bootstrapped fleet-server. Something like:

Given a fleet-server already bootstrapped
When I enroll a second agent in Fleet using the fleet-server
Then the agent is "online" in Fleet

I have not seen it working end to end yet, but we're close and I think we're just waiting on a new build from Infra, so by the time this is read on Wednesday Apr 14 it should work! Unless there are more bugs. So, the command I know to bootstrap the Agent, including the Fleet Server process, using the 'install' command is:
./elastic-agent install -f --fleet-server https://elastic:{{elastic_password}}@{{es_url}}:9200

Then, with a healthy Fleet Server (Agent) running, the command to install a non-Fleet-Server Agent that uses it is:
./elastic-agent install -f --url=https://{{fleet-server-host}}:8220 --enrollment-token={{ enroll_token }}

  • the enrollment token should relate to the policy you want the Agent to run (Default Policy or with whichever integrations you want)

NOTE: if we are using insecure communications between the Agent(s) and Fleet Server, we can remove the 's' in the https and use the below parameter at the end of the command:
--fleet-server-insecure-http
^ the above goes in the Fleet Server bootstrap line. And then in the corresponding Agent install line, you use:
--insecure
Using https and security is the default.

Also - there is a new Fleet-level param to be set via a Kibana API. We should already be using this, but it can be extended to include the Fleet Server URL (planning ahead for whichever host we want to use - if we don't do this, no other host can connect):
PUT to "https://{{ kibana_domain_name }}/api/fleet/settings"
with body:
{"fleet_server_hosts": ["https://{{fleet-server-host}}:8220"]}
This is the same API where we are currently setting the Kibana URLs, so it may be changing. Also, the parameter '--fleet-server' may be changing to '--fleet-server-es' to make it clearer that it is actually the ES connection (that Fleet Server uses).
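For reference, a sketch in Go of that PUT call; the endpoint and body come from the comment above, while the kbn-xsrf header, basic auth and error handling are assumptions about how the test suite might issue it:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// setFleetServerHosts performs the PUT against the Kibana Fleet settings API
// described above, registering the Fleet Server URL other hosts will connect to.
func setFleetServerHosts(kibanaURL, fleetServerURL, user, pass string) error {
	body := []byte(fmt.Sprintf(`{"fleet_server_hosts": ["%s"]}`, fleetServerURL))
	req, err := http.NewRequest(http.MethodPut, kibanaURL+"/api/fleet/settings", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("kbn-xsrf", "e2e-testing")
	req.SetBasicAuth(user, pass)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status setting fleet_server_hosts: %s", resp.Status)
	}
	return nil
}

func main() {
	err := setFleetServerHosts("http://localhost:5601", "https://fleet-server:8220", "elastic", "changeme")
	fmt.Println(err)
}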

@blakerouse

  • the enrollment token should relate to a policy with Fleet Server integration.

The enrollment token for the Elastic Agent not running Fleet Server should be for the Default policy, not the Fleet Server default policy.
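Tying this back to the policy-capture refactor in the PR description, a hedged sketch of how the two default policies could be told apart; the struct is illustrative, but the is_default and is_default_fleet_server attributes are the ones named in the description:

package deploy

// Policy mirrors only the fields we filter on.
type Policy struct {
	ID                   string `json:"id"`
	IsDefault            bool   `json:"is_default"`
	IsDefaultFleetServer bool   `json:"is_default_fleet_server"`
}

// defaultPolicy picks either the default agent policy or the default Fleet
// Server policy from the list returned by Kibana, instead of taking item #0.
func defaultPolicy(policies []Policy, fleetServer bool) (Policy, bool) {
	for _, p := range policies {
		if fleetServer && p.IsDefaultFleetServer {
			return p, true
		}
		if !fleetServer && p.IsDefault && !p.IsDefaultFleetServer {
			return p, true
		}
	}
	return Policy{}, false
}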

@EricDavisX
Contributor

EricDavisX commented Apr 14, 2021

Ah, thank you Blake! I updated the note in the line above.

* master:
  chore: add debug info for the payload (elastic#1044)
  chore: add debug traces for the webhook payload (elastic#1043)
  fix: wrong interpolation (elastic#1042)
  Update Elastic Agent to not use Kibana (elastic#1036)
  fix: apply X version for non-master branches (elastic#1037)
  fix: add NodeJS to PATH (elastic#1035)
  fix: use an agent when building kibana (elastic#1030)
  fix(jjb): use a branch that exists (elastic#1029)
  remove uninstall step (elastic#1017)
  fix: delay checking stale agent version until it's used (elastic#1016)
  fix: use same JJB than in custom kibana (elastic#1010)
  chore: simplify PR template (elastic#1011)
  feat: support passing KIBANA_VERSION (elastic#905)
  [mergify] assign the original author (elastic#1009)
  Remove the agent config file parameters for stand alone (elastic#983)
  Uniquify the stand-alone step for checking agent status (elastic#993)
@mdelapenya
Contributor Author

mdelapenya commented Apr 15, 2021

I've seen this error when running the following command:

./elastic-agent install -f --fleet-server https://elastic:{{elastic_password}}@{{es_url}}:9200"

Error: unknown flag: --fleet-server
Usage:
  elastic-agent install [flags]

Flags:
  -p, --ca-sha256 string                    Comma separated list of certificate authorities hash pins used for certificate verifications
  -a, --certificate-authorities string      Comma separated list of root certificate for server verifications
  -t, --enrollment-token string             Enrollment token to use to enroll Agent into Fleet
      --fleet-server-cert string            Certificate to use for exposed Fleet Server HTTPS endpoint
      --fleet-server-cert-key string        Private key to use for exposed Fleet Server HTTPS endpoint
      --fleet-server-es string              Start and run a Fleet Server along side this Elastic Agent connecting to the provided elasticsearch
      --fleet-server-es-ca string           Path to certificate authority to use with communicate with elasticsearch
      --fleet-server-host string            Fleet Server HTTP binding host (overrides the policy)
      --fleet-server-insecure-http          Expose Fleet Server over HTTP (not recommended; insecure)
      --fleet-server-policy string          Start and run a Fleet Server on this specific policy
      --fleet-server-port uint16            Fleet Server HTTP binding port (overrides the policy)
      --fleet-server-service-token string   Service token to use for communication with elasticsearch
  -f, --force                               Force overwrite the current and do not prompt for confirmation
  -h, --help                                help for install
  -i, --insecure                            Allow insecure connection to Kibana
  -k, --kibana-url string                   URL of Kibana to enroll Agent into Fleet
      --staging string                      Configures agent to download artifacts from a staging build
      --url string                          URL to enroll Agent into Fleet

Global Flags:
  -c, --c string                     Configuration file, relative to path.config (default "elastic-agent.yml")
  -d, --d string                     Enable certain debug selectors
  -e, --e                            Log to stderr and disable syslog/file output
      --environment environmentVar   set environment being ran in (default default)
      --path.config string           Config path is the directory Agent looks for its config file (default "/elastic-agent")
      --path.home string             Agent root path (default "/elastic-agent")
      --path.logs string             Logs path contains Agent log output (default "/elastic-agent")
  -v, --v                            Log at INFO level

I think you mean --fleet-server-host and --fleet-server-port, right? I see --fleet-server-service-token, is this required to bootstrap the fleet server agent? And how would we pass the ES credentials?

@mdelapenya
Contributor Author

Also, running this command starts the agent but does not bootstrap it:

/elastic-agent/elastic-agent install \
   --force --fleet-server-insecure-http --fleet-server-host elasticsearch --fleet-server-port 9200

@EricDavisX
Contributor

The last build had a check-in to change the parameter from fleet-server to fleet-server-es; that much I know. I am trying to gather communication about pending/coming changes, but it is a challenging effort as we didn't have Infra builds to validate with until today.

@blakerouse

@mdelapenya @EricDavisX is correct: it is now --fleet-server-es versus the previous --fleet-server.

@adam-stokes adam-stokes mentioned this pull request Apr 16, 2021
@mdelapenya mdelapenya marked this pull request as ready for review April 16, 2021 06:10
@@ -15,5 +15,6 @@ xpack.fleet.enabled: true
 xpack.fleet.registryUrl: http://package-registry:8080
 xpack.fleet.agents.enabled: true
 xpack.fleet.agents.elasticsearch.host: http://elasticsearch:9200
+xpack.fleet.agents.fleetServerEnabled: true
 xpack.fleet.agents.kibana.host: http://kibana:5601

You might need xpack.fleet.agents.fleet_server.hosts: [http://?:5601] instead here with the most recent builds. Probably worth rerunning.

@adam-stokes adam-stokes merged commit a150734 into elastic:master Apr 19, 2021
mergify bot pushed a commit that referenced this pull request Apr 20, 2021
* chore: capture Fleet's default policy in a stronger manner

* chore: support passing the field for is_default policy

* chore: remove inferred type for array

* chore: enable fleet server in kibana config

* chore: create fleet config struct

This type will hold information about Fleet config, supporting building
the proper flags during enrollment

* chore: refactor enroll command logic to use the new struct

* chore: check if the fleet-server field exists when retrieving the policy

* chore: refactor install to support fleet-server

The flags used for installing/enrolling an agent will be generated from
the new FleetConfig struct. Because of that, we are moving a pointer to
that fleet config to the install command

* feat: add first scenario for fleet server

* chore: add fleet server branch to the CI

* chore: set Then clause for the scenario

* chore: remove step

* fix: define fallback when checking agent status

* chore: simplify creation of Fleet configs

* fix: forgot to rename variable

* WIP

* chore: rename scenario

* fix: wrong merge conflicts resolution

* chore: support passing environment when running a command in a container

* chore: run elastic agent commands passing an env

* WIP

* chore: separate bootstrapping an agent from connecting to a fleet server agent

* fix: use proper fleet-server flags

Co-authored-by: Adam Stokes <[email protected]>
(cherry picked from commit a150734)
mdelapenya added a commit that referenced this pull request Apr 20, 2021
(same squashed commit message as above, cherry picked from commit a150734)

Co-authored-by: Manuel de la Peña <[email protected]>
mdelapenya added a commit to mdelapenya/e2e-testing that referenced this pull request Apr 21, 2021
* master:
  v2 refactor (elastic#1008)
  fix: use a version of kibana with the fix for fleet-server validations (elastic#1055)
  feat: add first scenario for Fleet Server (elastic#900)
  fix: do not use GT_REPO variable, use fixed repo name instead (elastic#1049)
@mdelapenya mdelapenya deleted the 438-fleet-server-scenarios branch April 22, 2021 08:42
Labels
Team:Elastic-Agent, v7.13.0