
[platform] Initialization done after error #1445

Closed
nor3th opened this issue Jun 24, 2021 · 13 comments
Labels
question (Further information is requested) · solved (use to identify issue that has been solved; must be linked to the solving PR)

Comments

@nor3th
Contributor

nor3th commented Jun 24, 2021

Description

After cleaning up an old opencti docker instance and creating a new one, there are issues with recreating the indices.

Environment

  1. OS (where OpenCTI server runs): Ubuntu 20.04
  2. OpenCTI version: 4.5.5
  3. OpenCTI client: frontend
  4. Other environment details:

Reproducible Steps

Steps to create the smallest reproducible scenario:

  1. have elasticsearch and redis as separate containers to have them accessible for other services as well
  2. docker-compose up -d (opencti docker container)
  3. using opencti a bit
  4. docker-compose down -v
  5. curl -XDELETE http://elastic.local:9200/opencti\* (# delete all opencti indices)
  6. docker-compose up -d (#start a new and clean opencti instance)
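
As a sanity check between steps 5 and 6, something like the following can confirm the purge actually went through before starting the new instance (a minimal sketch; the elastic.local endpoint is the one used in this thread, everything else is an assumption):

```shell
#!/bin/sh
# Sketch: verify the wildcard delete from step 5 really removed all
# opencti indices before running "docker-compose up -d" again.
# Endpoint assumed from this thread; adjust to your cluster.
ES="http://elastic.local:9200"

leftover=$(curl -s "$ES/_cat/indices/opencti*?h=index")
if [ -n "$leftover" ]; then
  echo "opencti indices still present:"
  echo "$leftover"
else
  echo "no opencti indices left, safe to run docker-compose up -d"
fi
```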

Expected Output

I would expect the initialization either to succeed or to fail entirely, but not to see the "Platform initialization done" message after an error.

Actual Output

{"category":"APP","version":"4.5.5","level":"info","message":"[OPENCTI] Starting platform","timestamp":"2021-06-24T19:05:30.793Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[CHECK] ElasticSearch is alive","timestamp":"2021-06-24T19:05:30.834Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[CHECK] Minio is alive","timestamp":"2021-06-24T19:05:30.844Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[CHECK] RabbitMQ is alive","timestamp":"2021-06-24T19:05:32.570Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[CHECK] Redis is alive","timestamp":"2021-06-24T19:05:32.571Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[CHECK] Python3 is available","timestamp":"2021-06-24T19:05:34.001Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[INIT] Starting platform initialization","timestamp":"2021-06-24T19:05:34.006Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[INIT] New platform detected, initialization...","timestamp":"2021-06-24T19:05:34.012Z"}
{"error":{"name":"DatabaseError","_error":{},"_showLocations":false,"_showPath":false,"time_thrown":"2021-06-24T19:06:11.607Z","data":{"reason":"Error creating index","http_status":500,"category":"technical","error":{"name":"ResponseError","meta":{"body":{"error":{"root_cause":[{"type":"resource_already_exists_exception","reason":"index [opencti_history-000002/9V9-hj_-T2Obo0RpUkwTaw] already exists","index_uuid":"9V9-hj_-T2Obo0RpUkwTaw","index":"opencti_history-000002"}],"type":"resource_already_exists_exception","reason":"index [opencti_history-000002/9V9-hj_-T2Obo0RpUkwTaw] already exists","index_uuid":"9V9-hj_-T2Obo0RpUkwTaw","index":"opencti_history-000002"},"status":400},"statusCode":400,"headers":{"content-type":"application/json; charset=UTF-8","content-length":"429"},"meta":{"context":null,"request":{"params":{"method":"PUT","path":"/opencti_history-000002","body":"{\"aliases\":{\"opencti_history\":{}},\"settings\":{\"index\":{\"max_result_window\":100000},\"analysis\":{\"normalizer\":{\"string_normalizer\":{\"type\":\"custom\",\"filter\":[\"lowercase\",\"asciifolding\"]}}}},\"mappings\":{\"dynamic_templates\":[{\"integers\":{\"match_mapping_type\":\"long\",\"mapping\":{\"type\":\"integer\"}}},{\"strings\":{\"match_mapping_type\":\"string\",\"mapping\":{\"type\":\"text\",\"fields\":{\"keyword\":{\"type\":\"keyword\",\"normalizer\":\"string_normalizer\",\"ignore_above\":512}}}}}],\"properties\":{\"standard_id\":{\"type\":\"text\",\"fields\":{\"keyword\":{\"type\":\"keyword\",\"normalizer\":\"string_normalizer\",\"ignore_above\":512}}},\"timestamp\":{\"type\":\"date\"},\"created\":{\"type\":\"date\"},\"modified\":{\"type\":\"date\"},\"first_seen\":{\"type\":\"date\"},\"last_seen\":{\"type\":\"date\"},\"start_time\":{\"type\":\"date\"},\"stop_time\":{\"type\":\"date\"},\"published\":{\"type\":\"date\"},\"valid_from\":{\"type\":\"date\"},\"valid_until\":{\"type\":\"date\"},\"observable_date\":{\"type\":\"date\"},\"event_date\":{\"type\":\"date\"},\"received_time
\":{\"type\":\"date\"},\"processed_time\":{\"type\":\"date\"},\"completed_time\":{\"type\":\"date\"},\"ctime\":{\"type\":\"date\"},\"mtime\":{\"type\":\"date\"},\"atime\":{\"type\":\"date\"},\"confidence\":{\"type\":\"integer\"},\"x_opencti_report_status\":{\"type\":\"integer\"},\"attribute_order\":{\"type\":\"integer\"},\"base_score\":{\"type\":\"integer\"},\"is_family\":{\"type\":\"boolean\"},\"number_observed\":{\"type\":\"integer\"},\"x_opencti_negative\":{\"type\":\"boolean\"},\"default_assignation\":{\"type\":\"boolean\"},\"x_opencti_detection\":{\"type\":\"boolean\"},\"x_opencti_order\":{\"type\":\"integer\"},\"import_expected_number\":{\"type\":\"integer\"},\"import_processed_number\":{\"type\":\"integer\"},\"x_opencti_score\":{\"type\":\"integer\"},\"connections\":{\"type\":\"nested\"}}}}","querystring":"","headers":{"user-agent":"elasticsearch-js/7.13.0 (linux 5.4.0-72-generic-x64; Node.js v15.11.0)","x-elastic-client-meta":"es=7.13.0,js=15.11.0,t=7.13.0,hc=15.11.0","content-type":"application/json","content-length":"1640"},"timeout":30000},"options":{},"id":14},"name":"elasticsearch-js","connection":{"url":"http://db-elastic-new-1:9200/","id":"http://elastic.local:9200/","headers":{},"deadCount":0,"resurrectTimeout":0,"_openRequests":9,"status":"alive","roles":{"master":true,"data":true,"ingest":true,"ml":false}},"attempts":1,"aborted":false}}}},"internalData":{}},"category":"APP","version":"4.5.5","level":"error","message":"[OPENCTI] Platform initialization fail","timestamp":"2021-06-24T19:06:11.607Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[INIT] Platform initialization done","timestamp":"2021-06-24T19:06:11.612Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[OPENCTI] Starting platform","timestamp":"2021-06-24T19:06:43.349Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[CHECK] ElasticSearch is alive","timestamp":"2021-06-24T19:06:43.379Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[CHECK] Minio is alive","timestamp":"2021-06-24T19:06:43.395Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[CHECK] RabbitMQ is alive","timestamp":"2021-06-24T19:06:43.410Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[CHECK] Redis is alive","timestamp":"2021-06-24T19:06:43.410Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[CHECK] Python3 is available","timestamp":"2021-06-24T19:06:43.749Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[INIT] Starting platform initialization","timestamp":"2021-06-24T19:06:43.755Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[INIT] New platform detected, initialization...","timestamp":"2021-06-24T19:06:43.762Z"}
{"error":{"name":"ConfigurationError","_error":{},"_showLocations":false,"_showPath":false,"time_thrown":"2021-06-24T19:06:43.765Z","data":{"reason":"[INIT] Fail initialize schema, index already exists","http_status":500,"category":"technical"},"internalData":{}},"category":"APP","version":"4.5.5","level":"error","message":"[OPENCTI] Platform initialization fail","timestamp":"2021-06-24T19:06:43.765Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[INIT] Platform initialization done","timestamp":"2021-06-24T19:06:43.767Z"}

Additional information

I have no clue why elastic fails in the first place, since all opencti indices were removed from the cluster. The elastic logs show no errors at all. I had this error previously, and the only way to get rid of the "index already exists" error was to remove the elastic cluster and start everything fresh again.

I also changed the index creation pattern (1, 2, 3), but it made no difference.

@nor3th
Contributor Author

nor3th commented Jun 24, 2021

Seems like this is related to #1363

@SamuelHassine
Member

SamuelHassine commented Jun 24, 2021

Hello @nor3th,

We've rebuilt 4.5.5 several times after the release to avoid this problem. Can you check your Docker image checksum against the latest 4.5.5 on Docker Hub?

Kind regards,
Samuel

@SamuelHassine
Member

By the way, the error:

"type":"resource_already_exists_exception","reason":"index [opencti_history-000002/9V9-hj_-T2Obo0RpUkwTaw] already exists","index_uuid":"9V9-hj_-T2Obo0RpUkwTaw","index":"opencti_history-000002"

Is pretty clear :)

@SamuelHassine
Member

So maybe just because your ElasticSearch has not been fully purged.

@nor3th
Contributor Author

nor3th commented Jun 24, 2021

Is pretty clear :)

In theory I totally agree with you, but I have no clue why this resource would already exist. I even used an index creation pattern I had never used before, and the error still appeared.

So maybe just because your ElasticSearch has not been fully purged.

How do you verify that elastic has fully purged the indices? I usually check with curl -s "http://elastic.local:9200/_cat/indices" | grep opencti whether they are gone.

We've rebuilt 4.5.5 several times after the release to avoid this problem. can you check your Docker image checksum against the Docker hub latest 4.5.5?

To be honest, I can't really answer this. I was able to pull an update for the 4.5.5 image, but I wasn't able to match the checksum of https://hub.docker.com/layers/opencti/platform/4.5.5/images/sha256-2e327f5b57e1b34e883ebb3b00552d0c1fbfc0719268d8c1c4a1daa65aa9c919?context=explore with my output from

$ docker pull opencti/platform:4.5.5
4.5.5: Pulling from opencti/platform
Digest: sha256:86fe9064bb0b769676d269a58a3b0cbe6778ab311dd62e52104825636cd996c5
Status: Image is up to date for opencti/platform:4.5.5
docker.io/opencti/platform:4.5.5

I just tried again with the newest 4.5.5 image and the error persists. I now even restarted and updated the elastic cluster, but still the same problem.

@nor3th
Contributor Author

nor3th commented Jun 24, 2021

Does it maybe have anything to do with the usage of the aliases?
The two times when this error appeared (the first time I think was a month ago), was always after reusing an elastic server for an entirely new opencti instance. Removing the elastic server usually did the trick. Is there any other information stored somewhere else on the elastic server besides in any of the opencti* indices?
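
To the question of other state on the elastic server: deleting the opencti* indices does not remove cluster-level state. A hedged sketch of places worth checking (endpoint assumed from this thread; the API paths are for Elasticsearch 7.x, which the logs above show):

```shell
#!/bin/sh
# Sketch: cluster-level state that survives an index delete and can
# influence how new indices are created. Endpoint assumed.
ES="http://elastic.local:9200"

# Aliases are dropped with their backing indices, but confirm:
curl -s "$ES/_cat/aliases" | grep -i opencti || echo "no opencti aliases"

# Composable (7.8+) and legacy index templates survive index deletion:
curl -s "$ES/_index_template" | grep -i opencti || echo "no opencti composable templates"
curl -s "$ES/_template" | grep -i opencti || echo "no opencti legacy templates"

# ILM policies survive as well:
curl -s "$ES/_ilm/policy" | grep -i opencti || echo "no opencti ILM policies"
```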

@nor3th
Contributor Author

nor3th commented Jun 25, 2021

So I played around with my setup a bit and it seems more like an OpenCTI issue.

In command 1 I checked that there are no more opencti indices present; in command 2 I created a new opencti-history index; in command 3 I tried the same thing again and elastic returned that the index already exists. In command 4 I deleted the index, and in command 5 (a few seconds later) I created an index with the same name again. This behaviour at least indicates that elastic purges indices quite fast.

$ curl -s "http://elastic.local:9200/_cat/indices"  | less
$ curl -X PUT "http://elastic.local:9200/opencti-history-000001?pretty"
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "opencti-history-000001"
}
$ curl -X PUT "http://elastic.local:9200/opencti-history-000001?pretty"
{
  "error" : {
    "root_cause" : [
      {
        "type" : "resource_already_exists_exception",
        "reason" : "index [opencti-history-000001/Mw1Ju1RLTOW-UrswTyCfVQ] already exists",
        "index_uuid" : "Mw1Ju1RLTOW-UrswTyCfVQ",
        "index" : "opencti-history-000001"
      }
    ],
    "type" : "resource_already_exists_exception",
    "reason" : "index [opencti-history-000001/Mw1Ju1RLTOW-UrswTyCfVQ] already exists",
    "index_uuid" : "Mw1Ju1RLTOW-UrswTyCfVQ",
    "index" : "opencti-history-000001"
  },
  "status" : 400
}
$ curl -XDELETE http://elastic.local:9200/opencti-history-000001
{"acknowledged":true}
$ curl -X PUT "http://elastic.local:9200/opencti-history-000001?pretty"
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "opencti-history-000001"
}

Is OpenCTI trying to create the same index twice maybe?
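
Whatever OpenCTI itself does internally, the PUT in this trace is not idempotent. A minimal sketch (not OpenCTI's actual code; endpoint and index name assumed) of guarding index creation against the resource_already_exists_exception:

```shell
#!/bin/sh
# Sketch (not OpenCTI's actual code): create an index only if it does
# not exist yet, so a second attempt is a no-op instead of a 400
# resource_already_exists_exception. Endpoint/index assumed.
ES="http://elastic.local:9200"
INDEX="opencti-history-000001"

create_index_if_missing() {
  # HEAD /<index> returns 200 if the index exists, 404 otherwise
  status=$(curl -s -o /dev/null -w '%{http_code}' -I "$ES/$1")
  if [ "$status" = "200" ]; then
    echo "index $1 already exists, skipping creation"
  else
    curl -s -X PUT "$ES/$1" || echo "could not reach $ES"
  fi
}

create_index_if_missing "$INDEX"
```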

Another thing pointing to OpenCTI is that the index seems to be created during the initialization phase. Here I also used an index pattern I have never used before. As expected, the "index already exists" error is thrown right afterwards.

$ curl "http://elastic.local:9200/opencti_history-000005?pretty"
{
  "opencti_history-000005" : {
    "aliases" : {
      "opencti_history" : { }
    },
    "mappings" : {
      "dynamic_templates" : [
        {
          "integers" : {
            "match_mapping_type" : "long",
            "mapping" : {
              "type" : "integer"
            }
          }
        },
        {
          "strings" : {
            "match_mapping_type" : "string",
            "mapping" : {
              "fields" : {
                "keyword" : {
                  "normalizer" : "string_normalizer",
                  "ignore_above" : 512,
                  "type" : "keyword"
                }
              },
              "type" : "text"
            }
          }
        }
      ],
      "properties" : {
        "atime" : {
          "type" : "date"
        },
[...]
        "x_opencti_score" : {
          "type" : "integer"
        }
      }
    },
    "settings" : {
      "index" : {
        "routing" : {
          "allocation" : {
            "include" : {
              "_tier_preference" : "data_content"
            }
          }
        },
        "number_of_shards" : "1",
        "provided_name" : "opencti_history-000005",
        "max_result_window" : "100000",
        "creation_date" : "1624603861712",
        "analysis" : {
          "normalizer" : {
            "string_normalizer" : {
              "filter" : [
                "lowercase",
                "asciifolding"
              ],
              "type" : "custom"
            }
          }
        },
        "number_of_replicas" : "1",
        "uuid" : "lD6o-wtFSSm0pAewW_GONA",
        "version" : {
          "created" : "7130199"
        }
      }
    }
  }
}

Another thing about the error log that confuses me is that OpenCTI says the initialization phase went well even though it didn't. Is that maybe due to the asynchronous way JavaScript operates? An await should prevent that, though...

{"error":{"name":"ConfigurationError","_error":{},"_showLocations":false,"_showPath":false,"time_thrown":"2021-06-24T19:06:43.765Z","data":{"reason":"[INIT] Fail initialize schema, index already exists","http_status":500,"category":"technical"},"internalData":{}},"category":"APP","version":"4.5.5","level":"error","message":"[OPENCTI] Platform initialization fail","timestamp":"2021-06-24T19:06:43.765Z"}
{"category":"APP","version":"4.5.5","level":"info","message":"[INIT] Platform initialization done","timestamp":"2021-06-24T19:06:43.767Z"}

@nor3th
Contributor Author

nor3th commented Jun 25, 2021

Alright I give up. I have no idea what the reason for this behavior is exactly. I tried:

  • old docker-compose file and a new elastic server (Same error)
  • old docker-compose file and a host server restart (Same error)
  • removing all opencti docker images and downloading them all over again, with the old docker config (Same error)

I am starting OpenCTI using the docker compose files with docker-compose up -d. What ended up doing the trick was to reduce the connectors to the minimum:

  • import/export stix
  • import report
  • export csv
  • opencti connector

And then starting the docker-compose setup worked without any issues.

It still bugs me, though, that the "Platform initialization done" message appears AFTER the ConfigurationError shows up. This MUST NOT happen. Is it possible that, because of the number of different docker containers starting at the same time, the system has too much to do and a race condition breaks everything?

From what I understand, OpenCTI is trying to create the same index twice, for whatever reason...

@SamuelHassine
Member

Hello @nor3th,

Can you please provide me with the docker-compose.yml that causes the errors initially (with all connectors enabled)?

Kind regards,
Samuel

@nor3th
Contributor Author

nor3th commented Jun 26, 2021

Hello @SamuelHassine

Here are the docker-compose.yml and the .env file with which I was able to reproduce the issue multiple times. During the startup of all containers more or less the entire available RAM of my server is consumed.

docker-compose.yml.txt (rename to docker-compose.yml)
env.txt (rename to .env)

@SamuelHassine
Member

Hello @nor3th,

Thanks for sharing your exact configuration (careful: the included password is now burnt). I've just tried your configuration on my local machine (Ubuntu 20.04); here is what I've done:

$ service docker stop
$ rm -Rf /var/lib/docker/* # purge everything, really everything
$ service docker start

Then put your 2 files in a directory and then:
docker-compose up

Everything is working fine, no indices problem, platform is starting:

[screenshot: platform starting successfully]

Kind regards,
Samuel

SamuelHassine added the question (Further information is requested) label on Jun 29, 2021
@nor3th
Contributor Author

nor3th commented Jun 29, 2021

careful to the included password which is now burnt

Thanks for the hint. I created the .env file with that script anyway, so it's not the one on my own machine :)
OpenCTI-Platform/docker#32

What happens if you add even more connectors? I don't think that the config for the connectors even has to be valid as long as they start a docker container and consume memory.

SamuelHassine added the solved (use to identify issue that has been solved; must be linked to the solving PR) label on Jan 8, 2023
@robbiemueller

> Everything is working fine, no indices problem, platform is starting

I'm getting the same issue. I tried this and many other suggestions I found, and nothing seems to work.
