Bitwarden Unified with MariaDB leads to "SIGABRT (core dumped); not expected" #2718

Open
1 task done
Tracked by #2480
l4rm4nd opened this issue Feb 19, 2023 · 19 comments
Labels
bug, bw-unified-deploy (An Issue related to Bitwarden unified deployment)

Comments

l4rm4nd commented Feb 19, 2023

Steps To Reproduce

  1. Define the docker-compose.yml file as mentioned on https://bitwarden.com/help/install-and-deploy-unified-beta/
  2. Define settings.env from https://github.com/bitwarden/server/blob/master/docker-unified/settings.env
  3. Run docker compose up

docker-compose.yml:

version: "3.8"

services:
  bitwarden-unified:
    container_name: bitwarden_unified
    depends_on:
      - bitwarden-unified-db
    env_file:
      - settings.env
    image: bitwarden/self-host:beta
    restart: always
    ports:
      - "8888:8080"
    volumes:
      - ./bitwarden-unified/data:/etc/bitwarden

  bitwarden-unified-db:
    environment:
      MARIADB_USER: "bitwarden"
      MARIADB_PASSWORD: "Secure_MariaDB_Password1"
      MARIADB_DATABASE: "bitwarden_vault"
      MARIADB_RANDOM_ROOT_PASSWORD: "true"
    image: mariadb:10
    container_name: bitwarden_unified_db
    restart: always
    volumes:
      - ./bitwarden-unified/mariadb:/var/lib/mysql

settings.env:

#####################
# Required Settings #
#####################

# Server hostname
BW_DOMAIN=bitwarden.example.com

# Database
# Available providers are sqlserver, postgresql, or mysql/mariadb
BW_DB_PROVIDER=mysql
BW_DB_SERVER=bitwarden-unified-db
BW_DB_DATABASE=bitwarden_vault
BW_DB_USERNAME=bitwarden
BW_DB_PASSWORD=Secure_MariaDB_Password1

# Installation information
# Get your ID and key from https://bitwarden.com/host/
BW_INSTALLATION_ID=<ID> # masked for this issue
BW_INSTALLATION_KEY=<KEY> # masked for this issue

Expected Result

The Bitwarden Unified instance should be running and available at http://127.0.0.1:8888.

Actual Result

Recreating bitwarden_unified_db ... done
Recreating bitwarden_unified    ... done
Attaching to bitwarden_unified_db, bitwarden_unified
bitwarden_unified       | Adding group `bitwarden' (GID 1000) ...
bitwarden_unified_db    | 2023-02-19 18:25:25+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.11.2+maria~ubu2204 started.
bitwarden_unified       | Done.
bitwarden_unified_db    | 2023-02-19 18:25:25+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
bitwarden_unified_db    | 2023-02-19 18:25:25+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.11.2+maria~ubu2204 started.
bitwarden_unified_db    | 2023-02-19 18:25:25+00:00 [Note] [Entrypoint]: MariaDB upgrade not required
bitwarden_unified       | Adding user `bitwarden' ...
bitwarden_unified       | Adding new user `bitwarden' (1000) with group `bitwarden' ...
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] Starting MariaDB 10.11.2-MariaDB-1:10.11.2+maria~ubu2204 source revision cafba8761af55ae16cc69c9b53a341340a845b36 as process 1
bitwarden_unified       | Not creating home directory `/home/bitwarden'.
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: Number of transaction pools: 1
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: Using liburing
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: Completed initialization of buffer pool
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=46702
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: 128 rollback segments are active.
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: log sequence number 46918; transaction id 14
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] Plugin 'FEEDBACK' is disabled.
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] InnoDB: Buffer pool(s) load completed at 230219 18:25:25
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] Server socket created on IP: '0.0.0.0'.
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] Server socket created on IP: '::'.
bitwarden_unified_db    | 2023-02-19 18:25:25 0 [Note] mariadbd: ready for connections.
bitwarden_unified_db    | Version: '10.11.2-MariaDB-1:10.11.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
bitwarden_unified       | 2023-02-19 18:25:27,955 INFO Included extra file "/etc/supervisor.d/admin.ini" during parsing
bitwarden_unified       | 2023-02-19 18:25:27,955 INFO Included extra file "/etc/supervisor.d/api.ini" during parsing
bitwarden_unified       | 2023-02-19 18:25:27,955 INFO Included extra file "/etc/supervisor.d/events.ini" during parsing
bitwarden_unified       | 2023-02-19 18:25:27,955 INFO Included extra file "/etc/supervisor.d/icons.ini" during parsing
bitwarden_unified       | 2023-02-19 18:25:27,955 INFO Included extra file "/etc/supervisor.d/identity.ini" during parsing
bitwarden_unified       | 2023-02-19 18:25:27,955 INFO Included extra file "/etc/supervisor.d/nginx.ini" during parsing
bitwarden_unified       | 2023-02-19 18:25:27,955 INFO Included extra file "/etc/supervisor.d/notifications.ini" during parsing
bitwarden_unified       | 2023-02-19 18:25:27,955 INFO Included extra file "/etc/supervisor.d/scim.ini" during parsing
bitwarden_unified       | 2023-02-19 18:25:27,955 INFO Included extra file "/etc/supervisor.d/sso.ini" during parsing
bitwarden_unified       | 2023-02-19 18:25:27,957 INFO RPC interface 'supervisor' initialized
bitwarden_unified       | 2023-02-19 18:25:27,957 CRIT Server 'unix_http_server' running without any HTTP authentication checking
bitwarden_unified       | 2023-02-19 18:25:27,957 INFO supervisord started with pid 1
bitwarden_unified       | 2023-02-19 18:25:28,959 INFO spawned: 'identity' with pid 68
bitwarden_unified       | 2023-02-19 18:25:28,960 INFO spawned: 'admin' with pid 69
bitwarden_unified       | 2023-02-19 18:25:28,961 INFO spawned: 'api' with pid 70
bitwarden_unified       | 2023-02-19 18:25:28,961 INFO spawned: 'icons' with pid 71
bitwarden_unified       | 2023-02-19 18:25:28,962 INFO spawned: 'nginx' with pid 72
bitwarden_unified       | 2023-02-19 18:25:28,963 INFO spawned: 'notifications' with pid 73
bitwarden_unified       | 2023-02-19 18:25:29,164 INFO exited: icons (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:29,167 INFO exited: api (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:29,167 INFO exited: admin (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:29,184 INFO exited: identity (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:29,185 INFO exited: notifications (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:30,187 INFO spawned: 'identity' with pid 145
bitwarden_unified       | 2023-02-19 18:25:30,189 INFO spawned: 'admin' with pid 146
bitwarden_unified       | 2023-02-19 18:25:30,190 INFO spawned: 'api' with pid 147
bitwarden_unified       | 2023-02-19 18:25:30,191 INFO spawned: 'icons' with pid 148
bitwarden_unified       | 2023-02-19 18:25:30,192 INFO spawned: 'notifications' with pid 149
bitwarden_unified       | 2023-02-19 18:25:30,372 INFO exited: icons (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:30,385 INFO exited: notifications (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:30,420 INFO exited: identity (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:30,420 INFO exited: admin (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:30,421 INFO exited: api (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:32,424 INFO spawned: 'identity' with pid 215
bitwarden_unified       | 2023-02-19 18:25:32,425 INFO spawned: 'admin' with pid 216
bitwarden_unified       | 2023-02-19 18:25:32,426 INFO spawned: 'api' with pid 217
bitwarden_unified       | 2023-02-19 18:25:32,427 INFO spawned: 'icons' with pid 218
bitwarden_unified       | 2023-02-19 18:25:32,428 INFO spawned: 'notifications' with pid 219
bitwarden_unified       | 2023-02-19 18:25:32,592 INFO exited: icons (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:32,620 INFO exited: identity (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:32,620 INFO exited: admin (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:32,620 INFO exited: api (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:32,620 INFO exited: notifications (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:35,625 INFO spawned: 'identity' with pid 285
bitwarden_unified       | 2023-02-19 18:25:35,626 INFO spawned: 'admin' with pid 286
bitwarden_unified       | 2023-02-19 18:25:35,627 INFO spawned: 'api' with pid 287
bitwarden_unified       | 2023-02-19 18:25:35,628 INFO spawned: 'icons' with pid 288
bitwarden_unified       | 2023-02-19 18:25:35,629 INFO spawned: 'notifications' with pid 289
bitwarden_unified       | 2023-02-19 18:25:35,772 INFO exited: api (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:35,793 INFO gave up: api entered FATAL state, too many start retries too quickly
bitwarden_unified       | 2023-02-19 18:25:35,794 INFO exited: icons (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:35,804 INFO gave up: icons entered FATAL state, too many start retries too quickly
bitwarden_unified       | 2023-02-19 18:25:35,804 INFO exited: admin (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:35,824 INFO gave up: admin entered FATAL state, too many start retries too quickly
bitwarden_unified       | 2023-02-19 18:25:35,824 INFO exited: notifications (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:35,840 INFO gave up: notifications entered FATAL state, too many start retries too quickly
bitwarden_unified       | 2023-02-19 18:25:35,840 INFO exited: identity (terminated by SIGABRT (core dumped); not expected)
bitwarden_unified       | 2023-02-19 18:25:36,842 INFO gave up: identity entered FATAL state, too many start retries too quickly
bitwarden_unified       | 2023-02-19 18:25:44,852 INFO success: nginx entered RUNNING state, process has stayed up for > than 15 seconds (startsecs)

Screenshots or Videos

(two screenshots were attached in the original issue)

Additional Context

No response

Githash Version

502 Bad Gateway

Environment Details

  • Docker Compose version v2.15.1
  • Linux 5.19.0-kali2-amd64 Debian 5.19.11-1kali2 (2022-10-10) x86_64 GNU/Linux

Database Image

image: mariadb:10

Issue-Link

#2480

Issue Tracking Info

  • I understand that work is tracked outside of Github. A PR will be linked to this issue should one be opened to address it, but Bitwarden doesn't use fields like "assigned", "milestone", or "project" to track progress.
l4rm4nd added the bug and bw-unified-deploy (An Issue related to Bitwarden unified deployment) labels on Feb 19, 2023
Greenderella (Member) commented:

Hi there,

Thank you for your report!

I was able to reproduce this issue, and have flagged this to our engineering team.

If you wish to add any further information/screenshots/recordings etc., please feel free to do so at any time - our engineering team will be happy to review these.

Thanks once again!

ghost commented Mar 3, 2023

I had this issue and checked the admin.log file. The bitwarden container wasn't able to talk to my mariadb container. Easily fixed by adding the bitwarden container to the same network as mariadb.
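
A minimal sketch of what that can look like when the two containers live in separate compose files, assuming a user-defined network named bitwarden-net (hypothetical name); in the single-file setup from the original report both services already share the default compose network:

# Hypothetical sketch: attach both containers to one shared network so the
# Bitwarden container can resolve and reach the MariaDB container by name.
services:
  bitwarden-unified:
    # ... rest of the service definition as in the original compose file ...
    networks:
      - bitwarden-net

  bitwarden-unified-db:
    # ... rest of the service definition as in the original compose file ...
    networks:
      - bitwarden-net

networks:
  bitwarden-net:
    driver: bridge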

l4rm4nd (Author) commented Mar 3, 2023

I had this issue and checked the admin.log file. The bitwarden container wasn't able to talk to my mariadb container. Easily fixed by adding the bitwarden container to the same network as mariadb.

It's a single compose file and all containers run in the same network.

ghost commented Mar 3, 2023

It's a single compose file and all containers run in the same network.

Indeed, my config was just special and MariaDB wasn't in the default network. You might find something useful in the failed services' logs if you're lucky.
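
For reference, a hedged sketch of how one might pull those per-service logs (e.g. admin.log) out of the unified container; the container name is taken from the original report, and the log paths are assumptions, so locate them first rather than trusting a fixed path:

# Hypothetical sketch: find and read the per-service log files inside the container.
docker exec -it bitwarden_unified sh -c 'find /etc/bitwarden /var/log -name "*.log" 2>/dev/null'
docker exec -it bitwarden_unified tail -n 100 /path/to/admin.log   # use a path printed by the find above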

oschmidteu commented:

Same problem for me.
admin.log gives me the following:

2023-03-13 11:09:40.007 +00:00 [INF] Migrating database.
2023-03-13 11:10:00.104 +00:00 [INF] Starting job DeleteSendsJob at "2023-03-13T11:10:00.1025207Z".
2023-03-13 11:10:00.225 +00:00 [ERR] Error performing DeleteSendsJob.
System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection. (Parameter 'startIndex')
   at System.String.LastIndexOf(Char value, Int32 startIndex, Int32 count)
   at Microsoft.Data.Common.ADP.IsEndpoint(String dataSource, String prefix)
   at Microsoft.Data.Common.ADP.IsAzureSynapseOnDemandEndpoint(String dataSource)
   at Microsoft.Data.SqlClient.SqlConnection.CacheConnectionStringProperties()
   at Microsoft.Data.SqlClient.SqlConnection.set_ConnectionString(String value)
   at Microsoft.Data.SqlClient.SqlConnection..ctor(String connectionString)
   at Bit.Infrastructure.Dapper.Repositories.SendRepository.GetManyByDeletionDateAsync(DateTime deletionDateBefore) in /source/src/Infrastructure.Dapper/Repositories/SendRepository.cs:line 35
   at Bit.Admin.Jobs.DeleteSendsJob.ExecuteJobAsync(IJobExecutionContext context) in /source/src/Admin/Jobs/DeleteSendsJob.cs:line 26
   at Bit.Core.Jobs.BaseJob.Execute(IJobExecutionContext context) in /source/src/Core/Jobs/BaseJob.cs:line 19
2023-03-13 11:10:00.229 +00:00 [INF] Finished job DeleteSendsJob at "2023-03-13T11:10:00.2289549Z".
2023-03-13 11:10:03.640 +00:00 [INF] Migrating database.

Docker Log:

bitwarden_1  | 2023-03-13 11:09:14,566 INFO Included extra file "/etc/supervisor.d/admin.ini" during parsing
bitwarden_1  | 2023-03-13 11:09:14,566 INFO Included extra file "/etc/supervisor.d/api.ini" during parsing
bitwarden_1  | 2023-03-13 11:09:14,566 INFO Included extra file "/etc/supervisor.d/events.ini" during parsing
bitwarden_1  | 2023-03-13 11:09:14,566 INFO Included extra file "/etc/supervisor.d/icons.ini" during parsing
bitwarden_1  | 2023-03-13 11:09:14,566 INFO Included extra file "/etc/supervisor.d/identity.ini" during parsing
bitwarden_1  | 2023-03-13 11:09:14,566 INFO Included extra file "/etc/supervisor.d/nginx.ini" during parsing
bitwarden_1  | 2023-03-13 11:09:14,566 INFO Included extra file "/etc/supervisor.d/notifications.ini" during parsing
bitwarden_1  | 2023-03-13 11:09:14,566 INFO Included extra file "/etc/supervisor.d/scim.ini" during parsing
bitwarden_1  | 2023-03-13 11:09:14,566 INFO Included extra file "/etc/supervisor.d/sso.ini" during parsing
bitwarden_1  | 2023-03-13 11:09:14,587 INFO RPC interface 'supervisor' initialized
bitwarden_1  | 2023-03-13 11:09:14,587 CRIT Server 'unix_http_server' running without any HTTP authentication checking
bitwarden_1  | 2023-03-13 11:09:14,588 INFO supervisord started with pid 1
bitwarden_1  | 2023-03-13 11:09:15,591 INFO spawned: 'identity' with pid 66
bitwarden_1  | 2023-03-13 11:09:15,593 INFO spawned: 'admin' with pid 67
bitwarden_1  | 2023-03-13 11:09:15,595 INFO spawned: 'api' with pid 68
bitwarden_1  | 2023-03-13 11:09:15,597 INFO spawned: 'icons' with pid 69
bitwarden_1  | 2023-03-13 11:09:15,600 INFO spawned: 'nginx' with pid 70
bitwarden_1  | 2023-03-13 11:09:15,607 INFO spawned: 'notifications' with pid 71
bitwarden_1  | 2023-03-13 11:09:30,838 INFO success: identity entered RUNNING state, process has stayed up for > than 15 seconds (startsecs)
bitwarden_1  | 2023-03-13 11:09:30,838 INFO success: admin entered RUNNING state, process has stayed up for > than 15 seconds (startsecs)
bitwarden_1  | 2023-03-13 11:09:30,838 INFO success: api entered RUNNING state, process has stayed up for > than 15 seconds (startsecs)
bitwarden_1  | 2023-03-13 11:09:30,838 INFO success: icons entered RUNNING state, process has stayed up for > than 15 seconds (startsecs)
bitwarden_1  | 2023-03-13 11:09:30,838 INFO success: nginx entered RUNNING state, process has stayed up for > than 15 seconds (startsecs)
bitwarden_1  | 2023-03-13 11:09:30,839 INFO success: notifications entered RUNNING state, process has stayed up for > than 15 seconds (startsecs)
bitwarden_1  | 2023-03-13 11:09:41,392 INFO exited: admin (terminated by SIGABRT (core dumped); not expected)
bitwarden_1  | 2023-03-13 11:09:42,395 INFO spawned: 'admin' with pid 174
bitwarden_1  | 2023-03-13 11:09:57,412 INFO success: admin entered RUNNING state, process has stayed up for > than 15 seconds (startsecs)
bitwarden_1  | 2023-03-13 11:10:03,678 INFO exited: admin (terminated by SIGABRT (core dumped); not expected)
bitwarden_1  | 2023-03-13 11:10:04,679 INFO spawned: 'admin' with pid 201
bitwarden_1  | 2023-03-13 11:10:20,110 INFO success: admin entered RUNNING state, process has stayed up for > than 15 seconds (startsecs)
bitwarden_1  | 2023-03-13 11:10:27,141 INFO exited: admin (terminated by SIGABRT (core dumped); not expected)
bitwarden_1  | 2023-03-13 11:10:28,143 INFO spawned: 'admin' with pid 235
bitwarden_1  | 2023-03-13 11:10:43,162 INFO success: admin entered RUNNING state, process has stayed up for > than 15 seconds (startsecs)
bitwarden_1  | 2023-03-13 11:10:49,392 INFO exited: admin (terminated by SIGABRT (core dumped); not expected)

jdisco commented Mar 29, 2023

I was able to reproduce this easily.
I believe it is caused by renaming the containers in the docker-compose.yml. My guess is that something inside the bitwarden container looks up the db container via DNS by the hardcoded name "db".

Looks like OP changed the services:

services:
  bitwarden:
  db:

to

services:
  bitwarden-unified:
  bitwarden-unified-db:

It will still run, however, if you only change the container name with the "container_name:" option.

l4rm4nd (Author) commented Mar 29, 2023

I was able to reproduce this easily.
I believe it is caused by renaming the containers in the docker-compose.yml. My guess is that something inside the bitwarden container looks up the db container via DNS by the hardcoded name "db".

Looks like OP changed the services:

services:
  bitwarden:
  db:

to

services:
  bitwarden-unified:
  bitwarden-unified-db:

It will still run, however, if you only change the container name with the "container_name:" option.

Hmm, that would be weird. The database container name is referenced in the env file (BW_DB_SERVER), so renaming it should technically still work.
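
For reference, the hostname the Bitwarden container uses is simply whatever settings.env points at, so a rename only requires the two names to stay in sync (the names below are the ones from the original report):

# docker-compose.yml: the service name doubles as the DNS hostname on the compose network
services:
  bitwarden-unified-db:
    image: mariadb:10

# settings.env (separate file) must reference that same name:
#   BW_DB_SERVER=bitwarden-unified-db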

perryflynn commented:

In my case the application was not checking whether BW_INSTALLATION_ID was defined and crashed when it tried to convert null into a Guid.
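
In other words, double-check that both installation values in settings.env are actually filled in before starting the container (the values below are placeholders; real ones come from https://bitwarden.com/host/):

# settings.env -- both values must be non-empty, otherwise services may crash at startup as described above
BW_INSTALLATION_ID=00000000-0000-0000-0000-000000000000   # placeholder GUID
BW_INSTALLATION_KEY=replace-with-your-installation-key    # placeholder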

fl0wm0ti0n commented:

I also ran into this; is there a solution?

Ontahm commented Nov 6, 2023

@fl0wm0ti0n

I dove a bit into this and it seems it's caused by mariadb compatibility somehow.

An easy workaround is to use a Postgres database and one of the latest versions of bitwarden/self-host.

This should give anyone still struggling with this issue an easy working setup.

Please note that the Postgres Docker image used here (latest) has multiple known vulnerabilities.

I would not advise running this server and exposing it outside a local network.

---
version: "3.8"

services:
  bitwarden:
    depends_on:
      - db
    env_file:
      - settings.env
    image: bitwarden/self-host:2023.10.1-beta
    restart: always
    ports:
      - "8888:8080"
    volumes:
      - ./bitwarden:/etc/bitwarden

  db:
    environment:
      POSTGRES_USER: "bitwarden"
      POSTGRES_PASSWORD: "Y0URpa$$Word"
      POSTGRES_DB: "bitwarden_vault"
    image: postgres:latest
    restart: always
    volumes:
      - ./data:/var/lib/postgresql/data

volumes:
  bitwarden:
  data:
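
One detail the compose file above does not show: settings.env also needs to be switched to the Postgres provider so the connection details match the new database. A sketch using the names from that compose file:

# settings.env, adjusted for the Postgres workaround above
BW_DB_PROVIDER=postgresql
BW_DB_SERVER=db
BW_DB_DATABASE=bitwarden_vault
BW_DB_USERNAME=bitwarden
BW_DB_PASSWORD=Y0URpa$$Word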

l4rm4nd (Author) commented Nov 6, 2023

@fl0wm0ti0n

I dove a bit into this and it seems it's caused by mariadb compatibility somehow.

An easy workaround is to use a Postgres database and one of the latest versions of bitwarden/self-host.

I can confirm that the image works with postgresql. I've a working example here.

Please note that the Postgres Docker image used here (latest) has multiple known vulnerabilities.

I would not advise running this server and exposing it outside a local network.

Although the postgresql Docker image lists publicly known CVEs, this is not a problem in itself. Nearly every Docker image has known vulnerabilities, often from an upstream library or local OS dependencies. MariaDB, for example, also has known CVEs for its latest image (see here).

So don't be fooled into believing this is a big issue and that you cannot securely expose an application that uses the latest postgresql/mariadb Docker image. The maintainers track CVEs thoroughly and fix relevant vulnerabilities in a timely manner. Some CVEs simply do not have priority for fixing, as they (often) cannot be remotely exploited; those stay unfixed and keep showing up in image vulnerability scanners.

But of course, never expose a database server to the Internet; it is unnecessary anyhow.

Ontahm commented Nov 7, 2023

@l4rm4nd

That was a quick draft to highlight the fact that the image works with postgres.

Better safe than sorry :) I respect what malicious and talented people can do with very little. And trust me, you should too.

I am very much shocked by how lightly you take the subject, and how assertive you sound.

I beg to differ with a lot of that, and I personally would simply follow some simple best practices:

  • Not use "latest" in production.
  • Use an Alpine version of Postgres that does not seem to have known CVEs (e.g. postgres:alpine3.18).
  • Keep my container scan reports clean by trying to have as few CVEs in my containers as possible.

It might often not be a big issue, but you cannot deny it is still problematic, and it can get pretty ugly in some very rare cases.

P.S. Please update the title to make it clear the issue is linked to MariaDB; this could help people link this issue with potential duplicates.

l4rm4nd changed the title from "SIGABRT (core dumped); not expected" to "Bitwarden Unified with MariaDB leads to "SIGABRT (core dumped); not expected"" on Nov 7, 2023
l4rm4nd (Author) commented Nov 7, 2023

@Ontahm

Sorry, didn't want to come off as rude or assertive.

I am very much shocked by how lightly you take the subject, and how assertive you sound.

I just wanted to point out that the world is not as dark as you painted it. If you look at the publicly known CVEs for postgresql and mariadb, you may see that they are not directly exploitable. Using an Alpine image with no referenced CVEs is much better though, true!

I do not take such things lightly, as I work professionally in the security field, so likelihood, impact, and the resulting risk matter to me. CVSS scores are often inflated and do not account for relevant factors such as exploitability and environmental context. Hopefully this will be addressed by CVSS 4.0.

I personally would simply follow some simple best practices

I am fully on your side here. Sorry if it came off otherwise.

Please update the title to make it clear the issue is linked to MariaDB; this could help people link this issue with potential duplicates.

Done.

fl0wm0ti0n commented Nov 19, 2023

@Ontahm

@fl0wm0ti0n

I dove a bit into this and it seems it's caused by mariadb compatibility somehow.

An easy workaround is to use a Postgres database and one of the latest versions of bitwarden/self-host.

This should give anyone still struggling with this issue an easy working setup.

Please note that the Postgres Docker image used here (latest) has multiple known vulnerabilities.

Thanks, I had already done that after I asked here for a solution; I forgot to mention it. It worked for me with this:

version: '3.5'

services:
  bitwarden:
    depends_on:
      - postgres
    env_file:
      - './settings.env'
    image: bitwarden/self-host:beta
    restart: always
    ports:
      - "5543:443"
      - "5580:8080"
    networks:
      traefik:
        ipv4_address: 172.20.0.30
    volumes:
      - bitwarden_data:/etc/bitwarden
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.bitwarden.rule=Host(`XXX`)"
      - "traefik.http.routers.bitwarden.tls=true"
      - "traefik.http.routers.bitwarden.entrypoints=websecure"
      - "traefik.http.routers.bitwarden.tls.certresolver=myresolver"
      - "traefik.http.services.bitwarden.loadbalancer.server.port=8080"
    container_name: bitwarden
networks:
  traefik:
    external: true

volumes:
  bitwarden_data:
    driver: local

And the Postgres container:

services:
  postgres:
    image: postgres:15
    restart: always
    environment:
      POSTGRES_USER: XXX
      POSTGRES_PASSWORD: XXX
      POSTGRES_DB: postgres
    ports:
      - "5432:5432"
    networks:
      traefik:
        ipv4_address: 172.20.0.20
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - './init-db_postgres_bitwarden.sql:/docker-entrypoint-initdb.d/init-db.sql:ro'
      - './init-db_postgres_photoprism.sql:/docker-entrypoint-initdb.d/init-db_photoprism.sql:ro'
      - './init-db_postgres_nextcloud.sql:/docker-entrypoint-initdb.d/init-db_nextcloud.sql:ro'
    container_name: postgres
  pgadmin:
    container_name: pgadmin4
    image: dpage/pgadmin4:latest
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=XXX
      - PGADMIN_DEFAULT_PASSWORD=XXX
    networks:
      traefik:
        ipv4_address: 172.20.0.21
    ports:
      - "5050:80"
    volumes:
      - pgadmin_data:/var/lib/pgadmin

networks:
  traefik:
    external: true

volumes:
    postgres_data:
      driver: local
    pgadmin_data:
      driver: local

and the initial DB creation script "init-db.sql":

CREATE USER XXX WITH PASSWORD 'XXX';
CREATE DATABASE bitwarden_vault;
GRANT ALL PRIVILEGES ON DATABASE bitwarden_vault TO XXX;

waltherB commented:

It's fine that psql works better.
But is there any workaround for MariaDB, or any way of migrating the MariaDB data to psql?

quiode commented Jan 22, 2024

For me, it works fine on 2024.1.0-beta but fails with 2024.1.1-beta.

danepowell (Contributor) commented:

For me, it works fine on 2024.1.0-beta but fails with 2024.1.1-beta.

I wonder if the root cause here is actually #3651 ?

I arrived at this issue via Google, trying to figure out why 2024.1.1-beta is crashing, but it looks like it's fixed in #3651

waltherB commented:

Thanks @quiode and @danepowell. Reverting to bitwarden/self-host:2024.1.0-beta from "latest" and running the fix from #3651 (ALTER TABLE `Grant` DROP COLUMN `Id`; and ALTER TABLE `Grant` ADD PRIMARY KEY `PK_Grant` (`Key`);) got my vault back :-)

malcolmradelet commented:

Thanks @quiode and @danepowell. Reverting to bitwarden/self-host:2024.1.0-beta from "latest" and running the fix from #3651 (ALTER TABLE `Grant` DROP COLUMN `Id`; and ALTER TABLE `Grant` ADD PRIMARY KEY `PK_Grant` (`Key`);) got my vault back :-)

I also had to revert to this image. The latest image was giving NGINX errors before and after running the SQL commands.

To be clear, I was getting the MariaDB errors mentioned above, as well as an NGINX error in the bitwarden container:

WARN exited: nginx (exit status 1; not expected)

Resolved now as mentioned above.

For others needing concise instructions:

  1. Update your docker compose file to image: bitwarden/self-host:2024.1.0-beta
  2. Log into the mariadb/mysql container
  3. Open MySQL: mysql -u bitwarden -p bitwarden_vault
  4. Enter the password from BW_DB_PASSWORD or MARIADB_PASSWORD (both should be the same)
  5. Run the two SQL commands below and restart the container:
ALTER TABLE `Grant` DROP COLUMN `Id`;
ALTER TABLE `Grant` ADD PRIMARY KEY `PK_Grant` (`Key`);
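
Putting those steps together, a minimal sketch assuming the container and service names from the original report (bitwarden_unified_db, bitwarden-unified):

# Hypothetical sketch using the names from the original compose file
docker exec -it bitwarden_unified_db mysql -u bitwarden -p bitwarden_vault
# enter the BW_DB_PASSWORD / MARIADB_PASSWORD value at the prompt, then run:
#   ALTER TABLE `Grant` DROP COLUMN `Id`;
#   ALTER TABLE `Grant` ADD PRIMARY KEY `PK_Grant` (`Key`);
docker compose up -d bitwarden-unified   # recreate the Bitwarden container so it picks up the 2024.1.0-beta image from step 1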
