suggestion: please install ca-certificates by default #15
Hmm, interesting thought. We don't include either of […]. Also, we do now have a separate […]. @paultag, what're your thoughts on including […]?
Hurm, good question. Is there anything in the default image that's capable of a TLS request? Looking through the image's binaries and which ones are ldd'd against gnutls (since I don't see OpenSSL), none of them look too exciting in terms of making outbound TLS requests. This could be a "gotcha" if someone does something like drop a Go binary onto the platform and finds no CA bundle without installing something that would have a transitive dependency on the CA bundle, but I'm not sure if we should optimize for that just yet. I can see both sides of this argument, and I'm not sure if we ought to make this call as Docker image maintainers; but by the same token, I'm not convinced big-D Debian will ever bring in the CA bundle by default (until TLS is a hard requirement for apt, because something something SPARC something something). The use-case of Docker images is basically assured to have networking and a server, which does change the tradeoffs a bit. @stapelberg can we get a bit more information on how you discovered this, and what steps you needed to take to debug the lack of the bundle in your image?
This is exactly what @Merovius recently did and how we discovered this. I myself ran into the same issue previously as well, e.g. when setting up my git-mirror Dockerfile. My take on this is: if we want to optimize for people using Debian itself, this isn’t necessary. But if we want people to build useful things on top of Debian, we should go for it. I’m thinking of Go binaries, bundler deployments, npm deployments, etc.
So, I'm really torn on this. Installing […]: I'm not yet convinced this is something that anyone except users deploying a static binary will hit, which is a fucking pain, since we never think about programs and operating systems in terms of promising an interface or resource to each other. This strikes me as the same class of issue as trying to run a binary that requires a newer kernel version for a core feature, libraries shelling out to weirdo programs that aren't always obvious, or reading files all over the filesystem that aren't always in place. The real trick here is the cost-benefit: how hard is this to debug (will this ever fail quietly?), what are the implications of it failing, is it worth shipping in every image, and do we want to start shipping every bit and bob to make sure folks can plop a static binary into the image and run it? Should users be expected to install packages or resources needed to run their static binary? Or we can take the coward's way out and use TLS everywhere and punt forever.
Yeah, I'll sleep on this. I'm not sure what the right thing to do here is.
This is especially essential as I'm facing a problem where we're behind a corporate proxy that requires trusting a self-signed root certificate authority in order to access the internet. There's no way to […]
@ycprog if you're able to download images from the Docker Hub, it should be really trivial to set up an automated build (with a repository link, so it's auto-rebuilt any time the base image is updated).
I ran into a similar situation with a corporate proxy.
I just hit this issue. Was a bit surprised that […]
Something I just realized that's relevant to this thread but isn't mentioned is that the […]
My Go program running in the Debian container fails any HTTPS requests to sites using Let's Encrypt's certs with a certificate verification error. Simply adding `ca-certificates` to the image and rebuilding fixed it.
From Docker's build log, I guess this does the magic by adding the missing root CA.
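For reference, the typical failure mode from a Go binary in this situation is an `x509: certificate signed by unknown authority` error from crypto/x509, and the fix really is a one-package install. A minimal sketch of such a Dockerfile (the binary name `myapp` is hypothetical):

```dockerfile
FROM debian:bullseye-slim

# Install the Mozilla root CA bundle so Go's crypto/x509 (and anything else
# that reads /etc/ssl/certs) can verify TLS peers
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates \
 && rm -rf /var/lib/apt/lists/*

# "myapp" stands in for a statically linked Go binary
COPY myapp /usr/local/bin/myapp
CMD ["myapp"]
```

`--no-install-recommends` keeps the layer small; `ca-certificates` itself only pulls in openssl.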
I have the same problem in my company. We have an internal Artifactory, accessed over HTTPS with an official cert. But I first have to install `ca-certificates` before anything can talk to it.
Add common CA certs to the Dockerfile. The Debian image doesn't include them by default: debuerreotype/docker-debian-artifacts#15. Most likely a user will want to use the reporter functionality hence it should trust some root CAs such as DigiCert and so on.
Summary of this post: an argument in favor of installing `ca-certificates` by default. This describes the same case as previously pointed out in the following comments: #15 (comment), #15 (comment). A few remarks regarding past answers:
Yes
Users using
The simplified version of this, which only builds one image based on one architecture, might be […]; doing so for every tag and every architecture provided on docker.io for debian seems to be another scope. Moreover, such "intelligent derivation of images" would likely be a fragile/half-baked solution. See below for more details.

The case for resolving the chicken-and-egg issue for […]: first of all, let's put aside the question of whether one should consume Debian packages over […]. However, the fact that […]. The following Dockerfile demonstrates the problem:

FROM docker.io/debian
RUN sed -i "s#http://deb.debian.org#https://deb.debian.org#g" /etc/apt/sources.list
RUN sed -i "s#http://security.debian.org#https://security.debian.org#g" /etc/apt/sources.list
# This will fail with "No system certificates available. Try installing ca-certificates."
RUN apt-get update && apt-get --assume-yes install curl

The solutions for users in the above situations are weird workarounds:
In the previous example one might say that the alternative to consume over
Now to the real impediment that compounds this problem and makes previously proposed naive solutions (#15 (comment)) inapplicable: container images are meant to be reused and derived from. For the reasons explained here, and especially because the last part leaves users implementing fragile solutions, it seems that the cost-benefit balance leans towards having `ca-certificates` installed by default. It is also worth pointing out that the Alpine and CentOS official container images do not suffer from this issue.

Pure opinion based
Rant
While interesting, adding […]. A potential workaround for users wanting 100% fully TLS-using images (especially if you can pull this image, which thus means your host would necessarily have to have a reasonable set of certificates) would be something like the following:

FROM debian:bullseye-slim
RUN sed -i -e 's/http:/https:/g' /etc/apt/sources.list
COPY ca-certificates.crt /etc/ssl/certs/
RUN apt-get update && apt-get install -y ca-certificates

then, build with:

$ docker build -f Dockerfile /etc/ssl/certs
Sending build context to Docker daemon 339.5kB
Step 1/4 : FROM debian:bullseye-slim
---> 1e40bc10bc1f
Step 2/4 : RUN sed -i -e 's/http:/https:/g' /etc/apt/sources.list
---> Running in dd90fe0b4154
Removing intermediate container dd90fe0b4154
---> dc69264372bb
Step 3/4 : COPY ca-certificates.crt /etc/ssl/certs/
---> 57d6b8a9dbd5
Step 4/4 : RUN apt-get update && apt-get install -y ca-certificates
---> Running in c172bb123a1f
Get:1 https://security.debian.org/debian-security bullseye-security InRelease [44.1 kB]
Get:2 https://deb.debian.org/debian bullseye InRelease [113 kB]
Get:3 https://deb.debian.org/debian bullseye-updates InRelease [36.8 kB]
Get:4 https://deb.debian.org/debian bullseye/main amd64 Packages [8178 kB]
Get:5 https://security.debian.org/debian-security bullseye-security/main amd64 Packages [29.4 kB]
Fetched 8401 kB in 1s (6277 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
openssl
The following NEW packages will be installed:
ca-certificates openssl
0 upgraded, 2 newly installed, 0 to remove and 1 not upgraded.
Need to get 1009 kB of archives.
After this operation, 1891 kB of additional disk space will be used.
Get:1 https://deb.debian.org/debian bullseye/main amd64 ca-certificates all 20210119 [158 kB]
Get:2 https://security.debian.org/debian-security bullseye-security/main amd64 openssl amd64 1.1.1k-1+deb11u1 [851 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 1009 kB in 0s (8372 kB/s)
Selecting previously unselected package openssl.
(Reading database ... 6653 files and directories currently installed.)
Preparing to unpack .../openssl_1.1.1k-1+deb11u1_amd64.deb ...
Unpacking openssl (1.1.1k-1+deb11u1) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20210119_all.deb ...
Unpacking ca-certificates (20210119) ...
Setting up openssl (1.1.1k-1+deb11u1) ...
Setting up ca-certificates (20210119) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1 /usr/lib/x86_64-linux-gnu/perl5/5.32 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.32 /usr/share/perl/5.32 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Updating certificates in /etc/ssl/certs...
129 added, 0 removed; done.
Processing triggers for ca-certificates (20210119) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Removing intermediate container c172bb123a1f
---> 2844a19fda72
Successfully built 2844a19fda72

(That also reasonably updates once that "real" […])
On Tue, Aug 31, 2021 at 05:29:24PM -0700, Tianon Gravi wrote:
... to be including nothing more than `debootstrap --variant=minbase` ---
... users wanting 100% fully TLS-using images ...
Those users should add variant `minbasepc` to `debootstrap`
where `minbasepc` is `minbase` plus certificates.
After that come back and whine for a build of such image.
Nah, introduce the variant […]
I'd like to add one workaround that works well in my situation: we have systems without Internet access, but we do mirror Debian on our Artifactory server. When building images based on the official Debian images, this creates a bootstrap problem: we need to add packages from our mirror, but our Artifactory is only available over HTTPS. The workaround is to disable certificate checking for installing ca-certificates, like so:
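A sketch of that workaround, assuming a hypothetical internal mirror host (`artifactory.example.com`); apt's documented `Acquire::https::Verify-Peer` and `Acquire::https::Verify-Host` options disable TLS verification just long enough to bootstrap the bundle:

```dockerfile
FROM docker.io/library/debian:bullseye

# Point apt at the internal HTTPS-only mirror (hostname is a placeholder)
RUN printf 'deb https://artifactory.example.com/debian bullseye main\n' \
      > /etc/apt/sources.list

# Bootstrap step: skip certificate checking only to install ca-certificates
RUN apt-get -o Acquire::https::Verify-Peer=false \
            -o Acquire::https::Verify-Host=false update \
 && apt-get -o Acquire::https::Verify-Peer=false \
            -o Acquire::https::Verify-Host=false \
            install -y ca-certificates

# From here on, TLS verification works normally against the installed bundle
RUN apt-get update
```

Note this trusts the network for exactly one step; apt's signature checking on the package lists still applies throughout.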
I would love to see these images default to HTTPS for apt sources, and include `ca-certificates`. There is discussion about making HTTPS be used by default on new installs. And the Debian Vagrant images now default to HTTPS apt sources, but those already included `ca-certificates`.
The counterargument to defaulting to HTTPS is that the only security benefit is that it requires eavesdroppers to analyze the traffic patterns to find out the size of files downloaded and guess the list of packages that way instead of seeing them directly. Repository security is provided by the signatures on the package list, not the certificates on the servers. Requiring HTTPS for Debian mirrors creates a vendor lock-in effect as we can no longer use donated server capacity, as we don't have a way to generate an arbitrary number of valid certificates for a given host name, so this would make the project beholden to big content delivery networks.
My original suggestion to add ca-certificates by default was not for the use-case of installing packages. Instead, my observation is that nothing I want to do with a Debian container works out of the box: I can’t clone a git repository to compile my software in a CI pipeline. I can’t have my programs query kernel.org for the current Linux version. I can’t push CI artifacts to a cloud service provider. My use-cases either don’t need a network at all, or they need ca-certificates. I think having networking work out of the box is not too much to ask for :) |
Well, we're definitely not going to have […]. What I'd suggest is trying either […]
If we wanted to use SSL to guard against these, we'd also have to pin the certificates to a trustworthy set of CAs as well, and also provide a mechanism that allows users to un-trust CAs without potentially breaking updates. It's not a simple change, but it attaches APT to the SSL trust mechanisms and enforces policy on them, specifically "do not disable any of the CAs that CDNs use for proxy certificates." Specifically proxy certificates are problematic because these are installed on thousands of machines that either share private keys, or have a mechanism to generate thousands of valid certificates quickly. I'm not convinced that this will give any significant amount of extra protection, but it will cause reliability issues, and the solution proposed in the thread to simply disable certificate validation to avoid those issues would degrade security to an even worse point than before.
That is not useful though, because these will have to be manually configured, as they do not get certificates for the `deb.debian.org` name, so I cannot simply redirect `deb.debian.org` to the nearest mirror with a DNS view as I can with HTTP.
Simon Richter:
>> The counterargument to defaulting to HTTPS is that the only security benefit is that it requires eavesdroppers to analyze the traffic patterns to find out the size of files downloaded and guess the list of packages that way instead of seeing them directly.
> This is unfortunately not true. If you think the GPG signatures alone are enough, consider these CVEs:
> If we wanted to use SSL to guard against these, we'd also have to pin the certificates to a trustworthy set of CAs as well, and also provide a mechanism that allows users to un-trust CAs without potentially breaking updates. It's not a simple change, but it attaches APT to the SSL trust mechanisms and enforces policy on them, specifically "do not disable any of the CAs that CDNs use for proxy certificates."
> Specifically proxy certificates are problematic because these are installed on thousands of machines that either share private keys, or have a mechanism to generate thousands of valid certificates quickly.
> I'm not convinced that this will give any significant amount of extra protection, but it will cause reliability issues, and the solution proposed in the thread to simply disable certificate validation to avoid those issues would degrade security to an even worse point than before.
Security is not a binary property. Even GPG has had vulns, so there are clearly
times when GPG signatures alone are not enough. Yes, TLS with cert pinning is
stronger than TLS without. TLS requires an active attack to even see the
contents, while plain HTTP is open to a passive attack: anything on the network
path can just listen. On top of that, unencrypted traffic makes traffic
injection attacks drastically simpler to do.
> Luckily, there are many mirrors that also provide HTTPS. It is not just the major CDNs.
> That is not useful though, because these will have to be manually configured, as they do not get certificates for the `deb.debian.org` name, so I cannot simply redirect `deb.debian.org` to the nearest mirror with a DNS view as I can with HTTP.
Yes, it is currently easier to use deb.debian.org. That setup is what is
driving centralization, not the use of HTTPS. With F-Droid, our downloader
automatically chooses between a list of official and user-maintained mirrors.
So that actually encourages decentralization. The right fix to the centralizing
effect of deb.debian.org is to make apt work smoothly and automatically without
needing a single domain name hosted by a single company. The plumbing is
already there with things like apt-transport-mirror.
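The mirror transport mentioned above ships with apt (1.6 and later) and needs no single central host; a minimal configuration sketch (mirror URLs are examples only):

```
# /etc/apt/mirrors.txt — plain list of candidate mirrors, one per line
https://deb.debian.org/debian/
https://mirror.example.org/debian/

# /etc/apt/sources.list — a single entry that fans out over the list above
deb mirror+file:/etc/apt/mirrors.txt bullseye main
```

apt picks a working mirror from the list per download, so adding or removing mirrors is a one-line edit on the client rather than a DNS trick.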
The argument for having HTTPS support available in the base image is quite simple, I think: you have to jump through enormous hoops to get there if it's not already in the base image, as has been documented here and in plenty of other places. The cost of having it in the image is minimal. Why is this even a discussion?
@stefanbethke to answer why this is a discussion: there was a time when the code for supporting HTTPS was a plugin to apt, e.g. apt-transport-https. That meant that apt had a much simpler code path when using HTTP sources (e.g. no TLS library and related code). Since Debian/buster, the HTTPS support has been built into apt, so that argument for using HTTP by default no longer applies. Also, the original apt threat model did not include privacy concerns, so defending against metadata leaks was not part of the picture. It is now clear that we also need to consider metadata leaks in apt's security model. For example, the most effective exploits are 0days, and 0days are only valuable as long as they are not known by the software maintainers. Someone looking to exploit an 0day will want to target specific machines to avoid making the vuln known to the world. Metadata leaks are essential for targeting. HTTPS limits the scope of metadata leaks by a large factor.
If an attacker is able to MITM the connection, HTTPS would at least guarantee that you're talking to an actual Debian mirror rather than an impersonator. I'm not very knowledgeable about Debian's repositories, so please excuse me if I'm mistaken and this attack is somehow mitigated. But otherwise, using HTTPS is not just about privacy.
Yes, that's mitigated; see https://wiki.debian.org/DebianRepository/Format#Date.2C_Valid-Until |
@stefanbethke it's a discussion because it pulls in extra infrastructure that is not required otherwise, and it breaks a semi-common deployment scenario, where `deb.debian.org` is redirected to a local mirror at the DNS level. In well-connected places where Internet flat rates are available, that's only a handful of large installations, mostly cloud hosters; in places where Internet is expensive, that's a common setup.
@GyrosGeier I don't understand why enabling HTTPS support in the image breaks redirecting http://deb.debian.org. Btw, that DNS change requires breaking DNSSEC, which in itself is problematic. Quite the opposite: enabling HTTPS enables specifying a mirror that is only available over HTTPS, a very common setup in medium and large enterprises.
Recently this discussion drifted into the pros and cons of using HTTPS for deb.debian.org, whereas the original request was much simpler: just include `ca-certificates` in the image by default.
Which could be achieved by providing an extra image […]. So, the "default" image, which matches debian-essential. See #15 (comment) for the descriptions of the image names. Yes, as far as I understand it, the original request is already granted. I think this issue is open to document that "what you want is available under a different image name".
Something like
On top of what @stefanbethke said about breaking DNSSEC, the days of overriding things at the network level are over. The right place for that kind of configuration is in the end points, e.g. the clients. For example, […]
"Patches welcome"
Debian Docker images will soon default to HTTPS for apt sources, so force it now: debuerreotype/docker-debian-artifacts#15
I just ran into this with an internal Nexus proxy with a certificate signed by an internal CA, behind a corporate firewall that blocks most outgoing connections. My solution was something along the lines of this:

FROM docker.io/library/debian:bullseye
# Install internal CA certificate where update-ca-certificates can pick it up
RUN mkdir -p /usr/local/share/ca-certificates
COPY internal-ca.crt /usr/local/share/ca-certificates
# Temporarily make the internal CA the only trusted CA
RUN mkdir -p /etc/ssl/certs
RUN cp /usr/local/share/ca-certificates/internal-ca.crt /etc/ssl/certs/ca-certificates.crt
# Set apt repository location
RUN printf -- '\
deb https://nexus.lan/repository/debian-proxy bullseye main\n\
deb https://nexus.lan/repository/debian-proxy bullseye-updates main\n\
deb https://nexus.lan/repository/debian-security-proxy bullseye-security main\n' > /etc/apt/sources.list
# Install ca-certificates, this will run update-ca-certificates which will include our internal CA in the trust
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y ca-certificates && apt-get clean
# Rest of your Dockerfile goes from here
My company blocks firewall access to Debian packages by default, which means to build a Docker image I have to download the CA certs, update them, and replace all the sources. I can do it all manually, similar to how @Raniz85 is doing it, but I'm not going to lie: if […]
As I understand it, the original point made is not about tools but whether the image is able to use HTTPS out of the box, this being a common use case for users. It feels like the examples given for context were unnecessarily turned into a strawman.

Those ideas keep circulating around the assumption that users directly consume the base image, whereas base container images are meant to be derived and reused. Adding variations to a base image does not address the issue, as introducing flavors only addresses the case of direct consumption of the base image and not derived images. The goal of a base image is to address the common use cases in one base image, not several flavors. See previous comment 15#issuecomment-907653538. What is being said by the answers is that using TLS connections is not a sufficiently common use case to make it into this base image. Moreover, the collection of workarounds for resolving the chicken-and-egg problem described here (15#issuecomment-907653538), which as stated by others is not at all related to a discussion about HTTP vs. HTTPS for packages, is questionable compared to migrating to a saner base image that can use the main layer-7 network protocol (HTTPS) out of the box.
I've used the base Debian image to build a custom Python image and was really confused why sending emails via SMTP with TLS did not work. It took some time to understand that it required `ca-certificates`, which the image does not include.
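A quick way to check from inside a container whether Python can see a CA bundle at all is to ask the `ssl` module for its default verification paths (a diagnostic sketch; the Debian locations mentioned in the comments are typical, not guaranteed):

```python
import os
import ssl

# Where this build of OpenSSL expects the system CA bundle and CA directory
# (on Debian typically /usr/lib/ssl/cert.pem and /usr/lib/ssl/certs, which
# the ca-certificates package populates via /etc/ssl)
paths = ssl.get_default_verify_paths()

print("cafile:", paths.openssl_cafile,
      "exists:", os.path.exists(paths.openssl_cafile))
print("capath:", paths.openssl_capath,
      "exists:", os.path.isdir(paths.openssl_capath))
```

On a stock Debian container without `ca-certificates`, both `exists:` values come out false, which explains failures like the SMTP-with-TLS one above before any network debugging is needed.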
If Debian isn't going to include something as fundamental as CA certs in its default container images (the ones listed on Docker Hub), then I think the documentation should at least say so. I just got bitten by this with my Debian-based image that runs a statically compiled Rust binary.
I wasted my time because my Golang app could not connect to MongoDB Atlas. I suspected the MongoDB driver was the culprit, so I tried disabling TLS, and then realized that Debian does not include ca-certificates by default.
Currently, the ca-certificates package is not included in the debian docker image. Nowadays, this essentially means not being able to make outbound connections to the internet by default. Given TLS’s pervasiveness, could we install ca-certificates by default?