
Chore: Deploy via CloudRun #2465

Merged: 7 commits merged into master from deploy/cloudrun on Apr 22, 2019
Conversation

lukeed
Member

@lukeed lukeed commented Apr 21, 2019

Added a custom Dockerfile and a simple Makefile (for speed/simplicity). The Makefile can be transformed into a scripts/* file... whatever you want.

To build the container, you run make docker from inside the site directory. This assumes that you've already built the Sapper app!

I could make the docker target require that the build runs first, but I left it as is for speed/testing.
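
For reference, a minimal sketch of how the rest of that Makefile might look (the project name is the same placeholder used in the example below; the targets here are illustrative, not necessarily the PR's exact contents):

HASH := `git rev-parse --short HEAD`
PROJECT := todo_google_project_name
SERVICE := svelte-website
IMAGE := gcr.io/$(PROJECT)/$(SERVICE):$(HASH)

# build the container image from the already-built Sapper app
docker:
	docker build -t $(IMAGE) .

# push the image to Google Container Registry
push: docker
	docker push $(IMAGE)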

Once the image has been built, you can enter it with the following:

$ docker run -p 3000:3000 -it <IMAGE> sh
# eg: docker run -p 3000:3000 -it gcr.io/todo_google_project_name/svelte-website:39afa1c1 sh

From here, you can ls -alh and whatnot to explore the contents.

Port 3000 gets forwarded for testing, since that's set as the default process.env.PORT value.

# from inside the image
$ node __sapper__/build
# Open localhost:3000 in browser~

TODOs

@Rich-Harris You need to set up a GCP project, if you haven't already. Then you will enter that project's name inside the Makefile. I don't think this is particularly sensitive, but you can inject it through your CD pipeline, whatever it is.

You'll also need to update the DNS for svelte.dev via Cloudflare and set up the CloudRun "domain mapping".
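
The domain mapping step would look roughly like this (Cloud Run commands were under gcloud beta at the time; exact flags are my assumption):

$ gcloud beta run domain-mappings create \
    --service svelte-website \
    --domain svelte.dev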

Finally, this does nothing with ENV values. I haven't followed the trail much, but personally I have them live on the platform itself. CloudRun (like many other services) allows you to configure ENV values on the service itself, which makes them available to the running container. This basically means that you can configure them once (inside GCP's GUI if you want) and then never have to touch them again, even between deployments.
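
As a sketch, setting those values on the service itself could also be done once from the CLI (the key/value pairs here are placeholders):

$ gcloud beta run services update svelte-website \
    --set-env-vars KEY1=value1,KEY2=value2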

Happy to help with any of these steps if you want.


I didn't remove the now.json file in case it was still needed for reference?

@lukeed lukeed requested a review from Rich-Harris April 21, 2019 20:43
@codecov-io

codecov-io commented Apr 21, 2019

Codecov Report

Merging #2465 into master will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master    #2465   +/-   ##
=======================================
  Coverage   91.83%   91.83%           
=======================================
  Files           1        1           
  Lines          49       49           
=======================================
  Hits           45       45           
  Misses          4        4


@Conduitry
Member

A couple of things -

Is there a reason to have a two-step build process in the Dockerfile? It looks like that assumes that you've already built the Sapper project yourself anyway, so I'm not sure what the two steps are for.

I also think it might be nicer not to assume Docker is installed locally, and to instead just use gcloud to upload a tarball to Google and build the image there. This would require a .gcloudignore so that we don't upload tons of stuff we don't need. It would also probably be a good idea to have a .dockerignore, so that if you do use Docker to build the image, it doesn't send everything to the Docker backend (although this is much less of an issue than sending everything over the network).
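
For reference, the remote-build flow being suggested would look something like this (the project name is a placeholder; gcloud builds submit tars up the current directory, honoring .gcloudignore, and builds the image on Google's side):

$ gcloud builds submit --tag gcr.io/todo_google_project_name/svelte-website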

What I arrived at when I was fiddling with this stuff a week or so ago was to have a .dockerignore with my ignored files in it, and a .gcloudignore containing #!include:.dockerignore - a special include comment that just brings in the contents of .dockerignore.
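
In other words, the entire .gcloudignore is one line:

#!include:.dockerignore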

The .dockerignore that I had with some test Sapper projects is:

/*
!/Dockerfile
!/package*.json
!/__sapper__
/__sapper__/*
!/__sapper__/build
!/static

which specifically whitelists certain files and directories. This would also need to be updated to include the /content directory.
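i.e. with one more whitelist entry at the end:

!/content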

Lastly, is there a reason you're not using one of the official Node Alpine images? Do those include extra stuff you're trying to avoid?

@Conduitry
Member

Oh, is the answer to my first and last questions that the latter image only includes Node itself and not npm/Yarn?

@lukeed
Member Author

lukeed commented Apr 21, 2019

The first pass is a full box, with npm/yarn. We use this one to npm install the production dependencies. That's it – as you noted, the actual build process happens ahead of time (outside the Docker phase), which relies on all the devDeps.

The second pass is Alpine + Node only, which shaves off all of "Node's devDeps", if you wanna think of it like that.

In both phases, we're manually copying over the files/directories we need. No need for an endless list of ignores. Here's what the final image contains:

/app # ls -alh
total 256
drwxr-xr-x    1 root     root        4.0K Apr 21 20:32 .
drwxr-xr-x    1 root     root        4.0K Apr 21 21:03 ..
drwxr-xr-x    3 root     root        4.0K Apr 21 20:29 __sapper__
drwxr-xr-x    6 root     root        4.0K Apr 21 20:29 content
drwxr-xr-x  116 root     root        4.0K Apr 21 20:29 node_modules
-rw-r--r--    1 root     root      224.0K Apr 21 20:29 package-lock.json
-rw-r--r--    1 root     root        2.1K Apr 21 19:54 package.json
drwxr-xr-x    9 root     root        4.0K Apr 21 20:29 static

This is all that we need to actually run the website.
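
Pieced together from this description and the listing above, the two-phase Dockerfile has roughly this shape (the base images, tags, and npm flags are my assumptions, not necessarily the PR's exact file):

# Phase 1: full Node image (includes npm) used only to install production deps
FROM node:10-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production

# Phase 2: Node-only runtime (no npm/yarn), with just the files the app needs
FROM mhart/alpine-node:base-10
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY package.json package-lock.json ./
COPY __sapper__/build ./__sapper__/build
COPY static ./static
COPY content ./content
# GCR injects PORT; 3000 is the default the app listens on
ENV PORT 3000
EXPOSE 3000
CMD ["node", "__sapper__/build"]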

@Conduitry
Member

I think that specifying ENV PORT 3000 in the Dockerfile makes sense, as I believe that variable is absolutely necessary for GCR to even function. Other secrets etc. could still live in the GCR config on the site somewhere.


Yeah, I realized that was probably the reason for the two-phase build right after I asked. Having a .dockerignore is still slightly valuable though, because it's read by the Docker client and stops various files from even being transferred to the Docker engine. Even if you only COPY specific files, the Docker engine will still get everything by default. This is much more important if we're instead building with gcloud (which, as I mentioned, I think would be nicer and more portable), as without a .gcloudignore everything will get sent over the network before the image is built.

@lukeed
Member Author

lukeed commented Apr 21, 2019

Correct. GCR requires process.env.PORT access. The EXPOSE is just a default port to listen on. The app itself is already pulling from process.env, so nothing else needs to be done on the docker front. I have the same Dockerfile config running for some existing GCR sites.


I didn't set up gcloud builds here, only because I didn't want to impose more Google stuff on you guys. It totally works though & has less to run locally, so I'm in favor.

It also gives a massive free tier.

@Conduitry
Member

Okay never mind about that stuff about PORT. I was under the mistaken impression that the container used the PORT env to tell Google what port it's listening on - when instead, the PORT env is what Google uses to tell the container what port it needs to listen on.


Since we need to have gcloud installed anyway to be able to deploy the service (and to be able to auth with the gcr.io registry), yeah I think using gcloud builds to build the image remotely makes more sense. In this case, we do want to have a .gcloudignore (and imo, it should just point to a .dockerignore where all the action is).

No strong opinions about whether EXPOSE 3000 should stay. I guess it is nice in that it tells a human reading the file which port the server is going to listen on by default.

@Rich-Harris
Member

I have no strong opinions (or even weak opinions tbh) about docker push vs gcloud builds submit as both seem to work equally well on my computer at the moment, no small thanks to @lukeed, so whatever you reckon is best is good with me. Other than that, 👍

Aside from some caching headers, webpagetest.org now thinks we're doing a pretty good job:

[Screenshot: webpagetest.org results, 2019-04-21]

@Rich-Harris Rich-Harris merged commit f468703 into master Apr 22, 2019
@Rich-Harris Rich-Harris deleted the deploy/cloudrun branch April 22, 2019 00:43
@lukeed
Member Author

lukeed commented Apr 22, 2019

More often than not, something like this project will wanna use Cloud Build. Again, I didn't here (initially) because I didn't want it to feel like an all-or-nothing deal in order to use GCR. This is the bare-bones approach to getting it working, which was also the fastest way, since the old way was getting pricey, fast.

But yes, using Cloud Build will 100% require an ignore-file so that we're not sending up a million files to be built.

@@ -0,0 +1,24 @@
HASH := `git rev-parse --short HEAD`

SERVICE := svelte-website
Contributor

Maybe document how to create the service in GC somewhere?
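
For reference, creating/deploying the service looked roughly like this at the time (Cloud Run was still in beta; the region and flags are my assumptions):

$ gcloud beta run deploy svelte-website \
    --image gcr.io/todo_google_project_name/svelte-website:39afa1c1 \
    --region us-central1 \
    --allow-unauthenticated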
