We have been using bazel-remote in the following configuration:

- An AWS S3 bucket.
- A local copy of bazel-remote on every developer's computer, which reads from S3 but doesn't write to it.
- A local copy of bazel-remote on every CI machine, which does write to S3.
- A .bazelrc in the repository that points at grpc://localhost:9092.
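For concreteness, here is roughly what the developer-machine side looks like. The directory, size, and bucket values below are placeholders, the exact --s3.* flag names should be checked against the bazel-remote README, and the "developers don't write to S3" part is assumed to come from the S3 credentials rather than from a bazel-remote flag:

```
# Local bazel-remote on a developer machine, using the S3 proxy backend.
# Placeholder values; read-only access to the bucket is assumed to be
# enforced via the S3 credentials, not via a bazel-remote flag.
bazel-remote \
  --dir /var/cache/bazel-remote \
  --max_size 30 \
  --s3.endpoint s3.us-east-1.amazonaws.com \
  --s3.bucket our-bazel-cache-bucket

# .bazelrc checked into the repository (bazel-remote's default gRPC port is 9092):
build --remote_cache=grpc://localhost:9092
```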
However, we have no idea if this setup makes sense, or whether we should consider a setup more like this one:

- An AWS S3 bucket.
- One or more bazel-remote instances running on AWS, with authentication.
- A .bazelrc that points at these shared instances.
- Developers get unauthenticated/read-only access; CI machines get authenticated/read-write access.
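And roughly what we imagine the shared-instance variant's .bazelrc would look like. The hostname is a placeholder, and the read-only/read-write split is expressed only on the Bazel side here; bazel-remote's own authentication isn't shown:

```
# Hypothetical shared cache endpoint (placeholder hostname).
# Developers: read from the cache but don't upload local build results.
build --remote_cache=grpc://cache.internal.example.com:9092
build --remote_upload_local_results=false

# CI: same endpoint, but with uploads enabled (run with `bazel build --config=ci`).
build:ci --remote_upload_local_results=true
```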
It's not clear from the README how these two setups differ in terms of tradeoffs, or how the choice affects things like compression. For example, will Bazel compress artifacts before sending them to the cache, or do we need the local bazel-remote instance to proxy the requests first?
It would be nice to have a few example setups and some of the tradeoffs to consider with each.
Hi, adding some example configurations to the docs is a good idea; I'll try to organise that.
To answer your questions: normally I would set up a single bazel-remote instance (with the S3 backend if you wish) and configure CI and developer machines to talk to that, with whichever access levels you're comfortable with (e.g. only allowing CI to write to the cache). That way you don't need to maintain a cache on each CI machine between jobs, and the chance of getting a hit from bazel-remote's disk cache layer increases (assuming it's a reasonable size for your codebase).
If you don't want developers to be able to write to the same cache as CI, you might consider running a second bazel-remote instance only for developers (possibly also using the same S3 bucket, but not uploading there).
I'm aware of a couple of benefits to running bazel-remote on client machines, both things that could be fixed in bazel itself at some point:
- Some people use this to make Bazel upload results asynchronously to a central cache, which allows builds to finish sooner while uploads continue for a time afterwards.
- It allows the use of bazel-remote's compressed storage mode, which uploads/downloads compressed blobs to/from the proxy backends (like S3). This can be faster than transferring uncompressed blobs. Bazel itself doesn't support compression yet; you can watch Support compressed Bytestream transfers for remote execution bazelbuild/bazel#12670 for updates on that. (A rough sketch of this mode follows below.)
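As a rough sketch (flag names per the README; the directory, size, and bucket values are placeholders), a client-side bazel-remote with compressed storage in front of S3 would look something like this:

```
# bazel-remote on the client, storing blobs zstd-compressed and proxying
# compressed blobs to/from the S3 backend (placeholder values below).
bazel-remote \
  --dir /var/cache/bazel-remote \
  --max_size 30 \
  --storage_mode zstd \
  --s3.endpoint s3.us-east-1.amazonaws.com \
  --s3.bucket our-bazel-cache-bucket
```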
It is also possible to run bazel-remote on the client and have it talk to a shared bazel-remote instance, but that doesn't currently work with compression. I should try to implement that soon.