# Workspaces and monorepo support (add `sync --all-packages`) #6935
To expand on the Docker image, this is what I would want to do:

```dockerfile
FROM python:3.12.5-slim-bookworm AS python-builder
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv

# Create a venv at a well-known location so it can be COPY'd later
RUN uv venv /opt/python
# Tell uv to use that venv
ENV UV_PYTHON=/opt/python

WORKDIR /app
COPY uv.lock pyproject.toml /app/

# No need to COPY the pyproject.toml of libs - they're all well-specified in uv.lock anyway
# Install the app without any workspace members, i.e. all 3rd-party dependencies
RUN uv sync --locked --no-install-workspace --package=server

COPY packages /app/packages
# Install 1st-party dependencies, but only those that are needed
# Also pass the fictional `--no-editable` flag to actually bundle them into the venv
RUN uv sync --locked --no-editable --package=server

FROM python:3.12.5-slim-bookworm AS runtime
# Copy the venv that has all 3rd-party and 1st-party dependencies, ready for use
COPY --from=python-builder /opt/python /opt/python
ENV PATH="/opt/python/bin:$PATH"
```

I can't do that because: …
(1) is easy to resolve; would that help?
(1) Yes, that would be great! For (2), I suspect the only generally useful solution would be to encode the package-specific dependency tree in `uv.lock`.
For (2), we're thinking of perhaps a dedicated command like …
Let's track (2) in #5792.
How is this different from …?
I think that does what you're describing?
#6943 adds support for …
Sorry, you're moving too quickly for me!

**About (1):** You're right that … Alternatively, a more explicit flag in the config, like:

```toml
[project]
name = "monorepo-root"
version = "0"
requires-python = "==3.12"
dependencies = ["mylib", "myserver"]

[tool.uv]
dev-dependencies = []
package = false

[tool.uv.sources]
mylib = { workspace = true }
myserver = { workspace = true }

[tool.uv.workspace]
members = ["packages/mylib", "packages/myserver"]
```

**On (2), the Docker stuff:** I don't really understand how #6943 helps, but it seems sensible anyway. I see three obvious ways (not uv specific) of getting stuff into a Docker image: …

All of these require a little pre-Docker script to generate the …
For (2), I thought you wanted to do this: …

This now works as expected if you use …
This is also causing some issues for me with 0.4.0+. Locally, sync works fine:

```console
> uv sync
Resolved 341 packages in 76ms
Audited 307 packages in 3ms
```

But when adding `--frozen`:

```console
> uv sync --frozen
Uninstalled 97 packages in 7.57s
...
Audited 210 packages in 0.25ms
```

The different dependency resolution behavior depending on whether I pass `--frozen` is unexpected.
Does your root `pyproject.toml` define a `[project]` table?
No, just a "virtual" workspace, effectively this:

```toml
[tool.uv]
dev-dependencies = [
    "...",
]

[tool.uv.workspace]
members = ['libs/*', 'sandbox']
```
I can look into why you're seeing differences (it sounds like a bug!). I'd suggest migrating to a virtual project though, i.e., adding a `[project]` table.
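A minimal sketch of that migration, assuming the layout from the comment above (the name and version here are placeholders, not from the thread):

```toml
# Root pyproject.toml: adding a [project] table makes the root a
# "virtual" project that uv can sync, while the members stay as-is.
[project]
name = "workspace-root"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = []

[tool.uv]
dev-dependencies = [
    "...",
]

[tool.uv.workspace]
members = ['libs/*', 'sandbox']
```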
Adding the `[project]` table fixed it.
Great! Still gonna see if I can track down and fix that bug :)
What @b-phi is talking about is exactly what I mentioned in (1) of my comment up above. Basically you have to add each workspace member in three places. It would be great if that could be made unnecessary (in one of the ways I suggested, or some other way).

On (2), the Dockerfiles: the command you added helps, but it still doesn't work if there are dependencies between packages and you haven't yet copied in the files. There's an MRE here. It fails when trying to run the …
I'm confused on (2). We have `--no-install-workspace` …
Oh, of course, sorry. So (2) I think is resolved. The remaining stuff about getting the right files into the Dockerfile is not really uv's problem (although it could be helped by stuff like …).

The main point of this issue is (1), and I'm very happy to wait for you to figure out an approach that you're happy with. But I think it would be great to resolve.
👍 Part of what I'm hearing here too is that we need more + better documentation for this stuff.
Yeah, I don't blame you, it's moving really fast.

EDIT: adding this here to make it clear to any future travellers why this issue is still open.
I'm probably biased, but it seems to me that a monorepo with possibly interdependent libs, and independently buildable apps (most of the time built into Docker images), is a common pattern - at least it's what workspaces promote.

That said, I must say I'm having an amazing experience with uv (and ruff, and Astral in general), and I'll advocate for using it in all the projects I maintain!
Jumping in here: managing multiple environments would be very helpful. In our repo, some sub-packages have heavy ML dependencies, others have Linux-only dependencies. Ideally I would be able to manage multiple environments for different use cases, e.g. a lightweight venv on an OSX host, a Linux venv that I use via Docker, a heavier ML env, etc.
I've managed to do that by defining apps as packages (that you target with `uv sync --package …`); see the sketch below.

If you need very specific environments that are orthogonal to apps, you could create one with …
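For example, a sketch of targeting a single app's environment (the package name `myserver` is an assumption):

```sh
# Sync the venv to contain only the `myserver` member and its
# dependencies, skipping other members' heavyweight extras.
uv sync --package myserver
```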
I've resolved that point by adding all local packages to the root package (…).
There's only one lockfile, so if at the root of your monorepo you run …
One thing preventing us from switching our monorepo over to uv is that it's really hard to tell in CI which projects in a workspace actually changed when `uv.lock` changes. We have many apps deployed from a single monorepo and don't want to have to build Docker images for all of them every time `uv.lock` changes (e.g. someone adding a new project or library to the workspace).
@rokos-angus one way around that would be to have a git-hook/CI step/something that runs … (see the sketch below).
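One way to make that concrete, sketched under assumptions (the member names are placeholders, and this assumes a uv version that has `uv export`): write a per-package fingerprint of the resolved dependency set on every commit, and let CI rebuild only the images whose fingerprint changed.

```sh
#!/usr/bin/env bash
# Hypothetical CI helper: record a hash of each workspace member's
# resolved third-party dependencies so a later step can diff them.
set -euo pipefail
mkdir -p .dep-hashes
for pkg in server mylib; do  # assumed workspace member names
  # `uv export` resolves from uv.lock; `--package` restricts it to one member.
  uv export --frozen --no-hashes --package "$pkg" \
    | sha256sum | cut -d' ' -f1 > ".dep-hashes/$pkg"
done
```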
Where I work we use https://github.com/josh-project/josh to figure out what changed (disclaimer: I'm a contributor to that project). With that said, I think CI is a separate problem. When it comes to Python, here's what I've been able to narrow down the requirements to: …
How we solved it for ourselves is a custom script connected via https://github.com/recogni/setuptools-monorepo that resolves those dependencies in a desired way depending on context (for example, whether to agree to the point of having a shared lockfile; this is often a pain point …)
@vlad-ivanov-name I'm slowly working on something similar at https://github.com/carderne/una, albeit uv-specific and Hatch rather than setuptools. It figures out where to find files using uv's …

I haven't really thought about your point (2). Nor much about (3), but my assumption is that for testing you'd use …
I've compiled an example that works for my purposes that might help some folks looking for a monorepo setup using uv.
@JasperHG90 that link is 404 for me.
https://github.com/DavidVujic/python-polylith-example-uv is another example which I think supports this or similar use cases. @DavidVujic
Thanks for the mention! Yes, if I have understood the things talked about in this issue correctly, I think that Polylith in combination with …
I made https://github.com/JuanoD/uv-mono as an example repo. Feel free to correct me if something is wrong.
Sorry, I was ill these past days 🦠. It's fixed now! Thanks for the heads up.
Hey @JasperHG90 @JuanoD @gwdekker @carderne, I checked out your examples, which are extremely useful, as I have been trying to figure out how to set up my team's monorepo for data science and engineering workflows. One thing that I have been struggling with is how to still use …

Any ideas on how to handle such a situation, or if you have, how did you handle it?
@nickmuoh it depends on what you want the solution to look like. If you want to have a separate venv for this app, I am not sure how you would do that. If you are OK with having one global lock and venv and using `pandas==2.0.0` for development but not for deploying your app: in Polylith you have one pyproject file at the root level and a separate one for each project. So for the project you can add your lower-bound version of pandas, and you can still deploy your app while working on supporting pandas 2.
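A sketch of that split, with assumed version bounds, names, and paths (following the root-plus-per-project convention described above; the two tables below live in two different files):

```toml
# Root-level pyproject.toml: the shared dev environment tracks pandas 2.
[tool.uv]
dev-dependencies = ["pandas==2.0.0"]

# projects/my_app/pyproject.toml (separate file): the deployable app
# keeps the wider bound so it can ship before the pandas 2 work lands.
[project]
name = "my-app"
version = "0.1.0"
dependencies = ["pandas>=1.5,<2"]
```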
Not sure if this is related; if unrelated, I can open a new issue. I'd like to publish standalone packages from a monorepo that use an (unpublished) shared library. This comment in the hatch repo echoes a similar problem. I have the following package structure, where …

I would like to publish a standalone wheel for … Right now this doesn't work because the …

Couple of questions: …

I see that una attempts to support this, but I would be more confident in a solution built directly into uv.

**Proposed solution**

I feel like what I want is a:

```
uv build --package client --wheel --include-workspace-deps
```

Assuming:

```toml
dependencies = [
    "shared ~= 1.0",
]

[tool.uv.sources]
shared = { path = "../shared", editable = true }
# shared = { workspace = true } # for workspace
```
I'd replace …

This feels to me like a pretty unique situation you're asking for, though, one that is outside of any standard Python project workspace and is very specific to distributing the packages from uv into a wheel. You could probably already define a separate package + pyproject.toml that symlinks in the code from both packages, and just have uv build that when needed; see the sketch below. I'd suggest opening a separate ticket for your feature request, considering workspace support is already available and this ticket probably doesn't have a super clear scope anymore.
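A sketch of that symlink workaround, with all names assumed (an illustration of the suggestion, not code from the thread): a build-only directory whose pyproject.toml bundles both code trees, with the packages symlinked in (e.g. `ln -s ../client/src/client client`).

```toml
# dist-client/pyproject.toml -- hypothetical build-only package that
# folds the app and the shared library into a single wheel.
[project]
name = "client-standalone"
version = "1.0.0"
dependencies = []  # third-party deps of client + shared go here

[tool.hatch.build.targets.wheel]
packages = ["client", "shared"]  # the symlinked directories

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```

Building it with `uv build dist-client --wheel` would then produce one self-contained wheel.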
Hi all, I think that thanks to `--no-editable`, uv works very nicely with Dockerfiles / Containerfiles. Based on all responses here, I now use this Containerfile, which lives in a package and which I build from the workspace root: …

The build command is run from the workspace root: …

Of course, the line …

It looks to me like this correctly builds a .venv with only the necessary packages and with the workspace packages installed in the .venv. Can someone tell me if I missed something here?
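The Containerfile itself is not shown above, but based on the flags discussed in this thread, a two-stage build along these lines should behave as described (a sketch only: the `server` package name and paths are assumptions):

```dockerfile
FROM python:3.12-slim-bookworm AS builder
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
WORKDIR /app
# Third-party deps first, so this layer is cached until the lockfile changes.
COPY uv.lock pyproject.toml /app/
RUN uv sync --frozen --no-install-workspace --package=server
# Then the workspace sources; --no-editable bundles first-party packages
# into the venv instead of linking them back to the source tree.
COPY packages /app/packages
RUN uv sync --frozen --no-editable --package=server

FROM python:3.12-slim-bookworm AS runtime
COPY --from=builder /app/.venv /app/.venv
ENV PATH="/app/.venv/bin:$PATH"
```

Built from the workspace root with something like `docker build -f packages/server/Containerfile .`, so the whole workspace is in the build context.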
Exciting stuff, thank you!
I've put a decent amount of effort into trying to figure out a workable "monorepo" solution with pip-tools/Rye/etc., and now uv. What I mean by a monorepo: …
I'm packaging a few thoughts into this issue as I think they're all related, but happy to split things out if any portions of this are more likely to be worked on than others.
## Should uv support this?
I think yes. Pants/Bazel/etc. are a big step up in complexity and lose a lot of nice UX. uv is shaping up as the de facto Python tool, and I think this is a common pattern for medium-sized teams that are trying to move past multirepo but don't want more sophisticated tooling. If you (uv maintainers) are unconvinced (but convince-able), I'm happy to spend more time doing so!
## Issues
### 1. Multiple packages with a single lockfile
Unfortunately, uv v0.4.0 seems to be a step back for this. It's no longer possible to `uv sync` for the whole workspace (related: #6874), and the root project being "virtual" is not really supported. The docs make it clear that uv workspaces aren't (currently) meant for this, but I think that's a mistake. Having separate uv packages isn't a great solution, as you lose the global version locks (which make housekeeping 10x easier), you have multiple venvs, multiple pyright/pytest installs/configs, etc.

For clarity, I'm talking about the structure below. I think adding a `tool.uv.virtual: bool` flag (like Rye has) would be a great step. In that case the root is not a package and can't be built.
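The structure referenced is not shown above; based on the root pyproject.toml quoted earlier in the thread, it presumably looks something like this (a reconstruction with assumed names, not the original):

```
.
├── pyproject.toml    # root: virtual, never built, pins the whole workspace
├── uv.lock           # single lockfile shared by every member
└── packages/
    ├── mylib/
    │   └── pyproject.toml
    └── myserver/
        └── pyproject.toml
```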
### 2. Distributing in Dockerfiles etc.

This is, I think, orthogonal to the issue above (and much less important, as it's possible to work around it with plugins). Currently, there's no good way to get an efficient (cacheable) Docker build in a uv workspace. You'd like to do something like the Dockerfile below, but you can't (related: #6867).

If that gets resolved, there's another issue, but this is very likely to be outside the scope of uv. Just sharing it for context.
… `packages/` directory into every Dockerfile (regardless of what they actually need), forcing tons of unnecessary rebuilds.

My own solution has been to build wheels that include any dependencies, so you can just do this: …

Then in the Dockerfile: …
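The two elided snippets plausibly take this shape (a sketch under assumptions: `server` is a stand-in package name, and the wheel is self-contained because the Hatch plugin described below injects the workspace code):

```sh
# Build a single self-contained wheel for one app.
uv build --package server --wheel --out-dir dist/
```

```dockerfile
FROM python:3.12-slim-bookworm
COPY dist/server-*.whl /tmp/
# One wheel to install; no workspace sources need to enter the image.
RUN pip install /tmp/server-*.whl && rm /tmp/server-*.whl
```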
I've written a tiny Hatch plugin here that injects all the required workspace code into the wheel. This won't work for many use cases (local dev hot reload), but it is one way around the problem of COPYing the entire workspace into the Dockerfile. I don't think there's any solution that solves both together, and at least this way permits efficient Docker builds and simple Dockerfiles. (Note: since uv v0.4.0 the plugin seems to break uv's editable builds; I haven't yet looked into why.)