Improving failures at `ResolutionTooDeep` to include more context #11480
Comments
Thanks for the detailed report. Unfortunately, some sets of requirements trigger exponential behaviour, and while we might be able to alter the heuristics we use to "fix" those cases, we'll inevitably break a different case in the process.

There has been a lot of work done by volunteers (either pip maintainers or pip users) to examine possible improvements - examples are #10884, sarugaku/resolvelib#64, #10479, #10201. I suggest you read these for background. A search on the issue tracker for "is:issue resolver performance" should find plenty more. In particular, #10479 (comment) might be relevant here.

Basically, though, the answer is that it's not even theoretically possible to avoid all pathological behaviour. We've done a lot of work algorithmically, and we have implemented heuristics to address the common cases of slowdown. It's entirely possible that there are still improvements to be made, but we have to be extremely careful that we don't simply adjust the trade-offs in a way that hurts other users when we change things.

If you're interested, we'd welcome suggestions on changes that you think might help. Or if you find a specific feature of your example that might generalise, that would also be useful (as would suggesting approaches to address that feature). In the meantime, however, the following approaches have been useful for others trying to work around long resolve times:
|
@pypa/pip-committers I'm surprised we don't have a "type: performance" label. Also, there's nothing for "known limitation" - I was tempted to mark this as "not a bug" but that seems a bit strong. I've added those two labels for this issue (and other resolver performance issues) - if anyone would prefer to handle this another way, please say so. |
The bug part is that it takes too long, without control. So the bug is the missing control on the user's side. If we could just limit how much backtracking is done, we could then use this in our automation, limit the impact of the time taken, and fail faster. And yes, I understand the problem space, and that backtracking is a "feature". It is fine when you have a small dependency tree. We have a big tree, and just want it to fail faster. |
Please read some of the other issues that I referenced. This was discussed extensively some time ago. Or if you want to run pip with a maximum duration, there's the Unix `timeout` command. |
Hi, I think the only thing needed would be the ability to control that `try_to_avoid_resolution_too_deep` value. If that value could be controlled, we could handle the different cases we are seeing. In some cases, we do not want any backtracking; in others, 200 steps would make sense; and only in the most complex case would we even try with the full 2000000, just because that can mean 4+ hours of resolution time. The backtracking problem is really hard, and there are problems that I don't think it can (or even should) solve. So just having the ability to control how much backtracking is done would solve this issue. |
Please read the other issues. This has been extensively discussed, and the issue is that there's no way to meaningfully know what a "reasonable" value is without trying it. By the time you know you need to set the value smaller, it's too late - you've already taken 4 hours. And setting the value lower will never get a better result; it'll simply fail sooner (which is of no help if you've already determined that it's going to fail). In particular, the value has no useful link to how long pip is going to take to fail. If you're convinced that setting this value is useful, I suggest you write a wrapper script that monkeypatches pip to let you do so, and try that out. If you find good evidence that setting the value is of benefit, feel free to present an argument based on actual real-world use cases. Also note that if you want to set a time limit on how long pip will run when trying to resolve an install, you're better off using something like the Unix `timeout` command. |
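For illustration, a minimal wrapper along those lines (assuming GNU coreutils' `timeout` is available; the 30-minute budget is an arbitrary example) caps pip's wall-clock time without any changes to pip itself:

```sh
# Kill pip if resolution has not finished within 30 minutes
# (timeout exits with status 124 when the limit is hit).
timeout 30m pip install -r requirements.txt
```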
That's literally the documentation that https://pip.pypa.io/warnings/backtracking points to -- the link printed in the error messages. This value is set so high, to ensure that people don't hit it in anything other than the most pathological cases. |
If you'd like for this value to be set to something lower, that's... a tractable request -- I'm not opposed to the idea that we should reduce that number. :) |
To be clear, nor am I. I am uncomfortable about making it user-settable, though, as we'll just end up with people wanting advice on what to set it to. And if we do reduce the number, I'd like to see some evidence that any proposed new value is (in some sense) better than the current value. |
As well as changing arbitrary variables like `try_to_avoid_resolution_too_deep`, there may be room for algorithmic improvements. As an aside, I spent a few hours today reproducing the test case and looking at whether it could be improved with backjumping. A year ago I made a hacky branch to test backjumping (which I talked a bit about here). Whether backjumping would have ultimately helped the OP in general I am not sure; it certainly fails quicker on these specific requirements because it discards non-viable paths instead of failing after trying very old versions. That said, I think I will have time to work on open source again later this year, and I will take another crack at seeing if I can definitively improve pip's backtracking with backjumping or similar; I will use this as one of the test cases. |
I think there's a related issue here as well. Understanding the cause of a pathological backtracking issue is a hard problem, and it's entirely possible that even with more information in the logs, many users simply won't have the knowledge or understanding to diagnose the issue and pinpoint the cause of the problem. So while there's unlikely to be any downside to improving the instrumentation of the resolution process, I fear that it won't actually improve things in a practical sense for affected end users. |
While I agree with your general sentiment, I would argue that if pip gave enough information that some users could solve their issue for some cases of pathological backtracking, that would be a big win over the current situation. |
I wonder if it'd make sense to offer optional telemetry only to people who set a custom resolution round limit. This may help collect real-world numbers while minimising the creepy aspect (it's clearly opt-in, and you are always free to just use the default). |
We are seeing a related issue - in some environments we do not want the backtracking behaviour at all, and being able to set `try_to_avoid_resolution_too_deep=1` for those test jobs with an explicit environment variable or pip runtime option would be essential to avoid pip (incorrectly) selecting an old and completely obsolete version of a dependency and running tests against that (which will be useless anyway), instead of failing with a clear and explicit error message like "hey, your requirements cannot be satisfied because that one thing you depend on needs a newer Python version". A user-configurable setting is important because pip is used in very different environments - in some, the goal is to get a working environment at whatever cost; in others, quickly failing to set up the environment (and producing an error message that explains why!) is actually more important. And only the user/admin/dev setting that environment up knows what they expect/want from pip. |
Do you mean actual telemetry here (as in, pip uploads usage data to a central server somewhere)? If so, then I didn't think that was a route we wanted to take. If we do want to consider adding telemetry to pip, then I think there are a lot of places I'd rather see us have usage reporting than the resolver. For example, logging how many people only install from wheels, how many people use extra indexes, how many people install to locations other than virtual environments, ... If you just mean "opt-in uploading of data", then why not just add a …
What exactly do you mean by this? You only ever want to consider the latest version on PyPI? That's easy using a simple filtering proxy for PyPI. But I suspect it would very often just fail. Or you can fully pin requirements, which avoids all backtracking. Or something else? Most approaches I can think of would fail on pretty much anything that had any form of dependency overlap.

If you can describe the algorithm you'd expect pip to use when given a set of requirements and constraint files, which determines what to install without using backtracking at all, then we can consider that as an option. But remember that the reason we removed the old resolver (which worked without backtracking) was because it gave incorrect results - any proposed approach must give correct results or fail.

Having said all of this, if you really want to try experimenting with a lower round limit, here's a script that monkeypatches pip to do so:

```python
import pip._internal.resolution.resolvelib.resolver

# RLResolver is the vendored resolvelib Resolver, aliased in pip's resolver module.
orig_resolve = pip._internal.resolution.resolvelib.resolver.RLResolver.resolve

def hacked_resolve(self, requirements, max_rounds):
    print("In hacked resolve!!!")
    # Ignore the caller's max_rounds and force a single resolution round.
    return orig_resolve(self, requirements, 1)

pip._internal.resolution.resolvelib.resolver.RLResolver.resolve = hacked_resolve

# Copied from pip/__main__.py
import sys, os, warnings

if sys.path[0] in ("", os.getcwd()):
    sys.path.pop(0)

warnings.filterwarnings(
    "ignore", category=DeprecationWarning, module=".*packaging\\.version"
)

from pip._internal.cli.main import main as _main

sys.exit(_main())
```

I just tried it, and it fails almost immediately with `ResolutionTooDeep`.
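For completeness: assuming the script above is saved as `hacked_pip.py` (an illustrative name), it is invoked exactly like pip itself, since it reuses pip's own entry point:

```sh
# Any normal pip command line works; the resolver is now capped at one round.
python hacked_pip.py install -r requirements.txt
```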
|
We have an internal package repository, and during testing of secondary packages it is pretty natural to have them depend on some of our primary internal packages during their verification jobs. Say we have a package "basepackage" that has had a couple hundred releases over the years, and we have an "otherpackage" that is also being developed here. "otherpackage" just depends on "basepackage" without pinning (as it is generally expected to work on all versions). Recently we hit a bug where pip for some reason decided that it did not like to install the latest version of our "basepackage", so each time the verification run started, pip would spend lots of time doing a couple hundred requests to our internal package repository until it backtracked to a nearly 3-year-old version of our "basepackage" that it liked, and installed that as a dependency. Right after that it failed, because the other dependencies of "otherpackage" were pinned to versions that are more recent than the pinned versions of that ancient "basepackage" version. So we get a failure with a pretty pointless error message about a version conflict between an ancient and a current version of some deeper dependency.

Instead we would like to know why pip was not installing the latest version of the requested package, as we would have expected it to. And at that point we would also like that job to fail, because there is no point in testing against an old base version. You will just be seeing already-fixed bugs and be really confused. |
So set your repository index (or layer a proxy in front of it) to only publish the latest versions. You don't need (or want) to stop pip backtracking for this use case, as far as I can see. |
@aigarius Can you confirm that you are using the latest version of pip?
That would mean that you can't even have dependencies -- a round is the resolver evaluating a single candidate (or set of them). You won't be able to install more packages than set in that variable. I'm personally fine if we implemented a `--no-backtracking` style option.
Agreed, that seems like a relatively easy option to describe/understand. I have my reservations about whether in practice, it would solve the issues people are having, but that's more a question for the people likely to use such an option. Would any of the people in this thread hitting problems be interested in creating a PR for this? |
Dropping a zero brings us down from 4 hours to ~30 minutes of trying to resolve (assuming similar per-round average timing). I can see the argument that it is a more reasonable amount of time for failure in automation. :) |
`--no-backtracking-in-resolve` would likely solve our issues (or more specifically, expose where we have wrong assumptions in our processes/code). We are seeing the unexpected behavior when using pip 22.2. Not publishing old versions is sadly not a solution for us, as we also need to support re-execution of tests with old releases of our final software/hardware package, which would then require the (pinned) versions of our tools that were released/versioned together with that package at the time. Today I also observed pip happily downgrading an installed package to resolve a dependency conflict; being able to disable that with a dedicated flag would likely also be a good idea. |
Personally, I do not see how opening that hard-coded value up to be configurable would cause problems or a need to add guidance; fine-tuning and tweaking the hard-coded default value of `try_to_avoid_resolution_too_deep` is a separate matter. Using time alone to determine whether resolution worked or not is not a viable option, because we are running these things on all sorts of machines, cross-platform, in VMs and containers, so relying on time is just asking for trouble; this is why a limit on the backtracking amount is a much better solution.

Having the ability to set the `try_to_avoid_resolution_too_deep` value would help here. We, for example, know that one of our locked-down / frozen dependency sets consists of 170 sub-dependencies, so we could easily set the amount of allowed backtracking to be some percentage of the value we know. This would enable us to get some minor resolution fixes, but also avoid the 4-hour worst-case scenarios. This has huge effects in CI/CD and in end-user cases where the user is definitely NOT the one who knows what to do. What problems would come from reading the value for `try_to_avoid_resolution_too_deep` from an environment variable? |
If the backtracking takes longer than a few minutes, typically something is wrong. I'd consider it a good idea to cap resolution time/depth by default, so people can opt in to longer runs when the edge cases hit. For most usages I believe resolution time is less than a minute, and people with tricky sets should be guided into giving pip hints instead of hoping for the best. |
@vjmp I am not part of pip, but I would say that if you are distributing your installation via conda dependency files, it would be significantly easier to keep everything inside conda. Either fix the rpaframework feedstock on conda-forge, or create your own feedstock and publish it to your own conda channel; then there is no dependency on pip at all. This is a double win because there is no guarantee that conda and pip will install mutually compatible packages. However, if you are only using conda to bootstrap a Python environment, I would suggest you also distribute a constraints file to your customers that represents a pinned application install (if you need an example, look at Apache Airflow: they install a full environment using every possible requirement, then freeze the environment and use that as their constraints file). Then tell your customers to use conda to bootstrap the Python environment, activate the environment, and pip install the Python requirements with the constraints file. With a well-specified constraints file, pip will not engage in any backtracking. |
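A minimal sketch of that workflow (file names are illustrative; `pip freeze` and the `-c`/`--constraint` option are standard pip features):

```sh
# Resolve once (e.g. in CI) and freeze the complete environment.
pip install -r requirements.txt
pip freeze > constraints.txt

# Later installs pin every transitive dependency via the constraints file,
# leaving pip no version choices to backtrack over.
pip install -r requirements.txt -c constraints.txt
```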
No, that is not the "official pypa/pip recommendation", and frankly you're coming across a bit passive aggressive here. I was responding to @kariharju, and in particular to their "what is the blocker / problem" question. I never once suggested that the solution to your problem was to fork pip and distribute that fork. In my very first response here, I offered some suggestions on how you can address this issue for your system, without any changes to pip. In contrast, reducing the number of resolution rounds wouldn't fix anything; it would just make pip fail faster, with less useful diagnostic output. There have been positive comments from both @pradyunsg and myself on this issue, just not for the simplistic "let people tweak the value" idea. But someone still needs to take one of the options we've said is potentially acceptable, create a PR for it, and handle any follow-up discussions and/or concerns. Which will involve reviewing the issues I linked, and explaining how the proposed solution relates to them. You should not assume that one of the pip maintainers will simply implement this for you - we're all volunteers and we have many other commitments, so waiting for one of us to have enough spare time to pick this up (as well as having the interest to work on this rather than another issue) is likely to be a rather long wait. I'm going to avoid commenting further here. I think I've said all I can usefully add, and the responses are getting too heated, so I think it's worth everyone cooling off a bit. |
@pfmoore I'm sorry if I sound passive aggressive; English is not my native language, I'm a Finn. It was very unclear to me what "your copy of pip" means, because we are not looking for "works on my machine" solutions, and I took it as a "fork pip" Open-Source-style recommendation. Thank you for clearing that up and stating that it was not the intention. I also took the statement "Nobody willing to do the work to get consensus on a solution and then implement it." literally, meaning that there will be no solution for this from the pypa/pip side, and that we are on our own. @pfmoore Thank you and others for your time and patience. And my deepest apologies if I have offended you or others. |
@notatallshaw It is the other way round: our customers write RPA robots, and provide whatever mix of dependencies to our orchestration to organize an environment where those robots can be run. We provide a "templated" start for customers to build on, but they decide what versions they want to use. And this is all working fine in "happy path" cases. The example (repo) at the top of this issue is actually "a robot", and there are examples given of how to see success and failures. The problem is that when there is a "not so happy path" and pip goes into backtracking mode, 100+ dependencies cause environment resolution to take a long time to fail. And "time" here is a "relative concept", since machine and network performance changes from customer to customer and from location to location. That is why fixed wall-clock timeouts are not good solutions, and people are bad at understanding what time means for a particular computer's performance. Some backtracking might be OK, but if it were configurable (as a number of cycles), then it would be consistent independent of machine or network speeds (when the set of dependencies remains the same). Worst of all would be the case where, on a fast machine with fast internet, the resolver finishes successfully, but on a slower machine it would "time out", which makes things non-repeatable, non-consistent, and non-debuggable. That would make for a "bad customer experience". Yes, we recommend customers use conda-forge dependencies where available, but getting all RPA framework dependencies into conda-forge is quite "Mission Impossible". And yes, I know, there is never a right solution, only solutions that are selected and implemented. |
No problem. I probably over-reacted. Apart from anything else, it didn't occur to me that you might not be a native speaker - that's entirely on me and I apologise.
Unfortunately, that probably is accurate. The pip maintainers team consists of maybe 4 or 5 people, all of whom are unpaid volunteers. And most of our time is spent doing support, project management, and trying to pay down technical debt 🙁 So realistically, a lot of the work on implementing new features falls on community members who have an interest in, and/or need, those features. And when I said "nobody is willing", I specifically meant that no such community member has yet stepped up to work on this issue (and as you can see from the links I quoted, it's far from being a new problem). There's no easy solution here, and believe me, I'm probably as frustrated as you are with the current situation (there are a bunch of features I'd like to work on, but I simply don't have the time). I hope this makes the situation clearer, though.
|
I'm writing this on a car ride to the airport, so apologies for the blunt responses and terseness. No one is asking anyone to fork pip. Paul was referring to his script from #11480 (comment), which I pointed out as having significant caveats in #11480 (comment). There are at least three comments with concrete suggestions that don't involve exposing a confusing option to users that's difficult to reason about:
Anyone is welcome to drive one of these forward. They're all reasonable improvements that multiple maintainers have spoken in favour of.
Not right now; this is a gap (that's why we've got a known limitation label on this). I'd like us to push this discussion in a direction where we actually make improvements, instead of talking past each other. So, here's a concrete and somewhat aggressive position that I'm gonna take: controlling resolver "rounds" is not something we're going to expose to users. If there's going to be any more discussion about exposing that to end users, I'm going to close this issue out and open separate issues for the specific suggestions I've linked to above. |
If I was in your situation I would still try providing a constraints file to your customers to say "this is what we tested our application against" and "when you are doing pip install please use these constraints", much in the same way Apache Airflow recommends: https://airflow.apache.org/docs/apache-airflow/stable/installation/installing-from-pypi.html From my point of view this provides a few benefits:
I may be over simplifying your problem, but I've been very happy using well specified constraints files as a tool to make multiple environments with slightly different requirements work well with each other. |
Yeah, apologies on my behalf as well; Friday evening is not the best time to comment on these kinds of things ✋ Against this backdrop, @pradyunsg's three-option scoping makes a lot more sense, and I think the third option, getting predictable outputs that applications can parse from pip, would probably solve the situation and enable us to determine when enough backtracking has happened. I'll open up our case a bit more to answer @notatallshaw's question: the problem is that we need to be able to build that environment somewhere, and also provide tooling for the devs / IT people who are managing the dependencies. A CI could be burning a lot of time and energy running backtracking without any human interaction, so predictable behavior of the base tools is everything to us. I'll check whether we could pitch in and try to help out with the notifications. |
Very much this. As the one who designed and implemented the way we use the constraints mechanism in Airflow, your description is spot-on. Also @kariharju - what you seem to need is just to build a reference Docker image automatically in your CI. Some parts of this have been discussed in #11527. We are doing very much this in Apache Airflow - we have a CI that makes sure to build the right Docker container with all the tools and requirements needed to run any tests (and a separate one for production), we publish it in the GitHub Registry, and we have built tooling around it so that developers can run any tests and users can use the images (and Dockerfiles to build their own customized images) in their production. And constraints are a crucial part of our pipelines to build the images. You can also see more about the solution we have in my talk https://www.youtube.com/watch?v=_SjMdQLP30s&t=2549s See https://github.com/apache/airflow/blob/main/CI.rst for CI details and https://airflow.apache.org/docs/docker-stack/build.html for the production image. |
Fully off-topic this one: |
We do have Docker for the (pinned) dependencies of our internal projects, but for the actual internal projects we are now at the point where we distribute them as tarballs and then install those tarballs into the Docker images with a fake PyPI index URL set, to prevent pip from even trying to install anything other than what is given to it. Oh, and `--no-build-isolation` too. If there is a dependency conflict, it needs to fail and get developers to fix the dependencies, rather than downloading semi-random versions of packages from the network to try to resolve them. The last issue we hit was really fun:
There is a space for "try very hard to resolve all issues and get to something that works" and there is a space for "if the resolution is not trivial, then something has gone very, very wrong and any further guesses will only make it harder to debug". And I do not envy pip developers trying to thread that needle. :D |
This kind of setup is not unfamiliar to me, and in my experience you have to make a hard choice about which tool creates your Python package environment, and what you're willing to support from your customers. You can use conda to bootstrap Python and its binary dependencies, but IMO you need to decide whether you are using conda or pip (or poetry, etc.) to specify your Python packages. Otherwise you will forever run into dependency problems like this between yourself and your customers. If you are using pip, then a method available to help this situation is constraints. If you are using other tools, they offer their own methods. If you are mixing tools, then it's up to you to figure out how they behave together, and probably to develop your own specific processes to reduce collisions and other problems (I have had to do that several times, and that's what led me to this conclusion). But this is just from my experience; maybe someone else has a better solution. |
Agree. I find constraints work really, really well and are super powerful the way they are implemented. None of the other package managers have anything equivalent (I was even one of the few people who tried to convince Poetry to implement them, python-poetry/poetry#3225, but there seems to be no real interest), which led us to officially discourage people from using Poetry for Airflow and to reject their dependency issues, redirecting them to "the only way you can install Airflow and be sure it works is pip + constraints". The only drawback is that you somehow have to manage and publish the constraints at a public URL, with the version of your software as part of the URL (this is another great feature of constraints: you can use an http:// URL for them). |
BTW @pfmoore @pradyunsg -> I thought about maybe making a proposal to the PyPA to host constraints on PyPI as optional metadata (linked to a package but updateable, unlike the package). That would be a way I could see this become a "standard". For example, constraints could be applied automatically if they are present and you request it. WDYT? Would that fly? (Of course this is just a very, very, very rough idea; it would require a LOT more discussion and consensus. I just wanted to hear your comments on whether it's one of: nonsense, maybe, plausible, great idea to explore ... :)
This issue is effectively resolved on Pip main (28239f9), likely thanks to sarugaku/resolvelib#111 and sarugaku/resolvelib#113. The issue is still reproducible on Python 3.9 / Pip 23.0.1 with the following command (it takes a very long time with no apparent resolution):

```sh
pip download -d downloads -r https://raw.githubusercontent.com/vjmp/pipbacktracking/trunk/requirements_fail.txt
```

However, on Pip main (28239f9), the following output is quickly given:
I suggest this issue be closed, and if there is some different set of requirements that causes long backtracking, a new issue should be opened (I would actually be surprised if one exists, given how powerful both these improvements are). |
Actually, I want to bring down the number for max-rounds, since we’re going to have a more efficient resolve and failing quicker is a good idea IMO. |
+1 to looking into how low we can get the max round number to. |
Here's some numbers for us to consider based on back-of-the-napkin math...
I think there's a significant experience difference in 100_000 vs 2_000_000. The former is clearly better, but it won't be able to handle really complex graphs. OTOH, we're gonna have fewer rounds thanks to backjumping. I'm comfortable with the middle ground, but I'm happy to be aggressive and go down to the lower number. FWIW, this issue is about improving the messaging so that it's more actionable rather than a blunt "IDK what to do" message. That's a change that'll need to happen on both resolvelib's end and our end IMO -- and backjumping makes it harder to hit this case; it doesn't eliminate it. |
When I wrote this, all known instances of `ResolutionTooDeep` had been resolved. I have been working on a new PR #12459 (I describe my reasoning here: #12318). It has successfully resolved every new `ResolutionTooDeep` example I have tested it against. |
Those are excellent improvements, but I think they're logically separate from what this issue is about. At this point, the thing that this issue is tracking IMO is improving the information presented to the user when `ResolutionTooDeep` is raised, to help them act on it. Currently, we're throwing away all the state and metadata available to the resolver without presenting any information to the user. It would be useful to present such information to the user and enable them to do something to trim the large space that the dependency resolution has tried to explore. |
Yeah |
I agree that would be nice if it were useful; I'm just skeptical about how likely it is to be useful. At the point `ResolutionTooDeep` is raised, the resolver has accumulated an enormous amount of state, and IMO I think it would be more useful for pip to surface information while it resolves rather than only at the end. All that said, I would very enthusiastically help test any PR that someone wants to submit that provides a useful message out of `ResolutionTooDeep`. |
Description
Example of pip backtracking failure
Repo for this example
... can be found here: https://github.com/vjmp/pipbacktracking
and issues can be found here: https://github.com/pypa/pip/issues
Failing case description
Installing an (internally conflicting) "requirements.txt" which has lots of
transitive dependencies, with a simple command like

pip install -r requirements.txt

with "very simple"-looking content of "requirements.txt" like:

This will take a long time to fail, like 4+ hours.
And note that this specific example applies only on Linux environments.
But I think the problem is general: "old, previously working" requirement sets
can get "rotten" over time, as the dependency "future" takes a "wrong" turn. This
is because the resolver works from latest to oldest, and even a few new versions of
some required dependencies can derail the resolver into backtracking "mode".
Context of our problem space.
Here are some things that make this a problem for Robocorp customers:
- "failing fast" costs less in monetary terms than "fail 4+ hours later on total environment build failure"
- rcc, which is used here to also make this failure a repeatable process
- processes should be repeatable and reliable and not break, even if time passes
Problem with backtracking
It is very slow to fail.
Currently the happy path works (fast enough), but if you derail the resolver onto an unbeaten
path, then resolution takes a long time, because in the pip source
https://github.com/pypa/pip/blob/main/src/pip/_internal/resolution/resolvelib/resolver.py#L91
there is a magical internal variable

try_to_avoid_resolution_too_deep = 2000000

which causes a very long search until it fails.
Brute force search of a possibly huge search space.
When a package like rpaframework below has something around 100 dependencies
in its dependency tree, even happy path resolution takes 100+ rounds of pip
dependency resolution to find a solution. When backtracking, (just one) processor
becomes 100% busy with backtracking work.
In automation, there is no "human" to press "Control-C".
Messages like ... and ... are nice ways for pip to inform the user that it is taking longer than usual, but
in our customers' automation cases, there is nobody who could see those, or
press that "Ctrl + C".
This could be improved if there were an environment variable like
MAX_PIP_RESOLUTION_ROUNDS instead of the hard-coded 2000000 internal limit.
Also, adding this as an environment variable (instead of a command-line option) gives
better backwards compatibility, since an "extra" environment variable does not
break commands for old pip versions, but an unknown CLI option will.
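As a rough sketch of the proposal (note that MAX_PIP_RESOLUTION_ROUNDS is the name suggested here, not something pip actually reads; the fallback mirrors pip's current hard-coded default):

```python
import os

# Hypothetical helper: take the resolver round limit from the proposed
# environment variable, falling back to pip's current hard-coded value.
def max_resolution_rounds(default: int = 2_000_000) -> int:
    raw = os.environ.get("MAX_PIP_RESOLUTION_ROUNDS")
    if raw is None:
        return default
    try:
        return max(1, int(raw))
    except ValueError:
        # Ignore malformed values rather than breaking existing workflows.
        return default
```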
Basic setup
What is needed:
- rcc (with it, there is no need to manually install the following two things ...)
- Python and pip (no specific versions are needed for this feature; this current example uses pip v22.2.2)
Example code
You need rcc to run these examples, or do the manual environment setup if you will.
You can download rcc binaries from https://downloads.robocorp.com/rcc/releases/index.html
or, if you want more information, see https://github.com/robocorp/rcc
Success case (just for reference)
To run the success case as a normal user sees it, use this:
And to see debugging output, use this:
Actual failure case (point of this demo)
To run the failing case as a normal user sees it, use this ... and have patience to wait:
And to see debugging output, use this ... and have patience to wait:
Expected behavior
Faster (and configurable) failure of pip install on a complex/big dependency tree.
pip version
22.2.2
Python version
3.9.13
OS
Linux
How to Reproduce
/path/to/rcc run --task fail
Note: there is no need to install specific Python or pip versions if you use these instructions.
Output
... and the output will continue for the next 4+ hours (of course depending on your network speed, machine performance, etc.)