RFC: Leverage --incremental #1310
Hi, we are happy to get the community's help in improving ts-jest. Right now we lack the capacity to support all the needs of the community, so feel free to help us with your PRs; we really appreciate it.
I attempted to work on this, but currently ts-jest assumes that all compilation happens in memory and hands off caching to Jest. Leveraging `incremental` doesn't fit that model. I'll try to do an incremental-only Jest transformer first and see how it behaves. After that we may consider introducing the feature to ts-jest, but that will be a large undertaking.
Have you checked the ts-jest processing doc? Does it help bring up some ideas for your work?
I did, after you told me about this. Thank you for the info. It confirms the level of coupling. I did a PoC of a runner with support for `incremental`. The issue is that it's this very caching that makes us stop using ts-jest. The PoC is less smart than ts-jest; in the end we ended up with something that is just slightly slower than ts-jest but simpler to reason about and with no pressure on the RAM. The biggest challenge for the introduction of incremental in ts-jest is this coupling. Takeaway: I don't think I'll be working on that feature for now. Sorry.
Hmm, interesting explanation. When ts-jest's caching was written, there wasn't an incremental build mode at that point, so I can understand why the choice was to cache the compiled output in RAM.
Absolutely. It is however architecturally very different from ts-jest; I don't know how to transpose it to what ts-jest does.

Unrelated ramble about what I did: I've been having mixed success scaling the PoC. TypeScript can't emit a file together with its dependencies. It means that I need to either emit all files before starting to run the tests, or build the dependency graph myself and emit the files one by one on demand. I'd rather not do that. It's also more flaky than ts-jest because it analyses the dependencies statically; a dynamic `require` is invisible to it.

I also don't like the fact that I have to use a transformer. It means that the runtime first reads the TS file only to be told to read the JS file. I'd rather solve that at the resolver level, but I didn't manage to implement a custom resolver for the runtime.

Despite all that, and without any sort of caching, on a large codebase it's just a bit slower than ts-jest in terms of speed (whether it's a hot or cold run). However, in our case, because we compile first in our CI, we have a cold run with ts-jest but a hot run with the PoC.

I think there is tremendous opportunity lying here, but we'll have to wait a bit because, at the moment, Jest's and TypeScript's models seem to be fundamentally incompatible. The strategy of ts-jest, which is to transpile files on demand and aggressively cache everything, will stay the best one for almost all projects.
Is this what you used?
Agreed.
Almost. When you do that you lose the watcher, and snapshots are improperly located. Here I have a runner that starts a TypeScript program and emits files according to your `tsconfig` settings.
FYI, perhaps later we can learn from the Angular CLI regarding this topic. @yacinehmito I managed somehow to use `createIncrementalProgram` for ts-jest. To run the experiment code, check out my branch.
@ahnpnl your branch link seems to be dead. Could you also provide explanations if you restore it of how to test it on our own project? I'd be willing to try 👍 Thanks |
Hi @maxime1992, the changes are in master now; you can test against master. There is one configuration to use Program with incremental:
To use Program without incremental:
I would expect the performance of the incremental mode to be better.
Please provide some information: what is the current status of this feature?
I second that. I tried exactly these settings and didn't notice any difference. What are the prerequisites for this to work? Should my `tsconfig` be set up in a particular way?
It is still an experimental feature. Right now we only invoke the incremental program in a limited way. More help is welcome, because this is relatively new to implement and there are not many docs to follow.
From what I understand from @yacinehmito:
I don't know what else needs to be done.
Absolutely. Instead of using the caching capabilities offered by Jest, ts-jest could rely solely on the cache written by the TypeScript compiler. Also, Jest's architecture is not suited for this.
Yes, I think Jest's architecture isn't ready for custom caching capabilities (in this case we want to use the cache which is written by the ts compiler). At the moment, I think what we can do is separate these two caches: leave Jest doing what it is doing, and somehow make the internal ts compiler of ts-jest use its own cache.
I think this is a good direction to take, but I don't know how to make it work. Even if it did, I don't know how this would work out when tests run in parallel. Indeed:
This doesn't even touch on the problems on TypeScript's side: the compiler APIs are synchronous. All those problems made me conclude that efficient compilation with Jest is a pipe dream, at least until both of these hold true:
I think this is doable. This point is the starting point of creating the incremental program.
When tests run in parallel, transformation seems to happen in each Jest worker, which are different processes. This is why they don't share any global state.
There's actually a way to do it without this, but we'd need a custom resolver that captures the required path and attempts to emit as a consequence. However, I didn't manage to get it right. I made a PoC with a "best attempt" strategy (that doesn't work in all cases, but enough to test it) and performance was not great. There seems to be a significant overhead when calling `emit` on demand.
I have looked into ts-loader to replicate their resolvers. However, the ts-loader doc doesn't say anywhere how it works. Do you mind sharing your PoC code?
It's in a private repo owned by Spendesk. I'll see if I can get that out next week. |
Hi @yacinehmito, #1784 introduces the usage of a new TypeScript compiler API. I think with this API it should now be possible to achieve what we have discussed here. However, the implementation is probably limited:
Hi @ahnpnl. I just invited you to a private repository where I isolated the code of our custom Jest runner. It is not much, but perhaps it will help.
That's not the part I found difficult. It might be better than some handcrafted reimplementation of the resolution algorithm for forward compatibility and to sort out edge cases, but it was not a fundamental blocker.
This is exactly what I am trying to accomplish. Or rather, the transformers should not attempt compilation, as they are synchronous and parallel. The runner (which doesn't exist in the case of ts-jest but could be introduced), being asynchronous and unique, could compile. Or not; this is the bit I am struggling with. It's not hard to write a runner that compiles TypeScript code, but it's very hard to write one that works well with Jest's runner.

I tried to run both the Jest watcher and the TypeScript watcher. It "works", and by this I mean that it runs. The problem is that I am not aware of an API in TypeScript to know when it's done compiling, nor do I know one for Jest to know when it's OK to run the tests again. So what usually happens is that Jest tests the old version of the code. Sometimes.

Then, when trying to substitute one watcher for the other, I realised that it was a bit futile: what matters is knowing when something has been compiled, then propagating that to Jest. So essentially, the pipeline would have to look like this:
And even if by some miracle we managed to have this working, we would then end up with an orchestration problem: what happens if another file changes while the tests are running and this affects the outcome? We'd have to find a way to either cancel the tests or pause the emission of JS files. All that to say that I don't think integrating Jest and TypeScript at that level is a good approach. It's ideal, but not sensible if we look at the respective implementations. EDIT: Forgot to mention that I changed the approach for the PoC since my last comment. I went with something simpler, as you suggested.
That's what the PoC does right now. (The code is in the private repo I invited you to, and I might share it with the rest of the world once I get around to asking permission.)
In the TypeScript APIs, everything about the compiler is synchronous. There have been some discussions about making it async, but that won't happen in the near future. The pipeline at point 4 is not optimal because this is the task of the transformer. Since there is no real connection between runner and transformer, I understand it's hard to let the transformer know when the runner finishes compiling. The possible approach for now, IMO, is: Runner
Transformer
What do you think? Does this proposal miss anything?
@yacinehmito @ahnpnl |
I did not spend more time thinking about this issue, and I am not using Jest anymore, so I will not be able to help. Sorry. |
This is a feature request.
Issue
Our team is working on a large codebase and we are currently having issues with the time our CI takes. We are looking at all the ways to make the build faster.
One of the bottlenecks is the time it takes to run the tests. We investigated, and most of the time taken to run unit tests is spent not actually running the tests but compiling the TS files on the fly.
Expected behavior
The thing is that TS files have already been compiled as part of the build step. We would love for ts-jest to leverage this and rely on them.
One idea is to have ts-jest write the compiled files on disk and leverage the `incremental` option. As ts-jest attempts to recompile the code, tsc will see that most of the work was already done.

We understand this is a large undertaking and I am ready to help! From a quick glance I already identified that ts-jest performs an additional transformation to hoist `jest.mock()`, and that could be a challenge.

Alternatives
We found none of the speed-up alternatives to be satisfying. They are:
We would largely prefer to stick with ts-jest but performance improvements are really necessary for us. Again, I'm ready to work on it.
What do you think about this feature, and would you support a PR introducing it?