Clean previous build artifacts before rebuilding while watching #61
Conversation
Hey there @hamza1311, thanks for taking the time to open this PR. I've been thinking about this issue for a little while, and one thing I'm concerned about with the current implementation is that cleaning the directory before the build starts may cause intermittent issues where someone tries to refresh and ends up getting a 404 from the server.
As an alternative, I was thinking we could do this:
- introduce a new sub dir of `dist`, let's call it `.current` (which represents the artifacts being accumulated in the current build). `dist/.current` will be deleted before any new build is started.
- as the very final step of a successful build, we will delete the contents of the `dist` dir (except for the `.current` dir), and then immediately move all of the contents of `dist/.current` up into `dist`.
All in all, this algorithm should still be pretty simple, but will guard against intermittent availability issues, and will solve our main problem of vestigial artifact bloat.
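For concreteness, here is a minimal sketch of that final swap step. This is a rough illustration, not Trunk's actual code; the function name and error handling are assumptions.

```rust
use std::fs;
use std::path::Path;

// Sketch of the proposed swap: clear `dist` except for the `.current`
// staging dir, then promote the staged artifacts up into `dist`.
fn promote_current_build(dist: &Path) -> std::io::Result<()> {
    let staging = dist.join(".current");
    // Delete the old contents of `dist`, skipping the staging dir itself.
    for entry in fs::read_dir(dist)? {
        let path = entry?.path();
        if path == staging {
            continue;
        }
        if path.is_dir() {
            fs::remove_dir_all(&path)?;
        } else {
            fs::remove_file(&path)?;
        }
    }
    // Move the freshly built artifacts from `dist/.current` into `dist`.
    for entry in fs::read_dir(&staging)? {
        let path = entry?.path();
        if let Some(name) = path.file_name() {
            fs::rename(&path, dist.join(name))?;
        }
    }
    Ok(())
}
```

Because the old artifacts are only removed after a successful build, a refresh during compilation would still get the previous build instead of a 404.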
Thoughts?
@thedodd, I've been using this code for a couple of days now and have run into that issue. I think it's better to pause the serving than to do what you suggested. With your suggestion, like the current implementation, it isn't clear when refreshing whether a new build or the last one is being served (for reasons like a build failure, refreshing before the build finished, etc.). In case of a build failure, we could also serve a static file which displays the build failure, like how Flask does with its debugger in development mode, but obviously without the debugger part. Showing that would just be a matter of writing out that static file. What do you think?
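A rough sketch of that error-page idea, assuming the server keeps serving `dist` and we overwrite `index.html` on a failed build; the function name and markup here are purely illustrative:

```rust
use std::fs;
use std::path::Path;

// On a failed build, write a static page showing the compiler output so a
// browser refresh explains the failure instead of serving stale artifacts.
// Real code would also HTML-escape the compiler output before embedding it.
fn write_error_page(dist: &Path, cargo_output: &str) -> std::io::Result<()> {
    let html = format!(
        "<html><body><h1>Build failed</h1><pre>{}</pre></body></html>",
        cargo_output
    );
    fs::write(dist.join("index.html"), html)
}
```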
I don't want to take too much time to debate the issue, however I would just say that we are surfacing errors from cargo, and if a user is ever confused, all they have to do is look at their terminal output. If there are errors, then obviously it was an old artifact being served. If there are no errors, then it should only be the new artifacts being served (which should be our policy with Trunk). A user could get into the state I am describing by simply introducing a compilation error in their app while running the watcher. Using a staging dir like `.current` guards against that without requiring any changes to the server.
That entire issue could be solved (imo, in a better way) by just pausing the serving and/or (optionally) serving a static page displaying the error. What do you think about that?
@hamza1311 taking the path of pausing the server and/or serving an error page is tantamount to us choosing to voluntarily cause downtime & errors for a user's dev server when such is not needed. That should be enough to raise a red flag in the design choice. The alternative I am proposing requires no changes to the server or the proxy, no new custom error page logic, nothing like that. The proxy and the server stay online, users will not get any errors in their proxy connections or any other server request, and if they run into compilation errors, their app will still be online and running. Let's definitely prioritize a smooth development experience for Trunk users with as few errors as possible. If you are still up to hacking on this, that would be awesome! I'll copy these requirements over to #49 for reference.
- This should allow for a much easier process of adding new pipelines and assets to the mix. Functionality has been preserved. A few small bug fixes have been factored in.
- Add a Trunk, Yew, YBC (Bulma) example. This includes a minimal Trunk.toml.
- Add snapshot tests over the Yew example app.
- Added a pull request template, just to help folks remember to knock out a few of the small tasks associated with getting PRs to land.
- Broke apart the config.rs module into a directory of a few separate modules under src/config. The file was getting fairly large and had three distinct families of types.
- Simplified the handling of errors in main. This handles returning proper status codes and also prints out the chain of errors associated with the error being logged.
- The processing of Trunk.toml files has been updated to ensure that all paths declared in the file will be treated as relative to the config file itself (if the paths are indeed relative).
- Introduced CI caching.

closes #58
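The relative-path rule described above could look roughly like this; a hypothetical illustration under that assumption, not the actual src/config code:

```rust
use std::path::{Path, PathBuf};

// Resolve a path declared in Trunk.toml: absolute paths are kept as-is,
// relative paths are joined onto the directory containing the config file.
fn resolve_config_path(config_file: &Path, declared: &Path) -> PathBuf {
    if declared.is_absolute() {
        declared.to_path_buf()
    } else {
        config_file
            .parent()
            .unwrap_or_else(|| Path::new("."))
            .join(declared)
    }
}
```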
Update code to use prerelease (as it was previously). Also, updated a dependency to try to help mitigate issues with caching ... not sure why this is failing though. Testing.
Really shouldn't have to be doing this ... CI cache is worth it though. Update README a bit to reflect different installation options.
I'm going to be trying to hack this together.
I made the mistake of pushing my previous changes to master, which resulted in the new changes causing conflicts. I will recreate this PR with the new fixes. I apologize for any inconvenience that occurred because of my mistake.
Closes #49