
Clean previous build artifacts before rebuilding while watching #61

Closed
wants to merge 17 commits into from

Conversation

ranile
Contributor

@ranile ranile commented Sep 26, 2020

Closes #49

@ranile ranile changed the title Clean build artifacts before rebuilding while watching Clean previous build artifacts before rebuilding while watching Sep 26, 2020
Member

@thedodd thedodd left a comment


Hey there @hamza1311 thanks for taking the time to open this PR. I've been thinking about this issue for a little while, and one thing I'm concerned about with the current implementation is that the directory being cleaned before the build starts may cause intermittent issues where someone may try to refresh and end up getting a 404 from the server.

As an alternative, I was thinking we could do this:

  • introduce a new sub dir of dist, let's call it .current (which represents the artifacts being accumulated in the current build).
  • dist/.current will be deleted before any new build is started.
  • as the very final step of a successful build, we will delete the contents of the dist dir (except for the .current dir), and then immediately move all of the contents of dist/.current up into dist.

All in all, this algorithm should still be pretty simple, but will guard against intermittent availability issues, and will solve our main problem of vestigial artifact bloat.
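A minimal sketch of that final promotion step (hypothetical helper, assuming the build writes into `dist/.current`; not actual Trunk code):

```rust
use std::fs;
use std::path::Path;

/// Promote a finished build from `dist/.current` into `dist`, removing the old
/// artifacts only after the new build has fully succeeded.
fn promote_build(dist: &Path) -> std::io::Result<()> {
    let staging = dist.join(".current");

    // Delete everything in `dist` except the staging dir holding the new build.
    for entry in fs::read_dir(dist)? {
        let path = entry?.path();
        if path == staging {
            continue;
        }
        if path.is_dir() {
            fs::remove_dir_all(&path)?;
        } else {
            fs::remove_file(&path)?;
        }
    }

    // Immediately move the freshly built artifacts up into `dist`.
    for entry in fs::read_dir(&staging)? {
        let path = entry?.path();
        let target = dist.join(path.file_name().unwrap());
        fs::rename(&path, &target)?;
    }
    Ok(())
}
```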

Thoughts?

@ranile
Contributor Author

ranile commented Sep 29, 2020

@thedodd, I've been using this code for a couple of days now and have run into that issue. I think it's better to pause serving than to do what you suggested, similar to how webpack behaves (at least when invoked via ng serve).

With your suggestion, like the current implementation, it isn't clear when refreshing whether the new build or the last one is being served (due to a build failure, refreshing before the build finished, etc.). In the case of a build failure, we could also serve a static page that displays the failure, similar to how Flask's debugger works in development mode, but obviously without the debugger part. Showing it would just be a matter of writing an index.html to the build directory, since it will be wiped on the next build.
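To make that concrete, a tiny sketch of what serving such a page could amount to (hypothetical function, not from this PR): write the error into `index.html` in the build directory and let the next build wipe it.

```rust
use std::fs;
use std::path::Path;

/// Hypothetical sketch: drop a simple error page into the dist dir after a failed
/// build; it gets overwritten or wiped by the next successful build.
fn write_error_page(dist: &Path, error: &str) -> std::io::Result<()> {
    let html = format!(
        "<!DOCTYPE html><html><body><h1>Build failed</h1><pre>{}</pre></body></html>",
        error
    );
    fs::write(dist.join("index.html"), html)
}
```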

What do you think?

@thedodd
Member

thedodd commented Sep 29, 2020

With your suggestion, like the current implementation, it isn't clear when refreshing whether the new build or the last one is being served (due to a build failure, refreshing before the build finished, etc.).

I don't want to take too much time to debate the issue; however, I would just say that we are surfacing errors from cargo, and if a user is ever confused, all they have to do is look at their terminal output. If there are errors, then obviously an old artifact is being served. If there are no errors, then it should only be the new artifacts being served (which should be our policy with Trunk).


A user could get into the state I am describing by simply introducing a compilation error in their app while running trunk serve, and with the PR as it is now, their artifacts would be gone. So if they needed to refresh, or navigate to another location in the UI which needs to fetch an asset, those assets would be gone. I don't personally like that side-effect.

Using a dist/.current directory as a stage for build artifacts satisfies all of the requirements for the associated discussion in #49, and also avoids these subtle edge cases.

@ranile
Contributor Author

ranile commented Sep 30, 2020

That entire issue could be solved (imo, in a better way) by just pausing the serving and/or (optionally) serving a static page displaying the error. What do you think about that?

@thedodd
Member

thedodd commented Oct 2, 2020

@hamza1311 taking the path of pausing the server and/or serving an error page is tantamount to us choosing to voluntarily cause downtime & errors for a user's dev server when it isn't needed. That should be enough to raise a red flag in the design choice.

The alternative I am proposing does not require any changes to the server or the proxy, no new custom error page logic, nothing like that. The proxy and the server stay online, users will not get errors on their proxy connections or any other server requests, and if they run into compilation errors, their app will still be online and running. Let's definitely prioritize a smooth development experience for Trunk users with as few errors as possible.

If you are still up to hacking on this, that would be awesome! I'll copy these requirements over to #49 for reference.

siku2 and others added 14 commits October 4, 2020 14:24
This fixes invalid path delimiters for asset files on Windows platforms by splitting the link on forward slashes and then repeatedly joining the path components. This causes the join method to use the native path delimiter instead of a hardcoded one.
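A rough illustration of that approach (hypothetical helper name, not the exact code from the commit):

```rust
use std::path::{Path, PathBuf};

// Split a URL-style link on '/' and rebuild it with `PathBuf::push`, so the
// platform's native separator is used (backslashes on Windows).
fn link_to_native_path(base: &Path, link: &str) -> PathBuf {
    let mut path = base.to_path_buf();
    for component in link.split('/').filter(|c| !c.is_empty()) {
        path.push(component);
    }
    path
}
```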
This should allow for a much easier process of adding new pipelines and
assets to the mix. Functionality has been preserved. A few small bug
fixes have been factored in.

Add a Trunk, Yew, YBC (Bulma) example. This includes a minimal
Trunk.toml.

Add snapshot tests over the yew example app.

Added a pull request template, just to help folks remember to knock out
a few of the small tasks associated with getting PRs to land.

Broke apart the config.rs module into a directory of a few separate
modules under src/config. The file was getting fairly large and had
three distinct families of types.

Simplified the handling of errors in main. This handles returning proper
status codes and also prints out the chain of errors associated with the
error being logged.

The processing of Trunk.toml files has been updated to ensure that all
paths declared in the file will be treated as relative to the config
file itself (if the paths are indeed relative).
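A hedged sketch of what that resolution could look like (hypothetical helper, not necessarily the code in the commit):

```rust
use std::path::{Path, PathBuf};

// Treat a path declared in Trunk.toml as relative to the config file's parent
// directory; absolute paths are left untouched.
fn resolve_config_path(config_file: &Path, declared: &Path) -> PathBuf {
    if declared.is_absolute() {
        declared.to_path_buf()
    } else {
        config_file
            .parent()
            .unwrap_or_else(|| Path::new("."))
            .join(declared)
    }
}
```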

Introduced CI caching.

closes #58
Update code to use prerelease (as it was previously).

Also, updated a dependency to try to help mitigate issues with caching
... not sure why this is failing though. Testing.
Really shouldn't have to be doing this ... CI cache is worth it though.

Update README a bit to reflect different installation options.
@ranile ranile marked this pull request as draft October 4, 2020 09:26
@ranile
Contributor Author

ranile commented Oct 4, 2020

I'm going to be trying to hack this together.

@ranile ranile marked this pull request as ready for review October 4, 2020 11:59
@ranile
Contributor Author

ranile commented Oct 4, 2020

I made the mistake of pushing my previous changes to master, which resulted in the new changes causing conflicts. I will recreate this PR with the new fixes.

I apologize for any inconvenience caused by my mistake.
