The Case for Spark #2
Replies: 3 comments 21 replies
-
I think this is amazing and will elevate the usage of PHP a ton!
-
I’ve created a Spark demo of an inline table editing app. It was a fun process and took me around half an hour to build. It uses signals, nicely demonstrating their power combined with real-time Twig template rendering, resulting in only the parts of the DOM that need swapping being sent down over the wire. Go check out the source code (and run it in your own Craft development environment).
spark-demo-inline-table-editing.mp4
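For anyone unfamiliar with the term, “signals” here refers to the general reactive-primitive pattern: a value that notifies subscribers when it changes, so only the code (and DOM) that depends on it gets updated. Here’s a toy sketch of the pattern; this is a generic illustration, not Spark’s or Datastar’s actual implementation.

```javascript
// Toy signal: a reactive value that notifies subscribers on change.
// Generic sketch of the pattern only, not Spark/Datastar internals.
function createSignal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get: () => value,
    set: (next) => {
      if (next === value) return; // skip no-op updates
      value = next;
      subscribers.forEach((fn) => fn(value));
    },
    subscribe: (fn) => {
      subscribers.add(fn);
      return () => subscribers.delete(fn); // unsubscribe handle
    },
  };
}

// Only code subscribed to this signal re-runs on change, which is what
// keeps DOM updates scoped to the parts that actually need swapping.
const editing = createSignal(false);
const seen = [];
editing.subscribe((v) => seen.push(v));
editing.set(true);
editing.set(true); // ignored: value unchanged
editing.set(false);
// seen is now [true, false]
```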
-
Great write-up! 👏👏 Shared this with some other Craft devs, and one question that came back was: what are the possible performance implications of SSE (compared to something like Sprig with regular fetch/AJAX calls)? I couldn’t find much info about it online.
-
Sprig has been a labour of love for the past 4 years, and while it has proven extremely popular, I’ve noticed some shortcomings to its overall approach.
1. Its API is coupled to that of htmx.
Sprig builds on top of htmx and is closely coupled to it. As new features (and scope creep) are added to htmx, they are naturally added to Sprig. So while Sprig adds its own concept of Twig components, the mechanism by which it works is heavily dependent on that of htmx.
2. It only solves part of the problem.
Sprig provides the ability to re-render Twig templates on the back-end and swap them into the DOM on the front-end, but it lacks pure front-end reactivity. I’ve written about the htmx API and how it can be used to react to events on the page, but most developers still reach for a JavaScript library like Alpine JS to facilitate front-end reactivity.
3. It uses an antiquated, suboptimal approach.
Sprig adds reactive components to Twig. While the concept is relatively straightforward to grasp, it results in more Twig code rendered than necessary, more HTML sent over the wire than necessary, and more of the DOM swapped out than necessary. Also, swapping multiple parts of the DOM requires either multiple request-response cycles, or the use of out-of-band swaps (a clumsy API that Sprig attempts to simplify but which can still be confusing).
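To make the out-of-band point concrete, here’s a generic sketch of the idea: one response carries a primary fragment plus extra fragments that each name their own target. A plain object stands in for the DOM, and the `oob` flag and `applyResponse` helper are my own illustrative inventions; htmx’s real mechanism (`hx-swap-oob`) works on actual HTML.

```javascript
// Generic sketch of out-of-band swapping. A plain object stands in for
// the DOM; `oob` and `applyResponse` are illustrative, not htmx's API.
function applyResponse(dom, targetId, fragments) {
  for (const frag of fragments) {
    if (frag.oob) {
      dom[frag.id] = frag.html;  // out-of-band: swap by the fragment's own id
    } else {
      dom[targetId] = frag.html; // normal: swap into the requested target
    }
  }
  return dom;
}

const dom = { row3: "<td>old</td>", counter: "0 edits" };
applyResponse(dom, "row3", [
  { html: "<td>new</td>" },                     // main swap
  { id: "counter", oob: true, html: "1 edit" }, // out-of-band swap
]);
// dom.row3 === "<td>new</td>", dom.counter === "1 edit"
```

Without the out-of-band mechanism, updating both `row3` and `counter` would take two request–response cycles, which is exactly the clumsiness described above.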
Spark aims to address each of the shortcomings above.
Part of how it does this is by leveraging Datastar, a JavaScript library that combines the core functionality of Alpine JS with that of htmx. Datastar takes a hypermedia-first approach, meaning that you won’t find history support, JavaScript execution in responses, nor any other “bells and whistles”. This is intentional. By embracing the simplicity of hypermedia, the encapsulation of web components (natively or using Lit) and the optimised DOM operations of web browsers, you can build highly performant, hypermedia-driven web apps, without requiring a full-blown JavaScript framework.
Read Datastar’s getting started guide.
Spark uses Datastar for interacting with the back-end (via its back-end plugins), and provides a simple API with which to do so. Take a look at the usage docs to see what it looks like.
So how does Spark actually address the shortcomings above?
By giving you less functionality, Spark encourages (forces, even) you to drive your web apps using hypermedia. It is a very thin layer on top of Datastar, which gives you a front-end data store that shares its state with (and is driven by) the back-end.
Datastar uses SSE (server-sent events), which allows a web server to push multiple real-time updates to the browser over a single HTTP connection. As each template is rendered, it is sent to the web browser via a server-sent event, meaning that elements are streamed in the response and modified in the DOM as soon as the web browser receives them. This may seem like an unusual super-power, but it makes highly performant responses possible: the browser can start applying updates as soon as the first fragment arrives, rather than waiting for the entire response to complete.
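For those asking about SSE performance above: the wire format itself is trivial. Each event is a few `event:`/`data:` lines terminated by a blank line, and any number of such frames can be streamed over one long-lived HTTP response. Here’s a minimal encoder for that `text/event-stream` format; note this is the generic SSE framing only (Datastar layers its own event names on top, which I’m not reproducing here).

```javascript
// Minimal SSE frame encoder, per the text/event-stream format: an
// "event:" line, one "data:" line per payload line, then a blank line.
// Generic SSE framing only; Datastar's own event names are not shown.
function sseFrame(event, data) {
  const dataLines = data
    .split("\n")
    .map((line) => `data: ${line}`) // multi-line payloads need one data: line each
    .join("\n");
  return `event: ${event}\n${dataLines}\n\n`;
}

// Two rendered fragments streamed as two frames on a single connection:
const stream =
  sseFrame("fragment", '<tr id="row3"><td>new</td></tr>') +
  sseFrame("fragment", '<span id="counter">1 edit</span>');
```

Compared with repeated fetch/AJAX calls, the per-update cost is just these few bytes of framing on an already-open connection, with no new request, headers, or handshake per update.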
But what does one do without all the “bells and whistles” that htmx provides? Is this really progress?? Wasn’t “htmx sucks” only meant as tongue-in-cheek?!?
The idea with Datastar (and Spark) is to go back to first principles while moving the web forward. Web components should be used (natively or using Lit) to provide reusable, encapsulated, and framework-agnostic custom elements. Because when you embrace custom elements you are embracing hypermedia.
If you are interested in hypermedia-driven apps (and concepts such as HOWL and HATEOAS), I encourage you to give Datastar and Spark a go. I’m specifically interested in your thoughts on the Spark API and its approach to performing actions and modifying the DOM and state.
In my experience, adopting this approach means unlearning a lot of the concepts that using Sprig has spoiled me with. Thanks go to Delaney Gillilan, author of Datastar, who has patiently answered my questions and addressed my reservations, and who has persistently reminded me that “just because you can, doesn’t mean you should”.