<pre class="metadata">
Title: Client-side API Design Principles
Group: W3C TAG
Shortname: design-principles
Repository: w3ctag/design-principles
Status: DREAM
Editor: Travis Leithead, Microsoft, [email protected]
Editor: Sangwhan Moon, Invited Expert, https://sangwhan.com
Former Editor: Domenic Denicola, Google https://www.google.com/, https://domenic.me/, [email protected]
Abstract: This document contains a small-but-growing set of design principles collected by the W3C TAG while <a href="https://github.com/w3ctag/spec-reviews/">reviewing</a> specifications.
Default Biblio Status: current
Markup Shorthands: markdown on
Boilerplate: feedback-header off
!By: <a href="https://www.w3.org/2001/tag/">Members of the TAG</a>, past and present
!Participate: <a href="https://github.com/w3ctag/design-principles">GitHub w3ctag/design-principles</a> (<a href="https://github.com/w3ctag/design-principles/issues/new">file an issue</a>; <a href="https://github.com/w3ctag/design-principles/issues?state=open">open issues</a>)
Link Defaults: html (dfn) queue a task/in parallel/reflect
</pre>
<pre class="link-defaults">
spec:html; type:dfn; for:/;
text:browsing context
text:focus update steps
text:task queue
spec:webidl; type:dfn; text:namespace
</pre>
<pre class="anchors">
url: https://tc39.github.io/ecma262/#sec-error-objects; type: interface; text: Error;
url: https://www.w3.org/TR/WebCryptoAPI/#dfn-Crypto; type: interface; text: Crypto
url: https://www.w3.org/TR/payment-request/#dfn-state; type: dfn; spec: payment-request; text: [[state]];
url: https://w3c.github.io/touch-events/#idl-def-touchevent; type: interface; text: TouchEvent
url: https://w3c.github.io/IntersectionObserver/;
type: interface; text: IntersectionObserver; url: #intersectionobserver
type: interface; text: IntersectionObserverEntry; url: #intersectionobserverentry
type: dictionary; text: IntersectionObserverInit; url: #dictdef-intersectionobserverinit
url: https://wicg.github.io/ResizeObserver/;
type: interface; text: ResizeObserver; url: #resizeobserver
url: https://dom.spec.whatwg.org/#ref-for-concept-getelementsbytagname; type: interface; text: getElementsByTagName; for: Document
</pre>
<style>
table.data {
text-align: left;
font-size: small;
}
</style>
<h2 id="api-principles">Principles behind design of Web APIs</h2>
<h3 id="safe-to-browse">It should be safe to visit a web page</h3>
Hyperlinks, links from one page to another, are one of the foundations of the Web.
Following a link, or visiting a web page, should be safe:
users doing this
should not have to fear for the security of their computer
or for essential aspects of their privacy.
(But it's not completely safe,
in the sense that users may face consequences
if their use of the Web is harming others.)
Furthermore, users should understand that it is safe (and how it isn't)
so they can make informed decisions between use of the Web versus other technologies.
Saying “essential aspects” here skips over quite a bit of detail.
The Web today is far from being perfectly private.
One aspect of privacy problems is when reality doesn't meet expectations.
For example,
a person walking down the street generally expects to be recognized by their friends,
but (depending on the country)
may not expect that they walked down that street at that time
to be recorded in a permanent government database.
Online, people might have less understanding of what is or isn't possible,
and their expectations might differ more widely.
<!-- Can we cite research on this??? -->
We can probably make privacy on the Web better than it is today:
* We can improve the user interfaces through which the Web is used
to make it clearer what users of the Web should expect.
* We can change the technical foundations of the Web
so that they match user expectations of privacy.
* We can consider the cases where users would be better off
if expectations were higher,
and in those cases
try to change both technical foundations and expectations.
When we add new features to the Web
that might weaken the security or privacy characteristics that the Web currently has,
we should consider the tradeoffs involved in that particular feature.
A new feature with security risks might still make users safer, for example,
because they would download and run native binary software less often,
and downloading and running binaries is much riskier than visiting a Web page.
However, we should also consider the tradeoff
against the *idea* that it is safe to visit Web pages:
adding a risky feature to the Web teaches the Web's users that the Web is less safe,
and this erodes one of the core values of the Web: that it is safe to visit websites.
We should seek to give users a true picture of the safety of the Web
so that they can act based on that understanding of safety
(in comparison to the safety of alternatives),
and at the same time we should seek to keep the Web safe for its users
and work to make it safer.
<h3 id="trusted-ui">Trusted user interface should be trustworthy</h3>
Since users of the Web can follow links to sites they might not know already,
software that lets them access the Web generally provides user interface
to show the user who they're interacting with and how,
such as by displaying part or all of the URL or other information about the site,
or whether the connection is secure.
Since users rely on this information to learn what site they're on
and to make judgments about whether it is trustworthy,
this software is designed so that
sites are not able to override this user interface
or spoof it in a way that would confuse users.
Therefore, when we add new features to the web,
we should consider whether they require new complexity in this user interface,
and whether that complexity would reduce users' ability
to make correct judgments about who they're interacting with
or whether the user interface is trustworthy or spoofed.
<h3 id="consent">Ask users for meaningful consent when appropriate</h3>
In some cases,
we might add features to the Web that are not appropriate to allow for any website a user visits,
but that are reasonable to have if the user agrees to use the feature
based on a reasonable understanding of what will happen as a result.
One example of such a feature might be support for location:
many users seem likely to understand what it means to share their current location with a website
and be able to consent to doing so
(even though they might not fully understand the privacy implications of doing so).
Another example of such a feature would be camera access,
where users understand that they're sharing the image
in the rectangle visible to the camera
(even if they might not understand everything someone might do with the data).
We should not depend on asking the user for consent
(via a permission prompt or other mechanism)
if we don't have a way to express that request
such that users will understand what is being requested
and the main implications of giving their consent.
Understanding whether this is the case may require
studying users' understanding of potential user interfaces,
although in many cases experienced user experience designers
may be able to extrapolate from previous experience.
We should also not ask users to consent to something
that can also happen if the user does not consent.
The existence of prompts
that request a user's consent for something
(from trusted user interface in their browser or other user agent)
implies that the thing for which consent is being requested
will not happen if the user doesn't consent.
Therefore we should not have user agents request consent
for something that they can't effectively prevent.
Doing so would send the wrong message about the safety characteristics of the Web
and would lead users to be disappointed when the expectations of privacy,
which they learned from trusted user interface,
did not match reality.
Asking users for consent via permission prompts
can reinforce the idea that <a href="#safe-to-browse">the web is safe</a>
by showing the user that certain things won't happen without their permission.
But frequently asking users for consent can also show how scary a place the web is
by showing how many sites are willing to ask for intrusive and unnecessary permissions.
<h2 id="js">JavaScript Language</h2>
<h3 id="js-only">Web APIs are for JavaScript</h3>
The language that web APIs are meant to be used in, and specified for, is JavaScript (also known as
[[!ECMASCRIPT]]). They are not language-agnostic, and are not meant to be.
This is sometimes a confusing point because [[WEBIDL]] descended from the language-agnostic OMG IDL
(and at one point, included "Java Bindings"). Even today, the structure of the document contains a
confusing and redundant division between the "Interface definition language" and the "ECMAScript
binding". Rest assured that this division is simply a historical artifact of document structure,
and does not imply anything about the intent of Web IDL in general. The only reason it remains is
that nobody has taken the time to eradicate it.
As such, when designing your APIs, your primary concern should be with the interface you present to
JavaScript developers. You can freely rely upon language-specific semantics and conventions, with
no need to keep things generalized.
<h3 id="js-rtc">Preserve run-to-completion semantics</h3>
Web APIs are essentially vehicles for extruding C++- (or Rust-) authored capabilities into the
JavaScript code that developers write. As such, it's important to respect the invariants that are
in play in normal JavaScript code. One of the most important of these is <em>run-to-completion
semantics</em>: wherein each turn of the JavaScript event loop is processed completely before
returning control to the user agent.
In particular, this means that JavaScript functions cannot be preempted mid-execution, and thus
that any data observed within the function will stay constant as long as that function is active.
This is not the case in other languages, which allow data races via multithreading or other
techniques—a C function can be preempted at any time, with the bindings it has access to changing
values from one line to the next.
This no-data-races invariant is extensively relied upon in JavaScript programs. As such, the
invariant must never be violated—even by web APIs, which are often implemented in languages that
<em>do</em> allow data races. Although the user agent may be using threads or other techniques to
modify state <a>in parallel</a>, web APIs must never expose such changing state directly to
developers. Instead, they should <a>queue a task</a> to modify author-observable state (such as an
object property).
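To illustrate, here is a sketch using a hypothetical `device` object whose `status` the user agent updates in parallel and which fires a `statuschange` event (the names are illustrative only):

```js
// `device` is hypothetical: its underlying status is updated by the user
// agent in parallel (e.g. on another thread).
function illustrate(device) {
  const before = device.status;
  for (let i = 0; i < 1e6; i++) { /* long-running synchronous work */ }
  const after = device.status;
  // Run-to-completion: author code never observes a mid-turn change.
  console.assert(before === after);

  // Instead, the user agent queues a task to update author-observable
  // state, so the change is only seen in a later turn of the event loop.
  device.addEventListener("statuschange", () => {
    console.log(device.status); // the new value, observed in a later task
  });
}
```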
<h3 id="js-gc">Do not expose garbage collection</h3>
Web APIs should not expose a way for author code to deduce when/if garbage collection of JavaScript
objects has run.
Although some APIs may expose garbage collection, such as some implementations of
{{Document/getElementsByTagName}}, and the JavaScript <a
href="https://github.com/tc39/proposal-weakrefs">WeakRefs proposal</a> (which has multiple
implementer support), API designers are strongly encouraged to avoid points in their own APIs that
depend on this timing.
The reason for this is somewhat subtle. The more that Web API semantics are affected by garbage
collection timing (or whether objects are collected at all), the more programs will be affected by
changes in this timing. But user agents differ significantly in both the timing of garbage
collection and whether certain objects are collected at all, which means the resulting code will be
non-interoperable. Worse, according to the usual rules of game theory as applied to browsers, this
kind of scenario could force other user agents to copy the garbage collection timing of the
original in order to create interoperability. This would cause current garbage collection
strategies to ossify, preventing improvement in one of the most dynamic areas of JavaScript
virtual machine technology.
In particular, this means that you shouldn't expose any API that acts as a weak reference, e.g. with a
property that becomes <code highlight="js">null</code> once garbage collection runs. Such freeing of
memory should be entirely deterministic.
For example, currently, {{Document/getElementsByTagName}} permits reusing the {{HTMLCollection}}
which it creates, when it's called with the same receiver and tag name. In practice, engines reuse
the output if it has not been garbage collected. This creates behavior which differs based on the
details of garbage collection, which is strongly discouraged. If {{Document/getElementsByTagName}}
were designed today, the advice to the specification authors would be to either reliably reuse
the output, or to produce a new {{HTMLCollection}} each time it's invoked.
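To make the hazard concrete, here is a sketch of how that reuse behavior lets author code infer garbage collection timing, using an expando property as the probe:

```js
// Tag the collection with an expando property, then drop all references.
document.getElementsByTagName("div").expando = true;

// Later: if the cached HTMLCollection has not been garbage collected, the
// expando is still present; if it has been collected, a fresh collection
// without the expando is returned. Author code can therefore infer whether
// and when garbage collection ran.
const gcHasRun = !document.getElementsByTagName("div").expando;
```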
The case of {{Document/getElementsByTagName}} is particularly insidious because nothing about
the API visibly relates to garbage collection details. By contrast, special APIs for this particular
purpose such as `WeakRef` and `FinalizationGroup` have their GC interaction as a specific part of
their contract with the developer.
<h2 id="api-surface">API Surface Concerns</h2>
<h3 id="naming-is-hard">Naming things</h3>
Naming is hard! We would all like a silver-bullet for naming APIs...
Names take meaning from:
* signposting (the name itself)
* use (how people come to understand the name over time)
* context (the object on the left-hand side, for example)
Consistency is a good principle that helps to create a platform that users can navigate intuitively
and by name association.
Please consult widely on names in your APIs.
Boolean properties, options, or API parameters which are asking a question about
their argument *should not* be prefixed with <code>is</code>, while methods
that serve the same purpose, provided they have no side effects, *should* be
prefixed with <code>is</code> to be consistent with the rest of the platform.
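For example, existing platform APIs that follow this convention include:

```js
// Boolean properties and dictionary options: no "is" prefix.
new Event("change", { bubbles: true, cancelable: false });
console.log(event.bubbles);

// Side-effect-free methods that ask a question: "is" prefix.
nodeA.isEqualNode(nodeB);
range.isPointInRange(node, 0);
```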
The name should not be directly associated with a brand or specific revision of
the underlying technology whenever possible; technology becomes obsolete, but
since removing APIs from the web is difficult, naming should be generic and
future-proof whenever possible.
<h3 id="feature-detect">New features should be detectable</h3>
The existence of new features should generally be detectable,
so that web content can act appropriately whether the feature is present or not.
This applies both to features that
are not present because they are not implemented,
and to features that may not be present in a particular configuration
(ranging from features that are present only on particular platforms
to features that are
only available in secure contexts).
It should generally be indistinguishable why a feature is unavailable,
so that feature detection code written for one case of unavailability
(e.g., the feature not being implemented yet in some browsers)
also works in other cases
(e.g., a library being used in a non-secure context when
the feature is limited to secure contexts).
Detection should always be possible from script,
but in some cases the feature should also be detectable
in the language where it is used
(such as ''@supports'' in CSS).
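For example (illustrative checks; the pattern is what matters, not these particular features):

```js
// Detecting an interface on the global object:
if ("IntersectionObserver" in window) {
  // safe to use IntersectionObserver
}

// Detecting a member on an existing object:
if ("getBattery" in navigator) {
  // the feature is present
}

// Detecting a CSS feature from script (or use @supports in CSS itself):
if (CSS.supports("display", "grid")) {
  // the property/value pair is supported
}
```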
<h3 id="attributes-like-data">Attributes should behave like data properties</h3>
[[!WEBIDL]] attributes are reified in JavaScript as accessor properties, i.e. properties with
separate getter and setter functions which can react independently. This is in contrast to the
"default" style of JavaScript properties, data properties, which do not have configurable behavior
but instead can simply be set and retrieved, or optionally marked read-only so that they cannot be
set.
Data property semantics are what are generally expected by JavaScript developers when interfacing
with objects. As such, although getters and setters allow infinite customizability when defining
your Web IDL attributes, you should endeavor to make the resulting accessor properties behave as
much like a data property as possible. Specific guidance in this regard includes:
* Getters must not have any (observable) side effects.
* Getters should not perform any expensive operations. (A notable failure of the platform in this
regard is getters like <code highlight="js">offsetTop</code> performing layout; do not repeat this mistake.)
* Ensure that your attribute's getter returns the same object each time it is called, until some
occurrence in another part of the system causes a logical "reset" of the property's value. In
particular, <code highlight="js">obj.attribute === obj.attribute</code> must always hold, and so returning a
new value from the getter each time is not allowed.
* Whenever possible, preserve values given to the setter for return from the getter. That is,
given <code highlight="js">obj.attribute = x</code>, a subsequent <code highlight="js">obj.attribute === x</code> should be
true. (This will not always be the case, e.g. if a normalization or type conversion step is
necessary, but should be held as a goal for normal code paths.)
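A minimal author-level sketch of these guidelines (the `ExampleThing` class is hypothetical; a real API would express this in Web IDL):

```js
class ExampleThing {
  #options = { mode: "auto" };

  get options() {
    // Cheap, side-effect free, and identity-stable:
    // thing.options === thing.options always holds.
    return this.#options;
  }

  set options(value) {
    // Preserve the value given to the setter for later reads.
    this.#options = value;
  }
}

const thing = new ExampleThing();
const opts = { mode: "manual" };
thing.options = opts;
console.assert(thing.options === opts);
console.assert(thing.options === thing.options);
```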
<h3 id="live-vs-static">Consider whether objects should be live or static</h3>
Objects returned from functions, attribute getters, etc.,
can either be live or static.
A <dfn export>live object</dfn> is one that continues to reflect
changes made after it was returned to the caller.
A <dfn export>static object</dfn> is one that reflects
the state at the time it was returned.
Objects that are the way state is mutated are generally live.
For example, DOM nodes are returned as live objects,
since they are the API through which attributes are set and other changes
to the DOM are made.
They also reflect changes to the DOM made in other ways
(such as through user interaction with forms).
Objects that represent a collection
that might change over time
(and that are not the way state is mutated)
should generally be returned as static objects.
This is because it is confusing to users of the API when
a collection changes while being iterated.
Because of this,
it is generally considered a mistake that methods like
{{Document/getElementsByTagName()}} return live objects;
{{ParentNode/querySelectorAll()}} was made to return static objects
as a result of this experience.
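The difference is easy to observe in author code:

```js
const live = document.getElementsByTagName("p");  // live HTMLCollection
const snapshot = document.querySelectorAll("p");  // static NodeList

const before = { live: live.length, snapshot: snapshot.length };
document.body.appendChild(document.createElement("p"));

console.log(live.length === before.live + 1);     // true: reflects the change
console.log(snapshot.length === before.snapshot); // true: unchanged snapshot
```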
On the other hand, even though {{URLSearchParams}} represents a collection,
it should be live because the collection is mutated through that object.
Note: It's possible that some of this advice should be reconsidered
for <a>maplike</a> and <a>setlike</a> types,
where iterators have reasonable behavior
for mutation that happens during iteration.
This point likely needs further discussion,
and perhaps further experience of use of these types.
It's also worth considering the implications of having
live versus static objects for the speed of implementations of the API.
When the data needed by an object are expensive to compute up-front,
there is an advantage for that object to be live so that the results
can be computed lazily, such as for {{Window/getComputedStyle()}}.
On the other hand,
if the data needed by an object are expensive to keep up-to-date,
such as for the {{NodeList}} returned from {{ParentNode/querySelectorAll()}},
then providing a static object avoids
having to keep the object updated until it is garbage collected
(which may be substantially after its last use).
Likewise, the choice of live versus static objects
can influence the memory use of an API.
If each call of a method returns a new static object,
and the objects are large,
then substantial amounts of memory can be wasted
until the next garbage collection.
The choice of whether an object is live or static may also
influence whether it should be returned
from an attribute getter or from a method.
See [[#attributes-like-data]].
In particular,
if a result that changes frequently is returned as a static object,
it should probably be returned from a method rather than an attribute getter.
<h3 id="casing-rules">Use casing rules consistent with existing APIs</h3>
Although they haven't always been uniformly followed, through the history of web platform API
design, the following rules have emerged:
<table class="data complex">
<thead>
<tr>
<th></th>
<th>Casing rule</th>
<th>Examples</th>
</tr>
</thead>
<tr>
<th>Methods and properties<br>(Web IDL attributes, operations, and dictionary keys)</th>
<td>Camel case</td>
<td><code highlight="js">document.createAttribute()</code><br>
<code highlight="js">document.compatMode</code></td>
</tr>
<tr>
<th>Classes and mixins<br>(Web IDL interfaces)</th>
<td>Pascal case</td>
<td><code highlight="js">NamedNodeMap</code><br>
<code highlight="js">NonElementParentNode</code></td>
</tr>
<tr>
<th>Initialisms in APIs</th>
<td>All caps, except when the first word in a method or property</td>
<td><code highlight="js">HTMLCollection</code><br>
<code highlight="js">element.innerHTML</code><br>
<code highlight="js">document.bgColor</code></td>
</tr>
<tr>
<th>Repeated initialisms in APIs</th>
<td>Follow the same rule</td>
<td><code highlight="js">HTMLHRElement</code><br>
<code highlight="js">RTCDTMFSender</code></td>
</tr>
<tr>
<th>The abbreviation of "identity"</th>
<td><code highlight="js">Id</code>, except when the first word in a method or property</td>
<td><code highlight="js">node.getElementById()</code><br>
<code highlight="js">event.pointerId</code><br>
<code highlight="js">credential.id</code></td>
</tr>
<tr>
<th>Enumeration values</th>
<td>Lowercase, dash-delimited</td>
<td><code highlight="js">"no-referrer-when-downgrade"</code></td>
</tr>
<tr>
<th>Events</th>
<td>Lowercase, concatenated</td>
<td><code>autocompleteerror</code><br>
<code>languagechange</code></td>
</tr>
<tr>
<th>HTML elements and attributes</th>
<td>Lowercase, concatenated</td>
<td><code highlight="html"><figcaption></code><br>
<code highlight="html"><textarea maxlength></code></td>
</tr>
<tr>
<th>JSON keys</th>
<td>Lowercase, underscore-delimited</td>
<td><code highlight="js">manifest.short_name</code></td>
</tr>
</table>
<div class="non-normative">
Note that in particular, when an HTML attribute is <a>reflected</a> as a property, the attribute
and property's casings will not necessarily match. For example, the HTML attribute
<{img/ismap}> on <{img}> elements is <a>reflected</a> as the
{{HTMLImageElement/isMap}} property on {{HTMLImageElement}}.
The rules for JSON keys are meant to apply to specific JSON file formats sent over HTTP
or stored on disk, and don't apply to the general notion of JavaScript object keys.
Repeated initialisms are particularly non-uniform throughout the platform. Infamous historical
examples that violate the above rules are {{XMLHttpRequest}} and
{{HTMLHtmlElement}}. Do not follow their example; instead always capitalize your
initialisms, even if they are repeated.
</div>
<h3 id="prefer-dict-to-bool">Prefer dictionary parameters over boolean parameters or other unreadable parameters</h3>
APIs should generally prefer dictionary parameters
(with named booleans in the dictionary)
over boolean parameters.
This makes the code that calls the API
<a href="https://ariya.io/2011/08/hall-of-api-shame-boolean-trap">much more readable</a>.
It also makes the API more extensible in the future,
particularly if multiple booleans are needed.
<p class="example">For example,
<code highlight="js">new Event("exampleevent", { bubbles: true, cancelable: false})</code>
is much more readable than
<code highlight="js">new Event("exampleevent", true, false)</code>.
Furthermore,
the booleans in dictionaries need to be designed so that they all default to false.
If booleans default to true, then
<a href="https://lists.w3.org/Archives/Public/public-script-coord/2013OctDec/0302.html">users of the API will find unexpected JavaScript behavior</a> since <code highlight="js">{ passive: false }</code> and <code highlight="js">{ passive: undefined }</code> will produce different results.
But at the same time, it's important to avoid naming booleans in negative ways,
because then code will have confusing double-negatives.
These pieces of advice may sometimes conflict,
but the conflict can be avoided by using opposite words without negatives,
such as “repeat” versus “once”,
“isolate” versus “connect”,
or “private” versus “public”.
Likewise, APIs should use dictionary parameters to avoid other cases
of difficult to understand sequences of parameters.
For example,
<code highlight="js">window.scrollBy({ left: 50, top: 0 })</code>
is more readable than
<code highlight="js">window.scrollBy(50, 0)</code>.
<h3 id="constructors">Classes should have constructors when possible</h3>
By default, [[!WEBIDL]] interfaces are reified in JavaScript as "non-constructible" classes: trying
to create instances of them using the normal pattern in JavaScript, `new X()`, will throw a
{{TypeError}}. This is a poor experience for JavaScript developers, and doesn't match the design of
most of the JavaScript standard library or of the classes JavaScript developers themselves create.
From a naive perspective, it's not even coherent: how can instances of this thing ever exist, if
they can't be constructed? The answer is that the platform has magic powers: it can create instances
of Web IDL-derived classes without ever actually calling their constructor.
You should strive to reduce such magic in designing your own APIs, and give them constructors which
allow JavaScript developers to create them, just like the platform does. This means adding an
appropriate [=constructor operation=] to your interface, and defining the algorithm for
creating new instances of your class.
Apart from reducing the magic in the platform, adding constructors allows JavaScript developers to
create instances of the class for their own purposes, such as testing, mocking, or interfacing with
third-party libraries which accept instances of that class. Additionally, because of how
JavaScript's class design works, it is only possible to create a subclass of the class if it is
constructible.
Sometimes this isn't the right answer: for example, some objects essentially wrap handles to
privileged resources, and it isn't reasonable to create some representation of that privileged
resource in JavaScript so that you can pass it to the constructor. Or the object's lifecycle needs
to be very carefully controlled, so that creating new ones is only possible through privileged
operations. Or the interface represents an abstract base class for which it doesn't make sense to
construct an instance, or create an author-defined subclass. But cases like these should be the
exception, not the norm.
<p class="example">The {{Event}} class, and all its derived interfaces, are constructible. This
allows JavaScript developers to synthesize instances of them, which can be useful in testing or
shimming scenarios. For example, a library designed for {{MouseEvent}} instances can still be used
by another library that accepts as input {{TouchEvent}} instances, by writing a small bit of adapter
code that creates synthetic {{MouseEvent}}s.</p>
<p class="example">The {{DOMTokenList}} class is, very unfortunately, <a
href="https://www.w3.org/Bugs/Public/show_bug.cgi?id=27114">not constructible</a>. This prevents the
creation of <a>custom elements</a> that expose their token list attributes as {{DOMTokenList}}s.</p>
<p class="example">No HTML elements are constructible. This is largely due to some unfortunate
design decisions that <a href="https://github.com/whatwg/html/issues/896">would take some effort to
work around</a>. In the meantime, for the purposes of the <a>customized built-in elements</a>
feature, they have been made subclassable through a specifically-designed one-off solution: the
[{{HTMLConstructor}}] extended attribute.</p>
<p class="example">The {{Window}} class is not constructible, because creating a new window is a
privileged operation with significant side effects. Instead, the {{Window/open|window.open()}}
method is used to create new windows.</p>
<p class="example">The {{ImageBitmap}} class is not constructible, as it represents an immutable,
ready-to-paint bitmap image, and the process of getting it ready to paint must be done
asynchronously. Instead, the {{createImageBitmap()}} method is used to create it.</p>
<p class="example">Several non-constructible classes, like {{Navigator}}, {{History}}, or
{{Crypto}}, are non-constructible because they are singletons representing access to per-window
information. In these cases, the Web IDL <a>namespace</a> feature or some evolution of it might have
been a better fit, but they were designed before namespaces existed, and somewhat exceed the current
capabilities of namespaces. If you're designing a new singleton, strongly consider using a
<a>namespace</a> instead of a non-constructible class; if namespaces don't yet have enough
capabilities, <a href="https://github.com/heycam/webidl/issues/new">file an issue on Web IDL</a> to
discuss.</p>
<div class="note">
<p>For any JavaScript developers still incredulous that it is possible to create an instance of
a non-constructible class, we provide the following code example:</p>
<pre highlight="js">
const secret = {};
class NonConstructible {
constructor(theSecret = undefined) {
if (theSecret !== secret) {
throw new TypeError("Illegal constructor.");
}
// Set up the object.
}
}
</pre>
<p>If the author of this code never exposes the `secret` variable to the outside world, they can
reserve the ability to create instances of `NonConstructible` for themself: they can always do
<code highlight="js">const instance = new NonConstructible(secret)</code>. But the outside world
can never successfully "guess" the secret, and thus will always receive a {{TypeError}} when
trying to construct an instance of `NonConstructible`.</p>
<p>From this perspective, what browsers are doing when they create non-constructible classes
isn't magic: it's just very tricky. The above section is aimed at urging browsers to avoid such
tricks (or their equivalents in C++/Rust/etc.) whenever reasonable.</p>
</div>
<h3 id="promises">Design asynchronous APIs using Promises</h3>
Asynchronous APIs should generally be designed using promises
rather than callback functions.
This is the pattern that we've settled on for the Web platform,
and having APIs consistently use promises means that the APIs are
easier to use together (such as by chaining promises).
This pattern also tends to produce cleaner code than the use
of APIs with callback functions.
Furthermore, you should carefully consider whether an API might need
to be asynchronous before making it a synchronous API.
An API might need to be asynchronous if:
* some implementations may (in at least some cases) wish to prompt the user
or ask the user for permission before allowing use of the API,
* implementations may need to consider information that might be stored
on disk in order to compute the result,
* implementations may need to interact with the network before returning
the result, or
* implementations may wish to do work on another thread or in another
process before returning the result.
For more information on how to design APIs using promises,
and on when to use promises and when not to use promises,
see <strong><a href="https://www.w3.org/2001/tag/doc/promises-guide">Writing
Promise-Using Specifications</a></strong>.
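For example, a promise-returning design composes naturally with `async`/`await` and with other promise-based APIs (the `sensor.readValue()` method here is hypothetical):

```js
// `sensor.readValue()` is a hypothetical promise-returning API; it might
// need to prompt the user, read from disk or the network, or do work on
// another thread before producing a result.
async function showReading(sensor) {
  const value = await sensor.readValue();
  console.log(value);
}
```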
<h3 id="aborting">Cancel asynchronous APIs/operations using AbortSignal</h3>
Async functions that need cancellation should take an `AbortSignal` as part
of an options dictionary.
Example:
```js
const controller = new AbortController();
const signal = controller.signal;
geolocation.read({ signal });
```
Reusing the same primitive everywhere has multiplicative effects throughout
the platform. In particular, there's a common pattern of using a single
`AbortSignal` for a bunch of ongoing operations, and then aborting them
(with the corresponding `AbortController`) when e.g. the user presses cancel,
or a single-page-app navigation occurs, or similar. So the minor extra
complexity for an individual API leads to a large reduction in complexity
when used with multiple APIs together.
There might be cases where cancellation cannot be guaranteed. In these cases,
the `AbortController` can still be used because a call to `abort()` on
`AbortController` is a request to abort. How you react to it depends on your spec.
Note: `requestAbort()` was considered instead of `abort()` in the design of
`AbortController`, but the latter was chosen for brevity.
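A rough author-level sketch of what honoring the signal can look like inside an operation (a real specification would express this algorithmically; the `readSlowly()` function is hypothetical):

```js
function readSlowly({ signal } = {}) {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) {
      return reject(signal.reason ?? new DOMException("Aborted", "AbortError"));
    }
    const timer = setTimeout(() => resolve("done"), 5000);
    signal?.addEventListener("abort", () => {
      clearTimeout(timer); // stop (or request to stop) the underlying work
      reject(signal.reason ?? new DOMException("Aborted", "AbortError"));
    }, { once: true });
  });
}

// Usage:
const controller = new AbortController();
readSlowly({ signal: controller.signal }).catch(e => console.log(e.name));
controller.abort(); // logs "AbortError"
```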
<h3 id="secure-context">Consider limiting new features to secure contexts</h3>
It may save you significant time and effort
to pre-emptively restrict your feature to Secure Contexts.
The TAG is on the record in supporting
an industry-wide move to [Secure the Web](https://www.w3.org/2001/tag/doc/web-https)
and applaud [efforts](https://blog.mozilla.org/security/2017/01/20/communicating-the-dangers-of-non-secure-http/)
to shift web traffic to secure connections.
A great deal of effort has gone into debating
which features should be restricted to [Secure Contexts](https://w3c.github.io/webappsec-secure-contexts/).
Opinions vary amongst engine vendors, leading to difficult choices for feature designers.
Some vendors require *all* new features be restricted this way,
whereas others take a more selective approach.
This backdrop makes it difficult to provide
advice about the extent to which your feature should be restricted.
What we *can* highlight is that Secure Context-restricted features
face the least friction in gaining wide adoption amongst these varying regimes.
Specification authors can limit most features defined in
<a href="https://heycam.github.io/webidl/">WebIDL</a>,
to secure contexts
by using the
<code>[<a href="https://w3c.github.io/webappsec-secure-contexts/#integration-idl">SecureContext</a>]</code> extended attribute
on interfaces, namespaces, or their members (such as methods and attributes).
Similar ways of marking features as limited to secure contexts should be added
to other major points where the Web platform is extended over time
(for example, the definition of a new CSS property).
However, for some types of extension points (e.g., dispatching an event),
limitation to secure contexts should just
be defined in normative prose in the specification.
As described in [[#feature-detect]],
the existence of features should generally be detectable,
so that web content can act appropriately if the feature is present or not.
Since the detection should be the same no matter why the feature is unavailable,
a feature that is limited to secure contexts should, in non-secure contexts,
be indistinguishable from a feature that is not implemented.
However, if, for some reason
(a reason that itself requires serious justification),
it is not possible for developers to detect whether a feature is present,
limiting the feature to secure contexts
might cause problems
for libraries that may be used in either secure or non-secure contexts.
If a feature would pose a risk to user privacy or security
without the authentication, integrity, or confidentiality
that is present only in secure contexts,
then the feature must be limited to secure contexts.
One example of a feature that should be limited to secure contexts
is geolocation, since the authentication and confidentiality provided by
secure contexts reduce the risks to user privacy.
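In author code, plain feature detection therefore remains the recommended pattern; it gives the same answer whether the feature is unimplemented or withheld because the context is non-secure (`someNewFeature` is a hypothetical name):

```js
// `someNewFeature` is a hypothetical [SecureContext]-restricted API.
if ("someNewFeature" in navigator) {
  // Present: implemented, and this is a secure context.
} else {
  // Absent: either not implemented or not exposed here; the page cannot
  // (and should not need to) tell which.
}
```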
<h3 id="string-constants">Constants, enums, and bitmasks</h3>
In many other platforms and programming languages, constants and enums are
commonly expressed using an integer constant, sometimes in conjunction with
a bitmask mechanism.
However, on the Web platform, it is more common to use a string constant where
a constant is needed. String constants are much more inspection-friendly, both
during development and when surfacing constant values through a user-facing
interface, and in JavaScript engines integers offer no significant
performance benefit over strings.
Strings do not directly address the use case for bitmasks. For those cases,
use a dictionary of named booleans that contains the state the bitmask would
have expressed; such a dictionary can be passed from method to method just as
easily as a single bitmask value.
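For example (hypothetical `recognizer` API, for illustration only):

```js
// Bitmask style (discouraged):
//   recognizer.configure(RECOGNIZE_TEXT | RECOGNIZE_FACES);

// Dictionary of named booleans (preferred); the whole object can be passed
// around as easily as a single integer would be:
const options = { recognizeText: true, recognizeFaces: true };
recognizer.configure(options);
```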
<h2 id="event-design">Event Design</h2>
<h3 id="one-time-events">Use promises for one time events</h3>
Follow the <a href="https://www.w3.org/2001/tag/doc/promises-guide#one-time-events">advice</a>
in the <strong><a href="https://www.w3.org/2001/tag/doc/promises-guide">Writing
Promise-Using Specifications</a></strong> guideline.
<h3 id="promises-and-events">Events should fire before Promises resolve</h3>
In the case that an asynchronous algorithm (Promise based) intends to dispatch
events, then such events should be dispatched before the Promise resolves, rather
than after.
This ensures consistency, for instance if you have event handlers changing
state, you want all that state to be applied before acting on the resolved
promise, as you can subscribe to a promise at any time (before/after it has
resolved or rejected).
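For example, given a hypothetical `doOperation()` that both fires a `statechange` event and returns a promise:

```js
// `obj` and its doOperation() method are hypothetical.
obj.addEventListener("statechange", () => {
  // Runs first: any state updated by event handlers is in place…
});
obj.doOperation().then(() => {
  // …by the time code acting on the resolved promise runs.
});
```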
<h3 id="dont-invent-event-like">Don't invent your own event listener-like infrastructure</h3>
For recurring events, it could be convenient to create a custom pair of APIs to
`"register"`/`"unregister"`, `"subscribe"`/`"unsubscribe"`, etc., that take a
callback and get invoked multiple times until the paired cancel API is called.
Instead, use the existing event registration pattern and separate API controls
to start/stop the underlying process (since event listeners should not have
[[#events-are-for-notification|side-effects]]).
If the callback would have been provided specific data, then this data should
be added to an `Event` object (but see <a href="#state-and-subclassing">State
and `Event` subclasses</a> as this is not always necessary).
In some cases, you can transfer the state that would be surfaced in callback
parameters into a more persistent object which, in turn, can inherit from
`EventTarget`.
In some cases you might not have an object to inherit from `EventTarget` but
it is usually possible to create such an object.
For instance with Web Bluetooth you can add event listeners on a `Characteristic`
object, which is obtained via `getCharacteristic()`. If you need to filter events,
it might be possible to create a filter like
```js
const filter = navigator.nfc.createReadFilter({
recordType: "json"
});
const onMessage = message => { … };
filter.addEventListener('exchange', onMessage);
```
<h3 id="always-add-event-handlers">Always add event handler attributes</h3>
For an object that inherits from {{EventTarget}}, there are two techniques available for registering
an event handler (e.g., an event named "somethingchanged"):
1. {{EventTarget/addEventListener()}} which allows authors to register for the event using the
event's name (i.e.,
<code highlight="js">someobject.addEventListener("somethingchanged", myhandler)</code>) and
2. `onsomethingchanged` IDL attributes which allow one event handler to be directly assigned to the
object (i.e., <code highlight="js">someobject.onsomethingchanged</code>).
Because there are two techniques for registering events on objects inheriting from {{EventTarget}},
authors may be tempted to omit the corresponding [=event handler IDL attributes=]. They may assume
that event handler IDL attributes are a legacy registration technique or are simply not needed
given that {{EventTarget/addEventListener()}} is available as an alternative. However, it is
important to continue to define event handler IDL attributes because:
* they preserve consistency in the platform
* they enable feature-detection for the supported events (see [[#feature-detect]])
So, if the object inherits from {{EventTarget}}, add a corresponding
<code>on<em>yourevent</em></code> [=event handler IDL attribute=] to the interface.
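Among other benefits, this makes support for the event itself detectable from script:

```js
if ("onsomethingchanged" in someobject) {
  // The "somethingchanged" event is supported here.
}
```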
<p class="note">Note that for HTML and SVG elements, it is traditional to add the
[=event handler IDL attributes=] on the {{GlobalEventHandlers}} interface mixin, instead of
directly on the relevant element interface(s). Similarly, [=event handler IDL attributes=]
are traditionally added to {{WindowEventHandlers}} rather than {{Window}}.</p>
<h3 id="events-are-for-notification">Events are for notification</h3>
Try to design DOM events to deliver after-the-fact notifications of changes. It may be tempting to try to trigger side-effects from the action of {{EventTarget/dispatchEvent()}}, but in general this is <a href="https://lists.w3.org/Archives/Public/public-webapps/2014AprJun/0510.html">strongly discouraged</a> as it requires changes to the DOM specification when added. Your design will proceed more quickly if you avoid this pattern.
<h3 id="guard-against-recursion">Guard against potential recursion</h3>
When designing a long-running or complicated algorithm that is initiated by an
API call, events are appropriate for [[#events-are-for-notification|notifying]]
user code of the ongoing process. However, they also introduce the possibility of unexpected
re-execution of the current algorithm before it has finished! Because user code gets
to run in the middle of the algorithm, recursion can happen if that user code triggers
the algorithm again (directly or indirectly).
To address this, if the algorithmic complexity and call graph are reasonably understood, add appropriate
state to track that the algorithm is in progress and terminate immediately if the algorithm
is already concurrently running. This technique is "guarding" the algorithm:
<div class="example">
The following is a technique (e.g., as used in the [=AbortSignal/add|AbortSignal's add algorithm=]
or as a stack-based variation in the [=focus update steps=])
for guarding against unplanned recursion.
In this example, an object of type <dfn noexport interface>MyObject</dfn> has
a method <dfn method for="MyObject">doComplexOpWithEvents()</dfn> and
an internal state attribute [[<dfn noexport attribute for="MyObject">started</dfn>]]
referenced below as
{{MyObject|this}}.[[{{started}}]] whose initial
value is `false`.
The {{doComplexOpWithEvents()}} method must act as follows:
1. If {{MyObject|this}}.[[{{started}}]] is not `false` then throw an {{InvalidStateError}}
{{DOMException}} and terminate this algorithm.
Note: {{doComplexOpWithEvents()}} is not allowed to be run again
(e.g., from within a `"currentlyinprogress"` event handler) before it finishes
its original execution.
2. Set {{MyObject|this}}.[[{{started}}]] to `true`.
3. ...Do complex stuff...
4. [=Fire an event=] named `"currentlyinprogress"` at {{MyObject|this}}.
5. ...Finish up complex stuff...
6. Set {{MyObject|this}}.[[{{started}}]] to `false`.
</div>
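The same guard, approximated in author-level JavaScript (a real specification would use an internal slot rather than a private field):

```js
class MyObject extends EventTarget {
  #started = false;

  doComplexOpWithEvents() {
    // The guard: refuse re-entrant invocation.
    if (this.#started) {
      throw new DOMException("Operation already in progress.",
                             "InvalidStateError");
    }
    this.#started = true;
    // …do complex stuff…
    this.dispatchEvent(new Event("currentlyinprogress"));
    // …finish up complex stuff…
    this.#started = false;
  }
}
```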
Note: A caution about early termination: if the algorithm being terminated would go
on to ensure some critical state consistency, be sure to also make the relevant adjustments
in state before early termination of the algorithm. Not doing so can lead to inconsistent
state and end-user-visible bugs when implemented as-specified.
Note: A caution about throwing exceptions in early termination: keep in mind the scenario
in which developers will be invoking the algorithm, and whether they would reasonably
expect to handle an exception in this [perhaps rare] case. For example, will this
be the only exception in the algorithm?
Sometimes "guarding" as illustrated above cannot be done because of the complexity
of the algorithm, the number of potential algorithm entry-points, or the side-effects
beyond the algorithm's control. In these cases another form of protecting the integrity
of the algorithm's state is to "defer" the event to a subsequent task or microtask.
Deferral ensures that any stack-based recursion is avoided (but does not eliminate
potentially problematic loops, as they could now occur as unending follow-up tasks).
Deferring an event is often specified as "[=queue a task=] to [=fire an event=]...".
By way of illustration, the deprecated
[Mutation Events](https://developer.mozilla.org/en-US/docs/Web/Guide/Events/Mutation_events)
fired in the middle of node insertion and removal algorithms, and
could be triggered by a variety of common API calls like {{Node/appendChild()}}. The side-effects
from recursively running node mutation algorithms caused many years of security issues.
A more robust approach embodied in
{{MutationObserver}}
applies the principle of deferral of notification — in this case to the microtask queue
following the algorithm.
Also note that deferral of events is always necessary if the algorithm that triggers
the event could be running on a different thread or process. In this case deferral
ensures the events can be processed on the correct task in the [=task queue=].
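In author-level terms, deferral means scheduling the dispatch for later rather than dispatching inline (the `target` object below is hypothetical):

```js
// Instead of dispatching inline in the middle of the algorithm, defer the
// dispatch to a later task — roughly what "queue a task to fire an event"
// means:
setTimeout(() => target.dispatchEvent(new Event("change")), 0);

// Microtask timing (the approach MutationObserver takes for delivering its
// records) can be approximated with queueMicrotask():
queueMicrotask(() => target.dispatchEvent(new Event("change")));
```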
Both approaches to protecting against unwanted recursion have trade-offs. Some things
to consider when choosing the guarding approach:
* all state management can be rationalized within one "turn" of the algorithm (no
need to consider any state changes or API invocations between the time the algorithm
completes and when the event fires).
* events can reveal internal state changes as early as possible when dispatched immediately
in the algorithm (resulting in minimal delay to user code that wants to act on
that state).
* events do not need to carry snapshots (copies) of internal state because user code
running in the event handler can observe relevant state directly on the instance
object they were fired on (avoiding the need to derive a new type of {{Event}}
to hold such state snapshots).
When deferring, event handlers will run after the algorithm ends (or starts
to run [=in parallel=]) and any number of other tasks or microtasks may run in between
that invalidate the object's state. Since the object's state will be unknown when
the deferred event is dispatched, consider the following:
* state relevant to the event should be packaged with the deferred event, usually
involving a new {{Event}}-derived type with new attributes to hold the state.
For example, the {{ProgressEvent}} adds {{ProgressEvent/loaded}},
{{ProgressEvent/total}}, etc. attributes to hold the state.
* any coordination needed among parts of an algorithm using deferred events often
requires defining an explicit state machine (well-defined state transitions) to
ensure that when a deferred event fires, the behavior of inspecting or changing
state is well-defined. For example, in [[payment-request]], the {{PaymentRequest}}'s
[=[[state]]=] internal slot explicitly tracks the object's state
through its well-defined transitions.
* in addition to defining state transitions, each coordinated algorithm usually
applies the guarding technique (anyway) to ensure the algorithm can only proceed
under the appropriate set of states. For example, in [[payment-request]] note
the guards used often around the [=[[state]]=] internal
slot such as in the {{MerchantValidationEvent/complete()}} algorithm.
Finally, dispatching a deferred event that does not seem to require
packaging extra state or defining a state machine for the algorithm as mentioned above,
could mean that all of the state transitions have been completed and that the event
is meant to signal completion of the algorithm. In this case, it's likely that instead
of using an event to signal completion, the API should be designed to return and complete
a {{Promise}} instead. See [[#one-time-events]].
Note: events that expose the possibility of recursion as described in this section
were sometimes called "synchronous events". This terminology is discouraged as it
implies that it is possible to dispatch an event asynchronously. All events are dispatched
synchronously. What is more often implied by "asynchronous event" is to defer firing
an event.
<h3 id="state-and-subclassing">State and {{Event}} subclasses</h3>
It's tempting to create subclasses of {{Event}} for all event types. This is frequently unnecessary. Consider subclassing {{Event}} when adding unique methods and large amounts of state. In all other cases, use a "vanilla" event with state captured in the {{Event/target}} object.
<h3 id="events-vs-observers">How to decide between Events and Observers</h3>
Several recent additions to the platform employ an Observer pattern. {{MutationObserver}}, {{IntersectionObserver}}, {{ResizeObserver}}, and <a href="https://github.com/WICG/indexed-db-observers">IndexedDB Observers</a> provide precedents for new Observer types.
Many designs can be described as either Observers or {{EventTarget}}s. How to decide?
In general, start your design process using an {{EventTarget}} and only move to Observers if and when events can't be made to work well. Using an {{EventTarget}} ensures your feature benefits from improvements to the shared base class, such as the recent addition of the {{AddEventListenerOptions/once}} option.
Observers have the following properties:
* Each instance is constructed with a callback, and optionally with some global options.
* Instances observe specific targets, using the `observe()` and `unobserve()` methods. The callback provided in the constructor is invoked when something interesting happens to those targets.
* Callbacks receive change records as arguments. These records contain the details about the interesting thing that happened. Multiple records can be delivered at once.
* Observation can be customized with additional options passed to `observe()`.
* `disconnect()` stops observation.
* `takeRecords()` synchronously returns records for all observed-but-not-yet-delivered occurrences.
<p class="example">
{{MutationObserver}} takes a callback which receives {{MutationRecord}}s. It cannot be customized at construction time, but each observation can be customized using the {{MutationObserverInit}} set of options. It observes {{Node}}s as targets.
</p>
<p class="example">
{{IntersectionObserver}} takes a callback which receives {{IntersectionObserverEntry}}s. It can be customized at construction time using the {{IntersectionObserverInit}} set of options, but each observation is not further customizable. It observes {{Element}}s as targets.
</p>
Observers involve defining a new class, dictionaries for options, and a new type for the delivered records. For the cost, you gain a few advantages:
* Instances can be customized at observation time. The `observe()` method of an `Observer` can take options, allowing per-callback customization. This is not possible with {{EventTarget/addEventListener()}}.
* Reuse of their creation-time customizations on multiple targets.
* Easy disconnection from multiple targets via `disconnect()`.
* Built-in support for synchronously probing system state. Both `Observer`s and {{EventTarget}}s can batch occurrences and deliver them later, but `Observer`s have a `takeRecords()` method which allows synchronously probing for pending records instead of waiting until the next batched delivery.
* Because they are single-purpose, you don't need to specify an event type.
`Observer`s and {{EventTarget}}s overlap in the following ways:
* Both can be customized at creation time.
* Both can batch occurrences and deliver them at any time. {{EventTarget}}s don't need to be synchronous; they can use microtask timing, idle timing, animation-frame timing, etc. You don't need an `Observer` to get special timing or batching.
* Neither {{EventTarget}}s nor `Observer`s need participate in a DOM tree (bubbling/capture and cancellation). Most prominent {{EventTarget}}s are {{Node}}s in the DOM tree, but many other events are standalone; e.g. {{IDBDatabase}} and {{XMLHttpRequestEventTarget}}. Even when using {{Node}}s, your events can be designed to be non-bubbling and non-cancelable to get that `Observer`-esque feel.
<div class="example">
Here is an example of using a hypothetical version of {{IntersectionObserver}} that is an {{EventTarget}} subclass:
<pre highlight="js">
const io = new ETIntersectionObserver(element, { root, rootMargin, threshold });
function listener(e) {
for (const change of e.changes) {
// ...
}
}
io.addEventListener("intersect", listener);
io.removeEventListener("intersect", listener);
</pre>
As you can see, we've lost some functionality compared to the `Observer` version: the ability to easily observe multiple elements with the same options, or the `takeRecords()` and `disconnect()` methods. We're also forced to add the rather-redundant `"intersect"` event type to our subscription calls.
However, we haven't lost the batching, timing, or creation-time customization, and the `ETIntersectionObserver` doesn't participate in a hierarchy. These aspects can be achieved with either design.
</div>
<h2 id="types-and-units">Types and Units</h2>
<h3 id="numeric-types">Use numeric types appropriately</h3>
[[!WEBIDL]] contains many numeric types. However, it is very
rare that its more specific ones are actually appropriate.
JavaScript has only one numeric type, Number: IEEE 754 double-precision
floating point, including ±0, ±Infinity, and NaN (although thankfully only one). The Web IDL "types" are coercion rules that apply when accepting an argument or triggering a setter. For example, a Web IDL <code>unsigned short</code> roughly says: "when someone passes this as an argument, take it modulo 65536 before doing any further processing". That is very rarely a useful thing to do.
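For instance, for an operation declared as taking an <code>unsigned short</code> (the method name below is hypothetical):

```js
// Web IDL (hypothetical): undefined example(unsigned short value);
obj.example(65537); // the algorithm observes 1 (65537 modulo 65536)
obj.example(-1);    // the algorithm observes 65535
obj.example(3.7);   // the algorithm observes 3 (fractional part dropped)
```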