
Pointer gestures understanding drag changes - detlev #736

Closed
wants to merge 10 commits into from
29 changes: 16 additions & 13 deletions understanding/21/pointer-gestures.html
@@ -10,16 +10,17 @@ <h1><strong>Pointer Gestures</strong><br />Understanding SC 2.5.1</h1>

<section id="intent">
<h2>Intent of this Success Criterion</h2>
<p>The intent of this Success Criterion is to ensure that content can be operated using simple inputs on a wide range of pointing devices. This is important for users who cannot perform complex gestures, such as multipoint or path-based gestures, in a precise manner, either because they may lack the accuracy necessary to carry them out or because they use a pointing method that lacks the capability or accuracy.</p>
<p>A path-based gesture involves a user interaction where the gesture's success is dependent on the path of the user's pointer movement and not just the endpoints. Examples include swiping (which relies on the direction of movement), the dragging of a slider thumb, or gestures which trace a prescribed path, as in the drawing of a specific shape. Such paths may be drawn with a finger or stylus on a touchscreen, graphics tablet, or trackpad, or with a mouse, joystick, or similar pointer device.</p>
<p>A user may find it difficult or impossible to accomplish these gestures if they have impaired fine motor control, or if they use a specialized or adapted input device such as a head pointer, eye-gaze system, or speech-controlled mouse emulation.</p>
<p>Note that free-form drag and drop actions are not considered path-based gestures for the purposes of this Success Criterion.</p>
<p>Examples of multipoint gestures include a two-finger pinch zoom, a split tap where one finger rests on the screen and a second finger taps, or a two- or three-finger tap or swipe. A user may find it difficult or impossible to accomplish these if they type and point with a single finger or stick, in addition to any of the causes listed above.</p>
<p>The intent of this Success Criterion is to ensure that content that can be operated via path-based or multipoint pointer gestures can also be operated using single-point activation with a pointer. This is important for pointer users who cannot perform complex gestures in a precise manner, or who cannot perform them at all. Some pointer users may lack the precision or fine motor control to carry out multipoint or path-based gestures. Some users with strongly impaired motor control use a pointing method via an adapted input device such as a head pointer, an eye-gaze system, or speech-controlled mouse emulation. These users rely on single-point activation.</p>
<p>A <strong>path-based</strong> gesture involves an interaction where the user engages a pointer (down event) and moves before disengaging the pointer (up event). The direction, speed, and delta between start and end point may each be evaluated to determine which function is triggered.</p>
<p>Examples of path-based gestures include the swiping and dragging of elements such as content sliders, swipe-to-reveal controls, drawer-type menus, or control sliders where a thumb can be dragged along a groove to set a value. Path-based gestures may be executed with a finger or stylus on a touchscreen, graphics tablet, or trackpad, or with a mouse, joystick, or similar pointer device.</p>
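<p>The following is a minimal, illustrative sketch of how such a path-based gesture might be evaluated in script. The element id <code>carousel</code>, the functions <code>showNextSlide</code> and <code>showPreviousSlide</code>, and the 50 pixel threshold are all assumptions made for this sketch, not requirements of this document:</p>
<pre><code>// Illustrative sketch only: a swipe (path-based gesture) handler that
// evaluates the delta between the down event and the up event.
// Element id, function names, and threshold are placeholders.
const carousel = document.getElementById('carousel');
const THRESHOLD = 50; // minimum horizontal movement, in CSS pixels

function showNextSlide() { /* advance the slider; details omitted */ }
function showPreviousSlide() { /* go back one slide; details omitted */ }

let startX = null;

carousel.addEventListener('pointerdown', (event) => {
  // Engage the pointer (down event): remember where the gesture started.
  startX = event.clientX;
});

carousel.addEventListener('pointerup', (event) => {
  if (startX === null) return;
  // Disengage the pointer (up event): the delta between start and end
  // point determines which function is triggered.
  const deltaX = event.clientX - startX;
  startX = null;
  if (Math.abs(deltaX) > THRESHOLD) {
    if (deltaX > 0) {
      showPreviousSlide(); // rightward swipe
    } else {
      showNextSlide();     // leftward swipe
    }
  }
});
</code></pre>
<p>Because the outcome depends on the pointer's movement between the down and the up event, functionality implemented this way needs a single-point alternative such as the controls described next.</p>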
<p>Alternatives that offer single-point activation include arrow buttons moving a content slider in a stepwise fashion; a menu button revealing a drawer-type menu; increment/decrement buttons or a numerical input field next to a control slider offering value input; or equivalent static controls made available after activating an element that implements a swipe-to-reveal gesture.</p>
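<p>As a rough sketch of the increment/decrement pattern, the following script wires two buttons to a range input so that the same value can be set without dragging the thumb. The ids <code>amount</code>, <code>decrease</code>, and <code>increase</code> are placeholders invented for this example:</p>
<pre><code>// Illustrative sketch: buttons as a single-point alternative to dragging
// a slider thumb. Assumes a range input with id "amount" and two buttons
// with ids "decrease" and "increase" (all placeholder names).
const slider = document.getElementById('amount');
const decrease = document.getElementById('decrease');
const increase = document.getElementById('increase');

function stepBy(direction) {
  const step = Number(slider.step) || 1;
  const next = Number(slider.value) + direction * step;
  // Keep the value inside the slider's range and move the thumb.
  slider.value = Math.max(Number(slider.min) || 0,
                          Math.min(Number(slider.max) || 100, next));
  // Let any listeners react exactly as they would to a drag.
  slider.dispatchEvent(new Event('input', { bubbles: true }));
}

decrease.addEventListener('click', () => stepBy(-1));
increase.addEventListener('click', () => stepBy(1));
</code></pre>
<p>A native range input already supports keyboard arrows and clicks on its track; the buttons add a large, simple single-point target on top of that.</p>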
<p>Examples of <strong>multipoint</strong> gestures include a two-finger pinch zoom, a split tap where one finger rests on the screen and a second finger taps, or a two- or three-finger tap or swipe. A user may find it difficult or impossible to accomplish these if they type and point with a single finger or stick, in addition to any of the causes listed above.</p>
<p>Authors must ensure that their content can be operated without such complex gestures. When they implement multipoint or path-based gestures, they must ensure that the functionality can also be operated via single-point activation. Examples of single-point activation on a touchscreen or touchpad include taps, double taps, and long presses. Examples for a mouse, trackpad, head-pointer, or similar device include single clicks, click-and-hold, and double clicks.</p>
<p>This Success Criterion applies to author-created gestures, as opposed to gestures defined on the level of operating system or user agent. An example for gestures provided on the operating system level would be swiping down to see system notifications, and gestures for built-in assistive technologies (AT) to focus or activate content, or to call up AT menus. Examples of user-agent-implemented gestures would be horizontal swiping implemented by browsers for navigating within the page history, or vertical swiping to scroll page content.</p>
<p>This Success Criterion applies to author-created gestures, as opposed to gestures defined on the level of operating system or user agent. Examples for gestures provided at the operating system level would be swiping down to see system notifications, and gestures for built-in assistive technologies (AT) to focus or activate content, or to call up AT menus. Examples of user-agent-implemented gestures would be horizontal swiping implemented by browsers for navigating within the page history, or vertical swiping to scroll page content.</p>
<p>While some operating systems may provide ways to define "macros" to replace complex gestures, content authors cannot rely on such a capability because it is not pervasive on all touch-enabled platforms. Moreover, this may work for standard gestures that a user can predefine, but may not work for other author-defined gestures.</p>
<p>This Success Criterion does not require all functionality to be available through pointing devices, but any functionality that is must be available to users who use a pointing device but cannot perform complex gestures. While content authors may provide keyboard commands or other non-pointer mechanisms that perform actions equivalent to complex gestures (see Success Criterion 2.1.1 Keyboard), this is not sufficient to conform to this Success Criterion. That is because some users rely entirely on pointing devices, or find simple pointer inputs much easier than alternatives. For example, a user relying on a head-pointer would find clicking a control to be much more convenient than activating an on-screen keyboard to emulate a keyboard shortcut, and a person who has difficulty memorizing a series of keys (or gestures) may find it much easier to simply click on a labeled control. Therefore, if one or more pointer-based mechanisms are supported, then their benefits should be afforded to users through simple, single-point actions alone.</p>
<p>An exception is made for functionality that is inherently and necessarily based on complex paths or multipoint gestures. For example, entering one's signature may be inherently path-based (although acknowledging something or confirming one's identity need not be).</p>
<p>An exception is made for functionality that is inherently and necessarily based on complex paths or multipoint gestures. For example, drawing letters for handwriting recognition or entering one's signature are inherently path-based (although writing or confirming one's identity need not be).</p>
<p>Gestures that involve dragging in any direction are not in scope for this SC; however, such gestures do require fine motor control. Authors are encouraged to provide non-dragging methods: for instance, a drag and drop operation could also be achieved by selecting an item (with a tap or keyboard interaction) and then selecting its destination as a second step.</p>
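<p>A minimal sketch of such a two-step alternative is shown below, assuming movable items with a class of <code>card</code> inside containers with a class of <code>column</code> (both names are invented for this example). The user first activates an item to select it, then activates the destination to move it there:</p>
<pre><code>// Illustrative sketch: selecting an item and then its destination as a
// non-dragging alternative to drag and drop. Class names are placeholders.
let selectedItem = null;

document.querySelectorAll('.card').forEach((card) => {
  card.addEventListener('click', (event) => {
    // Step 1: a simple tap or click selects (or deselects) the item.
    selectedItem = (selectedItem === card) ? null : card;
    card.classList.toggle('is-selected', selectedItem === card);
    event.stopPropagation(); // keep the column handler from firing too
  });
});

document.querySelectorAll('.column').forEach((column) => {
  column.addEventListener('click', () => {
    if (selectedItem === null) return;
    // Step 2: activating a destination moves the selected item there.
    column.appendChild(selectedItem);
    selectedItem.classList.remove('is-selected');
    selectedItem = null;
  });
});
</code></pre>
<p>In a real widget the selected state would also need to be conveyed to assistive technologies and the targets would need to be focusable controls; the sketch only shows the pointer interaction.</p>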

<section id="benefits">
<h3>Benefits</h3>
@@ -34,9 +35,11 @@ <h2>Examples</h2>
<ul>
<li><p>A web site includes a map view that supports both the pinch gesture to zoom into the map content and drag gestures to move the visible area. User interface controls offer the same operations via [+] and [-] buttons to zoom in and out, and arrow buttons to pan stepwise in all directions.</p></li>
<li><p>A news site has a horizontal content slider with hidden news teasers that can be moved into the viewport via horizontal swiping. It also offers forward and backward arrow buttons for single-point activation to navigate to adjacent slider content.</p></li>
<li><p>A mortgage lending site has a slider control for setting the amount of credit required. The slider can be operated by dragging the thumb, but also by a single tap or click anywhere on the slider groove in order to set the thumb to the chosen position.</p></li>
<li><p>A slider control can be operated by dragging the thumb. Buttons on both sides of the slider increment and decrement the selected value and update the thumb position.</p></li>
<li><p>A color choice slider is operated by horizontally dragging a thumb along a groove. Buttons on both sides of the slider increment and decrement the selected value and update the thumb position; long presses on the buttons offer continuous stepwise adjustment.</p></li>
<li><p>A mortgage lending site offers a slider for setting the amount of credit required, which is operated by horizontally dragging a thumb along a groove. The thumb position can also be set by a single tap or click anywhere on the groove (a sketch of this follows the list).</p></li>
<li><p>An email application uses a short swipe gesture to quickly archive an email, and a longer swipe to delete the email. The user can also activate the email item and then use buttons offering the same functionality.</p></li>
<li><p>A kanban widget with several vertical areas representing states in a defined process allows the user to right- or left-swipe elements to move them to an adjacent silo. The user can also accomplish this by selecting the element with a single tap or click, and then activating an arrow button to move the selected element.</p></li>

</ul>
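<p>For the slider examples above, single tap or click positioning might be implemented roughly as follows for a custom (non-native) slider. The ids <code>groove</code> and <code>thumb</code>, the 0&ndash;100 range, and the assumption that the groove carries <code>role="slider"</code> are all placeholders for this sketch; a native range input provides this behaviour by itself.</p>
<pre><code>// Illustrative sketch: a click or tap anywhere on a custom slider groove
// maps the pointer's horizontal position to a value and moves the thumb.
// Ids, the 0-100 range, and the slider role are placeholders.
const groove = document.getElementById('groove');
const thumb = document.getElementById('thumb');
const MIN = 0;
const MAX = 100;

groove.addEventListener('click', (event) => {
  const rect = groove.getBoundingClientRect();
  // Express the pointer position as a fraction of the groove's width.
  const fraction = Math.min(1, Math.max(0, (event.clientX - rect.left) / rect.width));
  const value = Math.round(MIN + fraction * (MAX - MIN));
  thumb.style.left = (fraction * 100) + '%';
  // Assumes the groove is exposed with role="slider".
  groove.setAttribute('aria-valuenow', String(value));
});
</code></pre>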
</section>
<section id="resources">
@@ -47,9 +50,9 @@ <h2>Resources</h2>
<h2>Techniques for SC 2.5.1 - Pointer Gestures</h2>
<h3>Sufficient Techniques</h3>
<ul>
<li><p>GXXX: Do not rely on path-based gestures</p></li>
<li><p>GXXX: Do not rely on multipoint gestures</p></li>
<li><p>GXXX: Provide controls that do not require complex gestures and perform the same function as complex gestures</p></li>
<li><p>GXXX: <a href="https://raw.githack.com/w3c/wcag/tech-no-reliance-on-path-or-multipoint/techniques/general/no-reliance-on-path-or-multipoint-gestures.html">Not relying on path or multipoint gestures for operation</a></p></li>
<li><p>GXXX: <a href="https://raw.githack.com/w3c/wcag/tech-ensuring-single-pointer/techniques/general/ensuring-single-pointer.html">Providing controls to achieve the same result as path based or multipoint gestures</a></p></li>
<li><p>GXXX: <a href="https://raw.githack.com/w3c/wcag/tech-providing-single-point-control-sliders/techniques/general/providing-single-point-control-slider.html">Providing a control slider that offers single point activation</a></p></li>
<li><p>GXXX: Single-point activation for spatial positioning and manipulation</p></li>
</ul>
