From 3d924544f2d2059f5232cb02ca45ced7d8fd7312 Mon Sep 17 00:00:00 2001
From: michael-n-cooper
The technique is applicable to any technology that supports pointer input (e.g. supporting any or all of the following: mouse pointer, touch on a touch screen or trackpad, stylus input, or laser pointer input).

On touch screen devices, author-supplied path-based and multipoint gestures usually do not work when OS-level assistive technologies (AT) like a built-in screen reader are turned on. AT generally consumes path-based or multipoint gestures, so they would not reach the authored content. For example, a horizontal drag gesture may not move a slider thumb as intended by the author, but instead move the screen reader focus to the next or previous element. Some gestures may work if the user performs "pass-through gestures", but these are often unreliable, as they may depend on the hardware, the operating system, the operating system "skin", operating system settings, or the user agent.

The objective of this technique is to ensure that users who use a path-based drag-and-drop action to move an item from its initial location to a drop target can abort the action after picking up the item. This can be done either by releasing the item outside a drop area, or by moving the item back to its original position in a separate action that undoes the first action. A third option is a dialog asking for confirmation of the action when the item is dropped.
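As a minimal sketch of the abort-on-release behavior described above (all function and property names here are hypothetical, not from the source), the decision made when the user releases a dragged item might look like this:

```javascript
// Decide the outcome when the user releases a dragged item. Releasing
// outside every drop area aborts the action, so an accidental pick-up
// never commits a move.
function hitTest(point, rect) {
  return point.x >= rect.x && point.x <= rect.x + rect.width &&
         point.y >= rect.y && point.y <= rect.y + rect.height;
}

function resolveDrop(releasePoint, dropAreas) {
  const target = dropAreas.find((area) => hitTest(releasePoint, area.rect));
  // No drop area under the pointer: abort and leave the item where it was.
  return target ? { outcome: "drop", areaId: target.id } : { outcome: "abort" };
}

// Usage: one drop area covering (100,100)–(200,200).
const areas = [{ id: "trash", rect: { x: 100, y: 100, width: 100, height: 100 } }];
resolveDrop({ x: 150, y: 150 }, areas); // → { outcome: "drop", areaId: "trash" }
resolveDrop({ x: 10, y: 10 }, areas);   // → { outcome: "abort" }
```

In a real page this function would be called from a pointerup handler; only the decision logic is shown. The confirmation-dialog and undo variants mentioned above would branch on the same result.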
Note: This technique addresses pointer actions where support has been implemented by authors, not gestures provided by the user agent (such as horizontal swiping to move through the browser history or vertical swiping to scroll through page content) or the operating system (e.g., gestures to move between open apps, or to call up contextual menus of assistive technologies when these are enabled).

This technique is applicable to any technology that supports pointer input (e.g. supporting any or all of the following: mouse pointer, touch on a touch screen or trackpad, stylus input, or laser pointer input).
The objective of this technique is to ensure that users who have difficulties performing path-based or multi-point gestures can operate content with single-pointer gestures instead. This technique may involve either not using path-based or multi-point gestures to operate content, or providing alternative controls for pointer input that can be operated by a single pointer (e.g. a tap or tap-and-hold on a touch screen, or a click or long press with a mouse or other indirect pointing device).

Note: This technique addresses gestures where support has been implemented by authors, not gestures provided by the user agent (such as horizontal swiping to move through the browser history or vertical swiping to scroll through page content) or the operating system (e.g., gestures to move between open apps, or to call up contextual menus of assistive technologies when these are enabled).

Use with any technology that enables shortcuts consisting only of one or more character keys.
The objective of this technique is to ensure that character-key shortcuts, which are useful for some users but cause trouble for others, can be disabled or remapped by users who find them troublesome. These users include speech input users and some mobile users. There should be a clear way, such as a dialog box, for users to see where single-key shortcuts are mapped and to disable or remap them. Choices of keys for remapping should include, but don't have to be limited to, modifier keys. A best practice is including the ability to map up to 25 character keys as a shortcut. This allows a speech input user to add a spoken shortcut that would work in any speech program.

A program enables the single-character shortcut “s” to allow the user, whenever the focus is not in a text field, to jump the focus to the search box. There is no mechanism for the user to disable or remap this shortcut. Whenever the user accidentally hits the “s” key, she loses her place because the focus jumps to the search box at the top of the page.
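The remapping dialog described for character-key shortcuts could be backed by logic along these lines (a minimal sketch; the `ShortcutMap` API is hypothetical, not from the source):

```javascript
// A shortcut table the user can disable or remap. Single-character entries
// can be turned off entirely or reassigned to a modifier combination, so a
// stray keypress no longer triggers the action.
class ShortcutMap {
  constructor() {
    this.bindings = new Map(); // shortcut string -> action name
  }
  bind(keys, action) { this.bindings.set(keys, action); }
  remap(oldKeys, newKeys) {
    const action = this.bindings.get(oldKeys);
    if (action === undefined) return false;
    this.bindings.delete(oldKeys);
    this.bindings.set(newKeys, action);
    return true;
  }
  disable(keys) { return this.bindings.delete(keys); }
  actionFor(keys) { return this.bindings.get(keys) ?? null; }
}

const map = new ShortcutMap();
map.bind("s", "focus-search");
map.remap("s", "Ctrl+Alt+s"); // user adds modifiers, so a bare "s" is harmless
map.actionFor("s");           // → null
map.actionFor("Ctrl+Alt+s");  // → "focus-search"
```

In a real page the keydown handler would build the shortcut string from the event's key and modifier state and look it up with `actionFor`; only the mapping logic is shown.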
For content that is draggable, check whether:

The technique is applicable to any technology that supports the display of additional content on pointer hover. Content that is displayed in a popup when users move the pointer over a trigger or focus the trigger with the keyboard needs to stay visible when users move the pointer over the popup content. Low vision users using screen magnification often see only a small part of the screen.
This means that the popup content may not be fully visible in the currently visible section.
Often, the position of content visible in the enlarged section changes based on users' mouse movement, so magnification users may move their mouse over partly visible popup content to read it. Web content should therefore ensure that popup content stays visible when the pointer moves away from the trigger to the (mostly adjacent) popup content.

When focusing a link in an online encyclopedia, a popup with a content preview appears just above or below the link. The user can move the pointer over the popup to move the enlarged section so they can fully read the popup content.
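The keep-visible-while-hovered behavior described above can be sketched as a small state machine (hypothetical names; real implementations also use a short grace timeout for the pointer's travel between trigger and popup):

```javascript
// Popup stays visible while the pointer is over the trigger OR the popup
// itself, and Escape dismisses it without moving focus.
function createPopupState() {
  const s = { overTrigger: false, overPopup: false, dismissed: false };
  return {
    enterTrigger() { s.overTrigger = true; s.dismissed = false; },
    leaveTrigger() { s.overTrigger = false; },
    enterPopup()   { s.overPopup = true; },
    leavePopup()   { s.overPopup = false; },
    pressEscape()  { s.dismissed = true; },
    isVisible()    { return !s.dismissed && (s.overTrigger || s.overPopup); },
  };
}

const popup = createPopupState();
popup.enterTrigger();
popup.isVisible();   // → true
popup.leaveTrigger();
popup.enterPopup();  // moving from the trigger onto the popup keeps it open
popup.isVisible();   // → true
popup.pressEscape();
popup.isVisible();   // → false
```

In a real page these methods would be wired to pointerenter/pointerleave and keydown handlers on the trigger and popup elements; only the visibility decision is shown.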
Technology: General. Type: Technique.

When the orientation of the page is locked, provide a button to allow a user to change the orientation.
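The orientation technique above ("provide a button to change the orientation") might be backed by a helper like this (a hypothetical sketch; the returned angle could feed a CSS transform such as `rotate(90deg)` on the page container):

```javascript
// Cycle the view through 0/90/180/270 degrees each time the user
// activates the "rotate" button, instead of locking the orientation.
function rotateView(currentDegrees) {
  return (currentDegrees + 90) % 360;
}

let angle = 0;
angle = rotateView(angle); // → 90
angle = rotateView(angle); // → 180
angle = rotateView(angle); // → 270
angle = rotateView(angle); // → 0
```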
Popup content should remain visible until it is dismissed, and should be dismissable without moving the focus, so that users can read content covered by the popup. In the encyclopedia example above, pressing the Escape key will dismiss (close) the popup content.
When focusing a link in an online encyclodedia, a popup with a content preview appears just above or below the link. The user can move the pointer over the popup to move the enlarged section so they can fully read the popup content. Pressing the Escape key will dismiss (close) the popup content. When focusing a link, a popup with a content preview appears just above or below that link. The user can move the pointer over the popup to move the enlarged section so they can fully read the popup content when needed. Pressing the Escape key will dismiss (close) the popup content. When focusing a link, a popup with a content preview appears just above or below that link. The user can move the pointer over the popup to move the enlarged section so they can fully read the popup content when needed. Pressing the Escape key will dismiss (close) the popup content. Tests must have a test procedure and expected results. Populate the following sections as appropriate. If a technique has multiple alternative testing approaches, add a new section with class="test" for each one, and put the test-procedure and test-results sections inside that. For popup content that appears on hover or focus over a trigger: The objective of this technique is to ensure that users who have difficulties performing path-based or multi-point gestures can operate content with single pointer gestures instead. This technique may involve either not using path-based or multi-point gestures to operate content, or providing alternative controls for pointer input that can be operated by a single pointer (e.g. a tap or tap-and-hold on a touch screen, or a click or long press with a mouse or other indirect pointing device). The objective of this technique is to ensure that users who have difficulties performing path-based or multi-point gestures can operate content with single pointer. 
This technique may involve either not using path-based or multi-point gestures to operate content, or providing alternative controls for pointer input that can be operated by a single pointer (e.g. a tap or tap-and-hold on a touch screen, or a click or long press with a mouse or other indirect pointing device).
On touch screen devices, author-supplied path-based and multi-point gestures usually do not work when OS-level assistive technologies (AT) like a built-in screen reader are turned on. AT generally consumes path-based or multi-point gestures, so they do not reach the authored content. For example, a horizontal drag gesture may not move a slider thumb as intended by the author, but instead move the screen reader focus to the next or previous element. Some gestures may work if the user operates "pass-through gestures", but these are often unreliable as they may depend on the hardware, operating system, operating system "skin", operating system settings, or user agent.
Note: This technique addresses gestures where support has been implemented by authors, not gestures provided by the user agent (such as horizontal swiping to move through the browser history or vertical swiping to scroll through page content) or the operating system (e.g., gestures to move between open apps, or call up contextual menus of assistive technologies when these are enabled).
For any content that responds to path-based or multi-point pointer gestures:
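The alternative-controls approach described above — offering a single-pointer control alongside the path-based gesture — can be sketched as a small update function for a slider. All names, the range, and the pixel-to-value mapping are illustrative, not from any particular library:

```javascript
// Sketch: a slider value that can be changed by a path-based drag
// OR by a single-pointer "step" button (a simple tap or click).
function updateSlider(value, action, { min = 0, max = 100, step = 10 } = {}) {
  let next = value;
  if (action.type === 'drag') {
    // path-based gesture: pointer moved dx pixels, mapped 1:1 to units here
    next = value + action.dx;
  } else if (action.type === 'step') {
    // single-pointer alternative: a click on a +/- button
    next = value + (action.dir === 'increase' ? step : -step);
  }
  return Math.min(max, Math.max(min, next)); // clamp to the slider range
}

console.log(updateSlider(50, { type: 'drag', dx: 15 }));          // 65
console.log(updateSlider(50, { type: 'step', dir: 'increase' })); // 60
console.log(updateSlider(95, { type: 'step', dir: 'increase' })); // 100 (clamped)
```

Because both input paths converge on the same update function, the single-pointer buttons remain usable even when AT consumes the drag gesture.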
The objective of this technique is to ensure that users who use a path-based drag-and-drop action to move an item from the initial location to a drop target can abort the action after picking up the item. This can be done either by releasing the item outside a drop area, or by moving the item back to its original position in a separate action that undoes the first action.
A third option is to have a step after the item is dropped onto the target, either a dialog asking for confirmation of the action when the item is dropped, or an undo command.
Note: This technique addresses pointer actions where support has been implemented by authors, not gestures provided by the user agent (such as horizontal swiping to move through the browser history or vertical swiping to scroll through page content) or the operating system (e.g., gestures to move between open apps, or call up contextual menus of assistive technologies when these are enabled).
On touch screen devices, author-supplied path-based and multi-point gestures usually do not work when OS-level assistive technologies (AT) like a built-in screen reader are turned on. AT generally consumes path-based or multi-point gestures, so they do not reach the authored content. For example, a horizontal drag gesture may not move a slider thumb as intended by the author, but instead move the screen reader focus to the next or previous element. Some gestures may work if the user operates "pass-through gestures", but these are often unreliable as they may depend on the hardware, operating system, operating system "skin", operating system settings, or user agent.
For content that is draggable, check whether:
For content that is draggable, check whether the drag-and-drop action can be reversed by:
The Technique is applicable to any technology that supports pointer input (e.g. supporting any or all of the following: mouse pointer, touch on a touch screen or trackpad, stylus input, or laser pointer input).
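The abort and undo options described above can be sketched as a small, DOM-free decision function. All names and the state shape are illustrative:

```javascript
// Sketch: drag-and-drop with cancel paths. An item picked up from
// `origin` is only committed when dropped inside a valid target;
// releasing it elsewhere, pressing Escape, or a later undo reverts it.
function dropResult(drag, event) {
  switch (event.type) {
    case 'drop':
      return event.insideTarget
        ? { position: event.target, committed: true }
        : { position: drag.origin, committed: false }; // released outside: revert
    case 'escape': // abort mid-drag without dropping
      return { position: drag.origin, committed: false };
    case 'undo':   // separate action that undoes a completed drop
      return { position: drag.origin, committed: false };
  }
}

const drag = { origin: 'list-A' };
console.log(dropResult(drag, { type: 'drop', insideTarget: true, target: 'list-B' }));
// { position: 'list-B', committed: true }
console.log(dropResult(drag, { type: 'drop', insideTarget: false }));
// { position: 'list-A', committed: false }
```

The key property is that every path other than a deliberate drop inside the target leaves the item at its original position.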
HTML and CSS
Components on a page are often many colors and shades. Historically, focus indicators have been one color, so they are highly visible when some components have focus and poorly visible on other components. For instance, if a focus indicator is dark blue and a button is yellow, the indicator will be clearly visible, but if the button is also blue it will not be. Although it is possible to create individual CSS classes to address the different buttons across a site, this can be time-consuming, and it is easy to miss some types of interactive content. This technique overcomes that problem with one class that can be used across a site: if the focus indicator is two colors, a light color and a dark color, then regardless of the color of the component it is on, it will always have sufficient contrast.
Currently, this can be done by combining the text-shadow property with the outline property on the focus indicator.
The objective of this technique is to create a two-color focus indicator that is always visible and does not require multiple classes, ensuring consistently sufficient contrast for the focus indicator regardless of the color of the component it appears on.
Description
Working example of combining a dark outline and light text shadow
As of this writing there is no one CSS setting that can accomplish this, but there is movement to introduce a new CSS property that could accept
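The two-layer idea can be sketched in CSS as follows: one dark layer and one light layer, so at least one of them contrasts with whatever background the focused component has. Colors and widths here are placeholder values, not a recommended palette:

```css
/* Illustrative sketch of a two-color focus indicator: a dark outline
   plus a light layer drawn with box-shadow between the component and
   the outline. All values are placeholders. */
:focus {
  outline: 2px solid #00008b;    /* dark layer */
  box-shadow: 0 0 0 2px #ffffff; /* light layer */
  outline-offset: 2px;
}
```

Applied globally, a rule like this removes the need to tune a separate indicator class per component color.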
+
+
+ Provide links to external resources that are relevant to users of the technique. This section is optional.
+ Failure due to using an unmodifiable single-key shortcut
+ Metadata
+
+ When to Use
+ Description
+ Examples
+ Example Title
+ Code sample
+ Tests
+ Procedure
+
+
+ Expected Results
+
+
+ Ensuring that multi-point and path-based gesture functionality can be operated with a single pointer
+ Metadata
+
+ When to Use
+ Description
+ Examples
+ Example Title
+ Code sample
+ Tests
+ Procedure
+
+
+ Expected Results
+
+
+ Ensuring that drag-and-drop gestures can be cancelled
+ Metadata
+
+ When to Use
+ Description
+ Examples
+ Example Title
+ Code sample
+ Tests
+ Procedure
+
+
+ Expected Results
+
+
+ Setting the orientation to allow both landscape and portrait
+ Metadata
+
+ When to Use
+ Description
+ Examples
+ Example Title
+ Code sample
+ Tests
+ Procedure
+
+
+ Expected Results
+
+
+ Metadata
When to Use
- Applicability
+ User Agent and Assistive Technology Support Notes
+ Description
- Examples
- Example Title
- Code sample
-
+
+ Resources<
+ Video of canceled drag-and-drop interaction (item released outside drop target) (Youtube)
Tests
Procedure
-
Expected Results
-
Metadata
When to Use
- When to use
+ Applicability
+ User Agent and Assistive Technology Support Notes
+ Description
- Description
+ Examples
- Example Title
- Code sample
-
+
+Tests
+ Procedure
+
+
Tests
- Procedure
-
-
- Expected Results
-
-
- Expected Results
+
+
When to Use
- Description
Examples
Example Title
- No Off for Search Shortcut
+ Code sample
When to use
Applicability
User Agent and Assistive Technology Support Notes
- Description
@@ -28,7 +28,6 @@ Description
Examples
-
Metadata
When to Use
Applicability
- User Agent and Assistive Technology Support Notes
Metadata
When to Use
Applicability
User Agent and Assistive Technology Support Notes
- Description
User Agent and Assistive Technology Support Notes
+ Examples
From d1f580ad6030e121bf22ba7a8f117e8231f83d5c Mon Sep 17 00:00:00 2001
From: Detlev Fischer Examples
Resources<
+ Resources
Video of canceled drag-and-drop interaction (item released outside drop target) (Youtube)
Resources
Tests
Procedure
-
-
+
Expected Results
-
Ensuring that drag-and-drop gestures can be cancelled
+ Ensuring that drag-and-drop actions can be cancelled
Metadata
@@ -19,9 +19,9 @@ Applicability
Description
- User Agent and Assistive Technology Support Notes
Making content on focus or hover hoverable
+ Metadata
+
+
+
+ When to Use
+ Description
+ Examples
+ Example Title
+ Code sample
+
+ Tests
+ Procedure
+
+
+ Expected Results
+
+
+ Resources
+
+
+ Making content on focus or hover hoverable
+ Metadata
+
+
+
+ When to Use
+ Description
+ Examples
+ Example Title
+ Code sample
+
+ Tests
+ Procedure
+
+
+ Expected Results
+
+
+ Resources
+
+
+ Making content on focus or hover hoverable
- Metadata
-
-
-
- When to Use
- Description
- Examples
- Example Title
- Code sample
-
- Tests
- Procedure
-
-
- Expected Results
-
-
- Resources
-
-
- When to Use
Description
- Examples
From 242adf54f15e0392f15611e5ab4af320f41e7076 Mon Sep 17 00:00:00 2001
From: Detlev Fischer Making content on focus or hover hoverable
+ Making content on focus or hover hoverable, dismissible and permanent
Metadata
@@ -22,7 +22,7 @@ Description
Examples
- Example Title
Technique Title
+ Metadata
+ When to Use
+ Description
+ Examples
+ Example Title
+ Code sample
+ Tests
+ Procedure
+
+
+ Expected Results
+
+
+ Technique Title
+ Using a control to allow access to content in different orientations which is otherwise restricted
Metadata
When to Use
- Description
@@ -23,12 +23,9 @@ Description
Examples
- Example Title
- Code sample
- Providing a Button to Change Orientation
+ Tests
Procedure
-
Expected Results
-
When to Use
Description
- Examples
- Example Title
- Content preview popup when keyboard-focusing or hovering over links
+ Code sample
+ <a href="..." onfocus="openTooltip(...)" onmouseover="openTooltip(...)" onblur="closeTooltip(...)" onmouseout="closeTooltip(...)" onkeyup="if (event.key === 'Escape') { closeTooltip(...); event.preventDefault(); }">Foo</a>
When to Use
Description
- Examples
From b45ffc59e00d366f9f193e37d664651b530058d2 Mon Sep 17 00:00:00 2001
From: Detlev Fischer Examples
Content preview popup when keyboard-focusing or hovering over links
Code sample
-
+ <a href="..." onfocus="openTooltip(...)" onmouseover="openTooltip(...)" onblur="closeTooltip(...)" onmouseout="closeTooltip(...)" onkeyup="if (event.key === 'Escape') { closeTooltip(...); event.preventDefault(); }">Foo</a>
+
+
+ <!DOCTYPE html>
+<html lang="en">
+<head>
+ <title>Hover & Focus General Technique Example 1</title>
+ <link title="hover-focus-style" rel="stylesheet" href="hover-focus.css" type="text/css" />
+ <script src="hover-focus.js" defer></script>
+ <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+</head>
+
+<body>
+<p>This is the <a class="a-and-tooltip" id="parent" href="index.html">trigger<span id="popup" role="tooltip">And this additional text gives additional context on the trigger term</span></a>. Text and popup are <strong>in one link (a)</strong> element.</p>
+</body>
+</html>
+
+
Content preview popup when keyboard-focusing or hovering over links
<p>This is the <a class="a-and-tooltip" id="parent" href="index.html">trigger<span id="popup" role="tooltip">And this additional text gives additional context on the trigger term</span></a>. Text and popup are <strong>in one link (a)</strong> element.</p>
</body>
</html>
+
+
+
+
+[role="tooltip"] {
+ position: absolute;
+ left:0;
+ top:1em;
+}
+
+
+[role="tooltip"] {
+ display: none;
+ padding: 0.5em;
+ background:white;
+ color: black;
+ border:solid black 2px;
+ width:10em;
+}
+
+.a-and-tooltip {
+ position: relative;
+}
-
+
-
+
+// trigger and popup inside the same link
+
+var parent = document.getElementById('parent');
+
+
+parent.onmouseover = function() {
+ document.getElementById('popup').style.display = 'block';
+}
+
+parent.onmouseout = function() {
+ document.getElementById('popup').style.display = 'none';
+}
+
+parent.onfocus = function() {
+ document.getElementById('popup').style.display = 'block';
+}
+
+parent.onblur = function() {
+ document.getElementById('popup').style.display = 'none';
+}
+
+// hide when ESC is pressed
+
+document.addEventListener('keydown', (e) => {
+ if ((e.keyCode || e.which) === 27)
+ document.getElementById('popup').style.display = 'none';
+});
+
Examples
Content preview popup when keyboard-focusing or hovering over links
Code sample
-
+
HTML of example 1
+
<!DOCTYPE html>
<html lang="en">
<head>
@@ -43,7 +43,7 @@
Content preview popup when keyboard-focusing or hovering over links
</html>
-
+CSS of example 1
[role="tooltip"] {
display: none;
@@ -65,6 +65,7 @@
Content preview popup when keyboard-focusing or hovering over links
}
+JavaScript of example 1
From 7bf64d3bdd2c1f1feb8df335a55df926e63119fa Mon Sep 17 00:00:00 2001
From: Detlev Fischer
// trigger and popup inside the same link
From 9702739e83d40114ad22b3b8cc3a0e48e313c0a7 Mon Sep 17 00:00:00 2001
From: Detlev Fischer
HTML of example 1
</head>
<body>
-<p>This is the <a class="a-and-tooltip" id="parent" href="index.html">trigger<span id="popup" role="tooltip">And this additional text gives additional context on the trigger term</span></a>. Text and popup are <strong>in one link (a)</strong> element.</p>
+<p>This is the <a class="a-and-tooltip" id="parent" href="index.html">trigger
+<span id="popup" role="tooltip">And this additional text gives additional context on the trigger term</span></a>. Text and popup are <strong>in one link (a)</strong> element.</p>
</body>
</html>
HTML of example 1
<body>
<p>This is the <a class="a-and-tooltip" id="parent" href="index.html">trigger
-<span id="popup" role="tooltip">And this additional text gives additional context on the trigger term</span></a>. Text and popup are <strong>in one link (a)</strong> element.</p>
+<span id="popup" role="tooltip">And this additional text gives additional context on the trigger term</span></a>.
+Text and popup are <strong>in one link (a)</strong> element.</p>
</body>
</html>
From dfd025f6e37bcffd0c4ed0a09cb7d6995b949f8c Mon Sep 17 00:00:00 2001
From: Detlev Fischer Description
Examples
Content preview popup when keyboard-focusing or hovering over links
- Example 1: Content preview popup when keyboard-focusing or hovering over links
+ HTML of example 1
-
<!DOCTYPE html>
From 90b5968f0509aa08edefaafe4aad74e4de0ab028 Mon Sep 17 00:00:00 2001
From: Detlev Fischer
HTML of example 1
</body>
</html>
+
CSS of example 1
-
+
-
[role="tooltip"] {
display: none;
padding: 0.5em;
@@ -66,9 +66,9 @@
CSS of example 1
top:1em;
}
+
JavaScript of example 1
-
+
// trigger and popup inside the same link
From e22b28b861f1324e87071d19675bfb5e3fde8cdd Mon Sep 17 00:00:00 2001
From: Detlev Fischer
Example 1: Content preview popup when keyboard-focusing or hovering over lin
HTML of example 1
From 61ccb7e69ff3920106b6dbe8e77acc7ff22a8ea9 Mon Sep 17 00:00:00 2001
From: Detlev Fischer
- <!DOCTYPE html>
+<!DOCTYPE html>
<html lang="en">
<head>
- <title>Hover & Focus General Technique Example 1</title>
- <link title="hover-focus-style" rel="stylesheet" href="hover-focus.css" type="text/css" />
+ <title>Hover & Focus General Technique Example 1</title>
+ <link title="hover-focus-style" rel="stylesheet" href="hover-focus.css" type="text/css" />
<script src="hover-focus.js" defer></script>
- <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+ <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
</head>
<body>
-<p>This is the <a class="a-and-tooltip" id="parent" href="index.html">trigger
-<span id="popup" role="tooltip">And this additional text gives additional context on the trigger term</span></a>.
-Text and popup are <strong>in one link (a)</strong> element.</p>
+ <p>This is the <a class="a-and-tooltip" id="parent" href="index.html">trigger
+ <span id="popup" role="tooltip">And this additional text gives additional context on the trigger term</span></a>.
+ Text and popup are <strong>in one link (a)</strong> element.</p>
</body>
</html>
JavaScript of example 1
Tests
- Procedure
+
-
Expected Results
-
Description
- Examples
-
-
-
+
+ Tests
Procedure
+
-
Expected Results
-
Ensuring that multi-point and path-based gesture functionality can be operated with a single pointer
@@ -23,14 +23,14 @@ Description
Examples
Ensuring that drag-and-drop actions can be cancelled
@@ -19,21 +19,22 @@ Applicability
Description
- User Agent and Assistive Technology Support Notes
+ User Agent and Assistive Technology Support Notes
Examples
-
-
+
+
Resources
@@ -43,11 +44,11 @@ Resources
Tests
Procedure
-
-
Making content on focus or hover hoverable, dismissible and permanent
From b333023a7defabd611cd3f3b134ac2a16ad35ec7 Mon Sep 17 00:00:00 2001
From: Alastair Campbell Metadata
When to Use
Applicability
Description
Description
User Agent and Assistive Technology Support Notes
Examples
@@ -36,10 +37,12 @@
Examples
Resources
Video of canceled drag-and-drop interaction (item released outside drop target) (Youtube)
Tests
Techniques
Sufficient
-
+ Failure
+ When to Use
Description
- Examples
From 87f4ad8912098d86f24ae34850516a978e0c90ce Mon Sep 17 00:00:00 2001
From: Alastair Campbell Techniques
Sufficient
-
From 763117dd65e2129e99107d235462c15cf07c0893 Mon Sep 17 00:00:00 2001
From: David MacDonald Creating a two color focus indicator to ensure sufficient contrast with all components
+ Metadata
+
+ When to Use
+ Description
+ Currently, this can be done by combining the text-shadow property with the outline property on the focus indicator.
+ Examples
+ Combining a dark outline and light text shadow
+
+ *:focus {
+ box-shadow: 0 0 0px 1px white !important;
+ outline: dotted !important;
+ }
+ *:focus:not(:focus-visible) { outline: none }
+ a:hover { outline: none !important;}
+ a:active { outline: none !important;}
+ Using a single CSS property
+ Tests
+
+ Procedure
+
+
+ Expected Results
+
+
+ Resources
+
+
+ Techniques
Sufficient
The objective of this technique is to ensure that speech input users can operate web content reliably while not adversely affecting other users of assistive technology.
-When speech input users interact with a web page, they usually speak a command followed by the reference to some visible label (like text in a button, a link's text, or the text labelling input fields). For example, they may say "click search" to activate a button labelled Search. When speech recognition software processes speech input and looks for matches, it uses the accessible name of controls. Where there is a mismatch between the text in the label and the text in the accessible name, it can cause issues for the user.
+When speech input users interact with a web page, they usually speak a command followed by the reference to some visible label (such as text beside an input field or inside a button or link). For example, they may say "click search" to activate a button labelled Search. When speech recognition software processes speech input and looks for matches, it uses the accessible name of controls. Where there is a mismatch between the text in the label and the text in the accessible name, it can cause issues for the user.
The simplest way to enable speech input users and meet 2.5.3 Label in Name is to ensure that the accessible name matches the visible text label. The accessible name should be assigned through native elements and semantics where possible. That helps ensure an exact match between the visible label and name. This is covered in the related technique Matching the accessible name to visible label with native semantics.
-Where it is not possible to match the adjacent visible text label through native semantics, authors may use aria-label and aria-labelledby to match the string. Such situations are unusual and tend to occur when there is not a clear 1:1 relationship between User Interface Components and labels. Where users may perceive more inputs than labels, the use of ARIA can be beneficial to ensure the label matches the name.
+Where it is not possible to match the adjacent visible text label through native semantics, authors may use aria-label and aria-labelledby to match the string. Such situations are unusual and tend to occur when there is not a clear 1:1 relationship between user interface components and labels. Especially where users may perceive more inputs than labels, the use of ARIA can be beneficial to ensure the name matches the label.
Determining the appropriate text to designate as the label (and by extension the accessible name) can be confusing when the controls outnumber the potential labels. The remainder of this document outlines scenarios where it is problematic to match the adjacent visible text label through native semantics, but it is still possible to match the accessible name to the label by applying ARIA roles and attributes. The examples are in accordance with the guidance in the 2.5.3 Understanding document.
+The remainder of this document outlines scenarios where it is problematic to match the adjacent visible text label through native semantics, but it is still possible to match the accessible name to the label by applying ARIA roles and attributes. The examples are in accordance with the guidance in the 2.5.3 Understanding document.
Note: These examples are drawn from the web and are not offered as examples of excellent design. They serve as examples of common implementations which pose challenges when trying to meet 2.5.3 as well as incorporate other considerations such as those for 1.3.1 Information and Relationships and 3.3.2 Labels or Instructions.
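The match the technique aims for can be illustrated with a small, hypothetical check (not part of any accessibility API): speech interaction works reliably when the visible label text is contained in — and ideally starts — the accessible name:

```javascript
// Sketch: a simple "Label in Name" (2.5.3) check. The helper name and
// normalization rules are illustrative only.
function labelInName(visibleLabel, accessibleName) {
  const norm = (s) => s.trim().toLowerCase().replace(/\s+/g, ' ');
  return norm(accessibleName).includes(norm(visibleLabel));
}

console.log(labelInName('Search', 'Search'));          // true — exact match
console.log(labelInName('Search', 'Search the site')); // true — label contained in name
console.log(labelInName('Search', 'Find'));            // false — mismatch breaks speech input
```

When the accessible name comes from native semantics (e.g. a `label` element), the match is exact by construction; a check like this matters mainly when `aria-label` or `aria-labelledby` overrides the visible text.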
@@ -40,8 +40,8 @@"Work Phone:" is adjacent to the first of the three inputs, so for the purposes of 2.5.3 it should be considered the first input's label. However, "Work Phone" also describes the group of three inputs. Text in such a scenario is normally referred to as a "group label", but it is important to emphasize that group labels are not part of the accessible name using native HTML semantics. As well, according to the guidance on "Confining the label to adjacent text" in the 2.5.3 Understanding document, the "Work Phone" text should not be considered the label for the other inputs.
Associating the label with the first input can be accomplished through aria-labelledby, which can reference the id of the element containing the string (in the following code example, a span nested inside a parent div).
The parent div can be used to nest the group of related inputs to meet 1.3.1 Information and Relationships, with the ARIA role of group assigned to the div and aria-labelledby providing the group label (by pointing to the same nested span).
To meet 3.3.2 ("Labels or instructions are provided when content requires user input."), each of the fields can be given a unique name through the use of the title attribute. Where an accessible name already exists (for the first input), the title value becomes part of the accessible description. In the absence of another candidate for accessible name, the title value is used in the calculation for the name (for the second and third inputs).
<div role="group" aria-labelledby="groupLabel">
<span id="groupLabel">Work Phone:</span>
@@ -77,9 +77,11 @@ Radio buttons laid out in a matrix
When considering this complex component, it is important to remember that the purpose of 2.5.3 Label in Name is to enable speech input. For such users, the text at the start of each row ("The interaction with the sales staff", "Your experience at the register", etc.) should be treated as the labels for the first radio buttons to meet 2.5.3. The column headers could also be assigned as names for the corresponding radio buttons in the first row, given their proximity.
- To meet 2.5.3, it is not necessary to assign every radio button the table header text as its accessible name. The text is not adjacent to most of them and may not offer a significant improvement in user experience for speech recognition users. For instance, in this example each row logically represents a radio button group – for instance a user should only be able to give one rating to the interaction of the sales staff. As such, if the speech recognition user can navigate to the first choice in that group via the row header label, the user can use the keyboard API to easily navigate between the choices.
+ To meet 2.5.3, it is not necessary to assign every radio button the table header text as its accessible name. The text is not adjacent to most of them and may not offer a significant improvement in user experience for speech recognition users.
+ For instance, in this example each row logically represents a radio button group – a user should only be able to give one rating to the interaction of the sales staff. As such, if the speech recognition user can navigate to the first choice in that group via the row header label, the user can use the keyboard API to easily navigate between the choices.
From a strictly programmatic perspective, authors may be tempted to treat the column headers ("Very satisfied", "Somewhat satisfied", etc) as the labels for each of the radio buttons. This is consistent with how the simple radio button group was done. However, especially in a survey which contains a number of similar questions, the result does not necessarily improve the speech interaction since there may be dozens of "Very satisfied" radio buttons.
- Many authors will assign the table header values to each input. This may provide better context for screen reader users or potentially more navigation options for a user of speech recognition. However, it should be noted that popular screen readers already extract the table's scope information to provide context to users, and wordy table headers will likely decrease the experience for users rather than enhance it. For instance, relying on only scope attributes of the table headers, some screen readers will announce each row header as a user traverses to new rows, but (depending on the screen reader's configuration) will not announce the row header while the user navigates between cells on the same row. This is an established and efficient means of navigating by screen reader. In contrast, concatenating the row and column headers for every cell into its accessible name will result in the screen reader user hearing that combination announced before the radio button's state for each radio button, for example "The interaction with the sales staff Neither satisfied nor dissatisfied, not checked". Depending on the label length, that may result in verbose output.
+ Many authors will assign the table header values to each input. This is unlikely to improve the interaction for speech-input users, and it has ramifications for screen reader user experience. Giving the header values to each radio button may create a more reliable experience for some users. However, it should be noted that popular screen readers already extract the table's scope information to provide context to users, and wordy table headers will likely decrease the experience for users rather than enhance it. For instance, relying on only scope attributes of the table headers, some screen readers will announce each row header as a user traverses to new rows, but (depending on the screen reader's configuration) will not announce the row header while the user navigates between cells on the same row. This is an established and efficient means of navigating by screen reader since it gives good context with reduced verbosity. In contrast, concatenating the row and column headers for every cell into its accessible name will result in the screen reader user hearing that combination announced before the radio button's state for each radio button, for example "The interaction with the sales staff Neither satisfied nor dissatisfied, not checked". Depending on the label length, that may result in a less welcome interaction.
+ Regardless of decisions on what to do for each radio button, aria-labelledby will typically be used to provide the accessible name (since scope attributes cannot).
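As a sketch of that pattern (the ids here are illustrative, echoing the code sample that appears later in this technique), a radio button can be named from its row and column headers:

```html
<!-- Row and column headers double as naming sources -->
<th scope="col" id="VS">Very satisfied</th>
...
<tr>
  <th scope="row" id="interaction">The interaction with the sales staff</th>
  <!-- Accessible name: "The interaction with the sales staff Very satisfied" -->
  <td><input type="radio" name="interaction" aria-labelledby="interaction VS"></td>
</tr>
```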
To summarize, this matrix is a good example of the need to distinguish between a visible label, as discussed in 2.5.3, and notions of a programmatic label (and other programmatic relationships) covered in 1.3.1 Information and Relationships.
@@ -142,7 +144,7 @@
Inputs laid out in a matrix
As with the prior example, some authors may elect to use the column and row headers as the accessible names for all the cells. User testing will help determine the best implementation for the target audience. Here are some considerations for matrixes:
- The table header text length. The short text strings in this example could provide context without adding too much verbosity (for screen reader users). Longer strings may be more intrusive.
- - The size of the matrix. Low-vision users who use magnification may have difficulty orienting themselves in large tables (since the text at the beginning of the row and/or column may be out of the viewport). Concatenated header text used as a title value for each cell would provide context as a tool tip.
+ - The size of the matrix. Low-vision users who use magnification may have difficulty orienting themselves in large tables (since the text at the beginning of the row and/or column may be out of the viewport). Concatenated header text, supplied as a title value or referenced via aria-labelledby for each cell, would provide context (in the case of title, also as a tool tip).
- The nature of the table and position of the inputs. Sometimes the text appearing in the first row or first column of the table is not intended as context for a particular input. The table may not have column or row headers; the value in the first cells may not be meaningful.
- The value of information in neighbouring cells. If a cell is not neighboring a row or column header, then any text in adjacent cells is likely the text closest in proximity. However, whether text in adjacent cells could serve as a label for the cell is entirely dependent on context. Generally, the text content of cells not serving a header function should be disregarded as potential labels for neighbouring cells, particularly if the cell is itself a user input.
From e56e43437ae50365ad85e2c68e6fc18c94ac0928 Mon Sep 17 00:00:00 2001
From: Mike Gower
Date: Mon, 22 Apr 2019 11:03:54 -0700
Subject: [PATCH 112/402] Update G210.html
trimming down examples used in the newly extracted aria technique
---
techniques/general/G210.html | 139 +++--------------------------------
1 file changed, 10 insertions(+), 129 deletions(-)
diff --git a/techniques/general/G210.html b/techniques/general/G210.html
index c615aec020..3d5a4c9591 100644
--- a/techniques/general/G210.html
+++ b/techniques/general/G210.html
@@ -19,13 +19,14 @@ When to Use
Description
The objective of this technique is to ensure that speech input users can operate web content reliably while not adversely affecting other users of assistive technology.
- When speech input users interact with a web page, they usually speak a command followed by the reference to some visible label (like text in a button, a link's text, or the text labelling input fields). For example, they may say "click search" to activate a button labelled Search. When speech recognition software processes speech input and looks for matches, it uses the accessible name of controls. Where there is a mismatch between the text in the label and the text in the accessible name, it can cause issues for the user. The simplest way to enable speech input users and meet 2.5.3 Label in Name is to ensure that the accessible name matches the visible text label.
+ When speech input users interact with a web page, they usually speak a command followed by the reference to some visible label (such as text beside an input field or inside a button or link). For example, they may say "click search" to activate a button labelled Search. When speech recognition software processes speech input and looks for matches, it uses the accessible name of controls. Where there is a mismatch between the text in the label and the text in the accessible name, it can cause issues for the user. The simplest way to enable speech input users and meet 2.5.3 Label in Name is to ensure that the accessible name matches the visible text label.
+
- Examples: Matching through native semantics
- Mapping a visible label to the accessible name is achieved in many technologies by meeting 1.3.1 Information and Relationships through the proper use of native semantics. Many controls derive accessible names by correct nesting of elements, while other elements have specific attributes which are a valid means of providing or referencing an accessible name.
+ Examples
+
+Mapping a visible label to the accessible name is achieved in many technologies by meeting 1.3.1 Information and Relationships through the proper use of native semantics. Many controls derive accessible names by correct nesting of elements, while other elements have specific attributes which are a valid means of providing or referencing an accessible name.
The accessible name should be assigned through native elements and semantics where possible. That helps ensure an exact match between the visible label and name.
-
Anchor text provides both the link's label and its accessible name
Using conventional HTML, the text between the anchor element's tags provides both the link's visible text and the accessible name "Code of conduct":
@@ -51,11 +52,8 @@ The button text provides the accessible name
Non-working sample of button
- Examples: Matching through use of ARIA roles and attributes
- Where it is not possible to match the adjacent visible text label through native semantics, authors may use aria-label and aria-labelledby to match the string. Such situations are unusual and tend to occur when there is not a clear 1:1 relationship between User Interface Components and labels. Sometimes users may perceive a disparity; either there seem to be more labels than inputs or more inputs than labels. Each scenario can cause confusion about how to properly meet 2.5.3.
- The remainder of this document outlines scenarios where this may be the case, but where it is still possible to match the accessible name to the label by applying the guidance in the 2.5.3 Understanding document.
- Note: These examples are drawn from the web and are not offered as examples of excellent design. They serve as examples of common implementations which pose challenges when trying to meet 2.5.3 as well as incorporate other considerations such as those for 1.3.1 Information and Relationships and 3.3.2 Labels or Instructions.
- Choosing between multiple potential labels for an input
+
+ Determining the appropriate label
Sometimes more than one text string will be positioned in the vicinity of a control that could be considered a candidate for its label. For example, a set of inputs that each have their own labels may also be preceded by a heading, an instruction or a group label (such as an HTML legend/fieldset or an ARIA group or radiogroup). Note that the term "group label" means something different than "label", both programmatically and in regard to 2.5.3 Label in Name.
The Understanding 2.5.3 Label in Name document recommends that only the text string adjacent to or in close proximity to an input should be treated as the label when assessing a control's label for the purposes of meeting 2.5.3 (see the section "Identifying label text for components"). There are both practical and technical reasons for restricting the designation of an input's label in this way. The technical reasons are discussed in the Understanding document's section called Accessible Name and Description Computation specification.
Here are some examples which follow the 'adjacent text' guidance, along with rationales.
@@ -120,44 +118,7 @@ Stacked Labels
- Deciding which elements are labeled if there are fewer labels than controls
- Determining the appropriate text to designate as the label (and by extension the accessible name) can also be confusing when the controls outnumber the potential labels.
-
- A single label for multiple controls: telephone number
- A series of inputs may only have a single group label between them. A common occurrence of this is a telephone entry that has multiple inputs.
-
- "Work Phone " is adjacent to the first of the three inputs, so for the purposes of 2.5.3 it should be considered the first input's label. However, "Work Phone" also describes the group of three inputs. Text in such a scenario is normally referred to as a "group label", but it is important to emphasize that group labels are not part of the accessible name using native HTML semantics. As well, according to the guidance on "Confining the label to adjacent text" in the 2.5.3 Understanding document, the "Work Phone" text should not be considered the label for the other inputs.
- Associating the label with the first input can be accomplished through aria-labelledby, which can reference the id of the element containing the string (in the following code example, a span nested inside a parent div).
- The same parent div can be used to nest the group of related inputs to meet 1.3.1 Information and Relationships, with aria-labelledby providing the group label by pointing to the same nested span.
- To meet 3.3.2 ("Labels or instructions are provided when content requires user input."), each of the fields can be given a unique name through the use of the title attribute. Where an accessible name already exists (for the first input), the title value becomes part of the accessible description. In the absence of another candidate for accessible name, the title value is used in the calculation for the name (for the second and third inputs).
-
-<div role="group" aria-labelledby="groupLabel">
- <span id="groupLabel">Work Phone:</span>
- <input type="number" aria-labelledby="groupLabel" title="Area Code" min="3" max="3" />
- <input type="number" title="Prefix" min="3" max="3" />
- <input type="number" title="Line Number" min="4" max="4" />
-</div>
-
-
- There are several advantages to constructing the inputs this way, some of which go beyond the requirements of 2.5.3 but are relevant when considering the most appropriate techniques:
-
- - Speech-input users who say "move to work phone" will be moved to the first of the three fields, which is where they would typically wish to begin. If they wish to reposition to the other inputs, it is simple for them to issue a "Press tab" command.
- - Sighted users who wish to have more context can use hover to reveal the title value as a tool tip. This can be beneficial for users with low vision or with some cognitive disabilities.
- - Since the title attribute is used across all three inputs, the behaviour of the inputs is consistent for many users.
- - The computation for accessible name and description ensures that the value of title is always available programmatically, either as the accessible name or description. Thus screen readers will always have access to this information.
-
-
- There are a couple of disadvantages to this construction:
-
- - Screen reader users will hear "Work Phone:" announced twice – as the group name and as the label of the first field. (This is an unfortunate outcome of the new 2.5.3 requirement and will occur for at least one input in a set of inputs with a group label in all implementations.)
- - The keyboard user cannot see a label for the second and third inputs since most browsers do not provide a mechanism to display title via keyboard. (Unless the telephone inputs are redesigned to provide persistently visible labels, this will remain a challenge.)
-
-
-
-
+
Range of inputs with few labels
A less common disparity between labels and inputs can occur when a group of radio buttons is set up to elicit a choice across a range. The labels may only be located at each end of the range or may be interspersed at various points in the range.
@@ -166,7 +127,7 @@ Range of inputs with few labels
Figure 6 Line of 5 radio buttons with Hated it and Loved it labels at each end
- The two labels, "Hated it" and "Loved it", are adjacent to the first and last radio buttons, and should be their accessible names. "Rate your response" is the text describing the whole widget and can be associated as the group label. The three middle radio buttons do not have visible labels. In the code example they are given title attributes of "Disliked", "So-so" and "Liked" in order to meet 3.3.2 Labels or Instructions.
+ The two labels, "Hated it" and "Loved it", are adjacent to the first and last radio buttons, and should be their accessible names. "Rate your response" is the text describing the whole widget and can be associated as the group label (here using legend). The three middle radio buttons do not have visible labels. In the code example they are given title attributes of "Disliked", "So-so" and "Liked" in order to meet 3.3.2 Labels or Instructions.
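A sketch of the construction being described (the full code example is elided here; the ids and attribute values are assumptions consistent with the prose):

```html
<fieldset>
  <legend>Rate your response</legend>
  <!-- First and last radio buttons take their adjacent visible text as labels -->
  <input type="radio" name="rating" id="r1" value="1">
  <label for="r1">Hated it</label>
  <!-- Middle buttons have no visible labels; title meets 3.3.2 -->
  <input type="radio" name="rating" id="r2" value="2" title="Disliked">
  <input type="radio" name="rating" id="r3" value="3" title="So-so">
  <input type="radio" name="rating" id="r4" value="4" title="Liked">
  <input type="radio" name="rating" id="r5" value="5">
  <label for="r5">Loved it</label>
</fieldset>
```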
@@ -183,87 +144,7 @@ Range of inputs with few labels
-
- Radio buttons laid out in a matrix
- A more complex construction involves inputs that are laid out in a grid, with the row and column headings serving as the only possible "labels". This is a common construction in a survey.
-
-
- When considering this complex component, it is important to remember that the purpose of 2.5.3 Label in Name is to enable speech input. For such users, the text at the start of each row ("The interaction with the sales staff", "Your experience at the register", etc.) should be treated as the labels for the first radio buttons to meet 2.5.3. The column headers could also be assigned as names for the corresponding radio buttons in the first row, given their proximity.
- To meet 2.5.3, it is not necessary to assign every radio button the table header text as its accessible name. The text is not adjacent to most of them and may not offer a significant improvement in user experience for speech recognition users. For instance, in this example each row logically represents a radio button group – for instance a user should only be able to give one rating to the interaction of the sales staff. As such, if the speech recognition user can navigate to the first choice in that group via the row header label, the user can use the keyboard API to easily navigate between the choices.
- From a strictly programmatic perspective, authors may be tempted to treat the column headers ("Very satisfied", "Somewhat satisfied", etc) as the labels for each of the radio buttons. This is consistent with how the simple radio button group was done. However, especially in a survey which contains a number of similar questions, the result does not necessarily improve the speech interaction since there may be dozens of "Very satisfied" radio buttons.
- Many authors will assign the table header values to each input. This may provide better context for screen reader users or potentially more navigation options for a user of speech recognition. However, it should be noted that popular screen readers already extract the table's scope information to provide context to users, and wordy table headers will likely decrease the experience for users rather than enhance it. For instance, relying on only scope attributes of the table headers, some screen readers will announce each row header as a user traverses to new rows, but (depending on the screen reader's configuration) will not announce the row header while the user navigates between cells on the same row. This is an established and efficient means of navigating by screen reader. In contrast, concatenating the row and column headers for every cell into its accessible name will result in the screen reader user hearing that combination announced before the radio button's state for each radio button, for example "The interaction with the sales staff Neither satisfied nor dissatisfied, not checked". Depending on the label length, that may result in verbose output.
- To summarize, this matrix is a good example of the need to distinguish between a visible label, as discussed in 2.5.3, and notions of a programmatic label (and other programmatic relationships) covered in 1.3.1 Information and Relationships.
-
-
-
- <table>
- <caption>2. How satisfied or dissatisfied are you with each of the following?</caption>
- <thead>
- <tr>
- <th scope="col" title="Feature"></th>
- <th scope="col" id="VS">Very satisfied</th>
- <th scope="col" id="SS">Somewhat satisfied</th>
- <th scope="col" id="NSD">Neither satisfied nor dissatisfied</th>
- <th scope="col" id="SD">Somewhat dissatisfied</th>
- <th scope="col" id="VD">Very dissatisfied</th>
- </tr>
- </thead>
- <tbody>
- <tr>
- <th scope="row" id="interaction">The interaction with the sales staff</th>
- <td><input type="radio" name="interaction" id="VS1" value="VS1" aria-labelledby="interaction VS"></td>
- <td><input type="radio" name="interaction" id="SS1" value="SS1" title="Somewhat satisfied"></td>
- <td><input type="radio" name="interaction" id="NSD1" value="NSD1" title="Neither satisfied nor dissatisfied"></td>
- <td><input type="radio" name="interaction" id="SD1" value="SD1" title="Somewhat dissatisfied"></td>
- <td><input type="radio" name="interaction" id="VD1" value="VD1" title="Very dissatisfied"></td>
- </tr>
-...
-<tr>
- <th scope="row" id="price">The price of the products</th>
- <td><input type="radio" name="price" id="VS5" value="VS5" aria-labelledby="price"></td>
- <td><input type="radio" name="price" id="SS5" value="SS5"></td>
- <td><input type="radio" name="price" id="NSD5" value="NSD5"></td>
- <td><input type="radio" name="price" id="SD5" value="SD5"></td>
- <td><input type="radio" name="price" id="VD5" value="VD5"></td>
- </tr>
- <tr>
- <th scope="row" id="sizes">The sizes available at the store</th>
- <td><input type="radio" name="sizes" id="VS6" value="VS6" aria-labelledby="sizes"></td>
- <td><input type="radio" name="sizes" id="SS6" value="SS6"></td>
- <td><input type="radio" name="sizes" id="NSD6" value="NSD6"></td>
- <td><input type="radio" name="sizes" id="SD6" value="SD6"></td>
- <td><input type="radio" name="sizes" id="VD6" value="VD6"></td>
- </tr>
-</tbody>
-</table>
-
-
-
-
-
-
- Inputs laid out in a matrix
- Other inputs besides radio buttons may be laid out in a matrix. In Figure 10, the cells in a grid are primarily text inputs, with column and row headers.
-
-
- The same basic proximity guidance applies, as in the radio button matrix – since the first column text is adjacent to the second column input, it should be considered its visible label; the top row may also be considered to supply labels for the second row's cells. The correct names for the second column/row cells will be sufficient to allow a speech-input user to reposition to the first editable cell in each row of the matrix by voice command.
- As with the prior example, some authors may elect to use the column and row headers as the accessible names for all the cells. User testing will help determine the best implementation for the target audience. Here are some considerations for matrixes:
-
- - The table header text length. The short text strings in this example could provide context without adding too much verbosity (for screen reader users). Longer strings may be more intrusive.
- - The size of the matrix. Low-vision users who use magnification may have difficulty orienting themselves in large tables (since the text at the beginning of the row and/or column may be out of the viewport). Concatenated header text used as a title value for each cell would provide context as a tool tip.
- - The nature of the table and position of the inputs. Sometimes the text appearing in the first row or first column of the table is not intended as context for a particular input. The table may not have column or row headers; the value in the first cells may not be meaningful.
- - The value of information in neighbouring cells. If a cell is not neighboring a row or column header, then any text in adjacent cells is likely the text closest in proximity. However, whether text in adjacent cells could serve as a label for the cell is entirely dependent on context. Generally, the text content of cells not serving a header function should be disregarded as potential labels for neighbouring cells, particularly if the cell is itself a user input.
-
-
- Such considerations will help to determine if there is an appropriate label for any given actionable cell in a grid.
-
+
The visible text of the button element matches the end of the accessible name, which is preceded by hidden katakana characters that provide a phonetic spelling for the kanji characters. The addition of the katakana provides a phonetic pronunciation which the speech recognition application can use to match the spoken phrase.
<button><span class="accessibly-hidden">メールを</span>送信する</button>
Rendered example:
This success criteria does not require that controls have a visual boundary indicating the hit area, but if the visual indicator of the control is the only way to identify the control, then that indicator must have sufficient contrast. If text (or an icon) within a button is visible and there is no visual indication of the hit area then the Success Criterion is passed. If a button with text also has a colored border, since the border does not provide the only indication there is no contrast requirement beyond the text contrast (1.4.3 Contrast (Minimum)). Note that for people with cognitive disabilities it is recommended to delineate the boundary of controls to aid in the recognition of controls and therefore the completion of activities.
+This Success Criterion does not require that controls have a visual boundary indicating the hit area, but if the visual indicator of the control is the only way to identify the control, then that indicator must have sufficient contrast. If text (or an icon) within a button or placeholder text inside a text input is visible and there is no visual indication of the hit area then the Success Criterion is passed. If a button with text also has a colored border, since the border does not provide the only indication there is no contrast requirement beyond the text contrast (1.4.3 Contrast (Minimum)). Note that for people with cognitive disabilities it is recommended to delineate the boundary of controls to aid in the recognition of controls and therefore the completion of activities.
The focus of the Reflow Success Criterion is to enable users to zoom in without having to scroll in two directions. Success Criterion 1.4.4 Resize Text also applies, so it should be possible to increase the size of all text to at least 200% while simultaneously meeting the reflow requirement. If text is reduced in size when people zoom in (or for small-screen usage), it should still be possible to get to 200% enlargement. For example, if text is set at 20px when the window is 1280px wide, at 200% zoom it should be at least 20px (so 200% of the default size), but at 400% zoom it could be set to 10px (therefore still 200% of the default 100% view). It is not required to achieve 200% enlargement at every breakpoint, but it should be possible to get 200% enlargement in some way.
+The focus of the Reflow Success Criterion is to enable users to zoom in without having to scroll in two directions. Success Criterion 1.4.4 Resize Text also applies, so it should be possible to increase the size of all text to at least 200% while simultaneously meeting the reflow requirement. If text is reduced in size when people zoom in (or for small-screen usage), it should still be possible to get to 200% enlargement.
+For example, if at the default browser setting of 100% zoom, text is set at 20px when the window is 1280px wide, the same 20px at 200% zoom would mean a text size of 200%. At 400% zoom, the author may decide to set the text size to 10px: the text would now still be enlarged to 200% compared to the default browser setting of 100% zoom. It is not required to achieve 200% text enlargement at every breakpoint, but it should be possible to get 200% text enlargement in some way.
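A minimal sketch of the sizing described above (the breakpoint value is an assumption; a 1280px window viewed at 400% zoom yields a 320px-wide layout viewport):

```css
/* Default layout: 20px text when the window is wide */
body { font-size: 20px; }

/* Narrow layout, as produced by high zoom or small screens:
   10px here still renders at 200% of the default size under 400% zoom */
@media (max-width: 320px) {
  body { font-size: 10px; }
}
```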
The intent of this success criterion is to support personalization and preferences in order for more people to use the web, communicate, and interact with society.
Familiar terms and symbols are key for users with a limited vocabulary to being able to use the web. However, what is familiar for some users may not be for other users so programmatically associating user-interface components and icons enables people to load a set of symbols that is appropriate for them.
-This success criteria requires the author to add the context, propose, and meaning of symbols, regions, buttons, links, and fields so that user agents knows what they do and can adapt them to make them understandable for the user. It is achieved by adding semantics or metadata that provides this context. It is similar to adding role information (as required by 4.2.1) but instead of providing information about what the UI component is (such as an image) it provides information about what the component represents (such as a link to the home page).
+This Success Criterion requires the author to add the context, purpose, and meaning of symbols, regions, buttons, links, and fields so that user agents know what they do and can adapt them to make them understandable for the user. It is achieved by adding semantics or metadata that provides this context. It is similar to adding role information (as required by 4.1.2 Name, Role, Value) but instead of providing information about what the UI component is (such as an image) it provides information about what the component represents (such as a link to the home page).
Identifying regions of the page allows people to remove or highlight regions with their user agent.
From 7d164802b6b4b239d9070ac978c8dafaebd219ba Mon Sep 17 00:00:00 2001
From: Andrew Kirkpatrick
general
+failure
+All technologies that allow the viewing of content to be restricted to one orientation.
+The objective of this technique is to describe how restricting the view of content to a single orientation is a failure to allow content to be viewed in multiple orientations. When content is presented with a restriction to a specific orientation, users must orient their devices to view the content in the orientation that the author imposed. Some users have their devices mounted in a fixed orientation (e.g. on the arm of a power wheelchair), and if the content cannot be viewed in that orientation it creates problems for the user.
+If a specific orientation is determined to be essential for the operation and viewing of the content, then this failure technique will not apply.
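A hypothetical sketch of the kind of CSS that would trigger this failure, forcing content into a single orientation regardless of how the device is held or mounted:

```css
/* Failure sketch: whenever the device reports portrait orientation,
   the page is rotated so it can only be read in landscape */
@media screen and (orientation: portrait) {
  body {
    transform: rotate(90deg);
    transform-origin: 50% 50%;
  }
}
```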
+The intent of this Success Criterion is to reduce accidental activation of keyboard shortcuts. Character key shortcuts work well for many keyboard users, but are inappropriate and frustrating for speech input users — whose means of input is strings of letters — and for keyboard users who are prone to accidentally hit keys. To rectify this issue, authors need to allow users to turn off or reconfigure shortcuts that are made up of only one or more character keys.
-Note that this success criterion doesn’t affect components such as listboxes and drop-down menus. Although these components contain values (words) that may be selected by one or more character keys, the shortcuts are only active when the components have focus. Other components such as menus may be accessed or opened with a single non-character shortcut (e.g., Alt or Alt+F) before pressing a single character key to select an item. This makes the full path to invoking a menu a two-step shortcut that includes a non-printable key. Accesskeys are also not affected because they include modifier keys.
+Note that this success criterion doesn't affect components such as listboxes and drop-down menus. Although these components contain values (words) that may be selected by one or more character keys, the shortcuts are only active when the components have focus. Other components such as menus may be accessed or opened with a single non-character shortcut (e.g., Alt or Alt+F) before pressing a single character key to select an item. This makes the full path to invoking a menu a two-step shortcut that includes a non-printable key. Accesskeys are also not affected because they include modifier keys.
Speech Input users generally work in a single mode where they can use a mix of dictation and speech commands. This works well because the user knows to pause before and after commands, and commands are usually at least two words long. So, for instance, a user might say a bit of dictation, such as "the small boat", then pause, and say a command to delete that dictation, such as "Delete Line". In contrast, if the user were to say the two phrases together without a pause, the whole phrase would come out as dictation (i.e., "the small boat delete line"). Although speech input programs often include modes that listen only for dictation or only for commands, most speech users use the all-encompassing mode all the time because it is a much more efficient workflow. It could decrease command efficiency significantly if users were to change to command mode and back before and after issuing each command.
Speech users can also speak most keyboard commands (e.g., "press Control Foxtrot") without any problems. If the website or app is keyboard enabled, the speech user can also write a native speech macro that calls the keyboard command, such as "This Print" to carry out Ctrl+P.
From 06b441c7b3268566512d169af8f790075a18d70b Mon Sep 17 00:00:00 2001
From: patrickhlauke
F97
From f016542fa6d0c135224ad9a573c95e3ef5d383c8 Mon Sep 17 00:00:00 2001
From: patrickhlauke
Web content containing interactive widgets such as content carousels, with visible buttons to operate the widget (such as previous/next buttons, or a visible scrollbar/slider). These visible controls are hidden/omitted when a touchscreen is detected, under the assumption that users will simply use touch gestures to operate the widgets.
+Web content containing interactive widgets such as content carousels, with visible buttons to operate the widget (such as previous/next buttons, or a visible scrollbar/slider). These visible controls are hidden/omitted when a touchscreen is detected, under the assumption that users will simply use touch gestures to operate the widgets, and no other alternatives are then provided for keyboard or mouse users.
/* using CSS Media Queries 4 Interaction Media Features
to hide particular elements in the page (such as a container
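As an illustration of the pattern this failure describes (the selector and class name are assumptions, not from the elided sample), Interaction Media Features might be used like this:

```css
/* Failure sketch: hide the carousel's previous/next buttons when the
   primary pointer is coarse (i.e. a touchscreen), leaving keyboard and
   mouse users on touch-capable devices with no visible controls */
@media (pointer: coarse) {
  .carousel-controls {
    display: none;
  }
}
```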
@@ -108,8 +108,8 @@ Tests
Procedure
- Open the content on a device with a touchscreen and at least one additional input modality - this includes touch-enabled laptops and touchscreen devices (smartphones or tablets) with a paired external keyboard and mouse.
-
- Check that all interactive controls (such as links, form inputs, or complex custom widgets) are still visible/present (compared to the same content when viewed on a device without a touchscreen)
- Check that all interactive controls can be operated using not only the touchscreen, but also the additional input mechanisms (keyboard and mouse)
+ - If the presence of the touchscreen caused interactive controls not to be displayed (compared to the same content when viewed on a device without a touchscreen), check that there are alternative controls/ways for users of other additional input mechanisms to operate the content
The intent of this Success Criterion is to reduce accidental activation of keyboard shortcuts. Character key shortcuts work well for many keyboard users, but are inappropriate and frustrating for speech input users — whose means of input is strings of letters — and for keyboard users who are prone to accidentally hit keys.
-To rectify this issue, authors need to allow users to turn off or reconfigure shortcuts that are made up of only one or more character keys.
+To rectify this issue, authors need to allow users to turn off or reconfigure shortcuts that are made up of only character keys.
Note that this success criterion doesn't affect components such as listboxes and drop-down menus. Although these components contain values (words) that may be selected by one or more character keys, the shortcuts are only active when the components have focus. Other components such as menus may be accessed or opened with a single non-character shortcut (e.g., Alt or Alt+F) before pressing a single character key to select an item. This makes the full path to invoking a menu a two-step shortcut that includes a non-printable key. Accesskeys are also not affected because they include modifier keys.
Speech users can also speak most keyboard commands (e.g., "press Control Foxtrot") without any problems. If the website or app is keyboard enabled, the speech user can also write a native speech macro that calls the keyboard command, such as "This Print" to carry out Ctrl+P.
Single-key shortcuts are the exception. While using single letter keys as controls might be appropriate and efficient for many keyboard users, single-key shortcuts are disastrous for speech users. The reason for this is that when only a single key is used to trip a command, a spoken word can become a barrage of single-key commands if the cursor focus happens to be in the wrong place.
For example, a speech-input user named Kim has her cursor focus in the main window of a web mail application that uses common keyboard shortcuts to navigate ("k"), archive ("y") and mute messages ("m"). A coworker named Mike enters her office and says "Hey Kim" and her microphone picks that up. The Y of "hey" archives the current message. K in "Kim" moves down one conversation and M mutes a message or thread. And, if Kim looks up and says "Hey Mike" without remembering to turn off the microphone, the same three things happen in a different sequence.
A user interacting with a webpage or web app that doesn't use single-character shortcuts doesn't have this problem. Inadvertent strings of characters from the speech application are not interpreted as shortcuts if a modifier key is required. A speech user filling in a text input form may find that a phrase that is accidentally picked up by the speech microphone results in stray text being entered into the field, but that is easily seen and undone. The Resources section of this page contains links to videos demonstrating these types of issues.
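One way of allowing users to turn off or remap character-key shortcuts can be sketched as follows. This is a hypothetical helper, not a library API; names like createShortcuts are assumptions for illustration only:

```javascript
// Hypothetical sketch (createShortcuts is not a library API): the
// app's single-character shortcuts live in a map that the user can
// turn off entirely or rebind to other keys, as SC 2.1.4 requires.
function createShortcuts(bindings) {
  let enabled = true;
  const map = new Map(Object.entries(bindings)); // key -> action name

  return {
    setEnabled(on) { enabled = on; },   // user-level "turn off" switch
    remap(oldKey, newKey) {             // user picks a replacement key
      if (!map.has(oldKey)) return false;
      map.set(newKey, map.get(oldKey));
      map.delete(oldKey);
      return true;
    },
    handle(key) {                       // called from a keydown listener
      return enabled && map.has(key) ? map.get(key) : null;
    },
  };
}
```

A keydown listener would then dispatch only when handle() returns an action, so stray dictation output triggers nothing once shortcuts are disabled or remapped.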
Two methods may be used to satisfy this condition and prevent such interference:
For most triggers of relatively small size, it is desirable to implement both methods. If the trigger is large, the user may not notice additional content that appears far from the trigger; in those cases, only the second method may be appropriate.
The objective of this technique is to ensure that users who have difficulties performing path-based or multi-point gestures can operate content with a single pointer. This technique may involve either not using path-based or multi-point gestures to operate content, or providing alternative controls for pointer input that can be operated by a single pointer (e.g., a tap or tap-and-hold on a touch screen, or a click or long press with a mouse or other indirect pointing device).
On touch screen devices, author-supplied path-based and multipoint gestures usually do not work when OS-level assistive technologies (AT) like a built-in screenreader are turned on. AT generally consumes path-based or multipoint gestures, so they do not reach the authored content. For example, a horizontal swipe gesture may not reveal a menu as intended by the author, but move the screen reader focus to the next or previous element. Some gestures may work if the user operates "pass-through gestures", which are often unreliable as they may depend on factors of hardware, operating system, operating system "skin", operating system setting, or user agent.
This technique addresses gestures where support has been implemented by authors, not gestures provided by the user agent (such as horizontal swiping to move through the browser history or vertical swiping to scroll through page content) or the operating system (e.g., gestures to move between open apps, or call up contextual menus of assistive technologies when these are enabled).
The intent of this Success Criterion is to ensure that content can be operated using simple inputs on a wide range of pointing devices. This is important for users who cannot perform complex gestures in a precise manner; users may lack the precision or ability to carry out the gestures or they may use a pointing method that lacks the capability or accuracy to perform multipoint or path-based gestures.
A path-based gesture involves a user interaction where the gesture's success is dependent on the path of the user's pointer movement and not just the endpoints. Examples include swiping (which relies on the direction of movement) and gestures which trace a prescribed path, as in the drawing of a specific shape. Such paths may be drawn with a finger or stylus on a touchscreen, graphics tablet, or trackpad, or with a mouse, joystick, or similar pointer device.
A user may find it difficult or impossible to accomplish these gestures if they have impaired fine motor control, or if they use a specialized or adapted input device such as a head pointer, eye-gaze system, or speech-controlled mouse emulation. Note that most dragging actions, including drag and drop, are not considered path-based gestures for the purposes of this Success Criterion. This is because once an object is selected, it can be dragged in a wayward manner to its destination (endpoint), and need not follow a prescribed path.
Examples of multipoint gestures include a two-finger pinch zoom, a split tap where one finger rests on the screen and a second finger taps, or a two- or three-finger tap or swipe. A user may find it difficult or impossible to accomplish these if they type and point with a single finger or stick, in addition to any of the causes listed above.
Authors must ensure that their content can be operated without such complex gestures. When they implement multipoint or path-based gestures, they must ensure that the functionality can also be operated via single-point activation. Examples of single-point activation on a touchscreen or touchpad include taps, double taps, and long presses. Examples for a mouse, trackpad, head-pointer, or similar device include single clicks, click-and-hold and double clicks.
This Success Criterion applies to author-created gestures, as opposed to gestures defined on the level of operating system or user agent. Examples of gestures provided at the operating system level would be swiping down to see system notifications, and gestures for built-in assistive technologies (AT) to focus or activate content, or to call up AT menus. Examples of user-agent-implemented gestures would be horizontal swiping implemented by browsers for navigating within the page history, or vertical swiping to scroll page content.
While some operating systems may provide ways to define "macros" to replace complex gestures, content authors cannot rely on such a capability because it is not pervasive on all touch-enabled platforms. Moreover, this may work for standard gestures that a user can predefine, but may not work for other author-defined gestures.
This Success Criterion does not require all functionality to be available through pointing devices, but that which is must be available to users who use the pointing device but cannot perform complex gestures. While content authors may provide keyboard commands or other non-pointer mechanisms that perform actions equivalent to complex gestures (see Success Criterion 2.1.1 Keyboard), this is not sufficient to conform to this Success Criterion. That is because some users rely entirely on pointing devices, or find simple pointer inputs much easier than alternatives. For example, a user relying on a head-pointer would find clicking a control to be much more convenient than activating an on-screen keyboard to emulate a keyboard shortcut, and a person who has difficulty memorizing a series of keys (or gestures) may find it much easier to simply click on a labeled control. Therefore, if one or more pointer-based mechanisms are supported, then their benefits should be afforded to users through simple, single-point actions alone.
An exception is made for functionality that is inherently and necessarily based on complex paths or multipoint gestures. For example, entering one's signature may be inherently path-based (although acknowledging something or confirming one's identity need not be).
Note that although gestures that involve dragging are not typically considered in scope for this SC, such gestures require a higher level of fine motor control. Authors are encouraged to provide non-dragging methods for interacting with the same controls. For instance, although a slider control can be operated by dragging the 'thumb' control, a single tap or click anywhere on the slider groove can move the thumb control to the chosen position. Likewise, buttons on either side of a slider can increment and decrement the selected value and update the thumb position.
A web site includes a map view that supports both the pinch gesture to zoom into the map content and drag gestures to move the visible area. User interface controls offer the operation via [+] and [-] buttons to zoom in and out, and arrow buttons to pan stepwise in all directions.
A news site has a horizontal content slider with hidden news teasers that can be moved into the viewport via horizontal swiping. It also offers forward and backward arrow buttons for single-point activation to navigate to adjacent slider content.
A mortgage lending site has a slider control for setting the amount of credit required. The slider can be operated by dragging the thumb, but also by a single tap or click anywhere on the slider groove in order to set the thumb to the chosen position.
A slider control can be operated by dragging the thumb. Buttons on both sides of the slider increment and decrement the selected value and update the thumb position.
A kanban widget with several vertical areas representing states in a defined process allows the user to right- or left-swipe elements to move them to an adjacent silo. The user can also accomplish this by selecting the element with a single tap or click, and then activating an arrow button to move the selected element.
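The slider examples above might be sketched as follows. All ids, values, and the stepCredit handler are illustrative assumptions, and the handler's implementation is elided:

```html
<!-- The range input's thumb can be dragged, but the buttons offer a
     single-pointer alternative that steps the value instead. -->
<button type="button" aria-label="Decrease amount"
        onclick="stepCredit(-1000)">-</button>
<input type="range" id="credit" min="0" max="100000"
       step="1000" value="20000" aria-label="Amount of credit">
<button type="button" aria-label="Increase amount"
        onclick="stepCredit(1000)">+</button>
```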
(none currently documented)
The focus of the Reflow Success Criterion is to enable users to zoom in without having to scroll in two directions. Success Criterion 1.4.4 Resize Text also applies, so it should be possible to increase the size of all text to at least 200% while simultaneously meeting the reflow requirement. For most implementations, the text is expected to continue to enlarge as the magnification increases, so that users can magnify text up to (and beyond) 400%. In an implementation where text does not consistently increase its size as people zoom in (such as when it is transformed based on a media query to adapt to small-screen usage), it should still be possible to get to 200% enlargement.
For example, if at the default browser setting of 100% zoom, text is set at 20px when the window is 1280px wide, the same 20px at 200% zoom would mean a text size of 200%. At 400% zoom, the author may decide to set the text size to 10px: the text would now still be enlarged to 200% compared to the default browser setting of 100% zoom. It is not required to achieve 200% text enlargement at every breakpoint, but it should be possible to get 200% text enlargement in some way.
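As a hedged illustration of the example above (the breakpoint and sizes are assumed for this sketch, not prescribed):

```css
/* At the default 100% zoom the window is 1280px wide and text is 20px. */
body { font-size: 20px; }

/* At 400% zoom the effective viewport is 320px wide. Reducing text to
   10px still renders it at 10px x 4 = 40px, i.e. 200% of the default
   20px, so SC 1.4.4 can still be met alongside Reflow. */
@media (max-width: 320px) {
  body { font-size: 10px; }
}
```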
Provide information below to help editors associate the technique properly. Contents of the meta section are not output by the processor.
Describe the situations in which to use the technique, such as types of pages, features in use that might use the technique, etc. Do not add references to the part of WCAG to which the technique relates; this is taken from the Understanding pages and inserted in technique pages upon publication.
This technique applies to nearly all technologies.
Describe how the technique works. This begins with a description of the problem the technique solves, then describes how to apply the technique. The description should be detailed enough to provide all the information a reader needs to be able to apply the technique, without recourse to following example code.
The objective of this technique is to ensure that users who attempt to interact with a control do not trigger the action of the event accidentally. This can be accomplished most directly by relying on the “pointer up” event. This allows users interacting with touch screens to utilize the screen as support to stabilize the activation target. Another approach would be to activate on the “pointer down” event, but if the “pointer up” event occurs outside of the control, to immediately undo the action.
Copy the following section for each example. Examples must have a title and a description, and usually have a code sample. Code samples should be elided if necessary to show the core of the technique without necessarily providing all the surrounding code that would also be involved. A working example link references a location where the technique can be shown working live.
Description
An editable text box could have the cursor enter the editable area on the “pointer down” event, because the action is trivially reversible; any text is easy to delete. However, a Submit button would need to either provide a confirmation dialogue or have its event occur on the “pointer up” event. A key on a keyboard application could respond on the “pointer down” event because such behavior would be considered essential for that control.
Code sample
The objective of this technique is to ensure that users who attempt to interact with a control do not trigger the action of the event accidentally. This can be accomplished most directly by relying on the “pointer up” event. This allows users interacting with touch screens to utilize the screen as support to stabilize the activation target. Another approach would be to activate on the “pointer down” event, but if the “pointer up” event occurs outside of the control, to immediately undo the action.
The easiest way to meet this success criterion is simply to use the default behavior of controls. The up-event is the default behaviour for almost all controls in any programming or markup language. An editable text box could have the cursor enter the editable area on the “pointer down” event, because the action is trivially reversible; any text is easy to delete. However, a Submit button would need to either provide a confirmation dialogue or have its event occur on the “pointer up” event. A key on a keyboard application could respond on the “pointer down” event because such behavior would be considered essential for that control.
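Assuming the cancellation approach described above, a minimal sketch might look like this. The helper name createPressTracker is an assumption for illustration, not a DOM API:

```javascript
// Hypothetical sketch (createPressTracker is not a DOM API): feed it
// pointer events and it reports whether activation should fire.
// Nothing irreversible happens on pointer down; moving the pointer
// off the control before release cancels the pending action.
function createPressTracker() {
  let pressed = false;
  return {
    pointerdown() { pressed = true; },
    pointerleave() { pressed = false; },  // leaving the control cancels
    pointerup() {                         // activate only on a completed press
      const activate = pressed;
      pressed = false;
      return activate;
    },
  };
}
```

Wired to a real control, each handler would be registered with addEventListener, and the control's action would run only when pointerup() returns true.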
2.5.2 Pointer Cancellation
Sufficient
This technique applies to nearly all technologies.
The objective of this technique is to ensure that users who attempt to interact with a control do not trigger the action of the event accidentally. This can be accomplished most directly by relying on the “pointer up” event.
The easiest way to meet this success criterion is simply to use the default behavior of controls. The up-event is the default behaviour for almost all controls in any programming or markup language.
In native languages where a control is fired on the down event, it is usually for good reason and is easily recoverable. For instance, an HTML input element could have the cursor enter the editable area on the “pointer down” event, because the action is trivially reversible, and as such meets the requirements of the Pointer Cancellation SC.
In JavaScript, use the native onclick event.
In HTML, use a native button element.
In iOS or Android, use a native button.
Tests must have a test procedure and expected results. Populate the following sections as appropriate. If a technique has multiple alternative testing approaches, add a new section with class="test" for each one, and put the test-procedure and test-results sections inside that.
The easiest way to meet this success criterion is simply to use the default behavior of controls and not override that behaviour with an explicit down-event trigger. The up-event is the default behaviour for almost all controls in any programming or markup language.
From 9539229b5369d3f25ee654f86eb83f82adf06951 Mon Sep 17 00:00:00 2001
From: David MacDonald

The objective of this technique is to ensure that users who attempt to interact with a control do not trigger the action of the event accidentally. This can be accomplished most directly by relying on the “pointer up” event.
The easiest way to meet this success criterion is simply to use the default behavior of controls and not override that behaviour with an explicit down-event trigger. The up-event is the default behaviour for almost all controls in any programming or markup language.
In native languages where a control is fired on the down event, it is usually for good reason and is easily recoverable. For instance, an HTML input element could have the cursor enter the editable area on the “pointer down” event, because the action is trivially reversible, and as such meets the requirements of the Pointer Cancellation SC.
The visible text inside a button element matches the beginning of the accessible name, which also includes hidden text. The idea of the hidden text is to make the button more descriptive for users of assistive technologies.
<button>Send <span class="accessibly-hidden"> Mail</span></button>
The visible kanji text inside a button element matches the end of the accessible name, which is preceded by hidden katakana characters that provide a phonetic spelling for the kanji characters. The addition of the katakana provides a phonetic pronunciation which the speech recognition application can use to match the spoken phrase.
<button><span class="accessibly-hidden">メールを</span>送信する</button>
The objective of this technique is to ensure that speech input users can operate web content reliably while not adversely affecting other users of assistive technology.
When speech input users interact with a web page, they usually speak a command followed by the reference to some visible label (such as text beside an input field or inside a button or link). For example, they may say "click search" to activate a button labelled Search. When speech recognition software processes speech input and looks for matches, it uses the accessible name of controls. Where there is a mismatch between the text in the label and the text in the accessible name, it can cause issues for the user.
The simplest way to enable speech input users and meet 2.5.3 Label in Name is to ensure that the accessible name matches the visible text label. The accessible name should be assigned through native elements and semantics where possible. That helps ensure an exact match between the visible label and name. This is covered in the related technique Matching the accessible name to visible label with native semantics.
Where it is not possible to match the adjacent visible text label through native semantics, authors may use aria-label and aria-labelledby to match the string. Such situations are unusual and tend to occur when there is not a clear 1:1 relationship between user interface components and labels. Especially where users may perceive more inputs than labels, the use of ARIA can be beneficial to ensure the name matches the label.
To meet 2.5.3, it is not necessary to assign every radio button the table header text as its accessible name. The text is not adjacent to most of them and may not offer a significant improvement in user experience for speech recognition users.
For instance, in this example each row logically represents a radio button group – a user should only be able to give one rating to the interaction of the sales staff. As such, if the speech recognition user can navigate to the first choice in that group via the row header label, the user can use the keyboard API to easily navigate between the choices.
From a strictly programmatic perspective, authors may be tempted to treat the column headers ("Very satisfied", "Somewhat satisfied", etc) as the labels for each of the radio buttons. This is consistent with how the simple radio button group was done. However, especially in a survey which contains a number of similar questions, the result does not necessarily improve the speech interaction since there may be dozens of "Very satisfied" radio buttons.
Many authors will assign the table header values to each input. This is unlikely to improve the interaction for speech-input users, and it has ramifications for screen reader user experience. Giving the header values to each radio button may create a more reliable experience for some users. However, it should be noted that popular screen readers already extract the table's scope information to provide context to users, and wordy table headers will likely decrease the experience for users rather than enhance it.
For instance, relying on only scope attributes of the table headers, some screen readers will announce each row header as a user traverses to new rows, but (depending on the screen reader's configuration) will not announce the row header while the user navigates between cells on the same row. This is an established and efficient means of navigating by screen reader since it gives good context with reduced verbosity.
In contrast, concatenating the row and column headers for every cell into its accessible name will result in the screen reader user hearing that combination announced before the radio button's state for each radio button, for example "The interaction with the sales staff Neither satisfied nor dissatisfied, not checked". Depending on the label length, that may result in a less welcome interaction.
Regardless of decisions on what to do for each radio button, aria-labelledby
will typically be used to provide the accessible name (since scope
attributes cannot).
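One possible sketch of naming a radio button from its row header alone via aria-labelledby (ids, names, and values are assumptions, and the remaining cells of the row are elided):

```html
<tr>
  <th id="sales" scope="row">The interaction with the sales staff</th>
  <td><input type="radio" name="sales" value="1"
             aria-labelledby="sales"></td>
  <!-- further rating cells in the row elided -->
</tr>
```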
To summarize, this matrix is a good example of the need to distinguish between a visible label, as discussed in 2.5.3, and notions of a programmatic label
(and other programmatic relationships) covered in 1.3.1 Information and Relationships.
From abef6464ffc26b9f548aa935e0d5e2a7ba8420c8 Mon Sep 17 00:00:00 2001
From: Detlev Fischer

The objective of this technique is to ensure that users who have difficulties performing path-based or multi-point gestures can operate content with a single pointer. This technique may involve either not using path-based or multi-point gestures to operate content, or providing alternative controls for pointer input that can be operated by a single pointer (e.g., a tap or tap-and-hold on a touch screen, or a click or long press with a mouse or other indirect pointing device).

The objective of this technique is to ensure that users who have difficulties performing path-based gestures can operate a content slider with a single pointer (e.g., a single tap on a touch screen or a single mouse click). A content slider contains chunks of content in a row. Usually several chunks of content are hidden; only one chunk is visible at any time. Horizontal swiping over the visible part of the slider brings adjacent hidden chunks of content into view. Providing controls (for example, arrow buttons) as alternative means of input allows advancing the slider with single pointer input.

On touch screen devices, author-supplied path-based and multi-point gestures usually do not work when OS-level assistive technologies (AT) like a built-in screen reader are turned on. AT generally consumes path-based or multi-point gestures, so they would not reach the authored content. For example, a horizontal swipe gesture may not reveal a menu as intended by the author, but instead move the screen reader focus to the next or previous element. Some gestures may work if the user operates "pass-through gestures", which are often unreliable as they may depend on the hardware, operating system, operating system "skin", operating system settings, or user agent.

This technique addresses gestures where support has been implemented by authors, not gestures provided by the user agent (such as horizontal swiping to move through the browser history or vertical swiping to scroll through page content) or the operating system (e.g., gestures to move between open apps, or call up contextual menus of assistive technologies when these are enabled).

For content sliders that respond to path-based gestures:
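The alternative-control pattern for a content slider can be sketched in script form. This is a minimal illustration, not taken from any referenced working example: the `.slider__chunk` class and the previous/next button selectors are assumptions.

```javascript
// Hypothetical markup hooks: .slider, .slider__chunk, .slider__prev, .slider__next.
// The arrow buttons let a single tap or click do what the swipe gesture does.

// Pure helper: compute the next visible chunk index, clamped at both ends.
function nextIndex(current, delta, total) {
  return Math.max(0, Math.min(total - 1, current + delta));
}

// Browser wiring, guarded so the helper also runs outside a DOM:
if (typeof document !== "undefined") {
  const chunks = document.querySelectorAll(".slider .slider__chunk");
  let current = 0;

  function show(index) {
    chunks.forEach(function (chunk, i) {
      chunk.hidden = i !== index; // only one chunk visible at any time
    });
    current = index;
  }

  document.querySelector(".slider__prev")
    .addEventListener("click", function () { show(nextIndex(current, -1, chunks.length)); });
  document.querySelector(".slider__next")
    .addEventListener("click", function () { show(nextIndex(current, 1, chunks.length)); });
  show(0);
}
```

Because the buttons are ordinary single-activation controls, they remain operable even when a screen reader consumes the swipe gesture.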
On touch screen devices, author-supplied path-based gestures usually do not work when OS-level assistive technologies (AT) like a built-in screen reader are turned on. This is because AT generally consumes path-based gestures, so they would not reach the authored content. For example, a horizontal swipe gesture over the content slider will not work as intended by the author, but will instead move the screen reader focus to the next or previous element. Some gestures may work if the user operates "pass-through gestures", which are often unreliable as they may depend on the hardware, operating system, operating system "skin", operating system settings, or user agent.

When this Success Criterion is not satisfied, it may be difficult for people with
+ some disabilities to reach the main content of a Web page quickly and easily:

The intent of this condition is to ensure that the additional content does not interfere with viewing or operating the page's original content. When magnified, the portion of the page visible in the viewport can be significantly reduced. Mouse users frequently move the pointer to pan the magnified viewport and display another portion of the screen. However, almost the entire portion of the page visible in this restricted viewport may trigger the additional content, making it difficult for a user to pan without re-triggering the content. A keyboard means of dismissing the additional content provides a workaround. Alternatively, low-vision users who can only navigate via the keyboard do not want the small area of their magnified viewport cluttered with trivial hover text. They need a keyboard method of dismissing something that is obscuring the current focal area.

Two methods may be used to satisfy this condition and prevent such interference. For most triggers of relatively small size, it is desirable for both methods to be implemented. If the trigger is large, noticing the additional content may be of concern if it appears away from the trigger. In those cases, only the second method may be appropriate.
The Success Criterion allows input error messages to persist, as there are cases that require attention, explicit confirmation, or remedial action.

Content does not restrict its view and operation to a single display orientation, such as portrait or landscape, unless a specific display orientation is essential.
-Examples where a particular display orientation may be essential are a bank check, a piano application, slides for a projector or television, or virtual reality content where binary display orientation is not applicable.
+Examples where a particular display orientation may be essential are a bank check, a piano application, slides for a projector or television, or virtual reality content where content is not necessarily restricted to landscape or portrait display orientation.
-Ensuring that multi-point and path-based gesture functionality can be operated with a single pointer
+ Providing controls to ensure that a slider can be operated with a single pointer
Metadata
@@ -19,18 +19,16 @@ Applicability
Description
- Examples
+ Example
-
Examples
Tests
Procedure
-
-
Applicability
Description
- Applicability
Description
- Applicability
Description
Example
From ef055c100c6d2088ed94bbd87e6d07395dbebdff Mon Sep 17 00:00:00 2001
From: Detlev Fischer Applicability
Description
- Description
Example
-
Applicability
Description
Intent of Bypass Blocks
Benefits of Bypass Blocks
-
+
-
-
From d085e98f9518c64935caca0a3245d8d8e5fda1b2 Mon Sep 17 00:00:00 2001
From: Alastair Campbell Examples of Bypass Blocks
story; the screen reader user has to listen to 200 words; and the screen magnifier
user must search around for the location of the main body.
Intent
Dismissable
-
@@ -42,6 +42,7 @@
Dismissable
Hoverable
From f9a2821c2f54bf07e9146b2f30936dd4f3c58989 Mon Sep 17 00:00:00 2001
From: Alastair Campbell Orientation
Intent of this Success Criterion
in a fixed orientation (e.g. on the arm of a power wheelchair). Therefore, websites and applications need to support both orientations
by not restricting the orientation. Changes in content or functionality due to the size of the display are not covered by this criterion, which is focused on restrictions of orientation.
-Historically, devices tended to have a fixed-orientation display, and all content was created to match that display orientation. Today, most handhelds and many other devices (e.g., monitors) have a hardware-level ability to dynamically adjust default display orientation based on sensor information. The goal of this Success Criterion is that authors should never restrict content's orientation, thus ensuring that it always matches the device display orientation.
+Historically, devices tended to have a fixed-orientation display, and all content was created to match that display orientation. Today, most handhelds and many other devices (e.g., monitors) have a hardware-level ability to dynamically adjust default display orientation based on sensor information. The goal of this Success Criterion is that authors should never restrict content's orientation, thus ensuring that it always matches the device display orientation.
-It is important to distinguish between an author's responsibility not to restrict content to a specific orientation, and device-specific settings governing the locking of display orientation.
+It is important to distinguish between an author's responsibility not to restrict content to a specific orientation, and device-specific settings governing the locking of display orientation.
-Many hand-held devices offer a mechanical switch or a system setting (or both) to allow the user to lock the device display to a specific orientation. Where a user decides to lock their entire device to an orientation, all applications are expected to pick up that setting and to display content accordingly.
+Many hand-held devices offer a mechanical switch or a system setting (or both) to allow the user to lock the device display to a specific orientation. Where a user decides to lock their entire device to an orientation, all applications are expected to pick up that setting and to display content accordingly.
-This Success Criterion complements device "lock orientation" settings. A web page that does not restrict its display orientation will always support the system-level orientation setting, since the system setting is picked up by the user agent. Alternatively, where a device-level orientation lock is not in place, the user agent should display the page according to the orientation of the device (either its default, or the current orientation determined by any device sensors).
+This Success Criterion complements device "lock orientation" settings. A web page that does not restrict its display orientation will always support the system-level orientation setting, since the system setting is picked up by the user agent. Alternatively, where a device-level orientation lock is not in place, the user agent should display the page according to the orientation of the device (either its default, or the current orientation determined by any device sensors).
-The exceptions for things considered essential is aimed at situations where the content would only be understood in a particular orientation, or where the technology restricts the possible orientations. If content is aimed at a specific environment which is only available in one orientation (such as a television) then the content can restrict the orientation. Technologies such as virtual reality use screens within goggles that cannot change orientation relative to the user's eyes.
+The exception for things considered essential is aimed at situations where the content would only be understood in a particular orientation, or where the technology restricts the possible orientations. If content is aimed at a specific environment which is only available in one orientation (such as a television) then the content can restrict the orientation. Technologies such as virtual reality use screens within goggles that cannot change orientation relative to the user's eyes.
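To illustrate the author's side of this (the `.gallery` class name is hypothetical), content can adapt to whichever orientation the device reports instead of being locked to one:

```css
/* Reflow the layout for each orientation rather than restricting it. */
.gallery {
  display: flex;
  flex-direction: row; /* side-by-side in landscape */
}

@media (orientation: portrait) {
  .gallery {
    flex-direction: column; /* stacked in portrait; same content, same functions */
  }
}
```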
-In HTML use a native
+In HTML use a native <button>
or <a href ....>.
Find all clickable controls with actions that are irreversible. If this is the case:
-The objective of this technique is to ensure:
-In this technique, when a device sensor such as accelerometers or gyroscope is used to gather input:
The objective of this technique is to ensure that:
+In this technique, when a device sensor such as an accelerometer or gyroscope is used to gather input:
+After text is input in a field, shaking the device shows a dialog offering users the option to undo the input. Supporting use of the backspace key and/or providing a clear button next to the text field offers the same functionality.
-Shake to undo can be turned off at the operating system.
+Shake to undo can be turned off in a settings page.
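A rough sketch of the shake-to-undo example follows; the element ids, the confirm dialog, and the shake threshold are illustrative assumptions, not part of the working example.

```javascript
// Hypothetical ids: #comment (text field), #clear-button (motion-free alternative).

// Pure helper: true when the acceleration magnitude exceeds the threshold.
function isShake(accel, threshold) {
  const magnitude = Math.sqrt(
    accel.x * accel.x + accel.y * accel.y + accel.z * accel.z
  );
  return magnitude > threshold;
}

if (typeof window !== "undefined") {
  const field = document.getElementById("comment");
  const SHAKE_THRESHOLD = 25; // m/s^2; an assumed tuning value

  // Motion path: shaking the device offers to undo the input.
  window.addEventListener("devicemotion", function (event) {
    const accel = event.accelerationIncludingGravity;
    if (accel && isShake(accel, SHAKE_THRESHOLD) && window.confirm("Undo your last input?")) {
      field.value = "";
    }
  });

  // Single-pointer alternative: the visible Clear button does the same thing.
  document.getElementById("clear-button").addEventListener("click", function () {
    field.value = "";
  });
}
```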
A slider can be adjusted by tipping the device to the left and right. There are also buttons to achieve the same functionality, and a tick-box that prevents the motion from having an effect.
+Example of slider with motion actuation.
A slider can be adjusted by tipping the device to the left and right. There are also buttons to achieve the same functionality, and a tick-box that prevents the motion from having an effect.
-Example of slider with motion actuation.
+Working example of a slider with motion actuation.
Use this technique on web pages that detect device or user motion such as shaking or tilting and use this motion as a means of input. If the motion itself is essential to the application's function, then this technique does not apply.
This technique also does not relate to movement of users through space as registered by geolocation sensors or beacons, or events observed by the device other than intentional gesturing by the user. It also does not cover indirect motion associated with operating a keyboard, pointer, or assistive technology.
-The objective of this technique is to ensure that:
@@ -29,9 +31,9 @@
-Here, the link contains visible text and hidden link text. Both together make up the link's accessible name. The visible text comes first. The idea is to make the link more descriptive for users of assistive technologies.
+A link contains visible text and hidden link text. Both together make up the link's accessible name. The visible text comes first. The idea is to make the link more descriptive for users of assistive technologies.
<p>Go to <a href="code-of-conduct.html">Code of conduct <span class="hidden_accessibly"> of ACME Corporation</span></a></p>
-Here, the generic link is combined with the heading of the paragraph to give context. It is a variation on the first example, this time using aria-labelledby. The advantage of this implementation is that it uses existing visible text on the page, and so is more likely to be properly translated during any localization transformations.
+A generic link is combined with the heading of the paragraph to give context. It is a variation on the first example, this time using aria-labelledby. The advantage of this implementation is that it uses existing visible text on the page, and so is more likely to be properly translated during any localization transformations.
- <h4 id="crappy">Insufficient Link Names Invade Community</h4>
-<p>Citizens are reeling from the growing invasion of useless "read more" links appearing in their online resources. <a href="crappy.html" aria-labelledby="generic crappy"><span id="generic">More...</span></a>
+ <h4 id="poor">Insufficient Link Names Invade Community</h4>
+<p>Citizens are reeling from the growing invasion of useless "read more" links appearing in their online resources. <a href="poor.html" aria-labelledby="generic poor"><span id="generic">More...</span></a>
- [The following link opens nothing] Citizens are reeling from the growing invasion of useless "read more" links appearing in their online resources. More...
+[The following link opens nothing] Citizens are reeling from the growing invasion of useless "read more" links appearing in their online resources. More...
aria-label
Where two strings cannot be grammatically or seamlessly combined using aria-labelledby, aria-label can be used to make a new name which includes the visible label.
- ...end of news story. <a href="crappy.html" aria-label="Read more about Insufficient link names">Read more</a>
+ ...end of news story. <a href="poor.html" aria-label="Read more about Insufficient link names">Read more</a>
Such implementations can create accessibility issues, especially when the hint between the label and input exceeds a single line, further separating the label from its input. Figure 4 illustrates that the concept of "adjacent text" is a guide for interpretation, but cannot always serve as a hard rule.
+The hint text in such implementations should be kept to a single line where possible, since accessibility issues can arise where a more lengthy hint separates the label from its input. Figure 4 illustrates that the concept of "adjacent text" is a guide for label interpretation, but cannot always serve as a hard rule.
<form>
<label class="label" for="example-2">
@@ -127,7 +127,7 @@ Range of inputs with few labels
Figure 6 Line of 5 radio buttons with Hated it and Loved it labels at each end
- The two labels, "Hated it" and "Loved it", are adjacent to the first and last radio buttons, and should be their accessible names. "Rate your response" is the text describing the whole widget and can be associated as the group label (here using legend). The three middle radio buttons do not have visible labels. In the code example they are given title attributes of "Disliked", "So-so" and "Liked" in order to meet 3.3.2 Labels or Instructions.
+ The two labels, "Hated it" and "Loved it", are adjacent to the first and last radio buttons, and should be their accessible names. Speech-input users can speak either of these labels to select a radio button, and then use arrow navigation (e.g., "Press right arrow") to modify the selection. "Rate your response" is the text describing the whole widget and can be associated as the group label (here using legend). The three middle radio buttons do not have visible labels. In the code example they are given title attributes of "Disliked", "So-so" and "Liked" in order to meet 3.3.2 Labels or Instructions.
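The rating widget described above could be marked up along these lines. This is a sketch; the attribute values and label placement are assumptions, not the working example's exact code.

```html
<fieldset>
  <legend>Rate your response</legend>
  <!-- End labels give the outer buttons their accessible names. -->
  <label>Hated it <input type="radio" name="rating" value="1"></label>
  <!-- The middle buttons have no visible labels; title meets 3.3.2. -->
  <input type="radio" name="rating" value="2" title="Disliked">
  <input type="radio" name="rating" value="3" title="So-so">
  <input type="radio" name="rating" value="4" title="Liked">
  <label><input type="radio" name="rating" value="5"> Loved it</label>
</fieldset>
```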
From 1573ae62e0460e39203a8ad15c7f0a89e399c0b0 Mon Sep 17 00:00:00 2001
From: Mike Gower
Date: Mon, 13 May 2019 14:42:33 -0700
Subject: [PATCH 167/402] Response to example comments
response to AWK and others
---
working-examples/label-in-name-general/example1.html | 2 +-
working-examples/label-in-name-general/example2.html | 2 +-
working-examples/label-in-name-general/example3.html | 8 ++++----
working-examples/label-in-name-general/example4.html | 3 ++-
working-examples/label-in-name-general/example5.html | 2 +-
5 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/working-examples/label-in-name-general/example1.html b/working-examples/label-in-name-general/example1.html
index f88769640d..266ed070cc 100644
--- a/working-examples/label-in-name-general/example1.html
+++ b/working-examples/label-in-name-general/example1.html
@@ -1,5 +1,5 @@
-
+
Example of Simple Radio Button Group
diff --git a/working-examples/label-in-name-general/example2.html b/working-examples/label-in-name-general/example2.html
index d21e7bfa19..a771642e80 100644
--- a/working-examples/label-in-name-general/example2.html
+++ b/working-examples/label-in-name-general/example2.html
@@ -1,5 +1,5 @@
-
+
Example of Stacked Labels
diff --git a/working-examples/label-in-name-general/example3.html b/working-examples/label-in-name-general/example3.html
index 7a924586e0..4ada843417 100644
--- a/working-examples/label-in-name-general/example3.html
+++ b/working-examples/label-in-name-general/example3.html
@@ -1,5 +1,5 @@
-
+
Example of Work Phone input set
@@ -7,9 +7,9 @@
Work Phone:
-
-
-
+
+
+
\ No newline at end of file
diff --git a/working-examples/label-in-name-general/example4.html b/working-examples/label-in-name-general/example4.html
index cf17652a87..59190c27e1 100644
--- a/working-examples/label-in-name-general/example4.html
+++ b/working-examples/label-in-name-general/example4.html
@@ -1,10 +1,11 @@
-
+
Example of Range of Inputs
+
- There are a couple of disadvantages to this construction:
-
- - Screen reader users will hear "Work Phone:" announced twice – as the group name and as the label of the first field. (This is an unfortunate outcome of the new 2.5.3 requirement and will occur for at least one input in a set of inputs with a group label in all implementations.)
- - The keyboard user cannot see a label for the second and third inputs since most browsers do not provide a mechanism to display
title
via keyboard. (Unless the telephone inputs are redesigned to provide persistently visible labels, this will remain a challenge.)
-
-
-
-
A more complex construction involves inputs that are laid out in a grid, with the row and column headings serving as the only possible "labels". This is a common construction in a survey.
-When considering this complex component, it is important to remember that the purpose of 2.5.3 Label in Name is to enable speech input. For such users, the text at the start of each row ("The interaction with the sales staff", "Your experience at the register", etc.) should be treated as the labels for the first radio buttons to meet 2.5.3. The column headers could also be assigned as names for the corresponding radio buttons in the first row, given their proximity.
-To meet 2.5.3, it is not necessary to assign every radio button the table header text as its accessible name. The text is not adjacent to most of them and may not offer a significant improvement in user experience for speech recognition users.
-For instance, in this example each row logically represents a radio button group – a user should only be able to give one rating to the interaction of the sales staff. As such, if the speech recognition user can navigate to the first choice in that group via the row header label, the user can use the keyboard API to easily navigate between the choices.
-From a strictly programmatic perspective, authors may be tempted to treat the column headers ("Very satisfied", "Somewhat satisfied", etc.) as the labels for each of the radio buttons. This is consistent with how the simple radio button group was done. However, especially in a survey which contains a number of similar questions, the result does not necessarily improve the speech interaction since there may be dozens of "Very satisfied" radio buttons.
-Many authors will assign the table header values to each input. This is unlikely to improve the interaction for speech-input users, and it has ramifications for the screen reader user experience. Giving the header values to each radio button may create a more reliable experience for some users. However, it should be noted that popular screen readers already extract the table's scope information to provide context to users, and wordy table headers will likely degrade the experience rather than enhance it.
-For instance, relying on only scope attributes of the table headers, some screen readers will announce each row header as a user traverses to new rows, but (depending on the screen reader's configuration) will not announce the row header while the user navigates between cells on the same row. This is an established and efficient means of navigating by screen reader since it gives good context with reduced verbosity.
-In contrast, concatenating the row and column headers for every cell into its accessible name will result in the screen reader user hearing that combination announced before the radio button's state for each radio button, for example "The interaction with the sales staff Neither satisfied nor dissatisfied, not checked". Depending on the label length, that may result in a less welcome interaction.
-Regardless of decisions on what to do for each radio button, aria-labelledby will typically be used to provide the accessible name (since scope attributes cannot).
-
To summarize, this matrix is a good example of the need to distinguish between a visible label, as discussed in 2.5.3, and notions of a programmatic label
(and other programmatic relationships) covered in 1.3.1 Information and Relationships.
-
- <table>
-
- <caption>2. How satisfied or dissatisfied are you with each of the following?</caption>
- <thead>
- <tr>
- <th scope="col" title="Feature"></th>
- <th scope="col" id="VS">Very satisfied</th>
- <th scope="col" id="SS">Somewhat satisfied</th>
- <th scope="col" id="NSD">Neither satisfied nor dissatisfied</th>
- <th scope="col" id="SD">Somewhat dissatisfied</th>
- <th scope="col" id="VD">Very dissatisfied</th>
- </tr>
- </thead>
- <tbody>
- <tr>
- <th scope="row" id="interaction">The interaction with the sales staff</th>
- <td><input type="radio" name="interaction" id="VS1" value="VS1" aria-labelledby="interaction VS"></td>
- <td><input type="radio" name="interaction" id="SS1" value="SS1" title="Somewhat satisfied"></td>
- <td><input type="radio" name="interaction" id="NSD1" value="NSD1" title="Neither satisfied nor dissatisfied"></td>
- <td><input type="radio" name="interaction" id="SD1" value="SD1" title="Somewhat dissatisfied"></td>
- <td><input type="radio" name="interaction" id="VD1" value="VD1" title="Very dissatisfied"></td>
- </tr>
-...
-<tr>
- <th scope="row" id="price">The price of the products</th>
- <td><input type="radio" name="price" id="VS5" value="VS5" aria-labelledby="price"></td>
- <td><input type="radio" name="price" id="SS5" value="SS5"></td>
- <td><input type="radio" name="price" id="NSD5" value="NSD5"></td>
- <td><input type="radio" name="price" id="SD5" value="SD5"></td>
- <td><input type="radio" name="price" id="VD5" value="VD5"></td>
- </tr>
- <tr>
- <th scope="row" id="sizes">The sizes available at the store</th>
- <td><input type="radio" name="sizes" id="VS6" value="VS6" aria-labelledby="sizes"></td>
- <td><input type="radio" name="sizes" id="SS6" value="SS6"></td>
- <td><input type="radio" name="sizes" id="NSD6" value="NSD6"></td>
- <td><input type="radio" name="sizes" id="SD6" value="SD6"></td>
- <td><input type="radio" name="sizes" id="VD6" value="VD6"></td>
- </tr>
-</tbody>
-</table>
Other inputs besides radio buttons may be laid out in a matrix. In Figure 10, the cells in a grid are primarily text inputs, with column and row headers.
-The same basic proximity guidance applies as in the radio button matrix – since the first column text is adjacent to the second column input, it should be considered its visible label; the top row may also be considered to supply labels for the second row's cells. The correct names for the second column/row cells will be sufficient to allow a speech-input user to reposition to the first editable cell in each row of the matrix by voice command.
-As with the prior example, some authors may elect to use the column and row headers as the accessible names for all the cells. User testing will help determine the best implementation for the target audience. One consideration for matrixes is that exposing the header text through the title attribute for each cell would provide context as a tooltip for mouse users, whereas text referenced via aria-labelledby is surfaced only to assistive technologies. Such considerations will help to determine if there is an appropriate label for any given actionable cell in a grid.
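As a minimal sketch of the proximity-based approach (the header text and ids below are invented for illustration, not taken from Figure 10), the first text input in each row can draw its accessible name from the adjacent row header, optionally combined with the column header:

```html
<!-- Sketch only: ids and header text are illustrative. -->
<table>
  <thead>
    <tr>
      <th scope="col" id="product">Product</th>
      <th scope="col" id="qty">Quantity</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th scope="row" id="shirts">Shirts</th>
      <!-- The adjacent row header text acts as the visible label,
           so it is included in the accessible name. -->
      <td><input type="text" aria-labelledby="shirts qty"></td>
    </tr>
  </tbody>
</table>
```

With this markup the first input's computed name would be "Shirts Quantity", letting a speech-input user reach the row's first editable cell by speaking its visible row label.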
+Mathematical expressions are an exception to the previous subsection about symbolic characters. Math symbols can be used as labels; for example "11×3=33" and "A>B" convey meaning. The label should not be overwritten in the accessible name, and substitutions of words where a formula is used should be avoided since there are multiple ways to express the same equation. For example, making the name "eleven multiplied by three is equivalent to thirty-three" might mean that a user who says "eleven times three equals thirty-three" does not get a match. It is best to leave the formulas as used in the label and count on the user's familiarity with their speech software to achieve a match. Further, converting a mathematical formula label into an accessible name that is a spelled-out equivalent may create issues for translation. The name should match the label's formula text. Note that a consideration for authors is to use the proper symbol in the formula. For instance 11x3 (with a lower or upper case letter X), 11*3 (with the asterisk symbol), and 11×3 (with the &times; symbol) are all easy for sighted users to interpret as meaning the same formula, but may not all be matched to "11 times 3" by the speech recognition software. The proper operator symbol (in this case the times symbol) should be used.
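The guidance above can be sketched as a minimal form field (an invented example, not from the source), where the visible formula text uses the proper &times; entity and is left intact as both the label and the accessible name:

```html
<!-- Sketch: the visible formula is the accessible name; &times; (×) is used
     rather than the letter x or the asterisk. -->
<label for="answer">11&times;3=</label>
<input type="text" id="answer" name="answer">
```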
From 49b7d8b949b7d6eed96f665a3bf6adac5461663a Mon Sep 17 00:00:00 2001
From: Alastair Campbell
Specific Benefits of Success Criterion 1.4.10
Examples
Example 1: Responsive Design
Note that as the zoom percentage increases, the navigation changes first to hide options behind a "More" dropdown menu. As zooming continues, most navigation options are eventually behind a "hamburger" menu button. All the information and functionality is still available from this web page. There is no horizontal scrolling.
Resources are for information purposes only, no endorsement implied.
Each numbered item in this section represents a technique or combination of techniques that the WCAG Working Group deems sufficient for meeting this Success Criterion. However, it is not necessary to use these particular techniques. For information on using other techniques, see Understanding Techniques for WCAG Success Criteria, particularly the "Other Techniques" section.
Although not required for conformance, the following additional techniques should be considered in order to make content more accessible. Not all techniques can be used or would be effective in all situations.
The following are common mistakes that are considered failures of this Success Criterion by the WCAG Working Group.
The intent of this Success Criterion is to allow users to prevent animation from being displayed on Web pages. Some users experience distraction or nausea from animated content. For example, if scrolling a page causes elements to move (other than the essential movement associated with scrolling) it can trigger vestibular disorders. Vestibular (inner ear) disorder reactions include dizziness, nausea and headaches. Another animation that is often non-essential is parallax scrolling. Parallax scrolling occurs when backgrounds move at a different rate to foregrounds. Animation that is essential to the functionality or information of a web page is allowed by this Success Criterion.
"Animation from interactions" applies when a user’s interaction initiates non-essential animation. In contrast, 2.2.2 Pause, Stop, Hide applies when the web page initiates animation.
The impact of animation on people with vestibular disorders can be quite severe. Triggered reactions include nausea, migraine headaches, and potentially needing bed rest to recover.
From c7d995f2f70c4b9e66c13fa7bd6ce9c519d61585 Mon Sep 17 00:00:00 2001
From: patrickhlauke
Web content containing interactive widgets such as content carousels, with visible buttons to operate the widget (such as previous/next buttons, or a visible scrollbar/slider). These visible controls are hidden/omitted when a touchscreen is detected, under the assumption that users will simply use touch gestures to operate the widgets, and no other alternatives are then provided for keyboard or mouse users.
/* using CSS Media Queries Level 4 Interaction Media Features */
@@ -95,7 +95,7 @@ Hiding/omitting controls for mouse users when a touchscreen is detected
#widget .controls { display: none; }
}
-
+ Depending on the specific implementation, authors may allow mouse interactions with widgets that mirror touch gestures - for instance, allowing mouse users to also drag/swipe carousels, rather than just relying on clickable previous/next controls or scrollbars. In these cases, hiding controls when a touchscreen is detected will still allow users to operate the widget with the mouse (unless this interaction has also been suppressed/omitted when the touchscreen was detected, as per the previous example). However, if the only keyboard-operable controls for the widget were hidden, and no alternative for keyboard users was provided (such as allowing cursor key operation), this situation would still fail Success Criterion 2.5.6.
Generally, these approaches will also result in a failure of Success Criterion 2.1.1 Keyboard and (depending on the touch gesture that the user is expected to perform) Success Criterion 2.5.1 Pointer Gestures.
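A safer pattern than the failure shown above is sketched below, assuming the same hypothetical `#widget .controls` container from the earlier snippet: the visible controls are never hidden based on input-type detection, and the interaction media feature is used only to enhance them for coarse pointers.

```css
/* Sketch only: keep the visible controls available for every input type. */
#widget .controls {
  display: block; /* never hidden when a touchscreen is detected */
}

/* Use the pointer media feature purely as an enhancement:
   enlarge hit targets when a coarse (touch) pointer is likely. */
@media (pointer: coarse) {
  #widget .controls button {
    min-width: 44px;
    min-height: 44px;
  }
}
```

This way touch, mouse, and keyboard users all retain single-pointer, focusable controls, avoiding the 2.5.6, 2.1.1, and 2.5.1 failures described above.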
A web site includes a map view that supports both the pinch gesture to zoom into the map content and drag gestures to move the visible area. User interface controls offer the operation via [+] and [-] buttons to zoom in and out, and arrow buttons to pan stepwise in all directions.
A web site includes a map view that supports the pinch gesture to zoom into the map content. User interface controls offer the operation via [+] and [-] buttons to zoom in and out.
A news site has a horizontal content slider with hidden news teasers that can be moved into the viewport via horizontal swiping. It also offers forward and backward arrow buttons for single-point activation to navigate to adjacent slider content.
A kanban widget with several vertical areas representing states in a defined process allows the user to right- or left-swipe elements to move them to an adjacent silo. The user can also accomplish this by selecting the element with a single tap or click, and then activating an arrow button to move the selected element.
A custom slider requires movement in a strict left/right direction when operated by dragging the thumb control. Buttons on both sides of the slider increment and decrement the selected value and update the thumb position.
The intent of this Success Criterion is to ensure that content can be operated using simple inputs on a wide range of pointing devices. This is important for users who cannot perform complex gestures in a precise manner; users may lack the precision or ability to carry out the gestures or they may use a pointing method that lacks the capability or accuracy to perform multipoint or path-based gestures.
A path-based gesture involves an interaction where the user engages a pointer with the display (down event), carries out a directional movement in a pre-determined direction before disengaging the pointer (up event). The direction, speed, and also the delta between start and end point may each be evaluated to determine what function is triggered.
Examples include swiping (which relies on the direction of movement) and gestures which trace a prescribed path, as in the drawing of a specific shape. Such paths may be drawn with a finger or stylus on a touchscreen, graphics tablet, or trackpad, or with a mouse, joystick, or similar pointer device.
A user may find it difficult or impossible to accomplish these gestures if they have impaired fine motor control, or if they use a specialized or adapted input device such as a head pointer, eye-gaze system, or speech-controlled mouse emulation. Note that most dragging actions including drag and drop are not considered path-based gestures for the purposes of this Success Criterion. This is because once an object is selected, it can be dragged in a wayward manner to its destination (endpoint), and need not follow a prescribed path.
Examples of multipoint gestures include a two-finger pinch zoom, a split tap where one finger rests on the screen and a second finger taps, or a two- or three-finger tap or swipe. A user may find it difficult or impossible to accomplish these if they type and point with a single finger or stick, in addition to any of the causes listed above.
From 93d0b057a424896f69b2862e3ff52a9a754dae96 Mon Sep 17 00:00:00 2001
From: Andrew Kirkpatrick
All functionality should be accessible via pointer input devices, for example, via a mouse pointer, a finger interacting with a touch screen, an electronic pencil/stylus, or a laser pointer.
People operating pointer input devices may not be able to carry out timed or complex gestures. Examples are drag-and-drop gestures and on touch screens, swiping gestures, split taps, or long presses. This Guideline does not discourage the provision of complex and timed gestures by authors. However, where they are used, an alternative method of input should be provided to enable users with motor impairments to interact with content via single untimed pointer gestures.
Often, people use devices that offer several input methods, for example, mouse input, touch input, keyboard input, and speech input. These should be supported concurrently as users may at any time switch preferred input methods due to situational circumstances, for example, the availability of a flat support for mouse operation, or situational impediments through motion or changes of ambient light.
A common requirement for pointer interaction is the ability of users to position the pointer over the target. With touch input, the pointer (the finger) is larger and less precise than a mouse cursor. For people with motor impairments, a larger target makes it easier to successfully position the pointer and activate the target.
This document outlines the requirements that the Web Content Accessibility Guidelines Working Group (WCAG WG) has set for the development of Web Content Accessibility Guidelines (WCAG) 2.1. These dot.x requirements build on the existing requirements for WCAG 2.0, and are designed to work in harmony with the WCAG 2.0 standard.
+Web Content Accessibility Guidelines 2.0 (WCAG 2.0) [[WCAG20]] explains how to make Web content accessible to people with disabilities. Since its release in December 2008, WCAG 2.0 has been widely adopted and implemented. As a result of both feedback from implementers and significant changes in technologies, the WCAG WG is pursuing the development of dot.x specifications and support materials to address special topic areas as needed, including (but not limited to) mobile devices, cognitive impairments and learning disabilities, and low vision.
+The underlying goal of dot.x requirements is the same as for WCAG 2.0 – to promote accessibility of Web content. Dot.x requirements must satisfy additional goals addressed in this document including:
+The Requirements for WCAG 2.0 [[wcag2-req]] provides details used during the development of WCAG 2.0, including key goals related to technology independence, clearly defined conformance requirements, and more which are still relevant and important. As with WCAG 2.0, WCAG 2.1 or other dot.x work will provide techniques and supporting documentation to assist in implementation efforts, and any criteria modified or introduced by a dot.x release will need to be verifiable by implementers.
+Dot.x specifications are expected to offer modifications to existing WCAG 2.0 success criteria as well as offer additional guidelines and success criteria but dot.x requirements may not weaken what is required generally of web content to be considered conformant to either. The result of this is that when a page conforms to WCAG 2.1 or dot.x it must also conform to WCAG 2.0 if new success criteria or conformance requirements in a dot.x specification are not considered in a conformance review.
+For example:
+In WCAG 2.1 or a dot.x specification, an existing success criterion may change in priority from a lower level to a higher level, but not the other way around. For example:
+Group members working on different success criteria should maintain good communication about work in progress with the main Working Group and across Task Forces to minimize conflicts/duplication of work wherever possible.
+Note for release: Please consider the requirement to make 2.1/dot.x specifications compatible with each other carefully. The Working Group is concerned about whether it is possible to require full compatibility and is also concerned about the difficulty of incorporating requirements for conflicting success criteria into a future guidelines update. Feedback from reviewers on this point is specifically requested.
+WCAG 2.1 will provide additional criteria to address the accessibility of content on mobile devices, as well as for low-vision users and users with cognitive, language, and learning impairments. @@more needed@@
+The WCAG 2.0 Requirements document provides details about conformance that need to be met for WCAG 2.1/dot.x releases. However, WCAG 2.1/dot.x releases need to provide conformance details that indicate the conformance relationship between them and existing WCAG 2.0 conformance. WCAG 2.1 must specify that conformance claims indicate that a page conforms to WCAG 2.0 as a base. Future dot.x specifications must conform to their immediate previous ancestor specification as a base.
+WCAG 2.1 will utilize the WCAG 2.0 A / AA / AAA structure. Additional or changed success criteria will specify at what Level those criteria are provided. When a page conforms to WCAG 2.1 at a specific level, that page must conform to WCAG 2.0 at the same level.
+It is important to note that changes in WCAG 2.1 to the level for any existing WCAG 2.0 success criteria need to be made with awareness of the implementability and testability requirements for the new level. For dot.x releases to ensure backwards compatibility, level changes must be clear relative to the immediate previous ancestor specification as a base.
+For example, a success criterion may currently be at Level AAA as a result of very limited testability, and moving that success criterion to Level AA in WCAG 2.1 or a dot.x would require greater testability. In order to successfully make this transition there is an onus on the task force to provide robust testability where possible.
+Some new success criteria and guidelines in WCAG 2.1 effectively strengthen previous WCAG 2.0 conformance requirements. Consider the case where a page that conformed to WCAG 2.0 is tested against WCAG 2.1:
+A web page that previously conforms to WCAG 2.0 AA is reviewed against the new WCAG 2.1 specification. In the review, it is determined that the page still meets 1.4.3, which is now a level A criteria, and the page also meets 5.1 (level A), but it does not meet the new 5.2 (level AA).
+As a result, the page could claim to conform to the new WCAG 2.1 success criterion for 1.4.3 Color Contrast [minimum] at level A, and the new 5.1 success criterion (level A), but not the new 5.2 (level AA).
+NOTE: The author may choose to change their claim or not, as it will be possible to conform to WCAG 2.1 success criteria without making an explicit conformance claim.
+Note that most of these terms are further discussed in the Requirements for WCAG 2.0 [[wcag2-req]].
+Media Accessibility User Requirements [[media-accessibility-reqs]] may also be useful.
+The objective of this technique is to ensure that users who have difficulties performing path-based gestures can operate a control slider with a single pointer interaction (e.g., a single tap on a touch screen or a single mouse click). A control slider allows users to set a value in a certain range, e.g. setting the volume, changing the hue value of a color, putting in the amount of money needed in a loan calculator, or picking a sum to be donated to a charity. A slider that required path-based gestures would use swiping left or right to change the value. Dragging the thumb of the slider to change the value may also count as a path-based gesture if it requires the user to follow a narrow path.
+A simple fallback for activation without a path-based gesture is to make the control slider groove (horizontal bar) clickable. This way, a value can be specified using single tap or click on the groove.
+Providing controls (e.g., arrow buttons) as an alternative also allows incrementing or decrementing the value with single pointer input. This allows for a more fine-grained setting of the value.
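The two fallbacks described above can be sketched in plain JavaScript. This is a minimal illustration, not the technique's own code; the function and parameter names (`valueFromPointer`, `step`) are invented for the sketch.

```javascript
// Map a single tap/click on the slider groove to a value in [min, max],
// based on the pointer's horizontal offset within the groove.
function valueFromPointer(offsetX, grooveWidth, min, max) {
  const ratio = Math.min(Math.max(offsetX / grooveWidth, 0), 1); // clamp to the groove
  return Math.round(min + ratio * (max - min));
}

// Arrow-button fallback: one activation changes the value by one step,
// clamped to the slider's range, for fine-grained single-pointer control.
function step(current, delta, min, max) {
  return Math.min(Math.max(current + delta, min), max);
}
```

For example, a click at the midpoint of a 200px groove on a 0–100 slider yields 50, and repeated button presses near the maximum stay clamped within range.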
This technique addresses gestures where support has been implemented by authors, not gestures provided by the user agent (such as horizontal swiping to move through the browser history or vertical swiping to scroll through page content) or the operating system (e.g., gestures to move between open apps, or call up contextual menus of assistive technologies when these are enabled).
On touch screen devices, author-supplied path-based gestures usually do not work when OS-level assistive technologies (AT) like a built-in screen reader are turned on. This is because AT generally consumes path-based gestures so they would not reach the authored content. When custom controls are built on top of native controls, however, these may also be operable with AT gestures such as vertical swiping to change the value (see example 1).
A custom control slider built on top of a native slider (input type range) allows users to swipe left and right or drag the slider thumb to change the value of the slider. The slider groove allows single point activation: tapping or clicking it will set the slider thumb to the activated position.
For control sliders that respond to path-based gestures:
Resources are for information purposes only, no endorsement implied.
Each numbered item in this section represents a technique or combination of techniques that the WCAG Working Group deems sufficient for meeting this Success Criterion. However, it is not necessary to use these particular techniques. For information on using other techniques, see Understanding Techniques for WCAG Success Criteria, particularly the "Other Techniques" section.
The following are common mistakes that are considered failures of this Success Criterion by the WCAG Working Group.