Refining alternative input #44

Closed
4 tasks done
emily-phet opened this issue May 10, 2020 · 18 comments
Comments

emily-phet commented May 10, 2020

@zepumph @BLFiedler
The use of arrow keys to control the hands needs the following refinements:

Ideally, moving the hands up and down "feels like" moving a slider as much as possible. Particularly from a non-visual perspective, the side-to-side motion is irrelevant, and consistent, repeatable location of the markers is important (like a slider!).

Also note - there are ideas underway for exploring simultaneous movement of the two hands with the keyboard - see #29. There may ultimately be a need for new keyboard shortcuts and/or a new focusable "object" in the navigation order, which would likely come just after the right hand (like left hand, right hand, both hands...).

Ideally this would be ready for the sound design discussion on Tuesday, as we'll be discussing potential boundary sounds, crossing-grid-line sounds, etc., and being able to feel this out with alternative input would be helpful.

@emily-phet (Author)

@terracoda @zepumph Is there any way you can imagine to have the hand markers actually "be sliders" for those interacting using a focus-based mechanism (e.g., a screen reader)? Could we instrument them with the same behaviors as a slider and have the screen reader read them out as sliders?

I think it might be ok if the markers did not move horizontally at all when focused, and only responded to arrow presses by incrementing/decrementing the height value.

terracoda (Contributor) commented May 11, 2020

Hmm, @emily-phet that's an interesting question.

For each of the freely moving objects we have implemented thus far (balloon, book, ruler), we have used a two-step interaction: we start with a native grab button that launches a custom move interaction.

This two-step approach is likely cumbersome and not ideal for two objects that we want to move at the same time; we wouldn't want to force users to do two grabs before exploring.

Currently, we have two freely movable hands, which are not prefixed by a native grab button. The interaction itself will be difficult to understand unless it is communicated as a native interaction, or possibly unless the name of the object maps so perfectly to a native interaction that the learner knows exactly what to do, e.g., "Slide left hand" and "Slide right hand".

To address your question @emily-phet, I see two possible approaches. I would need @zepumph to chime in on whether both are actually feasible.

For both, I want to ignore the current PDOM implementation (i.e., headings for Left Hand and Right Hand).

Approach 1: Fully custom code techniques - all ARIA

  1. Use custom interaction code, like we are doing now, to create two "freely moving" objects, but restrict them further so they only move up and down, not side to side.
  2. Use aria-roledescription to communicate the interaction as a slider (or vertical slider).
  3. Communicating vertical orientation may be possible with aria-orientation.
  4. Note: I don't think we have access to aria-valuetext unless we use input type="range", so we might need a custom solution for combined object and context responses via aria-live (see the sketch after the markup below).
<h2>Play Area</h2>
<h3>My Hands / Hand Position??</h3>
<p>DYNAMIC STATE DESCRIPTION (qualitative or quantitative) of the hand situation.</p>
<div aria-roledescription="slider" aria-label="Left Hand" role="application" aria-orientation="vertical">
  Left Hand
</div>
<div aria-roledescription="slider" aria-label="Right Hand" role="application" aria-orientation="vertical">
  Right Hand
</div>
<p>Move hands up or down independently or at the same time to explore/play.</p>
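For item 4, a minimal sketch of how combined object and context responses might be pushed through an aria-live region (ids and response wording here are hypothetical, not settled design):

<p aria-live="polite" id="handResponses"></p>

<script>
  // Hypothetical sketch: the custom arrow-key listener would call this after
  // each move to speak a combined object + context response.
  const liveRegion = document.getElementById( 'handResponses' );

  function announceHandChange( handName, heightValue, contextResponse ) {
    liveRegion.textContent = handName + ' at ' + heightValue + '. ' + contextResponse;
  }

  // e.g. announceHandChange( 'Left Hand', 7, 'Hands are getting closer.' );
</script>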

Approach 2: Combination of custom & native code techniques

  1. Use custom interaction code in order to address focus issues and to be able to use both hands at the same time.
  2. Use native HTML input type="range" to create two native slider interactions.
  3. aria-orientation="vertical" should work without issue on an input element.
  4. Leverage aria-valuetext to create the positional descriptions (qualitative or quantitative) for each hand, similar to the spheres in Gravity Force Lab (Basics and regular) and John Travoltage's hand position slider (see the sketch after the markup below).
  5. Use a dynamic screen summary to give a summarized DYNAMIC STATE DESCRIPTION (qualitative or quantitative) of the hand situation.
<p>DYNAMIC Screen Summary (qualitative or quantitative) of the hand situation.</p>

<h2>Play Area</h2>
<label for="leftHand">Left Hand</label>
<input id="leftHand" type="range" aria-orientation="vertical"
  aria-valuetext="SOME relevant distance or position value for left hand">

<label for="rightHand">Right Hand</label>
<input id="rightHand" type="range" aria-orientation="vertical"
  aria-valuetext="SOME relevant distance or position value for right hand">

<p>Move hands independently or at the same time to explore/play.</p>
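For item 4, a minimal sketch of how aria-valuetext might be kept in sync as a slider moves (the thresholds and qualitative wording are placeholder assumptions):

<script>
  // Hypothetical sketch: update aria-valuetext whenever the native slider
  // fires an 'input' event, so the screen reader speaks a qualitative
  // position instead of a bare number.
  const leftHandSlider = document.getElementById( 'leftHand' );

  leftHandSlider.addEventListener( 'input', () => {
    const value = Number( leftHandSlider.value ); // range defaults to 0-100
    const positionText = value > 66 ? 'near the top' :
                         value > 33 ? 'in the middle' :
                                      'near the bottom';
    leftHandSlider.setAttribute( 'aria-valuetext', positionText );
  } );
</script>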

Of course, the placement of the help text and dynamic descriptions, and which headings we use, is all flexible depending on what is possible for the interaction (the interactive hands).

It is difficult to know exactly where to place the surrounding State Descriptions, what wording they should use, and how they should be coded without thinking through how the interaction will work.

I hope the different approaches make some kind of sense.

zepumph (Member) commented May 12, 2020

The two approaches make sense to me! I toyed around with the second one, because using as much native html as possible seems valuable. The issue I had was with this line:

  1. Use custom interaction code in order to address focus issues and to be able to use both hands at the same time.

I agree that the ability to use both hands at the same time is number (1) in my book, but with the outlined approach I can't figure out a way to do it. More and more I'm trying to think of a parent div as the component, as opposed to having two children. I don't think of "focus" as something we can manipulate with custom code. AFAIK, when focus is on an element, there is no easy way (or way comfortable to the user) to change that focus or have it apply to both elements.

Instead I looked at potentially having a listener on the parent, with the arrow keys manipulating the right input and the "W" and "S" keys manipulating the left input. In this snippet, note that we never get 'input' events on the sliders, and since they never have focus, we don't get the benefits (AFAIK) of aria-valuetext. Basically this feels like a totally custom solution with two input elements thrown in instead of divs, even though that may not buy us much.
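A minimal sketch of that kind of parent listener (not the original snippet; key names, step size, and ids are assumptions):

<script>
  // Hypothetical sketch: a single keydown listener on a wrapper div moves
  // both native sliders programmatically. Note that setting .value from
  // script does NOT fire 'input' events, and since the inputs never have
  // focus, aria-valuetext is never announced - the problem described above.
  const container = document.getElementById( 'handsContainer' ); // assumed wrapper with tabindex="0"
  const leftHand = document.getElementById( 'leftHand' );
  const rightHand = document.getElementById( 'rightHand' );
  const STEP = 1;

  container.addEventListener( 'keydown', event => {
    if ( event.key === 'ArrowUp' ) { rightHand.value = Number( rightHand.value ) + STEP; }
    else if ( event.key === 'ArrowDown' ) { rightHand.value = Number( rightHand.value ) - STEP; }
    else if ( event.key === 'w' ) { leftHand.value = Number( leftHand.value ) + STEP; }
    else if ( event.key === 's' ) { leftHand.value = Number( leftHand.value ) - STEP; }
  } );
</script>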


As for Approach 1

For similar reasons as stated above with focus, it still feels challenging to me to have each hand be its own application role. What if we tried putting that application role on the parent div and seeing if we can explain away the strangeness with help text and aria-live? Something like the sketch below.
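A rough sketch of that variant, just to make it concrete (markup and wording are hypothetical):

<div id="handsContainer" role="application" aria-roledescription="ratio sliders"
     aria-label="Both Hands" tabindex="0">
  <!-- both hands rendered inside; one keydown listener on this div handles
       the arrow keys (right hand) and W/S (left hand) -->
</div>
<p aria-live="polite">Combined object and context responses announced here.</p>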

I'm sorta shrugging over here. Nothing I have said thus far sounds very nice. I look forward to further exploration with @terracoda and @jessegreenberg.

@emily-phet (Author)

@zepumph @terracoda (and @BLFiedler as well), let's discuss in tomorrow's RaP research meeting. How this particular interaction is supported will be very important, so let's go through the intended interaction experience and potential options together.

@terracoda (Contributor)

@zepumph, honestly, I was just assuming that we need to use the application role to be able to operate the two hands at once. That said, I wasn't sure where it should go.

Considering this more deeply, what if we made one hand use the custom interaction (using the WASD keys or just the W/S keys) and implemented the other as a fully native slider?

I'm not sure how that would present to the user, but it might give us access to one set of aria-valuetext?

zepumph (Member) commented May 12, 2020

@jessegreenberg and I talked about this today. We feel like it is sorta an "either/or" kind of situation. We can either use input elements and try to convey semantics through HTML, or we can attempt a custom solution where you can control both at the same time. We can't see a way to implement a combination of the two.

Before tomorrow I am going to try to implement a fully custom, "combined" approach where you can control both at once. I won't take away the two keyboard drag listeners though. This will allow us to compare the two strategies, and it will help facilitate discussion.

zepumph (Member) commented May 13, 2020

On master I made a first pass at keyboard control for the entire ratio. I will try to come back to refine it this evening, but you can play with it on phettest now if you want.

zepumph added a commit that referenced this issue May 13, 2020
zepumph (Member) commented May 13, 2020

You can play around with this in https://phet-dev.colorado.edu/html/ratio-and-proportion/1.0.0-dev.19/phet/ratio-and-proportion_en_phet.html. To be clear, this version has two different ways of handling the keyboard interaction for the ratio; both won't stick around. The main point is to play around with the "combined" input (when the focus highlight is around the entire ratio). Then you can use the up/down arrow keys for the right hand, and the W/S keys for the left one.

@emily-phet (Author)

Just a note on this - I love the idea of this up arrow / down arrow and W/S interaction, but I'm very concerned about so many keys (four different ones) being required to interact with the heart of the sim. My concerns are that it may be verbose or challenging to make the interaction clear non-visually, and also that for those with mobility impairments it may be very difficult or impossible to enact.

Let's talk about the various ideas for alternative input later today in our meeting, but at the moment I'm really hopeful we can figure out a way that is more like "two sliders" plus a way for simultaneous interaction, so that at least some interaction is possible for those using only arrow keys or familiar slider-like input.

zepumph (Member) commented May 13, 2020

Today our RaP discussion focused on this issue. We looked at the joint interaction and compared it to a potentially simpler solution (from both the input and the description perspective) of interacting with only one OR the other hand.

We discussed potentially including both interactions. In this case the joint interaction would come after the two sliders in focus order, and could have different/supplementary/summary descriptions when interacting with both hands at the same time.

Action steps:

emily-phet (Author) commented May 18, 2020

@zepumph Thanks for the version with the left, right, both tab order.

Observation: After playing with this version for a while, I find myself focused on the keypress action (for the 1:2 ratio, I'm focused on "left, right, right; left, right, right"). I wonder if this will emphasize (implicitly or explicitly) that the distance between the objects is maintained rather than that it widens, particularly for non-visual learners. I would love to work directly with learners to understand how they interpret this.

@BLFiedler and I discussed a proposed idea (I think partially described in #29, but in conversation our thoughts went beyond what is written... Edited to add: there's also a nice idea proposed in #43) for an interaction focused on changing the distance between the objects. My concern with that approach has been that it likely involves adding a new visual representation (a bar or something like it indicating "the distance between" - @BLFiedler, "Changing the distance between" is perhaps a good paper title!), which I'd been trying to avoid if possible. But if we feel the auditory display (sound and description) can represent this well, we may need to go there.

I wonder if, in a different version of the interaction where W/S moves both markers up/down and the arrow keys increase/decrease the distance between them, we might get a more pedagogically desirable interaction. The hope would be that the learner interaction becomes "up, bigger; up, bigger, bigger; up, bigger, bigger, bigger". In a sense, the learner would be controlling the bar representation (@BLFiedler, very similar to Christina's increasing/decreasing line segments(!) we learned about last week).

Let's discuss tomorrow (Tuesday) in both the (likely) new design meeting, and in the sound meeting.

@zepumph This may be too crazy - but would it be possible for you to add a new interactive visual representation, so the tab order would go left hand, right hand, both, bar? W/S would move the bar (and hands) up, and the arrow keys would increase the length of the bar (making it taller = moving the right hand up). If this seems like too much for today, that's ok. If a quick conversation would help, let me know and we can have a quick phone call.

@emily-phet (Author)

Argh - trying to mock this up, and I keep running into the issue where (when playing with the current prototype with the newly proposed interaction in mind) I still get scenarios where the same keyboard presses (e.g., left, right, right; left, right, right) are the result. That's not what I want! @BLFiedler, if my comment above makes sense to you, could you try actually writing out a person's interaction pattern, and see if the keypress pattern ends up being constant (e.g., left, right, right; left, right, right) instead of something like "up, bigger; up, bigger, bigger; up, bigger, bigger, bigger"? I have to step away now, but I think I'm missing something that maybe you can catch.

@emily-phet (Author)

Here's the visual mockup I was working on.
[Screenshot: Screen Shot 2020-05-18 at 7.09.48 AM - visual mockup]

brettfiedler (Member) commented May 18, 2020

Okay - I think I understand what you're going for (edit: I didn't, but I do now).

1.) Starting screen (success). User tabs 4 times to engage "Bar".
2.) User uses W/S to shift both hands and hears decreasing "in-proportion" tones no matter how they move.
3.) User is out of proportion and uses Up/Down to move the right hand and encounters the perfect "in-proportion" tone.
4.) User combines W/S shift of both hands with Up/Down of right hand to maintain proportion (cueing Strings). Perfect use is: W x1, Up x2 (effectively moving Left up x1, Right up x2), or vice versa - same as the current implementation.

Does that seem right? @emily-phet

Regardless of what we do, I am wondering if this focus mode would benefit from a slightly modified or different set of auditory cues...? I'll need to think on that.

Observation: After playing with this version for a while, I find myself focused on the keypress action (for the 1:2 ratio, I'm focused on "left, right, right; left, right, right"). I wonder if this will emphasize (implicitly or explicitly) that the distance between the objects is maintained rather than that it widens, particularly for non-visual learners. I would love to work directly with learners to understand how they interpret this.

I think that's possible, but as long as they understand that every keypress moves a hand a constant amount, I believe it harkens back to the "driver" scenario: if one hand moves once and the other moves twice, they must be farther apart. It's possible this may benefit from a specific auditory cue for the distance between them (I guess I'm thinking of the force sound in GFL:B... or maybe the mass sound, for something short but informative?).

@brettfiedler (Member)

Probably a crazy idea - what if the grid tick noise was pitched based on the distance between the hands? You'd only hear it occasionally, which may make it either confusing or less annoying, or both. It'd need to exist even when the grid is not on the screen (invisible grid?). It might be distracting as they try to figure out when/why it happens, if it doesn't feel super intuitive.
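A minimal sketch of what distance-pitched ticks could look like with Web Audio (the frequency range and mapping are assumptions, just to make the idea concrete):

<script>
  // Hypothetical sketch: play a short tick whose pitch rises as the hands
  // get farther apart. distance is assumed normalized to [0, 1].
  const audioContext = new AudioContext();

  function playDistanceTick( distance ) {
    const oscillator = audioContext.createOscillator();
    const gain = audioContext.createGain();

    // Assumed mapping: 0 → 220 Hz, 1 → 880 Hz (two octaves).
    oscillator.frequency.value = 220 + distance * 660;
    gain.gain.value = 0.2;

    oscillator.connect( gain );
    gain.connect( audioContext.destination );
    oscillator.start();
    oscillator.stop( audioContext.currentTime + 0.05 ); // short tick
  }

  // e.g., called whenever a hand crosses a (possibly invisible) grid line:
  // playDistanceTick( 0.5 );
</script>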

zepumph (Member) commented May 19, 2020

To reiterate yet again so that I can make sure I understand: the interaction is basically like the right hand is attached to the left one. W/S effectively moves the left hand, but the right one comes along because it is attached. Then the up/down arrow keys affect the distance between the two hands, so they move the right hand, but in effect it looks like they are changing the size of the visual bar we will add between the two. I don't quite understand what this would look like in correlation with snap-to-grid.
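If I have that right, the mapping might look something like this sketch (variable names and step size are assumptions):

<script>
  // Hypothetical sketch of the "attached hands" model: W/S translates both
  // hands together, while the arrow keys change the distance between them
  // (visually, the height of the bar).
  let leftHandY = 0;  // height of the left hand
  let distance = 2;   // right hand sits at leftHandY + distance
  const STEP = 1;

  function handleKey( key ) {
    if ( key === 'w' ) { leftHandY += STEP; }            // both hands move up together
    else if ( key === 's' ) { leftHandY -= STEP; }       // both hands move down together
    else if ( key === 'ArrowUp' ) { distance += STEP; }  // bar gets taller: right hand up
    else if ( key === 'ArrowDown' ) { distance = Math.max( 0, distance - STEP ); }
    return { left: leftHandY, right: leftHandY + distance };
  }

  // For the 1:2 ratio, the success pattern becomes "up, bigger"
  // ( handleKey( 'w' ); handleKey( 'ArrowUp' ); ) rather than "left, right, right".
</script>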

brettfiedler (Member) commented May 19, 2020

Just to sum up a big picture statement on our recent Design meeting:

We recognize that there is a difficulty when interacting with RaP through keypresses: the sense that something is changing is not reflected in the interaction pattern for success (e.g., Left up x1, Right up x2). This interaction pattern DOES capture the CONSTANT RATE that you need to apply to the left and right hands when moving continuously (e.g., via touchscreen), but it does not inherently capture the "higher, bigger" multiplicative sense.

A few ideas were tossed out:
1.) "Controlling the distance between" by manipulating a bar (through keypress) that maps to the hand position and spacing.
2.) Velocity controls for continuous movement (instead of grid snap on keypress) on long press of a key (W/S or Up/Down).
3.) Sliders that explicitly control the hand spacing, hand positions and perhaps individual hand(s).

zepumph removed their assignment May 22, 2020
@emily-phet (Author)

See issue #62 for next steps in addressing alternative input design challenges.
