Refining alternative input #44
@terracoda @zepumph Is there any way that you can imagine to have the hand markers actually "be sliders" for those interacting using a focus-based mechanism (e.g., a screen reader)? Could we instrument them with the same behaviors as a slider and have the screen reader read them out as a slider? I think it might be OK if the markers did not move horizontally at all when focused, and only responded as if any arrow press were incrementing/decrementing the height value.
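For reference, the standard WAI-ARIA slider keyboard behavior being asked about can be sketched in a few lines. This is a minimal, hypothetical illustration (the element id, range, and step values are assumptions, not the sim's actual code), with the stepping logic factored out as a pure function:

```javascript
// Core of the WAI-ARIA slider keyboard pattern, as a pure function.
function stepValue(value, key, { min = 0, max = 10, step = 1 } = {}) {
  switch (key) {
    case 'ArrowUp':
    case 'ArrowRight':
      return Math.min(max, value + step); // increment, clamped to max
    case 'ArrowDown':
    case 'ArrowLeft':
      return Math.max(min, value - step); // decrement, clamped to min
    case 'Home':
      return min;
    case 'End':
      return max;
    default:
      return value; // ignore other keys
  }
}

// Browser-only wiring for a focusable div announced as a slider, e.g.:
// <div id="leftHand" role="slider" tabindex="0" aria-label="Left Hand"
//      aria-valuemin="0" aria-valuemax="10" aria-valuenow="3"></div>
if (typeof document !== 'undefined') {
  const el = document.getElementById('leftHand'); // hypothetical id
  el.addEventListener('keydown', event => {
    const now = Number(el.getAttribute('aria-valuenow'));
    const next = stepValue(now, event.key);
    if (next !== now) {
      el.setAttribute('aria-valuenow', String(next));
      event.preventDefault(); // keep arrow keys from scrolling the page
    }
  });
}
```

With this pattern the marker never has to move horizontally; only `aria-valuenow` changes on each arrow press, which is exactly what the screen reader announces.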
Hmm, @emily-phet, that's an interesting question. For each of the freely moving objects we have implemented thus far (balloon, book, ruler), we have used a two-step interaction: a native grab button that launches a custom move interaction. This two-step approach is likely cumbersome and not ideal for two objects that we want to move at the same time; we wouldn't want to force users to do two grabs before exploring. Currently, we have two freely movable hands, which are not prefixed by a native grab button. The interaction itself will be difficult to understand unless it is communicated as a native interaction, or possibly unless the name of the object maps so perfectly to a native interaction that the learner knows exactly what to do, e.g., "Slide left hand" and "Slide right hand". To address your question, @emily-phet, I see two possible approaches; I would need @zepumph to chime in on whether both are actually feasible. I want to ignore the current PDOM implementation (i.e., headings for Left Hand and Right Hand).

Approach 1: Fully custom code techniques - all ARIA
```html
<h2>Play Area</h2>
<h3>My Hands / Hand Position??</h3>
<p>DYNAMIC STATE DESCRIPTION (qualitative or quantitative) of the hand situation.</p>
<div aria-roledescription="slider" aria-label="Left Hand" role="application" aria-orientation="vertical">
  Left Hand
</div>
<div aria-roledescription="slider" aria-label="Right Hand" role="application" aria-orientation="vertical">
  Right Hand
</div>
<p>Move hands up or down independently or at the same time to explore/play.</p>
```

Approach 2: Combination of custom & native code techniques
```html
<p>DYNAMIC Screen Summary (qualitative or quantitative) of the hand situation.</p>
<h2>Play Area</h2>
<label for="leftHand">Left Hand</label>
<input id="leftHand" type="range" aria-orientation="vertical"
       aria-valuetext="SOME relevant distance or position value for left hand">
<label for="rightHand">Right Hand</label>
<input id="rightHand" type="range" aria-orientation="vertical"
       aria-valuetext="SOME relevant distance or position value for right hand">
<p>Move hands independently or at the same time to explore/play.</p>
```

Of course, the placement of the help text and dynamic descriptions, and which headings we use, are all flexible depending on what is possible for the interaction (the interactive hands). It is difficult to know exactly where to place the surrounding State Descriptions, what words they should use, and how they should be coded without thinking about how the interaction will work. I hope the different approaches make some kind of sense.
The two approaches make sense to me! I toyed around with the second one, because using as much native HTML as possible seems valuable. The issue I had was with this line:
I agree that the ability to use both hands at the same time is number (1) in my book, but with the outlined approach, I can't figure out a way to do it. Instead I looked at potentially having a listener on the parent, with the arrow keys manipulating the right input and the "W" and "S" keys manipulating the left input. In this snippet, note that we never get 'input' events on the sliders, and since they never have focus, we don't have the benefits (AFAIK) of aria-valuetext. Basically this feels a bit like a totally custom solution with two input elements thrown in instead of divs, even though that may not buy us much. As for Approach 1: for similar reasons as stated above with focus, it still feels challenging to me to have each hand be its own application role. What if we tried putting the application role on the parent div, and seeing if we can explain away the strangeness with help text and aria-live? I'm sorta shrugging over here; nothing I have said thus far sounds very nice. I look forward to further exploration with @terracoda and @jessegreenberg.
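The parent-listener idea described above can be sketched roughly as follows. This is a hypothetical illustration, not the sim's code: the container id, key-to-hand mapping, and step size are all assumptions, and the routing logic is pulled out as a pure function:

```javascript
// Route one keydown to the appropriate hand: arrow keys drive the right
// hand, W/S drive the left, so both can be moved without shifting focus.
// hands is { left: number, right: number }; returns a new state object.
function routeKey(key, hands, step = 1) {
  const next = { ...hands };
  switch (key) {
    case 'ArrowUp':   next.right += step; break;
    case 'ArrowDown': next.right -= step; break;
    case 'w': case 'W': next.left += step; break;
    case 's': case 'S': next.left -= step; break;
  }
  return next;
}

// Browser-only wiring on a hypothetical wrapping element.
if (typeof document !== 'undefined') {
  const parent = document.getElementById('handsContainer'); // hypothetical id
  let hands = { left: 0, right: 0 };
  parent.addEventListener('keydown', event => {
    hands = routeKey(event.key, hands);
    // Caveat from the discussion above: setting input.value
    // programmatically does not fire 'input' events, and the inputs never
    // have focus, so aria-valuetext would not be announced.
  });
}
```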
@zepumph @terracoda (and @BLFiedler as well), let's discuss in tomorrow's RaP research meeting. How this particular interaction is supported will be very important, so let's go through the intended interaction experience and potential options together.
@zepumph, honestly, I was just assuming that we need to use the application role to be able to operate the two hands at once. That said, I wasn't sure where it should go. Considering this more deeply, what if we made one hand use the custom interaction (using the WASD keys, or just the W/S keys) and implemented the other as a fully native slider? I'm not sure how that would present to the user, but it might give us access to one set of aria-valuetext?
@jessegreenberg and I talked about this today. We feel like it is sorta an "either or" kind of situation: we can use one approach or the other. Before tomorrow I am going to try to implement a fully custom, "combined" approach where you can control both at once. I won't take away the two keyboard drag listeners, though. This will allow us to compare the two strategies, and it will help facilitate discussion.
On master I made a first pass at keyboard control for the entire ratio. I will try to come back to refine it this evening, but you can play with it on phettest now if you want. |
You can play around with this at https://phet-dev.colorado.edu/html/ratio-and-proportion/1.0.0-dev.19/phet/ratio-and-proportion_en_phet.html. To be clear, this version has two different ways of handling the keyboard interaction for the ratio; both won't stick around. The main point is to play around with the "combined" input (when the focus highlight is around the entire ratio). There you can use the up/down arrow keys for the right ratio hand, and the W/S keys for the left one.
Just a note on this - I love the idea of the up arrow / down arrow and W/S interaction, but I'm very concerned about requiring so many key presses (four different keys) to interact with the heart of the sim. My concerns are that it may be verbose or challenging to make the interaction clear non-visually, and also that for those with mobility impairments it may be very difficult or impossible to enact. Let's talk about the various ideas for alternative input later today in our meeting, but at the moment I'm really hopeful we can figure out a way that is more like "two sliders" plus a way for simultaneous interaction; that way at least some interaction is possible for those using only arrow keys or familiar slider-like input.
Today our RaP discussion was focused on this issue. We looked at the joint interaction and compared it to a potentially simpler solution (from both the input and description perspectives) of only interacting with one OR the other hand. We discussed potentially including both interactions as possibilities. In this case the joint interaction would come after the two sliders in focus order, and could have different/supplementary/summary description when interacting with both at the same time. Action steps:
@zepumph Thanks for the version with the left, right, both tab order. Observation: after playing with this version for a while, I find myself focused on the keypress action (for the 1:2 ratio, I'm focused on "left, right, right; left, right, right"). I wonder if this will emphasize (implicitly or explicitly) that the distance between the objects is maintained rather than that the distance widens, particularly for non-visual learners. I would love to work directly with learners to understand how they interpret this. Let's discuss tomorrow (Tuesday) in both the (likely) new design meeting and in the sound meeting. @zepumph This may be too crazy - but would it be possible for you to add a new interactive visual representation, so the tab order would go left hand, right hand, both, bar? W/S would move the bar (and hands) up, and the arrow keys would increase the length of the bar (making it taller = moving the right hand up). If this seems too much for today, that's OK. If a quick conversation would help, let me know and we could have a quick phone call.
Argh - trying to mock this up, I keep running into the issue where (when playing with the current prototype with the newly proposed interaction in mind) I still get scenarios where the same keyboard presses (e.g., left, right, right; left, right, right) are the result. That's not what I want! @BLFiedler, if my comment above makes sense to you, could you try actually writing out a person's interaction pattern, and see whether the keypress pattern ends up being constant (e.g., left, right, right; left, right, right) instead of something like "up, bigger; up, bigger, bigger; up, bigger, bigger, bigger"? I have to step away now, but I think I'm missing something that maybe you can catch.
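One way to write out that interaction pattern is to simulate it. Below is a hypothetical model of the proposed "bar" interaction (assuming "up" moves both hands +1 together and "bigger" widens the gap +1, on an integer grid), walking a 1:2 ratio through its snap positions:

```javascript
// Apply one press in the proposed "bar" model:
// 'up' moves both attached hands together, 'bigger' widens only the gap.
function apply(state, press) {
  if (press === 'up') return { left: state.left + 1, right: state.right + 1 };
  if (press === 'bigger') return { left: state.left, right: state.right + 1 };
  return state;
}

// Record the presses needed to move between two on-ratio positions,
// moving the base first, then widening the gap.
function pressesBetween(from, to) {
  const presses = [];
  let s = from;
  while (s.left < to.left) { s = apply(s, 'up'); presses.push('up'); }
  while (s.right < to.right) { s = apply(s, 'bigger'); presses.push('bigger'); }
  return presses;
}
```

Walking (1,2) → (2,4) → (3,6) with this model, each step comes out as "up, bigger": the gap only needs to grow by one grid line per step, so the keypress pattern is constant again, which matches the concern above.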
Okay - I think I understand what you're going for (edit: I didn't, but do now). 1.) Starting screen (success). User tabs 4 times to engage "Bar". Does that seem right? @emily-phet Regardless of what we do, I am wondering if this focus mode would benefit from a slightly modified or a different set of auditory cues...? I'll need to think on that.
I think that's possible, but as long as they understand that every keypress moves it a constant amount, then I believe it harkens to the "driver" scenario for them to understand that if one moves once and the other moves twice, they must be farther apart. It's possible this may benefit from a specific auditory cue for the distance they are apart (I guess I'm thinking of the force sound in GFL:B... or maybe the mass sound, for something short but informative...?)
Probably a crazy idea - what if the grid tick noise were pitched based on the distance between the hands? You'd only hear it occasionally (which may make it either confusing or less annoying, or both). It'd need to exist even when the grid was not on the screen (an invisible grid?). It might be distracting as they try to figure out when/why it happens, if it doesn't feel super intuitive.
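One sketch of such a distance-to-pitch mapping, purely as a thought experiment (the frequency range and the choice of an exponential map are assumptions, not a decided sound design):

```javascript
// Map hand distance to a tick pitch with an exponential (equal-ratio) map,
// so equal changes in distance are heard as equal musical intervals.
// fLow/fHigh bound the frequency range; both are illustrative defaults.
function distanceToPitch(distance, maxDistance, fLow = 220, fHigh = 880) {
  const t = Math.min(1, Math.max(0, distance / maxDistance)); // clamp to [0,1]
  return fLow * Math.pow(fHigh / fLow, t); // fLow at 0, fHigh at maxDistance
}
```

With these defaults, hands at half the maximum distance tick an octave above the minimum pitch (440 Hz), which might make the "gap widening" audible even when the grid is invisible.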
To reiterate yet again, so that I can make sure I understand it: the interaction is basically like the right hand is attached to the left one. W/S effectively moves the left hand, and the right one comes along because it is attached. The arrow keys then affect the distance between the two hands, so they move the right hand, but in effect it looks like they change the size of the visual bar we will add between the two. I don't quite understand what this would look like in relation to snap-to-grid.
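For the snap-to-grid question, one conceivable option (a sketch only, under assumed grid spacing, not a design decision from the thread) is to keep the model state as "base + gap" and snap each hand independently when positions are read out:

```javascript
// Snap a continuous value to the nearest grid line; spacing is an
// assumed parameter, not the sim's actual grid configuration.
function snapToGrid(value, spacing = 1) {
  return Math.round(value / spacing) * spacing;
}

// "Attached hands" model: left hand sits at the base, right hand at
// base + gap (the bar length); both are snapped only for display/readout.
function handPositions(base, gap, spacing = 1) {
  return {
    left: snapToGrid(base, spacing),
    right: snapToGrid(base + gap, spacing),
  };
}
```

This keeps W/S (changing `base`) and the arrow keys (changing `gap`) independent in the model, while the on-screen hands still land on grid lines.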
Just to sum up a big-picture statement from our recent design meeting: we recognize that there is a difficulty when interacting with RaP through keypresses in recognizing that something is changing, since the change is not reflected in the interaction pattern for success (e.g., left up x1, right up x2). This interaction pattern DOES capture the CONSTANT RATE that you need to apply to the left and right hands when moving continuously (e.g., via touchscreen), but does not inherently capture the "higher, bigger" multiplicative sense. A few ideas were tossed out:
See issue #62 for next steps in addressing alternative input design challenges. |
@zepumph @BLFiedler
The use of arrow keys to control the hands needs the following refinements:
Ideally, moving the hands up and down "feels like" moving a slider as much as possible. Particularly from a non-visual perspective, the side-to-side motion is irrelevant, and consistent, repeatable location of the markers is important (like a slider!).
Also note - there are ideas underway for exploring simultaneous movement of the two hands with the keyboard - see #29. There may ultimately be a need for new keyboard shortcuts and/or a new focusable "object" in the navigation order (which would likely come just after the right hand in focus order: left hand, right hand, both hands...).
Ideally this would be ready for the sound design discussion on Tuesday, as we'll be discussing potential boundary sounds, crossing-grid-line sounds, etc. and being able to feel this out with alternative input would be helpful.