Voicing for a Node should stop when it is removed from a display or made invisible #1300
Comments
@jessegreenberg and I discussed this today, and took some notes on a potential solution (in the patch below). We feel like phetsims/utterance-queue#37 is a refactor that must be done before this one, so that we can reduce the number of queues that need to support interrupting (for example, on visibility change). On hold for now.
Index: scenery/js/accessibility/voicing/voicingManager.js
===================================================================
diff --git a/scenery/js/accessibility/voicing/voicingManager.js b/scenery/js/accessibility/voicing/voicingManager.js
--- a/scenery/js/accessibility/voicing/voicingManager.js (revision d6e1062e5d866b9463a8510aeaa93e79b76be9a6)
+++ b/scenery/js/accessibility/voicing/voicingManager.js (date 1634744978898)
@@ -413,6 +413,18 @@
// keep a reference to WebSpeechUtterances in Safari, so the browser doesn't dispose of it before firing, see #215
this.safariWorkaroundUtterances.push( speechSynthUtterance );
+ utterance.interruptEmitter.addListener( () => {
+ /*
+ if not yet speaking:
+   remove from voicingQueue
+ if speaking:
+   interrupt the synth
+
+ TODO: HOW TO SUPPORT TOGGLING VISIBILITY, should it clear?
+ TODO: Do we care about interrupting in UtteranceQueue?
+ */
+ } );
+
const startListener = () => {
this.startSpeakingEmitter.emit( stringToSpeak, utterance );
this.currentlySpeakingUtterance = utterance;
Index: scenery/js/display/Instance.js
===================================================================
diff --git a/scenery/js/display/Instance.js b/scenery/js/display/Instance.js
--- a/scenery/js/display/Instance.js (revision d6e1062e5d866b9463a8510aeaa93e79b76be9a6)
+++ b/scenery/js/display/Instance.js (date 1634744825468)
@@ -1684,6 +1684,10 @@
this.node.filterChangeEmitter.addListener( this.markRenderStateDirtyListener );
this.node.clipAreaProperty.lazyLink( this.markRenderStateDirtyListener );
this.node.instanceRefreshEmitter.addListener( this.markRenderStateDirtyListener );
+
+
+ // TODO: Where to emit the interrupt?
+ this.node.voicingInterruptEmitter.emit();
}
}
Index: scenery/js/accessibility/voicing/Voicing.js
===================================================================
diff --git a/scenery/js/accessibility/voicing/Voicing.js b/scenery/js/accessibility/voicing/Voicing.js
--- a/scenery/js/accessibility/voicing/Voicing.js (revision d6e1062e5d866b9463a8510aeaa93e79b76be9a6)
+++ b/scenery/js/accessibility/voicing/Voicing.js (date 1634744825449)
@@ -125,6 +125,17 @@
// @private {Function(event):} - called when this node is focused.
this._voicingFocusListener = this.defaultFocusListener;
+ this.interruptEmitter = new Emitter(); // per Node
+ // every Voicing Node gets its own Utterance, so that we can wire up the interruptEmitter to it.
+ this.utterance = new Utterance( {
+   interruptEmitter: this.interruptEmitter
+ } );
+
+ this.changedInstanceEmitter.addListener( () => {
+   if ( invisible ) { // pseudocode - stands for "this Node is no longer visible/voicingVisible"
+     this.interruptEmitter.emit( this.utterance );
+   }
+ } );
// @private {Object} - Input listener that speaks content on focus. This is the only input listener added
// by Voicing, but it is the one that is consistent for all Voicing nodes. On focus, speak the name, object
// response, and interaction hint.
@@ -281,13 +292,14 @@
// {Utterance|null} - The utterance to use if you want this response to be more controlled in the
// UtteranceQueue.
- utterance: null
+ utterance: this.utterance
}, options );
let response = responseCollector.collectResponses( options );
if ( options.utterance ) {
options.utterance.alert = response;
+ options.utterance.interruptEmitter = this.interruptEmitter;
response = options.utterance;
}
this.speakContent( response );
Index: utterance-queue/js/Utterance.js
===================================================================
diff --git a/utterance-queue/js/Utterance.js b/utterance-queue/js/Utterance.js
--- a/utterance-queue/js/Utterance.js (revision e4376c1507a2d12406cd61b3d702bff53cc99fdf)
+++ b/utterance-queue/js/Utterance.js (date 1634744825479)
@@ -74,7 +74,10 @@
// Options specific to the announcer of the Utterance. See supported options in your specific announcer's
// announce() function (for example AriaLiveAnnouncer.announce())
- announcerOptions: {}
+ announcerOptions: {},
+
+ // Announcers should listen to this and interrupt this utterance when called.
+ interruptEmitter: null
}, options );
assert && assert( typeof options.loopAlerts === 'boolean' );
From phetsims/joist#752 (comment):
@jessegreenberg took this and discussed a completely different voicing paradigm in which you alert voicing through a Node. Then you can automatically use that Node's visibility to determine whether it should be spoken. We would need an additional control like voicingVisible (similar to pdomVisible) to handle the dialog case, since it isn't ACTUALLY invisible, just behind a modal pane. We aren't going to dive into that right away, but wanted to note that we were thinking about it. In that other issue, we can get over halfway there by just using voicing priority on each Utterance.
Here are my thoughts on this now:
We presumably want to check whether the Node can speak in UtteranceQueue, before requesting speech from the Announcer. I am trying out a way to do this by adding a check like:
if ( !this._muted && this.subclassCheck( utterance ) && utterance.predicate() && utterance.getAlertText( this.announcer.respectResponseCollectorProperties ) !== '' ) {
this.announcer.announce( utterance, utterance.announcerOptions );
}
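For reference, here is a rough sketch of what that check could look like in a voicing-specific UtteranceQueue. This is only an illustration: subclassCheck is just the working name from the snippet above, voicingNode is a hypothetical back-reference from the Utterance to the Node that created it, and canSpeakProperty anticipates the Property discussed later in this issue; none of this is existing API.
// Hypothetical sketch: a voicing-specific UtteranceQueue could override a hook like this so that
// speech is skipped when the Voicing Node associated with the Utterance cannot currently speak.
import UtteranceQueue from '../../utterance-queue/js/UtteranceQueue.js'; // illustrative path

class VoicingUtteranceQueue extends UtteranceQueue {

  // The base implementation in UtteranceQueue would simply return true.
  subclassCheck( utterance ) {

    // If the Utterance is not associated with a Voicing Node, there is nothing to gate on.
    return !utterance.voicingNode || utterance.voicingNode.canSpeakProperty.value;
  }
}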
Index: utterance-queue/js/UtteranceQueue.js
===================================================================
diff --git a/utterance-queue/js/UtteranceQueue.js b/utterance-queue/js/UtteranceQueue.js
--- a/utterance-queue/js/UtteranceQueue.js (revision dac6515d57527ff041fa41c4d1d8ada25b80973b)
+++ b/utterance-queue/js/UtteranceQueue.js (date 1636741809242)
@@ -407,7 +407,7 @@
const utterance = utteranceWrapper.utterance;
// only speak the utterance if not muted and the Utterance predicate returns true
- if ( !this._muted && utterance.predicate() && utterance.getAlertText( this.announcer.respectResponseCollectorProperties ) !== '' ) {
+ if ( !this._muted && this.subclassCheck( utterance ) && utterance.predicate() && utterance.getAlertText( this.announcer.respectResponseCollectorProperties ) !== '' ) {
this.announcer.announce( utterance, utterance.announcerOptions );
}
Index: scenery/js/accessibility/voicing/TrailCanSpeakTracker.js
===================================================================
diff --git a/scenery/js/accessibility/voicing/TrailCanSpeakTracker.js b/scenery/js/accessibility/voicing/TrailCanSpeakTracker.js
new file mode 100644
--- /dev/null (date 1636741470356)
+++ b/scenery/js/accessibility/voicing/TrailCanSpeakTracker.js (date 1636741470356)
@@ -0,0 +1,70 @@
+// Copyright 2021, University of Colorado Boulder
+
+/**
+ * @author Jesse Greenberg (PhET Interactive Simulations)
+ */
+
+import Property from '../../../../axon/js/Property.js';
+import TinyProperty from '../../../../axon/js/TinyProperty.js';
+import scenery from '../../scenery.js';
+
+class TrailCanSpeakTracker {
+ constructor( trail ) {
+
+ // @private {Trail}
+ this.trail = trail;
+
+ // @public {TinyProperty.<boolean>} - True if all Nodes in the Trail are both visible AND have voicingVisible
+ this.trailCanSpeakProperty = new TinyProperty( this.trail.isVisible() );
+
+ // Hook up listeners to each Node in the trail, so we are notified of changes. Will be removed on disposal.
+ this.multilinks = [];
+ for ( let j = 0; j < this.trail.length; j++ ) {
+ const node = trail.nodes[ j ];
+
+ let multilink = null;
+
+ if ( node.isVoicing ) {
+
+ // we have another Property to add to the multilink
+ multilink = Property.multilink( [ node.visibleProperty, node._voicingVisibleProperty ], ( visible, voicingVisible ) => {
+ this.trailCanSpeakProperty.set( this.trail.isVisible() && this.isTrailVoicingVisible( this.trail ) );
+ } );
+ }
+ else {
+ multilink = Property.multilink( [ node.visibleProperty ], visible => {
+ this.trailCanSpeakProperty.set( this.trail.isVisible() );
+ } );
+ }
+ this.multilinks.push( multilink );
+ }
+ }
+
+ /**
+ * Returns whether the Trail indicates that voicing is visible for a Node. Only Nodes that compose
+ * Voicing can be marked voicingVisible; a Node that does not compose Voicing does not affect the result.
+ * @private
+ * @param trail
+ * @returns {boolean}
+ */
+ isTrailVoicingVisible( trail ) {
+ let i = trail.nodes.length;
+ while ( i-- ) {
+ if ( trail.nodes[ i ].isVoicing && !trail.nodes[ i ].isVoicingVisible() ) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ /**
+ * @public
+ */
+ dispose() {
+
+ // TODO
+ }
+}
+
+scenery.register( 'TrailCanSpeakTracker', TrailCanSpeakTracker );
+export default TrailCanSpeakTracker;
Index: scenery/js/accessibility/voicing/Voicing.js
===================================================================
diff --git a/scenery/js/accessibility/voicing/Voicing.js b/scenery/js/accessibility/voicing/Voicing.js
--- a/scenery/js/accessibility/voicing/Voicing.js (revision c6e27d949a0d4d39dec1081ee289d892560493fa)
+++ b/scenery/js/accessibility/voicing/Voicing.js (date 1636742148519)
@@ -21,6 +21,7 @@
* @author Jesse Greenberg (PhET Interactive Simulations)
*/
+import TinyProperty from '../../../../axon/js/TinyProperty.js';
import extend from '../../../../phet-core/js/extend.js';
import inheritance from '../../../../phet-core/js/inheritance.js';
import merge from '../../../../phet-core/js/merge.js';
@@ -29,6 +30,7 @@
import Node from '../../nodes/Node.js';
import scenery from '../../scenery.js';
import InteractiveHighlighting from './InteractiveHighlighting.js';
+import TrailCanSpeakTracker from './TrailCanSpeakTracker.js';
import voicingUtteranceQueue from './voicingUtteranceQueue.js';
// options that are supported by Voicing.js. Added to mutator keys so that Voicing properties can be set with mutate.
@@ -122,6 +124,12 @@
// are assembled into final content for the UtteranceQueue. See ResponsePatternCollection for more details.
this._voicingResponsePatternCollection = ResponsePatternCollection.DEFAULT_RESPONSE_PATTERNS;
+ this._voicingVisibleProperty = new TinyProperty( true );
+
+ this._voicingVisible = true;
+
+ this._trailCanSpeakTrackers = [];
+
// @private {Function(event):} - called when this node is focused.
this._voicingFocusListener = this.defaultFocusListener;
@@ -135,12 +143,52 @@
};
this.addInputListener( this.speakContentOnFocusListener );
+ // whenever instances update, add or remove a TrailCanSpeakTracker
+ this.changedInstanceEmitter.addListener( ( instance, added ) => {
+ if ( added ) {
+ this._voicingVisibleProperty.dispose();
+
+ const newTracker = new TrailCanSpeakTracker( instance.trail );
+ this._trailCanSpeakTrackers.push( newTracker );
+ this._voicingVisibleProperty = DerivedProperty.and( /* all Properties */ );
+ }
+ else {
+
+ // remove the TrailCanSpeakTracker...
+ }
+ } );
+
// support passing options through directly on initialize
if ( options ) {
this.mutate( _.pick( options, VOICING_OPTION_KEYS ) );
}
},
+ /**
+ * TODO
+ * @public
+ *
+ * @param {boolean} visible
+ */
+ setVoicingVisible: function( visible ) {
+ assert && assert( typeof visible === 'boolean' );
+ if ( this._voicingVisible !== visible ) {
+ this._voicingVisible = visible;
+ }
+ },
+ set voicingVisible( visible ) { this.setVoicingVisible( visible ); },
+
+ /**
+ * TODO
+ * @public
+ *
+ * @returns {boolean}
+ */
+ isVoicingVisible: function() {
+ return this._voicingVisible;
+ },
+ get voicingVisible() { return this.isVoicingVisible(); },
+
/**
* Speak all responses assigned to this Node. Options allow you to override a responses for this particular
* speech request. Each response is only spoken if the associated Property of responseCollector is true. If
Index: joist/js/toolbar/VoicingToolbarItem.js
===================================================================
diff --git a/joist/js/toolbar/VoicingToolbarItem.js b/joist/js/toolbar/VoicingToolbarItem.js
--- a/joist/js/toolbar/VoicingToolbarItem.js (revision 737f4633156b955b6c4313f203e9ecf34ced6b1d)
+++ b/joist/js/toolbar/VoicingToolbarItem.js (date 1636674619061)
@@ -106,7 +106,7 @@
const voicingEnabledListener = enabled => {
const alert = enabled ? simVoicingOnString : simVoicingOffString;
- this.alertDescriptionUtterance( alert );
+ // this.alertDescriptionUtterance( alert );
joistVoicingUtteranceQueue.addToBack( alert );
};
voicingManager.mainWindowVoicingEnabledProperty.lazyLink( voicingEnabledListener );
Discussed briefly with @zepumph - Maybe we can put …
@jessegreenberg and I spent all morning on this problem. Here are some notes and a patch. Basically we decided that the best path forward was to use the trail tracker from #1300 (comment) to listen to the visible and voicingVisible Properties (the latter proposed on every Node) of all Nodes up each instance's trail. That allows us to AND all of those Properties together to create a Voicing.canSpeakProperty. This can be passed into a new feature of Utterance called canAnnounceProperties. Here are some of our raw notes: …
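A minimal sketch of that wiring, assuming a Voicing Node with a single Instance and illustrative import paths (the real patch from @jessegreenberg is below):
// Sketch: AND together visibleProperty and voicingVisibleProperty for every Node along one
// Instance's trail, then hand the result to an Utterance through the proposed canAnnounceProperties.
import DerivedProperty from '../../axon/js/DerivedProperty.js'; // illustrative path
import Utterance from '../../utterance-queue/js/Utterance.js'; // illustrative path

const properties = [];
instance.trail.nodes.forEach( node => { // `instance` is assumed to be the Node's single Instance
  properties.push( node.visibleProperty );
  properties.push( node.voicingVisibleProperty ); // proposed Property on every Node, see patch below
} );

// canSpeakProperty is true only while every Node in the trail is both visible AND voicingVisible.
const canSpeakProperty = DerivedProperty.and( properties );

// The UtteranceQueue can then skip this Utterance whenever canSpeakProperty is false.
const utterance = new Utterance( {
  alert: 'only spoken while the Node can speak',
  canAnnounceProperties: [ canSpeakProperty ] // proposed option, see the Utterance.js change below
} );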
Here is the patch from @jessegreenberg:
Index: scenery/js/nodes/Node.js
===================================================================
diff --git a/scenery/js/nodes/Node.js b/scenery/js/nodes/Node.js
--- a/scenery/js/nodes/Node.js (revision 87db93ddccfb7fa435e08c5ee87f7be497439a8f)
+++ b/scenery/js/nodes/Node.js (date 1637006135696)
@@ -372,6 +372,11 @@
// NOTE: This is fired synchronously when the clipArea of the Node is toggled
this.clipAreaProperty = new TinyProperty( DEFAULT_OPTIONS.clipArea );
+ // Thinking about putting this on every Node so that we are able to mark this without mixing all of the
+ // Voicing trait into a Node. For example, we would prefer to mark this as false on a ScreenView to
+ // silence it without needing to mix Voicing into ScreenView.
+ this.voicingVisibleProperty = new TinyProperty( true );
+
// @private - Areas for hit intersection. If set on a Node, no descendants can handle events.
this._mouseArea = DEFAULT_OPTIONS.mouseArea; // {Shape|Bounds2} for mouse position in the local coordinate frame
this._touchArea = DEFAULT_OPTIONS.touchArea; // {Shape|Bounds2} for touch and pen position in the local coordinate frame
Index: utterance-queue/js/UtteranceQueue.js
===================================================================
diff --git a/utterance-queue/js/UtteranceQueue.js b/utterance-queue/js/UtteranceQueue.js
--- a/utterance-queue/js/UtteranceQueue.js (revision 08dcf1886afe6a7137d26bf412bc480abb8840a1)
+++ b/utterance-queue/js/UtteranceQueue.js (date 1637004095208)
@@ -423,7 +423,7 @@
const utterance = utteranceWrapper.utterance;
// only speak the utterance if not muted and the Utterance predicate returns true
- if ( !this._muted && utterance.predicate() && utterance.getAlertText( this.announcer.respectResponseCollectorProperties ) !== '' ) {
+ if ( !this._muted && utterance.canAnnounceProperty.value && utterance.predicate() && utterance.getAlertText( this.announcer.respectResponseCollectorProperties ) !== '' ) {
this.announcer.announce( utterance, utterance.announcerOptions );
}
Index: utterance-queue/js/Utterance.js
===================================================================
diff --git a/utterance-queue/js/Utterance.js b/utterance-queue/js/Utterance.js
--- a/utterance-queue/js/Utterance.js (revision 08dcf1886afe6a7137d26bf412bc480abb8840a1)
+++ b/utterance-queue/js/Utterance.js (date 1636999427613)
@@ -20,6 +20,8 @@
* @author Michael Kauzmann (PhET Interactive Simulations)
*/
+import DerivedProperty from '../../axon/js/DerivedProperty.js';
+import TinyProperty from '../../axon/js/TinyProperty.js';
import validate from '../../axon/js/validate.js';
import merge from '../../phet-core/js/merge.js';
import AlertableDef from './AlertableDef.js';
@@ -62,6 +64,8 @@
// wait in the announcer before being alerted.
predicate: function() { return true; },
+ canAnnounceProperties: [ new TinyProperty( true ) ],
+
// {number} - in ms, how long to wait before the utterance is considered "stable" and stops being
// added to the queue, at which point it will be spoken. Default value chosen because
// it sounds nice in most usages of Utterance with alertStableDelay. If you want to hear the utterance as fast
@@ -98,6 +102,10 @@
// @public (read-only, utterance-queue-internal)
this.predicate = options.predicate;
+ this.canAnnounceProperties = options.canAnnounceProperties;
+
+ this.canAnnounceProperty = DerivedProperty.and( this.canAnnounceProperties );
+
// @public (read-only, utterance-queue-internal) {number} - In ms, how long the utterance should remain in the queue
// before it is read. The queue is cleared in FIFO order, but utterances are skipped until the delay time is less
// than the amount of time the utterance has been in the queue
Index: scenery/js/accessibility/voicing/Voicing.js
===================================================================
diff --git a/scenery/js/accessibility/voicing/Voicing.js b/scenery/js/accessibility/voicing/Voicing.js
--- a/scenery/js/accessibility/voicing/Voicing.js (revision 87db93ddccfb7fa435e08c5ee87f7be497439a8f)
+++ b/scenery/js/accessibility/voicing/Voicing.js (date 1637005882197)
@@ -21,6 +21,10 @@
* @author Jesse Greenberg (PhET Interactive Simulations)
*/
+import DerivedProperty from '../../../../axon/js/DerivedProperty.js';
+import DynamicProperty from '../../../../axon/js/DynamicProperty.js';
+import Property from '../../../../axon/js/Property.js';
+import TinyProperty from '../../../../axon/js/TinyProperty.js';
import extend from '../../../../phet-core/js/extend.js';
import inheritance from '../../../../phet-core/js/inheritance.js';
import merge from '../../../../phet-core/js/merge.js';
@@ -29,6 +33,7 @@
import Node from '../../nodes/Node.js';
import scenery from '../../scenery.js';
import InteractiveHighlighting from './InteractiveHighlighting.js';
+import TrailCanSpeakTracker from './TrailCanSpeakTracker.js';
import voicingUtteranceQueue from './voicingUtteranceQueue.js';
// options that are supported by Voicing.js. Added to mutator keys so that Voicing properties can be set with mutate.
@@ -122,6 +127,12 @@
// are assembled into final content for the UtteranceQueue. See ResponsePatternCollection for more details.
this._voicingResponsePatternCollection = ResponsePatternCollection.DEFAULT_RESPONSE_PATTERNS;
+ // this.canSpeakProperty = new TinyProperty( true );
+ const canSpeakImplementationProperty = new Property( new TinyProperty( true ) );
+ this.canSpeakProperty = new DynamicProperty( canSpeakImplementationProperty );
+
+ this._trailCanSpeakTrackers = [];
+
// @private {Function(event):} - called when this node is focused.
this._voicingFocusListener = this.defaultFocusListener;
@@ -135,12 +146,57 @@
};
this.addInputListener( this.speakContentOnFocusListener );
+ // whenever instances update, add or remove a TrailCanSpeakTracker
+ this.changedInstanceEmitter.addListener( ( instance, added ) => {
+ // this.canSpeakProperty.dispose();
+ canSpeakImplementationProperty.value.dispose();
+
+ if ( added ) {
+
+ const newTracker = new TrailCanSpeakTracker( instance );
+ this._trailCanSpeakTrackers.push( newTracker );
+ }
+ else {
+
+ // remove the TrailCanSpeakTracker and recreate the DerivedProperty
+ const trackerToRemove = _.find( this._trailCanSpeakTrackers, tracker => tracker.instance === instance );
+ this._trailCanSpeakTrackers.splice( this._trailCanSpeakTrackers.indexOf( trackerToRemove ), 1 );
+ }
+
+ canSpeakImplementationProperty.value = DerivedProperty.or( this._trailCanSpeakTrackers.map( tracker => tracker.trailCanSpeakProperty ) );
+ } );
+
// support passing options through directly on initialize
if ( options ) {
this.mutate( _.pick( options, VOICING_OPTION_KEYS ) );
}
},
+ /**
+ * TODO
+ * @public
+ *
+ * @param {boolean} visible
+ */
+ setVoicingVisible: function( visible ) {
+ assert && assert( typeof visible === 'boolean' );
+ if ( this.voicingVisibleProperty.value !== visible ) {
+ this.voicingVisibleProperty.value = visible;
+ }
+ },
+ set voicingVisible( visible ) { this.setVoicingVisible( visible ); },
+
+ /**
+ * TODO
+ * @public
+ *
+ * @returns {boolean}
+ */
+ isVoicingVisible: function() {
+ return this.voicingVisibleProperty.value;
+ },
+ get voicingVisible() { return this.isVoicingVisible(); },
+
/**
* Speak all responses assigned to this Node. Options allow you to override a responses for this particular
* speech request. Each response is only spoken if the associated Property of responseCollector is true. If
Index: scenery-phet/js/buttons/ResetAllButton.js
===================================================================
diff --git a/scenery-phet/js/buttons/ResetAllButton.js b/scenery-phet/js/buttons/ResetAllButton.js
--- a/scenery-phet/js/buttons/ResetAllButton.js (revision 483c50cda2cbe8fe12a72bc258c91669b6cf19ba)
+++ b/scenery-phet/js/buttons/ResetAllButton.js (date 1637004438161)
@@ -78,11 +78,25 @@
super( options );
+
// a11y - when reset all button is fired, disable alerts so that there isn't an excessive stream of alerts
// while many Properties are reset. When callbacks are ended for reset all, enable alerts again and announce an
// alert that everything was reset.
- const resetUtterance = new ActivationUtterance( { alert: resetAllAlertString } );
+ const resetUtterance = new ActivationUtterance( {
+ alert: resetAllAlertString,
+ canAnnounceProperties: [ this.canSpeakProperty ]
+ } );
+
+ window.resetUtterance = resetUtterance;
+ window.resetAllButton = this;
let voicingEnabledOnFire = voicingUtteranceQueue.enabled;
+
+ window.setInterval( () => {
+ this.voicingSpeakFullResponse( {
+ utterance: resetUtterance
+ } );
+ }, 5000 );
+
const ariaEnabledOnFirePerUtteranceQueueMap = new Map(); // Keep track of the enabled of each connected description UtteranceQueue
this.buttonModel.isFiringProperty.lazyLink( isFiring => {
@@ -96,7 +110,9 @@
// restore the enabled state to each utteranceQueue after resetting
voicingUtteranceQueue.enabled = voicingEnabledOnFire;
- this.voicingSpeakFullResponse();
+ this.voicingSpeakFullResponse( {
+ utterance: resetUtterance
+ } );
}
// Handle each connected description UtteranceQueue
Index: scenery/js/accessibility/voicing/TrailCanSpeakTracker.js
===================================================================
diff --git a/scenery/js/accessibility/voicing/TrailCanSpeakTracker.js b/scenery/js/accessibility/voicing/TrailCanSpeakTracker.js
new file mode 100644
--- /dev/null (date 1637005882200)
+++ b/scenery/js/accessibility/voicing/TrailCanSpeakTracker.js (date 1637005882200)
@@ -0,0 +1,58 @@
+// Copyright 2021, University of Colorado Boulder
+
+/**
+ * @author Jesse Greenberg (PhET Interactive Simulations)
+ */
+
+import DerivedProperty from '../../../../axon/js/DerivedProperty.js';
+import scenery from '../../scenery.js';
+
+class TrailCanSpeakTracker {
+ constructor( instance ) {
+
+ this.instance = instance;
+
+ // @private {Trail}
+ this.trail = instance.trail;
+
+ const properties = [];
+ for ( let j = 0; j < this.trail.length; j++ ) {
+ const node = this.trail.nodes[ j ];
+
+ properties.push( node.visibleProperty );
+ properties.push( node.voicingVisibleProperty );
+ }
+
+ this.trailCanSpeakProperty = new DerivedProperty( properties, () => {
+ return this.trail.isVisible() && this.isTrailVoicingVisible( this.trail );
+ } );
+ }
+
+ /**
+ * Returns whether the Trail indicates that voicing is visible for a Node. Only Nodes that compose
+ * Voicing can be marked voicingVisible; a Node that does not compose Voicing does not affect the result.
+ * @private
+ * @param trail
+ * @returns {boolean}
+ */
+ isTrailVoicingVisible( trail ) {
+ let i = trail.nodes.length;
+ while ( i-- ) {
+ if ( trail.nodes[ i ].isVoicing && !trail.nodes[ i ].isVoicingVisible() ) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ /**
+ * @public
+ */
+ dispose() {
+
+ // TODO
+ }
+}
+
+scenery.register( 'TrailCanSpeakTracker', TrailCanSpeakTracker );
+export default TrailCanSpeakTracker;
In the interim, here are the next steps: …
@zepumph @jonathanolson and I met today to discuss the above proposal. Instead of TrailCanSpeakTracker we tried a much less memory intensive approach where we add an Emitter directly to Instance that will let us broadcast changes in voicingVisible.
We also discussed synchronous vs asynchronous updates. Since we are updating with Instances, we are dependent on Display.updateDisplay().
We considered using RendererSummary for this, but decided against it. RendererSummary is good for keeping track of state information in a high performance way, but it works best when looking down a subtree. In our case we want to look up ancestors.
This patch has it working very well. We are getting close to a commit point but I ran out of time today. This is where I will come back tomorrow (a condensed sketch of the counting approach appears after the patch):
Index: scenery/js/nodes/Node.js
===================================================================
diff --git a/scenery/js/nodes/Node.js b/scenery/js/nodes/Node.js
--- a/scenery/js/nodes/Node.js (revision 86b804b3b6dbadc5fe02fcce0370086e1892f732)
+++ b/scenery/js/nodes/Node.js (date 1637346811445)
@@ -372,6 +372,11 @@
// NOTE: This is fired synchronously when the clipArea of the Node is toggled
this.clipAreaProperty = new TinyProperty( DEFAULT_OPTIONS.clipArea );
+ // Thinking about putting this on every Node so that we are able to mark this without mixing all of the
+ // Voicing trait into a Node. For example, we would prefer to mark this as false on a ScreenView to
+ // silence it without needing to mix Voicing into ScreenView.
+ this.voicingVisibleProperty = new TinyProperty( true );
+
// @private - Areas for hit intersection. If set on a Node, no descendants can handle events.
this._mouseArea = DEFAULT_OPTIONS.mouseArea; // {Shape|Bounds2} for mouse position in the local coordinate frame
this._touchArea = DEFAULT_OPTIONS.touchArea; // {Shape|Bounds2} for touch and pen position in the local coordinate frame
Index: utterance-queue/js/UtteranceQueue.js
===================================================================
diff --git a/utterance-queue/js/UtteranceQueue.js b/utterance-queue/js/UtteranceQueue.js
--- a/utterance-queue/js/UtteranceQueue.js (revision 08dcf1886afe6a7137d26bf412bc480abb8840a1)
+++ b/utterance-queue/js/UtteranceQueue.js (date 1637346811473)
@@ -423,7 +423,7 @@
const utterance = utteranceWrapper.utterance;
// only speak the utterance if not muted and the Utterance predicate returns true
- if ( !this._muted && utterance.predicate() && utterance.getAlertText( this.announcer.respectResponseCollectorProperties ) !== '' ) {
+ if ( !this._muted && utterance.canAnnounceProperty.value && utterance.predicate() && utterance.getAlertText( this.announcer.respectResponseCollectorProperties ) !== '' ) {
this.announcer.announce( utterance, utterance.announcerOptions );
}
Index: utterance-queue/js/Utterance.js
===================================================================
diff --git a/utterance-queue/js/Utterance.js b/utterance-queue/js/Utterance.js
--- a/utterance-queue/js/Utterance.js (revision 08dcf1886afe6a7137d26bf412bc480abb8840a1)
+++ b/utterance-queue/js/Utterance.js (date 1637346811464)
@@ -20,6 +20,8 @@
* @author Michael Kauzmann (PhET Interactive Simulations)
*/
+import DerivedProperty from '../../axon/js/DerivedProperty.js';
+import TinyProperty from '../../axon/js/TinyProperty.js';
import validate from '../../axon/js/validate.js';
import merge from '../../phet-core/js/merge.js';
import AlertableDef from './AlertableDef.js';
@@ -62,6 +64,8 @@
// wait in the announcer before being alerted.
predicate: function() { return true; },
+ canAnnounceProperties: [ new TinyProperty( true ) ],
+
// {number} - in ms, how long to wait before the utterance is considered "stable" and stops being
// added to the queue, at which point it will be spoken. Default value chosen because
// it sounds nice in most usages of Utterance with alertStableDelay. If you want to hear the utterance as fast
@@ -98,6 +102,10 @@
// @public (read-only, utterance-queue-internal)
this.predicate = options.predicate;
+ this.canAnnounceProperties = options.canAnnounceProperties;
+
+ this.canAnnounceProperty = DerivedProperty.and( this.canAnnounceProperties );
+
// @public (read-only, utterance-queue-internal) {number} - In ms, how long the utterance should remain in the queue
// before it is read. The queue is cleared in FIFO order, but utterances are skipped until the delay time is less
// than the amount of time the utterance has been in the queue
Index: scenery/js/accessibility/voicing/Voicing.js
===================================================================
diff --git a/scenery/js/accessibility/voicing/Voicing.js b/scenery/js/accessibility/voicing/Voicing.js
--- a/scenery/js/accessibility/voicing/Voicing.js (revision 86b804b3b6dbadc5fe02fcce0370086e1892f732)
+++ b/scenery/js/accessibility/voicing/Voicing.js (date 1637371658086)
@@ -21,6 +21,7 @@
* @author Jesse Greenberg (PhET Interactive Simulations)
*/
+import TinyProperty from '../../../../axon/js/TinyProperty.js';
import extend from '../../../../phet-core/js/extend.js';
import inheritance from '../../../../phet-core/js/inheritance.js';
import merge from '../../../../phet-core/js/merge.js';
@@ -123,6 +124,15 @@
// are assembled into final content for the UtteranceQueue. See ResponsePatternCollection for more details.
this._voicingResponsePatternCollection = ResponsePatternCollection.DEFAULT_RESPONSE_PATTERNS;
+ // @public (read-only) {TinyProperty} - Indicates whether this Node can speak. A Node can speak if self
+ // and all of its ancestors are visible and voicingVisible.
+ this.canSpeakProperty = new TinyProperty( true );
+
+ // @private {number} - A counter that keeps track of visible and voicingVisible Instances of this Node.
+ // As long as this value is greater than zero, this Node can speak. See onInstanceVisibilityChange
+ // and onInstanceVoicingVisibilityChange for more details.
+ this.voicingCanSpeakCount = 0;
+
// @private {Function(event):} - called when this node is focused.
this._voicingFocusListener = this.defaultFocusListener;
@@ -136,12 +146,125 @@
};
this.addInputListener( this.speakContentOnFocusListener );
+ // @private {Function(Instance):} - Called when visibility or voicingVisibility change for an Instance.
+ this.boundInstanceVisibilityChangeListener = this.onInstanceVisibilityChange.bind( this );
+ this.boundInstanceVoicingVisibilityChangeListener = this.onInstanceVoicingVisibilityChange.bind( this );
+
+ // Whenever an Instance of this Node is added, add listeners that will update the canSpeakProperty.
+ this.changedInstanceEmitter.addListener( ( instance, added ) => {
+ if ( added ) {
+ instance.voicingVisibleEmitter.addListener( this.boundInstanceVoicingVisibilityChangeListener );
+ instance.visibleEmitter.addListener( this.boundInstanceVisibilityChangeListener );
+ }
+ else {
+ instance.voicingVisibleEmitter.removeListener( this.boundInstanceVoicingVisibilityChangeListener );
+ instance.visibleEmitter.removeListener( this.boundInstanceVisibilityChangeListener );
+ }
+
+ // eagerly update the canSpeakProperty and counting variables in addition to adding change listeners
+ this.handleVoicingInstanceChanged( instance, added );
+ } );
+
// support passing options through directly on initialize
if ( options ) {
this.mutate( _.pick( options, VOICING_OPTION_KEYS ) );
}
},
+ /**
+ * When visibility changes on an Instance update the canSpeakProperty. A counting variable keeps track
+ * of the instances attached to the display that are both globally visible and voicingVisible. If any
+ * Instance is voicingVisible and visible, this Node can speak.
+ * @param instance
+ */
+ onInstanceVisibilityChange( instance ) {
+
+ // Since this is called on the change and `visible` is a boolean, wasVisible is computed from the negation of the current value.
+ // From the change we can determine if the count should be incremented or decremented.
+ const wasVisible = !instance.visible && instance.voicingVisible;
+ const isVisible = instance.visible && instance.voicingVisible;
+
+ if ( wasVisible && !isVisible ) {
+ this.voicingCanSpeakCount--;
+ }
+ else if ( !wasVisible && isVisible ) {
+ this.voicingCanSpeakCount++;
+ }
+
+ this.canSpeakProperty.value = this.voicingCanSpeakCount > 0;
+ },
+
+ /**
+ * When voicingVisible changes on an Instance, update the canSpeakProperty. A counting variable keeps track of
+ * the instances attached to the display that are both globally visible and voicingVisible. If any Instance
+ * is voicingVisible and visible this Node can speak.
+ * @param {Instance} instance
+ * @private
+ */
+ onInstanceVoicingVisibilityChange( instance ) {
+
+ // Since this is called on the change and `voicingVisible` is a boolean, wasVoicingVisible is computed from the negation of the current value.
+ // From the change we can determine if the count should be incremented or decremented.
+ const wasVoicingVisible = !instance.voicingVisible && instance.visible;
+ const isVoicingVisible = instance.voicingVisible && instance.visible;
+
+ if ( wasVoicingVisible && !isVoicingVisible ) {
+ this.voicingCanSpeakCount--;
+ }
+ else if ( !wasVoicingVisible && isVoicingVisible ) {
+ this.voicingCanSpeakCount++;
+ }
+
+ this.canSpeakProperty.value = this.voicingCanSpeakCount > 0;
+ },
+
+ /**
+ * Update the canSpeakProperty and counting variable in response to an Instance of this Node being added or
+ * removed.
+ * @param {Instance} instance
+ * @param {boolean} added
+ */
+ handleVoicingInstanceChanged( instance, added ) {
+ const isVisible = instance.visible && instance.voicingVisible;
+ if ( isVisible ) {
+
+ // If the added Instance was visible and voicingVisible it should increment the counter. If the removed
+ // instance is NOT visible/voicingVisible it would not have contributed to the counter so we should not
+ // decrement in that case.
+ this.voicingCanSpeakCount = added ? this.voicingCanSpeakCount + 1 : this.voicingCanSpeakCount - 1;
+ }
+
+ this.canSpeakProperty.value = this.voicingCanSpeakCount > 0;
+ },
+
+ /**
+ * Set the visibility of this Node with respect to Voicing features. Totally separate from graphical display.
+ * When visible, this Node and all of its ancestors will be able to speak with Voicing. When voicingVisible
+ * is false, all Voicing under this Node will be muted.
+ * @public
+ *
+ * @param {boolean} visible
+ */
+ setVoicingVisible: function( visible ) {
+ assert && assert( typeof visible === 'boolean' );
+ if ( this.voicingVisibleProperty.value !== visible ) {
+ this.voicingVisibleProperty.value = visible;
+ }
+ },
+ set voicingVisible( visible ) { this.setVoicingVisible( visible ); },
+
+ /**
+ * Get whether this Node is voicingVisible. When false, speech with Voicing in the subtree rooted at
+ * this Node will not be spoken.
+ * @public
+ *
+ * @returns {boolean}
+ */
+ isVoicingVisible: function() {
+ return this.voicingVisibleProperty.value;
+ },
+ get voicingVisible() { return this.isVoicingVisible(); },
+
/**
* Speak all responses assigned to this Node. Options allow you to override a responses for this particular
* speech request. Each response is only spoken if the associated Property of responseCollector is true. If
Index: scenery/js/display/Display.js
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/scenery/js/display/Display.js b/scenery/js/display/Display.js
--- a/scenery/js/display/Display.js (revision 86b804b3b6dbadc5fe02fcce0370086e1892f732)
+++ b/scenery/js/display/Display.js (date 1637351344758)
@@ -482,7 +482,7 @@
// pre-repaint phase: update relative transform information for listeners (notification) and precomputation where desired
this.updateDirtyTransformRoots();
// pre-repaint phase update visibility information on instances
- this._baseInstance.updateVisibility( true, true, false );
+ this._baseInstance.updateVisibility( true, true, true, false );
if ( assertSlow ) { this._baseInstance.auditVisibility( true ); }
Index: scenery-phet/js/buttons/ResetAllButton.js
===================================================================
diff --git a/scenery-phet/js/buttons/ResetAllButton.js b/scenery-phet/js/buttons/ResetAllButton.js
--- a/scenery-phet/js/buttons/ResetAllButton.js (revision 96c30a65f2a8a39b39c29398150f6aefc17b4572)
+++ b/scenery-phet/js/buttons/ResetAllButton.js (date 1637366679053)
@@ -78,11 +78,25 @@
super( options );
+
// a11y - when reset all button is fired, disable alerts so that there isn't an excessive stream of alerts
// while many Properties are reset. When callbacks are ended for reset all, enable alerts again and announce an
// alert that everything was reset.
- const resetUtterance = new ActivationUtterance( { alert: resetAllAlertString } );
+ const resetUtterance = new ActivationUtterance( {
+ alert: resetAllAlertString,
+ canAnnounceProperties: [ this.canSpeakProperty ]
+ } );
+
+ window.resetUtterance = resetUtterance;
+ window.resetAllButton = this;
let voicingEnabledOnFire = voicingUtteranceQueue.enabled;
+
+ window.setInterval( () => {
+ this.voicingSpeakFullResponse( {
+ utterance: resetUtterance
+ } );
+ }, 1500 );
+
const ariaEnabledOnFirePerUtteranceQueueMap = new Map(); // Keep track of the enabled of each connected description UtteranceQueue
this.buttonModel.isFiringProperty.lazyLink( isFiring => {
@@ -96,7 +110,9 @@
// restore the enabled state to each utteranceQueue after resetting
voicingUtteranceQueue.enabled = voicingEnabledOnFire;
- this.voicingSpeakFullResponse();
+ this.voicingSpeakFullResponse( {
+ utterance: resetUtterance
+ } );
}
// Handle each connected description UtteranceQueue
Index: scenery/js/display/Instance.js
===================================================================
diff --git a/scenery/js/display/Instance.js b/scenery/js/display/Instance.js
--- a/scenery/js/display/Instance.js (revision 86b804b3b6dbadc5fe02fcce0370086e1892f732)
+++ b/scenery/js/display/Instance.js (date 1637371729047)
@@ -103,6 +103,7 @@
this.selfVisible = true; // like relative visibility, but is always true if we are a visibility root
this.visibilityDirty = true; // entire subtree of visibility will need to be updated
this.childVisibilityDirty = true; // an ancestor needs its visibility updated
+ this.voicingVisible = true; // voicing - whether this instance will support speech with the voicing feature
// @private {Object.<instanceId:number,number>} - Maps another instance's `instance.id` {number} => branch index
// {number} (first index where the two trails are different). This effectively operates as a cache (since it's more
@@ -149,6 +150,7 @@
this.visibleEmitter = new TinyEmitter();
this.relativeVisibleEmitter = new TinyEmitter();
this.selfVisibleEmitter = new TinyEmitter();
+ this.voicingVisibleEmitter = new TinyEmitter();
this.cleanInstance( display, trail );
@@ -1526,10 +1528,11 @@
* @public
*
* @param {boolean} parentGloballyVisible - Whether our parent (if any) is globally visible
+ * @param {boolean} parentGloballyVoicingVisible - Whether our parent (if any) is globally voicingVisible.
* @param {boolean} parentRelativelyVisible - Whether our parent (if any) is relatively visible
* @param {boolean} updateFullSubtree - If true, we will visit the entire subtree to ensure visibility is correct.
*/
- updateVisibility( parentGloballyVisible, parentRelativelyVisible, updateFullSubtree ) {
+ updateVisibility( parentGloballyVisible, parentGloballyVoicingVisible, parentRelativelyVisible, updateFullSubtree ) {
// If our visibility flag for ourself is dirty, we need to update our entire subtree
if ( this.visibilityDirty ) {
updateFullSubtree = true;
@@ -1540,7 +1543,10 @@
const wasVisible = this.visible;
const wasRelativeVisible = this.relativeVisible;
const wasSelfVisible = this.selfVisible;
+ const nodeVoicingVisible = this.node.voicingVisibleProperty.value;
+ const wasVoicingVisible = this.voicingVisible;
this.visible = parentGloballyVisible && nodeVisible;
+ this.voicingVisible = parentGloballyVoicingVisible && nodeVoicingVisible;
this.relativeVisible = parentRelativelyVisible && nodeVisible;
this.selfVisible = this.isVisibilityApplied ? true : this.relativeVisible;
@@ -1550,7 +1556,7 @@
if ( updateFullSubtree || child.visibilityDirty || child.childVisibilityDirty ) {
// if we are a visibility root (isVisibilityApplied===true), disregard ancestor visibility
- child.updateVisibility( this.visible, this.isVisibilityApplied ? true : this.relativeVisible, updateFullSubtree );
+ child.updateVisibility( this.visible, this.voicingVisible, this.isVisibilityApplied ? true : this.relativeVisible, updateFullSubtree );
}
}
@@ -1559,13 +1565,16 @@
// trigger changes after we do the full visibility update
if ( this.visible !== wasVisible ) {
- this.visibleEmitter.emit();
+ this.visibleEmitter.emit( this );
}
if ( this.relativeVisible !== wasRelativeVisible ) {
- this.relativeVisibleEmitter.emit();
+ this.relativeVisibleEmitter.emit( this );
}
if ( this.selfVisible !== wasSelfVisible ) {
- this.selfVisibleEmitter.emit();
+ this.selfVisibleEmitter.emit( this );
+ }
+ if ( this.voicingVisible !== wasVoicingVisible ) {
+ this.voicingVisibleEmitter.emit( this );
}
}
@@ -1681,6 +1690,9 @@
this.node.childrenReorderedEmitter.addListener( this.childrenReorderedListener );
this.node.visibleProperty.lazyLink( this.visibilityListener );
+ // Add a listener here to the voicingVisibleProperty as well.
+ this.node.voicingVisibleProperty.lazyLink( this.visibilityListener );
+
this.node.filterChangeEmitter.addListener( this.markRenderStateDirtyListener );
this.node.clipAreaProperty.lazyLink( this.markRenderStateDirtyListener );
this.node.instanceRefreshEmitter.addListener( this.markRenderStateDirtyListener );
@@ -1698,6 +1710,7 @@
this.node.childRemovedEmitter.removeListener( this.childRemovedListener );
this.node.childrenReorderedEmitter.removeListener( this.childrenReorderedListener );
this.node.visibleProperty.unlink( this.visibilityListener );
+ this.node.voicingVisibleProperty.unlink( this.visibilityListener );
this.node.filterChangeEmitter.removeListener( this.markRenderStateDirtyListener );
this.node.clipAreaProperty.unlink( this.markRenderStateDirtyListener );
@@ -1852,6 +1865,11 @@
assertSlow( this.addRemoveCounter === 0,
'Our addRemoveCounter should always be 0 at the end of syncTree' );
+ // TODO: Add more assertions and verify these. We want to make sure that when this Instance
+ // is voicingVisible, ALL Nodes above are marked as voicingVisible.
+ assertSlow( this.voicingVisible === _.reduce( this.trail.nodes, ( value, node ) => value && node.voicingVisibleProperty.value, true ) );
+ assertSlow( this.visible === _.reduce( this.trail.nodes, ( value, node ) => value && node.visibleProperty.value, true ) );
+
// validate the subtree
for ( let i = 0; i < this.children.length; i++ ) {
const childInstance = this.children[ i ];
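To restate the core of that patch in condensed form (a summary sketch using the same names, not the implementation itself): each Voicing Node counts how many of its Instances are currently both visible and voicingVisible, and its canSpeakProperty stays true while that count is positive.
// Condensed sketch of the counting approach from the patch above.
import TinyProperty from '../../axon/js/TinyProperty.js'; // illustrative path

let voicingCanSpeakCount = 0; // number of Instances that are currently visible AND voicingVisible
const canSpeakProperty = new TinyProperty( true );

// Called whenever an Instance flips between "countable" (visible && voicingVisible) and not,
// or when an Instance is added or removed while countable.
function updateCanSpeak( wasCountable, isCountable ) {
  if ( wasCountable && !isCountable ) { voicingCanSpeakCount--; }
  else if ( !wasCountable && isCountable ) { voicingCanSpeakCount++; }

  // The Node can speak as long as at least one of its Instances is visible and voicingVisible.
  canSpeakProperty.value = voicingCanSpeakCount > 0;
}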
@zepumph and I came back to this today and confirmed that this is the direction we want to take. We updated the patch with some parts that were missing (like being able to update Utterance.canAnnounceProperties), and brought things up to TypeScript. Changes are working well, with the exception of #1365 so far. Once again, I am out of time to commit. Patch:
Index: scenery/js/display/Instance.js
===================================================================
diff --git a/scenery/js/display/Instance.js b/scenery/js/display/Instance.js
--- a/scenery/js/display/Instance.js (revision 6ff8888c6fdbdc1af457ccd23757050ea8514466)
+++ b/scenery/js/display/Instance.js (date 1645234612697)
@@ -24,7 +24,7 @@
import arrayRemove from '../../../phet-core/js/arrayRemove.js';
import cleanArray from '../../../phet-core/js/cleanArray.js';
import Poolable from '../../../phet-core/js/Poolable.js';
-import { scenery, Trail, Utils, Renderer, RelativeTransform, Drawable, ChangeInterval, Fittability, BackboneDrawable, CanvasBlock, InlineCanvasCacheDrawable, SharedCanvasCacheDrawable } from '../imports.js';
+import { BackboneDrawable, CanvasBlock, ChangeInterval, Drawable, Fittability, InlineCanvasCacheDrawable, RelativeTransform, Renderer, scenery, SharedCanvasCacheDrawable, Trail, Utils } from '../imports.js';
let globalIdCounter = 1;
@@ -92,6 +92,7 @@
this.selfVisible = true; // like relative visibility, but is always true if we are a visibility root
this.visibilityDirty = true; // entire subtree of visibility will need to be updated
this.childVisibilityDirty = true; // an ancestor needs its visibility updated
+ this.voicingVisible = true; // voicing - whether this instance will support speech with the voicing feature
// @private {Object.<instanceId:number,number>} - Maps another instance's `instance.id` {number} => branch index
// {number} (first index where the two trails are different). This effectively operates as a cache (since it's more
@@ -138,6 +139,7 @@
this.visibleEmitter = new TinyEmitter();
this.relativeVisibleEmitter = new TinyEmitter();
this.selfVisibleEmitter = new TinyEmitter();
+ this.voicingVisibleEmitter = new TinyEmitter();
this.cleanInstance( display, trail );
@@ -1515,10 +1517,11 @@
* @public
*
* @param {boolean} parentGloballyVisible - Whether our parent (if any) is globally visible
+ * @param {boolean} parentGloballyVoicingVisible - Whether our parent (if any) is globally voicingVisible.
* @param {boolean} parentRelativelyVisible - Whether our parent (if any) is relatively visible
* @param {boolean} updateFullSubtree - If true, we will visit the entire subtree to ensure visibility is correct.
*/
- updateVisibility( parentGloballyVisible, parentRelativelyVisible, updateFullSubtree ) {
+ updateVisibility( parentGloballyVisible, parentGloballyVoicingVisible, parentRelativelyVisible, updateFullSubtree ) {
// If our visibility flag for ourself is dirty, we need to update our entire subtree
if ( this.visibilityDirty ) {
updateFullSubtree = true;
@@ -1529,7 +1532,10 @@
const wasVisible = this.visible;
const wasRelativeVisible = this.relativeVisible;
const wasSelfVisible = this.selfVisible;
+ const nodeVoicingVisible = this.node.voicingVisibleProperty.value;
+ const wasVoicingVisible = this.voicingVisible;
this.visible = parentGloballyVisible && nodeVisible;
+ this.voicingVisible = parentGloballyVoicingVisible && nodeVoicingVisible;
this.relativeVisible = parentRelativelyVisible && nodeVisible;
this.selfVisible = this.isVisibilityApplied ? true : this.relativeVisible;
@@ -1539,7 +1545,7 @@
if ( updateFullSubtree || child.visibilityDirty || child.childVisibilityDirty ) {
// if we are a visibility root (isVisibilityApplied===true), disregard ancestor visibility
- child.updateVisibility( this.visible, this.isVisibilityApplied ? true : this.relativeVisible, updateFullSubtree );
+ child.updateVisibility( this.visible, this.voicingVisible, this.isVisibilityApplied ? true : this.relativeVisible, updateFullSubtree );
}
}
@@ -1548,13 +1554,16 @@
// trigger changes after we do the full visibility update
if ( this.visible !== wasVisible ) {
- this.visibleEmitter.emit();
+ this.visibleEmitter.emit( this );
}
if ( this.relativeVisible !== wasRelativeVisible ) {
- this.relativeVisibleEmitter.emit();
+ this.relativeVisibleEmitter.emit( this );
}
if ( this.selfVisible !== wasSelfVisible ) {
- this.selfVisibleEmitter.emit();
+ this.selfVisibleEmitter.emit( this );
+ }
+ if ( this.voicingVisible !== wasVoicingVisible ) {
+ this.voicingVisibleEmitter.emit( this );
}
}
@@ -1670,6 +1679,9 @@
this.node.childrenReorderedEmitter.addListener( this.childrenReorderedListener );
this.node.visibleProperty.lazyLink( this.visibilityListener );
+ // Add a listener here to the voicingVisibleProperty as well.
+ this.node.voicingVisibleProperty.lazyLink( this.visibilityListener );
+
this.node.filterChangeEmitter.addListener( this.markRenderStateDirtyListener );
this.node.clipAreaProperty.lazyLink( this.markRenderStateDirtyListener );
this.node.instanceRefreshEmitter.addListener( this.markRenderStateDirtyListener );
@@ -1687,6 +1699,7 @@
this.node.childRemovedEmitter.removeListener( this.childRemovedListener );
this.node.childrenReorderedEmitter.removeListener( this.childrenReorderedListener );
this.node.visibleProperty.unlink( this.visibilityListener );
+ this.node.voicingVisibleProperty.unlink( this.visibilityListener );
this.node.filterChangeEmitter.removeListener( this.markRenderStateDirtyListener );
this.node.clipAreaProperty.unlink( this.markRenderStateDirtyListener );
@@ -1802,6 +1815,7 @@
this.visibleEmitter.removeAllListeners();
this.relativeVisibleEmitter.removeAllListeners();
this.selfVisibleEmitter.removeAllListeners();
+ this.voicingVisibleEmitter.removeAllListeners();
this.freeToPool();
@@ -1841,6 +1855,11 @@
assertSlow( this.addRemoveCounter === 0,
'Our addRemoveCounter should always be 0 at the end of syncTree' );
+ // TODO: Add more assertions and verify these. We want to make sure that when this Instance
+ // is voicingVisible, ALL Nodes above are marked as voicingVisible.
+ assertSlow( this.voicingVisible === _.reduce( this.trail.nodes, ( value, node ) => value && node.voicingVisibleProperty.value, true ) );
+ assertSlow( this.visible === _.reduce( this.trail.nodes, ( value, node ) => value && node.visibleProperty.value, true ) );
+
// validate the subtree
for ( let i = 0; i < this.children.length; i++ ) {
const childInstance = this.children[ i ];
Index: utterance-queue/js/UtteranceQueue.js
===================================================================
diff --git a/utterance-queue/js/UtteranceQueue.js b/utterance-queue/js/UtteranceQueue.js
--- a/utterance-queue/js/UtteranceQueue.js (revision 5b05862c771ef49e45b226bf9cca5c00a8f074fd)
+++ b/utterance-queue/js/UtteranceQueue.js (date 1645211403579)
@@ -602,7 +602,7 @@
if ( this.announcer.readyToAnnounce ) {
// only announce the utterance if not muted and the Utterance predicate returns true
- if ( !this._muted && utterance.predicate() && utterance.getAlertText( this.announcer.respectResponseCollectorProperties ) !== '' ) {
+ if ( !this._muted && utterance.canAnnounceProperty.value && utterance.predicate() && utterance.getAlertText( this.announcer.respectResponseCollectorProperties ) !== '' ) {
assert && assert( this.announcingUtteranceWrapper === null, 'announcingUtteranceWrapper and its priorityProperty listener should have been disposed' );
// Save a reference to the UtteranceWrapper and its priorityProperty listener while the Announcer is announcing
Index: scenery-phet/js/buttons/ResetAllButton.js
===================================================================
diff --git a/scenery-phet/js/buttons/ResetAllButton.js b/scenery-phet/js/buttons/ResetAllButton.js
--- a/scenery-phet/js/buttons/ResetAllButton.js (revision 0402ded0be511f80986cd85d5afae9516ea92660)
+++ b/scenery-phet/js/buttons/ResetAllButton.js (date 1645226752825)
@@ -77,7 +77,7 @@
options.xMargin = options.yMargin = options.radius * resetAllButtonMarginCoefficient;
super( options );
-
+ window.resetAllButton = this;
// a11y - when reset all button is fired, disable alerts so that there isn't an excessive stream of alerts
// while many Properties are reset. When callbacks are ended for reset all, enable alerts again and announce an
// alert that everything was reset.
Index: scenery/js/nodes/Node.ts
===================================================================
diff --git a/scenery/js/nodes/Node.ts b/scenery/js/nodes/Node.ts
--- a/scenery/js/nodes/Node.ts (revision 6ff8888c6fdbdc1af457ccd23757050ea8514466)
+++ b/scenery/js/nodes/Node.ts (date 1645234612699)
@@ -397,6 +397,14 @@
// NOTE: This is fired synchronously when the clipArea of the Node is toggled
clipAreaProperty: TinyProperty<Shape | null>;
+ // Whether this Node and its subtree can announce content with Voicing and SpeechSynthesis. Though
+ // related to Voicing it exists in Node so that the Property can be set on Nodes that do not use the
+ // trait. That is useful when you want to set voicingVisible on a subtree where the root does
+ // not compose Voicing. This is not ideal, but the entirety of Voicing cannot be composed into every
+ // Node, it would produce the wrong behavior and have a massive memory impact. See setVoicingVisible
+ // and Voicing.ts for more information about Voicing.
+ voicingVisibleProperty: TinyProperty<boolean>;
+
// Areas for hit intersection. If set on a Node, no descendants can handle events.
_mouseArea: Shape | Bounds2 | null; // for mouse position in the local coordinate frame
_touchArea: Shape | Bounds2 | null; // for touch and pen position in the local coordinate frame
@@ -649,6 +657,7 @@
this._inputEnabledProperty = new TinyForwardingProperty( DEFAULT_OPTIONS.inputEnabled,
DEFAULT_OPTIONS.phetioInputEnabledPropertyInstrumented );
this.clipAreaProperty = new TinyProperty<Shape | null>( DEFAULT_OPTIONS.clipArea );
+ this.voicingVisibleProperty = new TinyProperty<boolean>( true );
this._mouseArea = DEFAULT_OPTIONS.mouseArea;
this._touchArea = DEFAULT_OPTIONS.touchArea;
this._cursor = DEFAULT_OPTIONS.cursor;
@@ -6013,6 +6022,29 @@
}
}
+ /**
+ * Set the visibility of this Node with respect to Voicing features. Totally separate from graphical display.
+ * When visible, this Node and all of its ancestors will be able to speak with Voicing. When voicingVisible
+ * is false, all Voicing under this Node will be muted.
+ */
+ setVoicingVisible( visible: boolean ) {
+ if ( this.voicingVisibleProperty.value !== visible ) {
+ this.voicingVisibleProperty.value = visible;
+ }
+ }
+
+ set voicingVisible( visible ) { this.setVoicingVisible( visible ); }
+
+ /**
+ * Get whether this Node is voicingVisible. When false, speech with Voicing in the subtree rooted at
+ * this Node will not be spoken.
+ */
+ isVoicingVisible() {
+ return this.voicingVisibleProperty.value;
+ }
+
+ get voicingVisible() { return this.isVoicingVisible(); }
+
/**
* Override for extra information in the debugging output (from Display.getDebugHTML()). (scenery-internal)
*/
Index: utterance-queue/js/Utterance.js
===================================================================
diff --git a/utterance-queue/js/Utterance.js b/utterance-queue/js/Utterance.js
--- a/utterance-queue/js/Utterance.js (revision 5b05862c771ef49e45b226bf9cca5c00a8f074fd)
+++ b/utterance-queue/js/Utterance.js (date 1645219994172)
@@ -20,7 +20,11 @@
* @author Michael Kauzmann (PhET Interactive Simulations)
*/
+import DerivedProperty from '../../axon/js/DerivedProperty.js';
+import DynamicProperty from '../../axon/js/DynamicProperty.js';
import NumberProperty from '../../axon/js/NumberProperty.js';
+import Property from '../../axon/js/Property.js';
+import TinyProperty from '../../axon/js/TinyProperty.js';
import validate from '../../axon/js/validate.js';
import merge from '../../phet-core/js/merge.js';
import IOType from '../../tandem/js/types/IOType.js';
@@ -67,6 +71,9 @@
// wait in the announcer before being alerted.
predicate: function() { return true; },
+ // TODO: Documentation, https://github.com/phetsims/scenery/issues/1300
+ canAnnounceProperties: [ new TinyProperty( true ) ],
+
// {number} - in ms, how long to wait before the utterance is considered "stable" and stops being
// added to the queue, at which point it will be spoken. Default value chosen because
// it sounds nice in most usages of Utterance with alertStableDelay. If you want to hear the utterance as fast
@@ -124,6 +131,18 @@
// @public (utterance-queue-internal) {Object} - Options to be passed to the announcer for this Utterance
this.announcerOptions = options.announcerOptions;
+ // @private {../../js/axon/IProperty<boolean>[]}
+ this._canAnnounceProperties = [];
+
+ // @private
+ this.canAnnounceImplementationProperty = new Property( null );
+
+ // @public (read-only) {TinyForwardingProperty<boolean>}
+ this.canAnnounceProperty = new DynamicProperty( this.canAnnounceImplementationProperty, {
+ defaultValue: true
+ } );
+ this.setCanAnnounceProperties( options.canAnnounceProperties );
+
// @public - observable for the priority, can be set to change the priority of this Utterance
// while it is still in the UtteranceQueue. See options documentation for behavior of priority.
this.priorityProperty = new NumberProperty( options.priority );
@@ -133,6 +152,40 @@
this.previousAlertText = null;
}
+ /**
+ * Set the Properties controlling whether this Utterance can announce. All Properties must be
+ * true for the alert content of this Utterance to be announced.
+ * @public
+ *
+ * @param {Property<boolean>[]} canAnnounceProperties
+ */
+ setCanAnnounceProperties( canAnnounceProperties ) {
+
+ if ( this.canAnnounceImplementationProperty.value ) {
+ this.canAnnounceImplementationProperty.value.dispose();
+ }
+
+ const canSpeakProperty = DerivedProperty.and( canAnnounceProperties );
+ this.canAnnounceImplementationProperty.value = canSpeakProperty;
+
+ this._canAnnounceProperties = canAnnounceProperties;
+ }
+
+ set canAnnounceProperties( canAnnounceProperties ) { this.setCanAnnounceProperties( canAnnounceProperties ) }
+
+ /**
+ * Get the Properties that control whether the alert content for this Utterance can be announced.
+ * All must be true for the announcement to occur.
+ * @public
+ *
+ * @returns {../../axon/js/IProperty<boolean>[]}
+ */
+ getCanAnnounceProperties() {
+ return this._canAnnounceProperties.slice( 0 ); // defensive copy
+ }
+
+ get canAnnounceProperties() { return this.getCanAnnounceProperties(); }
+
/**
*
* @param {ResponsePacket} alert
Index: scenery/js/display/Display.js
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/scenery/js/display/Display.js b/scenery/js/display/Display.js
--- a/scenery/js/display/Display.js (revision 6ff8888c6fdbdc1af457ccd23757050ea8514466)
+++ b/scenery/js/display/Display.js (date 1645234612696)
@@ -457,7 +457,7 @@
// pre-repaint phase: update relative transform information for listeners (notification) and precomputation where desired
this.updateDirtyTransformRoots();
// pre-repaint phase update visibility information on instances
- this._baseInstance.updateVisibility( true, true, false );
+ this._baseInstance.updateVisibility( true, true, true, false );
if ( assertSlow ) { this._baseInstance.auditVisibility( true ); }
Index: scenery/js/accessibility/voicing/Voicing.ts
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/scenery/js/accessibility/voicing/Voicing.ts b/scenery/js/accessibility/voicing/Voicing.ts
--- a/scenery/js/accessibility/voicing/Voicing.ts (revision 6ff8888c6fdbdc1af457ccd23757050ea8514466)
+++ b/scenery/js/accessibility/voicing/Voicing.ts (date 1645234612694)
@@ -30,11 +30,12 @@
import ResponsePatternCollection from '../../../../utterance-queue/js/ResponsePatternCollection.js';
import Utterance from '../../../../utterance-queue/js/Utterance.js';
import UtteranceQueue from '../../../../utterance-queue/js/UtteranceQueue.js';
-import { InteractiveHighlighting, Node, NodeOptions, scenery, SceneryListenerFunction, voicingUtteranceQueue } from '../../imports.js';
+import { Instance, InteractiveHighlighting, Node, NodeOptions, scenery, SceneryListenerFunction, voicingUtteranceQueue } from '../../imports.js';
import optionize from '../../../../phet-core/js/optionize.js';
import Constructor from '../../../../phet-core/js/Constructor.js';
import { TAlertableDef } from '../../../../utterance-queue/js/AlertableDef.js';
import IntentionalAny from '../../../../phet-core/js/IntentionalAny.js';
+import TinyProperty from '../../../../axon/js/TinyProperty.js';
// options that are supported by Voicing.js. Added to mutator keys so that Voicing properties can be set with mutate.
const VOICING_OPTION_KEYS = [
@@ -77,6 +78,8 @@
ResponsePacketOptions[PropertyName];
}
+type InstanceListener = ( instance: Instance ) => void;
+
/**
* @param Type
* @param optionsArgPosition - zero-indexed number that the options argument is provided at
@@ -104,6 +107,21 @@
// Called when this node is focused.
_voicingFocusListener!: SceneryListenerFunction | null;
+ // Indicates whether this Node can speak. A Node can speak if self and all of its ancestors are visible and
+ // voicingVisible.
+ _voicingCanSpeakProperty!: TinyProperty<boolean>;
+
+ // A counter that keeps track of visible and voicingVisible Instances of this Node.
+ // As long as this value is greater than zero, this Node can speak. See onInstanceVisibilityChange
+ // and onInstanceVoicingVisibilityChange for more details.
+ _voicingCanSpeakCount!: number;
+
+ // Called when `visible` or `voicingVisible` change for an Instance.
+ _boundInstanceVisibilityChangeListener!: InstanceListener;
+ _boundInstanceVoicingVisibilityChangeListener!: InstanceListener;
+
+ _boundInstancesChangedListener!: ( instance: Instance, added: boolean ) => void;
+
// Input listener that speaks content on focus. This is the only input listener added
// by Voicing, but it is the one that is consistent for all Voicing nodes. On focus, speak the name, object
// response, and interaction hint.
@@ -132,9 +150,28 @@
// @ts-ignore
super.initialize && super.initialize( args );
+ // Indicates whether this Node can speak. A Node can speak if self and all of its ancestors are visible and
+ // voicingVisible.
+ this._voicingCanSpeakProperty = new TinyProperty<boolean>( true );
+
this._voicingResponsePacket = new ResponsePacket();
this._voicingFocusListener = this.defaultFocusListener;
- this._voicingUtterance = new Utterance();
+
+ // Sets the default voicingUtterance and makes this.canSpeakProperty a dependency on its ability to announce.
+ this.setVoicingUtterance( new Utterance() );
+
+ // A counter that keeps track of visible and voicingVisible Instances of this Node. As long as this value is
+ // greater than zero, this Node can speak. See onInstanceVisibilityChange and onInstanceVoicingVisibilityChange
+ // for more details.
+ this._voicingCanSpeakCount = 0;
+
+ this._boundInstanceVisibilityChangeListener = this.onInstanceVisibilityChange.bind( this );
+ this._boundInstanceVoicingVisibilityChangeListener = this.onInstanceVoicingVisibilityChange.bind( this );
+
+ // Whenever an Instance of this Node is added or removed, add/remove listeners that will update the
+ // canSpeakProperty.
+ this._boundInstancesChangedListener = this.addOrRemoveInstanceListeners.bind( this );
+ ( this as unknown as Node ).changedInstanceEmitter.addListener( this._boundInstancesChangedListener );
this._speakContentOnFocusListener = {
focus: event => {
@@ -427,7 +464,21 @@
* one will be created, but a custom one can optionally be provided.
*/
setVoicingUtterance( utterance: Utterance ) {
- this._voicingUtterance = utterance;
+ if ( this._voicingUtterance !== utterance ) {
+
+ // Remove this.canSpeakProperty from the old Utterance which will no longer be gated by this Voicing Node.
+ // May not be defined on initialization.
+ if ( this._voicingUtterance ) {
+ this._voicingUtterance.canAnnounceProperties = _.without( this._voicingUtterance.canAnnounceProperties, this._voicingCanSpeakProperty );
+ }
+
+ // Add this.canSpeakProperty, controlling whether the Utterance on this instance of Voicing can speak. Gracefully
+ // handle the case where it is already added, to support calling this function multiple times with the same Utterance.
+ const previousCanAnnounceProperties = utterance.canAnnounceProperties;
+ utterance.canAnnounceProperties = [ this._voicingCanSpeakProperty, ...previousCanAnnounceProperties ];
+
+ this._voicingUtterance = utterance;
+ }
}
set voicingUtterance( utterance: Utterance ) { this.setVoicingUtterance( utterance ); }
@@ -461,6 +512,16 @@
get voicingUtteranceQueue(): UtteranceQueue | null { return this.getVoicingUtteranceQueue(); }
+ /**
+ * Get the Property indicating that this Voicing Node can speak. True when this Voicing Node and all of its
+ * ancestors are visible and voicingVisible.
+ */
+ getVoicingCanSpeakProperty() {
+ return this._voicingCanSpeakProperty;
+ }
+
+ get voicingCanSpeakProperty() { return this.getVoicingCanSpeakProperty(); }
+
/**
* Called whenever this Node is focused.
*/
@@ -501,16 +562,120 @@
*/
dispose() {
( this as unknown as Node ).removeInputListener( this._speakContentOnFocusListener );
+ ( this as unknown as Node ).changedInstanceEmitter.removeListener( this._boundInstancesChangedListener );
super.dispose();
}
clean() {
( this as unknown as Node ).removeInputListener( this._speakContentOnFocusListener );
+ ( this as unknown as Node ).changedInstanceEmitter.removeListener( this._boundInstancesChangedListener );
+ console.log( 'cleaning' );
// @ts-ignore
super.clean && super.clean();
}
+
+ /***********************************************************************************************************/
+ // PRIVATE METHODS
+ /***********************************************************************************************************/
+
+ /**
+ * When visibility changes on an Instance update the canSpeakProperty. A counting variable keeps track
+ * of the instances attached to the display that are both globally visible and voicingVisible. If any
+ * Instance is voicingVisible and visible, this Node can speak.
+ *
+ * @private
+ */
+ onInstanceVisibilityChange( instance: Instance ) {
+
+ // Since this is called on the change and `visible` is a boolean, wasVisible is the negation of the current value.
+ // From the change we can determine whether the count should be incremented or decremented.
+ const wasVisible = !instance.visible && instance.voicingVisible;
+ const isVisible = instance.visible && instance.voicingVisible;
+
+ if ( wasVisible && !isVisible ) {
+ this._voicingCanSpeakCount--;
+ }
+ else if ( !wasVisible && isVisible ) {
+ this._voicingCanSpeakCount++;
+ }
+
+ this._voicingCanSpeakProperty.value = this._voicingCanSpeakCount > 0;
+ }
+
+ /**
+ * When voicingVisible changes on an Instance, update the canSpeakProperty. A counting variable keeps track of
+ * the instances attached to the display that are both globally visible and voicingVisible. If any Instance
+ * is voicingVisible and visible this Node can speak.
+ *
+ * @private
+ */
+ onInstanceVoicingVisibilityChange( instance: Instance ) {
+
+ // Since this is called on the change and `voicingVisible` is a boolean, wasVoicingVisible is the negation of the
+ // current value. From the change we can determine whether the count should be incremented or decremented.
+ const wasVoicingVisible = !instance.voicingVisible && instance.visible;
+ const isVoicingVisible = instance.voicingVisible && instance.visible;
+
+ if ( wasVoicingVisible && !isVoicingVisible ) {
+ this._voicingCanSpeakCount--;
+ }
+ else if ( !wasVoicingVisible && isVoicingVisible ) {
+ this._voicingCanSpeakCount++;
+ }
+
+ this._voicingCanSpeakProperty.value = this._voicingCanSpeakCount > 0;
+ }
+
+ /**
+ * Update the canSpeakProperty and counting variable in response to an Instance of this Node being added or
+ * removed.
+ *
+ * @private
+ */
+ handleInstancesChanged( instance: Instance, added: boolean ) {
+ const isVisible = instance.visible && instance.voicingVisible;
+ if ( isVisible ) {
+
+ // If the added Instance was visible and voicingVisible it should increment the counter. If the removed
+ // instance is NOT visible/voicingVisible it would not have contributed to the counter so we should not
+ // decrement in that case.
+ this._voicingCanSpeakCount = added ? this._voicingCanSpeakCount + 1 : this._voicingCanSpeakCount - 1;
+ }
+
+ this._voicingCanSpeakProperty.value = this._voicingCanSpeakCount > 0;
+ }
+
+ /**
+ * Add or remove listeners on an Instance watching visibleEmitter and voicingVisibleEmitter that modify
+ * the voicingCanSpeakCount controlling this voicingCanSpeakProperty.
+ *
+ * @private
+ */
+ addOrRemoveInstanceListeners( instance: Instance, added: boolean ) {
+ assert && assert( instance.voicingVisibleEmitter, 'Instance must be initialized.' );
+ assert && assert( instance.visibleEmitter, 'Instance must be initialized.' );
+
+ if ( added ) {
+
+ // @ts-ignore - Emitters in Instance need typing
+ instance.voicingVisibleEmitter!.addListener( this._boundInstanceVoicingVisibilityChangeListener );
+
+ // @ts-ignore - Emitters in Instance need typing
+ instance.visibleEmitter!.addListener( this._boundInstanceVisibilityChangeListener );
+ }
+ else {
+ // @ts-ignore - Emitters in Instance need typing
+ instance.voicingVisibleEmitter!.removeListener( this._boundInstanceVoicingVisibilityChangeListener );
+
+ // @ts-ignore - Emitters in Instance need typing
+ instance.visibleEmitter!.removeListener( this._boundInstanceVisibilityChangeListener );
+ }
+
+ // eagerly update the canSpeakProperty and counting variables in addition to adding change listeners
+ this.handleInstancesChanged( instance, added );
+ }
};
/**
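For anyone skimming the patch, this is roughly how the two new pieces are meant to compose: `voicingVisible` on any Node, and `canAnnounceProperties` gating the Utterance. A sketch only — `controlPanel`/`readout` are made up, and the import path is just the typical sim-relative one, so verify against the repo:

```js
import { Node, VoicingText } from '../../scenery/js/imports.js'; // path depends on the consumer

const controlPanel = new Node();                 // does NOT compose Voicing
const readout = new VoicingText( 'Value is 5' ); // composes Voicing somewhere in the subtree
controlPanel.addChild( readout );

// Mute Voicing for the whole subtree, even though controlPanel itself is not a Voicing Node.
// Each Voicing Node below it has its _voicingCanSpeakProperty added to its Utterance's
// canAnnounceProperties (see setVoicingUtterance above), so the announcer skips its alerts.
controlPanel.voicingVisible = false;

// Speech is possible again once the subtree is both graphically visible and voicingVisible.
controlPanel.voicingVisible = true;
```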
Some notes about the changes or things that deviated from the previous patch:
Something to consider is interrupting an Utterance as a Node becomes invisible.
We added a DerivedProperty in the above patch, but @zepumph pointed out today that it can be replaced with a new MappedProperty that will work well exactly for the case we need here.
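For reference, the combination the current patch builds boils down to `DerivedProperty.and` over the canAnnounceProperties; MappedProperty is only mentioned above, so its API is not assumed in this sketch:

```js
import DerivedProperty from '../../axon/js/DerivedProperty.js';
import TinyProperty from '../../axon/js/TinyProperty.js';

// One Property per gate; all must be true for the Utterance to be announced.
const voicingCanSpeak = new TinyProperty( true ); // from the Voicing Node
const clientGate = new TinyProperty( true );      // anything else a client passes in

const canSpeakProperty = DerivedProperty.and( [ voicingCanSpeak, clientGate ] );

canSpeakProperty.link( canSpeak => console.log( `can announce: ${canSpeak}` ) );

clientGate.value = false; // logs "can announce: false" — the announcer would now skip the alert
```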
Returning to this, I updated the patch after more TypeScript conversions and did some additional cleanup. Then I did the following testing:
I feel ready to commit this change now.
I think this is ready to close up, @zepumph would you like to review these final changes? In the end, the above two commits are the only changes made for this issue (though they are big changes). We are ready to start using this feature and I think the following will come out of this as new issues.
While working on #1403 I noticed that the … EDIT: I think I found it. In these lines in onInstanceVisibilityChange:
const wasVisible = !instance.visible && instance.voicingVisible;
const isVisible = instance.visible && instance.voicingVisible;
And these lines in onInstanceVoicingVisibilityChange:
const wasVoicingVisible = !instance.voicingVisible && instance.visible;
const isVoicingVisible = instance.voicingVisible && instance.visible;
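To spell out the failure mode those lines create: if visible and voicingVisible flip in the same update, each listener reconstructs the "was" value from the other flag's already-updated state, so the counter never moves. A minimal sketch of that (assuming both fields are written before either listener runs):

```js
let voicingCanSpeakCount = 1; // the instance starts visible: true, voicingVisible: true
const instance = { visible: true, voicingVisible: true };

// Both flags flip in the same frame before the listeners fire...
instance.visible = false;
instance.voicingVisible = false;

// ...so onInstanceVisibilityChange sees the already-updated voicingVisible:
const wasVisible = !instance.visible && instance.voicingVisible; // true && false -> false
const isVisible = instance.visible && instance.voicingVisible;   // false && false -> false
if ( wasVisible && !isVisible ) { voicingCanSpeakCount--; }      // never decrements

// onInstanceVoicingVisibilityChange hits the symmetric problem, so the count stays at 1
// and the Node is still treated as able to speak even though it is now hidden.
console.log( voicingCanSpeakCount ); // 1, but it should be 0
```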
…gVisible change at the same time, see #1300
In b0cbedf @zepumph and I went forward with an improved solution that was identified in a comment in the first attempt at this fix. This seems to be working well, and it should be ready for review again.
Can we delete these? (scenery/js/accessibility/voicing/Voicing.ts, lines 649 to 650 in e089c7c)
Can we delete Instance.voicingVisibleEmitter? It isn't being used because it is only half of the pie. I didn't see anything else. Back to you.
This is still in the blocking-RAP list, hence the ping.
Thanks for the ping, those assertions were updated.
I noticed that
Excellent! Thanks.
Oops, we never made stuff behind a modal dialog voicing invisible, but we will now!! See phetsims/ratio-and-proportion#509
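A sketch of what that could look like with the new API; the show/hide wiring here is illustrative and not the actual joist/sun Dialog hooks:

```js
// Illustrative only: mute Voicing for everything behind a modal dialog while it is open.
function showDialogAndMuteBackground( dialog, screenRootNode ) {
  screenRootNode.voicingVisible = false; // content behind the dialog can no longer speak
  dialog.show();
}

function hideDialogAndRestoreBackground( dialog, screenRootNode ) {
  dialog.hide();
  screenRootNode.voicingVisible = true;  // restore Voicing for the rest of the screen
}
```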
From phetsims/gravity-force-lab-basics#307:
Voicing for a particular Node doesn't stop when that Node is removed from the scene graph. Opening this issue to think about how to handle this generally.
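A minimal reproduction of the original report, assuming a Voicing-composed Node like VoicingText and the voicingSpeakFullResponse call from Voicing.ts (the rest of the setup is illustrative):

```js
const readout = new VoicingText( 'A long description that takes several seconds to speak' );
rootNode.addChild( readout ); // rootNode stands in for the Display's root Node

readout.voicingSpeakFullResponse(); // speech synthesis starts

// Either of these should interrupt or mute the in-progress speech — that is this issue:
readout.visible = false;            // made invisible
// rootNode.removeChild( readout ); // ...or removed from the display entirely
```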