
Design: Switching input mode #2211

Open
compulim opened this issue Jul 21, 2019 · 13 comments
Labels: feature-request (Azure report label), needs-design-input (UX/UI design item)

@compulim
Contributor

compulim commented Jul 21, 2019

User story

Today, we mix typing and speech into the send box.

It is not very clear to the end-user whether the microphone is recording or not, especially without a waveform animation.

Please look at #1839 when working on this.

Alternatives

Today, we mix the input via keyboard and speech into a single input box.

Other implementations

The send box in Cortana and Siri can be used either for keyboard or for speech input, but not both at once.

Potential implementation

Tomorrow, we could use the microphone button to switch between two or more types of input, and the end-user should have a very clear indication of which type of input they are using (see the sketch below).

[image]
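As an illustration only (not Web Chat's actual API; the `InputModeSwitch` component and the `onSendText`/`onStartDictation` callbacks are hypothetical), here is a minimal React sketch of such a switch, where a single explicit mode drives the UI so the user always knows whether they are typing or being listened to:

```tsx
// Hypothetical sketch, not Web Chat's actual API: a send box that keeps an
// explicit input mode so the UI always shows whether the user is typing or speaking.
import React, { useState } from 'react';

type InputMode = 'text' | 'speech';

// onSendText / onStartDictation are assumed callbacks supplied by the host app.
export function InputModeSwitch({
  onSendText,
  onStartDictation
}: {
  onSendText: (text: string) => void;
  onStartDictation: () => void;
}) {
  const [mode, setMode] = useState<InputMode>('text');
  const [draft, setDraft] = useState('');

  if (mode === 'speech') {
    // Clear "Listening…" indication while the microphone is recording.
    return (
      <div>
        <span role="status">Listening…</span>
        <button onClick={() => setMode('text')}>Stop</button>
      </div>
    );
  }

  return (
    <div>
      <input
        placeholder="Type a message"
        value={draft}
        onChange={event => setDraft(event.target.value)}
        onKeyDown={event => {
          // Enter sends the typed draft.
          if (event.key === 'Enter' && draft) {
            onSendText(draft);
            setDraft('');
          }
        }}
      />
      <button
        aria-label="Speak"
        onClick={() => {
          setMode('speech');
          onStartDictation();
        }}
      >
        🎤
      </button>
    </div>
  );
}
```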

What will this fix?

[Enhancement]

@compulim
Contributor Author

Punting this to R9.

@DesignPolice

Thanks @compulim - I think we will need to do some testing on how best to do this, given the number of different settings in which WebChat is used.

@scheyal scheyal self-assigned this Mar 4, 2020
@corinagum corinagum removed the backlog label (Out of scope for the current iteration but it will be evaluated in a future release) Jan 20, 2021
@Kaiqb Kaiqb assigned compulim and Quirinevwm and unassigned scheyal and emivers8 Jan 21, 2021
@Quirinevwm

Quirinevwm commented Jan 25, 2021

UX recommendations:
As discussed during our sync, this modality switching calls for a supporting design pattern. The hybrid-interruption fix is very hard to achieve, so let's first tackle the chat-versus-speech fix.

(1) One modality
Only a "chat" or a "speech" modality; the bot is accessible via one modality only:

"Chat"
User types = (no > icon needed)
User shares/uploads = action button (clip or + icon), which should be separate from the modality and is typically positioned to the left of the entry field.

"Speech"
User talks = (microphone icon)

(>1) Switch between modalities, but send content to the bot via only one modality at a time:
Hypothesis: when an agent supports both modalities, whenever the user starts typing in the "chat" field, they can send their message by pressing Enter. If they want to "speak" to the agent, speech can be accessed by clicking the speech icon.

The proposed pattern is that we keep showing the speak icon by default (on the right) and the upload icon (on the left), with the supporting text 'Type a message' in the entry field, so whenever the user starts to 'chat' (i.e. type) they can just press Enter to send their message; the supporting text changes to 'Listening' while the user is recording.

Flow =
Whenever a user starts typing and decides halfway that they want to speak their content instead, they will need to erase their written text for the mic to become clickable, and then kick off the voice conversation by clicking that mic icon. They can follow up by clicking the microphone and speaking again, or by starting to type again in the "chat" field (a rough sketch follows at the end of this comment).
[image]
(See design link for exploration & design pattern)

(>1) Moving halfway from chat to speaking is out of scope for now.
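As an illustration only (not Web Chat's actual API; the type and function names are hypothetical), the flow above could be summed up as a small pure helper that decides which send-box affordances are active for a given state:

```ts
// Hypothetical sketch of the flow above, not Web Chat's actual API:
// a pure helper that decides which send-box affordances are active.
type SendBoxState = {
  draft: string;      // current text in the entry field
  dictating: boolean; // true while speech input is recording
};

type SendBoxAffordances = {
  placeholder: string; // 'Type a message' vs. 'Listening'
  micEnabled: boolean; // mic is only clickable while the draft is empty
  enterSends: boolean; // Enter sends the typed draft
};

function affordancesFor({ draft, dictating }: SendBoxState): SendBoxAffordances {
  if (dictating) {
    // While recording, show "Listening"; typing and Enter are inactive.
    return { placeholder: 'Listening', micEnabled: false, enterSends: false };
  }

  return {
    placeholder: 'Type a message',
    // Per the flow above, the user must erase their draft before the mic becomes clickable.
    micEnabled: draft.length === 0,
    enterSends: draft.length > 0
  };
}

// Halfway through typing, the mic stays disabled until the draft is erased.
console.log(affordancesFor({ draft: 'Book a flight', dictating: false })); // micEnabled: false
console.log(affordancesFor({ draft: '', dictating: false }));              // micEnabled: true
```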

@corinagum corinagum removed this from the R12 milestone Jan 25, 2021
@pinarkaymaz6

Looking forward to this. Is there any update?

@corinagum
Contributor

We're still in the design phase for this, but the team is also excited about this feature! We are hoping to get to the coding phase soon!

@corinagum corinagum added this to the R14 milestone Apr 26, 2021
@compulim compulim modified the milestones: R14, R15 Jun 16, 2021
@fraygosa

Any update? I noticed the front-burner label got removed. We would like to implement this on our bot.

@compulim compulim removed this from the R15 milestone Jun 29, 2021
@fraygosa

fraygosa commented Jul 6, 2021

> Any update? I noticed the front-burner label got removed. We would like to implement this on our bot.

The issue is that a user cannot determine whether they are in typing mode or not. Users will have a difficult time, thinking they have to speak. If the mic button changed to a send button whenever there is typed text, it would go a long way toward removing user frustration. This feature works in v3. Can this not be treated as a bug?
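As an illustration only (the component and callback names are hypothetical, not Web Chat's API), a rough sketch of that suggestion, where the same button slot renders a send button while there is typed text and a mic button otherwise:

```tsx
// Hypothetical sketch of the suggestion above, not Web Chat's actual API:
// the button slot shows "send" while the user has typed text, and "mic" otherwise.
import React from 'react';

export function PrimaryButton({
  draft,
  onSend,
  onMic
}: {
  draft: string;
  onSend: () => void;
  onMic: () => void;
}) {
  return draft ? (
    <button aria-label="Send" onClick={onSend}>➤</button>
  ) : (
    <button aria-label="Speak" onClick={onMic}>🎤</button>
  );
}
```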

@harish-madugula

Hey there!
Is there any workaround or update on this?
Any help is appreciated.
Thanks in advance.

@casperhuijsman

Can you provide an update?

@casperhuijsman

@cwhitten @compulim

Could you give an update on this?
