Summary
Based on the development of the Keyboard App algorithm for word predictions, we would like to implement a similar system for pictogram predictions in the OTTAA app. Aside from being deployed as a separate Cloud Function on the OTTAA app Firebase project, the OTTAA algorithm differs from the Keyboard one in that it has to draw its pictograms from the training dataset and avoid predictions that have no assigned picto.
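As a rough illustration of that filtering step, the sketch below (TypeScript) keeps only candidate words that map to an assigned picto. The `WordPrediction`/`PictoPrediction` types and the `pictoIdByWord` map built from the picto catalog are hypothetical names, not existing code:

```typescript
// Hypothetical shape: a word-level prediction coming from the Keyboard-style model.
interface WordPrediction {
  word: string;
  score: number;
}

// Hypothetical shape: a prediction exposed to the OTTAA app, tied to a picto.
interface PictoPrediction {
  pictoId: string;
  word: string;
  score: number;
}

// Keep only predictions that map to a known picto, and cap the list length.
function toPictoPredictions(
  candidates: WordPrediction[],
  pictoIdByWord: Map<string, string>, // built from the training dataset / picto catalog
  maxPredictions: number,
): PictoPrediction[] {
  return candidates
    .filter((c) => pictoIdByWord.has(c.word)) // drop words with no assigned picto
    .slice(0, maxPredictions)
    .map((c) => ({ pictoId: pictoIdByWord.get(c.word)!, word: c.word, score: c.score }));
}
```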
Intended Outcome
A Cloud Function triggered by an HTTP request that receives an array of picto IDs or words sequenced from a phrase formed on the OTTAA app, together with settings data (language, maximum number of predictions, etc.), and returns an array of predictions that would continue the requested phrase.
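A minimal sketch of what that endpoint could look like, assuming a Node.js Cloud Function written with `firebase-functions`. The function name `predictPictograms`, the request fields, and `rankCandidates` are placeholders, not the final API:

```typescript
import * as functions from "firebase-functions";

// Hypothetical request payload: the sentence so far plus user settings.
interface PredictionRequest {
  sentence: string[];     // picto IDs or words, in sentence order
  language: string;       // e.g. "es-AR"
  maxPredictions: number; // cap on how many pictos to return
}

// Hypothetical response item: a picto that could continue the phrase.
interface PictoPrediction {
  pictoId: string;
  word: string;
  score: number;
}

// Placeholder for the real prediction model; here it just returns an empty list.
async function rankCandidates(req: PredictionRequest): Promise<PictoPrediction[]> {
  return [];
}

export const predictPictograms = functions.https.onRequest(async (req, res) => {
  const { sentence, language, maxPredictions } = req.body as PredictionRequest;

  if (!Array.isArray(sentence) || sentence.length === 0) {
    res.status(400).send({ error: "sentence must be a non-empty array" });
    return;
  }

  // Rank candidate pictos that could continue the phrase and return them.
  const predictions = await rankCandidates({ sentence, language, maxPredictions });
  res.status(200).send({ predictions });
});
```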
How will it work?
After the user presses a pictogram on the app (and that picto is moved to the sentence block), the app requests predictions from the Cloud Function using an HTTP request. Once it gets the response, it shows on the predictions interface the most likely pictos that would follow the sentence. The picto relations used in the app up to now can still be used to get a fast, simple prediction while the Cloud Function responds; see the client-side sketch below.
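The client-side flow could look roughly like the following sketch. `onPictoPressed`, `localRelationPredictions`, `showPredictions`, and the endpoint URL are hypothetical names used only to illustrate the fast-local-then-remote behaviour described above:

```typescript
// Assumed endpoint of the Cloud Function sketched in "Intended Outcome".
const PREDICT_URL = "https://<region>-<project>.cloudfunctions.net/predictPictograms";

// Hypothetical helpers standing in for existing app code.
function localRelationPredictions(sentence: string[], max: number): string[] {
  return []; // existing picto-relations lookup would go here
}
function showPredictions(pictos: unknown[]): void {
  // render on the predictions interface
}

async function onPictoPressed(
  sentence: string[],
  settings: { language: string; maxPredictions: number },
) {
  // 1. Immediate, simple prediction from the existing picto relations.
  showPredictions(localRelationPredictions(sentence, settings.maxPredictions));

  // 2. Ask the Cloud Function for model-based predictions.
  try {
    const response = await fetch(PREDICT_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ sentence, ...settings }),
    });
    const { predictions } = await response.json();
    showPredictions(predictions); // replace the quick local results
  } catch {
    // Network failure: keep the local predictions already on screen.
  }
}
```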