speech: initial support #1407
Conversation
```diff
@@ -23,6 +23,8 @@ To run the system tests, first create and configure a project in the Google Developers Console
 - **GCLOUD_TESTS_KEY**: The path to the JSON key file.
 - ***GCLOUD_TESTS_API_KEY*** (*optional*): An API key that can be used to test the Translate API.
 - ***GCLOUD_TESTS_DNS_DOMAIN*** (*optional*): A domain you own managed by Google Cloud DNS (expected format: `'gcloud-node.com.'`).
+- ***GCLOUD_TESTS_BIGTABLE_ZONE*** (*optional*): A zone containing a Google Cloud Bigtable cluster.
```
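For context, a minimal sketch (not part of this diff) of how a system-test setup might read the variables documented above. The `process.env` lookups mirror the documented names; the validation logic and error message are illustrative assumptions.

```js
// Illustrative sketch only: reading the documented variables in a test setup.
var env = {
  keyFilename: process.env.GCLOUD_TESTS_KEY,            // required
  apiKey: process.env.GCLOUD_TESTS_API_KEY,              // optional (Translate)
  dnsDomain: process.env.GCLOUD_TESTS_DNS_DOMAIN,        // optional (Cloud DNS)
  bigtableZone: process.env.GCLOUD_TESTS_BIGTABLE_ZONE   // optional (Bigtable)
};

if (!env.keyFilename) {
  throw new Error('GCLOUD_TESTS_KEY must point to a JSON key file.');
}
```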
```js
baseUrl: 'speech.googleapis.com',
projectIdRequired: false,
service: 'speech',
apiVersion: 'v1',
```
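For readers skimming the diff, here is a hedged sketch of where a config block like this typically lands in gcloud-node: it is passed from the service constructor to the shared gRPC base class. The require path and the `scopes` field are assumptions, not taken from this PR.

```js
var util = require('util');
var GrpcService = require('../common/grpc-service.js'); // path is an assumption

function Speech(options) {
  var config = {
    baseUrl: 'speech.googleapis.com',
    projectIdRequired: false,
    service: 'speech',
    apiVersion: 'v1',
    scopes: ['https://www.googleapis.com/auth/cloud-platform'] // assumed
  };

  // Hand the service description to the shared gRPC machinery.
  GrpcService.call(this, config, options);
}

util.inherits(Speech, GrpcService);
```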
Sorry for being quiet on this. It looks great to me so far... I'll dive in deeper asap. Thanks!
Yeah, this is ready for review. I was looking at using something like https://github.com/audiocogs/aurora.js to detect encoding and sampleRate, but after some testing I couldn't get it to work reliably (maybe because the Speech API supports only a small set of encodings?). Also, https://github.com/audiocogs/aurora.js requires some native dependencies, which I'm not sure we'd want. If some day another API also implements the
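Since auto-detection was dropped, the practical consequence is that callers declare the audio format themselves. A small hedged example of what that request config looks like (the values are illustrative):

```js
// Without auto-detection, the caller states the audio format explicitly.
var reqOpts = {
  encoding: 'LINEAR16', // one of the few encodings the Speech API accepts
  sampleRate: 16000     // samples per second of the source audio
};
```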
Assigned to @callmehiphop for a review.
```js
// We must establish an authClient to give to grpc.
this.getGrpcCredentials_(function(err, credentials) {
  if (err) {
    setImmediate(function() {
```
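The `setImmediate` wrapper is there so the callback is never invoked synchronously, even when credential lookup fails right away. A hedged sketch of the surrounding pattern (the method name `request_`, the `callback` parameter, and the comments are assumptions, not the PR's exact code):

```js
// Sketch (assumed surrounding code): defer errors so the public callback
// is always invoked asynchronously.
Speech.prototype.request_ = function(protoOpts, reqOpts, callback) {
  // We must establish an authClient to give to grpc.
  this.getGrpcCredentials_(function(err, credentials) {
    if (err) {
      setImmediate(function() {
        callback(err); // delivered on a later tick, never synchronously
      });
      return;
    }

    // ...continue and issue the gRPC request using `credentials`...
  });
};
```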
```js
 * }
 *
 * //-
 * // <h3>Run speech detection over a local file</h3>
```
Pulled this out from the now-squashed commit comment:
@callmehiphop many changes made, PTAL!
Looks like there are still some linting issues lingering about.
Any idea when this will get merged?
Assuming the tests pass after my most recent commit, I think we can:
```js
 * // Run speech recognition over raw file contents.
 * //-
 * speech.recognize({
 *   content: fs.readFileSync('./bridge.raw')
```
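The snippet above stops mid-call; here is a hedged guess at its continuation, inferring the argument shape from what is visible. The second options argument and the callback signature are assumptions.

```js
var fs = require('fs');

// Assuming `speech` is an instantiated Speech client.
speech.recognize({
  content: fs.readFileSync('./bridge.raw')
}, {
  encoding: 'LINEAR16',
  sampleRate: 16000
}, function(err, results) {
  // `results` holds the recognized text for the raw audio buffer.
});
```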
@jmdobry I might have misunderstood, but I thought we agreed to wait until next Monday to cut the release so we could try and get vtk in as well.
Maybe I misunderstand too. Are we talking about just adding the autogen layer, or adding the autogen layer and changing the hand-written layer to use autogen?
I obviously wasn't at the meeting, but just a thought... if we can get this out now, let's just do that :)
Fixes #1406

Add support for the Speech API (v1beta1)!

- Speech#recognize (SyncRecognize)
- Speech#startRecognition (AsyncRecognize)
- Speech#createRecognizeStream (StreamingRecognize)
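To make the three entry points concrete, a hedged usage sketch. The signatures are inferred from the method names above and the examples in the diff, so treat everything beyond the method names as assumptions.

```js
var fs = require('fs');

// Assuming `speech` is an instantiated Speech client.
var config = { encoding: 'LINEAR16', sampleRate: 16000 };

// SyncRecognize: one-shot recognition, results delivered to the callback.
speech.recognize('./audio.raw', config, function(err, results) {});

// AsyncRecognize: kicks off a long-running operation to poll or listen on.
speech.startRecognition('./audio.raw', config, function(err, operation) {});

// StreamingRecognize: duplex stream for audio in, recognition results out.
fs.createReadStream('./audio.raw')
  .pipe(speech.createRecognizeStream({ config: config }))
  .on('data', function(response) {
    // interim and final recognition results
  });
```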