
SpeechToText Properties

The SpeechToText type exposes the following members.

Properties
  Name | Description
Public property AcousticCustomizationId
Specifies the Globally Unique Identifier (GUID) of a custom acoustic model that is to be used for all requests sent over the connection. The base model of the custom acoustic model must match the value of the model parameter. By default, no custom acoustic model is used. For more information, see https://console.bluemix.net/docs/services/speech-to-text/custom.html.
Public property AudioSent
True if AudioData has been sent and we are recognizing speech.
Public property Credentials
Gets and sets the credentials of the service. If the credentials define an endpoint, it replaces the default endpoint.
Public property CustomizationId
Specifies the Globally Unique Identifier (GUID) of a custom language model that is to be used for all requests sent over the connection. The base model of the custom language model must match the value of the model parameter. By default, no custom language model is used. For more information, see https://console.bluemix.net/docs/services/speech-to-text/custom.html.
Public property CustomizationWeight
Specifies the weight the service gives to words from a specified custom language model compared to those from the base model for all requests sent over the connection. Specify a value between 0.0 and 1.0; the default value is 0.3. For more information, see https://console.bluemix.net/docs/services/speech-to-text/language-use.html#weight.
Public property DetectSilence
If true, then we will try not to send silent audio clips to the server. This can save bandwidth when no sound is present.
Public property EnableInterimResults
If true, then we will receive interim results while recognizing; the user must check the Final flag on each result to tell interim hypotheses from the final transcript.
Public property EnableTimestamps
True to return timestamps of words with results.
Public property EnableWordConfidence
True to return word confidence with results.
Public property InactivityTimeout
NON-MULTIPART ONLY: The time in seconds after which, if only silence (no speech) is detected in submitted audio, the connection is closed with a 400 error. Useful for stopping audio submission from a live microphone when a user simply walks away. Use -1 for infinity.
Public property IsListening
True if StartListening() has been called.
Public property Keywords
NON-MULTIPART ONLY: Array of keyword strings to spot in the audio. Each keyword string can include one or more tokens. Keywords are spotted only in the final hypothesis, not in interim results. Omit the parameter or specify an empty array if you do not need to spot keywords.
Public property KeywordsThreshold
NON-MULTIPART ONLY: Confidence value that is the lower bound for spotting a keyword. A word is considered to match a keyword if its confidence is greater than or equal to the threshold. Specify a probability between 0 and 1 inclusive. No keyword spotting is performed if you omit the parameter. If you specify a threshold, you must also specify one or more keywords.
Public property LoadFile
Set this property to override the internal file loading of this class.
Public property MaxAlternatives
The maximum number of alternative transcripts returned by a recognize request.
Public property OnError
This delegate is invoked when an error occurs.
Public property ProfanityFilter
NON-MULTIPART ONLY: If true (the default), filters profanity from all output except for keyword results by replacing inappropriate words with a series of asterisks. Set the parameter to false to return results with no censoring. Applies to US English transcription only.
Public property RecognizeModel
This property controls which recognize model we use when making recognize requests to the server.
Public property SilenceThreshold
A value from 0.0 to 1.0 that determines what is considered silence. If the absolute value of the audio level is below this value, then we consider it silence.
Public property SmartFormatting
NON-MULTIPART ONLY: If true, converts dates, times, series of digits and numbers, phone numbers, currency values, and Internet addresses into more readable, conventional representations in the final transcript of a recognition request. If false (the default), no formatting is performed. Applies to US English transcription only.
Public property SpeakerLabels
NON-MULTIPART ONLY: Indicates whether labels that identify which words were spoken by which participants in a multi-person exchange are to be included in the response. If true, speaker labels are returned; if false (the default), they are not. Speaker labels can be returned only for the following language models: en-US_NarrowbandModel, en-US_BroadbandModel, es-ES_NarrowbandModel, es-ES_BroadbandModel, ja-JP_NarrowbandModel, and ja-JP_BroadbandModel. Setting speaker_labels to true forces the timestamps parameter to be true, regardless of whether you specify false for the parameter.
Public property StreamMultipart
If true, sets the `Transfer-Encoding` request header to `chunked`, causing the audio to be streamed to the service. By default, audio is sent all at once as a one-shot delivery. See https://console.bluemix.net/docs/services/speech-to-text/input.html#transmission.
Public property Url
Gets and sets the endpoint URL for the service.
Public property WordAlternativesThreshold
NON-MULTIPART ONLY: Confidence value that is the lower bound for identifying a hypothesis as a possible word alternative (also known as "Confusion Networks"). An alternative word is considered if its confidence is greater than or equal to the threshold. Specify a probability between 0 and 1 inclusive. No alternative words are computed if you omit the parameter.
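As a rough illustration, several of the properties above might be configured together before starting a recognition session. This is a hedged sketch, not a verified sample: it assumes the Watson Unity SDK's `Credentials` and `SpeechToText` constructors, and the credential values and keyword strings are placeholders.

```csharp
// Sketch only: constructor signatures and property types may differ
// between SDK versions. Credential values below are placeholders.
Credentials credentials = new Credentials("<username>", "<password>",
    "https://stream.watsonplatform.net/speech-to-text/api");
SpeechToText speechToText = new SpeechToText(credentials);

// Silence handling: skip clips whose absolute audio level stays below 0.03.
speechToText.DetectSilence = true;
speechToText.SilenceThreshold = 0.03f;

// Interim results: callers must check the Final flag on each result.
speechToText.EnableInterimResults = true;
speechToText.EnableTimestamps = true;

// Keyword spotting (non-multipart only): spot placeholder keywords
// at a confidence of 0.5 or greater.
speechToText.Keywords = new string[] { "hello", "watson" };
speechToText.KeywordsThreshold = 0.5f;

// Close the connection after 30 seconds of silence; -1 would disable the timeout.
speechToText.InactivityTimeout = 30;
```

Note that the NON-MULTIPART ONLY properties (Keywords, KeywordsThreshold, InactivityTimeout, and others marked above) take effect only when StreamMultipart is false.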