public class CreateJobOptions
extends com.ibm.cloud.sdk.core.service.model.GenericModel
| Modifier and Type | Class and Description |
|---|---|
| `static class` | `CreateJobOptions.Builder` Builder. |
| `static interface` | `CreateJobOptions.Events` If the job includes a callback URL, a comma-separated list of notification events to which to subscribe. |
| `static interface` | `CreateJobOptions.Model` The model to use for speech recognition. |
| Modifier and Type | Method and Description |
|---|---|
| `String` | `acousticCustomizationId()` Gets the acousticCustomizationId. |
| `InputStream` | `audio()` Gets the audio. |
| `Boolean` | `audioMetrics()` Gets the audioMetrics. |
| `Float` | `backgroundAudioSuppression()` Gets the backgroundAudioSuppression. |
| `String` | `baseModelVersion()` Gets the baseModelVersion. |
| `String` | `callbackUrl()` Gets the callbackUrl. |
| `Float` | `characterInsertionBias()` Gets the characterInsertionBias. |
| `String` | `contentType()` Gets the contentType. |
| `Double` | `customizationWeight()` Gets the customizationWeight. |
| `Double` | `endOfPhraseSilenceTime()` Gets the endOfPhraseSilenceTime. |
| `String` | `events()` Gets the events. |
| `String` | `grammarName()` Gets the grammarName. |
| `Long` | `inactivityTimeout()` Gets the inactivityTimeout. |
| `List<String>` | `keywords()` Gets the keywords. |
| `Float` | `keywordsThreshold()` Gets the keywordsThreshold. |
| `String` | `languageCustomizationId()` Gets the languageCustomizationId. |
| `Boolean` | `lowLatency()` Gets the lowLatency. |
| `Long` | `maxAlternatives()` Gets the maxAlternatives. |
| `String` | `model()` Gets the model. |
| `CreateJobOptions.Builder` | `newBuilder()` New builder. |
| `Boolean` | `processingMetrics()` Gets the processingMetrics. |
| `Float` | `processingMetricsInterval()` Gets the processingMetricsInterval. |
| `Boolean` | `profanityFilter()` Gets the profanityFilter. |
| `Boolean` | `redaction()` Gets the redaction. |
| `Long` | `resultsTtl()` Gets the resultsTtl. |
| `Boolean` | `smartFormatting()` Gets the smartFormatting. |
| `Long` | `smartFormattingVersion()` Gets the smartFormattingVersion. |
| `Boolean` | `speakerLabels()` Gets the speakerLabels. |
| `Float` | `speechDetectorSensitivity()` Gets the speechDetectorSensitivity. |
| `Boolean` | `splitTranscriptAtPhraseEnd()` Gets the splitTranscriptAtPhraseEnd. |
| `Boolean` | `timestamps()` Gets the timestamps. |
| `String` | `userToken()` Gets the userToken. |
| `Float` | `wordAlternativesThreshold()` Gets the wordAlternativesThreshold. |
| `Boolean` | `wordConfidence()` Gets the wordConfidence. |
public CreateJobOptions.Builder newBuilder()
public InputStream audio()
The audio to transcribe.
public String contentType()
The format (MIME type) of the audio. For more information about specifying an audio format, see **Audio formats (content types)** in the method description.
public String model()
The model to use for speech recognition. If you omit the `model` parameter, the service uses the US English `en-US_BroadbandModel` by default.
_For IBM Cloud Pak for Data,_ if you do not install the `en-US_BroadbandModel`, you must either specify a model with the request or specify a new default model for your installation of the service.
**See also:**
* [Using a model for speech recognition](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-models-use)
* [Using the default model](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-models-use#models-use-default)
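For example, a minimal sketch of building and submitting a job. The service setup, file name, and `{apikey}`/`{url}` placeholders are illustrative assumptions, and exception handling is omitted:

```java
import com.ibm.cloud.sdk.core.security.IamAuthenticator;
import com.ibm.watson.speech_to_text.v1.SpeechToText;
import com.ibm.watson.speech_to_text.v1.model.CreateJobOptions;
import com.ibm.watson.speech_to_text.v1.model.RecognitionJob;
import java.io.FileInputStream;

public class CreateJobExample {
  public static void main(String[] args) throws Exception {
    // Authenticate and point the client at your service instance
    // ({apikey} and {url} are placeholders).
    IamAuthenticator authenticator = new IamAuthenticator("{apikey}");
    SpeechToText speechToText = new SpeechToText(authenticator);
    speechToText.setServiceUrl("{url}");

    // Build the job options: the audio stream, its format, and an explicit model.
    CreateJobOptions options = new CreateJobOptions.Builder()
        .audio(new FileInputStream("audio-file.flac")) // hypothetical file name
        .contentType("audio/flac")
        .model("en-US_BroadbandModel")
        .build();

    // Submit the job; the service transcribes it asynchronously.
    RecognitionJob job = speechToText.createJob(options).execute().getResult();
    System.out.println(job.getId() + ": " + job.getStatus());
  }
}
```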
public String callbackUrl()
A URL to which callback notifications are to be sent. The URL must already be successfully allowlisted by using the [Register a callback](#registercallback) method. You can include the same callback URL with any number of job creation requests. Omit the parameter to poll the service for job completion and results.
Use the `user_token` parameter to specify a unique user-specified string with each job to differentiate the callback notifications for the jobs.
public String events()
If the job includes a callback URL, a comma-separated list of notification events to which to subscribe. Valid events are:
* `recognitions.started` generates a callback notification when the service begins to process the job.
* `recognitions.completed` generates a callback notification when the job is complete. You must use the [Check a job](#checkjob) method to retrieve the results before they time out or are deleted.
* `recognitions.completed_with_results` generates a callback notification when the job is complete. The notification includes the results of the request.
* `recognitions.failed` generates a callback notification if the service experiences an error while processing the job.
The `recognitions.completed` and `recognitions.completed_with_results` events are incompatible. You can specify only one of the two events.
If the job includes a callback URL, omit the parameter to subscribe to the default events: `recognitions.started`, `recognitions.completed`, and `recognitions.failed`. If the job does not include a callback URL, omit the parameter.
public String userToken()
If the job includes a callback URL, a user-specified string that the service is to include with each callback notification for the job; the token allows the user to maintain an internal mapping between jobs and notification events. If the job does not include a callback URL, omit the parameter.
public Long resultsTtl()
The number of minutes for which the results are to be available after the job has finished. If not delivered via a callback, the results must be retrieved within this time. Omit the parameter to use a time to live of one week. The parameter is valid with or without a callback URL.
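A sketch of a job that delivers results through callback notifications, assuming `audio` is an `InputStream` over the audio data and the callback URL has already been allowlisted; the endpoint and token are hypothetical:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .callbackUrl("https://example.com/stt-results") // hypothetical allowlisted endpoint
    .events("recognitions.started,recognitions.completed_with_results,recognitions.failed")
    .userToken("job-42") // correlates notifications with this job
    .resultsTtl(120L)    // keep results available for two hours
    .build();
```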
public String languageCustomizationId()
The customization ID (GUID) of a custom language model that is to be used with the recognition request. The base model of the specified custom language model must match the model specified with the `model` parameter. You must make the request with credentials for the instance of the service that owns the custom model. By default, no custom language model is used. See [Using a custom language model for speech recognition](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-languageUse).
**Note:** Use this parameter instead of the deprecated `customization_id` parameter.
public String acousticCustomizationId()
The customization ID (GUID) of a custom acoustic model that is to be used with the recognition request. The base model of the specified custom acoustic model must match the model specified with the `model` parameter. You must make the request with credentials for the instance of the service that owns the custom model. By default, no custom acoustic model is used. See [Using a custom acoustic model for speech recognition](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-acousticUse).
public String baseModelVersion()
The version of the specified base model that is to be used with the recognition request. Multiple versions of a base model can exist when a model is updated for internal improvements. The parameter is intended primarily for use with custom models that have been upgraded for a new base model. The default value depends on whether the parameter is used with or without a custom model. See [Making speech recognition requests with upgraded custom models](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-custom-upgrade-use#custom-upgrade-use-recognition).
public Double customizationWeight()
If you specify the customization ID (GUID) of a custom language model with the recognition request, the customization weight tells the service how much weight to give to words from the custom language model compared to those from the base model for the current request.
Specify a value between 0.0 and 1.0. Unless a different customization weight was specified for the custom model when the model was trained, the default value is:
* 0.3 for previous-generation models
* 0.2 for most next-generation models
* 0.1 for next-generation English and Japanese models
A customization weight that you specify overrides a weight that was specified when the custom model was trained. The default value yields the best performance in general. Assign a higher value if your audio makes frequent use of OOV words from the custom model. Use caution when setting the weight: a higher value can improve the accuracy of phrases from the custom model's domain, but it can negatively affect performance on non-domain phrases.
See [Using customization weight](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-languageUse#weight).
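A sketch that pairs custom language and acoustic models with their base model, assuming `audio` is an `InputStream` and the GUIDs are placeholders for custom models that your service instance owns:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .model("en-US_BroadbandModel") // must match the custom models' base model
    .languageCustomizationId("{language-customization-id}")
    .acousticCustomizationId("{acoustic-customization-id}")
    .customizationWeight(0.3) // overrides any weight set when the model was trained
    .build();
```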
public Long inactivityTimeout()
The time in seconds after which, if only silence (no speech) is detected in streaming audio, the connection is closed with a 400 error. The parameter is useful for stopping audio submission from a live microphone when a user simply walks away. Use `-1` for infinity. See [Inactivity timeout](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-input#timeouts-inactivity).
public List<String> keywords()
An array of keyword strings to spot in the audio. Each keyword string can include one or more string tokens. Keywords are spotted only in the final results, not in interim hypotheses. If you specify any keywords, you must also specify a keywords threshold. Omit the parameter or specify an empty array if you do not need to spot keywords.
You can spot a maximum of 1000 keywords with a single request. A single keyword can have a maximum length of 1024 characters, though the maximum effective length for double-byte languages might be shorter. Keywords are case-insensitive.
See [Keyword spotting](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-spotting#keyword-spotting).
public Float keywordsThreshold()
A confidence value that is the lower bound for spotting a keyword. A word is considered to match a keyword if its confidence is greater than or equal to the threshold. Specify a probability between 0.0 and 1.0. If you specify a threshold, you must also specify one or more keywords. The service performs no keyword spotting if you omit either parameter. See [Keyword spotting](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-spotting#keyword-spotting).
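A keyword-spotting sketch, assuming `audio` is an `InputStream`; the keyword list and threshold are illustrative:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .keywords(java.util.Arrays.asList("colorado", "tornado", "tornadoes"))
    .keywordsThreshold(0.5f) // report only matches with confidence >= 0.5
    .build();
```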
public Long maxAlternatives()
The maximum number of alternative transcripts that the service is to return. By default, the service returns a single transcript. If you specify a value of `0`, the service uses the default value, `1`. See [Maximum alternatives](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-metadata#max-alternatives).
public Float wordAlternativesThreshold()
A confidence value that is the lower bound for identifying a hypothesis as a possible word alternative (also known as "Confusion Networks"). An alternative word is considered if its confidence is greater than or equal to the threshold. Specify a probability between 0.0 and 1.0. By default, the service computes no alternative words. See [Word alternatives](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-spotting#word-alternatives).
public Boolean wordConfidence()
If `true`, the service returns a confidence measure in the range of 0.0 to 1.0 for each word. By default, the service returns no word confidence scores. See [Word confidence](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-metadata#word-confidence).
public Boolean timestamps()
If `true`, the service returns time alignment for each word. By default, no timestamps are returned. See [Word timestamps](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-metadata#word-timestamps).
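A sketch that requests alternative transcripts and word-level metadata, assuming `audio` is an `InputStream`; the values are illustrative:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .maxAlternatives(3L)             // up to three alternative transcripts
    .wordAlternativesThreshold(0.9f) // only high-confidence word alternatives
    .wordConfidence(true)            // per-word confidence scores
    .timestamps(true)                // per-word time alignment
    .build();
```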
public Boolean profanityFilter()
If `true`, the service filters profanity from all output except for keyword results by replacing inappropriate words with a series of asterisks. Set the parameter to `false` to return results with no censoring.
**Note:** The parameter can be used with US English and Japanese transcription only. See [Profanity filtering](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-formatting#profanity-filtering).
public Boolean smartFormatting()
If `true`, the service converts dates, times, series of digits and numbers, phone numbers, currency values, and internet addresses into more readable, conventional representations in the final transcript of a recognition request. For US English, the service also converts certain keyword strings to punctuation symbols. By default, the service performs no smart formatting.
**Note:** The parameter can be used with US English, Japanese, and Spanish (all dialects) transcription only.
See [Smart formatting](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-formatting#smart-formatting).
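A formatting sketch for US English audio, assuming `audio` is an `InputStream`:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .model("en-US_BroadbandModel")
    .smartFormatting(true)  // readable dates, numbers, currency, and so on
    .profanityFilter(false) // return uncensored results
    .build();
```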
public Long smartFormattingVersion()
The smart formatting version to use with next-generation models. Smart formatting version is supported for US English, Brazilian Portuguese, French, and German.
public Boolean speakerLabels()
If `true`, the response includes labels that identify which words were spoken by which participants in a multi-person exchange. By default, the service returns no speaker labels. Setting `speaker_labels` to `true` forces the `timestamps` parameter to be `true`, regardless of whether you specify `false` for the parameter.
* _For previous-generation models,_ the parameter can be used with Australian English, US English, German, Japanese, Korean, and Spanish (both broadband and narrowband models) and UK English (narrowband model) transcription only.
* _For next-generation models,_ the parameter can be used with Czech, English (Australian, Indian, UK, and US), German, Japanese, Korean, and Spanish transcription only.
See [Speaker labels](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-speaker-labels).
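A speaker-labels sketch for a multi-speaker US English recording, assuming `audio` is an `InputStream`:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .model("en-US_NarrowbandModel")
    .speakerLabels(true) // also forces timestamps to true
    .build();
```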
public String grammarName()
The name of a grammar that is to be used with the recognition request. If you specify a grammar, you must also use the `language_customization_id` parameter to specify the name of the custom language model for which the grammar is defined. The service recognizes only strings that are recognized by the specified grammar; it does not recognize other custom words from the model's words resource.
See [Using a grammar for speech recognition](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-grammarUse).
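A grammar sketch, assuming `audio` is an `InputStream` and that a grammar named `confirm-grammar` (a hypothetical name) was added to the referenced custom language model:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .languageCustomizationId("{language-customization-id}")
    .grammarName("confirm-grammar") // only strings this grammar recognizes are returned
    .build();
```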
public Boolean redaction()
If `true`, the service redacts, or masks, numeric data from final transcripts. The feature redacts any number that has three or more consecutive digits by replacing each digit with an `X` character. It is intended to redact sensitive numeric data, such as credit card numbers. By default, the service performs no redaction.
When you enable redaction, the service automatically enables smart formatting, regardless of whether you explicitly disable that feature. To ensure maximum security, the service also disables keyword spotting (ignores the `keywords` and `keywords_threshold` parameters) and returns only a single final transcript (forces the `max_alternatives` parameter to be `1`).
**Note:** The parameter can be used with US English, Japanese, and Korean transcription only.
See [Numeric redaction](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-formatting#numeric-redaction).
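A redaction sketch for US English audio, assuming `audio` is an `InputStream`:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .redaction(true) // masks 3+ consecutive digits; also enables smart formatting,
                     // disables keyword spotting, and forces a single final transcript
    .build();
```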
public Boolean processingMetrics()
If `true`, requests processing metrics about the service's transcription of the input audio. The service returns processing metrics at the interval specified by the `processing_metrics_interval` parameter. It also returns processing metrics for transcription events, for example, for final and interim results. By default, the service returns no processing metrics.
See [Processing metrics](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-metrics#processing-metrics).
public Float processingMetricsInterval()
Specifies the interval in real wall-clock seconds at which the service is to return processing metrics. The parameter is ignored unless the `processing_metrics` parameter is set to `true`.
The parameter accepts a minimum value of 0.1 seconds. The level of precision is not restricted, so you can specify values such as 0.25 and 0.125.
The service does not impose a maximum value. If you want to receive processing metrics only for transcription events instead of at periodic intervals, set the value to a large number. If the value is larger than the duration of the audio, the service returns processing metrics only for transcription events.
See [Processing metrics](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-metrics#processing-metrics).
public Boolean audioMetrics()
If `true`, requests detailed information about the signal characteristics of the input audio. The service returns audio metrics with the final transcription results. By default, the service returns no audio metrics.
See [Audio metrics](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-metrics#audio-metrics).
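A metrics sketch, assuming `audio` is an `InputStream`; the interval is illustrative:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .processingMetrics(true)
    .processingMetricsInterval(0.25f) // metrics every quarter second of wall-clock time
    .audioMetrics(true)               // signal characteristics with the final results
    .build();
```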
public Double endOfPhraseSilenceTime()
Specifies the duration of the pause interval at which the service splits a transcript into multiple final results. If the service detects pauses or extended silence before it reaches the end of the audio stream, its response can include multiple final results. Silence indicates a point at which the speaker pauses between spoken words or phrases.
Specify a value for the pause interval in the range of 0.0 to 120.0.
* A value greater than 0 specifies the interval that the service is to use for speech recognition.
* A value of 0 indicates that the service is to use the default interval. It is equivalent to omitting the parameter.
The default pause interval for most languages is 0.8 seconds; the default for Chinese is 0.6 seconds.
See [End of phrase silence time](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-parsing#silence-time).
public Boolean splitTranscriptAtPhraseEnd()
If `true`, directs the service to split the transcript into multiple final results based on semantic features of the input, for example, at the conclusion of meaningful phrases such as sentences. The service bases its understanding of semantic features on the base language model that you use with a request. Custom language models and grammars can also influence how and where the service splits a transcript.
By default, the service splits transcripts based solely on the pause interval. If the parameters are used together on the same request, `end_of_phrase_silence_time` has precedence over `split_transcript_at_phrase_end`.
See [Split transcript at phrase end](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-parsing#split-transcript).
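A sketch that combines both transcript-splitting parameters, assuming `audio` is an `InputStream`; the pause interval is illustrative:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .endOfPhraseSilenceTime(1.5)      // split on pauses of 1.5 seconds or longer
    .splitTranscriptAtPhraseEnd(true) // also split at semantic phrase boundaries
    .build();
```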
public Float speechDetectorSensitivity()
The sensitivity of speech activity detection that the service is to perform. Use the parameter to suppress word insertions from music, coughing, and other non-speech events. The service biases the audio it passes for speech recognition by evaluating the input audio against prior models of speech and non-speech activity.
Specify a value between 0.0 and 1.0:
* 0.0 suppresses all audio (no speech is transcribed).
* 0.5 (the default) provides a reasonable compromise for the level of sensitivity.
* 1.0 suppresses no audio (speech detection sensitivity is disabled).
The values increase on a monotonic curve. Specifying one or two decimal places of precision (for example, `0.55`) is typically more than sufficient.
The parameter is supported with all next-generation models and with most previous-generation models. See [Speech detector sensitivity](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-detection#detection-parameters-sensitivity) and [Language model support](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-detection#detection-support).
public Float backgroundAudioSuppression()
The level to which the service is to suppress background audio based on its volume to prevent it from being transcribed as speech. Use the parameter to suppress side conversations or background noise.
Specify a value in the range of 0.0 to 1.0:
* 0.0 (the default) provides no suppression (background audio suppression is disabled).
* 0.5 provides a reasonable level of audio suppression for general usage.
* 1.0 suppresses all audio (no audio is transcribed).
The values increase on a monotonic curve. Specifying one or two decimal places of precision (for example, `0.55`) is typically more than sufficient.
The parameter is supported with all next-generation models and with most previous-generation models. See [Background audio suppression](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-detection#detection-parameters-suppression) and [Language model support](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-detection#detection-support).
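A speech-activity-detection sketch, assuming `audio` is an `InputStream` over audio with background music or side conversations; the values are illustrative:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .speechDetectorSensitivity(0.6f)  // slightly more sensitive than the default 0.5
    .backgroundAudioSuppression(0.5f) // moderate suppression (default is 0.0, disabled)
    .build();
```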
public Boolean lowLatency()
If `true` for next-generation `Multimedia` and `Telephony` models that support low latency, directs the service to produce results even more quickly than it usually does. Next-generation models produce transcription results faster than previous-generation models. The `low_latency` parameter causes the models to produce results even more quickly, though the results might be less accurate when the parameter is used.
The parameter is not available for previous-generation `Broadband` and `Narrowband` models. It is available for most next-generation models.
* For a list of next-generation models that support low latency, see [Supported next-generation language models](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-models-ng#models-ng-supported).
* For more information about the `low_latency` parameter, see [Low latency](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-interim#low-latency).
public Float characterInsertionBias()
For next-generation models, an indication of whether the service is biased to recognize shorter or longer strings of characters when developing transcription hypotheses. By default, the service is optimized to produce the best balance of strings of different lengths.
The default bias is 0.0. The allowable range of values is -1.0 to 1.0.
* Negative values bias the service to favor hypotheses with shorter strings of characters.
* Positive values bias the service to favor hypotheses with longer strings of characters.
As the value approaches -1.0 or 1.0, the impact of the parameter becomes more pronounced. To determine the most effective value for your scenario, start by setting the value of the parameter to a small increment, such as -0.1, -0.05, 0.05, or 0.1, and assess how the value impacts the transcription results. Then experiment with different values as necessary, adjusting the value by small increments.
The parameter is not available for previous-generation models.
See [Character insertion bias](https://cloud.ibm.com/docs/speech-to-text?topic=speech-to-text-parsing#insertion-bias).
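A tuning sketch for a next-generation model, assuming `audio` is an `InputStream` and that the chosen model supports low latency; the bias value is illustrative:

```java
CreateJobOptions options = new CreateJobOptions.Builder()
    .audio(audio)
    .contentType("audio/wav")
    .model("en-US_Telephony")      // a next-generation model
    .lowLatency(true)              // faster, possibly less accurate results
    .characterInsertionBias(-0.1f) // small bias toward shorter hypotheses
    .build();
```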