ibm_watson.text_to_speech_v1 module
The IBM Watson™ Text to Speech service provides APIs that use IBM's speech-synthesis capabilities to synthesize text into natural-sounding speech in a variety of languages, dialects, and voices. The service supports at least one male or female voice, sometimes both, for each language. The audio is streamed back to the client with minimal delay.

For speech synthesis, the service supports a synchronous HTTP Representational State Transfer (REST) interface and a WebSocket interface. Both interfaces support plain text and SSML input. SSML is an XML-based markup language that provides text annotation for speech-synthesis applications. The WebSocket interface also supports the SSML <mark> element and word timings.

The service offers a customization interface that you can use to define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. A phonetic translation is based on the SSML phoneme format for representing a word. You can specify a phonetic translation in standard International Phonetic Alphabet (IPA) representation or in the proprietary IBM Symbolic Phonetic Representation (SPR). The Arabic, Chinese, Dutch, Australian English, and Korean languages support only IPA.

The service also offers a Tune by Example feature that lets you define custom prompts. You can also define speaker models to improve the quality of your custom prompts. The service supports custom prompts only for US English custom models and voices.
class TextToSpeechV1(authenticator: Optional[ibm_cloud_sdk_core.authenticators.authenticator.Authenticator] = None, service_name: str = 'text_to_speech')
Bases: ibm_cloud_sdk_core.base_service.BaseService

The Text to Speech V1 service.
DEFAULT_SERVICE_URL = 'https://api.us-south.text-to-speech.watson.cloud.ibm.com'

DEFAULT_SERVICE_NAME = 'text_to_speech'
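A minimal setup sketch (not part of the generated reference): it assumes an IAM API key and the IAMAuthenticator class from ibm_cloud_sdk_core; the API key and service URL are placeholders for the values of your own service instance.

```python
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import TextToSpeechV1

# Placeholder credentials: substitute the API key and URL of your own instance.
authenticator = IAMAuthenticator('your-iam-api-key')
text_to_speech = TextToSpeechV1(authenticator=authenticator)
text_to_speech.set_service_url('https://api.us-south.text-to-speech.watson.cloud.ibm.com')
```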
list_voices(**kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

List voices.
Lists all voices available for use with the service. The information includes the name, language, gender, and other details about the voice. The ordering of the list of voices can change from call to call; do not rely on an alphabetized or static list of voices. To see information about a specific voice, use the Get a voice method. See also: [Listing all available voices](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-voices#listVoices).
- Parameters
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a Voices object
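A short usage sketch, assuming the `text_to_speech` client configured in the setup example above; the dictionary keys follow the Voices description but are shown here as assumptions.

```python
# List the available voices and print a few documented fields.
voices = text_to_speech.list_voices().get_result()
for voice in voices['voices']:
    print(voice['name'], voice['language'], voice['gender'])
```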
get_voice(voice: str, *, customization_id: Optional[str] = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Get a voice.
Gets information about the specified voice. The information includes the name, language, gender, and other details about the voice. Specify a customization ID to obtain information for a custom model that is defined for the language of the specified voice. To list information about all available voices, use the List voices method. See also: [Listing a specific voice](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-voices#listVoice).
### Important voice updates
The service's voices underwent significant change on 2 December 2020.
* The Arabic, Chinese, Dutch, Australian English, and Korean voices are now neural instead of concatenative.
* The ar-AR_OmarVoice voice is deprecated. Use the ar-MS_OmarVoice voice instead.
* The ar-AR language identifier cannot be used to create a custom model. Use the ar-MS identifier instead.
* The standard concatenative voices for the following languages are now deprecated: Brazilian Portuguese, United Kingdom and United States English, French, German, Italian, Japanese, and Spanish (all dialects).
* The features expressive SSML, voice transformation SSML, and use of the volume attribute of the <prosody> element are deprecated and are not supported with any of the service's neural voices.
* All of the service's voices are now customizable and generally available (GA) for production use.
The deprecated voices and features will continue to function for at least one year but might be removed at a future date. You are encouraged to migrate to the equivalent neural voices at your earliest convenience. For more information about all voice updates, see the [2 December 2020 service update](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-release-notes#December2020) in the release notes.
- Parameters
voice (str) – The voice for which information is to be returned. For more information about specifying a voice, see Important voice updates in the method description.
customization_id (str) – (optional) The customization ID (GUID) of a custom model for which information is to be returned. You must make the request with credentials for the instance of the service that owns the custom model. Omit the parameter to see information about the specified voice with no customization.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a Voice object
synthesize(text: str, *, accept: Optional[str] = None, voice: Optional[str] = None, customization_id: Optional[str] = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Synthesize audio.
Synthesizes text to audio that is spoken in the specified voice. The service bases its understanding of the language for the input text on the specified voice. Use a voice that matches the language of the input text. The method accepts a maximum of 5 KB of input text in the body of the request, and 8 KB for the URL and headers. The 5 KB limit includes any SSML tags that you specify. The service returns the synthesized audio stream as an array of bytes. See also: [The HTTP interface](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-usingHTTP#usingHTTP).
### Audio formats (accept types)
The service can return audio in the following formats (MIME types). Where indicated, you can optionally specify the sampling rate (rate) of the audio. You must specify a sampling rate for the audio/l16 and audio/mulaw formats. A specified sampling rate must lie in the range of 8 kHz to 192 kHz. Some formats restrict the sampling rate to certain values, as noted. For the audio/l16 format, you can optionally specify the endianness (endianness) of the audio: endianness=big-endian or endianness=little-endian.
Use the Accept header or the accept parameter to specify the requested format of the response audio. If you omit an audio format altogether, the service returns the audio in Ogg format with the Opus codec (audio/ogg;codecs=opus). The service always returns single-channel audio.
* audio/basic - The service returns audio with a sampling rate of 8000 Hz.
* audio/flac - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/l16 - You must specify the rate of the audio. You can optionally specify the endianness of the audio. The default endianness is little-endian.
* audio/mp3 - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/mpeg - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/mulaw - You must specify the rate of the audio.
* audio/ogg - The service returns the audio in the vorbis codec. You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/ogg;codecs=opus - You can optionally specify the rate of the audio. Only the following values are valid sampling rates: 48000, 24000, 16000, 12000, or 8000. If you specify a value other than one of these, the service returns an error. The default sampling rate is 48,000 Hz.
* audio/ogg;codecs=vorbis - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/wav - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/webm - The service returns the audio in the opus codec. The service returns audio with a sampling rate of 48,000 Hz.
* audio/webm;codecs=opus - The service returns audio with a sampling rate of 48,000 Hz.
* audio/webm;codecs=vorbis - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
For more information about specifying an audio format, including additional details about some of the formats, see [Audio formats](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-audioFormats#audioFormats).
### Important voice updates
The service's voices underwent significant change on 2 December 2020.
* The Arabic, Chinese, Dutch, Australian English, and Korean voices are now neural instead of concatenative.
* The ar-AR_OmarVoice voice is deprecated. Use the ar-MS_OmarVoice voice instead.
* The ar-AR language identifier cannot be used to create a custom model. Use the ar-MS identifier instead.
* The standard concatenative voices for the following languages are now deprecated: Brazilian Portuguese, United Kingdom and United States English, French, German, Italian, Japanese, and Spanish (all dialects).
* The features expressive SSML, voice transformation SSML, and use of the volume attribute of the <prosody> element are deprecated and are not supported with any of the service's neural voices.
* All of the service's voices are now customizable and generally available (GA) for production use.
The deprecated voices and features will continue to function for at least one year but might be removed at a future date. You are encouraged to migrate to the equivalent neural voices at your earliest convenience. For more information about all voice updates, see the [2 December 2020 service update](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-release-notes#December2020) in the release notes.
### Warning messages
If a request includes invalid query parameters, the service returns a Warnings response header that provides messages about the invalid parameters. The warning includes a descriptive message and a list of invalid argument strings, for example, “Unknown arguments:” or “Unknown url query arguments:” followed by a list of the form “{invalid_arg_1}, {invalid_arg_2}.” The request succeeds despite the warnings.
- Parameters
text (str) – The text to synthesize.
accept (str) – (optional) The requested format (MIME type) of the audio. You can use the Accept header or the accept parameter to specify the audio format. For more information about specifying an audio format, see Audio formats (accept types) in the method description.
voice (str) – (optional) The voice to use for synthesis. For more information about specifying a voice, see Important voice updates in the method description.
customization_id (str) – (optional) The customization ID (GUID) of a custom model to use for the synthesis. If a custom model is specified, it works only if it matches the language of the indicated voice. You must make the request with credentials for the instance of the service that owns the custom model. Omit the parameter to use the specified voice with no customization.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with BinaryIO result
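A sketch of a basic synthesis call, assuming the client from the setup example; the voice and accept values are documented options, while the output file name and the .content attribute on the binary result are illustrative assumptions.

```python
# Synthesize a short phrase and write the returned audio stream to a file.
with open('hello_world.wav', 'wb') as audio_file:
    result = text_to_speech.synthesize(
        'Hello world',
        voice='en-US_AllisonV3Voice',
        accept='audio/wav'  # e.g. 'audio/l16;rate=16000' to request a sampling rate
    ).get_result()
    audio_file.write(result.content)
```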
get_pronunciation(text: str, *, voice: Optional[str] = None, format: Optional[str] = None, customization_id: Optional[str] = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Get pronunciation.
Gets the phonetic pronunciation for the specified word. You can request the pronunciation for a specific format. You can also request the pronunciation for a specific voice to see the default translation for the language of that voice or for a specific custom model to see the translation for that model. See also: [Querying a word from a language](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordsQueryLanguage).
### Important voice updates
The service's voices underwent significant change on 2 December 2020.
* The Arabic, Chinese, Dutch, Australian English, and Korean voices are now neural instead of concatenative.
* The ar-AR_OmarVoice voice is deprecated. Use the ar-MS_OmarVoice voice instead.
* The ar-AR language identifier cannot be used to create a custom model. Use the ar-MS identifier instead.
* The standard concatenative voices for the following languages are now deprecated: Brazilian Portuguese, United Kingdom and United States English, French, German, Italian, Japanese, and Spanish (all dialects).
* The features expressive SSML, voice transformation SSML, and use of the volume attribute of the <prosody> element are deprecated and are not supported with any of the service's neural voices.
* All of the service's voices are now customizable and generally available (GA) for production use.
The deprecated voices and features will continue to function for at least one year but might be removed at a future date. You are encouraged to migrate to the equivalent neural voices at your earliest convenience. For more information about all voice updates, see the [2 December 2020 service update](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-release-notes#December2020) in the release notes.
- Parameters
text (str) – The word for which the pronunciation is requested.
voice (str) – (optional) A voice that specifies the language in which the pronunciation is to be returned. All voices for the same language (for example, en-US) return the same translation. For more information about specifying a voice, see Important voice updates in the method description.
format (str) – (optional) The phoneme format in which to return the pronunciation. The Arabic, Chinese, Dutch, Australian English, and Korean languages support only IPA. Omit the parameter to obtain the pronunciation in the default format.
customization_id (str) – (optional) The customization ID (GUID) of a custom model for which the pronunciation is to be returned. The language of a specified custom model must match the language of the specified voice. If the word is not defined in the specified custom model, the service returns the default translation for the custom model’s language. You must make the request with credentials for the instance of the service that owns the custom model. Omit the parameter to see the translation for the specified voice with no customization.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a Pronunciation object
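A sketch of querying a pronunciation, assuming the client from the setup example; the 'pronunciation' key on the returned dict is an assumption based on the Pronunciation object named above.

```python
# Request the SPR ('ibm') pronunciation of a word for a US English voice.
pronunciation = text_to_speech.get_pronunciation(
    'tomato',
    voice='en-US_AllisonV3Voice',
    format='ibm'  # or 'ipa'
).get_result()
print(pronunciation['pronunciation'])
```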
create_custom_model(name: str, *, language: Optional[str] = None, description: Optional[str] = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Create a custom model.
Creates a new empty custom model. You must specify a name for the new custom model. You can optionally specify the language and a description for the new model. The model is owned by the instance of the service whose credentials are used to create it. See also: [Creating a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customModels#cuModelsCreate).
### Important voice updates
The service's voices underwent significant change on 2 December 2020.
* The Arabic, Chinese, Dutch, Australian English, and Korean voices are now neural instead of concatenative.
* The ar-AR_OmarVoice voice is deprecated. Use the ar-MS_OmarVoice voice instead.
* The ar-AR language identifier cannot be used to create a custom model. Use the ar-MS identifier instead.
* The standard concatenative voices for the following languages are now deprecated: Brazilian Portuguese, United Kingdom and United States English, French, German, Italian, Japanese, and Spanish (all dialects).
* The features expressive SSML, voice transformation SSML, and use of the volume attribute of the <prosody> element are deprecated and are not supported with any of the service's neural voices.
* All of the service's voices are now customizable and generally available (GA) for production use.
The deprecated voices and features will continue to function for at least one year but might be removed at a future date. You are encouraged to migrate to the equivalent neural voices at your earliest convenience. For more information about all voice updates, see the [2 December 2020 service update](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-release-notes#December2020) in the release notes.
- Parameters
name (str) – The name of the new custom model.
language (str) – (optional) The language of the new custom model. You create a custom model for a specific language, not for a specific voice. A custom model can be used with any voice for its specified language. Omit the parameter to use the default language, en-US. Note: The ar-AR language identifier cannot be used to create a custom model. Use the ar-MS identifier instead.
description (str) – (optional) A description of the new custom model. Specifying a description is recommended.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a CustomModel object
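A sketch of creating a model, assuming the client from the setup example; the returned 'customization_id' key follows the CustomModel description later in this module.

```python
# Create an empty US English custom model and keep its ID for later calls.
custom_model = text_to_speech.create_custom_model(
    name='First model',
    language='en-US',
    description='First custom model'
).get_result()
customization_id = custom_model['customization_id']
```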
list_custom_models(*, language: Optional[str] = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

List custom models.
Lists metadata such as the name and description for all custom models that are owned by an instance of the service. Specify a language to list the custom models for that language only. To see the words and prompts in addition to the metadata for a specific custom model, use the Get a custom model method. You must use credentials for the instance of the service that owns a model to list information about it. See also: [Querying all custom models](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customModels#cuModelsQueryAll).
- Parameters
language (str) – (optional) The language for which custom models that are owned by the requesting credentials are to be returned. Omit the parameter to see all custom models that are owned by the requester.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a CustomModels object
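A sketch of listing models, assuming the client from the setup example; the 'customizations' key follows the CustomModels description later in this module.

```python
# List only the US English custom models owned by these credentials.
custom_models = text_to_speech.list_custom_models(language='en-US').get_result()
for model in custom_models['customizations']:
    print(model['customization_id'], model.get('name'))
```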
update_custom_model(customization_id: str, *, name: Optional[str] = None, description: Optional[str] = None, words: Optional[List[ibm_watson.text_to_speech_v1.Word]] = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Update a custom model.
Updates information for the specified custom model. You can update metadata such as the name and description of the model. You can also update the words in the model and their translations. Adding a new translation for a word that already exists in a custom model overwrites the word's existing translation. A custom model can contain no more than 20,000 entries. You must use credentials for the instance of the service that owns a model to update it.
You can define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. Phonetic translations are based on the SSML phoneme format for representing a word. You can specify them in standard International Phonetic Alphabet (IPA) representation
<phoneme alphabet="ipa" ph="təmˈɑto"></phoneme>
or in the proprietary IBM Symbolic Phonetic Representation (SPR)
<phoneme alphabet="ibm" ph="1gAstroEntxrYFXs"></phoneme>
See also:
* [Updating a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customModels#cuModelsUpdate)
* [Adding words to a Japanese custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuJapaneseAdd)
* [Understanding customization](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customIntro#customIntro).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the service that owns the custom model.
name (str) – (optional) A new name for the custom model.
description (str) – (optional) A new description for the custom model.
words (List[Word]) – (optional) An array of Word objects that provides the words and their translations that are to be added or updated for the custom model. Pass an empty array to make no additions or updates.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
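A sketch of an update, assuming the client and `customization_id` from the earlier examples, and assuming that the Word model in this module accepts `word` and `translation` keyword arguments.

```python
from ibm_watson.text_to_speech_v1 import Word

# Rename the model and add or update two word translations.
text_to_speech.update_custom_model(
    customization_id,
    name='First model update',
    description='First custom model update',
    words=[
        Word(word='NCAA', translation='N C double A'),
        Word(word='iPhone', translation='I phone'),
    ],
)
```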
get_custom_model(customization_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Get a custom model.
Gets all information about a specified custom model. In addition to metadata such as the name and description of the custom model, the output includes the words and their translations that are defined for the model, as well as any prompts that are defined for the model. To see just the metadata for a model, use the List custom models method. See also: [Querying a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customModels#cuModelsQuery).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the service that owns the custom model.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a CustomModel object
delete_custom_model(customization_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Delete a custom model.
Deletes the specified custom model. You must use credentials for the instance of the service that owns a model to delete it. See also: [Deleting a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customModels#cuModelsDelete).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the service that owns the custom model.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
add_words(customization_id: str, words: List[ibm_watson.text_to_speech_v1.Word], **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Add custom words.
Adds one or more words and their translations to the specified custom model. Adding a new translation for a word that already exists in a custom model overwrites the word's existing translation. A custom model can contain no more than 20,000 entries. You must use credentials for the instance of the service that owns a model to add words to it.
You can define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. Phonetic translations are based on the SSML phoneme format for representing a word. You can specify them in standard International Phonetic Alphabet (IPA) representation
<phoneme alphabet="ipa" ph="təmˈɑto"></phoneme>
or in the proprietary IBM Symbolic Phonetic Representation (SPR)
<phoneme alphabet="ibm" ph="1gAstroEntxrYFXs"></phoneme>
See also:
* [Adding multiple words to a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordsAdd)
* [Adding words to a Japanese custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuJapaneseAdd)
* [Understanding customization](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customIntro#customIntro).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the service that owns the custom model.
words (List[Word]) – The Add custom words method accepts an array of Word objects. Each object provides a word that is to be added or updated for the custom model and the word’s translation. The List custom words method returns an array of Word objects. Each object shows a word and its translation from the custom model. The words are listed in alphabetical order, with uppercase letters listed before lowercase letters. The array is empty if the custom model contains no words.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
list_words(customization_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

List custom words.
Lists all of the words and their translations for the specified custom model. The output shows the translations as they are defined in the model. You must use credentials for the instance of the service that owns a model to list its words. See also: [Querying all words from a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordsQueryModel).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the service that owns the custom model.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a Words object
add_word(customization_id: str, word: str, translation: str, *, part_of_speech: Optional[str] = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Add a custom word.
Adds a single word and its translation to the specified custom model. Adding a new translation for a word that already exists in a custom model overwrites the word's existing translation. A custom model can contain no more than 20,000 entries. You must use credentials for the instance of the service that owns a model to add a word to it.
You can define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. Phonetic translations are based on the SSML phoneme format for representing a word. You can specify them in standard International Phonetic Alphabet (IPA) representation
<phoneme alphabet="ipa" ph="təmˈɑto"></phoneme>
or in the proprietary IBM Symbolic Phonetic Representation (SPR)
<phoneme alphabet="ibm" ph="1gAstroEntxrYFXs"></phoneme>
See also:
* [Adding a single word to a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordAdd)
* [Adding words to a Japanese custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuJapaneseAdd)
* [Understanding customization](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customIntro#customIntro).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the service that owns the custom model.
word (str) – The word that is to be added or updated for the custom model.
translation (str) – The phonetic or sounds-like translation for the word. A phonetic translation is based on the SSML format for representing the phonetic string of a word either as an IPA translation or as an IBM SPR translation. The Arabic, Chinese, Dutch, Australian English, and Korean languages support only IPA. A sounds-like is one or more words that, when combined, sound like the word.
part_of_speech (str) – (optional) Japanese only. The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-rules#jaNotes).
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
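A sketch of adding one word, assuming the client and `customization_id` from the earlier examples; the translation shown is a sounds-like translation.

```python
# Add a single sounds-like translation to the custom model.
text_to_speech.add_word(
    customization_id,
    word='ACLs',
    translation='ackles',
)
```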
get_word(customization_id: str, word: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Get a custom word.
Gets the translation for a single word from the specified custom model. The output shows the translation as it is defined in the model. You must use credentials for the instance of the service that owns a model to list its words. See also: [Querying a single word from a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordQueryModel).
- Parameters
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a Translation object
delete_word(customization_id: str, word: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Delete a custom word.
Deletes a single word from the specified custom model. You must use credentials for the instance of the service that owns a model to delete its words. See also: [Deleting a word from a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordDelete).
- Parameters
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
list_custom_prompts(customization_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

List custom prompts.
Lists information about all custom prompts that are defined for a custom model. The information includes the prompt ID, prompt text, status, and optional speaker ID for each prompt of the custom model. You must use credentials for the instance of the service that owns the custom model. The same information about all of the prompts for a custom model is also provided by the Get a custom model method. That method provides complete details about a specified custom model, including its language, owner, custom words, and more. Beta: Custom prompts are beta functionality that is supported only for use with US English custom models and voices. See also: [Listing custom prompts](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-custom-prompts#tbe-custom-prompts-list).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the service that owns the custom model.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a Prompts object
add_custom_prompt(customization_id: str, prompt_id: str, metadata: ibm_watson.text_to_speech_v1.PromptMetadata, file: BinaryIO, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Add a custom prompt.
Adds a custom prompt to a custom model. A prompt is defined by the text that is to be spoken, the audio for that text, a unique user-specified ID for the prompt, and an optional speaker ID. The information is used to generate prosodic data that is not visible to the user. This data is used by the service to produce the synthesized audio upon request. You must use credentials for the instance of the service that owns a custom model to add a prompt to it. You can add a maximum of 1000 custom prompts to a single custom model.
Assigning meaningful values to prompt IDs is recommended. For example, use goodbye to identify a prompt that speaks a farewell message. Prompt IDs must be unique within a given custom model. You cannot define two prompts with the same name for the same custom model. If you provide the ID of an existing prompt, the previously uploaded prompt is replaced by the new information. The existing prompt is reprocessed by using the new text and audio and, if provided, the new speaker model, and the prosody data associated with the prompt is updated.
The quality of a prompt is undefined if the language of a prompt does not match the language of its custom model. This is consistent with any text or SSML that is specified for a speech synthesis request. The service makes a best-effort attempt to render the specified text for the prompt; it does not validate that the language of the text matches the language of the model.
Adding a prompt is an asynchronous operation. Although it accepts less audio than speaker enrollment, the service must align the audio with the provided text. The time that it takes to process a prompt depends on the prompt itself. The processing time for a reasonably sized prompt generally matches the length of the audio (for example, it takes 20 seconds to process a 20-second prompt). For shorter prompts, you can wait for a reasonable amount of time and then check the status of the prompt with the Get a custom prompt method. For longer prompts, consider using that method to poll the service every few seconds to determine when the prompt becomes available. No prompt can be used for speech synthesis if it is in the processing or failed state. Only prompts that are in the available state can be used for speech synthesis.
When it processes a request, the service attempts to align the text and the audio that are provided for the prompt. The text that is passed with a prompt must match the spoken audio as closely as possible. Optimally, the text and audio match exactly. The service does its best to align the specified text with the audio, and it can often compensate for mismatches between the two. But if the service cannot effectively align the text and the audio, possibly because the magnitude of mismatches between the two is too great, processing of the prompt fails.
### Evaluating a prompt
Always listen to and evaluate a prompt to determine its quality before using it in production. To evaluate a prompt, include only the single prompt in a speech synthesis request by using the following SSML extension, in this case for a prompt whose ID is goodbye: <ibm:prompt id="goodbye"/>
In some cases, you might need to rerecord and resubmit a prompt as many as five times to address the following possible problems:
* The service might fail to detect a mismatch between the prompt's text and audio. The longer the prompt, the greater the chance for misalignment between its text and audio. Therefore, multiple shorter prompts are preferable to a single long prompt.
* The text of a prompt might include a word that the service does not recognize. In this case, you can create a custom word and pronunciation pair to tell the service how to pronounce the word. You must then re-create the prompt.
* The quality of the input audio might be insufficient or the service's processing of the audio might fail to detect the intended prosody. Submitting new audio for the prompt can correct these issues.
If a prompt that is created without a speaker ID does not adequately reflect the intended prosody, enrolling the speaker and providing a speaker ID for the prompt is one recommended means of potentially improving the quality of the prompt. This is especially important for shorter prompts such as "good-bye" or "thank you," where less audio data makes it more difficult to match the prosody of the speaker.
Beta: Custom prompts are beta functionality that is supported only for use with US English custom models and voices.
See also:
* [Add a custom prompt](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-create#tbe-create-add-prompt)
* [Evaluate a custom prompt](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-create#tbe-create-evaluate-prompt)
* [Rules for creating custom prompts](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-rules#tbe-rules-prompts).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the service that owns the custom model.
prompt_id (str) – The identifier of the prompt that is to be added to the custom model:
* Include a maximum of 49 characters in the ID.
* Include only alphanumeric characters and _ (underscores) in the ID.
* Do not include XML sensitive characters (double quotes, single quotes, ampersands, angle brackets, and slashes) in the ID.
* To add a new prompt, the ID must be unique for the specified custom model. Otherwise, the new information for the prompt overwrites the existing prompt that has that ID.
metadata (PromptMetadata) – Information about the prompt that is to be added to a custom model. The following example of a PromptMetadata object includes both the required prompt text and an optional speaker model ID: { "prompt_text": "Thank you and good-bye!", "speaker_id": "823068b2-ed4e-11ea-b6e0-7b6456aa95cc" }.
file (BinaryIO) – An audio file that speaks the text of the prompt with intonation and prosody that matches how you would like the prompt to be spoken. * The prompt audio must be in WAV format and must have a minimum sampling rate of 16 kHz. The service accepts audio with higher sampling rates. The service transcodes all audio to 16 kHz before processing it. * The length of the prompt audio is limited to 30 seconds.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a Prompt object
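A sketch of adding a prompt, assuming the client and `customization_id` from the earlier examples and the PromptMetadata class documented later in this module; the WAV file path and speaker ID are placeholders.

```python
from ibm_watson.text_to_speech_v1 import PromptMetadata

# Describe the prompt (the speaker_id is optional) and upload its audio.
prompt_metadata = PromptMetadata(
    prompt_text='Thank you and good-bye!',
    speaker_id='823068b2-ed4e-11ea-b6e0-7b6456aa95cc',
)
with open('goodbye.wav', 'rb') as audio_file:
    prompt = text_to_speech.add_custom_prompt(
        customization_id,
        'goodbye',
        prompt_metadata,
        audio_file,
    ).get_result()
print(prompt['status'])  # remains 'processing' until the service finishes analysis
```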
get_custom_prompt(customization_id: str, prompt_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Get a custom prompt.
Gets information about a specified custom prompt for a specified custom model. The information includes the prompt ID, prompt text, status, and optional speaker ID for each prompt of the custom model. You must use credentials for the instance of the service that owns the custom model. Beta: Custom prompts are beta functionality that is supported only for use with US English custom models and voices. See also: [Listing custom prompts](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-custom-prompts#tbe-custom-prompts-list).
- Parameters
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a Prompt object
delete_custom_prompt(customization_id: str, prompt_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Delete a custom prompt.
Deletes an existing custom prompt from a custom model. The service deletes the prompt with the specified ID. You must use credentials for the instance of the service that owns the custom model from which the prompt is to be deleted. Caution: Deleting a custom prompt elicits a 400 response code from synthesis requests that attempt to use the prompt. Make sure that you do not attempt to use a deleted prompt in a production application. Beta: Custom prompts are beta functionality that is supported only for use with US English custom models and voices. See also: [Deleting a custom prompt](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-custom-prompts#tbe-custom-prompts-delete).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the service that owns the custom model.
prompt_id (str) – The identifier (name) of the prompt that is to be deleted.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
list_speaker_models(**kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

List speaker models.
Lists information about all speaker models that are defined for a service instance. The information includes the speaker ID and speaker name of each defined speaker. You must use credentials for the instance of a service to list its speakers. Beta: Speaker models and the custom prompts with which they are used are beta functionality that is supported only for use with US English custom models and voices. See also: [Listing speaker models](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-speaker-models#tbe-speaker-models-list).
- Parameters
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a Speakers object
create_speaker_model(speaker_name: str, audio: BinaryIO, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Create a speaker model.
Creates a new speaker model, which is an optional enrollment token for users who are to add prompts to custom models. A speaker model contains information about a user's voice. The service extracts this information from a WAV audio sample that you pass as the body of the request. Associating a speaker model with a prompt is optional, but the information that is extracted from the speaker model helps the service learn about the speaker's voice. A speaker model can make an appreciable difference in the quality of prompts, especially short prompts with relatively little audio, that are associated with that speaker. A speaker model can help the service produce a prompt with more confidence; the lack of a speaker model can potentially compromise the quality of a prompt.
The gender of the speaker who creates a speaker model does not need to match the gender of a voice that is used with prompts that are associated with that speaker model. For example, a speaker model that is created by a male speaker can be associated with prompts that are spoken by female voices.
You create a speaker model for a given instance of the service. The new speaker model is owned by the service instance whose credentials are used to create it. That same speaker can then be used to create prompts for all custom models within that service instance. No language is associated with a speaker model, but each custom model has a single specified language. You can add prompts only to US English models.
You specify a name for the speaker when you create it. The name must be unique among all speaker names for the owning service instance. To re-create a speaker model for an existing speaker name, you must first delete the existing speaker model that has that name.
Speaker enrollment is a synchronous operation. Although it accepts more audio data than a prompt, the process of adding a speaker is very fast. The service simply extracts information about the speaker's voice from the audio. Unlike prompts, speaker models neither need nor accept a transcription of the audio. When the call returns, the audio is fully processed and the speaker enrollment is complete. The service returns a speaker ID with the request. A speaker ID is a globally unique identifier (GUID) that you use to identify the speaker in subsequent requests to the service.
Beta: Speaker models and the custom prompts with which they are used are beta functionality that is supported only for use with US English custom models and voices.
See also:
* [Create a speaker model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-create#tbe-create-speaker-model)
* [Rules for creating speaker models](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-rules#tbe-rules-speakers).
- Parameters
speaker_name (str) – The name of the speaker that is to be added to the service instance:
* Include a maximum of 49 characters in the name.
* Include only alphanumeric characters and _ (underscores) in the name.
* Do not include XML sensitive characters (double quotes, single quotes, ampersands, angle brackets, and slashes) in the name.
* Do not use the name of an existing speaker that is already defined for the service instance.
audio (BinaryIO) – An enrollment audio file that contains a sample of the speaker’s voice. * The enrollment audio must be in WAV format and must have a minimum sampling rate of 16 kHz. The service accepts audio with higher sampling rates. It transcodes all audio to 16 kHz before processing it. * The length of the enrollment audio is limited to 1 minute. Speaking one or two paragraphs of text that include five to ten sentences is recommended.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a SpeakerModel object
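A sketch of enrolling a speaker, assuming the client from the setup example; the WAV file path is a placeholder, and the 'speaker_id' key on the result is an assumption based on the SpeakerModel object named above.

```python
# Enroll a speaker from a sample of their voice and keep the returned ID.
with open('speaker_enrollment.wav', 'rb') as audio_file:
    speaker = text_to_speech.create_speaker_model(
        'speaker_one',
        audio_file,
    ).get_result()
speaker_id = speaker['speaker_id']
```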
get_speaker_model(speaker_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Get a speaker model.
Gets information about all prompts that are defined by a specified speaker for all custom models that are owned by a service instance. The information is grouped by the customization IDs of the custom models. For each custom model, the information lists information about each prompt that is defined for that custom model by the speaker. You must use credentials for the instance of the service that owns a speaker model to list its prompts. Beta: Speaker models and the custom prompts with which they are used are beta functionality that is supported only for use with US English custom models and voices. See also: [Listing the custom prompts for a speaker model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-speaker-models#tbe-speaker-models-list-prompts).
- Parameters
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse with dict result representing a SpeakerCustomModels object
delete_speaker_model(speaker_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Delete a speaker model.
Deletes an existing speaker model from the service instance. The service deletes the enrolled speaker with the specified speaker ID. You must use credentials for the instance of the service that owns a speaker model to delete the speaker. Any prompts that are associated with the deleted speaker are not affected by the speaker’s deletion. The prosodic data that defines the quality of a prompt is established when the prompt is created. A prompt is static and remains unaffected by deletion of its associated speaker. However, the prompt cannot be resubmitted or updated with its original speaker once that speaker is deleted. Beta: Speaker models and the custom prompts with which they are used are beta functionality that is supported only for use with US English custom models and voices. See also: [Deleting a speaker model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-speaker-models#tbe-speaker-models-delete).
- Parameters
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
delete_user_data(customer_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse

Delete labeled data.
Deletes all data that is associated with a specified customer ID. The method deletes all data for the customer ID, regardless of the method by which the information was added. The method has no effect if no data is associated with the customer ID. You must issue the request with credentials for the same instance of the service that was used to associate the customer ID with the data. You associate a customer ID with data by passing the X-Watson-Metadata header with a request that passes the data. Note: If you delete an instance of the service from the service console, all data associated with that service instance is automatically deleted. This includes all custom models and word/translation pairs, and all data related to speech synthesis requests. See also: [Information security](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-information-security#information-security).
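A sketch of deleting labeled data, assuming the client from the setup example; the customer ID is a placeholder that would previously have been passed on requests via the X-Watson-Metadata header.

```python
# Delete all data that was associated with this customer ID.
text_to_speech.delete_user_data('my_customer_id')
```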
class GetVoiceEnums
Bases: object

Enums for get_voice parameters.

class Voice(value)
The voice for which information is to be returned. For more information about specifying a voice, see Important voice updates in the method description.

AR_AR_OMARVOICE = 'ar-AR_OmarVoice'
AR_MS_OMARVOICE = 'ar-MS_OmarVoice'
DE_DE_BIRGITVOICE = 'de-DE_BirgitVoice'
DE_DE_BIRGITV3VOICE = 'de-DE_BirgitV3Voice'
DE_DE_DIETERVOICE = 'de-DE_DieterVoice'
DE_DE_DIETERV3VOICE = 'de-DE_DieterV3Voice'
DE_DE_ERIKAV3VOICE = 'de-DE_ErikaV3Voice'
EN_AU_CRAIGVOICE = 'en-AU-CraigVoice'
EN_AU_MADISONVOICE = 'en-AU-MadisonVoice'
EN_GB_CHARLOTTEV3VOICE = 'en-GB_CharlotteV3Voice'
EN_GB_JAMESV3VOICE = 'en-GB_JamesV3Voice'
EN_GB_KATEVOICE = 'en-GB_KateVoice'
EN_GB_KATEV3VOICE = 'en-GB_KateV3Voice'
EN_US_ALLISONVOICE = 'en-US_AllisonVoice'
EN_US_ALLISONV3VOICE = 'en-US_AllisonV3Voice'
EN_US_EMILYV3VOICE = 'en-US_EmilyV3Voice'
EN_US_HENRYV3VOICE = 'en-US_HenryV3Voice'
EN_US_KEVINV3VOICE = 'en-US_KevinV3Voice'
EN_US_LISAVOICE = 'en-US_LisaVoice'
EN_US_LISAV3VOICE = 'en-US_LisaV3Voice'
EN_US_MICHAELVOICE = 'en-US_MichaelVoice'
EN_US_MICHAELV3VOICE = 'en-US_MichaelV3Voice'
EN_US_OLIVIAV3VOICE = 'en-US_OliviaV3Voice'
ES_ES_ENRIQUEVOICE = 'es-ES_EnriqueVoice'
ES_ES_ENRIQUEV3VOICE = 'es-ES_EnriqueV3Voice'
ES_ES_LAURAVOICE = 'es-ES_LauraVoice'
ES_ES_LAURAV3VOICE = 'es-ES_LauraV3Voice'
ES_LA_SOFIAVOICE = 'es-LA_SofiaVoice'
ES_LA_SOFIAV3VOICE = 'es-LA_SofiaV3Voice'
ES_US_SOFIAVOICE = 'es-US_SofiaVoice'
ES_US_SOFIAV3VOICE = 'es-US_SofiaV3Voice'
FR_CA_LOUISEV3VOICE = 'fr-CA_LouiseV3Voice'
FR_FR_NICOLASV3VOICE = 'fr-FR_NicolasV3Voice'
FR_FR_RENEEVOICE = 'fr-FR_ReneeVoice'
FR_FR_RENEEV3VOICE = 'fr-FR_ReneeV3Voice'
IT_IT_FRANCESCAVOICE = 'it-IT_FrancescaVoice'
IT_IT_FRANCESCAV3VOICE = 'it-IT_FrancescaV3Voice'
JA_JP_EMIVOICE = 'ja-JP_EmiVoice'
JA_JP_EMIV3VOICE = 'ja-JP_EmiV3Voice'
KO_KR_HYUNJUNVOICE = 'ko-KR_HyunjunVoice'
KO_KR_SIWOOVOICE = 'ko-KR_SiWooVoice'
KO_KR_YOUNGMIVOICE = 'ko-KR_YoungmiVoice'
KO_KR_YUNAVOICE = 'ko-KR_YunaVoice'
NL_NL_EMMAVOICE = 'nl-NL_EmmaVoice'
NL_NL_LIAMVOICE = 'nl-NL_LiamVoice'
PT_BR_ISABELAVOICE = 'pt-BR_IsabelaVoice'
PT_BR_ISABELAV3VOICE = 'pt-BR_IsabelaV3Voice'
ZH_CN_LINAVOICE = 'zh-CN_LiNaVoice'
ZH_CN_WANGWEIVOICE = 'zh-CN_WangWeiVoice'
ZH_CN_ZHANGJINGVOICE = 'zh-CN_ZhangJingVoice'
class SynthesizeEnums
Bases: object

Enums for synthesize parameters.

class Accept(value)
The requested format (MIME type) of the audio. You can use the Accept header or the accept parameter to specify the audio format. For more information about specifying an audio format, see Audio formats (accept types) in the method description.

AUDIO_BASIC = 'audio/basic'
AUDIO_FLAC = 'audio/flac'
AUDIO_L16 = 'audio/l16'
AUDIO_OGG = 'audio/ogg'
AUDIO_OGG_CODECS_OPUS = 'audio/ogg;codecs=opus'
AUDIO_OGG_CODECS_VORBIS = 'audio/ogg;codecs=vorbis'
AUDIO_MP3 = 'audio/mp3'
AUDIO_MPEG = 'audio/mpeg'
AUDIO_MULAW = 'audio/mulaw'
AUDIO_WAV = 'audio/wav'
AUDIO_WEBM = 'audio/webm'
AUDIO_WEBM_CODECS_OPUS = 'audio/webm;codecs=opus'
AUDIO_WEBM_CODECS_VORBIS = 'audio/webm;codecs=vorbis'
class Voice(value)
The voice to use for synthesis. For more information about specifying a voice, see Important voice updates in the method description.

AR_AR_OMARVOICE = 'ar-AR_OmarVoice'
AR_MS_OMARVOICE = 'ar-MS_OmarVoice'
DE_DE_BIRGITVOICE = 'de-DE_BirgitVoice'
DE_DE_BIRGITV3VOICE = 'de-DE_BirgitV3Voice'
DE_DE_DIETERVOICE = 'de-DE_DieterVoice'
DE_DE_DIETERV3VOICE = 'de-DE_DieterV3Voice'
DE_DE_ERIKAV3VOICE = 'de-DE_ErikaV3Voice'
EN_AU_CRAIGVOICE = 'en-AU-CraigVoice'
EN_AU_MADISONVOICE = 'en-AU-MadisonVoice'
EN_GB_CHARLOTTEV3VOICE = 'en-GB_CharlotteV3Voice'
EN_GB_JAMESV3VOICE = 'en-GB_JamesV3Voice'
EN_GB_KATEVOICE = 'en-GB_KateVoice'
EN_GB_KATEV3VOICE = 'en-GB_KateV3Voice'
EN_US_ALLISONVOICE = 'en-US_AllisonVoice'
EN_US_ALLISONV3VOICE = 'en-US_AllisonV3Voice'
EN_US_EMILYV3VOICE = 'en-US_EmilyV3Voice'
EN_US_HENRYV3VOICE = 'en-US_HenryV3Voice'
EN_US_KEVINV3VOICE = 'en-US_KevinV3Voice'
EN_US_LISAVOICE = 'en-US_LisaVoice'
EN_US_LISAV3VOICE = 'en-US_LisaV3Voice'
EN_US_MICHAELVOICE = 'en-US_MichaelVoice'
EN_US_MICHAELV3VOICE = 'en-US_MichaelV3Voice'
EN_US_OLIVIAV3VOICE = 'en-US_OliviaV3Voice'
ES_ES_ENRIQUEVOICE = 'es-ES_EnriqueVoice'
ES_ES_ENRIQUEV3VOICE = 'es-ES_EnriqueV3Voice'
ES_ES_LAURAVOICE = 'es-ES_LauraVoice'
ES_ES_LAURAV3VOICE = 'es-ES_LauraV3Voice'
ES_LA_SOFIAVOICE = 'es-LA_SofiaVoice'
ES_LA_SOFIAV3VOICE = 'es-LA_SofiaV3Voice'
ES_US_SOFIAVOICE = 'es-US_SofiaVoice'
ES_US_SOFIAV3VOICE = 'es-US_SofiaV3Voice'
FR_CA_LOUISEV3VOICE = 'fr-CA_LouiseV3Voice'
FR_FR_NICOLASV3VOICE = 'fr-FR_NicolasV3Voice'
FR_FR_RENEEVOICE = 'fr-FR_ReneeVoice'
FR_FR_RENEEV3VOICE = 'fr-FR_ReneeV3Voice'
IT_IT_FRANCESCAVOICE = 'it-IT_FrancescaVoice'
IT_IT_FRANCESCAV3VOICE = 'it-IT_FrancescaV3Voice'
JA_JP_EMIVOICE = 'ja-JP_EmiVoice'
JA_JP_EMIV3VOICE = 'ja-JP_EmiV3Voice'
KO_KR_HYUNJUNVOICE = 'ko-KR_HyunjunVoice'
KO_KR_SIWOOVOICE = 'ko-KR_SiWooVoice'
KO_KR_YOUNGMIVOICE = 'ko-KR_YoungmiVoice'
KO_KR_YUNAVOICE = 'ko-KR_YunaVoice'
NL_NL_EMMAVOICE = 'nl-NL_EmmaVoice'
NL_NL_LIAMVOICE = 'nl-NL_LiamVoice'
PT_BR_ISABELAVOICE = 'pt-BR_IsabelaVoice'
PT_BR_ISABELAV3VOICE = 'pt-BR_IsabelaV3Voice'
ZH_CN_LINAVOICE = 'zh-CN_LiNaVoice'
ZH_CN_WANGWEIVOICE = 'zh-CN_WangWeiVoice'
ZH_CN_ZHANGJINGVOICE = 'zh-CN_ZhangJingVoice'
class GetPronunciationEnums
Bases: object

Enums for get_pronunciation parameters.

class Voice(value)
A voice that specifies the language in which the pronunciation is to be returned. All voices for the same language (for example, en-US) return the same translation. For more information about specifying a voice, see Important voice updates in the method description.

AR_AR_OMARVOICE = 'ar-AR_OmarVoice'
AR_MS_OMARVOICE = 'ar-MS_OmarVoice'
DE_DE_BIRGITVOICE = 'de-DE_BirgitVoice'
DE_DE_BIRGITV3VOICE = 'de-DE_BirgitV3Voice'
DE_DE_DIETERVOICE = 'de-DE_DieterVoice'
DE_DE_DIETERV3VOICE = 'de-DE_DieterV3Voice'
DE_DE_ERIKAV3VOICE = 'de-DE_ErikaV3Voice'
EN_AU_CRAIGVOICE = 'en-AU-CraigVoice'
EN_AU_MADISONVOICE = 'en-AU-MadisonVoice'
EN_GB_CHARLOTTEV3VOICE = 'en-GB_CharlotteV3Voice'
EN_GB_JAMESV3VOICE = 'en-GB_JamesV3Voice'
EN_GB_KATEVOICE = 'en-GB_KateVoice'
EN_GB_KATEV3VOICE = 'en-GB_KateV3Voice'
EN_US_ALLISONVOICE = 'en-US_AllisonVoice'
EN_US_ALLISONV3VOICE = 'en-US_AllisonV3Voice'
EN_US_EMILYV3VOICE = 'en-US_EmilyV3Voice'
EN_US_HENRYV3VOICE = 'en-US_HenryV3Voice'
EN_US_KEVINV3VOICE = 'en-US_KevinV3Voice'
EN_US_LISAVOICE = 'en-US_LisaVoice'
EN_US_LISAV3VOICE = 'en-US_LisaV3Voice'
EN_US_MICHAELVOICE = 'en-US_MichaelVoice'
EN_US_MICHAELV3VOICE = 'en-US_MichaelV3Voice'
EN_US_OLIVIAV3VOICE = 'en-US_OliviaV3Voice'
ES_ES_ENRIQUEVOICE = 'es-ES_EnriqueVoice'
ES_ES_ENRIQUEV3VOICE = 'es-ES_EnriqueV3Voice'
ES_ES_LAURAVOICE = 'es-ES_LauraVoice'
ES_ES_LAURAV3VOICE = 'es-ES_LauraV3Voice'
ES_LA_SOFIAVOICE = 'es-LA_SofiaVoice'
ES_LA_SOFIAV3VOICE = 'es-LA_SofiaV3Voice'
ES_US_SOFIAVOICE = 'es-US_SofiaVoice'
ES_US_SOFIAV3VOICE = 'es-US_SofiaV3Voice'
FR_CA_LOUISEV3VOICE = 'fr-CA_LouiseV3Voice'
FR_FR_NICOLASV3VOICE = 'fr-FR_NicolasV3Voice'
FR_FR_RENEEVOICE = 'fr-FR_ReneeVoice'
FR_FR_RENEEV3VOICE = 'fr-FR_ReneeV3Voice'
IT_IT_FRANCESCAVOICE = 'it-IT_FrancescaVoice'
IT_IT_FRANCESCAV3VOICE = 'it-IT_FrancescaV3Voice'
JA_JP_EMIVOICE = 'ja-JP_EmiVoice'
JA_JP_EMIV3VOICE = 'ja-JP_EmiV3Voice'
KO_KR_HYUNJUNVOICE = 'ko-KR_HyunjunVoice'
KO_KR_SIWOOVOICE = 'ko-KR_SiWooVoice'
KO_KR_YOUNGMIVOICE = 'ko-KR_YoungmiVoice'
KO_KR_YUNAVOICE = 'ko-KR_YunaVoice'
NL_NL_EMMAVOICE = 'nl-NL_EmmaVoice'
NL_NL_LIAMVOICE = 'nl-NL_LiamVoice'
PT_BR_ISABELAVOICE = 'pt-BR_IsabelaVoice'
PT_BR_ISABELAV3VOICE = 'pt-BR_IsabelaV3Voice'
ZH_CN_LINAVOICE = 'zh-CN_LiNaVoice'
ZH_CN_WANGWEIVOICE = 'zh-CN_WangWeiVoice'
ZH_CN_ZHANGJINGVOICE = 'zh-CN_ZhangJingVoice'
class ListCustomModelsEnums
Bases: object

Enums for list_custom_models parameters.

class Language(value)
The language for which custom models that are owned by the requesting credentials are to be returned. Omit the parameter to see all custom models that are owned by the requester.

AR_MS = 'ar-MS'
DE_DE = 'de-DE'
EN_AU = 'en-AU'
EN_GB = 'en-GB'
EN_US = 'en-US'
ES_ES = 'es-ES'
ES_LA = 'es-LA'
ES_US = 'es-US'
FR_CA = 'fr-CA'
FR_FR = 'fr-FR'
IT_IT = 'it-IT'
JA_JP = 'ja-JP'
KO_KR = 'ko-KR'
NL_NL = 'nl-NL'
PT_BR = 'pt-BR'
ZH_CN = 'zh-CN'
class CustomModel(customization_id: str, *, name: Optional[str] = None, language: Optional[str] = None, owner: Optional[str] = None, created: Optional[str] = None, last_modified: Optional[str] = None, description: Optional[str] = None, words: Optional[List[ibm_watson.text_to_speech_v1.Word]] = None, prompts: Optional[List[ibm_watson.text_to_speech_v1.Prompt]] = None)
Bases: object

Information about an existing custom model.
- Attr str customization_id
The customization ID (GUID) of the custom model. The Create a custom model method returns only this field. It does not return the other fields of this object.
- Attr str name
(optional) The name of the custom model.
- Attr str language
(optional) The language identifier of the custom model (for example, en-US).
- Attr str owner
(optional) The GUID of the credentials for the instance of the service that owns the custom model.
- Attr str created
(optional) The date and time in Coordinated Universal Time (UTC) at which the custom model was created. The value is provided in full ISO 8601 format (YYYY-MM-DDThh:mm:ss.sTZD).
- Attr str last_modified
(optional) The date and time in Coordinated Universal Time (UTC) at which the custom model was last modified. The created and updated fields are equal when a model is first added but has yet to be updated. The value is provided in full ISO 8601 format (YYYY-MM-DDThh:mm:ss.sTZD).
- Attr str description
(optional) The description of the custom model.
- Attr List[Word] words
(optional) An array of Word objects that lists the words and their translations from the custom model. The words are listed in alphabetical order, with uppercase letters listed before lowercase letters. The array is empty if no words are defined for the custom model. This field is returned only by the Get a custom model method.
- Attr List[Prompt] prompts
(optional) An array of Prompt objects that provides information about the prompts that are defined for the specified custom model. The array is empty if no prompts are defined for the custom model. This field is returned only by the Get a custom model method.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.CustomModel[source]¶ Initialize a CustomModel object from a json dictionary.
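The Get a custom model method returns a plain dictionary via DetailedResponse.get_result(); from_dict can wrap that dictionary in a CustomModel object. A minimal sketch, assuming placeholder credentials and an illustrative model name:

    from ibm_watson import TextToSpeechV1
    from ibm_watson.text_to_speech_v1 import CustomModel
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    tts = TextToSpeechV1(authenticator=IAMAuthenticator('{apikey}'))  # placeholder credentials
    tts.set_service_url('{url}')

    # Create a model, then fetch it; only Get a custom model returns the words.
    created = tts.create_custom_model(
        name='Example model', language='en-US',
        description='Illustrative model').get_result()
    model = CustomModel.from_dict(
        tts.get_custom_model(created['customization_id']).get_result())
    print(model.customization_id, model.language, len(model.words or []))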
-
class
CustomModels
(customizations: List[ibm_watson.text_to_speech_v1.CustomModel])[source]¶ Bases:
object
Information about existing custom models.
- Attr List[CustomModel] customizations
An array of CustomModel objects that provides information about each available custom model. The array is empty if the requesting credentials own no custom models (if no language is specified) or own no custom models for the specified language.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.CustomModels[source]¶ Initialize a CustomModels object from a json dictionary.
-
class
Prompt
(prompt: str, prompt_id: str, status: str, *, error: Optional[str] = None, speaker_id: Optional[str] = None)[source]¶ Bases:
object
Information about a custom prompt.
- Attr str prompt
The user-specified text of the prompt.
- Attr str prompt_id
The user-specified identifier (name) of the prompt.
- Attr str status
The status of the prompt: * processing: The service received the request to add the prompt and is analyzing the validity of the prompt. * available: The service successfully validated the prompt, which is now ready for use in a speech synthesis request. * failed: The service’s validation of the prompt failed. The status of the prompt includes an error field that describes the reason for the failure.
- Attr str error
(optional) If the status of the prompt is failed, an error message that describes the reason for the failure. The field is omitted if no error occurred.
- Attr str speaker_id
(optional) The speaker ID (GUID) of the speaker for which the prompt was defined. The field is omitted if no speaker ID was specified.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.Prompt[source]¶ Initialize a Prompt object from a json dictionary.
-
class
PromptMetadata
(prompt_text: str, *, speaker_id: Optional[str] = None)[source]¶ Bases:
object
Information about the prompt that is to be added to a custom model. The following example of a PromptMetadata object includes both the required prompt text and an optional speaker model ID: { "prompt_text": "Thank you and good-bye!", "speaker_id": "823068b2-ed4e-11ea-b6e0-7b6456aa95cc" }.
- Attr str prompt_text
The required written text of the spoken prompt. The length of a prompt’s text is limited to a few sentences. Speaking one or two sentences of text is the recommended limit. A prompt cannot contain more than 1000 characters of text. Escape any XML control characters (double quotes, single quotes, ampersands, angle brackets, and slashes) that appear in the text of the prompt.
- Attr str speaker_id
(optional) The optional speaker ID (GUID) of a previously defined speaker model that is to be associated with the prompt.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.PromptMetadata[source]¶ Initialize a PromptMetadata object from a json dictionary.
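A hedged sketch of passing a PromptMetadata object to the Add custom prompt method; the credentials, customization ID, and 'goodbye.wav' audio file are placeholders, and the speaker ID is the example value from the description above:

    from ibm_watson import TextToSpeechV1
    from ibm_watson.text_to_speech_v1 import PromptMetadata
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    tts = TextToSpeechV1(authenticator=IAMAuthenticator('{apikey}'))  # placeholder credentials
    tts.set_service_url('{url}')

    # 'goodbye.wav', '{customization_id}', and the speaker ID are placeholders.
    metadata = PromptMetadata(
        prompt_text='Thank you and good-bye!',
        speaker_id='823068b2-ed4e-11ea-b6e0-7b6456aa95cc')
    with open('goodbye.wav', 'rb') as audio_file:
        prompt = tts.add_custom_prompt(
            customization_id='{customization_id}',
            prompt_id='goodbye',
            metadata=metadata,
            file=audio_file).get_result()
    print(prompt['status'])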
-
class
Prompts
(prompts: List[ibm_watson.text_to_speech_v1.Prompt])[source]¶ Bases:
object
Information about the custom prompts that are defined for a custom model.
- Attr List[Prompt] prompts
An array of Prompt objects that provides information about the prompts that are defined for the specified custom model. The array is empty if no prompts are defined for the custom model.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.Prompts[source]¶ Initialize a Prompts object from a json dictionary.
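A minimal sketch of listing the prompts of a custom model and checking the status of each one; the credentials and customization ID are placeholders:

    from ibm_watson import TextToSpeechV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    tts = TextToSpeechV1(authenticator=IAMAuthenticator('{apikey}'))  # placeholder credentials
    tts.set_service_url('{url}')

    # '{customization_id}' is a placeholder for a US English custom model ID.
    prompts = tts.list_custom_prompts('{customization_id}').get_result()['prompts']
    for p in prompts:
        # A failed prompt carries an 'error' field that describes the problem.
        print(p['prompt_id'], p['status'], p.get('error', ''))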
-
class
Pronunciation
(pronunciation: str)[source]¶ Bases:
object
The pronunciation of the specified text.
- Attr str pronunciation
The pronunciation of the specified text in the requested voice and format. If a custom model is specified, the pronunciation also reflects that custom model.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.Pronunciation[source]¶ Initialize a Pronunciation object from a json dictionary.
-
class
Speaker
(speaker_id: str, name: str)[source]¶ Bases:
object
Information about a speaker model.
- Attr str speaker_id
The speaker ID (GUID) of the speaker.
- Attr str name
The user-defined name of the speaker.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.Speaker[source]¶ Initialize a Speaker object from a json dictionary.
-
class
SpeakerCustomModel
(customization_id: str, prompts: List[ibm_watson.text_to_speech_v1.SpeakerPrompt])[source]¶ Bases:
object
A custom model for which the speaker has defined prompts.
- Attr str customization_id
The customization ID (GUID) of a custom model for which the speaker has defined one or more prompts.
- Attr List[SpeakerPrompt] prompts
An array of SpeakerPrompt objects that provides information about each prompt that the user has defined for the custom model.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.SpeakerCustomModel[source]¶ Initialize a SpeakerCustomModel object from a json dictionary.
-
class
SpeakerCustomModels
(customizations: List[ibm_watson.text_to_speech_v1.SpeakerCustomModel])[source]¶ Bases:
object
Custom models for which the speaker has defined prompts.
- Attr List[SpeakerCustomModel] customizations
An array of SpeakerCustomModel objects. Each object provides information about the prompts that are defined for a specified speaker in the custom models that are owned by a specified service instance. The array is empty if no prompts are defined for the speaker.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.SpeakerCustomModels[source]¶ Initialize a SpeakerCustomModels object from a json dictionary.
-
class
SpeakerModel
(speaker_id: str)[source]¶ Bases:
object
The speaker ID of the speaker model.
- Attr str speaker_id
The speaker ID (GUID) of the speaker model.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.SpeakerModel[source]¶ Initialize a SpeakerModel object from a json dictionary.
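A minimal sketch of creating a speaker model with the Create speaker model method; the credentials and the 'speaker_sample.wav' audio file are placeholders:

    from ibm_watson import TextToSpeechV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    tts = TextToSpeechV1(authenticator=IAMAuthenticator('{apikey}'))  # placeholder credentials
    tts.set_service_url('{url}')

    # 'speaker_sample.wav' is a placeholder for an audio sample of the speaker.
    with open('speaker_sample.wav', 'rb') as audio_file:
        speaker = tts.create_speaker_model('Example speaker', audio_file).get_result()
    # Only the speaker ID is returned; keep it for later prompt definitions.
    print(speaker['speaker_id'])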
-
class
SpeakerPrompt
(prompt: str, prompt_id: str, status: str, *, error: Optional[str] = None)[source]¶ Bases:
object
A prompt that a speaker has defined for a custom model.
- Attr str prompt
The user-specified text of the prompt.
- Attr str prompt_id
The user-specified identifier (name) of the prompt.
- Attr str status
The status of the prompt: * processing: The service received the request to add the prompt and is analyzing the validity of the prompt. * available: The service successfully validated the prompt, which is now ready for use in a speech synthesis request. * failed: The service’s validation of the prompt failed. The status of the prompt includes an error field that describes the reason for the failure.
- Attr str error
(optional) If the status of the prompt is failed, an error message that describes the reason for the failure. The field is omitted if no error occurred.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.SpeakerPrompt[source]¶ Initialize a SpeakerPrompt object from a json dictionary.
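A minimal sketch of retrieving a speaker model and walking its SpeakerCustomModel and SpeakerPrompt entries; the credentials and speaker ID are placeholders:

    from ibm_watson import TextToSpeechV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    tts = TextToSpeechV1(authenticator=IAMAuthenticator('{apikey}'))  # placeholder credentials
    tts.set_service_url('{url}')

    # '{speaker_id}' is a placeholder for an existing speaker model GUID.
    result = tts.get_speaker_model('{speaker_id}').get_result()
    for custom_model in result['customizations']:
        for p in custom_model['prompts']:
            print(custom_model['customization_id'], p['prompt_id'], p['status'])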
-
class
Speakers
(speakers: List[ibm_watson.text_to_speech_v1.Speaker])[source]¶ Bases:
object
Information about all speaker models for the service instance.
- Attr List[Speaker] speakers
An array of Speaker objects that provides information about the speakers for the service instance. The array is empty if the service instance has no speakers.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.Speakers[source]¶ Initialize a Speakers object from a json dictionary.
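A minimal sketch of listing all speaker models defined for a service instance; the credentials are placeholders:

    from ibm_watson import TextToSpeechV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    tts = TextToSpeechV1(authenticator=IAMAuthenticator('{apikey}'))  # placeholder credentials
    tts.set_service_url('{url}')

    # List every speaker model defined for the service instance.
    speakers = tts.list_speaker_models().get_result()['speakers']
    for s in speakers:
        print(s['speaker_id'], s['name'])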
-
class
SupportedFeatures
(custom_pronunciation: bool, voice_transformation: bool)[source]¶ Bases:
object
Additional service features that are supported with the voice.
- Attr bool custom_pronunciation
If true, the voice can be customized; if false, the voice cannot be customized. (Same as customizable.)
- Attr bool voice_transformation
If true, the voice can be transformed by using the SSML <voice-transformation> element; if false, the voice cannot be transformed. The feature was available only for the now-deprecated standard voices. You cannot use the feature with neural voices.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.SupportedFeatures[source]¶ Initialize a SupportedFeatures object from a json dictionary.
-
class
Translation
(translation: str, *, part_of_speech: Optional[str] = None)[source]¶ Bases:
object
Information about the translation for the specified text.
- Attr str translation
The phonetic or sounds-like translation for the word. A phonetic translation is based on the SSML format for representing the phonetic string of a word either as an IPA translation or as an IBM SPR translation. The Arabic, Chinese, Dutch, Australian English, and Korean languages support only IPA. A sounds-like translation consists of one or more words that, when combined, sound like the word.
- Attr str part_of_speech
(optional) Japanese only. The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-rules#jaNotes).
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.Translation[source]¶ Initialize a Translation object from a json dictionary.
-
class
PartOfSpeechEnum
(value)[source]¶ -
Japanese only. The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-rules#jaNotes).
-
DOSI
= 'Dosi'¶
-
FUKU
= 'Fuku'¶
-
GOBI
= 'Gobi'¶
-
HOKA
= 'Hoka'¶
-
JODO
= 'Jodo'¶
-
JOSI
= 'Josi'¶
-
KATO
= 'Kato'¶
-
KEDO
= 'Kedo'¶
-
KEYO
= 'Keyo'¶
-
KIGO
= 'Kigo'¶
-
KOYU
= 'Koyu'¶
-
MESI
= 'Mesi'¶
-
RETA
= 'Reta'¶
-
STBI
= 'Stbi'¶
-
STTO
= 'Stto'¶
-
STZO
= 'Stzo'¶
-
SUJI
= 'Suji'¶
-
-
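A hedged sketch of adding sounds-like and phonetic translations with the Add a custom word method; the credentials and customization ID are placeholders, and the IPA string is illustrative only. For Japanese models, the same call can also pass part_of_speech set to one of the values above:

    from ibm_watson import TextToSpeechV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    tts = TextToSpeechV1(authenticator=IAMAuthenticator('{apikey}'))  # placeholder credentials
    tts.set_service_url('{url}')

    # A sounds-like translation for an en-US custom model
    # ('{customization_id}' is a placeholder).
    tts.add_word('{customization_id}', word='IEEE', translation='I triple E')

    # A phonetic translation uses the SSML phoneme format, for example IPA
    # (the IPA string below is illustrative).
    tts.add_word('{customization_id}', word='tomato',
                 translation='<phoneme alphabet="ipa" ph="təmˈɑto"></phoneme>')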
class
Voice
(url: str, gender: str, name: str, language: str, description: str, customizable: bool, supported_features: ibm_watson.text_to_speech_v1.SupportedFeatures, *, customization: Optional[ibm_watson.text_to_speech_v1.CustomModel] = None)[source]¶ Bases:
object
Information about an available voice.
- Attr str url
The URI of the voice.
- Attr str gender
The gender of the voice: male or female.
- Attr str name
The name of the voice. Use this as the voice identifier in all requests.
- Attr str language
The language and region of the voice (for example, en-US).
- Attr str description
A textual description of the voice.
- Attr bool customizable
If true, the voice can be customized; if false, the voice cannot be customized. (Same as custom_pronunciation; maintained for backward compatibility.)
- Attr SupportedFeatures supported_features
Additional service features that are supported with the voice.
- Attr CustomModel customization
(optional) Returns information about a specified custom model. This field is returned only by the Get a voice method and only when you specify the customization ID of a custom model.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.Voice[source]¶ Initialize a Voice object from a json dictionary.
-
class
Voices
(voices: List[ibm_watson.text_to_speech_v1.Voice])[source]¶ Bases:
object
Information about all available voices.
- Attr List[Voice] voices
A list of available voices.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.Voices[source]¶ Initialize a Voices object from a json dictionary.
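A minimal sketch of listing the available voices and reading each voice's supported features; the credentials are placeholders:

    from ibm_watson import TextToSpeechV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    tts = TextToSpeechV1(authenticator=IAMAuthenticator('{apikey}'))  # placeholder credentials
    tts.set_service_url('{url}')

    # Never rely on a fixed ordering of the returned voices.
    voices = tts.list_voices().get_result()['voices']
    for v in voices:
        features = v['supported_features']
        print(v['name'], v['language'], features['custom_pronunciation'])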
-
class
Word
(word: str, translation: str, *, part_of_speech: Optional[str] = None)[source]¶ Bases:
object
Information about a word for the custom model.
- Attr str word
The word for the custom model. The maximum length of a word is 49 characters.
- Attr str translation
The phonetic or sounds-like translation for the word. A phonetic translation is based on the SSML format for representing the phonetic string of a word either as an IPA or IBM SPR translation. The Arabic, Chinese, Dutch, Australian English, and Korean languages support only IPA. A sounds-like translation consists of one or more words that, when combined, sound like the word. The maximum length of a translation is 499 characters.
- Attr str part_of_speech
(optional) Japanese only. The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-rules#jaNotes).
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.Word[source]¶ Initialize a Word object from a json dictionary.
-
class
PartOfSpeechEnum
(value)[source]¶ -
Japanese only. The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-rules#jaNotes).
-
DOSI
= 'Dosi'¶
-
FUKU
= 'Fuku'¶
-
GOBI
= 'Gobi'¶
-
HOKA
= 'Hoka'¶
-
JODO
= 'Jodo'¶
-
JOSI
= 'Josi'¶
-
KATO
= 'Kato'¶
-
KEDO
= 'Kedo'¶
-
KEYO
= 'Keyo'¶
-
KIGO
= 'Kigo'¶
-
KOYU
= 'Koyu'¶
-
MESI
= 'Mesi'¶
-
RETA
= 'Reta'¶
-
STBI
= 'Stbi'¶
-
STTO
= 'Stto'¶
-
STZO
= 'Stzo'¶
-
SUJI
= 'Suji'¶
-
-
class
Words
(words: List[ibm_watson.text_to_speech_v1.Word])[source]¶ Bases:
object
For the Add custom words method, one or more words that are to be added or updated for the custom model and the translation for each specified word. For the List custom words method, the words and their translations from the custom model.
- Attr List[Word] words
The Add custom words method accepts an array of Word objects. Each object provides a word that is to be added or updated for the custom model and the word’s translation. The List custom words method returns an array of Word objects. Each object shows a word and its translation from the custom model. The words are listed in alphabetical order, with uppercase letters listed before lowercase letters. The array is empty if the custom model contains no words.
-
classmethod
from_dict
(_dict: Dict) → ibm_watson.text_to_speech_v1.Words[source]¶ Initialize a Words object from a json dictionary.
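A hedged sketch of adding several words at once with the Add custom words method by passing Word objects; the credentials and customization ID are placeholders:

    from ibm_watson import TextToSpeechV1
    from ibm_watson.text_to_speech_v1 import Word
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    tts = TextToSpeechV1(authenticator=IAMAuthenticator('{apikey}'))  # placeholder credentials
    tts.set_service_url('{url}')

    # '{customization_id}' is a placeholder. Each Word pairs a word with its
    # sounds-like or phonetic translation; part_of_speech applies to Japanese only.
    words = [
        Word(word='IEEE', translation='I triple E'),
        Word(word='NCAA', translation='N C double A'),
    ]
    tts.add_words('{customization_id}', words)

    # List custom words returns the same structure back, in alphabetical order.
    print(tts.list_words('{customization_id}').get_result()['words'])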