ibm_watson.text_to_speech_v1 module
The IBM® Text to Speech service provides APIs that use IBM's speech-synthesis capabilities to synthesize text into natural-sounding speech in a variety of languages, dialects, and voices. The service supports at least one male or female voice, sometimes both, for each language. The audio is streamed back to the client with minimal delay.

For speech synthesis, the service supports a synchronous HTTP Representational State Transfer (REST) interface. It also supports a WebSocket interface that accepts both plain text and SSML input, including the SSML <mark> element and word timings. SSML is an XML-based markup language that provides text annotation for speech-synthesis applications.

The service also offers a customization interface. You can use the interface to define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. A phonetic translation is based on the SSML phoneme format for representing a word. You can specify a phonetic translation in standard International Phonetic Alphabet (IPA) representation or in the proprietary IBM Symbolic Phonetic Representation (SPR).
class TextToSpeechV1(authenticator=None)
Bases: ibm_cloud_sdk_core.base_service.BaseService
The Text to Speech V1 service.
default_service_url = 'https://stream.watsonplatform.net/text-to-speech/api'
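A minimal sketch of constructing the client with IAM authentication; the API key is a placeholder, and the service URL should be the one for your own service instance:

```python
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials: replace the API key and URL with those of your instance.
authenticator = IAMAuthenticator('{api_key}')
text_to_speech = TextToSpeechV1(authenticator=authenticator)
text_to_speech.set_service_url('https://stream.watsonplatform.net/text-to-speech/api')
```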
list_voices(**kwargs)
List voices.
Lists all voices available for use with the service. The information includes the name, language, gender, and other details about the voice. To see information about a specific voice, use the Get a voice method. See also: [Listing all available voices](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-voices#listVoices).
- Parameters
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
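A short usage sketch, assuming the text_to_speech client constructed above; the parsed result is read from the DetailedResponse with get_result():

```python
import json

# Retrieve all available voices and print the JSON result.
voices = text_to_speech.list_voices().get_result()
print(json.dumps(voices, indent=2))
```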
get_voice(voice, *, customization_id=None, **kwargs)
Get a voice.
Gets information about the specified voice. The information includes the name, language, gender, and other details about the voice. Specify a customization ID to obtain information for a custom voice model that is defined for the language of the specified voice. To list information about all available voices, use the List voices method. See also: [Listing a specific voice](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-voices#listVoice).
- Parameters
voice (str) – The voice for which information is to be returned.
customization_id (str) – (optional) The customization ID (GUID) of a custom voice model for which information is to be returned. You must make the request with credentials for the instance of the service that owns the custom model. Omit the parameter to see information about the specified voice with no customization.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
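A usage sketch, assuming the text_to_speech client from above; the voice name is one of the values listed in GetVoiceEnums.Voice:

```python
import json

# Get details for a single voice; pass customization_id to include a custom model.
voice = text_to_speech.get_voice('en-US_AllisonVoice').get_result()
print(json.dumps(voice, indent=2))
```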
synthesize(text, *, accept=None, voice=None, customization_id=None, **kwargs)
Synthesize audio.
Synthesizes text to audio that is spoken in the specified voice. The service bases its understanding of the language for the input text on the specified voice. Use a voice that matches the language of the input text. The method accepts a maximum of 5 KB of input text in the body of the request, and 8 KB for the URL and headers. The 5 KB limit includes any SSML tags that you specify. The service returns the synthesized audio stream as an array of bytes. See also: [The HTTP interface](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-usingHTTP#usingHTTP).

Audio formats (accept types)

The service can return audio in the following formats (MIME types). Where indicated, you can optionally specify the sampling rate (rate) of the audio. You must specify a sampling rate for the audio/l16 and audio/mulaw formats. A specified sampling rate must lie in the range of 8 kHz to 192 kHz. Some formats restrict the sampling rate to certain values, as noted. For the audio/l16 format, you can optionally specify the endianness (endianness) of the audio: endianness=big-endian or endianness=little-endian.

Use the Accept header or the accept parameter to specify the requested format of the response audio. If you omit an audio format altogether, the service returns the audio in Ogg format with the Opus codec (audio/ogg;codecs=opus). The service always returns single-channel audio.

* audio/basic - The service returns audio with a sampling rate of 8000 Hz.
* audio/flac - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/l16 - You must specify the rate of the audio. You can optionally specify the endianness of the audio. The default endianness is little-endian.
* audio/mp3 - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/mpeg - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/mulaw - You must specify the rate of the audio.
* audio/ogg - The service returns the audio in the vorbis codec. You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/ogg;codecs=opus - You can optionally specify the rate of the audio. Only the following values are valid sampling rates: 48000, 24000, 16000, 12000, or 8000. If you specify a value other than one of these, the service returns an error. The default sampling rate is 48,000 Hz.
* audio/ogg;codecs=vorbis - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/wav - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.
* audio/webm - The service returns the audio in the opus codec. The service returns audio with a sampling rate of 48,000 Hz.
* audio/webm;codecs=opus - The service returns audio with a sampling rate of 48,000 Hz.
* audio/webm;codecs=vorbis - You can optionally specify the rate of the audio. The default sampling rate is 22,050 Hz.

For more information about specifying an audio format, including additional details about some of the formats, see [Audio formats](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-audioFormats#audioFormats).

Warning messages

If a request includes invalid query parameters, the service returns a Warnings response header that provides messages about the invalid parameters. The warning includes a descriptive message and a list of invalid argument strings, for example “Unknown arguments:” or “Unknown url query arguments:” followed by a list of the form “{invalid_arg_1}, {invalid_arg_2}.” The request succeeds despite the warnings.
- Parameters
text (str) – The text to synthesize.
accept (str) – (optional) The requested format (MIME type) of the audio. You can use the Accept header or the accept parameter to specify the audio format. For more information about specifying an audio format, see Audio formats (accept types) in the method description.
voice (str) – (optional) The voice to use for synthesis.
customization_id (str) – (optional) The customization ID (GUID) of a custom voice model to use for the synthesis. If a custom voice model is specified, it is guaranteed to work only if it matches the language of the indicated voice. You must make the request with credentials for the instance of the service that owns the custom model. Omit the parameter to use the specified voice with no customization.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
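A typical usage sketch, assuming the text_to_speech client from above; the output file name is arbitrary:

```python
# Synthesize a phrase as WAV audio and write the returned byte stream to disk.
with open('hello_world.wav', 'wb') as audio_file:
    result = text_to_speech.synthesize(
        'Hello world',
        voice='en-US_AllisonVoice',
        accept='audio/wav'
    ).get_result()
    audio_file.write(result.content)
```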
get_pronunciation(text, *, voice=None, format=None, customization_id=None, **kwargs)
Get pronunciation.
Gets the phonetic pronunciation for the specified word. You can request the pronunciation for a specific format. You can also request the pronunciation for a specific voice to see the default translation for the language of that voice or for a specific custom voice model to see the translation for that voice model. Note: This method is currently a beta release. See also: [Querying a word from a language](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customWords#cuWordsQueryLanguage).
- Parameters
text (str) – The word for which the pronunciation is requested.
voice (str) – (optional) A voice that specifies the language in which the pronunciation is to be returned. All voices for the same language (for example, en-US) return the same translation.
format (str) – (optional) The phoneme format in which to return the pronunciation. Omit the parameter to obtain the pronunciation in the default format.
customization_id (str) – (optional) The customization ID (GUID) of a custom voice model for which the pronunciation is to be returned. The language of a specified custom model must match the language of the specified voice. If the word is not defined in the specified custom model, the service returns the default translation for the custom model’s language. You must make the request with credentials for the instance of the service that owns the custom model. Omit the parameter to see the translation for the specified voice with no customization.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
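A usage sketch, assuming the text_to_speech client from above:

```python
import json

# Request the pronunciation of a word in IPA format for a US English voice.
pronunciation = text_to_speech.get_pronunciation(
    'tomato',
    voice='en-US_AllisonVoice',
    format='ipa'
).get_result()
print(json.dumps(pronunciation, indent=2))
```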
create_voice_model(name, *, language=None, description=None, **kwargs)
Create a custom model.
Creates a new empty custom voice model. You must specify a name for the new custom model. You can optionally specify the language and a description for the new model. The model is owned by the instance of the service whose credentials are used to create it. Note: This method is currently a beta release. See also: [Creating a custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customModels#cuModelsCreate).
- Parameters
name (str) – The name of the new custom voice model.
language (str) – (optional) The language of the new custom voice model. Omit the parameter to use the default language, en-US.
description (str) – (optional) A description of the new custom voice model. Specifying a description is recommended.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
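A usage sketch, assuming the text_to_speech client from above; the returned customization_id identifies the new model in later calls:

```python
# Create an empty custom voice model for US English.
voice_model = text_to_speech.create_voice_model(
    'First Model',
    language='en-US',
    description='First custom voice model'
).get_result()
customization_id = voice_model['customization_id']
```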
list_voice_models(*, language=None, **kwargs)
List custom models.
Lists metadata such as the name and description for all custom voice models that are owned by an instance of the service. Specify a language to list the voice models for that language only. To see the words in addition to the metadata for a specific voice model, use the List a custom model method. You must use credentials for the instance of the service that owns a model to list information about it. Note: This method is currently a beta release. See also: [Querying all custom models](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customModels#cuModelsQueryAll).
- Parameters
language (str) – (optional) The language for which custom voice models that are owned by the requesting credentials are to be returned. Omit the parameter to see all custom voice models that are owned by the requester.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
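A usage sketch, assuming the text_to_speech client from above; omit the language keyword to list models for all languages:

```python
import json

# List custom voice models owned by these credentials, restricted to US English.
voice_models = text_to_speech.list_voice_models(language='en-US').get_result()
print(json.dumps(voice_models, indent=2))
```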
update_voice_model(customization_id, *, name=None, description=None, words=None, **kwargs)
Update a custom model.
Updates information for the specified custom voice model. You can update metadata such as the name and description of the voice model. You can also update the words in the model and their translations. Adding a new translation for a word that already exists in a custom model overwrites the word’s existing translation. A custom model can contain no more than 20,000 entries. You must use credentials for the instance of the service that owns a model to update it. You can define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. Phonetic translations are based on the SSML phoneme format for representing a word. You can specify them in standard International Phonetic Alphabet (IPA) representation, for example <phoneme alphabet="ipa" ph="təmˈɑto"></phoneme>, or in the proprietary IBM Symbolic Phonetic Representation (SPR), for example <phoneme alphabet="ibm" ph="1gAstroEntxrYFXs"></phoneme>. Note: This method is currently a beta release. See also: [Updating a custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customModels#cuModelsUpdate), [Adding words to a Japanese custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customWords#cuJapaneseAdd), [Understanding customization](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customIntro#customIntro).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom voice model. You must make the request with credentials for the instance of the service that owns the custom model.
name (str) – (optional) A new name for the custom voice model.
description (str) – (optional) A new description for the custom voice model.
words (list[Word]) – (optional) An array of Word objects that provides the words and their translations that are to be added or updated for the custom voice model. Pass an empty array to make no additions or updates.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
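A usage sketch, assuming the text_to_speech client and customization_id from above; the entries are shown as plain dicts with word and translation keys (Word model objects can be used equivalently):

```python
# Rename the custom model and add or update a single word translation.
text_to_speech.update_voice_model(
    customization_id,
    name='First Model Update',
    words=[{'word': 'NCAA', 'translation': 'N C double A'}]
)
```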
get_voice_model(customization_id, **kwargs)
Get a custom model.
Gets all information about a specified custom voice model. In addition to metadata such as the name and description of the voice model, the output includes the words and their translations as defined in the model. To see just the metadata for a voice model, use the List custom models method. Note: This method is currently a beta release. See also: [Querying a custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customModels#cuModelsQuery).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom voice model. You must make the request with credentials for the instance of the service that owns the custom model.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
delete_voice_model(customization_id, **kwargs)
Delete a custom model.
Deletes the specified custom voice model. You must use credentials for the instance of the service that owns a model to delete it. Note: This method is currently a beta release. See also: [Deleting a custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customModels#cuModelsDelete).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom voice model. You must make the request with credentials for the instance of the service that owns the custom model.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
add_words(customization_id, words, **kwargs)
Add custom words.
Adds one or more words and their translations to the specified custom voice model. Adding a new translation for a word that already exists in a custom model overwrites the word’s existing translation. A custom model can contain no more than 20,000 entries. You must use credentials for the instance of the service that owns a model to add words to it. You can define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. Phonetic translations are based on the SSML phoneme format for representing a word. You can specify them in standard International Phonetic Alphabet (IPA) representation, for example <phoneme alphabet="ipa" ph="təmˈɑto"></phoneme>, or in the proprietary IBM Symbolic Phonetic Representation (SPR), for example <phoneme alphabet="ibm" ph="1gAstroEntxrYFXs"></phoneme>. Note: This method is currently a beta release. See also: [Adding multiple words to a custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customWords#cuWordsAdd), [Adding words to a Japanese custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customWords#cuJapaneseAdd), [Understanding customization](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customIntro#customIntro).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom voice model. You must make the request with credentials for the instance of the service that owns the custom model.
words (list[Word]) – The Add custom words method accepts an array of Word objects. Each object provides a word that is to be added or updated for the custom voice model and the word’s translation. The List custom words method returns an array of Word objects. Each object shows a word and its translation from the custom voice model. The words are listed in alphabetical order, with uppercase letters listed before lowercase letters. The array is empty if the custom model contains no words.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
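A usage sketch, assuming the text_to_speech client and customization_id from above; as with update_voice_model, the entries are shown as plain dicts:

```python
# Add several sounds-like translations to the custom model in one request.
text_to_speech.add_words(
    customization_id,
    words=[
        {'word': 'EEE', 'translation': 'tripull E'},
        {'word': 'IEEE', 'translation': 'I tripull E'}
    ]
)
```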
list_words(customization_id, **kwargs)
List custom words.
Lists all of the words and their translations for the specified custom voice model. The output shows the translations as they are defined in the model. You must use credentials for the instance of the service that owns a model to list its words. Note: This method is currently a beta release. See also: [Querying all words from a custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customWords#cuWordsQueryModel).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom voice model. You must make the request with credentials for the instance of the service that owns the custom model.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
add_word(customization_id, word, translation, *, part_of_speech=None, **kwargs)
Add a custom word.
Adds a single word and its translation to the specified custom voice model. Adding a new translation for a word that already exists in a custom model overwrites the word’s existing translation. A custom model can contain no more than 20,000 entries. You must use credentials for the instance of the service that owns a model to add a word to it. You can define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. Phonetic translations are based on the SSML phoneme format for representing a word. You can specify them in standard International Phonetic Alphabet (IPA) representation, for example <phoneme alphabet="ipa" ph="təmˈɑto"></phoneme>, or in the proprietary IBM Symbolic Phonetic Representation (SPR), for example <phoneme alphabet="ibm" ph="1gAstroEntxrYFXs"></phoneme>. Note: This method is currently a beta release. See also: [Adding a single word to a custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customWords#cuWordAdd), [Adding words to a Japanese custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customWords#cuJapaneseAdd), [Understanding customization](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customIntro#customIntro).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom voice model. You must make the request with credentials for the instance of the service that owns the custom model.
word (str) – The word that is to be added or updated for the custom voice model.
translation (str) – The phonetic or sounds-like translation for the word. A phonetic translation is based on the SSML format for representing the phonetic string of a word either as an IPA translation or as an IBM SPR translation. A sounds-like is one or more words that, when combined, sound like the word.
part_of_speech (str) – (optional) Japanese only. The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-rules#jaNotes).
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
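A usage sketch, assuming the text_to_speech client and customization_id from above; the translation uses the SSML phoneme format in IPA:

```python
# Add a single word with an IPA phonetic translation.
text_to_speech.add_word(
    customization_id,
    word='tomato',
    translation='<phoneme alphabet="ipa" ph="təmˈɑto"></phoneme>'
)
```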
get_word(customization_id, word, **kwargs)
Get a custom word.
Gets the translation for a single word from the specified custom model. The output shows the translation as it is defined in the model. You must use credentials for the instance of the service that owns a model to list its words. Note: This method is currently a beta release. See also: [Querying a single word from a custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customWords#cuWordQueryModel).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom voice model. You must make the request with credentials for the instance of the service that owns the custom model.
word (str) – The word that is to be queried from the custom voice model.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
delete_word(customization_id, word, **kwargs)
Delete a custom word.
Deletes a single word from the specified custom voice model. You must use credentials for the instance of the service that owns a model to delete its words. Note: This method is currently a beta release. See also: [Deleting a word from a custom model](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-customWords#cuWordDelete).
- Parameters
customization_id (str) – The customization ID (GUID) of the custom voice model. You must make the request with credentials for the instance of the service that owns the custom model.
word (str) – The word that is to be deleted from the custom voice model.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
delete_user_data(customer_id, **kwargs)
Delete labeled data.
Deletes all data that is associated with a specified customer ID. The method deletes all data for the customer ID, regardless of the method by which the information was added. The method has no effect if no data is associated with the customer ID. You must issue the request with credentials for the same instance of the service that was used to associate the customer ID with the data. You associate a customer ID with data by passing the X-Watson-Metadata header with a request that passes the data. See also: [Information security](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-information-security#information-security).
- Parameters
customer_id (str) – The customer ID for which all data is to be deleted.
headers (dict) – A dict containing the request headers
- Returns
A DetailedResponse containing the result, headers and HTTP status code.
- Return type
DetailedResponse
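A usage sketch, assuming the text_to_speech client from above and a placeholder customer ID:

```python
# Delete all data associated with a customer ID that was previously passed
# to the service via the X-Watson-Metadata header.
text_to_speech.delete_user_data('my_customer_id')
```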
class GetVoiceEnums
Bases: object

class Voice
Bases: enum.Enum
The voice for which information is to be returned.
DE_DE_BIRGITVOICE = 'de-DE_BirgitVoice'
DE_DE_BIRGITV3VOICE = 'de-DE_BirgitV3Voice'
DE_DE_DIETERVOICE = 'de-DE_DieterVoice'
DE_DE_DIETERV3VOICE = 'de-DE_DieterV3Voice'
EN_GB_KATEVOICE = 'en-GB_KateVoice'
EN_GB_KATEV3VOICE = 'en-GB_KateV3Voice'
EN_US_ALLISONVOICE = 'en-US_AllisonVoice'
EN_US_ALLISONV3VOICE = 'en-US_AllisonV3Voice'
EN_US_LISAVOICE = 'en-US_LisaVoice'
EN_US_LISAV3VOICE = 'en-US_LisaV3Voice'
EN_US_MICHAELVOICE = 'en-US_MichaelVoice'
EN_US_MICHAELV3VOICE = 'en-US_MichaelV3Voice'
ES_ES_ENRIQUEVOICE = 'es-ES_EnriqueVoice'
ES_ES_ENRIQUEV3VOICE = 'es-ES_EnriqueV3Voice'
ES_ES_LAURAVOICE = 'es-ES_LauraVoice'
ES_ES_LAURAV3VOICE = 'es-ES_LauraV3Voice'
ES_LA_SOFIAVOICE = 'es-LA_SofiaVoice'
ES_LA_SOFIAV3VOICE = 'es-LA_SofiaV3Voice'
ES_US_SOFIAVOICE = 'es-US_SofiaVoice'
ES_US_SOFIAV3VOICE = 'es-US_SofiaV3Voice'
FR_FR_RENEEVOICE = 'fr-FR_ReneeVoice'
FR_FR_RENEEV3VOICE = 'fr-FR_ReneeV3Voice'
IT_IT_FRANCESCAVOICE = 'it-IT_FrancescaVoice'
IT_IT_FRANCESCAV3VOICE = 'it-IT_FrancescaV3Voice'
JA_JP_EMIVOICE = 'ja-JP_EmiVoice'
JA_JP_EMIV3VOICE = 'ja-JP_EmiV3Voice'
PT_BR_ISABELAVOICE = 'pt-BR_IsabelaVoice'
PT_BR_ISABELAV3VOICE = 'pt-BR_IsabelaV3Voice'
class SynthesizeEnums
Bases: object

class Accept
Bases: enum.Enum
The requested format (MIME type) of the audio. You can use the Accept header or the accept parameter to specify the audio format. For more information about specifying an audio format, see Audio formats (accept types) in the method description.
AUDIO_BASIC = 'audio/basic'
AUDIO_FLAC = 'audio/flac'
AUDIO_L16 = 'audio/l16'
AUDIO_OGG = 'audio/ogg'
AUDIO_OGG_CODECS_OPUS = 'audio/ogg;codecs=opus'
AUDIO_OGG_CODECS_VORBIS = 'audio/ogg;codecs=vorbis'
AUDIO_MP3 = 'audio/mp3'
AUDIO_MPEG = 'audio/mpeg'
AUDIO_MULAW = 'audio/mulaw'
AUDIO_WAV = 'audio/wav'
AUDIO_WEBM = 'audio/webm'
AUDIO_WEBM_CODECS_OPUS = 'audio/webm;codecs=opus'
AUDIO_WEBM_CODECS_VORBIS = 'audio/webm;codecs=vorbis'
class Voice
Bases: enum.Enum
The voice to use for synthesis.
DE_DE_BIRGITVOICE = 'de-DE_BirgitVoice'
DE_DE_BIRGITV3VOICE = 'de-DE_BirgitV3Voice'
DE_DE_DIETERVOICE = 'de-DE_DieterVoice'
DE_DE_DIETERV3VOICE = 'de-DE_DieterV3Voice'
EN_GB_KATEVOICE = 'en-GB_KateVoice'
EN_GB_KATEV3VOICE = 'en-GB_KateV3Voice'
EN_US_ALLISONVOICE = 'en-US_AllisonVoice'
EN_US_ALLISONV3VOICE = 'en-US_AllisonV3Voice'
EN_US_LISAVOICE = 'en-US_LisaVoice'
EN_US_LISAV3VOICE = 'en-US_LisaV3Voice'
EN_US_MICHAELVOICE = 'en-US_MichaelVoice'
EN_US_MICHAELV3VOICE = 'en-US_MichaelV3Voice'
ES_ES_ENRIQUEVOICE = 'es-ES_EnriqueVoice'
ES_ES_ENRIQUEV3VOICE = 'es-ES_EnriqueV3Voice'
ES_ES_LAURAVOICE = 'es-ES_LauraVoice'
ES_ES_LAURAV3VOICE = 'es-ES_LauraV3Voice'
ES_LA_SOFIAVOICE = 'es-LA_SofiaVoice'
ES_LA_SOFIAV3VOICE = 'es-LA_SofiaV3Voice'
ES_US_SOFIAVOICE = 'es-US_SofiaVoice'
ES_US_SOFIAV3VOICE = 'es-US_SofiaV3Voice'
FR_FR_RENEEVOICE = 'fr-FR_ReneeVoice'
FR_FR_RENEEV3VOICE = 'fr-FR_ReneeV3Voice'
IT_IT_FRANCESCAVOICE = 'it-IT_FrancescaVoice'
IT_IT_FRANCESCAV3VOICE = 'it-IT_FrancescaV3Voice'
JA_JP_EMIVOICE = 'ja-JP_EmiVoice'
JA_JP_EMIV3VOICE = 'ja-JP_EmiV3Voice'
PT_BR_ISABELAVOICE = 'pt-BR_IsabelaVoice'
PT_BR_ISABELAV3VOICE = 'pt-BR_IsabelaV3Voice'
class GetPronunciationEnums
Bases: object

class Voice
Bases: enum.Enum
A voice that specifies the language in which the pronunciation is to be returned. All voices for the same language (for example, en-US) return the same translation.
DE_DE_BIRGITVOICE = 'de-DE_BirgitVoice'
DE_DE_BIRGITV3VOICE = 'de-DE_BirgitV3Voice'
DE_DE_DIETERVOICE = 'de-DE_DieterVoice'
DE_DE_DIETERV3VOICE = 'de-DE_DieterV3Voice'
EN_GB_KATEVOICE = 'en-GB_KateVoice'
EN_GB_KATEV3VOICE = 'en-GB_KateV3Voice'
EN_US_ALLISONVOICE = 'en-US_AllisonVoice'
EN_US_ALLISONV3VOICE = 'en-US_AllisonV3Voice'
EN_US_LISAVOICE = 'en-US_LisaVoice'
EN_US_LISAV3VOICE = 'en-US_LisaV3Voice'
EN_US_MICHAELVOICE = 'en-US_MichaelVoice'
EN_US_MICHAELV3VOICE = 'en-US_MichaelV3Voice'
ES_ES_ENRIQUEVOICE = 'es-ES_EnriqueVoice'
ES_ES_ENRIQUEV3VOICE = 'es-ES_EnriqueV3Voice'
ES_ES_LAURAVOICE = 'es-ES_LauraVoice'
ES_ES_LAURAV3VOICE = 'es-ES_LauraV3Voice'
ES_LA_SOFIAVOICE = 'es-LA_SofiaVoice'
ES_LA_SOFIAV3VOICE = 'es-LA_SofiaV3Voice'
ES_US_SOFIAVOICE = 'es-US_SofiaVoice'
ES_US_SOFIAV3VOICE = 'es-US_SofiaV3Voice'
FR_FR_RENEEVOICE = 'fr-FR_ReneeVoice'
FR_FR_RENEEV3VOICE = 'fr-FR_ReneeV3Voice'
IT_IT_FRANCESCAVOICE = 'it-IT_FrancescaVoice'
IT_IT_FRANCESCAV3VOICE = 'it-IT_FrancescaV3Voice'
JA_JP_EMIVOICE = 'ja-JP_EmiVoice'
JA_JP_EMIV3VOICE = 'ja-JP_EmiV3Voice'
PT_BR_ISABELAVOICE = 'pt-BR_IsabelaVoice'
PT_BR_ISABELAV3VOICE = 'pt-BR_IsabelaV3Voice'
class ListVoiceModelsEnums
Bases: object

class Language
Bases: enum.Enum
The language for which custom voice models that are owned by the requesting credentials are to be returned. Omit the parameter to see all custom voice models that are owned by the requester.
DE_DE = 'de-DE'
EN_GB = 'en-GB'
EN_US = 'en-US'
ES_ES = 'es-ES'
ES_LA = 'es-LA'
ES_US = 'es-US'
FR_FR = 'fr-FR'
IT_IT = 'it-IT'
JA_JP = 'ja-JP'
PT_BR = 'pt-BR'
class Pronunciation(pronunciation)
Bases: object
The pronunciation of the specified text.
- Attr str pronunciation
The pronunciation of the specified text in the requested voice and format. If a custom voice model is specified, the pronunciation also reflects that custom voice.
class SupportedFeatures(custom_pronunciation, voice_transformation)
Bases: object
Additional service features that are supported with the voice.
- Attr bool custom_pronunciation
If true, the voice can be customized; if false, the voice cannot be customized. (Same as customizable.)
- Attr bool voice_transformation
If true, the voice can be transformed by using the SSML <voice-transformation> element; if false, the voice cannot be transformed.
class Translation(translation, *, part_of_speech=None)
Bases: object
Information about the translation for the specified text.
- Attr str translation
The phonetic or sounds-like translation for the word. A phonetic translation is based on the SSML format for representing the phonetic string of a word either as an IPA translation or as an IBM SPR translation. A sounds-like is one or more words that, when combined, sound like the word.
- Attr str part_of_speech
(optional) Japanese only. The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-rules#jaNotes).
class PartOfSpeechEnum
Bases: enum.Enum
Japanese only. The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-rules#jaNotes).
DOSI = 'Dosi'
FUKU = 'Fuku'
GOBI = 'Gobi'
HOKA = 'Hoka'
JODO = 'Jodo'
JOSI = 'Josi'
KATO = 'Kato'
KEDO = 'Kedo'
KEYO = 'Keyo'
KIGO = 'Kigo'
KOYU = 'Koyu'
MESI = 'Mesi'
RETA = 'Reta'
STBI = 'Stbi'
STTO = 'Stto'
STZO = 'Stzo'
SUJI = 'Suji'
class Voice(url, gender, name, language, description, customizable, supported_features, *, customization=None)
Bases: object
Information about an available voice model.
- Attr str url
The URI of the voice.
- Attr str gender
The gender of the voice: male or female.
- Attr str name
The name of the voice. Use this as the voice identifier in all requests.
- Attr str language
The language and region of the voice (for example, en-US).
- Attr str description
A textual description of the voice.
- Attr bool customizable
If true, the voice can be customized; if false, the voice cannot be customized. (Same as custom_pronunciation; maintained for backward compatibility.)
- Attr SupportedFeatures supported_features
Additional service features that are supported with the voice.
- Attr VoiceModel customization
(optional) Returns information about a specified custom voice model. This field is returned only by the Get a voice method and only when you specify the customization ID of a custom voice model.
class VoiceModel(customization_id, *, name=None, language=None, owner=None, created=None, last_modified=None, description=None, words=None)
Bases: object
Information about an existing custom voice model.
- Attr str customization_id
The customization ID (GUID) of the custom voice model. The Create a custom model method returns only this field. It does not return the other fields of this object.
- Attr str name
(optional) The name of the custom voice model.
- Attr str language
(optional) The language identifier of the custom voice model (for example, en-US).
- Attr str owner
(optional) The GUID of the credentials for the instance of the service that owns the custom voice model.
- Attr str created
(optional) The date and time in Coordinated Universal Time (UTC) at which the custom voice model was created. The value is provided in full ISO 8601 format (YYYY-MM-DDThh:mm:ss.sTZD).
- Attr str last_modified
(optional) The date and time in Coordinated Universal Time (UTC) at which the custom voice model was last modified. The created and updated fields are equal when a voice model is first added but has yet to be updated. The value is provided in full ISO 8601 format (YYYY-MM-DDThh:mm:ss.sTZD).
- Attr str description
(optional) The description of the custom voice model.
- Attr list[Word] words
(optional) An array of Word objects that lists the words and their translations from the custom voice model. The words are listed in alphabetical order, with uppercase letters listed before lowercase letters. The array is empty if the custom model contains no words. This field is returned only by the Get a voice method and only when you specify the customization ID of a custom voice model.
class VoiceModels(customizations)
Bases: object
Information about existing custom voice models.
- Attr list[VoiceModel] customizations
An array of VoiceModel objects that provides information about each available custom voice model. The array is empty if the requesting credentials own no custom voice models (if no language is specified) or own no custom voice models for the specified language.
class Voices(voices)
Bases: object
Information about all available voice models.
- Attr list[Voice] voices
A list of available voices.
class Word(word, translation, *, part_of_speech=None)
Bases: object
Information about a word for the custom voice model.
- Attr str word
The word for the custom voice model.
- Attr str translation
The phonetic or sounds-like translation for the word. A phonetic translation is based on the SSML format for representing the phonetic string of a word either as an IPA or IBM SPR translation. A sounds-like translation consists of one or more words that, when combined, sound like the word.
- Attr str part_of_speech
(optional) Japanese only. The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-rules#jaNotes).
class PartOfSpeechEnum
Bases: enum.Enum
Japanese only. The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/services/text-to-speech?topic=text-to-speech-rules#jaNotes).
DOSI = 'Dosi'
FUKU = 'Fuku'
GOBI = 'Gobi'
HOKA = 'Hoka'
JODO = 'Jodo'
JOSI = 'Josi'
KATO = 'Kato'
KEDO = 'Kedo'
KEYO = 'Keyo'
KIGO = 'Kigo'
KOYU = 'Koyu'
MESI = 'Mesi'
RETA = 'Reta'
STBI = 'Stbi'
STTO = 'Stto'
STZO = 'Stzo'
SUJI = 'Suji'
class Words(words)
Bases: object
For the Add custom words method, one or more words that are to be added or updated for the custom voice model and the translation for each specified word. For the List custom words method, the words and their translations from the custom voice model.
- Attr list[Word] words
The Add custom words method accepts an array of Word objects. Each object provides a word that is to be added or updated for the custom voice model and the word’s translation. The List custom words method returns an array of Word objects. Each object shows a word and its translation from the custom voice model. The words are listed in alphabetical order, with uppercase letters listed before lowercase letters. The array is empty if the custom model contains no words.