texttospeechv1

package
v2.2.0
Published: Sep 14, 2021 License: Apache-2.0 Imports: 13 Imported by: 2

Documentation

Overview

Package texttospeechv1 : Operations and models for the TextToSpeechV1 service

Index

Constants

const (
	AddWordOptionsPartOfSpeechDosiConst = "Dosi"
	AddWordOptionsPartOfSpeechFukuConst = "Fuku"
	AddWordOptionsPartOfSpeechGobiConst = "Gobi"
	AddWordOptionsPartOfSpeechHokaConst = "Hoka"
	AddWordOptionsPartOfSpeechJodoConst = "Jodo"
	AddWordOptionsPartOfSpeechJosiConst = "Josi"
	AddWordOptionsPartOfSpeechKatoConst = "Kato"
	AddWordOptionsPartOfSpeechKedoConst = "Kedo"
	AddWordOptionsPartOfSpeechKeyoConst = "Keyo"
	AddWordOptionsPartOfSpeechKigoConst = "Kigo"
	AddWordOptionsPartOfSpeechKoyuConst = "Koyu"
	AddWordOptionsPartOfSpeechMesiConst = "Mesi"
	AddWordOptionsPartOfSpeechRetaConst = "Reta"
	AddWordOptionsPartOfSpeechStbiConst = "Stbi"
	AddWordOptionsPartOfSpeechSttoConst = "Stto"
	AddWordOptionsPartOfSpeechStzoConst = "Stzo"
	AddWordOptionsPartOfSpeechSujiConst = "Suji"
)

Constants associated with the AddWordOptions.PartOfSpeech property. **Japanese only.** The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-rules#jaNotes).

const (
	CreateCustomModelOptionsLanguageArMsConst = "ar-MS"
	CreateCustomModelOptionsLanguageDeDeConst = "de-DE"
	CreateCustomModelOptionsLanguageEnGbConst = "en-GB"
	CreateCustomModelOptionsLanguageEnUsConst = "en-US"
	CreateCustomModelOptionsLanguageEsEsConst = "es-ES"
	CreateCustomModelOptionsLanguageEsLaConst = "es-LA"
	CreateCustomModelOptionsLanguageEsUsConst = "es-US"
	CreateCustomModelOptionsLanguageFrCaConst = "fr-CA"
	CreateCustomModelOptionsLanguageFrFrConst = "fr-FR"
	CreateCustomModelOptionsLanguageItItConst = "it-IT"
	CreateCustomModelOptionsLanguageJaJpConst = "ja-JP"
	CreateCustomModelOptionsLanguageKoKrConst = "ko-KR"
	CreateCustomModelOptionsLanguageNlBeConst = "nl-BE"
	CreateCustomModelOptionsLanguageNlNlConst = "nl-NL"
	CreateCustomModelOptionsLanguagePtBrConst = "pt-BR"
	CreateCustomModelOptionsLanguageZhCnConst = "zh-CN"
)

Constants associated with the CreateCustomModelOptions.Language property. The language of the new custom model. You create a custom model for a specific language, not for a specific voice. A custom model can be used with any voice for its specified language. Omit the parameter to use the default language, `en-US`. **Note:** The `ar-AR` language identifier cannot be used to create a custom model. Use the `ar-MS` identifier instead.

**IBM Cloud:** The Arabic, Chinese, Dutch, Australian English, and Korean languages and voices are supported only for IBM Cloud.

const (
	GetPronunciationOptionsVoiceArArOmarvoiceConst        = "ar-AR_OmarVoice"
	GetPronunciationOptionsVoiceArMsOmarvoiceConst        = "ar-MS_OmarVoice"
	GetPronunciationOptionsVoiceDeDeBirgitv3voiceConst    = "de-DE_BirgitV3Voice"
	GetPronunciationOptionsVoiceDeDeBirgitvoiceConst      = "de-DE_BirgitVoice"
	GetPronunciationOptionsVoiceDeDeDieterv3voiceConst    = "de-DE_DieterV3Voice"
	GetPronunciationOptionsVoiceDeDeDietervoiceConst      = "de-DE_DieterVoice"
	GetPronunciationOptionsVoiceDeDeErikav3voiceConst     = "de-DE_ErikaV3Voice"
	GetPronunciationOptionsVoiceEnAuCraigvoiceConst       = "en-AU-CraigVoice"
	GetPronunciationOptionsVoiceEnAuMadisonvoiceConst     = "en-AU-MadisonVoice"
	GetPronunciationOptionsVoiceEnGbCharlottev3voiceConst = "en-GB_CharlotteV3Voice"
	GetPronunciationOptionsVoiceEnGbJamesv3voiceConst     = "en-GB_JamesV3Voice"
	GetPronunciationOptionsVoiceEnGbKatev3voiceConst      = "en-GB_KateV3Voice"
	GetPronunciationOptionsVoiceEnGbKatevoiceConst        = "en-GB_KateVoice"
	GetPronunciationOptionsVoiceEnUsAllisonv3voiceConst   = "en-US_AllisonV3Voice"
	GetPronunciationOptionsVoiceEnUsAllisonvoiceConst     = "en-US_AllisonVoice"
	GetPronunciationOptionsVoiceEnUsEmilyv3voiceConst     = "en-US_EmilyV3Voice"
	GetPronunciationOptionsVoiceEnUsHenryv3voiceConst     = "en-US_HenryV3Voice"
	GetPronunciationOptionsVoiceEnUsKevinv3voiceConst     = "en-US_KevinV3Voice"
	GetPronunciationOptionsVoiceEnUsLisav3voiceConst      = "en-US_LisaV3Voice"
	GetPronunciationOptionsVoiceEnUsLisavoiceConst        = "en-US_LisaVoice"
	GetPronunciationOptionsVoiceEnUsMichaelv3voiceConst   = "en-US_MichaelV3Voice"
	GetPronunciationOptionsVoiceEnUsMichaelvoiceConst     = "en-US_MichaelVoice"
	GetPronunciationOptionsVoiceEnUsOliviav3voiceConst    = "en-US_OliviaV3Voice"
	GetPronunciationOptionsVoiceEsEsEnriquev3voiceConst   = "es-ES_EnriqueV3Voice"
	GetPronunciationOptionsVoiceEsEsEnriquevoiceConst     = "es-ES_EnriqueVoice"
	GetPronunciationOptionsVoiceEsEsLaurav3voiceConst     = "es-ES_LauraV3Voice"
	GetPronunciationOptionsVoiceEsEsLauravoiceConst       = "es-ES_LauraVoice"
	GetPronunciationOptionsVoiceEsLaSofiav3voiceConst     = "es-LA_SofiaV3Voice"
	GetPronunciationOptionsVoiceEsLaSofiavoiceConst       = "es-LA_SofiaVoice"
	GetPronunciationOptionsVoiceEsUsSofiav3voiceConst     = "es-US_SofiaV3Voice"
	GetPronunciationOptionsVoiceEsUsSofiavoiceConst       = "es-US_SofiaVoice"
	GetPronunciationOptionsVoiceFrCaLouisev3voiceConst    = "fr-CA_LouiseV3Voice"
	GetPronunciationOptionsVoiceFrFrNicolasv3voiceConst   = "fr-FR_NicolasV3Voice"
	GetPronunciationOptionsVoiceFrFrReneev3voiceConst     = "fr-FR_ReneeV3Voice"
	GetPronunciationOptionsVoiceFrFrReneevoiceConst       = "fr-FR_ReneeVoice"
	GetPronunciationOptionsVoiceItItFrancescav3voiceConst = "it-IT_FrancescaV3Voice"
	GetPronunciationOptionsVoiceItItFrancescavoiceConst   = "it-IT_FrancescaVoice"
	GetPronunciationOptionsVoiceJaJpEmiv3voiceConst       = "ja-JP_EmiV3Voice"
	GetPronunciationOptionsVoiceJaJpEmivoiceConst         = "ja-JP_EmiVoice"
	GetPronunciationOptionsVoiceKoKrHyunjunvoiceConst     = "ko-KR_HyunjunVoice"
	GetPronunciationOptionsVoiceKoKrSiwoovoiceConst       = "ko-KR_SiWooVoice"
	GetPronunciationOptionsVoiceKoKrYoungmivoiceConst     = "ko-KR_YoungmiVoice"
	GetPronunciationOptionsVoiceKoKrYunavoiceConst        = "ko-KR_YunaVoice"
	GetPronunciationOptionsVoiceNlBeAdelevoiceConst       = "nl-BE_AdeleVoice"
	GetPronunciationOptionsVoiceNlNlEmmavoiceConst        = "nl-NL_EmmaVoice"
	GetPronunciationOptionsVoiceNlNlLiamvoiceConst        = "nl-NL_LiamVoice"
	GetPronunciationOptionsVoicePtBrIsabelav3voiceConst   = "pt-BR_IsabelaV3Voice"
	GetPronunciationOptionsVoicePtBrIsabelavoiceConst     = "pt-BR_IsabelaVoice"
	GetPronunciationOptionsVoiceZhCnLinavoiceConst        = "zh-CN_LiNaVoice"
	GetPronunciationOptionsVoiceZhCnWangweivoiceConst     = "zh-CN_WangWeiVoice"
	GetPronunciationOptionsVoiceZhCnZhangjingvoiceConst   = "zh-CN_ZhangJingVoice"
)

Constants associated with the GetPronunciationOptions.Voice property. A voice that specifies the language in which the pronunciation is to be returned. All voices for the same language (for example, `en-US`) return the same translation. For more information about specifying a voice, see **Important voice updates for IBM Cloud** in the method description.

**IBM Cloud:** The Arabic, Chinese, Dutch, Australian English, and Korean languages and voices are supported only for IBM Cloud.

const (
	GetPronunciationOptionsFormatIBMConst = "ibm"
	GetPronunciationOptionsFormatIpaConst = "ipa"
)

Constants associated with the GetPronunciationOptions.Format property. The phoneme format in which to return the pronunciation. The Arabic, Chinese, Dutch, Australian English, and Korean languages support only IPA. Omit the parameter to obtain the pronunciation in the default format.

const (
	GetVoiceOptionsVoiceArArOmarvoiceConst        = "ar-AR_OmarVoice"
	GetVoiceOptionsVoiceArMsOmarvoiceConst        = "ar-MS_OmarVoice"
	GetVoiceOptionsVoiceDeDeBirgitv3voiceConst    = "de-DE_BirgitV3Voice"
	GetVoiceOptionsVoiceDeDeBirgitvoiceConst      = "de-DE_BirgitVoice"
	GetVoiceOptionsVoiceDeDeDieterv3voiceConst    = "de-DE_DieterV3Voice"
	GetVoiceOptionsVoiceDeDeDietervoiceConst      = "de-DE_DieterVoice"
	GetVoiceOptionsVoiceDeDeErikav3voiceConst     = "de-DE_ErikaV3Voice"
	GetVoiceOptionsVoiceEnAuCraigvoiceConst       = "en-AU-CraigVoice"
	GetVoiceOptionsVoiceEnAuMadisonvoiceConst     = "en-AU-MadisonVoice"
	GetVoiceOptionsVoiceEnGbCharlottev3voiceConst = "en-GB_CharlotteV3Voice"
	GetVoiceOptionsVoiceEnGbJamesv3voiceConst     = "en-GB_JamesV3Voice"
	GetVoiceOptionsVoiceEnGbKatev3voiceConst      = "en-GB_KateV3Voice"
	GetVoiceOptionsVoiceEnGbKatevoiceConst        = "en-GB_KateVoice"
	GetVoiceOptionsVoiceEnUsAllisonv3voiceConst   = "en-US_AllisonV3Voice"
	GetVoiceOptionsVoiceEnUsAllisonvoiceConst     = "en-US_AllisonVoice"
	GetVoiceOptionsVoiceEnUsEmilyv3voiceConst     = "en-US_EmilyV3Voice"
	GetVoiceOptionsVoiceEnUsHenryv3voiceConst     = "en-US_HenryV3Voice"
	GetVoiceOptionsVoiceEnUsKevinv3voiceConst     = "en-US_KevinV3Voice"
	GetVoiceOptionsVoiceEnUsLisav3voiceConst      = "en-US_LisaV3Voice"
	GetVoiceOptionsVoiceEnUsLisavoiceConst        = "en-US_LisaVoice"
	GetVoiceOptionsVoiceEnUsMichaelv3voiceConst   = "en-US_MichaelV3Voice"
	GetVoiceOptionsVoiceEnUsMichaelvoiceConst     = "en-US_MichaelVoice"
	GetVoiceOptionsVoiceEnUsOliviav3voiceConst    = "en-US_OliviaV3Voice"
	GetVoiceOptionsVoiceEsEsEnriquev3voiceConst   = "es-ES_EnriqueV3Voice"
	GetVoiceOptionsVoiceEsEsEnriquevoiceConst     = "es-ES_EnriqueVoice"
	GetVoiceOptionsVoiceEsEsLaurav3voiceConst     = "es-ES_LauraV3Voice"
	GetVoiceOptionsVoiceEsEsLauravoiceConst       = "es-ES_LauraVoice"
	GetVoiceOptionsVoiceEsLaSofiav3voiceConst     = "es-LA_SofiaV3Voice"
	GetVoiceOptionsVoiceEsLaSofiavoiceConst       = "es-LA_SofiaVoice"
	GetVoiceOptionsVoiceEsUsSofiav3voiceConst     = "es-US_SofiaV3Voice"
	GetVoiceOptionsVoiceEsUsSofiavoiceConst       = "es-US_SofiaVoice"
	GetVoiceOptionsVoiceFrCaLouisev3voiceConst    = "fr-CA_LouiseV3Voice"
	GetVoiceOptionsVoiceFrFrNicolasv3voiceConst   = "fr-FR_NicolasV3Voice"
	GetVoiceOptionsVoiceFrFrReneev3voiceConst     = "fr-FR_ReneeV3Voice"
	GetVoiceOptionsVoiceFrFrReneevoiceConst       = "fr-FR_ReneeVoice"
	GetVoiceOptionsVoiceItItFrancescav3voiceConst = "it-IT_FrancescaV3Voice"
	GetVoiceOptionsVoiceItItFrancescavoiceConst   = "it-IT_FrancescaVoice"
	GetVoiceOptionsVoiceJaJpEmiv3voiceConst       = "ja-JP_EmiV3Voice"
	GetVoiceOptionsVoiceJaJpEmivoiceConst         = "ja-JP_EmiVoice"
	GetVoiceOptionsVoiceKoKrHyunjunvoiceConst     = "ko-KR_HyunjunVoice"
	GetVoiceOptionsVoiceKoKrSiwoovoiceConst       = "ko-KR_SiWooVoice"
	GetVoiceOptionsVoiceKoKrYoungmivoiceConst     = "ko-KR_YoungmiVoice"
	GetVoiceOptionsVoiceKoKrYunavoiceConst        = "ko-KR_YunaVoice"
	GetVoiceOptionsVoiceNlBeAdelevoiceConst       = "nl-BE_AdeleVoice"
	GetVoiceOptionsVoiceNlNlEmmavoiceConst        = "nl-NL_EmmaVoice"
	GetVoiceOptionsVoiceNlNlLiamvoiceConst        = "nl-NL_LiamVoice"
	GetVoiceOptionsVoicePtBrIsabelav3voiceConst   = "pt-BR_IsabelaV3Voice"
	GetVoiceOptionsVoicePtBrIsabelavoiceConst     = "pt-BR_IsabelaVoice"
	GetVoiceOptionsVoiceZhCnLinavoiceConst        = "zh-CN_LiNaVoice"
	GetVoiceOptionsVoiceZhCnWangweivoiceConst     = "zh-CN_WangWeiVoice"
	GetVoiceOptionsVoiceZhCnZhangjingvoiceConst   = "zh-CN_ZhangJingVoice"
)

Constants associated with the GetVoiceOptions.Voice property. The voice for which information is to be returned. For more information about specifying a voice, see **Important voice updates for IBM Cloud** in the method description.

**IBM Cloud:** The Arabic, Chinese, Dutch, Australian English, and Korean languages and voices are supported only for IBM Cloud.

const (
	ListCustomModelsOptionsLanguageArMsConst = "ar-MS"
	ListCustomModelsOptionsLanguageDeDeConst = "de-DE"
	ListCustomModelsOptionsLanguageEnAuConst = "en-AU"
	ListCustomModelsOptionsLanguageEnGbConst = "en-GB"
	ListCustomModelsOptionsLanguageEnUsConst = "en-US"
	ListCustomModelsOptionsLanguageEsEsConst = "es-ES"
	ListCustomModelsOptionsLanguageEsLaConst = "es-LA"
	ListCustomModelsOptionsLanguageEsUsConst = "es-US"
	ListCustomModelsOptionsLanguageFrCaConst = "fr-CA"
	ListCustomModelsOptionsLanguageFrFrConst = "fr-FR"
	ListCustomModelsOptionsLanguageItItConst = "it-IT"
	ListCustomModelsOptionsLanguageJaJpConst = "ja-JP"
	ListCustomModelsOptionsLanguageKoKrConst = "ko-KR"
	ListCustomModelsOptionsLanguageNlBeConst = "nl-BE"
	ListCustomModelsOptionsLanguageNlNlConst = "nl-NL"
	ListCustomModelsOptionsLanguagePtBrConst = "pt-BR"
	ListCustomModelsOptionsLanguageZhCnConst = "zh-CN"
)

Constants associated with the ListCustomModelsOptions.Language property. The language for which custom models that are owned by the requesting credentials are to be returned. Omit the parameter to see all custom models that are owned by the requester.

const (
	SynthesizeOptionsVoiceArArOmarvoiceConst        = "ar-AR_OmarVoice"
	SynthesizeOptionsVoiceArMsOmarvoiceConst        = "ar-MS_OmarVoice"
	SynthesizeOptionsVoiceDeDeBirgitv3voiceConst    = "de-DE_BirgitV3Voice"
	SynthesizeOptionsVoiceDeDeBirgitvoiceConst      = "de-DE_BirgitVoice"
	SynthesizeOptionsVoiceDeDeDieterv3voiceConst    = "de-DE_DieterV3Voice"
	SynthesizeOptionsVoiceDeDeDietervoiceConst      = "de-DE_DieterVoice"
	SynthesizeOptionsVoiceDeDeErikav3voiceConst     = "de-DE_ErikaV3Voice"
	SynthesizeOptionsVoiceEnAuCraigvoiceConst       = "en-AU-CraigVoice"
	SynthesizeOptionsVoiceEnAuMadisonvoiceConst     = "en-AU-MadisonVoice"
	SynthesizeOptionsVoiceEnGbCharlottev3voiceConst = "en-GB_CharlotteV3Voice"
	SynthesizeOptionsVoiceEnGbJamesv3voiceConst     = "en-GB_JamesV3Voice"
	SynthesizeOptionsVoiceEnGbKatev3voiceConst      = "en-GB_KateV3Voice"
	SynthesizeOptionsVoiceEnGbKatevoiceConst        = "en-GB_KateVoice"
	SynthesizeOptionsVoiceEnUsAllisonv3voiceConst   = "en-US_AllisonV3Voice"
	SynthesizeOptionsVoiceEnUsAllisonvoiceConst     = "en-US_AllisonVoice"
	SynthesizeOptionsVoiceEnUsEmilyv3voiceConst     = "en-US_EmilyV3Voice"
	SynthesizeOptionsVoiceEnUsHenryv3voiceConst     = "en-US_HenryV3Voice"
	SynthesizeOptionsVoiceEnUsKevinv3voiceConst     = "en-US_KevinV3Voice"
	SynthesizeOptionsVoiceEnUsLisav3voiceConst      = "en-US_LisaV3Voice"
	SynthesizeOptionsVoiceEnUsLisavoiceConst        = "en-US_LisaVoice"
	SynthesizeOptionsVoiceEnUsMichaelv3voiceConst   = "en-US_MichaelV3Voice"
	SynthesizeOptionsVoiceEnUsMichaelvoiceConst     = "en-US_MichaelVoice"
	SynthesizeOptionsVoiceEnUsOliviav3voiceConst    = "en-US_OliviaV3Voice"
	SynthesizeOptionsVoiceEsEsEnriquev3voiceConst   = "es-ES_EnriqueV3Voice"
	SynthesizeOptionsVoiceEsEsEnriquevoiceConst     = "es-ES_EnriqueVoice"
	SynthesizeOptionsVoiceEsEsLaurav3voiceConst     = "es-ES_LauraV3Voice"
	SynthesizeOptionsVoiceEsEsLauravoiceConst       = "es-ES_LauraVoice"
	SynthesizeOptionsVoiceEsLaSofiav3voiceConst     = "es-LA_SofiaV3Voice"
	SynthesizeOptionsVoiceEsLaSofiavoiceConst       = "es-LA_SofiaVoice"
	SynthesizeOptionsVoiceEsUsSofiav3voiceConst     = "es-US_SofiaV3Voice"
	SynthesizeOptionsVoiceEsUsSofiavoiceConst       = "es-US_SofiaVoice"
	SynthesizeOptionsVoiceFrCaLouisev3voiceConst    = "fr-CA_LouiseV3Voice"
	SynthesizeOptionsVoiceFrFrNicolasv3voiceConst   = "fr-FR_NicolasV3Voice"
	SynthesizeOptionsVoiceFrFrReneev3voiceConst     = "fr-FR_ReneeV3Voice"
	SynthesizeOptionsVoiceFrFrReneevoiceConst       = "fr-FR_ReneeVoice"
	SynthesizeOptionsVoiceItItFrancescav3voiceConst = "it-IT_FrancescaV3Voice"
	SynthesizeOptionsVoiceItItFrancescavoiceConst   = "it-IT_FrancescaVoice"
	SynthesizeOptionsVoiceJaJpEmiv3voiceConst       = "ja-JP_EmiV3Voice"
	SynthesizeOptionsVoiceJaJpEmivoiceConst         = "ja-JP_EmiVoice"
	SynthesizeOptionsVoiceKoKrHyunjunvoiceConst     = "ko-KR_HyunjunVoice"
	SynthesizeOptionsVoiceKoKrSiwoovoiceConst       = "ko-KR_SiWooVoice"
	SynthesizeOptionsVoiceKoKrYoungmivoiceConst     = "ko-KR_YoungmiVoice"
	SynthesizeOptionsVoiceKoKrYunavoiceConst        = "ko-KR_YunaVoice"
	SynthesizeOptionsVoiceNlBeAdelevoiceConst       = "nl-BE_AdeleVoice"
	SynthesizeOptionsVoiceNlNlEmmavoiceConst        = "nl-NL_EmmaVoice"
	SynthesizeOptionsVoiceNlNlLiamvoiceConst        = "nl-NL_LiamVoice"
	SynthesizeOptionsVoicePtBrIsabelav3voiceConst   = "pt-BR_IsabelaV3Voice"
	SynthesizeOptionsVoicePtBrIsabelavoiceConst     = "pt-BR_IsabelaVoice"
	SynthesizeOptionsVoiceZhCnLinavoiceConst        = "zh-CN_LiNaVoice"
	SynthesizeOptionsVoiceZhCnWangweivoiceConst     = "zh-CN_WangWeiVoice"
	SynthesizeOptionsVoiceZhCnZhangjingvoiceConst   = "zh-CN_ZhangJingVoice"
)

Constants associated with the SynthesizeOptions.Voice property. The voice to use for synthesis. For more information about specifying a voice, see **Important voice updates for IBM Cloud** in the method description.

**IBM Cloud:** The Arabic, Chinese, Dutch, Australian English, and Korean languages and voices are supported only for IBM Cloud.

**See also:** [Using languages and voices](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-voices).
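
For example, a voice constant can be passed on a synthesis request. The sketch below is illustrative only: it assumes an already-configured *TextToSpeechV1 client named `service`, the SynthesizeOptions `Voice` and `Accept` fields, and the Synthesize method, all of which are defined elsewhere in this package and not fully shown in this excerpt. In this and the later sketches, the usual imports (`fmt`, `log`, `os`, `encoding/json`, and the IBM Go SDK core package) are omitted.

options := &texttospeechv1.SynthesizeOptions{
	Text:   core.StringPtr("Hello from Watson Text to Speech."),
	Voice:  core.StringPtr(texttospeechv1.SynthesizeOptionsVoiceEnUsMichaelv3voiceConst),
	Accept: core.StringPtr("audio/wav"), // requested audio format (assumed field, not shown in this excerpt)
}
result, _, err := service.Synthesize(options) // result is an io.ReadCloser that streams the audio
if err != nil {
	log.Fatal(err)
}
defer result.Close()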

const (
	TranslationPartOfSpeechDosiConst = "Dosi"
	TranslationPartOfSpeechFukuConst = "Fuku"
	TranslationPartOfSpeechGobiConst = "Gobi"
	TranslationPartOfSpeechHokaConst = "Hoka"
	TranslationPartOfSpeechJodoConst = "Jodo"
	TranslationPartOfSpeechJosiConst = "Josi"
	TranslationPartOfSpeechKatoConst = "Kato"
	TranslationPartOfSpeechKedoConst = "Kedo"
	TranslationPartOfSpeechKeyoConst = "Keyo"
	TranslationPartOfSpeechKigoConst = "Kigo"
	TranslationPartOfSpeechKoyuConst = "Koyu"
	TranslationPartOfSpeechMesiConst = "Mesi"
	TranslationPartOfSpeechRetaConst = "Reta"
	TranslationPartOfSpeechStbiConst = "Stbi"
	TranslationPartOfSpeechSttoConst = "Stto"
	TranslationPartOfSpeechStzoConst = "Stzo"
	TranslationPartOfSpeechSujiConst = "Suji"
)

Constants associated with the Translation.PartOfSpeech property. **Japanese only.** The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-rules#jaNotes).

const (
	WordPartOfSpeechDosiConst = "Dosi"
	WordPartOfSpeechFukuConst = "Fuku"
	WordPartOfSpeechGobiConst = "Gobi"
	WordPartOfSpeechHokaConst = "Hoka"
	WordPartOfSpeechJodoConst = "Jodo"
	WordPartOfSpeechJosiConst = "Josi"
	WordPartOfSpeechKatoConst = "Kato"
	WordPartOfSpeechKedoConst = "Kedo"
	WordPartOfSpeechKeyoConst = "Keyo"
	WordPartOfSpeechKigoConst = "Kigo"
	WordPartOfSpeechKoyuConst = "Koyu"
	WordPartOfSpeechMesiConst = "Mesi"
	WordPartOfSpeechRetaConst = "Reta"
	WordPartOfSpeechStbiConst = "Stbi"
	WordPartOfSpeechSttoConst = "Stto"
	WordPartOfSpeechStzoConst = "Stzo"
	WordPartOfSpeechSujiConst = "Suji"
)

Constants associated with the Word.PartOfSpeech property. **Japanese only.** The part of speech for the word. The service uses the value to produce the correct intonation for the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot create multiple entries with different parts of speech for the same word. For more information, see [Working with Japanese entries](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-rules#jaNotes).

const DefaultServiceName = "text_to_speech"

DefaultServiceName is the default key used to find external configuration information.

const DefaultServiceURL = "https://api.us-south.text-to-speech.watson.cloud.ibm.com"

DefaultServiceURL is the default URL to make service requests to.
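
A minimal construction sketch follows. The TextToSpeechV1Options struct, the NewTextToSpeechV1 constructor, and the SetServiceURL method are defined elsewhere in this package and are assumed here, as are the import paths of the Watson Go SDK module and the IBM Go SDK core library.

package main

import (
	"log"

	"github.com/IBM/go-sdk-core/v5/core"
	"github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"
)

func main() {
	// Authenticate with IAM; the authenticator type comes from the core library.
	authenticator := &core.IamAuthenticator{ApiKey: "{apikey}"}

	// NewTextToSpeechV1 and TextToSpeechV1Options are not shown in this excerpt.
	// The service name defaults to DefaultServiceName ("text_to_speech") and the
	// endpoint to DefaultServiceURL unless overridden.
	service, err := texttospeechv1.NewTextToSpeechV1(&texttospeechv1.TextToSpeechV1Options{
		Authenticator: authenticator,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Point the client at a non-default regional endpoint if needed.
	service.SetServiceURL("https://api.eu-gb.text-to-speech.watson.cloud.ibm.com")
}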

const (
	SUCCESS = 200
)

Variables

This section is empty.

Functions

func GetServiceURLForRegion

func GetServiceURLForRegion(region string) (string, error)

GetServiceURLForRegion returns the service URL to be used for the specified region
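
For example, assuming `eu-gb` is one of the supported region keys:

url, err := texttospeechv1.GetServiceURLForRegion("eu-gb")
if err != nil {
	log.Println("unknown region:", err)
} else {
	fmt.Println(url)
}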

func UnmarshalCustomModel

func UnmarshalCustomModel(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalCustomModel unmarshals an instance of CustomModel from the specified map of raw messages.
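
The Unmarshal* helpers below are invoked internally by the SDK when decoding service responses, but they all follow one pattern that can also be used directly. The sketch below is illustrative only and uses a hand-written JSON payload; the result argument is a pointer to the destination pointer.

var raw map[string]json.RawMessage
if err := json.Unmarshal([]byte(`{"customization_id": "{guid}", "name": "demo model"}`), &raw); err != nil {
	log.Fatal(err)
}

var model *texttospeechv1.CustomModel
if err := texttospeechv1.UnmarshalCustomModel(raw, &model); err != nil {
	log.Fatal(err)
}
fmt.Println(*model.CustomizationID)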

func UnmarshalCustomModels

func UnmarshalCustomModels(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalCustomModels unmarshals an instance of CustomModels from the specified map of raw messages.

func UnmarshalPrompt added in v2.1.0

func UnmarshalPrompt(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalPrompt unmarshals an instance of Prompt from the specified map of raw messages.

func UnmarshalPromptMetadata added in v2.1.0

func UnmarshalPromptMetadata(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalPromptMetadata unmarshals an instance of PromptMetadata from the specified map of raw messages.

func UnmarshalPrompts added in v2.1.0

func UnmarshalPrompts(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalPrompts unmarshals an instance of Prompts from the specified map of raw messages.

func UnmarshalPronunciation

func UnmarshalPronunciation(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalPronunciation unmarshals an instance of Pronunciation from the specified map of raw messages.

func UnmarshalSpeaker added in v2.1.0

func UnmarshalSpeaker(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalSpeaker unmarshals an instance of Speaker from the specified map of raw messages.

func UnmarshalSpeakerCustomModel added in v2.1.0

func UnmarshalSpeakerCustomModel(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalSpeakerCustomModel unmarshals an instance of SpeakerCustomModel from the specified map of raw messages.

func UnmarshalSpeakerCustomModels added in v2.1.0

func UnmarshalSpeakerCustomModels(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalSpeakerCustomModels unmarshals an instance of SpeakerCustomModels from the specified map of raw messages.

func UnmarshalSpeakerModel added in v2.1.0

func UnmarshalSpeakerModel(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalSpeakerModel unmarshals an instance of SpeakerModel from the specified map of raw messages.

func UnmarshalSpeakerPrompt added in v2.1.0

func UnmarshalSpeakerPrompt(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalSpeakerPrompt unmarshals an instance of SpeakerPrompt from the specified map of raw messages.

func UnmarshalSpeakers added in v2.1.0

func UnmarshalSpeakers(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalSpeakers unmarshals an instance of Speakers from the specified map of raw messages.

func UnmarshalSupportedFeatures

func UnmarshalSupportedFeatures(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalSupportedFeatures unmarshals an instance of SupportedFeatures from the specified map of raw messages.

func UnmarshalTranslation

func UnmarshalTranslation(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalTranslation unmarshals an instance of Translation from the specified map of raw messages.

func UnmarshalVoice

func UnmarshalVoice(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalVoice unmarshals an instance of Voice from the specified map of raw messages.

func UnmarshalVoices

func UnmarshalVoices(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalVoices unmarshals an instance of Voices from the specified map of raw messages.

func UnmarshalWord

func UnmarshalWord(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalWord unmarshals an instance of Word from the specified map of raw messages.

func UnmarshalWords

func UnmarshalWords(m map[string]json.RawMessage, result interface{}) (err error)

UnmarshalWords unmarshals an instance of Words from the specified map of raw messages.

Types

type AddCustomPromptOptions added in v2.1.0

type AddCustomPromptOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// The identifier of the prompt that is to be added to the custom model:
	// * Include a maximum of 49 characters in the ID.
	// * Include only alphanumeric characters and `_` (underscores) in the ID.
	// * Do not include XML sensitive characters (double quotes, single quotes, ampersands, angle brackets, and slashes) in
	// the ID.
	// * To add a new prompt, the ID must be unique for the specified custom model. Otherwise, the new information for the
	// prompt overwrites the existing prompt that has that ID.
	PromptID *string `json:"-" validate:"required,ne="`

	// Information about the prompt that is to be added to a custom model. The following example of a `PromptMetadata`
	// object includes both the required prompt text and an optional speaker model ID:
	//
	// `{ "prompt_text": "Thank you and good-bye!", "speaker_id": "823068b2-ed4e-11ea-b6e0-7b6456aa95cc" }`.
	Metadata *PromptMetadata `json:"-" validate:"required"`

	// An audio file that speaks the text of the prompt with intonation and prosody that matches how you would like the
	// prompt to be spoken.
	// * The prompt audio must be in WAV format and must have a minimum sampling rate of 16 kHz. The service accepts audio
	// with higher sampling rates. The service transcodes all audio to 16 kHz before processing it.
	// * The length of the prompt audio is limited to 30 seconds.
	File io.ReadCloser `json:"-" validate:"required"`

	// Allows users to set headers on API requests
	Headers map[string]string
}

AddCustomPromptOptions : The AddCustomPrompt options.
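
A construction sketch using the setters documented below. The prompt audio is opened from a local WAV file (*os.File satisfies io.ReadCloser); the AddCustomPrompt service method that consumes these options is defined elsewhere in this package and is assumed here.

promptAudio, err := os.Open("thank_you.wav") // prompt audio: WAV, at least 16 kHz, at most 30 seconds
if err != nil {
	log.Fatal(err)
}

options := (&texttospeechv1.AddCustomPromptOptions{}).
	SetCustomizationID("{customization_id}").
	SetPromptID("goodbye").
	SetMetadata(&texttospeechv1.PromptMetadata{
		PromptText: core.StringPtr("Thank you and good-bye!"),
	}).
	SetFile(promptAudio)

prompt, _, err := service.AddCustomPrompt(options) // service method assumed, not shown in this excerpt
if err != nil {
	log.Fatal(err)
}
fmt.Println(*prompt.PromptID, *prompt.Status)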

func (*AddCustomPromptOptions) SetCustomizationID added in v2.1.0

func (_options *AddCustomPromptOptions) SetCustomizationID(customizationID string) *AddCustomPromptOptions

SetCustomizationID : Allow user to set CustomizationID

func (*AddCustomPromptOptions) SetFile added in v2.1.0

func (_options *AddCustomPromptOptions) SetFile(file io.ReadCloser) *AddCustomPromptOptions

SetFile : Allow user to set File

func (*AddCustomPromptOptions) SetHeaders added in v2.1.0

func (options *AddCustomPromptOptions) SetHeaders(param map[string]string) *AddCustomPromptOptions

SetHeaders : Allow user to set Headers

func (*AddCustomPromptOptions) SetMetadata added in v2.1.0

func (_options *AddCustomPromptOptions) SetMetadata(metadata *PromptMetadata) *AddCustomPromptOptions

SetMetadata : Allow user to set Metadata

func (*AddCustomPromptOptions) SetPromptID added in v2.1.0

func (_options *AddCustomPromptOptions) SetPromptID(promptID string) *AddCustomPromptOptions

SetPromptID : Allow user to set PromptID

type AddWordOptions

type AddWordOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// The word that is to be added or updated for the custom model.
	Word *string `json:"-" validate:"required,ne="`

	// The phonetic or sounds-like translation for the word. A phonetic translation is based on the SSML format for
	// representing the phonetic string of a word either as an IPA translation or as an IBM SPR translation. The Arabic,
	// Chinese, Dutch, Australian English, and Korean languages support only IPA. A sounds-like is one or more words that,
	// when combined, sound like the word.
	Translation *string `json:"translation" validate:"required"`

	// **Japanese only.** The part of speech for the word. The service uses the value to produce the correct intonation for
	// the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot
	// create multiple entries with different parts of speech for the same word. For more information, see [Working with
	// Japanese entries](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-rules#jaNotes).
	PartOfSpeech *string `json:"part_of_speech,omitempty"`

	// Allows users to set headers on API requests
	Headers map[string]string
}

AddWordOptions : The AddWord options.
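
A sketch that adds a Japanese word with one of the AddWordOptionsPartOfSpeech* constants, using the setters documented below; the AddWord service method is defined elsewhere in this package and is assumed here.

options := (&texttospeechv1.AddWordOptions{}).
	SetCustomizationID("{customization_id}").
	SetWord("IBM").
	SetTranslation("アイ・ビー・エム").
	SetPartOfSpeech(texttospeechv1.AddWordOptionsPartOfSpeechKoyuConst)

if _, err := service.AddWord(options); err != nil { // AddWord returns only a *core.DetailedResponse and an error (assumed)
	log.Fatal(err)
}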

func (*AddWordOptions) SetCustomizationID

func (_options *AddWordOptions) SetCustomizationID(customizationID string) *AddWordOptions

SetCustomizationID : Allow user to set CustomizationID

func (*AddWordOptions) SetHeaders

func (options *AddWordOptions) SetHeaders(param map[string]string) *AddWordOptions

SetHeaders : Allow user to set Headers

func (*AddWordOptions) SetPartOfSpeech

func (_options *AddWordOptions) SetPartOfSpeech(partOfSpeech string) *AddWordOptions

SetPartOfSpeech : Allow user to set PartOfSpeech

func (*AddWordOptions) SetTranslation

func (_options *AddWordOptions) SetTranslation(translation string) *AddWordOptions

SetTranslation : Allow user to set Translation

func (*AddWordOptions) SetWord

func (_options *AddWordOptions) SetWord(word string) *AddWordOptions

SetWord : Allow user to set Word

type AddWordsOptions

type AddWordsOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// The [Add custom words](#addwords) method accepts an array of `Word` objects. Each object provides a word that is to
	// be added or updated for the custom model and the word's translation.
	//
	// The [List custom words](#listwords) method returns an array of `Word` objects. Each object shows a word and its
	// translation from the custom model. The words are listed in alphabetical order, with uppercase letters listed before
	// lowercase letters. The array is empty if the custom model contains no words.
	Words []Word `json:"words" validate:"required"`

	// Allows users to set headers on API requests
	Headers map[string]string
}

AddWordsOptions : The AddWords options.
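
A sketch that adds several words in a single request. The Word struct with its Word and Translation fields, and the AddWords service method, are defined elsewhere in this package and are assumed here.

words := []texttospeechv1.Word{
	{Word: core.StringPtr("gnocchi"), Translation: core.StringPtr("nyohkees")},
	{Word: core.StringPtr("IEEE"), Translation: core.StringPtr("I triple E")},
}

options := (&texttospeechv1.AddWordsOptions{}).
	SetCustomizationID("{customization_id}").
	SetWords(words)

if _, err := service.AddWords(options); err != nil { // AddWords method assumed, not shown in this excerpt
	log.Fatal(err)
}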

func (*AddWordsOptions) SetCustomizationID

func (_options *AddWordsOptions) SetCustomizationID(customizationID string) *AddWordsOptions

SetCustomizationID : Allow user to set CustomizationID

func (*AddWordsOptions) SetHeaders

func (options *AddWordsOptions) SetHeaders(param map[string]string) *AddWordsOptions

SetHeaders : Allow user to set Headers

func (*AddWordsOptions) SetWords

func (_options *AddWordsOptions) SetWords(words []Word) *AddWordsOptions

SetWords : Allow user to set Words

type AudioContentTypeWrapper

type AudioContentTypeWrapper struct {
	BinaryStreams []struct {
		ContentType string `json:"content_type"`
	} `json:"binary_streams"`
}

AudioContentTypeWrapper : The service sends this message to confirm the audio format

type CreateCustomModelOptions

type CreateCustomModelOptions struct {
	// The name of the new custom model.
	Name *string `json:"name" validate:"required"`

	// The language of the new custom model. You create a custom model for a specific language, not for a specific voice. A
	// custom model can be used with any voice for its specified language. Omit the parameter to use the default
	// language, `en-US`. **Note:** The `ar-AR` language identifier cannot be used to create a custom model. Use the
	// `ar-MS` identifier instead.
	//
	// **IBM Cloud:** The Arabic, Chinese, Dutch, Australian English, and Korean languages and voices are supported only
	// for IBM Cloud.
	Language *string `json:"language,omitempty"`

	// A description of the new custom model. Specifying a description is recommended.
	Description *string `json:"description,omitempty"`

	// Allows users to set headers on API requests
	Headers map[string]string
}

CreateCustomModelOptions : The CreateCustomModel options.
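
A sketch that creates a US English custom model with the setters documented below and one of the CreateCustomModelOptionsLanguage* constants; the CreateCustomModel service method is defined elsewhere in this package and is assumed here.

options := (&texttospeechv1.CreateCustomModelOptions{}).
	SetName("First model").
	SetLanguage(texttospeechv1.CreateCustomModelOptionsLanguageEnUsConst).
	SetDescription("First custom model for US English")

model, _, err := service.CreateCustomModel(options) // service method assumed
if err != nil {
	log.Fatal(err)
}
fmt.Println(*model.CustomizationID) // the create call returns only the customization ID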

func (*CreateCustomModelOptions) SetDescription

func (_options *CreateCustomModelOptions) SetDescription(description string) *CreateCustomModelOptions

SetDescription : Allow user to set Description

func (*CreateCustomModelOptions) SetHeaders

func (options *CreateCustomModelOptions) SetHeaders(param map[string]string) *CreateCustomModelOptions

SetHeaders : Allow user to set Headers

func (*CreateCustomModelOptions) SetLanguage

func (_options *CreateCustomModelOptions) SetLanguage(language string) *CreateCustomModelOptions

SetLanguage : Allow user to set Language

func (*CreateCustomModelOptions) SetName

func (_options *CreateCustomModelOptions) SetName(name string) *CreateCustomModelOptions

SetName : Allow user to set Name

type CreateSpeakerModelOptions added in v2.1.0

type CreateSpeakerModelOptions struct {
	// The name of the speaker that is to be added to the service instance.
	// * Include a maximum of 49 characters in the name.
	// * Include only alphanumeric characters and `_` (underscores) in the name.
	// * Do not include XML sensitive characters (double quotes, single quotes, ampersands, angle brackets, and slashes) in
	// the name.
	// * Do not use the name of an existing speaker that is already defined for the service instance.
	SpeakerName *string `json:"-" validate:"required"`

	// An enrollment audio file that contains a sample of the speaker’s voice.
	// * The enrollment audio must be in WAV format and must have a minimum sampling rate of 16 kHz. The service accepts
	// audio with higher sampling rates. It transcodes all audio to 16 kHz before processing it.
	// * The length of the enrollment audio is limited to 1 minute. Speaking one or two paragraphs of text that include
	// five to ten sentences is recommended.
	Audio io.ReadCloser `json:"audio" validate:"required"`

	// Allows users to set headers on API requests
	Headers map[string]string
}

CreateSpeakerModelOptions : The CreateSpeakerModel options.
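
A sketch that enrolls a speaker from a local WAV sample (*os.File satisfies io.ReadCloser); the CreateSpeakerModel service method is defined elsewhere in this package and is assumed here.

audio, err := os.Open("speaker_sample.wav") // enrollment audio: WAV, at least 16 kHz, at most 1 minute
if err != nil {
	log.Fatal(err)
}
defer audio.Close()

options := (&texttospeechv1.CreateSpeakerModelOptions{}).
	SetSpeakerName("speaker_one").
	SetAudio(audio)

speakerModel, _, err := service.CreateSpeakerModel(options) // service method assumed
if err != nil {
	log.Fatal(err)
}
fmt.Println(*speakerModel.SpeakerID)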

func (*CreateSpeakerModelOptions) SetAudio added in v2.1.0

func (_options *CreateSpeakerModelOptions) SetAudio(audio io.ReadCloser) *CreateSpeakerModelOptions

SetAudio : Allow user to set Audio

func (*CreateSpeakerModelOptions) SetHeaders added in v2.1.0

func (options *CreateSpeakerModelOptions) SetHeaders(param map[string]string) *CreateSpeakerModelOptions

SetHeaders : Allow user to set Headers

func (*CreateSpeakerModelOptions) SetSpeakerName added in v2.1.0

func (_options *CreateSpeakerModelOptions) SetSpeakerName(speakerName string) *CreateSpeakerModelOptions

SetSpeakerName : Allow user to set SpeakerName

type CustomModel

type CustomModel struct {
	// The customization ID (GUID) of the custom model. The [Create a custom model](#createcustommodel) method returns only
	// this field. It does not return the other fields of this object.
	CustomizationID *string `json:"customization_id" validate:"required"`

	// The name of the custom model.
	Name *string `json:"name,omitempty"`

	// The language identifier of the custom model (for example, `en-US`).
	Language *string `json:"language,omitempty"`

	// The GUID of the credentials for the instance of the service that owns the custom model.
	Owner *string `json:"owner,omitempty"`

	// The date and time in Coordinated Universal Time (UTC) at which the custom model was created. The value is provided
	// in full ISO 8601 format (`YYYY-MM-DDThh:mm:ss.sTZD`).
	Created *string `json:"created,omitempty"`

	// The date and time in Coordinated Universal Time (UTC) at which the custom model was last modified. The `created` and
	// `updated` fields are equal when a model is first added but has yet to be updated. The value is provided in full ISO
	// 8601 format (`YYYY-MM-DDThh:mm:ss.sTZD`).
	LastModified *string `json:"last_modified,omitempty"`

	// The description of the custom model.
	Description *string `json:"description,omitempty"`

	// An array of `Word` objects that lists the words and their translations from the custom model. The words are listed
	// in alphabetical order, with uppercase letters listed before lowercase letters. The array is empty if no words are
	// defined for the custom model. This field is returned only by the [Get a custom model](#getcustommodel) method.
	Words []Word `json:"words,omitempty"`

	// An array of `Prompt` objects that provides information about the prompts that are defined for the specified custom
	// model. The array is empty if no prompts are defined for the custom model. This field is returned only by the [Get a
	// custom model](#getcustommodel) method.
	Prompts []Prompt `json:"prompts,omitempty"`
}

CustomModel : Information about an existing custom model.

type CustomModels

type CustomModels struct {
	// An array of `CustomModel` objects that provides information about each available custom model. The array is empty if
	// the requesting credentials own no custom models (if no language is specified) or own no custom models for the
	// specified language.
	Customizations []CustomModel `json:"customizations" validate:"required"`
}

CustomModels : Information about existing custom models.

type DeleteCustomModelOptions

type DeleteCustomModelOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// Allows users to set headers on API requests
	Headers map[string]string
}

DeleteCustomModelOptions : The DeleteCustomModel options.

func (*DeleteCustomModelOptions) SetCustomizationID

func (_options *DeleteCustomModelOptions) SetCustomizationID(customizationID string) *DeleteCustomModelOptions

SetCustomizationID : Allow user to set CustomizationID

func (*DeleteCustomModelOptions) SetHeaders

func (options *DeleteCustomModelOptions) SetHeaders(param map[string]string) *DeleteCustomModelOptions

SetHeaders : Allow user to set Headers

type DeleteCustomPromptOptions added in v2.1.0

type DeleteCustomPromptOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// The identifier (name) of the prompt that is to be deleted.
	PromptID *string `json:"-" validate:"required,ne="`

	// Allows users to set headers on API requests
	Headers map[string]string
}

DeleteCustomPromptOptions : The DeleteCustomPrompt options.

func (*DeleteCustomPromptOptions) SetCustomizationID added in v2.1.0

func (_options *DeleteCustomPromptOptions) SetCustomizationID(customizationID string) *DeleteCustomPromptOptions

SetCustomizationID : Allow user to set CustomizationID

func (*DeleteCustomPromptOptions) SetHeaders added in v2.1.0

func (options *DeleteCustomPromptOptions) SetHeaders(param map[string]string) *DeleteCustomPromptOptions

SetHeaders : Allow user to set Headers

func (*DeleteCustomPromptOptions) SetPromptID added in v2.1.0

func (_options *DeleteCustomPromptOptions) SetPromptID(promptID string) *DeleteCustomPromptOptions

SetPromptID : Allow user to set PromptID

type DeleteSpeakerModelOptions added in v2.1.0

type DeleteSpeakerModelOptions struct {
	// The speaker ID (GUID) of the speaker model. You must make the request with service credentials for the instance of
	// the service that owns the speaker model.
	SpeakerID *string `json:"-" validate:"required,ne="`

	// Allows users to set headers on API requests
	Headers map[string]string
}

DeleteSpeakerModelOptions : The DeleteSpeakerModel options.

func (*DeleteSpeakerModelOptions) SetHeaders added in v2.1.0

func (options *DeleteSpeakerModelOptions) SetHeaders(param map[string]string) *DeleteSpeakerModelOptions

SetHeaders : Allow user to set Headers

func (*DeleteSpeakerModelOptions) SetSpeakerID added in v2.1.0

func (_options *DeleteSpeakerModelOptions) SetSpeakerID(speakerID string) *DeleteSpeakerModelOptions

SetSpeakerID : Allow user to set SpeakerID

type DeleteUserDataOptions

type DeleteUserDataOptions struct {
	// The customer ID for which all data is to be deleted.
	CustomerID *string `json:"-" validate:"required"`

	// Allows users to set headers on API requests
	Headers map[string]string
}

DeleteUserDataOptions : The DeleteUserData options.

func (*DeleteUserDataOptions) SetCustomerID

func (_options *DeleteUserDataOptions) SetCustomerID(customerID string) *DeleteUserDataOptions

SetCustomerID : Allow user to set CustomerID

func (*DeleteUserDataOptions) SetHeaders

func (options *DeleteUserDataOptions) SetHeaders(param map[string]string) *DeleteUserDataOptions

SetHeaders : Allow user to set Headers

type DeleteWordOptions

type DeleteWordOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// The word that is to be deleted from the custom model.
	Word *string `json:"-" validate:"required,ne="`

	// Allows users to set headers on API requests
	Headers map[string]string
}

DeleteWordOptions : The DeleteWord options.

func (*DeleteWordOptions) SetCustomizationID

func (_options *DeleteWordOptions) SetCustomizationID(customizationID string) *DeleteWordOptions

SetCustomizationID : Allow user to set CustomizationID

func (*DeleteWordOptions) SetHeaders

func (options *DeleteWordOptions) SetHeaders(param map[string]string) *DeleteWordOptions

SetHeaders : Allow user to set Headers

func (*DeleteWordOptions) SetWord

func (_options *DeleteWordOptions) SetWord(word string) *DeleteWordOptions

SetWord : Allow user to set Word

type GetCustomModelOptions

type GetCustomModelOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// Allows users to set headers on API requests
	Headers map[string]string
}

GetCustomModelOptions : The GetCustomModel options.

func (*GetCustomModelOptions) SetCustomizationID

func (_options *GetCustomModelOptions) SetCustomizationID(customizationID string) *GetCustomModelOptions

SetCustomizationID : Allow user to set CustomizationID

func (*GetCustomModelOptions) SetHeaders

func (options *GetCustomModelOptions) SetHeaders(param map[string]string) *GetCustomModelOptions

SetHeaders : Allow user to set Headers

type GetCustomPromptOptions added in v2.1.0

type GetCustomPromptOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// The identifier (name) of the prompt.
	PromptID *string `json:"-" validate:"required,ne="`

	// Allows users to set headers on API requests
	Headers map[string]string
}

GetCustomPromptOptions : The GetCustomPrompt options.

func (*GetCustomPromptOptions) SetCustomizationID added in v2.1.0

func (_options *GetCustomPromptOptions) SetCustomizationID(customizationID string) *GetCustomPromptOptions

SetCustomizationID : Allow user to set CustomizationID

func (*GetCustomPromptOptions) SetHeaders added in v2.1.0

func (options *GetCustomPromptOptions) SetHeaders(param map[string]string) *GetCustomPromptOptions

SetHeaders : Allow user to set Headers

func (*GetCustomPromptOptions) SetPromptID added in v2.1.0

func (_options *GetCustomPromptOptions) SetPromptID(promptID string) *GetCustomPromptOptions

SetPromptID : Allow user to set PromptID

type GetPronunciationOptions

type GetPronunciationOptions struct {
	// The word for which the pronunciation is requested.
	Text *string `json:"-" validate:"required"`

	// A voice that specifies the language in which the pronunciation is to be returned. All voices for the same language
	// (for example, `en-US`) return the same translation. For more information about specifying a voice, see **Important
	// voice updates for IBM Cloud** in the method description.
	//
	// **IBM Cloud:** The Arabic, Chinese, Dutch, Australian English, and Korean languages and voices are supported only
	// for IBM Cloud.
	Voice *string `json:"-"`

	// The phoneme format in which to return the pronunciation. The Arabic, Chinese, Dutch, Australian English, and Korean
	// languages support only IPA. Omit the parameter to obtain the pronunciation in the default format.
	Format *string `json:"-"`

	// The customization ID (GUID) of a custom model for which the pronunciation is to be returned. The language of a
	// specified custom model must match the language of the specified voice. If the word is not defined in the specified
	// custom model, the service returns the default translation for the custom model's language. You must make the request
	// with credentials for the instance of the service that owns the custom model. Omit the parameter to see the
	// translation for the specified voice with no customization.
	CustomizationID *string `json:"-"`

	// Allows users to set headers on API requests
	Headers map[string]string
}

GetPronunciationOptions : The GetPronunciation options.
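
A sketch that requests an IPA pronunciation for a word in a specific voice, using the GetPronunciationOptionsVoice* and GetPronunciationOptionsFormat* constants and the setters documented below; the GetPronunciation service method is defined elsewhere in this package and is assumed here.

options := (&texttospeechv1.GetPronunciationOptions{}).
	SetText("apple").
	SetVoice(texttospeechv1.GetPronunciationOptionsVoiceEnUsAllisonv3voiceConst).
	SetFormat(texttospeechv1.GetPronunciationOptionsFormatIpaConst)

pronunciation, _, err := service.GetPronunciation(options) // service method assumed
if err != nil {
	log.Fatal(err)
}
fmt.Println(*pronunciation.Pronunciation)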

func (*GetPronunciationOptions) SetCustomizationID

func (_options *GetPronunciationOptions) SetCustomizationID(customizationID string) *GetPronunciationOptions

SetCustomizationID : Allow user to set CustomizationID

func (*GetPronunciationOptions) SetFormat

func (_options *GetPronunciationOptions) SetFormat(format string) *GetPronunciationOptions

SetFormat : Allow user to set Format

func (*GetPronunciationOptions) SetHeaders

func (options *GetPronunciationOptions) SetHeaders(param map[string]string) *GetPronunciationOptions

SetHeaders : Allow user to set Headers

func (*GetPronunciationOptions) SetText

func (_options *GetPronunciationOptions) SetText(text string) *GetPronunciationOptions

SetText : Allow user to set Text

func (*GetPronunciationOptions) SetVoice

func (_options *GetPronunciationOptions) SetVoice(voice string) *GetPronunciationOptions

SetVoice : Allow user to set Voice

type GetSpeakerModelOptions added in v2.1.0

type GetSpeakerModelOptions struct {
	// The speaker ID (GUID) of the speaker model. You must make the request with service credentials for the instance of
	// the service that owns the speaker model.
	SpeakerID *string `json:"-" validate:"required,ne="`

	// Allows users to set headers on API requests
	Headers map[string]string
}

GetSpeakerModelOptions : The GetSpeakerModel options.

func (*GetSpeakerModelOptions) SetHeaders added in v2.1.0

func (options *GetSpeakerModelOptions) SetHeaders(param map[string]string) *GetSpeakerModelOptions

SetHeaders : Allow user to set Headers

func (*GetSpeakerModelOptions) SetSpeakerID added in v2.1.0

func (_options *GetSpeakerModelOptions) SetSpeakerID(speakerID string) *GetSpeakerModelOptions

SetSpeakerID : Allow user to set SpeakerID

type GetVoiceOptions

type GetVoiceOptions struct {
	// The voice for which information is to be returned. For more information about specifying a voice, see **Important
	// voice updates for IBM Cloud** in the method description.
	//
	// **IBM Cloud:** The Arabic, Chinese, Dutch, Australian English, and Korean languages and voices are supported only
	// for IBM Cloud.
	Voice *string `json:"-" validate:"required,ne="`

	// The customization ID (GUID) of a custom model for which information is to be returned. You must make the request
	// with credentials for the instance of the service that owns the custom model. Omit the parameter to see information
	// about the specified voice with no customization.
	CustomizationID *string `json:"-"`

	// Allows users to set headers on API requests
	Headers map[string]string
}

GetVoiceOptions : The GetVoice options.

func (*GetVoiceOptions) SetCustomizationID

func (_options *GetVoiceOptions) SetCustomizationID(customizationID string) *GetVoiceOptions

SetCustomizationID : Allow user to set CustomizationID

func (*GetVoiceOptions) SetHeaders

func (options *GetVoiceOptions) SetHeaders(param map[string]string) *GetVoiceOptions

SetHeaders : Allow user to set Headers

func (*GetVoiceOptions) SetVoice

func (_options *GetVoiceOptions) SetVoice(voice string) *GetVoiceOptions

SetVoice : Allow user to set Voice

type GetWordOptions

type GetWordOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// The word that is to be queried from the custom model.
	Word *string `json:"-" validate:"required,ne="`

	// Allows users to set headers on API requests
	Headers map[string]string
}

GetWordOptions : The GetWord options.

func (*GetWordOptions) SetCustomizationID

func (_options *GetWordOptions) SetCustomizationID(customizationID string) *GetWordOptions

SetCustomizationID : Allow user to set CustomizationID

func (*GetWordOptions) SetHeaders

func (options *GetWordOptions) SetHeaders(param map[string]string) *GetWordOptions

SetHeaders : Allow user to set Headers

func (*GetWordOptions) SetWord

func (_options *GetWordOptions) SetWord(word string) *GetWordOptions

SetWord : Allow user to set Word

type ListCustomModelsOptions

type ListCustomModelsOptions struct {
	// The language for which custom models that are owned by the requesting credentials are to be returned. Omit the
	// parameter to see all custom models that are owned by the requester.
	Language *string `json:"-"`

	// Allows users to set headers on API requests
	Headers map[string]string
}

ListCustomModelsOptions : The ListCustomModels options.
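
A sketch that lists the requester's US English custom models; the ListCustomModels service method is defined elsewhere in this package and is assumed here.

options := (&texttospeechv1.ListCustomModelsOptions{}).
	SetLanguage(texttospeechv1.ListCustomModelsOptionsLanguageEnUsConst)

customModels, _, err := service.ListCustomModels(options) // service method assumed
if err != nil {
	log.Fatal(err)
}
for _, model := range customModels.Customizations {
	fmt.Println(*model.CustomizationID)
}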

func (*ListCustomModelsOptions) SetHeaders

func (options *ListCustomModelsOptions) SetHeaders(param map[string]string) *ListCustomModelsOptions

SetHeaders : Allow user to set Headers

func (*ListCustomModelsOptions) SetLanguage

func (_options *ListCustomModelsOptions) SetLanguage(language string) *ListCustomModelsOptions

SetLanguage : Allow user to set Language

type ListCustomPromptsOptions added in v2.1.0

type ListCustomPromptsOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// Allows users to set headers on API requests
	Headers map[string]string
}

ListCustomPromptsOptions : The ListCustomPrompts options.

func (*ListCustomPromptsOptions) SetCustomizationID added in v2.1.0

func (_options *ListCustomPromptsOptions) SetCustomizationID(customizationID string) *ListCustomPromptsOptions

SetCustomizationID : Allow user to set CustomizationID

func (*ListCustomPromptsOptions) SetHeaders added in v2.1.0

func (options *ListCustomPromptsOptions) SetHeaders(param map[string]string) *ListCustomPromptsOptions

SetHeaders : Allow user to set Headers

type ListSpeakerModelsOptions added in v2.1.0

type ListSpeakerModelsOptions struct {

	// Allows users to set headers on API requests
	Headers map[string]string
}

ListSpeakerModelsOptions : The ListSpeakerModels options.

func (*ListSpeakerModelsOptions) SetHeaders added in v2.1.0

func (options *ListSpeakerModelsOptions) SetHeaders(param map[string]string) *ListSpeakerModelsOptions

SetHeaders : Allow user to set Headers

type ListVoicesOptions

type ListVoicesOptions struct {

	// Allows users to set headers on API requests
	Headers map[string]string
}

ListVoicesOptions : The ListVoices options.

func (*ListVoicesOptions) SetHeaders

func (options *ListVoicesOptions) SetHeaders(param map[string]string) *ListVoicesOptions

SetHeaders : Allow user to set Headers

type ListWordsOptions

type ListWordsOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// Allows users to set headers on API requests
	Headers map[string]string
}

ListWordsOptions : The ListWords options.

func (*ListWordsOptions) SetCustomizationID

func (_options *ListWordsOptions) SetCustomizationID(customizationID string) *ListWordsOptions

SetCustomizationID : Allow user to set CustomizationID

func (*ListWordsOptions) SetHeaders

func (options *ListWordsOptions) SetHeaders(param map[string]string) *ListWordsOptions

SetHeaders : Allow user to set Headers

type Marks

type Marks struct {
	Marks [][]interface{} `json:"marks"`
}

Marks : An array of mark times.

type Prompt added in v2.1.0

type Prompt struct {
	// The user-specified text of the prompt.
	Prompt *string `json:"prompt" validate:"required"`

	// The user-specified identifier (name) of the prompt.
	PromptID *string `json:"prompt_id" validate:"required"`

	// The status of the prompt:
	// * `processing`: The service received the request to add the prompt and is analyzing the validity of the prompt.
	// * `available`: The service successfully validated the prompt, which is now ready for use in a speech synthesis
	// request.
	// * `failed`: The service's validation of the prompt failed. The status of the prompt includes an `error` field that
	// describes the reason for the failure.
	Status *string `json:"status" validate:"required"`

	// If the status of the prompt is `failed`, an error message that describes the reason for the failure. The field is
	// omitted if no error occurred.
	Error *string `json:"error,omitempty"`

	// The speaker ID (GUID) of the speaker for which the prompt was defined. The field is omitted if no speaker ID was
	// specified.
	SpeakerID *string `json:"speaker_id,omitempty"`
}

Prompt : Information about a custom prompt.

type PromptMetadata added in v2.1.0

type PromptMetadata struct {
	// The required written text of the spoken prompt. The length of a prompt's text is limited to a few sentences.
	// Speaking one or two sentences of text is the recommended limit. A prompt cannot contain more than 1000 characters of
	// text. Escape any XML control characters (double quotes, single quotes, ampersands, angle brackets, and slashes) that
	// appear in the text of the prompt.
	PromptText *string `json:"prompt_text" validate:"required"`

	// The optional speaker ID (GUID) of a previously defined speaker model that is to be associated with the prompt.
	SpeakerID *string `json:"speaker_id,omitempty"`
}

PromptMetadata : Information about the prompt that is to be added to a custom model. The following example of a `PromptMetadata` object includes both the required prompt text and an optional speaker model ID:

`{ "prompt_text": "Thank you and good-bye!", "speaker_id": "823068b2-ed4e-11ea-b6e0-7b6456aa95cc" }`.

type Prompts added in v2.1.0

type Prompts struct {
	// An array of `Prompt` objects that provides information about the prompts that are defined for the specified custom
	// model. The array is empty if no prompts are defined for the custom model.
	Prompts []Prompt `json:"prompts" validate:"required"`
}

Prompts : Information about the custom prompts that are defined for a custom model.

type Pronunciation

type Pronunciation struct {
	// The pronunciation of the specified text in the requested voice and format. If a custom model is specified, the
	// pronunciation also reflects that custom model.
	Pronunciation *string `json:"pronunciation" validate:"required"`
}

Pronunciation : The pronunciation of the specified text.

type Speaker added in v2.1.0

type Speaker struct {
	// The speaker ID (GUID) of the speaker.
	SpeakerID *string `json:"speaker_id" validate:"required"`

	// The user-defined name of the speaker.
	Name *string `json:"name" validate:"required"`
}

Speaker : Information about a speaker model.

type SpeakerCustomModel added in v2.1.0

type SpeakerCustomModel struct {
	// The customization ID (GUID) of a custom model for which the speaker has defined one or more prompts.
	CustomizationID *string `json:"customization_id" validate:"required"`

	// An array of `SpeakerPrompt` objects that provides information about each prompt that the user has defined for the
	// custom model.
	Prompts []SpeakerPrompt `json:"prompts" validate:"required"`
}

SpeakerCustomModel : A custom model for which the speaker has defined prompts.

type SpeakerCustomModels added in v2.1.0

type SpeakerCustomModels struct {
	// An array of `SpeakerCustomModel` objects. Each object provides information about the prompts that are defined for a
	// specified speaker in the custom models that are owned by a specified service instance. The array is empty if no
	// prompts are defined for the speaker.
	Customizations []SpeakerCustomModel `json:"customizations" validate:"required"`
}

SpeakerCustomModels : Custom models for which the speaker has defined prompts.

type SpeakerModel added in v2.1.0

type SpeakerModel struct {
	// The speaker ID (GUID) of the speaker model.
	SpeakerID *string `json:"speaker_id" validate:"required"`
}

SpeakerModel : The speaker ID of the speaker model.

type SpeakerPrompt added in v2.1.0

type SpeakerPrompt struct {
	// The user-specified text of the prompt.
	Prompt *string `json:"prompt" validate:"required"`

	// The user-specified identifier (name) of the prompt.
	PromptID *string `json:"prompt_id" validate:"required"`

	// The status of the prompt:
	// * `processing`: The service received the request to add the prompt and is analyzing the validity of the prompt.
	// * `available`: The service successfully validated the prompt, which is now ready for use in a speech synthesis
	// request.
	// * `failed`: The service's validation of the prompt failed. The status of the prompt includes an `error` field that
	// describes the reason for the failure.
	Status *string `json:"status" validate:"required"`

	// If the status of the prompt is `failed`, an error message that describes the reason for the failure. The field is
	// omitted if no error occurred.
	Error *string `json:"error,omitempty"`
}

SpeakerPrompt : A prompt that a speaker has defined for a custom model.

type Speakers added in v2.1.0

type Speakers struct {
	// An array of `Speaker` objects that provides information about the speakers for the service instance. The array is
	// empty if the service instance has no speakers.
	Speakers []Speaker `json:"speakers" validate:"required"`
}

Speakers : Information about all speaker models for the service instance.

type SupportedFeatures

type SupportedFeatures struct {
	// If `true`, the voice can be customized; if `false`, the voice cannot be customized. (Same as `customizable`.)
	CustomPronunciation *bool `json:"custom_pronunciation" validate:"required"`

	// If `true`, the voice can be transformed by using the SSML <voice-transformation> element; if `false`, the
	// voice cannot be transformed. The feature was available only for the now-deprecated standard voices. You cannot use
	// the feature with neural voices.
	VoiceTransformation *bool `json:"voice_transformation" validate:"required"`
}

SupportedFeatures : Additional service features that are supported with the voice.

type SynthesizeCallbackWrapper

type SynthesizeCallbackWrapper interface {
	OnOpen()
	OnError(error)
	OnContentType(string)
	OnTimingInformation(Timings)
	OnMarks(Marks)
	OnAudioStream([]byte)
	OnData(*core.DetailedResponse)
	OnClose()
}

SynthesizeCallbackWrapper : Callback interface for synthesizing speech over a WebSocket connection.
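
A minimal sketch of a concrete type that satisfies this interface, buffering the streamed audio and logging the other events; the type name `synthesizeCallback` is illustrative, and the import paths assume the v2 module layout of this SDK and the IBM Go SDK core library:

package examples

import (
	"bytes"
	"fmt"

	"github.com/IBM/go-sdk-core/v5/core"
	"github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"
)

// synthesizeCallback collects streamed audio and logs the other WebSocket events.
type synthesizeCallback struct {
	audio bytes.Buffer
}

func (cb *synthesizeCallback) OnOpen()                 { fmt.Println("connection opened") }
func (cb *synthesizeCallback) OnClose()                { fmt.Println("connection closed") }
func (cb *synthesizeCallback) OnError(err error)       { fmt.Println("error:", err) }
func (cb *synthesizeCallback) OnContentType(ct string) { fmt.Println("content type:", ct) }

func (cb *synthesizeCallback) OnTimingInformation(t texttospeechv1.Timings) {
	fmt.Println("word timings:", t.Words)
}

func (cb *synthesizeCallback) OnMarks(m texttospeechv1.Marks) {
	fmt.Println("marks received")
}

func (cb *synthesizeCallback) OnAudioStream(chunk []byte) {
	cb.audio.Write(chunk) // accumulate the synthesized audio
}

func (cb *synthesizeCallback) OnData(resp *core.DetailedResponse) {
	fmt.Println("response status:", resp.StatusCode)
}

A pointer to this type can then be supplied as the callback when building SynthesizeUsingWebsocketOptions (see below).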

type SynthesizeListener

type SynthesizeListener struct {
	IsClosed chan bool
	Callback SynthesizeCallbackWrapper
}

func (SynthesizeListener) OnClose

func (listener SynthesizeListener) OnClose()

OnClose: Callback when websocket connection is closed

func (SynthesizeListener) OnData

func (listener SynthesizeListener) OnData(conn *websocket.Conn)

OnData: Callback when websocket connection receives data

func (SynthesizeListener) OnError

func (listener SynthesizeListener) OnError(err error)

OnError: Callback when error encountered

func (SynthesizeListener) OnOpen

func (listener SynthesizeListener) OnOpen(conn *websocket.Conn)

OnOpen: Sends start message to server when connection created

func (SynthesizeListener) SendText

func (listener SynthesizeListener) SendText(conn *websocket.Conn, req *http.Request)

SendText: Sends the text message. Note: The service handles one request per connection.

type SynthesizeOptions

type SynthesizeOptions struct {
	// The text to synthesize.
	Text *string `json:"text" validate:"required"`

	// The requested format (MIME type) of the audio. You can use the `Accept` header or the `accept` parameter to specify
	// the audio format. For more information about specifying an audio format, see **Audio formats (accept types)** in the
	// method description.
	Accept *string `json:"-"`

	// The voice to use for synthesis. For more information about specifying a voice, see **Important voice updates for IBM
	// Cloud** in the method description.
	//
	// **IBM Cloud:** The Arabic, Chinese, Dutch, Australian English, and Korean languages and voices are supported only
	// for IBM Cloud.
	//
	// **See also:** [Using languages and
	// voices](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-voices).
	Voice *string `json:"-"`

	// The customization ID (GUID) of a custom model to use for the synthesis. If a custom model is specified, it works
	// only if it matches the language of the indicated voice. You must make the request with credentials for the instance
	// of the service that owns the custom model. Omit the parameter to use the specified voice with no customization.
	CustomizationID *string `json:"-"`

	// Allows users to set headers on API requests
	Headers map[string]string
}

SynthesizeOptions : The Synthesize options.

func (*SynthesizeOptions) SetAccept

func (_options *SynthesizeOptions) SetAccept(accept string) *SynthesizeOptions

SetAccept : Allow user to set Accept

func (*SynthesizeOptions) SetCustomizationID

func (_options *SynthesizeOptions) SetCustomizationID(customizationID string) *SynthesizeOptions

SetCustomizationID : Allow user to set CustomizationID

func (*SynthesizeOptions) SetHeaders

func (options *SynthesizeOptions) SetHeaders(param map[string]string) *SynthesizeOptions

SetHeaders : Allow user to set Headers

func (*SynthesizeOptions) SetText

func (_options *SynthesizeOptions) SetText(text string) *SynthesizeOptions

SetText : Allow user to set Text

func (*SynthesizeOptions) SetVoice

func (_options *SynthesizeOptions) SetVoice(voice string) *SynthesizeOptions

SetVoice : Allow user to set Voice
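
Each setter returns the options value, so the calls can be chained. A brief sketch, assuming a `textToSpeech` client built as shown under NewTextToSpeechV1 below; the customization ID is a hypothetical placeholder:

package examples

import "github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"

func buildSynthesizeOptions(textToSpeech *texttospeechv1.TextToSpeechV1) *texttospeechv1.SynthesizeOptions {
	// Chain the setters to build a request for the Synthesize method.
	return textToSpeech.NewSynthesizeOptions("Hello world").
		SetVoice("en-US_MichaelV3Voice").
		SetAccept("audio/wav").
		SetCustomizationID("64f4807f-a5f1-5867-924f-7bba1a84fe97") // placeholder custom model GUID
}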

type SynthesizeUsingWebsocketOptions

type SynthesizeUsingWebsocketOptions struct {
	SynthesizeOptions

	// Callback to listen to events
	Callback SynthesizeCallbackWrapper `json:"callback" validate:"required"`

	// Timings specifies that the service is to return word timing information for all strings of the input text. The
	// service returns the start and end time of each string of the input. Specify `words` as the lone element of the
	// array to request word timings. Specify an empty array or omit the parameter to receive no word timings. For more
	// information, see [Obtaining word timings](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-timing#timing).
	// Not supported for Japanese input text.
	Timings []string `json:"action,omitempty"`
}

SynthesizeUsingWebsocketOptions : The SynthesizeUsingWebsocket options.

func (*SynthesizeUsingWebsocketOptions) SetCallback

SetCallback: Allows user to set the Callback

func (*SynthesizeUsingWebsocketOptions) SetTimings

SetTimings: Allows user to set the Timings
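
A hedged end-to-end sketch of a WebSocket synthesis request that asks for word timings. It reuses the `synthesizeCallback` type sketched under SynthesizeCallbackWrapper above and assumes that SetTimings accepts the slice of timing strings:

package examples

import "github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"

func synthesizeOverWebsocket(textToSpeech *texttospeechv1.TextToSpeechV1) error {
	callback := &synthesizeCallback{} // implements SynthesizeCallbackWrapper (see the earlier sketch)

	options := textToSpeech.NewSynthesizeUsingWebsocketOptions("Hello from the WebSocket interface.", callback)
	options.SetAccept("audio/ogg;codecs=opus") // setters promoted from the embedded SynthesizeOptions
	options.SetVoice("en-US_AllisonV3Voice")
	options.SetTimings([]string{"words"}) // request word timings (not supported for Japanese input)

	// Audio chunks, timings, and marks are delivered through the callback methods.
	return textToSpeech.SynthesizeUsingWebsocket(options)
}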

type TextToSpeechV1

type TextToSpeechV1 struct {
	Service *core.BaseService
}

TextToSpeechV1 : The IBM Watson™ Text to Speech service provides APIs that use IBM's speech-synthesis capabilities to synthesize text into natural-sounding speech in a variety of languages, dialects, and voices. The service supports at least one male or female voice, sometimes both, for each language. The audio is streamed back to the client with minimal delay.

For speech synthesis, the service supports a synchronous HTTP Representational State Transfer (REST) interface and a WebSocket interface. Both interfaces support plain text and SSML input. SSML is an XML-based markup language that provides text annotation for speech-synthesis applications. The WebSocket interface also supports the SSML <code>&lt;mark&gt;</code> element and word timings.

The service offers a customization interface that you can use to define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. A phonetic translation is based on the SSML phoneme format for representing a word. You can specify a phonetic translation in standard International Phonetic Alphabet (IPA) representation or in the proprietary IBM Symbolic Phonetic Representation (SPR).

The service also offers a Tune by Example feature that lets you define custom prompts. You can also define speaker models to improve the quality of your custom prompts. The service supports custom prompts only for US English custom models and voices.

**IBM Cloud®.** The Arabic, Chinese, Dutch, Australian English, and Korean languages and voices are supported only for IBM Cloud. For phonetic translation, they support only IPA, not SPR.

API Version: 1.0.0

See: https://cloud.ibm.com/docs/text-to-speech

func NewTextToSpeechV1

func NewTextToSpeechV1(options *TextToSpeechV1Options) (service *TextToSpeechV1, err error)

NewTextToSpeechV1 : constructs an instance of TextToSpeechV1 with passed in options.
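
A minimal construction sketch using IAM authentication; the API key and service URL are placeholders, and the import paths assume the v2 module layout of this SDK and the IBM Go SDK core library:

package main

import (
	"log"

	"github.com/IBM/go-sdk-core/v5/core"
	"github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"
)

func main() {
	// IAM authentication with an API key (placeholder value).
	authenticator := &core.IamAuthenticator{ApiKey: "YOUR_IAM_APIKEY"}

	textToSpeech, err := texttospeechv1.NewTextToSpeechV1(&texttospeechv1.TextToSpeechV1Options{
		Authenticator: authenticator,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Point the client at your service instance's endpoint (placeholder URL).
	if err := textToSpeech.SetServiceURL("https://api.us-south.text-to-speech.watson.cloud.ibm.com"); err != nil {
		log.Fatal(err)
	}

	log.Println("service URL:", textToSpeech.GetServiceURL())
}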

func (*TextToSpeechV1) AddCustomPrompt added in v2.1.0

func (textToSpeech *TextToSpeechV1) AddCustomPrompt(addCustomPromptOptions *AddCustomPromptOptions) (result *Prompt, response *core.DetailedResponse, err error)

AddCustomPrompt : Add a custom prompt Adds a custom prompt to a custom model. A prompt is defined by the text that is to be spoken, the audio for that text, a unique user-specified ID for the prompt, and an optional speaker ID. The information is used to generate prosodic data that is not visible to the user. This data is used by the service to produce the synthesized audio upon request. You must use credentials for the instance of the service that owns a custom model to add a prompt to it. You can add a maximum of 1000 custom prompts to a single custom model.

It is recommended that you assign meaningful values for prompt IDs. For example, use `goodbye` to identify a prompt that speaks a farewell message. Prompt IDs must be unique within a given custom model. You cannot define two prompts with the same name for the same custom model. If you provide the ID of an existing prompt, the previously uploaded prompt is replaced by the new information. The existing prompt is reprocessed by using the new text and audio and, if provided, the new speaker model, and the prosody data associated with the prompt is updated.

The quality of a prompt is undefined if the language of a prompt does not match the language of its custom model. This is consistent with any text or SSML that is specified for a speech synthesis request. The service makes a best-effort attempt to render the specified text for the prompt; it does not validate that the language of the text matches the language of the model.

Adding a prompt is an asynchronous operation. Although it accepts less audio than speaker enrollment, the service must align the audio with the provided text. The time that it takes to process a prompt depends on the prompt itself. The processing time for a reasonably sized prompt generally matches the length of the audio (for example, it takes 20 seconds to process a 20-second prompt).

For shorter prompts, you can wait for a reasonable amount of time and then check the status of the prompt with the [Get a custom prompt](#getcustomprompt) method. For longer prompts, consider using that method to poll the service every few seconds to determine when the prompt becomes available. No prompt can be used for speech synthesis if it is in the `processing` or `failed` state. Only prompts that are in the `available` state can be used for speech synthesis.

When it processes a request, the service attempts to align the text and the audio that are provided for the prompt. The text that is passed with a prompt must match the spoken audio as closely as possible. Optimally, the text and audio match exactly. The service does its best to align the specified text with the audio, and it can often compensate for mismatches between the two. But if the service cannot effectively align the text and the audio, possibly because the magnitude of mismatches between the two is too great, processing of the prompt fails.

### Evaluating a prompt

Always listen to and evaluate a prompt to determine its quality before using it in production. To evaluate a prompt, include only the single prompt in a speech synthesis request by using the following SSML extension, in this case for a prompt whose ID is `goodbye`:

`<ibm:prompt id="goodbye"/>`

In some cases, you might need to rerecord and resubmit a prompt as many as five times to address the following possible problems:

* The service might fail to detect a mismatch between the prompt’s text and audio. The longer the prompt, the greater the chance for misalignment between its text and audio. Therefore, multiple shorter prompts are preferable to a single long prompt.
* The text of a prompt might include a word that the service does not recognize. In this case, you can create a custom word and pronunciation pair to tell the service how to pronounce the word. You must then re-create the prompt.
* The quality of the input audio might be insufficient or the service’s processing of the audio might fail to detect the intended prosody. Submitting new audio for the prompt can correct these issues.

If a prompt that is created without a speaker ID does not adequately reflect the intended prosody, enrolling the speaker and providing a speaker ID for the prompt is one recommended means of potentially improving the quality of the prompt. This is especially important for shorter prompts such as "good-bye" or "thank you," where less audio data makes it more difficult to match the prosody of the speaker. Custom prompts are supported only for use with US English custom models and voices.

**See also:**
* [Add a custom prompt](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-create#tbe-create-add-prompt)
* [Evaluate a custom prompt](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-create#tbe-create-evaluate-prompt)
* [Rules for creating custom prompts](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-rules#tbe-rules-prompts)
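
A hedged sketch of the call, assuming a client constructed as under NewTextToSpeechV1; the customization ID is passed in by the caller, and the audio file path is a placeholder:

package examples

import (
	"fmt"
	"os"

	"github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"
)

func addGoodbyePrompt(textToSpeech *texttospeechv1.TextToSpeechV1, customizationID string) error {
	// The prompt text must match the spoken audio as closely as possible.
	metadata, err := textToSpeech.NewPromptMetadata("Thank you and good-bye!")
	if err != nil {
		return err
	}

	audio, err := os.Open("goodbye.wav") // placeholder path to the prompt's WAV audio
	if err != nil {
		return err
	}
	defer audio.Close()

	options := textToSpeech.NewAddCustomPromptOptions(customizationID, "goodbye", metadata, audio)

	prompt, _, err := textToSpeech.AddCustomPrompt(options)
	if err != nil {
		return err
	}
	// The prompt starts in the `processing` state; poll GetCustomPrompt until it is `available`.
	fmt.Println("prompt status:", *prompt.Status)
	return nil
}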

func (*TextToSpeechV1) AddCustomPromptWithContext added in v2.1.0

func (textToSpeech *TextToSpeechV1) AddCustomPromptWithContext(ctx context.Context, addCustomPromptOptions *AddCustomPromptOptions) (result *Prompt, response *core.DetailedResponse, err error)

AddCustomPromptWithContext is an alternate form of the AddCustomPrompt method which supports a Context parameter

func (*TextToSpeechV1) AddWord

func (textToSpeech *TextToSpeechV1) AddWord(addWordOptions *AddWordOptions) (response *core.DetailedResponse, err error)

AddWord : Add a custom word Adds a single word and its translation to the specified custom model. Adding a new translation for a word that already exists in a custom model overwrites the word's existing translation. A custom model can contain no more than 20,000 entries. You must use credentials for the instance of the service that owns a model to add a word to it.

You can define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. Phonetic translations are based on the SSML phoneme format for representing a word. You can specify them in standard International Phonetic Alphabet (IPA) representation

<code>&lt;phoneme alphabet="ipa" ph="t&#601;m&#712;&#593;to"&gt;&lt;/phoneme&gt;</code>

or in the proprietary IBM Symbolic Phonetic Representation (SPR)

<code>&lt;phoneme alphabet="ibm" ph="1gAstroEntxrYFXs"&gt;&lt;/phoneme&gt;</code>

**See also:**
* [Adding a single word to a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordAdd)
* [Adding words to a Japanese custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuJapaneseAdd)
* [Understanding customization](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customIntro#customIntro)
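
A brief sketch of adding a single word with the IPA translation shown above; the custom model ID is supplied by the caller:

package examples

import "github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"

func addTomatoTranslation(textToSpeech *texttospeechv1.TextToSpeechV1, customizationID string) error {
	// IPA phonetic translation for "tomato", wrapped in an SSML <phoneme> element.
	translation := `<phoneme alphabet="ipa" ph="təmˈɑto"></phoneme>`

	options := textToSpeech.NewAddWordOptions(customizationID, "tomato", translation)

	// AddWord returns only the detailed response; a 2xx status indicates the word was added.
	_, err := textToSpeech.AddWord(options)
	return err
}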

func (*TextToSpeechV1) AddWordWithContext

func (textToSpeech *TextToSpeechV1) AddWordWithContext(ctx context.Context, addWordOptions *AddWordOptions) (response *core.DetailedResponse, err error)

AddWordWithContext is an alternate form of the AddWord method which supports a Context parameter

func (*TextToSpeechV1) AddWords

func (textToSpeech *TextToSpeechV1) AddWords(addWordsOptions *AddWordsOptions) (response *core.DetailedResponse, err error)

AddWords : Add custom words Adds one or more words and their translations to the specified custom model. Adding a new translation for a word that already exists in a custom model overwrites the word's existing translation. A custom model can contain no more than 20,000 entries. You must use credentials for the instance of the service that owns a model to add words to it.

You can define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. Phonetic translations are based on the SSML phoneme format for representing a word. You can specify them in standard International Phonetic Alphabet (IPA) representation

<code>&lt;phoneme alphabet="ipa" ph="t&#601;m&#712;&#593;to"&gt;&lt;/phoneme&gt;</code>

or in the proprietary IBM Symbolic Phonetic Representation (SPR)

<code>&lt;phoneme alphabet="ibm" ph="1gAstroEntxrYFXs"&gt;&lt;/phoneme&gt;</code>

**See also:**
* [Adding multiple words to a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordsAdd)
* [Adding words to a Japanese custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuJapaneseAdd)
* [Understanding customization](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customIntro#customIntro)

func (*TextToSpeechV1) AddWordsWithContext

func (textToSpeech *TextToSpeechV1) AddWordsWithContext(ctx context.Context, addWordsOptions *AddWordsOptions) (response *core.DetailedResponse, err error)

AddWordsWithContext is an alternate form of the AddWords method which supports a Context parameter

func (*TextToSpeechV1) Clone

func (textToSpeech *TextToSpeechV1) Clone() *TextToSpeechV1

Clone makes a copy of "textToSpeech" suitable for processing requests.

func (*TextToSpeechV1) CreateCustomModel

func (textToSpeech *TextToSpeechV1) CreateCustomModel(createCustomModelOptions *CreateCustomModelOptions) (result *CustomModel, response *core.DetailedResponse, err error)

CreateCustomModel : Create a custom model Creates a new empty custom model. You must specify a name for the new custom model. You can optionally specify the language and a description for the new model. The model is owned by the instance of the service whose credentials are used to create it.

**See also:** [Creating a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customModels#cuModelsCreate).

### Important voice updates for IBM Cloud

The service's voices underwent significant change on 2 December 2020.

* The Arabic, Chinese, Dutch, Australian English, and Korean voices are now neural instead of concatenative.
* The `ar-AR_OmarVoice` voice is deprecated. Use the `ar-MS_OmarVoice` voice instead.
* The `ar-AR` language identifier cannot be used to create a custom model. Use the `ar-MS` identifier instead.
* The standard concatenative voices for the following languages are now deprecated: Brazilian Portuguese, United Kingdom and United States English, French, German, Italian, Japanese, and Spanish (all dialects).
* The features expressive SSML, voice transformation SSML, and use of the `volume` attribute of the `<prosody>` element are deprecated and are not supported with any of the service's neural voices.
* All of the service's voices are now customizable and generally available (GA) for production use.

The deprecated voices and features will continue to function for at least one year but might be removed at a future date. You are encouraged to migrate to the equivalent neural voices at your earliest convenience. For more information about all voice updates, see the [2 December 2020 service update](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-release-notes#December2020) in the release notes for IBM Cloud.
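
A brief sketch of creating an empty US English model. `SetLanguage` and `SetDescription` are assumed setters on `CreateCustomModelOptions`, following the setter pattern used by the other options types on this page, and the `CustomizationID` field on the returned `CustomModel` is likewise assumed from the response shape:

package examples

import (
	"fmt"

	"github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"
)

func createModel(textToSpeech *texttospeechv1.TextToSpeechV1) (string, error) {
	options := textToSpeech.NewCreateCustomModelOptions("Example model")
	options.SetLanguage("en-US")                    // assumed setter; en-US is also the default
	options.SetDescription("Example customization") // assumed setter

	model, _, err := textToSpeech.CreateCustomModel(options)
	if err != nil {
		return "", err
	}
	// Keep the customization ID (GUID); it identifies the model in later requests.
	fmt.Println("customization_id:", *model.CustomizationID)
	return *model.CustomizationID, nil
}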

func (*TextToSpeechV1) CreateCustomModelWithContext

func (textToSpeech *TextToSpeechV1) CreateCustomModelWithContext(ctx context.Context, createCustomModelOptions *CreateCustomModelOptions) (result *CustomModel, response *core.DetailedResponse, err error)

CreateCustomModelWithContext is an alternate form of the CreateCustomModel method which supports a Context parameter

func (*TextToSpeechV1) CreateSpeakerModel added in v2.1.0

func (textToSpeech *TextToSpeechV1) CreateSpeakerModel(createSpeakerModelOptions *CreateSpeakerModelOptions) (result *SpeakerModel, response *core.DetailedResponse, err error)

CreateSpeakerModel : Create a speaker model Creates a new speaker model, which is an optional enrollment token for users who are to add prompts to custom models. A speaker model contains information about a user's voice. The service extracts this information from a WAV audio sample that you pass as the body of the request. Associating a speaker model with a prompt is optional, but the information that is extracted from the speaker model helps the service learn about the speaker's voice.

A speaker model can make an appreciable difference in the quality of prompts, especially short prompts with relatively little audio, that are associated with that speaker. A speaker model can help the service produce a prompt with more confidence; the lack of a speaker model can potentially compromise the quality of a prompt.

The gender of the speaker who creates a speaker model does not need to match the gender of a voice that is used with prompts that are associated with that speaker model. For example, a speaker model that is created by a male speaker can be associated with prompts that are spoken by female voices.

You create a speaker model for a given instance of the service. The new speaker model is owned by the service instance whose credentials are used to create it. That same speaker can then be used to create prompts for all custom models within that service instance. No language is associated with a speaker model, but each custom model has a single specified language. You can add prompts only to US English models.

You specify a name for the speaker when you create it. The name must be unique among all speaker names for the owning service instance. To re-create a speaker model for an existing speaker name, you must first delete the existing speaker model that has that name.

Speaker enrollment is a synchronous operation. Although it accepts more audio data than a prompt, the process of adding a speaker is very fast. The service simply extracts information about the speaker’s voice from the audio. Unlike prompts, speaker models neither need nor accept a transcription of the audio. When the call returns, the audio is fully processed and the speaker enrollment is complete.

The service returns a speaker ID in its response to the request. A speaker ID is a globally unique identifier (GUID) that you use to identify the speaker in subsequent requests to the service. Speaker models and the custom prompts with which they are used are supported only for use with US English custom models and voices.

**See also:**
* [Create a speaker model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-create#tbe-create-speaker-model)
* [Rules for creating speaker models](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-rules#tbe-rules-speakers)
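
A hedged sketch of enrolling a speaker from a WAV sample; the speaker name and file path are placeholders:

package examples

import (
	"fmt"
	"os"

	"github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"
)

func enrollSpeaker(textToSpeech *texttospeechv1.TextToSpeechV1) (string, error) {
	audio, err := os.Open("speaker-sample.wav") // placeholder path to the enrollment WAV sample
	if err != nil {
		return "", err
	}
	defer audio.Close()

	options := textToSpeech.NewCreateSpeakerModelOptions("example-speaker", audio)

	speaker, _, err := textToSpeech.CreateSpeakerModel(options)
	if err != nil {
		return "", err
	}
	// The returned speaker ID (GUID) identifies this speaker in later prompt requests.
	fmt.Println("speaker_id:", *speaker.SpeakerID)
	return *speaker.SpeakerID, nil
}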

func (*TextToSpeechV1) CreateSpeakerModelWithContext added in v2.1.0

func (textToSpeech *TextToSpeechV1) CreateSpeakerModelWithContext(ctx context.Context, createSpeakerModelOptions *CreateSpeakerModelOptions) (result *SpeakerModel, response *core.DetailedResponse, err error)

CreateSpeakerModelWithContext is an alternate form of the CreateSpeakerModel method which supports a Context parameter

func (*TextToSpeechV1) DeleteCustomModel

func (textToSpeech *TextToSpeechV1) DeleteCustomModel(deleteCustomModelOptions *DeleteCustomModelOptions) (response *core.DetailedResponse, err error)

DeleteCustomModel : Delete a custom model Deletes the specified custom model. You must use credentials for the instance of the service that owns a model to delete it.

**See also:** [Deleting a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customModels#cuModelsDelete).

func (*TextToSpeechV1) DeleteCustomModelWithContext

func (textToSpeech *TextToSpeechV1) DeleteCustomModelWithContext(ctx context.Context, deleteCustomModelOptions *DeleteCustomModelOptions) (response *core.DetailedResponse, err error)

DeleteCustomModelWithContext is an alternate form of the DeleteCustomModel method which supports a Context parameter

func (*TextToSpeechV1) DeleteCustomPrompt added in v2.1.0

func (textToSpeech *TextToSpeechV1) DeleteCustomPrompt(deleteCustomPromptOptions *DeleteCustomPromptOptions) (response *core.DetailedResponse, err error)

DeleteCustomPrompt : Delete a custom prompt Deletes an existing custom prompt from a custom model. The service deletes the prompt with the specified ID. You must use credentials for the instance of the service that owns the custom model from which the prompt is to be deleted.

**Caution:** Deleting a custom prompt elicits a 400 response code from synthesis requests that attempt to use the prompt. Make sure that you do not attempt to use a deleted prompt in a production application. Custom prompts are supported only for use with US English custom models and voices.

**See also:** [Deleting a custom prompt](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-custom-prompts#tbe-custom-prompts-delete).

func (*TextToSpeechV1) DeleteCustomPromptWithContext added in v2.1.0

func (textToSpeech *TextToSpeechV1) DeleteCustomPromptWithContext(ctx context.Context, deleteCustomPromptOptions *DeleteCustomPromptOptions) (response *core.DetailedResponse, err error)

DeleteCustomPromptWithContext is an alternate form of the DeleteCustomPrompt method which supports a Context parameter

func (*TextToSpeechV1) DeleteSpeakerModel added in v2.1.0

func (textToSpeech *TextToSpeechV1) DeleteSpeakerModel(deleteSpeakerModelOptions *DeleteSpeakerModelOptions) (response *core.DetailedResponse, err error)

DeleteSpeakerModel : Delete a speaker model Deletes an existing speaker model from the service instance. The service deletes the enrolled speaker with the specified speaker ID. You must use credentials for the instance of the service that owns a speaker model to delete the speaker.

Any prompts that are associated with the deleted speaker are not affected by the speaker's deletion. The prosodic data that defines the quality of a prompt is established when the prompt is created. A prompt is static and remains unaffected by deletion of its associated speaker. However, the prompt cannot be resubmitted or updated with its original speaker once that speaker is deleted. Speaker models and the custom prompts with which they are used are supported only for use with US English custom models and voices.

**See also:** [Deleting a speaker model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-speaker-models#tbe-speaker-models-delete).

func (*TextToSpeechV1) DeleteSpeakerModelWithContext added in v2.1.0

func (textToSpeech *TextToSpeechV1) DeleteSpeakerModelWithContext(ctx context.Context, deleteSpeakerModelOptions *DeleteSpeakerModelOptions) (response *core.DetailedResponse, err error)

DeleteSpeakerModelWithContext is an alternate form of the DeleteSpeakerModel method which supports a Context parameter

func (*TextToSpeechV1) DeleteUserData

func (textToSpeech *TextToSpeechV1) DeleteUserData(deleteUserDataOptions *DeleteUserDataOptions) (response *core.DetailedResponse, err error)

DeleteUserData : Delete labeled data Deletes all data that is associated with a specified customer ID. The method deletes all data for the customer ID, regardless of the method by which the information was added. The method has no effect if no data is associated with the customer ID. You must issue the request with credentials for the same instance of the service that was used to associate the customer ID with the data. You associate a customer ID with data by passing the `X-Watson-Metadata` header with a request that passes the data.

**Note:** If you delete an instance of the service from the service console, all data associated with that service instance is automatically deleted. This includes all custom models and word/translation pairs, and all data related to speech synthesis requests.

**See also:** [Information security](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-information-security#information-security).

func (*TextToSpeechV1) DeleteUserDataWithContext

func (textToSpeech *TextToSpeechV1) DeleteUserDataWithContext(ctx context.Context, deleteUserDataOptions *DeleteUserDataOptions) (response *core.DetailedResponse, err error)

DeleteUserDataWithContext is an alternate form of the DeleteUserData method which supports a Context parameter

func (*TextToSpeechV1) DeleteWord

func (textToSpeech *TextToSpeechV1) DeleteWord(deleteWordOptions *DeleteWordOptions) (response *core.DetailedResponse, err error)

DeleteWord : Delete a custom word Deletes a single word from the specified custom model. You must use credentials for the instance of the service that owns a model to delete its words.

**See also:** [Deleting a word from a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordDelete).

func (*TextToSpeechV1) DeleteWordWithContext

func (textToSpeech *TextToSpeechV1) DeleteWordWithContext(ctx context.Context, deleteWordOptions *DeleteWordOptions) (response *core.DetailedResponse, err error)

DeleteWordWithContext is an alternate form of the DeleteWord method which supports a Context parameter

func (*TextToSpeechV1) DisableRetries

func (textToSpeech *TextToSpeechV1) DisableRetries()

DisableRetries disables automatic retries for requests invoked for this service instance.

func (*TextToSpeechV1) DisableSSLVerification

func (textToSpeech *TextToSpeechV1) DisableSSLVerification()

DisableSSLVerification bypasses verification of the server's SSL certificate

func (*TextToSpeechV1) EnableRetries

func (textToSpeech *TextToSpeechV1) EnableRetries(maxRetries int, maxRetryInterval time.Duration)

EnableRetries enables automatic retries for requests invoked for this service instance. If either parameter is specified as 0, then a default value is used instead.

func (*TextToSpeechV1) GetCustomModel

func (textToSpeech *TextToSpeechV1) GetCustomModel(getCustomModelOptions *GetCustomModelOptions) (result *CustomModel, response *core.DetailedResponse, err error)

GetCustomModel : Get a custom model Gets all information about a specified custom model. In addition to metadata such as the name and description of the custom model, the output includes the words and their translations that are defined for the model, as well as any prompts that are defined for the model. To see just the metadata for a model, use the [List custom models](#listcustommodels) method.

**See also:** [Querying a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customModels#cuModelsQuery).

func (*TextToSpeechV1) GetCustomModelWithContext

func (textToSpeech *TextToSpeechV1) GetCustomModelWithContext(ctx context.Context, getCustomModelOptions *GetCustomModelOptions) (result *CustomModel, response *core.DetailedResponse, err error)

GetCustomModelWithContext is an alternate form of the GetCustomModel method which supports a Context parameter

func (*TextToSpeechV1) GetCustomPrompt added in v2.1.0

func (textToSpeech *TextToSpeechV1) GetCustomPrompt(getCustomPromptOptions *GetCustomPromptOptions) (result *Prompt, response *core.DetailedResponse, err error)

GetCustomPrompt : Get a custom prompt Gets information about a specified custom prompt for a specified custom model. The information includes the prompt ID, prompt text, status, and optional speaker ID for each prompt of the custom model. You must use credentials for the instance of the service that owns the custom model. Custom prompts are supported only for use with US English custom models and voices.

**See also:** [Listing custom prompts](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-custom-prompts#tbe-custom-prompts-list).

func (*TextToSpeechV1) GetCustomPromptWithContext added in v2.1.0

func (textToSpeech *TextToSpeechV1) GetCustomPromptWithContext(ctx context.Context, getCustomPromptOptions *GetCustomPromptOptions) (result *Prompt, response *core.DetailedResponse, err error)

GetCustomPromptWithContext is an alternate form of the GetCustomPrompt method which supports a Context parameter

func (*TextToSpeechV1) GetEnableGzipCompression

func (textToSpeech *TextToSpeechV1) GetEnableGzipCompression() bool

GetEnableGzipCompression returns the service's EnableGzipCompression field

func (*TextToSpeechV1) GetPronunciation

func (textToSpeech *TextToSpeechV1) GetPronunciation(getPronunciationOptions *GetPronunciationOptions) (result *Pronunciation, response *core.DetailedResponse, err error)

GetPronunciation : Get pronunciation Gets the phonetic pronunciation for the specified word. You can request the pronunciation for a specific format. You can also request the pronunciation for a specific voice to see the default translation for the language of that voice or for a specific custom model to see the translation for that model.

**See also:** [Querying a word from a language](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordsQueryLanguage).

### Important voice updates for IBM Cloud

The service's voices underwent significant change on 2 December 2020.

* The Arabic, Chinese, Dutch, Australian English, and Korean voices are now neural instead of concatenative.
* The `ar-AR_OmarVoice` voice is deprecated. Use the `ar-MS_OmarVoice` voice instead.
* The `ar-AR` language identifier cannot be used to create a custom model. Use the `ar-MS` identifier instead.
* The standard concatenative voices for the following languages are now deprecated: Brazilian Portuguese, United Kingdom and United States English, French, German, Italian, Japanese, and Spanish (all dialects).
* The features expressive SSML, voice transformation SSML, and use of the `volume` attribute of the `<prosody>` element are deprecated and are not supported with any of the service's neural voices.
* All of the service's voices are now customizable and generally available (GA) for production use.

The deprecated voices and features will continue to function for at least one year but might be removed at a future date. You are encouraged to migrate to the equivalent neural voices at your earliest convenience. For more information about all voice updates, see the [2 December 2020 service update](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-release-notes#December2020) in the release notes for IBM Cloud.
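
A short sketch of the call. `SetVoice` and `SetFormat` are assumed setters on `GetPronunciationOptions`, mirroring the package's setter pattern, and the voice and format values come from the service documentation:

package examples

import (
	"fmt"

	"github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"
)

func showPronunciation(textToSpeech *texttospeechv1.TextToSpeechV1) error {
	options := textToSpeech.NewGetPronunciationOptions("tomato")
	options.SetVoice("en-US_MichaelV3Voice") // assumed setter; the voice determines the language
	options.SetFormat("ipa")                 // assumed setter; request IPA rather than SPR

	result, _, err := textToSpeech.GetPronunciation(options)
	if err != nil {
		return err
	}
	fmt.Println("pronunciation:", *result.Pronunciation)
	return nil
}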

func (*TextToSpeechV1) GetPronunciationWithContext

func (textToSpeech *TextToSpeechV1) GetPronunciationWithContext(ctx context.Context, getPronunciationOptions *GetPronunciationOptions) (result *Pronunciation, response *core.DetailedResponse, err error)

GetPronunciationWithContext is an alternate form of the GetPronunciation method which supports a Context parameter

func (*TextToSpeechV1) GetServiceURL

func (textToSpeech *TextToSpeechV1) GetServiceURL() string

GetServiceURL returns the service URL

func (*TextToSpeechV1) GetSpeakerModel added in v2.1.0

func (textToSpeech *TextToSpeechV1) GetSpeakerModel(getSpeakerModelOptions *GetSpeakerModelOptions) (result *SpeakerCustomModels, response *core.DetailedResponse, err error)

GetSpeakerModel : Get a speaker model Gets information about all prompts that are defined by a specified speaker for all custom models that are owned by a service instance. The information is grouped by the customization IDs of the custom models. For each custom model, it lists information about each prompt that the speaker has defined for that custom model. You must use credentials for the instance of the service that owns a speaker model to list its prompts. Speaker models and the custom prompts with which they are used are supported only for use with US English custom models and voices.

**See also:** [Listing the custom prompts for a speaker model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-speaker-models#tbe-speaker-models-list-prompts).
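
A sketch that walks the grouped result, relying only on the `SpeakerCustomModels`, `SpeakerCustomModel`, and `SpeakerPrompt` fields documented above; the speaker ID is supplied by the caller:

package examples

import (
	"fmt"

	"github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"
)

func listPromptsForSpeaker(textToSpeech *texttospeechv1.TextToSpeechV1, speakerID string) error {
	options := textToSpeech.NewGetSpeakerModelOptions(speakerID)

	result, _, err := textToSpeech.GetSpeakerModel(options)
	if err != nil {
		return err
	}
	// Prompts are grouped by the customization ID of the custom model that owns them.
	for _, model := range result.Customizations {
		fmt.Println("custom model:", *model.CustomizationID)
		for _, prompt := range model.Prompts {
			fmt.Printf("  prompt %s (%s): %s\n", *prompt.PromptID, *prompt.Status, *prompt.Prompt)
		}
	}
	return nil
}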

func (*TextToSpeechV1) GetSpeakerModelWithContext added in v2.1.0

func (textToSpeech *TextToSpeechV1) GetSpeakerModelWithContext(ctx context.Context, getSpeakerModelOptions *GetSpeakerModelOptions) (result *SpeakerCustomModels, response *core.DetailedResponse, err error)

GetSpeakerModelWithContext is an alternate form of the GetSpeakerModel method which supports a Context parameter

func (*TextToSpeechV1) GetVoice

func (textToSpeech *TextToSpeechV1) GetVoice(getVoiceOptions *GetVoiceOptions) (result *Voice, response *core.DetailedResponse, err error)

GetVoice : Get a voice Gets information about the specified voice. The information includes the name, language, gender, and other details about the voice. Specify a customization ID to obtain information for a custom model that is defined for the language of the specified voice. To list information about all available voices, use the [List voices](#listvoices) method.

**See also:** [Listing a specific voice](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-voices#listVoice).

### Important voice updates for IBM Cloud

The service's voices underwent significant change on 2 December 2020.

* The Arabic, Chinese, Dutch, Australian English, and Korean voices are now neural instead of concatenative.
* The `ar-AR_OmarVoice` voice is deprecated. Use the `ar-MS_OmarVoice` voice instead.
* The `ar-AR` language identifier cannot be used to create a custom model. Use the `ar-MS` identifier instead.
* The standard concatenative voices for the following languages are now deprecated: Brazilian Portuguese, United Kingdom and United States English, French, German, Italian, Japanese, and Spanish (all dialects).
* The features expressive SSML, voice transformation SSML, and use of the `volume` attribute of the `<prosody>` element are deprecated and are not supported with any of the service's neural voices.
* All of the service's voices are now customizable and generally available (GA) for production use.

The deprecated voices and features will continue to function for at least one year but might be removed at a future date. You are encouraged to migrate to the equivalent neural voices at your earliest convenience. For more information about all voice updates, see the [2 December 2020 service update](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-release-notes#December2020) in the release notes for IBM Cloud.

func (*TextToSpeechV1) GetVoiceWithContext

func (textToSpeech *TextToSpeechV1) GetVoiceWithContext(ctx context.Context, getVoiceOptions *GetVoiceOptions) (result *Voice, response *core.DetailedResponse, err error)

GetVoiceWithContext is an alternate form of the GetVoice method which supports a Context parameter

func (*TextToSpeechV1) GetWord

func (textToSpeech *TextToSpeechV1) GetWord(getWordOptions *GetWordOptions) (result *Translation, response *core.DetailedResponse, err error)

GetWord : Get a custom word Gets the translation for a single word from the specified custom model. The output shows the translation as it is defined in the model. You must use credentials for the instance of the service that owns a model to list its words.

**See also:** [Querying a single word from a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordQueryModel).

func (*TextToSpeechV1) GetWordWithContext

func (textToSpeech *TextToSpeechV1) GetWordWithContext(ctx context.Context, getWordOptions *GetWordOptions) (result *Translation, response *core.DetailedResponse, err error)

GetWordWithContext is an alternate form of the GetWord method which supports a Context parameter

func (*TextToSpeechV1) ListCustomModels

func (textToSpeech *TextToSpeechV1) ListCustomModels(listCustomModelsOptions *ListCustomModelsOptions) (result *CustomModels, response *core.DetailedResponse, err error)

ListCustomModels : List custom models Lists metadata such as the name and description for all custom models that are owned by an instance of the service. Specify a language to list the custom models for that language only. To see the words and prompts in addition to the metadata for a specific custom model, use the [Get a custom model](#getcustommodel) method. You must use credentials for the instance of the service that owns a model to list information about it.

**See also:** [Querying all custom models](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customModels#cuModelsQueryAll).

func (*TextToSpeechV1) ListCustomModelsWithContext

func (textToSpeech *TextToSpeechV1) ListCustomModelsWithContext(ctx context.Context, listCustomModelsOptions *ListCustomModelsOptions) (result *CustomModels, response *core.DetailedResponse, err error)

ListCustomModelsWithContext is an alternate form of the ListCustomModels method which supports a Context parameter

func (*TextToSpeechV1) ListCustomPrompts added in v2.1.0

func (textToSpeech *TextToSpeechV1) ListCustomPrompts(listCustomPromptsOptions *ListCustomPromptsOptions) (result *Prompts, response *core.DetailedResponse, err error)

ListCustomPrompts : List custom prompts Lists information about all custom prompts that are defined for a custom model. The information includes the prompt ID, prompt text, status, and optional speaker ID for each prompt of the custom model. You must use credentials for the instance of the service that owns the custom model. The same information about all of the prompts for a custom model is also provided by the [Get a custom model](#getcustommodel) method. That method provides complete details about a specified custom model, including its language, owner, custom words, and more. Custom prompts are supported only for use with US English custom models and voices.

**See also:** [Listing custom prompts](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-custom-prompts#tbe-custom-prompts-list).

func (*TextToSpeechV1) ListCustomPromptsWithContext added in v2.1.0

func (textToSpeech *TextToSpeechV1) ListCustomPromptsWithContext(ctx context.Context, listCustomPromptsOptions *ListCustomPromptsOptions) (result *Prompts, response *core.DetailedResponse, err error)

ListCustomPromptsWithContext is an alternate form of the ListCustomPrompts method which supports a Context parameter

func (*TextToSpeechV1) ListSpeakerModels added in v2.1.0

func (textToSpeech *TextToSpeechV1) ListSpeakerModels(listSpeakerModelsOptions *ListSpeakerModelsOptions) (result *Speakers, response *core.DetailedResponse, err error)

ListSpeakerModels : List speaker models Lists information about all speaker models that are defined for a service instance. The information includes the speaker ID and speaker name of each defined speaker. You must use credentials for the instance of a service to list its speakers. Speaker models and the custom prompts with which they are used are supported only for use with US English custom models and voices.

**See also:** [Listing speaker models](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-tbe-speaker-models#tbe-speaker-models-list).

func (*TextToSpeechV1) ListSpeakerModelsWithContext added in v2.1.0

func (textToSpeech *TextToSpeechV1) ListSpeakerModelsWithContext(ctx context.Context, listSpeakerModelsOptions *ListSpeakerModelsOptions) (result *Speakers, response *core.DetailedResponse, err error)

ListSpeakerModelsWithContext is an alternate form of the ListSpeakerModels method which supports a Context parameter

func (*TextToSpeechV1) ListVoices

func (textToSpeech *TextToSpeechV1) ListVoices(listVoicesOptions *ListVoicesOptions) (result *Voices, response *core.DetailedResponse, err error)

ListVoices : List voices Lists all voices available for use with the service. The information includes the name, language, gender, and other details about the voice. The ordering of the list of voices can change from call to call; do not rely on an alphabetized or static list of voices. To see information about a specific voice, use the [Get a voice](#getvoice) method.

**See also:** [Listing all available voices](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-voices#listVoices).
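
A brief sketch; the `Voices` slice on the result and the `Name` and `Language` fields of each `Voice` are assumed from the response shape that this method describes:

package examples

import (
	"fmt"

	"github.com/watson-developer-cloud/go-sdk/v2/texttospeechv1"
)

func printVoices(textToSpeech *texttospeechv1.TextToSpeechV1) error {
	voices, _, err := textToSpeech.ListVoices(textToSpeech.NewListVoicesOptions())
	if err != nil {
		return err
	}
	// Ordering is not guaranteed; do not rely on a static or alphabetized list.
	for _, voice := range voices.Voices {
		fmt.Printf("%s (%s)\n", *voice.Name, *voice.Language)
	}
	return nil
}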

func (*TextToSpeechV1) ListVoicesWithContext

func (textToSpeech *TextToSpeechV1) ListVoicesWithContext(ctx context.Context, listVoicesOptions *ListVoicesOptions) (result *Voices, response *core.DetailedResponse, err error)

ListVoicesWithContext is an alternate form of the ListVoices method which supports a Context parameter

func (*TextToSpeechV1) ListWords

func (textToSpeech *TextToSpeechV1) ListWords(listWordsOptions *ListWordsOptions) (result *Words, response *core.DetailedResponse, err error)

ListWords : List custom words Lists all of the words and their translations for the specified custom model. The output shows the translations as they are defined in the model. You must use credentials for the instance of the service that owns a model to list its words.

**See also:** [Querying all words from a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuWordsQueryModel).

func (*TextToSpeechV1) ListWordsWithContext

func (textToSpeech *TextToSpeechV1) ListWordsWithContext(ctx context.Context, listWordsOptions *ListWordsOptions) (result *Words, response *core.DetailedResponse, err error)

ListWordsWithContext is an alternate form of the ListWords method which supports a Context parameter

func (*TextToSpeechV1) NewAddCustomPromptOptions added in v2.1.0

func (*TextToSpeechV1) NewAddCustomPromptOptions(customizationID string, promptID string, metadata *PromptMetadata, file io.ReadCloser) *AddCustomPromptOptions

NewAddCustomPromptOptions : Instantiate AddCustomPromptOptions

func (*TextToSpeechV1) NewAddWordOptions

func (*TextToSpeechV1) NewAddWordOptions(customizationID string, word string, translation string) *AddWordOptions

NewAddWordOptions : Instantiate AddWordOptions

func (*TextToSpeechV1) NewAddWordsOptions

func (*TextToSpeechV1) NewAddWordsOptions(customizationID string, words []Word) *AddWordsOptions

NewAddWordsOptions : Instantiate AddWordsOptions

func (*TextToSpeechV1) NewCreateCustomModelOptions

func (*TextToSpeechV1) NewCreateCustomModelOptions(name string) *CreateCustomModelOptions

NewCreateCustomModelOptions : Instantiate CreateCustomModelOptions

func (*TextToSpeechV1) NewCreateSpeakerModelOptions added in v2.1.0

func (*TextToSpeechV1) NewCreateSpeakerModelOptions(speakerName string, audio io.ReadCloser) *CreateSpeakerModelOptions

NewCreateSpeakerModelOptions : Instantiate CreateSpeakerModelOptions

func (*TextToSpeechV1) NewDeleteCustomModelOptions

func (*TextToSpeechV1) NewDeleteCustomModelOptions(customizationID string) *DeleteCustomModelOptions

NewDeleteCustomModelOptions : Instantiate DeleteCustomModelOptions

func (*TextToSpeechV1) NewDeleteCustomPromptOptions added in v2.1.0

func (*TextToSpeechV1) NewDeleteCustomPromptOptions(customizationID string, promptID string) *DeleteCustomPromptOptions

NewDeleteCustomPromptOptions : Instantiate DeleteCustomPromptOptions

func (*TextToSpeechV1) NewDeleteSpeakerModelOptions added in v2.1.0

func (*TextToSpeechV1) NewDeleteSpeakerModelOptions(speakerID string) *DeleteSpeakerModelOptions

NewDeleteSpeakerModelOptions : Instantiate DeleteSpeakerModelOptions

func (*TextToSpeechV1) NewDeleteUserDataOptions

func (*TextToSpeechV1) NewDeleteUserDataOptions(customerID string) *DeleteUserDataOptions

NewDeleteUserDataOptions : Instantiate DeleteUserDataOptions

func (*TextToSpeechV1) NewDeleteWordOptions

func (*TextToSpeechV1) NewDeleteWordOptions(customizationID string, word string) *DeleteWordOptions

NewDeleteWordOptions : Instantiate DeleteWordOptions

func (*TextToSpeechV1) NewGetCustomModelOptions

func (*TextToSpeechV1) NewGetCustomModelOptions(customizationID string) *GetCustomModelOptions

NewGetCustomModelOptions : Instantiate GetCustomModelOptions

func (*TextToSpeechV1) NewGetCustomPromptOptions added in v2.1.0

func (*TextToSpeechV1) NewGetCustomPromptOptions(customizationID string, promptID string) *GetCustomPromptOptions

NewGetCustomPromptOptions : Instantiate GetCustomPromptOptions

func (*TextToSpeechV1) NewGetPronunciationOptions

func (*TextToSpeechV1) NewGetPronunciationOptions(text string) *GetPronunciationOptions

NewGetPronunciationOptions : Instantiate GetPronunciationOptions

func (*TextToSpeechV1) NewGetSpeakerModelOptions added in v2.1.0

func (*TextToSpeechV1) NewGetSpeakerModelOptions(speakerID string) *GetSpeakerModelOptions

NewGetSpeakerModelOptions : Instantiate GetSpeakerModelOptions

func (*TextToSpeechV1) NewGetVoiceOptions

func (*TextToSpeechV1) NewGetVoiceOptions(voice string) *GetVoiceOptions

NewGetVoiceOptions : Instantiate GetVoiceOptions

func (*TextToSpeechV1) NewGetWordOptions

func (*TextToSpeechV1) NewGetWordOptions(customizationID string, word string) *GetWordOptions

NewGetWordOptions : Instantiate GetWordOptions

func (*TextToSpeechV1) NewListCustomModelsOptions

func (*TextToSpeechV1) NewListCustomModelsOptions() *ListCustomModelsOptions

NewListCustomModelsOptions : Instantiate ListCustomModelsOptions

func (*TextToSpeechV1) NewListCustomPromptsOptions added in v2.1.0

func (*TextToSpeechV1) NewListCustomPromptsOptions(customizationID string) *ListCustomPromptsOptions

NewListCustomPromptsOptions : Instantiate ListCustomPromptsOptions

func (*TextToSpeechV1) NewListSpeakerModelsOptions added in v2.1.0

func (*TextToSpeechV1) NewListSpeakerModelsOptions() *ListSpeakerModelsOptions

NewListSpeakerModelsOptions : Instantiate ListSpeakerModelsOptions

func (*TextToSpeechV1) NewListVoicesOptions

func (*TextToSpeechV1) NewListVoicesOptions() *ListVoicesOptions

NewListVoicesOptions : Instantiate ListVoicesOptions

func (*TextToSpeechV1) NewListWordsOptions

func (*TextToSpeechV1) NewListWordsOptions(customizationID string) *ListWordsOptions

NewListWordsOptions : Instantiate ListWordsOptions

func (*TextToSpeechV1) NewPromptMetadata added in v2.1.0

func (*TextToSpeechV1) NewPromptMetadata(promptText string) (_model *PromptMetadata, err error)

NewPromptMetadata : Instantiate PromptMetadata (Generic Model Constructor)

func (*TextToSpeechV1) NewSynthesizeListener

func (textToSpeechV1 *TextToSpeechV1) NewSynthesizeListener(callback SynthesizeCallbackWrapper, req *http.Request)

func (*TextToSpeechV1) NewSynthesizeOptions

func (*TextToSpeechV1) NewSynthesizeOptions(text string) *SynthesizeOptions

NewSynthesizeOptions : Instantiate SynthesizeOptions

func (*TextToSpeechV1) NewSynthesizeUsingWebsocketOptions

func (textToSpeech *TextToSpeechV1) NewSynthesizeUsingWebsocketOptions(text string, callback SynthesizeCallbackWrapper) *SynthesizeUsingWebsocketOptions

NewSynthesizeUsingWebsocketOptions : Instantiate SynthesizeUsingWebsocketOptions to enable websocket support

func (*TextToSpeechV1) NewTranslation

func (*TextToSpeechV1) NewTranslation(translation string) (_model *Translation, err error)

NewTranslation : Instantiate Translation (Generic Model Constructor)

func (*TextToSpeechV1) NewUpdateCustomModelOptions

func (*TextToSpeechV1) NewUpdateCustomModelOptions(customizationID string) *UpdateCustomModelOptions

NewUpdateCustomModelOptions : Instantiate UpdateCustomModelOptions

func (*TextToSpeechV1) NewWord

func (*TextToSpeechV1) NewWord(word string, translation string) (_model *Word, err error)

NewWord : Instantiate Word (Generic Model Constructor)

func (*TextToSpeechV1) NewWords

func (*TextToSpeechV1) NewWords(words []Word) (_model *Words, err error)

NewWords : Instantiate Words (Generic Model Constructor)

func (*TextToSpeechV1) SetDefaultHeaders

func (textToSpeech *TextToSpeechV1) SetDefaultHeaders(headers http.Header)

SetDefaultHeaders sets HTTP headers to be sent in every request

func (*TextToSpeechV1) SetEnableGzipCompression

func (textToSpeech *TextToSpeechV1) SetEnableGzipCompression(enableGzip bool)

SetEnableGzipCompression sets the service's EnableGzipCompression field

func (*TextToSpeechV1) SetServiceURL

func (textToSpeech *TextToSpeechV1) SetServiceURL(url string) error

SetServiceURL sets the service URL

func (*TextToSpeechV1) Synthesize

func (textToSpeech *TextToSpeechV1) Synthesize(synthesizeOptions *SynthesizeOptions) (result io.ReadCloser, response *core.DetailedResponse, err error)

Synthesize : Synthesize audio Synthesizes text to audio that is spoken in the specified voice. The service bases its understanding of the language for the input text on the specified voice. Use a voice that matches the language of the input text.

The method accepts a maximum of 5 KB of input text in the body of the request, and 8 KB for the URL and headers. The 5 KB limit includes any SSML tags that you specify. The service returns the synthesized audio stream as an array of bytes.

**See also:** [The HTTP interface](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-usingHTTP#usingHTTP).

### Audio formats (accept types)

The service can return audio in the following formats (MIME types).

* Where indicated, you can optionally specify the sampling rate (`rate`) of the audio. You must specify a sampling rate for the `audio/l16` and `audio/mulaw` formats. A specified sampling rate must lie in the range of 8 kHz to 192 kHz. Some formats restrict the sampling rate to certain values, as noted.
* For the `audio/l16` format, you can optionally specify the endianness (`endianness`) of the audio: `endianness=big-endian` or `endianness=little-endian`.

Use the `Accept` header or the `accept` parameter to specify the requested format of the response audio. If you omit an audio format altogether, the service returns the audio in Ogg format with the Opus codec (`audio/ogg;codecs=opus`). The service always returns single-channel audio.

* `audio/basic` - The service returns audio with a sampling rate of 8000 Hz.
* `audio/flac` - You can optionally specify the `rate` of the audio. The default sampling rate is 22,050 Hz.
* `audio/l16` - You must specify the `rate` of the audio. You can optionally specify the `endianness` of the audio. The default endianness is `little-endian`.
* `audio/mp3` - You can optionally specify the `rate` of the audio. The default sampling rate is 22,050 Hz.
* `audio/mpeg` - You can optionally specify the `rate` of the audio. The default sampling rate is 22,050 Hz.
* `audio/mulaw` - You must specify the `rate` of the audio.
* `audio/ogg` - The service returns the audio in the `vorbis` codec. You can optionally specify the `rate` of the audio. The default sampling rate is 22,050 Hz.
* `audio/ogg;codecs=opus` - You can optionally specify the `rate` of the audio. Only the following values are valid sampling rates: `48000`, `24000`, `16000`, `12000`, or `8000`. If you specify a value other than one of these, the service returns an error. The default sampling rate is 48,000 Hz.
* `audio/ogg;codecs=vorbis` - You can optionally specify the `rate` of the audio. The default sampling rate is 22,050 Hz.
* `audio/wav` - You can optionally specify the `rate` of the audio. The default sampling rate is 22,050 Hz.
* `audio/webm` - The service returns the audio in the `opus` codec. The service returns audio with a sampling rate of 48,000 Hz.
* `audio/webm;codecs=opus` - The service returns audio with a sampling rate of 48,000 Hz.
* `audio/webm;codecs=vorbis` - You can optionally specify the `rate` of the audio. The default sampling rate is 22,050 Hz.

For more information about specifying an audio format, including additional details about some of the formats, see [Using audio formats](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-audio-formats).

### Important voice updates for IBM Cloud

The service's voices underwent significant change on 2 December 2020.

* The Arabic, Chinese, Dutch, Australian English, and Korean voices are now neural instead of concatenative.
* The `ar-AR_OmarVoice` voice is deprecated. Use the `ar-MS_OmarVoice` voice instead.
* The `ar-AR` language identifier cannot be used to create a custom model. Use the `ar-MS` identifier instead.
* The standard concatenative voices for the following languages are now deprecated: Brazilian Portuguese, United Kingdom and United States English, French, German, Italian, Japanese, and Spanish (all dialects).
* The features expressive SSML, voice transformation SSML, and use of the `volume` attribute of the `<prosody>` element are deprecated and are not supported with any of the service's neural voices.
* All of the service's voices are now customizable and generally available (GA) for production use.

The deprecated voices and features will continue to function for at least one year but might be removed at a future date. You are encouraged to migrate to the equivalent neural voices at your earliest convenience. For more information about all voice updates, see the [2 December 2020 service update](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-release-notes#December2020) in the release notes for IBM Cloud.

### Warning messages

If a request includes invalid query parameters, the service returns a `Warnings` response header that provides messages about the invalid parameters. The warning includes a descriptive message and a list of invalid argument strings, for example, a message such as `"Unknown arguments:"` or `"Unknown url query arguments:"` followed by a list of the form `"{invalid_arg_1}, {invalid_arg_2}."` The request succeeds despite the warnings.
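A minimal sketch of checking for these warnings follows; it assumes `textToSpeech` and `options` are initialized as in the earlier synthesize sketch and reads the header from the `core.DetailedResponse` value that every method returns.

// Sketch: inspect the Warnings response header after a request.
audio, detailedResponse, err := textToSpeech.Synthesize(options)
if err != nil {
	log.Fatal(err)
}
defer audio.Close()

if warnings := detailedResponse.GetHeaders().Get("Warnings"); warnings != "" {
	log.Printf("service warnings: %s", warnings)
}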

func (*TextToSpeechV1) SynthesizeUsingWebsocket

func (textToSpeech *TextToSpeechV1) SynthesizeUsingWebsocket(synthesizeOptions *SynthesizeUsingWebsocketOptions) error

SynthesizeUsingWebsocket : Synthesize text over a WebSocket connection

func (*TextToSpeechV1) SynthesizeWithContext

func (textToSpeech *TextToSpeechV1) SynthesizeWithContext(ctx context.Context, synthesizeOptions *SynthesizeOptions) (result io.ReadCloser, response *core.DetailedResponse, err error)

SynthesizeWithContext is an alternate form of the Synthesize method which supports a Context parameter
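For instance, the sketch below bounds the request with a 30-second deadline; `textToSpeech` and `options` are assumed to be initialized as in the earlier sketches, and the standard `context` and `time` packages are assumed to be imported.

// Sketch: abandon the synthesize request if it does not complete within 30 seconds.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

audio, _, err := textToSpeech.SynthesizeWithContext(ctx, options)
if err != nil {
	log.Fatal(err)
}
defer audio.Close()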

func (*TextToSpeechV1) UpdateCustomModel

func (textToSpeech *TextToSpeechV1) UpdateCustomModel(updateCustomModelOptions *UpdateCustomModelOptions) (response *core.DetailedResponse, err error)

UpdateCustomModel : Update a custom model

Updates information for the specified custom model. You can update metadata such as the name and description of the model. You can also update the words in the model and their translations. Adding a new translation for a word that already exists in a custom model overwrites the word's existing translation. A custom model can contain no more than 20,000 entries. You must use credentials for the instance of the service that owns a model to update it.

You can define sounds-like or phonetic translations for words. A sounds-like translation consists of one or more words that, when combined, sound like the word. Phonetic translations are based on the SSML phoneme format for representing a word. You can specify them in standard International Phonetic Alphabet (IPA) representation

<code>&lt;phoneme alphabet="ipa" ph="t&#601;m&#712;&#593;to"&gt;&lt;/phoneme&gt;</code>

or in the proprietary IBM Symbolic Phonetic Representation (SPR)

<code>&lt;phoneme alphabet="ibm" ph="1gAstroEntxrYFXs"&gt;&lt;/phoneme&gt;</code>
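A sketch of such an update follows. It assumes a client initialized as in the earlier sketches, uses a placeholder customization ID, and builds the options with the setters documented below (each setter returns the options pointer, so calls can also be chained); the second word reuses the IPA phoneme string shown above, and `core.StringPtr` is the pointer helper from the Go SDK core package.

// Sketch: rename a custom model and add or update two word translations.
// "your-customization-id" is a placeholder for a real customization ID (GUID).
updateOptions := &texttospeechv1.UpdateCustomModelOptions{}
updateOptions.SetCustomizationID("your-customization-id")
updateOptions.SetName("Example model")
updateOptions.SetWords([]texttospeechv1.Word{
	{
		// Sounds-like translation.
		Word:        core.StringPtr("IEEE"),
		Translation: core.StringPtr("I triple E"),
	},
	{
		// IPA phonetic translation, as in the phoneme example above.
		Word:        core.StringPtr("tomato"),
		Translation: core.StringPtr(`<phoneme alphabet="ipa" ph="təmˈɑto"></phoneme>`),
	},
})

if _, err := textToSpeech.UpdateCustomModel(updateOptions); err != nil {
	log.Fatal(err)
}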

**See also:**
* [Updating a custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customModels#cuModelsUpdate)
* [Adding words to a Japanese custom model](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customWords#cuJapaneseAdd)
* [Understanding customization](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-customIntro#customIntro).

func (*TextToSpeechV1) UpdateCustomModelWithContext

func (textToSpeech *TextToSpeechV1) UpdateCustomModelWithContext(ctx context.Context, updateCustomModelOptions *UpdateCustomModelOptions) (response *core.DetailedResponse, err error)

UpdateCustomModelWithContext is an alternate form of the UpdateCustomModel method which supports a Context parameter

type TextToSpeechV1Options

type TextToSpeechV1Options struct {
	ServiceName   string
	URL           string
	Authenticator core.Authenticator
}

TextToSpeechV1Options : Service options

type Timings

type Timings struct {
	Words [][]interface{} `json:"words,omitempty"`
}

Timings : An array of words and their start and end times in seconds from the beginning of the synthesized audio.
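Because `Words` is decoded into untyped slices, each entry has to be unpacked by hand. The sketch below assumes each inner slice has the form `[word, startTime, endTime]`, with the times decoded from JSON as `float64` values; `fmt` is assumed to be imported.

// Sketch: walk a Timings value (for example, one received over the
// WebSocket interface). Assumes each entry is [word, startTime, endTime].
var timings texttospeechv1.Timings // populated from a service response
for _, entry := range timings.Words {
	if len(entry) != 3 {
		continue // skip anything that does not match the expected shape
	}
	word, _ := entry[0].(string)
	start, _ := entry[1].(float64)
	end, _ := entry[2].(float64)
	fmt.Printf("%s: %.2f s to %.2f s\n", word, start, end)
}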

type Translation

type Translation struct {
	// The phonetic or sounds-like translation for the word. A phonetic translation is based on the SSML format for
	// representing the phonetic string of a word either as an IPA translation or as an IBM SPR translation. The Arabic,
	// Chinese, Dutch, Australian English, and Korean languages support only IPA. A sounds-like is one or more words that,
	// when combined, sound like the word.
	Translation *string `json:"translation" validate:"required"`

	// **Japanese only.** The part of speech for the word. The service uses the value to produce the correct intonation for
	// the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot
	// create multiple entries with different parts of speech for the same word. For more information, see [Working with
	// Japanese entries](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-rules#jaNotes).
	PartOfSpeech *string `json:"part_of_speech,omitempty"`
}

Translation : Information about the translation for the specified text.
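Because the fields are pointers (optional fields may be absent from the JSON), a returned value is typically read with nil checks, as in this small sketch:

// Sketch: read a Translation returned by the service, for example from a
// "get a custom word" request. PartOfSpeech is present only for Japanese words.
var tr *texttospeechv1.Translation // populated from a service response
if tr != nil && tr.Translation != nil {
	fmt.Println("translation:", *tr.Translation)
}
if tr != nil && tr.PartOfSpeech != nil {
	fmt.Println("part of speech:", *tr.PartOfSpeech)
}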

type UpdateCustomModelOptions

type UpdateCustomModelOptions struct {
	// The customization ID (GUID) of the custom model. You must make the request with credentials for the instance of the
	// service that owns the custom model.
	CustomizationID *string `json:"-" validate:"required,ne="`

	// A new name for the custom model.
	Name *string `json:"name,omitempty"`

	// A new description for the custom model.
	Description *string `json:"description,omitempty"`

	// An array of `Word` objects that provides the words and their translations that are to be added or updated for the
	// custom model. Pass an empty array to make no additions or updates.
	Words []Word `json:"words,omitempty"`

	// Allows users to set headers on API requests
	Headers map[string]string
}

UpdateCustomModelOptions : The UpdateCustomModel options.

func (*UpdateCustomModelOptions) SetCustomizationID

func (_options *UpdateCustomModelOptions) SetCustomizationID(customizationID string) *UpdateCustomModelOptions

SetCustomizationID : Allow user to set CustomizationID

func (*UpdateCustomModelOptions) SetDescription

func (_options *UpdateCustomModelOptions) SetDescription(description string) *UpdateCustomModelOptions

SetDescription : Allow user to set Description

func (*UpdateCustomModelOptions) SetHeaders

func (options *UpdateCustomModelOptions) SetHeaders(param map[string]string) *UpdateCustomModelOptions

SetHeaders : Allow user to set Headers

func (*UpdateCustomModelOptions) SetName

func (_options *UpdateCustomModelOptions) SetName(name string) *UpdateCustomModelOptions

SetName : Allow user to set Name

func (*UpdateCustomModelOptions) SetWords

func (_options *UpdateCustomModelOptions) SetWords(words []Word) *UpdateCustomModelOptions

SetWords : Allow user to set Words

type Voice

type Voice struct {
	// The URI of the voice.
	URL *string `json:"url" validate:"required"`

	// The gender of the voice: `male` or `female`.
	Gender *string `json:"gender" validate:"required"`

	// The name of the voice. Use this as the voice identifier in all requests.
	Name *string `json:"name" validate:"required"`

	// The language and region of the voice (for example, `en-US`).
	Language *string `json:"language" validate:"required"`

	// A textual description of the voice.
	Description *string `json:"description" validate:"required"`

	// If `true`, the voice can be customized; if `false`, the voice cannot be customized. (Same as `custom_pronunciation`;
	// maintained for backward compatibility.)
	Customizable *bool `json:"customizable" validate:"required"`

	// Additional service features that are supported with the voice.
	SupportedFeatures *SupportedFeatures `json:"supported_features" validate:"required"`

	// Returns information about a specified custom model. This field is returned only by the [Get a voice](#getvoice)
	// method and only when you specify the customization ID of a custom model.
	Customization *CustomModel `json:"customization,omitempty"`
}

Voice : Information about an available voice.

type Voices

type Voices struct {
	// A list of available voices.
	Voices []Voice `json:"voices" validate:"required"`
}

Voices : Information about all available voices.
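For example, the sketch below prints the identifier, language, and customizability of each available voice. It assumes a client initialized as in the earlier sketches and the package's ListVoices method and ListVoicesOptions type, which are taken here to follow the same conventions as the other methods documented in this package.

// Sketch: list the available voices. The required Voice fields are assumed
// to be populated in the response, so the pointers are dereferenced directly.
voices, _, err := textToSpeech.ListVoices(&texttospeechv1.ListVoicesOptions{})
if err != nil {
	log.Fatal(err)
}
for _, voice := range voices.Voices {
	fmt.Printf("%s (%s) customizable=%t\n", *voice.Name, *voice.Language, *voice.Customizable)
}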

type Word

type Word struct {
	// The word for the custom model. The maximum length of a word is 49 characters.
	Word *string `json:"word" validate:"required"`

	// The phonetic or sounds-like translation for the word. A phonetic translation is based on the SSML format for
	// representing the phonetic string of a word either as an IPA or IBM SPR translation. The Arabic, Chinese, Dutch,
	// Australian English, and Korean languages support only IPA. A sounds-like translation consists of one or more words
	// that, when combined, sound like the word. The maximum length of a translation is 499 characters.
	Translation *string `json:"translation" validate:"required"`

	// **Japanese only.** The part of speech for the word. The service uses the value to produce the correct intonation for
	// the word. You can create only a single entry, with or without a single part of speech, for any word; you cannot
	// create multiple entries with different parts of speech for the same word. For more information, see [Working with
	// Japanese entries](https://cloud.ibm.com/docs/text-to-speech?topic=text-to-speech-rules#jaNotes).
	PartOfSpeech *string `json:"part_of_speech,omitempty"`
}

Word : Information about a word for the custom model.

type Words

type Words struct {
	// The [Add custom words](#addwords) method accepts an array of `Word` objects. Each object provides a word that is to
	// be added or updated for the custom model and the word's translation.
	//
	// The [List custom words](#listwords) method returns an array of `Word` objects. Each object shows a word and its
	// translation from the custom model. The words are listed in alphabetical order, with uppercase letters listed before
	// lowercase letters. The array is empty if the custom model contains no words.
	Words []Word `json:"words" validate:"required"`
}

Words : For the [Add custom words](#addwords) method, one or more words that are to be added or updated for the custom model and the translation for each specified word.

For the [List custom words](#listwords) method, the words and their translations from the custom model.
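A sketch of sending such an array with the Add custom words operation follows; the client is assumed to be initialized as in the earlier sketches, the customization ID and translations are placeholders, and AddWords and AddWordsOptions are assumed to follow the same option and setter conventions as the other methods documented in this package.

// Sketch: add or update two words in a custom model in a single request.
// "your-customization-id" is a placeholder for a real customization ID (GUID).
addWordsOptions := &texttospeechv1.AddWordsOptions{}
addWordsOptions.SetCustomizationID("your-customization-id")
addWordsOptions.SetWords([]texttospeechv1.Word{
	{Word: core.StringPtr("NCAA"), Translation: core.StringPtr("N C double A")},
	{Word: core.StringPtr("gnocchi"), Translation: core.StringPtr("nyohkees")},
})

if _, err := textToSpeech.AddWords(addWordsOptions); err != nil {
	log.Fatal(err)
}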
