
By default an utterance will use the default voice for the current user locale. What is interesting, however, is that you can create voices appropriate for languages and locales different from the default. Creating a new voice is as simple as creating an instance of AVSpeechSynthesisVoice with the language code string of a supported language (the language property itself is read-only, so you pass the code to the +voiceWithLanguage: class method). At the time of writing, iOS 7.0.4 supports 36 distinct voices. This includes local language variants such as UK and US versions of English. I will leave it to you to judge how accurate some of these voices are.
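A minimal sketch of creating a voice and speaking an utterance (the text and language code here are my own examples):

    #import <AVFoundation/AVFoundation.h>

    // Create a synthesizer, an utterance and a UK English voice.
    AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc]
        initWithString:@"Hello world"];
    utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-GB"];
    [synthesizer speakUtterance:utterance];

If the language code is not supported, +voiceWithLanguage: returns nil and the utterance falls back to the default voice for the user's current language settings.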
Unfortunately Apple does not list all of the supported language codes in the class documentation, but it does mention that they need to be BCP-47 codes. Luckily you can retrieve the codes for the full list of supported languages with the AVSpeechSynthesisVoice class method +speechVoices. The list is shown below for reference with both the display name, for the English locale, and the BCP-47 language code you need when creating the voice. Note that the language codes use a hyphen, not an underscore.
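A quick way to dump that list yourself (the logging format is my own):

    // Log the display name and BCP-47 code of every available voice.
    for (AVSpeechSynthesisVoice *voice in [AVSpeechSynthesisVoice speechVoices]) {
        NSString *displayName = [[NSLocale currentLocale]
            displayNameForKey:NSLocaleIdentifier value:voice.language];
        NSLog(@"%@ (%@)", displayName, voice.language);
    }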
The following code snippet would, for example, set the utterance to use an Australian voice:
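    // A sketch of that snippet; utterance is assumed to be an
    // existing AVSpeechUtterance.
    utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-AU"];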
Update 13 July 2014: iOS 8 added support for the Hebrew language.

AVSpeechSynthesizerDelegate
If you need to interact with the speech synthesis you can set the delegate of the AVSpeechSynthesizer object and implement one or more of the optional AVSpeechSynthesizerDelegate protocol methods. Delegate methods allow you to respond when the synthesizer starts, pauses, resumes, finishes or cancels speaking an utterance. Refer back to the earlier class diagram for the names of each of the delegate methods. The most interesting method for me is -speechSynthesizer:willSpeakRangeOfSpeechString:utterance:, which is called just before each unit of text (mostly a word) is spoken by the synthesizer. The method call includes the character range in the string being spoken, making it easy to highlight the text as it is spoken.
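A sketch of how that highlighting might look, assuming the view controller is the synthesizer's delegate and has a UITextView outlet named textView showing the spoken string (the names are illustrative, not the project's actual code):

    - (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer
        willSpeakRangeOfSpeechString:(NSRange)characterRange
                           utterance:(AVSpeechUtterance *)utterance
    {
        // Colour the word that is about to be spoken.
        NSMutableAttributedString *text = [[NSMutableAttributedString alloc]
            initWithString:utterance.speechString];
        [text addAttribute:NSForegroundColorAttributeName
                     value:[UIColor redColor]
                     range:characterRange];
        self.textView.attributedText = text;
    }

Remember to set the synthesizer's delegate (to self here) before speaking, or none of the delegate methods will be called.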

To test out the speech APIs I have created a small Xcode project called SpeakEasy which you can find in my GitHub repository. It is a single view controller App with a simple user interface as shown in the screenshot below. The user types some text in the text field at the top of the screen, selects a language from the picker and then taps "Speak!" to hear the text spoken in the chosen voice. The speed and pitch of the speech can be modified, and as an extra bonus each word in the text is highlighted as it is spoken. It is up to the user to select a language appropriate to the text.

There is a lot of setup code in the view controller that is not directly related to the speech synthesis APIs, so I will mostly skip over it. A UIPickerView is used to allow the user to select from the list of available speech synthesis voices. To populate the picker view I use two properties:
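A guess at what those two properties look like (the names and types are assumptions, not necessarily the project's actual declarations):

    // The available voices, used as the picker's data source, and the
    // localized display names shown in the picker rows (names assumed).
    @property (nonatomic, strong) NSArray *voices;         // AVSpeechSynthesisVoice objects
    @property (nonatomic, strong) NSArray *languageNames;  // display name for each voice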

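The speech itself is triggered from the -(IBAction)speak:(UIButton *)sender action. A minimal sketch, assuming outlets and properties named textField, rateSlider, pitchSlider, selectedVoice and synthesizer:

    - (IBAction)speak:(UIButton *)sender
    {
        // Build an utterance from the typed text and the picker selection.
        AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc]
            initWithString:self.textField.text];
        utterance.voice = self.selectedVoice;

        // Sliders assumed to cover the valid ranges: rate between
        // AVSpeechUtteranceMinimumSpeechRate and AVSpeechUtteranceMaximumSpeechRate,
        // pitchMultiplier between 0.5 and 2.0.
        utterance.rate = self.rateSlider.value;
        utterance.pitchMultiplier = self.pitchSlider.value;

        [self.synthesizer speakUtterance:utterance];
    }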