As Siri, Cortana, Alexa, Bixby, Google Assistant, and others like them become more enmeshed in our digital and physical lives, there is still a huge gap in practical UX job skills and training for this alternate reality to pixel-based UI design. For too long, companies have treated digital design skills as must-haves while overlooking the new skills required for voice products. As a result, there is a huge gap: customers expect to interact with your product or service in newer ways, such as voice user interfaces. Rather than treating interaction design, product design, human-computer interaction, and cognitive psychology as yesterday’s news feed, we should use them as a platform to launch new opportunities. They are now part of the foundation for new Voice UX job skills: interconnected product design, dialogue writing skills, and software-based voice platforms.
Interconnected Product Design
Similar to the old discussions of "Do you even need a mobile app?", there needs to be a business case based on the value proposition to user communities. Equally important is figuring out the context of use and whether users will want to talk to your product. Embracing the intersections of a voice-enabled product will make a difference, and the benefits far outweigh the risks of adding a voice user interface to your product. To start, the opportunities include localization and personalization, multi-device sessions, microphone-driven discovery that leads to engagement, and winning over users by offering more than one channel of access. This is unexplored territory.
Dialogue Writing Skills
Here is where borrowing from screenwriting techniques to map out conversation flows really shines. When designing a voice user interface, the conversations must be designed around solving real-world problems. Because conversations are unpredictable, they need to be mapped out and planned in advance: create sample dialogue snippets and flow diagrams.
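To make "sample dialogue snippets" concrete, here is a minimal sketch of how a happy path and an error path for a hypothetical calendar feature might be drafted as turn-by-turn dialogue. The feature, utterances, and responses are illustrative only, not taken from any real product.

```python
# Sample dialogue snippets for a hypothetical calendar feature,
# written as (speaker, utterance) turns. All content is illustrative.
happy_path = [
    ("User", "What's on my calendar tomorrow?"),
    ("VUI", "You have two events tomorrow: standup at 9 AM and lunch with Sam at noon."),
    ("User", "Move the lunch to 1 PM."),
    ("VUI", "Done. Lunch with Sam is now at 1 PM tomorrow."),
]

# Error paths deserve the same screenwriting care as the happy path.
error_path = [
    ("User", "Move my dentist appointment."),
    ("VUI", "I couldn't find a dentist appointment. Which event did you mean?"),
]

def print_dialogue(turns):
    """Print a dialogue snippet as a readable script."""
    for speaker, line in turns:
        print(f"{speaker}: {line}")

print_dialogue(happy_path)
```

Drafting snippets like these, one per path through the flow diagram, forces you to hear the conversation before any code is written.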
A crucial list of dialogue writing skills for Voice UX would include:
- Real world talk
- Iterate the conversation
- Identify patterns in words
- Smart dialogue
Aside from facilitating information exchanges, the conversation should inform or satisfy a higher purpose for using the product. For the user, this comes through in the personality the VUI projects via the content of its responses for errors, status updates, greetings, and so on. Use flow-mapping techniques to capture all the paths a user can take when speaking to your product, and group the phrases or questions users will ask by feature (search, financial transactions, calendar, etc.).
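One way to start that grouping exercise is to collect sample phrases under each feature and check which feature a new utterance falls into. The sketch below uses a deliberately naive word-overlap match; the feature names and phrases are made-up examples, and a real platform's NLU would do this far more robustly.

```python
# Hypothetical grouping of user phrases by feature. Names and phrases
# are illustrative only, not drawn from any specific platform.
PHRASES_BY_FEATURE = {
    "search": ["find flights to Austin", "look up my last order"],
    "financial_transactions": ["pay my electric bill", "transfer funds to savings"],
    "calendar": ["what's on my schedule today", "add a meeting at 3 PM"],
}

# Tiny words that would cause spurious matches between features.
STOPWORDS = {"a", "an", "my", "to", "the", "at", "on", "what's"}

def match_feature(utterance):
    """Naive keyword match: return the first feature whose sample
    phrases share a content word with the utterance, else None."""
    words = set(utterance.lower().split()) - STOPWORDS
    for feature, phrases in PHRASES_BY_FEATURE.items():
        for phrase in phrases:
            if words & (set(phrase.lower().split()) - STOPWORDS):
                return feature
    return None
```

Even a toy matcher like this makes gaps visible: utterances that match no feature are exactly the phrases your grouping has not yet covered.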
The basics of voice/conversational platforms are universal. Instead of pushing pixels, we now have to capture what users mean when they speak to your product or service’s voice platform, while behind-the-scenes APIs serve appropriate responses. On all current voice platforms there is one input, what the user says to your product, and one output, your product’s response:
- User speaks to the device, which passes the speech input to the Voice Platform API
- Voice Platform API takes the speech input, converts it to text, and submits it to machine learning algorithms to figure out what request to send to the Voice Platform logic
- Voice Platform logic, informed by your conversational flow diagrams, returns information back through the device as speech to the user.
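The three steps above can be sketched as a single round trip. Every function here is a placeholder: real platforms (Alexa Skills Kit, Actions on Google, etc.) supply their own SDKs for speech recognition and intent classification, and the hard-coded transcription and balance are stand-ins so the flow runs end to end.

```python
# Minimal sketch of the voice-platform round trip. All names and
# return values are placeholders, not a real platform's API.

def speech_to_text(audio):
    """Step 1 stand-in: the platform's speech recognizer."""
    return "what's my account balance"  # pretend transcription

def classify_intent(text):
    """Step 2 stand-in: NLU mapping the text to an intent request."""
    if "balance" in text:
        return {"intent": "check_balance", "slots": {}}
    return {"intent": "fallback", "slots": {}}

def handle_intent(request):
    """Step 3: your product's logic, shaped by your flow diagrams."""
    if request["intent"] == "check_balance":
        return "Your balance is 42 dollars."  # a real API call would go here
    return "Sorry, I didn't catch that. Could you rephrase?"

def voice_round_trip(audio):
    """Input (user speech) in, output (spoken response) back."""
    text = speech_to_text(audio)
    request = classify_intent(text)
    return handle_intent(request)  # sent back to the device as speech
```

The fallback branch matters as much as the happy path: it is where the error-handling dialogue you wrote earlier gets spoken.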
Make It Happen
Again, with so much opportunity and so many easy wins, what’s stopping you from adding voice capability to your product or service? What’s stopping you from breaking into this field? Start learning today!