‘Siri, what’s the weather like in New York? Siri, where’s the nearest library? Siri, do you love me?’
Users of Apple’s products in recent years will be very familiar with the artificial intelligence (AI) assistant Siri. Siri uses voice recognition technology to filter speech for keywords, then searches its database of responses to give the user the most relevant answer. But how do developers ensure that the linguistic abilities of AI are robust enough to provide genuine assistance to users and learners?
How does Siri understand us now?
Highly sophisticated algorithms allow Siri to parse natural human language. Speech is inconsistent and complex, and a great deal of work goes into enabling these systems to understand what we say. We speak to Siri as if it were human – we ask ‘What time is the next train?’, not ‘Train time?’ The same shift appears in text search trends: SEO companies now target long-tail keywords – whole phrases or sentences – rather than single keywords. Searching ‘why isn’t my cat eating’ returns far more relevant results than ‘cat eating’ alone, surfacing forums, specialist vet pages and more.
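A toy sketch can show why long-tail queries narrow results so effectively. This is not Siri’s or any search engine’s actual pipeline – real systems use far richer ranking models – just a minimal term-overlap score over invented example documents:

```python
# Toy illustration: rank documents by how many query terms they contain.
# A longer, more specific query gives the scorer more terms to match,
# so it can separate the genuinely relevant document from the rest.

def score(query, document):
    """Count distinct query terms that also appear in the document."""
    terms = set(query.lower().split())
    words = set(document.lower().split())
    return len(terms & words)

docs = [
    "cat eating habits and healthy diets",
    "reasons why a cat isn't eating and when to see a vet",
    "best cat toys for indoor play",
]

# The short query can't distinguish the first two documents (both score 2),
# but the long-tail query clearly prefers the second (score 4 vs 2).
long_query = "why isn't my cat eating"
best = max(docs, key=lambda d: score(long_query, d))
print(best)  # -> "reasons why a cat isn't eating and when to see a vet"
```

Real ranking also weights rare terms more heavily (as in TF-IDF), but the principle is the same: more specific phrasing gives the system more signal to work with.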
AI for performance support
AI can be very useful as part of a performance support programme. Making a searchable database of digital learning resources available on smartphones means a voice recognition AI system can take you straight to what you need – and even help you find resources you didn’t know you needed. As a learning tool, AI needs a comprehensive understanding of the phrases relating to each topic in its database, as learners may not know exactly what they’re looking for. If you ask, ‘How do I help someone who is choking?’, natural language processing allows the system to retrieve a short video showing you how to administer abdominal thrusts to dislodge the food blocking the windpipe. In a stressful situation, this is obviously far easier than recalling the term ‘abdominal thrusts’ – assuming you knew it in the first place – and much faster than typing out your query.
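One simple way to build that kind of phrase coverage is to tag each resource with everyday phrasings as well as technical terms. The sketch below is hypothetical – the resource names, phrases and matching rule are all invented for illustration – but it shows how a learner who has never heard of ‘abdominal thrusts’ can still land on the right video:

```python
import re

# Hypothetical performance-support lookup: each resource is tagged with
# plain-language phrasings alongside its technical name, so a natural
# question can be matched against any of them.

RESOURCES = {
    "video_abdominal_thrusts": [
        "abdominal thrusts",
        "help someone who is choking",
        "food stuck in throat",
    ],
    "video_cpr_basics": [
        "cpr chest compressions",
        "someone has stopped breathing",
    ],
}

def tokens(text):
    """Lowercase and strip punctuation so 'choking?' matches 'choking'."""
    return set(re.findall(r"[a-z']+", text.lower()))

def find_resource(query):
    """Return the resource whose best-matching phrase overlaps the query most."""
    q = tokens(query)
    best, best_score = None, 0
    for name, phrases in RESOURCES.items():
        phrase_score = max(len(q & tokens(p)) for p in phrases)
        if phrase_score > best_score:
            best, best_score = name, phrase_score
    return best

print(find_resource("How do I help someone who is choking?"))
# -> video_abdominal_thrusts
```

A production system would use trained language models rather than hand-written phrase lists, but the design choice is the same: index resources by the questions people actually ask, not only by their official titles.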
The future of AI for e-learning
In the future, we could also see AI used in more traditional e-learning courses. In customer service training, for instance, a voice recognition system could roleplay with learners to test how they deal with difficult customers. The AI would respond intelligently to create a simulation environment for the learner – a more immersive experience than watching actors play out the same scenario in a video. However, this relies on a very human level of interaction from AI systems, which means comprehensive research into natural language and how it can be better understood by computers. For instance, some call centres have started assessing the personality types of customers from the first few sentences of a conversation in order to ‘stream’ them through to an employee with the same personality type – could computers one day achieve something similar through their comprehension of language?
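In its crudest form, that ‘streaming’ idea amounts to classifying a caller’s style from their opening words. The sketch below is deliberately simplistic – the style categories and indicator words are invented, and real systems would use trained text-classification models rather than hand-written word lists – but it illustrates the shape of the problem:

```python
# Toy sketch of the call-centre "streaming" idea: guess a caller's
# communication style by counting style-indicator words in their
# opening sentence. Categories and word lists are invented examples.

INDICATORS = {
    "direct": {"just", "now", "immediately", "quickly", "need"},
    "expressive": {"love", "amazing", "really", "honestly", "great"},
}

def guess_style(opening):
    """Return the style whose indicator words best match the opening line."""
    words = set(opening.lower().split())
    scores = {style: len(words & terms) for style, terms in INDICATORS.items()}
    return max(scores, key=scores.get)

print(guess_style("I just need this sorted immediately"))  # -> direct
```

Even this crude version hints at why the problem is hard: word counts capture none of the tone, context or sarcasm that a human agent picks up instantly, which is exactly the gap natural language research is trying to close.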
So what’s next?
New AI assistants are being created all the time; the next major player to launch will be Microsoft’s Cortana. The ability of these technologies to understand language is improving constantly, finding ever more ways to pin down the effectively infinite variety of human utterances. It will be interesting to see how AI technology is used in the future, and whether the sophistication of natural language processing will keep up with our increasing demands. How do you expect to see AI used in e-learning in the future? For ideas about more 2014-friendly performance support, why not check out our top 25 tips for performance support on the Knowledge Base?