For Google Cloud Next and Devoxx France, I’m working on a new talk showing how to build a conference assistant that you’ll be able to ask questions like “what is the next talk about Java”, “when is Guillaume Laforge speaking”, “what is the topic of the ongoing keynote”, etc.
For that purpose, I’m developing the assistant with API.AI, a “conversational user experience platform” recently acquired by Google. It lets you define various “intents”, which correspond to the kinds of questions or sentences a user can say, and various “entities”, which relate to the concepts being dealt with (in my example, I have entities like “talk” or “speaker”). API.AI lets you define example sentences pretty much in free form, derives which entities appear in those sentences, and is then able to understand sentences beyond the ones you’ve given it. Pretty clever machine learning and natural language processing at play.

On top of that, it supports several spoken languages (English, French, Italian, Chinese and more) and offers integrations with key messaging platforms like Slack, Facebook Messenger, Twilio, or Google Home. It also provides various SDKs so you can integrate it easily in your website, mobile application, or backend code (Java, Android, Node, C#…).
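To give a feel for how you’d talk to such an agent from backend code, here’s a minimal Java sketch that sends a user’s question to API.AI over HTTP. It assumes the v1 REST /query endpoint and uses placeholder values for the client access token, API version parameter, and session id; none of these come from this post, and the SDKs mentioned above would wrap this kind of call for you.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class AgentQuery {
    public static void main(String[] args) throws Exception {
        // Agent's client access token, read from the environment (placeholder)
        String token = System.getenv("CLIENT_ACCESS_TOKEN");

        // Assumed v1 query endpoint; the "v" parameter pins the protocol version
        URL url = new URL("https://api.api.ai/v1/query?v=20150910");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + token);
        conn.setRequestProperty("Content-Type", "application/json; charset=utf-8");
        conn.setDoOutput(true);

        // The question a conference attendee might ask the assistant
        String body = "{\"query\": \"When is Guillaume Laforge speaking?\","
                    + " \"lang\": \"en\", \"sessionId\": \"demo-session-1\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The JSON response carries the matched intent, the extracted entities
        // (e.g. a "speaker" parameter), and the text the assistant should answer with
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            reader.lines().forEach(System.out::println);
        }
    }
}
```

In a real integration you’d parse the JSON to pull out the intent name and its parameters, and hand them to your own backend (for instance, the conference schedule) to build the answer.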