
ReComment: A speech-based Recommender System

Most of you will probably know that as my “day job”, I am a student currently pursuing my master’s degree in computer science. This, of course, also entails some original research.
In this blog post, I will describe both one of these efforts and a practical use case of Simon’s upcoming dictation features, all conveniently rolled up into one project: ReComment.

Outline

A recommender system tries to aid users in selecting, e.g., the best product, the optimal flight, or, in the case of a dating website, even the ideal partner – all specifically tailored to the user's needs. Most of you have probably already used a recommender system at some point: Who hasn't ever clicked on one of the products in the “Customers Who Bought This Item Also Bought…” section on Amazon?

The example from Amazon uses what is conventionally called a “single-shot” approach: based on the system's information about the user, a set of products is suggested. In contrast, “conversational” recommender systems actively interact with the user, thereby refining their understanding of the user's preferences incrementally.
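To make this distinction a bit more tangible, here is a tiny C++ sketch of such a conversational cycle: the system proposes its current top candidate, the user critiques one attribute, and the candidate pool is narrowed accordingly before the next proposal. The products, attributes and critiques are made up purely for illustration and do not reflect ReComment's actual product domain or recommendation strategy.

// Hypothetical sketch of a conversational recommendation cycle.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Product {
    std::string name;
    double price;   // in euros
    double weight;  // in grams
};

int main() {
    // Toy candidate pool; the current recommendation is simply the front item.
    std::vector<Product> candidates = {
        {"Camera A", 299.0, 450.0},
        {"Camera B", 249.0, 400.0},
        {"Camera C", 199.0, 520.0},
        {"Camera D", 229.0, 350.0},
    };

    // Simulated user critiques; in a real system these would come from
    // the user interface (or, in ReComment's case, from speech input).
    const std::vector<std::string> critiques = {"cheaper", "lighter"};

    for (const auto &critique : critiques) {
        const Product current = candidates.front();
        std::cout << "Recommended: " << current.name << "\n";

        // Narrow the pool to items that satisfy the critique relative
        // to the currently recommended product.
        candidates.erase(
            std::remove_if(candidates.begin(), candidates.end(),
                           [&](const Product &p) {
                               if (critique == "cheaper") return p.price >= current.price;
                               if (critique == "lighter") return p.weight >= current.weight;
                               return false;  // unknown critique: keep everything
                           }),
            candidates.end());

        if (candidates.empty()) {
            std::cout << "No product satisfies all critiques.\n";
            return 0;
        }
    }
    std::cout << "Final recommendation: " << candidates.front().name << "\n";
}

With the toy data above, this prints the progressively refined recommendations and ends with the one product that satisfies both critiques.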

Such conversational recommender systems have been shown to work really well at finding great items, but obviously require more effort from the individual user than a single-shot system. Many different interaction methods have been proposed to keep this effort to a minimum while still finding optimal products in a reasonable time. However, these two goals (low user effort and fast convergence) are often contradictory: the less information the user provides, the less information is available to the recommendation strategy.

In our research, we intend to sidestep this problem, which is traditionally combated with increasingly complex recommendation strategies, and instead make it easier for the user to provide complex information to the system: ReComment is a speech-based approach to building a more efficient conversational recommender system.

What this means exactly is probably best explained with a short video demonstration.

(The experiment was conducted in German to find more native-speaking testers in Austria; be sure to turn on subtitles!)

Implementation

Powering ReComment is Simond with the SPHINX backend, using a custom-built German speech model. The NLP layer uses relatively straightforward keyword spotting to extract meaning from the user's feedback.
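To give a rough idea of what such a keyword-spotting layer does, here is a small, self-contained C++ sketch: the recognized transcript is scanned word by word against a table of keywords, each of which maps to a product attribute and a direction. The keyword list, attribute names and example transcript are illustrative stand-ins, not ReComment's actual vocabulary.

// A minimal keyword-spotting sketch, roughly in the spirit of the NLP
// layer described above; vocabulary and attributes are invented.
#include <iostream>
#include <map>
#include <sstream>
#include <string>

struct Critique {
    std::string attribute;  // which product attribute to change
    int direction;          // +1 = more/higher, -1 = less/lower
};

int main() {
    // Map recognized keywords to their meaning for the recommender.
    const std::map<std::string, Critique> keywords = {
        {"billiger", {"price",  -1}},   // "cheaper"
        {"teurer",   {"price",  +1}},   // "more expensive"
        {"leichter", {"weight", -1}},   // "lighter"
    };

    // Pretend this transcript came from the speech recognizer:
    // "that's good, but please a bit cheaper and lighter"
    const std::string transcript = "das ist gut aber bitte etwas billiger und leichter";

    // Scan the transcript word by word and report matching keywords.
    std::istringstream words(transcript);
    std::string word;
    while (words >> word) {
        auto hit = keywords.find(word);
        if (hit != keywords.end()) {
            std::cout << "Critique: " << hit->second.attribute
                      << (hit->second.direction > 0 ? " up" : " down") << "\n";
        }
    }
}

For the example transcript, this spots “billiger” and “leichter” and turns them into two critiques the recommendation strategy can act on.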

A pilot study was conducted with 11 users to confirm and extend the choice of recognized keywords and grammar structures. The language model was modified to heavily favor keywords recognized by ReComment during decoding. Recordings of the users from the pilot study were manually annotated and used to adapt the acoustic model to the local dialect.
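As a deliberately simplified illustration of the language model bias: conceptually, the keywords' counts are boosted before the unigram probabilities are estimated, so that the decoder favors them. The words, counts and boost factor below are made up; the actual adaptation was of course performed on the SPHINX language and acoustic models with the appropriate tooling.

// Toy illustration of biasing a unigram language model towards keywords:
// keyword counts are multiplied by a boost factor before the
// probabilities are estimated.
#include <cmath>
#include <iostream>
#include <map>
#include <set>
#include <string>

int main() {
    std::map<std::string, double> counts = {
        {"billiger", 12}, {"leichter", 8}, {"das", 950}, {"ist", 870}, {"gut", 420},
    };
    const std::set<std::string> keywords = {"billiger", "leichter"};
    const double boost = 10.0;  // how strongly keywords are favored

    double total = 0.0;
    for (auto &entry : counts) {
        if (keywords.count(entry.first)) entry.second *= boost;
        total += entry.second;
    }

    // Print the resulting unigram log-probabilities.
    for (const auto &entry : counts) {
        std::cout << entry.first << "\t"
                  << std::log10(entry.second / total) << "\n";
    }
}

In the real system this biasing happens inside the decoder's language model rather than in a toy unigram table, but the intent is the same: the recognized keywords become much more likely hypotheses.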

ReComment itself is built in pure Qt to run on Linux and the BlackBerry PlayBook.

Results and Further Information

To evaluate the performance of ReComment, we conducted an empirical study with 80 participants, comparing the speech-based interface to a traditional mouse-based interface. We found that users not only reported higher overall satisfaction with the speech-based system, but also found better products in significantly fewer interaction cycles.

The research was published at this year's ACM Recommender Systems conference. You can find the presentation and the full paper as a PDF in the publications section on my homepage.
The code for the developed prototype, including both the speech-based and the mouse-based interface, has been released as well.


Peter Grasch