Search Interaction Optimization
I completed an industrial Ph.D. program in the R&D department of Unister GmbH, which was developing BlueKiwi, a semantic search engine, at the time. The goal of my project was to develop a new methodology for usability evaluation and optimization of search engines that is more automated than traditional user studies, yet still effective.

Its title was Search Interaction Optimization: A Human-Centered Design Approach.
Unister GmbH
Team of roughly 30
Jun 2012–Dec 2014
What I was/did
Industrial Ph.D. Student
User Research
System Design + Architecture
Project Management
How the process turned out
literature review
competitive analysis
field research
Relevance Prediction
iterative implementation (2×)
large-scale data analysis (2×)
Usability Evaluation
expert interviews
iterative implementation (2×)
user study (2×)
a/b testing
Usability Optimization
competitive analysis
expert inspections (2×)
user study
Finding a Ph.D. Project.
My first task after joining Unister was to find a project I could write my Ph.D. thesis about and pitch it internally as well as to Chemnitz University of Technology and the SAB (Sächsische AufbauBank), who granted my scholarship. I started with a literature review and competitive analysis. After that, because of my interest in HCI, I spent some time doing field research at various teams within Unister—front-end, UI design, usability testing, and data analytics—where I interviewed people and discovered that traditional usability evaluation through user studies was virtually absent. Instead, optimization was mostly based on split testing and conversions such as the number of clicked ads. Clearly, there was the need for a new approach that would combine split testing with an effective metric for usability. I concluded this phase by synthesizing two personas (Finn, the searcher and Rey, the developer) and four scenarios from my findings, which would form the basis for my Ph.D. thesis.
I conducted expert interviews to derive a new usability questionnaire from the existing ISO definition of usability, which the experts broke down into lower-level features.
Predicting the Relevance of Search Results from User Interactions.
In search engines, relevance (or informativeness) is a crucial usability factor. I designed and developed TellMyRelevance! (TMR), a pipeline that utilizes a variety of user interactions (e.g., cursor speed and the length of the cursor trail) to determine the relevance of search results. I was allowed to collect more than 30 GB of interaction data on two travel-booking websites, which I used to train the machine-learning models for relevance prediction. Compared to a state-of-the-art industry solution, TMR performed considerably better on all datasets in my large-scale data analysis. In a second iteration, TMR was extended with streaming capabilities and incremental learning, and its predictions were ultimately incorporated into BlueKiwi.
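The core idea of the pipeline—turn raw cursor events into features, then learn a relevance classifier from labeled examples—can be sketched as follows. This is a minimal illustration, not TMR's actual implementation: the feature set is reduced to trail length and mean speed, and the learned models are stood in for by a toy nearest-centroid classifier.

```python
import math

def trail_features(events):
    """Compute cursor-trail features from (t, x, y) samples:
    total trail length (px) and mean cursor speed (px/s)."""
    trail = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(events, events[1:]):
        trail += math.hypot(x1 - x0, y1 - y0)
    duration = events[-1][0] - events[0][0]
    speed = trail / duration if duration > 0 else 0.0
    return trail, speed

class CentroidRelevanceModel:
    """Toy stand-in for the learned models: a result is predicted
    relevant if its feature vector lies closer to the centroid of the
    'relevant' training examples than to the 'irrelevant' one."""

    def fit(self, X, y):
        def centroid(rows):
            return [sum(col) / len(rows) for col in zip(*rows)]
        self.c_rel = centroid([x for x, lbl in zip(X, y) if lbl == 1])
        self.c_irr = centroid([x for x, lbl in zip(X, y) if lbl == 0])
        return self

    def predict(self, x):
        return 1 if math.dist(x, self.c_rel) <= math.dist(x, self.c_irr) else 0

# Example: a short cursor trace yields the two features, which feed the model.
trail, speed = trail_features([(0.0, 0, 0), (0.5, 30, 40), (1.0, 60, 80)])
model = CentroidRelevanceModel().fit([[100, 100], [400, 800]], [1, 0])
```

In the real pipeline the feature vocabulary is much richer and the models are trained offline on the logged interaction data, then queried per result.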
WaPPU enables A/B tests based on a usability score and the seven items of my new usability questionnaire. The traffic light on the right indicates whether interface B is better or worse with statistical significance.
Usability-based Split Testing: A New Methodology.
Next, I applied the insights gained from relevance prediction to usability in general. For this, I first had to develop a new usability questionnaire with items (informativeness, understandability, readability, etc.) that are suitable for correlation with user interactions. To this end, I reviewed existing best practices and guidelines and interviewed nine usability professionals. Then, I designed and developed WaPPU, a new A/B testing tool based on a usability score rather than clicks on ads. WaPPU collects interaction data on two versions of the same website, trains machine-learning models, predicts usability as a single score as well as per-item scores from my new questionnaire, and then determines the better interface. I used a first prototype in a user study to validate my new questionnaire and refine the system. In a second study with more than 80 participants, who compared BlueKiwi's results page with a deliberately worse version of it, WaPPU correctly identified even subtle differences in usability.
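The "traffic light" decision boils down to a significance test over the per-user usability scores predicted for each interface. A minimal sketch of that step, using a two-sample z-test (a normal approximation, reasonable for the large samples an A/B test collects; WaPPU's actual statistics may differ):

```python
import math
from statistics import mean, stdev

def compare_usability(scores_a, scores_b, alpha=0.05):
    """Compare per-user predicted usability scores for interfaces A and B.

    Returns 'A' or 'B' if one interface scores significantly higher
    (two-sided z-test at level alpha), or 'tie' otherwise."""
    na, nb = len(scores_a), len(scores_b)
    se = math.sqrt(stdev(scores_a) ** 2 / na + stdev(scores_b) ** 2 / nb)
    if se == 0:
        return "tie"
    z = (mean(scores_b) - mean(scores_a)) / se
    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    if p >= alpha:
        return "tie"
    return "B" if z > 0 else "A"
```

The same test can be run per questionnaire item, so the tool can report not just *which* interface is better but *on which* usability dimensions.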
The nine optimizations proposed by S.O.S. for BlueKiwi's search results page. The usability score of the website improved from 59.9% to 67.5% after the redesign.
Optimizing BlueKiwi.
Based on a competitive analysis of existing search engines and two rounds of expert inspections, WaPPU was then extended with a catalog of best practices to form S.O.S., a system that automatically proposes usability optimizations based on the usability measurements, such as "Your site is not well readable; you should consider changing your font size." Finally, in another user study, S.O.S. was applied to BlueKiwi's results page and informed a redesign that yielded significantly better usability.
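Conceptually, S.O.S. maps low per-item usability scores to entries in its catalog of best practices. A minimal sketch of that rule lookup—the item names follow the questionnaire described above, but the threshold and proposal wording here are invented for illustration:

```python
# Hypothetical excerpt of a best-practice catalog: each questionnaire
# item maps to an optimization proposal shown when that item scores low.
CATALOG = {
    "readability": "Your site is not well readable; consider a larger font size.",
    "informativeness": "Results seem uninformative; show richer result snippets.",
    "understandability": "Labels may be unclear; use plainer wording.",
}

def propose_optimizations(item_scores, threshold=0.6):
    """Return catalog proposals for every measured item below threshold."""
    return [CATALOG[item]
            for item, score in sorted(item_scores.items())
            if item in CATALOG and score < threshold]

# Example: only 'readability' falls below the threshold.
proposals = propose_optimizations({"readability": 0.4, "informativeness": 0.9})
```

In the real system, the proposals were grounded in the competitive analysis and expert inspections rather than hard-coded strings, but the measurement-to-proposal mapping is the same shape.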

The research paper about S.O.S. won a Best Paper Honorable Mention Award at the 2015 ACM Conference on Human Factors in Computing Systems.