User Profiling and Context Understanding for Adaptive and Personalised Museum Experiences
In this article we present an integrated multimedia system for the passive and active profiling of visitors in the Donatello room of the Bargello Museum in Florence. The system is composed of two Computer Vision powered modules: 1) the first, MNEMOSYNE, based on passive observation of visitors through cameras, builds a list of the artworks of interest to each visitor; these preferred artworks are then used to deliver personalised content and targeted recommendations of other items of interest on an interactive table, exploiting user re-identification; 2) the second, SeeForMe, is a wearable embedded system featuring an application that augments the functions of the well-known museum audio guide. The embedded system performs real-time artwork recognition on images acquired by a micro-camera, exploiting a Convolutional Neural Network; furthermore, it is context-aware, recognising user behaviours such as walking, talking or being distracted and reacting accordingly.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.
DigitCult is published under a Creative Commons Attribution 3.0 Licence.
Under the CC-BY licence, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute and/or copy their contribution, provided the work is properly attributed to its author. No further permission is required from either the author or the journal board.