When Demographics Meets Interactive Digital Signage
IntuiLab (the world leader in touch-first experience creation without coding) and Quividi (the world leader in automated audience measurement for digital signage) have recently announced the integration of their products.
The integration of these two solutions enables the creation of smart, adaptive, data-rich digital signs. Quividi’s video analytics produces a host of real-time presence and demographic information, all of which can be used to dynamically adapt signage to the audience via IntuiFace’s trigger/action mechanism. And by correlating video analytics with the usage statistics generated by visitors’ on-screen activity, retailers gain actionable, data-based insight into the effectiveness of their content.
Readers of this blog know IntuiFace well. With it one can create touch-first experiences incorporating a wide variety of media formats, an open connection to cloud-based data and APIs (including the Internet of Things), a broad range of expressive capability, a powerful trigger/action (aka IF This Then That) mechanism and - with digital signage in mind - all of the core capabilities necessary for a successful deployment, including analytics.
Quividi provides a real-time description of each person appearing in the field of view of a camera placed on top of a display. Ten times per second, for each detected face, Quividi collects information including that face's position and distance from the screen, whether the person is currently looking, and their gender and age bracket. Note that faces are detected and qualified, not recognized, which is why the technology is referred to as Anonymous Video Analytics. In parallel with face detection, a summary of each viewing session is uploaded to the cloud every 30 minutes for statistical analysis. For more information about the Quividi solution, see www.quividi.com.
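To make the shape of this real-time stream concrete, here is a minimal sketch of a per-face observation record. All field names are hypothetical and purely illustrative; they do not reflect Quividi's actual API or schema.

```python
# Hypothetical sketch of the per-face, real-time record described above.
# Field names are illustrative assumptions, not Quividi's actual API.
from dataclasses import dataclass

@dataclass
class FaceObservation:
    face_id: int          # transient tracking ID, not an identity
    x: float              # horizontal position in the camera frame
    y: float              # vertical position in the camera frame
    distance_m: float     # estimated distance from the screen, in meters
    is_looking: bool      # whether the face is oriented toward the screen
    gender: str           # e.g. "male" / "female"
    age_bracket: str      # e.g. "child", "young adult", "adult", "senior"

# Observations like this arrive roughly 10 times per second per detected face.
obs = FaceObservation(7, 0.42, 0.55, 1.8, True, "female", "young adult")
```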
The combined solution opens new avenues for interactive apps and more engaging experiences for the user. Let’s review some of the scenarios that are now made possible, following the 3 stages of a user experience with an interactive application.
Using video analytics before someone starts touching the screen
- Pre-select a catalog based on the demographic profile. When a person approaches the screen, you can use their gender and age bracket to filter a catalog and start presenting content tailored to them. Personalized experiences have been shown to significantly increase audience engagement.
- Shy users? Get them to approach! If you detect that a person has been looking at the screen without moving for, say, 10 seconds, you could display a reinforcement message to entice them to approach and touch the screen.
- Change the tone of speech when a child is around. If a child is among the viewers at any moment, you might want to change the wording or graphics to appeal to that young audience. If an adult female and/or male is nearby as well, you might also assume it's a family.
- Trigger content when someone comes from a certain direction. If a person is coming from the right, you might infer that they have already been in a certain part of the venue, so you may want to display a different message than if they had come from the left.
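The pre-touch scenarios above can be pictured as a small set of trigger/action rules evaluated against the live audience data. The sketch below assumes each viewer is a dict with hypothetical `gender` and `age_bracket` keys; the content keys and rule ordering are invented for illustration, not taken from IntuiFace or Quividi.

```python
# Minimal sketch of pre-touch trigger/action rules like those above.
# Viewer fields and content keys are hypothetical assumptions.
def choose_pretouch_content(viewers, looking_seconds):
    """Return a content key based on who is in front of the screen."""
    if any(v["age_bracket"] == "child" for v in viewers):
        return "kid_friendly"                      # change tone for children
    if looking_seconds >= 10:
        return "come_closer_prompt"                # nudge shy onlookers
    if viewers:
        v = viewers[0]
        return f"catalog_{v['gender']}_{v['age_bracket']}"  # pre-filtered catalog
    return "attract_loop"                          # default looping content

viewers = [{"gender": "male", "age_bracket": "adult"}]
print(choose_pretouch_content(viewers, looking_seconds=3))
# -> catalog_male_adult
```

In a real deployment, these rules would be expressed directly as IntuiFace triggers and actions rather than code, but the underlying logic is the same.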
Using video analytics during an interactive session
- Improve the photobooth experience. Taking pictures of a face to be enhanced with add-on attributes (hat, moustache…) or pasted into postcards? You'd generally want the user in a certain position, indicated by a face silhouette on the screen or stickers on the ground. With the Quividi solution running on a second camera, you'll have the coordinates (X, Y and Z) of every single face, so you can start your animation at any moment without this cumbersome positioning pre-stage.
- Get users to do something special. To immerse an audience in your app, or just for fun, you may want to entice viewers to do something special: move their head a certain way, step back, bring three people side by side, etc. NOTE: Quividi will soon support emotion detection (e.g. detecting a smile or a frown), so there will be even more behaviors to follow!
- Reward those who dwell. Each person's presence and attention time is provided. Knowing that someone has been around (and interacting) for more than 30 seconds could, for instance, be used to reward that person with some information or a coupon.
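The dwell-reward idea above can be sketched as a small tracker over a timestamped stream of face IDs. This is a hypothetical illustration: the input shape, the 30-second threshold, and the reward action are assumptions, not the actual Quividi/IntuiFace interface.

```python
# Sketch of the dwell-reward idea, assuming a timestamped stream of
# transient face IDs (the data shape is a hypothetical assumption).
REWARD_AFTER_S = 30

def update_dwell(first_seen, rewarded, face_ids, now):
    """Track per-face presence time; return faces that just earned a reward."""
    newly_rewarded = []
    for fid in face_ids:
        first_seen.setdefault(fid, now)
        if fid not in rewarded and now - first_seen[fid] >= REWARD_AFTER_S:
            rewarded.add(fid)
            newly_rewarded.append(fid)   # e.g. show a coupon for this session
    return newly_rewarded

first_seen, rewarded = {}, set()
update_dwell(first_seen, rewarded, [1], now=0.0)
print(update_dwell(first_seen, rewarded, [1], now=31.0))
# -> [1]
```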
Using video analytics after a session
- Calculate the engagement funnel
With both the face statistics from Quividi and the interaction statistics (e.g. which items were selected) from IntuiFace, you will have a rich data set for counting conversions and analyzing the real impact of your application. You will know:
- the number of passers-by (= total universe)
- the number of people who glanced at the screen (= impressions)
- the number of those who looked for at least X seconds (= signs of interest)
- the number of those who touched the screen (= engagements)
- the number of those who completed the navigation up to a certain point (= completions)
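Computing the funnel amounts to counting sessions at each stage. The sketch below assumes each session record has already been merged from the two data sources into hypothetical boolean/numeric fields; the field names and the X = 5 seconds threshold are illustrative choices, not part of either product.

```python
# Sketch of the engagement funnel above. Session fields are hypothetical,
# assumed to be pre-merged from Quividi and IntuiFace statistics.
def engagement_funnel(sessions):
    return {
        "passers_by":  len(sessions),
        "impressions": sum(s["glanced"] for s in sessions),
        "interest":    sum(s["look_time_s"] >= 5 for s in sessions),  # X = 5 s
        "engagements": sum(s["touched"] for s in sessions),
        "completions": sum(s["completed"] for s in sessions),
    }

sessions = [
    {"glanced": True,  "look_time_s": 8, "touched": True,  "completed": True},
    {"glanced": True,  "look_time_s": 2, "touched": False, "completed": False},
    {"glanced": False, "look_time_s": 0, "touched": False, "completed": False},
]
print(engagement_funnel(sessions))
# -> {'passers_by': 3, 'impressions': 2, 'interest': 1, 'engagements': 1, 'completions': 1}
```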
- Calculate the total engagement time, broken down by stage. Having the presence time from the first glance lets you measure the duration of the pre-touch stage (i.e. how long it takes before someone touches the screen) and of each stage thereafter (e.g. how much time is spent on the opening screen, on the second one, etc.).
- Analyze the dominant profile, identifying popular navigations or spaces within the app. If you wonder which demographic profile visits a certain page most, or spends the most time with it, you can define a minimum dwell time for a given space, identify the profile of each person close to the screen for that length of time, then build a pie chart of the dominant demographics.
- Analyze preferred spaces by demographics. Conversely, to study the paths taken through your application, you might look for all viewing sessions longer than 10 seconds by young adult males and see which touch activities these people performed, then do the same for adult females, etc.
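The dominant-profile analysis above reduces to counting long-dwelling sessions per demographic group. This sketch assumes session records pairing a profile with a dwell time; all field names and the 10-second threshold are illustrative assumptions.

```python
# Sketch of the dominant-profile analysis. Session fields are hypothetical
# stand-ins for merged Quividi/IntuiFace session statistics.
from collections import Counter

def dominant_profile(sessions, min_dwell_s=10):
    """Count sessions per (gender, age_bracket) among long-dwelling viewers."""
    counts = Counter(
        (s["gender"], s["age_bracket"])
        for s in sessions if s["dwell_s"] >= min_dwell_s
    )
    return counts.most_common(1)[0] if counts else None

sessions = [
    {"gender": "male",   "age_bracket": "young adult", "dwell_s": 15},
    {"gender": "male",   "age_bracket": "young adult", "dwell_s": 12},
    {"gender": "female", "age_bracket": "adult",       "dwell_s": 20},
    {"gender": "female", "age_bracket": "adult",       "dwell_s": 5},
]
print(dominant_profile(sessions))
# -> (('male', 'young adult'), 2)
```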
Eager to try? Just ask for a free trial of Quividi's solution at www.quividi.com and then try out the IntuiFace-based experience - highlighted here - available for download in IntuiFace Composer and Player.
Can you think of other applications? We’d love to hear from you!