16: THE FUTURE

Signs - Signs Smart Tool - Signs JPG
Signs - Signs Smart Tool - Signs MOV 2m:02s
Signs - Signs Smart Tool - Signs MP4 1m:45s


Title of Entry: Signs
Brand: Signs
Product/Service: Signs Smart Tool
Client: German Youth Association of People with Hearing Loss
Entrant Company: MRM//McCANN
Creative Agency: MRM//McCANN & McCANN
Judging URL: https://projectsigns.org/
Chief Creative Officer: Sebastian Hardieck
Account Manager: Sebastian Klein
Date of Release: 2019-04-25
Notes:
Background: There are over 2 billion voice-enabled devices across the globe. Voice assistants are changing the way we shop, search, communicate and even live. At least for most people. But what about those without a voice? What about those who cannot hear? According to the World Health Organization, around 466 million people worldwide have disabling hearing loss. Project SIGNS was developed to raise awareness for inclusion in the digital age and to facilitate access to new technologies.

Idea: SIGNS is the first smart voice assistant solution for people with hearing loss worldwide. It is an innovative smart tool that recognizes and translates sign language in real time and then communicates directly with a selected voice assistant service (e.g. Amazon Alexa, Google Assistant or Microsoft Cortana). SIGNS is reinventing voice, one gesture at a time. Many people with hearing loss use their hands to speak, and that is all they need to talk to SIGNS. How's the weather tomorrow? Change lights to blue. Find an Italian restaurant. Just speak, and SIGNS will answer.

Strategy: Many people with hearing loss use their hands to speak. This is their natural language; their hands are their voice. However, voice assistants use natural language processing to decipher and react only to audible commands: no sound means no reaction. SIGNS bridges the gap between deaf people and voice assistants by recognizing gestures and communicating directly with existing voice assistant services (e.g. Amazon Alexa, Google Home or Microsoft Cortana).

Execution: SIGNS uses an integrated camera to recognize sign language in real time and communicates directly with a voice assistant. The system is based on the machine learning framework Google TensorFlow. The output of a pre-trained MobileNet is used to train several KNN classifiers on gestures. The recognition step calculates the likelihood of each gesture recorded by the webcam and converts the result into text. The resulting sentences are translated into conventional grammar and sent to a cloud-based service that generates speech from them. In other words, the gestures are converted into a data format (text to speech) that the selected voice assistant understands; in this case, Amazon Voice Service (AVS) is shown. AVS responds with metadata and audio data, which in turn is converted by a cloud service into text (speech to text). The result is displayed. SIGNS works on any browser-based operating system that has an integrated camera and can be connected to a voice assistant.
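The Execution notes describe the recognition step as a pre-trained MobileNet whose embeddings feed several KNN classifiers, with each prediction scored by a likelihood. The project's actual code is not included here; purely as an illustration of that classification step, a k-nearest-neighbour vote over embedding vectors might be sketched as follows (the embedding values, labels and function names are invented for this sketch, not taken from SIGNS):

```python
import math
from collections import Counter

def cosine_distance(a, b):
    """Distance between two embedding vectors (0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def knn_predict(examples, query, k=3):
    """examples: list of (embedding, label) pairs recorded during training.
    Returns (label, likelihood), where likelihood is the share of the k
    nearest neighbours that voted for the winning label."""
    neighbors = sorted(examples, key=lambda ex: cosine_distance(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    label, count = votes.most_common(1)[0]
    return label, count / k

# Toy "trained" gestures; in the real tool these vectors would be
# MobileNet embeddings of webcam frames captured in the browser.
examples = [
    ([1.0, 0.0, 0.0], "WEATHER"),
    ([0.9, 0.1, 0.0], "WEATHER"),
    ([0.0, 1.0, 0.0], "LIGHT"),
    ([0.0, 0.9, 0.2], "LIGHT"),
]
label, likelihood = knn_predict(examples, [0.95, 0.05, 0.0], k=3)
# label is "WEATHER" with a likelihood of 2/3 (two of three neighbours agree)
```

The recognized label would then be mapped to text, translated into conventional grammar, and handed to the text-to-speech stage described above.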
Creative Director: Mark Hollering, Jan Portz
Designer: Jawad Saleem
Art Director: Nico Koehler, Irini Sidira
Other Credits: Executive Creative Director: Mark Biela, Dushan Drakalski
Other Credits: Vice President Global, Product Innovation & LAB13: Dominik Heinrich
Other Credits: Senior Copywriter / Concept & LAB13: Chris Endecott
Other Credits: Senior Motion Designer: Michael Klaiber
Other Credits: PR & Communications Director: Jerome Cholet
Other Credits: Trainee LAB13: Sofia Paz-Vivo
Other Credits: Head of Production: Klaus Flemmer
Other Credits: Chairwoman German Youth Association of People with Hearing Loss: Michelle Mohring
Other Credits: Clerk German Youth Association of People with Hearing Loss: Lucas Garthe