How artificial intelligence will transform how we gesture

Performance of “Sorcières: dance together with smartphones” at ENSCI Les Ateliers. YouTube

“Over the last decade, machine learning, a part of artificial intelligence (AI), has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome.”
(Lee Bell, Wired 2016)

With machine learning, we can now also teach our gestures to a machine, using the micro-sensors and their data that are already everywhere. Movements can be recognised, memorised, interpreted and shared through networks. There are many applications in health, sports and, especially, education in the digital age. A recent article in Le Monde stated:

“Driven by advances in artificial intelligence and natural language processing, voice is slowly establishing itself as the new interface to reach the digital universe.”

Here we point to gesture as another emerging interface to reach the digital universe.

Teaching a computer how to write the letters R, G and B by moving your arm. It will then recognize these gestures when you repeat them.
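The idea behind such a demonstration can be sketched in a few lines of code. This is a minimal illustration, not the system used in the video: it compares a new accelerometer trace against stored example traces with dynamic time warping (DTW), one common way to match gestures performed at slightly different speeds. All trace values here are made-up toy data.

```python
# Minimal gesture-recognition sketch (illustrative, not the authors' system):
# store one example trace per gesture, then match a new trace to the
# closest stored example using dynamic time warping (DTW).

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def recognise(trace, templates):
    """Return the name of the stored template closest to `trace`."""
    return min(templates, key=lambda name: dtw_distance(trace, templates[name]))

# Toy "training" traces, one per letter (hypothetical 1-D sensor data):
templates = {
    "R": [0.0, 1.0, 2.0, 1.0, 0.0, 1.5],
    "G": [0.0, -1.0, -2.0, -1.0, 0.0, -0.5],
    "B": [0.0, 0.5, 0.0, 0.5, 0.0, 0.5],
}

# A slightly slower, noisier repetition of "R" is still matched:
replay = [0.1, 0.9, 1.8, 1.9, 1.1, 0.1, 1.4]
print(recognise(replay, templates))  # -> R
```

Real systems work on multi-axis sensor streams and use richer models, but the principle is the same: record examples once, then compare live movement against them.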

Billions of sensors

During the last 10 years, micro-sensors have proliferated and invaded our lives. They constantly detect and measure our movements. We know, though we rarely notice it, that they constantly count our daily steps, taking up to one hundred measurements per second for each of these billions of sensors in our lives. What is the meaning of all these data? What is their presence in our lives? The answer might be “important, of course, but not that much”, as digital technology today is primarily built on omnipresent screens. They put such pressure on our daily lives that they substitute images for the real world and, in effect, bring the body to a standstill.

The fall: sofas and beds can be dangerous places.

With the data from motion micro-sensors, processed by machine learning in real time, we try to get back to the real world: can we imagine a digital world without screens and keyboards, one that would lead us to love communicating through movements and gestures using our mobile phones? Can we all be dancers in the digital age? Can this amazing image be made real by the alliance of sensors and machine learning, which processes their data in real time and puts them in our hands?

Imagine: we all move connected to sensors to communicate.

A new tool for home rehabilitation

Rehabilitation was our first experience; entering these questions through this application was the obvious thing to do. How can you best accompany a patient who returns home soon after receiving a knee or hip prosthesis? With the help of an increasingly individualised technological environment, built up through close collaboration with the nursing staff. This is the heart of our collaboration with the trauma department of the Lariboisière Hospital in Paris. The application is based on the link between sensors, intelligent data processing and the design of the interface that connects to the patient. In practice, a small electronic board, combining smartphone-type sensors and a Wi-Fi module, is sewn into a sock or garment.

Getting back to walking under the watch of technology. Author provided

Under the gaze of a physiotherapist, the patient teaches the machine his or her limit postures, beyond which a warning signal is given. The patient can thus practise in daily life under the watchful and benevolent gaze of this technology, which now knows those limits. Individualisation is an essential aspect of this e-coach. Show me how you move…
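In its simplest form, the e-coach logic described here amounts to learning a safe range from supervised readings and flagging anything outside it. The sketch below is an assumption-laden illustration, not the hospital's actual software: the joint-angle values and the safety margin are invented for the example.

```python
# Minimal e-coach sketch (illustrative, not the clinical system):
# learn a safe joint-angle range from readings recorded under a
# physiotherapist's supervision, then warn on readings outside it.

def learn_limits(supervised_angles, margin=5.0):
    """Learn a (low, high) safe range from supervised readings.

    `margin` (degrees) is an illustrative tolerance added on each side.
    """
    return min(supervised_angles) - margin, max(supervised_angles) + margin

def check(angle, limits):
    """Return True if the posture is within the learned safe range."""
    low, high = limits
    return low <= angle <= high

# Hypothetical knee-flexion angles (degrees) from a supervised session:
session = [12.0, 35.0, 60.0, 48.0, 22.0]
limits = learn_limits(session)  # (7.0, 65.0)

for angle in [30.0, 64.0, 80.0]:
    status = "ok" if check(angle, limits) else "WARNING: limit posture"
    print(f"{angle:5.1f} deg -> {status}")
```

A deployed system would of course work on continuous multi-sensor streams and involve caregivers in setting the limits, but the individualised-threshold principle is the same.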

Movement measurement and data processing have a dual benefit for the patient. The real-time monitoring of the signal widens their zone of confidence. Besides, the patient knows that the data are transmitted and analysed by caregivers if the situation requires it.

Enhancing our gestures to learn them better?

The video above shows how it is possible to associate any type of sign or digital information with gestures and movements. This example is based on the work of Frédéric Bevilacqua's group at IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in Paris. It is no surprise that IRCAM, founded by composer Pierre Boulez, is a key player in this research. Among our finest and most difficult gestures to acquire, those imposed by musical instruments rank near the top. Can you learn to make music by moving a smartphone?

Gesture interfaces for digital music at IRCAM.

Here you can try a dialogue with the learning machine yourself, with the web application COMO produced by IRCAM. Teach your own gestures to your smartphone and play with the sounds. From IRCAM, this technology has spread to the Center for Research and Interdisciplinarity (CRI) at the Université Paris Descartes, a hub for experimentation and pedagogical innovation. In a “motion lab”, CRI Paris is tackling “the question of gesture as an emerging interface to reach the digital universe” from the point of view of learning and education.

How do we learn today? In fact, how can we learn to learn when the world is changing so fast and so profoundly? How do we build this learning society? The answer from François Taddei, director of the CRI: by trying together, with kindness, and by keeping the astonished gaze on the world that researchers have. We are on this path. Using these technologies for lifelong learning in a world that has become massively digital is exciting, but difficult. In fact, no one here is an expert, as this technological breakthrough takes us beyond the usual patterns of thought. But there is good news: since we all move, everyone is welcome aboard…

From scientists to contemporary dance

It took me some time to understand why researchers and students coming from contemporary dance, computer science and, of course, robotics, design, pedagogy, textiles, physics, mechanics, anthropology, health, sports, music, sound… are gathering around IRCAM, CRI Paris and the design school ENSCI Les Ateliers. I should have been more attentive during the workshops where, with all these people, we explored scenarios for using these fascinating but also confusing tools. For example, I once taught a gesture to my smartphone. When I tried to replicate it, the machine refused to recognise it. A dancer who was with us took my smartphone and mimicked my gesture; the two of them quickly managed to identify “my gesture”. Frustrating.

If gesture can become an interface to reach the digital universe, then a new way of moving appears. It leads us to pay unusual attention to movements that we make every day without thinking about them. As André Leroi-Gourhan wrote in Le Geste et la Parole (1965), in a then non-digital world:

“Acrobatics, balancing exercises and dance largely materialise the effort to break free from normal operating chains, the search for a creation that breaks the daily cycle of positions in space.”

I couldn’t have said it better. That collaboration with contemporary dance quickly established itself in this research on gesture and movement, coming from technology and science, is not really a surprise.

The Conversation

Joel Chevrier does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
