2015-10-26 Gesture x HCI – Moscow, Russia

Conference: Gesture Research Applied to Human-Computer Interaction

I presented CIGALE, an interactive artistic platform for exploring gesture expressivity, at the Moscow State Linguistic University's Centre for Socio-Cognitive Discourse Studies (SCoDis) (Laboratory PoliMod).


CIGALE is a four-year project that brings together eight international partners, supported by the French Laboratory of Excellence Labex Arts-H2H. It aims to explore the expressivity of human gesture by conducting both artistic and scientific studies and by experimenting with artificial life models. In this talk, we present the iterative process we used to create an interactive application featuring an expressive virtual agent. The design of the application involves linguists, computer scientists, digital artists and professional actors in the research process.

First, in order to understand what a virtual gesture is, we designed a genetic algorithm able to generate entirely new gestures. Its fitness function can be based on biomechanical laws or on a database of more than 400 motion captures of professional actors' gestures (a mime, a linguist, a deaf poet and a choral conductor). We then created an interactive application in which a virtual actor can perform an expressive improvisation enriched with the databases of generated or captured gestures. Several improvisation experiments were conducted with actors through computer-mediated interactions, which helped characterize specific gesture dynamics, recurrent behaviors and gestural patterns. We detail the experiments conducted, the method, and the development of the tool.
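The gesture-generation approach described above can be sketched as a simple genetic algorithm. The sketch below is illustrative only: it uses a frame-to-frame smoothness penalty as a crude stand-in for the biomechanics- or motion-capture-database fitness functions used in CIGALE, and all names and parameters are hypothetical.

```python
import random

N_FRAMES = 20   # keyframes per gesture (illustrative value)
N_JOINTS = 4    # simplified skeleton

def random_gesture():
    # A gesture is modeled as a sequence of joint-angle keyframes.
    return [[random.uniform(-1, 1) for _ in range(N_JOINTS)]
            for _ in range(N_FRAMES)]

def fitness(gesture):
    # Reward smooth motion: penalize large frame-to-frame changes
    # (a crude proxy for biomechanical plausibility).
    jerk = sum(abs(a - b)
               for f0, f1 in zip(gesture, gesture[1:])
               for a, b in zip(f0, f1))
    return -jerk

def crossover(g1, g2):
    # Single-point crossover over the keyframe sequence.
    cut = random.randrange(1, N_FRAMES)
    return g1[:cut] + g2[cut:]

def mutate(gesture, rate=0.1):
    # Perturb a fraction of joint angles with small Gaussian noise.
    return [[a + random.gauss(0, 0.1) if random.random() < rate else a
             for a in frame] for frame in gesture]

def evolve(pop_size=30, generations=50):
    population = [random_gesture() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Swapping the smoothness penalty for a distance to the nearest motion-capture sample would give the database-driven variant of the fitness function mentioned above.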

We present “InterActe” and “Interactive Deaf Poetry”, two artistic installations built with the CIGALE platform in which spectators or actors and the virtual actor can improvise together. By observing these improvisation situations, we are now analyzing, together with linguists, how expressivity emerges during the enactive interaction loop.

 

October 26, 2015
MSLU, Ostozhenka 38, room 509
Moscow State Linguistic University
Centre for Socio-Cognitive Discourse Studies (SCoDis) (Laboratory PoliMod)

 

PROGRAMME

Gesture Research Applied to Human-Computer Interaction:

The Case of Robots and Virtual Agents [1]


October 26, 2015 – MSLU, Ostozhenka 38, room 509

Organizer: Centre for Socio-Cognitive Discourse Studies (SCoDis) (Laboratory PoliMod), MSLU

    

9:15-10:00 – Registration of the participants

10:00 WORD OF WELCOME

Prof. Dr. Irina I. Khaleyeva

Rector of Moscow State Linguistic University

Director of the Centre for Socio-Cognitive Discourse Studies, Moscow, Russia

Prof. Dr. Alan Cienki

Director of the PoliMod Laboratory of the SCoDis Centre, Moscow State Linguistic University, Vrije Universiteit, Amsterdam, the Netherlands

SESSION 1

Co-chairs:

 

Prof. Dr. Olga K. Iriskhanova, Vice-Director of the SCoDis Centre, Moscow State Linguistic University, Moscow, Russia

Prof. Dr. Cornelia M. H. Müller, European University Viadrina, Frankfurt (Oder), Germany

 

10:30 Boris M. Velichkovsky, Dr., Prof., Dr. rer. nat., Member of the Russian Academy of Sciences, Head of the Department of Neurocognitive Sciences, Social Humanities and Intellectual Systems, “Kurchatov Institute” Research Centre, Moscow, Russia; Dresden University of Technology, Dresden, Germany

Communicating attention: From cognitive research to new technological solutions

11:45 Stefan Kopp, Dr.-Ing., Prof., Social Cognitive Systems Group, CITEC / Faculty of Technology, Bielefeld University, Bielefeld, Germany

Computational gesture research — studying gesture in and with social cognitive systems

12:45 Vladimir N. Katasonov, Dr., Prof., Head of the Department of Philosophy, SS Cyril and Methodius Theological Institute of Post-Graduate Studies, Moscow, Russia

Robots and programs: History and prospects

 

13:45 – 14:45 LUNCH

SESSION 2

 

Co-chairs:

Prof. Dr. Aliyah Morgenstern, Université Sorbonne Nouvelle, Paris, France

Dr. Dominique Boutet, Université Paris 8, Paris, France

 

14:45 Kristiina Jokinen, Dr., Prof., Institute of Behavioural Sciences, University of Helsinki, Visiting Professor, University of Tartu

Engagement and Autonomous Robot Agents – Social Interaction in the WikiTalk application

15:45 Artemy A. Kotov, Dr., Leading Researcher of the Department of Neurocognitive Sciences, Social Humanities and Intellectual Systems, “Kurchatov Institute” Research Center, Moscow, Russia

Analysis and modeling of non-verbal behaviors based on the REC corpus

17:00 Jean-François Jégo, Dr., artist-researcher, Department of Arts & Technologies de l’Image, INReV virtual reality lab, Université Paris 8, Paris, France

CIGALE: an interactive artistic platform to explore gesture expressivity

 

18:15-19:15 A seminar for BA, MA and Ph.D. students on recent developments in brain-computer interface studies

Daniil Kiryanov, Ph.D. student, researcher at the Laboratory of Neurophysiology and the Brain-Computer Interface, Moscow State University, Moscow, Russia

On supervillains, elephant trunks and the future of the human brain

[1] The symposium is supported by the Russian Science Foundation (project «Verbal and co-verbal means of event construal across languages», grant № 14-48-00067).