Projects

The following are the national and multinational research projects in which I have actively participated. In most of these projects, my responsibilities covered Text-To-Speech (TTS) systems, social signal processing, affective computing, and multimodal interaction.

Latest Projects

AVATAR 1.1 is a recently started French national research project. The project aims at developing virtual agents of roughly the user's height that communicate with people about the highlights of a museum. Within this project, I have been working on speech analysis and social signal processing.

ILHAIRE (Incorporating Laughter into Human Avatar Interactions: Research and Experiments) is a European FET (Future and Emerging Technologies) research project. Its objective is to help the scientific and industrial communities bridge the gap between knowledge of human laughter and its use by avatars, thus enabling sociable conversational agents to be designed with natural-looking and natural-sounding laughter. My contributions to this project include: (i) automatic detection of laughter from audio using segmental Hidden Markov Models (HMMs); (ii) extraction of naturalistic facial expressions from laughter videos; (iii) facial expression cloning onto a virtual avatar using automatically extracted facial landmarks.
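The core idea behind HMM-based laughter detection can be sketched with a toy two-state Viterbi decoder that segments a sequence of frame-level audio features into contiguous "speech" and "laughter" regions. Everything below is an illustrative assumption for demonstration (the single feature, the Gaussian emission parameters, and the transition probabilities), not the actual ILHAIRE models:

```python
import math

STATES = ["speech", "laughter"]

# Assumed transition log-probabilities: staying in a state is more likely
# than switching, which favours contiguous segments.
LOG_TRANS = {
    ("speech", "speech"): math.log(0.9),
    ("speech", "laughter"): math.log(0.1),
    ("laughter", "laughter"): math.log(0.8),
    ("laughter", "speech"): math.log(0.2),
}

# Assumed 1-D Gaussian emission models (mean, std) over a single
# frame-level feature, with laughter frames higher on average.
EMIT = {"speech": (0.3, 0.15), "laughter": (0.8, 0.2)}


def log_gauss(x, mean, std):
    """Log-density of a 1-D Gaussian emission."""
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)


def decode(frames):
    """Return the most likely state sequence for a list of frame features."""
    scores = {s: log_gauss(frames[0], *EMIT[s]) for s in STATES}
    back = []
    for x in frames[1:]:
        new_scores, pointers = {}, {}
        for s in STATES:
            # Best predecessor state for s, including the transition cost.
            prev = max(STATES, key=lambda p: scores[p] + LOG_TRANS[(p, s)])
            new_scores[s] = scores[prev] + LOG_TRANS[(prev, s)] + log_gauss(x, *EMIT[s])
            pointers[s] = prev
        back.append(pointers)
        scores = new_scores
    # Backtrack from the best final state.
    state = max(scores, key=scores.get)
    path = [state]
    for pointers in reversed(back):
        state = pointers[state]
        path.append(state)
    return list(reversed(path))


# Low-feature frames at the edges, high-feature frames in the middle:
frames = [0.2, 0.25, 0.3, 0.85, 0.9, 0.75, 0.3, 0.2]
print(decode(frames))
```

A real system would decode sequences of spectral features (e.g. MFCCs) with trained multi-state models per class, but the segmentation logic follows the same pattern.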

ALIZ-E is a European FP7 project. It aims to move human-robot interaction from the range of minutes to the range of days by building cognitive systems that can adapt to a user (in this case, an 8-year-old child) over longer periods of time. With the help of these adaptation techniques, the project builds up and maintains a social relationship with the child. My contributions include: (i) controls for prosody modification using HMM-based speech synthesis; (ii) integration of TTS systems with NAO robots.
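A typical prosody control of this kind shifts and scales the fundamental-frequency (F0) contour of synthetic speech, working in the log-F0 domain where pitch shifts are perceptually uniform. The sketch below is a minimal illustration under assumed contour values and parameter names, not the ALIZ-E implementation:

```python
import math


def modify_f0(f0_hz, shift_semitones=0.0, range_scale=1.0):
    """Shift pitch by semitones and scale the pitch range around the mean.

    Unvoiced frames are conventionally marked with 0 and left untouched.
    """
    voiced = [f for f in f0_hz if f > 0]
    if not voiced:
        return list(f0_hz)
    # Mean in the log-F0 domain, around which the range is scaled.
    log_mean = sum(math.log(f) for f in voiced) / len(voiced)
    shift = shift_semitones * math.log(2) / 12.0  # semitones -> log-Hz offset
    out = []
    for f in f0_hz:
        if f <= 0:
            out.append(0.0)  # keep unvoiced frames as-is
        else:
            log_f = log_mean + range_scale * (math.log(f) - log_mean) + shift
            out.append(math.exp(log_f))
    return out


# Raise pitch by 4 semitones and widen the range by 20%, e.g. for a
# livelier, child-directed speaking style (hypothetical contour in Hz):
contour = [120.0, 140.0, 0.0, 160.0, 130.0]
print([round(f, 1) for f in modify_f0(contour, shift_semitones=4, range_scale=1.2)])
```

In statistical parametric synthesis such transformations are applied to the generated log-F0 trajectories before waveform generation, which is what makes this kind of control cheap compared with re-recording a voice.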

The SEMAINE project is an EU FP7 first-call STREP project. It aims to build a Sensitive Artificial Listener (SAL): a multimodal dialogue system that can interact with humans through a virtual character and react appropriately to the user's non-verbal behaviour. I was involved in this project for its complete duration of three years. My PhD work on nonverbal vocalizations is part of this project. In addition, I built voices for the virtual agents and synchronized their lip movements with the corresponding speech.

This project addresses novel interaction paradigms and technologies for enhancing the flexibility of interaction and dialogue management in computer games. This enables computer-driven game characters to lead more intelligent and versatile dialogues than is possible with current games technology. My contributions include: (i) development of voice-building tools for TTS systems; (ii) automatic labeling for expressive TTS systems; (iii) poker-style voice creation.