
Hi there 👋, I'm Marius

I'm a research engineer at Inria Paris, where I study human interactions through the prism of AI, and especially Natural Language Processing (NLP) applied to human dialogue. I'm part of the Articulab, within the Almanach team; our work contributes to theoretical research in cognitive science, linguistics, artificial intelligence, and other disciplines.

Dialogue is a complex interaction, as it involves diverse, simultaneous, and interdependent actions. These actions constitute parallel signals of different types (voice, gesture, facial expression, etc.), at different scales (turn, sentence, word, or even shorter), and in different temporal domains (continuous or discrete).

We generally distinguish three types of signals, or modalities:

  • Verbal (semantic information)
  • Vocal (prosody, intonation, etc.)
  • Non-verbal (gesture, facial expression, etc.)
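
To make this structure concrete, here is a minimal sketch of how such parallel, multi-scale signals could be represented as timestamped event streams. All names and types here are hypothetical illustrations, not Articulab code:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Modality(Enum):
    VERBAL = auto()      # semantic content, e.g. transcribed words
    VOCAL = auto()       # prosody, intonation, pitch contours
    NON_VERBAL = auto()  # gestures, facial expressions


@dataclass
class SignalEvent:
    """One observation on one modality stream.

    Continuous signals (e.g. a pitch contour) span [start, end];
    discrete events (e.g. a word token) can use start == end.
    """
    modality: Modality
    start: float   # seconds from the start of the interaction
    end: float
    payload: object  # word, pitch value, gesture label, ...


@dataclass
class DialogueTrack:
    """Parallel, interleaved streams for one interlocutor."""
    events: list[SignalEvent] = field(default_factory=list)

    def window(self, t0: float, t1: float) -> list[SignalEvent]:
        """All events overlapping the time window [t0, t1]."""
        return [e for e in self.events if e.start < t1 and e.end > t0]
```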

Past research has repeatedly shown that studying the relationships between these modalities is essential for a better understanding of human interaction, and that their synergy greatly benefits the modeling of such interaction. Our interest is to build on recent advances in training multimodal neural network models to understand or generate human-like interactions.
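
As a toy illustration of what exploiting that synergy can look like, the following PyTorch sketch encodes each modality separately and concatenates the embeddings before a shared prediction head (late fusion). The feature dimensions and the classification task are placeholders for the example, not a description of our actual models:

```python
import torch
import torch.nn as nn


class LateFusionClassifier(nn.Module):
    """Encode each modality separately, then classify the
    concatenated embeddings (illustrative dimensions only)."""

    def __init__(self, d_verbal=768, d_vocal=128, d_nonverbal=256,
                 d_hidden=256, n_classes=4):
        super().__init__()
        self.enc_verbal = nn.Linear(d_verbal, d_hidden)        # e.g. text embeddings
        self.enc_vocal = nn.Linear(d_vocal, d_hidden)          # e.g. prosodic features
        self.enc_nonverbal = nn.Linear(d_nonverbal, d_hidden)  # e.g. pose/face features
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(3 * d_hidden, n_classes),
        )

    def forward(self, verbal, vocal, nonverbal):
        fused = torch.cat([
            self.enc_verbal(verbal),
            self.enc_vocal(vocal),
            self.enc_nonverbal(nonverbal),
        ], dim=-1)
        return self.head(fused)
```

Late fusion is only one option, of course; modalities can also interact earlier in the network, for instance through cross-attention between streams.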

In this context, I'm currently working on a real-time multimodal dialogue system and embodied conversational agent. This project is a follow-up to Articulab's previous agent, SARA. A dialogue system is a system capable of having a conversation with a human interlocutor, which implies that it must be able, in real time, to process the user's multimodal signals: vocal (capturing the user's voice with a microphone), verbal (extracting semantic information from the user's transcription), non-verbal (capturing gestures and facial expressions with a camera), and to generate similar signals through an avatar.
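
The sketch below illustrates this kind of real-time architecture in miniature: capture threads push timestamped events onto a shared queue, and a dialogue manager consumes them incrementally. Everything here (function names, the hard-coded inputs, the print-based "response") is a hypothetical placeholder, not the system's actual API:

```python
import queue
import threading
import time

# Shared stream of (modality, timestamp, payload) events.
events: "queue.Queue[tuple[str, float, object]]" = queue.Queue()

def vocal_capture():
    # Placeholder for microphone capture + incremental transcription.
    for word in ["hello", "there"]:
        events.put(("verbal", time.time(), word))
        time.sleep(0.2)

def nonverbal_capture():
    # Placeholder for camera capture + gesture/face-feature extraction.
    for gesture in ["nod", "smile"]:
        events.put(("non_verbal", time.time(), gesture))
        time.sleep(0.3)

def dialogue_manager(deadline: float):
    # Consume events as they arrive; a real system would fuse them and
    # decide, under latency constraints, what the avatar says and does.
    while time.time() < deadline:
        try:
            modality, t, payload = events.get(timeout=0.1)
        except queue.Empty:
            continue
        print(f"[{t:.2f}] {modality}: {payload}")

threads = [
    threading.Thread(target=vocal_capture),
    threading.Thread(target=nonverbal_capture),
    threading.Thread(target=dialogue_manager, args=(time.time() + 1.5,)),
]
for th in threads:
    th.start()
for th in threads:
    th.join()
```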

Prior to this, I obtained my Master's degree in Artificial Intelligence from Sorbonne Université, during which I studied the following AI subfields:

  • Interactive environments: virtual environments, human-computer interaction, serious games, video games, e-learning, information systems
  • Decision: decision theory, preference modeling and learning, multi-objective or multi-agent combinatorial optimization, Bayesian networks
  • Robotics and intelligent systems: agents and autonomous robots, multi-agent systems, machine learning
  • Operations research: mathematical programming, optimization and complexity, graphs and scheduling
