This site provides information about my Ph.D. work under Dr. Michael Filhol at LISN (Université Paris-Saclay - CNRS).
Sign language synthesis is a technique for converting a description of a sign language utterance into an avatar animation. Animating such descriptions is challenging, but it offers a flexible way to generate sign language content. Previous synthesis systems developed over the years relied on descriptions that modelled sign language utterances as sequences of glosses. The improved AZee model instead allows us to write parameterised signed forms for semantic functions: an utterance is encoded as a hierarchy of applied production rules rather than a flat sequence. My work focuses on how to animate these hierarchies with an avatar.
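To make the contrast concrete, here is a minimal sketch of what such a hierarchy looks like as a data structure. This is an illustration only: the rule names are invented for the example and this is not actual AZee syntax, just the general idea of rule applications nesting inside each other instead of glosses following one another.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AppliedRule:
    """One application of a production rule (a named semantic function).

    Arguments are themselves rule applications, so an utterance forms a
    tree rather than a flat gloss sequence.
    """
    name: str
    args: List["AppliedRule"] = field(default_factory=list)

def pretty(rule: AppliedRule, indent: int = 0) -> str:
    """Render the hierarchy as an indented outline for inspection."""
    lines = [" " * indent + rule.name]
    for arg in rule.args:
        lines.append(pretty(arg, indent + 2))
    return "\n".join(lines)

# A hypothetical utterance: a top-level rule applied to nested rule
# applications (rule and sign names here are made up for illustration).
utterance = AppliedRule("info-about", [
    AppliedRule("sign:WEATHER"),
    AppliedRule("qualify", [
        AppliedRule("sign:RAIN"),
        AppliedRule("intensify"),
    ]),
])

print(pretty(utterance))
```

An animation system then has to interpret this tree as a whole, since a parent rule can shape how its children are realised, which is precisely what makes synthesis from hierarchies harder than concatenating animations for a gloss sequence.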