What we do

Ongoing Funded Projects

VERE: Virtual Embodiment and Robotic re-Embodiment

VERE aims at dissolving the boundary between the human body and surrogate representations in immersive virtual reality and physical reality.

Dissolving the boundary means that people have the illusion that their surrogate representation is their own body, and act and think accordingly. The work in VERE may be thought of as applied presence research and applied cognitive neuroscience, and it will also significantly add to scientific knowledge in these areas. Imagine that you could transfer yourself to an artificial body – a virtual body represented in virtual reality, or a physical robotic body. Recent advances in cognitive neuroscience show that internalising a body beyond our own (external, remote, alien) is possible: our body representation is highly malleable and may be altered by appropriate changes to the normal stimulus contingencies that we are used to in everyday life. The purpose of this research is to make such body transference possible in two different modes (virtual and physical). On top of these technical achievements, a number of practical applications will be constructed and investigated during the project.

GOLEM – Realistic Virtual Humans

GOLEM will share knowledge between relevant partners in industry and academia in order to carry out multidisciplinary research to streamline the production pipeline for virtual (digital) characters. The technical objectives can be summarized as the development of a markerless facial capture system, an automatic rigging system, an interactive facial muscle animation system, automatic lip sync based on speech recognition, and realistic, dynamic, real-time skin shaders.

Move4Health

Move4Health is a one-year project that aims to improve the motor skills of impaired individuals so they achieve better control of the body, by means of therapeutic learning tools. Move4Health shows how a pioneering serious game approach can be applied to teach and assess people as they physically navigate the world, combining real-time markerless motion capture (MoCap) with virtual reality (VR) or 3D environments. Physically navigating the world requires considerable skill in controlling the body, for both fine motor movements (e.g. writing) and gross motor movements (e.g. walking); Move4Health focuses on gross motor skills. Today, most rehabilitation therapies still use non-interactive techniques. This interactive solution targets the health care industry: the games will help individuals explore their limits and improve their physical awareness in a fun way, without inducing stress.

Past Funded Projects

LIFEisGAME: Learning Facial Emotions using Serious Games

LIFEisGAME attempts to show how a pioneering serious game approach can be applied to teach people with Autism Spectrum Disorder (ASD) to recognize facial emotions, using real-time synthesis and automatic facial expression analysis. The ability of socially and emotionally impaired individuals to recognize and respond to emotions conveyed by the face is critical to improving their communication skills.

FaceExpress – PT Inovação

FaceExpress shows how it is possible to teach children to recognize facial emotions, using real-time synthesis and automatic facial expression analysis. Our interactive digital media solution, running on the MEO Box with Kinect or similar devices, has an explicit and carefully thought-out educational purpose: to help improve children’s communication skills and teach them to recognize emotions in a fun way, without inducing stress. The overall objective is to deploy a low-cost, real-time facial animation system embedded in a game, which can be used by everyone in the family in a social, educational and leisure environment.

Individual Projects

Reactive Character Framework

A core barrier to high immersion for gamers is a lack of empathy towards game characters. Facial animation plays an important role here, because having virtual characters react to world stimuli contributes to their believability. The challenge is how to define behaviours and how to properly animate the character in order to provide immersion. The key contribution is the development of a system capable of automatically triggering facial reactions based on external stimuli, such as a ball that is about to collide with the character’s face, which makes defining character behaviour easier. A framework was created that allows behaviours to be defined, from a database of predefined actions, for when selected events occur. The framework is composed of an action manager that reads, from a configuration file, the reactive agents that control different zones of the face by interpreting the data they obtain from simulated vision. This framework can be used by the game and film industries, more specifically by developers and artists. As a result, users can rapidly define reactive behaviours and produce plausible facial animation. The framework provides automatic behaviours that make the character dodge and retract the face when an object is about to collide with it.
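
The following is a minimal sketch of the reactive-agent idea described above; the class names, the action database and the collision test are illustrative assumptions, not the project's actual API.

```python
# Minimal sketch of a reactive agent and action manager (illustrative only:
# the names and the vision model are assumptions, not the project's code).
import math

# Database of predefined facial actions, keyed by the event that triggers them.
ACTION_DATABASE = {
    "imminent_collision": ["close_eyes", "retract_head", "wrinkle_nose"],
}

class ReactiveAgent:
    """Controls one zone of the face and reacts to simulated-vision events."""
    def __init__(self, zone, trigger_distance=0.5):
        self.zone = zone
        self.trigger_distance = trigger_distance

    def perceive(self, obj_position, obj_velocity, face_position, dt=0.1):
        """Predict the object's position one step ahead and fire an event
        if it is about to hit the face."""
        predicted = [p + v * dt for p, v in zip(obj_position, obj_velocity)]
        if math.dist(predicted, face_position) < self.trigger_distance:
            return "imminent_collision"
        return None

class ActionManager:
    """Holds the agents (e.g. read from a configuration file) and plays actions."""
    def __init__(self, agents):
        self.agents = agents

    def update(self, obj_position, obj_velocity, face_position):
        for agent in self.agents:
            event = agent.perceive(obj_position, obj_velocity, face_position)
            if event:
                for action in ACTION_DATABASE.get(event, []):
                    print(f"[{agent.zone}] play facial action: {action}")

# Example: a ball flying towards the face triggers the dodge/retract reaction.
manager = ActionManager([ReactiveAgent("eyes"), ReactiveAgent("head")])
manager.update(obj_position=(0.0, 1.6, 1.0),
               obj_velocity=(0.0, 0.0, -8.0),
               face_position=(0.0, 1.6, 0.0))
```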

Painting Muscles for Real-time Skin Deformation [PhD Thesis]

This research addresses two recurring issues in muscle animation: creation/animation interfaces and real-time simulation of the muscles’ actions over the skin. It targets the film and video game industries and is meant to be used by 3D artists (both modellers and animators) who wish to enhance their characters with subtle deformations. The main contribution of this research is a novel interface in which artists paint the effects of the muscles over the skin’s geometry, getting in return the shape of the muscles and their subtle effects over the skin, such as wrinkles, sliding and bulging. The same algorithms are later used to reproduce the subtle deformations in real time. The challenges involve developing and testing a new user-friendly interface and inferring the muscles’ properties, both for an accurate representation of the artist’s strokes and for real-time simulation. With the help of industry experts, the aim is a novel interface that sets a new standard and provides a real-time solution for muscle animation, a long-awaited result in the computer graphics community.
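
As a rough illustration of the painting idea (not the thesis' actual algorithm), a painted per-vertex weight map can drive a simple skin deformation when the muscle activates; everything below is a toy assumption.

```python
# Illustrative sketch: a painted per-vertex weight map drives skin displacement
# along the vertex normals when the muscle is activated (toy model only).
import numpy as np

def deform_skin(vertices, normals, painted_weights, activation):
    """Displace each skin vertex along its normal, scaled by the weight the
    artist painted for this muscle and by the muscle's current activation."""
    offsets = painted_weights[:, None] * activation * normals
    return vertices + offsets

# Toy data: 4 vertices of a skin patch, unit normals, and painted weights.
verts   = np.zeros((4, 3))
normals = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))
weights = np.array([0.0, 0.3, 0.9, 0.5])   # stroke intensity painted by the artist
print(deform_skin(verts, normals, weights, activation=0.8))
```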

Automatic Visual Speech Animation

Lip synchronization is the process of matching a speech audio file with the mouth movements of a synthetic character, so that it looks like it is talking. It can be done manually by digital artists, which is very time-consuming, or by automatic methods that reduce the amount of time required to produce the animation, at the cost of complex algorithms. The major problem in speech animation is the simulation of co-articulation effects, i.e. the effect that one phone has on those around it, which is highly context dependent. This research aims to create an automatic visual speech animation system capable of producing realistic visual speech animation and modelling co-articulation effects, which can be used both in interactive applications such as videogames and in films. A framework for visual speech animation has already been proposed and published, and has been used to create an automatic speech animation system for Portuguese and English. It was integrated into the Maya authoring tool to provide fine-tuning support for artists.
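
To make the co-articulation problem concrete, here is a minimal sketch of viseme-based lip sync with a naive neighbour-blending model; the phoneme-to-viseme table and the blend weights are assumptions for illustration, not the published framework.

```python
# Naive co-articulation sketch: each frame's mouth pose is a blend of the
# current viseme with its neighbouring phones (toy mapping and weights).
PHONEME_TO_VISEME = {"p": "closed", "b": "closed", "a": "open_wide",
                     "o": "rounded", "f": "lip_teeth", "s": "narrow"}

def visemes_for(phonemes):
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

def coarticulated_frames(phonemes, spread=0.25):
    """For each phoneme, return viseme blend weights influenced by its neighbours."""
    vis = visemes_for(phonemes)
    frames = []
    for i in range(len(vis)):
        weights = {vis[i]: 1.0}
        for j in (i - 1, i + 1):                     # neighbouring phones
            if 0 <= j < len(vis):
                weights[vis[j]] = weights.get(vis[j], 0.0) + spread
        total = sum(weights.values())
        frames.append({v: w / total for v, w in weights.items()})
    return frames

# Example: the word fragment "p-a-s" produces per-frame viseme blends.
for frame in coarticulated_frames(list("pas")):
    print(frame)
```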

Easy Facial Rigging: A Dynamic Industry Standard

Animating the face of a 3D character for films and videogames is hard because a complex facial control structure has to be built to support its behaviors. This process is called facial rigging: setting up the controls to animate a character’s face. Today, this task is laborious, since artists have to place each control on the 3D model by hand. It is also expensive because of the time spent generating different control paradigms for each character’s facial style. Our overall goal is to study, analyze and deploy a facial rigging standard to bridge the gap between facial modeling and animation. The standard provides the facial rig design specifications for the morphologies and behaviors of a character’s face. The question that drives our research is: is it possible to create a rigging standard that guarantees accurate control and subtle deformation of a character’s face?
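
Purely as an illustration of what a machine-readable rig design specification could look like, the snippet below sketches a hypothetical control list; the field and controller names are assumptions, not the proposed standard.

```python
# Hypothetical facial rig specification (field names are illustrative only).
rig_spec = {
    "character": "generic_biped_face",
    "controls": [
        {"name": "jaw_open",        "region": "jaw",       "range": [0.0, 1.0],
         "drives": ["jaw_joint.rotateX"]},
        {"name": "brow_raise_L",    "region": "brow_left", "range": [-1.0, 1.0],
         "drives": ["brow_L_joint.translateY"]},
        {"name": "lip_corner_up_R", "region": "mouth",     "range": [0.0, 1.0],
         "drives": ["blendshape.smile_R"]},
    ],
}

def validate(spec):
    """Basic consistency check: every control names a region and a driven target."""
    for ctrl in spec["controls"]:
        assert ctrl["region"] and ctrl["drives"], f"incomplete control {ctrl['name']}"
    return True

print(validate(rig_spec))
```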

Facial Skin Shading Parameterization Methodology for Rendering Emotions

Skin texture painting for the animation of facial expressions is currently a time-consuming task and, since there are no guidelines, it is often done without accuracy or coherence. Artists and animators would benefit from a tool that gives them visual guidelines on how to portray emotions while respecting each character’s art style, be it photo-realistic, cartoon or fantasy creature. A methodology is being designed as a tool that provides visual examples of how to paint a coherent set of textures for an accurate portrayal of the six universal facial emotions, plus the appearance of a subject after physical exercise. The methodology is supported by an empirical comparison of scientific and artistic data: melanin and hemoglobin SIAscope maps that accurately characterize human skin, and painted portraits by different authors in different styles, as examples of artistic expression and visual perception. Following the visual guidelines of our methodology, digital artists can portray more coherent expressions and more believable characters.
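
As a very rough sketch of how chromophore maps can inform such guidelines (not the project's methodology), the example below tints a skin texture by modulating per-pixel melanin and hemoglobin, e.g. more hemoglobin for blushing after exercise; the tint vectors are assumed values.

```python
# Illustrative chromophore tinting: darken with melanin, redden with hemoglobin.
import numpy as np

def shade_skin(base_color, melanin, hemoglobin, melanin_gain, hemoglobin_gain):
    """Shift a base skin colour per pixel using melanin/hemoglobin maps (toy model)."""
    melanin_tint    = np.array([0.30, 0.22, 0.15])   # brownish absorption (assumed)
    hemoglobin_tint = np.array([0.60, 0.10, 0.10])   # reddish contribution (assumed)
    color = (base_color
             - melanin[..., None] * melanin_gain * melanin_tint
             + hemoglobin[..., None] * hemoglobin_gain * hemoglobin_tint)
    return np.clip(color, 0.0, 1.0)

h, w = 2, 2
base       = np.full((h, w, 3), 0.8)          # pale base skin tone
melanin    = np.full((h, w), 0.4)
hemoglobin = np.full((h, w), 0.5)
print(shade_skin(base, melanin, hemoglobin, melanin_gain=0.2, hemoglobin_gain=0.6))
```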

3D Scanner and surface reconstruction

Creating a high-quality, human-like 3D facial avatar is a highly time-consuming and laborious task: artists spend much of their time manually creating the subtle details of the face. For the film and game industries, a system that does this automatically, or in a short amount of time, can drastically improve the quality of 3D avatars and reduce the time spent. To accomplish this, it is necessary to create an algorithm that aligns and integrates depth camera frames, reconstructs a 3D surface from the resulting point cloud, and performs automatic retopology and texture registration from RGB images. The final result will be based on a low-cost depth camera or a single shot from a setup of high-resolution cameras.
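
One possible version of the alignment and surface-reconstruction steps is sketched below with the open-source Open3D library; this is an assumption for illustration and not necessarily the project's implementation, and the file names in the usage comment are hypothetical.

```python
# Sketch of a fusion pipeline: align depth-camera frames with ICP, merge them,
# and extract a surface with Poisson reconstruction (Open3D, illustrative only).
import open3d as o3d

def align_and_merge(frames, voxel_size=0.005):
    """Register each point-cloud frame to the running model with point-to-point ICP."""
    merged = frames[0]
    for frame in frames[1:]:
        result = o3d.pipelines.registration.registration_icp(
            frame, merged,
            max_correspondence_distance=voxel_size * 4,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        frame.transform(result.transformation)   # bring the frame into the model's pose
        merged += frame
    return merged.voxel_down_sample(voxel_size)  # thin out duplicated points

def reconstruct_surface(point_cloud):
    """Poisson surface reconstruction from the fused point cloud."""
    point_cloud.estimate_normals()
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(point_cloud, depth=9)
    return mesh

# Usage (hypothetical file names): load captured frames, fuse them, save the mesh.
# frames = [o3d.io.read_point_cloud(f"frame_{i:03d}.ply") for i in range(10)]
# mesh = reconstruct_surface(align_and_merge(frames))
# o3d.io.write_triangle_mesh("face_scan.obj", mesh)
```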

Virtual Marionette

Virtual Marionette is a research project on digital puppetry, an interdisciplinary approach that brings the art of puppetry into the world of digital animation. Inspired by traditional marionette technology, our intention is to study novel interfaces as an interaction platform for creating artistic content based on computer-animated puppets. The overall goal of this thesis is to research and deploy techniques and methods for the manipulation of articulated puppets in real time with low-cost interfaces, in order to establish an interaction model for digital puppetry.

Auto-rigging for facial models

The facial animation process for a character is a complex and time-consuming task in any major entertainment production. The complexity arises from the precision and the large amount of geometrical manipulation an artist must perform to obtain a final animation sequence, and because the face has so many subtle movements. Digital artists are responsible for creating these virtual characters, and to achieve the required animation results they use controllers that simplify the deformation process; one example of such a controller is the rig. Our research focuses on the development of an automatic rigging process. An automatic solution simplifies and shortens the animation process by generating facial rigs on the fly. The challenges we face in this research are where to correctly place the joints of a rig, how a joint should influence a set of geometry vertices, and how to provide artists with additional controls over the model. By the end of our work, we will deploy a system that automatically rigs a facial model by first identifying important regions on the model, building a rig according to those regions, setting the influence of each joint over the mesh vertices, and adding extra controls to give animators additional capabilities. The final application can be inserted into any production pipeline to optimize the character creation process by building the rigs for facial characters.
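
A minimal sketch of one of the steps above, assigning joint influence weights to mesh vertices, is shown below; the inverse-distance falloff is an assumption for illustration, not the project's actual weighting scheme.

```python
# Toy skinning-weight assignment: inverse-distance falloff from each joint,
# normalized so that the weights for every vertex sum to one.
import numpy as np

def skinning_weights(vertices, joint_positions, falloff=2.0):
    """Return an (n_vertices, n_joints) matrix of normalized influence weights."""
    diff = vertices[:, None, :] - joint_positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + 1e-8          # avoid division by zero
    weights = 1.0 / dist**falloff
    return weights / weights.sum(axis=1, keepdims=True)  # rows sum to 1

# Toy example: 3 vertices, 2 facial joints (e.g. jaw and left brow).
verts  = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 2.0, 0.0]])
joints = np.array([[0.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(skinning_weights(verts, joints))
```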

Face Tracking and Emotion Classification

The need for manual intervention in 3D character facial animation is still a bottleneck in many production pipelines. Motion capture (MoCap) systems have been adopted to automate the process and allow realistic facial animation. However, MoCap presents problems when reproducing human-like movements, so fine-tuning is necessary. A solution to this problem would enhance the efficiency and fidelity of facial animation in videogame and film pipelines. The aim is therefore to study facial recognition and tracking systems and how they can be used to drive the motion of a 3D character. As a solution, we propose two components: an efficient and accurate MoCap algorithm, and an algorithm that maps the optimized MoCap data to the character, producing realistic, real-time facial animation. A first prototype of facial recognition and mapping to a 3D character is already available and is still being optimized.
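
One common way to realise the mapping step, sketched here purely as an assumed illustration, is to solve for blendshape weights that best reproduce the tracked landmark displacements in a least-squares sense; the blendshape basis and landmark data below are toy values.

```python
# Toy mapping from tracked landmark offsets to blendshape weights (least squares).
import numpy as np

def solve_blendshape_weights(landmark_deltas, blendshape_deltas):
    """landmark_deltas: (n_landmarks*3,) tracked offsets from the neutral face.
    blendshape_deltas: (n_landmarks*3, n_blendshapes) per-shape offsets.
    Returns blendshape weights clamped to the [0, 1] range."""
    weights, *_ = np.linalg.lstsq(blendshape_deltas, landmark_deltas, rcond=None)
    return np.clip(weights, 0.0, 1.0)

rng = np.random.default_rng(0)
basis = rng.normal(size=(12, 3))               # 4 landmarks x 3 coords, 3 blendshapes
observed = basis @ np.array([0.7, 0.0, 0.3])   # a face made of shapes 1 and 3
print(solve_blendshape_weights(observed, basis))
```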

Sketch Express: A Sketching Interface for Facial Animation

One of the most challenging tasks for an animator is to quickly create convincing facial expressions. Finding an effective control interface to manipulate facial geometry has traditionally required experienced users (usually technical directors), who create and place the necessary animation controls. Here we present our sketching interface control system, designed to reduce the time and effort needed to create facial animations. Inspired by the way artists draw, where simple strokes define the shape of an object, our approach allows the user to sketch such strokes either directly on the 3D mesh or on two different types of canvas: a fixed 2D canvas or more flexible 2.5D dynamic screen-aligned billboards. In all cases, the strokes do not control the geometry of the face directly, but the underlying animation rig, allowing direct manipulation of the rig elements. Additionally, we show how the strokes can easily be reused across different characters, allowing retargeting of poses to several models. We illustrate our interactive approach using varied facial models of different styles, showing that first-time users typically create appealing 3D poses and animations in just a few minutes, and we also present the results of a user study. We deploy our method in an application for artistic purposes. Our system has also been used in a pioneering serious game context, where the goal was to teach people with Autism Spectrum Disorder (ASD) to recognize facial emotions, using real-time synthesis and automatic facial expression analysis.
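
The core stroke-to-rig mapping can be imagined along the lines of the sketch below; the controller names, the resampling and the normalization are assumptions for illustration, not the published system.

```python
# Toy stroke-to-rig mapping: resample a 2D canvas stroke to one point per rig
# controller and convert canvas coordinates to normalized controller values.
import numpy as np

def resample_stroke(points, n_controls):
    """Resample a free-hand stroke to exactly one point per rig controller."""
    points = np.asarray(points, dtype=float)
    t = np.linspace(0.0, 1.0, len(points))
    t_new = np.linspace(0.0, 1.0, n_controls)
    return np.column_stack([np.interp(t_new, t, points[:, k]) for k in range(2)])

def stroke_to_rig(points, controller_names, canvas_size=(512, 512)):
    """Map resampled canvas coordinates to [-1, 1] values for each controller."""
    samples = resample_stroke(points, len(controller_names))
    normalized = samples / np.array(canvas_size) * 2.0 - 1.0
    return dict(zip(controller_names, normalized.tolist()))

# Example: an upward-curving stroke drives three mouth controllers into a smile.
stroke = [(100, 300), (200, 250), (300, 250), (400, 300)]
print(stroke_to_rig(stroke, ["mouth_L", "mouth_mid", "mouth_R"]))
```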

Simulation of Deformable Solid and Rigid Bodies on Facial Models

Physically based simulation and animation is always challenging because of its complexity and efficiency problems. Nowadays, however, physics-based models and simulations are used heavily, with the help of the latest hardware, in the entertainment, biomedical and military industries. This research is based on the study, analysis, comparison and exploration of physically based animation methods on human facial models. The main aim is to investigate the behavior of different types of physics-based deformation techniques on the same surface, a synthetic face model. The skin of the face will be used as the deformable-surface experimental platform, and plastic, elastic, viscoelastic and fluid behaviors will be modeled on it. The topological complexity of facial models is the main challenge of this research. The linear elastic behavior of the mouth surface and the viscoelastic behavior of the facial skin will be explored in depth. In the final stage, the usability of these simulations and models will be demonstrated on human and cartoon facial examples, in comparison with previous research based on MoCap platforms.
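
For a concrete, if deliberately tiny, example of the elastic/viscoelastic behaviors discussed above, the sketch below integrates a single damped spring with explicit Euler; the parameters and the one-dimensional setup are toy assumptions, not the thesis' actual model.

```python
# Toy damped mass-spring 'skin fibre': elastic restoring force plus viscous
# damping, integrated with explicit Euler.
import numpy as np

def step(x, v, rest_length, k=50.0, damping=2.0, mass=0.1, dt=0.01):
    """Advance two particles connected by a damped spring by one time step."""
    stretch = (x[1] - x[0]) - rest_length
    force = k * stretch + damping * (v[1] - v[0])   # elastic + viscous terms
    a = np.array([force, -force]) / mass            # equal and opposite forces
    v = v + a * dt
    x = x + v * dt
    return x, v

# Pull the second particle away from rest and watch the fibre relax back.
x = np.array([0.0, 1.5])      # positions; rest length is 1.0
v = np.array([0.0, 0.0])
for _ in range(5):
    x, v = step(x, v, rest_length=1.0)
    print(x)
```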

© Copyright Porto Interactive Center 2017

Faculdade de Ciências da Universidade do Porto | Departamento de Ciência de Computadores | Rua de Campo Alegre 1021-1055, 4169-007 Porto, PORTUGAL
Tel: +351 220 402 966 | Email: info@portointeractivecenter.org