Seminars

PIC holds weekly seminars where publications relevant to our research fields are presented and discussed. Feel free to drop by…

Sleight of Hand: Perception of Finger Motion from Reduced Marker Sets

Authors: Ludovic Hoyet, Kenneth Ryall, Rachel McDonnell, Carol O’Sullivan

Presented by: Pedro Rodrigues | 07/03/2013

ABSTRACT: Subtle animation details such as finger or facial movements help to bring virtual characters to life and increase their appeal. However, it is not always possible to capture finger animations simultaneously with full-body motion, due to limitations of the setup or tight production schedules. Therefore, hand motions are often either omitted, manually created by animators, or captured during a separate session and spliced with full body animation. In this paper, we investigate the perceived fidelity of hand animations where all the degrees of freedom of the hands are computed from reduced marker sets. In a set of perceptual experiments, we found that finger motions reconstructed with inverse kinematics from a reduced marker set of eight markers per hand are perceived to be very similar to the corresponding motions reconstructed using a full set of twenty markers. We demonstrate how using this reduced set of eight large markers enabled us to capture the finger and full-body motions of two actors performing a range of relatively unconstrained actions using a 13-camera motion capture system. This serves to simplify the capture process and to significantly reduce the time for cleanup, while preserving the natural biological movements of the hands relative to the actions performed.
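
The reduced-marker reconstruction described above relies on inverse kinematics to recover all finger degrees of freedom from a handful of marker positions. As a rough illustration of that idea (not the authors' solver), the sketch below runs a cyclic coordinate descent (CCD) solve that bends a planar three-bone finger chain toward a single fingertip marker; the bone lengths and marker position are made-up values.

    import numpy as np

    def ccd_finger_ik(lengths, target, iters=50):
        """Cyclic coordinate descent: bend a planar joint chain toward a target.

        lengths : bone lengths from knuckle to fingertip
        target  : 2D position of the fingertip marker
        Returns joint angles (radians, relative to the previous bone).
        """
        angles = np.zeros(len(lengths))

        def forward(angles):
            # Positions of every joint, starting at the origin (the knuckle).
            pts = [np.zeros(2)]
            total = 0.0
            for a, l in zip(angles, lengths):
                total += a
                pts.append(pts[-1] + l * np.array([np.cos(total), np.sin(total)]))
            return pts

        for _ in range(iters):
            for j in reversed(range(len(lengths))):
                pts = forward(angles)
                tip, pivot = pts[-1], pts[j]
                # Rotate joint j so the fingertip moves toward the marker.
                a = np.arctan2(*(tip - pivot)[::-1])
                b = np.arctan2(*(target - pivot)[::-1])
                angles[j] += b - a
        return angles

    # Hypothetical proximal/middle/distal phalanx lengths (cm) and marker position.
    print(ccd_finger_ik(np.array([4.0, 2.5, 2.0]), np.array([6.0, 3.0])))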

Prakash: Lighting-Aware Motion Capture Using Photosensing Markers and Multiplexed Illuminators

Authors: Ramesh Raskar, Hideaki Nii, Bert deDecker, Yuki Hashimoto, Jay Summet, Dylan Moore, Yong Zhao, Jonathan Westhues, Paul Dietz, John Barnwell, Shree Nayar, Masahiko Inami, Philippe Bekaert, Michael Noland, Vlad Branzoi and Erich Bruns

Presented by: Jorge Ribeiro | 20/12/2012

ABSTRACT:  In this paper, we present a high speed optical motion capture method that can measure three dimensional motion, orientation, and incident illumination at tagged points in a scene. We use tracking tags that work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. Our system supports an unlimited number of tags in a scene, with each tag uniquely identified to eliminate marker reacquisition issues. Our tags also provide incident illumination data which can be used to match scene lighting when inserting synthetic elements. The technique is therefore ideal for on-set motion capture or real-time broadcasting of virtual sets.
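
The tags in Prakash decode their own location from temporally multiplexed illumination rather than being imaged by cameras. The sketch below illustrates the general idea with Gray-coded binary stripe patterns, a common space-labeling scheme; the paper's actual optical coding and hardware are not reproduced here, and the bit sequence is invented.

    def decode_tag_position(bit_sequence):
        """Decode a photosensor's on/off readings under Gray-coded illumination.

        bit_sequence: most-significant bit first, one bit per projected pattern.
        Returns the integer stripe index the tag sits in.
        """
        # Gray code -> binary: b[0] = g[0], b[i] = b[i-1] XOR g[i].
        value = 0
        prev = 0
        for g in bit_sequence:
            prev ^= g
            value = (value << 1) | prev
        return value

    # A tag that recorded this 10-bit reading decodes to one stripe index (0-1023).
    print(decode_tag_position([1, 0, 1, 1, 0, 1, 0, 1, 1, 1]))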

The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression

Authors: Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih and Zara Ambadar

Presented by: Gustavo Augusto | 22/11/2012

ABSTRACT: In 2000, the Cohn-Kanade (CK) database was released for the purpose of promoting research into automatically detecting individual facial expressions. Since then, the CK database has become one of the most widely used test-beds for algorithm development and evaluation. During this period, three limitations have become apparent: 1) While AU codes are well validated, emotion labels are not, as they refer to what was requested rather than what was actually performed, 2) The lack of a common performance metric against which to evaluate new algorithms, and 3) Standard protocols for common databases have not emerged. As a consequence, the CK database has been used for both AU and emotion detection (even though labels for the latter have not been validated), comparison with benchmark algorithms is missing, and use of random subsets of the original database makes meta-analyses difficult. To address these and other concerns, we present the Extended Cohn-Kanade (CK+) database. The number of sequences is increased by 22% and the number of subjects by 27%. The target expression for each sequence is fully FACS coded and emotion labels have been revised and validated. In addition to this, non-posed sequences for several types of smiles and their associated metadata have been added. We present baseline results using Active Appearance Models (AAMs) and a linear support vector machine (SVM) classifier using leave-one-out subject cross-validation for both AU and emotion detection for the posed data. The emotion and AU labels, along with the extended image data and tracked landmarks, will be made available July 2010.
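
The baseline described above combines AAM-derived features with a linear SVM evaluated by leave-one-subject-out cross-validation. A minimal sketch of that evaluation protocol follows, with random arrays standing in for the AAM shape/appearance features and labels; it shows only how the subject-wise folds are formed, not the authors' actual feature extraction.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.svm import LinearSVC

    # Placeholder data standing in for AAM shape/appearance features:
    # 120 sequences, 30 features each, 7 emotion labels, 25 subjects.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 30))
    y = rng.integers(0, 7, size=120)          # emotion label per sequence
    subjects = rng.integers(0, 25, size=120)  # subject id per sequence

    # Leave-one-subject-out: every fold holds out all sequences of one subject.
    scores = cross_val_score(LinearSVC(max_iter=10000), X, y,
                             groups=subjects, cv=LeaveOneGroupOut())
    print("mean accuracy over held-out subjects:", scores.mean())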

3D Material Style Transfer

Authors: Chuong H. Nguyen, Tobias Ritschel, Karol Myszkowski, Elmar Eisemann and Hans-Peter Seidel

Presented by: Pedro Mendes | 18/11/2012

ABSTRACT:  This work proposes a technique to transfer the material style or mood from a guide source such as an image or video onto a target 3D scene. It formulates the problem as a combinatorial optimization of assigning discrete materials extracted from the guide source to discrete objects in the target 3D scene. The assignment is optimized to fulfill multiple goals: overall image mood based on several image statistics; spatial material organization and grouping as well as geometric similarity between objects that were assigned to similar materials. To be able to use common uncalibrated images and videos with unknown geometry and lighting as guides, a material estimation derives perceptually plausible reflectance, specularity, glossiness, and texture. Finally, results produced by our method are compared to manual material assignments in a perceptual study.
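
The core of the method is a combinatorial search: discrete materials extracted from the guide are assigned to discrete target objects so that a multi-term cost (image mood, grouping, geometric similarity) is minimized. The sketch below mimics only that search structure, using a toy two-term cost and a random-swap local search; all numbers and the cost itself are illustrative, not the paper's objective.

    import random

    # Toy stand-ins: each guide material is just a scalar "brightness", and each
    # target object has a size used for a simple grouping term.
    materials = [0.1, 0.35, 0.6, 0.9]           # extracted from the guide image
    objects   = [1.0, 1.1, 3.0, 3.2, 0.4]       # relative object sizes
    guide_mean = 0.5                             # desired overall image brightness

    def cost(assign):
        mood = abs(sum(materials[a] for a in assign) / len(assign) - guide_mean)
        # Grouping: geometrically similar objects should share materials.
        group = sum(1.0 for i in range(len(objects))
                    for j in range(i + 1, len(objects))
                    if abs(objects[i] - objects[j]) < 0.3 and assign[i] != assign[j])
        return mood + 0.1 * group

    assign = [random.randrange(len(materials)) for _ in objects]
    best = cost(assign)
    for _ in range(2000):                        # random-swap local search
        i = random.randrange(len(objects))
        old = assign[i]
        assign[i] = random.randrange(len(materials))
        c = cost(assign)
        if c <= best:
            best = c
        else:
            assign[i] = old                      # reject the swap
    print(assign, best)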

Coupled 3D Reconstruction of Sparse Facial Hair and Skin

Authors: Thabo Beeler, Bernd Bickel, Gioacchino Noris, Paul Beardsley, Steve Marschner, Robert W. Sumner and Markus Gross

Presented by: Ivan Silva | 18/10/2012

ABSTRACT: Although facial hair plays an important role in individual expression, facial-hair reconstruction is not addressed by current face-capture systems. Our research addresses this limitation with an algorithm that treats hair and skin surface capture together in a coupled fashion so that a high-quality representation of hair fibers as well as the underlying skin surface can be reconstructed. We propose a passive, camera-based system that is robust against arbitrary motion since all data is acquired within the time period of a single exposure. Our reconstruction algorithm detects and traces hairs in the captured images and reconstructs them in 3D using a multiview stereo approach. Our coupled skin-reconstruction algorithm uses information about the detected hairs to deliver a skin surface that lies underneath all hairs irrespective of occlusions. In dense regions like eyebrows, we employ a hair-synthesis method to create hair fibers that plausibly match the image data. We demonstrate our scanning system on a number of individuals and show that it can successfully reconstruct a variety of facial-hair styles together with the underlying skin surface.
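
Hair detection and tracing is the first stage of the reconstruction pipeline. Loosely in that spirit (and not the authors' algorithm), the sketch below greedily traces a single fiber in image space by following a per-pixel orientation field while an oriented-filter response stays above a threshold; the response and orientation maps here are synthetic.

    import numpy as np

    def trace_hair(response, orientation, seed, step=1.0, max_len=200, thresh=0.2):
        """Greedily trace one hair fiber in image space.

        response    : 2D array of per-pixel 'hairness' (e.g. oriented-filter response)
        orientation : 2D array of local hair direction in radians
        seed        : (row, col) starting pixel with a strong response
        """
        pts = [np.array(seed, dtype=float)]
        for _ in range(max_len):
            r, c = np.round(pts[-1]).astype(int)
            if not (0 <= r < response.shape[0] and 0 <= c < response.shape[1]):
                break
            if response[r, c] < thresh:
                break
            theta = orientation[r, c]
            pts.append(pts[-1] + step * np.array([np.sin(theta), np.cos(theta)]))
        return np.array(pts)

    # Tiny synthetic example: a horizontal 'hair' across the middle row.
    resp = np.zeros((20, 20)); resp[10, :] = 1.0
    orient = np.zeros((20, 20))                 # 0 rad = step along +columns
    print(len(trace_hair(resp, orient, seed=(10, 2))))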

Graph-based Fire Synthesis

Authors: Yubo Zhang, Carlos D. Correa, and Kwan-Liu Ma

Presented by: Catarina Runa | 19/07/2012

ABSTRACT: We present a novel graph-based data-driven technique for cost-effective fire modeling. This technique allows composing long animation sequences using a small number of short simulations. While traditional techniques such as motion graphs and motion blending work well for character motion synthesis, they cannot be trivially applied to fluids to produce results with physically consistent properties, which are crucial to the visual appearance of fluids. Motivated by the motion graph technique used in character animations, we introduce a new type of graph which can be applied to create various fire phenomena. Each graph node consists of a group of compact spatial-temporal flow pathlines instead of a set of volumetric state fields. Consequently, achieving smooth transitions between discontinuous graph nodes for modeling turbulent fires becomes feasible and computationally efficient. The synthesized particle flow results allow direct particle control, which is much more flexible than a full volumetric representation of the simulation output. The accompanying video shows the versatility and potential power of this new technique for synthesizing real-time complex fire at a quality comparable to production animations.
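
As with motion graphs for characters, synthesis amounts to walking a graph whose nodes are short clips and whose edge costs measure how well one clip's end matches another's start. The sketch below shows that skeleton with random per-frame feature vectors standing in for the paper's pathline nodes; it ignores the physically consistent blending that makes the real method work for fire.

    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in 'clips': each is a short sequence of per-frame feature vectors
    # (in the paper a node holds compact spatio-temporal pathlines, not fields).
    clips = [rng.normal(size=(8, 4)) for _ in range(5)]

    def transition_cost(i, j):
        # How badly clip j's first frame matches clip i's last frame.
        return float(np.linalg.norm(clips[i][-1] - clips[j][0]))

    def synthesize(start, n_clips):
        # Greedy walk over the clip graph to assemble a long sequence.
        order = [start]
        for _ in range(n_clips - 1):
            costs = [transition_cost(order[-1], j) for j in range(len(clips))]
            costs[order[-1]] = np.inf            # avoid trivially repeating a clip
            order.append(int(np.argmin(costs)))
        return order, np.concatenate([clips[k] for k in order])

    order, sequence = synthesize(start=0, n_clips=6)
    print(order, sequence.shape)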

Weta Digital

Presented by: Bruno Oliveira | 28/06/2012

Weta Digital is a world-leading visual effects company based in Wellington, New Zealand. We provide a full suite of digital production services for feature films and high-end commercials, from concept design to cutting-edge 3D animation.

Weta was formed in 1993 by a group of young New Zealand filmmakers including Peter Jackson, Richard Taylor and Jamie Selkirk. It later split into two specialised halves – Weta Digital (digital effects) and Weta Workshop (physical effects).

One of our first projects was to provide visual effects for Peter Jackson’s film Heavenly Creatures. We went on to work digital magic on Peter’s blockbuster movies The Lord of the Rings trilogy and King Kong.

We also work with other Hollywood directors, providing digital effects for box office hits like I, Robot, X-Men: The Last Stand, Eragon, Bridge to Terabithia, Fantastic Four: Rise of the Silver Surfer, The Water Horse, Jumper, The Day the Earth Stood Still, District 9 and The Lovely Bones.

Most recently, Weta Digital won an Academy Award for Best Visual Effects for James Cameron’s Avatar. Our work on the film involved using a new camera system and shooting on a virtual stage, and new supporting software and a new production pipeline were developed to reach a new level of creative and technological excellence, delivering the film in 3D.

Our reputation for creativity and delivery keeps us in high demand with some of the world’s leading film studios.

A Grounded Investigation of Game Immersion

Authors: Emily Brown and Paul Cairns

Presented by: Teresa Vieira | 24/05/2012

ABSTRACT: The term immersion is widely used to describe games but it is not clear what immersion is or indeed if people are using the same word consistently. This paper describes work done to define immersion based on the experiences of gamers. Grounded Theory is used to construct a robust division of immersion into the three levels: engagement, engrossment and total immersion. This division alone suggests new lines for investigating immersion and transferring it into software domains other than games.

Projection micro-stereolithography using digital micro-mirror dynamic mask

Authors: C. Sun, N. Fang, D.M. Wu and X. Zhang

Presented by: Henrique Serro | 17/05/2012

ABSTRACT: We present in this paper the development of a high-resolution projection micro-stereolithography (PµSL) process by using the Digital Micromirror Device (DMD™, Texas Instruments) as a dynamic mask. This unique technology provides a parallel fabrication of complex three-dimensional (3D) microstructures used for micro electro-mechanical systems (MEMS). Based on the understanding of underlying mechanisms, a process model has been developed with all critical parameters obtained from experimental measurement. By coupling the experimental measurement and the process model, the photon-induced curing behavior of the resin has been quantitatively studied. The role of UV doping has thereafter been justified, as it can effectively reduce the curing depth without compromising the chemical property of the resin. The fabrication of complex 3D microstructures, such as a matrix and a micro-spring array, with the smallest feature of 0.6 µm, has been demonstrated.
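
The process model mentioned above predicts how deep the resin cures for a given exposure, which is what UV doping is tuning. A minimal sketch using the standard stereolithography working curve (cure depth = Dp · ln(E / Ec)) is shown below; the penetration depths and exposure values are invented, not the paper's measurements.

    import numpy as np

    def cure_depth(E, Dp, Ec):
        """Working curve: cured depth for a given exposure dose.

        E  : exposure dose at the resin surface (mJ/cm^2)
        Dp : resin penetration depth (um); UV absorber doping lowers this
        Ec : critical exposure below which the resin does not gel (mJ/cm^2)
        """
        E = np.asarray(E, dtype=float)
        return np.where(E > Ec, Dp * np.log(E / Ec), 0.0)

    # Hypothetical resins: undoped vs. UV-doped (smaller penetration depth).
    doses = [10, 20, 40, 80]
    print(cure_depth(doses, Dp=150.0, Ec=8.0))   # undoped
    print(cure_depth(doses, Dp=40.0,  Ec=8.0))   # doped: shallower cure depth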

Ray Tracing Point Set Surfaces

Authors: Anders Adamson and Marc Alexa

Presented by: Ozan Çetinaslan | 10/05/2012

ABSTRACT:  Point set surfaces are a smooth manifold surface approximation from a set of sample points. The surface definition is based on a projection operation that constructs local polynomial approximations and respects a minimum feature size. We present techniques for ray tracing point set surfaces. For the computation of ray-surface intersection the properties of the projection operation are exploited: The surface is enclosed by a union of minimum feature size spheres. A ray is intersected with the spheres first and inside the spheres with local polynomial approximations. Our results show that 2–3 projections are sufficient to accurately intersect a ray with the surface.
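
Intersection, as described, proceeds in two stages: clip the ray against the enclosing minimum-feature-size spheres, then intersect it with a local polynomial approximation inside each sphere. The sketch below keeps that structure but substitutes a fixed analytic implicit surface for the local polynomial and uses sampling plus bisection along the ray; the sphere radius and surface are illustrative.

    import numpy as np

    def ray_sphere(o, d, center, radius):
        """Return the entry/exit ray parameters of a sphere, or None if missed."""
        oc = o - center
        b = np.dot(oc, d)
        disc = b * b - (np.dot(oc, oc) - radius * radius)
        if disc < 0.0:
            return None
        s = np.sqrt(disc)
        return -b - s, -b + s

    def intersect(o, d, center, radius, f, n_samples=64, refine=30):
        """Clip the ray to the sphere, march to a sign change, then bisect."""
        hit = ray_sphere(o, d, center, radius)
        if hit is None:
            return None
        t0, t1 = hit
        ts = np.linspace(t0, t1, n_samples)
        vals = [f(o + t * d) for t in ts]
        for a, b_, fa, fb in zip(ts[:-1], ts[1:], vals[:-1], vals[1:]):
            if np.sign(fa) != np.sign(fb):
                lo, hi = a, b_
                for _ in range(refine):          # bisection on the ray parameter
                    mid = 0.5 * (lo + hi)
                    if np.sign(f(o + lo * d)) == np.sign(f(o + mid * d)):
                        lo = mid
                    else:
                        hi = mid
                return o + 0.5 * (lo + hi) * d
        return None

    # Stand-in implicit surface: a unit sphere, enclosed by a slightly larger
    # "minimum feature size" sphere.
    f = lambda p: np.dot(p, p) - 1.0
    print(intersect(np.array([0.0, 0.0, 3.0]), np.array([0.0, 0.0, -1.0]),
                    center=np.zeros(3), radius=1.2, f=f))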

3D Facial Feature Detection Using Iso-Geodesic Stripes and Shape-Index Integral Projection

Authors: James Allen, Nikhil Karkera and Lijun Yin

Presented by: Nuno Barbosa | 05/04/2012

ABSTRACT: Research on 3D face models relies on extraction of feature points for segmentation, registration, or recognition. Robust feature point extraction from pure geometric surface data is still a challenging issue. In this project, we attempt to automatically extract feature points from 3D range face models without texture information. Human facial surface is overall convex in shape and a majority of the feature points are contained in concave regions within this generally convex structure. These “feature-rich” regions occupy a relatively small portion of the entire face surface area. We propose a novel approach that looks for features only in regions with a high density of concave points and ignores all convex regions. We apply an iso-geodesic stripe approach to limit the search region, and apply the shape-index integral projection to locate the features of interest. Finally, eight individual features (i.e., inner corners of eye, outer corners of eye, nose sides, and outer lip corners) are detected on 3D range models. The algorithm is evaluated on publicly available 3D databases and achieved over 90% accuracy on average.
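
Two ingredients of the pipeline are easy to sketch in isolation: the shape index, which maps principal curvatures to a scalar separating concave from convex surface patches, and the integral projection, which sums a candidate mask along rows and columns to localise features. The code below uses a Koenderink-style shape index; the exact formulation, curvature sign convention, and thresholds in the paper may differ, and the range-image patch is synthetic.

    import numpy as np

    def shape_index(k1, k2):
        """Koenderink-style shape index in [-1, 1] from principal curvatures.

        k1, k2 : arrays of principal curvatures with k1 >= k2. Which end of the
        range is 'concave' depends on the curvature sign convention in use.
        """
        k1, k2 = np.asarray(k1, float), np.asarray(k2, float)
        denom = np.where(np.abs(k1 - k2) < 1e-12, 1e-12, k1 - k2)
        return (2.0 / np.pi) * np.arctan((k1 + k2) / denom)

    def integral_projection(mask):
        """Row/column sums of a binary candidate mask on a range image.
        Peaks in these 1D profiles localise feature-rich rows and columns."""
        mask = np.asarray(mask, float)
        return mask.sum(axis=1), mask.sum(axis=0)

    # Toy example: a flat, slightly convex patch with one concave blob
    # (negative principal curvatures under this example's convention).
    k1 = np.full((8, 8), 0.05); k2 = np.full((8, 8), 0.02)
    k1[3:5, 2:4] = -0.02; k2[3:5, 2:4] = -0.05
    si = shape_index(k1, k2)
    rows, cols = integral_projection(si < 0.0)
    print(rows.argmax(), cols.argmax())          # approximate feature location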

The Making of Toy Story

Authors: Mark Henne, Hal Hickel, Ewan Johnson and Sonoko Konishi

Presented by: Pedro Bastos | 29/03/2012

ABSTRACT: Toy Story is the first full-length feature film produced entirely using the technology of computer animation. The main characters, Sheriff Woody and Space Ranger Buzz Lightyear, are toys that come to life when humans aren’t around. Their story is one of rivalry, challenges, teamwork and redemption. Making this film required four years of effort, from writing the story and script, to illustrated storyboards, through modeling, animation, lighting, rendering, and filming. This paper examines the various processes involved in producing the film.

An Extensible Framework for Interactive Facial Animation with Facial Expressions, Lip Synchronization and Eye Behavior

Authors: Rossana B. Queiroz, Marcelo Cohen, and Soraia R. Musse

Presented by: José Serra | 22/03/2012

ABSTRACT: In this article we describe our approach to generating convincing and empathetic facial animation. Our goal is to develop a robust facial animation platform that is usable and can be easily extended. We also want to facilitate the integration of research in the area and to directly incorporate the characters in interactive applications such as embodied conversational agents and games. We have developed a framework capable of easily animating MPEG-4 parameterized faces through high-level descriptions of facial actions and behaviors. The animations can be generated in real time for interactive applications. We present some case studies that integrate computer vision techniques in order to provide interaction between the user and a character that responds with different facial actions according to events in the application.
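
At the heart of such a framework, high-level facial actions ultimately resolve to parameterized vertex displacements on the face mesh. The sketch below illustrates that final step with a toy mesh and two made-up actions loosely modeled on MPEG-4 FAP names; real FAPs are normalised by face-specific FAPU units and drive far denser control regions.

    import numpy as np

    # Toy face mesh: a handful of vertices (x, y, z).
    vertices = np.array([[0.0, 1.0, 0.0],    # upper lip midpoint
                         [0.0, 0.8, 0.0],    # lower lip midpoint
                         [0.3, 0.9, 0.0]])   # mouth corner

    # Each "FAP-like" action moves a set of vertices along a direction, scaled
    # by a per-vertex influence weight (all values here are illustrative).
    actions = {
        "open_jaw":          {"dir": np.array([0.0, -1.0, 0.0]),
                              "weights": {1: 1.0, 2: 0.4}},
        "stretch_cornerlip": {"dir": np.array([1.0, 0.0, 0.0]),
                              "weights": {2: 1.0}},
    }

    def apply_actions(vertices, actions, amounts):
        out = vertices.copy()
        for name, amount in amounts.items():
            act = actions[name]
            for vi, w in act["weights"].items():
                out[vi] += amount * w * act["dir"]
        return out

    # A slight jaw opening combined with a mild stretch of one mouth corner.
    print(apply_actions(vertices, actions,
                        {"open_jaw": 0.2, "stretch_cornerlip": 0.1}))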
