Vandeventer, Jason
Languages: English
Types: Doctoral thesis
Subjects: QA75
In this thesis, a novel approach for modelling 4D (3D dynamic) conversational interactions and synthesising highly realistic expression sequences is described.

To achieve these goals, a fully automatic, fast, and robust pre-processing pipeline was developed, along with an approach for tracking and inter-subject registration of 3D sequences (4D data). A method for modelling and representing sequences as single entities is also introduced; these sequences can be manipulated and used for synthesising new expression sequences. Classification experiments and perceptual studies were performed to validate the methods and models developed in this work.

To achieve the goals described above, a 4D database of natural, synced, dyadic conversations was captured. This database is the first of its kind in the world.

Another contribution of this thesis is the development of a novel method for modelling conversational interactions. Our approach takes into account the time-sequential nature of the interactions, and encompasses the characteristics of each expression in an interaction, as well as information about the interaction itself.

Classification experiments were performed to evaluate the quality of our tracking, inter-subject registration, and modelling methods. To evaluate our ability to model, manipulate, and synthesise new expression sequences, we conducted perceptual experiments: we manipulated modelled sequences by modifying their amplitudes, and had human observers rate the level of expression realism and image quality.

To evaluate our coupled modelling approach for conversational facial expression interactions, we performed a classification experiment that differentiated predicted frontchannel and backchannel sequences, using the original sequences in the training set. We also used the predicted backchannel sequences in a perceptual study in which human observers rated the similarity of the predicted and original sequences. The results of these experiments support our methods and our claim that we can produce 4D, highly realistic expression sequences that compete with state-of-the-art methods.
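The amplitude manipulation described above is commonly done in a linear (PCA-style) model space: a sequence is projected onto the model's modes of variation, its coefficients are scaled, and the result is reconstructed. The thesis abstract does not give the exact formulation, so the following is only a minimal illustrative sketch under that assumption; all function and variable names are hypothetical.

```python
# Hypothetical sketch: amplitude manipulation of a modelled expression
# sequence in a linear (PCA-style) model space. This is an assumption
# about the general technique, not the thesis's exact method.
import numpy as np

def build_linear_model(sequences):
    """Fit a linear model: mean plus principal modes of variation.

    sequences: (n_samples, n_features) array of vectorised sequences.
    Returns (mean, modes), where rows of `modes` are unit-norm modes.
    """
    mean = sequences.mean(axis=0)
    centred = sequences - mean
    # SVD of the centred data yields the principal modes directly.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt

def manipulate_amplitude(sequence, mean, modes, scale):
    """Project a sequence into model space, scale its coefficients
    (its 'amplitude'), and reconstruct."""
    coeffs = modes @ (sequence - mean)
    return mean + modes.T @ (coeffs * scale)

# Toy data: 10 "sequences", each flattened to 6 features.
rng = np.random.default_rng(0)
data = rng.normal(size=(10, 6))
mean, modes = build_linear_model(data)

# With scale 1.0 and full-rank modes, reconstruction is exact;
# scale > 1.0 would exaggerate the expression, scale < 1.0 attenuate it.
recon = manipulate_amplitude(data[0], mean, modes, 1.0)
print(np.allclose(recon, data[0]))  # True
```

Scaling the coefficients toward zero collapses the sequence toward the model mean, which is one simple way to generate the varying-realism stimuli used in perceptual studies of this kind.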
