I. Akhter, T. Simon, S. Khan, I. Matthews, and Y. Sheikh, Bilinear spatiotemporal basis models, ACM Transactions on Graphics, vol.31, issue.2, pp.1-12, 2012.

K. Anjyo, H. Todo, and J. P. Lewis, A Practical Approach to Direct Manipulation Blendshapes, Journal of Graphics Tools, vol.16, pp.160-176, 2012.

V. Blanz and T. Vetter, A Morphable Model for the Synthesis of 3D Faces, Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99), pp.187-194, 1999.

Y. Cao, P. Faloutsos, and F. Pighin, Unsupervised learning for speech motion editing, Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on Computer animation. Eurographics Association, pp.225-231, 2003.

O. Cetinaslan, V. Orvalho, and J. Lewis, Sketch-Based Controllers for Blendshape Facial Animation, Eurographics (Short Papers), pp.25-28, 2015.

J. Chi, S. Gao, and C. Zhang, Interactive facial expression editing based on spatio-temporal coherency, The Visual Computer, vol.33, pp.981-991, 2017.

D. Clevert, T. Unterthiner, and S. Hochreiter, Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), 2015.

D. Cudeiro, T. Bolkart, C. Laidlaw, A. Ranjan, and M. Black, Capture, Learning, and Synthesis of 3D Speaking Styles. Computer Vision and Pattern Recognition (CVPR), p.11, 2019.

Dynamixyz, Performer, 2019.

G. Fanelli, J. Gall, H. Romsdorfer, T. Weise, and L. Van Gool, A 3-D Audio-Visual Corpus of Affective Communication, IEEE Transactions on Multimedia, vol.12, pp.591-598, 2010.

M. Gleicher, Motion editing with spacetime constraints, Proceedings of the 1997 symposium on Interactive 3D graphics -SI3D '97, p.139, 1997.

I. Habibie, D. Holden, J. Schwarz, J. Yearsley, and T. Komura, A Recurrent Variational Autoencoder for Human Motion Synthesis, IEEE Computer Graphics and Applications, vol.37, p.4, 2017.

D. Holden, J. Saito, and T. Komura, A deep learning framework for character motion synthesis and editing, ACM Transactions on Graphics, vol.35, issue.4, pp.1-11, 2016.

D. Holden, J. Saito, T. Komura, and T. Joyce, Learning motion manifolds with convolutional autoencoders, ACM SIGGRAPH Asia 2015 Technical Briefs, pp.1-4, 2015.

P. Joshi, W. C. Tien, M. Desbrun, and F. Pighin, Learning Controls for Blend Shape Based Realistic Facial Animation, ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp.187-192, 2003.

L. Kovar, M. Gleicher, and F. Pighin, Motion graphs, ACM SIGGRAPH, p.482, 2002.

M. Lau, J. Chai, Y. Xu, and H. Shum, Face poser: Interactive modeling of 3D facial expressions using facial priors, ACM Transactions on Graphics, vol.29, pp.1-17, 2009.

J. Lewis and K. Anjyo, Direct Manipulation Blendshapes, IEEE Computer Graphics and Applications, vol.30, pp.42-50, 2010.

H. Li, B. Adams, L. J. Guibas, and M. Pauly, Robust Single-view Geometry and Motion Reconstruction, ACM SIGGRAPH Asia 2009 Papers (SIGGRAPH Asia '09), Article 175, 2009.

H. Li, J. Yu, Y. Ye, and C. Bregler, Realtime Facial Animation with On-the-fly Correctives, ACM Trans. Graph, vol.32, 2013.

Q. Li and Z. Deng, Orthogonal-Blendshape-Based Editing System for Facial Motion Capture Data, IEEE Computer Graphics and Applications, vol.28, pp.76-82, 2008.

L. Ma and Z. Deng, Real-Time Facial Expression Transformation for Monocular RGB Video, Computer Graphics Forum, 2018.

X. Ma, B. H. Le, and Z. Deng, Style learning and transferring for facial animation editing, Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. ACM, pp.123-132, 2009.

J. Martinez, M. J. Black, and J. Romero, On human motion prediction using recurrent neural networks, 2017.

R. Blanco i Ribera, E. Zell, J. P. Lewis, J. Noh, and M. Botsch, Facial retargeting with automatic range of motion alignment, ACM Transactions on Graphics, vol.36, issue.4, pp.1-12, 2017.

O. Ronneberger, P. Fischer, and T. Brox, U-Net: Convolutional Networks for Biomedical Image Segmentation, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, vol.9351, pp.234-241, 2015.

Y. Seol, J. P. Lewis, J. Seo, B. Choi, K. Anjyo et al., Spacetime expression cloning for blendshapes, ACM Transactions on Graphics (TOG), vol.31, p.14, 2012.

R. W. Sumner and J. Popović, Deformation transfer for triangle meshes, ACM Trans. Graph, vol.23, pp.399-405, 2004.

S. Taylor, T. Kim, Y. Yue, M. Mahler, J. Krahe et al., A Deep Learning Approach for Generalized Speech Animation, ACM Trans. Graph, vol.36, issue.4, pp.1-93, 2017.

J. R. Tena, F. De la Torre, and I. Matthews, Interactive Region-based Linear 3D Face Models, ACM SIGGRAPH 2011 Papers (SIGGRAPH '11), Article 76, 2011.

T. Weise, S. Bouaziz, H. Li, and M. Pauly, Realtime performance-based facial animation, ACM Transactions on Graphics (TOG), vol.30, p.77, 2011.

L. Zhang, N. Snavely, B. Curless, and S. Seitz, Spacetime Faces: High Resolution Capture for Modeling and Animation, ACM Trans. Graph, vol.23, pp.548-558, 2004.

Q. Zhang, Z. Liu, and H. Shum, Geometry-driven photorealistic facial expression synthesis, Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp.177-186, 2003.

Y. Zhou, Z. Xu, C. Landreth, E. Kalogerakis, S. Maji et al., Visemenet: Audio-driven animator-centric speech animation, ACM Transactions on Graphics (TOG), vol.37, p.161, 2018.