




Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph Robert; Hu, Shi-Min (2015)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Languages: English
Types: Article
Subjects: QA75

A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L1 camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input, or the results of using stabilization only.
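The L1 camera path optimization the abstract refers to minimizes the L1 norms of the path's derivatives, which favors piecewise-constant and piecewise-linear motion (static shots and steady pans) while keeping the smoothed path close to the original so the stabilized frame stays inside the recorded footage. As an illustrative sketch only, not the authors' implementation: a single-pass, one-dimensional version can be posed as a linear program. The function name, derivative weights, and crop bound below are assumptions for the example.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import coo_matrix

def l1_smooth_path(c, crop=8.0, w1=1.0, w2=10.0):
    """Sketch of L1 path smoothing in 1-D.

    Finds a path p minimizing  w1*sum|p'| + w2*sum|p''|
    subject to |p[t] - c[t]| <= crop  (the virtual crop window
    must stay inside the original frame). Absolute values are
    handled with slack variables e1, e2 in a linear program.
    """
    n = len(c)
    m1, m2 = n - 1, n - 2
    nv = n + m1 + m2                                # x = [p, e1, e2]
    cost = np.concatenate([np.zeros(n),
                           w1 * np.ones(m1),
                           w2 * np.ones(m2)])

    rows, cols, vals, rhs = [], [], [], []
    r = 0
    def add_row(coefs, b):                          # one inequality: coefs . x <= b
        nonlocal r
        for j, v in coefs:
            rows.append(r); cols.append(j); vals.append(v)
        rhs.append(b); r += 1

    for t in range(m1):                             # |p[t+1]-p[t]| <= e1[t]
        add_row([(t + 1, 1), (t, -1), (n + t, -1)], 0)
        add_row([(t + 1, -1), (t, 1), (n + t, -1)], 0)
    for t in range(m2):                             # |p[t+2]-2p[t+1]+p[t]| <= e2[t]
        add_row([(t + 2, 1), (t + 1, -2), (t, 1), (n + m1 + t, -1)], 0)
        add_row([(t + 2, -1), (t + 1, 2), (t, -1), (n + m1 + t, -1)], 0)

    A = coo_matrix((vals, (rows, cols)), shape=(r, nv))
    # p[t] is boxed around the input path; slacks are nonnegative.
    bounds = [(ci - crop, ci + crop) for ci in c] + [(0, None)] * (m1 + m2)
    res = linprog(cost, A_ub=A, b_ub=np.array(rhs),
                  bounds=bounds, method="highs")
    return res.x[:n]
```

The paper's actual method extends this kind of framework to 2-D motion models, multiple simultaneous constraints, and a second optimization pass for high-level goals such as distraction removal; the sketch only shows the low-level smoothing idea.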

Fang-Lue Zhang is a postdoctoral researcher at Tsinghua University. He received his B.S. degree from Zhejiang University in 2009 and his Ph.D. degree from Tsinghua University in 2015. His research interests include computer graphics, image processing and enhancement, image and video analysis, and computer vision.

Jue Wang is a Principal Research Scientist at Adobe Research. He received his B.E. (2000) and M.Sc. (2003) degrees from the Department of Automation, Tsinghua University, Beijing, China, and his Ph.D. (2007) in Electrical Engineering from the University of Washington, Seattle, WA, USA. He received the Microsoft Research Fellowship and the Yang Research Award from the University of Washington in 2006, and joined Adobe Research in 2007 as a research scientist. His research interests include image and video processing, computational photography, and computer graphics and vision. He is a senior member of the IEEE and a member of the ACM.
