Lu, Shao-Ping; Zhang, Song-Hai; Wei, Jin; Hu, Shi-Min; Martin, Ralph Robert (2013)
Publisher: IEEE
Languages: English
Types: Article
Subjects: QA75

Classified by OpenAIRE into:
ACM Ref: Computing Methodologies > Image Processing and Computer Vision
We present a video editing technique based on changing the timelines of individual objects in video, which leaves them in their original places but puts them at different times. This allows the production of object-level slow motion effects, fast motion effects, or even time reversal. This is more flexible than simply applying such effects to whole frames, as new relationships between objects can be created. As we restrict object interactions to the same spatial locations as in the original video, our approach can produce high-quality results using only coarse matting of video objects. Coarse matting can be done efficiently using automatic video object segmentation, avoiding tedious manual matting. To design the output, the user interactively indicates the desired new life spans of objects, and may also change the overall running time of the video. Our method rearranges the timelines of objects in the video whilst applying appropriate object interaction constraints. We demonstrate that, while this editing technique is somewhat restrictive, it still allows many interesting results.
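The following is a minimal sketch of the core idea, not the authors' implementation: each object keeps its original spatial location but is composited from a different source frame, according to a per-object time mapping. It assumes the frames and coarse per-object mattes are available as NumPy arrays, that a static background can be approximated by a per-pixel median, and that the time mappings are supplied directly by the user; the object names and mapping functions in the usage comment are purely illustrative. The paper's method additionally optimizes the time mappings under object interaction constraints, which this sketch omits.

    import numpy as np

    def remap_object_timelines(frames, masks, time_maps, out_length):
        """Composite each object from its own remapped source frame.

        frames    : (T, H, W, 3) uint8 array, the input video.
        masks     : dict name -> (T, H, W) bool array, coarse per-object mattes.
        time_maps : dict name -> callable mapping an output frame index to a
                    source frame index (e.g. slow motion, reversal).
        out_length: number of output frames.
        """
        T = frames.shape[0]
        # Crude static-background estimate; a real system would use inpainting
        # or a clean plate.
        background = np.median(frames, axis=0).astype(frames.dtype)
        output = np.empty((out_length,) + frames.shape[1:], dtype=frames.dtype)

        for t_out in range(out_length):
            frame = background.copy()
            for name, mask in masks.items():
                # Look up where this object is in its own (remapped) timeline.
                t_src = int(np.clip(time_maps[name](t_out), 0, T - 1))
                m = mask[t_src]
                # The object stays in its original place, but at a new time.
                frame[m] = frames[t_src][m]
            output[t_out] = frame
        return output

    # Example usage (hypothetical object names and mappings):
    # play "car" at half speed and run "person" in reverse.
    # out = remap_object_timelines(
    #     frames, masks,
    #     {"car": lambda t: t // 2, "person": lambda t: len(frames) - 1 - t},
    #     out_length=len(frames))
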
Ralph R. Martin obtained his PhD in 1983 from Cambridge University. Since then he has been at Cardiff University, as Professor since 2000, where he leads the Visual Computing research group. His publications include over 200 papers and 10 books covering such topics as solid modeling, surface modeling, reverse engineering, intelligent sketch input, mesh processing, video processing, computer graphics, vision-based geometric inspection, and geometric reasoning. He is a Fellow of the Learned Society of Wales, the Institute of Mathematics and its Applications, and the British Computer Society.
