Languages: English
Types: Doctoral thesis
Subjects: TA

Classified by OpenAIRE into

ACM Ref: ComputingMethodologies_COMPUTERGRAPHICS, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION
The rational-operator-based approach to depth from defocus (DfD) using a pill-box point spread function (PSF) enables texture-invariant three-dimensional (3D) surface reconstruction. However, the pill-box PSF produces errors when the amount of lens diffraction and aberration varies. This thesis proposes two DfD methods: one using the Gaussian PSF, which addresses the situation where diffraction and aberrations are dominant, and a second based on the generalised Gaussian PSF, which deals with any level of the problem. The accuracy of DfD can also be severely reduced by elliptical lens distortion. This thesis therefore presents two correction methods, correction by distortion cancellation (CDC) and correction by least-squares fit (CLSF). Each method is followed by a smoothing algorithm to address the low-texture problem of DfD.

Most existing human activity recognition systems pay little attention to an effective way of obtaining training silhouettes. This thesis presents an algorithm to obtain silhouettes from any view using 3D data produced by Vicon Nexus. Existing background subtraction algorithms produce moving shadows that significantly degrade silhouette-based recognition, and shadow removal methods based on colour and texture fail when the surrounding background has a similar colour or texture. This thesis proposes an algorithm based on the known position of the sun to remove shadows in outdoor environments; it removes enough of the shadow to suffice for recognition. Unlike most recognition systems, which are speed-variant, temporal-order-variant, inefficient or computationally expensive, this thesis presents a near real-time system based on embedded silhouettes. Silhouettes are first embedded with isometric feature mapping (Isomap), and the transformation is learned by a radial basis function. Complex human activities are then learned from spatial objects created from the patterns of embedded silhouettes.
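The three blur models contrasted in the abstract (pill-box, Gaussian, and generalised Gaussian PSF) can be sketched in a few lines of numpy. This is a minimal illustration only: the kernel size and the exp(-(r/alpha)^p) parameterisation are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def _radius_grid(size):
    """Distance of each pixel from the kernel centre."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return np.hypot(x, y)

def pillbox_psf(radius, size=15):
    """Pill-box (uniform disc) PSF: the geometric-optics blur model."""
    r = _radius_grid(size)
    psf = (r <= radius).astype(float)
    return psf / psf.sum()

def gaussian_psf(sigma, size=15):
    """Gaussian PSF: a better model when diffraction and aberrations dominate."""
    r = _radius_grid(size)
    psf = np.exp(-r**2 / (2.0 * sigma**2))
    return psf / psf.sum()

def generalised_gaussian_psf(alpha, p, size=15):
    """Generalised Gaussian PSF exp(-(r/alpha)^p).

    With p = 2 (and alpha = sigma * sqrt(2)) this reduces to the Gaussian;
    as p grows it approaches a uniform disc, so one family spans the range
    between the two models.
    """
    r = _radius_grid(size)
    psf = np.exp(-(r / alpha) ** p)
    return psf / psf.sum()
```

In this family, the shape parameter p interpolates between blur regimes, which is one way to read the abstract's claim that the generalised model handles any level of diffraction and aberration.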
    • 6.8 Mesh-plots of Circular: columns 1-4: Subbarao, Favaro, Raj, and Li, respectively. Row 1: original; row 2: corrected using CDC; row 3: corrected using CLSF.
    • 6.9 Mesh-plots of the front view of House: columns 1-4: Subbarao, Favaro, Raj, and Li, respectively. Row 1: original; row 2: corrected using CDC; row 3: corrected using CLSF.
    • 6.10 Mesh-plots of eight views of the house using Li's method after CDC.
    • 6.11 Mesh-plots of Lion: columns 1-4: Subbarao, Favaro, Raj, and Li, respectively. Row 1: original; row 2: corrected using CDC; row 3: corrected using CLSF.
    • 6.12 Mesh-plots of Soldier: rows 1-4: Subbarao, Favaro, Raj, and Li, respectively. Column 1: original; column 2: corrected using CDC; column 3: corrected using CLSF.
    • 6.13 Shell: rows 1-4: Subbarao, Favaro, Raj, and Li, respectively. Column 1: original; column 2: corrected using CDC; column 3: corrected using CLSF.
    • 6.14 Bird: rows 1-4: Subbarao, Favaro, Raj, and Li, respectively. Column 1: original; column 2: corrected using CDC; column 3: corrected using CLSF.
    • 9.1 Learning for frame-by-frame silhouette embedding. Top: training; bottom: testing.
    • 9.2 Silhouettes (row 1) and their centroid distance function (row 2). Odd columns: original silhouettes; even columns: broken silhouettes.
    • 9.3 The embedded data generated using Isomap.
    • 9.4 Training (left) and testing (right) using EPL.
    • 9.5 PES for (a) Walk, (b) Drop, (c) Crou, (d) Bag, (e) Shoot, and (f) Dig.
    • D.1 The problem of image magnification when the aperture-to-lens distance is smaller than the focal length.
    • D.2 The problem of image magnification when the aperture-to-lens distance is larger than the focal length.
    • D.3 The telecentric system.
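Figure 9.2 refers to the centroid distance function used to describe silhouettes. A minimal numpy sketch of that descriptor, assuming the silhouette boundary is available as an ordered array of (x, y) points (the function name, sample count, and mean normalisation are illustrative assumptions, not necessarily the thesis's exact formulation):

```python
import numpy as np

def centroid_distance(boundary, n_samples=64):
    """Centroid distance function of a silhouette.

    boundary: (N, 2) array of ordered (x, y) boundary coordinates.
    Returns an n_samples-long signature of distances from the silhouette
    centroid to points sampled along the boundary, normalised by its mean
    so the descriptor is invariant to scale.
    """
    boundary = np.asarray(boundary, dtype=float)
    centroid = boundary.mean(axis=0)
    # Resample the boundary to a fixed number of points so signatures
    # from different frames are directly comparable.
    idx = np.linspace(0, len(boundary) - 1, n_samples).astype(int)
    d = np.linalg.norm(boundary[idx] - centroid, axis=1)
    return d / d.mean()
```

A fixed-length signature like this is what would be fed into a dimensionality-reduction step such as the Isomap embedding mentioned in the abstract.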
    • 1. A. Li, R. C. Staunton and T. Tjahjadi. Rational-operator-based depth-from-defocus approach to scene reconstruction. JOSA A, 30(9):1787-1795, 2013.
    • 2. A. Li, R. C. Staunton and T. Tjahjadi. Adaptive deformation correction of depth from defocus for object reconstruction. JOSA A, 31(12):2694-2702, 2014.
    • 3. A. Li and T. Tjahjadi. Video-based human activity recognition using embedded silhouettes. Submitted to Pattern Recognition, 2014, awaiting review.