


Cheng, K.L.; Ju, X.; Tong, R.F.; Tang, M.; Chang, Jian; Zhang, Jian J. (2016)
Languages: English
Types: Article

Classified by OpenAIRE into

arxiv: Computer Science::Computer Vision and Pattern Recognition
Many recent applications of computer graphics and human-computer interaction have adopted both colour cameras and depth cameras as input devices, so an effective calibration that handles the different colour and depth inputs of the two types of hardware is required. Our approach removes the numerical difficulties of the non-linear optimization used in previous methods, which explicitly resolve the camera intrinsics as well as the transformation between the depth and colour cameras. A matrix of hybrid parameters is introduced to linearize our optimization. The hybrid parameters offer a transformation from a depth parametric space (depth camera image) to a colour parametric space (colour camera image) by combining the intrinsic parameters of the depth camera with a rotation transformation from the depth camera to the colour camera. Both the rotation transformation and the intrinsic parameters can be explicitly calculated from our hybrid parameters with the help of a standard QR factorisation. We test our algorithm with both synthesized data and real-world data, where ground-truth depth information is captured by a Microsoft Kinect. The experiments show that our approach provides calibration accuracy comparable to state-of-the-art algorithms while taking much less computation time (1/50 of Herrera's method and 1/10 of Raposo's method), due to the advantage of using hybrid parameters.
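The recovery step described in the abstract, extracting the intrinsic parameters and the depth-to-colour rotation from the hybrid parameter matrix via a standard QR factorisation, can be illustrated with an RQ decomposition. The following is a minimal sketch, not the authors' implementation: it assumes the hybrid matrix is a nonsingular 3x3 product H = K @ R of an upper-triangular intrinsic matrix K and a rotation R, and the function name `decompose_hybrid` is hypothetical.

```python
import numpy as np

def decompose_hybrid(H):
    """Split H = K @ R into an upper-triangular K (intrinsics) and an
    orthogonal R (rotation) via an RQ decomposition built on np.linalg.qr."""
    # RQ(H): flip the rows of H, transpose, and QR-factorise the result.
    Q, U = np.linalg.qr(np.flipud(H).T)
    # Undo the flips: K = P U^T P is upper triangular, R = P Q^T is
    # orthogonal, where P is the row-reversal permutation.
    K = np.flipud(np.fliplr(U.T))
    R = np.flipud(Q.T)
    # Fix signs so the diagonal of K is positive; the product K @ R is
    # unchanged because the sign matrix S satisfies S @ S = I.
    S = np.diag(np.sign(np.diag(K)))
    return K @ S, S @ R
```

For a hybrid matrix built from a genuine rotation and intrinsics with positive diagonal entries, this factorisation is unique, so the original K and R are recovered up to floating-point error.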

    • [1] Zhu Z, Martin R, Hu S M. Panorama completion for street views. Computational Visual Media, 2015, 1(1): 49-57.
    • [2] Tong R F, Zhang Y, Cheng K L. StereoPasting: Interactive composition in stereoscopic images. IEEE Trans. Vis. Comput. Graphics, 2013, 19(8): 1375-1385.
    • [3] Liu Y, Sun L, Yang S. A retargeting method for stereoscopic 3D video. Computational Visual Media, 2015, 1(2): 119-127.
    • [4] Mu T J, Wang J H, Du S P, Hu S M. Stereoscopic image completion and depth recovery. The Vis. Comput., 2014, 30(6/7/8): 833-843.
    • [5] Tang Y L, Tong R F, Tang M, Zhang Y. Depth incorporating with color improves salient object detection. The Vis. Comput., 2016, 32(1): 111-121.
    • [6] Smith B M, Zhang L, Jin H L. Stereo matching with nonparametric smoothness priors in feature space. In Proc. IEEE CVPR, June 2009, pp.485-492.
    • [7] Zhang G F, Jia J Y, Wong T T, Bao H J. Consistent depth maps recovery from a video sequence. IEEE Trans. Pattern Anal. Mach. Intell., 2009, 31(6): 974-988.
    • [8] Zhang C, Zhang Z Y. Calibration between depth and color sensors for commodity depth cameras. In Proc. IEEE ICME, July 2011.
    • [9] Herrera C D, Kannala J, Heikkilä J. Joint depth and color camera calibration with distortion correction. IEEE Trans. Pattern Anal. Mach. Intell., 2012, 34(10): 2058-2064.
    • [10] Raposo C, Barreto J P, Nunes U. Fast and accurate calibration of a Kinect sensor. In Proc. 3DV, June 29-July 1, 2013, pp.342-349.
    • [11] Zhang Z Y. Flexible camera calibration by viewing a plane from unknown orientations. In Proc. the 7th IEEE ICCV, Sept. 1999, Vol. 1, pp.666-673.
    • [12] Heikkilä J. Geometric camera calibration using circular control points. IEEE Trans. Pattern Anal. Mach. Intell., 2000, 22(10): 1066-1077.
    • [13] Smisek J, Jancosek M, Pajdla T. 3D with Kinect. In Proc. IEEE ICCV Workshops, Nov. 2011, pp.1154-1160.
    • [14] Yamazoe H, Habe H, Mitsugami I, Yagi Y. Easy depth sensor calibration. In Proc. the 21st ICPR, Nov. 2012, pp.465-468.
    • [15] Li S, Zhou Q. A new approach to calibrate range image and color image from Kinect. In Proc. the 4th IHMSC, Aug. 2012, Vol. 2, pp.252-255.
    • [16] Jin B W, Lei H, Geng W D. Accurate intrinsic calibration of depth camera with cuboids. In Proc. the 13th ECCV, Sept. 2014, pp.788-803.
    • [17] Kim J H, Choi J, Koo B K. Simultaneous color camera and depth sensor calibration with correction of triangulation errors. In Proc. the 9th ISVC, July 2013, pp.301-311.
    • [18] Karan B. Accuracy improvements of consumer-grade 3D sensors for robotic applications. In Proc. the 11th IEEE SISY, Sept. 2013, pp.141-146.
    • [19] Karan B. Calibration of depth measurement model for Kinect-type 3D vision sensors. In Proc. the 21st WSCG, June 2013, pp.61-64.
    • [20] Izadi S, Kim D, Hilliges O, Molyneaux D, Newcombe R, Kohli P, Shotton J, Hodges S, Freeman D, Davison A, Fitzgibbon A. KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. In Proc. the 24th ACM UIST, Oct. 2011, pp.559-568.
    • [21] Li H, Vouga E, Gudym A, Luo L J, Barron J T, Gusev G. 3D self-portraits. ACM Trans. Graph., 2013, 32(6): Article No. 187.
    • [22] Strang G. Introduction to Linear Algebra (4th edition). Wellesley-Cambridge Press, 2009.
