Kecskeméti, Gábor; Ostermann, Simon; Prodan, Radu (2013)
Publisher: Elsevier
Languages: English
Types: Article
Subjects: QA75, QA75 Electronic computers. Computer science / computing technology, computer science
Academic cloud infrastructures are constructed and maintained to constrain their users as little as possible. Because they are free and do not limit usage patterns, academics have developed behavior that jeopardizes fair and flexible resource provisioning. To improve efficiency, related work either explicitly limits user access to resources or introduces automatic rationing techniques. Surprisingly, these approaches disregard the root cause, namely the user behavior itself. This article compares academic cloud user behavior with its commercial equivalent. We deduce that academics should behave like commercial cloud users to ease resource provisioning. To encourage commercial-like behavior, we propose an architectural extension to existing academic infrastructure clouds. First, every user's energy consumption and efficiency is monitored. Then, leaderboards based on energy efficiency are used to ignite competition between academics and reveal their worst practices. Because leaderboards alone are not sufficient to change user behavior, we also introduce engaging options that encourage academics to delay resource requests and to prefer resources better suited to the infrastructure's internal provisioning. Finally, we evaluate our extensions via a simulation using real-life academic resource request traces. We show a potential resource utilization reduction (by a factor of up to 2.6) while maintaining the unlimited nature of academic clouds. © 2014 Elsevier Inc.
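The abstract's core mechanism, ranking users on an energy-efficiency leaderboard, can be illustrated with a minimal sketch. The record fields (`energy_kwh`, `useful_cpu_hours`) and the efficiency metric below are hypothetical stand-ins for the per-user monitoring data the article describes, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class UserRecord:
    # Hypothetical per-user accounting data; field names are illustrative.
    user: str
    energy_kwh: float         # energy attributed to the user's VMs
    useful_cpu_hours: float   # CPU hours spent on completed work

def efficiency(rec: UserRecord) -> float:
    """Useful work delivered per unit of energy (higher is better)."""
    return rec.useful_cpu_hours / rec.energy_kwh if rec.energy_kwh else 0.0

def leaderboard(records):
    """Rank users best-first by energy efficiency."""
    return sorted(records, key=efficiency, reverse=True)

records = [
    UserRecord("alice", energy_kwh=120.0, useful_cpu_hours=600.0),
    UserRecord("bob", energy_kwh=80.0, useful_cpu_hours=200.0),
]
print([r.user for r in leaderboard(records)])  # alice (5.0) beats bob (2.5)
```

In the proposed architecture, such a ranking would be published to users to spark competition; the article pairs it with incentives to delay requests, which this sketch does not model.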

Funded by projects

  • FWF | Workflows on Manycore Proce...
  • EC | ENTICE
