Dashevskyi, S.; Brucker, A.D.; Massacci, F. (2016)
Publisher: Springer Verlag (Germany)
Languages: English
Types: Other
The work presented in this paper is motivated by the need to estimate the security effort of consuming Free and Open Source Software (FOSS) components within the proprietary software supply chain of a large European software vendor. To this end, we have identified three different cost models: centralized (the company checks each component and propagates changes to the different product groups), distributed (each product group is in charge of evaluating and fixing its consumed FOSS components), and hybrid (only the least used components are checked individually by each development team). We investigated publicly available factors (e.g., development activity such as commits, code size, or the fraction of code written in different programming languages) to identify which ones have the largest impact on the security effort of using a FOSS component in a larger software product.
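To make the difference between the three cost models concrete, the following is a minimal illustrative sketch. The formulas, the flat per-check cost, and the component names are hypothetical placeholders (the paper's actual models are not reproduced here); the sketch only shows how each sourcing strategy aggregates per-component checking effort differently.

```python
# Hypothetical sketch of the three FOSS security-effort cost models.
# All numbers and names are illustrative, not taken from the paper.

def centralized_effort(components, check_cost):
    # One central team checks each distinct component once,
    # regardless of how many products consume it.
    return sum(check_cost(c) for c in components)

def distributed_effort(components, check_cost, consumers):
    # Every product group re-checks each component it consumes,
    # so widely used components are checked repeatedly.
    return sum(check_cost(c) * consumers[c] for c in components)

def hybrid_effort(components, check_cost, consumers, threshold):
    # Widely used components (many consumers) are checked centrally once;
    # rarely used ones are checked by each consuming team individually.
    total = 0
    for c in components:
        if consumers[c] >= threshold:
            total += check_cost(c)
        else:
            total += check_cost(c) * consumers[c]
    return total

# Toy data: component -> number of product groups consuming it.
consumers = {"libA": 5, "libB": 1, "libC": 2}
cost = lambda c: 10  # flat per-check effort, purely for illustration

print(centralized_effort(consumers, cost))                     # 30
print(distributed_effort(consumers, cost, consumers))          # 80
print(hybrid_effort(consumers, cost, consumers, threshold=3))  # 40
```

Under these toy numbers the hybrid model sits between the other two: it avoids the distributed model's repeated checks of the popular `libA` while still pushing rarely used components onto the teams that consume them.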

