Sinclair, Jane; Butler, Matthew; Morgan, Michael; Kalvala, Sara (2015)
Publisher: ACM
Languages: English
Types: Unknown
Subjects: LB
Data relating to university students’ engagement is collected internationally via several large-scale student surveys such as the North American National Survey of Student Engagement. The instruments employed measure the extent to which students put their efforts into activities associated with effective learning. It is claimed that these process measures act as a reliable proxy for student attainment, and there appears to be some evidence to support this. So far, there has been little work done to investigate engagement instruments and the data they generate from a subject perspective. This paper brings together data relating to Computer Science (CS) across the range of major engagement surveys. The results of this meta-analysis appear to indicate that CS rates lower than average on many of the major engagement benchmarks and in some cases, considerably so. Particular benchmark areas giving cause for concern are identified prompting questions as to how these results should be interpreted and used in the context of a particular learning domain. We also critique aspects of the surveys themselves, suggesting that further research is needed to better understand their appropriateness for individual subjects or for groups of subjects with shared traits. The paper argues that more qualitative data is required and that other measures (such as student expectation and some subject-specific measures) are needed for a greater understanding of the CS student experience.