Despoina Chatzakou; Nicolas Kourtellis; Jeremy Blackburn; Emiliano De Cristofaro; Gianluca Stringhini; Athena Vakali (2017)
Publisher: ACM
Types: Conference object
Subjects: Computer Science - Social and Information Networks, Computer Science - Computers and Society
In recent years, bullying and aggression against users on social media have grown significantly, causing serious consequences to victims of all demographics. In particular, cyberbullying affects more than half of young social media users worldwide, and has also led to teenage suicides, prompted by prolonged and/or coordinated digital harassment. Nonetheless, tools and technologies for understanding and mitigating it are scarce and mostly ineffective. In this paper, we present a principled and scalable approach to detect bullying and aggressive behavior on Twitter. We propose a robust methodology for extracting text, user, and network-based attributes, studying the properties of cyberbullies and aggressors, and what features distinguish them from regular users. We find that bully users post less, participate in fewer online communities, and are less popular than normal users, while aggressors are quite popular and tend to include more negativity in their posts. We evaluate our methodology using a corpus of 1.6M tweets posted over 3 months, and show that machine learning classification algorithms can accurately detect users exhibiting bullying and aggressive behavior, achieving over 90% AUC.
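
The approach summarized above (combining text-, user-, and network-based attributes and training a classifier evaluated by AUC) can be illustrated with a minimal sketch. The example below is a toy illustration using scikit-learn and randomly generated placeholder data; the feature names, the random-forest choice, and the class-weighting are assumptions made for illustration and are not the authors' exact feature set, model, or dataset.

    # Minimal illustrative sketch (not the authors' pipeline): hand-crafted
    # per-user features feeding a classifier evaluated by ROC AUC.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Toy stand-in data: one row per Twitter user. All feature names are
    # hypothetical placeholders for the text/user/network attributes the
    # abstract mentions.
    n_users = 1000
    X = np.column_stack([
        rng.poisson(5, n_users),           # tweets per day (user-based)
        rng.random(n_users),               # fraction of negative words (text-based)
        rng.random(n_users) * 3,           # hashtags per tweet (text-based)
        rng.integers(0, 5000, n_users),    # follower count (network-based)
        rng.integers(0, 2000, n_users),    # friend count (network-based)
        rng.integers(1, 20, n_users),      # communities joined (network-based)
    ])
    y = rng.integers(0, 2, n_users)        # toy labels: 1 = bullying/aggressive, 0 = normal

    # Abusive users are a minority class in practice, so weight classes accordingly.
    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"Mean ROC AUC over 5 folds: {scores.mean():.3f}")

On the random toy data above the AUC will hover around 0.5; the sketch only conveys the shape of such a feature-extraction-plus-classification pipeline, not the paper's reported result.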

Funded by projects: EC | ENCASE
