McGough, Andrew Stephen; Arief, Budi; Gamble, Carl; Wall, David; Brennan, John; Fitzgerald, John; van Moorsel, Aad; Alwis, Sujeewa; Theodoropoulos, Georgios; Ruck-Keene, Ed (2015)
Publisher: Innovative Information Science & Technology Research Group
Languages: English
Types: Article
Subjects: QA
The insider threat problem is a significant and ever-present issue faced by any organisation. While security mechanisms can be put in place to reduce the chances of external agents gaining access to a system, either to steal assets or alter records, the insider threat is more complex to tackle. If an employee already has legitimate access rights to a system, it is much more difficult to prevent them from carrying out inappropriate acts, as it is hard to determine whether an act is part of their official work or is indeed malicious. In this paper we present the concept of “Ben-ware”: a beneficial software system that uses low-level data collection from employees’ computers, along with Artificial Intelligence, to identify anomalous behaviour by an employee. By comparing each employee’s activities against both their own ‘normal’ profile and the organisation’s norm, we can detect activities that are significantly divergent, which might indicate malicious behaviour.

Dealing with false positives is one of the main challenges here. Anomalous behaviour could indicate malicious activities (such as an employee trying to steal confidential information), but it could also be benign (for example, an employee carrying out a workaround or taking a shortcut to complete their job). It is therefore important to minimise the risk of false positives, which we do by combining techniques from human factors, artificial intelligence, and risk analysis.

Developed as a distributed system, Ben-ware has a three-tier architecture composed of (i) probes for data collection, (ii) intermediate nodes for data routing, and (iii) high-level nodes for data analysis. The distributed nature of Ben-ware allows for near-real-time analysis of employees without the need for dedicated hardware or a significant impact on the existing infrastructure. This enables Ben-ware to be deployed in situations where there are restrictions due to legacy and low-power resources, or where the network connection may be intermittent or low-bandwidth. We demonstrate the appropriateness of Ben-ware, both in its ability to detect potentially malicious acts and in its low impact on the organisation’s resources, through a proof-of-concept system and a scenario based on synthetically generated user data.
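The dual-baseline comparison the abstract describes — checking an employee's activity against their own 'normal' profile and against the organisational norm, and flagging only behaviour divergent from both to reduce false positives — might be sketched as below. This is a minimal illustrative sketch using a simple standardised-deviation score; the metric, threshold, and function names are assumptions for illustration, not the system's actual implementation:

```python
from statistics import mean, stdev

def anomaly_score(value, baseline):
    """Standardised deviation of one observation from a baseline sample."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) / sigma if sigma > 0 else 0.0

def flag_anomalous(value, personal_history, org_history, threshold=3.0):
    """Flag activity only when it diverges from BOTH the employee's own
    profile and the organisational norm, reducing false positives from
    behaviour that is merely unusual for one baseline."""
    return (anomaly_score(value, personal_history) > threshold
            and anomaly_score(value, org_history) > threshold)

# Synthetic example: daily file-access counts
personal = [12, 15, 11, 14, 13, 12, 16]
org = [10, 20, 15, 12, 18, 14, 16, 11]
print(flag_anomalous(400, personal, org))  # large spike -> True
print(flag_anomalous(14, personal, org))   # typical day -> False
```

Requiring divergence from both baselines mirrors the paper's false-positive concern: a workaround that is common across the organisation, or routine for that employee, would not be flagged.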