M-AID: An adaptive middleware built upon anomaly detectors for intrusion detection and rational response

Published: 30 November 2009

Abstract

Anomaly-based intrusion detection discriminates between malicious and legitimate behaviors by characterizing system normality in terms of particular observable subjects. Because system normality is constructed solely from an observed sample of normally occurring patterns, anomaly detectors inevitably suffer from excessive false alerts. Adaptability is therefore a desirable feature that enables an anomaly detector to alleviate, if not eliminate, this problem. To achieve it, we can either design self-learning anomaly detectors that capture drifts in system normality or develop postprocessing mechanisms that deal with detector outputs. Since the former methodology is usually scenario- and application-specific, this article focuses on the latter. In particular, our design starts from three key observations: (1) most anomaly detectors are threshold-based and parametric, that is, configurable by a set of parameters; (2) anomaly detectors differ in operational environment and in operational capability, in terms of detection coverage and blind spots; and (3) an intrusive anomaly may leave traces across multiple system layers, incurring different observable events of interest. First, we present a statistical framework that formally characterizes and analyzes the basic behaviors of anomaly detectors by examining the properties of their operational environments. This framework then serves as a theoretical basis for developing an adaptive middleware, called M-AID, that optimally integrates a number of observation-specific, parameterizable anomaly detectors. Specifically, M-AID treats these fine-grained anomaly detectors as a whole and casts their collective behavior in a framework formulated as a Multiagent Partially Observable Markov Decision Process (MPO-MDP).
The generic anomaly detection models of M-AID are thus inferred automatically via a reinforcement learning algorithm that dynamically adjusts the behaviors of the anomaly detectors according to a reward signal defined and quantified by a suite of evaluation metrics. Fundamentally, the distributed and autonomous architecture makes M-AID scalable, dependable, and adaptable, and the reward signal allows security administrators to specify cost factors and take the operational context into account when mounting a rational response. Finally, we develop a host-based prototype of M-AID, along with a comprehensive experimental evaluation and comparative studies.
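As a rough illustration of the abstract's first observation (threshold-based, parametric detectors) and its reward-driven adaptation, the sketch below shows a single hypothetical detector whose threshold is tuned against a cost-aware reward signal. All names here are invented for illustration, the example is single-agent rather than multiagent, and a crude hill-climbing step stands in for the paper's policy-gradient reinforcement learning over an MPO-MDP.

```python
class ThresholdDetector:
    """Hypothetical threshold-based parametric anomaly detector:
    an event's anomaly score is compared against a single
    configurable parameter, the alert threshold."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def alert(self, score):
        return score >= self.threshold


def reward(alerted, is_intrusion, fa_cost=1.0, hit_reward=2.0):
    """Toy reward signal in the spirit the abstract describes:
    administrators assign cost factors, so true detections earn
    reward, false alerts and misses incur cost."""
    if alerted and is_intrusion:
        return hit_reward          # true positive
    if alerted and not is_intrusion:
        return -fa_cost            # false alert
    if not alerted and is_intrusion:
        return -hit_reward         # missed intrusion
    return 0.0                     # correct silence


def adapt(detector, events, lr=0.1):
    """Crude hill-climbing stand-in for the policy-gradient update:
    nudge the threshold in whichever direction improves the total
    reward on a batch of (score, is_intrusion) observations."""
    for delta in (+lr, -lr):
        trial = detector.threshold + delta
        base = sum(reward(s >= detector.threshold, y) for s, y in events)
        cand = sum(reward(s >= trial, y) for s, y in events)
        if cand > base:
            detector.threshold = trial
    return detector.threshold
```

For example, a batch dominated by benign events scoring just above the threshold (false alerts) pushes the threshold upward, trading a little sensitivity for far fewer alerts, which is the kind of cost-sensitive adjustment the reward signal is meant to encode.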
