Bias and Fairness Issues in Artificial Intelligence-driven Cybersecurity

Ugochukwu Mmaduekwe *

University of Nigeria, Nsukka, Nigeria.

*Author to whom correspondence should be addressed.


Abstract

Aim: This paper aims to examine the bias and fairness issues associated with artificial intelligence (AI)-driven cybersecurity.

Problem Statement: Growing global dependence on cyberspace has exposed organizations, individuals, and nations to a range of vulnerabilities and security threats. Merging cyberspace with AI technologies has the potential to transform multiple domains, but the implementation of AI is hindered by bias problems that limit its application.

Significance of Study: Artificial intelligence and cybersecurity have been identified as two transformative and interconnected fields with great potential to revolutionize numerous areas of human life. It is therefore imperative to critically examine the bias and fairness issues associated with the implementation of AI-driven cybersecurity, which are key factors limiting the usage and efficiency of the approach.

Discussion: The concepts of artificial intelligence and cybersecurity are discussed, together with the interconnectivity that enhances their application in tackling cyber threats. Various areas of AI deployment in cyberspace are presented, along with the sources of, and solutions to, bias and fairness problems in AI-driven cybersecurity. The paper critically examines the ways in which AI biases influence cybersecurity and presents approaches by which the problem can be tackled.

Conclusion: Artificial intelligence-driven cybersecurity has found wide industrial application in different areas. However, the issues of bias and fairness attached to it must be critically addressed to improve its efficiency. Diverse teams, well-designed AI models, and corporate governance and leadership should be adopted to find lasting solutions to the problem of bias in AI-driven cybersecurity.

Keywords: Artificial intelligence, cybersecurity, algorithmic bias, accountability in cybersecurity AI, fairness metrics


How to Cite

Mmaduekwe, Ugochukwu. 2024. “Bias and Fairness Issues in Artificial Intelligence-Driven Cybersecurity”. Current Journal of Applied Science and Technology 43 (6):109-19. https://doi.org/10.9734/cjast/2024/v43i64391.

