Responsible Disclosure in the Age of Generative AI: A Normative Model for Dual-Use Risk


Fahd Malik
Muhammad Raza ul Haq

Abstract

The rapid growth of generative artificial intelligence (AI) systems such as large language models (LLMs) has created a profound disclosure dilemma: when should potentially dangerous models or findings be shared openly, withheld, or released in a controlled manner? Traditional norms of open science and open-source software emphasize transparency, reproducibility, and collective progress, yet the dual-use nature of frontier LLMs raises unprecedented challenges. Unrestricted disclosure can enable malicious use cases such as cyberattacks, automated disinformation campaigns, large-scale fraud, or even synthetic-biology misuse. In contrast, excessive secrecy risks undermining trust, slowing scientific progress, and concentrating power in a small number of actors. This paper develops a normative model for responsible disclosure that integrates utilitarian, deontological, and virtue-ethical reasoning to justify a proportional approach rather than binary openness or secrecy. We introduce a Disclosure Decision Matrix that evaluates four key dimensions: risk severity, exploitability, mitigation availability, and the public benefit of transparency. It then recommends one of three courses of action: full release, staged or controlled release, or temporary restriction until safeguards mature. The contribution is twofold. First, the paper provides a principled ethical framework that links philosophical justification directly to operational disclosure practices, bridging the gap between theory and governance. Second, it translates this framework into actionable criteria that policymakers, research institutions, and developers can apply consistently across evolving AI systems. By combining ethical reasoning with practical decision tools, the findings underscore that responsible disclosure in AI is neither absolute secrecy nor unqualified openness but a dynamic, proportional strategy responsive to both technological advances and societal risks.
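To illustrate how the Disclosure Decision Matrix described in the abstract might be operationalized, the sketch below maps the four dimensions (risk severity, exploitability, mitigation availability, public benefit of transparency) to the three courses of action (full release, staged or controlled release, temporary restriction). The scoring scale, aggregation rule, and thresholds are illustrative assumptions added here for exposition; the abstract does not specify numeric values.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    FULL_RELEASE = "full release"
    STAGED_RELEASE = "staged or controlled release"
    TEMPORARY_RESTRICTION = "temporary restriction until safeguards mature"


@dataclass
class Finding:
    """Scores on the four matrix dimensions, each assumed to lie in [0, 1]."""
    risk_severity: float            # how much harm misuse could cause
    exploitability: float           # how easily the capability can be weaponized
    mitigation_availability: float  # how mature available safeguards are
    public_benefit: float           # value of transparency to science and society


def recommend_disclosure(f: Finding) -> Action:
    """Map a finding to one of the three courses of action.

    The aggregation rule and thresholds below are hypothetical, not values
    taken from the paper.
    """
    # Net danger: severity and exploitability, discounted by available mitigations.
    danger = f.risk_severity * f.exploitability * (1.0 - f.mitigation_availability)

    if danger < 0.2 or f.public_benefit > danger + 0.5:
        return Action.FULL_RELEASE
    if danger < 0.6:
        return Action.STAGED_RELEASE
    return Action.TEMPORARY_RESTRICTION


if __name__ == "__main__":
    # Example: a severe, easily exploitable capability with few mitigations
    # and modest transparency benefit is restricted until safeguards mature.
    print(recommend_disclosure(Finding(0.9, 0.8, 0.1, 0.4)).value)
```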



How to Cite

[1] Fahd Malik and Muhammad Raza ul Haq, "Responsible Disclosure in the Age of Generative AI: A Normative Model for Dual-Use Risk," IJITEE, vol. 14, no. 11, pp. 13–20, Oct. 2025, doi: 10.35940/ijitee.L1155.14111025.

