OpenAI, the creator of ChatGPT, has quietly changed its usage policies, removing a prohibition on using the chatbot and other AI tools for military purposes, and has disclosed an ongoing collaboration with the Department of Defense. The change, made last week, deleted language that barred use of its models in activities posing a high risk of physical harm, including weapons development, military applications, and warfare.
An OpenAI spokesperson said that the company, which is in talks to raise funds at a valuation of $100 billion, is working with the Department of Defense to develop cybersecurity tools designed to safeguard open-source software. The spokesperson clarified, “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission.”
The spokesperson continued: “For example, we are already working with the Defense Advanced Research Projects Agency (DARPA) to spur the creation of new cybersecurity tools to secure open-source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.”
Anna Makanju, OpenAI’s VP of global affairs, emphasized that the removal of the ‘blanket’ provision was intended to allow for military use cases that align with the company’s values. Makanju stated, “Because we previously had what was essentially a blanket prohibition on the military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world.”
The policy change follows past controversies in which tech companies faced internal protests over military contracts involving AI, such as Google’s Project Maven and Microsoft’s contract to supply augmented reality headsets to the US Army. Concerns raised by tech employees and human rights experts center on the potential misuse of AI in warfare, including the deployment of lethal autonomous systems.