AI companies are worried about the wrong thing
When AI companies test their models for the absolute worst possible uses, such as helping someone build a nuclear or biological weapon, they overlook the real-world harms people are being subjected to right now, argues a group of researchers in a new paper released by the AI-watchdog research group the AI Now Institute.
OpenAI, Anthropic, and other makers of “foundational” AI models all cite the same short list of potential risks, usually topped by uses related to chemical, biological, radiological, and nuclear (CBRN) applications.
The publicly available generative-AI tools of today can be considered "dual use." In addition to helping people write cover letters, generate funny photos, or complete homework, powerful new AI tools could also potentially enable bad actors to develop CBRN weapons. While that possibility is certainly terrifying and worth protecting against, the authors of this paper argue that focusing on it fails to recognize the harms from how AI tools are actually being used today, namely for ISTAR: intelligence, surveillance, target acquisition, and reconnaissance.
“The overwhelming focus on these hypothetical and narrow themes has occluded a much-needed conversation regarding present uses of AI for military systems, specifically ISTAR: intelligence, surveillance, target acquisition, and reconnaissance. These are the uses most grounded in actual deployments of AI that pose life-or-death stakes for civilians, where misuses and failures pose geopolitical consequences and military escalations.”
The researchers also flag the personally identifiable information embedded in the massive datasets used to train commercial AI models as a national-security risk in its own right, one they argue should preclude those models' use in military applications.