Soon, the tech behind ChatGPT may help drone operators decide which enemies to kill
As the AI industry grows in size and influence, the companies involved have begun making stark choices about where they land on issues of life and death. For example, can their AI models be used to guide weapons or make targeting decisions? Companies have answered this question in different ways, but for ChatGPT maker OpenAI, what began as a hard line against weapons development and military applications has eroded over time.
On Wednesday, defense-tech company Anduril Industries—started by Oculus founder Palmer Luckey in 2017—announced a partnership with OpenAI to develop AI models (similar to the GPT-4o and o1 models that power ChatGPT) to help US and allied forces identify and defend against aerial attacks.
The companies say their AI models will process data to reduce the workload on humans. “As part of the new initiative, Anduril and OpenAI will explore how leading-edge AI models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness,” Anduril said in a statement.