The man who exploded a Tesla Cybertruck outside a Trump hotel in Las Vegas on January 1st used ChatGPT to plan the blast, according to new findings from the Las Vegas Metropolitan Police Department. In a recent press conference held by the department alongside partners at the ATF and FBI, officials revealed specific prompts submitted to ChatGPT, along with confirmation that some of the responses proved crucial in planning the explosion.
Matthew Livelsberger, who blew up the Cybertruck shortly after killing himself, asked ChatGPT a long list of questions about the plan over the course of an hour in the days leading up to the event. These included questions about sourcing the explosives used in the blast, how effective those explosives would be, whether fireworks were legal in Arizona, where to buy guns in Denver, and what kind of gun would be needed to set off the chosen explosives.
Most importantly, Assistant Sheriff Dori Koren confirmed that ChatGPT was instrumental in making the plan work. ChatGPT's responses revealed to Livelsberger the specific firing speed a firearm would need in order to ignite his chosen explosive. Without ChatGPT, the incident may not have been as explosive as it proved to be, though the ATF also confirmed at the conference that not all of the explosives detonated as likely intended in the initial blast.
"We knew that AI was going to change the game at some point or another, in really all of our lives," shared LVMPD Sheriff Kevin McMahill. "This is the first incident that I am aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device, to learn information all across the country as they're moving forward. Absolutely, it's a concerning moment for us."
McMahill also said he was not aware of any governmental oversight or tracking that could have flagged the 17-plus prompts submitted to ChatGPT, all relating to sourcing and detonating explosives or firearms, within a one-hour period.
While the Las Vegas police have not yet released the full set of ChatGPT prompts, those shown at the press conference were straightforward and written in plain English, without the workaround phrasing typically used to "jailbreak" ChatGPT's content safeguards. While this usage of ChatGPT violates OpenAI's Usage Policies and Terms of Use, it is not clear at this time whether any safeguards were triggered or content warnings raised during Livelsberger's use of the LLM.
OpenAI and the Las Vegas Metropolitan Police Department have not yet responded to press requests for further information on the use of ChatGPT in the incident; we will update our coverage as more becomes available.