Is it time to get concerned about AI incidents? Are they getting more common, or does it just seem that way? According to a recent study by Surfshark, 2023 was a record-setting year for AI incidents, so no, we're not just imagining it. The study found that AI incidents in 2023 surged by 30% compared to the year before. And judging from some recent cases, this is far from being resolved.
“In 2023, there were 121 recorded AI incidents, representing a 30% increase from the previous year,” says Agneska Sablovskaja, Lead Researcher at Surfshark. “This figure accounts for one-fifth of all documented AI incidents between 2010 and 2023, marking 2023 as the year with the highest number of incidents in the history of AI.”
“Nonetheless, the second half of 2023 saw a nearly 60% reduction in reported incidents, as compared to the first half. Despite this progress, continuous vigilance and advancement in AI safety protocols are essential to maintain this downward trend.”
OpenAI leads in AI incidents in 2023
As Surfshark's study has shown, OpenAI topped the charts for AI incidents in 2023, accounting for over one-fourth of all recorded cases. It primarily played the role of developer and/or deployer, meaning it built and/or ran the systems involved in these incidents. Microsoft followed with 17 incidents, acting as both deployer (10 instances) and victim (7 instances). Google and Meta rounded out the top four with 10 and 5 incidents, respectively.
However, the year ended with a notable decline in AI incidents, with only 14 and 22 cases reported in Q3 and Q4, respectively. This stands in stark contrast to the first half of the year, which saw significantly higher numbers: 54 incidents in Q1 and 33 in Q2. This downward trend suggests that companies may finally be implementing effective preventive measures following the surge witnessed in late 2022 and early 2023.
The most common target: public figures
In 2023, a sudden increase in AI-related incidents affected several public figures. These incidents ranged from deepfake videos to AI-generated images, used mainly for advertising or entertainment.
You surely remember at least one of these incidents: the image of Pope Francis wearing a white puffer coat, which went viral and demonstrated just how convincing AI image generators have become. Famous actor Tom Hanks was caught up in an AI controversy, too: he warned his fans about a fake dental plan advertisement that used a deepfake of his likeness.
Scarlett Johansson and Emma Watson were also involuntarily featured in deepfake videos: over 230 unauthorized ads ran on Meta platforms promoting a "DeepFake FaceSwap" app, using videos in which women's faces were replaced with those of the famous actresses.
These are only some of the examples. A deepfake of Joe Rogan also appeared in an ad for a product he never endorsed. Of course, politicians haven't been spared from AI intrusions either. Targets include former US Presidents Donald Trump and Barack Obama, current President Joe Biden, and UK Labour Party Leader Keir Starmer. There was also fake news that caused real panic; the AI-generated image of an explosion near the Pentagon is the first that comes to mind.
What about 2024?
Judging from recent cases, it doesn't look like this trend will disappear any time soon. Only a few days ago, explicit deepfake images of Taylor Swift were shared and viewed millions of times on social media. Shortly before that, an AI-generated image of Donald Trump praying gave him six fingers. In fact, we may see more of these political deepfakes in the current year. "Given that 2024 is a pivotal year marked by numerous elections, the frequency of AI incidents involving political figures could increase, potentially affecting the integrity of the electoral process," Surfshark warns.
You can read more about this study on Surfshark's website. And the next time you see an image that looks a little "off," here you'll find some tips on verifying whether it's real.