For a long time, deepfakes have been regarded as a frontier of artificial intelligence ripe for criminal exploitation, even though the technology has not yet matured to the point where its simulations are indistinguishable from reality. Concerns have heightened in recent months, however, as attempts to use deepfakes in video and audio surged by 3,000% during 2023.
In this era of information warfare, manipulated videos have evolved rapidly, as demonstrated by fabricated videos of Ukrainian President Volodymyr Zelensky that circulated during the Russian invasion.
This deceptive practice extends to less consequential targets as well: in October last year, a fake video spread on TikTok showing the influential social media personality MrBeast offering to give away iPhones for just two dollars.
Deepfake videos designed to deceive victims and steal money have not been notably successful so far; cybersecurity expert Mikko Hyppönen says he has witnessed only three that achieved their goals. However, he anticipates a significant rise in their numbers, given the quality, accuracy, and ease of dissemination of these videos, saying, “This phenomenon is not yet widespread, but it will become a serious problem shortly.”
To mitigate the risk, Hyppönen suggests that individuals facing such deception use passphrase verification to determine whether the person they are communicating with via video call is genuine or fake. For example, if a caller claiming to be a colleague or relative requests sensitive information, a money transfer, or a specific file, asking for a pre-agreed passphrase on an unrelated topic can help verify the person’s authenticity.
The idea may seem trivial now, but it is crucial for protection against deception: Hyppönen believes a passphrase or security phrase is a very cheap protective measure, and one that will become essential in 2024.
Deep Scams:
While these scams may sound similar to deepfake videos, they do not necessarily rely on manipulated video or audio. Here, the term “deep” refers to the vast scale of the deception, which uses automation to target a virtually unlimited number of individuals.
Perpetrators using this method operate in various fields, including investment fraud, account theft, ticket scams, and even romantic relationships. For instance, the infamous “Tinder Swindler” managed to defraud women he met on the dating app of up to 10 million dollars.
Imagine if such a scammer used large language models (LLMs), a type of artificial intelligence built on deep learning and massive datasets, to spread lies and fabricate images supporting his claims, or used software to translate his speech into multiple languages. In that case, his victim count would likely have been far higher.
Moreover, websites like Airbnb see intense activity of this kind, where images stolen from other property listings found online are used to convince house hunters to pay for reservations.