Table of Contents
- What is electronic fraud?
- How AIs are used in scams
- How to identify patterns in the use of AI in scams
- Tips to protect yourself
- When in doubt, speak in person
- Check the phone number
- Hang up and call the person if you are suspicious
- Confirm with a video call and ask questions that only the real person would know how to answer
- Do not lend money without verifying the other person’s identity
- Observe mannerisms, compare and look for image errors
Brazil has faced a significant increase in cases of electronic fraud, partly due to scammers' use of Artificial Intelligence (AI), according to data from the Brazilian Public Security Yearbook 2023. With technological advances, it has become easier for criminals to impersonate other people, using AIs to create fake profiles and deceive unsuspecting victims. The practice of cloning people's identities has intensified, leading to a rise in online fraud such as financial scams, card cloning and phishing, a practice in which personal information is obtained fraudulently.
The use of AI in this context has made scams more sophisticated and harder to detect, as text and voice generation technologies can create messages and calls that closely resemble real communications. Furthermore, the ability of AIs to analyze large amounts of data makes it easier to personalize scams, making them more convincing and increasing scammers' likelihood of success.
What is electronic fraud?
Electronic fraud is a form of crime that involves the use of electronic means, such as the internet and digital devices, to deceive and defraud people. This type of scam covers a variety of fraudulent practices, such as Phishing, card cloning, false promotions, among others, which aim to obtain personal or financial information from victims.
It is important to highlight that electronic fraud can occur in a variety of ways, from creating false pages to capture personal information to simulating identities to carry out financial scams.
According to statistics from the Brazilian Public Security Yearbook 2023, the number of electronic fraud cases reached 200,322 records last year. This represents a significant increase of more than 65% compared to 2021, when 120,470 cases were recorded. The states most affected by this type of scam were:
- Santa Catarina, with 64,230 cases;
- Minas Gerais, with 35,749 cases;
- Federal District, with 15,580 cases;
- Espírito Santo, with 15,277 cases.
On the other hand, it is worrying that some states, such as Bahia, Ceará, Rio de Janeiro, Rio Grande do Norte, Rio Grande do Sul and São Paulo, had not provided data on this type of crime by the time the survey closed, suggesting possible underreporting or a lack of accurate data on the extent of the problem in these regions.
How AIs are used in scams
Artificial Intelligences (AIs) are used by scammers in different ways to impersonate other people, increasing the sophistication and effectiveness of their scams. One common technique is the creation of fake profiles on social media and messaging platforms, where AI is used to generate information and photos that resemble a real person.
AIs are able to analyze data available online, such as photos, posts and personal information, and create extremely convincing fake profiles. Additionally, AIs can imitate people's language and behavior, making it more difficult for victims to identify that they are interacting with a machine rather than a real human being.
In general, scammers use AIs in different ways to impersonate other people, making it harder for victims to realize they are being deceived. Below are the most common scams involving electronic fraud with AIs.
Voice call scams
Scams in voice calls using Artificial Intelligence (AI) have become more common and sophisticated. In this type of scam, criminals use AI programs to simulate human voices, imitating people known to the victims or even authority figures, such as bank representatives. With this, scammers manage to deceive victims, requesting personal information such as bank passwords, or inducing them to make money transfers. These scams are dangerous because the fake voices can be extremely convincing, making it difficult for victims to identify the fraud.
One of the first public demonstrations of synthesized speech occurred in 1984, when an Apple computer recited a text in a robotic voice. Since then, we have seen the development of voice assistants such as Apple's Siri and Amazon's Alexa, which can interpret commands and interact with users in an increasingly natural way. Recently, a South Korean company launched an "AI memorial" service that allows people to maintain a virtual presence after death, enabling family and friends to talk to a digital version of the deceased person.
Although this technology has legitimate applications, such as preserving memories and life stories, it can also be used maliciously. Voice call scams using AI can become even more convincing when scammers take advantage of voice clips posted by the victims themselves on social networks. With access to these voice recordings, criminals can create fake audio that accurately imitates the victim's voice, increasing its credibility.
Additionally, scammers can use sophisticated AI tools to create more interactive dialogues, answering victims' questions convincingly. This can lead victims to believe they are actually talking to a real person, increasing the effectiveness of the scam.
The lack of specific regulation makes it difficult to combat this type of fraud. Currently, copyright laws do not protect a person's voice, which makes it easier to manipulate and misuse voice cloning technology. To address this problem, some lawmakers have proposed laws that would increase penalties for using AI to impersonate people. However, the issue remains a challenge, as it is necessary to strike a balance between protecting privacy and freedom of expression.
Scams using user photos
Another form of scam is the creation of fake profiles based on victims' photos. Criminals may use photos found on social media or other online sources. With these fake profiles, scammers can impersonate other people, deceiving victims' friends or family. This technique is commonly used in online romance platform scams, where criminals create fake profiles to attract victims' attention and eventually request money or personal information.
These scams can be even more convincing when criminals combine photos with personal information obtained online. This way, fake profiles appear extremely authentic, making it difficult for victims to identify that they are being scammed. It is important that people are aware of these practices and adopt security measures when interacting online, such as verifying the authenticity of the profiles they are communicating with and never sharing personal or financial information with strangers.
Video scams with Deepfakes
Video scams using deepfakes, fake videos created with AI, have also become a growing concern. As the technology advances, criminals can create videos that show real people doing or saying things they have never done, which can be used to defame someone or extort money. Deepfakes can be extremely convincing, making it difficult for people to distinguish between real and fake videos.
The use of deepfakes has already drawn the attention and concern of governments around the world. China was a pioneer in broadly legislating the use of deepfakes. The Chinese legislation, much more comprehensive than that introduced by some Western governments, is described as a mechanism to preserve social stability. It specifically prohibits the production of deepfakes without user consent and requires explicit identification that the content was generated by Artificial Intelligence (AI).
A recent and troubling example occurred after the Russian invasion of Ukraine, when a deepfake video was released on social media showing Ukrainian President Volodymyr Zelensky allegedly ordering his troops to surrender, which never happened. Another emblematic case occurred in Hong Kong, where a finance employee was tricked into paying $25 million after joining a video conference with supposed senior members of his company who were actually deepfakes.
These scams can have devastating consequences, especially when they involve public figures or ordinary people who become victims of defamation. Furthermore, deepfakes can also be used in phishing scams, where criminals send fake videos to victims, trying to convince them to click on malicious links or share personal information. To protect yourself, it is important to be aware of the existence of deepfakes and always verify the authenticity of videos before sharing them or taking any action based on them.
How to identify patterns in the use of AI in scams
Identifying patterns in the use of artificial intelligence (AI) in scams can be challenging, but some clues can help flag potential fraud. One of the main characteristics is excessive personalization of messages or interactions. AIs can analyze large amounts of data to create highly personalized messages, which may seem suspicious if the person has no prior relationship with the sender. Furthermore, grammatical or textual cohesion errors may indicate that the message was generated by an AI, especially if the communication is in a language foreign to the supposed sender.
Another common pattern is a quick, insistent request for personal or financial information. Scammers who use AIs often try to trick the victim into providing sensitive information urgently, using emotional manipulation tactics. Additionally, a lack of specific information or evasive answers to direct questions may indicate that the interaction is not taking place with a real person.
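The red flags described above (urgency and insistent requests for sensitive data) can be sketched as a simple keyword heuristic. The word lists and function below are illustrative assumptions for this article, not a real scam detector; genuine verification still requires the human checks discussed in the tips that follow.

```python
# A minimal, hypothetical sketch of the red flags discussed above.
# The keyword lists are illustrative assumptions, not exhaustive.

URGENCY = {"urgent", "immediately", "right now", "last chance"}
SENSITIVE = {"password", "pin", "card number", "verification code"}

def scam_red_flags(message: str) -> list:
    """Return the red-flag categories found in a message."""
    text = message.lower()
    flags = []
    if any(word in text for word in URGENCY):
        flags.append("urgency")
    if any(word in text for word in SENSITIVE):
        flags.append("asks for sensitive data")
    return flags

print(scam_red_flags("Urgent: send your card number right now!"))
```

A message that triggers both flags deserves extra verification before you reply, regardless of how convincing it sounds.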
Tips to protect yourself
To avoid falling prey to digital scams, it is important to adopt some security measures. Below are some precautions you can take:
When in doubt, speak in person
In the digital age, it is easy to fall for online scams due to the sophistication of the techniques used by criminals. An effective way to protect yourself is, when in doubt, to speak in person to the person or company involved. If you receive a request for sensitive information via email, text message or telephone, it is always recommended to verify the authenticity of the request through direct contact.
Additionally, when dealing with financial transactions or sharing personal information, it is prudent to avoid providing sensitive data online. Choose to carry out these transactions in person or through secure and verified channels. This simple practice can help you avoid falling for digital scams and protect your personal and financial information.
Check the phone number
To verify the authenticity of a phone number in case of a suspected scam, a simple measure is to perform an internet search. Use search engines to check if the number is associated with known scams. Additionally, look for information about the company or person contacting you, checking for reports of fraud related to the number. In some cases, you may find scam alerts shared by others, which can help confirm whether the communication is legitimate.
Hang up and call the person if you are suspicious
Hanging up and calling the person back if you are suspicious can be an effective strategy to re-verify the legitimacy of the call. In many scam cases, criminals may be using artificial intelligence (AI) to imitate someone else's voice. By hanging up and calling back yourself, you can confirm that the caller's voice and behavior are consistent with what you expect. Furthermore, this measure can help thwart number-spoofing or impersonation attempts, which are common in telephone fraud.
Confirm with a video call and ask questions that only the real person would know how to answer
In situations of suspected scams using artificial intelligence (AI), such as deepfakes or voice cloning, an effective measure is to make a video call to confirm the person's identity. During the video call, ask questions that only the real person would be able to answer, such as details about shared experiences or specific information about your relationship.
These questions can help verify that you are actually interacting with the expected person, as scammers may have difficulty answering these questions convincingly. Additionally, when making a video call, you can observe the person's facial expression and behavior, which can help identify signs of spoofing.
Do not lend money without verifying the other person’s identity
Not lending money without verifying the person's identity is a fundamental measure to avoid scams. When approached by someone asking to borrow money, especially in person, it is important to carefully check the person's identity. Request personal information that only the real person would be able to provide and confirm it through documents or other means of verification. Additionally, be aware of urgent requests or emotional pressures, as these are common signs of financial scams.
Observe mannerisms, compare and look for image errors
Observing mannerisms, comparing information and looking for image errors are important measures to identify possible scams that use artificial intelligence (AI) to imitate people. When interacting with someone online, pay attention to how they communicate, their mannerisms and language patterns. If something seems out of the ordinary or inconsistent with the person's usual behavior, it may be a sign that you are dealing with an AI or with someone pretending to be someone else.
Additionally, compare the information provided by the person with information known about them. Image errors, such as failed photo or video editing, can also be an indication that communication is not authentic.
Sources: ABC, Seacoastbank and PWC.
Text proofread by: Pedro Bomfim