Malayali in India's first deepfake fraud is just a teaser, says SP Harishankar

Recently, a Malayali reportedly fell prey to India's first Artificial Intelligence (AI)-based deepfake fraud.

Now, why do Malayalis become easy prey for such fraudsters? Are there rackets specifically targeting Malayalis?

Or, are Malayalis behind these frauds? What are the precautions to be adopted in such an insecure or perhaps vulnerable cyberspace?

Superintendent of Police Harishankar IPS of the cyber operations wing of the Kerala Police spoke to Manorama Online and shed some light on the intrigue surrounding these festering digital vulnerabilities.

Back story

A US-based doctor's wife living in Kerala decided to try her hand at earning an income by working from home; her husband, we are told, earns at least Rs 20 lakh a month.

She checked several opportunities until she came across an advertisement that offered decent pay for subscribing to YouTube channels. She was hooked.

The woman was convinced when money was credited into her account after she subscribed to certain YouTube channels. The links to these channels were sent through a Telegram group. Those in the group prompted her to invest in crypto trading to earn more, and shared messages they had received claiming that huge amounts had been credited to their accounts.

Convinced, the young woman 'invested' lakhs of rupees. She realised the con only after losing Rs 35 lakh.

This is just one instance of the several online frauds taking place in Kerala as we speak.

Unfortunately, almost every cyber fraud claims at least one Malayali victim. The victim of India's first fraud case that used an AI deepfake, too, was a Malayali, based at Palazhi in Kozhikode. We caught up with the SP to gain some understanding of such cases.

What is the status of the probe into India's first AI-based fraud case?
The investigation is progressing and is centred on Maharashtra. We blocked the account to which the money had gone immediately after receiving the complaint. The money was credited to an account in Ratnakar (now RBL) Bank. The probe has been extended to other places, including Goa. Future action will be based on the information gathered from these probes.

How was the victim cheated?
They used deepfake technology to recreate the image and voice of the victim's friend. It was not advanced technology; they used free software to create a two-dimensional visual. There are several open-source tools, such as deepfake and face-swap applications. We will get to know which tool was employed only after arresting the suspect.

The complainant said the other person had held the phone close to his face, and the video call was a brief one. He spoke in English, and the background was not clear. From this, we gather that a two-dimensional fake image was used, not a three-dimensional one. It is easy to create fake pictures now. If a video of a person speaking is available, a fake image of him, including the way he speaks, can be created. This is called Generative AI.

There is software that can recreate facial expressions that change with the pronunciation of each letter; it generates expressions to suit the pronunciation of each new word. Considerable technical effort is required to differentiate the more than 1,000 expressions that accompany the pronunciation of each word.

Currently, the animation sector is built on Generative AI. Complex tools are required to create 3-D images; Microsoft and Google have expensive ones. But freeware was used in the Kozhikode cheating case. The visual was 2-D, and the fraudster might have created the image using a passport-sized photograph he had found somewhere, or a photograph shared on social media. The lip movements could be created after blurring the background. Because the visual was 2-D, the phone was held close to the face. These tools support only English, not regional languages.

How do the fraudsters find their targets? Is there a trend?
The fraudsters mostly target the upper-middle class: people with enough money who speak English. These frauds are based on social engineering through social media. The first step is to find upper-middle-class people on social media. The fraudsters directly access public profiles without hacking into accounts, and then monitor the user's social media interactions.

They also monitor and study the user's friends. They gather information about the friendship, look for group photographs, and work out how to speak convincingly to the user. They then create a fake video using photographs the user had shared on social media before contacting the target. The WhatsApp video calls won't exceed 15 seconds; the receiver is told that the caller cannot talk for long. In the Kozhikode case, the caller said he was about to board a flight.

Is there a pattern for such fraudsters?
No, we cannot say there is a pattern. We don't expect frauds like the Kozhikode case to succeed in the future. First, there are no tools that support regional languages. Second, 2-D tools lack clarity, making the fraud easy to detect. And 3-D tools are highly expensive; fraudsters won't shell out that much money to con people. The Kozhikode case was likely a trial run, and it was successful.

Are there Malayalis behind these cases? Do they work alone or as a network?
The cases reported so far originated outside Kerala. The suspect in the Kozhikode case was in Maharashtra. About 85 percent of cyber crimes in India originate from three places: Surat in Gujarat, Jamtara in Jharkhand, and Bharatpur in Rajasthan. The fraudsters act both alone and as networks. Since they are part of a network, they can withdraw money from different places.

Most OTP crimes in India are linked to Jamtara. OTP crimes do not need any investment: once the fraudster receives the OTP, he transfers the money across several bank accounts. Of late, the money is converted to cryptocurrency, after which it is difficult to trace. Cases have been registered in Thiruvananthapuram, Kozhikode, and Kottayam, but no arrests have been made.

AI is a new challenge. What are the other methods used in cybercrime?
Kerala is witnessing a rise in investment frauds. The fraudsters guarantee an income from working from home. The messages are normally sent over WhatsApp or Telegram. Once the receiver clicks on the link, s/he is taken to a Telegram group with several members, many of whom vouch that they have received money; they are all part of the gang. Once the user clicks on the link to join the 'work', s/he is asked to register by providing an email ID, bank account number, and so on.

Once registered, the user receives links to 10 YouTube videos, with an instruction to 'like' and subscribe to them. On doing as instructed, Rs 1,000 is credited to the user's account. The process is repeated once more to convince the user. Later, the user is asked to pay to get more videos, with the option of paying just Rs 10,000 as the first installment.

Once the user pays, s/he gets more videos to 'like' and subscribe to, and the 'score' rises from 10,000 to 20,000. But there is a catch: the user is told that s/he can withdraw the money only when the score reaches one lakh. The credulous user is thus compelled to shell out more money. Once the score reaches a lakh, s/he is offered a scheme for five lakh. Some people still pay the fraudsters, but they cannot withdraw money even when the score is 15 lakh. By that time the user would have lost Rs 5.6 lakh. People should recognise the fraud the moment they are asked to pay for working from home.

Sextortion
Another method is sextortion, in which fake porn videos are used to force people to pay up. The user gets a video call; once it is attended, a nude woman appears on screen. The fraudsters will have taken a screenshot even before the receiver can react, and then threaten that the images will be circulated on social media. Several such cases of blackmail have been reported in Kerala. A retired Major in Thiruvananthapuram who fell prey even died by suicide; the fraudsters had forced him into paying lakhs of rupees.

Several other methods, such as phone mirroring and loan apps, are also employed. Remote desktop apps are available on the Play Store. They are useful apps, but fraudsters put them to a different purpose. A Google search for jobs shows several opportunities, but most of the numbers listed belong to fraudsters. Once the user contacts one such number, s/he is forwarded a URL with an instruction to pay Rs 100 as a registration fee. However, clicking the link downloads the remote desktop app. When the user pays the Rs 100 registration fee, the fraudsters capture the password and other account details and withdraw money.

The loan apps offer money without any collateral; the money reaches the account without complex banking procedures. But the fraudsters get our personal details the moment we download the app, and they use those details to threaten the borrower. They message our friends and relatives saying we are fraudsters. Recently, a 22-year-old man died by suicide in Bengaluru after an intimate picture of him with his girlfriend was circulated. They don't take over our property like the banks do. They put a price on our honour.

Why is it that so many Malayalis are falling prey to these fraudsters? Is it because of a lack of employment for the educated?

Malayalis go after more money even when they are decently employed. Take the case of the doctor's wife mentioned earlier. Malayalis put money into fraudulent offers of every kind, like chit-fund frauds, blade frauds, and so on. They invest if someone offers higher interest, and once the money is gone, they come to the police.

How can we prevent cybercrimes?
AI technology itself could prevent cybercrimes. Tools could be created to detect fake voice calls and images. If we develop such technologies, cybercrimes can be prevented to a certain extent.

Will there be any reforms in the police force to face new challenges?
AI is not a challenge. Only one case has been reported, and it doesn't need special training to handle. The formation of a cyber division to probe cybercrimes is under consideration. The cyber operations wing, headed by an SP-ranked officer, is part of that move.

Currently, the cyber police probe cybercrimes, and we provide them with tech support. Our job involves learning about new cybercrimes and providing training to prevent them. This wing now has fewer than 15 officers. Once it becomes a separate cyber division like the Crime Branch, its functioning will improve. We expect the government to decide within two months.
