Deepfake technology is risky but intriguing to businesses

A recent online advertisement from Daily Voice, a Connecticut-based internet news site, called for newscasters. The ad looked ordinary except for one particular line: “The captured video of your likeness will then be used on an ongoing basis to generate video clips of news stories.”

What the news site promised to use was a variant of AI technology known as deepfake. The term combines “deep learning” with “fake”: synthetic data or media (visual or other information that is generated rather than recorded from actual events).

Some consider deepfakes simply synthetic data that companies can use to their advantage when training machine learning models. Others consider them a dangerous tool that can sway political opinion, harm consumers with deliberately misleading images and undermine trust in genuine data.

Deepfake as a useful tool

Rowan Curran, an analyst at Forrester Research, says companies need to distinguish between harmful and beneficial deepfakes.

“It’s important to separate this idea of deepfakes as a tool individuals use to forge politicians’ speeches from these [tools] that generate synthetic datasets for enterprises, which are very useful and highly scalable [products],” Curran said.

Enterprises can use deepfake technology to create synthetic datasets for training machine learning models.

Deepfake technology is useful in simulation environments, where machine learning models can be trained on situations that don’t exist in the real world or are too private to use real-world data. These include industry applications such as healthcare, for simulating or supplementing datasets, and media: Daily Voice, for example, could generate the voices of popular podcasters and radio hosts in a variety of languages.
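
To make the dataset-supplementing idea concrete, here is a minimal sketch of training a model on a scarce real dataset padded with synthetic samples. It is illustrative only: the per-class Gaussian sampler stands in for the deep generative models (GANs, diffusion models) that deepfake-style synthesis actually uses, and all dataset names, shapes and sizes are invented.

```python
# Minimal sketch: supplement a scarce real-world dataset with synthetic
# samples before training. The Gaussian sampler is a deliberately simple
# stand-in for a deep generative model; the data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these 50 rows are all the real, shareable data available
# (e.g., anonymized clinical measurements).
real_X = rng.normal(loc=[0.0, 1.0], scale=0.5, size=(50, 2))
real_y = (real_X[:, 0] + real_X[:, 1] > 1.0).astype(int)

def synthesize(X, y, n_samples, rng):
    """Draw synthetic rows from per-class Gaussians fitted to X."""
    fake_X, fake_y = [], []
    for label in np.unique(y):
        cls = X[y == label]
        mean, cov = cls.mean(axis=0), np.cov(cls, rowvar=False)
        fake_X.append(rng.multivariate_normal(mean, cov, n_samples))
        fake_y.append(np.full(n_samples, label))
    return np.vstack(fake_X), np.concatenate(fake_y)

syn_X, syn_y = synthesize(real_X, real_y, n_samples=500, rng=rng)

# Train on the combined real + synthetic dataset.
model = LogisticRegression().fit(
    np.vstack([real_X, syn_X]), np.concatenate([real_y, syn_y])
)
print("accuracy on real data:", model.score(real_X, real_y))
```

The same pattern applies when real data is too private to share: the fitted generator, not the raw records, is what gets passed around.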

Another application of deepfake technology is enabling businesses to send messages at scale. One of the vendors developing this kind of technology is Hour One.

Hour One uses AI to generate videos of people who have given the company permission to use their likenesses. The vendor has collected more than 100 characters, or deepfakes based on real people. One of its customers, Alice Receptionist, uses a character to greet and inform visitors and to run a virtual receptionist that connects employees to visitors via video or voice calls.

Deception and fraud

Natalie Monbiot, head of strategy at Hour One, says the vendor safeguards the likenesses in its data from fraudsters and others trying to use the technology to deceive.

“Duplication and fraud is a systemic problem,” Monbiot said, citing the practice of hackers gaining access to consumers’ social media profiles and organizations’ sensitive data. “We understand that synthetic media could become another avenue for duplication and fraud, but honestly, it doesn’t take synthetic media for duplication or fraud to happen in the first place.”

Deceiving and defrauding consumers and businesses can occur without synthetic media or this type of technology, and Hour One has legal documentation in place to protect its characters, Monbiot said.

But with synthetic media and rapidly advancing deepfake tools that let almost anyone create relatively high-quality fake images, it’s easy for bad actors to sway the general public for political purposes, and for companies to spice up their ads in ways viewers can’t detect.

Gartner analyst Darin Stewart said: “This is going to amplify it on steroids.”

Meanwhile, organizations have stepped up to counter the threat of deepfake technology online.

A nonprofit social media safety organization is sponsoring a deepfake prevention bill in California. The proposed law defines a deepfake as a recording that has been deceptively altered so that the new recording appears authentic. The bill prohibits both sexual and political deepfakes created without consent, the latter to ensure that deepfake technology is not used to disrupt the democratic voting process.

Mark Burkeman, CEO of the social media safety organization, said: “This is one example. The goal is to get ahead of it before it actually takes root and stop people from being harmed.”

Duplication and fraud using synthetic media such as deepfakes affect not only consumers and politicians, but businesses as well.

One example Stewart cited is an organization that was scammed four times. The scammers targeted high-ranking officials who frequently appear in public, then trained voice models on recordings of their speech. Using a synthetic voice, they left voicemails for lower-level employees requesting large money transfers, claiming the money was needed immediately for a deal. The employees willingly made the transfers because they recognized the senior official’s voice, and the scammers made off with a significant amount of money.

“Now that video deepfakes are getting higher quality and cheaper to create, [this type of scam is] just going to expand,” Stewart said.

Keeping bad actors at bay

Still, Stewart says there are several ways to limit the damage from malicious individuals who use deepfake techniques to deceive or mislead. For example, a group of researchers at the University of California, Berkeley has built an AI detection system trained to spot whether a video is a deepfake based on facial movements, tics and expressions.
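
This is not the Berkeley system itself; the sketch below only illustrates the general shape of such a detector: reduce per-frame facial-landmark motion to summary features, then train a classifier to separate real footage from fakes. The motion statistics, the toy assumption that fakes move too smoothly, and all of the data are invented for illustration.

```python
# Illustrative sketch of a motion-based deepfake detector, not the
# Berkeley system. Features, data and the "fakes move too smoothly"
# assumption are all invented for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def motion_features(landmarks):
    """landmarks: (frames, points, 2) facial-landmark positions.
    Returns summary statistics of frame-to-frame movement."""
    deltas = np.diff(landmarks, axis=0)        # per-frame motion vectors
    speed = np.linalg.norm(deltas, axis=2)     # per-point speeds
    return np.array([speed.mean(), speed.std(),
                     speed.max(), np.median(speed)])

# Stand-in clips: random walks over 90 frames and 68 landmarks,
# with fakes given lower-variance (too-smooth) motion.
def real_clip(): return rng.normal(0, 0.05, size=(90, 68, 2)).cumsum(axis=0)
def fake_clip(): return rng.normal(0, 0.02, size=(90, 68, 2)).cumsum(axis=0)

X = np.array([motion_features(real_clip()) for _ in range(100)] +
             [motion_features(fake_clip()) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)            # 0 = real, 1 = deepfake

clf = RandomForestClassifier(random_state=0).fit(X, y)
print("flagged as deepfake:", clf.predict([motion_features(fake_clip())])[0] == 1)
```

A production detector would extract landmarks from actual video frames and learn far subtler cues, but the train-a-classifier-on-motion-features pipeline is the same.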

However, detection tools only work after the damage is done, and scammers are already using these detection systems to train better deepfakes.

A technical provenance process, recording where a video or image came from, who created it and where it was edited, may be a better approach to exposing deepfakes. Keeping such a record for a video, and educating organizations about the authentication process, can help identify what is genuine and what is not.
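
As a rough sketch of what such a record could look like, the snippet below hash-chains each capture and edit event so that any later tampering with the history is detectable. The field names and verification flow are illustrative, not any specific standard; real-world provenance efforts such as C2PA define much richer formats.

```python
# Minimal sketch of a tamper-evident provenance log for a media file.
# Each record's hash covers the previous record, so rewriting history
# breaks the chain. Field names here are illustrative, not a standard.
import hashlib, json, time

def _digest(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, actor: str, action: str, media_bytes: bytes) -> None:
    record = {
        "actor": actor,                        # who created or edited it
        "action": action,                      # what happened
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "timestamp": time.time(),
        "prev": chain[-1]["hash"] if chain else None,
    }
    record["hash"] = _digest(record)
    chain.append(record)

def verify(chain: list) -> bool:
    """Recompute every link; any edit to the history breaks the chain."""
    prev = None
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev or rec["hash"] != _digest(body):
            return False
        prev = rec["hash"]
    return True

chain = []
append_record(chain, "newsroom-camera-01", "captured", b"raw footage")
append_record(chain, "editor@example.org", "color-corrected", b"edited footage")
print(verify(chain))                           # True
chain[0]["actor"] = "someone-else"             # tamper with the history...
print(verify(chain))                           # ...and verification fails
```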

However, according to Stewart, few people and organizations are willing to take those additional steps.

“That’s the biggest threat from deepfakes,” he said. “Many people make no effort to determine whether something has been manipulated or forged, and much of our society doesn’t care whether it was.”