Faces and voices can be counterfeited: what serious consequences could malicious use bring?



Face-swapping software relies on an AI technique that has drawn wide attention. It is called DeepFake: deep forgery.

Recently, an AI face-swapping app named ZAO was launched in China. Users need only upload a single frontal photo; with one tap, they can swap their own face onto actors in film and television clips, instantly generate a video, and spread it through social media, as if their star dreams had come true in an instant.

The face-swapping app quickly became a hit, but it just as quickly drew the attention of regulators. This very week, the Ministry of Industry and Information Technology (MIIT) summoned the companies involved over issues of network data security.

At the same time, the AI technology behind the face-swapping software has itself attracted attention. It is called DeepFake: deep forgery.

Foreign media report: We can't believe everything on the Internet, right? Deepfake, deepfake, deepfake! So what exactly is it? A deepfake uses artificial intelligence to synthesize fake video, a high-tech way of putting words into someone's mouth.

Simply put, it is an artificial intelligence technique that can fake facial expressions in real time and render them into a 2D synthetic video.

More than a year ago, in December 2017, a user named "deepfakes" posted a fake video on Reddit in which the faces of actresses in pornographic films were replaced with those of famous actresses such as Scarlett Johansson. The post drew enormous traffic to AI face-swapping, and "deepfake" gradually became the shorthand for the technology. An algorithm of the same name was also open-sourced on GitHub, a hosting platform for open-source and private software projects.
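The face-swap technique that spread from that open-source code is commonly described as an autoencoder with one shared encoder and two person-specific decoders: faces of both people are compressed into a common latent space, and swapping means decoding person A's latent code with person B's decoder. Below is a minimal sketch of that data flow, with toy linear layers and random data standing in for the convolutional networks and real face images an actual implementation would use; every name and shape here is illustrative, not the original DeepFake code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale images for persons A and B.
faces_A = rng.random((100, 64))
faces_B = rng.random((100, 64))

# Shared encoder: one projection into a common latent space.
W_enc = rng.standard_normal((64, 16)) * 0.1

def encode(x):
    return x @ W_enc  # latent codes, shape (n, 16)

# Person-specific decoders, fitted by least squares so that
# decode_X(encode(faces_X)) approximates faces_X.
Z_A, Z_B = encode(faces_A), encode(faces_B)
W_dec_A, *_ = np.linalg.lstsq(Z_A, faces_A, rcond=None)
W_dec_B, *_ = np.linalg.lstsq(Z_B, faces_B, rcond=None)

# The face swap: encode a frame of person A, decode with B's decoder,
# yielding "A's pose and expression, B's face".
frame_A = faces_A[:1]
swapped = encode(frame_A) @ W_dec_B

print(swapped.shape)  # (1, 64)
```

Because the encoder is shared, the latent code captures pose and expression common to both identities, while each decoder re-renders that code as its own person's face; that asymmetry is what makes the swap work.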

Dr. Wardle: Hello, today I'm going to talk to you about a new technology that affects celebrities. Remember when Obama called Trump a fool, or when Kardashian rapped "Because I'm always half naked"? Deepfake! Deepfake! Deepfake! That was a deepfake too, and I'm not really Adele either; I'm an expert on online forgery. "Deepfake" describes video or audio files synthesized with artificial intelligence. At first it was very basic face replacement; now it is movie-grade effects. With such explosive technology, my God, we can't believe anything anymore. Yes, deepfakes are a terrifying dystopia, and they will only become easier and cheaper to produce.

Hao Li, an assistant professor at the University of Southern California, is a co-founder of Pinscreen, a software company that lets users instantly generate customized 3D avatars for virtual reality games and shopping.

Hao Li, Pinscreen co-founder: Now I've made an avatar of you.

ABC reporter O'Brien: A kind, clean-cut O'Brien.

This kind of trick can be played on anybody, including, of course, politicians.

ABC reporter O'Brien: Now I'm our President (Trump), and now Japan's Prime Minister Shinzo Abe.

Hao Li, Pinscreen co-founder: Of course, this technology could be used to do some really bad things. But that's not its main purpose. It is used for entertainment, as a fun tool for fashion and lifestyle, and it gives us richer experiences.

However, as the technology develops, such synthetic images and videos are becoming more and more realistic, and more deceptive. Researchers are concerned.

Hao Li, Pinscreen co-founder: We all assume there will be a tipping point at which we can no longer tell real from fake. I mean, visually speaking, I think you can already get very close; it just depends on how much effort you put in. But in terms of anyone being able to create such content, I think we are very close to that tipping point.

In January 2018, a piece of software called FakeApp came online, claiming to offer one-click face swapping. The videos it could generate included spoofs of US President Trump, or your own face pasted onto a Hollywood star.

Although some social news sites such as Reddit have explicitly banned the spread of face-swapped videos and pictures on their platforms, more than 90,000 users still circulate such videos on Reddit.

Computer science expert Farid: I'm worried about the weaponization of this technology and how it affects our society as a whole.

Deepfakes can forge not only faces but also voices.

According to The Wall Street Journal, in March this year criminals used deepfake technology to synthesize the voice of a company's CEO and successfully defrauded the company of 220,000 euros.

Is computer-synthesized voice really that hard to tell apart?

In 2018, three PhDs from the University of Montreal co-founded a company called Lyrebird. The company developed a voice-synthesis technology: record more than a minute of the target person's speech, feed the recording to Lyrebird for processing, and it can generate any desired speech in that person's voice.

Lyrebird founder: You need to record a few minutes of your own voice.

Bloomberg reporter Vance: (reading prompts aloud) "Thousands of letters jump on amateur writers' screens." "When you start eating like this, there will be problems." "You'd better quit politics and stop working." I have no idea how this works. Now create my digital voice. Creating your digital voice takes at least a minute. A minute? My God.

Once the recording is done, Lyrebird automatically runs and synthesizes your digital voice.

From then on, you simply type what you want to say into the computer, and your newly synthesized voice will say it.

Reporter Vance's digital synthesized voice: Artificial intelligence technology seems to be developing very fast. Should we be afraid?

Bloomberg reporter Vance: I did hear that. It was really interesting. I just typed it in; I never actually said it.

What's more, Lyrebird can add emotional elements to the synthesized voice to make it sound more realistic.

Bloomberg reporter Vance: Now, to test my computer-synthesized voice, I'm going to call my dear mother and see whether she can tell. Hi, Mom, what are you doing today?

Vance's mother: There was no electricity at home this morning. We were just walking around the house.

Vance: I just got off work and I'm waiting to pick up my son.

Vance's mother: OK.

Vance: I think I might have caught a virus.

Vance's mother: So you don't feel well, do you?

Vance's mother: I feel like I'm really talking to you. It's amazing.

Vance's mother: It would be really scary if it were about something very important. But it is you right now, isn't it?

Vance: I don't know.

Vance's mother: It sounds like you.

Vance: Really?

Vance's mother: Yes, yes, it sounds like you.

Bloomberg reporter Vance: Obviously, some people are frightened by this technology, because it blurs the line with reality.

Lyrebird founder: Of course, there is a real risk that someone will use this technology to do bad things. But technology cannot stop developing, so we decided the ethical approach was to show the technology to people, let them know what it makes possible, and make them more vigilant.

Pindrop, a cybersecurity company, surveyed 500 companies in four countries in May 2018. The results showed that voice-fraud cases grew by 350% from 2013 to 2017, and that one in every 638 fraudulent calls used a synthetic voice.

Udrich, a researcher at the University of Zurich in Switzerland: For a long time the human voice was the biggest challenge, because each voice is so complex and unique that it seemed almost impossible to forge. But enormous progress has been made in recent years, and combining video footage with counterfeit audio poses a huge threat.

Indeed, with voice forgery now mature, and combined with forged images, it is really not difficult to produce a deceptive video with ulterior motives.

Deepfake demonstration video: President Trump is a total and complete idiot. Now, you see, I would never say these things, at least not in a public address, but someone else would, someone like Jordan Peele. This is a dangerous time. Moving forward, we need to be more vigilant about what we trust from the Internet. We need reliable news sources in this era. It sounds simple, but how we move forward in the age of information will determine whether we survive or become some kind of messed-up dystopia. Thank you.

The video appears to be a speech by former US President Barack Obama, but it is actually a performance by comedian and film producer Jordan Peele; both the audio and the picture were synthesized by software.

Computer science expert Farid: The AI system synthesizes Obama's mouth and aligns it with the audio stream, making the video appear to show President Obama saying things he never said. This is called a lip-sync deepfake.

Technology itself is neither good nor bad. Deepfakes can be used for entertaining videos, games, and advertising collaborations, but malicious use can bring serious consequences. It may destroy our shared perception of reality, leaving people afraid to believe any video or audio at all.

The Carnegie Endowment for International Peace has likewise warned that deepfakes could deal a devastating blow to counter-terrorism and national security.

Computer science expert Farid: The nightmare scenario is a video of President Trump saying, "I have launched nuclear weapons against North Korea." Someone hacks his Twitter account and the news spreads wildly, and within 30 seconds a global nuclear war breaks out. Do I think that's likely? No. But it's not entirely impossible, and it scares you half to death, right? It's really worrying.

From deep learning to deep forgery: photos, voices, and videos can all be counterfeited with ease, while detecting a deepfake is far harder than making one.

What should we do when seeing is no longer believing, and neither is hearing?

Zuckerberg deepfake video: It's all thanks to Spectre, which showed me that whoever controls the data controls the future.

In June this year, just such a video appeared on the photo-sharing platform Instagram: Facebook founder Zuckerberg apparently speaking about the power of big data.

Zuckerberg soon denied it: he had never said any such thing. The video had in fact been synthesized by an Israeli technology company using deepfake technology.

Somewhat awkwardly, Facebook had previously stated that it would not delete fake news, only reduce its reach on the site and display information from fact-checkers alongside it.

Instagram head Mosseri also said in an interview that there is no large-scale data set or standard for detecting these fake videos.

Instagram head Mosseri: At present we have no policy on deepfakes. We are trying to evaluate whether we want one and, if so, how to define a deepfake. I don't think that's good.

Host Gayle King: You could restrict this technology. You have the influence.

Mosseri: Actually, I don't just want to take it down. I think the problem is how we do that in a principled way.

In response, Fortune urged that it is time for technology companies, academia, and governments to work together on solutions.

Many countries around the world have begun to legislate constraints on the collection of facial information and the application of recognition technology. But none of these efforts can succeed without the cooperation of the social media platforms.

Recently, Facebook, Microsoft, other technology companies, and a number of academic institutions jointly launched a Deepfake Detection Challenge, hoping to improve existing tools and strengthen the detection of deepfake images, audio, and video.

On September 5, Facebook announced that it would invest 10 million US dollars in the effort.

Researchers at the University of California, Berkeley are also studying how to counter deepfakes.

Computer science expert Farid: Our approach to this problem is to build a soft-biometric model. Classic biometrics means fingerprints, irises, faces. A soft biometric is not that unique; its purpose is to capture the subtle facial expressions and head movements that are distinctive to each individual but get disrupted when a fake video is produced.

Computer science expert Farid: First, we measure various head movements and facial expressions. In this video you can see the blue box we are capturing, which tracks how his head rotates in three-dimensional space. The red dots locate his facial expressions; we can actually see when he raises his eyebrows and opens his mouth. And the green lines from his eyes tell us where he is looking. In every frame of the video we measure his facial expressions, facial movements, and head movements, and then we use these to build a soft-biometric recognition model.
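Farid's description, per-frame head movements and expressions distilled into a personal model, can be sketched as a correlation signature: track several motion signals over time, use their pairwise correlations as a "style" vector, and flag videos whose signature drifts from reference footage of the same person. The toy numpy illustration below invents the tracks and the comparison; the real system uses far richer features and a trained classifier.

```python
import numpy as np

def soft_biometric_signature(tracks):
    """Pairwise Pearson correlations between motion tracks.

    tracks: array of shape (n_features, n_frames), e.g. rows for head
    yaw/pitch/roll and expression measurements (eyebrow raise, mouth open).
    Returns the upper-triangle correlation values as a signature vector.
    """
    c = np.corrcoef(tracks)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

rng = np.random.default_rng(1)

# Authentic footage: expressions and head motion co-vary in a personal style.
base = rng.standard_normal(500)
real = np.vstack([base,
                  0.8 * base + 0.2 * rng.standard_normal(500),
                  -0.5 * base + 0.5 * rng.standard_normal(500)])

# A lip-sync fake drives the mouth independently, disrupting the correlations.
fake = real.copy()
fake[1] = rng.standard_normal(500)

ref = soft_biometric_signature(real)
dist_real = np.linalg.norm(soft_biometric_signature(real) - ref)
dist_fake = np.linalg.norm(soft_biometric_signature(fake) - ref)
print(dist_fake > dist_real)  # the fake deviates from the reference style
```

The design intuition is that a forger can copy a face frame by frame but rarely reproduces how a specific person's expressions and head movements move together over time.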

Farid laments that more and more people are making fake videos and that the technology is advancing very rapidly, while the teams fighting fake videos lag behind and remain weak by comparison.

The Pentagon's research arm, the Defense Advanced Research Projects Agency (DARPA), has been studying how to counter the threat of deepfakes.

Computer scientist Turek: It makes us distrust pictures and videos.

Software for detecting forged video can trace lip movements and compare them with the audio in the video.

Computer scientist Turek: When you see these red dots, it means the speaker's voice does not actually match the movement of his lips.
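The "red dots" idea, flagging moments when voice and lips disagree, can be sketched as a per-frame comparison between a mouth-opening measurement and the audio loudness envelope. The toy illustration below invents both signals; a real system would extract them from the video and audio tracks with learned models.

```python
def lip_audio_mismatch(mouth_open, loudness, threshold=0.5):
    """Return indices of frames ("red dots") where the mouth-opening
    measurement and the audio loudness envelope disagree sharply.
    Both inputs are per-frame values normalized to the range 0..1."""
    return [i for i, (m, a) in enumerate(zip(mouth_open, loudness))
            if abs(m - a) > threshold]

# Toy example: in frames 3 and 4 the audio is loud but the mouth is shut.
mouth = [0.1, 0.8, 0.9, 0.0, 0.0, 0.7]
audio = [0.1, 0.7, 0.8, 0.9, 0.8, 0.6]
print(lip_audio_mismatch(mouth, audio))  # [3, 4]
```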

In another video, two men appear to be sitting together, but by measuring the angle of the light on their faces, the software can determine that this is a composite video.

Computer scientist Turek: It predicts a 3D face model, and along with that model the software also estimates the facial reflectance properties and the illumination angle. Here we mainly use the lighting angle to check whether those features are consistent.
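The lighting check described above can be sketched with a Lambertian shading model: if pixel intensity is roughly the dot product of the surface normal and the light direction, a least-squares fit over a face recovers the light direction, and two faces filmed in the same real scene should agree. The numpy sketch below uses synthetic normals and intensities; real forensics works from an estimated 3D face model.

```python
import numpy as np

def estimate_light(normals, intensities):
    """Least-squares estimate of the lighting direction under a
    Lambertian model: intensity is approximately normal dot light."""
    L, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    return L / np.linalg.norm(L)

rng = np.random.default_rng(2)
normals = rng.standard_normal((200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

light_1 = np.array([0.0, 0.0, 1.0])  # first face: lit from the front
light_2 = np.array([1.0, 0.0, 0.0])  # second face: lit from the side

I1 = normals @ light_1 + 0.01 * rng.standard_normal(200)
I2 = normals @ light_2 + 0.01 * rng.standard_normal(200)

est1, est2 = estimate_light(normals, I1), estimate_light(normals, I2)
angle = np.degrees(np.arccos(np.clip(est1 @ est2, -1, 1)))
print(angle > 30)  # widely different light angles suggest a composite
```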

This is a frame from a surveillance video. The detection software tries to predict the direction in which objects are moving.

Computer scientist Turek: It detects discontinuities in the motion of objects, which signals us to examine the picture or video carefully. It may be that part of the picture was erased here.

The detection found that the video had indeed been tampered with: another car had been edited out of the frame.
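The motion-discontinuity check can be sketched as a simple outlier test on an object's frame-to-frame displacement: when a tracked position suddenly jumps far beyond its typical step, frames may have been removed or an object spliced out. The trajectory below is invented for illustration; real detectors run on objects tracked through video.

```python
def motion_discontinuities(xs, ys, factor=3.0):
    """Flag frames where an object's frame-to-frame displacement jumps
    far above the typical (median) step size, hinting that frames were
    cut or an object was spliced out of the scene."""
    steps = [((xs[i] - xs[i - 1]) ** 2 + (ys[i] - ys[i - 1]) ** 2) ** 0.5
             for i in range(1, len(xs))]
    typical = sorted(steps)[len(steps) // 2]  # median step size
    return [i + 1 for i, s in enumerate(steps) if s > factor * typical]

# Toy trajectory: a car moving one unit per frame, then "teleporting"
# because the intervening frames were removed.
xs = [0, 1, 2, 3, 4, 5, 26, 27, 28, 29]
ys = [0.0] * 10
print(motion_discontinuities(xs, ys))  # [6]
```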

Computer scientist Turek: It's a cat-and-mouse game. The more checks a fake picture or video has to pass, the more pressure you put on the counterfeiters.

Two years ago, the United States released the report "Artificial Intelligence and National Security," which explicitly listed AI forgery technology as a key technology threatening national security.

As a major developing power in AI, China has also strengthened policy and technical oversight to address the security risks that new AI technologies may bring.

After all, once voice, fingerprint, face, and other critical biometric information can be convincingly imitated, the risks and hidden dangers grow with it.

Source: CCTV News Client. Responsible editor: Wang Fengzhi.