Fakebook: Why Deep Fakes Mean Deep Trouble

Video and audio are the most visceral media of human communication. From the movies that reduce us to tears to the music that lifts our spirits, what we see and hear has enormous power to shape our beliefs and guide our behaviour.

We all know, when we watch Star Trek or immerse ourselves in EDM, that we are suspending reality in order to feel the thrill of escapism. But what if reality were suspended permanently?

The rise of “deepfake” technology threatens to fracture society’s ability to tell fact from fiction. The term refers to video or audio that has been altered with the aid of deep learning, typically to show a person doing something they never did or saying something they never said.

Though media has been artificially manipulated for decades, faster computers and easy-to-use, publicly available tools make convincing deepfakes increasingly easy to produce and to spread online.
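
To make the mechanics concrete, below is a minimal sketch of the shared-encoder, dual-decoder autoencoder popularised by early open-source face-swap tools. It assumes PyTorch, and every name and dimension in it is illustrative rather than taken from any production system; real pipelines add face detection and alignment, adversarial losses and far larger networks.

```python
# Illustrative sketch of the classic face-swap deepfake architecture:
# one shared encoder, one decoder per identity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct person A's faces
decoder_b = Decoder()  # trained to reconstruct person B's faces

# The trick: after training both pairs with a simple reconstruction loss,
# routing person A's face through person B's decoder renders B's identity
# with A's pose and expression.
face_a = torch.rand(1, 3, 64, 64)      # stand-in for an aligned face crop
swapped = decoder_b(encoder(face_a))   # the "deepfaked" output frame
```

The shared encoder is the heart of the trick: because both identities pass through one latent space, it is forced to learn a common representation of pose and expression, which either decoder can then render in its own identity.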

The most famous example is film director Jordan Peele’s 2018 deepfake of President Obama, created to sound the alarm about the potential abuse of the technology. As a filmmaker, Peele is well placed to speak about the power of video and audio to manipulate our emotions and persuade us to see events the way the creator intends.

Many experts have recently raised their heads above the parapet and begun publicly expressing concern. “There are a few phenomena that come together that make deepfakes particularly troubling when they’re provocative and destructive,” said Danielle Citron, a law professor at the University of Maryland. “We know that as human beings, video and audio is so visceral, we tend to believe what our eyes and ears are telling us.” Citron was talking about deepfakes in the context of politics, and about how a foreign government might release fake videos to sow chaos in democracies and make citizens believe things that never happened.

But technology expert Jamie Bartlett has recently expressed the opposite concern: that the most damaging effect of the rise of deepfakes may not be that we are all duped into believing fakes, but that we become so cynical we believe nothing at all.

“If everything is potentially a fake, then everything can be dismissed as a lie.” If a future Trump is caught saying “grab ’em by the pussy”, he will simply proclaim: “It’s a deepfake!”

What Can We Do To Protect Democracy?

A recent hearing of the U.S. House Intelligence Committee sought expert advice on how governments can best respond to deepfakes. Professor Citron contrasted two recent viral examples. The first was a video of Speaker Nancy Pelosi, doctored to make her sound drunk while delivering a speech. The second was a parody video of Mark Zuckerberg by the artist Bill Posters, in which Zuckerberg is synthetically made to say that he controls the world’s data and therefore controls the world.

Citron suggested it was right for the Pelosi video to be removed while the Zuckerberg video was allowed to stay online:

“For something like a video where it’s clearly doctored and an impersonation, not satire, not parody, it should be removed… [but] there are wonderful uses for deepfakes that are art, historical, sort of rejuvenating for people to create them about themselves…”

The moral and legal principle Citron seemed to be suggesting is that deepfakes should be permitted where a reasonable person could recognise them as fakes, equivalent to satire or fictional art, but prohibited where the primary purpose of the video is to deceive and injure.

David Doermann, a former project manager at DARPA (the Defense Advanced Research Projects Agency), echoed this sentiment and added that he believes another layer of verification will be needed online. He advocated a new law requiring social media companies to delay the publishing of videos until some initial verification can be done, akin to Facebook’s ads verification.

“There’s no reason why these things have to be instantaneous… we’ve done it for child pornography, we’ve done it for human trafficking.”

Public debate continues to rage over what legal measures should be implemented to protect our democracies from falsification and confusion. But a strong consensus is at least emerging on the need to act, and to act fast.

As the political theorist Hannah Arendt wrote in the 1950s, the ideal condition for authoritarianism to thrive is a society in which “the distinction between fact and fiction, true and false, no longer exists.”

The health of democracies all over the world will depend on finding ways to re-establish the truth and authenticity of video and audio content. I’ll leave you with this final quote from Jamie Bartlett on what we can expect if regulation is not urgently implemented:

“In the face of constant and endless deep fakes and deep denials, the only rational response from the citizen will be extreme cynicism and apathy about the very idea of truth itself. They will conclude that nothing is to be trusted except [their] own gut instinct and existing political loyalties…”


Do the benefits of artificial intelligence outweigh the risks?

The discussion around Artificial Intelligence (AI) can sound a lot like Brexit. It’s coming, but we don’t know when. It could destroy jobs, but it could create more. There are even questions about sovereignty, democracy and taking back control.

Yet even the prospect of a post-Brexit Britain led by Boris “fuck business” Johnson doesn’t conjure the same level of collective anxiety as humanity’s precarious future in the face of super-intelligent AI. Opinions are divided as to whether this technological revolution will lead us on a new path to prosperity or down a dark road to human obsolescence. One thing is clear: we are about to embark on an age of rapid change the like of which has never been experienced in human history.

From cancer to climate change, the promise of AI is to uncover solutions to our overwhelmingly complex problems. In healthcare, its use is already speeding up disease diagnosis, improving accuracy, reducing costs and freeing up doctors’ valuable time.

In mobility, the age of autonomous vehicles is upon us. Despite high-profile fatal crashes involving Uber and Tesla vehicles, companies and investors are confident that self-driving cars will replace human-operated vehicles as early as 2020. AI evangelists claim that by removing human error from the road, the world’s million-plus annual road deaths will be dramatically reduced, while city scourges like congestion and air pollution are eliminated.

AI is also transforming energy. Google’s DeepMind is in talks with the U.K. National Grid to cut the country’s energy bill by 10% using predictive machine learning to analyse demand patterns and maximise the use of renewables in the system.
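
DeepMind has not published the details of that work, but the underlying idea, learning from historical demand so that generation can be scheduled ahead of time, is simple to illustrate. Below is a toy sketch on synthetic data; the figures, model choice and lag window are all invented for illustration, not drawn from the actual National Grid project.

```python
# Toy illustration of demand forecasting: learn next-hour electricity
# demand from the previous day of demand, so generation (including
# renewables) can be scheduled in advance.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
# Synthetic hourly demand: daily cycle + seasonal cycle + noise (in GW).
demand = (30 + 8 * np.sin(2 * np.pi * hours / 24)
             + 5 * np.sin(2 * np.pi * hours / (24 * 365))
             + rng.normal(0, 1, hours.size))

lags = 24  # predict the next hour from the previous 24 hours
X = np.stack([demand[i : i + lags] for i in range(demand.size - lags)])
y = demand[lags:]

split = int(0.8 * len(X))  # train on the first 80%, test on the rest
model = Ridge().fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```

Production systems would use richer features (weather forecasts, calendar effects) and far more capable models, but the shape of the problem is the same: turn past demand patterns into a forecast that the grid can act on.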

In the coming decades, autonomous Ubers, AI doctors and smart energy systems could radically improve our quality of life, free us from monotonous tasks and speed up our access to vital services.

But haven’t we heard this story of technological liberation before? From Facebook to the gig economy, we were sold a story of short-term empowerment that neglected the potential for long-term exploitation.

In 2011, many were claiming that Twitter and Facebook had helped foment the Arab Spring, and eagerly applauded a new era of non-hierarchical connectivity that would empower ordinary citizens as never before. Fast forward seven years, and those dreams seem to have morphed into a dystopian nightmare.

It has been well documented that the deployment of powerful AI algorithms has had devastating and far-reaching consequences for democratic politics. Personalisation and data collection are employed not to enhance the user experience but to addict us, and to profit from our manipulation by third parties.

Mustafa Suleyman, co-founder of DeepMind, has warned that, just like other industries, AI suffers from a dangerous asymmetry between market-based incentives and wider societal goals. The standard measures of business achievement, from fundraising valuations to active users, do not capture the social responsibility that comes with trying to change the world for the better.

One eerie example is Google’s recently launched AI assistant, marketed under the slogan “Make Google do it”. The AI will now do tasks for you such as reading, planning, remembering and typing. Having already ceded concentration, focus and emotional control to algorithms, it seems our next step is to relinquish more fundamental cognitive skills.

This follows an increasing trend of companies nudging us to give up our personal autonomy and trust algorithms over our own intuition. The question has moved beyond privacy invasion to the erosion of trust in our own minds. From dating apps like Tinder to Google’s new assistant, the underlying message is always that our brains are too slow, too biased, too unintelligent. If we want to be successful in our love, work or social lives, we need to upgrade our outdated biological feelings to modern, digital algorithms.

Yet once we begin to trust these digital systems to make our life choices, we will become dependent upon them. The recent Facebook-Cambridge Analytica scandal, in which data was misused to influence the U.S. election and the Brexit referendum, gives us a glimpse of the consequences of unleashing powerful new technology before it has been publicly, legally and ethically understood.

We are still in the dark as to how powerful these technologies are at influencing our behaviour. Facebook have publicly stated that they have the power to increase voter turnout; the logical corollary is that Facebook could also decide to suppress it. It is scandalous how beholden we are to a powerful private company, with no safeguards to protect democracy from manipulative technology before it is rolled out on the market.

A recent poll from the RSA reveals just how oblivious the public are to the increasing use of AI in society. It found that only 32% of people are aware AI is being used in decision-making contexts, dropping to 9% awareness of automated decision-making in the criminal justice system. Without public knowledge there is no public debate, and without public debate there is no demand for public representatives to ensure ethical conduct and accountability.

As more powerful AI is rolled out across the world, it is imperative that AI safety and ethics are elevated to the forefront of political discourse. If AI’s development and discussion continue to take place in the shadows of Silicon Valley and Shenzhen, and the public feel they are losing control over their society, then we can expect, in a similar vein to Brexit and Trump, a political backlash against the technological “elites”.

Jobs

The most immediate risk of AI sparking political upheaval lies in automation replacing the human workforce. As capital begins to outstrip labour, it will not only displace workers but exacerbate inequality between those who own the algorithms and those who don’t. Optimists argue that as AI outperforms humans in more cognitive tasks, new creative jobs will emerge, focused on skills machines can’t yet replicate, such as empathy.

Yet this will have to be a rapid transformation of the job market. A recent report from McKinsey estimates that up to 375 million workers around the world may need to switch jobs by 2030, 100 million of them in China alone. It is surely impractical and wishful thinking to suggest that factory workers in China or the 3.5 million truck drivers in the United States can simply re-skill and retrain as machine learning specialists or software engineers.

Even if they do, there is no guarantee automation will not overtake them again by the time they have re-skilled. The risk for the future of work in the new AI economic paradigm is not so much creating new jobs as creating new jobs in which humans can outperform machines. If new jobs don’t proliferate, and the utopian infrastructure of universal basic income, job retraining schemes and outlets for finding purpose in a life without work is not in place, a populist neo-Luddite revolution will likely erupt to halt AI development in its tracks.

Widespread social disorder is a real risk if liberal democracy cannot address citizens’ concerns and keep pace with the speed of technological advance. In our current democratic framework, changing and updating our laws takes time, and different societal voices must be heard. In this context, by the time we have implemented a regulatory framework to safeguard society from a new application of AI, the technology could have morphed ten times over.

French President Emmanuel Macron has acknowledged that “this huge technological revolution is in fact a political revolution”, and has taken steps to carve out a different vision from “the opaque privatization of AI or its potentially despotic usage” in the U.S. and China respectively. France have launched a bold €1.5 billion initiative to become a leader in ethical AI research and innovation within a democratic sphere. Other democracies should follow this example and ensure democracy steers the direction of AI, rather than AI steering the direction of democracy.

Long-Term Risks

Yet the long-term risks of AI will transcend politics and economics. Today’s AI is known as narrow AI because it is capable of achieving specific, narrow goals such as driving a car or playing a computer game. The long-term goal of most AI companies is to create artificial general intelligence (AGI). Narrow AI may outperform us in specific tasks, but general AI would be able to outperform us in nearly every cognitive task.

One of the fundamental risks of AGI is that it would have the capacity to keep improving itself independently, moving along the spectrum of intelligence and advancing beyond human control. If this were to occur, and a super-intelligent AI developed a goal misaligned with our own, it could spell the end for humanity. An analogy popularized by the cosmologist and AI researcher Max Tegmark is the relationship between humans and ants: humans don’t hate ants, but if we are put in charge of a hydroelectric green-energy project and there is an anthill in the region to be flooded, too bad for the ants.

Humanity’s destruction of the natural world is rooted not in malice but in indifference to harming less intelligent beings as we set out to achieve our complex goals. In a similar scenario, if AI were to develop a goal that differed from humanity’s, we would likely end up like the ants.

In analysing the current conditions of our world, it is clear the risks of artificial intelligence outweigh the benefits. Given the political and corporate incentives of the twenty-first century, it is more likely that advances in AI will benefit a small class of people rather than the general population. It is more likely that the speed of automation will outpace preparations for a life without work. And it is more likely that the race to build artificial general intelligence will overtake the race to debate why we are developing the technology at all.