Fakebook: Why Deep Fakes Mean Deep Trouble

Video and audio are the most visceral media of human communication. From the movies that reduce us to tears to the music that lifts our spirits, what we see and hear has huge power to shape our beliefs and guide our behaviour.

We all know when we watch Star Trek or immerse ourselves in EDM that we are suspending reality in order to feel the thrill of escapism. But what if reality were suspended permanently?

The rise of “deepfake” technology has the power to fracture society’s ability to tell fact from fiction. The term ‘deepfake’ refers to video or audio that has been altered with the aid of deep learning technology, usually to show a person doing something they never did or saying something they never said.

Though media has been artificially manipulated for decades, faster computers and easy-to-use, publicly available technology make convincing deepfakes increasingly easy to produce and proliferate online.

The most famous example is film director Jordan Peele’s 2018 deepfake of President Obama (below), created to sound the alarm about the potential abuse of the technology. As a film director, Peele is well placed to speak of the power of video and audio to manipulate emotions and persuade us to see events the way the creator wants us to.

Many experts have recently raised their heads above the parapet and begun publicly expressing concern. “There are a few phenomena that come together that make deepfakes particularly troubling when they’re provocative and destructive,” said Danielle Citron, a law professor at the University of Maryland. “We know that as human beings, video and audio is so visceral, we tend to believe what our eyes and ears are telling us.” Citron was talking about deepfakes in the context of politics, and how a foreign government might release fake videos to sow chaos in democracies and make citizens believe things that never happened.

But technology expert Jamie Bartlett has recently expressed the opposite concern: that the most damaging effect of the rise of deepfakes may not be that we are all duped into believing fakes, but that we will become so cynical that we will believe nothing at all.

“If everything is potentially a fake then everything can be dismissed as a lie.” If a future Trump is caught saying “grab ’em by the pussy”, he will simply proclaim: “It’s a deepfake!”

What Can We Do To Protect Democracy?

A recent hearing of the U.S. House Intelligence Committee sought expert advice on the best means for governments to respond to deepfakes. Professor Citron contrasted two recent viral examples. The first was a video of Speaker Nancy Pelosi in which her voice was doctored to make her sound drunk while delivering a speech. The second was a parody video of Mark Zuckerberg by the artist Bill Posters in which Zuckerberg is synthetically made to say that he controls the world’s data and therefore controls the world.

Citron suggested it was right for the Pelosi video to be removed while the Zuckerberg video was allowed to stay online:

“For something like a video where its clearly a doctored and impersonation, not satire, not parody it should be removed.. [but] there are wonderful uses for deepfakes that are art, historical, sort of rejuvenating for people to create them about themselves…” 

The moral and legal principle Citron seemed to be suggesting is that deepfakes should be permitted in instances where a reasonable person would recognise the content as fake, as with a piece of satire or fictional art, but prohibited in instances where the primary purpose of the video is to deceive and injure.

David Doermann, former project manager at DARPA (the Defense Advanced Research Projects Agency), echoed this sentiment and added that he believes another layer of verification will be needed online. He advocated a new law requiring social media companies to delay the publishing of videos until some initial verification can be done, akin to Facebook’s ads verification process.

“There’s no reason why these things have to be instantaneous.. we’ve done it for child pornography, we’ve done it for human trafficking.” 

Public debate continues to rage as to what legal measures should be implemented to protect our democracies from falsification and confusion. But there is at least strong consensus emerging that there is a need to act and to act fast.

As the political theorist Hannah Arendt wrote in the 1950s, the ideal condition for authoritarianism to thrive is a society where “the distinction between fact and fiction, true and false, no longer exists.”

The health of democracies all over the world will depend on finding ways to re-establish the truth and authenticity of video and audio content. I’ll leave you with this final quote from Jamie Bartlett on what we can expect if regulation is not urgently implemented:

“In the face of constant and endless deep fakes and deep denials, the only rational response from the citizen will be extreme cynicism and apathy about the very idea of truth itself. They will conclude that nothing is to be trusted except her own gut instinct and existing political loyalties..” 

 

 


The Corporate Capture of Social Change

“If we want things to stay as they are, things will have to change”

Anand Giridharadas isn’t afraid of controversy. His book Winners Take All is a blistering takedown of the faith put in the biggest beneficiaries of capitalism to lead capitalism’s reform and change the world for the better.

Be it the next Silicon Valley start-up or philanthropic foundation, the underlying assumption pushed by the rich is always that business, entrepreneurship and the private sector are the most efficient and effective means of tackling society’s collective problems.

Giridharadas describes how even the language of social change, historically associated with grassroots movements, social justice and mass protest, has been colonised by market logic and the billionaire class.

Rather than discussing social change as being rooted in rights, justice and systemic reform, the new corporate conception of social change sees inequality, climate change and poverty as a set of technical problems with market solutions. For these people, fixing the world is not about challenging powerful interests and overhauling a rigged economic system but about empowering “global leaders and opinion formers” to leverage “capital, data and technology to improve lives.”

What this actually means is cutting the public out of deciding what the future should look like. Instead of community leaders, unions and businesses engaging in dialogue to decide what’s best for their communities, we are told to look to McKinsey consultants and Goldman Sachs analysts to crunch numbers and produce reports on how to “restructure” the economy, prepare for “inevitable” disruption and spur economic growth.

The glaring contradiction of putting the winners of our broken economy in charge of its repair is that the winners are actually quite comfortable with the status quo. Why would Goldman Sachs want solutions to social change if social change threatens their status, money and power?

By capturing social change within their control they are able to ensure social change is not pursued at all. Angel Gurría, Secretary-General of the OECD, describes the top-down approach as “changing things on the surface so that in practice nothing changes at all.”


 

 

Why Technology Changes Who We Trust

Trust is the foundation of all human connections. From brief encounters to intimate relationships, it governs almost every interaction we have with each other. I trust my housemates not to go into my room without asking, I trust the bank to keep my money safe and I trust the pilot of my plane to fly safely to the destination.

Rachel Botsman describes trust as “a confident relationship with the unknown”: the bridge that allows us to cross from a position of certainty to one of uncertainty and move forward in our lives.

Throughout history, trust has been the glue that allowed people to live together and flourish in cooperative societies. An absence, loss or betrayal of trust could spark violent and deadly consequences.

In recent decades the world has witnessed a radical shift in trust. We might be losing faith in global institutions and political leaders, but simultaneously millions of people are renting their homes to complete strangers on Airbnb, exchanging digital currencies like Bitcoin or finding themselves trusting bots for help online. Botsman describes this shift as a new age of ‘distributed trust.’

Instead of a vertical relationship where trust flows upwards from individuals to hierarchical institutions, experts, authorities and regulators, today trust increasingly flows horizontally from individuals to networks, peers, friends, colleagues and fellow users.

If we are to benefit from this radical shift and not see a collapse of our institutions, we must understand the mechanics of how trust is built, managed, lost, and repaired in the digital age. To explain this new world, Botsman provides a detailed map of this uncharted landscape and explores what’s next for humanity.

Watch below:

And for a more detailed account listen here: https://play.acast.com/s/intelligencesquared/rachelbotsmanandhelenlewisontechnologyandtrust

 

Do the benefits of artificial intelligence outweigh the risks?

The discussion around Artificial Intelligence (AI) can sound a lot like Brexit. It’s coming but we don’t know when. It could destroy jobs but it could create more. There are even questions about sovereignty, democracy and taking back control.

Yet even the prospect of a post-Brexit Britain led by Boris “fuck business” Johnson doesn’t conjure the same level of collective anxiety as humanity’s precarious future in the face of super-intelligent AI. Opinions are divided as to whether this technological revolution will lead us on a new path to prosperity or down a dark road to human obsolescence. One thing is clear: we are about to embark on an age of rapid change the likes of which has never been experienced before in human history.

From cancer to climate change, the promise of AI is to uncover solutions to our overwhelmingly complex problems. In healthcare, its use is already speeding up disease diagnoses, improving accuracy, reducing costs and freeing up the valuable time of doctors.

In mobility, the age of autonomous vehicles is upon us. Despite high-profile fatal incidents involving Uber and Tesla self-driving systems, companies and investors are confident that self-driving cars will replace human-operated vehicles as early as 2020. By removing human error from the road, AI evangelists claim, the world’s more than one million annual road deaths will be dramatically reduced, while city scourges like congestion and air pollution are simultaneously eliminated.

AI is also transforming energy. Google’s DeepMind is in talks with the U.K. National Grid to cut the country’s energy bill by 10% using predictive machine learning to analyse demand patterns and maximise the use of renewables in the system.

In the coming decades autonomous Ubers, AI doctors and smart energy systems could radically improve our quality of life, free us from monotonous tasks and speed up our access to vital services.

But haven’t we heard this story of technological liberation before? From Facebook to the gig economy, we were sold a story of short-term empowerment that neglected the potential for long-term exploitation.

In 2011 many were claiming that Twitter and Facebook had helped foment the Arab Spring, eagerly applauding a new era of non-hierarchical connectivity that would empower ordinary citizens as never before. But fast forward seven years and those dreams seem to have morphed into a dystopian nightmare.

It’s been well documented that the deployment of powerful AI algorithms has had devastating and far-reaching consequences for democratic politics. Personalisation and the collection of data are employed not to enhance user experience but to addict us and profit from our manipulation by third parties.

Mustafa Suleyman, co-founder of DeepMind, has warned that, just like other industries, AI suffers from a dangerous asymmetry between market-based incentives and wider societal goals. The standard measures of business achievement, from fundraising valuations to active users, do not capture the social responsibility that comes with trying to change the world for the better.

One eerie example is Google’s recently launched AI assistant, promoted under the marketing campaign “Make Google do it”. The AI will now do tasks for you such as reading, planning, remembering and typing. Having already ceded concentration, focus and emotional control to algorithms, it seems the next step is for us to relinquish more fundamental cognitive skills.

This follows an increasing trend of companies nudging us to give up our personal autonomy and trust algorithms over our own intuition. It has moved from a question of privacy invasion to an erosion of trust in our own minds. From dating apps like Tinder to Google’s new assistant, the underlying message is always that our brains are too slow, too biased, too unintelligent. If we want to be successful in our love, work or social lives, we need to upgrade our outdated biological feelings to modern, digital algorithms.

Yet once we begin to trust these digital systems to make our life choices, we will become dependent upon them. The recent Facebook-Cambridge Analytica scandal, in which data was misused to influence the U.S. election and the Brexit referendum, gives us a glimpse into the consequences of unleashing new and powerful technology before it has been publicly, legally and ethically understood.

We are still in the dark as to how powerful these technologies are at influencing our behaviour. Facebook has publicly stated that it has the power to increase voter turnout. A logical corollary is that Facebook could also decide to suppress voter turnout. It is scandalous just how beholden we are to a powerful private company, with no safeguards to protect democracy from manipulative technology before it is rolled out on the market.

A recent poll from the RSA reveals just how oblivious the public are to the increasing use of AI in society. It found only 32% of people are aware that AI is being used in a decision-making context, dropping to 9% awareness of automated decision making in the criminal justice system. Without public knowledge there is no public debate, and no public debate means no demand for public representatives to ensure ethical conduct and accountability.

As more powerful AI is rolled out across the world, it is imperative that AI safety and ethics are elevated to the forefront of political discourse. If AI’s development and discussion continue to take place in the shadows of Silicon Valley and Shenzhen, and the public feel they are losing control over their society, then we can expect, in a similar vein to Brexit and Trump, a political backlash against the technological “elites”.

Jobs

The most immediate risk of AI sparking political upheaval is automation replacing the human workforce. As capital begins to outstrip labour, it will not only displace workers but exacerbate inequality between those who own the algorithms and those who don’t. Optimists argue that as AI moves into the realm of outperforming humans in cognitive tasks, new creative jobs will emerge, focused on skills machines can’t yet replicate, such as empathy.

Yet this will have to be a quick transformation in the job market. A recent report from McKinsey estimates that up to 375 million workers around the world may need to switch jobs by 2030, 100 million of them in China alone. It is surely impractical and wishful thinking to suggest that factory workers in China or the 3.5 million truck drivers in the United States can simply re-skill and retrain as machine learning specialists or software engineers.

Even if they do, there is no guarantee automation will not overtake them again by the time they have re-skilled. The risk for the future of work in the new AI economic paradigm is not so much creating new jobs as creating new jobs in which humans can outperform machines. If new jobs don’t proliferate, and the utopian infrastructure of universal basic income, job retraining schemes and outlets for finding purpose in a life without work is not in place, a populist neo-Luddite revolution will likely erupt to halt AI development in its tracks.

Widespread social disorder is a real risk if liberal democracy cannot address citizens’ concerns and keep pace with the speed of technological advance. In our current democratic framework, changing and updating our laws takes time, and different societal voices must be heard. In this context, by the time we have implemented a regulatory framework to safeguard society from a new application of AI, it could have morphed ten times in the intervening period.

French President Emmanuel Macron has acknowledged that “this huge technological revolution is in fact a political revolution” and taken steps to carve out a different vision from “the opaque privatization of AI or its potentially despotic usage” in the U.S. and China respectively. France has launched a bold €1.5 billion initiative to become a leader in ethical AI research and innovation within a democratic sphere. Other democracies should follow this example and ensure democracy can steer the direction of AI, rather than AI steering the direction of democracy.

Long Term Risks

Yet the long-term risks of AI will transcend politics and economics. Today’s AI is known as narrow AI, as it is capable of achieving only specific, narrow goals such as driving a car or playing a computer game. The long-term goal of many companies is to create artificial general intelligence (AGI). Narrow AI may outperform us in specific tasks, but general AI would be able to outperform us in nearly every cognitive task.

One of the fundamental risks of AGI is that it will have the capacity to keep improving itself independently along the spectrum of intelligence and advance beyond human control. If this were to occur, and super-intelligent AI developed a goal misaligned with our own, it could spell the end for humanity. An analogy popularized by cosmologist and leading AI expert Max Tegmark is the relationship between humans and ants: humans don’t hate ants, but if we are put in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants.

Humanity’s destruction of the natural world is rooted not in malice but in indifference to harming less intelligent beings as we set out to achieve our complex goals. In a similar scenario, if AI were to develop a goal which differed from humanity’s, we would likely end up like the ants.

In analysing the current conditions of our world, it’s clear the risks of artificial intelligence outweigh the benefits. Based on the political and corporate incentives of the twenty-first century, it is more likely that advances in AI will benefit a small class of people rather than the general population. It is more likely that the speed of automation will outpace preparations for a life without work. And it is more likely that the race to build artificial general intelligence will overtake the race to debate why we are developing the technology at all.

 

Why News And The Internet Don’t Mix

(Image: Steve Cutts)

The way in which we consume information determines how we interpret it.

In his seminal work “Thinking, Fast and Slow”, the Nobel prize-winning behavioural psychologist Daniel Kahneman describes how two basic systems govern the way we think. We have a primal ‘system one’ way of thinking which is fast, impulsive and emotional. We also have a ‘system two’ form of thinking which is slow, deliberative and logical.

Democracy demands we use ‘system two’ thinking in order to function. Our institutions are designed to arrive at logical, evidence-based decisions. Our legal systems are designed to apply standards of ‘reasonableness’ in solving disputes. And our media should, in theory, be designed to engender healthy, informed public debate.

The internet, by contrast, is designed for impulse. Everything is fast and personal. We click, like, swipe and tweet as our neural circuits light up and react to stimuli like notifications, clickbait and autoplaying video. The internet creates an effortless, instant, interactive experience which allows us to constantly redirect our attention to whatever grabs it in the moment, never settling on one task or focus.

The speed and responsiveness of the internet mean it is not only a distracting medium for news consumption but also a highly emotional one. Unlike a physical newspaper, whose content you can digest and contemplate in manageable morsels, online news comes at you fast and encourages you to instantly share your emotional response to a story on a public platform. Today people barely get past the headlines before erupting in a tweet-storm of rage or entering the cesspit of crass comments to vent their anger and opposition.

The toxic environment for discussion and debate we all witness online is a natural manifestation of the internet’s fast and fleeting format. Studies repeatedly show that the more moral and emotional language is used in political headlines and tweets, the more likely they are to receive likes, shares, comments and retweets.

Thus, in the competition for clicks, reasoned, logical and important information is often traded for stories that can manufacture outrage, anger and fear. If we want to live in a world where media can inform citizens, reflect healthy disagreement and host democratic debate, then we must begin to accept that the current business model and infrastructure of the internet are incompatible with this objective.

We should also be concerned by the increasing extent to which online news consumption is being dictated by for-profit algorithms. In the same way the food industry has exploited our natural craving for fat, salt and sugar, so too is the attention industry exploiting our natural curiosity for conspiracy, mystery and doubt to lead us down a dangerous rabbit hole of consuming more extreme content in the name of “engagement.”

YouTube is the worst offender. Sociologist Zeynep Tufekci has written on just how manipulative YouTube’s recommended videos and autoplay function are in encouraging extreme consumption:

“Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultra-marathons. It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes”

The Wall Street Journal also conducted an investigation of YouTube content, finding that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources”.

Tufekci describes this recent phenomenon as “the computational exploitation of a natural human desire: to look ‘behind the curtain,’ to dig deeper into something that engages us.” As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads users down a rabbit hole of extremism and profits from the process.

The internet has opened up access to unlimited libraries of information, allowing us to learn more about the world than ever before. However, from inhibiting reasoned discussion to encouraging extreme consumption, today’s diet of digital news isn’t making us smart; it’s making us sick.

Selfie: How We Became So Self Obsessed

We live in strange times: a generation of selfies and self-harm. We create edited online personas of people seemingly living perfect lives. Yet behind the screens, insecurity, vanity and depression are the defining characteristics of our culture.

People absorb culture like sponges. Every time we open our phones we internalize the competitive game of likes, retweets and follows as we strive to reach the false cultural concept of the “perfect self” (why did I only get 12 likes on my last post?! I’m better than that!), and when we don’t receive positive, dopamine-fuelled feedback we hate ourselves for failing. The recent spikes in self-harm, body dysmorphia, eating disorders and suicide can be attributed to this damaging culture of ‘social perfectionism.’

This is the argument of a brilliant new book by author Will Storr, who traces our story of self-obsession back to Ancient Greece and Aristotle. Storr describes how the Greek concept of “selfhood” was heavily based on individual self-improvement: through the persistence of personal will, one could attain the optimal level of spiritual, mental, physical and material being.

The fetishization of the self has permeated the Western consciousness ever since. Storr lives with monks in a secluded monastic settlement, enrols at the infamous Californian retreat centre the Esalen Institute, where the “self-esteem” movement is said to have been born, and finally stays with the tech evangelist entrepreneurs of Silicon Valley to try to piece together how the modern self was formed and how we can survive it.

This is a phenomenal exploration of Western culture, and through Storr’s blend of interviews, personal reflection and analysis the book reads like a Louis Theroux documentary. There are times when chapters feel verbose, and Storr spends a third of the book discussing the Esalen Institute and the libertarian movement’s impact on the social and political direction of the 20th century. Yet it’s a book that has stuck with me a month after reading and opened my mind to the extent to which our motivations and opinions of ourselves are products of a deeply individualistic culture of perfectionism. I can’t recommend it enough.

 


Why Young People Are Ditching Social Media For Good

The featured image is a work by the incredibly talented Steve Cutts.

Kids growing up in 2017’s digital dystopia are sold one of the biggest lies ever told: that social media is an innocuous online tool to “connect” with friends.

In reality, social media has destroyed meaningful connection and replaced it with artificial online packs of “like-minded individuals” who all hold the same beliefs and subscribe to the same dogmas. This meticulously designed, hyper-addictive technology’s only mantra is to keep the audience hooked for as many hours of the day as possible, monetize their attention by collecting their data, and sell it to advertisers.

Facebook says it has an eye-popping 2 billion users. It is staggering to see how, globally, so much of our lives has migrated to platforms controlled and designed by a few Silicon Valley engineers. The exciting explosion of smartphone technology has overshadowed questions about whether tech companies should have such an invasive, intimate role in our lives. Tristan Harris, a leader in tech design ethics, explains why we should be concerned about tech changing our behaviour:

“Companies say, we’re just getting better at giving people what they want. But the average person checks their phone 150 times a day. Is each one a conscious choice? No. Companies are getting better at getting people to make the choices they want them to make.”

Young people are particularly vulnerable. Being introduced at such a young age to this addictive, disconnected lifestyle has created drug-like dependencies among teens and desensitized many to sex and violence, as they are exposed daily to porn and brutality online. This constant stimulation and competition for our attention also leaves many miserable, anxious and eventually feeling they have lost valuable time and years to aimlessly scrolling through newsfeeds and trying to convince others that they live a perfect life.

Is there hope?

Yet this business model of enslaving us to our phones is unsustainable. History shows that when advertisers and attention grabbers go too far, the people fight back. Never more so than in 1860s Paris, when an aspiring young artist named Jules Chéret turned the “billboard” into a technological innovation in commercial advertising. By creating seven-foot-tall, brightly coloured posters displaying eye-catching imagery such as half-dressed women, Chéret quickly became famous as a pioneer of art and commerce, and others soon began imitating his work.

Eventually, though, it became all too much. The constant attention grabbing of commercial advertising stripped Paris of its architectural beauty and engendered a social revolt. Parisians declared war on “the ugly poster” and began lobbying the city government to limit where advertisements could be placed, ban billboards from train tracks and heavily tax them in other public spaces.

The government took aggressive action, and many of the advertisement restrictions remain in place today, which is why Paris in many parts remains a beautiful city, unperturbed by the constant assault of advertising.

Will a similar revolt occur today in relation to social media? It’s difficult to say. We have become so individualized that I sometimes question whether young people still have the drive to organize and mobilize en masse, or whether our conception of protest amounts to signing an online petition and joining a protest Facebook page.

But I do have hope. The first sparks of rebellion are already beginning to fly. Figures released in October show that 57% of schoolchildren in the UK would not mind if social media never existed, and an even larger 71% say they have taken “digital detoxes” to escape its constant stimulation, distraction and pressures.

The BBC also reported that pupils in Kent have set up a three-day “phone-fast”. Sixth former Isobel Webster described it:

“There’s a feeling that you have to go on Instagram, or whatever [site], to see what everyone’s doing – sometimes everyone’s talking about something and you feel like you have to look at it too”.

One Year 10 pupil, Pandora Mann, 14, said she was a bit annoyed at the phone-fast initially, but soon realised “we don’t enjoy our phones as much as we think we do”.

“In terms of the way we view ourselves and our lives negatively,” she explained, “I think people put what they see as their best image forward – it’s not always the real image.”

Isobel said that the ban stopped her from sitting in her room scrolling through social media and encouraged her to spend her work breaks chatting to friends.

She said it reminded her “what it was like before” – when as a Year 7 (aged 12) she would spend more time socialising in person.

Kids today are showing that they are not just the most tech-savvy among us; they’re also the most tech-sensitive. Counterculture movements are cropping up and tapping into the undercurrent of anger and disillusionment experienced by many.

Folk Rebellion is one interesting example: a movement dedicated to reconnecting people with reality, creating a more balanced relationship with tech and ‘living in the present with actual things.’ Young people are gravitating to these movements as they begin to rediscover the pleasures of physical books, reconnect with the physical world and relearn what it means to live a fulfilling life.

The resistance is rising.