Do the benefits of artificial intelligence outweigh the risks?

The discussion around Artificial Intelligence (AI) can sound a lot like Brexit. It’s coming but we don’t know when. It could destroy jobs but it could create more. There are even questions about sovereignty, democracy and taking back control.

Yet even the prospect of a post-Brexit Britain led by Boris “fuck business” Johnson doesn’t conjure the same level of collective anxiety as humanity’s precarious future in the face of super-intelligent AI. Opinions are divided as to whether this technological revolution will lead us on a new path to prosperity or a dark road to human obsolescence. One thing is clear: we are about to embark on an age of rapid change the like of which has never been experienced before in human history.

From cancer to climate change, the promise of AI is to uncover solutions to our overwhelmingly complex problems. In healthcare, its use is already speeding up disease diagnoses, improving accuracy, reducing costs and freeing up the valuable time of doctors.

In mobility, the age of autonomous vehicles is upon us. Despite high-profile fatal crashes involving Uber and Tesla vehicles, companies and investors are confident that self-driving cars will begin replacing human-operated vehicles as early as 2020. By removing human error from the road, AI evangelists claim, the world’s more than one million annual road deaths will be dramatically reduced, while city scourges like congestion and air pollution are simultaneously eliminated.

AI is also transforming energy. Google’s DeepMind is in talks with the U.K. National Grid to cut the country’s energy bill by 10% using predictive machine learning to analyse demand patterns and maximise the use of renewables in the system.
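DeepMind has not published the workings of such a system, but the core idea of learning demand patterns from history can be sketched in miniature. The toy Python forecaster below is an illustrative assumption, not anything the National Grid actually runs: it predicts each hour’s demand as the average of that hour over previous days, then flags the predicted peak hours where shifting renewable supply would do the most good.

```python
def forecast_next_day(hourly_history):
    """hourly_history: list of past days, each a list of 24 hourly demand values (e.g. in GW)."""
    return [
        sum(day[h] for day in hourly_history) / len(hourly_history)
        for h in range(24)
    ]

def peak_hours(forecast, n=3):
    """Return the n hours with the highest forecast demand (ties keep earlier hours)."""
    return sorted(range(24), key=lambda h: forecast[h], reverse=True)[:n]

# Three days of synthetic demand: a flat base with an evening peak from 17:00 to 20:00.
history = [
    [30 + (10 if 17 <= h <= 20 else 0) + d for h in range(24)]
    for d in range(3)
]
forecast = forecast_next_day(history)
print(peak_hours(forecast))
```

A real system would fold in weather forecasts, grid constraints and far richer models, but the shape of the problem is the same: predict the demand curve, then schedule supply against it.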

In the coming decades autonomous Ubers, AI doctors and smart energy systems could radically improve our quality of life, free us from monotonous tasks and speed up our access to vital services.

But haven’t we heard this story of technological liberation before? From Facebook to the gig economy, we were sold a story of short-term empowerment that neglected the potential for long-term exploitation.

In 2011 many were claiming that Twitter and Facebook had helped foment the Arab Spring and were eagerly applauding a new era of non-hierarchical connectivity that would empower ordinary citizens as never before. But fast forward seven years and those dreams seem to have morphed into a dystopian nightmare.

It’s been well documented that the deployment of powerful AI algorithms has had devastating and far-reaching consequences for democratic politics. Personalisation and the collection of data are employed not to enhance user experience but to addict us and to profit from our manipulation by third parties.

Mustafa Suleyman, co-founder of DeepMind, has warned that, just like other industries, AI suffers from a dangerous asymmetry between market-based incentives and wider societal goals. The standard measures of business achievement, from fundraising valuations to active users, do not capture the social responsibility that comes with trying to change the world for the better.

One eerie example is Google’s recently launched AI assistant, promoted under the marketing slogan “Make Google do it”. The AI will now do tasks for you such as reading, planning, remembering and typing. After already ceding concentration, focus and emotional control to algorithms, it seems the next step is for us to relinquish more fundamental cognitive skills.

This follows an increasing trend of companies nudging us to give up our personal autonomy and trust algorithms over our own intuition. The issue has moved beyond privacy invasion to an erosion of trust in our own minds. From dating apps like Tinder to Google’s new assistant, the underlying message is always that our brains are too slow, too biased, too unintelligent. If we want to be successful in our love, work or social lives we need to upgrade our outdated biological feelings to modern, digital algorithms.

Yet once we begin to trust these digital systems to make our life choices we will become dependent upon them. The recent Facebook-Cambridge Analytica scandal, in which data was misused to influence the U.S. election and the Brexit referendum, gives us a glimpse of the consequences of unleashing new and powerful technology before it has been publicly, legally and ethically understood.

We are still in the dark as to how powerful these technologies are at influencing our behaviour. Facebook has publicly stated that it has the power to increase voter turnout. A logical corollary is that Facebook could also decide to suppress voter turnout. It is scandalous just how beholden we are to a powerful private company, with no safeguards to protect democracy from manipulative technology before it is rolled out on the market.

A recent poll from the RSA reveals just how oblivious the public is to the increasing use of AI in society. It found that only 32% of people are aware that AI is being used in decision-making contexts, dropping to 9% awareness of automated decision making in the criminal justice system. Without public knowledge there is no public debate, and without public debate there is no demand for public representatives to ensure ethical conduct and accountability.

As more powerful AI is rolled out across the world it is imperative that AI safety and ethics are elevated to the forefront of political discourse. If AI’s development and discussion continue to take place in the shadows of Silicon Valley and Shenzhen, and the public feel they are losing control over their society, then we can expect, in a similar vein to Brexit and Trump, a political backlash against the technological “elites”.

Jobs

The most immediate risk of AI sparking political upheaval is automation replacing the human workforce. As capital begins to outstrip labour it will not only displace workers but exacerbate inequality between those who own the algorithms and those who don’t. Optimists argue that as AI outperforms humans in cognitive tasks, new creative jobs will emerge focused on skills machines can’t yet replicate, such as empathy.

Yet this will have to be a quick transformation in the job market. A recent report from McKinsey estimates that up to 375 million workers around the world may need to switch jobs by 2030, 100 million of them in China alone. It is surely impractical and wishful thinking to suggest that factory workers in China or the 3.5 million truck drivers in the United States can simply re-skill and retrain as machine learning specialists or software engineers.

Even if they do, there is no guarantee automation will not overtake them again by the time they have re-skilled. The risk for the future of work in the new AI economic paradigm is not so much creating new jobs as creating new jobs in which humans can outperform machines. If new jobs don’t proliferate, and the utopian infrastructure of universal basic income, job retraining schemes and outlets for finding purpose in a life without work is not in place, a populist neo-Luddite revolution may well erupt to halt AI development in its tracks.

Widespread social disorder is a real risk if liberal democracy cannot address citizens’ concerns and keep pace with the speed of technological advance. In our current democratic framework, changing and updating our laws takes time, and different societal voices must be heard. In this context, by the time we have implemented a regulatory framework to safeguard society from a new application of AI, the technology could have morphed ten times over in the intervening period.

French President Emmanuel Macron has acknowledged that “this huge technological revolution is in fact a political revolution” and taken steps to carve out a vision distinct from “the opaque privatization of AI or its potentially despotic usage” in the U.S. and China respectively. France has launched a bold €1.5 billion initiative to become a leader in ethical AI research and innovation within a democratic sphere. Other democracies should follow this example and ensure democracy steers the direction of AI rather than AI steering the direction of democracy.

Long Term Risks

Yet the long-term risks of AI will transcend politics and economics. Today’s AI is known as narrow AI because it is capable of achieving only specific, narrow goals such as driving a car or playing a computer game. The long-term goal of many companies is to create artificial general intelligence (AGI). Narrow AI may outperform us in specific tasks, but AGI would be able to outperform us in nearly every cognitive task.

One of the fundamental risks of AGI is that it would have the capacity to improve itself independently, advancing along the spectrum of intelligence beyond human control. If this were to occur, and a super-intelligent AI developed a goal misaligned with our own, it could spell the end for humanity. An analogy popularised by cosmologist and AI researcher Max Tegmark is the relationship between humans and ants. Humans don’t hate ants, but if we are put in charge of a hydroelectric green energy project and there is an anthill in the region to be flooded, too bad for the ants.

Humanity’s destruction of the natural world is rooted not in malice but in indifference to harming less intelligent beings as we set out to achieve our complex goals. In a similar scenario, if an AI were to develop a goal that differed from humanity’s, we would likely end up like the ants.

In analysing the current conditions of our world, it’s clear the risks of artificial intelligence outweigh the benefits. Based on the political and corporate incentives of the twenty-first century, it is more likely that advances in AI will benefit a small class of people rather than the general population. It is more likely that the speed of automation will outpace preparations for a life without work. And it is more likely that the race to build artificial general intelligence will overtake the race to debate why we are developing the technology at all.

 


What Makes You A “Good” Person?

In this riveting video Robert Sapolsky examines moral failure, or people’s inability to resist temptation, from a neurological perspective.

Is being “good” a question of training our impulses to do the right thing? Using reason to navigate our way through life’s temptations and arriving at a morally good answer?

Maybe. However, this video points to research suggesting that, rather than reasoning “oh, I should never cheat in an exam because of X, Y and Z”, holding the mindset “you don’t cheat, full stop” is far more likely to yield success in the long term.

This is an interesting analysis of the inner processes and dialogue of the brain when presented with moral challenges, and of how we can achieve the desired result.

 

What do you think?

Why Robots Could Be Human

Author: Jonny Scott

 

Robots are taking over the world! Run! But where? I’m scared!

We hear a lot of crazy talk nowadays about robot dominance, endangerment of the human race and life being one big computer simulation. But is it all crazy? Or is there anything we can learn from it? Let’s pretend we’re scientists and try to figure it out.

 

We’ve probably all heard of the universal simulation theory. The theory goes that we’re all players in a video game. Call that game “life”. As players, we have gained consciousness. Every player operates within the boundaries of the game, but some players are more conscious than others. Just like in video games, there are random life forms that are just there to make the environment seem more realistic. The more dominant players have the most consciousness. Consciousness is control over our actions. It’s freedom, or at least the illusion of freedom. We can choose what to do, and develop ourselves to reach higher consciousness.

Players with higher consciousness reach higher levels in the game. Players with low consciousness are unproductive, overlooked characters, who are just there to make the game seem more realistic.
We are aware that by developing our control, or consciousness, we become more dominant players in the game rather than low-consciousness players who just go through the pre-programmed motions of life.
Is this an accurate explanation of the game of life? Even though it makes sense, and seems an interesting way to frame the world, we’ll never really know for certain.

But with this framework in mind we can relate it to the newest form of life, the form we have created: artificial intelligence. This new form has major potential for power and intelligence that may be greater than ours. But here’s the thing: will we get to the point where artificial intelligence is indistinguishable from human intelligence? The truth is, we have already surpassed that point.

You’re browsing the internet, and pop! Up comes an instant chat. Who’s on the other end? A sexy single in your area who’s dying to meet you? A service worker offering home repair? A phone company that promises to beat your current rate? Or an artificially intelligent chat-bot? It’s becoming harder to tell the difference.

We are creating intelligence that is becoming a mirror of human intelligence. You might have a conversation with a more articulate Amazon Echo-like robot and not even realize until you look up and notice you’re talking to a little electronic device, not a human. Let’s take this to the next step.

 

Take a more articulate Amazon Echo, put it in a human costume, and program it with algorithms that teach it to walk and move like we do, and what do you have? Let’s see, if it walks like a human, talks like a human, looks like a human, what’s our first instinct? Smash it open and see if it spills blood or wires? See if it has organs or computers? Maybe, but more likely, we’ll just call it a human.

You see, life has cycles. Evolution takes time. But look at what we have already created, and imagine it as continuing evolution. A more evolved Siri, Amazon Echo, or humanoid robot basically is a human. Or maybe, a superhuman. An evolved human race that has potential to turn us into pets or entertainment puppets. We won’t have control. But we can fix this.

We see the potential for robotic-life integration into the world. It’s awesome. Robotics make life easier. But we have to remain in control. If dominance comes down to a battle of human vs robot intelligence, we can’t lose. With humanity’s increased reliance on computerised power we have become lazy and externalised much of our own brain power. It’s not as much of a necessity anymore. With less brain power comes less self control. If control is similar to consciousness, we can’t let robots gain more consciousness than us. If that happens, it won’t be a simple “pull the plug” situation.

 

And that’s kind of scary. But it doesn’t have to be. If we continue working alongside technology, and not depending on it for survival, we will truly thrive like never before. We have to prioritise human intelligence, brain power and see technology as an aid to our human intelligence rather than a replacement. We have to stay in control.

Or maybe everyone is wrong, nothing matters and reality is a simulation controlled by a higher power. Who can know for sure? All we can do is work within the limitations of our knowledge and try to find happiness and success within it.

Jonny Scott is a young American who writes about everything from the banes of modern society to the pressing issues of current affairs. You can follow his excellent blog here:

Why Humans Can Live 100 but not 1000 Years

 

Why is it that humans can’t live for a thousand years? Why is it that mice, which are genetically very similar to humans, live for only two to three years?

Physicist Geoffrey West became deeply interested in the physics of mortality when he entered his fifties and people in his life began to perish. In this video he discusses metabolism, comparing it to a road that breaks down due to wear and tear.

He offers some interesting answers on slowing ageing, such as reducing caloric intake and potential drugs that keep the body cool. But whether human life will ever extend beyond 110 years is a question not just of possibility but of desirability. 100 years is long enough!

Why I’d Vote For Corbyn

Professor Noam Chomsky speaks to BBC Newsnight about the anger that has raged across the middle and working classes of Western democracies since the economic collapse of 2008.

Discussing the roots of that anger, the rise of far-right nationalism, and the optimistic signs of youth galvanisation around progressive policies on climate change and income inequality, Chomsky explains why he would vote for Jeremy Corbyn in the UK general election in the context of Brexit.

This is a riveting interview from one of the world’s best-known progressive public intellectuals, offering interesting insights into the global order and the future of Western democracy.

What Would Elon Musk Be Working On If He Was 22?

Inventor, entrepreneur and engineer Elon Musk discusses what he views as the most important work to be doing if he were a young person in 2017.

Musk has been at the centre of the conversation around artificial intelligence and sustainable energy over the past 15 years. He has been ranked the 21st most influential person in the world, and his company SpaceX is working on a project to eventually allow humans to colonise Mars.

What Do Humans Really Want?

Are humans just naturally lazy, comfort- and pleasure-seeking beings? Or do we really want dignity and fulfilment?

In this riveting excerpt, Professor Noam Chomsky discusses how the billions upon billions of dollars spent on advertising have been used to psychologically manipulate our ideas of what we want.

Tracing trends from the industrial revolution of the 1800s to the educated poor of the 1930s, Chomsky argues that what we really want is a sense of belonging and dignity in our work, not ever-more accumulation and consumption of products.

What are your thoughts?