The Holy Grail of AI (Text)

———————————————————————

Artificial Intelligence: computer software that performs tasks that would normally require the skills of a human being. Examples include world champion Go players, self-driving cars, a Jeopardy! champion, and the Google search engine.

Humans came to dominate the Earth thanks to relatively minor changes in brain size and neurological organization since our last common ape ancestor. Digital computers have even greater cognitive potential, and could reach levels of intelligence we can’t currently imagine.

Before AI pioneer Geoffrey Hinton started making contributions to artificial neural networks, he was inspired by how the human brain works. Hinton began his academic journey studying brain anatomy and physiology, then psychology, and finally computer science. He was fascinated by how the human brain computes and learns, and sought to build a similar brain in silicon. His progress in neural networks led to many of the narrow AIs we see today, some of which drive cars and land rockets.

After Hinton made strides in AI during the late 1980s, progress hit a wall, as it had at other points in AI history. That was, until the mid-2000s, when neural networks were given something that made the algorithms thrive: lots and lots of data. The Internet and the spread of digital devices now give AIs and businesses enormous amounts of information to learn from. For example, companies like YouTube use AI that tracks which videos you watch and recommends new ones.
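
To give a rough sense of the idea (this is a toy sketch, not how YouTube actually works; the watch histories and the recommend function below are invented for illustration), a simple item-to-item recommender can be built just by counting which videos tend to be watched by the same people:

  from collections import Counter
  from itertools import combinations

  # Invented watch histories, one list of video titles per viewer.
  histories = [
      ["go_tutorial", "alphago_doc", "chess_basics"],
      ["alphago_doc", "deep_learning_intro", "go_tutorial"],
      ["chess_basics", "deep_learning_intro"],
  ]

  # Count how often each pair of videos appears in the same viewer's history.
  co_watch = Counter()
  for history in histories:
      for a, b in combinations(sorted(set(history)), 2):
          co_watch[(a, b)] += 1

  def recommend(video, k=2):
      """Return the k videos most often co-watched with the given one."""
      scores = Counter()
      for (a, b), count in co_watch.items():
          if a == video:
              scores[b] += count
          elif b == video:
              scores[a] += count
      return [title for title, _ in scores.most_common(k)]

  print(recommend("alphago_doc"))  # e.g. ['go_tutorial', 'chess_basics']

Real recommendation systems learn from billions of such interactions with deep neural networks, but the core idea of learning from user data is the same.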

Using Reinforcement Learning (RL), neural networks can master particular domains. RL is a branch of machine learning inspired by behaviorist psychology, in which software agents take actions in an environment in order to maximize a cumulative reward defined by a utility function.
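
As a concrete (if toy) illustration of that loop, here is a minimal tabular Q-learning sketch. The corridor environment, reward, and hyperparameters are all invented for the example; real systems like AlphaGo use large neural networks and self-play rather than a small lookup table:

  import random

  # Toy environment: the agent walks a 5-cell corridor and is rewarded
  # only for reaching the rightmost cell.
  N_STATES = 5              # positions 0..4; position 4 is the goal
  ACTIONS = [-1, +1]        # step left or step right
  ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

  # Q-table: estimated future reward for each (state, action) pair.
  Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

  for _ in range(500):      # episodes
      state = 0
      while state != N_STATES - 1:
          # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
          if random.random() < EPSILON:
              action = random.choice(ACTIONS)
          else:
              best = max(Q[(state, a)] for a in ACTIONS)
              action = random.choice([a for a in ACTIONS if Q[(state, a)] == best])

          next_state = min(max(state + action, 0), N_STATES - 1)
          reward = 1.0 if next_state == N_STATES - 1 else 0.0

          # Q-learning update: nudge the estimate toward reward + discounted future value.
          best_next = max(Q[(next_state, a)] for a in ACTIONS)
          Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
          state = next_state

  # After training, the agent prefers moving right (toward the goal) everywhere.
  print([round(Q[(s, +1)] - Q[(s, -1)], 2) for s in range(N_STATES - 1)])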

In 2016, DeepMind’s AlphaGo beat all-time Go legend Lee Sedol 4-1, a feat many AI experts thought was at least 10 years away. If you’re interested in learning more about this feat, I recommend the film AlphaGo (currently on Netflix). The main goal of Go is to surround more territory than your opponent with your stones. The game is known to test intuition and creativity. There are more possible board configurations in Go than atoms in the known Universe. This means that, unlike IBM’s Deep Blue, which beat world chess champion Garry Kasparov in 1997 largely through massive memory and compute power, AlphaGo displayed super-human intuition and decision making never before seen in a computer.

A Go board

AlphaGo performed certain moves that the best Go players had never seen before, showing advanced creativity and intuition (Tegmark 87). After AlphaGo defeated Sedol, DeepMind improved its design, creating AlphaGo Zero, which taught itself how to play Go. This version of the software reached the level at which AlphaGo had defeated Sedol in just three days; it had originally taken DeepMind’s engineers months to get AlphaGo to that threshold. Zero was given no human game data at all; it learned purely by playing against itself. This highlights another reason AI has boomed this decade: better hardware, which allowed Zero to play millions of games against different versions of itself within a few days. Zero then defeated the original AlphaGo 100-0.

Ray Kurzweil, author of the book How to Create a Mind and inventor of the flatbed image scanner, believes superintelligence will be achieved by 2029 (though he thinks it will arrive in the form of cyborgs). In his books he describes what he calls “the law of accelerating returns,” which basically says that technological progress, particularly in AI, advances at an exponential rate. He stated, “we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate).”

Likewise, if you asked most AI researchers today about recent AI progress, they would tell you that even they are surprised by how quickly it has advanced. Accelerating progress makes future breakthroughs (such as superintelligence) hard to predict.

In a survey that asked AI experts when there would be a 50% probability of human-level AI, the median estimates clustered around the years 2040 to 2050 (Bostrom 23). Yet, as with Go, these experts may well be underestimating how soon future breakthroughs in AI will occur.

Superintelligence seems, if not inevitable, then likely within many of our lifetimes. We are approaching a crucial threshold in human history, past which digital superintelligence either will or will not share our values.

The holy grail of AI is to build an artificial general intelligence (AGI): one that can do all the things humans can do. Rather than being able to perform only one task like today’s narrow AIs, an AGI could operate across every domain humans can, and probably many more.

Although some disagree, Elon Musk and others believe that the threshold of human-level intelligence is a mirage for AI: once an AI has mastered general intelligence, it would become superintelligent shortly after. Since signals in a computer pass through logic gates at nearly the speed of light, a lot can happen in what feels to us like very little time. Computer circuits operate roughly 1,000,000 times faster than the electrochemical signals in the human brain; if an AGI ran at human-level intelligence but at that speed, in one week it could perform roughly 20,000 years’ worth of human-level intellectual work.
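
The arithmetic behind that figure is simple; note that the million-fold speedup is this article’s rough premise, not a measured constant:

  # Back-of-the-envelope check of the "20,000 years per week" figure.
  speedup = 1_000_000                 # assumed silicon-to-neuron speed ratio
  weeks_per_year = 365.25 / 7         # about 52.18
  subjective_years = speedup / weeks_per_year
  print(round(subjective_years))      # ~19165, i.e. roughly 20,000 years of thinking per week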

AI safety research has exploded in the last five years because researchers like Nick Bostrom and Eliezer Yudkowsky have shown the potential flaws in AI systems whose goals are not completely aligned with ours. Since a superintelligence would be vastly smarter than any human, even a slight misalignment between its goals and ours could lead to a bad outcome.

In the movie Do You Trust This Computer?, one AI researcher explained how one of his AIs did more than he expected. He had programmed the system to detect specific physical objects. To his amazement, the AI developed a neuron in its neural network that detected human faces. The fascinating part is that he never programmed it to do this.

It is possible that something similar (an AI doing more than expected) could happen with the first superintelligence. A certain threshold could be crossed, and an AI could become superintelligent, potentially catching its programmers off guard. A superintelligence created by accident probably wouldn’t have the axioms needed to guarantee it is safe.

AI myths
A myth/fact table from the book Life 3.0. It’s a great table, except where it says superintelligence is “at least decades away.” That is possible, but it is not definitive; it could happen this decade. It’s hard to make predictions when progress follows a double-exponential curve. Tegmark was most likely trying to reach a wider audience.

Another historical trend is that governments tend to be reactive with regulation rather than proactive. When the automobile appeared, it took time for governments to make seat belts mandatory; they acted only after people had died in crashes from not wearing them. With superintelligent AI, as Elon Musk put it at a meeting of the National Governors Association in 2017, it is crucial that governments be proactive about regulation, because if they are merely reactive it might be too late.

If future AI is going to dramatically change our lives, don’t we have the right to make sure it is safe?

If humans create an intelligence smarter than themselves without explicitly putting axioms in place to ensure it is benign, the outcome is uncertain; AI experts are divided on whether the default outcome would be good or bad. The fact that certain people (like Google co-founder Larry Page) think superintelligent AI would benefit humanity no matter what is irrelevant, because that is wishful thinking. We can’t roll the dice with humanity.

How do we make superintelligence safe? Luckily, there are many great people around the world working on this right now.

Making an AI that shares human values is difficult. Consider a famous example:

The Three Laws of Robotics (featured in the movie I, Robot):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

These laws were formulated by science fiction legend Isaac Asimov. They may seem benign, but his books, and the film I, Robot, show glaring flaws in them that lead to undesirable scenarios. A growing number of AI researchers are looking for a robust way to instill human values into AIs, values that would stay with an AI even if it became superintelligent.

Physicist and Future of Life Institute founder Max Tegmark has stated, “…it’s tricky to fully codify even widely accepted ethical principles into a form applicable to future AI, and this problem deserves serious discussion and research as AI keeps progressing” (Tegmark 274). The control problem and the value-loading problem, both of which must be solved to create a superintelligent AI safely, remain unsolved.

There are many examples of how an extreme optimization process run by an AI could lead to outcomes its programmers would never anticipate. A popular one is the paper clip maximizer scenario: give a superintelligent AI the goal of maximizing the production of paper clips, and one possible outcome is that the AI turns all of the atoms it can reach in the Universe into paper clips, which would obviously conflict with human values. As computer scientist Eliezer Yudkowsky put it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
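
A toy sketch makes the underlying point concrete. The “world,” the numbers, and the available actions below are invented; the only thing being illustrated is that a single-minded objective ignores everything it was not told to care about:

  # A miniature "maximize paper clips" optimizer.
  world = {"paper_clips": 0, "everything_else": 1000}

  def reward(state):
      return state["paper_clips"]      # nothing else counts toward the objective

  def successor_states(state):
      # Two possible moves: convert a little spare material into one clip,
      # or convert absolutely everything into clips.
      modest = dict(state, paper_clips=state["paper_clips"] + 1)
      extreme = {"paper_clips": state["paper_clips"] + state["everything_else"],
                 "everything_else": 0}
      return [modest, extreme]

  # A greedy optimizer always picks the highest-reward successor state.
  world = max(successor_states(world), key=reward)
  print(world)   # {'paper_clips': 1000, 'everything_else': 0}

The optimizer converts “everything else” into clips on its very first step, because no part of the objective tells it not to.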

AI legend Stuart Russell has been a pioneer in the field for decades, co-writing Artificial Intelligence: A Modern Approach, the textbook most universities use to teach AI. Russell has proposed an alternative approach to making safe AGI. In a recent TED talk, he explained what he calls “inverse reinforcement learning.” In short, he thinks we should give an AGI the following axioms:

  1. The only objective of the AI is to maximize the realization of human values.
  2. The AI is initially uncertain about what these values are.
  3. Human behavior provides information about human values.

The second principle is the one he emphasizes in his TED talk: this uncertainty is crucial. The AI would be constantly refining its understanding of human values, rather than believing it has already fully understood them.
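
Here is a rough sketch of that second principle, with the hypotheses, observations, and probabilities all invented for illustration: the machine holds a probability distribution over what the human might value and updates it from observed human choices, rather than committing to a fixed objective.

  # Toy value learning: start uncertain about what the human values and
  # update that belief from observed behavior (Bayes' rule).
  hypotheses = {"values_coffee": 0.5, "values_tea": 0.5}   # prior belief

  # How likely each observed human choice is under each hypothesis.
  likelihood = {
      ("picks_coffee", "values_coffee"): 0.9,
      ("picks_coffee", "values_tea"): 0.1,
      ("picks_tea", "values_coffee"): 0.1,
      ("picks_tea", "values_tea"): 0.9,
  }

  def update(belief, observation):
      """Reweight each hypothesis by how well it explains the observation."""
      unnormalized = {h: p * likelihood[(observation, h)] for h, p in belief.items()}
      total = sum(unnormalized.values())
      return {h: p / total for h, p in unnormalized.items()}

  # Watching the human pick coffee twice shifts belief toward "values_coffee",
  # but never reaches total certainty, so the machine stays correctable.
  for observation in ["picks_coffee", "picks_coffee"]:
      hypotheses = update(hypotheses, observation)
      print(hypotheses)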

 

The Sunway TaihuLight, the world’s fastest supercomputer in 2016. Its raw computational power arguably exceeds that of the human brain (Tegmark 132). Advances in computer hardware are making AI progress much faster.

 

Even though we can’t assume a good outcome will simply be handed to us, I have a lot of hope for the future of AI. In the past few years, the AI community has taken a serious interest in AI safety. The Future of Life Institute has organized AI safety conferences in Puerto Rico and Asilomar, with attendees agreeing on more and more each year. There is overwhelming consensus among AI researchers that we should steer away from creating undirected intelligence and instead build beneficial intelligence.

Max Tegmark ended his book Life 3.0 on a very optimistic note. He mentioned how, in 2014, he was crying on a street in London because he felt a bad outcome for humanity was likely. Now, after organizing AI safety conferences and seeing the remarkable agreement on key safety issues, he is excited about the future. He implores us all to practice “mindful optimism” (Tegmark 332): to believe humanity will have a great future, as long as we successfully prepare for the potential risks.

The concern is that someone will create a superintelligence before the value-loading problem is solved, cutting corners on safety either deliberately or by accident. We will likely get only one chance to make an AGI; it must be right the first time. We have to support AI safety research and keep this conversation alive.

If superintelligence works out, I can imagine future generations looking back at this century and realizing that we succeeded in amplifying our intelligence with machines rather than being replaced by them. This is a crucial point in human history, and we have the potential for millennia of happiness and prosperity we can’t currently imagine, if we don’t fuck this up.

Regulation of AI would give AI safety researchers more time to solve the key challenges in AI alignment (safety) research, thus increasing the chances of a bright future.

“All of us — not only scientists, industrialists and generals — should ask ourselves what we can do now to improve the chances of reaping the benefits of future AI and avoiding the risks. This is the most important conversation of our time…”

– Professor Stephen Hawking, 2017

Facebook page: https://www.facebook.com/groups/1882811435075639/?ref=bookmarks

#RegulateAI #AISafety #AIAlignment #BrightFuture

Lee Sedol perplexed by move 37 from AlphaGo.

 

Bibliography:

Life 3.0 by Max Tegmark

Superintelligence by Nick Bostrom

DeepMind: https://deepmind.com/

Ray Kurzweil, “The Law of Accelerating Returns”: http://www.kurzweilai.net/the-law-of-accelerating-returns

AlphaGo (documentary film): https://www.alphagomovie.com/

Paperclip maximizer (LessWrong wiki): https://wiki.lesswrong.com/wiki/Paperclip_maximizer

Reinforcement learning (Wikipedia): https://en.wikipedia.org/wiki/Reinforcement_learning

Wired on AlphaGo’s win over a top Go player: https://www.wired.com/2016/01/in-a-huge-breakthrough-googles-ai-beats-a-top-player-at-the-game-of-go/
