[Article.Ai] What is the planet’s strongest Artificial Intelligence?

God Bennett
3 min read · Nov 1, 2017

--

The response below was adapted from: https://www.quora.com/What-is-the-planets-strongest-artificial-intelligence/answer/Jordan-Bennett-9

The planet’s strongest AI, and also the most general, can be observed in the scalable algorithms that comprise AlphaGo Zero.

This is probably not surprising, because:

  1. Biological brains are fantastic examples of general intelligence.
  2. Algorithms tend to become more general, and also stronger, the more closely we imitate biology in our models. This is why it is important to consider biological brain constraints while attempting to build more and more general algorithms.
  3. DeepMind was (and still is) adapting to biological brain constraints, and so they continuously built more and more general models, leading up to AlphaGo Zero. (Examples of prior work: the Atari Q player, which is hippocampus-based, and the Early Visual Concept Learner, which is ventral-visual-stream-based.)
  4. AlphaGo Zero improved on AlphaGo by condensing the previously separate policy and value networks into one module (see the network sketch after this list). Having these models as a single block is reasonably akin to the biological brain, because the brain is observable as one general block. As an example, a man missing roughly 90% of his brain matter was still pretty much normal/healthy. If modules in the brain were isolated, as the networks were in AlphaGo Zero’s predecessor AlphaGo, that man would not have been able to live on normally with a majority of his brain gone. This indicates that one region is flexible enough to take over the function of another; in other words, there is some uniform distribution of computation, as seen in the math of manifolds, or in mean field theory/manifold learning networks, such as the model in this paper by Poole et al. or DeepMind’s Early Visual Concept Learner.
  5. AlphaGo Zero benefited from attention at inference/non-self-play and training, where it focused on roughly the region of activity on the board. In this scenario there is a scalar estimate v of the current player winning from position s, starting from completely random weights θ_0, obtained by searching for optimal actions per time-step t from sampled probabilities π_t on the prior instance of the network f_{θ_{i−1}}. (Notably, attention is seen in biological general intelligence.)
  6. AlphaGo Zero can be seen as encoding or modeling a measure of human intuition in terms of a search policy π = α_{θ_{i−1}}(s_t) parameterized by the neural net θ, i.e. a combination of Monte Carlo tree search and a residual convolutional neural network (a training-objective sketch also follows this list). Essentially, “intuition” can simply be observed as algorithmic priors or biases (as Yoshua Bengio likes to discuss) that enable optimal computation on input spaces. This enabled AlphaGo Zero to reduce the massive search space that is Go. The ability to reduce large problem spaces is, again, inherent in biological general intelligence. (See why games like Go or Atari games are important for developing more and more general algorithms here.)
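
To make point 4 concrete, below is a minimal sketch of a combined policy/value network: one shared residual trunk feeding two heads, in the spirit of AlphaGo Zero’s f_θ. It is written in PyTorch; the names (ResidualBlock, PolicyValueNet) and the layer counts and channel widths are illustrative assumptions, not the paper’s exact architecture (which uses 20 or 40 residual blocks of 256 channels each).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """One conv-bn-relu-conv-bn block with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # skip connection: the residual idea from point 6

class PolicyValueNet(nn.Module):
    """A single trunk outputting both a move distribution p and a value v,
    i.e. point 4's one-module design (sizes here are toy, not the paper's)."""
    def __init__(self, board_size=19, channels=64, blocks=4):
        super().__init__()
        n = board_size * board_size
        self.stem = nn.Sequential(
            nn.Conv2d(17, channels, 3, padding=1),  # 17 input planes, as in the paper
            nn.BatchNorm2d(channels), nn.ReLU())
        self.trunk = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        self.policy_head = nn.Sequential(            # p: logits over moves + pass
            nn.Conv2d(channels, 2, 1), nn.Flatten(),
            nn.Linear(2 * n, n + 1))
        self.value_head = nn.Sequential(             # v: win estimate in [-1, 1]
            nn.Conv2d(channels, 1, 1), nn.Flatten(),
            nn.Linear(n, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh())

    def forward(self, x):
        h = self.trunk(self.stem(x))
        return self.policy_head(h), self.value_head(h)
```

Because both heads read the same trunk, features learned for move prediction are shared with position evaluation, which is the “single block” idea of point 4.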

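Points 5 and 6 describe how such a network is trained. Below is a hedged sketch of the AlphaGo Zero training objective, l = (z − v)² − πᵀ log p + c‖θ‖², where π comes from MCTS visit counts during self-play and z is the final game outcome. The tree search itself is omitted here; pi and z are stand-in tensors, and alphazero_loss is a name invented for illustration (it reuses the PolicyValueNet sketch above).

```python
import torch
import torch.nn.functional as F

def alphazero_loss(p_logits, v, pi, z, model, c=1e-4):
    """l = (z - v)^2 - pi^T log p + c * ||theta||^2."""
    value_loss = F.mse_loss(v.squeeze(-1), z)                           # (z - v)^2
    policy_loss = -(pi * F.log_softmax(p_logits, dim=1)).sum(1).mean()  # -pi^T log p
    l2 = c * sum((w ** 2).sum() for w in model.parameters())            # c * ||theta||^2
    return value_loss + policy_loss + l2

net = PolicyValueNet()                  # from the sketch above
x = torch.randn(8, 17, 19, 19)          # a batch of 8 fake board encodings
p_logits, v = net(x)
pi = torch.full((8, 362), 1.0 / 362)    # stand-in for MCTS visit-count targets
z = torch.ones(8)                       # stand-in game outcomes (+1 = win)
print(alphazero_loss(p_logits, v, pi, z, net).item())
```

Minimizing this loss pushes the policy head toward the MCTS-improved probabilities π and the value head toward the actual outcome z, which is how the single network bootstraps itself from the random initial weights θ_0 mentioned in point 5.
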
The big takeaway is that DeepMind is being very sensible when they stress that considering biological brain constraints is very important.

Author:

I am a casual bodybuilder and software engineer.
