[Article.Ai] Machine Learning: Tabula Rasa vs Built-In Priors

God Bennett
Aug 30, 2018


How much tabula rasa? How much built-in structure?
  1. I think brains are a combination of tabula-rasa-like states together with non-trivial built-in priors, such as convolution-like operations.
  2. This means that brains possess built-in structures, and at the same time processes that start out blank with respect to certain information; it would be quite sub-optimal for that information to be hard-wired into the brain as priors, so it is acquired through learning instead.

Imagine models or brains that already possessed all the weights needed to express solutions to a problem. The brain is likely not like that: it probably has priors like convolutions, but also tabula-rasa-like states that start out absent of information, information which is then gained via weight updates over a range of learning exercises.
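A minimal sketch of that second part, the tabula-rasa side, in Python/NumPy (the toy task and all names here are my own illustration, not from the article): the weights begin as random values that carry no information about the task, and information only enters through repeated weight updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Tabula rasa" weights: randomly initialized, blank with respect to the task.
W = rng.normal(scale=0.1, size=(1, 3))
b = np.zeros(1)

# A toy regression task the model knows nothing about at initialization.
X = rng.normal(size=(100, 3))
true_W = np.array([[2.0, -1.0, 0.5]])
y = X @ true_W.T

# Information enters only through weight updates (plain gradient descent).
lr = 0.1
for step in range(200):
    pred = X @ W.T + b
    err = pred - y
    grad_W = err.T @ X / len(X)
    grad_b = err.mean()
    W -= lr * grad_W
    b -= lr * grad_b

print(W)  # approaches true_W purely through learning, not built-in knowledge
```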

  • So when DeepMind described their AlphaGo Zero model as learning from a tabula-rasa state, we probably shouldn't take that entirely at face value. They meant tabula rasa in the sense described above: zero human game data was used to train AlphaGo Zero. It was obviously not tabula rasa with respect to priors such as the convolutions used in the model, W*x + b, where * signifies a convolution operation (a small sketch follows below).
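To make the convolution-as-prior point concrete, here is a short PyTorch sketch (my own illustration, not DeepMind's code): the convolution operation in W*x + b is wired into the architecture before any training happens, while the weights W and bias b start out randomly initialized, i.e. tabula rasa with respect to Go or any other task.

```python
import torch
import torch.nn as nn

# The convolution operation itself is a built-in prior: locality and weight
# sharing are fixed by the architecture, not learned from data.
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)

# The weights W and bias b, by contrast, start "tabula rasa": random values
# that carry no task information until training updates them.
print(conv.weight.shape)  # torch.Size([8, 1, 3, 3]) -- randomly initialized
print(conv.bias.shape)    # torch.Size([8])

x = torch.randn(1, 1, 19, 19)   # e.g. a 19x19 board-like input
out = conv(x)                    # computes W * x + b, with * a convolution
print(out.shape)                 # torch.Size([1, 8, 19, 19])
```

So both statements hold at once: the learned content arrives tabula rasa, while the convolutional structure it flows through is a prior chosen in advance.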

Author:

I am an atheist, casual bodybuilder, and software engineer.
