[Article.Ai] Is Deep Learning fundamentally flawed and hitting a wall? Was Gary Marcus correct in pointing out Deep Learning’s flaws?
(1) Reference — Gary/Yann Talk: https://www.youtube.com/watch?v=vdWPQ6iAkT4
(2) Beyond his critique paper, in the talk above Gary reports that roughly 90% of NIPS papers do not seriously consider causal priors or innate structure. In other words, he correctly points out that some papers do engage deeply with the innate structure he describes, though very few.
(3) Gary's critique paper is a helpful contribution, because it is a reminder that it is reasonable to consider more innate structure (especially if his report is correct that roughly 90% of NIPS papers do not consider causal priors or innate structure beyond convolutions).
(4.a) Yann brings up a crucial point: there was no strong motivation to explore innate structure beyond the convolutional configuration, because:
(note that Yann did not say exactly the following; I am paraphrasing and adding context from my own knowledge)
(4.b) Researchers were mostly familiar with the convolutional form (y = W * x + b, where * denotes convolution, rather than the fully connected y = W ⋅ x + b) as an example of an innate, biologically inspired constraint, so that is where most of the work went up until now (see the sketch after point 4.c below).
(4.c) Working on convolution was already challenging enough, and highly successful convolutional detectors only emerged in the last 4 years.
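To make (4.b) concrete, here is a minimal NumPy sketch (the input length and kernel size are made up for illustration) showing that the convolution w * x + b is just the dense map W ⋅ x + b with an extra constraint: the matrix must be banded (locality) and share weights across positions (translation equivariance). That constraint is exactly the kind of innate prior under discussion, and it dramatically reduces the number of free parameters.

```python
# Minimal sketch (NumPy, illustrative sizes): convolution as a constrained dense map.
import numpy as np

n = 8          # input length (chosen for illustration)
k = 3          # convolution kernel size (chosen for illustration)

rng = np.random.default_rng(0)
x = rng.standard_normal(n)
b = 0.1

# Dense ("fully connected") layer: every output sees every input.
W_dense = rng.standard_normal((n, n))
y_dense = W_dense @ x + b                        # y = W . x + b

# Convolutional layer: one small kernel slid across the input ("same" padding).
w_kernel = rng.standard_normal(k)
y_conv = np.convolve(x, w_kernel, mode="same") + b   # y = w * x + b

# The same convolution written as a matrix multiply: the matrix is banded
# and its rows reuse the same 3 kernel weights -- identical math, added prior.
W_conv = np.zeros((n, n))
for i in range(n):
    for j in range(k):
        col = i + j - k // 2                     # "same" padding offset
        if 0 <= col < n:
            W_conv[i, col] = w_kernel[::-1][j]   # np.convolve flips the kernel
y_conv_as_matrix = W_conv @ x + b

print(np.allclose(y_conv, y_conv_as_matrix))     # True: same outputs
print("free parameters, dense:", W_dense.size)   # n * n = 64
print("free parameters, conv :", w_kernel.size)  # k = 3
```

The point of the sketch is that the convolutional prior is not a different kind of model, only a restriction on the weight matrix; whether researchers should hand-build many more such restrictions is exactly the question debated in (5) below.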
(5) So the big takeaway may be that it is time for many more researchers to start considering additional biological constraints, especially now that some visual constraints (the convolution equations) have already proven so successful.
However, as Yann points out, it is important to minimize the amount of priors or innate structure manually built into learning models, so as not to hurt overall learning performance.
This is challenging, but following cognitive science should help researchers find the right priors or innate structure: ones that reasonably support strong performance rather than stifle the learning model!
Author:
I am a casual bodybuilder and software engineer.