Could a Neuroscientist Understand a Microprocessor?

Challenges the popular belief that neuroscience is primarily data-limited by treating a microprocessor as a model organism: applying modern neuroscience data-analysis methods to its signals largely fails to recover how it processes information.

Humans, but Not Deep Neural Networks, Often Miss Giant Targets in Scenes

Demonstrates that humans use scene context to guide search toward plausible target sizes, producing higher miss rates for mis-scaled targets; object-detection DNNs show no such effect.

Why does deep and cheap learning work so well?

Argues that the success of reasonably sized neural networks hinges on symmetry, locality, and polynomial log-probability in data drawn from the natural world.

Attention Is All You Need

The Transformer, a sequence transduction model that replaces recurrent layers entirely with attention mechanisms, achieves a new state of the art on machine translation while significantly reducing training time.
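The core operation is scaled dot-product attention; a minimal NumPy sketch (function and variable names are my own, not from the paper's code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted sum of value rows

# toy example: 3 queries attending over 4 key/value pairs of dimension 8
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8)
```

The full model stacks multi-head versions of this operation with feed-forward layers and positional encodings; the sketch shows only the single-head core.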

Intuitive Physics: Current Research and Controversies

Reviews recent research in intuitive physics, contrasting knowledge-based and learning-based approaches, and describes a shift toward a probabilistic-simulation framework that explains human intuitive-physics predictions better than earlier heuristic models.

Automated Curriculum Learning for Neural Networks

Investigates automatically generating curricula for neural-network training from a variety of per-sample learning-progress signals.
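The paper treats task selection as an adversarial bandit (Exp3.S) rewarded by learning progress; the sketch below substitutes a much simpler softmax sampler over progress signals to illustrate the idea (all names are illustrative, not from the paper):

```python
import math
import random

def choose_task(progress, temperature=1.0, rng=random):
    """Sample a task index with probability proportional to
    exp(progress / temperature) -- a simplified stand-in for the
    bandit-based selector used in the paper."""
    weights = [math.exp(p / temperature) for p in progress]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1  # guard against floating-point rounding

# toy example: three tasks; task 1 shows the most recent learning progress
random.seed(0)
progress = [0.1, 0.9, 0.2]
counts = [0, 0, 0]
for _ in range(1000):
    counts[choose_task(progress)] += 1
print(counts)  # task 1 is sampled most often
```

Tasks showing the most progress get sampled most, so the curriculum keeps concentrating effort where the network is currently learning fastest.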