Paper reviews

Intrinsically motivated learning of real-world sensorimotor skills with developmental constraints

The unlearnability, high-dimensionality, and unboundedness of the real world necessitate the integration of intrinsic motivation with other developmental constraints, such as sensorimotor primitives, task space representations, maturational mechanisms, and social guidance.
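For intuition, here is a minimal sketch of the learning-progress flavor of intrinsic reward central to this line of work: regions where prediction error stays flat (unlearnable noise, or skills already mastered) earn no reward, so the learner focuses where error is actually shrinking. The windowing scheme below is illustrative, not the paper's exact algorithm.

```python
import numpy as np

def intrinsic_reward(errors, window=10):
    """Learning-progress intrinsic reward: the recent decrease in
    prediction error within one sensorimotor region. Unlearnable
    regions give flat error curves and hence ~zero reward, which is
    how this signal copes with an unlearnable world.

    errors: chronological list of prediction errors for one region.
    """
    if len(errors) < 2 * window:
        return 0.0  # not enough history to estimate progress
    older = np.mean(errors[-2 * window:-window])
    recent = np.mean(errors[-window:])
    return older - recent  # error decrease = learning progress
```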

Intuitive experimentation in the physical world

Provides evidence that human experimentation in physical environments is effective at revealing properties of interest, and that what is learned from observation depends on the observer's goals.

Learning the Predictability of the Future

Presents the idea of using hyperbolic embeddings for hierarchical representations and reports experiments classifying actions within an action hierarchy.
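Hyperbolic embeddings suit hierarchies because distances blow up near the boundary of the Poincaré ball, leaving room for exponentially many leaves, while points near the origin act as abstract parents. A sketch of the standard Poincaré distance (the paper's training setup differs):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Distance in the Poincare ball model of hyperbolic space.
    Points near the origin behave like abstract parent nodes; points
    near the boundary (norm -> 1) behave like specific leaves, since
    distances grow without bound there."""
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / max(denom, eps))
```

An uncertain future can then be predicted as a point closer to the origin, i.e., a more abstract action.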

Scaling Laws for Neural Language Models

A large-scale empirical investigation of scaling laws shows that language-model performance follows a power law in model size, dataset size, and training compute, while architectural details have minimal effect.
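Concretely, the reported fits take the form L(N) = (N_c / N)^alpha_N. A sketch with the paper's approximate constants, quoted from memory and meant only to show the functional form:

```python
# Power-law forms from Kaplan et al. (2020); constants are the paper's
# approximate fits, quoted from memory, so treat them as rough.
def loss_from_params(N, N_c=8.8e13, alpha_N=0.076):
    """Test loss as a function of non-embedding parameter count N."""
    return (N_c / N) ** alpha_N

def loss_from_data(D, D_c=5.4e13, alpha_D=0.095):
    """Test loss as a function of dataset size D, in tokens."""
    return (D_c / D) ** alpha_D

# Doubling model size buys a constant multiplicative loss reduction:
print(loss_from_params(1e9) / loss_from_params(2e9))  # 2**0.076 ~ 1.054
```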

Why does deep and cheap learning work so well?

The success of reasonably sized neural networks hinges on the symmetry, locality, and low-order polynomial log-probability of data from the natural world.
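The technical crux is that low-order polynomials, and hence the polynomial Hamiltonians (negative log-probabilities) of physical data, are cheap for neural networks because a handful of neurons can approximate multiplication. A sketch of that classic gate, here instantiated with softplus (any nonlinearity with nonzero second derivative at 0 would do):

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def neural_multiply(u, v, lam=1e-3):
    """Approximate u*v with four softplus neurons, the gate behind the
    'cheap learning of polynomials' argument. Taylor-expanding softplus
    around 0 (second derivative 1/4) shows the error is O(lam**2)."""
    s = softplus
    num = (s(lam * (u + v)) + s(-lam * (u + v))
           - s(lam * (u - v)) - s(-lam * (u - v)))
    return num / (4 * lam ** 2 * 0.25)

print(neural_multiply(3.0, -2.0))  # ~ -6.0
```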

ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness

Evaluating the object recognition performance of humans and CNNs on images with conflicting shape and texture cues reveals contrasting biases: humans rely on shape while CNNs rely on texture, a bias that can be partially corrected by training CNNs on stylized images.

The Developing Infant Creates a Curriculum for Statistical Learning

Reviews recent work analyzing infants' egocentric views, highlighting how the structure of the data infants generate meshes with the internal machinery of statistical learning.

Self-supervised learning through the eyes of a child

Applies self-supervised learning algorithms to developmentally realistic, longitudinal, egocentric video from young children and demonstrates the emergence of high-level visual representations.
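As a concrete example of the kind of objective involved, here is a generic temporal self-supervision loss that treats consecutive video frames as positive pairs; the paper benchmarks several algorithms, and this sketch is not claimed to be its exact method:

```python
import numpy as np

def temporal_nce_loss(z, tau=0.1):
    """InfoNCE-style loss over frame embeddings z in temporal order,
    treating frames (t, t+1) as positive pairs: nearby moments in a
    child's visual stream should map to nearby representations.
    z: (T, d) array of L2-normalized embeddings."""
    sim = z @ z.T / tau                      # (T, T) similarity logits
    np.fill_diagonal(sim, -np.inf)           # a frame is not its own positive
    logits = sim[:-1]                        # anchors are frames 0..T-2
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    positives = log_prob[np.arange(len(logits)), np.arange(1, len(z))]
    return -positives.mean()
```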

Are we done with ImageNet?

Proposes a new set of ImageNet labels that addresses limitations of the originals, such as images containing multiple plausible objects and synonymous label classes.
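The proposed evaluation is easy to state in code: a top-1 prediction counts as correct if it lands anywhere in the image's reassessed set of plausible labels (a sketch; the function name is mine):

```python
def reassessed_accuracy(preds, label_sets):
    """Multi-label 'reassessed' accuracy: a model's top-1 prediction is
    correct if it appears in the image's set of plausible labels."""
    hits = sum(p in labels for p, labels in zip(preds, label_sets))
    return hits / len(preds)

# Image 0 plausibly shows both a 'laptop' and a 'keyboard'.
print(reassessed_accuracy(["keyboard", "cat"],
                          [{"laptop", "keyboard"}, {"dog"}]))  # 0.5
```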

High Fidelity Video Prediction with Large Stochastic Recurrent Neural Networks

Shows that video prediction quality improves steadily as model capacity grows; in line with Rich Sutton's 'The Bitter Lesson', this leaves open the question of how far the combination of maximal model capacity and minimal inductive bias can go.