Uses bijective networks to identify large subspaces of invariance-based vulnerability and introduces the independence cross-entropy loss which partially alleviates it.
Demonstrates that scaling up self-supervised methods along data size, model capacity, and problem complexity enables them to match or surpass ImageNet supervised pre-training on a variety of tasks.
Demonstrates the benefit of curriculum learning with different scoring and pacing functions on various small datasets.
Applies task-agnostic, web-scale pre-training to computer vision using natural language supervision, enabling powerful zero-shot transfer to many datasets.
Demonstrates that providing explanations and model criticism can be useful tools to improve the reliability of ImageNet-trained CNNs for end-users.
Addresses a popular belief in neuroscience that the field is primarily data-limited by using a microprocessor as a model organism and applying modern neuroscience data analysis methods to understand its information processing, with generally poor results.
Demonstrates that humans use scene information to guide search towards likely target sizes, resulting in higher miss rates for mis-scaled targets, which does not occur for object detection DNNs.
Large-scale evolutionary simulations by DERL yield insights into how the interaction between learning, evolution, and environmental complexity can lead to morphological intelligence.
Produces a competitive convolution-free transformer by training only on ImageNet.
A variety of structural and functional differences in the brain are correlated with intelligence.