Uses bijective networks to reveal large adversarial subspaces arising from excessive invariance, and introduces an independence cross-entropy loss that partially alleviates this vulnerability.
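A minimal PyTorch sketch of the independence cross-entropy idea, assuming the invertible network's output is split into semantic logits and nuisance variables; the gradient-reversal trick stands in for the paper's minimax training, and `nuisance_head` and the split sizes are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def independence_ce_loss(z, labels, nuisance_head, num_classes=10):
    # Split the bijective network's output into semantic logits z_s and
    # nuisance variables z_n.
    z_s, z_n = z[:, :num_classes], z[:, num_classes:]
    ce_semantic = F.cross_entropy(z_s, labels)
    # The nuisance head minimizes cross-entropy on z_n, while the reversed
    # gradient pushes the encoder to strip label information out of z_n.
    nuisance_logits = nuisance_head(GradReverse.apply(z_n))
    ce_nuisance = F.cross_entropy(nuisance_logits, labels)
    return ce_semantic + ce_nuisance

# Demo with random data: 16 samples, 10 classes, 32-dim invertible output.
z = torch.randn(16, 32, requires_grad=True)
labels = torch.randint(0, 10, (16,))
head = nn.Linear(32 - 10, 10)
independence_ce_loss(z, labels, head).backward()
```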
Demonstrates that scaling up self-supervised methods along data size, model capacity, and problem complexity enables them to match or surpass ImageNet supervised pre-training on a variety of tasks.
Applies task-agnostic, web-scale pre-training to computer vision using natural language supervision, enabling powerful zero-shot transfer to many datasets.
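As a usage sketch, zero-shot classification in the style of the openai/CLIP package: embed the image and a set of natural-language class prompts, then rank classes by cosine similarity. The checkpoint name, class list, and `example.jpg` path are placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

classes = ["dog", "cat", "car"]  # any label set, no fine-tuning required
text = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and each class prompt.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(classes, probs[0].tolist())))
```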
Challenges the popular belief that neuroscience is primarily data limited by treating a microprocessor as a model organism and applying the field's modern data analysis methods to understand its information processing, with generally poor results.
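To make "applying neuroscience methods" concrete, here is a toy lesion study on a hypothetical two-gate circuit (not the MOS 6502 the paper actually simulates): disable one element at a time and record which behaviors change, the style of causal inference the paper shows can mislead.

```python
from itertools import product

def circuit(a, b, lesioned=frozenset()):
    # A hypothetical two-gate circuit standing in for the processor.
    and_out = (a and b) if "AND" not in lesioned else 0
    or_out = (a or b) if "OR" not in lesioned else 0
    return {"and_out": and_out, "or_out": or_out}

for gate in ["AND", "OR"]:
    # A gate is deemed "necessary" for a behavior if any input pattern
    # produces a different output once the gate is lesioned.
    broken = [
        out for out in ["and_out", "or_out"]
        if any(
            circuit(a, b)[out] != circuit(a, b, lesioned={gate})[out]
            for a, b in product([0, 1], repeat=2)
        )
    ]
    print(gate, "->", broken)
```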
Demonstrates that humans use scene information to guide search toward likely target sizes, resulting in higher miss rates for mis-scaled targets, an effect that does not occur for object-detection DNNs.
Large-scale evolutionary simulations with DERL (Deep Evolutionary Reinforcement Learning) yield insights into how the interplay of learning, evolution, and environmental complexity can give rise to morphological intelligence.
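A toy sketch of the evolve-then-learn loop, assuming tournament-style selection as in DERL; `mutate` and `train_and_evaluate` are hypothetical stand-ins for the paper's morphology mutations and RL training in physics simulation.

```python
import random

def mutate(morphology):
    # Placeholder structural mutation (the real system adds/removes limbs,
    # resizes joints, etc.); morphology is a single float here.
    return morphology + random.gauss(0, 0.1)

def train_and_evaluate(morphology, env_complexity):
    # Stand-in for training a controller with RL and returning its reward;
    # fitness reflects learned behavior, so evolution favors bodies that learn.
    return -abs(morphology - env_complexity)

def tournament_evolution(pop_size=16, generations=50, env_complexity=1.0):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        a, b = random.sample(population, 2)
        fit = lambda m: train_and_evaluate(m, env_complexity)
        winner = a if fit(a) >= fit(b) else b
        # Replace the oldest individual with a mutated copy of the winner.
        population.pop(0)
        population.append(mutate(winner))
    return max(population, key=lambda m: train_and_evaluate(m, env_complexity))

print(tournament_evolution())
```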