Scores on benchmarks

Ranks shown below are computed with respect to all public models.
.113  average_vision  rank 376  (81 benchmarks)
    .227  behavior_vision  rank 163  (43 benchmarks)
        .512  Rajalingham2018-i2n v2 [reference]  rank 132  (match-to-sample task)
        .486  Geirhos2021-error_consistency [reference]  rank 34  (17 benchmarks)
            .800  Geirhos2021colour-error_consistency v1 [reference]  rank 8
            .544  Geirhos2021contrast-error_consistency v1 [reference]  rank 31
            .355  Geirhos2021cueconflict-error_consistency v1 [reference]  rank 41
            .153  Geirhos2021edge-error_consistency v1 [reference]  rank 52
            .593  Geirhos2021eidolonI-error_consistency v1 [reference]  rank 33
            .640  Geirhos2021eidolonII-error_consistency v1 [reference]  rank 19
            .501  Geirhos2021eidolonIII-error_consistency v1 [reference]  rank 36
            .702  Geirhos2021falsecolour-error_consistency v1 [reference]  rank 16
            .177  Geirhos2021highpass-error_consistency v1 [reference]  rank 44
            .410  Geirhos2021lowpass-error_consistency v1 [reference]  rank 44
            .376  Geirhos2021phasescrambling-error_consistency v1 [reference]  rank 38
            .380  Geirhos2021powerequalisation-error_consistency v1 [reference]  rank 44
            .431  Geirhos2021rotation-error_consistency v1 [reference]  rank 24
            .817  Geirhos2021silhouette-error_consistency v1 [reference]  rank 37
            .267  Geirhos2021sketch-error_consistency v1 [reference]  rank 42
            .627  Geirhos2021stylized-error_consistency v1 [reference]  rank 34
            .493  Geirhos2021uniformnoise-error_consistency v1 [reference]  rank 46
        .155  Baker2022  rank 132  (3 benchmarks)
            .308  Baker2022fragmented-accuracy_delta v1 [reference]  rank 108
            .156  Baker2022frankenstein-accuracy_delta v1 [reference]  rank 129
            .000  Baker2022inverted-accuracy_delta v1 [reference]  rank 54
        .368  Ferguson2024 [reference]  rank 172  (14 benchmarks)
            .546  Ferguson2024gray_hard-value_delta v1 [reference]  rank 86
            .305  Ferguson2024lle-value_delta v1 [reference]  rank 136
            .608  Ferguson2024juncture-value_delta v1 [reference]  rank 34
            .952  Ferguson2024color-value_delta v1 [reference]  rank 63
            .062  Ferguson2024round_v-value_delta v1 [reference]  rank 200
            .148  Ferguson2024eighth-value_delta v1 [reference]  rank 89
            .576  Ferguson2024round_f-value_delta v1 [reference]  rank 63
            .242  Ferguson2024llh-value_delta v1 [reference]  rank 151
            .198  Ferguson2024circle_line-value_delta v1 [reference]  rank 148
            .518  Ferguson2024gray_easy-value_delta v1 [reference]  rank 69
            1.0   Ferguson2024tilted_line-value_delta v1 [reference]  rank 1
        .150  Hebart2023-match v1  rank 157
        .145  BMD2024  rank 128  (4 benchmarks)
            .166  BMD2024.dotted_1Behavioral-accuracy_distance v1  rank 83
            .083  BMD2024.texture_1Behavioral-accuracy_distance v1  rank 166
            .126  BMD2024.texture_2Behavioral-accuracy_distance v1  rank 129
            .205  BMD2024.dotted_2Behavioral-accuracy_distance v1  rank 49
    .158  engineering_vision  rank 237  (25 benchmarks)
        .604  Geirhos2021-top1 [reference]  rank 67  (17 benchmarks)
            .986  Geirhos2021colour-top1 v1 [reference]  rank 44
            .971  Geirhos2021contrast-top1 v1 [reference]  rank 36
            .195  Geirhos2021cueconflict-top1 v1 [reference]  rank 164
            .212  Geirhos2021edge-top1 v1 [reference]  rank 194
            .453  Geirhos2021eidolonI-top1 v1 [reference]  rank 214
            .505  Geirhos2021eidolonII-top1 v1 [reference]  rank 147
            .498  Geirhos2021eidolonIII-top1 v1 [reference]  rank 158
            .984  Geirhos2021falsecolour-top1 v1 [reference]  rank 29
            .487  Geirhos2021highpass-top1 v1 [reference]  rank 73
            .481  Geirhos2021lowpass-top1 v1 [reference]  rank 62
            .639  Geirhos2021phasescrambling-top1 v1 [reference]  rank 88
            .812  Geirhos2021powerequalisation-top1 v1 [reference]  rank 61
            .780  Geirhos2021rotation-top1 v1 [reference]  rank 45
            .544  Geirhos2021silhouette-top1 v1 [reference]  rank 76
            .672  Geirhos2021sketch-top1 v1 [reference]  rank 59
            .410  Geirhos2021stylized-top1 v1 [reference]  rank 94
            .641  Geirhos2021uniformnoise-top1 v1 [reference]  rank 40
        .185  Hermann2020 [reference]  rank 213  (2 benchmarks)
            .151  Hermann2020cueconflict-shape_match v1 [reference]  rank 173
            .219  Hermann2020cueconflict-shape_bias v1 [reference]  rank 217

How to use

from brainscore_vision import load_model

model = load_model("pnasnet_large")
model.start_task(...)       # configure a behavioral task, e.g. BrainModel.Task.label
model.start_recording(...)  # choose a recording target (e.g. "IT") and time bins
model.look_at(...)          # present stimuli; returns behavior or recordings
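
To reproduce one of the scores above end-to-end, a benchmark can be loaded from the registry and called directly on the model; the benchmark then drives the start_task/look_at calls itself. A minimal sketch, assuming the public brainscore_vision benchmark registry; the benchmark identifier is taken from the score table above and its exact registry spelling may differ:

from brainscore_vision import load_model, load_benchmark

model = load_model("pnasnet_large")
# identifier assumed from the score table above; check the registry for exact spelling
benchmark = load_benchmark("Rajalingham2018-i2n")
score = benchmark(model)  # runs the match-to-sample task and compares against primate data
print(score)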

Benchmarks BibTeX

@article{Rajalingham240614,
    author = {Rajalingham, Rishi and Issa, Elias B. and Bashivan, Pouya and Kar, Kohitij and Schmidt, Kailyn and DiCarlo, James J.},
    title = {Large-scale, high-resolution comparison of the core visual object recognition behavior of humans, monkeys, and state-of-the-art deep artificial neural networks},
    elocation-id = {240614},
    year = {2018},
    doi = {10.1101/240614},
    publisher = {Cold Spring Harbor Laboratory},
    abstract = {Primates{\textemdash}including humans{\textemdash}can typically recognize objects in visual images at a glance even in the face of naturally occurring identity-preserving image transformations (e.g. changes in viewpoint). A primary neuroscience goal is to uncover neuron-level mechanistic models that quantitatively explain this behavior by predicting primate performance for each and every image. Here, we applied this stringent behavioral prediction test to the leading mechanistic models of primate vision (specifically, deep, convolutional, artificial neural networks; ANNs) by directly comparing their behavioral signatures against those of humans and rhesus macaque monkeys. Using high-throughput data collection systems for human and monkey psychophysics, we collected over one million behavioral trials for 2400 images over 276 binary object discrimination tasks. Consistent with previous work, we observed that state-of-the-art deep, feed-forward convolutional ANNs trained for visual categorization (termed DCNNIC models) accurately predicted primate patterns of object-level confusion. However, when we examined behavioral performance for individual images within each object discrimination task, we found that all tested DCNNIC models were significantly non-predictive of primate performance, and that this prediction failure was not accounted for by simple image attributes, nor rescued by simple model modifications. These results show that current DCNNIC models cannot account for the image-level behavioral patterns of primates, and that new ANN models are needed to more precisely capture the neural mechanisms underlying primate object vision. To this end, large-scale, high-resolution primate behavioral benchmarks{\textemdash}such as those obtained here{\textemdash}could serve as direct guides for discovering such models. SIGNIFICANCE STATEMENT Recently, specific feed-forward deep convolutional artificial neural networks (ANNs) models have dramatically advanced our quantitative understanding of the neural mechanisms underlying primate core object recognition. In this work, we tested the limits of those ANNs by systematically comparing the behavioral responses of these models with the behavioral responses of humans and monkeys, at the resolution of individual images. Using these high-resolution metrics, we found that all tested ANN models significantly diverged from primate behavior. Going forward, these high-resolution, large-scale primate behavioral benchmarks could serve as direct guides for discovering better ANN models of the primate visual system.},
    url = {https://www.biorxiv.org/content/early/2018/02/12/240614},
    eprint = {https://www.biorxiv.org/content/early/2018/02/12/240614.full.pdf},
    journal = {bioRxiv}
}
@article{geirhos2021partial,
    title = {Partial success in closing the gap between human and machine vision},
    author = {Geirhos, Robert and Narayanappa, Kantharaju and Mitzkus, Benjamin and Thieringer, Tizian and Bethge, Matthias and Wichmann, Felix A and Brendel, Wieland},
    journal = {Advances in Neural Information Processing Systems},
    volume = {34},
    year = {2021},
    url = {https://openreview.net/forum?id=QkljT4mrfs}
}
@article{BAKER2022104913,
    title = {Deep learning models fail to capture the configural nature of human shape perception},
    journal = {iScience},
    volume = {25},
    number = {9},
    pages = {104913},
    year = {2022},
    issn = {2589-0042},
    doi = {10.1016/j.isci.2022.104913},
    url = {https://www.sciencedirect.com/science/article/pii/S2589004222011853},
    author = {Nicholas Baker and James H. Elder},
    keywords = {Biological sciences, Neuroscience, Sensory neuroscience},
    abstract = {A hallmark of human object perception is sensitivity to the holistic configuration of the local shape features of an object. Deep convolutional neural networks (DCNNs) are currently the dominant models for object recognition processing in the visual cortex, but do they capture this configural sensitivity? To answer this question, we employed a dataset of animal silhouettes and created a variant of this dataset that disrupts the configuration of each object while preserving local features. While human performance was impacted by this manipulation, DCNN performance was not, indicating insensitivity to object configuration. Modifications to training and architecture to make networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition.}
}
@misc{ferguson_ngo_lee_dicarlo_schrimpf_2024,
    title = {How Well is Visual Search Asymmetry predicted by a Binary-Choice, Rapid, Accuracy-based Visual-search, Oddball-detection (BRAVO) task?},
    url = {osf.io/5ba3n},
    doi = {10.17605/OSF.IO/5BA3N},
    publisher = {OSF},
    author = {Ferguson, Michael E, Jr and Ngo, Jerry and Lee, Michael and DiCarlo, James and Schrimpf, Martin},
    year = {2024},
    month = {Jun}
}
@article{hermann2020origins,
    title = {The origins and prevalence of texture bias in convolutional neural networks},
    author = {Hermann, Katherine and Chen, Ting and Kornblith, Simon},
    journal = {Advances in Neural Information Processing Systems},
    volume = {33},
    pages = {19000--19015},
    year = {2020},
    url = {https://proceedings.neurips.cc/paper/2020/hash/db5f9f42a7157abe65bb145000b5871a-Abstract.html}
}

Layer Commitment

No layer commitments were found for this model. Older submissions may not have stored this information; it will be added when the model is evaluated on new benchmarks.
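
For context, a layer commitment is the mapping from a base network's layers to cortical regions (V1, V2, V4, IT) that turns an activations model into a BrainModel. A minimal sketch of how submissions typically declare candidate layers, assuming the model_helpers wrappers from the public brainscore_vision repository; the base network and layer names below are stand-ins, not this model's actual commitment:

import functools
import torchvision.models
from brainscore_vision.model_helpers.activations.pytorch import PytorchWrapper, load_preprocess_images
from brainscore_vision.model_helpers.brain_transformation import ModelCommitment

# wrap a base network as an activations model (resnet18 used as a stand-in here)
preprocessing = functools.partial(load_preprocess_images, image_size=224)
activations_model = PytorchWrapper(identifier="my-model", model=torchvision.models.resnet18(),
                                   preprocessing=preprocessing)
# declare candidate layers; Brain-Score searches these when committing layers to regions
model = ModelCommitment(identifier="my-model", activations_model=activations_model,
                        layers=["layer2", "layer3", "layer4"])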

Visual Angle

Not specified for this model.