Scores on benchmarks

Model rank shown below is with respect to all public models.

| Benchmark | Score | Rank |
| --- | --- | --- |
| average_vision (81 benchmarks) | .061 | 420 |
| behavior_vision (43 benchmarks) | .123 | 218 |
| Baker2022 (3 benchmarks) | .091 | 150 |
| Baker2022fragmented-accuracy_delta v1 [reference] | .274 | 114 |
| Baker2022frankenstein-accuracy_delta v1 [reference] | .000 | 142 |
| Baker2022inverted-accuracy_delta v1 [reference] | .000 | 54 |
| Maniquet2024 (2 benchmarks) | .417 | 144 |
| Maniquet2024-confusion_similarity v1 [reference] | .164 | 173 |
| Maniquet2024-tasks_consistency v1 [reference] | .671 | 59 |
| Hebart2023-match v1 | .233 | 132 |
| BMD2024 (4 benchmarks) | .146 | 127 |
| BMD2024.dotted_1Behavioral-accuracy_distance v1 | .166 | 83 |
| BMD2024.texture_1Behavioral-accuracy_distance v1 | .134 | 127 |
| BMD2024.texture_2Behavioral-accuracy_distance v1 | .157 | 118 |
| BMD2024.dotted_2Behavioral-accuracy_distance v1 | .126 | 107 |
| Coggan2024_behavior-ConditionWiseAccuracySimilarity v1 | .095 | 161 |
How to use

```python
from brainscore_vision import load_model

model = load_model("resnet50_eMMCR_Vanilla")
model.start_task(...)       # configure the behavioral task to perform
model.start_recording(...)  # configure which neural sites/time bins to record
model.look_at(...)          # present stimuli to the model
```
Benchmarks bibtex

```bibtex
@article{BAKER2022104913,
  title = {Deep learning models fail to capture the configural nature of human shape perception},
  journal = {iScience},
  volume = {25},
  number = {9},
  pages = {104913},
  year = {2022},
  issn = {2589-0042},
  doi = {https://doi.org/10.1016/j.isci.2022.104913},
  url = {https://www.sciencedirect.com/science/article/pii/S2589004222011853},
  author = {Nicholas Baker and James H. Elder},
  keywords = {Biological sciences, Neuroscience, Sensory neuroscience},
  abstract = {A hallmark of human object perception is sensitivity to the holistic configuration of the local shape features of an object. Deep convolutional neural networks (DCNNs) are currently the dominant models for object recognition processing in the visual cortex, but do they capture this configural sensitivity? To answer this question, we employed a dataset of animal silhouettes and created a variant of this dataset that disrupts the configuration of each object while preserving local features. While human performance was impacted by this manipulation, DCNN performance was not, indicating insensitivity to object configuration. Modifications to training and architecture to make networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition.}
}

@article{Maniquet2024.04.02.587669,
  author = {Maniquet, Tim and de Beeck, Hans Op and Costantino, Andrea Ivan},
  title = {Recurrent issues with deep neural network models of visual recognition},
  elocation-id = {2024.04.02.587669},
  year = {2024},
  doi = {10.1101/2024.04.02.587669},
  publisher = {Cold Spring Harbor Laboratory},
  URL = {https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669},
  eprint = {https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669.full.pdf},
  journal = {bioRxiv}
}
```