Sample stimuli

[10 sample stimulus images: sample 0 through sample 9]

How to use

from brainscore_vision import load_benchmark

benchmark = load_benchmark("Baker2022inverted-accuracy_delta")
score = benchmark(my_model)  # my_model must implement the Brain-Score BrainModel interface
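
A complete run also needs a model object. A minimal sketch, assuming "alexnet" is a registered identifier in the brainscore_vision model registry (substitute your own model identifier as needed):

from brainscore_vision import load_benchmark, load_model

model = load_model("alexnet")  # assumed registry identifier; any registered model works
benchmark = load_benchmark("Baker2022inverted-accuracy_delta")
score = benchmark(model)
print(score)  # aggregate benchmark score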

Model scores


Ranks sharing the same score are grouped:

Rank     Score
1-12     1.000
13-19    0.982
20-24    0.908
25-32    0.698
33       0.683
34-37    0.470
38-48    0.367
49       0.310
50-51    0.186
52       0.098
53       0.023
54-167   0.000
168-219  X

Benchmark bibtex

@article{BAKER2022104913,
    title = {Deep learning models fail to capture the configural nature of human shape perception},
    journal = {iScience},
    volume = {25},
    number = {9},
    pages = {104913},
    year = {2022},
    issn = {2589-0042},
    doi = {10.1016/j.isci.2022.104913},
    url = {https://www.sciencedirect.com/science/article/pii/S2589004222011853},
    author = {Nicholas Baker and James H. Elder},
    keywords = {Biological sciences, Neuroscience, Sensory neuroscience},
    abstract = {A hallmark of human object perception is sensitivity to the holistic configuration of the local shape features of an object. Deep convolutional neural networks (DCNNs) are currently the dominant models for object recognition processing in the visual cortex, but do they capture this configural sensitivity? To answer this question, we employed a dataset of animal silhouettes and created a variant of this dataset that disrupts the configuration of each object while preserving local features. While human performance was impacted by this manipulation, DCNN performance was not, indicating insensitivity to object configuration. Modifications to training and architecture to make networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition.}
}

Ceiling

Not available

Data: Baker2022inverted

Metric: accuracy_delta
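
An accuracy_delta-style metric compares a model's drop in accuracy under a stimulus manipulation (here, inversion of the animal silhouettes) against the corresponding human drop. The following is only an illustrative sketch of that idea, with a simple normalized comparison; the function name, signature, and normalization are assumptions, not the exact Brain-Score implementation:

import numpy as np

def accuracy_delta_score(model_acc_upright, model_acc_inverted,
                         human_acc_upright, human_acc_inverted):
    """Compare the model's inversion-induced accuracy drop to the human drop.

    All inputs are accuracies in [0, 1]. Returns a score in [0, 1], where 1
    means the model's accuracy delta matches the human delta exactly.
    Illustrative sketch only, not the exact Brain-Score metric.
    """
    model_delta = model_acc_upright - model_acc_inverted
    human_delta = human_acc_upright - human_acc_inverted
    # Score by how close the two deltas are, normalized by the human delta.
    return float(max(0.0, 1.0 - abs(model_delta - human_delta) / max(abs(human_delta), 1e-9)))

# Example: a model whose accuracy barely drops under inversion scores poorly
# when humans show a large drop (0.02 vs. 0.30 delta -> score ~0.07).
print(accuracy_delta_score(0.90, 0.88, 0.90, 0.60))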