Sample stimuli

[Ten example stimulus images from the benchmark, sample 0 through sample 9.]

How to use

from brainscore_vision import load_benchmark

# Load the benchmark by its identifier.
benchmark = load_benchmark("Baker2022fragmented-accuracy_delta")
# Score a model; `my_model` must implement the Brain-Score BrainModel interface.
score = benchmark(my_model)
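A complete run, end to end, might look like the sketch below. It assumes scoring a model that is already registered with Brain-Score; "alexnet" is used purely as an illustrative identifier, and any registered vision model can be substituted.

from brainscore_vision import load_benchmark, load_model

# "alexnet" is an illustrative identifier; substitute any model registered with Brain-Score.
model = load_model("alexnet")
benchmark = load_benchmark("Baker2022fragmented-accuracy_delta")
score = benchmark(model)
print(score)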

Model scores

Rank     Score
1        .986
2        .984
3        .984
4        .983
5        .982
6        .982
7        .982
8        .981
9        .978
10       .970
11       .965
12       .960
13       .960
14       .960
15       .957
16       .946
17       .945
18       .944
19       .944
20       .935
21       .926
22       .925
23       .917
24       .903
25       .901
26       .901
27       .901
28       .889
29       .882
30       .868
31       .858
32       .858
33       .838
34       .836
35       .836
36       .834
37       .832
38       .822
39       .806
40       .803
41       .802
42       .799
43       .796
44       .791
45       .788
46       .785
47       .760
48       .758
49       .756
50       .751
51       .740
52       .735
53       .734
54       .734
55       .730
56       .721
57       .720
58       .709
59       .698
60       .671
61       .670
62       .663
63       .656
64       .649
65       .646
66       .617
67       .603
68       .602
69       .592
70       .590
71       .583
72       .582
73       .566
74       .558
75       .558
76       .550
77       .543
78       .541
79       .538
80       .524
81       .523
82       .515
83       .507
84       .499
85       .494
86       .478
87       .473
88       .470
89       .446
90       .438
91       .433
92       .424
93       .421
94       .417
95       .412
96       .412
97       .412
98       .411
99       .400
100      .392
101      .392
102      .388
103      .365
104      .350
105      .336
106      .336
107      .333
108      .308
109      .304
110      .289
111      .287
112      .282
113      .280
114      .274
115      .268
116      .264
117      .251
118      .236
119      .221
120      .217
121      .204
122      .195
123      .195
124      .186
125      .178
126      .167
127      .161
128      .115
129      .111
130      .096
131      .053
132      .038
133      .032
134      .030
135      .029
136      .021
137      .015
138      .014
139      .011
140      .011
141      .003
142-168  .000
169-221  X

Benchmark bibtex

@article{BAKER2022104913,
  title = {Deep learning models fail to capture the configural nature of human shape perception},
  journal = {iScience},
  volume = {25},
  number = {9},
  pages = {104913},
  year = {2022},
  issn = {2589-0042},
  doi = {https://doi.org/10.1016/j.isci.2022.104913},
  url = {https://www.sciencedirect.com/science/article/pii/S2589004222011853},
  author = {Nicholas Baker and James H. Elder},
  keywords = {Biological sciences, Neuroscience, Sensory neuroscience},
  abstract = {A hallmark of human object perception is sensitivity to the holistic configuration of the local shape features of an object. Deep convolutional neural networks (DCNNs) are currently the dominant models for object recognition processing in the visual cortex, but do they capture this configural sensitivity? To answer this question, we employed a dataset of animal silhouettes and created a variant of this dataset that disrupts the configuration of each object while preserving local features. While human performance was impacted by this manipulation, DCNN performance was not, indicating insensitivity to object configuration. Modifications to training and architecture to make networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition.}
}

Ceiling

Not available

Data: Baker2022fragmented
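The stimuli and behavioral data behind the benchmark can also be inspected directly. The sketch below assumes the data identifier listed above is also the key used in the brainscore_vision stimulus-set and data registries; if the registries use a different key, adjust accordingly.

from brainscore_vision import load_dataset, load_stimulus_set

# Assumed registry keys, taken from the "Data" identifier above; verify against
# the brainscore_vision registries if loading fails.
assembly = load_dataset("Baker2022fragmented")
stimuli = load_stimulus_set("Baker2022fragmented")
print(assembly.dims, len(stimuli))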

Metric: accuracy_delta
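
For intuition, the accuracy_delta metric can be thought of as comparing how much accuracy a model loses when silhouettes are fragmented with how much accuracy humans lose under the same manipulation. The function below is a minimal sketch of that idea, not the brainscore_vision implementation, which may aggregate across conditions and ceiling-normalize differently.

def accuracy_delta_score(model_acc_whole, model_acc_fragmented,
                         human_acc_whole, human_acc_fragmented):
    # Drop in accuracy caused by fragmenting the silhouettes.
    model_delta = model_acc_whole - model_acc_fragmented
    human_delta = human_acc_whole - human_acc_fragmented
    # Score is high when the model's drop matches the human drop.
    return max(0.0, 1.0 - abs(model_delta - human_delta))

# Example: humans drop sharply on fragmented shapes while the model barely drops,
# so the deltas disagree and the score is penalized.
print(accuracy_delta_score(0.90, 0.88, 0.92, 0.55))  # 1 - |0.02 - 0.37| -> ~0.65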