Sample stimuli

[Image grid: 10 sample stimuli, sample 0 through sample 9]

How to use

from brainscore_vision import load_benchmark

benchmark = load_benchmark("MajajHong2015.V4-pls")
score = benchmark(my_model)  # my_model must implement the Brain-Score BrainModel interface

Model scores

Rank and ceiled score for each of the 444 evaluated models (model names are shown on the interactive leaderboard and were not captured here; X marks a model with no score on this benchmark):

  1 .620     2 .614     3 .614     4 .611     5 .610
  6 .610     7 .610     8 .610     9 .608    10 .607
 11 .605    12 .605    13 .604    14 .604    15 .604
 16 .603    17 .603    18 .602    19 .602    20 .602
 21 .602    22 .602    23 .602    24 .602    25 .602
 26 .602    27 .602    28 .601    29 .601    30 .601
 31 .600    32 .600    33 .600    34 .600    35 .600
 36 .599    37 .599    38 .599    39 .599    40 .599
 41 .599    42 .598    43 .598    44 .598    45 .598
 46 .598    47 .597    48 .597    49 .597    50 .597
 51 .596    52 .596    53 .596    54 .596    55 .596
 56 .596    57 .596    58 .595    59 .595    60 .595
 61 .594    62 .593    63 .592    64 .592    65 .592
 66 .592    67 .592    68 .592    69 .591    70 .591
 71 .591    72 .591    73 .591    74 .591    75 .590
 76 .590    77 .589    78 .589    79 .589    80 .589
 81 .589    82 .589    83 .589    84 .588    85 .588
 86 .588    87 .588    88 .588    89 .587    90 .587
 91 .587    92 .587    93 .587    94 .586    95 .586
 96 .586    97 .586    98 .586    99 .585   100 .585
101 .585   102 .585   103 .584   104 .584   105 .584
106 .584   107 .584   108 .584   109 .584   110 .584
111 .583   112 .583   113 .583   114 .583   115 .583
116 .583   117 .583   118 .583   119 .582   120 .582
121 .582   122 .582   123 .582   124 .582   125 .582
126 .582   127 .582   128 .582   129 .581   130 .581
131 .581   132 .581   133 .581   134 .581   135 .581
136 .580   137 .580   138 .580   139 .580   140 .580
141 .580   142 .580   143 .579   144 .579   145 .579
146 .579   147 .579   148 .579   149 .578   150 .578
151 .578   152 .578   153 .578   154 .578   155 .578
156 .578   157 .578   158 .577   159 .577   160 .577
161 .577   162 .577   163 .576   164 .576   165 .576
166 .575   167 .575   168 .575   169 .575   170 .575
171 .575   172 .575   173 .574   174 .574   175 .574
176 .574   177 .574   178 .574   179 .574   180 .574
181 .574   182 .574   183 .574   184 .573   185 .573
186 .573   187 .573   188 .573   189 .572   190 .572
191 .572   192 .571   193 .571   194 .571   195 .571
196 .570   197 .570   198 .570   199 .570   200 .570
201 .570   202 .570   203 .570   204 .570   205 .569
206 .569   207 .569   208 .569   209 .569   210 .569
211 .569   212 .569   213 .569   214 .569   215 .568
216 .568   217 .568   218 .568   219 .568   220 .568
221 .568   222 .567   223 .567   224 .567   225 .566
226 .566   227 .566   228 .566   229 .566   230 .566
231 .566   232 .566   233 .565   234 .565   235 .565
236 .564   237 .564   238 .564   239 .564   240 .563
241 .563   242 .563   243 .563   244 .562   245 .562
246 .562   247 .562   248 .562   249 .562   250 .561
251 .560   252 .560   253 .560   254 .560   255 .560
256 .560   257 .560   258 .559   259 .559   260 .558
261 .558   262 .558   263 .558   264 .558   265 .558
266 .558   267 .557   268 .557   269 .557   270 .557
271 .556   272 .556   273 .556   274 .555   275 .555
276 .555   277 .555   278 .555   279 .554   280 .554
281 .553   282 .553   283 .553   284 .551   285 .551
286 .551   287 .550   288 .550   289 .550   290 .550
291 .550   292 .550   293 .550   294 .550   295 .550
296 .550   297 .550   298 .550   299 .550   300 .550
301 .550   302 .549   303 .549   304 .549   305 .549
306 .548   307 .548   308 .548   309 .548   310 .548
311 .548   312 .547   313 .547   314 .546   315 .545
316 .545   317 .544   318 .544   319 .543   320 .542
321 .541   322 .541   323 .540   324 .539   325 .539
326 .538   327 .538   328 .537   329 .536   330 .536
331 .536   332 .533   333 .533   334 .531   335 .530
336 .530   337 .527   338 .526   339 .524   340 .523
341 .521   342 .519   343 .518   344 .517   345 .517
346 .516   347 .516   348 .516   349 .515   350 .515
351 .514   352 .514   353 .514   354 .514   355 .514
356 .514   357 .514   358 .514   359 .514   360 .514
361 .514   362 .513   363 .511   364 .511   365 .509
366 .509   367 .504   368 .504   369 .503   370 .501
371 .501   372 .498   373 .498   374 .497   375 .494
376 .494   377 .491   378 .489   379 .487   380 .486
381 .485   382 .485   383 .483   384 .481   385 .476
386 .473   387 .469   388 .466   389 .456   390 .454
391 .453   392 .452   393 .451   394 .445   395 .443
396 .439   397 .438   398 .437   399 .436   400 .433
401 .433   402 .432   403 .432   404 .431   405 .431
406 .430   407 .430   408 .427   409 .421   410 .420
411 .419   412 .418   413 .376   414 .342   415 .339
416 .328   417 .316   418 .185   419 .179   420 .154
421 .098   422 .078   423 .073   424 .068   425 .068
426 X      427 X      428 X      429 X      430 X
431 X      432 X      433 X      434 X      435 X
436 X      437 X      438 X      439 X      440 X
441 X      442 X      443 X      444 X

Benchmark bibtex

@article {Majaj13402,
            author = {Majaj, Najib J. and Hong, Ha and Solomon, Ethan A. and DiCarlo, James J.},
            title = {Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance},
            volume = {35},
            number = {39},
            pages = {13402--13418},
            year = {2015},
            doi = {10.1523/JNEUROSCI.5181-14.2015},
            publisher = {Society for Neuroscience},
            abstract = {To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ({\textquotedblleft}face patches{\textquotedblright}) did not improve predictive power.
Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of \~{}60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of \>100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.},
            issn = {0270-6474},
            URL = {https://www.jneurosci.org/content/35/39/13402},
            eprint = {https://www.jneurosci.org/content/35/39/13402.full.pdf},
            journal = {Journal of Neuroscience}}

Ceiling

0.90. Note that scores are relative to this ceiling.
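To make the ceiling normalization concrete: on many Brain-Score neural benchmarks the reported score is the raw neural predictivity divided by the ceiling. This is a sketch under that assumption (the exact normalization lives in the benchmark code); the raw value below is illustrative, not taken from the leaderboard.

```python
# Illustration only: assumed convention is ceiled score = raw score / ceiling.
ceiling = 0.90            # noise ceiling of the MajajHong2015.V4 recordings

raw_predictivity = 0.558  # hypothetical raw (unceiled) neural predictivity
ceiled_score = raw_predictivity / ceiling
print(round(ceiled_score, 3))  # 0.62
```

A perfect model, by this convention, would score 1.0 only if it predicted the neural data as well as the data predict themselves.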

Data: MajajHong2015.V4

Recordings from 88 sites in V4 in response to 2,560 stimuli.

Metric: pls