| Rank | Model | average (3) | neural (2) | behavior (1) | engineering (30) |
|---|---|---|---|---|---|
| 1 |  | .662 | 1.0 | .324 | .800 |
| 2 |  | .652 | .987 | .318 | .825 |
| 3 |  | .649 | .990 | .308 | .847 |
| 4 |  | .639 | .948 | .331 | .786 |
| 5 |  | .634 | .918 | .350 | .796 |
| 6 |  | .608 | .850 | .367 | .762 |
| 7 |  | .594 | .802 | .387 | .523 |
| 8 |  | .590 | .818 | .361 | .503 |
| 9 |  | .571 | .784 | .358 | .736 |
| 10 |  | .433 | .580 | .286 | .408 |
| 11 |  | .166 | .332 | X | X |
| 12 |  | .107 | .213 | X | X |
| 13 |  | .062 | .125 | X | X |
| 14 |  | X | X | X | .000 |
How to Interpret
The leaderboard is the heart of Brain-Score. It displays scores, all ranging from 0 to 1 (where 1 indicates the closest alignment between model and collected data), on every available benchmark. Benchmarks are arranged hierarchically, with the scores of each child benchmark averaged together to produce the overall score of the parent.
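As a minimal sketch of that aggregation (the benchmark names and scores below are hypothetical, not values from the leaderboard), a parent score is simply the mean of its children, and the overall average is the mean of the parent scores:

```python
# Minimal sketch of hierarchical score aggregation.
# Benchmark names and scores are hypothetical placeholders.
from statistics import mean

child_scores = {
    "neural": [0.95, 0.88],   # hypothetical child benchmark scores
    "behavior": [0.35],
}

# Each parent score is the mean of its children; the overall score
# is in turn the mean of the parent scores.
parent_scores = {name: mean(scores) for name, scores in child_scores.items()}
overall = mean(parent_scores.values())
print(parent_scores, overall)
```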
If you would like to submit your own model to Brain-Score, we highly recommend you complete our tutorial series here. It covers scoring a model on a single benchmark locally, walks through model submission packages, and more.
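As a rough illustration only, and not a substitute for the tutorial, local scoring in the Brain-Score Python packages generally follows a pattern like the one below; the package choice, model identifier, and benchmark identifier are assumptions here, and the identifiers that actually exist should be taken from the tutorial.

```python
# Hypothetical local scoring call; the model and benchmark identifiers
# below are placeholders -- consult the Brain-Score tutorial for real ones.
from brainscore_vision import score  # or brainscore_language, depending on the domain

result = score(
    model_identifier="my-model",            # placeholder model name
    benchmark_identifier="some-benchmark",  # placeholder benchmark name
)
print(result)
```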