What is the purpose of Brain-Score?
The Brain-Score platform aims to yield strong computational models of the brain and mind. It enables researchers to quickly gauge how well their model(s) align with dozens of neural and behavioral measurements, and provides these models to experimentalists to prototype new experiments and make sense of biological data. We recommend reading the perspective paper and the technical paper, which outline both the ideas behind Brain-Score and the inner workings of how it operates.
What are all the numbers on the Brain-Score site?
The leaderboard (Brain-Score) currently lists many scores that your model can obtain. These are sub-divided into neural and behavioral scores, which are themselves organized hierarchically. Each is a set of benchmarks that tests how “brain-like” your model is with respect to various cognitive and neural data – in essence, it is a measure of how similar the model is to the brain’s visual or language system. Models are also tested on “Engineering” benchmarks which do not include biological data but typically test against ground truth, often for a machine learning task. These scores are often correlated with the brain and behavioral scores (e.g. more V1-like → more robust to image perturbations).
What is a benchmark?
A benchmark is composed of data, a metric, and an experimental protocol to test models. The data is typically primate (human or non-human) neural and/or behavioral data. When a model is run on a benchmark, the metric outputs a score of how well the model predictions match the data.
What is a metric?
A metric scores how similar two sets of data are. Typically these two sets are model and primate measurements, but metrics are agnostic of the data source and can also be used to compare two primate measurements (e.g. for ceiling estimates).
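As a concrete illustration, here is a toy metric implemented as plain Pearson correlation between two measurement vectors. This is a simplified sketch only: real Brain-Score metrics operate on richer data structures and typically include cross-validation and ceiling estimation, and the function name here is hypothetical.

```python
# Toy "metric": Pearson correlation between two equally long measurement
# vectors. The data source is irrelevant -- either vector could come from a
# model or from primate recordings (e.g. for ceiling estimates).
from math import sqrt

def pearson_metric(measurements_a, measurements_b):
    """Score the similarity of two measurement vectors in [-1, 1]."""
    n = len(measurements_a)
    mean_a = sum(measurements_a) / n
    mean_b = sum(measurements_b) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(measurements_a, measurements_b))
    var_a = sum((a - mean_a) ** 2 for a in measurements_a)
    var_b = sum((b - mean_b) ** 2 for b in measurements_b)
    return cov / sqrt(var_a * var_b)

print(pearson_metric([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly correlated -> 1.0
```

Because the metric only sees two sets of numbers, comparing model-vs-primate and primate-vs-primate data works identically.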
What is a model?
Specifically, "model" here refers to any computational model that implements the BrainModel (for Brain-Score Vision) or ArtificialSubject (for Brain-Score Language) API, which defines the interface that allows a conformant model to run on Brain-Score benchmarks. Note that Brain-Score is agnostic to the exact model family, e.g. this could be an artificial neural network or a hand-crafted predictor. As long as the model makes predictions that adhere to the API, it can be tested.
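To illustrate the idea of a conformant model, the sketch below shows a hand-crafted (non-neural-network) predictor exposing interface-style methods. Method names such as `start_recording` and `look_at` are modeled on the Brain-Score Vision BrainModel interface, but the exact signatures and return types here are illustrative assumptions; consult the brainscore_vision documentation for the real contract.

```python
# Conceptual sketch of a conformant model (not the exact Brain-Score API).
class HandCraftedModel:
    """A hand-crafted predictor: Brain-Score is agnostic to model family."""

    def start_recording(self, region, time_bins):
        # Configure which model "brain region" to read out.
        self.region = region
        self.time_bins = time_bins

    def look_at(self, stimuli):
        # Return one fake "response" per stimulus; a real model would return
        # an assembly of predicted neural activity or behavior.
        return [hash(stimulus) % 100 / 100 for stimulus in stimuli]

model = HandCraftedModel()
model.start_recording('IT', time_bins=[(70, 170)])
responses = model.look_at(['image1.png', 'image2.png'])
print(len(responses))  # one response per stimulus
```

Any object answering these calls with predictions in the expected format could, in principle, be run on benchmarks, regardless of what happens inside it.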
What is a score?
The scores that Brain-Score computes are normalized values ranging from 0.0 to 1.0. The score reflects how closely model predictions align with biological data, e.g. neural activity or behavior, as evaluated by each benchmark's metric. 0 means the model does not match the measurements at all; 1 means the model matches the measurements at ceiling level, such that no more data can be explained due to noise in the signal (e.g. if the model obtains a raw score of 0.8 and the data ceiling is also 0.8, the final score is 1).
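The ceiling normalization described above can be sketched in a few lines. This is a minimal illustration under the stated assumptions; actual benchmarks use metric-specific ceilings and error estimates, and the function name is hypothetical.

```python
# Minimal sketch of ceiling normalization: divide the raw similarity score
# by the data ceiling, capping at 1.0 (a model cannot explain more than the
# explainable, non-noise variance).
def ceiled_score(raw_score, ceiling):
    return min(raw_score / ceiling, 1.0)

print(ceiled_score(0.8, 0.8))  # matches data at ceiling -> 1.0
print(ceiled_score(0.4, 0.8))  # half the explainable variance -> 0.5
```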
What am I able to submit?
You can submit new plugins to Brain-Score. A plugin can be a dataset, a metric, a benchmark, or a model. You can submit plugins directly on the website as a zip file or via a pull request on Github (zip file submissions will automatically be converted into a PR). The plugin directory has to contain at least two files: an __init__.py which adds your data/metric/benchmark/model to the corresponding registry, and a test.py file in which you can define tests to ensure your plugin works as expected. If your plugin requires third-party libraries, you can include a setup.py or a requirements.txt file. You can optionally add more files, e.g. to separate the detailed implementations.
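The registration step in __init__.py can be sketched as follows. This is a conceptual, pure-Python stand-in: in a real plugin you would import the registry from the Brain-Score library (e.g. brainscore_vision's model registry) rather than define your own, and the names `model_registry`, `load_my_model`, and `'my-model'` here are illustrative assumptions.

```python
# Conceptual sketch of what a plugin's __init__.py does: add an entry to
# the library-provided registry under a unique identifier.
model_registry = {}  # stand-in for the registry imported from the library

def load_my_model():
    # Placeholder loader: a real one returns a model implementing the API.
    return "my-model instance"

# Entries are registered as callables so the (potentially expensive) model
# load only runs when the model is actually requested for scoring.
model_registry['my-model'] = load_my_model
print(model_registry['my-model']())
```

The accompanying test.py would then exercise the registered plugin, e.g. by loading it from the registry and checking its outputs.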
How do I submit a model?
We have created two sets of tutorials outlining how to submit a new Brain-Score plugin: a Quickstart tutorial and three Deep Dive tutorials. The Quickstart tutorial outlines how to set up your local environment quickly and how to run the pipeline locally. The Deep Dive tutorials provide guidance on how to submit plugins and even include a template plugin file.
What happens to a model after I submit it?
A pull request is opened on the Brain-Score GitHub repository corresponding to the plugin type you submitted (vision or language). If this pull request passes all of the Jenkins tests, the PR will be merged and scoring of your plugin will begin. Once scored, your plugin's Brain-Score will appear on the leaderboard, unless you asked for your model's score to remain private when submitting it.
Am I able to keep my submissions private?
We do not currently support private code submissions -- at the moment, all submitted code is merged into our public GitHub repository -- but this functionality is under development and anticipated to be available soon. An automated option for including private files with your submission is also planned; in the interim, please contact us with any files that need to remain private and we will upload them directly to our AWS S3 bucket. This is a good way to enable model testing on your data without releasing it: your data will not be publicly accessible, but by submitting models, the community can still get a sense of model alignment.
I submitted a model but don't see it in my profile. Why not?
There are a variety of reasons that your model may not be showing up in your profile. The submission and scoring process can vary depending on server availability, so if you haven't received an email indicating an error, it's possible that tests or scoring are still running. As every successful submission should result in a pull request in the relevant GitHub repository (vision or language), you can check to see if there is either an open or closed PR with your model name in the title, which can provide further information about the status of your submission. If you need assistance, you can either create a GitHub issue or reach out via Slack explaining your problem and someone in our developer community will assist you.
I was looking at the code and I found an error in the code/docs/etc. How can I contribute?
We welcome and encourage user participation in our code base! The easiest way to do so is to fork the repository (make a copy of the Brain-Score Github repository locally and/or in your own Github), make the necessary edits there, and submit a pull request (PR) to merge it into our master branch. We will then review and approve the PR, and after that it is merged into our code base.
I really like Brain-Score, and I have some ideas that I would love to talk to someone about. How do I get in touch?
The easiest way to get in touch with us is via our Slack channel. For bugs or feature requests, we encourage you to make an issue here, or send us an email! We will also be creating a mailing list soon, so stay tuned.
Is there any reward for reaching the top overall Brain-Score? Or even a top score on the individual benchmarks?
We sometimes run competitions: we held a model competition in 2022 and a benchmarking competition in 2024. A top Brain-Score result is also a great way to demonstrate your model's strengths and market its value to the community.
Our tutorials and FAQs, created with Brain-Score users, aim to cover all bases. However, if issues arise, reach out to our community or consult the troubleshooting guide below for common errors and solutions.