Let's Jump Right In

The Brain-Score platform allows users to score models on public data via the command line on your own machine. It also allows scoring models on all data (public and private) via the website.

We highly recommend you complete this quickstart before trying to submit a model to the site. Not only will the quickstart show you what to expect from a score, but it will better prepare you to submit a plugin and get a score on all benchmarks!


#Create and activate a new conda environment
conda create -y -n myenv python=3.11
conda activate myenv

#Install packages
git clone https://github.com/brain-score/vision.git
cd vision
python -m pip install --upgrade pip
python -m pip install -e .
            

Step 1: Install Packages

To use Brain-Score on your machine, you first need to install it. We recommend setting up a virtual environment for all your Brain-Score projects, but this isn't required. If you need to install Conda, you can find instructions here: Conda Website

Run the commands shown above to create and activate a conda environment and to install the required Brain-Score packages and dependencies.
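
If you want to confirm that the installation succeeded before moving on, an optional quick check (not part of the official quickstart steps) is to import the package from the command line:


python -c "import brainscore_vision; print('brainscore_vision imported successfully')"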

Step 2: Run a Model on a Benchmark

Next, make sure your working directory is the vision repository you just cloned, and run the command below to score the model pixels on the publicly available data of a benchmark called MajajHong2015public.IT-pls. In this command, pixels is a Brain-Score "model identifier" and MajajHong2015public.IT-pls is a Brain-Score "benchmark identifier". You can use the same command to run other models on the MajajHong2015public.IT-pls benchmark by changing the model identifier; for example, try changing pixels to alexnet. In the same way, you can change the benchmark identifier to run pixels on other benchmarks; for example, try changing MajajHong2015public.IT-pls to Ferguson2024circle_line-value_delta.

You can find a comprehensive collection of available models in the brainscore_vision/models directory, and you can find the model identifier for each model in its __init__.py file. Be aware that several of the models in this collection can be very memory intensive. For example, the resnet50 model will run on a 2024 M3 MacBook Pro with 32 GB of RAM, but will fail on the same machine with 16 GB.

Similarly, you can find a comprehensive collection of available benchmarks in the brainscore_vision/benchmarks directory, and you can find the benchmark identifier(s) for each benchmark in its __init__.py file.
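
If you prefer not to browse these directories by hand, the following shell sketch lists the registered identifiers from the repository root. It assumes each plugin registers its identifiers through model_registry[...] or benchmark_registry[...] assignments in its __init__.py, which may not hold for every plugin:


#List the model plugin directories
ls brainscore_vision/models
#Search for registered model identifiers (assumes model_registry[...] assignments)
grep -h "model_registry\[" brainscore_vision/models/*/__init__.py
#Likewise for benchmark identifiers
grep -h "benchmark_registry\[" brainscore_vision/benchmarks/*/__init__.py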


python brainscore_vision score --model_identifier='pixels' --benchmark_identifier='MajajHong2015public.IT-pls'
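

Following the substitutions suggested above, you could, for example, score alexnet on the same benchmark, or score pixels on the Ferguson2024 benchmark:


python brainscore_vision score --model_identifier='alexnet' --benchmark_identifier='MajajHong2015public.IT-pls'
python brainscore_vision score --model_identifier='pixels' --benchmark_identifier='Ferguson2024circle_line-value_delta'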
    

Upon scoring completion, you should see output like the example below (shown here for the pixels run on MajajHong2015public.IT-pls), indicating what the score is.


<xarray.Score ()>
array(0.07637264)
Attributes:
    error:                 <xarray.Score ()>\narray(0.00548197)
    raw:                   <xarray.Score ()>\narray(0.22545106)\nAttributes:\...
    ceiling:               <xarray.DataArray ()>\narray(0.81579938)\nAttribut...
    model_identifier:      pixels
    benchmark_identifier:  MajajHong2015public.IT-pls
    comment:               layers: {'IT': 'pixels'}


Process finished with exit code 0
    

Let’s break down what these numbers mean. First, your score is 0.07637264, the first number in the first xarray listed. Next, you can see a few other attributes: error, with value 0.00548197, which represents the error of the score estimate; raw, with value 0.22545106, which represents the unceiled score that your model achieved on the MajajHong2015public.IT-pls benchmark; and ceiling, with value 0.81579938, which is the highest score a perfect model is expected to get.

In this case, the MajajHong2015public.IT-pls benchmark uses the standard NeuralBenchmark, which ceiling-normalizes with explained variance, (r(X, Y) / r(Y, Y))^2. More specifically, the benchmark class evaluates how well a brain model matches neural activity in a specific brain region during visual tasks. It does this by comparing the model's outputs to actual recorded brain data, using explained variance as the measure of similarity. The code then "ceils" the final score, adjusting it by the benchmark's ceiling, i.e. the highest score a perfect model is expected to achieve given the reliability of the data. Essentially, it checks how closely a model's predictions align with real brain data and then calculates a final score (from the ceiling and raw scores) that reflects that similarity. (The code for this can be found here).
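
As a quick sanity check (purely illustrative, not part of the official workflow), you can reproduce the final score above from the raw and ceiling values using that formula:


#Ceiling-normalized score = (raw / ceiling)^2
python -c "print((0.22545106 / 0.81579938) ** 2)"
#prints approximately 0.0764, matching the reported score of 0.07637264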

There is also more metadata listed in this score object, such as model_identifier, benchmark_identifier, and comment.

Please note that compute times may vary; running on a 2024 MacBook with an M2 Pro chip takes up to 20 minutes.

Further Learning Resources

If you would like to know more about Brain-Score, please visit our Deep Dive series! These are guided tours that walk you through how to put Brain-Score to work for you.

In Deep Dive 1, we will provide resources for new users on the background and philosophy of Brain-Score.

In Deep Dive 2, we will cover the submission package. You can use this as a formatting guide for your own future submissions.

Finally, in Deep Dive 3, we will walk through what a custom model submission looks like, and how to submit one via either the website or a GitHub PR.

Optional: Scoring a Language Model

The process for scoring a language model is very similar. First, install the needed packages as in Step 1 above, but change every occurrence of vision to language, i.e. brainscore_vision becomes brainscore_language.
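
As a sketch (assuming the language repository mirrors the layout of the vision repository used in Step 1), the installation becomes:


#Install the Brain-Score language packages into your environment
git clone https://github.com/brain-score/language.git
cd language
python -m pip install -e .


Next, call the language equivalent of the vision scoring command above: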


python brainscore_language score --model_identifier='distilgpt2' --benchmark_identifier='Futrell2018-pearsonr'
        

In this case, we are calling the brainscore_language library to score the language model distilgpt2 on the language benchmark Futrell2018-pearsonr.

Stuck?

Our tutorials and FAQs, created with Brain-Score users, aim to cover all bases. However, if issues arise, reach out to our community or consult the troubleshooting guide below for common errors and solutions.

Something Not Right?

If you come across any bugs, please feel free to submit an Issue on GitHub. One of our team members will be happy to investigate.