GPU Simulations of Large-scale Neuronal Networks

Main Researcher: Raphael Y. de Camargo

Nucleus of Cognition and Complex Systems (NCSC)
Center for Mathematics, Computation and Cognition (CMCC)
Federal University of ABC (UFABC)

Large-scale simulations of parts of the brain using detailed neuronal models, which aim to improve our understanding of brain function, are becoming a reality with the use of supercomputers and large clusters. However, the high acquisition and maintenance costs of these computers, including physical space, air conditioning, and electrical power, limit the number of scientists who can perform this kind of simulation. Modern commodity graphics cards based on the CUDA platform contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads, and thus constitute a low-cost solution for many high-performance computing applications.

In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model that neuron. Communication among neurons located in different GPUs is coordinated by the CPU. Compared with a modern quad-core CPU, and using a single computer with two graphics boards containing two GPUs each, we obtained speedups of 40x for the simulation of 200k neurons receiving random external input, and of 9x for a network with 200k neurons and 20M neuronal connections.
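To make the one-thread-per-neuron idea concrete, the sketch below shows how a CUDA kernel can advance a population of Hodgkin-Huxley neurons by one time step, with each thread integrating the coupled membrane and gating equations of a single neuron using a simple forward-Euler scheme. This is a minimal illustration, not the project's actual source code: the names (hhStepKernel, NeuronState), the integration method, and the use of standard squid-axon parameters are all assumptions made for clarity, and synaptic input and multi-GPU exchange (which the CPU would coordinate between steps) are omitted.

    // Minimal sketch: one CUDA thread per Hodgkin-Huxley neuron,
    // advanced with a forward-Euler step. Illustrative only; names and
    // parameters are assumptions, not the project's actual code.
    #include <cuda_runtime.h>
    #include <math.h>

    struct NeuronState {      // one record per neuron (a structure-of-arrays
        float v, m, h, n;     // layout would coalesce better; kept simple here)
    };

    __global__ void hhStepKernel(NeuronState *s, const float *iExt,
                                 int nNeurons, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nNeurons) return;

        // Standard squid-axon parameters (mS/cm^2, mV, uF/cm^2)
        const float gNa = 120.0f, gK = 36.0f, gL = 0.3f;
        const float eNa = 50.0f,  eK = -77.0f, eL = -54.4f;
        const float cm  = 1.0f;

        float v = s[i].v, m = s[i].m, h = s[i].h, n = s[i].n;

        // Voltage-dependent rate functions for the gating variables
        float am = 0.1f * (v + 40.0f) / (1.0f - expf(-(v + 40.0f) / 10.0f));
        float bm = 4.0f * expf(-(v + 65.0f) / 18.0f);
        float ah = 0.07f * expf(-(v + 65.0f) / 20.0f);
        float bh = 1.0f / (1.0f + expf(-(v + 35.0f) / 10.0f));
        float an = 0.01f * (v + 55.0f) / (1.0f - expf(-(v + 55.0f) / 10.0f));
        float bn = 0.125f * expf(-(v + 65.0f) / 80.0f);

        // Ionic currents
        float iNa = gNa * m * m * m * h * (v - eNa);
        float iK  = gK  * n * n * n * n * (v - eK);
        float iL  = gL  * (v - eL);

        // Forward-Euler update of the coupled differential equations
        s[i].v = v + dt * (iExt[i] - iNa - iK - iL) / cm;
        s[i].m = m + dt * (am * (1.0f - m) - bm * m);
        s[i].h = h + dt * (ah * (1.0f - h) - bh * h);
        s[i].n = n + dt * (an * (1.0f - n) - bn * n);
    }

In a multi-GPU run under this scheme, the host would launch one such kernel per GPU for its share of the neurons, then gather the spikes generated in that step and redistribute them to the GPUs holding the target neurons before launching the next step.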

To download the source code, click here. This is an old version that will be updated soon.


Last update: April 15, 2010