Abstract:
Neuroscience is largely founded on experiments employing 10 to 10² subjects. Viewed through so narrow an aperture, the brain can be seen only dimly, requiring ingenuity and imagination to fill in the parts left in shadow. But the conventional experimental approach may also distort our picture of the brain, corrupting it further than any intellectual interpolation could redeem. Most strikingly, models of the brain intelligible with data of this scale are many orders of magnitude simpler than the structural-functional polymorphism the neural substrate allows, leaving most of the space of hypothetical models unexplored and unexplorable. Drawing on analyses of a collection of 10⁶ clinical brain scans, and a series of single-subject direct cortical stimulation studies, here I argue that understanding the functional organisation of the brain optimally requires the combination of very large-scale and single-subject data: the extremes of the current data scale. I show how this can be achieved by integrating neuroscience within clinical data streams, and exploiting novel inferential techniques drawn from the field of deep learning.





