The main aim of my research is to understand the cognitive mechanisms supporting language comprehension. Given how quickly and effortlessly we understand language through visual and spoken input, I am interested in identifying the specific processing stages that unfold over time to make this possible. The tools I use to investigate the dynamics of language comprehension are magnetoencephalography (MEG) and electroencephalography (EEG), which provide information about brain activity at a millisecond level.

    One strand of my research examines how we identify word structure during word comprehension. A key feature of the human language system is the ability to assemble linguistic features into new combinations, allowing us to convey countless concepts with a limited set of units. New forms can be produced by combining words (e.g. dark or talk) with affixes such as {-ness} and {-ed}, creating new words (darkness, talked) with changed linguistic functions. Crucially, this gives us the ability to generate completely novel words such as Trumpism or misunderestimate that are easily understood because the affixes ({-ism} and {mis-}) carry meaningful information.

    Behavioural and neuroimaging results from my PhD showed converging evidence for an early and automatic analysis of word structure, resulting in segmentation of visual input into linguistic substrings (taking apart words like darkness into their constituent parts, dark-ness). This demonstrates that a critical stage in word processing is identifying meaningful substrings, a process that takes place approximately 100 ms before information about word meaning has been accessed. This stage could be dissociated both from earlier processing of low-level visual features and from later processing of semantic information linked to accessing lexical representations.

    The second research stream, the focus of my postdoctoral work, uses MEG to detail the underlying organisation of the language system in both the visual and auditory domains. This work primarily uses representational similarity analysis (RSA), a powerful methodology for relating neural activity to specific models and hypotheses about the properties that are coded in different brain regions. Within my research on visual word recognition, I have focused on early stages of word reading related to letter identification. A central component of reading is the ability to identify the relative position of each letter within a word in order to retrieve the appropriate meaning (tap as opposed to pat). This research shows that within 200 milliseconds of perceiving a visual input, processing in visual brain areas reveals sensitivity to the identity of letters (e.g. a word beginning in b-a as opposed to c-h) but not yet to the meaning of a word. This selectivity for letter identity is also specific to letter position, demonstrating that these regions preserve abstract letter position information that is crucial for later access to word meaning.
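The core logic of RSA can be illustrated with a minimal sketch: build a representational dissimilarity matrix (RDM) from pairwise comparisons of neural response patterns, build a second RDM from a model's predictions, and rank-correlate the two. The data below are random stand-ins, not results from the work described here; the stimulus and sensor counts are arbitrary choices for illustration.

```python
# Minimal RSA sketch: relate a "neural" RDM to a model RDM.
# All data are simulated placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Simulated responses: 10 stimuli x 50 sensors at one time point.
neural_patterns = rng.standard_normal((10, 50))

# Neural RDM: 1 - Pearson correlation between each pair of stimulus
# patterns, returned as a condensed vector of the upper triangle.
neural_rdm = pdist(neural_patterns, metric="correlation")

# Hypothetical model RDM, e.g. predicted dissimilarities based on
# letter identity or word meaning (random here for illustration).
model_features = rng.standard_normal((10, 5))
model_rdm = pdist(model_features, metric="euclidean")

# Relate brain to model: rank correlation between the two RDMs.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-brain RDM correlation: rho={rho:.2f}, p={p:.3f}")
```

In practice this comparison would be repeated at each time point of the MEG response, tracing when a given model (e.g. letter identity versus word meaning) begins to explain the neural data.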

    Recent publications:

    Giordano, B. L., Whiting, C. M., Kriegeskorte, N., Kotz, S. A., Belin, P., & Gross, J. (2018). From categories to dimensions: spatio-temporal dynamics of the cerebral representations of emotion in voice. bioRxiv. doi: 10.1101/265843

    Whiting, C. M., Cowley, R. G., & Bozic, M. (2017). The role of semantic context in early morphological processing. Frontiers in Psychology, 8. doi: 10.3389/fpsyg.2017.00991

    Whiting, C. M., Shtyrov, Y., & Marslen-Wilson, W. D. (2015). Real-time functional architecture of visual word recognition. Journal of Cognitive Neuroscience, 27(2), 246-265. doi: 10.1162/jocn_a_00699

    Whiting, C. M., Marslen-Wilson, W. D., & Shtyrov, Y. (2013). Neural dynamics of inflectional and derivational processing in spoken word comprehension: laterality and automaticity. Frontiers in Human Neuroscience, 7. doi: 10.3389/fnhum.2013.00759

    Whiting, C. M. (2011). Spatiotemporal dynamics of morphological and lexical processing in the brain (Doctoral dissertation, University of Cambridge).