Sat at a desk late at night, typing away at another blog post, something brushes against you in the dark. Urgh. Where did it go? You’re fairly sure it ran down your arm. At least, you felt it at your elbow, then towards your wrist. It felt like whatever it was crossed the space in between as well, so it must have been running down your arm, right?
Or at least: not necessarily. It is usually a safe assumption that a series of touches that are close together in time and space was produced by something moving along the body. However, this assumption means the brain can be fooled into mislocalising sensations of touch. Rapidly tapping first at one location on the body and then at another produces phantom sensations of taps at locations in between – an illusion called the cutaneous rabbit illusion, as the feeling is likened to a rabbit hopping along between the two locations (first described by Geldard and Sherrick). This illusion (and others such as the colour phi phenomenon) suggests that we perceive an interpretation of events that is constructed after they have happened. How that interpretation of events is constructed remains an unanswered question.
Experiments investigating the mechanisms behind the cutaneous rabbit illusion suggested that the somatotopic map in primary somatosensory cortex (S1) was strongly involved – the size of the ‘jumps’ is correlated with receptive field size in S1, and the illusion does not cross the midline of the body, which is consistent with the division of representations of the left and right sides of the body in S1. Studies using fMRI also found a correlation between activity in S1 and experiencing the illusion. Thus, if the illusion were dependent solely on activity in S1 representations, we would expect phantom sensations to be restricted to areas of the body that have receptive fields in S1, and not to be felt in objects outside the body.
Recent research by Miyazaki and colleagues has shown, however, that the cutaneous rabbit can “hop out of the body” and be felt along objects external to the body. When a sequence of taps was delivered to subjects’ outstretched index fingers, they reported sensing the taps at their fingertips. In contrast, when a stick was laid across their index fingers, subjects reported sensing illusory taps in the space between the stimulus locations – along the stick. It is well known that the representation of the body (the body scheme) can be temporarily extended to incorporate external objects such as tools. Thus, it seems as though the cutaneous rabbit illusion may occur along the stick because it has been incorporated into the body scheme.
However, explaining the findings of the study as the result of incorporating the object into the body scheme raises a question: where does this incorporation take place? Although the cutaneous rabbit illusion is associated with activity in S1, the incorporation of objects into the body scheme has been thought to take place primarily in representations of the body in other regions of cortex, such as the intraparietal cortex (although see Theorin and Johansson for evidence that S1 might be involved).
Perhaps our understanding of the cutaneous rabbit illusion or our understanding of which regions are involved in the plastic body scheme needs to change. But what we do know is that when something brushes against your leg in the night, you can never be 100% sure of where it’s gone…
Miyazaki M, Hirashima M, & Nozaki D (2010). The “cutaneous rabbit” hopping out of the body. The Journal of neuroscience : the official journal of the Society for Neuroscience, 30 (5), 1856-60 PMID: 20130194 (free to access)
Previous papers we’ve looked at in the body representation reading group have manipulated multi-sensory external stimulation, such as the visible image of a rubber hand and simultaneous stroking of both the rubber and genuine hand, to look at how external stimuli are integrated to form a representation of the body. However, the body representation is also formed from internal as well as external information: the brain has access to proprioceptive information (information about joint angle) from neurons attached to muscles and tendons. But as we have seen elsewhere, in the absence of other, typically available sources of input, such as vision, this information can be inaccurate. The rubber hand illusion demonstrates that, accurate or not, proprioceptive input is readily abandoned in favour of other input, perhaps because the other input is more reliable, or perhaps because of a mere accident of design. But people vary in their susceptibility to the rubber hand illusion. If the illusion is the result of integration of internal and external sensory signals, might people with a better sense of internal sensory signals be less susceptible to the rubber hand illusion?
Internal awareness and a robust body representation
Tsakiris and colleagues set out to find out, by measuring participants’ interoceptive awareness through their ability to accurately count their heartbeats, and comparing this to the extent to which they experienced the rubber hand illusion. They report that the rubber hand illusion had less of an effect (i.e. resulted in less proprioceptive drift) on participants who were categorised as having high interoceptive awareness (above the median level of awareness of all the participants) than on those with low interoceptive awareness. There was no difference between the groups on the control version of the experiment, where the rubber hand and the participant’s hand are stroked asynchronously (180° out of phase). They also report that the low interoceptive awareness group showed a greater decrease in skin temperature during the illusion, a phenomenon associated with the rubber hand illusion.
Issues with the paper
Although the findings seem to support the idea that people with a better sense of internal sensory signals are less susceptible to the rubber hand illusion, I am concerned that this result might not be robust. The effect of interoceptive awareness on the illusion is small, and although the initial statistical test (an ANOVA) suggested the presence of some difference between groups, follow-up t-tests were not performed for all group-to-group comparisons, and some of those that were conducted only showed significant differences if the thresholds for significance are left unadjusted for multiple comparisons. Linear regressions likewise show a series of small correlations between aspects of the illusion and interoceptive awareness, the smallest of which (change in skin temperature) is significant only when a one-tailed test is used. The authors justify using a one-tailed test because of a hypothesis based on another study, but the study cited doesn’t seem to suggest we should expect a change in skin temperature any more than a change in the other variables, which are not subject to one-tailed tests. Further, not all of the comparisons of high- to low-awareness individuals are followed up by a regression: a regression of interoceptive awareness against the initial and strongest measure of the effect of the illusion (proprioceptive drift) is not presented, only a regression against a calculated measure (proprioceptive shift). Likewise, there is a difference between low- and high-awareness groups on the average rating across eight scales measuring subjective experience of the illusion, but a regression is calculated for only one of the scales. Although this scale has previously been shown to be most strongly associated with changes in feelings of body ownership during the rubber hand illusion, comparable data for the participants in this paper are not published, and neither are regressions for the other scales.
Although these tests might have shown trivial results, their absence should have been discussed, or at least commented upon. Finally, Tsakiris and colleagues switch readily between talking about the effect of the rubber hand illusion on body representation as changes in feelings of body ownership, and as changes in proprioceptive judgements of body position. There is evidence that these are dissociable aspects of the body representation, and it remains to be seen whether interoceptive awareness affects them differently.
Is more theoretical background needed?
Illusions of body representation are almost certainly the result of integrating internal as well as external sources of information. An individual’s susceptibility may well be affected by how much of a contribution internal versus external sources make to their body representation. However, how much contribution interoceptive information makes is not necessarily the same as how aware an individual is of that information – a participant may be acutely aware of weak interoceptive signals, or ignorant of dominant signals. Even if awareness and relative contribution are related, which is not an unlikely possibility, it may be that strong internal signals lead to both resistance to body representation illusions and high interoceptive awareness, rather than awareness playing an active modulatory role, as Tsakiris and colleagues suggest. Further, is awareness of one’s heart rate a good indicator of awareness of interoceptive signals in general, including the proprioceptive signals likely to contribute to body representation?
Body representation – an open field
What information the brain uses to form a representation of the body and how it does so are interesting questions. How the brain forms a body representation might give us insights into why it does so, how the representation breaks down in conditions such as neglect, whether the body representation is linked to other concepts such as body image, and if so, whether the body representation is involved in a wider range of conditions such as body dysmorphias. This paper raises interesting points, but I think the results presented do not fully address the points raised, and in any case there are other questions to be answered before those points can be properly addressed.
Tsakiris M, Tajadura-Jiménez A, & Costantini M (2011). Just a heartbeat away from one’s body: interoceptive sensitivity predicts malleability of body-representations. Proceedings of the Royal Society B: Biological Sciences, 278 (1717), 2470-6. PMID: 21208964
The latest paper from the departmental body representation reading group is here – and it’s free to read! The reading group focuses on how the body is represented by the mind. Key points include how the representation is constructed, how accurate it is, where it is maintained (if a stable representation is maintained at all), whether and how the representation is modified by tool use, and, relevant to this week’s paper, what information is used to construct the representation.
This week’s paper examined whether the absence of sensory and motor feedback from the limbs as a result of spinal cord injury (SCI) affects the body scheme. As well as measuring disruption of the body scheme and a sense of body ownership using the rubber hand illusion (RHI), the paper also looked at whether SCI produces a sense of disembodiment and depersonalisation using the Cambridge Depersonalisation Scale (CDS), as the authors suggest there is increasing evidence that the foundations of the sense of self lie in the systems that represent the body. The authors proposed two hypotheses:
- Mismatch between a pre-existing body model and sensory input causes depersonalisation; thus, patients with reduced sensorimotor input due to SCI would have higher depersonalisation scores.
- The rubber hand illusion occurs because the visual perception of a rubber hand being stroked ‘captures’ the tactile perception of your hand being stroked, resulting in the perception that the rubber hand is, in fact, your own, and that your arm is located where the rubber arm is. Thus, patients with reduced somatosensory input will show a stronger effect, as they have to rely more on visual cues to localise affected body parts.
The study involved 16 healthy participants and 30 participants with SCI. SCI participants were grouped by impairment (paraplegic – impairment of the lower limbs – or tetraplegic – impairment of all limbs). In further analysis, participants were grouped on the basis of either complete or reduced tactile sensation in the left hand, regardless of the presence or absence of a lesion.
In line with their prediction, SCI patients had higher depersonalisation scores. However, there were significant differences in scores on only three out of 28 items on the scale. The items were: “parts of my body feel as if they don’t belong to me”, “I have to touch myself to make sure that I have a body or a real existence”, and “I seem to have lost some bodily sensations (e.g. of hunger and thirst) so that when I eat or drink, it feels like an automatic routine”. In my opinion, these questions do not seem to unambiguously indicate depersonalisation, especially question three: participants had literally lost some bodily sensations. In the absence of significant differences on the other items, I would be reluctant to conclude that SCI patients showed more depersonalisation.
In contrast with their prediction, SCI patients did not show a greater effect of the RHI. There was some variation, but generally:
- Healthy people predominantly experienced the complete illusion (the qualitative perception that the rubber hand was their hand, and proprioceptive drift)
- Participants with paraplegia often experienced the rubber hand as their own, but did not show proprioceptive drift
- About half of participants with tetraplegia experienced the complete illusion, but about half experienced no aspect of the illusion.
When the amount of proprioceptive drift was examined statistically, healthy participants showed a significant effect of the illusion (more drift after synchronous stroking), participants with tetraplegia showed only a non-significant trend in that direction, and participants with paraplegia showed no difference. However, despite the absence of an effect on the objective measure of the RHI, there was a significant effect on perceived body ownership in all groups.
Depersonalisation – a fair conclusion?
The authors suggested that a mismatch between online sensorimotor input and the cortical sensorimotor representation of the body in SCI results in depersonalisation. While broadly plausible, if this were the case we might have expected a difference between participants with paraplegia and those with tetraplegia, which wasn’t what was found. Although a higher lesion level was associated with higher scores on item 3, a correlation with one out of 28 items on a depersonalisation scale is scant evidence for a link between lack of input and depersonalisation.
A mixed bag of rubber hands
The picture with the rubber hand illusion is a complex one. Although almost half of the participants with tetraplegia did not experience the illusion, this was not borne out in the statistics, which suggested no difference between them and healthy participants. Further, even though participants with tetraplegia were less likely to report experiencing the illusion, they showed greater proprioceptive drift than participants with paraplegia. This is an interesting finding, as it suggests that while removing sensorimotor input does affect the formation of a proprioceptive representation, the accuracy or inaccuracy of this representation is independent of subjective feelings of body ownership. However, there is a more interesting finding still: participants with paraplegia showed less drift than both participants with tetraplegia and healthy participants.
The authors suggest a possible explanation for the lack of proprioceptive drift in participants with paraplegia: cortical reorganisation. In the absence of lower limb sensorimotor input to the cortex, the representation of the hand expands to fill the space. A larger (and possibly as a result stronger) neural representation is more resistant to the effect of visual input, and reduces the likelihood of drift, but has no effect on the illusion of ownership. This is a neat explanation, backed up by other studies that found that less proprioceptive drift was associated with greater neural activity in the primary and secondary somatosensory cortices.
The paper seems to lend weight to the idea that sensorimotor input contributes to the body scheme, but the way in which the input is used is complex, as disrupting that input does not necessarily disrupt the body scheme in predictable ways. Further, there is some suggestion that subjective feelings of ownership of a visible body-part and integration of that body-part into a body scheme are separate constructs.
Lenggenhager B, Pazzaglia M, Scivoletto G, Molinari M, & Aglioti SM (2012). The sense of the body in individuals with spinal cord injury. PloS one, 7 (11) PMID: 23209824 (free to access)
I feel a little bit guilty for coming back to use the blog for my own gain when I’ve not posted anything for a while, but I needed somewhere more permanent than Twitter to ask for advice.
I’m trying to create a way of reliably plotting anatomical data like recording sites onto diagrams of brain structures, rather than exporting pages of a PDF atlas as images, then copying and pasting symbols in Illustrator and moving them around until they look like they’re in the right place.
What I had in mind was some bit of code that would take stereotaxic coordinates of points (e.g. recording sites) and an image from a brain atlas and combine the two. The end result will be a bit like this.
I’m learning Matlab at the moment, so I’d planned to stick with that, but it doesn’t seem to deal with importing vector images very easily. My options now seem to be:
- try to plot the recording sites in Matlab at the appropriate scale without the images from the atlas, export the recording sites as a vector image, then combine the two in Illustrator
- move back to R, which seems to handle vectors more readily (pdf guide)
Does anyone have any suggestions? Advice?
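For concreteness, here’s a minimal sketch of the sort of thing I have in mind, in Python with matplotlib (the function, its labels, and its arguments are placeholders of my own – it assumes the atlas page has been rasterised into an array, and that the stereotaxic coordinates of the image edges are known):

```python
import matplotlib.pyplot as plt

def plot_sites(atlas_img, extent, sites, out_path):
    """Overlay recording sites on one atlas section.

    atlas_img : 2-D array, the rasterised atlas page
    extent    : (left, right, bottom, top) stereotaxic coordinates
                of the image edges, in mm
    sites     : iterable of (x, y) stereotaxic coordinates, in mm
    out_path  : file to save to (.svg or .pdf keeps the markers vector)
    """
    fig, ax = plt.subplots()
    # extent maps pixel space onto stereotaxic space, so the sites
    # can then be plotted directly in mm
    ax.imshow(atlas_img, cmap="gray", extent=extent, origin="lower")
    xs, ys = zip(*sites)
    ax.scatter(xs, ys, marker="o", facecolors="none", edgecolors="red")
    ax.set_xlabel("ML (mm)")
    ax.set_ylabel("DV (mm)")
    fig.savefig(out_path)
    plt.close(fig)
```

In the saved SVG/PDF the site markers stay vector even though the atlas layer is a bitmap, which might be an acceptable compromise.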
Phrases like “right hand man” (perhaps with its roots in Christian mythology, with Jesus sitting at God’s right hand) and “having two left feet”, and the derivation of “dextrous” and “sinister” from the Latin words for right and left, all indicate a deep association between direction and value. We tend to associate good with things on the right, and bad with the left. When I say ‘we’, I perhaps refer only to people who share my cultural upbringing, although the association might be independent of culture – perhaps the result of a minority of left-handed people being seen as ‘them’, compared to the right-handed ‘us’. Interestingly, left-handed people often show a reversal of this association – for them, good is on the left. Does using the left hand change the association? And how fixed is the connection between hand use and the right-left/good-bad association?
Casasanto and Chrysikou looked at these questions. To investigate people’s subconscious associations, we can ask them to indicate the presence of good things with a button on the left and bad things with a button on the right: they will react more slowly than when the assignment is reversed, suggesting that they are fighting against some preexisting association. Alternatively, we can ask people to group objects into good and bad, and see whether they are more likely to align a value with a particular side. The latter is what Casasanto and Chrysikou did.
They found that right-handed stroke patients whose left hand had been affected by the stroke associated good with right. However, those whose right-hand had been affected by the stroke, forcing them to use their left, were more likely to associate good with left. However, a stroke can have effects beyond forcing a person to use a particular hand, so Casasanto and Chrysikou repeated the study with healthy participants who had performed a task with either their right or left hand, while the other hand was handicapped. Those who had performed a task with their left hand were more likely to associate good with left, suggesting that fluent motor manipulation, even for a few minutes, was sufficient to disrupt preexisting associations.
Maybe there is hope for a new generation of political collaboration between left and right – we just need everyone to become ambidextrous. Perhaps the Houses of Parliament should offer juggling lessons?
Casasanto D, & Chrysikou EG (2011). When left is “right”. Motor fluency shapes abstract concepts. Psychological science, 22 (4), 419-22 PMID: 21389336
Using maps to locate sensory events
Our brains and the brains of other animals have sensory maps. For vision, one of the maps is of our visual field, with neighbouring locations on the map corresponding to neighbouring locations in our visual field. For touch, regions of one of the maps correspond to different parts of the body (an illustration of this map is called a sensory ‘homunculus’ – little person). Physical contact with a part of the body produces activity in the corresponding area of the map. These maps enable us to identify where in the world or on our bodies a sensory event has occurred, allowing us to look at, or to move towards or away from, whatever caused the event. However, a map on its own is not sufficient. If we have just grabbed hold of something interesting, we want to look at our hands to see what we have grabbed, but we won’t be able to do that unless we know where our hands are. To figure this out, we have to be able to combine information about contact on the body with information about the current position of body parts. Common sense might suggest that it would be useful to continuously track our body position, should we need to react quickly. However, two papers covered in our reading group suggest that the process of localisation takes time, and that rushing the process produces some interesting mistakes.
Crossed arms impairs stimulus order judgement
Average response rates at different interstimulus intervals for stimulation of uncrossed (black) and crossed (red) hands. Responses were less accurate at smaller interstimulus intervals, with a greater effect of interval in the crossed condition.
Yamamoto and Kitazawa (2001) asked participants to judge the order of stimulation applied to the index finger of each hand. When the participants had their arms uncrossed, they were able to successfully judge the order of stimulation at least 80% of the time, even when the interval between stimulations was as little as 70 ms. At shorter intervals, participants were less successful. Participants’ responses were impaired when they were asked to make the same judgements with their arms crossed. Looking at the averaged response of the sample, participants were only able to accurately identify the order of stimulation ~50% of the time, even at intervals of around 300 ms. This suggests that something about crossing their hands was confusing the order judgement. Judgements were also slower than when hands were uncrossed, suggesting that some additional process occurs in that body position.
Response rates of one participant at different interstimulus intervals for stimulation of uncrossed (black) and crossed (red) hands. When hands were crossed, small interstimulus intervals led the participant to reliably misidentify the order of stimulation.
However, more interesting was the behaviour of some individual participants. At intervals of less than 300 ms, these participants weren’t just unable to reliably identify the order of stimulation – they reliably indicated the opposite order of stimulation. This reliable misidentification suggests that there may be a default bodily orientation, i.e. uncrossed arms. The delay in judgements with crossed vs uncrossed arms may therefore be the time it takes for information about body position to be updated, or combined with stimulation information. At short interstimulus intervals, sensory information arrives before knowledge of the body position has been established.
Crossed arms impairs stimulus localisation
Traces of saccades to stimulation applied to uncrossed and crossed hands when the experimental design was modified to produce a greater number of early saccades. More turn around saccades were produced when arms were crossed.
Overvliet et al. (2011) support the idea that it takes time for the current body position to be established. Like Yamamoto and Kitazawa, they presented stimulation on crossed or uncrossed hands (although they stimulated only one hand and asked participants to saccade – make an eye movement – towards the stimulated hand). Also like Yamamoto and Kitazawa, they found that responses were slower when participants had their hands crossed vs uncrossed. They also found that some saccades began in the wrong direction, but were then corrected midflight (called ‘turn around saccades’ – TASs). These TASs started earlier on average than saccades in the correct direction to crossed hands (248 ms vs 319 ms), but at a similar time to correct saccades to uncrossed hands (227 ms). When the experiment was modified to produce more early saccades, there was a corresponding increase in the number of TASs. The greater number of errors in early saccades and the later start of correct saccades to crossed hands support the idea that some sort of ‘adjustment’ was applied to the representation of where body parts were located. The case is made even stronger by the similarity of the timing of the turn around in TASs (332 ms) to the start of correct saccades. It seems that the adjustment process happens at some point between the start of the TASs at 248 ms and the start of saccades in the correct direction at 319 ms. Actions that are delayed until after this adjustment process are performed correctly, and those that start before it are in error, but can be adjusted along the way.
A stable body representation?
This inability to locate or judge the order of sensory stimulation is an interesting curiosity, but it raises questions – why is this happening? Is the ‘adjustment’ the result of existing body part information being combined with sensory maps, or is body part location calculated only when it is needed? If the latter is true, why don’t we maintain a map? Is it just too much effort, or is there some benefit to this flexibility? Finally, Yamamoto and Kitazawa found that the effect occurred only when arms crossed over each other, and not when the hands were similarly positioned with arms uncrossed. What is it about crossing our arms that confuses our bodily representation?
Yamamoto S, & Kitazawa S (2001). Reversal of subjective temporal order due to arm crossing. Nature neuroscience, 4 (7), 759-65 PMID: 11426234
Overvliet KE, Azañón E, & Soto-Faraco S (2011). Somatosensory saccades reveal the timing of tactile spatial remapping. Neuropsychologia, 49 (11), 3046-52 PMID: 21782835
In the last post, I mentioned the Pinocchio illusion – the illusory feeling that your nose is growing that results from your brain trying to reconcile the feeling of touching your nose with the feeling of your arm extending (a result of the biceps tendons being stimulated). But the Pinocchio illusion is not just restricted to the nose. Ehrsson and colleagues applied the idea behind the illusion to create the feeling of a shrinking waist, and did so inside of an MRI scanner to try and understand what was going on in the brain when the illusion was experienced.
Participants placed their palms against their hips, and then the tendons of the wrist were stimulated. In the same way that stimulating the biceps tendon sends the brain the signal that the arm is extending, stimulating the tendons of the wrist extensor muscles creates the feeling that the wrist is bending inwards. With the palms placed against the hips, this creates the illusion that your hips and waist are shrinking.
In fMRI experiments, it is important to isolate the activity that is solely related to the phenomenon you are interested in. In this study, the effect of the shrinking waist illusion on the brain must be separated from the effect of wrist movement, the effect of vibration from stimulation of the tendons, and the effect of the tactile stimulation of placing your palms against your hips. Thus, participants were scanned in 4 combinations of conditions – hands against or away from the body, combined with either tendon stimulation or stimulation away from the tendons.
When the illusion was induced, participants reported feeling their wrists bending inwards and their waists shrinking (but not that their hands were passing through their hips). Participants reported a greater degree of wrist movement when their hands were away from their bodies than when they were touching them, suggesting that although the brain was given the same information, when combined with the low likelihood of a very shrunken waist, the brain instead opted to adjust its perception of how much the wrists had bent.
Results from the fMRI data showed that there was more activity in the left parietal cortex in areas of association cortex, but not motor cortex, or primary or secondary somatosensory cortex. There was a similar trend in the right parietal cortex, but this did not reach statistical significance. In order to confirm the suggestion that regions of the parietal cortex are involved in the illusion, the authors demonstrated that there was a correlation between the perceived strength of the illusion and the strength of the BOLD fMRI signal. However, in describing their method, the authors say that they used a model to search for voxels where activity was related to the illusion, which may have biased their chances of finding an effect (although IANAfMRIresearcher).
fMRI research is sometimes criticised for only telling us where something is happening in the brain, but not telling us much about what is happening or how. However, it is interesting to note that while the paper in the previous post related our perception of the size of our body representation to distortions in representation in the primary somatosensory cortical map, this study found no effect of the illusion in primary somatosensory areas. This could be the result of some aspect of experimental variability, or it could mean that the body representation is stored in, or largely based on, the anatomy of primary somatosensory cortex, but that other areas are responsible for modifying it when necessary, or monitoring whether it has changed. Regardless of why it is happening, the fact that activity related to potentially different aspects of a phenomenon happens in different regions of the brain suggests that they might be dissociable aspects, and that what we previously thought of as a unitary phenomenon might, in fact, be a complex, multistage process. Further, if that activity is located in an area of the brain that is presumed to perform only functions unrelated to the task at hand, it might prompt us to reassess whether our view of a particular region of the brain as a “region for X” is actually correct. And surely that can only be a good thing?
Ehrsson, H., Kito, T., Sadato, N., Passingham, R., & Naito, E. (2005). Neural Substrate of Body Size: Illusory Feeling of Shrinking of the Waist PLoS Biology, 3 (12) DOI: 10.1371/journal.pbio.0030412 (free to access)
Knowing where your body is and what shape it is seems like a pretty essential part of performing tasks involving spatial awareness, which is pretty much everything that involves the outside world. So we must have a pretty reliable and accurate sense of the shape and location of our body parts, right?
Although information about the angle of our joints is available from stretch receptors that tell the brain how much each muscle is extended, there is no corresponding sensory signal reflecting the distance between each joint i.e. how big each part of our body is. Instead, the representation is probably constructed from other information that is available, and this representation is remarkably flexible. Close your eyes, then bend your arm and touch your nose – you’ve got a pretty good sense of where your nose is. Now find a friendly psychologist to stimulate the stretch receptors in your arm by vibrating your biceps tendon. These receptors are usually active when you extend your arm, so driving them gives the feeling of your arm ‘extending’. Combined with the sense of touch from holding onto your nose, this creates the feeling that your nose is moving away from your face or growing longer. This is the Pinocchio Illusion.
So our representation of the body is flexible, but is it accurate when it’s not being actively manipulated? Surprisingly little work has been done to quantify properties of the body representation such as its dimensions, its resolution, its accuracy, how amenable to change it is, or which region(s) of the brain are responsible for its construction and storage, assuming a representation is stored.
One way to study the dimensions of the body representation is to ask participants to indicate the location of bodily ‘landmarks’ and compare these judgements with the actual positions. Longo and Haggard asked participants to indicate the positions of the knuckles and fingertips of their hand while the hand was covered by a board. By comparing the positions estimated by the participants with their true locations, we can see just how accurate our body representation is.
It turns out it’s a bit poor. Participants underestimated the lengths of their fingers, with the degree of underestimation increasing from the thumb to the little finger. They also overestimated the spacing between the knuckles, making the represented hand shorter and wider than their actual hand. Like all good scientists, Longo and Haggard ran tests to exclude alternative explanations of their results. To make sure the effect wasn’t the result of foreshortening or perspective, participants made the same judgements with the hand rotated 90°, and the effect held. The thumb-to-little-finger pattern of distortion was also consistent across both hands, so it was not an effect of direction.
The authors suggest that this distortion reflects the representation of the hand in the primary somatosensory cortex, where the thumb and forefinger are overrepresented compared to the other fingers. However, this would suggest that the body map is at least in part determined by cortical anatomy. If that is the case, then practised piano players, who have larger than average cortical representations of their hands, might counter-intuitively be expected to show even greater distortion of their body representation, i.e. a poorer sense of where their hands and fingers are. Conversely, we might expect our judgement of the size of our trunk (which has comparatively little representation in somatosensory cortex) to be significantly underestimated. Given that we don’t regularly get stuck in spaces that are too short or narrow for us, we’re probably pretty good at estimating the size of our trunk. A recent high-resolution MRI study also suggests that the amount of cortex occupied by each finger doesn’t follow the same pattern that Longo and Haggard found – this might be because Longo and Haggard obtained their linear thumb-to-little-finger progression by fitting a linear model to what is actually a non-linear progression.
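That last point – that a straight-line fit can manufacture a smooth gradient out of a step-then-plateau progression – is easy to demonstrate. Here is a minimal sketch in Python; the underestimation percentages are invented for illustration and are not taken from either study:

```python
import numpy as np

# Hypothetical underestimation of finger length (% of true length) for
# digits 1-5 (thumb -> little finger). Invented numbers: most of the
# change happens between the thumb and index finger, then it plateaus.
digits = np.arange(1, 6)
underestimation = np.array([10.0, 22.0, 26.0, 27.0, 27.5])

# A straight-line fit still reports a tidy positive slope...
slope, intercept = np.polyfit(digits, underestimation, 1)
print(f"slope = {slope:.2f} % per digit")           # 4.00

# ...even though the residuals show the progression isn't linear:
residuals = underestimation - (slope * digits + intercept)
print(f"largest residual = {residuals.max():.2f}")  # 3.50
```

If the true relationship only separated the thumb from the rest of the fingers, a fitted slope like this would still look like a graded effect across all five digits.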
If our representation of the body is inaccurate and flexible, that raises some interesting questions:
- We presumably correct our body representation by using other information, most likely vision. But why is the body map distorted to start with?
- Using a tool extends the receptive fields of neurons responsive to the end of the hand along the length of the tool. Does this affect our body map? How? If we were asked to indicate the locations of our hands and the end of the tool, would the representation of our hand be stretched out along the tool, or would we try to fit the tool into the same space as our hand representation, squashing the represented length of our hand even further?
- How does the representation of the body demonstrated here relate to representations of the body in body dysmorphia and related disorders? Can the body representation be modified in a long term way? Is it even the same type of body representation, or is there an affective/cognitive representation of the perceived/ideal body that is separate from the representation we use for spatial awareness?
Longo, M., & Haggard, P. (2010). An implicit body representation underlying human position sense. Proceedings of the National Academy of Sciences, 107(26), 11727-11732. DOI: 10.1073/pnas.1003483107
Recently I’ve been involved in a reading group looking at papers that investigate the body scheme (or schema). The question behind the reading group is “how does the mind represent the body scheme – a map of the current orientation and position of the body – and how does it locate stimuli on this map?” Occasionally we drift into other areas, but that’s the main thrust of the group. This is the first of what will hopefully become regular posts summarising the papers we read. As the paper is a review, this post is probably a bit longer than the others will be.
In their 2004 review, Maravita and Iriki ask: “What happens in our brain when we use a tool to reach for a distant object?” Some have suggested that, in order to move around and interact with the world, we combine best guesses at the size, shape and position of our body parts into an up-to-date representation. Does wielding a tool that extends or modifies our interaction with the world affect our body representation? Maravita and Iriki present three strands of evidence suggesting it does.
Bimodal visual fields are extended by tool use
Single neuron recordings in the intraparietal cortex (where visual and somatosensory information is integrated) found neurons responding to both visual and somatosensory information – ‘bimodal neurons’. Some of these bimodal neurons had overlapping visual and somatosensory receptive fields – e.g. a neuron responded to tactile contact on the hand as well as to visual stimulation in the area around the hand, even as the hand moved around. In some of these neurons, the visual receptive field expanded to include the entire length of a tool after the monkey had used the tool to retrieve a food reward. In bimodal neurons whose tactile receptive field was, e.g., on the shoulder, the visual receptive field extended to the potential reach of the monkey’s arm, or to the reach of the tool after tool use. Bimodal neurons with finger-focused receptive fields were not affected by tool use, and receptive fields were also unaffected when the monkey merely passively held the tool.
Interference of visual distractors with temporal order judgements on crossed and uncrossed hands and tools
Previous studies have shown that humans can distinguish the temporal order of sensory stimuli even when the interval between them is as small as 30 ms. Participants are, however, slower to respond if distractor stimuli are presented at the same time, e.g. when determining the order of two tactile stimuli while two LEDs flash simultaneously. Further, when the decision is between two stimuli presented one to each hand, a distractor stimulus has a greater effect when it is presented at the ipsilateral hand. When the hands are crossed, however, the effect is reversed – visual distractors interfere more with localisation of somatosensory stimuli delivered to the anatomically contralateral (although now spatially ipsilateral) hand.
A similar pattern of interference also occurs with tool use. Visual distractors at the tips of tools interfered with reaction times to a greater extent when presented at the same time as ipsilateral tactile stimulation. When the tips of the tools were crossed but the hand positions remained the same, the pattern was reversed – visual distractors had a greater effect when presented at the same time as tactile stimulation to the hand holding that tool, i.e. when the tool tip was spatially contralateral but ‘anatomically’ ipsilateral. The interference effect increased with extensive tool use.
Neuropsychological effects of tool use in brain damaged patients
The previous two sections describe neurophysiological changes in intraparietal cortex associated with tool use in macaque monkeys, and behavioural changes in humans. However, the connection between the function of monkey intraparietal cortex and human intraparietal cortex is only putative.
Evidence to support the connection comes from behavioural changes in patients with damage to intraparietal cortex. Maravita and Iriki relate the cases of several patients with damage to intraparietal cortex who had deficits in their ability to interact with the world, but who also showed behavioural changes when using tools that provide insight into body representations.
The damage suffered by patient PP resulted in her neglecting the left side of space, but only in the space close to her body. When asked to mark the midpoint of a line, PP placed her mark further towards the right than the actual midpoint. However, when lines were presented out of arm’s reach and PP indicated the midpoint with a laser pointer, she showed no such deficit. Interestingly, when PP indicated the midpoint of the same distant lines with a long stick, she showed the same rightward bias as when bisecting lines in near space, suggesting that using the tool in some way extended near space, or brought the distant lines into near space. A similar case is also discussed, in which the patient made bisection errors except when using a laser pointer, for lines in both near and far space. In this case, reaching with a physical object (finger or stick) rather than making a non-physical indication (with the laser pointer) seems to be the determining factor for impairment, rather than proximity as in PP’s case.
Extinction is a condition similar to neglect. Patients with extinction can detect a stimulus presented on the side contralateral to their brain lesion as long as it is presented alone; however, they are unlikely to detect that contralateral stimulus if it is presented at the same time as an ipsilateral stimulus. For patient BV, this meant that when a visual stimulus was presented near the hand ipsilateral to the lesion, a contralesional touch was unlikely to be detected – BV successfully detected the touch just 23% of the time. When the visual stimulus was presented ipsilaterally but further away, BV detected the touch 65% of the time. Interestingly, when BV held a stick that extended out to the visual stimulus, successful detection dropped to 42%, but not when the stick was laid on the desk without BV touching it (69% success rate). Even more interestingly, the effects of extinction could be reduced if the stick was held in the contralesional hand and touched an ipsilesional visual stimulus – perhaps extending contralateral space ipsilaterally.
Maravita, A., & Iriki, A. (2004). Tools for the body (schema). Trends in Cognitive Sciences, 8(2), 79-86. DOI: 10.1016/j.tics.2003.12.008
One of the benefits of working in research is that writing a blog post can help me think about the articles I’m reading. So here goes…
Is that my hand?
When you look down at your body, you probably have a fairly strong sense that what you are looking at is actually your body (barring neurological conditions such as somatoparaphrenia, which can cause patients to disown their body parts). But just how reliable is that sense of ownership?
Surprisingly, it doesn’t take much for people to take ownership of limbs that aren’t their own. The rubber hand illusion is quickly becoming a classic method in the psychology of bodily representations. As the name suggests, the experiment involves a rubber hand, which is visible to the participant, while the participant’s own hand is obscured. The experimenter then taps or strokes the visible rubber hand, while simultaneously performing the same action on the participant’s own hand. After a short period of stimulation, the majority of participants report a sense of ownership over the rubber hand, will physically respond if it is threatened, and when asked to indicate its location without looking, will point towards the location of the rubber hand rather than their own hand.
Rubber Hand Illusion – New Scientist (YouTube video)
One explanation of how the illusion works is that our brain tries to resolve the conflicting information it is getting – it can see a hand being stroked, and feel the stroking, so it puts the two together and adopts the hand as our own. Although we do have a sense of where our body is in space (proprioception), this seems either to be overridden by the other information available, or not to be precise enough to indicate exactly where the arm actually is.
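One standard way of formalising this "puts the two together" idea is reliability-weighted cue combination: each cue is weighted by the inverse of its variance, so the more reliable cue dominates. A minimal sketch – the positions and variances below are invented for illustration, not taken from any rubber hand study:

```python
def combine(x_vision, var_vision, x_proprio, var_proprio):
    """Maximum-likelihood estimate from two Gaussian cues: each cue
    is weighted by its inverse variance (i.e. its reliability)."""
    w_v = 1.0 / var_vision
    w_p = 1.0 / var_proprio
    return (w_v * x_vision + w_p * x_proprio) / (w_v + w_p)

# Seen (rubber) hand at 0 cm, felt (real) hand at 15 cm. If vision is
# far more reliable than proprioception (variance 1 vs 25), the
# combined estimate lands close to the rubber hand:
estimate = combine(0.0, 1.0, 15.0, 25.0)
print(f"perceived hand position: {estimate:.2f} cm")  # 0.58

# With equally reliable cues, the estimate would sit halfway between:
midpoint = combine(0.0, 1.0, 15.0, 1.0)  # 7.5
```

On this view, vision doesn't so much "override" proprioception as outvote it: a vague proprioceptive signal still contributes, just with very little weight.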
How fake is too fake?
So if our body image is sufficiently flexible that we can take ownership of a fake rubber hand, just how flexible is it? Surprisingly, the limits can be pushed quite far. Matching the rubber hand to the gender of the participant makes it more likely the illusion will be successful. The illusion is still effective, although less so, when the model hand is larger than the participant’s actual hand, or impossibly far away, although it can break down if the hand is in a physically implausible position (i.e. bent back at an impossible angle).
How many arms is too many?
The average person grows up with two and only two arms (to the nearest arm), so the rubber hand illusion won’t work with two identical rubber hands, right? Wrong. Stroking two rubber hands at the same time as the participant’s real hand actually caused participants to feel equal ownership over both rubber hands. Similar to the way that participants weren’t able to tell where their real arm was, the authors of the study suggest that the body representation works on probability – there was nothing to suggest that one hand was more likely than the other to be ‘real’, so the brain hedged its bets and incorporated both.
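The "hedged its bets" idea can be sketched as a toy Bayesian update: when two candidate hands generate identical evidence, the posterior splits evenly between them. All the likelihoods and priors here are invented purely for illustration:

```python
def posterior(likelihoods, priors):
    """Normalise likelihood x prior into posterior probabilities."""
    joint = [l * p for l, p in zip(likelihoods, priors)]
    total = sum(joint)
    return [j / total for j in joint]

# Both rubber hands stroked in sync with the real hand: the observed
# visuo-tactile correlation is equally likely under either candidate,
# so ownership splits evenly between them.
even = posterior([0.9, 0.9], [0.5, 0.5])    # [0.5, 0.5]

# If one hand's stroking lagged the felt touch, its likelihood would
# drop and ownership would shift towards the other hand.
uneven = posterior([0.9, 0.3], [0.5, 0.5])  # ~[0.75, 0.25]
```

The interesting feature is that nothing forces the model to pick a winner: with perfectly matched evidence, "both hands, equally" is the rational answer.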
The age of technology – the virtual hand illusion
Early studies investigating body image used rubber hands, but researchers are increasingly using virtual reality to generate more realistic and more complicated illusions – so-called virtual hand illusions. We’re covering one such paper for our reading group next week (Kilteni et al. 2012), so it’s going to get a bit more scrutiny.
The aim of this study was to see whether asymmetric distortion of a paired body part would still produce the illusion – would it hold when there was a ‘normal’ arm to compare to? The experimenters created an image of a virtual arm, either the same length as the participant’s actual arm, or extended to two, three or four times that length. Participants reported that the longer the arm, the less ownership they felt over it. The authors offer the interpretation that more distortion produces a weaker sense of ownership. Although I think this is probably a reasonable interpretation, I think other factors might contribute too:
- personal space might contribute to a sense of ownership – when the limb gets further away the sense of ownership might weaken, even if it’s attached. It would be interesting to see if the relationship held if the arm was distorted in other dimensions.
- in all conditions the image changed from a normal body image to a distorted one over two minutes. I would assume that a more rapid change would be more likely to disrupt the illusion, so quadrupling in length over two minutes, versus doubling, might be less likely to produce a convincing illusion.
As well as asking participants whether the distorted arm felt like their own, the experimenters had a simulated saw cut through the virtual arm and tracked how much participants moved their real arm. Again, the greater the distortion, the smaller the effect – in this case, less movement. And again, although this could reasonably be interpreted as greater distortion producing a weaker illusion, the saw appeared at the wrist of the distorted arm, which was further away under greater distortion. This greater distance might have reduced the immediacy of the threat. Also, skin conductance response is often used as a measure of ownership rather than withdrawal movement – I’m not sure what the pros and cons of each method are.
So what’s the point of it all?
Understanding how the mind maps the body has at least three benefits. First, understanding the function of a system can help us understand its dysfunction – like the somatoparaphrenia mentioned at the beginning of this article. Second, our department has a strong robotics group, and being able to describe how the brain maps the body brings us one step closer to building that ability into a robot. That robot might be an artificial carer for the elderly, or an emergency rescue device that has to navigate unpredictable environments; either would be much more successful in its role if it had an internal sense of where its limbs were as it moved around. And finally, it’s interesting from an academic point of view to know more about how our brains do the things that they do.
Kilteni, K., Normand, J.-M., Sanchez-Vives, M. V., & Slater, M. (2012). Extending body space in immersive virtual reality: a very long arm illusion. PLoS ONE, 7(7). PMID: 22829891 (free to access)