Previous papers we’ve looked at in the body representation reading group have manipulated multi-sensory external stimulation, such as the visible image of a rubber hand and simultaneous stroking of both the rubber and genuine hand, to look at how external stimuli are integrated to form a representation of the body. However, the body representation is formed from internal as well as external information: the brain has access to proprioceptive information (information about joint angle) from neurons attached to muscles and tendons. But as we have seen elsewhere, in the absence of other, typically available sources of input, such as vision, this information can be inaccurate. The rubber hand illusion demonstrates that, accurate or not, proprioceptive input is readily abandoned in favour of other input, perhaps because the other input is more reliable, or perhaps because of a mere accident of design. But people vary in their susceptibility to the rubber hand illusion. If the illusion is the result of integration of internal and external sensory signals, might people with a better sense of internal sensory signals be less susceptible to the rubber hand illusion?
Internal awareness and a robust body representation
Tsakiris and colleagues set out to see, by measuring participants’ interoceptive awareness through their ability to accurately count their heartbeats and comparing this to the extent to which they experienced the rubber hand illusion. They report that the rubber hand illusion had less of an effect (i.e. resulted in less proprioceptive drift) on participants who were categorised as having high interoceptive awareness (above the median level of awareness across all participants) than on those with low interoceptive awareness. There was no difference between the groups on the control version of the experiment, where the rubber hand and the participant’s hand are stroked asynchronously (180° out of phase). They also report that the low-awareness group showed a greater decrease in skin temperature during the illusion, a phenomenon associated with the rubber hand illusion.
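For readers unfamiliar with this kind of analysis: the high/low grouping is just a median split on the interoceptive accuracy scores, followed by a comparison of the illusion measure between the two groups. A minimal sketch of the idea, with invented numbers (not the paper’s data):

```python
from statistics import median, mean

# Invented data (not from the paper): each participant's heartbeat-counting
# accuracy (0-1) paired with their proprioceptive drift (cm).
participants = [
    (0.95, 0.8), (0.90, 1.1), (0.82, 1.5), (0.78, 1.4),
    (0.60, 2.6), (0.55, 3.1), (0.42, 2.9), (0.30, 3.4),
]

# Median split: participants above the median accuracy are 'high awareness'.
cutoff = median(acc for acc, drift in participants)
high = [drift for acc, drift in participants if acc > cutoff]
low = [drift for acc, drift in participants if acc <= cutoff]

print(f"high-awareness mean drift: {mean(high):.2f} cm")
print(f"low-awareness mean drift:  {mean(low):.2f} cm")
```

In the paper itself the group comparison is of course done with ANOVA and t-tests rather than a raw difference of means, but the grouping step is exactly this.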
Issues with the paper
Although the findings seem to support the idea that people with a better sense of internal sensory signals are less susceptible to the rubber hand illusion, I am concerned that this result might not be robust. The effect that interoceptive awareness has on the illusion is small, and although the statistical test conducted initially (an ANOVA) suggested the presence of some difference between groups, follow-up t-tests were not performed for all group-to-group comparisons. Some of those that were conducted only showed significant differences if the thresholds for significance are left unadjusted for multiple comparisons. Linear regressions likewise show a series of small correlations between aspects of the illusion and interoceptive awareness, the smallest of which (change in skin temperature) is significant only when a one-tailed test is used. The authors justify using a one-tailed test on the basis of a hypothesis from another study, but the study cited doesn’t seem to suggest we should expect a change in skin temperature any more than a change in the other variables, which are not subject to one-tailed tests. Further, not all of the comparisons of high- to low-awareness individuals are followed up by a regression: a regression of interoceptive awareness against the initial and strongest measure of the effect of the illusion (proprioceptive drift) is not presented, only a regression against a calculated measure (proprioceptive shift). Likewise, there is a difference between the low- and high-awareness groups on the average rating across eight scales measuring subjective experience of the illusion, but a regression is calculated for only one of the scales. Although this scale has previously been shown to be the most strongly associated with changes in feelings of body ownership during the rubber hand illusion, comparable data for the participants in this paper are not published, nor are regressions for the other scales.
Although these tests might have shown trivial results, their absence should have been discussed, or at least commented upon. Finally, Tsakiris and colleagues switch readily between talking about the effect of the rubber hand illusion on body representation as changes in feelings of body ownership, and as changes in proprioceptive judgements of body position. There is evidence that these are dissociable aspects of the body representation, and it remains to be seen whether interoceptive awareness affects them differently.
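To make the multiple-comparisons worry concrete: a p-value that clears the conventional threshold can fail once the threshold is adjusted for the number of tests performed. A sketch with invented p-values (not the paper’s):

```python
# Invented p-values for illustration: suppose four follow-up comparisons
# each come back just under the conventional 0.05 threshold.
p_values = [0.03, 0.04, 0.045, 0.02]
alpha = 0.05

# Unadjusted: every comparison looks 'significant'.
unadjusted = [p < alpha for p in p_values]

# Bonferroni correction: divide the threshold by the number of comparisons.
# None of the same p-values survives the adjusted threshold (0.0125).
bonferroni = [p < alpha / len(p_values) for p in p_values]

print(unadjusted)   # all True
print(bonferroni)   # all False
```

Bonferroni is the bluntest possible correction, but it illustrates why “significant at p < 0.05” means much less when several comparisons are being run at once.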
Is more theoretical background needed?
Illusions of body representation are almost certainly the result of integrating internal as well as external sources of information. An individual’s susceptibility may well be affected by how much of a contribution internal versus external sources make to their body representation. However, how much contribution interoceptive information makes is not necessarily the same as how aware an individual is of that information – a participant may be acutely aware of weak interoceptive signals, or ignorant of dominant ones. Even if awareness and relative contribution are related, which is not unlikely, it may be that strong internal signals lead to both resistance to body representation illusions and high interoceptive awareness, rather than awareness playing an active modulatory role, as Tsakiris and colleagues suggest. Further, is awareness of one’s heart rate a good indicator of awareness of interoceptive signals in general, including the proprioceptive signals likely to contribute to body representation?
Body representation – an open field
What information the brain uses to form a representation of the body, and how it does so, are interesting questions. How the brain forms a body representation might give us insights into why it does so, how the representation breaks down in conditions such as neglect, whether the body representation is linked to other concepts such as body image, and, if so, whether the body representation is involved in a wider range of conditions such as body dysmorphias. This paper raises interesting points, but I think the results presented do not fully address the points raised, and in any case there are other questions to be answered before those points can be properly addressed.
Tsakiris M, Tajadura-Jiménez A, & Costantini M (2011). Just a heartbeat away from one’s body: interoceptive sensitivity predicts malleability of body-representations. Proceedings of the Royal Society B: Biological Sciences, 278 (1717), 2470-2476. PMID: 21208964
The latest paper from the departmental body representation reading group is here – and it’s free to read! The reading group focuses on how the body is represented by the mind. Key points include how the representation is constructed, how accurate it is, where it is maintained (if a stable representation is maintained at all), whether and how the representation is modified by tool use, and, relevant to this week’s paper, what information is used to construct the representation.
This week’s paper examined whether the absence of sensory and motor feedback from the limbs as a result of spinal cord injury (SCI) affects the body scheme. As well as measuring disruption of the body scheme and a sense of body ownership using the rubber hand illusion (RHI), the paper also looked at whether SCI produces a sense of disembodiment and depersonalisation using the Cambridge Depersonalisation Scale (CDS), as the authors suggest there is increasing evidence that the foundations of the sense of self lie in the systems that represent the body. The authors proposed two hypotheses:
- Mismatch between a pre-existing body model and sensory input causes depersonalisation; thus patients with reduced sensorimotor input due to SCI would have higher depersonalisation scores.
- The rubber hand illusion occurs because the visual perception of a rubber hand being stroked ‘captures’ the tactile perception of your hand being stroked, resulting in the perception that the rubber hand is, in fact, your own, and that your arm is located where the rubber arm is. Thus, patients with reduced somatosensory input will show a stronger effect, as they have to rely more on visual cues to localise affected body parts.
The study involved 16 healthy participants and 30 participants with SCI. SCI participants were grouped by the extent of their impairment (paraplegic – impairment of the lower limbs – or tetraplegic – impairment of all limbs). In a further analysis, participants were grouped on the basis of either complete or reduced tactile sensation on the left hand, regardless of the presence or absence of a lesion.
In line with their prediction, SCI patients had higher depersonalisation scores. However, there were significant differences on only three out of 28 items on the scale. The items were: “parts of my body feel as if they don’t belong to me”, “I have to touch myself to make sure that I have a body or a real existence”, and “I seem to have lost some bodily sensations (e.g. of hunger and thirst) so that when I eat or drink, it feels like an automatic routine”. In my opinion, these questions do not unambiguously indicate depersonalisation, especially the third: these participants had literally lost some bodily sensations. In the absence of significant differences on the other items, I would be reluctant to conclude that SCI patients showed more depersonalisation.
In contrast with their prediction, SCI patients did not show a greater effect of the RHI. There was some variation, but generally:
- Healthy people predominantly experienced the complete illusion (the qualitative perception that the rubber hand was their hand, plus proprioceptive drift)
- Participants with paraplegia often experienced the rubber hand as their own, but did not show proprioceptive drift
- About half of participants with tetraplegia experienced the complete illusion, but about half experienced no aspect of the illusion.
When the amount of proprioceptive drift was examined statistically, healthy participants showed a significant effect of the illusion (more drift after synchronous stroking), participants with tetraplegia showed only a non-significant trend in that direction, and participants with paraplegia showed no difference. However, despite the absence of an effect on this objective measure of the RHI, there was a significant effect on perceived body ownership in all groups.
Depersonalisation – a fair conclusion?
The authors suggested that a mismatch between online sensorimotor input and the cortical sensorimotor representation of the body in SCI results in depersonalisation. If that were the whole story, though, we might have expected a difference between participants with paraplegia and those with tetraplegia, which wasn’t the case. Although a higher lesion was associated with higher scores on item 3, a correlation with one out of 28 items on a depersonalisation scale is scant evidence for a link between lack of input and depersonalisation.
A mixed bag of rubber hands
The picture with the rubber hand illusion is a complex one. Although almost half of the participants with tetraplegia did not experience the illusion, this was not borne out in the statistics, which suggested no difference between them and healthy participants. Further, even though participants with tetraplegia were less likely to report experiencing the illusion, they showed greater proprioceptive drift than participants with paraplegia. This is an interesting finding: it suggests that while removing sensorimotor input does affect the formation of a proprioceptive representation, the accuracy of that representation is independent of subjective feelings of body ownership. More interesting still, participants with paraplegia showed less drift than both participants with tetraplegia and healthy participants.
The authors suggest a possible explanation for the lack of proprioceptive drift in participants with paraplegia: cortical reorganisation. In the absence of lower-limb sensorimotor input to the cortex, the representation of the hand expands to fill the space. A larger (and possibly, as a result, stronger) neural representation is more resistant to the effect of visual input and reduces the likelihood of drift, but has no effect on the illusion of ownership. This is a neat explanation, backed up by other studies that found that less proprioceptive drift was associated with greater neural activity in the primary and secondary somatosensory cortices.
The paper seems to lend weight to the idea that sensorimotor input contributes to the body scheme, but the way in which the input is used is complex, as disrupting that input does not necessarily disrupt the body scheme in predictable ways. Further, there is some suggestion that subjective feelings of ownership of a visible body-part and integration of that body-part into a body scheme are separate constructs.
Lenggenhager B, Pazzaglia M, Scivoletto G, Molinari M, & Aglioti SM (2012). The sense of the body in individuals with spinal cord injury. PLoS ONE, 7 (11). PMID: 23209824 (free to access)
I feel a little bit guilty for coming back to use the blog for my own gain when I’ve not posted anything for a while, but I needed somewhere more permanent than Twitter to ask for advice.
I’m trying to create a way of reliably plotting anatomical data, like recording sites, onto diagrams of brain structures, rather than exporting pages of a PDF atlas as images, then copying and pasting symbols in Illustrator and moving them around until they look like they’re in the right place.
What I had in mind was some bit of code that would take stereotaxic coordinates of points (e.g. recording sites) and an image from a brain atlas and combine the two. The end result will be a bit like this.
I’m learning Matlab at the moment, so I’d planned to stick with that, but it doesn’t seem to deal with importing vector images very easily. My options now seem to be:
- try to plot the recording sites in Matlab at the appropriate scale without the images from the atlas, export the recording sites as a vector image, then combine the two in Illustrator
- move back to R, which seems to handle vectors more readily (pdf guide)
Does anyone have any suggestions? Advice?
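Whatever language I end up with, the core step is the same: an affine mapping from stereotaxic millimetres to the pixel space of the atlas image. A rough sketch of that mapping (shown in Python only for brevity; the calibration numbers are invented – in practice they come from the atlas, i.e. where the stereotaxic origin sits on the exported page and the image’s mm-per-pixel scale):

```python
# Sketch: stereotaxic coordinates (mm) -> pixel coordinates on an atlas image.
# Calibration values below are invented placeholders.
ORIGIN_PX = (400.0, 300.0)   # pixel position of stereotaxic (0, 0) on the image
PX_PER_MM = 20.0             # export resolution: pixels per millimetre

def mm_to_px(ml_mm, dv_mm):
    """Map (mediolateral, dorsoventral) mm to (x, y) pixels.

    Image y runs downward, so a more ventral site gives a larger y value.
    """
    x = ORIGIN_PX[0] + ml_mm * PX_PER_MM
    y = ORIGIN_PX[1] + dv_mm * PX_PER_MM
    return (x, y)

# Recording sites in mm; once converted, they can be scatter-plotted on top
# of the atlas image (e.g. matplotlib's imshow + scatter, or the equivalent in R).
sites_mm = [(1.5, 2.0), (-0.5, 3.2)]
sites_px = [mm_to_px(ml, dv) for ml, dv in sites_mm]
print(sites_px)
```

If the atlas can be exported as a raster image at a known resolution, the vector-import problem goes away entirely: plot the image, overlay the transformed points, and export the whole figure as a vector.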
In the last post, I mentioned the Pinocchio illusion – the illusory feeling that your nose is growing, which results from your brain trying to reconcile the feeling of touching your nose with the feeling of your arm extending (a result of the biceps tendon being stimulated). But the Pinocchio illusion is not just restricted to the nose. Ehrsson and colleagues applied the idea behind the illusion to create the feeling of a shrinking waist, and did so inside an MRI scanner to try to understand what was going on in the brain when the illusion was experienced.
Participants placed their palms against their hips, and then the tendons of the wrist were stimulated. In the same way that stimulating the biceps tendon sends the brain the signal that the arm is extending, stimulating the tendons of the wrist extensor muscle creates the feeling that the wrist is bending inwards. When placed against your hips, the illusion is created that your hips and waist are shrinking.
In fMRI experiments, it is important to isolate the activity that is solely related to the phenomenon you are interested in. In this study, the effect of the shrinking waist illusion on the brain must be separated from the effect of wrist movement, the effect of vibration from stimulation of the tendons, and the effect of the tactile stimulation of placing your palms against your hips. Thus, participants were scanned in 4 combinations of conditions – hands against or away from the body, combined with either tendon stimulation or stimulation away from the tendons.
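I’m not claiming this is exactly the contrast the authors computed, but the standard way to isolate the illusion from a 2×2 design like this is an interaction contrast: the activity explained by the combination of touch and tendon vibration, over and above what each factor contributes on its own. A sketch with invented signal values:

```python
# The 2x2 factorial logic, with invented BOLD signal values (arbitrary units).
# Factors: hands on/off the hips x vibration on/off the tendon.
signal = {
    ("hands_on", "tendon"): 5.0,       # illusion condition
    ("hands_on", "off_tendon"): 2.0,   # touch only
    ("hands_off", "tendon"): 3.0,      # vibration only
    ("hands_off", "off_tendon"): 1.0,  # baseline
}

# Interaction contrast: (illusion - touch only) - (vibration only - baseline).
# A positive value means the combination produces more activity than the sum
# of the two factors alone - the signature of the illusion itself.
interaction = (
    signal[("hands_on", "tendon")]
    - signal[("hands_on", "off_tendon")]
    - signal[("hands_off", "tendon")]
    + signal[("hands_off", "off_tendon")]
)
print(interaction)
```

The same subtraction logic is what lets the authors rule out wrist movement, vibration, and touch as explanations for any activity they attribute to the illusion.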
When the illusion was induced, participants reported feeling their wrists bending inwards and their waists shrinking (but not that their hands were passing through their hips). Participants reported a greater degree of wrist movement when their hands were away from their bodies than when they were touching them, suggesting that although the brain was given the same information, when combined with the low likelihood of a very shrunken waist, the brain instead opted to adjust its perception of how much the wrists had bent.
Results from the fMRI data showed that there was more activity in the left parietal cortex in areas of association cortex, but not motor cortex, or primary or secondary somatosensory cortex. There was a similar trend in the right parietal cortex, but this did not reach statistical significance. In order to confirm the suggestion that regions of the parietal cortex are involved in the illusion, the authors demonstrated that there was a correlation between the perceived strength of the illusion and the strength of the BOLD fMRI signal. However, in describing their method, the authors say that they used a model to search for voxels where activity was related to the illusion, which may have biased their chances of finding an effect (although IANAfMRIresearcher).
FMRI research is sometimes criticised for only telling us where something is happening in the brain, but not much about what is happening or how. However, it is interesting to note that while the paper in the previous post related our perception of the size of our body to distortions of the representation in the primary somatosensory cortical map, this study found no effect of the illusion in primary somatosensory areas. This could be the result of some aspect of experimental variability, or it could mean that the body representation is stored in, or largely based on, the anatomy of the primary somatosensory cortex, with other areas responsible for modifying it when necessary, or monitoring whether it has changed. Regardless of why it is happening, the fact that activity related to potentially different aspects of a phenomenon occurs in different regions of the brain suggests that they might be dissociable aspects, and that what we previously thought of as a unitary phenomenon might, in fact, be a complex, multistage process. Further, if that activity is located in an area of the brain that is presumed to perform only functions unrelated to the task at hand, it might prompt us to reassess whether our view of a particular region of the brain as a “region for X” is actually correct. And surely that can only be a good thing?
Ehrsson, H., Kito, T., Sadato, N., Passingham, R., & Naito, E. (2005). Neural Substrate of Body Size: Illusory Feeling of Shrinking of the Waist. PLoS Biology, 3 (12). DOI: 10.1371/journal.pbio.0030412 (free to access)
Knowing where your body is and what shape it is seems like a pretty essential part of performing tasks involving spatial awareness, which is pretty much everything that involves the outside world. So we must have a pretty reliable and accurate sense of the shape and location of our body parts, right?
Although information about the angle of our joints is available from stretch receptors that tell the brain how much each muscle is extended, there is no corresponding sensory signal reflecting the distance between each joint i.e. how big each part of our body is. Instead, the representation is probably constructed from other information that is available, and this representation is remarkably flexible. Close your eyes, then bend your arm and touch your nose – you’ve got a pretty good sense of where your nose is. Now find a friendly psychologist to stimulate the stretch receptors in your arm by vibrating your biceps tendon. These receptors are usually active when you extend your arm, so driving them gives the feeling of your arm ‘extending’. Combined with the sense of touch from holding onto your nose, this creates the feeling that your nose is moving away from your face or growing longer. This is the Pinocchio Illusion.
So our representation of the body is flexible, but is it accurate when it’s not being actively manipulated? Surprisingly little work has been done to quantify properties of the body representation such as its dimensions, its resolution, its accuracy, how amenable to change it is, or which region(s) of the brain are responsible for its construction and storage, assuming a representation is stored.
One way to study the dimensions of the body representation is by asking participants to indicate the location of bodily ‘landmarks’, and compare them to the actual positions. Longo and Haggard asked participants to indicate the position of the knuckles of their hand while the hand was covered by a board. By comparing positions of landmarks estimated by the participants to their true locations, we can see just how accurate our body representation is.
It turns out it’s a bit poor. Participants underestimated the lengths of their fingers, with the degree of underestimation decreasing from the thumb to the little finger. They also overestimated the spacing between the knuckles, making the representation of their hand shorter and wider than their actual hand. Like all good scientists, Longo and Haggard performed tests to exclude alternative explanations of their results. To make sure the effect wasn’t the result of foreshortening or perspective, participants were asked to make the same judgement when the hand was rotated 90°, but the effect held. The thumb-little finger pattern of distortion was also consistent for both hands, so this was not an effect of direction.
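To be clear about what ‘shorter and wider’ means in practice, the distortion can be quantified as the percentage misestimation of judged versus actual dimensions. A sketch with invented measurements (not Longo and Haggard’s data), following the qualitative pattern they report:

```python
# Invented measurements (cm) illustrating how the distortion is quantified:
# percentage error of judged relative to actual dimensions.
def percent_error(judged, actual):
    return 100.0 * (judged - actual) / actual

# Finger lengths: judged length underestimates the real length, with the
# underestimation largest for the thumb and smallest for the little finger...
thumb = percent_error(judged=4.2, actual=6.0)    # strongly underestimated
little = percent_error(judged=4.5, actual=5.0)   # mildly underestimated

# ...while knuckle spacing is overestimated, giving a shorter, wider hand map.
spacing = percent_error(judged=2.5, actual=2.0)

print(f"thumb: {thumb:.0f}%, little finger: {little:.0f}%, spacing: {spacing:+.0f}%")
```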
The authors suggest that this distortion reflects the representation of the hand in the primary somatosensory cortex, where the thumb and forefinger are overrepresented compared to the other fingers. However, this would suggest that the body map is at least in part determined by cortical anatomy. If this is the case, then practised piano players, who have larger-than-average cortical representations of their hands, might counter-intuitively be expected to have even greater distortion of their body representation, i.e. a poorer sense of where their hands and fingers were. Conversely, we might expect our judgement of the size of our trunk (which has comparatively less representation in the somatosensory cortex) to be a significant underestimate. Given that we don’t regularly get stuck in spaces that are too short or narrow for us, we’re probably pretty good at estimating the size of our trunk. A recent high-resolution MRI study also suggests that the amount of cortex occupied by each finger doesn’t follow the pattern that Longo and Haggard found – this might be because Longo and Haggard obtained a linear thumb-to-little-finger progression by fitting a linear model to a non-linear progression.
If our representation of the body is inaccurate and flexible, that raises some interesting questions:
- We presumably correct our body representation by using other information, most likely vision. But why is the body map distorted to start with?
- Using a tool extends the receptive fields of neurons that are responsive to the end of the hand along the length of the tool. Does this affect our body map? How? If we were asked to indicate the location of our hands and the end of the tool, would the representation of our hand be stretched out along the tool, or will we try to fit the tool into the same space as our hand representation, squashing the length of our hand up even further?
- How does the representation of the body demonstrated here relate to representations of the body in body dysmorphia and related disorders? Can the body representation be modified in a long term way? Is it even the same type of body representation, or is there an affective/cognitive representation of the perceived/ideal body that is separate from the representation we use for spatial awareness?
Longo, M., & Haggard, P. (2010). An implicit body representation underlying human position sense. Proceedings of the National Academy of Sciences, 107 (26), 11727-11732. DOI: 10.1073/pnas.1003483107
Recently I’ve been involved in a reading group looking at papers that investigate the body scheme (or schema). The question behind the reading group is “how does the mind represent the body scheme – a map of the current orientation and position of the body – and how does it locate stimuli on this map?” Occasionally we drift into other areas, but that’s the main thrust of the group. This is the first of what will hopefully become regular posts summarising the papers we read. As the paper is a review paper, this post is probably a bit longer than the others will be.
In their 2004 review, Maravita and Iriki ask: “What happens in our brain when we use a tool to reach for a distant object?” In order to move around and interact with the world, some have suggested that we combine a best guess at the size and shape and position of body parts into an up-to-date representation. Does wielding a tool that extends or modifies our interaction with the world affect our body representation? Maravita and Iriki present three strands of evidence that suggest it does.
Bimodal visual fields are extended by tool use
Single neuron recordings in the intraparietal cortex (where visual and somatosensory information is integrated) found neurons responding to both visual and somatosensory information – ‘bimodal neurons’. Some of these bimodal neurons had visual and somatosensory receptive fields which overlapped – e.g. the neuron responded to tactile contact on the hand, as well as to visual stimulation in the area around the hand, even if the hand was moved around. In some of these neurons, the visual receptive fields expanded to include the entire length of a tool after the monkey had used the tool to retrieve a food reward. In bimodal neurons where the tactile receptive field was, for example, on the shoulder, the visual receptive field extended to the potential reach of the monkey’s arm, or to the reach of the tool after tool use. Bimodal neurons with finger-focused receptive fields were not affected by tool use. Receptive fields were also not affected when the monkey merely passively held the tool.
Interference of visual and tactile stimuli on temporal order decisions with crossed and uncrossed hands and tools
Previous studies have shown that humans are able to distinguish the temporal order of sensory stimuli, even when the interval between stimuli is as small as 30 ms. Participants are, however, slower to respond if distractor stimuli are presented at the same time, e.g. when determining the order of two tactile stimuli while two LEDs flash simultaneously. Further, when the decision of which stimulus came first is between two stimuli presented one to each hand, a distractor stimulus has a greater effect when it is presented at the ipsilateral hand. However, when the hands are crossed the effect is reversed – visual distractors interfere more with localisation of somatosensory stimuli delivered to the anatomically contralateral (although now spatially ipsilateral) hand.
A similar pattern of interference also occurs with tool use. Visual distractors at the tips of tools interfered with reaction times to a greater extent when presented at the same time as ipsilateral tactile stimulation. When the tips of the tools were crossed, but hand position remained the same, the pattern was reversed – visual distractors had a greater effect when presented at the same time as tactile stimulation to the hand holding that tool, i.e. when the tool tip was spatially contralateral but ‘anatomically’ ipsilateral. The interference effect increased with extensive tool use.
Neuropsychological effects of tool use in brain damaged patients
The previous two sections describe neurophysiological changes in intraparietal cortex associated with tool use in macaque monkeys, and behavioural changes in humans. However, the connection between the function of monkey intraparietal cortex and human intraparietal cortex is only putative.
Evidence to support the connection comes from behavioural changes in patients with damage to intraparietal cortex. Maravita and Iriki relate the cases of several patients with damage to intraparietal cortex who had deficits in their ability to interact with the world, but who also showed behavioural changes when using tools that provide insight into body representations.
The damage suffered by patient PP resulted in her neglecting the left side of space, but only in the space close to her body. When asked to mark the midpoint of a line, PP put her mark further towards the right than the actual midpoint. However, when lines were presented out of arm’s reach and PP indicated the midpoint with a laser pointer, she showed no such deficit. Interestingly, when the midpoint of the same distant lines was indicated by PP with a long stick, she showed the same rightward bias as she did when bisecting lines in near space, suggesting that using the tool in some way extended near space, or brought near space towards her. A similar case is also discussed, in which the patient showed errors in bisecting lines except when using a laser pointer, for lines in both near and far space. In this case, reaching with a physical object (finger or stick) rather than making a non-physical indication (with the laser pointer) seems to be the determining factor for impairment, rather than the proximity of the line, as in PP.
Extinction is a condition similar to neglect. Patients who show extinction are able to detect a stimulus presented on the side contralateral to their brain lesion as long as it is presented alone. However, they are unlikely to detect the contralateral stimulus if it is presented at the same time as an ipsilateral stimulus. For patient BV, this meant that when a visual stimulus was presented near the hand ipsilateral to their lesion, they were unlikely to detect a contralesional touch – BV successfully detected the touch just 23% of the time. When the visual stimulus was presented ipsilaterally but further away, BV detected the touch 65% of the time. Interestingly, when BV held a stick that extended out to the visual stimulus, successful detection dropped again to 42%, but not when the stick was laid out on the desk without BV touching it (69% success rate). Even more interestingly, the effects of extinction can be reduced if the stick is held in the contralesional hand touching an ipsilesional visual stimulus – perhaps extending contralateral space ipsilaterally.
Maravita, A., & Iriki, A. (2004). Tools for the body (schema). Trends in Cognitive Sciences, 8 (2), 79-86. DOI: 10.1016/j.tics.2003.12.008
One of the benefits of working in research is that writing a blog post can help me think about the articles I’m reading. So here goes…
Is that my hand?
When you look down at your body, you probably have a fairly strong sense that what you are looking at is actually your body (barring neurological conditions such as somatoparaphrenia, which can cause patients to disown their body parts). But just how reliable is that sense of ownership?
Surprisingly, it doesn’t take much for people to take ownership of limbs that aren’t their own. The rubber hand illusion is quickly becoming a classic method in the psychology of bodily representations. As the name suggests, the experiment involves a rubber hand, which is visible to the participant, while the participant’s own hand is obscured. The experimenter then taps or strokes the visible rubber hand, while simultaneously performing the same action on the participant’s own hand. After a short period of stimulation, the majority of participants report a sense of ownership over the rubber hand, will physically respond if it is threatened, and, when asked to indicate its location without looking, will point towards the location of the rubber hand rather than their own hand.
Rubber Hand Illusion – New Scientist (YouTube video)
One explanation of how the illusion works is that our brain tries to resolve the conflicting information it is getting – it can see a hand being stroked, and feel the stroking, so it puts the two together and assumes the hand is our own. Although we do have a sense of where the body is in space (proprioception), this seems either to be overridden by the other information available, or not to be precise enough to indicate exactly where the arm actually is.
How fake is too fake?
So if our body image is sufficiently flexible that we can take ownership of a fake rubber hand, just how flexible is it? Surprisingly, the limits can be pushed quite far. Matching the rubber hand to the gender of the participant makes the illusion more likely to succeed. The illusion is still effective, although less so, when the model hand is larger than the participant’s actual hand, or impossibly far away, although it can break down if the hand is in a physically implausible position (i.e. bent back at an impossible angle).
How many arms is too many?
The average person grows up with two and only two arms (to the nearest arm), so the rubber hand illusion won’t work with two identical rubber hands, right? Wrong. Stroking two rubber hands at the same time as the participant’s real hand actually caused participants to feel equal ownership over both hands. Much as participants couldn’t tell exactly where their real arm was, the authors of the study suggest that the representation of the body works on probability – there was nothing to suggest that one arm was more likely than the other to be ‘real’, so the brain hedged its bets and incorporated both.
The age of technology – the virtual hand illusion
Early studies investigating body image used rubber hands, but researchers are increasingly using virtual reality to generate more realistic and more complicated illusions – so-called virtual hand illusions. We’re covering one such paper for our reading group next week (Kilteni et al. 2012), so it’s going to get a bit more scrutiny.
The aim of this study was to see whether asymmetric distortion of a paired body part would still produce the illusion – would it hold when there was a ‘normal’ arm to compare it to? The experimenters created an image of a virtual arm, either the same length as the participant’s actual arm or extended to two, three or four times its length. Participants reported that the longer the arm, the less ownership they felt over it. The authors interpret this as more distortion producing a weaker sense of ownership. Although that is probably reasonable, I think other factors might contribute too:
- personal space might contribute to a sense of ownership – when the limb gets further away the sense of ownership might weaken, even if it’s attached. It would be interesting to see if the relationship held if the arm was distorted in other dimensions.
- in all conditions the image changed from a normal body image to a distorted image over two minutes. I would assume that a more rapid change would be more likely to disrupt the illusion, so an arm quadrupling in length in two minutes might be less likely to produce a convincing illusion than one doubling over the same period.
As well as asking participants whether the distorted arm felt like their own, the experimenters threatened it with a simulated saw cutting through the virtual arm, and tracked how much the participant moved their real arm. Again, the greater the distortion, the smaller the effect – in this case, less movement. Although this too could reasonably be interpreted as greater distortion producing a weaker illusion, the saw appeared at the wrist of the distorted arm, which was further away with greater distortion, and this greater distance might have reduced the immediacy of the threat. Also, skin conductance response is often used as a measure of ownership rather than withdrawal – I’m not sure what the pros and cons of each method are.
So what’s the point of it all?
Recently I’ve been involved in a reading group looking at papers that investigate the body scheme (or schema). The question behind the reading group is “how does the mind represent the body schema – a map of the current orientation and position of the body – and how does it locate stimuli on this map?” Occasionally we drift into other areas, but that’s the main thrust of the group. Understanding how the mind maps the body has at least three benefits. First, understanding the function of a system can help us understand dysfunction – like the somatoparaphrenia mentioned at the beginning of the article. Second, our department has a strong robotics group. Being able to describe how the brain maps the body brings us one step closer to putting that ability into a robot. That robot might be an artificial carer for the elderly, or an emergency rescue device navigating unpredictable environments – either would be much more successful in its role if it had an internal sense of where its limbs were as it moved around. And finally, it’s interesting from an academic point of view to know more about how our brains do the things that they do.
Kilteni K, Normand JM, Sanchez-Vives MV, & Slater M (2012). Extending body space in immersive virtual reality: a very long arm illusion. PloS one, 7 (7) PMID: 22829891 (free to access)
According to Wikipedia, afterimages are “an optical illusion that refers to an image continuing to appear in one’s vision after the exposure to the original image has ceased”. The most common afterimage people experience is probably seeing dark spots after looking at a bright light, or seeing a naturally coloured image after staring at an inverted-colour version of the image.
United States flag with colours inverted. From Wikipedia.
Visual illusions are useful in that they tell us something about how the visual system works. Negative afterimages are thought to be the result of the light-sensitive neurons in the retina adapting to an unchanging input. When the eyes are moved from the image to a blank page, the adapted neurons transmit a weak signal, but non-adapted neurons are still responsive and send out a strong signal. In the case of the US flag above, the green stripes tire out the green receptors, producing a complementary red afterimage. However, negative afterimages aren’t entirely driven by retinal activity.
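As a rough intuition for “complementary”, you can think of inverting each colour channel, as in the inverted flag image above. The snippet below is only a crude sketch – real afterimage colours arise from cone adaptation and opponent processing, not from simple 8-bit RGB arithmetic.

```python
def invert_rgb(rgb):
    """Complement of an 8-bit RGB colour: 255 minus each channel.
    A crude stand-in for the complementary hue of a negative afterimage."""
    return tuple(255 - c for c in rgb)

# A flag-like green stripe, roughly (0, 128, 0), complements to a
# magenta-red, in line with the reddish stripes seen in the afterimage.
print(invert_rgb((0, 128, 0)))  # (255, 127, 255)
```

This is also exactly how the inverted-flag stimulus itself can be generated: invert every pixel, stare, then look at a blank page.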
Filling in: non-retinal afterimages
If afterimages were solely due to neurons in the retina adapting to a stimulus, negative afterimages could only consist of features present in the original stimulus. A study by Shimojo and colleagues demonstrated that stimuli that produce illusory contours, like the Varin illusion (panel A below), can produce a standard negative afterimage (panel B, left) with the local features of the original image intact, or negative afterimages in which the illusory contours of the implied shape are filled in (panel B, centre and right). As illusory contours are not in the original image, they are probably the result of a process of perceptual adaptation in the cortex, rather than a process of physical adaptation in the retina. (The paper also reports further experiments, manipulating the stimulus and the afterimage, which demonstrate that the process is the result of a global afterimage rather than the filling in of illusory contours in a local afterimage.)
Afterimages formed by adaptation to the color filling-in configuration. From Shimojo et al. (2001) Science, 293:1677-1680
Seeing edges that aren’t there
The Shimojo paper is a good demonstration of how we ‘fill in’ what we see with what we expect. But if there is a stimulus present, and there is nothing to fill in, do afterimages still have an effect? Apparently so. Participants exposed to a particular polygon as a stimulus often reported an afterimage that was neither the same shape as the original image nor blurred, but a different, distinct polygon. A recent paper by Hiroyuki Ito investigated the phenomenon. Ito had noticed that participants who had adapted to hexagons often reported a circular afterimage, and vice versa. He offered two hypotheses – 1) the filling-in hypothesis: the afterimages are the result of the background encroaching on the edges of the shape. This effect would be present for filled shapes, but for unfilled shapes an encroaching background would erase the edges of the shape rather than distorting its outline. 2) the line approximation hypothesis: the afterimages are the result of a cortical process of approximating a shape as a series of straight lines. This effect would produce hexagonal afterimages from circular stimuli, but hexagonal stimuli should retain their shape.
The results showed that when viewing unfilled circles, hexagonal afterimages predominated, suggesting that filling in was not responsible for the afterimage. However, when viewing unfilled hexagons, both hexagonal and circular afterimages were typical, and circular afterimages were more common than after viewing a circle. When rotating shapes were presented, circular afterimages after viewing the hexagon were even more dominant.
After images reported after viewing stationary and rotating circles and hexagons. From Ito (2011) Psychological Science, 23: 126-132.
The change in shape strongly suggests that the afterimage is the result of cortical processes. Primary visual cortex contains a significant number of “orientation selective” neurons that respond best to lines of a particular orientation. The line approximation hypothesis suggests that the approximation of a curved line by these neurons, and their subsequent adaptation, may produce the hexagonal afterimages. However, the circular afterimages produced by hexagonal stimuli are harder to explain. It may be that orientation-selective neurons responding to the outline of the hexagon adapt, resulting in a perceived shape that is a combination of all the other orientations, which looks approximately like a circle. Alternatively, there may be some adaptation in a part of the visual system dealing with more abstract curve and corner recognition.
Inducing visual after images without a before image…
Afterimages result from prolonged viewing of a stimulus, both for low-level and for high-level stimuli, and the characteristics of the afterimage are usually complementary or opposite to those of the initial stimulus. But it is possible to induce a visual afterimage with no physical similarities to the original stimulus. When an androgynous face is presented, its perceived gender can be manipulated by preceding it with an exaggerated male or female face – the androgynous face appears more feminine in contrast with a male face. However, Ghuman and colleagues were able to induce a gender afterimage on an androgynous face by preceding it with pictures of gendered human bodies without faces. Adaptation and afterimages from simple stimulus properties such as colour and movement have long been known, and recent work has shown that the phenomenon extends to higher-level properties such as object shape and face properties. The research presented by Ghuman demonstrates that visual afterimages need not be related to stimulus properties at all – face after-effects can result from non-face stimuli. The effect initially appears to be an afterimage driven by adaptation to the concept of a gender; however, the effect was not found when pictures of “gender connotative” objects were used (football helmets, purses), even though participants reported being aware that the objects were intended to be gender-specific. The authors suggest that perhaps there is a distinction between viewing intrinsically gendered pictures (of bodies) and viewing pictures of items that are culturally gendered.
The study offers an interesting insight into the process of visual adaptation. It would be interesting to see which neurons are active during adaptation to non-face stimuli – is the adaptation the result of activation of a distinct gender-representation network, or are bodies and faces represented in the same place? On a more conceptual level, what other categories could be activated to produce afterimages? Is the distinction between the inherent gender of bodies and the abstract gender of culturally gendered items valid? And can we ever trust our vision?
Shimojo S, Kamitani Y, & Nishida S (2001). Afterimage of perceptually filled-in surface. Science, 293 (5535), 1677-80 PMID: 11533495
Ito H (2012). Cortical shape adaptation transforms a circle into a hexagon: a novel afterimage illusion. Psychological science, 23 (2), 126-32 PMID: 22207643
Ghuman, AS, McDaniel, JR, & Martin, A (2010). Face Adaptation Without a Face Current Biology, 20 (1), 32-36 DOI: 10.1016/j.cub.2009.10.077
Different types of neurons are generally distinguished from one another by one or more salient properties – their location in the brain (e.g. thalamic interneurons); their shape (medium spiny neurons); their activity (fast spiking neurons); or the neurotransmitter they release (e.g. dopaminergic neurons).
Although traditionally neurons were considered to release only one neurotransmitter, there is increasing evidence that some neurons can release multiple neurotransmitters. Part of the evidence is that the different types of cellular machinery needed to produce different neurotransmitters have been found in the same cell. This could have been a biological accident – proteins for production of the ‘wrong’ neurotransmitter being made in error. However, if this behaviour were accidental, I’d expect it to happen either uniformly or randomly throughout the brain. What I wouldn’t expect from an accidental process is a selective pattern of distribution. But this is what we find.
Some of the networks in the brain rely on coordinated signals transmitted by release of dopamine and glutamate, and some dopaminergic neurons have been shown to also release glutamate. However, the process isn’t well understood. A paper from researchers at UCSF demonstrates that only a select population of dopamine neurons in the midbrain also release glutamate – those that project to the shell of the nucleus accumbens, but not those that project to the dorsal striatum. The researchers used optogenetic techniques to make glutamate-releasing neurons light-sensitive. When neurons identified as dopaminergic were stimulated with light, only those that projected to the shell of the nucleus accumbens released glutamate. The release of glutamate was detected by recording the activity of neurons in the nucleus accumbens and dorsal striatum that are usually sensitive to glutamate, called medium spiny neurons (MSNs). When the dopamine neurons were stimulated with light, the MSNs’ activity increased. The change in activity wasn’t seen when a drug was applied that blocks the effect of glutamate on MSNs. The effect also wasn’t seen in genetically modified mice that lacked a part of the cellular machinery that produces and releases glutamate.
Confirmation that neurons can release multiple neurotransmitters is important in itself. However, this study raises further questions. The ability of neurons to activate both glutamate- and dopamine-receptive neurons has implications for the ability to respond to salient, rewarding or pleasant stimuli. The lack of this ability in the dorsal striatum, which receives substantial glutamate input from other regions of the brain, is also interesting – does glutamate corelease from dopamine neurons in the nucleus accumbens serve a similar function to the glutamate release the dorsal striatum receives from elsewhere, or is it unrelated?
Stuber GD, Hnasko TS, Britt JP, Edwards RH, & Bonci A (2010). Dopaminergic terminals in the nucleus accumbens but not the dorsal striatum corelease glutamate. The Journal of neuroscience : the official journal of the Society for Neuroscience, 30 (24), 8229-33 PMID: 20554874
“Are you blind?! That was never offside!” The man in the stands has spotted something and the ref’s visual ability is called into question. But just how well do referees see? And do poor referees have bad eyesight? Science to the rescue!
Two papers from the same lab assessed the visual abilities of referees. Acuity (the ability to see small detail) wasn’t measured, but this perhaps isn’t that important for refereeing. Instead, the visual skills of the men in black were assessed by measuring rates of repeated accommodation (maintaining focus at different distances), large and small saccades (eye movements), object recognition (usually of basic shapes) and peripheral vision. In the first paper, experienced referees from the Iranian top-flight football league were found to have significantly better visual skills on all measures when compared to both novice referees (from an adolescent league) and non-athletes. There were no significant differences between novice referees and non-athletes. Although there was no intervention or any attempt to establish cause and effect, there were also no real differences between the spread of abilities of novice referees and non-athletes, suggesting that there are no especially skilled referees at the novice level who go on to referee at higher levels. Instead, improved visual skills are likely to develop with experience.
That covers experience, but what about when experienced referees get decisions wrong? Are their eyes actually performing worse? The second study suggests the answer is yes. It used the same measures of visual skill as the first paper, plus visual memory, and compared the visual skills of experienced referees who made correct decisions after watching clips of football matches to those who made incorrect decisions; the successful referees actually had better visual skills. Poorer visual skills aren’t necessarily the whole reason for poor decision making – mistakes might also be due to lapses of attention or the influence of players’ appeals – but they certainly play a part.
So it turns out that professional referees, on the whole, actually see better than the general population. Poor decision making, it seems, can be blamed on poor vision. But optometrism is the last refuge of the scoundrel, as they might say at the Referee’s Association.
Ghasemi, A., Momeni, M., Rezaee, M., & Gholami, A. (2009). The Difference in Visual Skills Between Expert Versus Novice Soccer Referees. Journal of Human Kinetics, 22, 15-20. DOI: 10.2478/v10078-009-0018-1
Ghasemi A, Momeni M, Jafarzadehpur E, Rezaee M, & Taheri H (2011). Visual skills involved in decision making by expert referees. Perceptual and motor skills, 112 (1), 161-71 PMID: 21466089