A visionary look at sight
Like many young adults, after earning his first university degree, Mel Goodale traveled to Europe to “find himself”. He failed miserably. Instead, he spent a lot of time wandering around the U.K., taking odd jobs and living in damp apartments with dubious roommates.
“I decided that perhaps I should go back to school,” he says. That decision marked a turning point in his life. Goodale returned to the University of Calgary, his alma mater, to study psychology under Dr. Rod Cooper, a student of noted neuropsychologist Donald Hebb. Cooper was studying how the brain interprets visual signals.
“I became completely captivated,” says Goodale. “As a consequence, I've spent the last 45 years studying the visual system.”
His interest is purely “curiosity-driven”. The lure of basic science led him to seek answers to fundamental questions of how the brain takes information that arises in the eye and turns it into wonderful, sensible representations of our world.
Sight and movement
Vision not only allows us to see the world; it enables us to move through it with ease.
Early in his career, while working as a postdoctoral fellow at the University of Oxford in England, Goodale became curious about how the brain uses visual information to help us interact with our world: how it enables us, for example, to reach out, pick up and handle objects.
Through a series of experiments, he began to realize that the visual pathways in the brain that allow us to see the world are distinct from those that control our movements.
The first insights for Goodale and long-time colleague David Milner came from a young woman (D.F.) with carbon monoxide-induced brain damage. D.F. proved to be the Rosetta stone for their paradigm-shifting hypothesis. Although able to discern an object’s colour and texture, she could not recognize its form or orientation. In fact, she was so impaired that she couldn’t tell whether you were holding a pencil upright, horizontally or on a slant.
“One day, when she was being tested, she said, ‘Let me see that’, and reached out to grab the pencil,” Goodale recalls. “Quite remarkably, her hand oriented in flight. She was able to use orientation information to control her movements.”
Experiments with other patients who could see orientation but were unable to grasp objects correctly led Goodale and Milner to theorize that different areas of the brain control perception and action.
The distinction between vision for perception and vision for action was a radical concept at the time. It has influenced a shift in thinking about the organization of the brain’s visual system. Today, based on this research, neuroscientists believe that a ventral visual pathway leads to perception, while a dorsal pathway controls action.
Echolocation for the blind
Bats use echolocation to fly and catch their prey. Blind people can use it to walk, ride a bike and even swim.
Goodale took “a big change of direction” to study how the brain’s visual system adapts in a world without vision. His interest in echolocation “fell out of a conversation” with a friend and ophthalmologist about Californian Daniel Kish.
At 18 months of age, having lost both eyes to cancer, Kish taught himself to find his way around by clicking his tongue against the roof of his mouth (his palate) and then listening to the returning echoes. His brain used this information to judge shape, distance, and even the material properties of objects.
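The distance cue behind this skill is simple acoustics: sound travels at roughly 343 metres per second in air, and an echo must cover the distance to a surface twice, so the delay between a click and its echo encodes how far away the surface is. A minimal sketch in Python (the speed of sound and the example delays are illustrative assumptions, not measurements from Kish's clicks):

```python
# An echo returning t seconds after a click comes from a surface
# d = v * t / 2 metres away (the sound travels out and back).

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed value)

def echo_distance(delay_s: float) -> float:
    """Distance to a reflecting surface, given the click-to-echo delay."""
    return SPEED_OF_SOUND * delay_s / 2.0

# A 10-millisecond delay corresponds to a surface about 1.7 m away.
for delay_ms in (2, 10, 50):
    d = echo_distance(delay_ms / 1000.0)
    print(f"{delay_ms:2d} ms echo -> surface about {d:.2f} m away")
```

The brain of an expert echolocator presumably extracts far richer information than this single number (echo timbre, loudness and spectral colouring all vary with a surface's shape and material), but delay-to-distance is the most basic cue available.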
“My friend was so impressed with Daniel’s confidence in his abilities that he suggested that we get together to scan his brain during echolocation. So that’s what we did,” says Goodale.
He and former postdoctoral fellows Lore Thaler and Stephen Arnott used functional magnetic resonance imaging (fMRI) to show that blind echolocators process the echoes in brain regions traditionally used for vision.
“It looks as though there’s a great deal of neuroplasticity,” Goodale says. “It was particularly evident in Daniel’s brain.”
In ongoing research, Goodale and his team are striving to discover how the brain processes these sound-based “images”. They want to find out how quickly the brain’s visual processing centres adapt once blind people begin to learn echolocation.
“People who are blind and echolocate offer a wonderful opportunity to study neuroplasticity in the denervated visual system,” says Goodale. With fMRI, he can compare results from blind individuals who learn echolocation early or late in life and observe the brain during the learning process.
“You can scan their brains before, during and afterwards to look for any possible changes that occur in the brain while this is unfolding,” he says.
His research underscores the importance of echolocation as a tool for people who are blind or visually impaired.
Seeing true size
As a person walks away from you, their image on your retina becomes smaller and smaller. Nevertheless, you don’t perceive the person as actually shrinking. That phenomenon is known as size constancy.
“We see the real size of objects, independent of how far away they are,” explains Goodale. “There’s a real question about where this takes place in the brain.”
Recently, Goodale and his colleagues examined afterimages to unravel this mystery. Afterimages are the shadowy blurs that linger after your eye is exposed to a bright light. The distance of the surface onto which an afterimage is projected determines its apparent size: on a far wall, it looks large; on a handheld piece of paper, it appears small.
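The relation at work here is known as Emmert’s law: an afterimage subtends a fixed angle on the retina, so its perceived linear size grows in proportion to the distance of the surface it is projected onto. A small sketch of that geometry (the 2-degree angle and the two surface distances are illustrative assumptions):

```python
import math

def perceived_size(retinal_angle_deg: float, surface_distance_m: float) -> float:
    """Emmert's law: the perceived linear size of an afterimage that
    subtends a fixed retinal angle, when projected onto a surface at
    the given distance (size = 2 * d * tan(angle / 2))."""
    return 2.0 * surface_distance_m * math.tan(math.radians(retinal_angle_deg) / 2.0)

# The same 2-degree afterimage looks small on a page at arm's length
# and much larger on a distant wall (assumed distances).
for d in (0.4, 3.0):
    size_cm = perceived_size(2.0, d) * 100.0
    print(f"surface at {d:.1f} m -> afterimage appears about {size_cm:.1f} cm across")
```

Because the retinal angle is fixed, doubling the surface distance doubles the perceived size, which is what makes afterimages such a clean probe of where the brain computes an object’s real-world size.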
“We took advantage of that phenomenon,” explains Goodale. “We put people in the magnet (fMRI scanner) and created an afterimage by flashing a bright light in their eyes. We got them to project that afterimage onto surfaces at different distances and then measured activity in the primary visual cortex.”
When the afterimage was projected on a far surface, Goodale and his team saw brain activity in a large swath of the visual cortex. “It looked as though the visual cortex was coding the perceived, not the retinal, size of an object. That suggests that, early on in the visual system, calculations allow the brain to integrate a lot of information to represent the real size of objects.”
He plans to use electroencephalography (EEG) to examine how the brain responds to afterimages. He will study how different visual cues interact, in terms of sequence and timing, to demystify size constancy. These cues include the convergence of the eyes at close distances; pictorial cues, such as perspective; and stereovision (binocular depth perception).
Far-reaching impact of research
Goodale is “enormously excited” that his work on the brain’s visual pathways has implications not only for neurologists and the patients in their care, but also for robotics and human-machine interfaces. Robotics engineers are studying Goodale’s two-visual-systems model to develop better visual control in robots and in the software behind human-machine interfaces.
His work has also attracted the interest of well-known philosophers, including David Chalmers, Ned Block and Andy Clark, who are intrigued by how the brain’s visual system relates to conscious experience.
“Cognitive neuroscience is at a wonderful point in its young history,” he says. “We have tools like fMRI, EEG, transcranial magnetic stimulation and all kinds of computer-based visual and hearing tests to help us understand how the brain processes information. It’s a very exciting time to be a young scientist studying the brain.”