Category Archives: Cambridge

Biological logic

Grabbing one of the three laptops in her office at Microsoft Research in Cambridge, UK, Jasmin Fisher flips open the lid and starts to describe how she and her collaborators used an approach from computer science to make a discovery in molecular biology. Fisher glances across her desk to where her collaborator, Nir Piterman of Imperial College London, is watching restlessly. “I know you could do this faster,” she says to Piterman, who is also her husband. “But you are a computer scientist and I am a biologist and we must be patient.”

After a few moments, patience is rewarded: Fisher pulls up a screen of what looks like programming code. Pointing to a sequence of lines highlighted in red, she explains that it is a warning generated by software originally developed for finding flaws in microchip circuitry. In 2007, she, Piterman and their colleagues found a similar alert in a simulation they had devised for signalling pathways in the nematode worm Caenorhabditis elegans [1]. Using that as a clue, they predicted and then experimentally verified the existence of a mutation that disrupts normal cell growth.

‘Executable biology’, as Fisher calls what she’s demonstrating, is an emerging approach to biological modelling that, its proponents say, could make simulations of cells and their components easier for researchers to build, understand and verify experimentally.

The screen full of code doesn’t look especially intuitive to a non-programmer. But Fisher toggles to another window that shows the same C. elegans simulation expressed graphically. It now looks much more like the schematic diagrams of cell–cell interactions and cellular pathways that biologists often sketch on whiteboards, in notebooks or even on cocktail napkins. One big goal of executable biology is to make model-building as easy as sketching. Fisher explains that each piece of biological knowledge pictured on the screen, such as the fact that the binding of one protein complex to another is necessary to activate a certain signal, corresponds to a programming statement on the first screen. Likewise, the diagram as a whole — illustrating, say, a regulatory pathway — corresponds to a sequence of statements that collectively function as a computer simulation.

Ultimately, she says, this kind of software should develop to a point at which researchers can draw a hypothetical pathway or interaction on the screen in exactly the way they’re already used to doing, and have the computer automatically convert their drawing into a working simulation. The results of that simulation would then show the researchers whether or not their hypothesis corresponds to actual cell behaviour, and perhaps — as happened in the 2007 work — make predictions that suggest fruitful new experiments.

In the meantime, however, Fisher and her fellow executable-biology enthusiasts have a lot of convincing to do, says Stephen Oliver, a biologist at the University of Cambridge, UK. “Modelling in general is regarded sceptically by many biologists,” he points out.

Born-again modeller

Fisher’s fascination with this type of modelling started in about 2000. She was studying for her PhD in neuroimmunology at the Weizmann Institute of Science in Rehovot, Israel, when she encountered David Harel, a computer scientist who was applying computational ideas to biology.

Harel wanted to get around the problems encountered in conventional simulations, which use reaction-rate equations and other tools of theoretical chemistry to describe, step by step, how reaction networks and cell interactions change over time. Such simulations can provide biologists with a gratifying level of detail for testing against reality. But the number of differential equations in these models escalates rapidly as more reactions are included, until they become a strain on even the most powerful computers. In one recent model of the networks involving epidermal growth factor, for example, 499 equations were required to describe 828 possible reactions [2]. Even if the computers can handle such a load, the output is often difficult to interpret.
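
To see why such models balloon, here is a minimal sketch assuming simple mass-action kinetics for a single invented binding reaction (the species names and rate constants are illustrative, not taken from the EGF model): every species adds another coupled differential equation, and every reaction another measured rate constant.

```python
from scipy.integrate import solve_ivp

# Toy mass-action model of one reversible binding reaction, A + B <-> C.
# kf and kr are illustrative rate constants, not measured values.
kf, kr = 1.0, 0.1

def rates(t, y):
    a, b, c = y
    bind = kf * a * b    # forward reaction: A + B -> C
    unbind = kr * c      # reverse reaction: C -> A + B
    return [unbind - bind, unbind - bind, bind - unbind]

# One coupled equation per species; a network with hundreds of species
# and reactions, like the EGF model, scales accordingly.
sol = solve_ivp(rates, (0.0, 10.0), [1.0, 0.8, 0.0])
print(sol.y[:, -1])  # concentrations of A, B, C at t = 10
```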

Such models quickly become “an impossibly unwieldy black box”, says Vincent Danos, a computational biologist at the University of Edinburgh, UK. And if the models have such a hard time simulating the behaviour of a single set of signalling pathways, he adds, then it’s hard to imagine they will ever be of much use in systems biology, which might, for example, seek to understand all the pathways in a cell as an integrated whole.

Harel’s approach was to represent networks of biological events by a considerably smaller set of logical statements. For example, instead of specifying the number of signal molecules involved in a particular cell–cell interaction, or the sensitivity of the various receptors, a statement might simply say ‘when cell X is near cell Y for long enough, cell Y switches from one type of behaviour to another’. And, unlike the conventional equations, the rules tend to be independent of one another — an important part of why the simulations are so much easier to build.
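As a rough illustration of the contrast, here is a minimal executable-style sketch, with invented rule names and an invented threshold; the point is only that the model is a handful of discrete logical rules rather than rate equations.

```python
# Hypothetical rule: "when cell X is near cell Y for long enough,
# cell Y switches from one type of behaviour to another."
NEAR_STEPS_REQUIRED = 3  # illustrative threshold, not a measured value

def step(state):
    new = dict(state)  # rules read the old state, so they stay independent
    new["contact_steps"] = state["contact_steps"] + 1 if state["x_near_y"] else 0
    if new["contact_steps"] >= NEAR_STEPS_REQUIRED:
        new["y_behaviour"] = "activated"
    return new

state = {"x_near_y": True, "contact_steps": 0, "y_behaviour": "resting"}
for _ in range(5):
    state = step(state)
print(state["y_behaviour"])  # -> activated
```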

An additional advantage of the logic-based approach was that standard model-checking algorithms — widely used by industry for testing computer hardware — could check whether the statements were logically consistent, and capable of producing the behaviour seen in cells. This analysis would highlight points in the model at which the behaviour was going awry, which in turn might suggest experiments to look for previously unsuspected reactions and molecular species at that point.
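
In miniature, model checking amounts to exhaustively exploring every state a model can reach and testing each one against a stated property. A toy sketch, with an invented two-flag model standing in for a real pathway:

```python
from collections import deque

def successors(state):
    # Invented toy dynamics: a signal may switch on at any time, and
    # the cell can commit to a fate only once the signal is present.
    signal, fate = state
    nxt = {(True, fate)}
    if signal:
        nxt.add((True, True))
    return nxt

def check(initial, invariant):
    """Breadth-first search over all reachable states; return a violating
    state (a counterexample, like the red alert on Fisher's screen) or
    None if the property always holds."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return s
        for n in successors(s) - seen:
            seen.add(n)
            queue.append(n)
    return None

# Property to verify: the cell never commits to a fate without a signal.
bad = check((False, False), lambda s: not (s[1] and not s[0]))
if bad:
    print("counterexample:", bad)
else:
    print("property holds")
```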

Fisher became so caught up in the idea that in 2003 she joined Harel’s lab as a postdoc. She continued to work in the field during a three-year postdoc appointment under Thomas Henzinger at the computer-science department of the Swiss Federal Institute of Technology in Lausanne (EPFL). Piterman, whom she had married in 1998, came to the EPFL as well, and the three of them collaborated with their colleague Alex Hajnal to build the C. elegans model.

They started by recording all the rules they could find in the literature pertaining to the maturation of a simple, well-studied system of six vulval precursor cells. “I wrote it all down first in a diagram,” says Fisher, pointing to a figure in a research article on her desk, “then we formalized all the arrows and feedback loops into the computer program.” Because the model needed only rules, not numbers, most of the information was qualitative (for example, this cell is closest to the cell sending the signal so the messenger molecules reach it first).

Lab confirmation

The team knew that genetic mutations could nudge the cells into different roles during maturation, but they wanted to know more about the cascade of signals that dictate the fate of each cell. The model-checker explored the set of 48 mutations known to affect vulval development, which could have up to 92,000 possible outcomes. All but four of the perturbations predicted normal cell fates, so the team concentrated on simulating different timings of those four cases. They found two previously unknown effects. First, a set of inhibitory genes collectively known as lst genes have to be activated for vulval cells to convert to their ‘primary’ fate, meaning that their daughter cells will make up the vulval opening. Second, if another gene was disrupted and signals between the cells weren’t timed in just the right sequence, the cell would adopt a different fate. A laboratory experiment confirmed both predictions.
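
The sweep itself is conceptually simple, even if the real model checker is far more sophisticated. A hedged sketch of the shape of the computation, with an invented stand-in for the executable model:

```python
from itertools import permutations

MUTATIONS = ["wild_type", "lst_inactive", "timing_mutant"]  # illustrative names
SIGNALS = ["inductive", "lateral", "sequential"]

def simulate(mutation, signal_order):
    # Stand-in for the real executable model, which replays the rules
    # under one perturbation and returns the resulting cell fate.
    if mutation == "lst_inactive":
        return "no_primary_fate"
    if mutation == "timing_mutant" and signal_order[0] != "inductive":
        return "abnormal_fate"
    return "normal_fate"

# Run the model under every perturbation and every signal timing.
outcomes = {
    (m, order): simulate(m, order)
    for m in MUTATIONS
    for order in permutations(SIGNALS)
}
flagged = {k: v for k, v in outcomes.items() if v != "normal_fate"}
print(f"{len(outcomes)} runs, {len(flagged)} flagged for follow-up experiments")
```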

“We used this qualitative model because we simply didn’t have the quantitative knowledge,” says Fisher. But now that the approach and its predictions have been verified in the lab, she says, “you can’t argue with it”.

Since then, Fisher has become one of the world’s most energetic proponents of executable biology [3], but she is far from being the only enthusiast. In 2007, for example, biologist John Heath of the University of Birmingham, UK, was trying to model signal transduction pathways and protein–protein interactions. “The processes are really just too complicated to understand using intuition,” he says. He discussed his problem with University of Oxford computer scientist Marta Kwiatkowska, who was then working in the adjacent building at Birmingham, and she gave him a paper on model-checking. “I was reading the opening paragraph on the train and I thought, ‘This is exactly what I want’,” says Heath. In collaboration with Corrado Priami, who leads the Centre for Computational and Systems Biology at the University of Trento in Italy, Heath was soon modelling the gp130/JAK/STAT signalling pathway [4], a well-studied system involved in human fertility, neuronal repair and embryonic stem-cell renewal. Their model reproduced the dynamic behaviour of the pathway as observed in the laboratory, and has allowed them to make testable predictions about which parts of the pathway are most sensitive to mutation or other perturbation. Heath, like Fisher, is now actively promoting executable biology, and has joined with Kwiatkowska to publish a review paper on the approach [5].

Another level

Executable biology does have limitations, Fisher acknowledges. At present, for example, such models can handle only one level of narrowly defined biological activity at a time — the level of protein–protein interaction, say, or the level of cell–cell interaction. “We know there is feedback between the levels,” Fisher says, “but we don’t know enough about it” to get a computer to simulate that feedback.

An additional complication is that the different levels are best handled by different computer languages. To model the molecules that travel between cells, for instance, the most natural languages are those known in computer science as ‘process calculi’, which were devised to model information flow through communication webs. But to model the behaviour of an individual cell and its components, as in the various signalling and regulatory pathways, the most natural languages are those based on the theory of interacting ‘state machines’, which was developed to describe how objects transition from one state to another.
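
A state machine for a single cell can be as simple as a lookup table mapping (state, event) pairs to new states. A minimal sketch using the vulval fates from the C. elegans work (the event names are invented for illustration):

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("precursor", "inductive_signal"): "primary",
    ("precursor", "lateral_signal"): "secondary",
    ("precursor", "no_signal"): "tertiary",
}

class Cell:
    def __init__(self):
        self.state = "precursor"

    def receive(self, event):
        # Undefined (state, event) pairs leave the cell unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)

cell = Cell()
cell.receive("inductive_signal")
print(cell.state)  # -> primary
```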

The long-term goal, says Fisher, is to develop more sophisticated and complete simulations that would help researchers explore a wider range of biological phenomena, both by integrating behaviour at the genetic, molecular and cellular levels, and by integrating executable models with conventional mathematical models. Indeed, as a group of bioengineers led by C. Anthony Hunt of the University of California, San Francisco, pointed out in a response [6] to Fisher and Henzinger’s 2007 review, it’s not an either–or choice between executable biology and conventional mathematical modelling: both have their uses and limitations, depending on the level of biological activity being simulated.

Fully integrated modelling is still a long way off, admits Fisher. But now that executable-biology predictions have been verified in the lab, the field has begun to attract more attention. Labs worldwide are starting to use executable biology to study systems, and Fisher herself is giving invited lectures on the subject 15–18 times per year around the world.

Meanwhile, she and Piterman are trying to make the software more accessible to biologists, so that researchers can make executable-biology simulations a routine part of their work. Other research groups are working towards the same end. Priami’s group is trying to write interfaces so simple that biologists can fill in tables with their data, specify the rules they want to use in spatially organized diagrams and sit back while the program translates the data into a computer-readable language that can execute a simulation [7]. “We develop languages that allow people to program without knowing they are programming,” says Priami.

Commercial efforts

In another effort to make the executable-biology approach more intuitive, Walter Fontana of the Harvard Medical School in Boston, Massachusetts, has joined with colleagues at the start-up firm Plectix to launch Cellucidate, an online visual interface for biological-pathway modelling that generates statements in an executable computer language called Kappa, which Fontana developed explicitly to model molecular interactions. Cellucidate — available for free during its trial period — allows collaborators to add information to a shared online model and revise it Wikipedia-style, something Fontana says is increasingly important because the empirical facts on which models are based are continually being revised.

Fisher hopes that the excitement will spread to more groups, and suggests that some of the computer-inspired ideas she is testing in her group’s latest in vivo experiments, which now extend to fruitflies and yeast cells, should spark more interest in executable biology among lab-based biologists.

But in the end, Fisher emphasizes, the fact that using executable rules could make the models easier to visualize is only an added bonus. Executable biology’s real pay-off is that it can help biologists to understand the complexity of living things, whether at the level of groups of molecules, such as Kappa describes, or at that of signals sent between cells, as in the nematodes Fisher herself studies. And that enhanced understanding, in turn, helps biologists ask new questions, design new experiments and make new discoveries. But “however good the models are, you still need a good scientist to implement them,” says Kwiatkowska.

“The model is not an oracle,” Heath agrees. “It’s an automation of your understanding.”

References

  1. Fisher, J., Piterman, N., Hajnal, A. & Henzinger, T. A. PLoS Comput. Biol. 3, e92 (2007).
  2. Chen, W. W. et al. Mol. Syst. Biol. 5, 239 (2009).
  3. Fisher, J. & Henzinger, T. A. Nature Biotechnol. 25, 1239–1249 (2007).
  4. Guerriero, M. L., Dudka, A., Underhill-Day, N., Heath, J. K. & Priami, C. BMC Syst. Biol. 3, 40 (2009).
  5. Kwiatkowska, M. Z. & Heath, J. K. J. Cell Sci. 122, 2793–2800 (2009).
  6. Hunt, C. A., Ropella, G. E. P., Park, S. & Engelberg, J. Nature Biotechnol. 26, 737–738 (2008).
  7. Priami, C. Commun. ACM 52, 80–88 (2009).

This feature first appeared in Nature [html] [pdf]

This must be the most complex story I’ve ever reported. I still don’t feel like I understand everything in it, but I’m no less fascinated by it than when I discovered it this spring during my Nature internship. I’m eager to see what other discoveries biologists make using tools originally developed for analyzing computer hardware.

One of the things that appeals to me about executable, or algorithmic, approaches to biology is the idea that the sum of scientists’ information about a system can be continually updated in an online, working model by any collaborator, in a more transparent way than in some of the current generation of math-based models. This could one day prompt unexpected insights and faster interaction among scientists, since collaborators could see natural, visual representations of one another’s working hypotheses in real time. A little scary, though: I doubt I’d want an editor reading my keystrokes until I’d had a chance to revise my drafts!

Update [18 May 2011]: Ran across an amusing ‘citation’ of this story in support of the thesis that ‘Circular logic is the best type of logic, because it’s circular.’

A Memorable Device

It was over drinks at a local pub in the spring of 2006 that cognitive psychologist Martin Conway of the University of Leeds in the United Kingdom first told his colleague Chris Moulin about using a wearable camera for memory research. But it took more than a few pints of beer to convince Moulin that SenseCam, a camera that periodically takes still photos while worn on the user’s chest, might be a game-changer in the study of what psychologists call autobiographical memory. Although skeptical of the small device’s usefulness, Moulin did finally agree to take one for a test drive.

Or rather, he took it on a test walk. Moulin regularly wore a SenseCam on a series of walks. When he reviewed the images 6 months later, to see how well his memories matched the camera’s visual record, Moulin says he experienced an unexpected feeling of “mental time travel.” One of the images triggered the memory of the song that was playing on his iPod when the image was taken: Thom Yorke’s “Black Swan.”

Conway says that many SenseCam users likewise report a sudden flood of memories of thoughts and sensations, what he calls “Proustian moments,” when they review images taken by the device. SenseCam’s images “correspond to the nature of human memory—they’re fragmentary, they’re formed outside your conscious control, they’re visual in nature, they’re from your perspective. All these features are very like what we call episodic memory,” says Conway.

That’s why he, Moulin, and dozens of other researchers have begun to test whether the images can help resolve how the brain handles personal memories. Cognitive experiments, however, represent just one line of inquiry supported by Microsoft Research, the scientific arm of the software giant and the inventor of SenseCam. Medical researchers are also evaluating whether the device can help people with memory problems due to illness or injuries.

In 2004, Narinder Kapur and Emma Berry, neuropsychologists at Addenbrooke’s Hospital in Cambridge, U.K., were the first to use a SenseCam for memory rehabilitation work. They found that the device significantly helped Mrs. B, an elderly woman with memory problems due to brain damage from an infection. Mrs. B normally forgot events after 3 to 5 days, and even keeping a diary that she periodically reviewed helped her remember events for only about 2 weeks. But when she regularly reviewed SenseCam images of events, she could recall more details—and her memories persisted for months after she ceased reviewing the past images. Encouraged by those data, Kapur says he and Berry grew hopeful that “periodic, regular review of visual images of personal events … really does help long-term [memory] consolidation.”

They and others are getting a chance to test that hypothesis. After the pair reported the results from Mrs. B, Microsoft Research decided to provide more than $550,000 in funding to seven research groups, most of them focusing on people with memory problems, and to loan hundreds of cameras to other scientists. SenseCam has “very obvious applications in a whole range of clinical disorders,” says one of the grant recipients, psychologist Philip Barnard of the University of Cambridge.

Personal black boxes

SenseCam grew out of a Microsoft Research project that aimed to create a “black box for the human body,” which would record data that doctors might find useful if a person were in an accident, says Ken Wood of Microsoft Research Cambridge. In 1999, computer scientist Lyndsay Williams, then at the same lab, suggested adding a camera to the device so it could double as a memory aid for mundane tasks such as finding lost keys.

In 2002, Kapur heard Microsoft chairman Bill Gates mention the project in a talk. Because his hospital is just a few miles from Microsoft Research Cambridge, it was easy enough for him and Berry to suggest using SenseCam prototypes for patients with memory problems due to Alzheimer’s or brain injuries.

Clinicians who work with such people have typically focused on helping them with their prospective memory, i.e., remembering tasks to be completed in the future, such as keeping appointments. For this, the best aids are still simple tools such as checklists and alarm clocks. But for patients with difficulty recalling past events, clinicians have had little to offer beyond diary-keeping, a task many people, such as Mrs. B and her husband, complain is onerous.

In contrast, SenseCam records images passively, permitting a person to go about their day without interruption. The latest version is about the size and weight of a clunky mobile phone and appears to observe the world through two unmatched eyeballs. One is a passive infrared sensor, tuned to trigger the camera whenever another person passes by. The other is a wide-angle camera lens, set to capture most of the user’s field of view. The device is also equipped with an ambient light sensor that triggers the camera when its user moves from one room to another, or goes in or out of doors. The camera can also be set to snap an image if the sensors haven’t triggered a photo after a preset number of seconds. A typical wearer might come home with 2,000 to 3,000 fragmentary, artless images at the end of a day.
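
The triggering behaviour, as described, reduces to a simple decision rule. A speculative sketch, with invented thresholds and a made-up sensor interface rather than SenseCam’s actual firmware:

```python
MAX_GAP_SECONDS = 30        # timed fallback interval; illustrative value
LIGHT_DELTA_THRESHOLD = 50  # crude "entered another room" heuristic

def should_capture(pir_triggered, light_level, last_light, seconds_since_shot):
    if pir_triggered:                       # passive infrared: someone passed by
        return True
    if abs(light_level - last_light) > LIGHT_DELTA_THRESHOLD:
        return True                         # ambient light jumped: new room or outdoors
    return seconds_since_shot >= MAX_GAP_SECONDS  # fallback shot after a quiet spell

print(should_capture(False, 120, 40, 10))   # -> True (light level jumped)
```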

It may be just those characteristics of the SenseCam images that make them so useful for memory rehabilitation and research, Kapur says. Like Conway, he suspects that the reason the images stimulate memory retrieval and possibly consolidation is because they mimic “some of the representations that we have” of past events in our brains.

To move beyond the initial case study of Mrs. B, the Addenbrooke’s team, under the direction of neuropsychologist Georgina Brown, has followed five additional people with memory problems over a nearly 3-year period, exploring the difference between the memory boost provided by visual and written diary-keeping. To establish a baseline of how fast these people lose their memories, the team asked each about an event every other day for 2 weeks after the event, and then again after 1 month and after 3 months. Then they asked the patients to keep a diary of a separate event and review it every other day during an initial 2-week assessment, but not during subsequent months. Finally, patients reviewed their SenseCam’s images for 2 weeks following a third event.

The preliminary results suggest that SenseCam use strengthened these patients’ memories more than diary-keeping did. A full analysis of the data is in preparation, says Brown, whose team plans to submit it to the journal Memory for a special issue devoted to SenseCam research.

In a recent, separate study, Mrs. B has repeated a version of her trial, this time incorporating a brain scanner. Researchers compared the activity in her brain as she tried to remember events she had either reviewed in her written diary or with personal images from her SenseCam. Mrs. B recognized about 50% of images taken at an event she had studied using a diary, but 90% if she had studied images instead. And brain regions associated with autobiographical memory were more active when she recalled events she had studied using SenseCam images than when she recalled the diary-studied event, Berry and colleagues report online on 13 March in the Journal of Neurology, Neurosurgery and Psychiatry.

The Addenbrooke’s work represents just a few patients with varying causes of memory loss, but Berry notes that worldwide there are about 30 ongoing SenseCam studies of memory patients. Adam Zeman of the University of Exeter in the United Kingdom leads one. “I think the main interest [in SenseCam] is that it gives you an opportunity to look at memory in what you might call a more ecological fashion than laboratory stimuli generally do,” he says, and “it gives an opportunity to support and rehabilitate memory.”

Memory walks

Normally, basic research precedes clinical studies, but the history of SenseCam has been the reverse. “The initial studies had a strong pragmatic aim,” says Kapur, “but certainly once we started to collect data, [psychologists] began to look at these things from a theoretical slant.” The question for cognitive scientists is whether SenseCam, or any similar wearable, point-of-view photographic device, can illuminate how healthy autobiographical memory works. Moulin, for example, has engaged volunteers to undertake memory walks in which they read a list of words while wearing the SenseCam. His student Katalin Pauly-Takacs has tested the participants’ recall of the words on the day of their walks and then again 3 months later, with and without the help of SenseCam images. Their preliminary results suggest that volunteers remember more of the words from walks that they reviewed using SenseCam images.

Moulin’s experiment is a nod to decades of autobiographical memory research, in which volunteers were tested on their ability to recall standard images or word lists they had previously seen. Some researchers suggest that the more personal nature of SenseCam images will be key to better studying autobiographical memory storage and retrieval. “Using SenseCam we can, first, have more interesting stimuli and, second, test [memory] processes that can generalize more easily to real life,” explains Roberto Cabeza, a neuroscientist at Duke University in Durham, North Carolina, who is also working with the device.

Despite SenseCam’s more personal touch, there are no guarantees it will break new ground in memory research. “Whether or not it will tell us different principles or something novel is unclear,” says Larry Squire, a psychologist at the University of California, San Diego, who hasn’t yet worked with the device.

William Brewer of the University of Illinois, Urbana-Champaign, notes that nobody really knows how best to evaluate SenseCam as a memory-consolidation aid or a retrieval cue. He and his graduate student Jason Finley have tested different aspects of memory using SenseCam images as cues, asking individuals how certain they are that they’ve seen an image before, or inquiring what they did after a certain image was taken. Such baseline studies, says Brewer, should help identify the most appropriate memory tests.

In addition to the seven Microsoft Research grants handed out in 2007, dozens of groups in cognitive psychology, clinical neuropsychology, education, and computer science are conducting research with borrowed SenseCams and independent funding. But there are no current plans to commercialize the hardware or the software from the SenseCam project—a fact that puzzles some fans of the device. Instead, to keep up with the growing demand for the devices, Microsoft would like to find another manufacturer willing to mass-produce the cameras, says Wood. Microsoft currently provides the cameras to only a limited number of patients under clinical supervision.

Even though he lobbies colleagues such as Moulin to try the device, Conway remains cautious about overselling SenseCam. There is still at least a decade’s work ahead before “we can maximize its use for research and its use as an intervention scheme in helping failing memories,” says the 56-year-old investigator. “By that time, I’ll need to wear one permanently, myself.”

This feature first appeared in Science [html] [pdf]

Light Show

Cambridge University Sheds Light on Darwin

The University of Cambridge rang in its 800th anniversary with church bells and a light show on Saturday the 17th. The light show, created by projection artist Ross Ashton, included specially commissioned illustrations of Cambridge alumni Charles Darwin and Isaac Newton by Roald Dahl’s illustrator, Quentin Blake. Above, a graying Darwin ponders the tree of life, whose branches recapitulate the origins of the species. Other images evoked the scientific, musical, and debaucherous achievements of 800 years of Cambridge students and alumni.

See all the photos at Science Magazine’s new Darwin blog [html].

Duct Tape for the Brain

Kirsten Timmons was navigating a frozen overpass one night when a passing car skidded out of control and slammed into her vehicle. As her car came to a stop, Timmons’s head probably snapped around its own axis, decelerating sharply when it struck the seat-belt holder next to her.

The impact produced a severe traumatic brain injury (TBI), knocking Timmons out and setting the stage for lasting brain damage. Luckily for her, emergency services rushed her to the hospital within an hour of the crash, greatly boosting her chances of survival. Prompt medical attention can, for example, prevent dangerous pressure buildup in the brain, remove perilous blood clots and thwart other life-threatening consequences of severe TBI.

After eight days in a medically induced coma, Timmons woke up to a daughter she did not recognize. Today, three years after the accident, Timmons knows her child but struggles to concentrate, recall numbers and perform simple calculations—disabilities that ended her career as a nurse practitioner. Similar problems often plague victims of mild TBI [see “Impact on the Brain,” by Richard J. Roberts]. “We’re not bad at getting people to survive [severe TBI],” says neurologist David Brody, a member of Timmons’s medical team at Washington University in St. Louis, “but we’re worse at getting good cognitive recovery.”

The best hope for improved healing lies neither in new medications, which have been disappointing so far, nor in exotic fixes involving stem cells and neural regeneration, which are at least a decade away, researchers say. Rather, the biggest gains will likely result from advances in emergency room and intensive care practices that curtail the secondary damage from TBI. The methods include slowing the brain’s metabolism with cooling techniques, removing part of the skull to relieve intracranial pressure and injecting an experimental polymer “glue” to repair damaged brain cells.

On Ice

After a severe TBI, such as the one Timmons sustained, blood vessels broken in the initial injury can bleed into the brain, raising pressure inside the skull. These vessels also may dilate to feed oxygen-starved brain regions, increasing brain volume further. If the swelling goes unchecked, the brain pushes out in every direction. Not only does this expansion complicate oxygen delivery, but it may also push the brain through the only available hole, at the base of the skull, crushing the brain stem and killing the patient.

Initial treatment to prevent or relieve such swelling includes administering an agent, such as a diuretic, that extracts fluid from the blood, and elevating the patient’s head so that blood flows away from the brain. In addition, however, doctors may employ any of various techniques to slow metabolism and thereby reduce the brain’s demand for oxygen-laden blood.

A standard way to reduce the metabolic activity of brain cells is to inject a patient with a sedative, but some doctors are also experimenting with quieting the brain by lowering a patient’s body temperature, an approach called hypothermia therapy, say by injecting chilled saline or covering the patient with a blanket that circulates cool water. Cooling acts as a brake on cellular metabolism. People who have fallen into icy lakes, for instance, often recover from long periods without breathing because the cold temperatures dramatically decrease the brain’s demand for oxygen.

A 2007 analysis by the Brain Trauma Foundation published in the Journal of Neurotrauma suggested that although hypothermia therapy had little or no effect on survival rates for TBI victims, it did improve mental capacity and responsiveness among survivors. In addition to slowing brain metabolism, hypothermia therapy also appears to suppress inflammation and other chemical reactions that can damage brain cells, according to intensive care specialist David Menon of the University of Cambridge.

When faced with a persistent pressure problem, physicians may resort to surgery. In one relatively simple technique, physicians drill a small hole in the base of the skull to drain excess cerebrospinal fluid. But in the most vexing cases, surgeons may remove a large flap of bone from the top of the skull so that the brain can expand into a larger space, effectively decompressing the brain and preventing it from crushing itself. An international team of doctors led by University of Cambridge neurosurgeon Peter Hutchinson is now comparing the efficacy of bone-flap surgery with that of last-resort nonsurgical remedies (such as drug-induced comas) in a 600-patient trial that the researchers aim to complete in 2012.

Bad Chemistry

In the wake of a TBI, the release of biological poisons also threatens brain function: toxins ooze out of ruptured neurons and wreak havoc on neighboring cells. Some clinics monitor these minute chemical imbalances in the spaces between brain cells. In one such technique, intensive care specialists insert a millimeter-thick tube through the skull. The tube collects trace amounts of chemicals in the brain and delivers them to a nearby microdialysis machine for analysis.

Keeping a close eye on brain chemicals, such as the neurotransmitter glutamate, that leak from dying cells can help doctors fine-tune their treatments. Abnormally high levels of glutamate, for example, usually indicate a rapid rate of cellular damage. Such a sign might prompt physicians to try more aggressively to save cells by cooling the brain to decrease oxygen demand or boosting ventilation rates to improve oxygen delivery. Or if microdialysis indicated that cells near a blood clot were fading quickly, doctors might remove the clot to stem the destruction.

Thus, many specialists believe that tracking a TBI patient’s brain chemistry can promote good cognitive recovery. At Addenbrooke’s Hospital in Cambridge, England, the introduction of a specialized brain injury intensive care unit in which doctors routinely perform microdialysis raised the fraction of TBI survivors who retained their independence from 40 to 60 percent. But no one really knows if microdialysis accounts for the difference. “There is not yet a consensus on whether [microdialysis] should be used for routine care and, if so, what value it adds,” Brody says. (The technique is more typically used in basic science experiments.)

In addition to responding to chemical warnings, ICU doctors ideally would like to prevent the release of toxic compounds from damaged cells in the first place. Biomedical engineer Richard Borgens of Purdue University and his colleagues are developing a technique that would repair cell membranes soon after injury using polymers such as polyethylene glycol. Just as sealant fills holes in a bicycle tire’s inner tube, the polymer seals punctured membranes, restoring their ability to contain harmful chemicals.

In 2001 Borgens’s team confirmed in spinal cord tissue that polyethylene glycol mechanically mends burst cell membranes. More recently, in a study published in the June 2008 Journal of Biological Engineering, the researchers tested the technique on brain-injured rats. They injected the animals with polyethylene glycol at various time points after the injury. Rats given the injections within four hours navigated mazes, which test their spatial learning and memory, more proficiently than untreated rats did. No one knows whether polymer therapy would produce similar results in humans, but the scientists hope that ambulance workers might one day inject the polymer at the scene of an accident, jump-starting repair even before a patient reaches the hospital.

New Connections

For now, improving a TBI victim’s quality of life often means extensive occupational, speech and physical therapy. These remedies can help form new neuronal links in the brain to circumvent the damaged pathways, rebuilding connections that underpin important skills and thought patterns. “We think there’s more than one way to get from A to B” in the brain, Brody says. “You’ve got [neuronal] wiring that goes from A to C and C to B, and you haven’t used it much—but with practice it gets stronger.”

For Timmons, such interventions have met with only partial success. The former nurse can now carry on a normal conversation and care for her daughter, but her injury has left her unable to manage the stressful multitasking necessary to return to work. She also continues to struggle to add or subtract numbers, a deficit she called “a major reality check” because she had been good at performing such calculations before the accident. Timmons describes her life as a frustrating blend of independence and disability.

Doctors hope that advances in intensive care for TBI victims will reduce the daily aggravations of patients like Timmons. Studies that point to the best low-tech TBI treatments and that evaluate new techniques to prevent secondary brain damage should help intensive care specialists create a standard of therapy that will improve the lives of thousands of brain injury victims every year.

This feature appeared in the January 2009 Scientific American MIND [html] [pdf] and in French: [html].