    AAMCNews

    Your data could cure you one day

    Vast repositories of patient data hold clues to better treatments. The challenge lies in extracting meaningful information from those data while still safeguarding patient privacy.

    Eric Dishman had run out of options. After he had battled a rare form of kidney cancer for 23 years, his kidneys were failing, dialysis was not an option because of his chemotherapy treatments, and he wasn’t a good candidate for a transplant. During that time, he had undergone more than 60 rounds of chemotherapy, radiation, and immunotherapy, at a cost of $6 million.

    The computer scientist was then in his early 40s and a fellow at Intel. A colleague suggested he undergo whole genome sequencing, which saved his life. It turned out that his unusual cancer behaved more like pancreatic cancer than renal cancer and that 95% of his grueling treatments had been a waste of time. Doctors put him on an experimental pancreatic cancer treatment, which vanquished the cancer and made him eligible for a kidney transplant. “When I think about all the suffering I went through,” Dishman recalls, “what if we had gotten it right the first time?”

    Today, Dishman is director of the NIH-funded All of Us Research Program, a $1.5 billion precision medicine initiative to create the largest, most diverse patient data set in human history.

    While insights from All of Us are still years away, vast repositories of patient data from electronic health records (EHRs) are already being analyzed to identify best practices, predict and prevent disease, and detect different reactions to drugs and treatment regimens in diverse patient populations.

    Here’s what’s happening on the frontiers of this data revolution.

    Mining EHRs to improve treatments

    Many researchers believe the key to better clinical outcomes lies in mining the vast stores of data available in patient EHRs.

    Atul Butte, MD, PhD, head of the University of California (UC), San Francisco, Institute for Computational Health Sciences, is helping to integrate EHRs from all five medical centers in the UC system — a system that encompasses more than 15 million patients. “This is real-world evidence that can tell us best practices,” he says. “Which drugs work best and in what patients? And in what order? Which ones should we try first? We can start to compute all of these quality metrics using health record data.”

    At the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University in Baltimore, Maryland, patient data is helping radiation oncologists be more effective. Head and neck cancers, for instance, are difficult to treat because the tumors can impede so many crucial bodily functions — tasting, swallowing, even speaking. It can be tricky to figure out where to aim radiation beams without permanently damaging vital anatomical structures like the voice box, throat, esophagus, and salivary glands.

    So radiation oncologists at Hopkins devised a software platform called Oncospace that mines data on previous patients. It uses that information to generate an optimal treatment plan based on age, other health conditions, and other treatments patients are receiving. The software is now also used for lung, pancreatic, and prostate cancers.
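    Oncospace’s internal design isn’t detailed here, but the underlying idea (quantify prior patients, find the ones most similar to a new case, and summarize what they tolerated) can be sketched in a few lines. In the illustrative Python sketch below, every field name, weight, and data point is hypothetical rather than drawn from Oncospace itself.

```python
# Illustrative sketch of learning from prior patients' experiences: find similar
# past cases and use the doses they tolerated to suggest a planning constraint.
# All field names, weights, and data are hypothetical, not Oncospace's schema.
from dataclasses import dataclass
from statistics import median

@dataclass
class PriorCase:
    age: int
    comorbidity_count: int
    concurrent_chemo: bool
    parotid_mean_dose_gy: float   # dose delivered to a salivary gland
    xerostomia_at_6_months: bool  # dry-mouth complication, yes/no

def similarity(case: PriorCase, age: int, comorbidities: int, chemo: bool) -> float:
    """Crude similarity: closer age and comorbidity burden, same chemo status rank higher."""
    return -(abs(case.age - age) / 10
             + abs(case.comorbidity_count - comorbidities)
             + (0 if case.concurrent_chemo == chemo else 2))

def suggest_dose_ceiling(prior: list[PriorCase], age: int, comorbidities: int,
                         chemo: bool, k: int = 5) -> float:
    """Among the k most similar prior patients, return the median gland dose
    tolerated without the complication, as a starting constraint for the plan."""
    nearest = sorted(prior, key=lambda c: similarity(c, age, comorbidities, chemo),
                     reverse=True)[:k]
    tolerated = [c.parotid_mean_dose_gy for c in nearest if not c.xerostomia_at_6_months]
    return median(tolerated) if tolerated else 26.0  # fall back to a conventional limit

prior_cases = [PriorCase(62, 1, True, 24.0, False), PriorCase(58, 0, True, 30.5, True),
               PriorCase(65, 2, False, 22.0, False), PriorCase(60, 1, True, 25.5, False),
               PriorCase(70, 3, True, 28.0, True)]
print(suggest_dose_ceiling(prior_cases, age=61, comorbidities=1, chemo=True))
```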

    “The whole process of big data boils down to capturing the experiences of previous patients in a quantitative way so physicians can apply this in their practice,” says Todd McNutt, PhD, a radiation oncology physicist and director of clinical informatics at Johns Hopkins.

    Data can also reveal the best ways to prevent surgical complications. The University of Iowa Hospitals and Clinics used predictive analytics to reduce rates of post-surgical infections in patients undergoing colon surgery by 74%, which saves about $740,000 for every 1,000 surgical patients. Health conditions like diabetes or high blood pressure contribute to complications, but what happens in the OR is also key.

    Doctors devised an algorithm that included a patient’s vital signs during surgery, such as blood pressure and heart rate, and whether they received a blood transfusion, as well as their body mass index, age, and gender. Using the model, surgeons identified those at greater risk among a group of 1,600 patients and instituted post-surgical treatments to prevent infections, such as antibiotics or negative-pressure wound therapy.
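    The article does not publish Iowa’s coefficients, but a risk model of this kind typically combines those same inputs into a single score, often in logistic form. The sketch below is purely illustrative: the weights, threshold, and variable names are assumptions, not the actual University of Iowa algorithm.

```python
# Illustrative infection-risk score combining intraoperative vitals, transfusion
# status, BMI, age, and gender. Weights and threshold are invented for this sketch.
from math import exp

def infection_risk(mean_arterial_pressure: float, heart_rate: float,
                   transfused: bool, bmi: float, age: int, male: bool) -> float:
    """Return an estimated probability of surgical-site infection (logistic form)."""
    score = (-4.0
             + 0.04 * max(0.0, 65 - mean_arterial_pressure)  # low blood pressure during surgery
             + 0.02 * max(0.0, heart_rate - 90)               # elevated heart rate
             + 1.1 * transfused                                # received a blood transfusion
             + 0.05 * max(0.0, bmi - 30)                       # obesity
             + 0.02 * max(0, age - 50)
             + 0.2 * male)
    return 1 / (1 + exp(-score))

# Flag a patient for preventive measures (e.g., antibiotics or negative-pressure
# wound therapy) when the estimated risk crosses a chosen threshold.
risk = infection_risk(mean_arterial_pressure=58, heart_rate=104,
                      transfused=True, bmi=34, age=67, male=True)
if risk > 0.15:
    print(f"High estimated risk ({risk:.0%}): consider post-surgical prevention bundle")
```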

    The software system has now been adopted by other hospitals and is also being used in the NICU to calibrate oxygen delivery to premature newborns, who can be blinded by too much oxygen, and to detect delirium among ICU patients, a condition with a 40% mortality rate. “If we can identify them early,” says John Cromwell, MD, director of surgical quality and safety at the Carver College of Medicine, “there are interventions that can be done.”

    Challenges of data mining

    Extracting and safeguarding data from EHRs can be challenging, though. When researchers at the Dana-Farber Cancer Institute in Boston, Massachusetts, attempted to mine their electronic patient database in order to link treatments and outcomes with patients’ genetic makeup, they hit an unexpected obstacle. Because hospitals use different software platforms, it was difficult to integrate data from multiple sources. And doctors often recorded key details on paper, which then had to be entered manually. The upshot? The information the researchers sought was trapped inside reams of paper.

    The federal government now has requirements for EHRs, but hurdles remain. Health systems often run multiple EHR, imaging, laboratory, and other data systems that must be connected, and much of the data is unstructured, such as clinical notes, which are difficult to interpret.
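    To see why unstructured notes are hard to use, consider a toy example: the facts a model needs are buried in free text, so even minimal structuring requires pattern matching, and production systems rely on full clinical natural language processing pipelines. The note and the patterns below are invented for illustration.

```python
# Toy illustration of pulling structured values out of an invented free-text note.
import re

note = ("Pt is a 67 y/o male, BMI 34, s/p colon resection. "
        "Received 2 units PRBC intraop. A1c 8.2 last month. "
        "Plan: continue abx, monitor wound.")

matches = {
    "age": re.search(r"(\d+)\s*y/o", note),
    "bmi": re.search(r"BMI\s*(\d+(?:\.\d+)?)", note),
    "transfusion_units": re.search(r"(\d+)\s*units?\s*PRBC", note),
    "a1c": re.search(r"A1c\s*(\d+(?:\.\d+)?)", note, re.IGNORECASE),
}
structured = {field: (m.group(1) if m else None) for field, m in matches.items()}
print(structured)  # {'age': '67', 'bmi': '34', 'transfusion_units': '2', 'a1c': '8.2'}
```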

    Common data models, like the one used by the National Patient-Centered Clinical Research Network (PCORNet), may help. PCORNet allows clinicians and researchers to tap into the EHRs of more than 110 million patients from 130 academic and other health centers. The idea is to shift the focus from investigator-driven research to patient-centered studies that will more quickly identify best practices in the clinical setting.

    Current studies using PCORNet data include a Duke University study that compares the effectiveness of two different doses of aspirin (81 mg versus 325 mg daily) in preventing heart attacks and strokes, a Massachusetts General Hospital study comparing two treatments to help those with mood disorders, and a UC San Francisco study to determine if mobile health applications can help people with uncontrolled hypertension.
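    To make the benefit of a common data model concrete, here is a hedged sketch: if every participating site stores prescribing records in the same table layout, one query can run unchanged at each site, and only aggregate counts need to be pooled. The table, columns, and data below are illustrative, not PCORNet’s actual schema.

```python
# Illustrative sketch of a shared query run against an identical (hypothetical)
# table layout at each participating site, with only counts pooled centrally.
import sqlite3

def site_count(rows):
    """Stand in for one health system: load its records and run the shared query."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE prescribing (patient_id TEXT, drug TEXT, dose_mg INTEGER)")
    db.executemany("INSERT INTO prescribing VALUES (?, ?, ?)", rows)
    (count,) = db.execute(
        "SELECT COUNT(DISTINCT patient_id) FROM prescribing "
        "WHERE drug = 'aspirin' AND dose_mg = 81"
    ).fetchone()
    return count

site_a = [("a1", "aspirin", 81), ("a2", "aspirin", 325), ("a3", "statin", 20)]
site_b = [("b1", "aspirin", 81), ("b2", "aspirin", 81)]
print(sum(site_count(rows) for rows in [site_a, site_b]))  # pooled count: 3
```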

    “We’re entering a new era of data-driven medicine,” predicts Butte, “that will affect the quality, quantity, and composition of the care we’re delivering.”