
Why Personalized Medicine Needs Hadoop

Dec. 18, 2015 ⋅ Categories: Analytics, Big Data, Hadoop

The future of health care increasingly rests on Hadoop analytics. With new initiatives in “Personalized Medicine” relying on real-time data analytics to improve treatment programs, these important innovations in health care demand even more from high-performance computing.

In 2016, the Obama administration plans to dedicate $215 million of the budget to the “Precision Medicine Initiative,” which will be divided between the National Institutes of Health and the Food and Drug Administration to collect patient data, build a database for it and protect its privacy. Organizations like the National Cancer Institute will also receive a piece of the budget to apply Personalized Medicine in a clinical trial on cancer patients.

For the NCI and other health care projects, the goal of Personalized Medicine is to use data analytics to develop treatment programs that improve results and reduce side effects for patients. The push toward this initiative will generate a significant amount of data in the health care sector: the International Data Corporation predicts U.S. health care data will grow to 2,314 exabytes (2,314 billion gigabytes) by 2020, up from 150 exabytes in 2011.

To handle this enormous volume of mostly semi-structured or unstructured data, Personalized Medicine can’t rely on traditional data warehouses; the best tool for its development is Apache Hadoop. With Hadoop’s HDFS storage and MapReduce programming model, Personalized Medicine can run complex analytics across very large data sets. To improve on that performance, the data can also be processed with Apache Spark, which runs on top of Hadoop. Smart Data Collective wrote that Spark provides the speed and simplicity medical diagnostics need, can deliver real-time analytics across diverse datasets, and will help Personalized Medicine develop faster.
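To make the MapReduce model concrete, here is a minimal, self-contained Python sketch of the map–shuffle–reduce pattern that Hadoop popularized, applied to a handful of invented patient records. The records, field names and treatments are purely illustrative assumptions, not from any real system; a real Hadoop job would distribute the same three phases across a cluster.

```python
from collections import defaultdict

# Hypothetical, illustrative patient records (invented for this sketch):
# each notes a treatment and whether a side effect was reported.
records = [
    {"treatment": "A", "side_effect": True},
    {"treatment": "A", "side_effect": False},
    {"treatment": "B", "side_effect": True},
    {"treatment": "B", "side_effect": True},
    {"treatment": "A", "side_effect": False},
]

def map_phase(recs):
    """Map: emit one (key, value) pair per record."""
    for rec in recs:
        yield rec["treatment"], 1 if rec["side_effect"] else 0

def shuffle(pairs):
    """Shuffle: group values by key (Hadoop does this between map and reduce)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: compute the side-effect rate for each treatment."""
    return {key: sum(vals) / len(vals) for key, vals in grouped.items()}

rates = reduce_phase(shuffle(map_phase(records)))
print(rates)  # side-effect rate per treatment, e.g. {'A': 0.33..., 'B': 1.0}
```

The point of the pattern is that the map and reduce functions are independent per key, so Hadoop can run them in parallel across many machines and huge data sets.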

Peter Coveney, who holds a Chair in Physical Chemistry and directs the Centre for Computational Science, led the Network of Excellence in the Virtual Physiological Human initiative, a Personalized Medicine project that uses data to investigate the human body as a single complex system. Coveney told HPCWire about the specific challenges such a project brings, explaining that Personalized Medicine has necessitated “rapid development of computational science and informatics capabilities to deal with management and analysis of data.”

According to Smart Data Collective, most clinicians believe the most important aspect of Personalized Medicine is collecting patient data via biosensors, such as “smart” patches, mobile technology or fixed devices, then combining it with diagnostic data sets and analyzing it in real time. The hope is that health care providers can monitor the data as it arrives and make faster decisions about a patient’s treatment, which can in turn reduce medical care expenditures. Hadoop-based big data infrastructure will make that technology possible.
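The real-time monitoring idea above can be sketched in a few lines of plain Python. The heart-rate readings, window size and alert threshold below are all invented for illustration; a production system would evaluate a rule like this continuously over a live stream, for example with a streaming engine such as Spark Streaming, rather than over a fixed list.

```python
from statistics import mean

# Hypothetical heart-rate readings from a wearable sensor (invented values).
readings = [72, 75, 71, 74, 118, 121, 73, 70]

def alert_windows(stream, window=3, threshold=100):
    """Flag every sliding window whose average exceeds a clinician-set
    threshold, mimicking a simple real-time alerting rule."""
    alerts = []
    for i in range(len(stream) - window + 1):
        avg = mean(stream[i:i + window])
        if avg > threshold:
            alerts.append((i, round(avg, 1)))
    return alerts

print(alert_windows(readings))  # [(3, 104.3), (4, 104.0)]
```

Averaging over a window rather than alerting on single readings is a common way to smooth out sensor noise before notifying a care provider.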
