Personalized healthcare is increasingly viable for the life-sciences industry thanks to strides in big data and AI technologies, but its success depends heavily on the quality of clinical trials data.
Researchers today require huge volumes of data. Genomic and sequencing information, imaging data, and even data on patients’ movements in and outside the clinic all inform teams about drug responses in individuals or study groups.
Technology, therefore, is central to every life sciences organization’s R&D strategy.
Bryn Roberts, Head of Operations for Pharma Research and Early Development at Roche, agrees.
“Technology is key to personalized healthcare because it relies on data. To get that precision, we need high resolution understanding of the biology and the disease. That means big data…and [beyond that] technology is required to produce the data and to manage the information. It’s a very technical approach.”
According to Roberts, Roche relies mostly on technologies such as big data, advanced analytics, and machine and deep learning.
“We apply these technologies across the R&D spectrum. In Roche we have examples that start early on in the research, for example, images of tumour cells; [and] ophthalmic images, where there is a huge amount of information encoded within an image that we would lose if we purely looked at certain features. So, using deep learning architectures we can draw a lot more information from those.”
This extra layer of information is powering teams to deliver personalized medicine.
Analysis of research assets, such as photographs, using the latest technologies is yielding insights for clinicians that they wouldn’t have had access to as recently as five years ago.
This is a win-win scenario. Researchers can reshape clinical trials based on a deeper understanding of test responses. Patients receive a more tailored treatment plan within a clinical trial, and a higher chance of treatment success when those drugs reach the market. Regulators benefit, too, says Roberts.
“When we bring a biologic towards the market, like antibody treatment, we have to prove the clonality of the cell line that produces that antibody. Regulators demand that, and it may sound trivial but it’s extremely hard to do at scale. Yet, here, we apply automated image analysis – deep learning approaches – to make that process more robust.”
Furthermore, Roberts adds, regulators are also reassured by the use of digital biomarkers, where mobile monitoring systems and wearables provide clinicians with “an exquisite understanding of how diseases are progressing.”
A better understanding of a disease – and a population more willing to be measured using wearable technologies – consolidates regulatory trust within highly data-driven clinical trials.
FAIR data, fairer trials
There are still steps to be taken, however, in order to ensure the efficacy of these trials.
“The data we use has to be FAIR,” Roberts says. “Findable, accessible, interoperable and reusable.”
This is a central tenet across the science and technology professions. In this context, however, there are two further considerations. The first is that personalized medicine is a draw for patients, life sciences companies, regulators and health officials alike, so researchers face pressure from multiple sides to deliver personalized treatment. That means more AI, and more machine and deep learning techniques – which are only as high-performing as the data they are fed.
The second is an obvious but vital consideration: these technologies directly affect human treatment. Where other industries can apply test-and-learn, or agile, philosophies, the life sciences industry is rarely afforded such flexibility when human life is involved.
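As a loose illustration of what the FAIR principles mean in practice – a hypothetical sketch, not anything drawn from Roche’s systems – a research data pipeline might attach machine-readable metadata to each dataset so that every facet (findable, accessible, interoperable, reusable) can be checked before the data feeds a model. All names and fields below are illustrative assumptions:

```python
# Hypothetical sketch: tagging a clinical dataset with FAIR-style metadata
# so downstream analytics pipelines can find, access, and reuse it.
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    identifier: str                      # Findable: globally unique, persistent ID
    access_url: str                      # Accessible: retrievable via a standard protocol
    schema: str                          # Interoperable: shared format/vocabulary
    license: str                         # Reusable: clear terms of use
    provenance: list = field(default_factory=list)  # Reusable: origin and processing history

    def is_fair(self) -> bool:
        """Minimal check that all four FAIR facets are populated."""
        return all([self.identifier, self.access_url, self.schema, self.license])

# Example record for an imaging dataset (all values invented for illustration)
record = DatasetRecord(
    identifier="doi:10.0000/example-trial-imaging",
    access_url="https://repository.example.org/datasets/trial-imaging",
    schema="DICOM",
    license="CC-BY-4.0",
    provenance=["acquired in trial", "de-identified"],
)
print(record.is_fair())
```

In a real organization these checks would be far richer – validating identifiers against a registry, licenses against policy, schemas against controlled vocabularies – but the idea is the same: data that cannot pass them is hard for AI teams to trust or reuse.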
FAIR data is vital for the success of future clinical trials, Roberts concludes.
“Without FAIR data, AI, for example, would be impossible to do. We need meaningful data at scale to build powerful models to predict responses to treatments.”