Four Pillars Driving Sanofi’s Clinical Data Strategy

This blog was guest authored by Patrick Nadolny, global head, clinical data management, Sanofi.

We need to streamline our complex data journey. While Clintrial 3.3 may be remembered as cumbersome by today’s standards, 25 years ago it was ubiquitous and a great example of a one-stop-shop data management tool. This is what we are looking for in the future: one ecosystem to handle everything. Connecting data across the drug development continuum is vital if we are to transform clinical data management.

We are at a major inflection point in the transformation of clinical research, with visionaries taking the lead and early followers starting to join the movement. Data management will undergo major changes over the next decade, some fueled by COVID-19 and changes in regulations. The industry must not sit still. To move successfully from data management to data science, we need to focus on people, partners, processes, and technology. To manage change effectively, you need to partner with the right people.

Change is happening across four main areas:

1. The decentralization of clinical trials.

Traditionally, collecting a lot of data has been relatively easy. Moving forward, collecting data from homecare, telemedicine, and wearable technology requires a new way of executing data management. Today, we manage everything across many disconnected systems, and it has become complicated to deliver studies successfully and efficiently. It would be great to see technology-leading companies, including Veeva, provide a unified and connected platform for all clinical data. We want to be able to identify trends at the study, site, and country level, as well as across studies. To do this efficiently, we need one place to manage all clinical data across our portfolio.

2. The increasing complexity of clinical research with basket, umbrella, platform, and adaptive designs.

To accelerate drug development, study protocols continue to become increasingly complex from an operational standpoint. For example, at Sanofi, we have one ‘platform’ development program that spans five concurrent early-phase studies and more than 30 different patient populations. While this makes clinical research more efficient, traditional data management is not ready to implement a risk-based data review strategy for so many different variations (i.e., the equivalent of 30+ studies combined into five), requiring rapid decision making. Sanofi is also looking at the use of synthetic control arms, which presents new challenges for data managers. Technology must enable us to manage this complexity, and even lead the way.

3. Risk-based approaches.

Regulations have evolved and now require us to look at risk holistically. It is not just about monitoring investigational sites; risk-based strategies affect everybody, including clinical data managers. We are used to one-size-fits-all data management plans. To infuse Quality by Design (QbD) and focus on what matters, we will have to define dynamic data capture tools, edit checks, and data review strategies tailored to specific patient populations, indications, investigational products, data types, etc. within a single study. That same study may also need specific data management strategies to mitigate the risks associated with naïve sites or countries where the protocol deviates more significantly from the standard of care. At the end of the day, we should tailor data reviews to the risks associated with each country, site, patient population, indication, etc. We need technology that can look at data in multiple ways. Although we may believe we have the right analytics solutions in place, that is not yet the case for the variety of data or the variety of protocol designs we now face.

It is important that data managers understand what risk-based data management means and which risks to the data are specific to the protocol. They need to understand whether the data makes sense and will support the right conclusions. We need to plan for risk mitigation before initiating the study. These are new responsibilities for data managers.

4. Intelligent devices and intelligent solutions.

We increasingly rely on sensors and wearable technology, potentially collecting billions of data points. We cannot apply traditional data review approaches to this data – they will not scale – but we can look for patterns in these huge datasets. We look to machine learning to detect patterns and raise safety alerts, as well as to automate simple and mundane tasks. Some organizations are already looking to AI to automate manual reviews, and automation in data management will continue to grow. The opportunity for data management lies in reviewing each data-related process to determine whether that task is still fit for purpose today. If a task no longer adds value, stop doing it. If a task is repetitive but the output is valuable, automate it. We are experimenting as an industry, but we need to speed up – so much is happening all around us now.

From clinical data management to clinical data science

To accelerate drug development, we will be more frequently exposed to complex study designs in a risk-based, decentralized, and adaptive framework. We need clinical data scientists to guide study teams in operationalizing them. This is not a simple endeavor and requires dramatic changes. With many different data sources, patient populations, and indications per study, we must consider all the intricacies. For example, in a study with different endpoints across different patient populations, the data manager will have to consider how to accommodate those different dynamics in one EDC system, which will have a knock-on impact on all subsequent activities, from data review to dataset generation.

Additionally, clinical data scientists will need to better understand each patient’s journey. We must implement complex protocol designs, such as basket designs with adaptive elements, in a decentralized and flexible way, with some patients treated remotely and others at brick-and-mortar sites. Complex studies and complex data streams mean we must identify the risks connected to the device, technology, protocol design, and patient populations, and create a plan to address them. It is essential that we employ the right tools and processes to support this.

The importance of partnerships

A large number of data streams, the increased collection of biomarker information, highly complex clinical research, and training people on the appropriate technology are all challenges that need to be addressed. To handle the requisite changes, your technology partner needs to change with you, and data managers need to be given the right understanding of subjects such as machine learning and the implications of new protocol designs. We need to prepare data managers for what is coming our way, including the use of synthetic control arms, natural language processing (NLP), and machine learning, to name just a few. Sponsors, technology providers, and the Society for Clinical Data Management (SCDM) all have a role to play in the continuing education of data managers.

To learn more, register for the upcoming webinar, Designing and Implementing a Data Management Strategy, featuring Patrick Nadolny, Global Head, Clinical Data Management, Sanofi, on Thursday, November 18.