The last mile of model deployment is killing your data science ROI.
Over the last five years, organizations have invested heavily in predictive and analytic functions and seen the benefits. Data scientists now equal or outnumber data engineers, and demand for more frequent, always-on machine learning and AI models is on a hockey-stick trajectory.
In commercial pharma marketing organizations, the driver is omnichannel engagement. The drumbeat for “right message, right channel, right time” has not let up, and organizations are getting closer every day to levels of efficiency they only dreamed of five years ago.
However, the ROI on data science development is not meeting expectations. There are two factors that come into play when calculating the net gains of any organization’s data science and advanced analytic investment:
- Speed to market: getting a model that predicts new patient starts out the door does not depend on that model being perfect. Good is good enough, but progress is hampered because operationalizing models requires coordination across multiple groups (i.e., IS, the business, and the advanced analytics function).
- Cost: highly valued resources are managing models when they could be delivering new insights. Babysitting a model just to keep it running places an unnecessary burden on data scientists. And if the really smart people stay busy with repetitive, reactive activities, is that going to move the innovation needle in the long run?
What is the solution?
Operationalizing analytics so they can run on demand or on a schedule, paired with robust QA checks, should be its own function. Placing a wall between development and production isn’t a novel concept in software engineering, so why should business-driven models be treated any differently?
The solution to the last-mile problem is analytic operations as a service: ML/LLMOps, managed specifically for pharma. Analytic ops focuses on managing the sequence of events that gets AI (including GenAI) assets into a steady state: deploying the model from a development paradigm into a production paradigm, QA’ing model results with pharma-specific checks, and ensuring the business retains visibility into the execution state and run history of the model.
Here are some examples of these concerns:
- Model and Data Management: will the code successfully run against the data provided?
- Coding Standards: is the code self-documented? Do the scripts contain main functions?
- QA: are the models producing unreasonable results?
- Visibility: can the business see when the model was last run, with performance metrics to prove it?
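To make the QA and visibility concerns concrete, here is a minimal sketch of what an automated post-run check might look like. All names and thresholds (`MAX_PLAUSIBLE`, `MAX_NULL_SHARE`, the `qa_check` and `log_run` helpers) are illustrative assumptions, not a prescribed implementation; real values would be set per brand and per model.

```python
from datetime import datetime, timezone

# Illustrative thresholds -- assumptions for this sketch, set per model in practice.
MIN_PREDICTION = 0.0       # predicted new patient starts cannot be negative
MAX_PLAUSIBLE = 10_000.0   # sanity ceiling for a single account's score
MAX_NULL_SHARE = 0.05      # fail the run if more than 5% of scores are missing

def qa_check(predictions):
    """Run sanity checks on a batch of model scores.

    `predictions` is a list of (account_id, score) pairs; score may be None
    if the model failed to score that account. Returns a list of issues
    (empty means the batch passed QA).
    """
    issues = []
    nulls = sum(1 for _, score in predictions if score is None)
    if predictions and nulls / len(predictions) > MAX_NULL_SHARE:
        issues.append(f"{nulls} of {len(predictions)} scores are missing")
    for account_id, score in predictions:
        if score is None:
            continue
        if score < MIN_PREDICTION or score > MAX_PLAUSIBLE:
            issues.append(f"account {account_id}: implausible score {score}")
    return issues

def log_run(model_name, issues):
    """Record a run-history entry the business can inspect later."""
    return {
        "model": model_name,
        "ran_at": datetime.now(timezone.utc).isoformat(),
        "status": "FAILED_QA" if issues else "PASSED_QA",
        "issues": issues,
    }

# Example: one negative score and one missing score both get flagged.
scores = [("HCP-001", 12.0), ("HCP-002", -3.0), ("HCP-003", None)]
problems = qa_check(scores)
entry = log_run("new_patient_starts_model", problems)
print(entry["status"], problems)
```

The point is not the specific checks but the separation of duties: the model runs on a schedule, the checks gate its output, and the run log gives the business an auditable answer to “when did this last run, and did it pass?”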
Once this is in place, there are significant benefits to having the model deployment function always ready to go. The ROI case is simple: the faster a model is deployed, the more value it delivers, and the less time models spend being managed by data scientists, the more those data scientists can tackle newer, more innovative problems. All of this is in the context of the sunk cost of creating the model in the first place.
The opportunity cost of delaying deployment is high while a model sits in a queue waiting to be pushed out, so hastening deployment is critical. There is also the opportunity cost of not finding out as quickly as possible whether the predictions being served are actually being leveraged by the business.
Do you agree? Disagree?