
What Life Sciences can learn about data from Financial Services

17th February 2022

As the capabilities of machine learning increase, so does interest in the questions of regulation and machine oversight. This is no unhappy circumstance either: without proper practice, AI-driven decisions present all kinds of ethical risks, particularly when the health and security of people are involved.

On the surface, life sciences and financial services have little in common. But when it comes to the consequences of AI and decision engines, there’s more than a little overlap. Both handle sensitive information, both move that information around, and both give rise to the same question:

What would you ask of a system that is using sensitive data to make decisions that significantly affect your life?

In financial services, regulators began to take an interest in automated decision-making, and the sector responded. Now that life sciences is having to respond in a similar way, it only makes sense to learn from the measures financial services took.


Full automation is not the endgame


The growing capabilities of AI are impressive, perhaps invaluable. AI can now be trained to analyse and recognise symptoms from digitised pathology images, even diagnose cancer. But as regulators, policymakers and academic experts have identified, this is not technology that can be given free rein.

When a consequential medical decision is taken by an AI, patients have a right to an explanation. In this regard, medical decisions are similar to lending decisions within financial services, where the consumer's right to an explanation is now enshrined within legislation like the EU's GDPR (Recital 71).

In the consumer financial services industry, risk directors, chief risk officers and those responsible for setting policies and executing them have been wrestling with three main challenges.

First, providing evidence for an auditor that their lending policy was implemented successfully. Second, explaining to someone, in detail, why a decision was made. And third, combining human explanation with AI analysis so that human intelligence always forms guard rails for the machine-led statistical models that feed into a decision.

In practice, this means a lender will never look at a risk model and treat its word as gospel. AI should only ever be one part of the decision-making process - and even then, the final decision always needs to remain compliant and explainable by a human.

So AI can’t be left to its own devices. Humans have to stay involved, otherwise you’ll end up with a black box: a system whose workings you cannot see or understand. That is an obscurity neither financial services nor life sciences can afford.

The narrative out there is that in healthcare, professionals are beginning to simply trust the machine-led model. Exaggerated or not, this is what the regulators are concerned about - and this is what life sciences must ensure it does not do.

It’s essential then to ensure that systems support human analysis, intervention and review of each and every decision that is made. This level of auditability is crucial, since it will mean the developers of said algorithms can be properly sensitive to the rights and individual circumstances of patients.
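
To make this concrete, here is a minimal, hypothetical sketch in Python of what that kind of human-in-the-loop decisioning can look like: a statistical model only contributes a score, human-authored rules decide the outcome (routing ambiguous cases to a person), and every decision is appended to an audit log together with its explanation. The thresholds, field names and functions are illustrative assumptions, not a description of any particular product.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class Decision:
    case_id: str
    model_score: float  # output of the statistical model (assumed to be 0..1)
    outcome: str        # "approve", "decline" or "refer_to_human"
    reason: str         # human-readable explanation of the rule that fired
    decided_at: str


def decide(case_id: str, model_score: float) -> Decision:
    """Human-authored rules wrap the model; the model never decides alone."""
    if model_score >= 0.9:
        outcome, reason = "approve", "Score above approval threshold set by the policy owner"
    elif model_score <= 0.2:
        outcome, reason = "decline", "Score below minimum threshold set by the policy owner"
    else:
        # Anything ambiguous is routed to a person rather than auto-decided.
        outcome, reason = "refer_to_human", "Score falls in the manual-review band"
    return Decision(case_id, model_score, outcome, reason,
                    datetime.now(timezone.utc).isoformat())


def record(decision: Decision, log_path: str = "decision_audit.jsonl") -> None:
    """Append every decision, with its explanation, to an append-only audit log."""
    with open(log_path, "a") as log:
        log.write(json.dumps(asdict(decision)) + "\n")


if __name__ == "__main__":
    d = decide("case-0001", model_score=0.55)
    record(d)
    print(d.outcome, "-", d.reason)  # refer_to_human - Score falls in the manual-review band
```

The useful property is that every entry in the log carries a human-readable reason, so a reviewer or auditor never has to reverse-engineer the model to understand why a case went the way it did.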


You need to be able to turn over every stone

When the stakes are as high as they are in healthcare and financial services, meticulous detail and compliance matter. If a non-compliant but innovative AI results in a loan that cannot be repaid, or a symptom left undiagnosed, the innovation doesn’t count for much.

So it’s essential to know where data comes from and what has been done to it along the journey. Regulators have long expected financial services organisations to be able to trace backwards from a decision all the way to the data that decision was based on. 

Now in healthcare, too, the expectation is that if you’re using machine learning to make a decision, you need to be able to walk the path in reverse. To comb back through the data to see how a decision was made. 
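
As a rough illustration of what "walking the path in reverse" can require, the sketch below stores, alongside each decision, the identifiers and fingerprints of the source records plus the processing steps applied, so an auditor can trace from a decision identifier back to the data it rests on. All names here are assumptions made for the example, not a reference to any specific system.

```python
import hashlib
import json


def fingerprint(record: dict) -> str:
    """Stable hash of an input record, so the exact data used can be verified later."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


def make_lineage(decision_id: str, source_records: list, steps: list) -> dict:
    """Bundle everything an auditor needs to retrace a decision back to its inputs."""
    return {
        "decision_id": decision_id,
        "inputs": [
            {"record_id": r.get("id"), "sha256": fingerprint(r)} for r in source_records
        ],
        "processing_steps": steps,  # e.g. deduplication, unit normalisation, model version
    }


if __name__ == "__main__":
    records = [{"id": "lab-42", "haemoglobin": 13.1}, {"id": "scan-7", "finding": "nodule"}]
    lineage = make_lineage(
        "decision-0001",
        records,
        ["deduplicated source records", "normalised units", "scored with model v1.3"],
    )
    print(json.dumps(lineage, indent=2))
```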

It’s here that healthcare stumbles on an issue all of its own. Although financial services records are considered the property of the lender, medical records belong to the patient, at least in the UK. When someone has the freedom to change their mind about sharing their data at any time, this can pose a problem. 

It means data processors need to be able to check, at any time, that people have not revoked permission to use the data a machine-led model is drawing its conclusions from - and they need a decision engine that takes such considerations into account.
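
A hedged sketch of what such a consent check might look like in practice: before any record reaches the model, the pipeline asks a consent store whether permission is still in place, and records whose subjects have revoked consent are excluded and flagged. The in-memory consent store and its fields are assumptions for illustration; a real system would query a dedicated consent service.

```python
# Illustrative in-memory consent store; a real system would query a consent service.
CONSENT = {
    "patient-001": {"granted": True, "revoked_at": None},
    "patient-002": {"granted": True, "revoked_at": "2022-01-30T09:00:00+00:00"},
}


def has_active_consent(patient_id: str) -> bool:
    """A record may only feed the model if consent exists and has not been revoked."""
    entry = CONSENT.get(patient_id)
    if entry is None or not entry["granted"]:
        return False
    return entry["revoked_at"] is None


def filter_usable(records: list) -> list:
    """Drop, and flag, any record whose subject has withdrawn permission."""
    usable = []
    for record in records:
        if has_active_consent(record["patient_id"]):
            usable.append(record)
        else:
            print(f"Excluded {record['patient_id']}: consent missing or revoked")
    return usable


if __name__ == "__main__":
    batch = [{"patient_id": "patient-001", "value": 1.2},
             {"patient_id": "patient-002", "value": 0.7}]
    print(filter_usable(batch))  # only patient-001's record remains
```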


Guard rails need to go beyond the development process

As in financial services, life sciences cannot rely on the development process to deal with all the issues. There are a number of conversations about the concept of AI “guard rails” right now, but most of them revolve around charters and processes which govern how data is sourced and used, and how AI models are trained and evaluated. As the financial services sector has shown, though, charters and processes are not enough on their own.

Most AI models are inherently "black boxes", which means that no guarantees can be made about their behaviour under every possible combination of inputs. For this reason, "production" guard rails are also needed to ensure that strange or outlandish decisions are never acted on. Without them, no matter how well-designed the technology, bad decisions remain a real possibility.
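
By way of illustration only (the limits and checks below are assumptions, not a prescribed standard), a production guard rail can be as simple as validating the model's raw output and its inputs against hard, human-set bounds before it is allowed to influence a decision, and escalating anything outside them to a person.

```python
def guarded_decision(model_output: float, patient_age: int) -> str:
    """Production guard rails: escalate outputs that fail basic sanity checks,
    regardless of how the model arrived at them."""
    # Hard limit: a probability-like score must stay within [0, 1].
    if not 0.0 <= model_output <= 1.0:
        return "escalate: model output outside its valid range"
    # Domain sanity check: inputs the model was never validated on go to a human.
    if patient_age < 0 or patient_age > 120:
        return "escalate: implausible input, refer to a clinician"
    # Only when the guard rails pass does the score feed the normal decision logic.
    return "proceed: score accepted for downstream decisioning"


if __name__ == "__main__":
    print(guarded_decision(1.7, 45))   # escalate: model output outside its valid range
    print(guarded_decision(0.62, 45))  # proceed: score accepted for downstream decisioning
```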

To comply with what regulators are asking, whether you are working with decision support software or decision engines, you need to go beyond the development process. That’s why we’ve developed Ruleau - an affordable decision engine that stays interactive. It works in tandem with human intelligence, so that you can stay transparent and compliant, and ultimately provide a better service.


To find out more about how you can make the most of your data and AI, don’t hesitate to get in touch.