A solution to AI bias and discrimination in insurance
To address the problem of indirect discrimination, Dr Huang's study reviews anti-discrimination regulations from an international perspective, including the US, the EU, and Australia. The study places these regulations on a spectrum ranging from no regulation, through restrictions (or prohibitions) on protected or proxy variables and the disparate impact standard, to community rating. The authors note that insurance regulations vary by line of business and jurisdiction.
The authors then match the different regulations with fairness notions, covering both individual fairness and group fairness. These criteria aim to achieve fairness at either the individual or the group level, and an inevitable conflict may exist between the two, explains Dr Huang.
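The conflict between the two fairness notions can be seen in a toy example. Everything below is an illustrative sketch with invented numbers, not one of the study's pricing models: a premium proportional to risk is individually fair, but forcing equal mean premiums across groups pulls identical-risk individuals apart.

```python
import numpy as np

# Toy illustration (all numbers invented) of the tension between group
# fairness and individual fairness in premium setting.
group = np.array([0, 0, 1, 1])
risk = np.array([0.2, 0.4, 0.2, 0.8])    # persons 0 and 2 have identical risk
premium = 100 * risk                     # risk-based premium: [20, 40, 20, 80]

# Group fairness (equal mean premium per group): shift each group's premiums
# so both group means equal the overall mean.
group_mean = np.array([premium[group == k].mean() for k in (0, 1)])
adjusted = premium - group_mean[group] + premium.mean()   # [30, 50, 10, 70]

# Group means now match, but the identical-risk persons 0 and 2 pay
# 30 vs 10: individual fairness is violated.
```

The adjustment here is the simplest group-mean shift; the point carries over to more realistic models whenever the groups' risk distributions differ.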
Finally, the authors implement the fairness criteria in a series of existing and newly proposed anti-discrimination insurance pricing models. They compare the outcomes of the different pricing models via the fairness-accuracy trade-off and analyse the implications of each model for customer behaviour and cross-subsidies.
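A fairness-accuracy trade-off of this kind can be sketched on toy data. The data, predictor, and blending scheme below are illustrative assumptions, not the study's models: blending an unconstrained predictor with its group-mean-equalised version drives the group premium gap to zero while the prediction error rises.

```python
import numpy as np

# Hedged sketch of a fairness-accuracy trade-off on invented data.
rng = np.random.default_rng(1)
g = rng.integers(0, 2, 500)                    # protected attribute (two groups)
y = 100 + 20 * g + rng.normal(0, 5, 500)       # true costs differ by group
pred = y + rng.normal(0, 2, 500)               # accurate but group-disparate model
means = np.array([pred[g == k].mean() for k in (0, 1)])
pred_fair = pred - means[g] + pred.mean()      # parity-adjusted predictor

results = []
for t in (0.0, 0.5, 1.0):                      # t = weight on the fair predictor
    p = (1 - t) * pred + t * pred_fair
    mse = ((y - p) ** 2).mean()                # accuracy (lower is better)
    gap = abs(p[g == 0].mean() - p[g == 1].mean())  # group unfairness
    results.append((mse, gap))
    print(f"t={t:.1f}  MSE={mse:7.2f}  group premium gap={gap:6.2f}")
```

Because the groups genuinely differ in expected cost here, no point on the curve attains both zero gap and minimal error, which is what makes the comparison across pricing models informative.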
How fairness criteria tackle discrimination
While changing the narrative around discrimination and insurance could be an important step, Dr Huang’s study ultimately finds that anti-discrimination models must also be based on these fairness criteria to prevent discrimination. She explains: “In our paper, we discussed four different anti-discrimination insurance pricing models corresponding to four fairness criteria to mitigate indirect discrimination. And we are studying more fairness criteria, their welfare implications, and assessment tools for regulators and insurers to use.”
“There are three ways to mitigate indirect discrimination: pre-processing (mitigating data bias before modelling), in-processing (mitigating bias during model training), and post-processing (mitigating bias by processing the model output). Depending on specific anti-discrimination regulations, insurers could choose the appropriate strategies in practice, based on the link created between the regulation, fairness criteria, and insurance pricing models.”
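The three mitigation stages can be sketched on toy data. The data, variable names, and specific methods below are illustrative assumptions, not the models from the study: pre-processing strips group information from a rating factor, in-processing penalises the group gap during fitting, and post-processing shifts the fitted outputs.

```python
import numpy as np

# A minimal sketch of pre-, in-, and post-processing bias mitigation.
rng = np.random.default_rng(0)
g = rng.integers(0, 2, 200)                  # protected attribute (two groups)
x = rng.normal(size=200) + 0.8 * g           # rating factor correlated with g
y = 2.0 * x + rng.normal(size=200)           # observed claim cost

# Pre-processing: remove group information from the data before modelling,
# here by centring the rating factor within each group.
x_fair = x - np.array([x[g == k].mean() for k in (0, 1)])[g]

# In-processing: fit a slope while penalising the group gap in predictions.
def fit(lmbda):
    def cost(b):
        pred = b * x
        gap = pred[g == 0].mean() - pred[g == 1].mean()
        return ((y - pred) ** 2).mean() + lmbda * gap ** 2
    return min(np.linspace(0.0, 3.0, 301), key=cost)

b_plain, b_penalised = fit(0.0), fit(10.0)   # the penalty shrinks the slope

# Post-processing: shift each group's model output to a common mean.
pred = b_plain * x
pred_pp = pred - np.array([pred[g == k].mean() for k in (0, 1)])[g] + pred.mean()
gap_pp = abs(pred_pp[g == 0].mean() - pred_pp[g == 1].mean())
```

Which stage is appropriate depends, as the quote notes, on what the applicable regulation constrains: the data insurers may use, the model they may fit, or the prices they may charge.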
Dr Huang’s body of work to date highlights that tackling this problem will require actuaries to collaborate and discuss solutions with experts from multiple disciplines. “Academics and external partners from a wide range of disciplines (such as actuarial science, economics, computer science, law, political science, philosophy, and sociology) need to work together to tackle difficult research problems, including algorithmic ethics and discrimination (certainly including, but not limited to, insurance),” she says.
“We expect to see more structured multidisciplinary teams of this nature emerging to tackle large societal problems.”
This article first appeared on the UNSW Business School website.