
Building a Code for the Ethics of AI

25th April 2018
Unlocking business value from big data is probably the biggest challenge any company working on an analytics implementation can face. For any insurance industry player intent on designing products fit for the future, and fit for a diverse and ethically driven organisation, there is a need to consider the ethical component built into algorithms and scores.
The financial regulator, the FCA, has published its vision for the use of AI and big data in insurance, along with some important next steps. It has said the sophistication of AI has great potential for social good, and for solving many of the problems the regulator faces from a supervisory perspective. There have already been some notable successes, for example in supervising large populations of small brokers and other firms, effectively being able to see the “wood from the trees”.
We commented in a previous blog article about how the global standard-setting body, the IAIS (International Association of Insurance Supervisors), wants to reshape regulation in a world of pay-as-you-go insurance, balancing the risks and benefits of AI and innovation.
Stefan Hunt, Head of Behavioural Economics and Data Science at the FCA, said in a recent speech that, from the point of view of the regulator, AI and machine learning are starting to make an impact on the tools used for compliance, for spotting the bad operators, for estimating demand, for driving efficiencies, and for tackling many other problems.
Certainly all of us involved in insurance need to guard against using these new techniques blindly.
AI for social good
We increasingly live in a digital world, and commercial companies are not the only beneficiaries. LexisNexis Risk Solutions in the US, for example, operates the ADAM programme (Automated Delivery of Alerts on Missing Children). To date, the programme has resulted in the recovery of over 175 missing children. ADAM uses advanced technology to distribute missing child posters to police, news media, schools, businesses, medical centres and other recipients within a specific geographic search area, such as a state, an area code or a combined search area near a city and area code.
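To make the idea of a geographic search area concrete, here is a minimal sketch of how recipients might be selected within a radius of a last known location, using a simple great-circle distance check in plain Python. The recipient list, coordinates and 50-mile radius are invented for illustration; this is not ADAM's actual implementation.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959 * asin(sqrt(a))  # Earth radius is roughly 3,959 miles

# Hypothetical recipients (police, schools, media outlets) with coordinates.
recipients = [
    {"name": "Precinct 12", "lat": 33.749, "lon": -84.388},
    {"name": "Northside School", "lat": 34.050, "lon": -84.300},
    {"name": "WXYZ News", "lat": 32.084, "lon": -81.100},
]

# Select everyone within a 50-mile radius of the last known location.
alert_lat, alert_lon = 33.753, -84.386
targets = [r for r in recipients
           if haversine_miles(alert_lat, alert_lon, r["lat"], r["lon"]) <= 50]
print([r["name"] for r in targets])  # the two nearby recipients are selected
```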
Our services also help governments, and in the US state authorities, verify personal identities and tackle tax fraud, estimated at $32 million in the State of Georgia over just two tax years. The Florida Department of Children and Families (DCF) has saved more than $12 million in cost avoidance by using LexisNexis technology to help prevent fraud and drive efficiencies.
For public bodies, the benefits of AI and advanced analytics go beyond detecting identities that have been stolen or fabricated. Data services help these public authorities move past the historical assumption that matching their own records against other government records is enough to fight fraud.
Artificial intelligence for insurers means the boundary can be moved from risk assessment based on aggregate modelled behaviour, towards risk assessment based on observed behaviour learned from the individual.
Good product design and protecting the consumer, as much as regulation, is ultimately about recognising patterns in data. Machine learning helps us find those patterns, in a complex environment where the volume of business data is outstripping the cognitive abilities of any single human.
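As a toy illustration of this kind of pattern-finding at supervisory scale, consider a regulator scanning a large population of firms for unusual behaviour, as mentioned above. The sketch below uses scikit-learn's isolation forest on invented features (complaint rate, loss ratio, commission level); the data, feature choice and contamination level are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented features for 500 small brokers: complaint rate, claims-loss ratio
# and commission level. Real supervisory data would be far richer.
firms = rng.normal(loc=[0.02, 0.65, 0.15], scale=[0.01, 0.10, 0.05], size=(500, 3))
firms[:5] = [[0.20, 1.40, 0.45]] * 5  # a handful of firms behaving very differently

# Fit an isolation forest: a label of -1 marks the firms whose pattern of
# behaviour is hardest to explain, i.e. candidates for closer attention.
model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(firms)
print("flagged firms:", np.where(labels == -1)[0])
```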
GDPR brings some of the answers, and more questions
These challenges and opportunities, for designing out any inherent biases that may already exist in insurance and financial services, were discussed recently at the Smart IOT World Conference in London. If we accept that bias does exist in data – for example the Gender Directive was intended to design such bias out of the system – can there ever be such a thing as zero bias in the data?
How do we acknowledge bias, and how do we work solutions into business decisions? How will a greater understanding of emotion and the workings of the human brain help to eradicate bias? How do we design systems that embed some kind of value system?
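Acknowledging bias starts with measuring it. One common, simple starting point is a demographic parity check: compare outcome rates across groups and track the gap between them. The sketch below uses pandas on invented quote decisions; the groups, column names and data are all assumptions for illustration.

```python
import pandas as pd

# Hypothetical quote decisions with a protected attribute attached for auditing.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "accepted": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Acceptance rate per group; the demographic parity gap is the spread between them.
rates = df.groupby("group")["accepted"].mean()
parity_gap = rates.max() - rates.min()
print(rates.to_dict())                  # {'A': 0.75, 'B': 0.25}
print(f"parity gap: {parity_gap:.2f}")  # 0 would mean equal acceptance rates
```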
At the conference, Sue Daley, Head of Cloud, Data Analytics and AI at Tech UK, said that to a large extent the ethical issues around AI come down to transparency, accountability and responsibility.
“There are a lot more questions than answers right now,” she commented. “But I don’t see this as a tick box exercise… We can expect the technology to move on and it’s likely that our human perspective will also move on.”
Sue Daley commented on how we can picture what a benign “puppy dog” model of product design would look like in the world of advanced data analytics.
“Where we are right now is that the questions are mostly around data privacy and protection [and legitimate uses of data],” she added.
“GDPR will answer a lot of those questions, but it will also raise some questions.”
Transparency, accountability and responsibility
Alexandra Vidyuk, Head of Data Platforms at HSBC, commented that products can easily introduce bias by using data sets that are already inherently biased. On the positive side, though, AI is a good way of uncovering unethical behaviours.
“It is a chance for us to improve our trust and business processes,” said Alexandra Vidyuk, adding that, as technology gradually becomes more autonomous and sophisticated, there is a requirement to ensure the system has its own value system built in.
Michael Natsuch, Global Head of Artificial Intelligence at Prudential, commented that defining regulation for the world of AI is a difficult topic, since there can be good and bad types of regulation, depending on the goals we want to achieve.
He said there is a need to bring more clarity to the ethical rules for AI – possibly by way of federated learning – so as to build more confidence in the services we are already building.
“It could be for AI as it is in medicine,” said Michael Natsuch, “where the Hippocratic Oath upholds specific ethical standards and it helps practitioners in daily practice.”
But there are also open questions about where such rules apply. Should the AI reside on the sensor or on the endpoint? Or on the smartphone, or somewhere else where the individual can have more control over their interactions?
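Federated learning, which Michael Natsuch raised, speaks directly to that question: models are trained where the data lives – on the sensor or smartphone – and only parameter updates travel to a central server. Below is a minimal sketch of federated averaging on a linear model, written in plain NumPy with invented client data; it illustrates the technique, not any firm's production system.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

def make_client(n):
    """Invented local data that never leaves the 'device'."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client(n) for n in (40, 80, 120)]
w = np.zeros(2)  # global model held by the server

for _round in range(50):
    updates, sizes = [], []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):  # a few local gradient steps, on-device
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        updates.append(w_local)
        sizes.append(len(y))
    # The server averages the models, weighted by local data size;
    # only parameters travel, never the raw data.
    w = np.average(updates, axis=0, weights=sizes)

print("learned:", w.round(2), "target:", true_w)
```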
Dr Berndt Müller, CSAIP Research Leader at the University of South Wales, commented that the challenges and questions in relation to AI are many, and there is potential for mistakes.

“Who will be legally responsible for AI mistakes? It’s going to be the same people in the organisation, the compliance and data protection people,” commented Dr Müller.
Overall, there is some agreement that the more AI we use, the better the ethical outcome will be.
This is certainly true if we consider some specific examples, such as using voice analytics to help vulnerable insurance customers in the call centre setting, or using advanced analytics as a better predictor of insurance customer needs and a driver of fairer quoting based on an individual’s real-life circumstances.
For anyone involved in training an artificial intelligence model, there are some important areas in which to evaluate demographics and diversity: yourself, your data and the annotators (labellers) of your training data. Using a purely agnostic data aggregator, or data exchange, can go a long way towards identifying patterns in relation to bias and achieving a full cross-industry view.
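As a sketch of what auditing “your data and your annotators” might look like in practice, the snippet below checks whether a training set is balanced across subject demographics, and whether annotator groups label the same kinds of subjects differently. The column names and data are invented; a real audit would cover many more dimensions.

```python
import pandas as pd

# Hypothetical training labels with annotator and subject demographics attached.
labels = pd.DataFrame({
    "annotator_group": ["X", "X", "Y", "Y", "Y", "X", "Y", "X"],
    "subject_group":   ["A", "B", "A", "B", "A", "A", "B", "B"],
    "label":           [1,   0,   1,   1,   0,   1,   0,   0],
})

# 1) Is the data itself balanced across subject demographics?
print(labels["subject_group"].value_counts(normalize=True))

# 2) Do annotator groups label the same subject groups differently?
pivot = labels.pivot_table(index="annotator_group",
                           columns="subject_group",
                           values="label", aggfunc="mean")
print(pivot)  # large gaps between rows suggest annotator bias worth investigating
```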
Artificial intelligence allows us to leverage the ever-increasing pools of data available. It allows us to extract more information and insight. And ultimately this makes insurance more efficient and effective.

This article is courtesy of LexisNexis; the original can be found here.

 

Tim Kelly

Tim is a highly qualified Independent Engineer with over 20 years’ experience as an Engineering Assessor of damaged vehicles.
