The pace of new technology is creating difficult ethical questions for insurance companies

Under the Hired Brains Advisory brand, Hired Brains will provide advisory services to chief executives, company boards and management teams on the design and development of frameworks and toolkits for ethical artificial intelligence. In addition, the firm will offer education, coaching and mentoring services related to the development of teams' and individuals' artificial intelligence capabilities.

We address these issues with focused consulting deliverables such as data-adequacy assessments, strategy-formation research, media, and events. Our services are designed to help you avoid costly mistakes, uncover hidden risks, inform your team and share best practices through the Hired Brains Ethical AI Community of Interest.

Ethical Challenges Faced by AI

Neil Raden is author of the Ethical Use of Artificial Intelligence for Actuaries, published by the SOA

How does this promising technology get it so wrong sometimes? Surprising elements of bias, invasion of privacy, discrimination, and breaches of regulatory requirements and professional precepts are common: judicial software biased against people of color, hiring models that favor only men, chatbots manipulated into spreading vile, hateful dialogue in a matter of minutes. The answer is actually quite simple: machine learning models can learn, but they are not very smart. People are much smarter, and there are preventative measures and remedies.

Machine learning is based on the concept that large enough volumes of data will yield inferencing algorithms that can then be applied to new data as it arrives. But at its core, it's just curve fitting: the algorithm tries to fit variables to the desired objective. If the developer is not careful or skilled enough with the features selected, or the data itself isn't what it appears to be, things can go haywire. In its attempt to converge on an objective function, a poorly trained model will drift from its designed features and latch onto latent variables that allow it to converge in often bizarre ways. Transparency is weak, and when this happens in production, the model can fire thousands or millions of times before anyone notices.
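A toy illustration of this failure mode, sketched in Python with scikit-learn (all data and feature names here are hypothetical): a model trained with a proxy feature that happens to track the label in the training data will weight that proxy heavily, then fail in production when the correlation breaks.

```python
# Illustrative sketch only: a model "converging" on a spurious latent signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# True driver of risk (e.g. a legitimate underwriting factor).
risk = rng.normal(size=n)
label = (risk + 0.5 * rng.normal(size=n) > 0).astype(int)

# A proxy feature that, in the training data, happens to track the label
# almost perfectly (e.g. a geographic code correlated with outcomes).
proxy_train = label + 0.1 * rng.normal(size=n)

X_train = np.column_stack([risk, proxy_train])
model = LogisticRegression().fit(X_train, label)

# In production the proxy correlation breaks: the proxy is now pure noise,
# but the model still leans on it, so accuracy collapses silently.
proxy_prod = rng.normal(size=n)
X_prod = np.column_stack([risk, proxy_prod])

train_acc = model.score(X_train, label)
prod_acc = model.score(X_prod, label)
print(f"train accuracy: {train_acc:.2f}, production accuracy: {prod_acc:.2f}")
```

Nothing in the training metrics warns of the problem; the gap only appears when the deployed model scores real transactions.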

Until recently, decision-making systems in insurance companies were built with conventional development techniques and platforms. The scope of these systems ranged from rules-based scoring for underwriting to claims adjudication to valuation models. Rules were often encoded in explicit logic that could be inspected at any time. If errant results were detected, code traces, transaction logs and persistent data in databases could be investigated.
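The inspectability of such conventional systems can be sketched in a few lines; the rules, names and thresholds below are purely hypothetical, but the point is that every decision path is explicit and auditable, unlike the opaque weights of a trained model.

```python
# Minimal sketch of inspectable, rules-based underwriting scoring.
# All rules and thresholds are hypothetical, for illustration only.
def underwriting_score(age: int, smoker: bool, bmi: float) -> int:
    """Return a risk score; every rule is explicit and can be audited."""
    score = 0
    if age > 60:
        score += 30   # rule U-1: age band
    if smoker:
        score += 40   # rule U-2: tobacco use
    if bmi > 35:
        score += 20   # rule U-3: BMI threshold
    return score

print(underwriting_score(65, smoker=True, bmi=28))  # 70
```

When such a system misbehaves, an auditor can read the rule that fired; when a machine learning model misbehaves, there is often no comparable trail.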

No organization is immune. But it doesn’t have to be this way.

Solutions

Insurance companies are in the greatest danger from mishaps with AI. Unlike other industries, everything an insurance company does affects people and is potentially subject to ethical breaches that often go undetected until the damage is evident. These breaches arise from naïve data preparation, ad hoc development, gaps in team skills, poor data quality, biased outcomes, and the growing popularity of do-it-yourself AI.

We offer solutions to the potentially extreme and insidious damage that can be caused by ethical breaches of errant AI and, unfortunately, those who exploit AI.

The AI Ethical Risk Report

We can provide an impartial assessment of the risks you face in your organization to help you understand and manage these risks.

Each report is customized for the organization that commissions it.

You can use this report:

  • To guide your implementation of AI in your organization,

  • To address risks that may have emerged unconsciously, and

  • As part of your ORSA report, and to demonstrate that you have considered the ethical risks from AI.

Training and Advisory

Conference: An invitation-only premier gathering in Santa Fe of the most thoughtful and accomplished industry people, covering topics that change from year to year.

Think Tank: A maximum of 25 people in a beautiful setting over a weekend; not scripted, but transcribed and distributed to members.

Workshop: Our two-day workshop in AI Ethics, previously offered on-site, is being reconfigured for remote delivery.

All of these events are subject to federal, state and local travel and meeting regulations for COVID-19.

If you would like to learn more, please contact us for a confidential conversation.

 

About Us

Neil Raden

Principal Analyst

For more than a quarter-century, Neil Raden has devised and implemented analytical decision-making systems for industry and government as a consultant, and has delivered context and advisory services in the application of analytics, decision management, AI and AI ethics as an author and industry analyst. He is the founder of Hired Brains Research, co-author of the book Smart (Enough) Systems, a contributing analyst at Diginomica, chairman of advisory boards at Sandia Labs, a lecturer at TDWI and a contributor to Forbes. AnalyticsWeek has named him one of the Top 100 Thought Leaders in Big Data and Analytics.

Kevin Pledge

Advisor / Contributor

Kevin is CEO of Acceptiv, a company that delivers online insurance solutions for insurance companies. He was previously CEO of Claim Analytics for several years, specializing in predictive analytics to manage insurance claims. Kevin also chaired the Society of Actuaries Professional Development Committee and is a Fellow of both the SOA and the IFoA.

Contact Us

Contact us for a confidential conversation about how we can help you.

nraden@hiredbrains.com
+1 (505) 982-6397

518 Old Santa Fe Trail
Santa Fe, NM, 87505