
Insurance AI Use Cases That Are Already Here

June 17, 2025


Kevin Glasgow

Claims Expert & Advisor


November 30, 2022, was the day that, for many people, the world changed dramatically, or at least their perception of it. This was the day ChatGPT was released to the public,[1] and the world took notice, wondering at what it could do. The technology behind ChatGPT was impressive, but it was also the natural progression of the growth in computing technology and power. Indeed, Moore's Law suggests that computing power doubles roughly every two years.


For many of us, AI-based virtual assistants such as Copilot and Siri have become part of the daily toolbox. Insurance teams are already seeing the impact, from summarizing 1,000-page medical records to flagging synthetic fraud. But while the technology is expanding fast, many carriers have not caught up, and risks are compounding under the surface. So what is (and is not) artificial intelligence, and how can it be used? In this article, we will explore the types of artificial intelligence and their use cases in insurance. There are also several gorillas in the room that will have to be addressed.


Background

The “holy grail” in computing, and the theme of many sci-fi films, is a computer that can mimic human thinking and reasoning. A test for measuring artificial intelligence was proposed by Alan Turing in 1950. Turing proposed that the measure of artificial intelligence was when a human interrogator could not distinguish between a human and a computer based on the responses to questions posed. This became known as the “imitation game,” since the test was to determine whether the computer was simply “imitating” from its learnings or whether its responses reflected independent thought. Some feel that current chatbots have reached the point where it is difficult to distinguish between a system and a human.

The type of AI that can reason and learn on its own is Artificial General Intelligence (AGI), and some speculate that this will be achieved as early as the 2030s. AGI mimics human intelligence and theoretically will be able to apply “observations” across different fields to reach conclusions.

As helpful as today's AI is, the available technology is “specialized AI,” also known as Artificial Narrow Intelligence (ANI). ANI is designed for a specific purpose and can only perform tasks within the domain for which it was designed. This type of AI includes virtual assistants like Siri and Alexa, image and speech recognition systems, large language models like ChatGPT and Grok, and self-driving systems found in some vehicles. Generative AI, not to be confused with AGI, fits into this category and represents a subset of AI that can generate content such as art, music, text, and more based on observations. Some AI systems, for example, can even recreate the image and actions of actors, write movie scripts, and create music, which has been the subject of controversy in the entertainment industry. Generative AI can also be used by criminals in their exploits, which will be addressed in a later section of this article.

Many of the ANI systems used today utilize large language models (LLMs) that can take natural language input through voice or text and then interpret that input and its context to form a response; ChatGPT and Grok are examples. LLMs are also capable of summarizing data, whether it be a meeting transcript or a set of medical records.

The precursors to LLMs were Machine Learning (ML) models and Rules Engines (REs). In machine learning, a system may observe tasks performed by a human and ultimately learn to perform those same tasks by mimicking the human; this generally requires a large dataset from which the ML model can learn. REs are less sophisticated and take a more structured approach to automating tasks. While powerful for certain tasks, REs require a human to build explicit logic around a task that the computer then follows.
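To make the contrast concrete, here is a minimal, purely illustrative sketch of the rules-engine approach: a human writes the explicit logic, and the system does nothing beyond it. The field names and thresholds are hypothetical. An ML model, by contrast, would infer a similar decision boundary from thousands of past underwriting decisions rather than from hand-written rules.

```python
def route_application(applicant: dict) -> str:
    """Hypothetical rules-engine triage: explicit, human-authored rules only."""
    if applicant["age"] > 65 or applicant["bmi"] > 40:
        return "refer_to_senior_underwriter"   # rule 1: hard referral criteria
    if applicant["smoker"]:
        return "rate_as_smoker"                # rule 2: apply smoker rating
    return "standard"                          # default: no rule fired

# Example usage
print(route_application({"age": 52, "bmi": 27, "smoker": False}))  # -> "standard"
```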


Use cases in insurance 

There are numerous use cases in insurance, including AI-driven chatbots and phone triaging systems that most people have experienced, but this article will focus on the more specific uses within Underwriting and Claims, including the role of governing documents.


Underwriting 

One of the most time-consuming and arduous tasks in underwriting is reviewing medical records and then applying the rules in an underwriting manual to reach a decision on whether to accept, reject, or rate a policy application. Reviewing medical records and related documents to confirm an applicant's insurability has long been a standard requirement, but it is a slow process, and Electronic Health Records (EHRs) are making manual review even harder given the duplication often found in them. As a result, many insurers have developed products that reduce the need for these time-consuming requirements, albeit potentially at a cost to mortality or morbidity experience, and ultimately to the consumer. This is a perfect example of where AI can assist in the underwriting process.

ANI LLMs can be trained to read medical records, lab reports, prescription drug reports, and similar information in the context of what an underwriter would consider important. ANI can quickly prepare summaries of the medical providers seen, the medical conditions presented and diagnosed, the timing of events, smoking history, and so on. Instead of spending hours reading and interpreting the records, the underwriter receives a summary that is easy to read and interpret in the context of an underwriting review, freeing underwriters to apply their expertise to more complex issues. Some insurers estimate the savings are in the 40% to 70% range.
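As a minimal sketch of what such a summarization step might look like in practice (assuming an OpenAI-compatible API; the model name, prompt, and helper function are illustrative, not a description of any particular vendor's product), the raw record text is passed to the model with an underwriting-focused instruction:

```python
from openai import OpenAI  # any LLM client with a chat-completion interface would do

client = OpenAI()  # assumes the API key is provided via the environment

UNDERWRITING_PROMPT = (
    "You are assisting a life insurance underwriter. Summarize the medical records below, "
    "listing providers seen, diagnosed conditions with dates, smoking history, medications, "
    "and any findings an underwriting manual would likely flag. Cite the source page for each item."
)

def summarize_records(record_text: str) -> str:
    """Return an underwriting-focused summary of raw medical-record text (illustrative)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": UNDERWRITING_PROMPT},
            {"role": "user", "content": record_text},
        ],
    )
    return response.choices[0].message.content
```

In production, the same pattern would be wrapped with page-level citations, confidence checks, and human review, since, as discussed later, LLM output must always be verified.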

Not only can AI summarize records; it can also read applications and underwriting manuals. This allows a properly trained AI to provide quick, valuable insights into the medical conditions found in the records in the context of the application and the underwriting manual. For example, an applicant may misrepresent that he or she does not smoke or does not have a specific medical condition. AI can quickly note any contradicting evidence in the medical records and suggest the significance of the misrepresentation or non-disclosure based on the underwriting manual. Humans may still be needed to reach a decision, but AI can quickly surface the relevant information and context.
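A simplified sketch of that comparison step might look like the following. The field names are hypothetical, and in practice the extracted findings would come from a summarization step like the one sketched above rather than being hand-entered:

```python
def flag_discrepancies(application: dict, findings: dict) -> list[str]:
    """Compare application disclosures against findings extracted from the records."""
    flags = []
    if not application["smoker_declared"] and findings["smoking_evidence"]:
        flags.append("Possible non-disclosure: records reference tobacco use.")
    for condition in findings["conditions"]:
        if condition not in application["disclosed_conditions"]:
            flags.append(f"Condition found in records but not disclosed: {condition}")
    return flags

# Example usage with hypothetical data
print(flag_discrepancies(
    {"smoker_declared": False, "disclosed_conditions": ["hypertension"]},
    {"smoking_evidence": True, "conditions": ["hypertension", "type 2 diabetes"]},
))
```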

Taking this a step further, AI can use the techniques above to triage underwriting cases to the appropriate team or person. For example, an application or the applicant's medical records may show that the applicant has a condition that, per the underwriting manual, requires review by a senior underwriter or the Medical Director. AI can make this determination and route the case accordingly.

All of this is designed to relieve the underwriter of the mundane and tedious task of reviewing every page of every record, saving valuable time and improving compliance and consistency with underwriting standards. As an added bonus, some AI systems are multilingual and can translate records from other languages, saving additional time and effort.


Claims 

The use case in claims is very similar. A properly trained AI can be engaged as soon as a claim is received. Chatbots can collect key information for routing the claimant to the appropriate team or claims examiner, and AI systems can compare the nature of the loss to the policy language. Just as in underwriting, the claim requirements (claim forms, medical records, accident reports, lab reports, coroner reports, etc.) can be read to provide the claims examiner with key information such as the date of loss, nature of the loss, limitations, and expected recovery. Documents that contain data inconsistent with the claim or the benefit eligibility requirements can be highlighted for the examiner to review. Additional sources such as medical dictionaries, occupational title dictionaries, and prognosis tools can also be put at the AI system's disposal so that it can provide valuable, time-saving insights to the claims examiner. AI systems can also be used to find patterns in claims data that may be indicative of fraudulent activity.
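As one hedged illustration of that last point, an off-the-shelf anomaly detector such as scikit-learn's IsolationForest could score an incoming claim against historical claims. The feature set and values below are purely hypothetical, and a production fraud model would be far richer:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical claim features: [days since policy issue, claim amount, prior claims by claimant]
historical_claims = np.array([
    [2400, 50_000, 0],
    [3100, 75_000, 1],
    [1800, 60_000, 0],
    [2900, 45_000, 0],
    # ...in practice, thousands of historical claims
])

model = IsolationForest(contamination=0.05, random_state=0).fit(historical_claims)

new_claim = np.array([[45, 500_000, 3]])  # very early, very large, repeat claimant
if model.predict(new_claim)[0] == -1:     # -1 means the claim looks anomalous
    print("Claim scored as anomalous; route to the fraud review team.")
```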


As an example of how AI could be used in claims, consider a life insurance claim received on a policy within two years of issuance. An AI system could be engaged at inception as part of a web-based submission process or once the claim forms are received. The AI system could route the claim to a more senior claims examiner given the early nature of the claim. Along with the referral, the AI system could provide the examiner with the policy history, including the underwriting notes, the medical history known at the time of underwriting, policy changes, and the parties to the contract. It could then suggest medical providers from which to request records. Once the records and other requirements are received, the AI system could summarize the medical records obtained and compare them with the information collected at the time of underwriting. If there are discrepancies that predate the application, such as a smoking history or undisclosed medical treatment, the AI system could highlight them in a summary and provide an interface that links directly to the relevant source documents. The AI system could also review the images of the proofs of loss, in this case a death certificate, for traits that could be indicative of an AI-generated image.
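The initial routing step in that scenario could be as simple as a contestable-period check. The sketch below is illustrative only; the period length and queue names are assumptions, not a description of any particular insurer's workflow:

```python
from datetime import date

CONTESTABLE_PERIOD_DAYS = 730  # roughly the two-year contestable period

def route_death_claim(policy_issue_date: date, date_of_death: date) -> str:
    """Route a life claim to a senior queue when the loss falls within the contestable period."""
    if (date_of_death - policy_issue_date).days <= CONTESTABLE_PERIOD_DAYS:
        return "senior_examiner_queue"  # early claim: pull the underwriting file, request records
    return "standard_queue"

# Example usage
print(route_death_claim(date(2024, 3, 1), date(2025, 6, 10)))  # -> "senior_examiner_queue"
```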


Contracts and Treaties 

Overarching both claims and underwriting are the governing documents. These documents include:


  • reinsurance treaties which specify an insurer’s responsibility to its reinsurers, 


  • underwriting manuals which define the actions required by an underwriter based on underwriting findings, and 


  • state regulations and requirements which provide the legal standards that must be met when adjudicating claims. 


A properly trained AI system can incorporate these documents into its knowledge base so that insights and context can be added to the summaries it generates.


Based on a survey conducted by Evadata, an organization that works with life and health insurers, nearly 70% of responding insurers are using some form of AI, with enterprise AI being the most prevalent usage. Although the potential within claims adjudication is great, less than 1% are using AI in their fraud detection models. This is vastly different from the financial services industry, which uses AI to analyze vast numbers of transactions and identify activity inconsistent with a consumer's habits.


Nefarious uses of AI

For all the benefits AI promises, there will always be those who will exploit changes in technology for their own gain.[2]


Hollywood has been at the forefront of using computers to create digital content; indeed, the late Peter Cushing was digitally resurrected for the movie Rogue One in 2016. The growth in these capabilities over the past decade has been exponential, and they are now easily available to the public. The implications for misuse are clear. Generative AI can be, and has been, used to create a range of fabricated documents, including medical records and X-rays, identification documents, financial statements, and property damage photos, all in support of fraudulent claims. Taking this a step further, today's AI systems can draw on all of the information available on the internet, so legitimate document formats can be mimicked. In past decades, a faked death certificate may have been easy to spot to the trained eye; that is no longer the case. Some companies are responding by requiring documents to be submitted in real time through their own applications, with “liveness” checks to help identify AI-generated content. Ultimately, AI may be required to identify AI-generated content, though for now the ability of AI to identify AI-generated content is largely limited to recognizing content it created itself.


An example of a nefarious use of AI is the creation of synthetic identities that exist in the digital world but do not correspond to real people. These synthetic identities are then used to defraud financial institutions, insurance companies, and individuals.[3]  Unfortunately, there are easily found websites that help fraudsters create such identities; alternatively, a fraudster can purchase a synthetic ID that has already been cultivated, some of which come with a credit score of 700 or better. Even credit bureaus have a difficult time identifying well-planned synthetic IDs.


AI can also be used to create imposters of real people, both to open accounts in the victim's name and to take over existing accounts. In these schemes, financial institutions may believe they are working with the victim when accepting insurance applications, claims, or withdrawals. Unfortunately, it often isn't until after the scheme has been completed that the victim and the insurance company become aware of the fraud. Companies cannot assume that the person initiating a transaction is who they claim to be, and ID verification tools that only check whether the named person has a data profile and is “real” may not catch imposter or synthetic schemes.


The above schemes highlight how important it is that financial institutions and insurance companies know exactly who they are dealing with when accepting transactions, and this requires good controls, due diligence, and constant monitoring. AI can be a part of the efforts to thwart such attempts. Because of the significant impact the criminal use of AI is having on the insurance industry, industry groups and regulators, including the Coalition Against Insurance Fraud, the National Association of Insurance Commissioners, and state insurance departments, are taking notice.


Other schemes involve using AI to locate people on social media who may be vulnerable to romance or pig-butchering scams. For example, a fraudster may use AI to identify people who participate in support groups or appear lonely. Once a potential victim is identified, AI can generate the profile of a synthetic person the victim is likely to relate to and befriend. Once befriended, the fraudster creates situations requiring money and convinces the victim to transfer funds, which are often drawn from financial institutions and insurance products.


The gorillas in the room 

AI promises to deliver many positive benefits, but there are some cautions to keep in mind when using it.

First, AI can hallucinate. If AI is being used to generate content, it is imperative to verify the information created. Even attorneys have been caught using AI to generate court briefs, only to find that the cases cited and the supposed rulings in the briefs did not exist.

Context matters, and so do words. Content is often meaningful only when coupled with the right context. An analogy is translating documents from one language to another: it is difficult to translate the meaning of a statement or paragraph perfectly without the proper context, and the words must then be chosen from a palette, just as an artist must pick the right colors when painting a picture. AI does not always pick the best word when creating or editing text.


AI is only as good as the information it ingests. Many AI models draw on information from the web, and if that information is biased, the models will be biased as well. For example, social media forums contain a great deal of commentary from people who may or may not be experts in the topic being discussed, and these discussions may nonetheless be treated as source material by AI systems.


AI requires a lot of power, and globally, the power needs may outstrip supply. In January 2025, RAND published an article estimating that the global power needs of AI data centers in 2027 may roughly equal the entire power capacity of the state of California, and that by 2030 a single large AI training run may require power equivalent to eight nuclear reactors.[4]  This is likely part of the reason Meta recently signed a 20-year deal with Constellation Energy for 1.1 gigawatts of power.[5]  Meeting this demand will require tremendous expansion of today's energy production.


The future of AI 

The capabilities of AI will continue to grow and “learn” as adoption increases. The scope and timing will be impacted by the implementation and evolution of regulatory frameworks at the state and federal level. Regulatory oversight and changes will be necessary to prevent abuses of AI technology. While attempted abuses cannot be avoided, providing the legal structure companies need to defend themselves and their consumers will become necessary. Some regulators have also expressed concern that the use of AI by insurers must be non-discriminatory. Such regulations may grow in scope, demanding more transparency into how AI reaches its results and requiring insurers to test that those results are fair and based on accurate data.


The use of AI by nefarious actors will also continue to grow, requiring insurers to adopt AI-based solutions to defeat these actors and to detect AI-generated content used for fraudulent purposes.


Traditionally, underwriting and claims functions have been transactional in nature, focused on a single transaction or a single insured. This has allowed many systemic fraudsters to evade detection until their scams became too big to hide.

Further adoption of AI will allow insurers to look at transactions more holistically and detect patterns common to systemic fraud perpetrated using synthetic, stolen, or hybrid identities.


Questions often surface concerning how AI will impact staffing, and it certainly will. In the near term, AI will allow companies to improve efficiency and effectiveness by routing cases to the appropriate queue and reducing the time an underwriter needs to review a case; we have already seen this to some extent. Automatic approval of simple cases by AI is gaining traction and becoming more accurate, though it is still not as effective as a human underwriter. As AI technology and capabilities grow, the skill sets required for underwriting will shift, but AI will not eliminate the need for human underwriters (or claims examiners), particularly on complex cases involving multiple co-morbidities.


Summary 

Computing power continues to expand and be used for both good and illicit purposes. Many use cases for AI currently exist within the insurance vertical for reducing expenses and increasing accuracy while improving the customer experience. There are also those who are using AI to illicitly profit from insurance companies or their customers. Insurers who do not use AI systems will fall behind competitors in terms of efficiency, capability, and ultimately profitability. Likewise, the use of AI in fighting fraud will give insurers a competitive advantage by reducing fraudulent expenditures. As AI continues to be employed in underwriting, account services, and claims, the use cases will not only grow but will also be required for a company to remain viable. The question now is whether to use AI to lead, stay competitive, and protect our companies and customers, or to wait until the consequences show up as financial and reputational losses.


Reference Material

[1] Per www.searchenginejournal.com/history-of-chatgpt-timeline/488370/, ChatGPT was released November 30, 2022.


[2] Paraphrased from statements made by Frank Abagnale, the subject of the movie Catch Me If You Can. After he was convicted of multiple fraud schemes, the FBI hired Mr. Abagnale to assist in their anti-fraud efforts. He is a frequent speaker about his life and how he was able to perpetrate his fraud schemes for as long as he did.


[3] The United States Federal Reserve issued a white paper on the creation and impact of synthetic identities. Further reading: The Federal Reserve, Payment Fraud Insights, Synthetic Identity Fraud in the U.S. Payment System, July 2019, https://fedpaymentsimprovement.org/wp-content/uploads/frs-synthetic-identity-payments-fraud-white-paper-july-2019.pdf


[4] RAND, AI's Power Requirements Under Exponential Growth, January 28, 2025, www.rand.org/pubs/research_reports/RRA3572-1.html.


[5] CNBC, Pippa Stevens, Meta Signs Nuclear Power Deal with Constellation Energy, June 3, 2025.

About the author 

Kevin Glasgow has been in the insurance industry for over 35 years, working with both retail insurers and reinsurers in the United States and Canada. His roles have included leading claims teams in both countries, and as such he has worked extensively with underwriters and others to mitigate fraud risks as well as to defend companies against fraudulent activity. He has served as an expert witness in insurance matters, and he is currently working with Diligence International Group, which specializes in fraud mitigation and identification. He is also on the Advisory Board of Friendly, an AI company serving the insurance industry. He holds a bachelor's and a master's degree in Business Administration, is a past president of the International Claim Association and the Eastern Claim Conference, and his designations include the ARA, FLMI, FLHC, CFE, and CLU®.

