Emerging AI risks: What you need to know

Is artificial intelligence a game-changing business tool or a potential threat? The answer is both, and risk managers play a critical role in maximizing the former and managing the latter

Risk Management News

When it comes to the impact of AI on business and society – especially in managing data, automating processes, and increasing productivity – the upside is enormous. But along with the big business benefits come big – and sometimes new – business risks. And cyber risks are among the biggest.  

“AI comes with potential benefits and risks in many areas: economic, political, mobility, healthcare, defense and the environment,” says Michael Bruch, head of emerging trends at Allianz Global Corporate & Specialty (AGCS). “Active risk management strategies will be needed to maximize the net benefits of a full introduction of advanced AI applications into society.”

In its recent report “The Rise of Artificial Intelligence: Future Outlook and Emerging Risks”, AGCS identifies five crucial areas in which emerging AI risks can be rife:

  • software accessibility
  • safety
  • accountability
  • liability
  • ethics

“By addressing each of these areas, responsible development and introduction of AI becomes less hazardous for society,” says Bruch. “Preventive measures that reduce risks from unintended consequences are essential.”

The emerging risk impacts noted in the report include:

  • Business risks, including greater vulnerability of businesses to technical failure or cyber-attacks, leading to larger-scale disruption events and losses as systems and economies become more interconnected. Lloyd’s estimates a major global cyber-attack has the potential to trigger losses of more than US$50bn.
  • Job markets will be disrupted, with increased unemployment and loss of income as some repetitive jobs may no longer exist.
  • Businesses will be increasingly challenged by liability changes, according to the report, as responsibility shifts from humans to machines and their manufacturers, and AI agents making decisions cannot legally be held liable for those decisions. Autonomous driving is one example currently in the news. Who’s responsible if something goes wrong? “Leaving the decisions to courts may be expensive and inefficient if the number of AI-generated damages starts increasing,” says Bruch. “A solution to the lack of legal liability would be to establish expert agencies or authorities to develop a liability framework under which designers, manufacturers or sellers of AI products would be subject to limited tort liability.”
  • Businesses will also need to cover themselves against the risk of regulatory non-compliance as the rapid implementation of AI technology prompts updates to consumer and data protection rules. With regard to consumer protection regulation, the best risk management solution is likely to be “control, by having human supervisors controlling and explaining AI agents’ decisions,” notes the report.

The insurance industry has been an early adopter of AI, and that’s good news. “There is a huge potential for AI to improve the insurance value chain,” says Bruch. “Initially, it will help automate insurance processes to enable better delivery to our customers. Policies can be issued and claims processed faster and more efficiently.”
