AI needs ethics rules, urges FERMA

Corporate Risk & Insurance

Artificial Intelligence (AI) needs ethics rules as well as a clear distinction between the opportunities it offers and the threats it poses, the Federation of European Risk Management Associations (FERMA) has urged.

FERMA has called for urgent attention to these two priorities. It has also welcomed the appointment of experts to the European Commission’s High Level Group on Artificial Intelligence (AI HLG), which will support the implementation of the European strategy on AI, including the development of ethical guidelines by the end of this year.

There are currently no clear ethical rules for the use of data generated by AI tools. The AI guidelines will take into account principles on data protection and transparency, FERMA said this week.

“FERMA argues that the new possibilities offered by AI must remain compatible with the public interest and those of the economy and commercial organisation. AI is already a reality in many organisations and it is going to disrupt our comprehension of the future,” said FERMA president Jo Willaert.

“Public authorities have a key role to play to ensure that there is a human judgement as a last resort. This dialogue between regulators and AI users must start now and the newly set up AI HLG and open access European AI Alliance are the right settings,” he continued.

The association said it is ready to bring its expertise in enterprise risk management methodology and tools, such as risk identification and mapping, risk control and risk financing, to the AI discussion, “so we can manage the threats and opportunities posed by the rise of AI to our organisations and society within acceptable risk tolerances.”

Specifically, FERMA is calling on the EC group to “immediately” address the following two priorities for corporate organisations:

  • Draw a clear line between the opportunities of AI technologies and the threats those same technologies pose to the insurability of organisations through over-reliance on AI in decision-making processes.
  • Define ethical rules for the corporate use of AI, not just for employees but also for suppliers and all actors in the value chain. AI tools will allow increased and constant monitoring of a very high number of different parameters. The risk management profession believes that this greater use of data could create concerns among stakeholders and risks to reputation.
