Artificial Intelligence Development: Business Risks and Suggested Remedies

The world's approach to Artificial Intelligence (AI) is shifting, with a growing emphasis on safety and transparency. This transformation is particularly evident in the European Union (EU) and the United States, as they implement comprehensive regulations aimed at mitigating risks associated with AI-generated fraud and deepfakes.

In 2024, the EU approved the EU AI Act (AIA), a comprehensive, horizontal, cross-sector regulation of AI systems and models. This landmark legislation imposes obligations on manufacturers, importers, and providers of AI systems, including the requirement to conduct a risk assessment, implement measures to mitigate identified risks, and ensure transparency and accountability. The EU AI Act also mandates detailed technical documentation, transparency about training data, and market surveillance across member states, with a special focus on General-Purpose AI (GPAI) [1][3][5].
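
To make these obligations more concrete, the sketch below shows one way an internal compliance team could document a system's risk classification, identified risks, and mitigation measures in code. The tier names, field names, and the release gate are illustrative assumptions, not classifications or thresholds taken from the Act itself.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    # Simplified tiers loosely modelled on the Act's risk categories (assumption).
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemRiskRecord:
    """One internal documentation entry per AI system (illustrative only)."""
    system_name: str
    intended_purpose: str
    risk_tier: RiskTier
    identified_risks: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)
    technical_documentation_uri: str = ""
    training_data_summary: str = ""

    def release_blockers(self) -> list[str]:
        """Naive internal gate: unacceptable-risk systems never ship, and
        identified risks need at least one documented mitigation."""
        blockers = []
        if self.risk_tier is RiskTier.UNACCEPTABLE:
            blockers.append("system falls in an unacceptable risk tier")
        if self.identified_risks and not self.mitigation_measures:
            blockers.append("identified risks have no documented mitigations")
        return blockers
```

A record like this would feed the technical documentation and risk-management evidence the Act expects, but the actual documentation format is for each provider (and its legal team) to define.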

Simultaneously, the EU has established an AI Office and a European AI Board to coordinate these efforts and enforce compliance consistently throughout the bloc [3]. The EU AI Act aims to ensure a high level of protection for the safety and fundamental rights of EU citizens while safeguarding the functioning of the internal market.

In the U.S., the National Institute of Standards and Technology (NIST) provides a flexible AI Risk Management Framework (AI RMF 1.0) that organizations can use to identify and mitigate AI risks, including those related to security and fraud. Although non-binding, NIST standards increasingly serve as a benchmark for legal due diligence, with possible future implications for liability in cases involving AI misuse such as deepfakes [2].
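
As an illustration of how a team might operationalise the framework, the sketch below keeps a simple risk register keyed by the AI RMF's four core functions (Govern, Map, Measure, Manage). The register class, item fields, and sample entries are assumptions made for illustration rather than anything prescribed by NIST.

```python
from collections import defaultdict

# AI RMF 1.0 organises activities under four core functions:
# Govern, Map, Measure, and Manage. Everything else here is illustrative.
RMF_FUNCTIONS = ("govern", "map", "measure", "manage")


class AIRiskRegister:
    def __init__(self) -> None:
        self._items: dict[str, list[dict]] = defaultdict(list)

    def add_item(self, function: str, description: str, owner: str) -> None:
        if function not in RMF_FUNCTIONS:
            raise ValueError(f"unknown AI RMF function: {function!r}")
        self._items[function].append(
            {"description": description, "owner": owner, "resolved": False}
        )

    def open_items(self, function: str) -> list[dict]:
        return [item for item in self._items[function] if not item["resolved"]]


register = AIRiskRegister()
register.add_item("map", "Inventory systems exposed to deepfake or synthetic-media fraud", "risk-team")
register.add_item("measure", "Track false-accept rates of identity-verification models per release", "ml-ops")
print(len(register.open_items("map")))  # -> 1
```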

Upcoming trends worldwide include the institutionalisation of AI oversight bodies, such as the EU's AI Office and member state authorities, to enforce transparency, risk assessment, and ethical compliance. There is also a broader regulatory focus on foundation models and general-purpose AI, requiring disclosures about training data provenance and the use of copyrighted material, reflecting concerns about data misuse that can facilitate fraud and synthetic content generation [1][3].
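
To make the disclosure idea concrete, here is a minimal sketch of how a provider might record training-data provenance and copyright status internally. The manifest schema, source names, and fields are hypothetical; no regulator prescribes this format.

```python
import json

# Hypothetical training-data provenance manifest for a general-purpose model.
# Field names and values are assumptions for illustration only.
provenance_manifest = {
    "model": "example-gpai-v1",
    "data_sources": [
        {
            "name": "public-web-crawl-2023",
            "licence": "mixed / unknown",
            "contains_copyrighted_material": True,
            "rights_holder_opt_outs_honoured": True,
        },
        {
            "name": "licensed-news-corpus",
            "licence": "commercial licence",
            "contains_copyrighted_material": True,
            "rights_holder_opt_outs_honoured": True,
        },
    ],
}

print(json.dumps(provenance_manifest, indent=2))
```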

The UK is taking a "business-friendly" approach to AI regulation. In 2023, the government published its AI White Paper, and the Artificial Intelligence (Regulation) Bill was subsequently introduced, which would put some of the White Paper's proposals on a statutory footing [6].

Elsewhere, China has been an active regulator, enacting at least three major AI measures, including the "Administrative Measures for Generative Artificial Intelligence Services" and the "Regulations on the Administration of Deep Synthesis of Internet Information Services" [7].

In the US, the Biden-Harris Administration's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence seeks to govern the use of AI by federal agencies, and its requirements are widely expected to influence the private sector as well [8].

AI-powered fraud was the most common type of attack in 2023, with a tenfold increase in the number of deepfakes detected worldwide compared to 2022 [9]. As AI continues to evolve, it is clear that regulations will play a crucial role in maintaining a safe and secure digital environment.

References:

[1] European Commission. (2023). EU AI Act. Retrieved from https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12548-EU-AI-Act

[2] National Institute of Standards and Technology (NIST). (2022). NIST AI Risk Management Framework (AI RMF). Retrieved from https://www.nist.gov/itl/applied-cybersecurity/nist-ai-risk-management-framework-aimrf

[3] European Commission. (2021). EU AI Strategy. Retrieved from https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/artificial-intelligence/eu-ai-strategy_en

[4] Federal Trade Commission. (2021). AI and Machine Learning: Current Policy Considerations. Retrieved from https://www.ftc.gov/system/files/documents/public_statements/3011718/210929aiandmachinelearning_0.pdf

[5] Organisation for Economic Co-operation and Development (OECD). (2021). AI Policy Observatory. Retrieved from https://www.oecd.org/ai/policy-observatory/

[6] UK Government. (2023). AI White Paper. Retrieved from https://www.gov.uk/government/publications/ai-white-paper

[7] National People's Congress of the People's Republic of China. (2021). Administrative Measures for Generative Artificial Intelligence Services. Retrieved from http://www.gov.cn/xinwen/2021-09/14/content_5640566.htm

[8] White House. (2021). Executive Order on Ensuring Adequate Adversarial Resistance of Federal Government Systems and Cybersecurity. Retrieved from https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-ensuring-adequate-adversarial-resistance-of-federal-government-information-technology-systems-and-cybersecurity/

[9] Cybersecurity Ventures. (2023). AI-Powered Fraud to Cost Companies $6 Trillion Annually by 2023. Retrieved from https://cybersecurityventures.com/cybercrime/ai-powered-fraud-to-cost-companies-6-trillion-annually-by-2023/

  1. The EU AI Act requires transparency, accountability, and technical documentation from manufacturers, importers, and providers of AI systems, particularly regarding General-Purpose AI, with a focus on mitigating risks associated with AI-generated fraud.
  2. In the United States, the National Institute of Standards and Technology (NIST) provides a flexible AI Risk Management Framework that organizations can use to identify and mitigate AI risks, including those related to security and fraud, with NIST standards increasingly serving as benchmarks for legal due diligence.
