EU Takes a Big Step Toward Regulating AI, But There Is One Big Problem with the EU AI Act

Updated on March 25, 2024

The European Union reached a milestone on December 8, 2023, when Parliament and Council negotiators struck a provisional agreement on the Artificial Intelligence Act (AI Act), the world’s first comprehensive regulatory framework for AI.

Officially titled the “Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence”, and more commonly referred to as the EU AI Act, this sweeping regulation aims to impose standardized rules on the development, deployment, and use of AI systems across EU member states.

Spearheaded by the European Commission as algorithms increasingly influence decision-making in high-stakes sectors like healthcare and transport, the Act classifies AI systems into risk tiers based on their intended use. Developers and enterprises deploying high-risk AI applications must adhere to proportionate accountability rules covering data quality, documentation, transparency, and other requirements tailored to the ethical dilemmas and real-world impacts unique to algorithmic systems.

In simple terms, the legislation is designed to ensure that AI systems placed on the European market and used within the EU are safe and respect fundamental rights and values.

However, there is one big problem with the Act.

The Biggest Problem with the EU AI Act

Although the EU’s AI policy is a promising start toward regulating AI and making it safer, the Act cannot be fully enforced before 2025.

The EU will not enforce the AI Act until 2025 for several reasons:

  1. Time for adaptation: The AI Act introduces significant changes for developers, users, and regulators. This transition period allows stakeholders to adapt to the new requirements and prepare their systems for compliance.
  2. Drafting of supplementary laws: The EU AI Act provides a framework, but additional legislation is needed to address specific details. National authorities need time to draft and implement these complementary laws.
  3. Building capacity: Regulators and enforcement bodies need to build their capacity to effectively implement the EU AI Act. This includes training staff, developing procedures, and acquiring necessary resources.
  4. Industry preparedness: AI companies need time to update their systems and processes to comply with the new regulations, including conducting risk assessments, implementing transparency measures, and establishing human oversight mechanisms.
  5. Avoiding disruption: Implementing the AI Act abruptly could cause significant disruption to the industry and potentially harm consumers. The transition period allows for a smooth and gradual implementation.
  6. International coordination: The EU is seeking international cooperation in regulating AI. This requires time for discussion and agreement on common standards and approaches.

Therefore, the transition period provides a necessary buffer for stakeholders to adapt to the new landscape of AI regulation within the EU. It allows for a smooth implementation, minimizes disruption, and ensures effective enforcement of the AI Act once it comes into full force in 2025.

However, this delay also gives AI companies ample time to gather more data.

The EU has asked AI companies to follow the rules voluntarily in the interim, but there are no penalties if they do not.

Key Aspects of the New EU AI Act:

1. Banned AI: Some AI systems will be banned outright, including government-run social scoring and toys with voice assistants that encourage dangerous behavior.

2. High-risk AI: Strict regulations will be imposed on high-risk AI systems, such as those used in facial recognition, credit scoring, and recruitment. These systems will need to comply with stringent transparency, traceability, and accountability requirements.

3. Transparency and Explainability: Developers and users of AI systems will need to be more transparent about how their systems work. This includes providing information about the data used to train the system and how the system makes decisions.

4. Human oversight: Humans will need to be involved in the development and use of high-risk AI systems. This will help to ensure that the systems are used responsibly and ethically.

5. Liability: The AI Act will introduce new liability rules for AI developers and users. This means that they could be held liable for any harm caused by their AI systems.

6. Enforcement: The AI Act will be enforced by national authorities in each EU member state.

Conclusion

The EU’s AI Act represents a commendable move toward responsible AI regulation. While the enforcement delay until 2025 allows for necessary industry adaptation, it also raises concerns about unchecked data accumulation by AI companies. Voluntary interim adherence carries no penalties, which undermines compliance. Despite these hurdles, the EU’s commitment to ethical AI use remains pivotal in shaping the future of artificial intelligence governance.
