
November 5, 2024

Why Do AI

Artificial Intelligence Insights and News

Balancing Innovation and Ethics: Navigating the Impact of the EU AI Act


The European Union (EU) has finalized the details of the AI Act, a comprehensive set of regulations designed to guide the development and application of artificial intelligence. The legislation seeks to establish a comprehensive legal framework for regulating AI systems, safeguarding fundamental rights, democracy, the rule of law, and environmental sustainability against the risks posed by high-impact AI technologies. Those risks can arise through deliberate design choices or as a by-product of technical developments, leading to unforeseen outcomes or new applications.


Negotiations among the EU Commission, Parliament, and Council produced agreement on a risk classification for AI systems, exceptions and enforcement provisions for biometric identification systems, regulation of general-purpose models, and restrictions on the untargeted collection of facial images, social scoring, and emotion recognition.


At this point, little is publicly known about the specifics of the agreement or the context of the deal. The AI Act currently adopts the definition of AI systems proposed by the Organization for Economic Cooperation and Development (OECD): “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” The Act will not apply to AI systems used for military and defense purposes, those used solely for research and innovation, or those used for non-professional reasons.


Meetings are scheduled in which government officials will sift through the details to determine the scope of the AI law and its recitals; as a point of reference, the EU General Data Protection Regulation totaled 173 recitals. All EU legislation must be ratified by the Committee of Permanent Representatives (COREPER). In October, Germany, Italy, and France initially sought exemptions for EU AI startups, but withdrew those demands at the December meeting. French President Emmanuel Macron has since called for “evaluating” the Act’s implementation, and a possible ‘blocking minority’ of four countries could demand such an evaluation at a COREPER meeting, derailing the deal.


WhyDoAI reached out to AI industry experts Daren Klum and Scott Litman to review the EU’s landmark decision: the implications it could have for the pace of innovation, the steps AI developers must take to meet transparency requirements, and potential future legislative updates from the EU or the US. They also analyzed shortcomings that may exist in specific areas.


Daren Klum, Founder and CEO of the digital security platform company Secured2, observed that regulation within any industry can be a slippery slope, pointing out that seemingly harmless regulations can evolve into industry-strangling measures over time.


Klum said the looming threat of overregulation in the AI market, and specifically the new rules in the EU AI Act, is a major concern for the technology’s sustained progress. And just because laws exist doesn’t mean the market will comply with them, so enforcement becomes a significant hurdle: challenging and costly.


“Let me be clear – I’m not anti-regulation. Guidelines, frameworks, and rules are necessary to protect consumers’ privacy and data security. To me that’s a must. However, the key lies in fair competition preventing regulations that favor big corporations, impede an even playing field, and create a lopsided market in favor of mega corporations. We need regulation, but not regulations manipulated by corporate interests, wielding superior lobbying power and financial influence,” stated Klum.


“The challenge is to strike a balance, ensuring fair competition and protecting the rights of consumers without stifling innovation. I’m not sure that’s what we will get because right now our government is pay-to-play and so is the EU’s government,” he added.


From a data privacy perspective, the EU has been forward-thinking when it comes to technology regulation, and past rules like GDPR have had a global impact on technology providers, noted Scott Litman, Co-Founder, Managing Partner, and CRO at Lucy.ai, a fourth-generation Answer Engine.


Litman affirmed that, “there are elements that relate to biometrics and security that will receive plenty of applause and criticism, but that will have a major impact on the use of AI in the use of PII data for monitoring and security.” Litman continued, saying that, “At the same time, copyright holders with grievances against the AI providers that are deriving value from their work will see little recourse as it appears that regulators are abdicating responsibility for the use of copyright material in the LLMs.”


“Every piece of enterprise software and infrastructure is built today to be GDPR compliant, and it can be expected that all AI will similarly be built to be compliant with the EU AI Act. Even with a timeframe that may have us 12 – 24 months from enforcement, you can expect that all AI providers will move forward with the expectation that these regulations will be in place, and they will start planning and implementing now for that future, which effectively renders the regulations in place in advance of when they will be enforced,” Litman added.


For high-risk AI applications, there are explicit transparency requirements, owing to the significant potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law. These requirements include providing information about the AI system’s capabilities and limitations, the logic involved, and the level of human oversight. AI developers must also maintain detailed documentation and records of their AI systems’ functioning and compliance measures, subject to auditing to ensure compliance.


Considering the breadth of tasks AI systems can accomplish and the rapid expansion of function, it was agreed that general-purpose AI systems, and the associated models those AI systems are based on, will have to adhere to transparency requirements as initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.


Litman indicated, “There are new transparency requirements for the providers of AI to explain how their models work and where the content came from, but with regulations not to be formalized for months and implementation 12 – 24 months away and with only a disclosure requirement without teeth or action, it basically means that the copyright wars are likely over before they start.”


The AI Act introduced penalties that will be similar to fines calculated under GDPR, based on a percentage of the liable party’s global annual turnover in the previous financial year, or a fixed sum, whichever is higher:
€35 million or 7% for violations which involve the use of banned AI applications;
€15 million or 3% for violations of the Act’s obligations; and
€7.5 million or 1.5% for the supply of incorrect information.
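The “whichever is higher” rule behind these tiers can be sketched in a few lines of Python. The function name and tier labels below are illustrative assumptions for this article, not terminology from the Act itself:

```python
def eu_ai_act_fine(global_turnover: float, violation: str) -> float:
    """Illustrative maximum fine under the AI Act's penalty tiers:
    the higher of a fixed sum or a percentage of global annual turnover.
    Tier names are hypothetical labels, not the Act's own wording."""
    tiers = {
        "banned_application": (35_000_000, 0.07),     # €35M or 7%
        "obligation_violation": (15_000_000, 0.03),   # €15M or 3%
        "incorrect_information": (7_500_000, 0.015),  # €7.5M or 1.5%
    }
    fixed_sum, pct = tiers[violation]
    return max(fixed_sum, pct * global_turnover)

# A firm with €2 billion in global annual turnover deploying a banned
# application: max(€35M, 7% of €2B) = €140M.
print(eu_ai_act_fine(2_000_000_000, "banned_application"))  # 140000000.0

# A small firm with €100 million turnover supplying incorrect information:
# 1.5% is only €1.5M, so the €7.5M fixed sum applies.
print(eu_ai_act_fine(100_000_000, "incorrect_information"))  # 7500000
```

Note how the fixed sum acts as a floor for smaller companies, while the percentage dominates for large ones, mirroring GDPR’s fine structure.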


However, to Litman’s point, currently there is only the disclosure requirement, and these penalties for bad actors are many months off in the distance. Litman also noted that the effort started years ago and was at one point proclaimed future-proof, but then ChatGPT moved the goal line by about 1,000 yards, and EU policymaking went from prescient to far behind. “The challenge with some of this is that the genie got out of the bottle on some very important items, and I don’t think there’s anything that anyone can do to put it back in. Most notably from my point of view is the nature of the content that has been used to train the LLMs, which includes millions of pages under copyright.”


Addressing some of the challenges first-hand, Klum stated, “Currently, I’m designing a new privacy/security focused AI engine and it’s truly hard to know what to do with regulation because everything is so fluid globally. To me building AI starts with building it ethically regardless of the laws.” Klum went on to add that, “This means building your tools and technology with ethics in mind while also keeping your eye on the global AI regulatory market or markets you intend to serve. Being compliant with growing laws will be critically important.”


Klum said that transparency is key and documenting everything is critical. “Sooner or later if you have success someone will come after you and you will need to be prepared to defend yourself and your decisions. Especially when dealing with customers’ data and building models from data you don’t own.”
In meeting the requirements for documentation and auditing, Klum recommended “internal monitoring and auditing. AI companies must regularly assess the security, accuracy, and unbiased nature of your AI algorithms. Bias is a significant concern, and ensuring unbiased AI is imperative too. Think about how you can prove your model is unbiased.”


Litman, considering the delay of the regulations and their implementation, said that “AI companies will be thrilled if this proves to be the case as they will continue forward as if the material taken under copyright into their model was under some form of Fair Use,” adding that “The AI industry has treated this as the inevitable march of technology and that it’s all for the public good, which can be debated, but it’s certainly for the good of the AI companies who are deriving billions of dollars of valuation and future profits yet to come at least partially on the backs of the intellectual capital of others and without their permission.”

When considering the implications of copyright complexities, Litman delved deeper, noting that, “There was serious discussion to ban material under copyright in the LLM, but EU regulators could not find consensus on this provision, one that would have had a massive impact on the evolution of the LLMs.” Litman went on to say, “In all likelihood, it would have forced either a recompiling of the LLMs without such content or provisions that copyright holders could make a claim – most likely just a speed bump but a notable one in the protection of intellectual property rights.”


While a pioneering effort in regulating AI, the AI Act presents a number of potential shortcomings. Its broad characterization of what an AI system is, and its sorting of AI systems into risk categories, could oversimplify the complexity of AI technologies. The Act may also place an overemphasis on compliance and regulation, hindering innovation, as discussed by Klum and Litman. Klum in particular pointed to the challenge of striking a balance without regulations that “impede an even playing field” and skew the market in favor of mega corporations.


When looking at potential shortcomings of the AI Act, Klum pointed out that it falls short in live facial recognition, a notably sensitive area. “The act’s failure to prohibit public mass surveillance raises significant alarms. The absence of safeguards against ubiquitous cameras connected to AI engines is a major oversight. Essentially, the EU AI Act seems to legitimize live public facial recognition for law enforcement and government, reminiscent of China’s mass surveillance practices. This, in my view, inadequately shields citizens from invasive data-gathering systems like mass video surveillance.”

The AI Act has potential global impact, raising questions about how non-EU countries and companies will adapt to or be affected by its requirements, and how other countries will align their own legislation with the AI Act.
Litman addressed this, noting that, “We could yet see legislation, particularly in the US that may address some of this as there are numerous lawsuits in the courts now from copyright holders, but by the time that these suits are resolved, the most likely outcome is some form of payments to copyright holders but little impact on the industry otherwise.”


Klum considered that there may be a strategic aspect to the EU AI Act. “It appears to serve as a tool to hedge against advancements by other countries, particularly the US, which leads the AI race and boasts the world’s largest data companies.” Klum concluded, “The EU seems eager to claim the global AI technology leadership, but realistically, despite the regulation, the United States and/or China will hold the crown as the AI dominant leader. Time will certainly tell.”


With political agreement reached, the EU AI Act will shortly be officially adopted by the EU Parliament and Council and published in the EU’s Official Journal to enter into force. The majority of the Act’s provisions will apply after a two-year grace period for compliance. However, the regulation’s prohibitions will already apply after six months and the obligations for general-purpose AI models will become effective after 12 months.


The AI Act is generally regarded as a major step forward in responsible AI governance and a global precedent for regulating rapidly evolving AI technology. Its emphasis on transparency, risk-based regulation, and privacy looks to build public trust and support ethical AI development, though there are concerns about the potential stifling of innovation, especially for smaller players in the field. The Act’s success will largely depend on its adaptability to advancements and on the capacity of future legislation to address its shortcomings, ensuring it effectively balances innovation with ethical considerations. As Klum stated, “Time will certainly tell.”