EU legislation on AI may hurt open source developers

News | September 9, 2022

Back in May, the European Union (EU) reaffirmed its intention to become the first geopolitical bloc to regulate Artificial Intelligence (AI). Several months later, controversy has flared up over an analysis published by the Brookings Institution, a US think tank.

The think tank warns that the EU's proposed AI legislation could do enormous harm to open source developers. In this Befree blog post we detail and examine the arguments it puts forward.

Here’s what the proposed EU regulation looks like

The EU regulatory proposal is the result of a long process of analysis, prior studies and consultations carried out by EU institutions. It was born out of the need to address a disruptive technology that is already a reality, one that is changing the way we work, the way we relate to each other and the way our society functions.

The EU has announced several objectives for the regulation. Among them are "ensuring that AI systems are safe and respect existing legislation" and "facilitating the development of a single market for lawful, safe and trustworthy AI applications, and preventing market fragmentation".

The proposed legislation prohibits several practices. Among them is "the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person's consciousness, that exploits the vulnerabilities of a specific group, or that evaluates or classifies the trustworthiness of natural persons".

Brookings’ complaints

As we were saying, the controversy has erupted. The Brookings Institution has publicly warned that the EU's AI legislation could scupper the development of open source alternatives to tools such as GPT-3. Brookings says the proposal would create "added legal liabilities for general AI systems".

Under the draft law, open source developers would have to comply with a number of requirements on risk management, data governance, technical documentation and transparency, and accuracy and security standards. As a result, if a company deployed an open source AI system and it malfunctioned, the company could try to hold the developer community that built it liable.

"This could further concentrate power over AI in large technology corporations and prevent the research that is critical to the public's understanding of AI," explains Alex Engler, the author of the analysis. In short, the EU's attempt to legislate on AI could create a set of requirements that seriously endangers the open source community, and ultimately threatens to stall the improvement of general-purpose AI systems.
