Europe is trying to take the lead in regulating the use of AI
Within two years, if all goes to plan, EU residents will be protected by law from some of the most controversial uses of AI, such as street cameras that identify and track people, or government computer systems that score individual behavior.
This week, Brussels laid out plans to become the first global bloc with rules on how artificial intelligence can be used, an attempt to put European values at the center of the rapidly developing technology.
Over the past decade, AI has become a strategic priority for countries around the world, and the two global leaders – the US and China – have taken very different approaches.
China's state-led plan has poured significant investment into the technology and rapidly expanded applications that have helped the government strengthen surveillance and control of the population. In the US, the development of AI has been left to the private sector, which has focused on commercial applications.
“The US and China have been the leaders in AI innovation and investment,” said Anu Bradford, an EU law professor at Columbia University.
“But this regulation is the EU’s attempt to get back in the game. It tries to balance the EU’s ambition to become more of a technological superpower and compete with China and the US, without compromising European values or fundamental rights.”
EU officials hope the rest of the world will follow their lead, and say they are watching proposals in Japan and Canada closely.
While the EU wants to rein in how governments deploy AI, it also wants to encourage start-ups to experiment and innovate.
Officials said they hoped the clarity of the new framework would give these start-ups confidence. “We will be the first continent to give guidelines. So now, if you want to use AI applications, go to Europe. You will know what to do and how to do it,” said Thierry Breton, the French commissioner in charge of digital policy for the bloc.
To encourage innovation, the proposals acknowledge that the burden of regulation falls disproportionately on small businesses, and therefore include support measures. These include “sandboxes” where start-ups can use data to test new programs for improving the justice system, health and the environment, without fear of facing severe fines if they make mistakes.
Alongside the regulations, the Commission published a detailed roadmap for increased investment in the sector and for pooling public data across the bloc to help train machine-learning algorithms.
The proposals are likely to be hotly debated by the European Parliament and the member states, both of which must approve the draft for it to become law. Legislation is expected around 2023, according to those following the process closely.
But critics say that, in trying to protect commercial AI, the bill does not go far enough: it stops short of banning discriminatory applications such as predictive policing, AI-driven migration control at borders, and biometric categorization by race, gender and sexuality. These are currently labeled “high risk” applications, meaning those who develop them must inform people that AI is being used and provide transparency about how the algorithms reach their decisions, but the applications can still be widely used, including by private businesses.
Other high-risk but not banned applications include the use of AI in recruitment and workforce management, as currently practiced by companies such as HireVue and Uber, AI systems that assess and monitor students, and AI used to grant or deny access to public benefits and services.
Access Now, a Brussels-based digital rights group, points out that the outright bans on facial recognition and credit scoring apply only to public authorities, leaving untouched companies such as the facial-recognition firm Clearview AI or the AI credit-scoring start-ups Lenddo and ZestFinance, whose products are available worldwide.
Others highlighted the notable absence of citizens’ rights in the legislation. “The whole proposal regulates the relationship between providers (those developing [AI technologies]) and users (those deploying them). Where do people fit in?” wrote Sarah Chander and Ella Jakubowski of a European digital rights group on Twitter. “There seem to be few mechanisms of redress for those subject to or affected by AI systems. This is a huge gap for civil society, discriminated groups, consumers and workers.”
On the other hand, lobby groups representing Big Tech’s interests also criticized the proposals, saying they would stifle innovation.
The Center for Data Innovation, a think-tank that receives funding from Apple and Amazon, said the bill dealt a “damaging blow” to the EU’s ambition of becoming a world leader in AI, and that the new rules would tie up technology companies seeking to innovate.
In particular, it objected to the ban on AI that “manipulates” people’s behavior, and to the regulatory burden placed on “high-risk” AI systems, such as mandatory human oversight and proof of safety and efficacy.
Despite these criticisms, the EU is concerned that if it does not lay down rules on AI now, it risks the global spread of technologies that run counter to European values.
“The Chinese have been very active in applications that concern Europeans. These are being actively exported, mainly for law-enforcement purposes, and there is strong demand among illiberal governments,” Bradford said.
Petra Molnar, an associate director at York University in Canada, agreed, saying the bill is more detailed and more grounded in human rights than earlier proposals in the US and Canada.
“There’s a lot of hand-wringing around ethics and AI in the US and Canada, but [the proposals] are more superficial.”
Ultimately, the EU is betting that regulation will boost people’s confidence in AI, aiding its development and commercialization.
“If we can better regulate AI so that consumers trust it, this will also create a market opportunity, because … it will be a source of competitive advantage for European systems [as] they are considered reliable and of high quality,” said Bradford of Columbia University. “You don’t just compete on price.”