The Fight Over Defining ‘High-Risk’ AI

EU leaders stress that addressing the ethical issues surrounding AI will lead to a more competitive market for AI products and services, increase AI adoption, and help the region compete with China and the United States. Regulators hope high-risk labels will encourage more professional and responsible business practices.

Companies surveyed say the draft law goes too far, with costs and rules that would stifle innovation. Meanwhile, many human rights, AI ethics, and anti-discrimination groups argue the AI Act does not go far enough, leaving people vulnerable to businesses and governments with the resources to deploy advanced AI systems. (The bill does not cover uses of AI by the military.)

(Mostly) Business Objections

Although some public comments on the AI Act came from individual EU citizens, most responses came from professional groups, chiefly those representing radiologists and oncologists, trade unions for Irish and German educators, and major European businesses such as Nokia, Philips, Siemens, and the BMW Group.

American companies are also well represented, with comments from Facebook, Google, IBM, Intel, Microsoft, OpenAI, Twilio, and Workday. In fact, according to data collected by European Commission staff, the United States was the fourth-largest source of comments, after Belgium, France, and Germany.

Many companies expressed concern about the costs of the new regulations and questioned how their AI systems would be labeled. Facebook wanted the European Commission to be more explicit about whether the AI Act’s ban on subliminal techniques that manipulate people extends to targeted advertising. Equifax and MasterCard argued against a blanket high-risk designation for any AI that judges a person’s creditworthiness, saying it would increase costs and reduce the accuracy of credit assessments. Numerous studies, however, have found instances of discrimination involving algorithms, financial services, and lending.

The Japanese face recognition company NEC argued that the AI Act places excessive responsibility on the provider of AI systems rather than on their users, and that the proposal to label all forms of remote biometric identification as high risk would carry high compliance costs.

One of the main conflicts companies have with the bill concerns general-purpose or pretrained models capable of performing many tasks, such as OpenAI’s GPT-3 or Google’s experimental multimodal model MUM. Some of these models are open source; others are proprietary creations that cloud services companies sell to customers who lack the AI talent, data, and computing resources needed to train such systems themselves. In its 13-page response to the AI Act, Google argued that it would be difficult or impossible for the creators of general-purpose AI systems to comply with the rules.

Other companies developing general-purpose AI systems, including DeepMind, IBM, and Microsoft, echoed Google’s call to change how the AI Act treats models that can perform multiple tasks. OpenAI urged the European Commission to avoid banning general-purpose systems in the future, even if some of their use cases fall into the high-risk category.

Companies also want the AI Act’s drafters to change their definitions of critical terminology. Facebook, for instance, argued that the bill uses overly broad terminology to define high-risk systems, resulting in over-regulation. Others suggested more technical changes. Google, for example, wants a new definition added to the draft distinguishing “deployers” of an AI system from “providers,” “distributors,” or “importers” of AI systems. Doing so, the company says, would place liability for modifications made to an AI system on the business or entity that makes the change, rather than on the company that created the original. Microsoft made a similar recommendation.

The Costs of High-Risk AI

Then there’s the question of how much a high-risk label will cost businesses.

A study by European Commission staff estimates that compliance costs for an AI project under the AI Act are around €10,000, and that companies can expect initial overall costs of about €30,000. As companies develop professional approaches and come to treat compliance as business as usual, the cost is expected to fall toward €20,000. The study uses a model created by the German Federal Statistical Office and acknowledges that costs can vary depending on a project’s size and complexity. Since developers acquire and customize AI models before incorporating them into their own products, the study concluded that “a complex ecosystem could lead to a complex sharing of liabilities.”
