Thierry Breton, the European Commissioner for the Internal Market, said the sudden rise in popularity of AI applications like ChatGPT, and more recently Microsoft 365 Copilot, and the risks associated with them underscore the urgent need for rules to be established.
"As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks. This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data," said Breton.
The European institutions are currently working on what will be the first legal framework on AI.
With ChatGPT rated the fastest-growing consumer app in history, fears have been raised that systems behind such apps could be misused for plagiarism, fraud and spreading misinformation.
OpenAI has said on its website that it aims to produce artificial intelligence that "benefits all of humanity" as it attempts to build safe and beneficial AI. For regulators, this is apparently not enough.
Moreover, as Jared Spataro, Microsoft's vice president, put it, "Microsoft 365 Copilot is much more than ChatGPT implemented in the Microsoft Office suite."
The first AI regulatory framework
Under the EU draft rules, ChatGPT is considered a general-purpose AI system that can be used for multiple purposes, including high-risk ones such as the selection of candidates for jobs and credit scoring.
The regulatory framework currently defines four levels of risk in AI, which is causing disquiet amongst some companies that fear their products will be labelled as high risk.
THE FOUR LEVELS ARE:
Unacceptable risk
Any system considered a clear threat to people "will be banned," according to the Commission, including systems ranging "from social scoring by governments to toys using voice assistance that encourages dangerous behaviour".
High risk
These are AI systems within critical infrastructures such as transport, or within educational or employment contexts where the outcome of exams or job applications could be determined by AI.
Limited risk
These are systems with "specific transparency obligations," such as a chatbot identifying itself as an AI.
Minimal or no risk
The Commission says the "vast majority" of systems currently used in the EU are in this category, and they include AI-enabled video games and spam filters.
"People would need to be informed that they are dealing with a chatbot and not with a human being," Breton said.
Being in a high-risk category would lead to tougher compliance requirements and higher costs, according to executives of several companies involved in developing artificial intelligence.
A survey by the industry body appliedAI showed that 51 per cent of the respondents expect a slowdown of their AI development activities as a result of the AI Act.
Microsoft president Brad Smith in this respect wrote:
"There are days when I'm optimistic and moments when I'm pessimistic about how humanity will put AI to use."