EU countries and lawmakers have tentatively agreed on rules to govern artificial intelligence tools such as ChatGPT, Reuters reported, citing people familiar with the discussions.
EU governments made compromises in the hope of securing lawmakers' approval to use AI in biometric surveillance for national security, defense and military purposes, the report added, citing the people.
The lawmakers and governments were still in discussions on Thursday over several vital issues in the regulations governing AI, a separate Reuters report noted, citing two sources with knowledge of the matter. The discussion had carried over from the previous night into a second day.
The use of AI in biometric surveillance and access to source code had yet to be debated as talks continued.
On biometric surveillance, EU lawmakers intend to ban the use of AI; governments, however, have pushed for an exception for national security, defense and military uses.
EU countries have been trying to hammer out the details of the draft rules, which were proposed by the European Commission two years ago but have not kept pace with the swiftly evolving technology. This has made it difficult to reach an agreement.
The details of what was agreed on regarding the governance of AI models were not clear, and several areas still had to be finalized.
A late proposal by France, Germany and Italy, which suggested that developers of generative AI models should self-regulate, had added a point of friction. Such a move, however, would help France-based AI company Mistral and Germany's Aleph Alpha, the report added.
Last week, the U.S., U.K. and over 15 other countries unveiled the first detailed international agreement on how to keep artificial intelligence safe from rogue players, pushing for companies to develop AI systems that are "secure by design."
Previously, the EU has pushed for a tougher stance on governing AI, while Japan has looked at a lighter-touch approach, closer to the U.S. position, to strengthen economic growth. Southeast Asian nations have also taken a more business-friendly approach to AI. China is expected to launch an initiative to govern AI from multiple angles. In October, U.S. President Joe Biden issued an executive order to manage the risks of AI, among other things.
Generative AI services have taken the world by storm since the launch of Microsoft (NASDAQ:MSFT)-backed OpenAI’s ChatGPT last year. Companies worldwide are developing their own large language models, or LLMs.
Alibaba's (BABA) Tongyi Qianwen 2.0 and Tongyi Wanxiang, Baidu's (BIDU) Ernie Bot, OpenAI's text-to-image tool DALL·E 3, Alphabet (NASDAQ:GOOG) (GOOGL) unit Google's Bard, Meta Platforms' (NASDAQ:META) Emu Video, Emu Edit, AudioCraft, SeamlessM4T and Llama 2, Samsung's (OTCPK:SSNLF) Gauss, and Getty Images' (GETY) model, Generative AI by Getty Images, are among the many generative AI models being developed.