Sundar Pichai, CEO of Alphabet.
Source: Alphabet
Alphabet CEO Sundar Pichai committed to an “AI pact” and discussed election disinformation and the Russian war in Ukraine in meetings with top EU officials on Wednesday.
In a meeting with Thierry Breton, the EU commissioner for the single market, Pichai said Alphabet-owned Google would work with other companies on self-regulation to ensure AI products and services are developed responsibly.
“Agreed with Google CEO @SundarPichai to work together with all major European and non-European #AI players to already develop an ‘AI Pact’ on a voluntary basis before the legal deadline for the AI Regulation,” Breton said in a tweet on Wednesday afternoon.
“We expect technology in Europe to respect all our rules, on data protection, online security and artificial intelligence. In Europe it’s not pick and choose. I’m glad @SundarPichai recognizes this and is committed to complying with all EU rules.”
The development is a sign of how top technology executives are trying to reassure policymakers and get ahead of looming regulation. Earlier this month, the European Parliament gave the green light to a ground-breaking package of rules for AI, including provisions to ensure that training data for tools such as ChatGPT does not violate copyright laws.
The rules take a risk-based approach to regulating AI, placing applications of the technology deemed “high-risk,” such as facial recognition, under a ban and imposing tough transparency requirements on applications that pose limited risk.
Regulators are increasingly concerned about the risks surrounding AI, with tech industry leaders, politicians and academics sounding the alarm about recent advances in generative AI and the large language models that drive it.
These tools enable users to create new content, for example a poem in the style of William Wordsworth or a polished essay, simply by giving them prompts on what to do.
They have raised concerns, not least because of the risk of labor market disruption and their ability to produce disinformation.
ChatGPT, the most popular generative AI tool, has amassed more than 100 million users since its launch in November. Google released Bard, its alternative to ChatGPT, in March, and unveiled an advanced new language model known as PaLM 2 earlier this month.
In a separate meeting with Vera Jourova, a vice president of the European Commission, Pichai pledged to ensure that Google’s AI products are developed with security in mind.
Both Pichai and Jourova “agreed that AI could have an impact on disinformation tools and that everyone should be prepared for a new wave of AI-generated threats,” according to a readout of the meeting shared with CNBC.
“Part of the effort could go into flagging AI-generated content or making it transparent. Mr. Pichai emphasized that Google’s AI models already include security measures, and that the company continues to invest in this area to ensure a safe launch of the new products.”
Dealing with Russian propaganda
Pichai’s meeting with Jourova also focused on disinformation surrounding Russia’s war on Ukraine and elections, according to a statement.
Jourova “shared her concern about the spread of pro-Kremlin war propaganda and disinformation, including on Google products and services,” according to a readout of the meeting. The EU official also discussed access to information in Russia.
Jourova asked Pichai to take “swift action” on the issues faced by independent Russian media that cannot monetize their content on YouTube in Russia. Pichai agreed to follow up on the matter, according to the readout.
In addition, Jourova “highlighted the risks of disinformation for electoral processes in the EU and its member states.”
The next elections to the European Parliament will take place in 2024. There are also regional and national elections throughout the region this year and next.
Jourova praised Google’s “engagement” with the bloc’s code of practice on disinformation, a self-regulatory framework released in 2018 and since revised, aimed at spurring online platforms to tackle false information. However, she added that “more work is needed to improve reporting” under the framework.
Signatories to the code are required to report on how they have implemented measures to tackle disinformation.
