• 07 Aug, 2025

Google to Adopt EU AI Code of Practice Amid Push for AI Transparency and Trust

Google has officially announced its intent to adopt the European Union’s voluntary AI Code of Practice, signaling a major step in aligning its AI development processes with the bloc’s landmark AI Act. The move, revealed by Google’s General Counsel this week, underscores growing corporate support for transparent, ethical, and legally compliant artificial intelligence in Europe.

The AI Code of Practice—drafted by independent experts and backed by the European Commission—offers guidance for companies aiming to stay ahead of the continent’s increasingly robust AI regulations. Among its key provisions: disclosing high-level summaries of AI training data, respecting copyright in AI-generated content, and building safeguards for public trust.

What the Code Requires — And Why It Matters

The voluntary code is seen as a “soft law” bridge to full compliance with the EU AI Act, whose main obligations take effect in 2026. It sets non-binding expectations for:

  • Transparency: Public summaries of datasets used to train AI models

  • Copyright Respect: Acknowledging and compensating original content creators

  • Safety Standards: Commitments to reduce algorithmic bias and misinformation

  • User Disclosure: Informing users when they interact with AI-generated outputs

Google’s endorsement reflects growing recognition of Europe's regulatory leadership in AI ethics, even as U.S. policy remains fragmented.

“We support efforts that promote access to secure, first-rate AI tools while also safeguarding user rights,” said Google’s legal spokesperson. “But we remain concerned that overly strict interpretations of copyright and disclosure requirements could inhibit innovation.”

Big Tech's Diverging Paths on AI in Europe

Google isn’t alone in navigating the EU’s evolving AI regulatory terrain. Microsoft is also expected to sign the code in the coming weeks, further strengthening its legitimacy across the industry. However, Meta (Facebook’s parent company) has opted not to sign, citing concerns over how transparency requirements could affect competitive advantage and operational efficiency.

This divergence highlights an emerging fault line in Big Tech’s response to AI regulation: companies willing to prioritize public trust versus those betting on minimal compliance to maintain speed and flexibility.

EU’s AI Act: A Global Regulatory Blueprint?

The EU AI Act, passed in 2024, is widely considered the most comprehensive attempt to regulate artificial intelligence globally. It introduces a tiered system of obligations based on risk levels, including:

  • Bans on certain high-risk applications (e.g., social scoring, predictive policing)

  • Strict oversight for AI systems used in education, employment, and finance

  • Auditing and documentation requirements for high-impact models like generative AI

Although the AI Code of Practice is voluntary, signing it signals that a company is proactively aligning with the forthcoming mandates—and potentially shaping how enforcement plays out in practice.

The Innovation-Compliance Dilemma

While EU officials hail the code as a milestone for AI governance, some tech leaders worry it could slow AI development, particularly in areas like generative content, enterprise AI, and open-source models.

Copyright compliance—especially around training large language models (LLMs) on public data—remains one of the most contentious issues. Critics argue that the requirement for dataset transparency could open companies to litigation or stifle the scale needed for breakthrough innovations.

Still, the momentum appears to favor a “trust-first” approach, especially in light of recent AI missteps and public backlash over deepfakes, hallucinations, and data misuse.


Final Thoughts: Google’s EU Strategy Sets a Precedent

Google’s adoption of the EU’s AI Code of Practice marks a watershed moment in the global AI race—not just for compliance, but for trust-building in a sector rife with both promise and peril. By aligning early with European norms, Google may gain credibility with regulators and consumers while nudging competitors toward similar commitments.

As the AI regulatory landscape rapidly evolves, voluntary codes like this may prove to be more than just symbolic—they could become the new competitive standard.

Stay with Cortex Hub for updates on AI policy, global tech regulation, and what it all means for developers, startups, and enterprises alike.