Elon Musk’s artificial intelligence startup, xAI, announced it will sign the Safety & Security Chapter of the European Union’s AI Code of Practice, joining a growing list of global tech firms navigating Europe’s evolving AI regulatory landscape.
The decision signals xAI’s support for building safe and responsible AI models, particularly those posing the highest risk. However, xAI has declined to endorse the code’s two other chapters, on transparency and copyright compliance, calling them “detrimental to innovation” and an “overreach” on intellectual property.
“AI safety is non-negotiable,” said an xAI spokesperson. “But burdensome data transparency and vague copyright obligations threaten the very innovation we’re trying to protect.”
A Three-Part Code, One Signature
The EU AI Code of Practice is a voluntary framework designed to help companies prepare for compliance with the EU AI Act, whose key obligations take effect in 2026. It contains three major sections:
- Transparency – Requiring public summaries of training data and disclosure of AI use
- Copyright & IP – Encouraging respect for creative content in training and output
- Safety & Security – Setting safeguards for the most capable models that pose systemic risk, including large language models and other generative AI systems
xAI’s partial commitment—signing only the safety chapter—positions the startup in a middle lane: embracing responsible AI practices without fully aligning with the EU’s broader regulatory vision.
Industry Responses: A Divided Field
xAI’s move adds nuance to the varied responses from tech giants to the EU’s voluntary AI framework.
- Google/Alphabet has already pledged to sign all three sections, citing the need for public trust and regulatory alignment.
- Microsoft is expected to follow with a full commitment in the coming days.
- Meta (Facebook’s parent company) has declined to sign any part of the code, citing concerns over competitive disadvantage and regulatory overreach.
- xAI’s position lands somewhere in between: a vocal endorsement of AI safety, coupled with skepticism toward European-style governance over data transparency and copyright.
Why xAI’s Position Matters
As a high-profile entrant in the generative AI race, xAI is building frontier models, including its Grok series, designed to rival OpenAI’s GPT-5 and Google DeepMind’s Gemini. Its decision to limit regulatory commitments to safety suggests a philosophical divergence from other AI labs, especially over how much government oversight is acceptable in the age of foundation models.
Musk has long advocated for “pro-human” AI alignment, but he has also repeatedly warned that regulation must not suffocate innovation. His comments echo broader concerns from Silicon Valley about Europe’s aggressive stance on AI governance.
Still, by signing even part of the code, xAI is positioning itself as a responsible player in global AI development—especially as regulatory scrutiny ramps up in both Brussels and Washington.
Looking Ahead: Safety First, but on Whose Terms?
The EU’s AI Code of Practice remains a voluntary step, but its growing adoption among top firms could turn it into a de facto industry standard. For now, xAI’s selective engagement may serve as a model for companies that support ethical AI but resist comprehensive regulation.
With the AI Act set to reshape the legal framework for artificial intelligence across Europe by 2026, xAI’s move keeps the company in the room—if not yet at the head of the table.
Stay with Cortex Hub for more on AI regulation, tech policy shifts, and how leaders like Elon Musk are navigating the future of AI governance.