The European Parliament has reportedly blocked lawmakers from using AI tools built into work devices, citing cybersecurity and privacy risks from uploading sensitive communications to the cloud.
The Parliament’s IT department said it could not guarantee the security of data uploaded to the AI companies’ servers, and that the full scope of information shared with the companies was “still being evaluated,” according to an email reviewed by Politico.
As such, the email said, “We believe it is safer to disable such features.”
Uploading data to AI chatbots such as Anthropic’s Claude, Microsoft’s Copilot, and OpenAI’s ChatGPT, for example, puts that information within reach of U.S. authorities, who can require the companies operating the chatbots to hand over information about their users.
Additionally, because AI chatbots typically use the information their users provide or upload to improve their underlying models, potentially sensitive information uploaded by one person risks being exposed to other users.
Europe has some of the strongest data protection rules in the world. But the European Commission, the 27-nation bloc’s executive body, last year announced legislative proposals aimed at relaxing those rules to make it easier for tech companies to train AI models on European data, a move that angered critics, who said it amounted to a capitulation to American tech giants.
The move to restrict European lawmakers’ access to AI products on their devices comes as several EU member states are reevaluating their relationships with U.S. tech giants, which remain subject to U.S. law and the unpredictable whims and demands of the Trump administration.
In recent weeks, the U.S. Department of Homeland Security has sent hundreds of subpoenas to U.S. tech and social media giants, demanding they turn over information about people, including Americans, who have publicly criticized the Trump administration’s policies.
Google, Meta, and Reddit have complied in several cases, even though the subpoenas were neither issued by a judge nor enforced by a court.
