Most Spanish employees now use AI at work, but privacy concerns remain

- April 1, 2026

On March 31, AI-powered CV builder MyPerfectCV released its 2026 European AI at Work Report, which surveyed 1,000 workers across the UK, Germany, France, Spain, and Italy. It found that 58% use AI at work – with 36% using it weekly or more.

A 2025 study by Boston Consulting Group had previously found that Spain was the European leader in workplace AI adoption, with 78% of professionals using it regularly – ahead of the UK and Italy.

The new report reflects the growing popularity of AI tools like ChatGPT, Perplexity, and Claude, which are reshaping how many people work and live. These tools, plus a range of emerging enterprise-grade alternatives, are transforming modern company workflows and boosting productivity.

However, many workers still have serious concerns about using AI, with 65% reporting they worry their personal data is being used to train AI tools. Reliability is another sticking point: 58% said they had encountered AI misinformation or errors at work – including fabricated quotes and non-existent data.

At a high level, AI adoption has its advantages, but it also introduces serious new risks that companies and users alike need to be prepared to address. Failure to do so can result in sensitive data being left exposed to unwanted third parties. 

Why adoption is growing despite concerns 

Despite the concerns, AI tools are seeing widespread adoption in the workplace, as employees attempt to automate their day-to-day workflows.

“AI is gaining traction in the workplace because it offers clear, practical advantages for everyday tasks. The European AI at Work Report shows that employees use it to save time, improve the quality of their work, and handle repetitive or mentally demanding activities more efficiently,” Dr. Jasmine Escalera, career expert at MyPerfectCV, told Novobrief via email.


Types of tasks that workers are using AI for include translation and proofreading (36%), research and brainstorming (33%), data analysis (28%), content creation (25%), task planning (22%), reporting (17%), and visual and presentation creation (13%). 

These trends indicate that AI is supporting knowledge workers with communication and analysis rather than fully automating tasks. 

“Tools that assist with writing, research, and organization are particularly popular as they can be easily integrated into existing workflows. At the same time, AI has become far more accessible and user-friendly, eliminating the need for technical expertise,” Escalera added.  

Employees aren’t just using AI in the workplace; they’re using it at home too. The study also found that people turn to the technology for a range of tasks in their personal lives, including travel advice (28%), education and studying (27%), cooking and meal planning (26%), shopping recommendations (21%), entertainment recommendations (20%), financial planning (20%), home improvement (19%), physical health (18%), and mental health counselling (10%).

Hesitation around AI 

User privacy concerns are well founded. Not only do many vendors train their models on user inputs, but there is also a history of leaks. ChatGPT, for instance, reportedly leaked user prompts to Google last year. 

While there are configurations that can keep prompts private, users should operate under the assumption that any information they enter into the AI model will be shared with a third party. 

Concerns over data privacy and misinformation have contributed to a lack of trust among many employees. Escalera attributed this more to uncertainty than outright resistance: 

“Many employees are cautious because they are unsure about the accuracy of AI-generated outputs, and they worry that they might make mistakes that could affect their work,” she said. 

Mistakes are a prevalent issue, with research suggesting a hallucination rate of 9.6% for GPT-5. But it’s not just AI’s technical limitations that are putting employees off; it’s a lack of clear management too.

“Unclear or inconsistent workplace policies can also create hesitation, as employees may not know what is permitted or appropriate,” Escalera stressed. 

Clearer policies on which AI tools are permitted in the workplace can encourage adoption, just as training can help employees build the skills they need to use these tools effectively with minimal risk. For example, training employees to fact-check outputs could reduce the risk of erroneous outputs having an operational impact.

That being said, addressing the lack of trust in AI vendors is a much taller order. Recent scandals – such as Grok being used to create explicit nonconsensual images of women – point to a lack of governance and accountability in the industry that will inevitably dissuade many users from implementing these products in their workflows.