Research shows that neurodiverse professionals may uniquely benefit from artificial intelligence tools and agents. As AI agent creation booms in 2025, people with conditions like ADHD, autism, and dyslexia are reporting a more level playing field in the workplace thanks to generative AI.
A recent study from the UK Department of Industry and Trade found that neurodiverse workers were 25% more satisfied with an AI assistant and more likely to recommend the tool than neurotypical respondents.
“Standing up and walking around during a meeting means you’re not taking notes, but now AI can step in and synthesize the entire meeting into a transcript and pick out the top themes,” said Tara DeZao, senior director of product marketing at Pega, an enterprise low-code platform provider. DeZao, who was diagnosed with ADHD as an adult, has complex ADHD, which includes both inattention symptoms (problems with time management and executive function) and hyperactivity symptoms (increased physical activity).
“I’ve lived my life in the business world,” DeZao said. “But these tools are very helpful.”
Workplace AI tools are diverse and can serve very specific use cases, but solutions such as note takers, scheduling assistants, and internal communication support are common. Generative AI is particularly good at supporting skills like communication, time management, and executive functioning, offering built-in benefits for neurodiverse workers who have traditionally had to find ways to fit into a work culture not built with them in mind.
Thanks to the skills that neurodiverse individuals can bring to the workplace (focus, creativity, empathy, expertise, just to name a few), several studies suggest that organizations that prioritize inclusion in this area generate nearly one-fifth higher returns.
AI ethics and neurodiverse workers
“Investing in ethical guardrails that protect and support neurodivergent workers isn’t just the right thing to do,” said Christy Boyd, an AI specialist in SAS’ data ethics practice. “It’s also a smart way to leverage an organization’s AI investments.”
Boyd referenced a SAS study that found that companies that invest the most in AI governance and guardrails are 1.6 times more likely to achieve at least a 2x ROI on their AI investments. But Boyd highlighted three risks that companies should be aware of when deploying AI tools with neurodiverse and other individuals in mind: competing needs, unconscious bias, and inappropriate disclosure.
“Different neurodiverse conditions may have conflicting needs,” Boyd said. For example, people with dyslexia may benefit from a document reader, while people with bipolar disorder or other mental health conditions may benefit from AI scheduling support to make the most of their productive periods. “By proactively recognizing these tensions, organizations can create layered responses or provide choice-based frameworks that balance competing needs while promoting equity and inclusion,” she explained.
When it comes to unconscious bias in AI, algorithms can be (and have been) unintentionally trained to associate neurodivergence with danger, disease, or negativity, as outlined in a Duke University study. And neurodivergent workers can still face discrimination in the workplace today, making it important for companies to provide ways to use these tools safely without involuntarily disclosing individual workers’ diagnoses.
“It’s like someone turned on a light.”
As companies hold themselves accountable for the impact of AI tools in the workplace, Boyd said it’s important to include diverse voices at every step, conduct regular audits, and establish secure ways for employees to anonymously report issues.
Efforts to make AI adoption more equitable, including for neurodiverse people, are just beginning. Humane Intelligence, a nonprofit organization focused on implementing AI for social welfare, announced the Bias Bounty Challenge in early October. The challenge allows participants to identify biases with the goal of building “more inclusive communication platforms, especially for users with cognitive differences, sensory sensitivities, or alternative communication styles.”
For example, emotional AI (where AI identifies human emotions) can help people who have difficulty reading emotions to better understand their meeting partners on video conferencing platforms like Zoom. Still, builders of this technology must pay close attention to bias, ensuring AI agents recognize diverse communication patterns fairly and accurately rather than embedding harmful assumptions.
DeZao said her ADHD diagnosis felt like “someone turned on a light in a very, very dark room.”
“One of the hardest parts of our hyper-connected, fast world is that we are all expected to multitask, and with my form of ADHD, multitasking is almost impossible,” she said.
DeZao said one of the most useful features of AI is that it can receive instructions and perform tasks while human employees focus on the work in front of them. “When I’m working on something and a new request comes in via Slack or Teams, my thought process is completely disrupted,” she said. “It was a godsend to be able to immediately outsource that request and have it done while I continued working on [my original job].”
