A.I.’s Anti-Neurodiversity Problem: How Did It Start, and How Can We Counteract It?

Written By: Menachem Rephun, Communications Manager and Advocate

[Graphic: “Waving the red flag: A.I. Hiring Tools and Neurodiversity Bias,” with the Creative Spirit #HireDifferent logo.]

The Rise of A.I. and Its Implications

Since the introduction of platforms like ChatGPT, artificial intelligence (A.I.) has become one of society’s most controversial and widely debated issues. On the plus side, A.I. is highly valuable in a number of areas, including research and data analysis, improving business efficiency, automating repetitive tasks, and helping doctors and nurses treat patients more effectively through quicker access to vital information.

At the same time, A.I. has sparked apocalyptic fears of deepfakes, chatbots, and other tools becoming too intelligent, putting jobs at risk, spreading disinformation, and being used deceptively or with malicious intent.

A Hidden Concern: A.I. Bias Against the Neurodiverse

One important area that has drawn less attention is the bias that has seeped into the programming of many A.I. models, specifically bias against people with disabilities and those who are neurodivergent. Research has shown that this bias exists, and addressing it needs to be a top priority for business leaders and supporters of the neurodiverse community, given the extremely adverse impact it can have on neurodiverse employment.

A.I. Hiring Tools and Neurodiversity Bias

As it stands, finding gainful employment is already a serious challenge for millions of people who are neurodivergent, due to a lack of inclusion in the hiring process and a lack of reasonable accommodations in the workplace. So far, A.I. appears to be compounding this problem rather than alleviating it.

According to a report in The Hill, an estimated 70% of companies and 99% of Fortune 500 companies now use A.I. tools in their recruitment, drastically raising the chances of bias influencing employment and hiring. In an essay for Red2Green.org, autism employment expert David Dean warns that the growing dependence of many companies on A.I. could backfire by leading to “an overreliance on poorly programmed software systems that sift out great autistic & neurodiverse candidates due to a lack of a recruiter’s eye joining the dots of the application.”

Even more troubling, The Hill adds that A.I. hiring tools haven’t just automated violations of the Americans with Disabilities Act (ADA); they have created new paradigms for discrimination, such as measuring applicants’ personality traits and evaluating job performance “based on how they play a video game or speak and move in a video recording.”

The Potential Harm to Neurodiverse Employment

In short, the anti-neurodiversity biases programmed into A.I. could lead employers to screen out and overlook highly qualified neurodiverse candidates whose skills, perspectives, and talents would benefit any organization they’re a part of. Part of our own mission at Creative Spirit is to improve employer awareness of the talents and strengths of neurodiverse candidates. The fact that many A.I. tools seem to be working counter to that goal deeply concerns us, and it should concern everyone who supports fair-wage employment for the neurodiverse community.

Identifying A.I. Bias and Its Root Causes

Thankfully, there are effective strategies that can help identify and eliminate anti-neurodiverse bias in A.I. tools. For starters, it’s crucial to understand how and why the bias occurs in the first place. “The reasons may vary,” technology expert Yona Welker writes in a Forbes.com essay, “including lack of access to data for target populations, unconscious and conscious bias from the developing team, organizational structure and practices. As a result, algorithms may provide inaccurate predictions and outputs for certain subsets of the population or discriminate against particular groups.”

Ethical A.I. Development: A Critical Step

Krista Lindsey, a business expert who is also neurodivergent, writes that these biases underscore the need for “ethical AI development that actively includes and values diverse cognitive profiles, ensuring that AI supports rather than sidelines neurodivergent individuals.”

Dr. Sam Brandsen, an autism researcher at the Duke Center for Autism and Brain Development, has also discussed the impact of anti-neurodiversity bias in A.I. “Artificial intelligence is involved in more and more decision-making processes,” Brandsen says. “This means that any potential biases in artificial intelligence algorithms can have important real-world consequences.” 

Research Highlighting Anti-Neurodiverse Bias in A.I.

In a video shared by the Duke Center, Brandsen describes how he and his team of researchers investigated possible anti-neurodiversity biases in A.I. by focusing on algorithms that encode human-generated text, using that text to learn correlations between different words. “What we find in our results,” Brandsen says, “is that words related to neurodivergence are often correlated with negative concepts, like dangerous, badness, and disease in many of these encoding algorithms.”
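To make this concrete, the sketch below shows one common way such word-level associations can be measured: comparing cosine similarities between a target word and lists of negative and positive attribute words in a pretrained embedding space. It uses small public GloVe vectors loaded through the gensim library; the word lists are illustrative assumptions, and this is not the Duke team’s code or methodology.

```python
# Minimal sketch of a word-association bias check using pretrained
# GloVe embeddings via gensim. Word lists below are illustrative
# assumptions, not the Duke study's actual test sets.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # small public embeddings

target_words = ["autism", "autistic", "adhd"]
negative_attrs = ["dangerous", "bad", "disease"]
positive_attrs = ["safe", "good", "health"]

def mean_similarity(word, attrs):
    # Average cosine similarity between one word and a list of attributes
    return sum(model.similarity(word, a) for a in attrs) / len(attrs)

for w in target_words:
    if w in model:  # skip words missing from the vocabulary
        neg = mean_similarity(w, negative_attrs)
        pos = mean_similarity(w, positive_attrs)
        # A positive bias score means the word sits closer to the
        # negative attribute words than to the positive ones.
        print(f"{w}: neg={neg:.3f} pos={pos:.3f} bias={neg - pos:+.3f}")
```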

In the course of their research, Brandsen said, he and his team discovered that sentences like “I have autism” are often scored more negatively by these models than sentences like “I am a bank robber,” suggesting high levels of bias against terms associated with neurodivergence.
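As a rough illustration of the kind of comparison Brandsen describes, the sketch below scores both sentences with an off-the-shelf sentiment model from the Hugging Face transformers library. This is a stand-in for demonstration only, not the models or methodology used in the Duke study.

```python
# Minimal sketch: compare model sentiment for two sentences.
# Uses transformers' default sentiment-analysis pipeline; this is
# an illustrative stand-in, not the Duke team's evaluation setup.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default English model

for sentence in ["I have autism.", "I am a bank robber."]:
    result = classifier(sentence)[0]
    print(f"{sentence!r} -> {result['label']} ({result['score']:.3f})")
```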

Auditing and Reducing Bias in A.I.

While the work of Brandsen and his team offers strong evidence that anti-neurodiverse bias in A.I. exists, he adds that there are already several techniques that can help reduce or remove that bias. One of those methods, according to Welker’s essay, is for software developers to perform an audit of the A.I. framework while developing it. “Performing an audit aims to include diverse perspectives when setting an algorithm’s purpose,” Welker writes, as well as “evaluat[ing] disability bias in a dataset and determin[ing] how to address it and establish disability equity-sensitive metrics and key performance indicators.” Although the exact audit criteria vary depending on the target group, they can include representation, accessible vocabulary, and accessibility frameworks.
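As one hedged illustration of what a small piece of such an audit could look like in practice, the sketch below computes selection rates by group on hypothetical screening data and flags disparate impact using the widely cited four-fifths (80%) rule. The field names, sample data, and threshold are assumptions for illustration, not criteria taken from Welker’s essay.

```python
# Illustrative audit helper: selection rates by group, checked
# against the four-fifths (80%) disparate-impact rule. Field names
# and data are hypothetical.
from collections import defaultdict

def audit_selection_rates(candidates, group_key="neurodivergent"):
    """candidates: dicts like {"neurodivergent": True, "selected": False}"""
    totals, selected = defaultdict(int), defaultdict(int)
    for c in candidates:
        g = c[group_key]
        totals[g] += 1
        selected[g] += c["selected"]  # True counts as 1
    rates = {g: selected[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio >= 0.8  # True if it passes the 80% rule

# Made-up example: 2/10 neurodivergent vs 5/10 neurotypical selected.
sample = [{"neurodivergent": True, "selected": i < 2} for i in range(10)] + \
         [{"neurodivergent": False, "selected": i < 5} for i in range(10)]
rates, ratio, passes = audit_selection_rates(sample)
print(rates, f"ratio={ratio:.2f}", "PASS" if passes else "REVIEW")
```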

Involving the Neurodiverse Community in A.I. Development

This means that the target population, in this case people who are neurodivergent, is involved as part of the research resource group. Welker adds that documents and communication should use language that includes accessibility terminology, and that the audit process should account for sensory diversity and the parameters of cognition, communication, learning, and memory, along with the input of neurodiverse individuals, families, parents, caregivers, counselors, and educators.

The Importance of Ongoing Audits

The auditing should also be an ongoing, continuously updated process. As Welker’s essay points out, disability A.I. is “not rigid. That means that the research, development and audit approaches should be constantly updated based on your target group’s feedback, policy updates and recommendations.” Consistently reviewing and reworking the framework for A.I. tools and software is one of the best strategies for weeding out anti-neurodiverse bias.

The Potential of A.I. for Inclusion

Used constructively, A.I. has enormous potential to improve inclusion for the neurodiverse community in education and employment, rather than preventing it. Reviewing the framework of A.I. tools throughout the development process, and creating them with inclusion in mind, should not just be an afterthought, but a standard, industry-wide practice.

The Broader Mission of Fighting Discrimination

Identifying anti-neurodiverse bias in A.I. is a crucial aspect of pushing back against discrimination and standing up for disability rights and inclusion in employment. As technology continues to evolve, this mission will become even more important.

The Future of Inclusive A.I. Development

In the words of Pratik Joglekar, a Senior Product Designer at HubSpot, “Building an inclusive world for those 1.6 billion people [who are neurodivergent] is not a need for the future but a necessity of the present.” As Joglekar adds, this is “especially true because AI is booming, and making it inclusive now would be easy as it will scale into a behemoth set of features in every aspect of our lives in the future.”

Developing neuro-inclusive A.I. tools will help businesses genuinely make a difference in neurodiverse hiring and employment, unlocking the potential of A.I. to be a force for good in society, rather than one that holds neurodiverse individuals back from reaching their potential.

Sources:

1. https://thehill.com/opinion/technology/4576649-ai-is-causing-massive-hiring-discrimination-based-on-disability/ 

2. https://red2green.org/2024/01/09/are-ai-recruiting-tools-biased-against-autistic-talent/ 

3. https://seramount.com/articles/ai-and-me-navigating-neurodiversity-and-technologys-new-frontier/  

4. https://autismcenter.duke.edu/news/duke-autism-research-explained-bias-against-neurodiversity-related-words-ai-language-models 

5. https://www.smashingmagazine.com/2024/04/ai-neurodiversity-building-inclusive-tools/

6. https://www.forbes.com/councils/forbestechcouncil/2023/05/09/algorithmic-diversity-mitigating-ai-bias-and-disability-exclusion/ 
