The Future of Work and Disability
The Future of Work and Disability project brought together a study group of fifteen people, many with lived experience of disability, with researchers, artificial intelligence (AI) experts, data scientists, employment organizations and others engaged in the data ecosystem. The goal of the group was to understand and examine the intersecting topics of AI, automation, standards and employment, primarily as they relate to persons with disabilities.
Our Objectives
The Future of Work and Disability objectives were to:
- Explore, understand and draw insights into how artificial intelligence and other smart technologies affect persons with disabilities and limit or improve their opportunities and well-being with regard to employment.
- Produce a report that will share the insights gained through the workshop activities.
Our Process
The study group met weekly for eight weeks in late 2020 and early 2021 and also collaborated asynchronously using platforms such as Google Drive and Canvas. Three themes were explored—each with a webinar module and a research activity module.
The final activity of the group was to co-create a report on their understandings that can be used to help develop standards and regulations that support diversity within employment data systems. Accessibility Standards Canada (ASC) will use the report to inform best practices and policies for AI use in the workplace.
Our Contributors
The Future of Work and Disability study group was comprised of fourteen expert collaborators, many with lived experience of disability and/or knowledge of the AI field. The group was selected through a call for participation from the IDRC, and a selection process was used to ensure that there were diverse perspectives within the group for learning, collaborating and creating the final report. Experts include:
Chris Butler
Theodore (Ted) Cooke
Katherine Gallagher
Kevin Keane
Mala Naraine
Runa Patel
Sricamalan (Sri) Pathmanathan
Gaitrie Persaud
Ramin Raunak
Fran Quintero Rawlings
Janet Rodriguez
Cybele Sack
Christopher Sutton
Ricardo Wagner
Report
Future of Work and Disability Findings Report
In addition to our report to Accessibility Standards Canada, we created:
- Learning opportunities from our webinars
- Badges that can be used by learners to demonstrate their proficiency in the field
- A learning program that will be publicly available at the close of the project
Badges
FWD Webinar Series
The Future of Work and Disability project study group explored three themes over six modules of study and research. The final two modules were dedicated to collaboration and production of the report.
Risks and Opportunities of AI, Smart Systems and Automation
This theme introduced how AI creates both barriers and new opportunities for persons with disabilities in the hiring, training and retention of employees.
Module 1: AI Employment Systems Webinar
Panelists
Anhong Guo is an Assistant Professor in Computer Science & Engineering at the University of Michigan. He has also worked in the Ability and Intelligent User Experiences groups in Microsoft Research, the HCI group of Snap Research, the Accessibility Engineering team at Google, and the Mobile Innovation Center of SAP America.
Shari Trewin manages the IBM Accessibility Leadership Team, chairs the Association for Computing Machinery (ACM) Special Interest Group on Accessible Computing (SIGACCESS), and is a Distinguished Scientist of the ACM and a member of ACM’s Diversity and Inclusion Council.
Ben Tamblyn is the Director of Inclusive Design at Microsoft. Ben has worked in a wide range of marketing, design and technical roles, and has a passion for design, inclusion and the potential impact of technology on the world.
Chancey Fleet was a 2018–19 Data & Society Fellow and is currently an Affiliate-in-Residence whose writing, organizing and advocacy aims to catalyze critical inquiry into how cloud-connected accessibility tools benefit and harm, empower and expose disability communities. Chancey is also the Assistive Technology Coordinator at the New York Public Library.
Moderator
Dr. Vera Roberts is Senior Manager Research, Consulting and Projects at the Inclusive Design Research Centre (IDRC) at OCAD University. Vera’s primary research area is generating a culture of inclusion through outreach activities and implementation of inclusive technology and digital sharing platforms.
Earn a Learner badge
You will learn:
- How innovative technology solutions can potentially mitigate hiring biases for people with disabilities
- How screening for “normative behaviour” harms people with disabilities
Learn and earn badges from this event:
- Watch the accessible AI Employment Systems webinar
- Apply for your Learner badge (five short answer questions)
Module 2: Storytelling Workshop
Storytelling can help us articulate our thoughts, feelings and experiences; it can build confidence and create a sense of belonging and connection. During our workshop, each participant shared an experience related to the challenges of employment with regard to AI, smart systems and automation.
Facilitators
Minette Samaroo is the President of AEBC Toronto Chapter. She is active in advocating for social change for persons with disabilities. In 2019, Minette led the development and execution of the Disability Advantage Program — a training program for employers that highlights the benefits of hiring and working with persons with disabilities.
Falaah Arif Khan is an Artist-in-Residence at the Montreal AI Ethics Institute and the NYU Center for Responsible AI, as well as a Research Fellow in the Bhasha group at the International Institute of Information Technology, Hyderabad. His latest artwork includes the first volume of the Superheroes of Deep Learning comics (with Zack Lipton) and the Data, Responsibly comics.
Melissa Egan is National Lead, Episodic Disabilities at Realize. Using her training in curriculum development and adult education, she helped create online learning webinars on episodic disabilities and employment. She has worked extensively with marginalized, LGBTQ and Indigenous people to develop and deliver workshops across Canada.
Addressing Bias in ML Models on Candidate Selection
In this theme, the group learned how machine learning models are not neutral but often reinforce biases present in the real world. We focused on how the algorithms responsible for screening resumés are biased and how they adversely affect persons with disabilities in the job application process, and we developed a preliminary understanding of the policy issues at stake in this area of algorithmic bias. Our panel of experts navigated us through these urgent subjects.
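To make the screening-bias mechanism concrete, here is a minimal, hypothetical sketch in Python (all feature names, data and numbers are invented for illustration and are not drawn from the webinars): a model trained on historical hiring decisions never sees disability status, yet a correlated proxy feature carries the disparity into its predictions.

```python
# Hypothetical illustration: a screening model trained on biased historical
# hiring data reproduces that bias. All data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Invented features: years of experience and an "employment gap" flag that is
# more common for applicants with disabilities (e.g. due to episodic illness).
has_disability = rng.random(n) < 0.15
years_experience = rng.normal(5, 2, n).clip(0)
employment_gap = (rng.random(n) < np.where(has_disability, 0.6, 0.2)).astype(float)

# Historical (biased) hiring decisions penalized employment gaps heavily,
# so the training labels already encode a disparity.
hired = (years_experience - 3 * employment_gap + rng.normal(0, 1, n)) > 1

# Train a screening model on the historical decisions. Disability status is
# never an input, yet the proxy feature carries the bias forward.
X = np.column_stack([years_experience, employment_gap])
model = LogisticRegression().fit(X, hired)
screened_in = model.predict(X)

# Compare selection rates between groups (a simple disparate-impact check).
rate_disabled = screened_in[has_disability].mean()
rate_nondisabled = screened_in[~has_disability].mean()
print(f"Selection rate, applicants with disabilities:    {rate_disabled:.2f}")
print(f"Selection rate, applicants without disabilities: {rate_nondisabled:.2f}")
print(f"Disparate-impact ratio: {rate_disabled / rate_nondisabled:.2f}")
```

Comparing selection rates in this way is one simple form of disparate-impact analysis; it does not prove discrimination on its own, but it makes the kind of disparity discussed above visible and auditable.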
Module 3: AI Hiring System Policies Webinar
Panelists
Alexandra Reeve Givens is the CEO of the Center for Democracy & Technology, a leading U.S. think tank that focuses on protecting democracy and individual rights in the digital age. The organization works on a wide range of tech policy issues, including consumer privacy, data and discrimination, free expression, surveillance, internet governance and competition.
Julia Stoyanovich is an Assistant Professor of Computer Science and Engineering and of Data Science at New York University. Julia’s research focuses on responsible data management and analysis, including operationalizing fairness, diversity, transparency and data protection in all stages of the data science lifecycle. She is the founding director of the Center for Responsible AI at NYU, a comprehensive laboratory that is building a future in which responsible AI will be the only kind of AI accepted by society.
Moderator
Dr. Vera Roberts, Inclusive Design Research Centre
Earn a Learner badge
You will learn:
- How legal frameworks and public policies can act against structural discrimination in candidate selection on the basis of disability
- About the challenges to policy regulations for AI hiring systems
Learn and earn badges from this event:
- Watch the accessible AI Hiring System Policies webinar
- Apply for your Learner badge (five short answer questions)
Module 4: Co-Design Activity
In this module, participants explored policy through a co-design activity. The participants worked in groups to co-create approaches to AI and ML challenges in employment systems.
Guest Speaker
Abhishek Gupta is the Founder and Principal Researcher at the Montreal AI Ethics Institute and a Machine Learning Engineer at Microsoft, where he serves on the CSE Responsible AI Board. He represents Canada in the International Visitor Leadership Program (IVLP) administered by the U.S. State Department as an expert on the future of work. His research focuses on applied technical and policy methods to address ethical, safety and inclusivity concerns in using AI in different domains. He has built the largest community-driven public consultation group on AI Ethics in the world.
Activity
In this module, we brainstormed and worked through practical scenarios and challenges related to policies and standards around employment, disability and AI. Abhishek Gupta introduced and framed the co-design activity, engaging teams and encouraging them to work through these practical situations and challenges.
Challenges
- With more people working remotely, there has been an increase in the use of remote workplace productivity monitoring. The use of traditional metrics to evaluate employee productivity can disproportionately affect people with disabilities, especially when signals such as message tone, message frequency and response speed are factored into evaluation, for example by an automated bot assessing a sales team’s performance on Slack channels. What are some ways that we can create more inclusive metrics?
- In the previous module, there was considerable discussion of the points in the hiring process where discrimination can enter the picture. When firms use AI, they gain an additional veil because the systems are not human-interpretable. What accountability measures can we request to make disparate outcomes more explicit, especially in ensuring compliance with legal standards like the Canadian Human Rights Act?
- Privacy with disability data has severe implications because of smaller sample sizes, so traditional techniques like k-anonymity don’t work well (a toy illustration follows this list). Disclosing such data to employers in order to request accommodations is essential but creates privacy risks. In the previous module, we covered the possibility of sharing such data with a third-party data trust that can help track hiring outcomes and assess whether discrimination might be taking place. From a public trust and policy perspective, what measures will permit more open data sharing with data trusts that does not compromise the privacy rights of people, especially those with disabilities whose data carries extra sensitivity?
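As a companion to the last challenge, here is a minimal sketch (with entirely invented records) of why k-anonymity struggles with small disability-related samples: even after generalizing to coarse quasi-identifiers such as age band and region, the few records that disclose a disability can end up in groups smaller than k and remain re-identifiable.

```python
# Hypothetical illustration of the k-anonymity concern raised above: with a
# small number of records that disclose a disability, even coarse
# quasi-identifiers leave those records in groups smaller than k.
# All records below are invented.
from collections import Counter

records = [
    # (age_band, region, discloses_disability)
    ("30-39", "Ontario", False), ("30-39", "Ontario", False),
    ("30-39", "Ontario", False), ("30-39", "Ontario", False),
    ("40-49", "Quebec",  False), ("40-49", "Quebec",  False),
    ("40-49", "Quebec",  False), ("40-49", "Quebec",  False),
    ("30-39", "Quebec",  True),   # only record in its quasi-identifier group
    ("50-59", "Ontario", True),   # only record in its quasi-identifier group
]

K = 3  # each quasi-identifier group should contain at least K records

# Count how many records share each (age_band, region) combination.
group_sizes = Counter((age, region) for age, region, _ in records)

for (age, region), size in sorted(group_sizes.items()):
    discloses = any(d for a, r, d in records if (a, r) == (age, region))
    status = "OK" if size >= K else "VIOLATES k-anonymity"
    flag = " (contains a disability disclosure)" if discloses else ""
    print(f"{age}, {region}: {size} record(s) -> {status}{flag}")
```

Generalizing the data further (for example, dropping region entirely) can restore k-anonymity but erodes the usefulness of the data for tracking hiring outcomes, which is precisely the tension a well-governed data trust would need to manage.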
Making AI Inclusive for Hiring and HR
In this theme, our panel discussion highlighted some of the potential problems that AI raises in the hiring process and brainstormed ideas to make this process more inclusive for persons with disabilities.
Module 5: Inclusive AI for HR Webinar
Panelists
Shea Tanis is the Director for Policy and Advocacy at the Coleman Institute for Cognitive Disabilities at the University of Colorado. She is nationally recognized for her expertise in applied cognitive technology supports, cognitive accessibility and advancing the rights of people with cognitive disabilities to technology and information access.
Rich Donovan is CEO of the Return on Disability Group and is a globally recognized subject matter expert on the convergence of disability and corporate profitability. He has spent more than ten years focused on defining and unlocking the economic value of the disability market. In 2006 Rich founded Lime, the leading third-party recruiter in the disability space, where he worked with Google, PepsiCo, Bank of America/Merrill Lynch, IBM, TD Bank and others to help them attract and retain top talent from within the disability market.
Moderator
Dr. Vera Roberts, Inclusive Design Research Centre
Earn a Learner badge
You will learn:
- How AI in hiring systems impacts the employment of people with disabilities
- How to better develop AI-based hiring systems that are inclusive and transparent for people with disabilities
Learn and earn badges from this event:
- Watch the accessible Inclusive AI for HR webinar
- Apply for your Learner badge (five short answer questions)
Module 6: Workshop
In this module, participants had the opportunity to gather insights from previous modules and apply them to thinking critically about nugget.ai’s operations as a skills measurement technology company. As such, this module served as a real-world case study through which participants analyzed and reflected on the business decisions of nugget.ai.
Guest Speakers
Marian Pitel and Melissa Pike, members of nugget.ai’s science team, guided the presentation and activity, taking turns explaining key stages in nugget.ai’s operations and highlighting decisions and considerations that the nugget.ai team have had to make at these stages.
At each stage, Marian and Melissa worked with the whole group to identify the advantages of nugget.ai’s approach and any areas of opportunity. The participants were encouraged to reflect on nugget.ai’s business decisions, leaning on lessons that emerged from the study group’s previous modules.
Activity
We discussed ideal candidates based on David Dame’s YouTube video, which focuses on the following questions:
- How can an organization leverage a candidate’s differences as a competitive advantage for its success?
- How can organizations begin to think differently about what it means to be “ideal” for the job?