The aim is to encourage presenters to share their innovative thinking and provide fresh appraisals of how AI, and everything that goes into AI models, can support those with disabilities in their use of accessible and assistive technologies. Here are some ideas for papers, but please do not be limited by this list:
AI and Inclusion, where machine learning and algorithms can be used to enable equity for those with disabilities
The pros and cons of AI, highlighting why issues can arise for those with disabilities, even with the most meticulously designed systems.
The use of augmentative and assistive AI in applications to support those with disabilities
AI supporting all that goes into making access to online digital content easier.
Enhanced independence using virtual assistants and robots
When submitting your contribution please make sure you choose our STS under “Special Thematic Session” (Artificial Intelligence, Accessible and Assistive Technologies). Contributions to an STS are evaluated by the ICCHP Programme Committee as well as by Peter Heumader and myself. Do get in touch to discuss your involvement and pre-evaluation of your contribution.
Chairs
E.A. Draffan, ECS Accessibility Team, Faculty of Physical Sciences and Engineering University of Southampton
Peter Heumader, Institut Integriert Studieren, Johannes Kepler University Linz
Over the last few months we have been concentrating on projects related to automated web accessibility checks and the automatic linking and categorisation of openly licensed, freely available Augmentative and Alternative Communication symbol sets for those with complex communication needs.
As has been mentioned we presented these projects at a workshop in the Alan Turing Institute in November and work has been ongoing. It is hoped that the results will be shared by the end of March 2020.
Recent regulations and UK laws recognise the W3C Web Content Accessibility Guidelines (WCAG) as a method of ensuring compliance, but testing can be laborious, and the checkers that automate the process need to be able to find where more errors are occurring. This has led to the development of an accessibility checker that carries out the well-known automated checks, but also includes image recognition to make it possible to see whether the alternative text for images is appropriate. A second AI-related check involves WCAG 2.1 Success Criterion 2.4.4 Link Purpose (In Context), where “the purpose of each link can be determined from the link text alone or from the link text together with its programmatically determined link context, except where the purpose of the link would be ambiguous to users in general”.[1]
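As a simplified illustration of the automated side of such a check (the image-recognition step that judges whether alt text actually matches the image needs a vision model and is not shown here), a short sketch using only Python's standard library can flag images whose alternative text is missing or is a known placeholder. The placeholder list and the sample markup are invented for illustration:

```python
from html.parser import HTMLParser

# Placeholder alt values that automated checkers commonly flag as unhelpful
# (an illustrative list, not an official WCAG vocabulary).
SUSPECT_ALTS = {"", "image", "photo", "picture", "img", "untitled"}

class AltTextAudit(HTMLParser):
    """Collect <img> tags whose alt attribute is missing or a placeholder."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "(no src)")
        if "alt" not in attrs:
            self.issues.append((src, "missing alt attribute"))
        elif (attrs["alt"] or "").strip().lower() in SUSPECT_ALTS:
            self.issues.append((src, "placeholder alt text"))

def audit(html: str):
    """Return a list of (src, problem) pairs for the given HTML fragment."""
    parser = AltTextAudit()
    parser.feed(html)
    return parser.issues

sample = ('<img src="a.png">'
          '<img src="b.png" alt="photo">'
          '<img src="c.png" alt="A dog catching a ball">')
print(audit(sample))  # flags a.png and b.png, passes c.png
```

A real checker layers many more WCAG tests on top of this, but the pattern is the same: parse the markup, apply a rule, and report the elements that fail it.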
A Natural Language Processing (NLP) model is used to check whether the text in the aria-label attribute of the hyperlink matches the content of the target URL. Based on the matching result, it is possible to determine whether the target web page or website fits the link purpose criterion. Despite previous research in this area, the task is proving challenging, and two different experiments are being worked on. One has been designed to use existing NLP models (e.g. GloVe), while the other is investigating the training of data with human input. The results will be published in an academic paper and at a conference.
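The intuition behind the matching step can be sketched very simply. The real experiments use trained word embeddings such as GloVe; in the toy version below, a bag-of-words cosine similarity stands in for the embedding model, and the link text and page content are invented examples:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def link_purpose_score(link_text: str, target_page_text: str) -> float:
    """Score how well the link's accessible name matches the target page."""
    return cosine(Counter(link_text.lower().split()),
                  Counter(target_page_text.lower().split()))

# Vague link text like "read more" shares no vocabulary with the target page,
# so it scores 0.0; descriptive link text scores much higher.
print(link_purpose_score("read more", "aac symbol sets for communication"))
print(link_purpose_score("aac symbol sets", "aac symbol sets for communication"))
```

With embeddings instead of raw word counts, "read more" and near-synonyms of the page topic would no longer score zero, which is exactly why the project is experimenting with models such as GloVe rather than simple word overlap.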
AAC symbol classification to aid searches.
The team have also investigated issues for those supporting Augmentative and Alternative Communication (AAC) users who may have severe communication difficulties and make use of symbols and pictures on speech-generating devices. A multilingual symbol repository for families, carers and professionals has been created to link different freely available symbol sets. The symbol sets can be used to create communication charts for the AAC user, but this takes time, and finding culturally appropriate symbols is not always easy. A system has been developed that automatically links and categorises symbols across symbol sets according to their part of speech, topic and language, using a combination of linked data, natural language processing and image recognition. The latter is not always successful in isolation, as symbols lack context and concepts are not necessarily concrete (consider an image for ‘anxious’), so further work is required to enhance the system.
The Global Symbols AAC symbol repository will be making use of these features on their BoardBuilder for making symbol charts by the end of March 2020.
This project is exploring some existing Convolutional Neural Network (CNN, or ConvNet) models to help classify, categorise and integrate AAC symbols. Experiments have already been undertaken to produce a baseline by simply using image matrix similarity. Due to the nature of AAC symbols, some visually similar symbols represent different concepts, while some quite different symbols represent the same concept across symbol sets. The training data set has mapped symbol image labels, and NLP models have been used to map the labels onto the same concept across different symbol sets. This will help those supporting AAC users offer much wider symbol choices suitable for different cultures and languages. The Global Symbols API for searching openly licensed and freely available AAC symbols is already being used in the Cboard application for AAC users.
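The image matrix similarity baseline, and why it is insufficient on its own, can be shown with a toy example. The 4x4 binary grids and the concept labels below are invented for illustration (real symbols are full-sized images and a CNN would compare learned features, not raw pixels):

```python
def pixel_similarity(a, b):
    """Fraction of matching cells between two equal-sized binary image grids."""
    total = sum(len(row) for row in a)
    same = sum(x == y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return same / total

# Toy "symbols": two visually identical round shapes labelled with
# different concepts, and one differently shaped symbol.
sun   = [[0,1,1,0], [1,1,1,1], [1,1,1,1], [0,1,1,0]]
ball  = [[0,1,1,0], [1,1,1,1], [1,1,1,1], [0,1,1,0]]
house = [[0,1,1,0], [1,1,1,1], [1,0,0,1], [1,0,0,1]]

# Identical pixels, yet 'sun' and 'ball' are different concepts: pure image
# similarity would wrongly link them, which is why label-based NLP matching
# is combined with the image comparison.
print(pixel_similarity(sun, ball))
print(pixel_similarity(sun, house))
```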
Wired UK has a very good article by Alex Lee, published on November 26th, titled “An AI to stop hiring bias could be bad news for disabled people”, with the standfirst warning that “the technology that helps recruiters cut through the CV pile might be pushing disabled candidates out of the running”.
Alex Lee provides a very good example of what can happen when an interviewee has to undertake the daunting task of a video recruitment system. This may cut time for the company but when you read the article you will find that the process would be tough for most people, let alone someone with a visual impairment.
The data collected and the algorithms used for these processes are meant to be more and more accurate as time passes, but as Professor Mike Wald has reminded us all…
“To train the algorithm, you’re going to have to give it past data,” explains Mike Wald, professor of electronics and computer science at the University of Southampton and a fellow of the Turing Institute. “If you say, here are the characteristics of all our good employees, we want more people like them, you’re going to get more people like them. And if they haven’t got any disabled people in there, you’re probably not going to get disabled people. […] Disability is a very heterogeneous characteristic. Every person with a disability has a slightly different disability. And so, there is a huge issue in how to classify disabilities,” says Wald. “If you try and classify someone, until you meet that actual person and find out what they can and can’t do, then it’s not really fair to do that.”
Many more people seem to be writing about this issue and discussing where things can go wrong with AI, such as an article published today called “Artificial Intelligence and Inclusion” by Mollie Lombardi. These articles, and many more like ours, talk about the problem and make it clear that we need to sort it out.
If you have an answer do let us know!
We are still searching for a freely available recipe to ensure AI is inclusive: one that would enable us to take account of the very complex mix of disabilities and how they affect so many people in very different ways and to varying degrees, even at different times during the journey of life.
In our discussion at the Alan Turing Institute last week we mentioned how hard it is to define Inclusion as a concept, so it was interesting to read this article by Lorna Gonzalez and Kristi O’Neil, published on Friday, November 15, 2019 in the Transforming Higher Ed column. They begin by saying that:
“Attempting to define nuanced concepts brings with it a risk of reductionism, which is why the definitions that follow draw from cognitive science, universal design, and disability studies…
Disclosure or Self-Identity
Have you heard of the WYSIATI principle? Coined by Nobel Prize laureate Daniel Kahneman and pronounced “whiz-ee-yaht-tee,” this acronym stands for What You See Is All There Is. It’s theorized as a common, unconscious bias that even well-intentioned educators make: “I can’t see it, so it doesn’t exist.” It’s the idea that our minds are prone to making judgements and forming impressions based on the information available to us. In teaching and learning, the WYSIATI principle is an idea with consequences. That is, the students in our classes have all kinds of invisible circumstances that can impact their learning. Some of these circumstances include the following:
Attention or comprehension problems because of an emotional hardship or learning disability
Using a single device, like a phone or a tablet, for doing all of their digital coursework
Familial or other work obligations outside of school
A long commute to and from campus
Homelessness, or food or housing insecurities
Low vision, but not blindness; difficulty looking at a screen for extended periods of time
Difficulty hearing, but not deafness
A lack of experience or experienced mentors in higher education or in particular disciplines
Returning to school after a period of time and feeling rusty, insecure, or experiencing imposter syndrome2
While campuses are required to provide services (e.g., alternative formats for course materials, extra exam time, etc.) for students with disabilities and may have additional support for students with various other challenges, unless students self-identify or disclose their circumstances, courses and associated materials may contain barriers to student learning—even if those barriers are inadvertent. A faculty colleague shared with us that she had favored the use of a particular color to emphasize important ideas in her documents for nearly an entire semester before a student revealed to her that he could not see that color. Had she known, she would have made a simple change so that the student could read or understand the most important parts of the course documents. This is the WYSIATI principle at work: this faculty member couldn’t see that her student was color blind, so she didn’t know that she needed to do anything about it.
Whether or not students need to self-identify or disclose their circumstances is not the point. The point is that invisible circumstances exist regardless of disclosure, and, collectively, we can all do a better job of awareness: identifying and removing barriers from courses can benefit everyone, but doing so can also be critical to those who need it.
Accessibility
Colloquially, the term accessibility is often used to describe items or spaces that are available for use. One can expect an accessible road to be open as an option for safe, unobstructed travel for most vehicles. Here’s another example of an email from Dropbox, an online file storage tool, after a user reached the free storage limit.
In this email, the term accessible refers to availability. The user will not be able to access files because they will not be available on other devices. In both of these examples, however, the term accessible is limited to able individuals—those who are able to access material in its current form. In teaching and learning, as well as in universal design, accessible means that materials and spaces are not only available but also free from invisible barriers—even unintended ones—for anyone who needs to access those materials.
For example, an accessible text is one that is clearly organized, uses an unembellished font, and incorporates headings to separate sections. Online images should contain alternative text for moments when pages don’t load properly or for readers who use assistive devices. Videos should include closed captions or transcripts for people with hearing issues or attention/tracking problems, as well as for those who multitask (watch while exercising, for example). Even certain colloquial terms and cultural references that are used without context or explanation in a lecture or course material can function as barriers to learning.
The term accessible is evolving and currently connotes disability services and accommodations. Citing a keynote address by Andrew Lessman, a distance education lawyer, Thomas Tobin and Kirsten Behling explain why accessibility as accommodation is a problematic way to think about design: “‘accommodations are supposed to be for extraordinary circumstances'” and, paraphrasing him, added the following:
[I]t should be very rare for people to need to make specific requests for circumstances to be altered just for them [. . .] all of these environments, whether physical or virtual, should be designed so that the broadest segment of the general population will be able to interact successfully with materials and people.3
This idea applies to course design and instruction just as much as it applies to physical spaces. Practicing accessible course design and instruction is an opportunity (and a necessary imperative) to develop a pedagogy of inclusion.
Inclusion
The previous two definitions have tried to articulate the idea that students carry intersecting invisible circumstances with them into the classroom. Whether or not students disclose their circumstances—or whether faculty members invite students to disclose them—does not determine their existence. From this perspective, inclusion means designing and teaching for variability. Faculty can practice inclusive pedagogy by following universal design principles and offering multiple options for representation, engagement, and expression:
Options are essential to learning, because no single way of presenting information, no single way of responding to information, and no single way of engaging students will work across the diversity of students that populate our classrooms. Alternatives reduce barriers to learning for students with disabilities while enhancing learning opportunities for everyone.4
In a Nutshell . . .
Inclusive pedagogy can be an act of intention—something that is initiated before and during the course design process—rather than being an act of revision or omission.
Contribute to the Conversation!
Tweet your favorite inclusive design practices and resources, and be sure to tag @TLIatCI, @a11ygal, and @lgonzalez1
Thomas Tobin and Kirsten Behling, Reach Everyone, Teach Everyone: Universal Design for Learning in Higher Education (Morgantown: West Virginia University Press, 2018). ↩
Lorna Gonzalez is an Instructional Designer for Teaching and Learning Innovations and a Lecturer at California State University Channel Islands.
Kristi O’Neil is the Instructional Technologist-Accessibility Lead for Teaching and Learning Innovations at California State University Channel Islands.
Of the nine protected characteristics identified by the Equality Act 2010, AI would appear to have the greatest potential to help overcome barriers to inclusion for disabled people in terms of practical strategies for digital accessibility and assistive technology support. Examples of how innovative uses of AI can support those with disabilities include:
image and video description, independent navigation (vision);
captioning for words, sounds and emotions, sign language translation, adaptive hearing aids (hearing);
symbol generation communication and translation, speech synthesis (communication);
text summarization and simplification (cognition);
smart monitoring and support (care);
web accessibility checking and correction (all)
Disabled people need to be involved in the design of Assistive or Augmentative Intelligence for ‘edge cases/outliers’. As disability is not a single homogeneous characteristic, algorithms need to work for all disabilities in the multitude of different settings and situations in which people find themselves. This also applies to ethical and fairness issues related to data gathering and to algorithms affecting protected characteristics.
The list of challenges for disabled people, and for those becoming less able due to age or debilitating illnesses, seems to grow despite the innovations being developed thanks to clever algorithms, increasing amounts of data and high-powered computing. This is our first attempt at publishing our ideas…
Challenges
Understanding the role and meaning of Inclusion
Equity v equality
Disability is heterogeneous, not homogeneous
A single ‘Disability’ classification is not helpful, as every disabled person can have very different needs
Small data for individual disabilities compared to big data for all (e.g. removing individuals whose data is identifiable)
Skills and abilities rather than a deficit model
Looking at what an individual can do rather than focussing on the disabilities/difficulties
Designing for the average rather than edge cases and outliers
Every disabled person may have very different needs compared to peers without a disability
Assumptions of Stakeholders
Changing attitudes
Lack of understanding – AI and ethics, data collection, algorithms, transparency
Expectations of experts – that they will have a magic wand
Eugenics issues (e.g. Autism genetic correction)
Few disabled people involved in AI (Nothing about us without us)
Disabled people need to be involved in AI decisions
More disabled people need to understand AI
Capacity Issues
Resources – human, financial, tools
Policies and procedures
Lack of general ICT as well as AT/AAC technologies that are regularly used in many settings
Cohesive Approach
Collaboration
AT and AAC Market
Small market
Localisation issues
Lack of Competencies
Knowledge building
Black-box, non-transparent deep neural network machine learning
Difficult to understand the implications of AI DNNs for disabled people
Lack of interest
Disabled people’s inclusion is of little interest to Turing researchers, research challenges and programmes (lack of knowledge due to lack of undergraduate courses, PhD supervisors, high-impact journals, research funding etc.)
“We can only see a short distance ahead, but we can see plenty there that needs to be done.”
A. M. Turing (1950) Computing Machinery and Intelligence. Mind 59: 433–460.
We found that Microsoft Azure AI for Accessibility grants were not available in August, so we hope our previous bid will be moved into the November round. We have applied for a grant under the title of “AI for AAC Symbol Equality, Diversity and Inclusion”. The aim is to develop an online tool that automatically generates personalised pictographic symbol sets for Augmentative and Alternative Communication users and improves image recognition for symbols using three different AI services.
This will require the development of machine learning algorithms using generative adversarial networks (GANs) to produce new and adapted symbols, and we would like to garner support for the gathering of openly licensed AAC symbol data as well as make use of Microsoft’s systems.
Another grant bid has gone to the Economic and Social Research Council (ESRC) as part of a UKRI-JST call on Artificial Intelligence and Society. This bid, “Assistive AI for Augmentative and Alternative Communication in Shared Activities”, involves working with the University of Tsukuba and their FutureGym interactive environment. The children who took part in the activities tended to have complex communication needs and social behaviour issues. The aim is to introduce symbols and photographs to support the gestures and body movements generally used to express enjoyment or interactions with others, as part of the journey towards aided communication where children have limited use of speech or are unable to verbalise their feelings.
During July and August we caught up with colleagues on projects with which we have been involved during the last year, because of our work across a range of disabilities. The time culminated in a special thematic session on AI and Inclusion at the Association for the Advancement of Assistive Technology in Europe (AAATE) 2019 conference on “Global Challenges in Assistive Technology”.
Dr Chaohai Ding has been working on a Knowledge Transfer Project with MicrolinkPC (a specialist company providing Assistive Technologies and disability support). The project involved the use of Natural Language Processing and Deep Learning to develop a decision support system for assessors in the workplace assessment process. This involved training the AI model on the free text extracted from many historical assessments and predicting reasonable adjustments based on the difficulties and conditions reported by those in the workplace with a range of impairments. The aim is to offer an evidence base for stakeholders involved in the assessment process for the provision of workplace reasonable adjustments, ensuring that “workers with disabilities, or physical or mental health conditions, aren’t substantially disadvantaged when doing their jobs”.
The results have yet to be published, but as Chaohai has admitted, some text-based evidence, when related to disability, can be hard to classify, making it difficult to see where patterns arise that support clearly defined characteristics for decision support.
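The retrieval idea behind this kind of decision support can be sketched in miniature. The toy below matches a new free-text difficulty against past assessments by bag-of-words similarity and returns the adjustment recorded for the closest one; the assessment records are entirely invented, and the actual project trains deep learning models on historical free text rather than doing simple nearest-neighbour lookup:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical historical assessments: free-text difficulty -> adjustment made.
history = [
    ("difficulty reading long documents on screen", "text-to-speech software"),
    ("finds it hard to hear colleagues in an open plan office", "noise-cancelling headset"),
    ("pain when typing for long periods", "speech recognition software"),
]

def suggest(difficulty: str) -> str:
    """Return the adjustment from the most similar past assessment."""
    query = Counter(difficulty.lower().split())
    best = max(history, key=lambda rec: cosine(query, Counter(rec[0].lower().split())))
    return best[1]

print(suggest("struggles reading dense reports on screen"))
```

Even this toy shows the classification difficulty Chaohai describes: two people can describe the same condition in completely different words, so surface-level text matching breaks down and richer models, plus human review, are needed.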
Working on the W3C WCAG Cognitive and Learning Disabilities Accessibility Task Force (Coga TF) has allowed us to be in touch with John Rochford (Program Director and Faculty Member at the Eunice Kennedy Shriver Center of the University of Massachusetts Medical School). He was interviewed on AXSChat last month. (If you do not want to listen to the entire YouTube video, start 11 minutes into the conversation and you will hear about his work with AI and text simplification.) John’s aim is to provide text on the web that can be easily read by those with cognitive impairments. He has called the project ‘easytextAI’, is two years into the work, and presented at CSUN 2019 on ‘Creating Simple Web Text for People with ID to Train AI’.
John’s work links in with Horizon 2020 EasyReading EU project that we have been involved with as a member of their International Advisory Board. This project also uses AI to provide support for disabled users of the web.
“The Easy Reading Framework is available as a browser plug-in or as a web app for mobile devices. With the help of the tools integrated into the framework, web content can be adapted to the individual needs of users in real time. The software offers (partially) automated support functions through the use of HCI techniques such as pop-ups, text-to-speech (TTS), subtitling by mouseover or eye-tracking. With the help of the tracking functions, eye movements and heart rates, it can be determined, among other things, whether the user is experiencing cognitive stress. In such cases, the Easy Reading Framework proactively offers support through the tools corresponding to the user profile. ” RehaData
Whilst exploring the ideas around digital accessibility and web accessibility we must not forget the wide range of technologies that come under the heading of Information Communication Technologies (ICT) and this includes Assistive Technologies (AT).
Many organisations think of AT as being “any information and communication technology product, device, equipment and related service used to maintain, increase, or improve the functional capabilities of individuals with specific needs or disabilities.” This definition comes from an International Telecommunication Union Model ICT Accessibility Report (2014). Functional capabilities also include executive functioning, so we must not forget how planning, organisation and memory can be supported, and how stress and anxiety can be reduced to improve mental health. Now, by collecting data about all the issues that can arise, we can widen the scope of assistive technologies to enable them to further enhance inclusion. Think of Augmentative and Alternative Communication (AAC) devices (used by those who may not be able to speak clearly or are nonverbal) offering easy-to-reach symbol choices based on the location of a user and the type of tasks they are undertaking. An early example of this type of technology is the Livox app.
Work with all forms of media has resulted in huge strides in image recognition supporting text descriptions, and Mike presented at the Media and Learning Conference in Leuven on June 5–6, covering innovations around the accessibility of video for learning.
He described how access can be enhanced by using current technologies and discussed the potential for AI to improve the availability of accessible media.
Aware of the impact AI and ICT were having on us all, members of the team became involved with a document produced by the European Disability Forum called ‘Plug and Pray’. This report looked into the effect that some of the technologies being developed in the AI arena could have on individuals with disabilities.
Teams working on new technologies are not diverse enough. Industry needs to assure that their teams reflect diversity of general population;
Accessibility and principles of Universal Design should be part of the curricula when teaching design, computer sciences, user experience and other related subjects.
Organisations of persons with disabilities and organisations working on digital rights need to work closer together. “
If Blockchain is to become adopted by the masses, Accessibility is a must-have for Decentralized Applications and Blockchain Applications aiming to be game changers.
There are so many ways blockchain technologies could perhaps support those with disabilities by enabling access to services online with increased security. Known blockchain technologies could provide access to safer banking, but there are also possibilities such as secure messaging for support services and internet ID systems so that CAPTCHAs become a thing of the past. Some have been looking at voting systems: “By capturing votes as transactions through blockchain, governments and voters would have a verifiable audit trail, ensuring no votes are changed or removed and no illegitimate votes are added.” (CBInsights, June 2019) Extend the idea of secure certified documents to assessments and exams in schools, colleges and universities, and it would be possible for more students to use their own assistive technologies and computers, taking tests in a place of their choosing.
Medical records and personal details would be easier to share across countries and perhaps closer to home, even cross county or state boundaries! The travel through life with a secure personalised health, educational and employment passport or portfolio could become a reality. This would help to prevent the need to repeatedly communicate the same information to a myriad of gatekeepers servicing the wide range of facilities and resources available in most settings.
Let’s make AI Inclusive!
It is time alternative formats for certified documents were ensured and Blockchain technologies could offer the potential to avoid the need for locked inaccessible formats. There have been digital accessibility standards for at least ten years such as the W3C Web Accessibility Initiative Web Content Accessibility Guidelines (WCAG) and more recently a mandate such as EN 301 549 which covers procurement of ICT products and services in Europe. If developers do not adhere to these standards from the very beginning of the design process Blockchain will prove to be yet another barrier to ease of use and further prevent access to those who use assistive technologies.