Visiting the GAAD events page is a good way to find out what companies and organisations worldwide have achieved over the year. Examples include Google's Machine Learning for Accessibility session (May 19, 8:15 PM), discussing Voice Access, Lookout, Live Transcribe and Sound Notifications for Android, and Microsoft's AI-powered 365 event, with others listed on the Access 2 Accessibility site.
There is a virtual AI for Accessibility Hackathon from May 24th to June 29th, 9-10am BST, run by the ABLE Club of the American University of Beirut (Lebanon). This competition aims to rally talent and foster the regional development of an innovative entrepreneurship community around artificial intelligence while also increasing social inclusiveness.
AccessiBe.com uses machine learning and computer vision technologies for image recognition and OCR as it scans web pages for accessibility issues. Our Group Design Project team used similar technologies on Web2Access to highlight alt tags that were possibly a poor representation of an image, to show where overlaps occurred when zoom was used, and to visualise how a site would appear on a mobile phone if it failed WCAG guidelines.
However, still to come is Apple's use of AI for Screen Recognition on iOS 14, which "uses on-device intelligence to recognize elements on your screen to improve VoiceOver support for app and web experiences", alongside Sound Recognition, which detects "important sounds such as alarms, and alerts you to them using notifications."
So let’s all celebrate the improvements in digital accessibility that AI can bring, whilst making sure that one day there will be no need to have an AccessiBe YouTube video about “why web accessibility matters.” It will just be something we can take for granted! Equal Access for All.
The 13th ACM Web Science Conference (WebSci 2021), to be held June 21st-25th, will host 12 interdisciplinary workshops addressing how Web Science research can illuminate key contemporary issues and global challenges.
We would really love you to submit your ideas, or even a paper, to our AI and Inclusion workshop, or just come and join us virtually during the afternoon we are allotted (timing yet to be published!).
Accepted workshop papers will be published in the companion collection of the ACM WebSci’21 proceedings.
AI and Inclusion – Overcoming accessibility gaps on the Social Web
We are planning to make this workshop an interesting afternoon of presentations and a debate about how AI can help achieve the goal of inclusion, given the digital barriers that prevent people from enjoying the social web.
Online interactivity and conversations should be accessible to all, all the more so during this period of isolation from face-to-face connections.
Apr 23, 2021 — Workshop paper submission deadline
May 17, 2021 — Camera-ready deadline for the Proceedings
Over the last year there has been an increasing number of projects using machine learning and image recognition to solve issues that cause accessibility barriers for web page users, and articles have been written about the subject. But we explored these ideas over a year ago, having already added image recognition to check the accuracy of alternative texts when carrying out an accessibility review on Web2Access.
Since that time we have been working on capturing data from online courses to develop training data via an ontology that can give those working in education a way of seeing what might cause a problem before the student even arrives on the course. The idea is that authors of the content can be alerted to difficulties such as a lack of alternative texts or a need to annotate equations.
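As a minimal sketch of what such an alert might look like, the snippet below scans a page of course HTML for images without alternative text and for MathML equations lacking annotations. The element names are standard HTML/MathML, but the reporting format is purely illustrative and not the project's actual output.

```python
# A minimal sketch: scan course HTML for two common accessibility problems,
# images with no alt text and MathML equations with no annotation.
# The reporting format is illustrative, not the project's actual output.
from bs4 import BeautifulSoup

def audit_course_page(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    for img in soup.find_all("img"):
        if not img.get("alt"):  # missing or empty alt attribute
            issues.append(f"Image without alt text: {img.get('src', '?')}")
    for math in soup.find_all("math"):
        if not math.find("annotation") and not math.find("annotation-xml"):
            issues.append("Equation without an annotation")
    return issues

page = '<img src="graph.png"><math><mi>x</mi></math>'
for issue in audit_course_page(page):
    print(issue)  # alert the author before a student arrives on the course
```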
The same can apply to online lectures provided for students working remotely. Live captioning of videos is largely provided via automatic speech recognition. Once again, a facilitator can be alerted to where errors are appearing in a live session so that manual corrections can be made at speed, improving the quality of the output to provide not just more accurate captions over time, but also transcripts suitable for annotation. NRemote will provide a system that can be customised and offer students the chance to use teaching and learning materials in multiple formats.
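To illustrate how a facilitator might be alerted, here is a hedged sketch that flags low-confidence caption segments for manual correction. The segment structure (text plus a confidence score) is hypothetical, though most speech recognition services expose comparable per-segment confidence values.

```python
# A sketch of alerting a facilitator to likely caption errors in a live
# session. The segment structure (text plus confidence) is hypothetical,
# though most ASR services expose comparable confidence scores.
CONFIDENCE_THRESHOLD = 0.80  # below this, ask for a manual correction

def flag_for_correction(segments):
    """Yield caption segments whose recognition confidence is low."""
    for seg in segments:
        if seg["confidence"] < CONFIDENCE_THRESHOLD:
            yield seg

live_captions = [
    {"start": 12.4, "text": "photosynthesis occurs in the chloroplast", "confidence": 0.95},
    {"start": 15.1, "text": "the calvin psycho fixes carbon", "confidence": 0.55},
]
for seg in flag_for_correction(live_captions):
    print(f"[{seg['start']}s] check: {seg['text']!r}")
```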
We have also been discussing text simplification that makes use of machine learning. The team behind EasyText AI have been making web pages easier to read and are now looking at incorporating text-to-symbol support, where a user can choose a symbol set to suit their preference.
There are no standards in the way graphical AAC symbol sets are designed or collated, other than the Blissymbolics ideographic set, which was "standardized as ISO-IR 169 a double-byte character set in 1993 including 2384 fixed characters whereas the BCI Unicode proposal suggests 886 characters that then can be combined" (EduTech Wiki).
Even emojis have Unicode IDs, but the pictographic symbols most frequently used by those with complex communication needs have no international encoding standard. This means that when you search across a collection of freely available, open-licenced symbol sets, you find several symbols that have no relationship with the word you entered or the concept required.
This lack of concept accuracy means that much work has to be done to enable useful automatic text-to-symbol support for web content. Initially there needs to be a process to support text simplification, or perhaps text summarisation in some cases. Then keywords need to be represented by a particular symbol (from a symbol set recognised by the reader) that can be accurately related to the concept by its ISO or Unicode ID. Examples can be found in the W3C Personalization Task Force's Requirements for Personalization Semantics, using the Blissymbolics IDs.
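A minimal sketch of that final step might look like the following, where keywords from already simplified text are looked up against a concept-to-ID table. The Blissymbolics IDs shown are placeholders, not the real BCI identifiers.

```python
# A minimal sketch of the text-to-symbol step: map keywords from simplified
# text to symbol identifiers. The Blissymbolics IDs shown are placeholders,
# not the real BCI identifiers.
BLISS_IDS = {            # hypothetical concept-to-ID table
    "house": "B-12345",
    "eat": "B-67890",
}

def symbols_for(simplified_text: str):
    """Return (word, symbol ID) pairs for keywords with a known mapping."""
    return [(w, BLISS_IDS[w]) for w in simplified_text.lower().split()
            if w in BLISS_IDS]

print(symbols_for("We eat at the house"))  # [('eat', 'B-67890'), ('house', 'B-12345')]
```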
The presentation at the beginning of this blog illustrates the work achieved to date, but it is hoped that more can be written up in the coming months. The aim is to have improved image recognition to assist with semantic relatedness. This automatic linking will then be used to map to Blissymbolics IDs, which should also enable multilingual mapping where symbol sets already have label or gloss translations.
However, there still needs to be a process that ensures whenever symbol sets are updated the mapping can continue to be accurate as some symbol sets do not come with APIs! That will be another challenge.
Thank you, Winston Churchill Memorial Trust Covid-19 Action Fund, for making it possible for us to develop our BoardBuilder for personalising and adapting symbols for easy-to-use communication and information charts. Many freely available Augmentative and Alternative Communication (AAC) symbols are developed for children rather than adults. There are also many COVID-19 symbol charts on offer around the world, but they are rarely personalised, and hospital and care home stays are usually more than a few days long. BoardBuilder will allow for different templates and a mix of any images and symbols to support those struggling to understand what they are being told or to express themselves.
We know we need to find symbols suitable for older people and particular medical items that are used in hospitals and for social care. We also need to make it easy for users to see many different types of symbols and upload images, as well as translating labels into different languages.
Symbols with complex medical terms are not readily available in most AAC symbol sets, so we have linked the OCHA Humanitarian Icons and Openmojis to the Global Symbols’ sets and hope to adapt other symbols that have open licences.
Making information and communication charts can take time, so we are determined to ensure BoardBuilder is very easy to use and offers printouts as well as output that works with free text-to-speech / AAC applications on tablets etc.
By adding semantic embedding alongside the present use of ConceptNet, the linking of symbol labels (glosses) should be more accurate, making it easier to find appropriate symbols. This will in turn speed up chart making for those supporting people who are struggling with the masks and personal protective equipment being used in hospitals and care homes. In the future it will also help with text-to-symbol translations, as there are often several symbol options for one word.
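As an illustration of the ConceptNet side of this, the sketch below scores how related two symbol glosses are using ConceptNet's public relatedness endpoint. The endpoint and response shape are as documented by ConceptNet; the threshold and function name here are ours and purely illustrative.

```python
# A sketch of scoring how well two symbol labels (glosses) match, using
# ConceptNet's public relatedness endpoint. The threshold is arbitrary.
import requests

def gloss_relatedness(word1: str, word2: str, lang: str = "en") -> float:
    url = "https://api.conceptnet.io/relatedness"
    params = {"node1": f"/c/{lang}/{word1}", "node2": f"/c/{lang}/{word2}"}
    return requests.get(url, params=params).json()["value"]

# e.g. deciding whether a 'doctor' symbol is a fair match for 'nurse'
if gloss_relatedness("doctor", "nurse") > 0.4:
    print("Labels are related enough to suggest the symbol")
```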
Much has changed for everyone since our last blog. Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, has written an article about the way AI and machine learning have been helping to fight COVID-19, and we can see how varied the use of this technology has been. However, we remain in a world that is having to come to terms with many different ways of working, and travelling to conferences has been off the agenda for the last few months.
We have continued to work on the topics covered in our papers for ICCHP, which will be delivered remotely, as will the one we submitted for WebSci 2020. ISAAC 2020 has been moved to 2021, and who knows whether we will get to Mexico, but hopefully we will at least have some results from linking concepts across several free and open augmentative and alternative communication symbol sets.
As the months pass much of our work will be seen on Global Symbols with examples of how we will be using the linked symbol sets.
A Group Design Project has supported our intention to improve some automated web accessibility checks on our Web2Access review system. The project has resulted in a way of making sure alternative text used to describe images on web pages is accurate.
Accurate and simple descriptions are important for those who use screen readers, such as individuals with visual impairments. The ‘alt text’ that is used to describe an image is usually added by the author of a web page, but in recent years this process has often been automated. The results have been varied and do not necessarily accurately describe the image.
As part of the WCAG 2.1 checks for alt tags, an additional check has been added using a pretrained network and object detection (MobileNet and COCO-SSD in TensorFlow). Initially the automated checker uses a review of the alt tags by the Pa11y checker. Then the text resulting from the image classification is compared with the actual descriptive text in the 'img alt' attribute for each image in a web page. If there is a successful match between the texts, the automated review is accepted; if none of the words correspond to the required description, a visual appraisal system presents the findings to the accessibility reviewer. This process acts as a double check and ensures issues can be flagged to the developer.
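A simplified sketch of that double check is shown below: classify the image with a pretrained MobileNet and see whether any predicted label overlaps with the page's alt text. The real checker runs after Pa11y and uses COCO-SSD object detection as well; the word-overlap heuristic and function name here are our own illustration.

```python
# A simplified sketch of the alt-text double check: classify an image with
# a pretrained MobileNet and compare the predicted labels with the alt text.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")

def alt_text_plausible(img_path: str, alt_text: str, top: int = 5) -> bool:
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = decode_predictions(model.predict(x), top=top)[0]
    alt_words = set(alt_text.lower().split())
    # accept the alt text if any predicted label shares a word with it
    return any(set(label.lower().split("_")) & alt_words
               for (_, label, _) in preds)

if not alt_text_plausible("photo.jpg", "a dog in the park"):
    print("No match: send image to the reviewer's visual appraisal step")
```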
A similar process has been used for visual overlaps of content, and it is intended that in the future the titles of hypertext links could also be checked to ensure they accurately describe where the user would be sent if the link were activated, not just that they use the already automatically checked 'click here' or 'more' text or point to a broken link.
The aim is to encourage presenters to share their innovative thinking and provide refreshing appraisals of the use of AI, and all that goes into AI models, to support those with disabilities in their use of accessible and assistive technologies. Here are some ideas for papers, but please do not be limited by this list:
AI and Inclusion, where machine learning and algorithms can be used to enable equity for those with disabilities
The pros and cons of AI, highlighting why issues can arise for those with disabilities, even with the most meticulously designed systems
The use of augmentative and assistive AI in applications to support those with disabilities
AI supporting all that goes into making access to online digital content easier
Enhanced independence using virtual assistants and robots
When submitting your contribution please make sure you choose our STS under “Special Thematic Session” (Artificial Intelligence, Accessible and Assistive Technologies). Contributions to a STS are evaluated by the Programme Committee of ICCHP and Peter Heumader and myself! Do get in touch to discuss your involvement and pre-evaluation of your contribution.
E.A. Draffan, ECS Accessibility Team, Faculty of Physical Sciences and Engineering University of Southampton
Peter Heumader, Institute Integriert Studieren, Johannes Kepler University Linz
Over the last few months we have been concentrating on projects related to automated web accessibility checks and the automatic linking and categorisation of open licenced and freely available Augmentative and Alternative Communication symbol sets for those with complex communication needs.
As has been mentioned, we presented these projects at a workshop at the Alan Turing Institute in November, and work has been ongoing. It is hoped that the results will be shared by the end of March 2020.
Regulations and UK laws recognise the W3C Web Content Accessibility Guidelines (WCAG) as a method of ensuring compliance, but testing can be laborious, and checkers that automate the process need to be able to find where more errors are occurring. This has led to the development of an accessibility checker that carries out well-known automated checks but also includes image recognition to make it possible to see whether the alternative text tags for images are appropriate. A second AI-related check involves WCAG 2.1 Success Criterion 2.4.4 Link Purpose (In Context), where "the purpose of each link can be determined from the link text alone or from the link text together with its programmatically determined link context, except where the purpose of the link would be ambiguous to users in general".
A Natural Language Processing (NLP) model is used to check whether the text in the aria-label attribute of the target hyperlink matches the content at the target URL. Based on the matching result, it is possible to determine whether the target web page or website fits the link purpose criterion. Despite previous research in this area, the task is proving challenging, with two different experiments being worked on: one has been designed to use existing NLP models (e.g. GloVe), while the other is investigating the training of data with human input. The results will be published in an academic paper and at a conference.
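A minimal sketch of such a matching check appears below, assuming averaged GloVe word vectors loaded via gensim and comparing the link text against text drawn from the target page. The threshold and function names are ours, not the project's.

```python
# A minimal sketch of the link-purpose check, assuming pretrained GloVe
# vectors via gensim. The similarity threshold is illustrative only.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # pretrained GloVe embeddings

def text_vector(text):
    """Average the word vectors of all in-vocabulary tokens."""
    tokens = [t for t in text.lower().split() if t in vectors]
    if not tokens:
        return None
    return np.mean([vectors[t] for t in tokens], axis=0)

def link_purpose_score(link_text, target_page_text):
    """Cosine similarity between link text and target page content."""
    a, b = text_vector(link_text), text_vector(target_page_text)
    if a is None or b is None:
        return 0.0
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Flag links whose text tells the user little about the destination.
if link_purpose_score("click here", "Contact the accessibility team") < 0.5:
    print("Possible WCAG 2.4.4 issue: link text may not describe its target")
```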
AAC symbol classification to aid searches
The team have also investigated issues for those supporting Augmentative and Alternative Communication (AAC) users, who may have severe communication difficulties and make use of symbols and pictures on speech-generating devices. A multilingual symbol repository for families, carers and professionals has been created to link different freely available symbol sets. The symbol sets can be used to create communication charts for the AAC user, but this takes time, and finding appropriate cultural symbols is not always easy. A system has been developed that automatically links and categorises symbols across symbol sets by their parts of speech, topic and language using a combination of linked data, natural language processing and image recognition. The latter is not always successful in isolation, as symbols lack context and concepts are not necessarily concrete (such as an image for 'anxious'), so further work is required to enhance the system. The Global Symbols AAC symbol repository will be making use of these features on its BoardBuilder for making symbol charts by the end of March.
This project is exploring existing Convolutional Neural Network (CNN, or ConvNet) models to help classify, categorise and integrate AAC symbols. Experiments have already been undertaken to produce a baseline by simply using image matrix similarity. Due to the nature of AAC symbols, some similar-looking symbols represent different concepts, while some different-looking symbols represent the same concept across different symbol sets. The training data set has mapped symbol image labels, and NLP models have been used to map the labels to the same concept across different symbol sets. This will help those supporting AAC users to offer much wider symbol choices suitable for different cultures and languages. The Global Symbols API for searching open-licence, freely available AAC symbols is already being used in the Cboard application for AAC users.
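A sketch of that image-matrix-similarity baseline is below: flatten two symbol images to grayscale vectors and take their cosine similarity. The file names are hypothetical examples; as the text notes, similar drawings can score high even when they name different concepts, which is exactly why the label-based NLP mapping is needed as well.

```python
# A sketch of the image-matrix-similarity baseline: flatten two symbol
# images to grayscale vectors and take their cosine similarity.
import numpy as np
from PIL import Image

def symbol_vector(path: str, size=(64, 64)) -> np.ndarray:
    """Load a symbol, normalise its size, and flatten to a vector."""
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32).ravel()

def similarity(path_a: str, path_b: str) -> float:
    a, b = symbol_vector(path_a), symbol_vector(path_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical file names for two 'dog' symbols from different sets
print(similarity("arasaac_dog.png", "mulberry_dog.png"))
```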
Wired UK has a very good article by Alex Lee, published on November 26th, titled "An AI to stop hiring bias could be bad news for disabled people", with the strapline "The technology that helps recruiters cut through the CV pile might be pushing disabled candidates out of the running".
Alex Lee provides a very good example of what can happen when an interviewee has to undertake the daunting task of a video recruitment system. This may cut time for the company but when you read the article you will find that the process would be tough for most people, let alone someone with a visual impairment.
The data collected and the algorithms used for these processes are meant to become more and more accurate as time passes, but as Professor Mike Wald has reminded us all…
“To train the algorithm, you’re going to have to give it past data,” explains Mike Wald, professor of electronics and computer science at the University of Southampton and a fellow of the Turing Institute. “If you say, here are the characteristics of all our good employees. We want more people like them. You’re going to get more people like them. And if they haven’t got any disabled people in there, you’re probably not going to get disabled people. […]” “Disability is a very heterogeneous characteristic. Every person with a disability has a slightly different disability. And so, there is a huge issue in how to classify disabilities,” says Wald. “If you try and classify someone, until you meet that actual person and find out what they can and can’t do, then it’s not really fair to do that.”
Many more people seem to be writing about this issue and discussing where things can go wrong with AI, such as an article published today called "Artificial Intelligence and Inclusion" by Mollie Lombardi. These articles, and many more like ours, are talking about the problem and making it clear that we need to sort it out.
If you have an answer do let us know!
We are still searching for a freely available recipe to ensure AI is inclusive, one that would enable us to take account of the very complex mix of disabilities and how they affect so many people in very different ways and to varying degrees, even at different times during the journey of life.