Learning more about Generative AI and AAC symbols

Creating symbols for communication, and making them work to support spoken and written language, has never been easy. Ideas around guessability, or iconicity and transparency, to aid learning and remembering are just one side of the coin in terms of design. There are also questions of style, size, type of outline and colour, amongst many other design issues that need to be carefully considered, as well as the entire schema or set of rules that exists for a particular AAC symbol set. These are aspects that are rarely discussed in detail other than by those developing the images.

However, when trying to work with computer algorithms to make adaptations from one image to another, a starting point can be image-to-text recognition, which reveals how well chosen training data is going to work. It is possible to see whether the systems can deal with the lack of background and other details that normally give images context, but are often missing from AAC symbol sets. The computer has no way of knowing whether an animal is a wolf or a dog unless there are additional elements, such as a collar, or a wild natural setting such as a forest rather than a room in a house. If it is possible to provide a form of alternative text as a visual description, not dissimilar to that used by screen reader users when viewing images on web pages, the training data provided may then work in an image-to-image situation.
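As a rough illustration, here is a minimal sketch of how an off-the-shelf image-to-text (captioning) model could be asked to describe a context-free symbol, to see what the training data is up against. The model choice and the file name are ours for illustration; the project does not specify which image-to-text system it used.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# One possible general-purpose captioning model (an illustrative choice,
# not necessarily the one used in this work).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# "dog_symbol.png" is a hypothetical AAC symbol file with a plain background.
image = Image.open("dog_symbol.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)

# With no collar, forest or room for context, the caption may well not
# distinguish a dog from a wolf.
print(processor.decode(out[0], skip_special_tokens=True))
```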

There remains the need to gather enough data to allow the AI systems to try to predict what it is you want. The systems used by Stable Diffusion and DALL-E 2 have scraped the web for masses of images in various styles, but they do not seem to have picked up on AAC symbol sets! There is also the fact that each topic category within a symbol set tends to have its own style, even though the outlines and some colours may be similar, and humans can generally recognise similarities within a symbol set that are not necessarily captured by the AI model that has been developed. More tweaks will always be needed, along with more training data, as the outcomes are evaluated.

Comparison of symbol sets

The image above compares groups of symbols from the ARASAAC, Mulberry, Sclera and Blissymbolics sets.

The other problem is that most generative artificial intelligence (AI) systems built on models such as Stable Diffusion and DALL-E 2 are designed to provide unique images in a chosen style, even when you enter the same text prompt. Each outcome will therefore look different from your first or second attempt. In other words, there is very little consistency in how the details of the picture are put together, other than the overall result looking as if it has a certain style. So if you enter the text prompt “A female teacher in front of a white board with a maths equation”, the system can generate as many images as you want, but no two will be exactly the same.

A female teacher in front of a white board with a math equation

Created using DALL-E 2
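For anyone wanting to try this for themselves, a minimal sketch using the open source diffusers library reproduces the behaviour: one prompt, several different images. The checkpoint named is an illustrative public one, not the system that generated the image above (that was DALL-E 2).

```python
import torch
from diffusers import StableDiffusionPipeline

# A publicly available Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "A female teacher in front of a white board with a maths equation"

# Each image is sampled from fresh random noise, so all four will differ
# even though the prompt is identical.
images = pipe(prompt, num_images_per_prompt=4).images
for i, img in enumerate(images):
    img.save(f"teacher_{i}.png")
```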

Nevertheless, Chaohai Ding has managed to create examples of AI-generated Mulberry AAC symbols by using Stable Diffusion with the addition of DreamBooth, which needs only a minimal number of training images in a consistent style. There are still multiple options available from the same text prompt, but the ‘look and feel’ of those automatically generated images makes us want to go on working with these ideas in order to support personalised AAC symbol adaptations.

racing driver friend and astronaut

In the style of the professions category in the Mulberry Symbol set, these three images were generated from the text prompts ‘racing driver’, ‘friend’ and ‘astronaut’.
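Once a DreamBooth fine-tune exists, generating new symbols in the learned style is just ordinary prompting against the fine-tuned checkpoint. The sketch below assumes a local checkpoint path and a placeholder identifier token; both are hypothetical, as the actual training setup has not been published here.

```python
import torch
from diffusers import StableDiffusionPipeline

# "path/to/mulberry-dreambooth" is a placeholder for a checkpoint
# fine-tuned with DreamBooth on a handful of Mulberry symbols.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/mulberry-dreambooth", torch_dtype=torch.float16
).to("cuda")

# DreamBooth ties the learned style to a rare identifier token chosen at
# training time; "sks" is the conventional example from the paper.
for word in ["racing driver", "friend", "astronaut"]:
    image = pipe(f"a {word}, sks symbol style").images[0]
    image.save(word.replace(" ", "_") + ".png")
```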

We would like to thank Steve Lee for allowing us to use the Mulberry Symbol set on Global Symbols and the University of Southampton Web Science Institute Stimulus Fund for giving us the chance to collaborate on this project with Professor Mike Wald’s team.

AI for auto-translations; different languages for symbols

Over the last couple of months we have been testing the different AI automatic translation offerings to try and work out if we can translate symbol labels, with a chance to edit them online when they don’t make sense! This work has been related to an Augmentative and Alternative Communication (AAC) symbol repository – Global Symbols.

Participants on the site who are registered AAC symbol developers can use Microsoft Azure’s cognitive translation services, but this does not work for all the languages we need.

Translation English to Dutch symbol labels

Microsoft Azure supports 80 languages, but sadly not Macedonian or Montenegrin. This also means that when we use Weblate, an open source system drawing on Microsoft’s translations, for menus and navigational elements on the website, there is a problem. However, having tested the system with manual checks of the other languages we needed, Microsoft appeared to provide a broadly satisfactory outcome.
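For reference, translating a batch of symbol labels through the Azure Translator v3 REST API looks roughly like the sketch below; the key, region and label list are placeholders.

```python
import uuid
import requests

# Azure Translator v3 REST endpoint; key and region are placeholders.
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "YOUR_KEY",
    "Ocp-Apim-Subscription-Region": "YOUR_REGION",
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}

# English symbol labels to be translated into Dutch, as in the image above.
body = [{"text": "up"}, {"text": "teacher"}, {"text": "red book"}]
params = {"api-version": "3.0", "from": "en", "to": "nl"}

response = requests.post(ENDPOINT, params=params, headers=HEADERS, json=body)
for item in response.json():
    print(item["translations"][0]["text"])
```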

When using Moodle we have found that its Automated Manipulation Of Strings (AMOS) translation system can be used alongside the Google translation API, which does have Macedonian! Amazon also has Macedonian, but supports only 71 languages compared to Google’s 100 or so.

Cost and the type of translation service required obviously affect choice. In our case we have been incredibly lucky that we usually want a translation to go from English to another language, but sometimes it is important to be able to reverse the direction, as with one symbol set where we need to go from Turkish to English. Here Wikipedia offers a helpful comparison chart, but do check the individual company sites, as it suggests.

Brewing tea!

Web Science 2021 conference and workshops

The 13th ACM Web Science 2021 conference, to be held from June 21st to 25th, will be hosting 12 interdisciplinary workshops addressing how Web Science research can illuminate key contemporary issues and global challenges.

We really would love it if you would submit your ideas, and even a paper, to our AI and Inclusion workshop, or just come and join us virtually during the afternoon we are allotted (timing yet to be published!).

Accepted workshop papers will be published in the companion collection of the ACM WebSci’21 proceedings.

AI and Inclusion – Overcoming accessibility gaps on the Social Web

We are planning to make this workshop an interesting afternoon of presentations and a debate about how AI can help to achieve the goal of inclusion when thinking about the digital barriers that prevent people from enjoying the social web.

Online interactivity and conversations should be accessible to all, all the more so during this period of isolation from face-to-face connections.

Important Dates:

Apr 23, 2021 — Workshop paper submission deadline

May 17, 2021 — Camera-ready deadline for the Proceedings

For more information, please see https://websci21.webscience.org/workshops

Web Page Accessibility and AI

computer with webpage

Over the last year there has been an increasing number of projects using machine learning and image recognition to solve issues that cause accessibility barriers for web page users, and articles have been written about the subject. But we explored these ideas over a year ago, having already added image recognition to check the accuracy of alternative texts on sites when carrying out accessibility reviews on Web2Access.

Since that time we have been working on capturing data from online courses to develop training data via an ontology that can provide those working in education with a way of seeing what might cause a problem before the student even arrives on the course. The idea is that authors of the content can be alerted to difficulties such as a lack of alternative texts or a need to annotate equations.

computer with presentation

The same can apply to online lectures provided for students working remotely. Live captioning of the videos is largely provided via automatic speech recognition. Once again a facilitator can be alerted to where errors are appearing in a live session, so that manual corrections can be made at speed and the quality of the output improved, providing not just more accurate captions over time but also transcripts suitable for annotation. NRemote will provide a system that can be customised and offer students the chance to use teaching and learning materials in multiple formats.

We have also been discussing the use of text simplification that is making use of machine learning. The team behind EasyText AI have been making web pages easier to read and are now looking at the idea of incorporating text to symbol support where a user can choose a symbol set to suit their preference.

three sentences using symbols saying I read your red book today

Working on Symbols and Concept Linking

View WebSci 2020 Presentation in a new tab

The WebSci 2020 virtual conference has a special theme on Digital (In)Equality, Digital Inclusion and Digital Humanism on its first day. This gives us the chance to show the initial findings from our linking of freely available Augmentative and Alternative Communication (AAC) symbol sets to support understanding of web content.

There are no standards in the way graphical AAC symbol sets are designed or collated other than the Blissymbolics ideographic set that was “standardized as ISO-IR 169 a double-byte character set in 1993 including 2384 fixed characters whereas the BCI Unicode proposal suggests 886 characters that then can be combined” (Edutech Wiki).

Even emojis have a Unicode ID, but the pictographic symbols most frequently used by those with complex communication needs do not have an international encoding standard. This means that if you search across a collection of freely available and openly licensed symbol sets, you find several symbols that have no relationship with the word you entered or the concept required.

symbols for up
Global Symbols showing sample symbols when the word ‘up’ was entered in the search.

This lack of concept accuracy means that much work has to be done to enable useful automatic text-to-symbol support for web content. Initially there needs to be a process to support text simplification, or perhaps text summarisation in some cases. Then keywords need to be represented by a particular symbol (from a symbol set recognised by the reader) that can be accurately related to the concept via an ISO or Unicode ID. Examples can be found in the WCAG Personalization task force’s Requirements for Personalization Semantics, using the Blissymbolics IDs.
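In outline, the lookup this implies is concept-keyed rather than word-keyed: a keyword resolves to a concept ID, and each symbol set maps concept IDs to its own images. The sketch below is a toy version with made-up IDs and file paths; the real system would use Blissymbolics BCI-AV numbers.

```python
from typing import Optional

# Keyword -> concept ID (the real system would use Blissymbolics BCI-AV
# numbers; the values here are made up for illustration).
CONCEPT_ID = {"up": 10001, "teacher": 10002}

# Each symbol set maps concept IDs to its own image files (paths invented).
SYMBOL_SETS = {
    "mulberry": {10001: "mulberry/up.svg", 10002: "mulberry/teacher.svg"},
    "arasaac": {10001: "arasaac/up.png"},
}

def symbol_for(keyword: str, preferred_set: str) -> Optional[str]:
    """Return the reader's preferred symbol for a keyword, if one exists."""
    concept = CONCEPT_ID.get(keyword)
    if concept is None:
        return None  # no concept mapping yet for this word
    return SYMBOL_SETS.get(preferred_set, {}).get(concept)

print(symbol_for("up", "mulberry"))  # mulberry/up.svg
print(symbol_for("up", "arasaac"))   # arasaac/up.png
```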

The presentation at the beginning of this blog illustrates the work that has been achieved to date, but it is hoped that more can be written up in the coming months. The aim is to improve image recognition to assist with semantic relatedness. This automatic linking will then be used to map to Blissymbolics IDs. It is hoped that this will also enable multilingual mapping where symbol sets already have label or gloss translations.

laptop coding

However, there still needs to be a process that ensures the mapping remains accurate whenever symbol sets are updated, as some symbol sets do not come with APIs! That will be another challenge.

Winston Churchill Memorial Trust Covid-19 Action Fund support symbol charts

boardbuilder beta version
The freely available Boardbuilder, about to be updated as version 3, is due to be developed for personalised COVID-19 information support to aid communication, with different templates and improved symbol searches.

Thank you ‘Winston Churchill Memorial Trust Covid-19 Action Fund‘ for making it possible for us to develop our Boardbuilder for personalising and adapting symbols for easy to use communication and information charts. Many freely available Augmentative and Alternative Communication (AAC) symbols are developed for children rather than adults. There are also many COVID-19 symbol charts on offer around the world, but they are rarely personalised and hospital and care home stays are usually more than a few days long. Boardbuilder will allow for different templates and a mix of any images and symbols to support those struggling to understand what they are being told or to express themselves.

We know we need to find symbols suitable for older people and for the particular medical items used in hospitals and social care. We also need to make it easy for users to see many different types of symbols and upload images, as well as to translate labels into different languages.

Symbols with complex medical terms are not readily available in most AAC symbol sets, so we have linked the OCHA Humanitarian Icons and Openmojis to the Global Symbols’ sets and hope to adapt other symbols that have open licences.

Making information and communication charts can take time, so we are determined to ensure BoardBuilder is very easy to use, offering printouts as well as enabling the output to work with free text-to-speech / AAC applications on tablets and other devices.

By adding semantic embedding alongside the present use of ConceptNet, the linking of symbol labels (glosses) should become more accurate, making it easier to find appropriate symbols. This will in turn speed up chart making for those supporting people who are struggling with the masks and personal protective equipment being used in hospitals and care homes. In the future it will also help with text-to-symbol translations, as there are often several symbol options for one word.
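A minimal sketch of what combining the two signals might look like: ConceptNet’s public relatedness endpoint alongside a general-purpose sentence embedding model. The embedding model named here is an illustrative choice, not necessarily the one that will be used.

```python
import requests
from sentence_transformers import SentenceTransformer, util

def conceptnet_relatedness(a: str, b: str) -> float:
    # ConceptNet's public relatedness endpoint.
    url = f"http://api.conceptnet.io/relatedness?node1=/c/en/{a}&node2=/c/en/{b}"
    return requests.get(url).json()["value"]

# A general-purpose embedding model (illustrative choice).
model = SentenceTransformer("all-MiniLM-L6-v2")

def embedding_relatedness(a: str, b: str) -> float:
    vecs = model.encode([a, b])
    return float(util.cos_sim(vecs[0], vecs[1]))

# Two scores for a pair of symbol labels (glosses); a combined score
# could rank candidate symbols when several match one word.
print(conceptnet_relatedness("anxious", "worried"))
print(embedding_relatedness("anxious", "worried"))
```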

COVID-19, AI and our Conferences

conference seating

Much has changed for everyone since our last blog. Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, has written an article about the way AI and machine learning have been helping to fight COVID-19, and we can see how varied the use of this technology has been. However, we remain in a world that is having to come to terms with many different ways of working, and travelling to conferences has been off the agenda for the last few months.

We have continued to work on topics covered in our papers for ICCHP, which will be delivered remotely, as will the one we submitted for WebSci 2020. ISAAC 2020 has been moved to 2021, and who knows if we will get to Mexico, but hopefully at least we may have some results from the linking of concepts across several free and open augmentative and alternative communication symbol sets.

As the months pass much of our work will be seen on Global Symbols with examples of how we will be using the linked symbol sets.

We are also trying to support the WCAG personalization task force in their “Requirements for Personalization Semantics” to automatically link concepts to increase understanding of web content for those who use AAC or have literacy difficulties and/or cognitive impairments.

mapping symbol sets
The future for freely available mapped AAC symbol sets: samples illustrating multilingual linking of concepts from simplified web content.

Image Recognition to check Image Description accuracy on Web Pages

A Group Design Project has supported our intention to improve some automated web accessibility checks on our Web2Access review system. The project has resulted in a way of making sure alternative text used to describe images on web pages is accurate.

Accurate and simple descriptions are important for those who use screen readers, such as individuals with visual impairments. The ‘alt text’ that is used to describe an image is usually added by the author of a web page, but in recent years this process has often been automated. The results have been varied and do not necessarily accurately describe the image.

Images where the title is used as the alternative text – sample from Outbrain advertisers

As part of the WCAG 2.1 checks for alt attributes, an additional check has been added using a pretrained network and object detection (MobileNets and COCO-SSD in TensorFlow). Initially the automated checker uses a review of the alt attributes by the Pa11y checker. Then the text resulting from the image classification is compared with the actual descriptive text in the ‘img alt’ attribute for each image on a web page. If there is a successful match between the texts, the automated review is accepted; but if none of the words correspond to a required description, a visual appraisal system presents the findings to the accessibility reviewer. This process acts as a double check and ensures issues can be flagged to the developer.
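The matching step itself reduces to comparing the detector’s labels with the words of the alt text. A simplified sketch of that comparison follows; obtaining the detected labels is detector-specific (here MobileNets/COCO-SSD) and is assumed to have happened already.

```python
import re

def labels_match_alt(detected_labels: list, alt_text: str) -> bool:
    """Return True if any detected object label appears in the alt text."""
    alt_tokens = set(re.findall(r"[a-z]+", alt_text.lower()))
    for label in detected_labels:
        # COCO labels can be multi-word, e.g. "traffic light".
        if set(label.lower().split()) <= alt_tokens:
            return True
    return False

# A match lets the automated review pass; a miss queues the image for
# the visual appraisal step described above.
print(labels_match_alt(["dog", "frisbee"], "A dog catching a frisbee"))  # True
print(labels_match_alt(["cat"], "Company logo"))                         # False
```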

A similar process has been used for visual overlaps of content, and it is intended that in future the titles of hypertext links could also be checked to ensure they accurately describe where the user would be sent if the link were activated, going beyond the existing automated checks for ‘click here’ or ‘more’ link texts and broken links.

Checking whether the image’s alternative text attribute accurately represents the image content.

In the last few months the results have been beta tested and integrated into the Web2Access digital accessibility review system by the ECS Accessibility team. The output can now be viewed as part of an Accessibility Statement as required by law since September 2018 for public sector websites.

Artificial Intelligence, Accessible and Assistive Technologies

boats on lake Como by the town frontage of Lecco
Lecco, Italy by Stefano Ferrario from Pixabay 

We are chairing a Special Thematic Session at the 17th International Conference on Computers Helping People with Special Needs, which will run from September 9th to 11th, 2020, with the pre-conference from September 7th to 8th, 2020, in Lecco, Italy.

Please come and join us at this conference and submit an extended abstract before April 1st 2020 for our special thematic session.

The aim is to encourage presenters to share their innovative thinking and provide refreshing appraisals related to the use of AI, and all that goes into AI models, to support those with disabilities in their use of accessible and assistive technologies. Here are some ideas for papers, but please do not be limited by this list:

  • AI and Inclusion, where machine learning and algorithms can be used to enable equity for those with disabilities
  • The pros and cons of AI, highlighting why issues can arise for those with disabilities, even with the most meticulously designed systems
  • The use of augmentative and assistive AI in applications to support those with disabilities
  • AI supporting all that goes into making access to online digital content easier
  • Enhanced independence using virtual assistants and robots

Contributions to an STS have to be submitted using the standard submission procedures of ICCHP.

When submitting your contribution please make sure you choose our STS under “Special Thematic Session” (Artificial Intelligence, Accessible and Assistive Technologies). Contributions to an STS are evaluated by the Programme Committee of ICCHP and by Peter Heumader and myself! Do get in touch to discuss your involvement and pre-evaluation of your contribution.

Chairs


  • E.A. Draffan, ECS Accessibility Team, Faculty of Physical Sciences and Engineering, University of Southampton

  • Peter Heumader, Institut Integriert Studieren, Johannes Kepler University Linz

AI and Inclusion projects related to Web Accessibility and AAC support

Over the last few months we have been concentrating on projects related to automated web accessibility checks and the automatic linking and categorisation of openly licensed and freely available Augmentative and Alternative Communication symbol sets for those with complex communication needs.

As has been mentioned we presented these projects at a workshop in the Alan Turing Institute in November and work has been ongoing. It is hoped that the results will be shared by the end of March 2020.

Automating Web Accessibility Checks

Recent regulations and UK laws recognise the W3C Web Content Accessibility Guidelines (WCAG) as a method of ensuring compliance, but testing can be laborious, and the checkers that automate the process need to be able to find where more errors are occurring. This has led to the development of an accessibility checker that carries out the well-known automated checks but also includes image recognition, making it possible to see whether the alternative text tags for images are appropriate. A second AI-related check involves WCAG 2.1 Success Criterion 2.4.4 Link Purpose (In Context), where “the purpose of each link can be determined from the link text alone or from the link text together with its programmatically determined link context, except where the purpose of the link would be ambiguous to users in general”.[1]

A Natural Language Processing (NLP) model is used to check whether the text in the aria-label attribute of the hyperlink matches the content of the target URL. Based on the matching result, it is possible to determine whether the target web page or website fits the link purpose criterion. Despite previous research in this area, the task is proving challenging, and two different experiments are being worked on. One experiment has been designed to use existing NLP models (e.g. GloVe), while the other is investigating the training of data with human input. The results will be published in an academic paper and at a conference.
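As a sketch of the GloVe route, one simple baseline is to average word vectors for the link label and for the target page text, then compare the two by cosine similarity. The vector distribution named below is an illustrative choice, not the project’s actual experimental setup.

```python
import numpy as np
import gensim.downloader as api

# Pre-trained GloVe vectors via gensim (an illustrative distribution).
glove = api.load("glove-wiki-gigaword-100")

def mean_vector(text: str) -> np.ndarray:
    words = [w for w in text.lower().split() if w in glove]
    return np.mean([glove[w] for w in words], axis=0) if words else np.zeros(100)

def link_purpose_score(aria_label: str, target_page_text: str) -> float:
    """Cosine similarity between a link's label and its target page text."""
    a, b = mean_vector(aria_label), mean_vector(target_page_text)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

print(link_purpose_score(
    "conference registration",
    "register here for the web science conference in june",
))
```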

AAC symbol classification to aid searches

Global Symbols with a Cboard user

The team have also investigated issues for those supporting Augmentative and Alternative Communication (AAC) users, who may have severe communication difficulties and make use of symbols and pictures on speech generating devices. A multilingual symbol repository for families, carers and professionals has been created to link different freely available symbol sets. The symbol sets can be used to create communication charts for the AAC user, but this takes time, and finding culturally appropriate symbols is not always easy. A system has been developed that automatically links and categorises symbols across symbol sets according to their parts of speech, topic and language, using a combination of linked data, natural language processing and image recognition. The latter is not always successful in isolation, as symbols lack context and concepts are not necessarily concrete (consider an image for ‘anxious’), so further work is required to enhance the system. The Global Symbols AAC symbol repository will be making use of these features in its BoardBuilder for making symbol charts by the end of March 2020.

This project is exploring existing Convolutional Neural Network (CNN, or ConvNet) models to help classify, categorise and integrate AAC symbols. Experiments have already been undertaken to produce a baseline by simply using image matrix similarity. Due to the nature of AAC symbols, some visually similar symbols represent different concepts, while some visually different symbols represent the same concept across different symbol sets. The training data set has mapped symbol image labels, and NLP models have been used to map the labels onto the same concept across different symbol sets. This will help those supporting AAC users to offer much wider symbol choices suitable for different cultures and languages. The Global Symbols API for searching openly licensed and freely available AAC symbols is already being used in the Cboard application for AAC users.
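For the image matrix similarity baseline mentioned above, the crudest workable version is to resize two symbols to a common size and compare their raw pixel vectors, along the lines of the sketch below (file names are illustrative). Its weakness is exactly the one described: visually close symbols can mean different things, which is why the label and NLP mapping sits on top.

```python
import numpy as np
from PIL import Image

def image_similarity(path_a: str, path_b: str, size=(64, 64)) -> float:
    """Cosine similarity between two symbols' greyscale pixel matrices."""
    def to_vec(path):
        img = Image.open(path).convert("L").resize(size)
        return np.asarray(img, dtype=np.float32).ravel()

    a, b = to_vec(path_a), to_vec(path_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical files: the same concept drawn by two different symbol sets.
print(image_similarity("arasaac_dog.png", "mulberry_dog.png"))
```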


[1] https://www.w3.org/WAI/WCAG21/Understanding/link-purpose-in-context.html