Learning more about Generative AI and AAC symbols

Creating symbols for communication, and making them work to support spoken and written language, has never been easy. Ideas around guessability or iconicity, and transparency to aid learning or remembering, are just one side of the coin in terms of design. There are also questions around style, size, type of outline and colour, amongst many other design issues that need to be carefully considered, as well as the entire schema or set of rules that exists for a particular AAC symbol set. These are aspects that are rarely discussed in detail other than by those developing the images.

However, when trying to work with computer algorithms to make adaptations from one image to another, a starting point can be image-to-text recognition, in order to discover how well the chosen training data will work. It is possible to see whether the systems can deal with the lack of background and other details that normally give images context, but are often missing from AAC symbol sets. The computer has no way of knowing whether an animal is a wolf or a dog unless there are additional elements, such as a collar, or a wild natural setting such as a forest rather than a room in a house. If it is possible to provide a form of alternative text as a visual description, not dissimilar to that used by screen reader users when viewing images on web pages, the training data provided may then work for an image-to-image situation.
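As a rough illustration of this image-to-text starting point, the sketch below uses the open-source Hugging Face transformers library with the BLIP captioning model; this is an assumed tool choice rather than the project's own pipeline, and the file path is hypothetical.

```python
# Sketch: caption a candidate training symbol and compare the result with its
# label. BLIP is one possible captioning model, not the project's own choice.
from transformers import BlipProcessor, BlipForConditionalGeneration
from PIL import Image

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def caption_symbol(path: str) -> str:
    """Return the model's best guess at what an AAC symbol depicts."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(output_ids[0], skip_special_tokens=True)

# A 'dog' symbol captioned as 'a wolf in a forest' flags exactly the kind of
# ambiguity discussed above. The path is hypothetical.
print(caption_symbol("symbols/dog.png"))
```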

There remains the need to gather enough data to allow the AI systems to try to predict what it is you want. The systems used by Stable Diffusion and DALL-E 2 have scraped the web for masses of images in various styles, but they do not seem to have picked up on AAC symbol sets! There is also the fact that each topic category within a symbol set tends to have a different style, even though the outlines and some colours may be similar, and humans are generally able to recognise similarities within a symbol set that cannot necessarily be captured by the AI model that has been developed. More tweaks will always be needed, along with more training data, as the outcomes are evaluated.

Comparison of symbol sets

The image above compares groups of symbols from the ARASAAC, Mulberry, Sclera and Blissymbolics sets.

The other problem is that most generative artificial intelligence (AI) systems using models such as Stable Diffusion and DALL-E 2 are designed to provide unique images in a chosen style, even when you enter the same text prompt. Each outcome will therefore look different from your first or second attempt. In other words, there is very little consistency in how the details of the picture are put together, other than that the overall image will appear to have a certain style. So if you enter the text prompt “A female teacher in front of a white board with a maths equation”, the system can generate as many images as you want, but none will be exactly the same (a quick sketch of this follows the figure below).

A female teacher in front of a white board with a math equation

Created using DALL-E 2
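To make the consistency problem concrete, here is a minimal sketch using the open-source diffusers library (an assumption for illustration; the images above came from DALL-E 2's own service): the same prompt produces a visibly different picture for every random seed.

```python
# Sketch: the same prompt generates a different image on every run unless the
# random seed is pinned.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
prompt = "A female teacher in front of a white board with a maths equation"

for seed in (0, 1, 2):
    generator = torch.Generator().manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"teacher_seed{seed}.png")  # three visibly different teachers
```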

Nevertheless, Chaohai Ding has managed to create examples of AI-generated Mulberry AAC symbols by using Stable Diffusion with the addition of DreamBooth, which needs only a minimal number of images in a consistent style. There are still multiple options available from the same text prompt, but the ‘look and feel’ of those automatically generated images makes us want to keep working with these techniques to support personalised AAC symbol adaptations (a usage sketch follows the example images below).

racing driver friend and astronaut

These three images, in the style of the professions category of the Mulberry Symbol set, were generated from the text prompts ‘racing driver’, ‘friend’ and ‘astronaut’.
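A rough sketch of how such a fine-tuned model might be reused is shown below; the checkpoint path and the ‘sks mulberry symbol style’ trigger phrase are hypothetical stand-ins, not the actual DreamBooth run used for the images above.

```python
# Sketch: generating symbols from a DreamBooth fine-tuned Stable Diffusion
# checkpoint. DreamBooth binds a rare trigger token to a handful of training
# images so that new prompts inherit the symbol set's look and feel.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("./dreambooth-mulberry")  # hypothetical local fine-tune

for concept in ("racing driver", "friend", "astronaut"):
    image = pipe(f"a {concept}, in sks mulberry symbol style").images[0]  # hypothetical trigger phrase
    image.save(f"{concept.replace(' ', '_')}.png")
```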

We would like to thank Steve Lee for allowing us to use the Mulberry Symbol set on Global Symbols and the University of Southampton Web Science Institute Stimulus Fund for giving us the chance to collaborate on this project with Professor Mike Wald’s team.

Stepping into the world of AI, Icon Recognition, Human Perception and AAC Symbols

In the last few months we have been working on our Artificial Intelligence (AI) and AAC symbol project, discovering how inconsistent the pictographic images in some AAC symbol sets can appear, and the impact this has on the various stages of image processing, such as perception, detection and recognition.

We have been researching how inconsistency can hamper automated image recognition after pre-processing and feature extraction, but the advent of Stable Diffusion as a deep learning model allows us to include visual text descriptions of images alongside image-to-image recognition processes to support our ideas of symbol-to-symbol recognition and creation. A small code sketch follows the figure below.

Stable Diffusion

Stable Diffusion – “The input image on the left can produce several new images (on the right). This new model can be used for structure-preserving image-to-image and shape-conditional image synthesis.” https://stability.ai/blog/stable-diffusion-v2-release
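The sketch below illustrates the structure-preserving image-to-image idea from the quote above, using diffusers' img2img pipeline as an assumed implementation; the input symbol file is hypothetical.

```python
# Sketch: an existing symbol guides the layout while the prompt restyles it.
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
source = Image.open("symbols/teacher_arasaac.png").convert("RGB")  # hypothetical input symbol

# strength controls how far the output may drift from the source structure
result = pipe(prompt="a teacher, flat pictographic AAC symbol, bold outline",
              image=source, strength=0.6).images[0]
result.save("teacher_restyled.png")
```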

Furthermore, with the help of Professor Irene Reppa and her project team researching “The Development of an Accessible, Diverse and Inclusive Digital Visual Language”, we have discovered many overlaps with the work they are doing on icon standardisation. Working together, we may be able to adapt our original voting criteria to provide a more granular approach to ensuring that automatically generated AAC symbols in the style of a particular symbol set allow for ‘guessability’ (transparency) and ease of learning, whilst also making them appealing based on a much more inclusive set of criteria. The latter have been used by many more evaluators over the last 8 years, as mentioned in the blog “When the going gets tough the beautiful get going.”

One important finding from Professor Reppa’s previous research was that when “icons were complex, abstract, or unfamiliar, there was a clear advantage for the more aesthetically appealing targets. By contrast, when the icons were visually simple, concrete, or familiar, aesthetic appeal no longer mattered.” The research team are now looking at yet more attributes, such as consistency, complexity and abstractness, to illustrate why and how the visual perception of icons changes within groups and in different situations or environments.

In the past we have used a simple voting system with five criteria, scored on a Likert scale with an option to comment on the symbol; the evaluators have been experienced AAC users or those working in the field (a small group). In previous symbol surveys it has usually been the individual evaluator’s perception of the symbol, as seen in a text comment, that provided the best information. But the comments have been few in number, and the cohorts not necessarily representative of the wider population of communicators. A sketch of one possible recording scheme follows the criteria figure below.

Symbol voting criteria
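For readers curious how such votes might be recorded and aggregated, here is a minimal sketch; the five criterion names are hypothetical placeholders, not the project's actual criteria (those appear in the figure above).

```python
# Sketch: record five-criterion Likert votes with optional free comments and
# compute the mean score per criterion. Criterion names are hypothetical.
from statistics import mean

CRITERIA = ["guessability", "learnability", "appeal", "clarity", "fit_to_set"]  # hypothetical names

votes = [
    # one dict per evaluator: 1 (strongly disagree) .. 5 (strongly agree)
    {"scores": {"guessability": 4, "learnability": 5, "appeal": 3, "clarity": 4, "fit_to_set": 5},
     "comment": "Looks like a wolf rather than a dog."},
    {"scores": {"guessability": 2, "learnability": 3, "appeal": 4, "clarity": 3, "fit_to_set": 4},
     "comment": ""},
]

for criterion in CRITERIA:
    print(criterion, round(mean(v["scores"][criterion] for v in votes), 2))
```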

There is no doubt in my mind that we need to keep exploring ways to enhance our evaluation techniques by learning more from icon-based research, whilst remaining aware of the different needs of AAC users, for whom symbols may be a more abstract representation of a concept. This process may also help us to better categorise the symbols in the Global Symbols repository, to aid text-based and visual searches for those developing paper-based communication charts, boards and books, as well as linking to the repository through AAC apps such as PiCom and Cboard.

AI for auto-translations; different languages for symbols

Over the last couple of months we have been testing the different AI automatic translation offerings, to try to work out whether we can translate symbol labels, with a chance to edit them online when they don’t make sense! This work relates to an Augmentative and Alternative Communication (AAC) symbol repository – Global Symbols.

Participants on the site who are registered AAC symbol developers can use Microsoft Azure’s cognitive translation services, but this does not work for all the languages we need.

Translation English to Dutch symbol labels

Microsoft Azure supports 80 languages, but sadly not Macedonian or Montenegrin. This also means that when we use Weblate, an open-source system that draws on the Microsoft service, for menus and navigational elements on the website, there is the same problem. However, having tested the system with manual checks of other languages that we needed, Microsoft appeared to provide a broadly satisfactory outcome.
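For illustration, translating a batch of symbol labels through the Azure Translator REST API (version 3.0) looks roughly like the sketch below; the subscription key and region are placeholders, and editorial review of the output still happens online, as described above.

```python
# Sketch: translate symbol labels from English to Dutch with the Azure
# Translator REST API (v3.0). Key and region are placeholders.
import requests

ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "YOUR_KEY",        # placeholder
    "Ocp-Apim-Subscription-Region": "YOUR_REGION",  # placeholder
    "Content-Type": "application/json",
}

labels = ["dog", "teacher", "hospital"]
params = {"api-version": "3.0", "from": "en", "to": "nl"}
body = [{"text": label} for label in labels]

response = requests.post(ENDPOINT, params=params, headers=HEADERS, json=body)
for label, item in zip(labels, response.json()):
    print(label, "->", item["translations"][0]["text"])
```

Reversing the direction, as needed for the Turkish symbol set mentioned below, is just a matter of swapping the `from` and `to` parameters.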

When using Moodle we have found that its Automated Manipulation Of Strings (AMOS) translation system can be used alongside the Google translation API, which does have Macedonian! Amazon also has Macedonian, but supports only 71 languages compared to Google's 100.

Cost and the type of translation service required obviously affect the choice. In our case we have been incredibly lucky in usually wanting a translation from English to another language, but sometimes it is important to be able to reverse the direction, as with one symbol set where we need to go from Turkish to English. Here Wikipedia offers a helpful comparison chart, but do check the particular company sites, as they suggest.

Brewing tea!

Tenth Global Accessibility Awareness Day (GAAD) May 20, 2021

Over the last few years there has been a general move towards seeing how AI can help individuals involved with digital accessibility overcome some of the barriers faced by those with disabilities. The use of machine learning can also provide access via assistive technologies that have improved to such an extent that they need less and less human intervention. Examples include automatic captioning on videos, such as those presented on YouTube, and speech recognition.

The question is whether we have really moved on from Deque’s 2018 “Five Ways in Which Artificial Intelligence Changes the Face of Web Accessibility”.

These included: 

  • Automated image recognition,
  • Automated facial recognition,
  • Automated lip-reading recognition,
  • Automated text summarization,
  • Real-time, automated translations.

Visiting the GAAD events page is often a good way to find out more, as companies and organisations worldwide share what they have achieved over the year. Examples include Google, with its Machine Learning for Accessibility session discussing Voice Access, Lookout and Live Transcribe along with Sound Notifications for Android on May 19, 8:15 PM, and Microsoft, with its AI-powered 365 event, amongst others also listed on the Access 2 Accessibility site.

There is an AI for Accessibility Hackathon (Virtual) from May 24th to June 29th, 9-10am BST (Beirut, Lebanon), run by the ABLE Club at the American University of Beirut. This competition aims to rally talent and foster the regional development of an innovative entrepreneurship community around artificial intelligence, while also increasing social inclusiveness.

AccessiBe.com uses machine learning and computer vision technologies for image recognition and OCR as it scans web pages for accessibility issues. Our Group Design Project team used similar technologies on Web2Access to highlight alt tags that were possibly a poor representation of an image on a website, to show where overlaps occurred when zoom was used, and to visualise how a site would appear on a mobile phone if it failed WCAG guidelines.

However, still to come is Apple’s use of AI for screen recognition on iOS 14, which “uses on-device intelligence to recognize elements on your screen to improve VoiceOver support for app and web experiences”, as well as detecting and identifying “important sounds such as alarms, and alerts you to them using notifications.”

So let’s all celebrate the improvements in digital accessibility that AI can bring, whilst making sure that one day there will be no need for an AccessiBe YouTube video about “why web accessibility matters.” It will just be something we can take for granted! Equal Access for All.

Web Science 2021 conference and workshops

The 13th ACM Web Science 2021 conference, to be held on June 21st – June 25th, will host 12 interdisciplinary workshops addressing how Web Science research can illuminate key contemporary issues and global challenges.

We really would love you to submit your ideas, or even a paper, to our AI and Inclusion workshop, or just to come and join us virtually during the afternoon we are allotted (timing yet to be published!).

Accepted workshop papers will be published in the companion collection of the ACM WebSci’21 proceedings.

AI and Inclusion – Overcoming accessibility gaps on the Social Web

We are planning to make this workshop an interesting afternoon of presentations and a debate about how AI can help to achieve the goal of inclusion, considering the digital barriers that prevent people from enjoying the social web.

Online interactivity and conversations should be accessible to all, all the more so during this period of isolation from face-to-face connections.

Important Dates:

Apr 23, 2021 — Workshop paper submission deadline

May 17, 2021 — Camera-ready deadline for the Proceedings

For more information, please see https://websci21.webscience.org/workshops

Web Page Accessibility and AI

computer with webpage

Over the last year there has been an increasing number of projects using machine learning and image recognition to solve issues that cause accessibility barriers for web page users. Articles have been written about the subject, but we explored these ideas over a year ago, having already added image recognition to check the accuracy of alternative texts on sites when carrying out an accessibility review on Web2Access.

Since that time we have been working on capturing data from online courses to develop training data via an ontology that can give those working in education a way of seeing what might cause a problem before the student even arrives on the course. The idea is that authors of the content can be alerted to difficulties such as a lack of alternative texts or a need to annotate equations.
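A minimal sketch of that kind of pre-arrival check appears below; BeautifulSoup is an assumed tool rather than the project's actual stack, and the checks shown (missing alt text, unannotated MathML) are just the two examples from the paragraph above.

```python
# Sketch: scan course HTML for images with no alt text and for equations
# with no annotation, so authors can be alerted before students arrive.
from bs4 import BeautifulSoup

def flag_course_page(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    issues = []
    for img in soup.find_all("img"):
        if not img.get("alt", "").strip():
            issues.append(f"Image without alt text: {img.get('src', '?')}")
    for math in soup.find_all("math"):
        if math.find("annotation") is None:
            issues.append("Equation without an annotation")
    return issues

page = '<img src="graph.png"><math><mi>x</mi></math>'
for issue in flag_course_page(page):
    print(issue)
```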

computer with presentation

The same can apply to online lectures provided for students working remotely. Live captioning of the videos is largely provided via automatic speech recognition. Once again, a facilitator can be alerted to where errors are appearing in a live session, so that manual corrections can be made at speed and the quality of the output improved, providing not just more accurate captions over time but also transcripts suitable for annotation. NRemote will provide a system that can be customised and offer students the chance to use teaching and learning materials in multiple formats.

We have also been discussing text simplification that makes use of machine learning. The team behind EasyText AI have been making web pages easier to read and are now looking at incorporating text-to-symbol support, where a user can choose a symbol set to suit their preference.

three sentences using symbols saying I read your red book today

Working on Symbols and Concept Linking

View the WebSci 2020 presentation

The WebSci 2020 virtual conference has a special theme on Digital (In)Equality, Digital Inclusion and Digital Humanism on its first day. This gave us the chance to show the initial findings from our linking of freely available Augmentative and Alternative Communication (AAC) symbol sets to support understanding of web content.

There are no standards in the way graphical AAC symbol sets are designed or collated other than the Blissymbolics ideographic set that was “standardized as ISO-IR 169 a double-byte character set in 1993 including 2384 fixed characters whereas the BCI Unicode proposal suggests 886 characters that then can be combined.” Edutech Wiki.

Even emojis have a Unicode ID, but the pictographic symbols most frequently used by those with complex communication needs do not have an international encoding standard. This means that if you search amongst a collection of freely available, openly licensed symbol sets, you find several symbols that have no relationship with the word you entered or the concept required.

symbols for up
Global Symbols used to show sample symbols when the word ‘up’ was entered in the search.

This lack of concept accuracy means that much work has to be done to enable useful automatic text-to-symbol support for web content. Initially there needs to be a process to support text simplification, or perhaps text summarisation in some cases. Then keywords need to be represented by a particular symbol (from a symbol set recognised by the reader) that can be accurately related to the concept by its ISO or Unicode ID. Examples can be found in the WCAG Personalization task force’s Requirements for Personalization Semantics, using the Blissymbolics IDs.
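The sketch below shows the shape of that keyword-to-symbol step; the concept ID numbers and file names are made up for illustration, but the point is that one shared ID lets any symbol set recognised by the reader be swapped in without changing the underlying concept.

```python
# Sketch: keywords resolve to a shared concept ID (Blissymbolics-style,
# with made-up numbers), and each symbol set maps that ID to its own image.
CONCEPT_IDS = {"up": 18031, "read": 16415, "book": 12607}  # hypothetical IDs

SYMBOLS_BY_SET = {  # hypothetical file names in two symbol sets
    "arasaac":  {18031: "arasaac/up.png",  16415: "arasaac/read.png"},
    "mulberry": {18031: "mulberry/up.svg", 16415: "mulberry/read.svg"},
}

def symbol_for(keyword: str, symbol_set: str) -> str | None:
    concept = CONCEPT_IDS.get(keyword)
    return SYMBOLS_BY_SET.get(symbol_set, {}).get(concept)

print(symbol_for("up", "mulberry"))  # mulberry/up.svg
```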

The presentation at the beginning of this blog illustrates the work achieved to date, but it is hoped that more can be written up in the coming months. The aim is to improve image recognition to assist with semantic relatedness. This automatic linking will then be used to map to Blissymbolics IDs. It is hoped that this will also enable multilingual mapping, where symbol sets already have label or gloss translations.

laptop coding

However, there still needs to be a process that ensures whenever symbol sets are updated the mapping can continue to be accurate as some symbol sets do not come with APIs! That will be another challenge.

Winston Churchill Memorial Trust Covid-19 Action Fund supports symbol charts

boardbuilder beta version
The freely available Boardbuilder, about to be updated to version 3. It is due to be developed for personalised COVID-19 information support, to aid communication with different templates and improved symbol searches.

Thank you ‘Winston Churchill Memorial Trust Covid-19 Action Fund’ for making it possible for us to develop our Boardbuilder for personalising and adapting symbols for easy-to-use communication and information charts. Many freely available Augmentative and Alternative Communication (AAC) symbols are developed for children rather than adults. There are also many COVID-19 symbol charts on offer around the world, but they are rarely personalised, and hospital and care home stays usually last more than a few days. Boardbuilder will allow for different templates and a mix of any images and symbols to support those struggling to understand what they are being told or to express themselves.

We know we need to find symbols suitable for older people and particular medical items that are used in hospitals and for social care. We also need to make it easy for users to see many different types of symbols and upload images, as well as translating labels into different languages.

Symbols with complex medical terms are not readily available in most AAC symbol sets, so we have linked the OCHA Humanitarian Icons and Openmojis to the Global Symbols’ sets and hope to adapt other symbols that have open licences.

Making information and communication charts can take time, so we are determined to ensure Boardbuilder is very easy to use, offering printouts as well as enabling the output to work with free text-to-speech / AAC applications on tablets and other devices.

By adding semantic embedding alongside the present use of ConceptNet, the linking of symbol labels (glosses) should become more accurate, making it easier to find appropriate symbols. This will in turn speed up chart making for those supporting people who are struggling with the masks and personal protective equipment being used in hospitals and care homes. In the future it will also help with text-to-symbol translations, as there are often several symbol options for one word.
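As a sketch of the semantic-embedding idea, the snippet below ranks symbol glosses by cosine similarity to a search term using the sentence-transformers library (an assumed tool choice); compare this with the unrelated results for ‘up’ shown earlier.

```python
# Sketch: rank symbol glosses by embedding similarity to a search term,
# so 'up' matches 'upwards arrow' before unrelated labels.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
glosses = ["upwards arrow", "wake up", "cup", "mountain top", "down"]

query_vec = model.encode("up", convert_to_tensor=True)
gloss_vecs = model.encode(glosses, convert_to_tensor=True)

scores = util.cos_sim(query_vec, gloss_vecs)[0]
for gloss, score in sorted(zip(glosses, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.2f}  {gloss}")
```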

COVID-19, AI and our Conferences

conference seating

Much has changed for everyone since our last blog. Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, has written an article about the way AI and machine learning have been helping to fight COVID-19, and we can see how varied the use of this technology has been. However, we remain in a world that is having to come to terms with many different ways of working, and travelling to conferences has been off the agenda for the last few months.

We have continued to work on topics covered in our papers for ICCHP, which will be delivered remotely, as will the one we submitted for WebSci 2020. ISAAC 2020 has been moved to 2021, and who knows if we will get to Mexico, but hopefully at least we will have some results from the linking of concepts across several free and open augmentative and alternative communication symbol sets.

As the months pass much of our work will be seen on Global Symbols with examples of how we will be using the linked symbol sets.

We are also trying to support the WCAG personalization task force in their “Requirements for Personalization Semantics” to automatically link concepts to increase understanding of web content for those who use AAC or have literacy difficulties and/or cognitive impairments.

mapping symbol sets
The future for freely available mapped sample AAC symbol sets to illustrate multilingual linking of concepts from simplified web content.

Image Recognition to check Image Description accuracy on Web Pages

A Group Design Project has supported our intention to improve some automated web accessibility checks on our Web2Access review system. The project has resulted in a way of making sure alternative text used to describe images on web pages is accurate.

Accurate and simple descriptions are important for those who use screen readers, such as individuals with visual impairments. The ‘alt text’ that is used to describe an image is usually added by the author of a web page, but in recent years this process has often been automated. The results have been varied and do not necessarily accurately describe the image.

Images where the title is used as the alternative text – sample from Outbrain advertisers

As part of the WCAG 2.1 checks for alt tags, an additional check has been added using a pretrained network and object detection (MobileNets and COCO-SSD in TensorFlow). Initially the automated checker uses a review of the alt tags by the Pa11y checker. The text resulting from the image classification is then compared with the actual descriptive text in the ‘img alt’ attribute of each image on a web page. If there is a successful match between the texts, the automated review is accepted; if none of the words correspond to the required description, a visual appraisal system presents the findings to the accessibility reviewer. This process acts as a double check and ensures issues can be flagged to the developer.
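In outline, the double check might look like the sketch below; the detect_objects function is a stand-in for the MobileNets/COCO-SSD step, so the detections shown are hypothetical.

```python
# Sketch: compare the words detected by an object-recognition model with the
# words in the page's alt text.
def detect_objects(image_path: str) -> set[str]:
    """Stand-in for the TensorFlow MobileNets / COCO-SSD detector."""
    return {"dog", "person"}  # hypothetical detections

def alt_text_matches(image_path: str, alt_text: str) -> bool:
    alt_words = {w.strip(".,").lower() for w in alt_text.split()}
    return bool(detect_objects(image_path) & alt_words)

# A match is accepted automatically; a miss is queued for visual appraisal.
print(alt_text_matches("photo.jpg", "A dog playing in the park"))  # True
print(alt_text_matches("photo.jpg", "Company logo"))               # False -> flag
```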

A similar process has been used for visual overlaps of content, and it is intended that in future the titles of hypertext links could also be checked to ensure they accurately describe where the user would be sent if the link were activated, not just that they contain the already automatically checked ‘click here’ or ‘more’ text, or are broken links.

Checking whether the image’s alternative text attribute accurately represents the image content.

In the last few months the results have been beta tested and integrated into the Web2Access digital accessibility review system by the ECS Accessibility team. The output can now be viewed as part of an Accessibility Statement, as required by law for public sector websites since September 2018.