Working on Symbols and Concept Linking

View the WebSci 2020 presentation

The WebSci 2020 virtual conference has a special theme on Digital (In)Equality, Digital Inclusion and Digital Humanism on the first day of the conference. This gave us the chance to show the initial findings from our linking of freely available Augmentative and Alternative Communication (AAC) symbol sets to support understanding of web content.

There are no standards for the way graphical AAC symbol sets are designed or collated, other than the Blissymbolics ideographic set, which was “standardized as ISO-IR 169 a double-byte character set in 1993 including 2384 fixed characters whereas the BCI Unicode proposal suggests 886 characters that then can be combined” (Edutech Wiki).

Even emojis have a Unicode ID, but the pictographic symbols most frequently used by those with complex communication needs do not have an international encoding standard. This means that if you search for different symbols amongst a collection of freely available and openly licenced symbol sets, you find several symbols that have no relationship with the word you entered or the concept required.

symbols for up
Global Symbols used to show sample symbols when the word ‘up’ was entered in the search.

This lack of concept accuracy means that much work has to be done to enable useful automatic text to symbol support for web content. Initially there needs to be a process to support text simplification, or perhaps text summarisation in some cases. Then keywords need to be represented by a particular symbol (from a symbol set recognised by the reader) that can be accurately related to the concept by its ISO or Unicode ID. Examples can be found in the WCAG Personalization task force Requirements for Personalization Semantics using the Blissymbolics IDs.
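As a rough sketch of what such a keyword-to-concept mapping might look like, the snippet below pairs keywords with concept records; the concept labels and ID values are illustrative placeholders, not verified Blissymbolics (BCI) assignments.

```python
# Illustrative sketch only: the concept labels and ID values below are
# placeholders, not verified Blissymbolics (BCI) assignments.
CONCEPT_MAP = {
    "up": {"concept": "up (direction)", "bci_av_id": 12345},
    "house": {"concept": "house, building", "bci_av_id": 67890},
}

def symbols_for_keywords(keywords):
    """Return the concept record for each keyword with a known mapping,
    so a reader's preferred symbol set can supply the matching image."""
    return {word: CONCEPT_MAP[word] for word in keywords if word in CONCEPT_MAP}
```

Keywords with no entry simply drop out, which mirrors the real problem: until symbols are tied to stable concept IDs, unmapped words cannot be reliably symbolised.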

The presentation at the beginning of this blog will illustrate the work that has been achieved to date, but it is hoped that more can be written up in the coming months. The aim is to have improved image recognition to assist with the semantic relatedness. This automatic linking will then be used to map to Blissymbolics IDs. It is hoped that this will also enable multilingual mapping, where symbol sets already have label or gloss translations.


However, there still needs to be a process that ensures whenever symbol sets are updated the mapping can continue to be accurate as some symbol sets do not come with APIs! That will be another challenge.

Winston Churchill Memorial Trust Covid-19 Action Fund support symbol charts

boardbuilder beta version
Freely available Boardbuilder, about to be updated as version 3. It is due to be developed for personalised COVID-19 information support to aid communication, with different templates and improved symbol searches.

Thank you to the ‘Winston Churchill Memorial Trust Covid-19 Action Fund’ for making it possible for us to develop our Boardbuilder for personalising and adapting symbols for easy-to-use communication and information charts. Many freely available Augmentative and Alternative Communication (AAC) symbols are developed for children rather than adults. There are also many COVID-19 symbol charts on offer around the world, but they are rarely personalised, and hospital and care home stays are usually more than a few days long. Boardbuilder will allow for different templates and a mix of any images and symbols to support those struggling to understand what they are being told or to express themselves.

We know we need to find symbols suitable for older people and particular medical items that are used in hospitals and for social care. We also need to make it easy for users to see many different types of symbols and upload images, as well as translating labels into different languages.

Symbols with complex medical terms are not readily available in most AAC symbol sets, so we have linked the OCHA Humanitarian Icons and Openmojis to the Global Symbols’ sets and hope to adapt other symbols that have open licences.

Making information and communication charts can take time, so we are determined to ensure BoardBuilder is very easy to use and offers printouts as well as enabling the output to work with free text-to-speech / AAC applications on tablets etc.

By adding semantic embedding, alongside the present use of ConceptNet, the linking of symbol labels (glosses) should be more accurate, and it will make it easier to find appropriate symbols. This will in turn speed up chart making for those supporting people who are struggling with the masks and personal protective equipment being used in hospitals and care homes. In the future it will also help with text to symbol translations, as there are often several symbol options for one word.
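A minimal sketch of the embedding idea: pick the gloss whose vector sits closest to the query word. The three-dimensional vectors below are made up for illustration; real semantic embeddings (such as ConceptNet Numberbatch) would be used in practice.

```python
import math

# Toy word vectors standing in for real semantic embeddings;
# the values here are invented purely for illustration.
VECTORS = {
    "happy":   [0.9, 0.1, 0.0],
    "glad":    [0.85, 0.15, 0.05],
    "anxious": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_gloss_match(query, glosses):
    """Pick the gloss whose embedding is closest to the query word."""
    return max(glosses, key=lambda g: cosine(VECTORS[query], VECTORS[g]))
```

With real embeddings the same routine would let ‘happy’ retrieve a symbol labelled ‘glad’ even though the labels share no characters, which is exactly the accuracy gain described above.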

COVID-19, AI and our Conferences


Much has changed for everyone since our last blog. Swami Sivasubramanian, VP of Amazon Machine Learning at AWS, has written an article about the way AI and machine learning have been helping to fight COVID-19, and we can see how varied the use of this technology has been. However, we remain in a world that is having to come to terms with many different ways of working, and travelling to conferences has been off the agenda for the last few months.

We have continued to work on topics covered in our papers for ICCHP, which will be delivered remotely, as will the one we submitted for WebSci 2020. ISAAC 2020 has been moved to 2021, and who knows if we will get to Mexico, but hopefully at least we may have some results from the linking of concepts for several free and open Augmentative and Alternative Communication symbol sets.

As the months pass much of our work will be seen on Global Symbols with examples of how we will be using the linked symbol sets.

We are also trying to support the WCAG personalization task force in their “Requirements for Personalization Semantics” to automatically link concepts to increase understanding of web content for those who use AAC or have literacy difficulties and/or cognitive impairments.

mapping symbol sets
The future for freely available mapped sample AAC symbol sets to illustrate multilingual linking of concepts from simplified web content.

Image Recognition to check Image Description accuracy on Web Pages

A Group Design Project has supported our intention to improve some automated web accessibility checks on our Web2Access review system. The project has resulted in a way of making sure alternative text used to describe images on web pages is accurate.

Accurate and simple descriptions are important for those who use screen readers, such as individuals with visual impairments. The ‘alt text’ that is used to describe an image is usually added by the author of a web page, but in recent years this process has often been automated. The results have been varied and do not necessarily accurately describe the image.

Images where the title is used as the alternative text – sample from Outbrain advertisers

As part of the WCAG 2.1 checks for alt attributes, an additional check has been added using a pretrained network and object detection (MobileNet and COCO-SSD in TensorFlow). Initially the automated checker uses a review of the alt attributes by the Pa11y checker. Then, as an additional check, the text resulting from the image classification is compared to the actual descriptive text in the ‘img alt’ attribute for each image in a web page. If there is a successful match between the texts, the automated review is accepted, but if none of the words correspond to a required description, a visual appraisal system is used to present the findings to the accessibility reviewer. This process acts as a double check and ensures issues can be flagged to the developer.
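The matching step of this double check can be sketched as a simple comparison between the alt-text words and the object labels a detector returns; here the labels are passed in directly rather than produced by a MobileNet/COCO-SSD model, so this is an illustration of the comparison only.

```python
import re

def alt_text_matches(alt_text, detected_labels):
    """Return True if any word of any detected object label appears among
    the alt-text words; a crude stand-in for the double check described
    above (the labels would normally come from an object-detection model)."""
    words = set(re.findall(r"[a-z]+", alt_text.lower()))
    return any(
        part in words
        for label in detected_labels
        for part in label.lower().split()
    )
```

An alt text of “A dog on the beach” would pass against a detection of “dog”, while a decorative or auto-generated alt text such as “Company logo” would fail and be routed to the human visual appraisal.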

A similar process has been used for visual overlaps of content, and it is intended that in the future the titles of hypertext links could also be checked to ensure they accurately describe where the user would be sent if the link is activated – not just that they are broken links or say ‘click here’ or ‘more’, which is already checked automatically.

Checking whether the image’s alternative text attribute accurately represents the image content.

In the last few months the results have been beta tested and integrated into the Web2Access digital accessibility review system by the ECS Accessibility team. The output can now be viewed as part of an Accessibility Statement as required by law since September 2018 for public sector websites.

Artificial Intelligence, Accessible and Assistive Technologies

boats on lake Como by the town frontage of Lecco
Lecco, Italy by Stefano Ferrario from Pixabay 

We are chairing a Special Thematic Session at the 17th International Conference on Computers Helping People with Special Needs, which will run from September 9–11, 2020, with a pre-conference from September 7–8, 2020, in Lecco, Italy.

Please come and join us at this conference and submit an extended abstract before April 1st 2020 for our special thematic session.

The aim is to encourage presenters to share their innovative thinking and provide refreshing appraisals related to the use of AI, and all that goes into AI models, to support those with disabilities in their use of accessible and assistive technologies. Here are some ideas for papers, but please do not be limited by this list:

  • AI and Inclusion, where machine learning and algorithms can be used to enable equity for those with disabilities
  • The pros and cons of AI, highlighting why issues can arise for those with disabilities, even with the most meticulously designed systems.
  • The use of augmentative and assistive AI in applications to support those with disabilities
  • AI supporting all that goes into making access to online digital content easier.
  • Enhanced independence using virtual assistants and robots

Contributions to an STS have to be submitted using the standard submission procedures of ICCHP.

When submitting your contribution please make sure you choose our STS under “Special Thematic Session” (Artificial Intelligence, Accessible and Assistive Technologies). Contributions to an STS are evaluated by the Programme Committee of ICCHP and by Peter Heumader and myself. Do get in touch to discuss your involvement and pre-evaluation of your contribution.

Chairs


  • E.A. Draffan, ECS Accessibility Team, Faculty of Physical Sciences and Engineering, University of Southampton

  • Peter Heumader, Institut Integriert Studieren, Johannes Kepler University Linz

AI and Inclusion projects related to Web Accessibility and AAC support.

Over the last few months we have been concentrating on projects related to automated web accessibility checks and the automatic linking and categorisation of open licenced and freely available Augmentative and Alternative Communication symbol sets for those with complex communication needs.

As has been mentioned, we presented these projects at a workshop at the Alan Turing Institute in November, and work has been ongoing. It is hoped that the results will be shared by the end of March 2020.

Automating Web Accessibility Checks

Recent regulations and UK laws recognise the W3C Web Content Accessibility Guidelines (WCAG) as a method of ensuring compliance, but testing can be laborious, and checkers that automate the process need to be able to find where more errors are occurring. This has led to the development of an accessibility checker that carries out well-known automated checks but also includes image recognition to make it possible to see whether the alternative text for images is appropriate. A second AI-related check involves WCAG 2.1 Success Criterion 2.4.4 Link Purpose (In Context), where “the purpose of each link can be determined from the link text alone or from the link text together with its programmatically determined link context, except where the purpose of the link would be ambiguous to users in general”.[1]

A Natural Language Processing (NLP) model is used to check whether the text in the aria-label attribute of the target hyperlink matches the content at the target URL. Based on the matching result, it is possible to determine whether the target web page or website fits the link purpose criterion. Despite previous research in this area, the task is proving challenging, with two different experiments being worked on. One experiment has been designed to use existing NLP models (e.g. GloVe), while the other is investigating the training of data with human input. The results will be published in an academic paper and at a conference.
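As an illustration of the matching step, the sketch below scores a link text against the target page using crude lexical overlap; this stands in for an embedding-based similarity (e.g. averaged GloVe vectors) and is only a sketch of the idea, not the checker’s actual implementation.

```python
import re

def word_set(text):
    """Lower-case word set extracted from free text."""
    return set(re.findall(r"[a-z]+", text.lower()))

def link_purpose_score(link_text, target_page_text):
    """Fraction of link-text words found in the target page: a crude
    lexical stand-in for an embedding-based similarity measure."""
    link_words = word_set(link_text)
    if not link_words:
        return 0.0
    return len(link_words & word_set(target_page_text)) / len(link_words)
```

A link labelled ‘annual report’ pointing at a page about the annual report scores highly, while a bare ‘click here’ scores near zero against almost any page, which is why such links fail the link purpose criterion.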

AAC symbol classification to aid searches.

Global Symbols with a Cboard user

The team have also investigated issues for those supporting Augmentative and Alternative Communication (AAC) users, who may have severe communication difficulties and make use of symbols and pictures on speech generating devices. A multilingual symbol repository for families, carers and professionals has been created to link different freely available symbol sets. The symbol sets can be used to create communication charts for the AAC user, but this takes time, and finding appropriate cultural symbols is not always easy. A system has been developed that automatically links and categorises symbols across symbol sets by their part of speech, topic and language using a combination of linked data, natural language processing and image recognition. The latter is not always successful in isolation, as symbols lack context and concepts are not necessarily concrete (such as an image for ‘anxious’), so further work is required to enhance the system. The Global Symbols AAC symbol repository will be making use of these features in their BoardBuilder for making symbol charts by the end of March 2020.

This project is exploring some existing Convolutional Neural Network (CNN, or ConvNet) models to help classify, categorise and integrate AAC symbols. Experiments have already been undertaken to produce a baseline by simply using image matrix similarity. Due to the nature of AAC symbols, some similar symbols represent different concepts, while some different symbols represent the same concept across different symbol sets. The training data set has mapped symbol image labels, and NLP models have been used to map the labels to the same concept across different symbol sets. This will help those supporting AAC users offer much wider symbol choices suitable for different cultures and languages. The Global Symbols API for searching open licence and freely available AAC symbols is already being used in the Cboard application for AAC users.
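The image matrix similarity baseline mentioned above can be sketched as a pixel-level comparison. The real experiments would work on full symbol images (and the CNN work replaces raw pixels with learned features), but this toy version on small greyscale matrices shows the baseline idea and why it struggles: visually similar pixels can encode different concepts.

```python
def image_similarity(img_a, img_b):
    """Baseline pixel-level similarity between two equal-sized greyscale
    images (nested lists of 0-255 values): 1.0 for identical images,
    0.0 when every pixel differs maximally."""
    total, diff = 0, 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += 255
            diff += abs(pa - pb)
    return 1.0 - diff / total
```

Two different symbol sets’ drawings of the same concept can score poorly here, which is why label/NLP mapping is combined with the image comparison rather than relying on pixels alone.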


[1] https://www.w3.org/WAI/WCAG21/Understanding/link-purpose-in-context.html

AI and Recruitment biases

weighing scale with heavier man holding down one side

Wired UK has a very good article by Alex Lee, published on November 26th, titled “An AI to stop hiring bias could be bad news for disabled people”: the technology that helps recruiters cut through the CV pile might be pushing disabled candidates out of the running.

Alex Lee provides a very good example of what can happen when an interviewee has to undertake the daunting task of a video recruitment system. This may cut time for the company but when you read the article you will find that the process would be tough for most people, let alone someone with a visual impairment.

The data collected and the algorithms used for these processes are meant to be more and more accurate as time passes, but as Professor Mike Wald has reminded us all…

“To train the algorithm, you’re going to have to give it past data,” explains Mike Wald, professor of electronics and computer science at the University of Southampton and a fellow of the Turing Institute. “If you say, here are the characteristics of all our good employees, we want more people like them, you’re going to get more people like them. And if they haven’t got any disabled people in there, you’re probably not going to get disabled people. […]”

“Disability is a very heterogeneous characteristic. Every person with a disability has a slightly different disability. And so, there is a huge issue in how to classify disabilities,” says Wald. “If you try and classify someone, until you meet that actual person and find out what they can and can’t do, then it’s not really fair to do that.”

Wired Magazine, November 26th 2019

Many more people seem to be writing about this issue and discussing where things can go wrong with AI, such as an article published today called “Artificial Intelligence and Inclusion” by Mollie Lombardi. These articles, and many more like ours, are talking about the problem and making it clear that we need to sort it out.

If you have an answer do let us know!

We are still searching for a freely available recipe to ensure AI is inclusive, one that would enable us to take account of the very complex mix of disabilities and how they affect so many people in very different ways and to varying degrees, even at different times during the journey of life.

Educause Review – Discussing the concepts of Disclosure, Accessibility and Inclusion

In our discussion at the Alan Turing Institute last week we mentioned how hard it is to define Inclusion as a concept, so it was interesting to read this article by Lorna Gonzalez and Kristi O’Neil, published Friday, November 15, 2019, in the Transforming Higher Ed column of EDUCAUSE Review. They begin by saying that:

“Attempting to define nuanced concepts brings with it a risk of reductionism, which is why the definitions that follow draw from cognitive science, universal design, and disability studies…

Disclosure or Self-Identity

Have you heard of the WYSIATI principle? Coined by Nobel Prize laureate Daniel Kahneman and pronounced “whiz-ee-yaht-tee,” this acronym stands for What You See Is All There Is. It’s theorized as a common, unconscious bias that even well-intentioned educators make: “I can’t see it, so it doesn’t exist.” It’s the idea that our minds are prone to making judgements and forming impressions based on the information available to us. In teaching and learning, the WYSIATI principle is an idea with consequences. That is, the students in our classes have all kinds of invisible circumstances that can impact their learning. Some of these circumstances include the following:

  • Attention or comprehension problems because of an emotional hardship or learning disability
  • Using a single device, like a phone or a tablet, for doing all of their digital coursework
  • Familial or other work obligations outside of school
  • A long commute to and from campus
  • Homelessness, or food or housing insecurities
  • Low vision, but not blindness; difficulty looking at a screen for extended periods of time
  • Difficulty hearing, but not deafness
  • A lack of experience or experienced mentors in higher education or in particular disciplines
  • Returning to school after a period of time and feeling rusty, insecure, or experiencing imposter syndrome2

While campuses are required to provide services (e.g., alternative formats for course materials, extra exam time, etc.) for students with disabilities and may have additional support for students with various other challenges, unless students self-identify or disclose their circumstances, courses and associated materials may contain barriers to student learning—even if those barriers are inadvertent. A faculty colleague shared with us that she had favored the use of a particular color to emphasize important ideas in her documents for nearly an entire semester before a student revealed to her that he could not see that color. Had she known, she would have made a simple change so that the student could read or understand the most important parts of the course documents. This is the WYSIATI principle at work: this faculty member couldn’t see that her student was color blind, so she didn’t know that she needed to do anything about it.

Whether or not students need to self-identify or disclose their circumstances is not the point. The point is that invisible circumstances exist regardless of disclosure, and, collectively, we can all do a better job of awareness: identifying and removing barriers from courses can benefit everyone, but doing so can also be critical to those who need it.

Accessibility

Colloquially, the term accessibility is often used to describe items or spaces that are available for use. One can expect an accessible road to be open as an option for safe, unobstructed travel for most vehicles. Here’s another example of an email from Dropbox, an online file storage tool, after a user reached the free storage limit.

Dropbox dialog box

In this email, the term accessible refers to availability. The user will not be able to access files because they will not be available on other devices. In both of these examples, however, the term accessible is limited to able individuals—those who are able to access material in its current form. In teaching and learning, as well as in universal design, accessible means that materials and spaces are not only available but also free from invisible barriers—even unintended ones—for anyone who needs to access those materials.

For example, an accessible text is one that is clearly organized, uses an unembellished font, and incorporates headings to separate sections. Online images should contain alternative text for moments when pages don’t load properly or for readers who use assistive devices. Videos should include closed captions or transcripts for people with hearing issues or attention/tracking problems, as well as for those who multitask (watch while exercising, for example). Even certain colloquial terms and cultural references that are used without context or explanation in a lecture or course material can function as barriers to learning.

The term accessible is evolving and currently connotes disability services and accommodations. Citing a keynote address by Andrew Lessman, a distance education lawyer, Thomas Tobin and Kirsten Behling explain why accessibility as accommodation is a problematic way to think about design: “‘accommodations are supposed to be for extraordinary circumstances'” and, paraphrasing him, added the following:

[I]t should be very rare for people to need to make specific requests for circumstances to be altered just for them [. . .] all of these environments, whether physical or virtual, should be designed so that the broadest segment of the general population will be able to interact successfully with materials and people.3

This idea applies to course design and instruction just as much as it applies to physical spaces. Practicing accessible course design and instruction is an opportunity (and a necessary imperative) to develop a pedagogy of inclusion.

Inclusion

The previous two definitions have tried to articulate the idea that students carry intersecting invisible circumstances with them into the classroom. Whether or not students disclose their circumstances—or whether faculty members invite students to disclose them—does not determine their existence. From this perspective, inclusion means designing and teaching for variability. Faculty can practice inclusive pedagogy by following universal design principles and offering multiple options for representation, engagement, and expression:

Options are essential to learning, because no single way of presenting information, no single way of responding to information, and no single way of engaging students will work across the diversity of students that populate our classrooms. Alternatives reduce barriers to learning for students with disabilities while enhancing learning opportunities for everyone.4

In a Nutshell . . .

Inclusive pedagogy can be an act of intention—something that is initiated before and during the course design process—rather than being an act of revision or omission.

Contribute to the Conversation!

Tweet your favorite inclusive design practices and resources, and be sure to tag @TLIatCI, @a11ygal, and @lgonzalez1.

For more insights about advancing teaching and learning through IT innovation, please visit the EDUCAUSE Review Transforming Higher Ed blog as well as the EDUCAUSE Learning Initiative page.

Notes

Special thanks to Amanda Timpson, Sarah Lohnes Watulak, and Tara Hughes for contributing their time to this post.

  1. Clair Lauer, “Contending with Terms: ‘Multimodal’ and ‘Multimedia’ in the Academic and Public Spheres,” Computers and Composition 26, no. 4 (2009): 225–239. 
  2. Valerie Young, “Finding a Name for the Feelings,” Imposter Syndrome (website), October 23, 2017; Megan Dalla-Camina, “The Reality of Imposter Syndrome,” Psychology Today, September 23, 2018. 
  3. Thomas Tobin and Kirsten Behling, Reach Everyone, Teach Everyone: Universal Design for Learning in Higher Education (Morgantown: West Virginia University Press, 2018). 
  4. Ibid. 

Lorna Gonzalez is an Instructional Designer for Teaching and Learning Innovations and a Lecturer at California State University Channel Islands.

Kristi O’Neil is the Instructional Technologist-Accessibility Lead for Teaching and Learning Innovations at California State University Channel Islands.

© 2019 Lorna Gonzalez and Kristi O’Neil. The text of this work is licensed under a Creative Commons BY 4.0 International License.


Seminar on ‘AI and Inclusion Challenges’ November 22nd 11.30 – Link up to the Alan Turing Institute via Zoom!

There is now a dial-in link to our seminar on AI and Inclusion

To support the Alan Turing Institute’s statement that ‘promoting and embedding equality, diversity and inclusion is integral to achieving our mission’, the research question addressed by this proposed new Challenge is ‘How can AI overcome barriers to inclusion?’

Of the nine protected characteristics identified by the Equality Act 2010, AI would appear to have the greatest potential to help overcome barriers to inclusion for disabled people in terms of practical strategies for digital accessibility and assistive technology support. Examples of how innovative uses of AI can support those with disabilities include:

  • image and video description, independent navigation (vision);
  • captioning for words, sounds and emotions, sign language translation, adaptive hearing aids (hearing);
  • symbol generation, communication and translation, speech synthesis (communication);
  • text summarization and simplification (cognition);
  • smart monitoring and support (care);
  • web accessibility checking and correction (all)

Disabled people need to be involved in the design of Assistive or Augmentative Intelligence for ‘edge cases/outliers’. As Disability is not a single homogeneous characteristic, algorithms need to work for all disabilities in the multitude of different settings and situations in which people find themselves. This also applies to ethical and fairness issues related to data gathering and to algorithms affecting protected characteristics.

Challenges to Implementation of AI and inclusion


In no particular order, as part of our roadmap, we have been looking at the challenges facing aspects of inclusion for those who come under the umbrella of the protected characteristics named in the UK’s Equality Act 2010.

The list of challenges, for disabled people and those becoming less able due to age or debilitating illnesses, seems to grow despite the innovations being developed thanks to the use of clever algorithms, increasing amounts of data and high-powered computing. This is our first attempt at publishing our ideas…


Challenges

Understanding the role and meaning of Inclusion

  • Equity v equality

Disability is Heterogeneous, not Homogeneous

  • A single ‘Disability’ classification is not helpful, as every disabled person can have very different needs
  • Small data for individual disabilities compared to big data for all (e.g. removing individuals whose data is identifiable)

Skills and Abilities rather than a deficit model

  • Looking at what an individual can do rather than focussing on the disabilities/difficulties

Designing for average rather than edge cases and outliers

  • Every disabled person may have very different needs compared to peers without a disability

Assumptions of Stakeholders

  • Changing attitudes
  • Lack of understanding – AI and ethics, data collection, algorithms, transparency  
  • Expectations of experts – that they will have a magic wand
  • Eugenics issues (e.g. Autism genetic correction)

Few disabled people involved in AI (Nothing about us without us)

  • Disabled people need to be involved in AI decisions
  • More disabled people need to understand AI

Capacity Issues

  • Resources – human, financial, tools
  • Policies and Procedures
  • Lack of general ICT as well as AT/AAC technologies that are regularly used in many settings

Cohesive Approach

  • Collaboration

AT and AAC Market

  • Small Market
  • Localisation issues

Lack of Competencies

  • Knowledge building

Black box, non-transparent deep neural network machine learning

  • Difficult to understand the implications of AI deep neural networks for disabled people

Lack of interest

  • Disabled people’s inclusion is of little interest to Turing researchers and to Turing research challenges and programmes (a lack of knowledge due to a lack of undergraduate courses, PhD supervisors, high-impact journals, research funding etc.)

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”

A. M. Turing (1950) Computing Machinery and Intelligence. Mind 59: 433–460.