Over the last year there has been an increasing number of projects using machine learning and image recognition to tackle the accessibility barriers faced by web page users, and articles have been written on the subject. We began exploring these ideas over a year ago, having already added image recognition to check the accuracy of alternative texts when carrying out accessibility reviews on Web2Access.
Since then we have been capturing data from online courses to develop training data via an ontology that can show those working in education what might cause a problem before a student even arrives on the course. The idea is that content authors can be alerted to difficulties such as missing alternative texts or equations that need annotating.
The same can apply to online lectures provided for students working remotely. Live captioning of these videos is largely provided by automatic speech recognition. Once again, a facilitator can be alerted to where errors are appearing in a live session, so that manual corrections can be made quickly and the quality of the output improved, providing not just more accurate captions over time but also transcripts suitable for annotation. NRemote will provide a system that can be customised and offer students a chance to use teaching and learning materials in multiple formats.
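One simple way of alerting a facilitator, sketched below, is to flag words in the caption stream whose recognition confidence is low. This is purely illustrative and not the NRemote implementation; the `(word, confidence)` input format and the 0.80 threshold are assumptions, though most ASR engines do expose per-word confidence scores of this kind.

```python
# Minimal sketch: flag low-confidence words in a live ASR caption
# stream so a facilitator can correct them quickly.

def flag_for_review(caption_words, threshold=0.80):
    """Return the caption text plus a list of (index, word) pairs
    whose recognition confidence falls below the threshold."""
    flagged = [(i, w) for i, (w, conf) in enumerate(caption_words)
               if conf < threshold]
    text = " ".join(w for w, _ in caption_words)
    return text, flagged

# Invented example stream of (word, confidence) pairs.
stream = [("the", 0.99), ("mitochondria", 0.62), ("is", 0.97),
          ("the", 0.98), ("powerhouse", 0.91), ("of", 0.99),
          ("the", 0.99), ("cell", 0.95)]
text, review = flag_for_review(stream)
```

Here only "mitochondria" would be highlighted for manual correction, which is the kind of targeted intervention that keeps a live session flowing.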
We have also been discussing text simplification that makes use of machine learning. The team behind EasyText AI have been making web pages easier to read and are now looking at incorporating text-to-symbol support, where a user can choose a symbol set to suit their preference.
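A first step in any simplification pipeline is deciding which sentences need simplifying at all. The sketch below uses a crude readability heuristic (sentence length and long-word ratio); the thresholds are illustrative assumptions on our part, not EasyText AI's actual rules.

```python
# Hedged sketch: flag sentences that look hard to read, as candidates
# for machine-learning-based simplification further down a pipeline.
import re

def flag_hard_sentences(text, max_words=20, long_word_len=10):
    """Return sentences that are long or dense with long words."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    flagged = []
    for s in sentences:
        words = s.split()
        long_words = [w for w in words if len(w) >= long_word_len]
        if len(words) > max_words or len(long_words) / len(words) > 0.2:
            flagged.append(s)
    return flagged

hard = flag_hard_sentences(
    "The cat sat. "
    "The multidimensional heteroskedasticity considerations notwithstanding remain."
)
```

A real system would go further and generate the simpler wording, but even a flagging pass like this gives authors somewhere to start.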
In our discussion at the Alan Turing Institute last week we mentioned how hard it is to define inclusion as a concept, so it was interesting to read this article by Lorna Gonzalez and Kristi O’Neil, published on Friday, November 15, 2019 in the Transforming Higher Ed column. They begin by saying:
“Attempting to define nuanced concepts brings with it a risk of reductionism, which is why the definitions that follow draw from cognitive science, universal design, and disability studies…
Disclosure or Self-Identity
Have you heard of the WYSIATI principle? Coined by Nobel Prize laureate Daniel Kahneman and pronounced “whiz-ee-yaht-tee,” this acronym stands for What You See Is All There Is. It’s theorized as a common, unconscious bias that even well-intentioned educators make: “I can’t see it, so it doesn’t exist.” It’s the idea that our minds are prone to making judgements and forming impressions based on the information available to us. In teaching and learning, the WYSIATI principle is an idea with consequences. That is, the students in our classes have all kinds of invisible circumstances that can impact their learning. Some of these circumstances include the following:
Attention or comprehension problems because of an emotional hardship or learning disability
Using a single device, like a phone or a tablet, for doing all of their digital coursework
Familial or other work obligations outside of school
A long commute to and from campus
Homelessness, or food or housing insecurities
Low vision, but not blindness; difficulty looking at a screen for extended periods of time
Difficulty hearing, but not deafness
A lack of experience or experienced mentors in higher education or in particular disciplines
Returning to school after a period of time and feeling rusty, insecure, or experiencing imposter syndrome2
While campuses are required to provide services (e.g., alternative formats for course materials, extra exam time, etc.) for students with disabilities and may have additional support for students with various other challenges, unless students self-identify or disclose their circumstances, courses and associated materials may contain barriers to student learning—even if those barriers are inadvertent. A faculty colleague shared with us that she had favored the use of a particular color to emphasize important ideas in her documents for nearly an entire semester before a student revealed to her that he could not see that color. Had she known, she would have made a simple change so that the student could read or understand the most important parts of the course documents. This is the WYSIATI principle at work: this faculty member couldn’t see that her student was color blind, so she didn’t know that she needed to do anything about it.
Whether or not students need to self-identify or disclose their circumstances is not the point. The point is that invisible circumstances exist regardless of disclosure, and, collectively, we can all do a better job of awareness: identifying and removing barriers from courses can benefit everyone, but doing so can also be critical to those who need it.
Colloquially, the term accessibility is often used to describe items or spaces that are available for use. One can expect an accessible road to be open as an option for safe, unobstructed travel for most vehicles. Here’s another example: an email from Dropbox, an online file storage tool, sent after a user reached the free storage limit.
In this email, the term accessible refers to availability. The user will not be able to access files because they will not be available on other devices. In both of these examples, however, the term accessible is limited to able individuals—those who are able to access material in its current form. In teaching and learning, as well as in universal design, accessible means that materials and spaces are not only available but also free from invisible barriers—even unintended ones—for anyone who needs to access those materials.
For example, an accessible text is one that is clearly organized, uses an unembellished font, and incorporates headings to separate sections. Online images should contain alternative text for moments when pages don’t load properly or for readers who use assistive devices. Videos should include closed captions or transcripts for people with hearing issues or attention/tracking problems, as well as for those who multitask (watch while exercising, for example). Even certain colloquial terms and cultural references that are used without context or explanation in a lecture or course material can function as barriers to learning.
The term accessible is evolving and currently connotes disability services and accommodations. Citing a keynote address by Andrew Lessman, a distance education lawyer, Thomas Tobin and Kirsten Behling explain why accessibility as accommodation is a problematic way to think about design: “‘accommodations are supposed to be for extraordinary circumstances'” and, paraphrasing him, added the following:
[I]t should be very rare for people to need to make specific requests for circumstances to be altered just for them [. . .] all of these environments, whether physical or virtual, should be designed so that the broadest segment of the general population will be able to interact successfully with materials and people.3
This idea applies to course design and instruction just as much as it applies to physical spaces. Practicing accessible course design and instruction is an opportunity (and a necessary imperative) to develop a pedagogy of inclusion.
The previous two definitions have tried to articulate the idea that students carry intersecting invisible circumstances with them into the classroom. Whether or not students disclose their circumstances—or whether faculty members invite students to disclose them—does not determine their existence. From this perspective, inclusion means designing and teaching for variability. Faculty can practice inclusive pedagogy by following universal design principles and offering multiple options for representation, engagement, and expression:
Options are essential to learning, because no single way of presenting information, no single way of responding to information, and no single way of engaging students will work across the diversity of students that populate our classrooms. Alternatives reduce barriers to learning for students with disabilities while enhancing learning opportunities for everyone.4
In a Nutshell . . .
Inclusive pedagogy can be an act of intention—something that is initiated before and during the course design process—rather than being an act of revision or omission.
Over the last few months we have been debating what inclusion means to us all and how AI and machine learning can help.
There have been many ways in which we have seen the assistive side of the technology and how it can be used in apps that support speech technologies, image recognition and automatic captions. These assistive technologies have benefited us all, but what about the barriers that could be removed to make it even easier for people to feel included?
Areas such as digital accessibility, augmentative and alternative communication (AAC) and AI in education are just a few of the subjects we have been exploring.
If you are surfing the web using a screen reader and it fails to work because text descriptions for images or labels on forms have been omitted, you may well be unable to complete the task at hand. However, alternative text can be generated automatically using image recognition, and if the text around an image is explored in more detail, there is a chance that the accuracy of the alternative text can be improved with more contextual information. If people don’t consider accessibility when developing web sites and their content, we need many more machine-enabled accessibility checks that actually work effectively without too many false positives or negatives. Then we need automatic fixes to deal with these barriers!
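The most basic machine-enabled check of the kind described above can be written with nothing but the standard library: scan the HTML for images with missing or empty alternative text. This is only a sketch of one rule; a production checker covers many more (labels on forms, colour contrast and so on) and has to fight the false positives and negatives mentioned above.

```python
# Minimal accessibility check: report the line numbers of <img> tags
# whose alt text is missing or empty, using only the standard library.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # line numbers of <img> tags lacking alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if not alt or not alt.strip():
                self.missing.append(self.getpos()[0])

checker = AltTextChecker()
checker.feed('<p><img src="a.png" alt="A chart of results">'
             '<img src="b.png"></p>')
```

The second image would be reported; a fuller pipeline could then hand its `src` to an image-recognition service to propose an automatic fix, with the surrounding text supplying context.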
If someone with a communication difficulty who uses symbols wants to join a conversation where everyone is talking at 150-plus words per minute, it is hard to compete when managing only around 10–12 words per minute. It should be possible to speed up input with better forms of prediction and language correction when users need to choose symbols. Once again, context sensitivity could help.
Why can’t we also make symbol sets interchangeable, so that users who work with one set of symbols are not dependent on text translation to communicate with other AAC users? The ability to harmonise symbol sets with some standardisation should be possible; perhaps image recognition and better use of natural language processing could help.
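The simplest form of the context-sensitive prediction mentioned above is to rank candidate next symbols by how often they have followed the previous one in past utterances. The sketch below does exactly that with bigram counts; real AAC prediction is far richer, and the tiny corpus here is invented purely for demonstration.

```python
# Illustrative sketch: predict the next symbol for an AAC user by
# counting which symbols have followed the previous one before.
from collections import Counter, defaultdict

def build_model(utterances):
    """Count, for each symbol, which symbols follow it."""
    follows = defaultdict(Counter)
    for symbols in utterances:
        for prev, nxt in zip(symbols, symbols[1:]):
            follows[prev][nxt] += 1
    return follows

def predict(follows, prev_symbol, k=3):
    """Top-k most likely next symbols after prev_symbol."""
    return [s for s, _ in follows[prev_symbol].most_common(k)]

# Invented mini-corpus of past symbol utterances.
model = build_model([
    ["I", "want", "drink"],
    ["I", "want", "eat"],
    ["I", "want", "drink"],
])
```

Offering "drink" first after "want" saves the user a search through the symbol grid, which is where the gap between 10–12 and 150-plus words per minute starts to narrow.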
A conference with the ED-ICT network will hopefully result in discussions around the support AI could provide in education. Several AI technologies have been mentioned in international reports that could have an impact on some of our students when coping with the barriers of day-to-day life in universities. Could better use of natural language processing further improve automatic captioning for lecture capture and provide more accurate search results when looking for academic papers? Biometrics and blockchain in education could offer enhanced security for aspects of our management systems and assessments, perhaps allowing better support strategies for those who benefit from remote access. I feel these aspects of inclusive education need more research to support disabled students.
Microsoft held an evening event in London on March 28th, where the invitation encouraged us to believe that:
“With the advancement in conversational intelligence, Deep learning, and Reinforcement learning, Artificial Intelligence has the potential to revolutionize the way we live and interact with our surroundings. AI for accessibility is taking leap[s] into the realm of opportunities and changing people[s’] lives for better.”
It proved to be an interesting evening where Microsoft demonstrated how their Office products embed AI and accessibility within the process of developing documents. They offer automatic image labelling, accessibility checks, captioning and translations, alongside supporting apps useful in many settings. Examples include Seeing AI, a smartphone app that provides information about the world around us via the camera with speech output, and WeWALK, a smart cane that helps those with visual impairments avoid obstacles. Virtual and augmented reality, haptics and support for those with hearing impairments were also on show.
Interestingly, innovative AI ideas for those with cognitive impairments such as learning disabilities were not high on the agenda, and yet many of the innovations in this area can also help those living with dementia or recovering from a stroke, whose communication can be affected.
Professor Clayton Lewis has written a white paper for the Coleman Institute for Cognitive Disabilities on “Implications of Developments in Machine Learning for People with Cognitive Disabilities”. He discusses a roadmap containing many of the strategies we have been collecting. Examples include making text easier to understand, the use of natural language processing (NLP) for text simplification and clarification, visual assistants using image recognition to detect issues occurring in the home, chatbots to assist with problems, and ideas around brain-connected systems. As with many authors, Professor Lewis reflects on issues around ethics, security and privacy and the lack of disability-specific data and algorithms, and includes these thoughts under policy projects. But he also stresses that:
…we may expect continued progress in deep learning, as well, perhaps, as significant new ideas. Besides awaiting (and encouraging) these developments, our community should consider how more limited capabilities may be useful in the applications important to us.
Jutta Treviranus has developed “a guiding framework for inclusive design, suitable for a digitally transformed and increasingly connected context.”
The three dimensions of the framework are:
1. Recognize, respect, and design for human uniqueness and variability.
2. Use inclusive, open & transparent processes, and co-design with people who have a diversity of perspectives, including people that can’t use or have difficulty using the current designs.
3. Realize that you are designing in a complex adaptive system.
The three blogs about ‘The Three Dimensions of Inclusive Design’ were published in March, April and May 2018 and encourage us to think very seriously about how we can make everything we do in our digital world more accessible and inclusive. In her final blog Jutta says:
Including difference is how we evolve as a human society. Inclusive design is about far more than addressing disability. But disability has been called our last frontier. It is the human difference that our social structures have not yet integrated. This is paradoxical because disability is a potential state we can all find ourselves in. If we reject and exclude individuals who experience disabilities, we reject and exclude our future selves and our loved ones.
Trying a “lawnmower of justice” for AI – leveling the playing field – restricting the repeats of any data element so the norm doesn’t overwhelm the edges. Takes longer to learn but handles the unexpected, detects weak signals & transfers to new contexts better #inclusion #AI