Over the last few months we have been debating what inclusion means to us all and how AI and machine learning can help.
There have been many ways in which we have seen the assistive side of the technology and how it can be used with apps that support speech technologies, image recognition and automatic captions. These assistive technologies have benefited us all, but what about the barriers that could be removed to make it even easier for people to feel included?
Areas such as digital accessibility, augmentative and alternative communication (AAC) and AI in education are just a few of the subjects we have been exploring.
If you are surfing the web using a screen reader and it fails to work because text descriptions for images or labels on forms have been omitted, you may well be unable to complete the task in hand. However, alternative text can be generated automatically using image recognition, and if the text around that image is explored in more detail, there is a chance that the accuracy of the alternative text can be improved with more contextual information. Because people don't always consider accessibility when developing web sites and their content, we need many more machine-enabled accessibility checks that work effectively without too many false positives or negatives. Then we need automatic fixes to remove these barriers!
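To make the idea of a machine-enabled accessibility check concrete, here is a minimal sketch that flags two of the barriers mentioned above: images without alternative text and form inputs without an associated label. It uses only Python's standard library, and a real checker would of course cover many more WCAG criteria.

```python
# A minimal sketch of an automated accessibility check, assuming we only
# look for two barriers: images missing alt text and unlabelled inputs.
from html.parser import HTMLParser

class AccessibilityChecker(HTMLParser):
    """Collects simple accessibility issues from an HTML document."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self.label_targets = set()   # ids referenced by <label for="...">
        self.inputs = []             # id attribute (or None) per <input>

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.issues.append("img missing alt text")
        elif tag == "label" and attrs.get("for"):
            self.label_targets.add(attrs["for"])
        elif tag == "input":
            self.inputs.append(attrs.get("id"))

    def report(self):
        # An input with no id, or with an id no label points at, is unlabelled.
        for input_id in self.inputs:
            if input_id is None or input_id not in self.label_targets:
                self.issues.append("input missing label")
        return self.issues

checker = AccessibilityChecker()
checker.feed('<img src="cat.jpg"><label for="name">Name</label>'
             '<input id="name"><input type="text">')
print(checker.report())  # → ['img missing alt text', 'input missing label']
```

The harder part, as the paragraph above suggests, is the automatic fix: deciding *what* the alt text should say needs image recognition plus the surrounding context, not just detection that it is missing.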
If someone with a communication difficulty who uses symbols wants to join a conversation where everyone is talking at a rate of 150-plus words per minute, it is hard to compete while managing only around 10–12 words per minute. It should be possible to speed up input with better forms of prediction and language correction when users need to choose symbols. Once again, context sensitivity could help.
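One simple form of the prediction mentioned above can be sketched with a bigram model: after each selection, the system offers the symbols most likely to follow, so the user picks from a short list instead of scanning the whole vocabulary. The tiny corpus below is invented for illustration, and it assumes symbols map one-to-one onto words; a real AAC predictor would use a far richer language model and the conversational context.

```python
# A sketch of next-symbol prediction for an AAC user: bigram counts over
# past utterances rank the most likely next symbols. Corpus is invented.
from collections import Counter, defaultdict

class SymbolPredictor:
    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def train(self, utterances):
        for utterance in utterances:
            symbols = utterance.split()
            for prev, nxt in zip(symbols, symbols[1:]):
                self.bigrams[prev][nxt] += 1

    def predict(self, prev_symbol, k=3):
        """Return up to k most likely next symbols after prev_symbol."""
        return [s for s, _ in self.bigrams[prev_symbol].most_common(k)]

predictor = SymbolPredictor()
predictor.train([
    "I want drink",
    "I want drink juice",
    "I want eat",
])
print(predictor.predict("want"))  # → ['drink', 'eat']
```

Even a short ranked list like this cuts the number of selections per word, which is exactly where the 10–12 words-per-minute bottleneck lies.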
Why can't we also make symbol sets interchangeable, so that users who work with one set of symbols are not dependent on text translation to work with other AAC users? The ability to harmonise symbol sets with some standardisation should be possible; perhaps image recognition and better use of natural language processing could help.
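The harmonisation idea can be sketched very simply: if each symbol set is mapped onto a shared concept vocabulary, a symbol from one set can be swapped for its counterpart in another without a round trip through full text translation. The set names, ids and concept labels below are all invented for illustration; building the shared vocabulary itself is where image recognition and NLP would earn their keep.

```python
# A sketch of symbol-set harmonisation via a shared concept vocabulary.
# SET_A and SET_B are hypothetical symbol sets mapping symbol ids to
# concepts; the ids and concepts are invented for illustration.
SET_A = {"A-101": "drink", "A-102": "eat", "A-103": "happy"}
SET_B = {"B-9": "drink", "B-12": "happy"}

def harmonise(symbol_id, source, target):
    """Translate a symbol id between sets via the shared concept, or None."""
    concept = source.get(symbol_id)
    if concept is None:
        return None
    for target_id, target_concept in target.items():
        if target_concept == concept:
            return target_id
    return None

print(harmonise("A-101", SET_A, SET_B))  # → B-9
print(harmonise("A-102", SET_A, SET_B))  # → None ('eat' has no SET_B symbol)
```

The `None` case is the standardisation gap the paragraph describes: without an agreed shared vocabulary, many symbols simply have no counterpart in the other set.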
A conference with the ED-ICT network will hopefully result in discussions around the support AI could provide in education. Several AI technologies mentioned in international reports could have an impact on some of our students when coping with the barriers of day-to-day life in universities. Could better use of natural language processing further improve automatic captioning for lecture capture and provide more accurate search results when looking for academic papers? Biometrics and blockchain in education could offer enhanced security for aspects of our management systems and assessments, perhaps allowing better supporting strategies for those who benefit from remote access. I feel these aspects of inclusive education need more research to support disabled students.
Microsoft held an evening event in London on March 28th, where the invitation encouraged us to believe that:
“With the advancement in conversational intelligence, Deep learning, and Reinforcement learning, Artificial Intelligence has the potential to revolutionize the way we live and interact with our surroundings. AI for accessibility is taking leap[s] into the realm of opportunities and changing people[s’] lives for better.”
It proved to be an interesting evening where Microsoft demonstrated how their Office products embed AI and accessibility within the process of developing documents. They offer automatic image labelling, accessibility checks, captioning and translations, alongside supporting apps useful in many settings. Examples include Seeing AI, a smartphone app that provides information about the world around us via the camera with speech output, and WeWALK, a smart cane that helps those who have visual impairments avoid obstacles. Virtual and augmented reality, haptics and support for those with hearing impairments were also on show.
Interestingly, innovative AI ideas for those with cognitive impairments such as learning disabilities were not high on the agenda, and yet many of the innovations in this area can also help those with dementia or those who have had a stroke, where communication can be affected.
Professor Clayton Lewis has written a White Paper for the Coleman Institute for Cognitive Disabilities on "Implications of Developments in Machine Learning for People with Cognitive Disabilities". He discusses a roadmap containing many of the strategies we have been collecting. Examples include making text easier to understand, the use of Natural Language Processing (NLP) for text simplification and clarification, visual assistants using image recognition to detect issues occurring in the home, chatbots to assist with problems, and ideas around brain-connected systems. As with many authors, Professor Lewis reflects on issues around ethics, security and privacy and the lack of disability-specific data and algorithms, and includes these thoughts under policy projects. But he also stresses that:
…we may expect continued progress in deep learning, as well, perhaps, as significant new ideas. Besides awaiting (and encouraging) these developments, our community should consider how more limited capabilities may be useful in the applications important to us.
Jutta Treviranus has developed "a guiding framework for inclusive design, suitable for a digitally transformed and increasingly connected context."
The three dimensions of the framework are:
1. Recognize, respect, and design for human uniqueness and variability.
2. Use inclusive, open & transparent processes, and co-design with people who have a diversity of perspectives, including people who can't use or have difficulty using the current designs.
3. Realize that you are designing in a complex adaptive system.
The three blogs about ‘The Three Dimensions of Inclusive Design’ were published in March, April and May 2018 and encourage us to think very seriously about how we can make everything we do in our digital world more accessible and inclusive. In her final blog Jutta says:
Including difference is how we evolve as a human society. Inclusive design is about far more than addressing disability. But disability has been called our last frontier. It is the human difference that our social structures have not yet integrated. This is paradoxical because disability is a potential state we can all find ourselves in. If we reject and exclude individuals who experience disabilities, we reject and exclude our future selves and our loved ones.
Trying a "lawnmower of justice" for AI – leveling the playing field: restricting the repeats of any data element so the norm doesn't overwhelm the edges. Takes longer to learn but handles the unexpected, detects weak signals & transfers to new contexts better. #inclusion #AI pic.twitter.com/fhaEVdk0Nu
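The tweet's idea can be sketched in a few lines: cap the number of times any one data element may repeat before training, so majority cases cannot drown out the edge cases. The cap value and sample data below are invented for illustration; in practice the capping would apply to records in a training set, not simple strings.

```python
# A sketch of the "lawnmower of justice": keep at most `cap` copies of
# each distinct sample so the norm doesn't overwhelm the edges.
# The cap of 2 and the sample data are arbitrary, for illustration.
from collections import Counter

def mow(samples, cap=2):
    """Return samples with each distinct value limited to `cap` copies."""
    seen = Counter()
    kept = []
    for s in samples:
        if seen[s] < cap:
            seen[s] += 1
            kept.append(s)
    return kept

data = ["typical"] * 6 + ["edge-case-1", "edge-case-2"]
print(mow(data))  # → ['typical', 'typical', 'edge-case-1', 'edge-case-2']
```

As the tweet notes, a model trained on the mown data has less repetition to lean on, so it learns more slowly, but the rare cases now carry real weight instead of being averaged away.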