AI and Digital Accessibility

Microsoft held an evening event in London on March 28th, where the invitation encouraged us to believe that:

“With the advancement in conversational intelligence, Deep learning, and Reinforcement learning, Artificial Intelligence has the potential to revolutionize the way we live and interact with our surroundings. AI for accessibility is taking leap[s] into the realm of opportunities and changing people[s’] lives for better.”

London – Microsoft Data & AI Live stream on YouTube

It proved to be an interesting evening where Microsoft demonstrated how their Office products embed AI and accessibility within the process of developing documents. They offer automatic image labelling, accessibility checks, captioning and translations, alongside supporting apps useful in many settings. Examples include Seeing AI, a smartphone app that provides information about the world around us via the camera with speech output, and WeWalk, a smart cane that helps those who have visual impairments avoid obstacles. Virtual and augmented reality, haptics and support for those with hearing impairments were also on show.

electronic wheelchair user

Companies showcased their applications throughout the evening, and there was a fascinating presentation about wheelchair control via eye tracking from Professor Aldo Faisal (Imperial College).

Interestingly, innovative AI ideas for those with cognitive impairments, such as learning disabilities, were not high on the agenda, and yet many of the innovations in this area can also help those with dementia or stroke, where communication can be affected.

Professor Clayton Lewis has written a White Paper for the Coleman Institute for Cognitive Disabilities on “Implications of Developments in Machine Learning for People with Cognitive Disabilities”. He discusses a roadmap featuring many of the strategies we have been collecting. Examples include making text easier to understand, the use of Natural Language Processing (NLP) for text simplification and clarification, visual assistants that use image recognition to detect issues occurring in the home, chatbots to assist with problems, and ideas around brain-connected systems. As with many authors, Professor Lewis reflects on issues around ethics, security and privacy, and the lack of disability-specific data and algorithms, and includes these thoughts under policy projects. But he also stresses that:
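The text-simplification strategy Lewis mentions can be illustrated with a toy substitution pass. This is a minimal sketch: the word list and function name are my own illustrative choices, and real NLP simplifiers use trained language models rather than hand-built dictionaries.

```javascript
// Toy lexical simplification: replace complex words with
// simpler synonyms from a hand-built dictionary. Real NLP
// simplifiers learn these substitutions from data; this only
// illustrates the idea of swapping in easier vocabulary.
const simplerWords = {
  utilise: "use",
  commence: "start",
  approximately: "about",
  assistance: "help",
};

function simplify(text) {
  return text.replace(/[a-zA-Z]+/g, (word) => {
    const replacement = simplerWords[word.toLowerCase()];
    return replacement ?? word;
  });
}

console.log(simplify("Please commence the task and utilise approximately half the budget."));
// → "Please start the task and use about half the budget."
```

A production system would also need to handle capitalisation, inflected forms and, crucially, words whose best substitute depends on context.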

…we may expect continued progress in deep learning, as well, perhaps, as significant new ideas. Besides awaiting (and encouraging) these developments, our community should consider how more limited capabilities may be useful in the applications important to us.

The end of April 2019 brought a short holiday period, and Associate Professor Hannah Fry’s ‘Hello World: Being Human in the Age of Algorithms‘ proved to be a good option: a wonderfully instructive and interesting read that explains all things AI to the non-mathematician. Associate Professor Fry has been interviewed about the book by Demetri Kofinas (YouTube), and this video introduces some of the ideas she explains.

UNICEF and “AT in Inclusive Education for Children with Disabilities” at the UN in Geneva

two people at the exhibition stand
Questions about linking symbol sets!

On March 7/8 2019 we had a stand in the UN E building during the 40th session of the Human Rights Council. This meant we met many interesting people from around the world, thanks to an invitation from UNICEF in partnership with the Permanent Mission of the Republic of Bulgaria and other international organisations in Geneva.

Twenty-one companies were showcasing their assistive technologies and services, from alternative ebook organisations such as Bookshare and eKitabu to innovative wheelchairs, prosthetics and apps.

Livox parrot logo

Several companies were using machine learning as the backbone of some of their products, for example Livox, an augmentative and alternative communication (AAC) app designed to work on Android tablets to help children develop speech and language skills. The app uses machine learning to adapt to the child’s situation and skills: “Through artificial intelligence Livox will learn user’s routine and bring information according with the use based on time and location.”

two childlike robots
Kasper and a friend

Compusult are working with the University of Hertfordshire to develop Kasper into an assistive intelligent robot to support children who have difficulties with social interaction, such as autistic children. Much research has been undertaken to show the impact Kasper has had on children, and the most recent publications are available from the Robot House at the University.

AAC board with symbols
cBoard symbol chart

The UNICEF-sponsored eKitabu service and the cBoard app developers have been exploring the use of machine learning as a way of gathering data that can show how their technologies are used. cBoard uses open symbols and the open board format, and its developers want to see how their users’ speech and language skills develop as a result of using the app. This type of reporting and data collection is described in an article about logging data by CoughDrop.
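The kind of usage logging described here can be sketched as a small event log that records each symbol press for later analysis. The event shape and function names below are illustrative assumptions, not cBoard’s or CoughDrop’s actual format.

```javascript
// Toy usage log for an AAC board: each symbol press is
// recorded with a timestamp so later analysis can show how a
// learner's vocabulary use develops over time.
function createUsageLog() {
  const events = [];
  return {
    record(symbol, timestamp = Date.now()) {
      events.push({ symbol, timestamp });
    },
    // Count how often each symbol was used — a simple proxy
    // for which vocabulary a learner has adopted.
    symbolCounts() {
      const counts = {};
      for (const { symbol } of events) {
        counts[symbol] = (counts[symbol] || 0) + 1;
      }
      return counts;
    },
  };
}
```

In practice such logs raise exactly the privacy questions Lewis flags: communication data is highly personal, so consent and secure storage matter as much as the analysis.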

The Swiss Federal Institute of Technology, Lausanne (EPFL) was showcasing some of its research into how technologies, from eye tracking to robots, could be used in learning situations, supporting everything from better MOOC design to handwriting. VerbaVoice has also been used in education to offer online interpreting for the live visualisation of language as captions and sign language. This helps those who are deaf, and the text provided can be used for streamed language translations.

The use of the internet to access software for the designing of affordable 3D printed prosthetics was also on show with Prosfit based in Bulgaria and ProjectVive from the USA showing how it is possible to 3D print the hardware that makes any tablet a usable AAC device with mounting kits, switches and amplifiers.

Finally, Voiceitt was showing how speech recognition with non-standard speech is possible. The system enables someone with poor articulation or dysarthria to turn their speech into text. There are examples of beta testers using the technology, which has been funded by an EU Horizon 2020 programme.

Machine learning, body tracking and creativity

The Google Accessibility Blog has a collection of fascinating articles, including one by Claire Kearney-Volpe, a designer and researcher who made Creatability. This is a set of creative tools that can be used with any input device and have encouraged creation by a group of disabled users. The YouTube video about Creatability has captions, and an expanded audio-described version of the video lives at: https://youtu.be/SbrMu6BuVWU

The various experiments use different input methods, from head tracking to switch access. The site suggests that you explore ways to “make music by moving your face, draw using sight or sound, and experience music visually.”

The system uses PoseNet combined with TensorFlow.js, allowing a user to move in front of a webcam and create fun things within a browser – no downloading of programs or storage of data on other people’s servers. The code is open source and can be found on the Google Creativity Lab GitHub account.
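The mapping from tracked body position to sound can be sketched as a pure function. The keypoint shape below follows PoseNet’s output ({position: {x, y}, score}), but the pentatonic scale, threshold and function name are my own illustrative choices, not Creatability’s actual code.

```javascript
// Map a tracked keypoint's vertical position to a note in a
// pentatonic scale — the general idea behind "make music by
// moving your face": higher on screen → higher pitch.
const SCALE = ["C4", "D4", "E4", "G4", "A4", "C5", "D5", "E5"];

function keypointToNote(keypoint, videoHeight, minScore = 0.5) {
  if (keypoint.score < minScore) return null; // ignore low-confidence detections
  // y = 0 is the top of the frame, so invert it for "up = higher pitch"
  const normalized = 1 - keypoint.position.y / videoHeight;
  const clamped = Math.min(Math.max(normalized, 0), 1);
  const index = Math.min(Math.floor(clamped * SCALE.length), SCALE.length - 1);
  return SCALE[index];
}
```

In a browser, each video frame’s nose keypoint could be fed through a function like this and the returned note played via the Web Audio API, all without the frames ever leaving the user’s machine.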

There are lots of experiments that have been shared on the Creatability website with support and further resources.

The Three Dimensions of Inclusive Design

Jutta Treviranus has developed “a guiding framework for inclusive design, suitable for a digitally transformed and increasingly connected context”.

The three dimensions of the framework are:

1. Recognize, respect, and design for human uniqueness and variability.

2. Use inclusive, open & transparent processes, and co-design with people who have a diversity of perspectives, including people that can’t use or have difficulty using the current designs.

3. Realize that you are designing in a complex adaptive system.

The three blogs about ‘The Three Dimensions of Inclusive Design’ were published in March, April and May 2018 and encourage us to think very seriously about how we can make everything we do in our digital world more accessible and inclusive. In her final blog Jutta says:

Including difference is how we evolve as a human society. Inclusive design is about far more than addressing disability. But disability has been called our last frontier. It is the human difference that our social structures have not yet integrated. This is paradoxical because disability is a potential state we can all find ourselves in. If we reject and exclude individuals who experience disabilities, we reject and exclude our future selves and our loved ones.

Jutta Treviranus
Director, Inclusive Design Research Centre, OCAD University

Dr John Gilligan from the Technical University of Dublin sent me a link to one of Jutta’s tweets about AI and inclusion. We are just beginning to explore in more depth where the gaps are when thinking about AI and inclusion, and how this impacts, in both positive and negative ways, on at least 20% of the world’s population.

Will AI combined with Inclusive Design make Digital Accessibility mainstream in 2019?

Isabelle Roughol published an article on LinkedIn on December 11th, 2018 titled “50 Big Ideas for 2019: What to watch in the year ahead”, and number six on the list was “Inclusive design will go mainstream”. She wrote:

“A growing awareness among professionals and advances in artificial intelligence are transforming inclusive design, says Satya Nadella, CEO of Microsoft (LinkedIn’s parent company). “We used to call it assistive technologies and it used to be a checklist of things you did after the product was built,” he says. Now it’s “about taking this way upstream into the design process. What if we said upfront we want a design for people of different abilities to fully participate?” He points to the new Xbox adaptive controller, where even the packaging was designed to be accessible, or new AI that helps people with dyslexia read and comprehend written text.”


Microsoft’s CEO Satya Nadella on Inclusive Design, AI and Digital Accessibility. (This video is also available on YouTube)

“This notion of inclusive design and the breakthroughs in AI, the combination of these two, the juxtaposition of these two in building the next wave of products is probably going to be what we are going to see in a much more mainstream way.” – Microsoft CEO Satya Nadella

Southampton computer scientists named as Fellows of the Alan Turing Institute

Turing Fellows

ECS, University of Southampton 

Four academics from the School of Electronics and Computer Science have been named Fellows of The Alan Turing Institute as part of a new cohort from the University of Southampton.

Professors Elena Simperl, Mike Wald and Sarvapali Ramchurn, and Dr George Konstantinidis will address complex research challenges within the UK’s national institute for data science and artificial intelligence.

The researchers are among 19 leading academics at the University who will now bring specific projects to the Institute, covering topics from machine learning for space physics to AI and inclusion.

The Alan Turing Institute was founded in 2015 to undertake world-class research that is applied to real-world problems, drives economic impact and societal good, leads the training of a new generation of scientists and shapes public conversation around data.

Read the full story