Over the last few months we have been debating what inclusion means to us all and how AI and machine learning can help.
There have been many ways in which we have seen the assistive side of the technology and how it can be used with apps that support speech technologies, image recognition and automatic captions. These assistive technologies have benefited us all, but what about the barriers that could be removed to make it even easier for people to feel included?
Areas such as digital accessibility, augmentative and alternative forms of communication (AAC) and AI in education are just a few of the subjects we have been exploring.
If you are surfing the web using a screen reader and it fails to work because text descriptions for images or labels on forms have been omitted, you may well be unable to complete the task at hand. However, alternative text can be generated automatically using image recognition, and if the text around an image is explored in more detail, there is a chance that the accuracy of the alternative text can be improved with more contextual information. If people don’t consider accessibility when developing websites and their content, we need many more machine-enabled accessibility checks that actually work effectively without too many false positives or negatives. Then we need automatic fixes to remove these barriers!
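The simplest layer of such a machine-enabled check can be sketched with nothing beyond Python’s standard library: scanning a page’s HTML for images whose alternative text is missing or empty. This is only a minimal illustration of the idea, not a full accessibility checker, and the file names are invented for the example.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags with missing or empty alt attributes."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            alt = attrs.get("alt")
            if alt is None:
                self.issues.append((attrs.get("src", "?"), "missing alt attribute"))
            elif not alt.strip():
                self.issues.append((attrs.get("src", "?"), "empty alt text"))

checker = AltTextChecker()
checker.feed('<img src="logo.png"><img src="chart.png" alt="">'
             '<img src="photo.jpg" alt="A guide dog at a pedestrian crossing">')
for src, problem in checker.issues:
    print(f"{src}: {problem}")
```

A real tool would go on to suggest fixes, which is where image recognition and the surrounding context come in; the check above only finds the gaps.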
If someone with a communication difficulty who uses symbols wants to join a conversation where everyone is talking at a rate of 150-plus words per minute, it is hard to compete when managing only around 10–12 words per minute. It should be possible to speed up input with better forms of prediction and language correction when users need to choose symbols. Once again, context sensitivity could help.
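Such prediction can start very simply. As a sketch, assuming nothing beyond the standard library and a toy message history standing in for a user’s real one, a frequency count of which word tends to follow which already lets an AAC system offer likely next words as single selections:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a user's message history (invented for the example).
corpus = ("i want a drink i want to go out "
          "i want to watch tv i need help now").split()

# Build bigram counts: which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word, k=3):
    """Offer the k most likely next words, so a symbol user
    can pick one selection instead of composing word by word."""
    return [w for w, _ in following[prev_word].most_common(k)]

print(predict("want"))   # most frequent continuations of "want" in the toy corpus
```

Context sensitivity would mean conditioning these counts on more than the previous word — the conversation partner, the time of day, the topic — but the principle is the same.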
Why can’t we also make symbol sets interchangeable, so that users who work with one set of symbols are not dependent on text translation to work with other AAC users? The ability to harmonise symbol sets with some standardisation should be possible; perhaps image recognition and better use of natural language processing could help.
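One way to picture that harmonisation is a shared layer of concept identifiers between symbol sets. The sketch below is purely hypothetical — the symbol sets, file names and concept labels are invented — but it shows the shape of the idea: map each set’s symbols onto common concepts, then translate symbol-to-symbol through the concept rather than through running text.

```python
# Hypothetical symbol sets, each mapping a shared concept label to its own image.
SET_A = {"drink": "a_cup.png", "help": "a_hand.png"}
SET_B = {"drink": "b_glass.svg", "help": "b_sos.svg"}

def translate(symbol_file, source, target):
    """Find the concept behind a symbol in one set and return
    the equivalent symbol in another set (None if no match)."""
    for concept, f in source.items():
        if f == symbol_file:
            return target.get(concept)
    return None

print(translate("a_cup.png", SET_A, SET_B))  # Set B's symbol for the same concept
```

The hard part in practice is building that shared concept layer across real symbol sets — which is where image recognition and NLP could earn their keep.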
A conference with the ED-ICT network will hopefully result in discussions around the support AI could provide in education. Several AI technologies have been mentioned in international reports that could have an impact on some of our students when coping with the barriers of day-to-day life in universities. Could better use of natural language processing further improve automatic captioning for lecture capture and provide more accurate search results when looking for academic papers? Biometrics and blockchain in education could offer enhanced security for aspects of our management systems and assessments, perhaps allowing better support strategies for those who benefit from remote access. I feel these aspects of inclusive education need more research to support disabled students.
Teams working on new technologies are not diverse enough. Industry needs to ensure that its teams reflect the diversity of the general population;
Accessibility and principles of Universal Design should be part of the curricula when teaching design, computer sciences, user experience and other related subjects.
Organisations of persons with disabilities and organisations working on digital rights need to work closer together.
But as Shadi Abou-Zahra said in the latest EDF newsletter on the subject of ‘Technology is for People, Not the Other Way Round’:
“As technology continues to evolve at lightning speed, we also see the opportunities and challenges continue to multiply. A friend of mine was recently using a mobile app that uses artificial intelligence techniques to recognise objects and text in front of the camera. He was using it to orient himself in a hotel he had just checked into. The app explained the layout of the hallways and read out the door numbers on the signs so that he could find his room as a blind person traveling alone for business. This is mind blowing considering that mainstream deployment and use of artificial intelligence is only in its early stages.”
Microsoft held an evening event in London on March 28th, where the invitation encouraged us to believe that:
“With the advancement in conversational intelligence, Deep learning, and Reinforcement learning, Artificial Intelligence has the potential to revolutionize the way we live and interact with our surroundings. AI for accessibility is taking leap[s] into the realm of opportunities and changing people[s’] lives for better.”
It proved to be an interesting evening where Microsoft demonstrated how their Office products embed AI and accessibility within the process of developing documents. They offer automatic image labelling, accessibility checks, captioning and translations alongside supporting apps useful in many settings. Examples include Seeing AI, a smartphone app providing information about the world around us via the camera with speech output, and WeWalk, a smart cane that helps those who have visual impairments avoid obstacles. Virtual and augmented reality, haptics and support for those with hearing impairments were also on show.
Interestingly, innovative AI ideas for those with cognitive impairments, such as learning disabilities, were not high on the agenda, and yet many of the innovations in this area can also help those with dementia or the after-effects of stroke, where communication can be affected.
Professor Clayton Lewis has written a White Paper for the Coleman Institute for Cognitive Disabilities on “Implications of Developments in Machine Learning for People with Cognitive Disabilities”. He discusses a roadmap with many of the strategies we have been collecting. Examples include making text easier to understand, the use of Natural Language Processing (NLP) for text simplification and clarification, visual assistants using image recognition to detect issues occurring in the home, chatbots to assist with problems, and ideas around brain-connected systems. As with many authors, Professor Lewis reflects on issues around ethics, security and privacy and the lack of disability-specific data and algorithms, and includes these thoughts under policy projects. But he also stresses that:
…we may expect continued progress in deep learning, as well, perhaps, as significant new ideas. Besides awaiting (and encouraging) these developments, our community should consider how more limited capabilities may be useful in the applications important to us.
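One of the “more limited capabilities” Lewis points to, text simplification, can be approximated at its crudest as lexical substitution. The sketch below uses a tiny invented word list; real systems use NLP models and large curated resources, so this is only an illustration of the strategy, not a claim about how any particular tool works.

```python
import re

# Tiny invented substitution table mapping complex words to plainer ones.
SIMPLER = {
    "utilise": "use",
    "commence": "start",
    "approximately": "about",
    "assistance": "help",
}

def simplify(text):
    """Replace complex words with plainer synonyms, leaving other words alone."""
    def swap(match):
        word = match.group(0)
        return SIMPLER.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", swap, text)

print(simplify("Please utilise the lift and commence boarding."))
```

Even this toy version shows why context matters: a substitution table cannot tell when a “simpler” word changes the meaning, which is exactly where NLP-based approaches are needed.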
On March 7–8 2019 we had a stand in the UN E building during the 40th session of the Human Rights Council. This meant we met many interesting people from around the world, thanks to an invitation from UNICEF in partnership with the Permanent Mission of the Republic of Bulgaria and other international organisations in Geneva.
Twenty-one companies were showcasing their assistive technologies and services, from alternative ebook organisations such as Bookshare and eKitabu to innovative wheelchairs, prosthetics and apps.
The Swiss Federal Institute of Technology, Lausanne (EPFL) was showcasing some of its research into how technologies from eye tracking to robots could be used in learning situations, supporting everything from better MOOC design to handwriting. VerbaVoice has also been used in education to offer online interpreting for the live visualisation of language as captions and sign language. This helps those who are deaf and can be used for streamed language translations from the text provided.
The use of the internet to access software for designing affordable 3D-printed prosthetics was also on show, with Prosfit, based in Bulgaria, and ProjectVive, from the USA, showing how it is possible to 3D print the hardware that makes any tablet a usable AAC device, with mounting kits, switches and amplifiers.
Finally, Voiceitt was showing how speech recognition with non-standard speech is possible. The system enables someone with poor articulation or dysarthria to turn their speech into text. There are examples of beta testers using the technology, which has been funded by an EU Horizon 2020 programme.
IBM says that “Project Debater is the first AI system that can debate humans on complex topics. The goal is to help people build persuasive arguments and make well-informed decisions.” They showcased a fascinating debate on pre-school subsidies with experts and a live audience in San Francisco. Project Debater won the argument!
Project Debater has a very clear synthetic female voice and puts together enormous amounts of data about a subject and presents it in a very succinct way that makes perfect sense.
One wonders how this system could be used with those who need encouragement to have conversations and who prefer technology to facing humans in a debate, or to help with practising debating skills where social skills need encouraging and shyness needs to be overcome. The system could possibly help those with cognitive disabilities understand issues, and be trained to allow more time between statements, perhaps slowing down a little and using easy-to-comprehend language. There are lots of ideas to think about, and IBM has provided more information on how the project works.
“The code encourages technology companies to meet a gold-standard set of principles to protect patient data to the highest standards. It has been drawn up with the help of industry, academics and patient groups.
The aim is to make it easier for suppliers to develop technologies that tackle some of the biggest issues in healthcare, such as dementia, obesity and cancer. It will also help health and care providers choose safe, effective and secure technology to improve the services they provide.”
The system uses PoseNet combined with TensorFlow.js, allowing a user to move in front of a webcam and create fun things within a browser – no downloading of programs or storage of data on other people’s servers. The code is open source and can be found on the Google Creativity Lab GitHub account.
There are lots of experiments that have been shared on the Creatability website with support and further resources.
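The underlying idea — mapping tracked body keypoints to creative output — can be sketched outside the browser stack too. The Creatability experiments themselves run in JavaScript with PoseNet; the Python below is only a conceptual sketch, with a hypothetical wrist keypoint’s vertical position mapped to a note in a pentatonic scale.

```python
# Conceptual sketch: map a tracked wrist's vertical position (0.0 = top
# of frame, 1.0 = bottom) to a note, as a pose-driven instrument might.
PENTATONIC = ["C", "D", "E", "G", "A"]

def note_for_wrist(y):
    """Pick a note from the scale based on how far down the frame the wrist is."""
    y = min(max(y, 0.0), 1.0)          # clamp to the visible frame
    index = min(int(y * len(PENTATONIC)), len(PENTATONIC) - 1)
    return PENTATONIC[index]

# Simulated keypoint positions, as a PoseNet-style tracker might report them.
for y in (0.05, 0.5, 0.95):
    print(note_for_wrist(y))
```

The appeal of this design for accessibility is that any movement the camera can see — head, hand, whole body — can drive the same mapping, with no special hardware.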
Jutta Treviranus has developed “a guiding framework for inclusive design, suitable for a digitally transformed and increasingly connected context.”
The three dimensions of the framework are:
1. Recognize, respect, and design for human uniqueness and variability.
2. Use inclusive, open & transparent processes, and co-design with people who have a diversity of perspectives, including people that can’t use or have difficulty using the current designs.
3. Realize that you are designing in a complex adaptive system.
The three blogs about ‘The Three Dimensions of Inclusive Design’ were published in March, April and May 2018 and encourage us to think very seriously about how we can make everything we do in our digital world more accessible and inclusive. In her final blog Jutta says:
Including difference is how we evolve as a human society. Inclusive design is about far more than addressing disability. But disability has been called our last frontier. It is the human difference that our social structures have not yet integrated. This is paradoxical because disability is a potential state we can all find ourselves in. If we reject and exclude individuals who experience disabilities, we reject and exclude our future selves and our loved ones.
Trying a “lawnmower of justice” for AI – leveling the playing field, restricting the repeats of any data element so the norm doesn’t overwhelm the edges. Takes longer to learn but handles the unexpected, detects weak signals & transfers to new contexts better #inclusion #AI pic.twitter.com/fhaEVdk0Nu
“A growing awareness among professionals and advances in artificial intelligence are transforming inclusive design, says Satya Nadella, CEO of Microsoft (LinkedIn’s parent company). “We used to call it assistive technologies and it used to be a checklist of things you did after the product was built,” he says. Now it’s “about taking this way upstream into the design process. What if we said upfront we want a design for people of different abilities to fully participate?” He points to the new Xbox adaptive controller, where even the packaging was designed to be accessible, or new AI that helps people with dyslexia read and comprehend written text.”
“This notion of inclusive design and the breakthroughs in AI, the combination of these two, the juxtaposition of these two in building the next wave of products is probably going to be what we are going to see in a much more mainstream way” Microsoft CEO Satya Nadella
The researchers are among 19 leading academics at the University who will now bring to the Institute specific projects covering topics from machine learning for space physics to AI and inclusion.
The Alan Turing Institute was founded in 2015 to undertake world-class research that is applied to real-world problems, drives economic impact and societal good, leads the training of a new generation of scientists and shapes public conversation around data.