Challenges to the Implementation of AI and Inclusion

[Image: treasure map]

In no particular order, as part of our roadmap, we have been looking at the challenges facing aspects of inclusion for those who come under the umbrella of protected characteristics named in the UK’s Equality Act 2010.

In particular, for those with disabilities and those becoming less able due to age or debilitating illnesses, the list of challenges seems to grow despite the innovations being developed thanks to clever algorithms, increasing amounts of data and high-powered computing. This is our first attempt at publishing our ideas…

[Image: road barriers]


Understanding the role and meaning of inclusion

  • Equity versus equality

Disability is heterogeneous, not homogeneous

  • A single ‘disability’ classification is not helpful, as every disabled person can have very different needs
  • Small data for individual disabilities compared to big data for all (e.g. individuals whose data is identifiable may be removed)

Skills and Abilities rather than deficit model

  • Looking at what an individual can do rather than focussing on the disabilities/difficulties

Designing for average rather than edge cases and outliers

  • Every disabled person may have very different needs compared to peers without a disability

Assumptions of Stakeholders

  • Changing attitudes
  • Lack of understanding – AI and ethics, data collection, algorithms, transparency  
  • Expectations of experts – that they will have a magic wand
  • Eugenics issues (e.g. Autism genetic correction)

Few disabled people involved in AI (Nothing about us without us)

  • Disabled people need to be involved in AI decisions
  • More disabled people need to understand AI

Capacity Issues

  • Resources – human, financial, tools
  • Policies and Procedures
  • Lack of general ICT as well as AT/AAC technologies that are regularly used in many settings

Cohesive Approach

  • Collaboration

AT and AAC Market

  • Small Market
  • Localisation issues

Lack of Competencies

  • Knowledge building

Black-box, non-transparent deep neural network (DNN) machine learning

  • Difficult to understand the implications of AI DNNs for disabled people

Lack of interest

  • Disabled people’s inclusion is of little interest to Turing researchers and Turing research challenges and programmes (lack of knowledge due to a lack of undergraduate courses, PhD supervisors, high-impact journals, research funding etc.)

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”

A. M. Turing (1950) Computing Machinery and Intelligence. Mind 59: 433–460.

Aiming to tackle some inclusion challenges …

[Image: accessibility icons in a grid]

We found that Microsoft Azure AI for Accessibility grants were not available in August, so we hope our previous bid will be moved into the November group. We have applied for a grant under the title “AI for AAC Symbol Equality, Diversity and Inclusion”. The aim is to develop an online tool that automatically generates personalised pictographic symbol sets for Augmentative and Alternative Communication (AAC) users and improves image recognition for symbols using three different AI services.

This will require the development of machine learning algorithms using generative adversarial networks (GANs) to produce new and adapted symbols, and we would like to garner support for the gathering of openly licensed AAC symbol data as well as make use of Microsoft’s systems.

FUTUREGYM: A gymnasium with interactive floor projection for children with special needs

Another grant bid has gone in to the Economic and Social Research Council (ESRC) as part of a UKRI-JST call on Artificial Intelligence and Society. This bid, “Assistive AI for Augmentative and Alternative Communication in Shared Activities”, involves working with the University of Tsukuba and their FutureGym interactive environment. The children who took part in the activities tended to have complex communication needs and social behaviour issues. The aim is to introduce symbols and photographs to support the gestures and body movements generally used to express enjoyment or interactions with others, as part of the journey towards aided communication where children have limited use of speech or are unable to verbalise their feelings.

During September we also completed a background paper on AI and ICT Accessibility for the International Telecommunication Union in Geneva – this was followed up by a video and an invitation to run a session on the subject at the Regional Forum for Europe on Accessible Europe: ICTs 4 ALL that will take place in St George’s Bay, St. Julian’s, Malta, from 4 to 6 December 2019.

Text analysis, simplification and text to speech using AI with more presentations at AAATE 2019

During July and August we caught up with colleagues on projects with which we have been involved during the last year, because of our work across a range of disabilities. The time culminated with a special thematic session on AI and Inclusion at the Association for the Advancement of Assistive Technology in Europe (AAATE) 2019 conference on “Global Challenges in Assistive Technology”.

[Image: student using a computer]

Dr Chaohai Ding has been working on a Knowledge Transfer Project with MicrolinkPC (a specialist company providing Assistive Technologies and disability support). The project involved the use of Natural Language Processing and Deep Learning to develop a decision support system for assessors in the workplace assessment process. This involved training an AI model on the free text extracted from many historical assessments and predicting reasonable adjustments based on the difficulties and conditions reported by those in the workplace with a range of impairments. The aim is to offer an evidence base for stakeholders involved in the assessment process for the provision of workplace reasonable adjustments, ensuring that “workers with disabilities, or physical or mental health conditions, aren’t substantially disadvantaged when doing their jobs”.
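The decision-support idea can be sketched in miniature as a text classifier that maps free-text assessment notes to adjustment categories. This is a toy, keyword-counting (naive Bayes style) illustration only, not the project's actual deep learning model, and all the example notes and categories below are invented:

```python
import math
from collections import Counter, defaultdict

# Toy training data: free-text assessment notes paired with a suggested
# adjustment category. All examples and categories here are invented for
# illustration; they are not drawn from any real assessment records.
TRAINING = [
    ("difficulty reading long documents on screen", "screen-reader software"),
    ("eye strain and blurred vision at the monitor", "screen-reader software"),
    ("back pain when sitting for long periods", "ergonomic equipment"),
    ("repetitive strain injury affecting wrists", "ergonomic equipment"),
    ("anxiety in open plan office environments", "flexible working"),
    ("fatigue and difficulty with fixed hours", "flexible working"),
]

def train(examples):
    """Count word frequencies per adjustment category."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, class_counts

def predict(text, word_counts, class_counts):
    """Return the category whose word distribution best matches the text."""
    vocab = {w for c in word_counts.values() for w in c}
    best_label, best_score = None, -math.inf
    for label, n in class_counts.items():
        # log prior + log likelihood with add-one smoothing
        score = math.log(n / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, class_counts = train(TRAINING)
print(predict("wrist pain from repetitive typing", word_counts, class_counts))
# -> "ergonomic equipment"
```

The real system reportedly trained on many historical assessments; the point of the sketch is simply that shared vocabulary between a new note and past notes drives the predicted adjustment.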

The results have yet to be published, but as Chaohai has admitted, some text-based evidence relating to disability can be hard to classify, making it difficult to see where patterns arise that support clearly defined characteristics for decision support.

Working on the W3C WCAG Cognitive and Learning Disabilities Accessibility Task Force (Coga TF) has allowed us to be in touch with John Rochford (Program Director and Faculty Member at the Eunice Kennedy Shriver Center of the University of Massachusetts Medical School). He was interviewed on AXEchat last month. (If you do not want to listen to the entire YouTube video, start 11 minutes into the conversation and you will hear about his work with AI and text simplification.) John’s aim is to provide text on the web that can be easily read by those with cognitive impairments. He has called the project ‘easytextAI’, is two years into the work, and presented at CSUN 2019 on ‘Creating Simple Web Text for People with ID to Train AI’.
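One small ingredient of text simplification is lexical substitution: swapping complex words for simpler synonyms. The sketch below is table-driven and only illustrative; projects like easytextAI use trained language models rather than a hand-made word list, and the word list here is invented:

```python
import re

# Hand-made complex-word -> simpler-synonym table (invented for illustration).
SIMPLER = {
    "utilise": "use",
    "commence": "start",
    "approximately": "about",
    "assistance": "help",
    "purchase": "buy",
}

def simplify(text):
    """Replace each known complex word with its simpler synonym,
    preserving the capitalisation of the original word."""
    def swap(match):
        word = match.group(0)
        simple = SIMPLER.get(word.lower(), word)
        return simple.capitalize() if word[0].isupper() else simple
    return re.sub(r"[A-Za-z]+", swap, text)

print(simplify("Please utilise the form to purchase approximately ten items."))
# -> "Please use the form to buy about ten items."
```

Real simplification also has to handle sentence splitting, word sense and grammar, which is where machine learning earns its keep.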

John’s work links in with Horizon 2020 EasyReading EU project that we have been involved with as a member of their International Advisory Board. This project also uses AI to provide support for disabled users of the web.

“The Easy Reading Framework is available as a browser plug-in or as a web app for mobile devices. With the help of the tools integrated into the framework, web content can be adapted to the individual needs of users in real time. The software offers (partially) automated support functions through the use of HCI techniques such as pop-ups, text-to-speech (TTS), subtitling by mouseover or eye-tracking. With the help of the tracking functions, eye movements and heart rates, it can be determined, among other things, whether the user is experiencing cognitive stress. In such cases, the Easy Reading Framework proactively offers support through the tools corresponding to the user profile.” – RehaData

Continuing the theme of text support, involvement with the A is For App EU project has resulted in more research into the use of AI in apps for reading fluency and examples can be found on the Microsoft Azure AI for Accessibility projects site.

Finally, we had a special thematic session at the AAATE 2019 Global Challenges in Assistive Technology: Research, Policy & Practice Conference in Bologna at the end of August. The full proceedings are available (PDF download), but in our session we explored the following topics:

  • AI and Inclusion: A Roadmap for Research and Development.
  • The four idols of AI for health and wellbeing
  • AI Bias in Gender Recognition of Face Images: Study on the Impact of the IBM AI Fairness 360 Toolkit
  • Machine Learning: Design by Exclusion or Exclusion by Design?
  • Accessibility and Stigma: Designing for Users with Invisible Disabilities
  • IoT-Based Observation Technology for Assessment of Motor and Cognitive Conditions in Children with Severe Multiple Disabilities
  • IoT-Based Continuous Lifestyle Monitoring: The NOAH Concept

AI and ICT Accessibility

[Image: Using Assistive Technology – with thanks to Ablenet]

Whilst exploring ideas around digital accessibility and web accessibility, we must not forget the wide range of technologies that come under the heading of Information and Communication Technologies (ICT), which includes Assistive Technologies (AT).

Many organisations think of AT as being “any information and communication technology product, device, equipment and related service used to maintain, increase, or improve the functional capabilities of individuals with specific needs or disabilities.” This definition comes from an International Telecommunication Union Model ICT Accessibility Report (2014). Functional capabilities also include executive functioning, so we must not forget how planning, organisation and memory can be supported, and how stress and anxiety can be reduced to improve mental health. By collecting data about all the issues that can arise, we can widen the scope of assistive technologies to enable them to further enhance inclusion. Think of Augmentative and Alternative Communication (AAC) devices (used by those who may not be able to speak clearly or are nonverbal) offering easy-to-reach symbol choices based on the location of a user and the type of tasks they are undertaking. An early example of this type of technology is the Livox app.


Work with all forms of media has resulted in huge strides in image recognition supporting text descriptions. Mike presented at the Media and Learning Conference in Leuven on June 5–6, covering innovations around the accessibility of video for learning.

He described how access can be enhanced by using current technologies and discussed the potential for AI to improve the availability of accessible media.

In the last few years ITU has been behind many initiatives involving Artificial Intelligence and ICT Accessibility with summits such as the AI for Good Global Summit and working on standards related to the ethical issues around AI and Inclusion.

Being aware of the impact AI and ICT were having on us all, members of the team became involved with a document produced by the European Disability Forum called ‘Plug and Pray‘. This report looked into the effect that some of the technologies being developed in the AI arena could have on individuals with disabilities.

[Image: EDF logo]
Plug and Pray – A disability perspective on artificial intelligence, automated decision-making and emerging technologies (Accessible PDF)

“Some conclusions of the report include:

  • Teams working on new technologies are not diverse enough. Industry needs to assure that their teams reflect diversity of general population; 
  • Accessibility and principles of Universal Design should be part of the curricula when teaching design, computer sciences, user experience and other related subjects.
  • Organisations of persons with disabilities and organisations working on digital rights need to work closer together.”

Exploring Blockchain and Digital Accessibility

If Blockchain is to become adopted by the masses, Accessibility is a must-have for Decentralized Applications and Blockchain Applications aiming to be game changers. 

Nathaniel Biddle – Steem Blockchain blog service
[Image: locks between linked items]

There are so many ways blockchain technologies could support those with disabilities by enabling access to services online with increased security. Known blockchain technologies could provide access to safer banking, but there are also possibilities such as secure messaging for support services and internet ID systems that could make CAPTCHAs a thing of the past. Some have been looking at voting systems: “By capturing votes as transactions through blockchain, governments and voters would have a verifiable audit trail, ensuring no votes are changed or removed and no illegitimate votes are added.” (CBInsights, June 2019). Extend the idea of secure certified documents to assessments and exams in schools, colleges and universities, and it would be possible for more students to use their own assistive technologies and computers, taking tests in a place of their choosing.
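The “verifiable audit trail” property rests on a simple mechanism: each block stores the hash of its predecessor, so altering any earlier entry invalidates every hash after it. A minimal sketch (an illustration only, not a real voting or credentialing system):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    """Append a block that records the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})

def verify(chain):
    """Check every block still points at the true hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, {"vote": "candidate A"})
add_block(chain, {"vote": "candidate B"})
print(verify(chain))                      # True: untampered
chain[0]["data"]["vote"] = "candidate B"  # tamper with the first record
print(verify(chain))                      # False: the chain detects the change
```

Real blockchains add distributed consensus and cryptographic signatures on top, but this hash chaining is what makes a recorded vote or certificate tamper-evident.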

Medical records and personal details would be easier to share across countries and, perhaps closer to home, even across county or state boundaries! Travelling through life with a secure, personalised health, educational and employment passport or portfolio could become a reality. This would help to prevent the need to repeatedly communicate the same information to a myriad of gatekeepers servicing the wide range of facilities and resources available in most settings.

Let’s make AI Inclusive!

[Image: inclusion – group around two wheelchairs]

It is time alternative formats for certified documents were ensured, and blockchain technologies could offer the potential to avoid the need for locked, inaccessible formats. Digital accessibility standards have existed for at least ten years, such as the W3C Web Accessibility Initiative Web Content Accessibility Guidelines (WCAG) and, more recently, EN 301 549, which covers procurement of ICT products and services in Europe. If developers do not adhere to these standards from the very beginning of the design process, blockchain will prove to be yet another barrier to ease of use and will further prevent access for those who use assistive technologies.

AI and Inclusion versus AI and Assistance

Over the last few months we have been debating what inclusion means to us all and how AI and machine learning can help.

There have been many ways in which we have seen the assistive side of the technology and how it can be used with apps that support speech technologies, image recognition and automatic captions. These assistive technologies have benefitted us all, but what about the barriers that could be removed to make it even easier for people to feel included?

[Image: equity rather than equality]

Courtesy Advancing Equity and Inclusion: A guide for municipalities, City for All Women Initiative (CAWI), Ottawa

Areas such as digital accessibility, augmentative and alternative forms of communication (AAC)  and AI in education, are just a few of the subjects we have been exploring.  

If you are surfing the web using a screen reader and it fails to work because text descriptions for images or labels on forms have been omitted, you may well be unable to complete the task in hand. However, alternative text can be generated automatically using image recognition, and if the text around that image is explored in more detail, there is a chance that the accuracy of the alternative text can be improved with more contextual information. If people don’t consider accessibility when developing websites and their content, we need many more machine-enabled accessibility checks that actually work effectively without too many false positives or negatives. Then we need automatic fixes to make sense of these barriers!
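The detection half of that pipeline is straightforward to sketch with the standard-library HTML parser: flag `img` elements whose `alt` attribute is missing. Production accessibility checkers (and the AI-generated fixes discussed above) go much further, but the core scan looks like this:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect the src of every <img> that has no alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if attrs.get("alt") is None:
                self.missing.append(attrs.get("src", "<no src>"))

checker = AltTextChecker()
checker.feed('<img src="chart.png" alt="Sales chart"><img src="logo.png">')
print(checker.missing)  # -> ['logo.png']
```

An AI-assisted fix would then run image recognition on each flagged `src` and propose alternative text, ideally weighted by the surrounding page text for context.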

If someone with a communication difficulty who uses symbols wants to join a conversation where everyone is talking at a rate of 150-plus words per minute, it is hard to compete when only managing around 10–12 words per minute. It should be possible to speed up input with better forms of prediction and language correction when users need to choose symbols. Once again, context sensitivity could help.
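At its simplest, this kind of prediction counts which symbol tends to follow which in past utterances and offers the most likely next symbols first. The utterance log below is invented and real AAC prediction uses far richer language models and user data; this is only a sketch of the idea:

```python
from collections import Counter, defaultdict

# Invented log of past symbol utterances (each list is one message).
LOG = [
    ["I", "want", "drink"],
    ["I", "want", "food"],
    ["I", "want", "drink"],
    ["drink", "water"],
]

def train(log):
    """Count, for each symbol, which symbols have followed it."""
    follows = defaultdict(Counter)
    for utterance in log:
        for a, b in zip(utterance, utterance[1:]):
            follows[a][b] += 1
    return follows

def suggest(follows, current, n=3):
    """Return the n symbols most often chosen after the current one."""
    return [s for s, _ in follows[current].most_common(n)]

follows = train(LOG)
print(suggest(follows, "want"))  # -> ['drink', 'food']
```

Context sensitivity would extend the counts with location, time of day or conversation topic, so the same user gets different top suggestions at mealtimes than at school.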

Why can’t we also make symbol sets interchangeable, so that users who work with one set of symbols are not dependent on the text translation to work with other AAC users? The ability to harmonise symbol sets with some standardisation should be possible; maybe image recognition and better use of natural language processing could help.
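One naive starting point for interchange is to bridge two symbol sets wherever their concept labels match, falling back to text when no matching symbol exists. The symbol names and sets below are invented; real harmonisation (for example, mapping both sets to a shared concept vocabulary) is considerably harder:

```python
# Two invented symbol sets, each mapping a concept label to an image file.
SET_A = {"drink": "a/cup.png", "eat": "a/fork.png", "happy": "a/smile.png"}
SET_B = {"drink": "b/glass.svg", "happy": "b/sun.svg", "sad": "b/rain.svg"}

def translate(message, target):
    """Redraw a message (a list of concept labels) using the target
    symbol set, keeping the label as a text fallback when the target
    set has no matching symbol."""
    return [target.get(label, f"[text: {label}]") for label in message]

# A message composed with SET_A labels, redrawn for a SET_B user.
print(translate(["drink", "eat", "happy"], SET_B))
# -> ['b/glass.svg', '[text: eat]', 'b/sun.svg']
```

Image recognition and NLP would come in where the labels do not match exactly, suggesting which symbols in two sets actually depict the same concept.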

A conference with the ED-ICT network will hopefully result in discussions around the support AI could provide in education. Several AI technologies have been mentioned in international reports that could have an impact on some of our students when coping with the barriers of day-to-day life in universities. Could better use of natural language processing further improve automatic captioning for lecture capture and provide more accurate search results when looking for academic papers? Biometrics and blockchain in education could offer enhanced security for aspects of our management systems and assessments, perhaps allowing better supporting strategies for those who benefit from remote access. I feel these aspects of inclusive education need more research to support disabled students.

Global Accessibility Awareness Day (GAAD) 2019

[Image: easy to read logo]

To celebrate Global Accessibility Awareness Day, the European Disability Forum made their report “Plug and Pray – A disability perspective on artificial intelligence, automated decision-making and emerging technologies (Accessible PDF)” available to all. As the team had some involvement with the report, it is exciting to see it now in print and in an Easy to Read version.

Some conclusions of the report include:

  • Teams working on new technologies are not diverse enough. Industry needs to assure that their teams reflect diversity of general population; 
  • Accessibility and principles of Universal Design should be part of the curricula when teaching design, computer sciences, user experience and other related subjects.
  • Organisations of persons with disabilities and organisations working on digital rights need to work closer together.

But as Shadi Abou-Zahra said in the latest EDF newsletter on the subject of ‘Technology is for People, Not the Other Way Round’:

“As technology continues to evolve at lightning speed, we also see the opportunities and challenges continue to multiply. A friend of mine was recently using a mobile app that uses artificial intelligence techniques to recognise objects and text in front of the camera. He was using it to orient himself in a hotel he had just checked into. The app explained the layout of the hallways and read out the door numbers on the signs so that he could find his room as a blind person traveling alone for business. This is mind blowing considering that mainstream deployment and use of artificial intelligence is only in its early stages.”

There will be more presentations and discussions around AI and Inclusion at the AAATE 2019 Conference on Global Challenges in Assistive Technology: Research, Policy & Practice, 27–30 August, Bologna (Italy).

AI and Digital Accessibility

Microsoft held an evening event in London on March 28th, where the invitation encouraged us to believe that:

“With the advancement in conversational intelligence, Deep learning, and Reinforcement learning, Artificial Intelligence has the potential to revolutionize the way we live and interact with our surroundings. AI for accessibility is taking leap[s] into the realm of opportunities and changing people[s’] lives for better.”

London – Microsoft Data & AI live stream on YouTube

It proved to be an interesting evening where Microsoft demonstrated how their Office products embed AI and accessibility within the process of developing documents. They offer automatic image labelling, accessibility checks, captioning and translations, alongside supporting apps useful in many settings. Examples include Seeing AI (a smartphone app providing information about the world around us via the camera, with speech output) and We Walk, a smart cane that helps those who have visual impairments avoid obstacles. Virtual and augmented reality, haptics and support for those with hearing impairments were also on show.

[Image: electronic wheelchair user]

Companies showcased their applications throughout the evening and there was a fascinating presentation about wheelchair control via eye tracking from Professor Aldo Faisal (Imperial College).

Interestingly, innovative AI ideas for those with cognitive impairments, such as learning disabilities, were not high on the agenda, and yet many of the innovations in this area can also help those with dementia or stroke where communication is affected.

Professor Clayton Lewis has written a white paper for the Coleman Institute for Cognitive Disabilities on “Implications of Developments in Machine Learning for People with Cognitive Disabilities”. He discusses a roadmap with many of the strategies we have been collecting. Examples include making text easier to understand, the use of Natural Language Processing (NLP) for text simplification and clarification, visual assistants using image recognition to detect issues occurring in the home, chatbots to assist with problems, and ideas around brain-connected systems. As with many authors, Professor Lewis reflects on issues around ethics, security and privacy and the lack of disability-specific data and algorithms, and includes these thoughts under policy projects. But he also stresses that:

…we may expect continued progress in deep learning, as well, perhaps, as significant new ideas. Besides awaiting (and encouraging) these developments, our community should consider how more limited capabilities may be useful in the applications important to us.

At the end of April 2019 there was a short holiday period, and as a wonderfully instructive and interesting read that explains all things AI to the non-mathematician, Associate Professor Hannah Fry’s ‘Hello World: Being Human in the Age of Algorithms‘ proved to be a good option! She has been interviewed about the book by Demetri Kofinas (YouTube), and this video introduces some of the ideas she explains.

UNICEF and “AT in Inclusive Education for Children with Disabilities” at the UN in Geneva

[Image: two people at the exhibition stand – Questions about linking symbol sets!]

On March 7–8 2019 we had a stand in the UN E building during the 40th session of the Human Rights Council. This meant we met many interesting people from around the world, thanks to an invitation from UNICEF in partnership with the Permanent Mission of the Republic of Bulgaria and other international organisations in Geneva.

Twenty-one companies were showcasing their assistive technologies and services, from alternative ebook organisations such as Bookshare and eKitabu to innovative wheelchairs, prosthetics and apps.

[Image: Livox parrot logo]

There were several companies using machine learning as the backbone of some of their products, for example Livox, an augmentative and alternative communication (AAC) app designed to work on Android tablets to help children develop speech and language skills. The app uses machine learning to adapt to the child’s situation and skills. “Through artificial intelligence Livox will learn user’s routine and bring information according with the use based on time and location.”

[Image: two childlike robots – Kasper and a friend]

Compusult are working with the University of Hertfordshire to develop Kasper into an assistive intelligent robot to support children with social behaviour difficulties such as autism. Much research has been undertaken to show the impact Kasper has had on children and the most recent publications are available from the Robot House at the University.

[Image: AAC board with symbols – cBoard symbol chart]

The UNICEF-sponsored eKitabu service and the cBoard app developers have been exploring the use of machine learning as a way of gathering data that can show how their technologies have been used. cBoard uses open symbols and the open board format, and wants to see how their users’ speech and language skills develop as a result of using the app. This type of reporting and data collection can be found in an article about logging data by CoughDrop.

The Swiss Federal Institute of Technology, Lausanne (EPFL) were showcasing some of their research into how technologies from eye tracking to robots could be used in learning situations, from supporting better MOOC design to handwriting. VerbaVoice has also been used in education to offer online interpreting for the live visualisation of language as captions and sign language. This helps those who are deaf and can be used for streamed language translations from the text provided.

The use of the internet to access software for designing affordable 3D-printed prosthetics was also on show, with Prosfit, based in Bulgaria, and ProjectVive, from the USA, demonstrating how it is possible to 3D print the hardware that makes any tablet a usable AAC device, with mounting kits, switches and amplifiers.

Finally, Voiceitt was showing how speech recognition with non-standard speech is possible. The system enables someone with poor articulation or dysarthria to turn their speech into text. There are examples of beta testers using the technology, which has been funded by an EU Horizon 2020 programme.

IBM Project Debater

IBM say that “Project Debater is the first AI system that can debate humans on complex topics. The goal is to help people build persuasive arguments and make well-informed decisions.” They showcased a fascinating debate on pre-school subsidies with experts and a live audience in San Francisco. Project Debater won the argument!

Project Debater has a very clear synthetic female voice; it draws together enormous amounts of data about a subject and presents it in a very succinct way that makes perfect sense.

One wonders how this system could be used to work with those who need to be encouraged to have conversations and prefer to use technology rather than face humans in a debate, or to help with practising debating skills where social behaviour skills need encouraging and shyness needs to be overcome. The system could possibly help those with cognitive disabilities understand issues, and could be trained to allow more time between statements, perhaps to slow down a little and to use easy-to-comprehend language. There are lots of ideas to think about, and IBM have provided more information on how the project works.