
2023 Tech Summit Talk | Joe Devon

Joe Devon | Snapshots in AI & Inclusion

Talk Overview

In a captivating presentation at the Remarkable Tech Summit, Joe Devon opened our minds to the revolutionary role of AI in digital accessibility. He emphasised the invaluable perspectives people with disabilities bring to AI development, stimulating thought on sensory input and cognitive processing. Devon ignited interest in AI's future role in personalising information, ultimately enhancing all our abilities.

Full transcript available below.

Top Insights

1. AI – A Game Changer for Digital Accessibility: Joe Devon emphasised the potential of AI to revolutionise digital accessibility, urging inclusive research and development.

2. Addressing Aphantasia through AI: Shedding light on the concept of aphantasia, Devon argued that understanding and accommodating such conditions could significantly enhance AI models.

3. AI Innovations for Inclusion: Devon discussed how AI can power automated speech recognition, visual recognition and text-to-speech for increased accessibility, and even clone voices for those who need it.

4. Sensory Substitution – A Novel Approach: Introducing the concept of sensory substitution, Devon spoke about devices like the BrainPort and haptic vest that could allow blind and d/Deaf people to experience ‘vision’ and ‘hearing’ respectively.

5. Predicting an All-Inclusive Future with AI: In his conclusion, Devon predicted that AI will augment all of our abilities, transforming information in real time to suit the unique needs of each individual, thus challenging the boundaries of accessibility.

About the speaker: Meet Joe Devon
Joe Devon - headshot

Joe Devon, Co-Founder of Global Accessibility Awareness Day.

LinkedIn: Joe Devon
X: @joedevon 

Joe Devon, Head of Accessibility and AI Futurist at Formula Monks, is a technology entrepreneur and web accessibility advocate. He co-founded Global Accessibility Awareness Day (GAAD) and serves as the Chair of the GAAD Foundation, focusing on promoting digital accessibility and inclusive design. Joe explores artificial intelligence's (AI) potential to revolutionize digital accessibility, developing AI solutions to enhance online experiences for people with disabilities.

Joe Devon:
I have a little image here. Everything here is generated by Midjourney. This is just an image of someone with lots of things coming out of their mind, and the condition where people have extremely vivid mental imagery is called hyperphantasia.

And this is really an experience of a vivid mind's eye, where you can visualise things very well. Who here has a poor visual memory? I'm like that as well. So we've got Molly, and you as well. So, Molly, can you tell us: what did you imagine when I mentioned the beach?

Molly Levitt:
I mean, I live on a beach, so I had a very clear picture of what I knew but I was not imagining anything new.

Joe Devon:
OK. And I probably have aphantasia and, similar to you, really struggle to visualise anything.

And here we have an image of a man with a cloud in front of his face, because he's got nothing. The inability to generate images in your mind's eye is called aphantasia, and when you think about it, it's another form of blindness, blindness of the mind's eye.

Working in accessibility, as well as AI, has made me rethink the field of accessibility as well as the field of artificial intelligence. Because artificial intelligence is an attempt to understand sensory input, do cognitive processing, and produce generative output, just like a human being does. And when you think about disability, what is disability other than a disruption in sensory input, cognitive processing, and generative output?

So think about that for a second. Everything that we're trying to do in artificial intelligence can really be improved when we're thinking about people with disabilities. And if you've seen some of the things people are talking about lately when it comes to AI, they'll talk about things like automated speech recognition, which generates automated captions. They'll talk about visual recognition, which can generate automated alt text and, hopefully very soon, automated audio description. And then you've got text-to-speech, which can be great for people who might need to have their voices cloned so they can keep generating speech in their own voice.
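
To make the captioning example concrete, here is a minimal sketch of automated caption generation with an open-source speech recognition model. It assumes the openai-whisper Python package is installed and that talk_audio.mp3 is a local recording; the filenames are illustrative only.

```python
# A minimal sketch of automated caption generation with an open-source
# speech recognition model (assumes `pip install openai-whisper` and that
# `talk_audio.mp3` is a local recording; filenames are illustrative).
import whisper

def audio_to_srt(audio_path: str, srt_path: str) -> None:
    """Transcribe an audio file and write SRT-style captions."""
    model = whisper.load_model("base")        # small general-purpose model
    result = model.transcribe(audio_path)     # returns text plus timed segments

    def fmt(t: float) -> str:
        # SRT timestamps look like 00:01:02,500
        h, rem = divmod(t, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02d}:{int(m):02d}:{int(s):02d},{int((s % 1) * 1000):03d}"

    with open(srt_path, "w", encoding="utf-8") as f:
        for i, seg in enumerate(result["segments"], start=1):
            f.write(f"{i}\n{fmt(seg['start'])} --> {fmt(seg['end'])}\n{seg['text'].strip()}\n\n")

audio_to_srt("talk_audio.mp3", "talk_captions.srt")
```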

And it made me think: what if we start to rethink accessibility in terms of understanding the different abilities and senses that everybody has, and focus AI researchers on the field of disability, so that as they're building their technology, they are testing with people with disabilities? It's going to improve the models considerably.

Has anyone heard of anauralia? No. Anauralia is the inability to have an inner monologue. Just like we talked about the inability to visualise something in your mind's eye, some people are unable to have that inner monologue. Does anybody here not have an inner monologue? It's pretty rare, but it does happen. And what's interesting is that if you're doing artificial intelligence and you're focusing on these little differences, you're going to learn a lot about what you're building. One example: I have a friend, Dakota, a child of d/Deaf adults, and he has been able to hear since birth. However, he thinks visually, because his mother tongue is American Sign Language. And this is just one of those tiny details: when you're using artificial intelligence to try and emulate human beings, you're not going to think about how to build models that are useful for different kinds of people unless you speak to people with disabilities.

Colour perception is another one that's really interesting. Has anybody heard of RGB? Red, Green, Blue. Yes. Do you know why monitors are RGB? They're the primary colours, and it's because most people can see three primary colours, because they've got three colour cones. And I liked your answer, so I'm gonna give you a dollar. There you go. Awesome.

But did you know that women actually have a backup colour cone, and in rare cases some women express all four colour cones? They therefore have four primary colours, a condition called tetrachromacy. We can talk about disability by asking 'what does the average person have?', and if someone doesn't have an ability, or has some kind of impairment compared to the average, we'll call it a disability. But what about tetrachromacy? Women who have this, and it's only women, can see 100,000,000 colours, whereas the rest of us, who are trichromats, can see a maximum of a million colours. And interestingly enough, Retina displays can show a billion colours, and so can 8K, and nonetheless, because the technology is RGB based, it comes out flat to tetrachromats. So this is another example where you can push technology further by testing with people with disabilities.
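
For readers who want the arithmetic behind those colour counts, here is a small sketch. The display figures follow directly from bits per channel; the one million and 100 million figures for trichromatic and tetrachromatic vision are the rough estimates Devon mentions, not exact measurements.

```python
# Back-of-the-envelope arithmetic for the colour counts mentioned above:
# an RGB display can address (2 ** bits_per_channel) ** 3 distinct values.
def rgb_palette_size(bits_per_channel: int) -> int:
    return (2 ** bits_per_channel) ** 3

print(f"{rgb_palette_size(8):,}")    # 16,777,216 -> standard 8-bit-per-channel colour
print(f"{rgb_palette_size(10):,}")   # 1,073,741,824 -> the "billion colours" of 10-bit panels
# Human trichromats are commonly estimated to distinguish roughly a million
# colours, and tetrachromats around 100 million; either way, every display
# value above is still mixed from only three primaries.
```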

Some of you may figure out where I'm going with this. Here's a question: 'Do words or numbers evoke specific colours or tastes for you?' There's a dollar in it for whoever says yes, but please don't lie. Do you associate numbers or letters with colours? Oh, no. OK. Usually there's at least one or two in the audience, and this is called synesthesia. At one talk where I showed this black and white slide, one person raised their hand, like, 'Yes, I know what you're getting at here. There are fireworks coming out of this black and white slide, and colour streaks.' He described something incredible, the kind of thing I wasn't prepared for, and everybody in the audience was completely shocked. I gave him 5 bucks, not $1. It was just incredible. And what is synesthesia, really? Does anybody know?

Audience (Ted):
Yes, it’s when you see sounds? It’s when you taste colours, you see sounds. It’s when your senses overlap.

Joe Devon:
Yes, cross-functional! Here is a dollar, can somebody help get this over to Ted, please? Thank you. It is cross-sensory. You're experiencing one sense through another: even though the colour cones aren't actually being activated by colour, the experience of colour gets triggered by some other means. A good way to show this is a chart with fives and twos, all in black. People who are synesthetes associate specific letters or numbers with colours, and it could be taste as well. So here's another slide where the twos are in red and the fives are in green, and what's really interesting, too, is that because these patterns kind of come out at you, if you have synesthesia your memory tends to be much better.
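
A rough way to recreate the slide Devon describes: print a grid of fives with a few twos in plain black, then print it again with the twos highlighted, standing in for the colours a grapheme-colour synesthete might perceive. The grid size and the particular colours here are illustrative assumptions.

```python
# A rough sketch of the demo slide: a field of 5s with a few embedded 2s,
# printed once plainly and once with the 2s highlighted (ANSI colour codes
# stand in for the colours a synesthete might perceive).
import random

random.seed(7)
grid = [["5"] * 12 for _ in range(8)]
for _ in range(6):                      # scatter a handful of 2s among the 5s
    grid[random.randrange(8)][random.randrange(12)] = "2"

RED, GREEN, RESET = "\033[31m", "\033[32m", "\033[0m"

print("As most people see it:")
for row in grid:
    print(" ".join(row))

print("\nAs a grapheme-colour synesthete might experience it:")
for row in grid:
    print(" ".join(RED + c + RESET if c == "2" else GREEN + c + RESET for c in row))
```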

So, all of these little examples. There are probably hundreds of them. I've taken a 45-minute presentation and turned it into 15 minutes, so I just gave you a few of these, but there's a lot more that you can do. And there are lots of reasons why studying different kinds of people and their abilities is going to power the future of technology. But in addition to that, as you see here, there are a lot of companies working on brain-computer interfaces (BCIs). There is a gentleman over here wearing a cognition device, and it says, 'My name is Chris'. He's using his brain to control the screen and communicate. And when you think about it, only tech companies that are working with people with disabilities are going to be able to create a great brain-computer interface. Because how are you going to do it with the general population? You absolutely need to work with people with disabilities, so this is really the future. Sensory substitution. Anyone know what that is? Yes, sir, we'll get you a microphone…

Brandon Briggs:
Sensory substitution devices are software, or some kind of device, that you can use where, for example, visual elements can be converted into sound based off of, you know, some sort of algorithm. And you can do that for different types of senses: haptics, or visuals, or auditory.

We do this for the James Webb telescope. We can't see the different radio waves and the different types of light that they're getting from the telescope, so people turn them into visuals that look pretty. And that's kind of a sensory substitution experience.
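
As a toy illustration of the algorithmic idea Brandon describes, the sketch below sweeps across a small greyscale image and turns each column of pixel brightnesses into a short chord of sine tones, writing the result to a WAV file. The synthetic image, frequency range and timing are all illustrative assumptions, not any particular device's design.

```python
# A toy vision-to-sound sensory substitution sketch: sweep an image left to
# right and turn each column of pixel brightnesses into a chord of sine tones
# (bright pixels near the top map to higher, louder frequencies).
import wave
import numpy as np

SAMPLE_RATE = 22050
COLUMN_SECONDS = 0.12

# Synthetic 16x16 "image": a bright diagonal line on a dark background.
image = np.zeros((16, 16))
np.fill_diagonal(image, 1.0)

# One frequency per row; the top row gets the highest pitch.
freqs = np.linspace(1200, 200, image.shape[0])

t = np.linspace(0, COLUMN_SECONDS, int(SAMPLE_RATE * COLUMN_SECONDS), endpoint=False)
samples = []
for col in image.T:                     # left-to-right sweep, one column at a time
    chord = sum(b * np.sin(2 * np.pi * f * t) for b, f in zip(col, freqs))
    samples.append(chord)

audio = np.concatenate(samples)
audio = (audio / (np.abs(audio).max() + 1e-9) * 32767).astype(np.int16)

with wave.open("soundscape.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SAMPLE_RATE)
    w.writeframes(audio.tobytes())
```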

Joe Devon:
Yeah, very good. That's a dollar. All right, I've got your dollar. Okay, so over here we have a picture of the BrainPort, and the BrainPort uses sensory substitution. I don't know if anybody here has used it, but there's a camera mounted on glasses, and it streams digital data to what they call a lollipop. I wanted to try it, but they said, 'You're not blind, and you need FDA approval.' But if you're blind, you can actually use this device and see through your tongue. And that's where one sense substitutes for another.
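
A rough sketch of the kind of pipeline the BrainPort suggests: downsample each camera frame to a coarse grid and map brightness to stimulation intensity. The 20 by 20 grid and eight intensity levels here are assumptions for illustration, not the device's actual specification.

```python
# A rough camera-to-tactile-grid sketch: average a greyscale frame into a
# coarse grid of blocks and quantise brightness to stimulation levels.
# The grid size and level count are illustrative assumptions.
import numpy as np

def frame_to_tactile_grid(frame: np.ndarray, grid: int = 20, levels: int = 8) -> np.ndarray:
    """Average a greyscale frame into grid x grid blocks, quantised to `levels` intensities."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    blocks = frame[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    return np.round(blocks / (blocks.max() + 1e-9) * (levels - 1)).astype(int)

camera_frame = np.random.rand(240, 320)   # stand-in for one greyscale camera frame
stimulation = frame_to_tactile_grid(camera_frame)
print(stimulation.shape)                  # (20, 20) grid of intensity levels 0..7
```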

And there’s also some of you may have seen the Ted talk with David Eagleman, where there’s a haptic vest worn, and it streams audio data from the iPhone to have haptic touches on people’s backs and someone who’s d/Deaf is able to actually hear through a haptic touch on their back. So this is what’s coming. This is the future of technology. And then I’ll just do one other example.

Has anyone seen Humane? The TED Talk on Humane? Yes, sir, in the back. Tell us what you saw?

Markeith Price:
It’s like a computer body. I can’t even explain it. But basically they’re trying to make devices like non-visual. Am I correct?

Joe Devon:
More or less. They've just come out with a little bit more information, so it's a pin. They revealed it at a fashion show, and it's a pin that projects a user interface onto the palm. But it also works similarly to Alexa or Siri, where you can talk to it, and it uses AI to communicate. So it's definitely a super interesting new wearable, and here's what I think is going to be happening. Oh, wait, you can't leave without your dollar.

So we'll get you that. What's definitely coming is that we're going to see BCIs working for everybody. We're going to see haptics being an input device, and we're going to see that AI will transform information, for example, from one language to another, from one input to another. So, for example, if you're blind, you need all of your information to be verbal, so it will translate information to be verbal. Or if you're d/Deaf, it will translate it into visuals. And if you're deaf-blind, you will be able to have haptics to communicate with you. And so your real life will be generated in real time by artificial intelligence. And this is the future: AI will augment all of our abilities.

Thank you.
