Be My Eyes meets GPT-4: Is AI the next frontier in accessibility?

With AI at our fingertips, disability activists hope to demolish barriers to access.
By Chase DiBenedetto
Lucy is crouching down and hugging a black Labrador in front of a garden wall. Next to her and the dog is a large iPhone screen showing the homepage of the Be My Eyes app, which is photoshopped into the picture.
Be My Eyes users like Lucy Edwards are navigating the world with the help of an OpenAI innovation. Credit: Lucy Edwards / Be My Eyes

Nearing the 10-year anniversary of losing her eyesight, Lucy Edwards is reclaiming countless visual experiences...with the help of artificial intelligence.

As a partner with visual assistant mobile app Be My Eyes, Edwards is testing the limits of the latest accessibility revelation, the Be My Eyes Virtual Volunteer. The AI-driven tool acts as tour guide, food blogger, personal assistant — you name it — ushering in a new form of complex, human-mimicking assistance using OpenAI's hyper-realistic AI language model. With a single app, Edwards' whole world is expanding, on her own terms.

So far, she's used it to help her read fashion catalogs, translate ingredients from Chinese to English and search the web for recipes, write alt text for images within her own photo library, and read restaurant menus. Edwards has also demonstrated the potential of using the Virtual Volunteer as a personal trainer and as a guide to navigating the London Tube.

Edwards herself is a content creator and disability activist known for her "How Does a Blind Girl" series and travel vlogging lifestyle, among much, much more. Edwards' millions of followers interact with her content as she navigates a world inequitably designed for the sighted population, raises awareness about her disability, and discovers life-changing innovations. She jumped at the chance to test the new tool, as a self-proclaimed tech-savvy millennial.

"I was ready for AI before it even existed, because I knew what I was missing. The whole internet could change completely for me," Edwards told Mashable, "because most of the internet isn't accessible as a blind user."

Be My Eyes was founded in 2015 to connect users who are blind or have low vision to sighted volunteers through a simple system of real-time, video-chat assistance. The Virtual Volunteer is an expansion of that foundational service, taking the framework of visual detection software, like that behind features such as iOS 16's Door Detection, and layering on the language complexity of GPT-4. In doing so, the tool expands the information available to blind and low-vision users in ways never before seen, adding a sense of depth and immediately individualized interaction to accessibility tools.

"From feeling so lost and upset when I lost my eyesight, to now thinking I could have all this back is — I don't know, it makes me cry," Edwards said.

Riding the AI hype wave over access barriers

OpenAI's new GPT-4 plopped into the laps of users already toying with huge questions about AI's place in our world: How do we protect artistic integrity with AI tools on the market? In a world of misinformation, is it possible to tell when AI is the "mind" behind something? Are we slowly replacing the need for human skill, and, more importantly, human empathy?

Amid all these concerns — and there are quite a few — GPT-4 is rapidly making technological waves, with its new version doing so alongside the claim of social good. In addition to its partnership with Be My Eyes, OpenAI has made its tech available to other learning apps like language platform Duolingo and free education channel Khan Academy. GPT-4 was also introduced to Envision smart glasses, which let wearers hear visual descriptions of the world around them.

Mike Buckley, CEO of Be My Eyes, explained to Mashable that the new Virtual Volunteer tool was a long-anticipated, and requested, expansion of Be My Eyes, rather than a trendy redesign of the popular, million-user app. "It's not a shift, necessarily. It's an addition," he said. "This is directly responsive to the people who are blind and low-vision in our community that want something like this."

In a Be My Eyes survey polling blind and low-vision users, Buckley explained, the predominant feedback on barriers to use was that some users actually felt uncomfortable with Be My Eyes' human aspect. Most respondents said they don't use the app as often because they "don't want to take a volunteer away from someone who might need them more," while others said they were "wary about calling a stranger or a paid agent." Buckley explained that some were worried an urgent call wouldn't be picked up in time, and a significant portion of surveyed users said it was an issue of independence, not wanting to feel reliant on another volunteer.

"Up to this point we just haven't seen a technological tool that would solve these needs quickly enough and accurately enough to launch something like this," he said. But the public availability of ChatGPT, and the collaboration with GPT-4, changed that reality for the company, accelerating an addition to their services. 

A screenshot of the Be My Eyes Virtual Volunteer tool, which is prompting the user to add a picture and question.
Credit: Be My Eyes
A screenshot of the user's inputs to the Virtual Volunteer. They are uploading a photo of two striped shirts and asking the AI which is the red-striped one.
Credit: Be My Eyes

When Edwards got the call to beta test the tool along with other blind and low-vision users (who can still apply to test the service), she says she was once again brought to tears. "I am such an autonomous person. Thinking about AI… that's just me and my phone. From end to end, it's me and the tech. That is true autonomy: my phone and me in harmony with no other assistance," Edwards said. "That's basically like having my eyesight back in some ways."

She and the rest of the Virtual Volunteer testers are part of a WhatsApp group along with Be My Eyes' leaders, providing constant 1:1 feedback on the AI's successes and failures. Edwards says she reports about two to four minor issues every day, but that she's found it to be impressive overall.

"It's not perfect," Buckley said, "but it is remarkable."

What will it take for AI to gain the trust of accessibility advocates?

Some online have expressed a sense of wariness toward a completely AI-led accessibility tool like this, and much of that relates to safety and fact-checking, especially as the app advertises real-world uses in situations like transportation or work. 

Buckley assured Mashable that accuracy and safety are the number one priority for Be My Eyes' AI use. "The reason we're launching this in a small beta and taking our time is that we want to see how it's really performing in the real world. I've probably done 1,000 tests myself. Our team has done hundreds and hundreds more. I have not had a hallucination. I tried to get it to act badly, but that doesn't mean it's going to be perfect in the real world.

"What we've told the beta testers is that this doesn't replace a white cane," Buckley said. "This doesn't replace a guide dog. Be smart. Be safe. Be cautious."

Edwards herself had no hesitations about trying out a tool like the Virtual Volunteer, mainly because she's already established trust with Be My Eyes and other accessibility-forward companies. "I think because they're doing it and they're collaborating with OpenAI, I trust it more. It's a process, whereas if it was just me going on ChatGPT — like I have been doing — I don't trust that as much."

In its beta form, the new Virtual Volunteer has a built-in prompt for users to connect with a human volunteer if they feel the AI assistant isn't working, and the assistant will also let users know when it's unsure of exactly what it's viewing. A user can switch to human help at any time; the app's original function remains unchanged, working in tandem with the AI assistant.

A screenshot of the Virtual Volunteer's response to a question about striped shirts. The response reads, “Hi, there! Based on the image you have provided, the red striped shirt is the one of the right side. Is there anything else I can help you with?” Below the response are three buttons that read, “Reply”, “Call a Volunteer”, and “End Chat”.
Credit: Be My Eyes
A screenshot of the user asking the Virtual Volunteer "What does the other one look like?” The AI responds, “The other shirt in the picture appears to have blue and yellow stripes. Is there anything else I can assist you with?”
Credit: Be My Eyes

At its most basic, the Virtual Volunteer isn't unlike the current visual assistance tools on the market, from Apple's detection tools to visual detection apps like Seeing AI and Lookout. What is unique is the amount of customizable feedback one can get from the OpenAI language model. Rather than reading out only text-based information, as a screen reader would, or describing the object in a user's visual field in basic terms, the Virtual Volunteer lets users like Edwards interact with a full array of feedback. With superior image recognition and analytic capabilities, pictures and text get equal descriptions, and users can ask layered follow-up questions. The volunteer can respond to prompts on just about anything captured and uploaded with only a phone camera.

"You're going to see some spaces adopt AI more generally. I know that there might be older people, or people who have seen the inner workings of AI, that might have some hesitancy. I don't want to undermine that. But personally, I'm really excited," Edwards said. 

Beyond the technical concerns of ushering AI into this space, though, the tool raises the question of the necessity of human interaction.

Buckley says that just as many Be My Eyes users prefer human volunteers as those who prefer virtual ones, and that the Virtual Volunteer is entirely about choice. "This is about empowering our community with the choices they want to make to solve their needs and increase independence. It's about serving them. That's why we're doing this, and that's also why it's free." In a social reality that puts many people with disabilities at a physical and financial disadvantage, free accessibility tools can be life-changing.

Edwards explained that she's been using Be My Eyes in conjunction with other visual assistance apps, much like Buckley recommends other users do. Using her guide dog, Molly, and tools like Microsoft Soundscape and the paid subscription app Aira (which uses professionally trained human agents to assist blind users), Edwards has a robust navigational toolkit, one that includes both digital and human resources to utilize as she chooses.

"We know AI is powerful, but it's got to be shaped and moved and fostered in a way that this community owns, and serves their needs," Buckley said. 

Broadly, the tool is just one aspect of a larger discussion about tech innovation, accessibility, and the freedom of the internet. In the fight for a more accessible digital culture, Edwards said, AI-based tools can help more people secure access while they wait for companies and industry leaders to finally do the work themselves. 

"What I was very hopeless about is that no matter how much I campaigned and campaigned and campaigned, I was never going to get 100 percent of the websites on Google to be screen reader accessible," she explained. "Here is a possible future where that can happen now. It's just the beginning, isn't it?"


Chase sits in front of a green framed window, wearing a cheetah print shirt and looking to her right. On the window's glass pane reads "Ricas's Tostadas" in red lettering.
Chase DiBenedetto
Social Good Reporter

Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also touches on how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny.
