Since its launch in 2023, Google’s Gemini AI model - the coming together of Google Assistant and Google’s AI chatbot Bard - has continued to evolve. These updates aim to provide more user-friendly experiences across various devices and platforms, most of which are beneficial to people living with a vision impairment. Gemini refers both to the built-in AI model on Google devices and to an app, which is available to download on both Apple and Android products.
Enabling Gemini on your Google device
You can switch your built-in assistant from Google Assistant to Google Gemini in a few simple steps:
Open your settings app
Select “Apps” and then “Assistant”
Change your selection from “Google” to “Gemini”
Your device will then prompt you to follow some on-screen instructions. You can follow the same steps to change your assistant from Gemini back to Google; however, by the end of 2025, this will no longer be possible as Google Gemini becomes the new default assistant for Google devices.
Downloading the Gemini app
The Gemini app is available on Google devices and on Apple devices running iOS 10 or newer. To download the app, open Google Play or the App Store and search for ‘Google Gemini’. The app icon is a blue star on a white background. Select the app and download it. Once Google Gemini has been installed on your device, follow the in-app instructions to get set up.
Chat capabilities
Gemini's latest updates have included changes to its chat capabilities. Google Gemini is now better equipped to handle more complex queries or commands and to sustain more natural, intuitive conversations. This allows for a more seamless experience, particularly when using Google Gemini with voice commands. Previously, users had to phrase requests as full sentences to prompt a response; with the new updates, you can interrupt Gemini's responses and have a more natural back-and-forth. Additionally, Google Gemini can better understand context – so users won’t need to repeat key information about their query.
Text and image interpretation
The updates have also improved Gemini’s understanding of different image and text file inputs. For example, users can upload an image to the Gemini app, and the AI assistant will describe the image and make deductions for you – this also includes reading text, such as signage, or translating languages when prompted. Until recently, Google Gemini required users to upload an image; however, the new Live Video Mode now allows Gemini to work with your phone camera and provide real-time descriptions.
Live Video Mode
Google Gemini’s new Live Video Mode can offer real-time visual assistance through your smartphone camera. By simply pointing your phone at a scene, object, or environment and asking Gemini questions, users can receive instant spoken feedback. This hands-free, conversational approach makes the technology much easier to use. It can be helpful in everyday situations such as navigating unfamiliar places, identifying items on a shelf, or reading appliance controls. Although this isn't a direct replacement for other orientation and mobility skills, Live Video Mode can be a great tool to help users develop greater independence. To access Live Video Mode, follow these steps:
Open your Google Gemini app, ensuring you’re logged in
Press the icon with three lines and a star, located in the bottom right corner of your screen
The app will open a camera screen with four icons at the bottom. Point your device at the sign, object or environment you’d like more information about, and ask something like ‘what’s in front of me?’.
Gemini will respond in real time. You can verbally interrupt, ask further questions, or pause the response.
Accessing and creating documents
The chatbot can also complete tasks such as writing messages, creating lists, and drafting documents through either voice commands or text input. It can read documents aloud, summarise content and help with editing. This can be especially useful for people living with sight loss, as it allows for more efficient document navigation, improving independence.
Google Home integration
One of the most significant updates is the new Google Home extension, which allows integration of Gemini into Google Home devices. This enables users to interact more naturally and seamlessly with their smart home systems. For individuals with vision impairment, this means they can control various smart devices in their home, such as smart bulbs, thermostats, or Google Nest devices, using speech. The Gemini-powered assistant can understand commands, help you manage routines, and provide updates on your home devices.
Enabling the Google Home extension
To use Gemini Google Home integration, you will need to download and set up the Google Home app on your device. It’s important to note that Gemini can only be used with devices that your Google account can access.
Open the Gemini app on your device.
Navigate to your account using your profile icon in the top right corner.
A menu will then appear. Select “Extensions”.
This will display a menu of different Gemini extensions, based on your current apps. These are grouped into categories, such as Travel, Learning, Media, and Device Control. The Google Home extension is under “Device Control”. Toggle on the Google Home option.
It’s important to be aware that AI is still experimental, and some Google Gemini extensions are in ‘beta’ – still in development and testing. Google Gemini is available as either an app or a built-in assistant on Android devices running Android 10 and up, and as a downloadable app on iOS devices. Use our tech selector tool to find the best mainstream and assistive technology for your vision impairment.