Google Unveils SignGemma: AI Tool to Translate Sign Language into Text by Year-End

At Google I/O 2025, the tech giant introduced SignGemma, a powerful AI model designed to translate sign language into spoken-language text. Currently in its testing phase, the tool is available to developers and selected users, with a broader rollout expected by the end of the year.
For millions of Deaf and hard-of-hearing individuals around the world, sign language is a vital means of communication. However, barriers often arise in daily interactions with people unfamiliar with it. Google’s new AI initiative, SignGemma, aims to change that by offering real-time sign-language-to-text translation, improving accessibility and inclusion on a global scale.
Described as Google’s “most capable sign language understanding model ever,” SignGemma was unveiled by Gemma Product Manager Gus Martins during the keynote. According to Martins, the project stands apart from previous attempts thanks to its open model framework and ability to deliver real-time, accurate translations.
“We’re thrilled to announce SignGemma, our groundbreaking open model for sign language understanding, set for release later this year,” Martins said. “It’s the most capable sign language understanding model ever, and we can’t wait for developers and Deaf and hard-of-hearing communities to take this foundation and build with it.”
At present, SignGemma is most accurate when translating American Sign Language (ASL) into English. However, Google has said the model is trained to support a range of sign languages and that it plans to expand these capabilities over time.
The launch of SignGemma is part of a broader push by Google to prioritise accessibility in AI technology. At this year’s I/O conference, the company announced several updates focused on inclusivity, including enhanced AI integration in Android’s TalkBack feature. Users will now receive AI-generated descriptions of images and be able to ask follow-up questions about what’s on their screen, making the Android experience more intuitive for visually impaired users.
Additionally, Google has rolled out updates to Chrome, such as automatic Optical Character Recognition (OCR) for scanned PDFs. This makes previously inaccessible documents readable and searchable for screen reader users. On Chromebooks, a new feature called Face Control enables users to navigate their device using facial expressions and head gestures—another step forward in Google's mission to empower every user.
To ensure SignGemma is both useful and respectful, Google is adopting a collaborative development approach. The company is actively inviting developers, researchers, and members of the global Deaf and hard-of-hearing communities to test the tool and share feedback.
“We're thrilled to announce SignGemma, our groundbreaking open model for sign language understanding,” read an official post from DeepMind on X. “Your unique experiences, insights, and needs are crucial as we prepare for launch and beyond, to make SignGemma as useful and impactful as possible.”
With SignGemma, Google is not just expanding its AI capabilities—it’s building a bridge between the hearing and Deaf communities. As it nears public release, the tool stands to transform communication and redefine accessibility in the digital age.