Google is expanding its real-time caption feature, Live Captions, from Pixel phones to anyone using the Chrome browser, as first spotted by XDA Developers. Live Captions uses machine learning to generate captions on the fly for videos or audio that have none, making the web that much more accessible for anyone who’s deaf or hard of hearing.
When enabled, Live Captions automatically appear in a small, movable box at the bottom of your browser when you’re watching or listening to content where people are talking. Words appear after a slight delay, and with fast or stuttering speech, you might spot mistakes. But overall, the feature is just as impressive as it was when it first appeared on Pixel phones in 2019. Captions will even appear with the audio muted or the volume turned down, making it a way to “read” videos or podcasts without bugging others around you.
In early tests run by a few of us here at The Verge, Chrome’s Live Captions worked on YouTube videos, Twitch streams, podcast players, and even music streaming services like SoundCloud. However, the feature appears to work only in English, as is also the case on mobile.
Live Captions can be enabled in the latest version of Chrome by going to Settings, then the “Advanced” section, and then “Accessibility” — or by typing chrome://settings/accessibility into the address bar. (If you’re not seeing the feature, try manually updating and restarting your browser.) When you toggle the setting on, Chrome will quickly download some speech recognition files, and captions should appear the next time your browser plays audio where people are talking.
Live Captions were first introduced in the Android Q beta, but until today, they were exclusive to some Pixel and Samsung phones. Now that they’re in Chrome, they’ll be available to a much wider audience.