The what’s what of translation hearables
Technology is gradually breaking down language barriers. You can use translation tools to read websites and text, and can even use tech like VR to place yourself in another country entirely. So it’s a real bummer that, in person, the barrier goes back up.
Universal translation has been in the human consciousness since Star Trek popularized the idea many decades ago – even if the universal translator was really just a way to avoid inventing a host of languages and making audiences read subtitles. Cynicism aside, it was still an inspiring idea of unification for engineers of the future.
Those future engineers are now actual engineers. They lead companies that are tackling real-time translation hearables in full force. Last year we told you hearables would be a big deal, and in 2018 we expect that translation earbuds, specifically, will see a marked improvement.
But just what’s the current state of play? Well, we reveal all below.
The bridge crew
While Apple's AirPods are the best-known name in hearables – despite not being true hearables – it's been down to a handful of smaller startups to take up the mantle and carry us into our hearable future.
Bragi and its Dash are the first major example. Though Bragi didn't invent the idea of real-time translation, it has been talking about it since it hit the scene, and finally brought it to us in the Dash Pro earphones. It's also rolled out the same feature to the existing Dash in an update. Bragi's translation implementation, however, feels a little more like patchwork than something perfectly seamless.
That’s because Bragi’s solution leans heavily on iTranslate Pro on your iPhone (there’s no Android support yet). So once you purchase a subscription to iTranslate Pro and download that to your phone, you have to take out your phone and open up the app to translate directly into your ear. It’s still cool and futuristic, but nowhere near the future many people might imagine. However, if you and the other person are both wearing the Dash, you can do it without pulling out a phone, which is definitely cooler.
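That phone-in-the-middle setup follows the same basic shape across most current translation hearables: the earbud captures audio, the phone app does the heavy lifting, and the result is played back into your ear. A minimal sketch of that pipeline is below – every function name here is a hypothetical placeholder for illustration, not iTranslate's or Bragi's actual API.

```python
# Sketch of the phone-mediated translation pipeline used by
# Dash-style hearables. All functions are illustrative stubs;
# a real implementation would call cloud services at each stage.

def speech_to_text(audio: bytes, source_lang: str) -> str:
    """Placeholder: the phone app transcribes the captured audio."""
    return "hola, ¿cómo estás?"

def translate(text: str, source_lang: str, target_lang: str) -> str:
    """Placeholder: the phone app translates the transcript."""
    return "hello, how are you?"

def text_to_speech(text: str, lang: str) -> bytes:
    """Placeholder: synthesized speech is streamed back to the earbud."""
    return text.encode("utf-8")

def translate_into_ear(audio: bytes, source: str, target: str) -> bytes:
    # Earbud mic -> phone -> cloud -> phone -> earbud speaker.
    transcript = speech_to_text(audio, source)
    translated = translate(transcript, source, target)
    return text_to_speech(translated, target)
```

Note that every hop in `translate_into_ear` adds delay, which is why the experience still falls short of seamless conversation.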
Comparatively, Google's solution is slightly better in this regard, but overall its package falls short of the standard we'd like to see. While it's able to offer a nice way of interacting with Google Assistant, we found that the Google Pixel Buds – the company's first dip into hearables – disappoint due to sound leakage, an awkward design, and consistent pairing and reliability problems.
Oh, and we’d be remiss if we didn’t mention the untimely demise of Doppler Labs, a company which had primarily focused on augmented sound but promised to bring translation to its second generation ear computer.
Meanwhile, over in the UK, there’s MyManu, a startup that has a small translation hearable called Clik. Clik packs in a microprocessor and microphone that can translate up to 37 different languages for you in real-time. You use the companion app to download 9 different language packs, which can be stored directly on the headphones.
Once you choose your language, the Clik will listen for it. After a sentence, it'll start translating in real time without the need for a data connection – essentially, it uses that first sentence to determine context. And while Clik doesn't require data, an advantage over other real-time translation hearables, it does need a connection when you're listening to someone over a conference call or when you're dealing with multiple languages at the same time.
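Storing language packs on the headphones is what makes that offline mode possible: translation becomes a local lookup rather than a round trip to the cloud. Here's a deliberately tiny sketch of what an on-device pack lookup might look like – the pack format and matching logic are invented for illustration and are not MyManu's actual implementation.

```python
# Illustrative on-device translation from a downloaded language pack.
# Real systems use trained translation models, not phrase tables,
# but the "no network required" property works the same way.

LANGUAGE_PACKS = {
    # A downloaded (source, target) pack stored on the headphones.
    ("fr", "en"): {
        "bonjour": "hello",
        "merci beaucoup": "thank you very much",
    }
}

def translate_offline(sentence: str, source: str, target: str) -> str:
    pack = LANGUAGE_PACKS.get((source, target))
    if pack is None:
        raise LookupError("language pack not downloaded")
    key = sentence.lower().strip()
    # Try the whole phrase first, then fall back to word-by-word.
    if key in pack:
        return pack[key]
    return " ".join(pack.get(word, word) for word in key.split())
```

The key design point is the one the Clik trades on: once the pack is downloaded, nothing here ever touches the network.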
MyManu actually won Marriott’s TestBED accelerator program, and was recently trialled for 10 weeks in the hotel chain’s Madrid location. With a successful Kickstarter and a trial that’ll allow it to learn from thousands of users, MyManu is in a good position to make some noise in the future.
Translation tech isn't just good for traveling, though; it's also useful in the medical field. In Japan, Fujitsu – citing the growing number of non-Japanese speakers making their way into Japanese hospitals – has developed a hands-free translator it says will listen and translate unprompted. The tech isn't as general-purpose as the others on this list, as it focuses more narrowly on medical diagnosis.
Take me to the Pilot
And then there’s Waverly Labs’ Pilot, the poster child of translation hearables that raised an impressive $4,424,256 during its crowdfunding campaign in 2016. It’s been about 18 months since that campaign ended.
The way translation works on the Pilot is similar to how it works with the Bragi Dash and Pixel Buds. However, instead of holding your phone out for the person to talk to, you only need to hand them an earbud and let them talk into it. Unlike some rivals, the Pilot doesn't yet have an offline mode, though CEO Andrew Ochoa tells Wareable the company is working on it.
"Things have been moving along really well for us," Waverly Labs CEO Andrew Ochoa told us way back in March 2017. The company has about 22,000 units on pre-order and $4.5 million in pre-sales. Ochoa also told Wareable that if pre-production and beta testing went well, units would begin shipping out to backers in summer 2017.
Well, things didn’t go well enough. In April, Waverly Labs announced that its head of design engineering came back from pre-production in China wanting to improve the quality of Pilot’s audio and translation systems. The company agreed and the estimated shipping date for the device was delayed until later in the year. Currently, some backers are still waiting for their buds, though shipments did begin rolling out in December.
So it turns out that packing a tiny device with a whole bunch of technology is really hard – who knew? Ochoa told us the problems it ran into included noise-cancelling algorithms, microphone placement and antenna tuning, all of which it'll be working on during the delay.
At launch, Pilot will only support a handful of languages, but Waverly Labs is working on expanding it to more, including German, Greek, Russian, Arabic, Mandarin, and Korean.
To boldly translate
Two of the biggest problems for translation hearables right now are latency and translation accuracy. If you’ve ever watched an interview with a translator, you know it can take a long time for things to get across, and that’s why reducing latency is one of the biggest goals for Waverly Labs and its competitors.
The sooner the technology can translate what someone is saying, the more it feels like an actual conversation rather than two people struggling through one. Imagine a world where someone who only spoke Japanese could speak to someone who only spoke Punjabi with the flow of two people who spoke the same language. Less time awkwardly trying to understand each other, more time actually building a connection. This is why Google's demo of its real-time translation blew people away: it looks and feels intuitive and instant.
And then there's translation accuracy. As anyone who has used Google Translate can tell you, translations done by machines can feel pretty robotic. They lack the charm of human speech: the turns of phrase, the double meanings, the innuendo, the tone. For hearable translators to fully take us into the future, they need to be able to translate more than just words.
The futurist dream of a truly global society that can easily make its way past the language barrier has been around for decades. The demand, as evidenced by massive crowdfunding campaigns, is high. All that's left is for this handful of companies to deliver.