When Jorge Rodriguez was a medical student, he saw a patient on his surgical rotation who only spoke Mandarin. None of the doctors on the team spoke the language — all they knew was the word for pain. To check on the patient, they would push on her abdomen, ask “pain?” in Mandarin, and use her response to guide her care.
“We didn’t actually know if she understood what we were asking,” Rodriguez says.
Now a health technology equity researcher at Brigham and Women’s Hospital in Boston, Rodriguez studies the growing need for interpreters and translators in U.S. health care. Patients who speak limited English are at higher risk of a bad medical outcome than English-speaking patients because they have more trouble communicating with a medical team. Interpreters are expensive, and as U.S. demographics shift toward more speakers of languages other than English, hospitals sometimes see them as an impractical option. To address these needs, some health care professionals are starting to rely on tech.
Virtual interpreting services already offer a vast range of languages and are available almost on demand. And according to unpublished data collected by Rodriguez, some physicians now resort to using Google Translate — even though it’s not validated for health care settings.
“Access to phone and video increases the number of patients who are having their conversations in their preferred languages,” Elaine Khoong, a general internist and assistant professor of medicine based at Zuckerberg San Francisco General Hospital, told OneZero. “We certainly don’t have enough in person.” Khoong says technology can help fill gaps in interpreter access, even though what’s available isn’t perfect.
Often, hospitals can’t afford to have a full staff of interpreters. Even though federal guidelines recommend that government-funded health care organizations develop policies to provide for patients with limited English proficiency, there is no additional funding to implement these policies. One analysis found that around a quarter of hospitals with a high need for interpreters did not offer them, and a national study by a health care accreditation body showed that hospitals struggle with interpreter costs. (However, though interpreters do increase the cost of delivering care, when they’re used, the cost of follow-up visits goes down.) Demand for interpreters will likely continue to grow: In 2015, more than 25 million people in the United States reported speaking limited English, and the number of people speaking languages other than English at home continues to rise. Hospitals and health care facilities that don’t typically see patients with limited English proficiency, like those in rural areas, may soon start to treat them.
Unlike in-person interpreters, who can take a long time to arrive at a patient’s bedside, translation technology allows for immediate care. At one Australian hospital struggling with that lag, physicians worked with Dana Bradford, senior research scientist in the eHealth Research Center at Australia’s national science agency, to create a solution. “There were tons of interpreters, but they were always busy,” Bradford says. She developed an iPad app called CALD Assist that comes preloaded with medically relevant phrases — like “Do you have pain?” and “Please swallow” — in the 10 languages most commonly used at the hospital, so doctors and nurses could talk with patients before an interpreter arrived.
However, Bradford says, “it was never designed to replace interpreters.”
Remote interpreters and tech solutions can be less effective than in-person interpreters and aren’t suited to all medical situations. Over the phone, interpreters can’t see patients, which means elements of communication like facial expressions and body language get lost, Khoong says. It’s also harder for patients to feel comfortable and build a rapport with a person behind a video screen who they’ve seen only once. Both interpreters and patients have been shown to prefer in-person interactions to phone or video services, and according to one study, only around 40% of deaf patients were satisfied with video interpreters for sign language.
Even when interpreters or interpreting systems are available, however, many doctors don’t wait to use them, choosing Google Translate instead, Rodriguez says. In an unpublished study, Rodriguez surveyed nearly 180 Massachusetts physicians and found that roughly a third of them use Google Translate in their clinical practice, even though the Massachusetts Board of Registration in Medicine issued an advisory in 2016 cautioning that machine translation programs can be incorrect and that professional translators should be used to ensure accuracy for any written materials. (For example, Google translated “Your child is fitting,” meaning the child is having a seizure, from English to Swahili as “Your child is dead.”) Rodriguez has also heard of physicians using the voice feature on Google Translate to interpret conversations in “low-risk” situations, justifying it as the best they could do. “Just like how in hospitals now we take out our phones and use them for flashlights, we similarly look for solutions like that,” he says.
Khoong, who says she had “informally” used Google Translate in her medical practice for written materials, ran a study testing the accuracy of Google Translate for written medical instructions and found that it could accurately translate 92% of sentences in Spanish and 81% in Chinese. However, she says, a small percentage of the incorrect translations were inaccurate in a way that could harm the patient.
As a result of her study, Khoong now takes more care with Google Translate, using it only for written materials in Cantonese, which she speaks. She makes it a point to note on the materials that they were translated by machine, and she translates only short, simple phrases, which the study showed the service handles more accurately.
Using technology to bridge the language gap in hospitals, without introducing new chances to misunderstand a patient, requires focused attention. For now, physicians are on their own when it comes to deciding the most appropriate translation or interpretation method for a given situation, and there is little consensus. “Most people would probably say that if there’s some sort of change in diagnosis, you should use a person, or if they’re consenting to a procedure, or starting a new medication,” Rodriguez says, noting that most physicians would probably also agree that end-of-life conversations or decisions should be handled in person. “But we don’t have official guidelines.”
Interpreting technologies aren’t perfect yet, but they already play a key role in U.S. health care and will continue to do so. They’re particularly important in health care facilities that haven’t ordinarily treated non-English speakers. “Those are places where machine translation could play a bigger role,” Rodriguez says, and it’s important to have accurate tools for those doctors to use.
They’re also important for rural or underserved areas that might only see a handful of patients who need interpreters or translators, Khoong says. “It wouldn’t make sense to put the huge investment in in-person interpreters. That’s where having phone, video, or machine learning tools are going to be helpful,” she says. “Ultimately, at the end of the day what you’re hoping for is that increased communication results in better health outcomes.”