I am a third-year PhD student at the University of Michigan in the Molecular, Cellular, and Developmental Biology program. I am also a hard-of-hearing (HOH) student. HOH refers to difficulty hearing caused by many disorders that range from mild to severe hearing loss. My hearing loss is classified as a genetic, sensorineural hearing loss that is present at birth and characterized by difficulty understanding speech. When asked to describe ‘how I hear,’ I usually say that hearing sounds like dial-flipping through radio stations and piecing together snippets of words to catch a sentence. A truer description of the sensation is difficult to give; as I’m sure the reader can imagine, hearing loss, and the way sound is heard and processed, is a multifaceted experience. I have found that the National Geographic short film “What It’s Like to Read Lips” is a close replication and does an incredible job capturing the challenges of lip-reading, and I recommend it as a starting place for anyone who is interested in experiencing and understanding how hearing loss affects day-to-day activities. My hearing loss has profoundly affected my participation in science, and in the spirit of increasing diversity in the scientific community, I’d like to share my experiences, trials, and victories.
What sound does science make?
For anyone who has graduated through the ranks of undergraduate to graduate research, there is a striking difference between the learning plans of undergraduate biology courses and graduate courses. So different, in fact, that I have often heard many versions of the phrase: “the first thing I learned in graduate school was to forget everything I learned during my undergraduate classes.” While this is usually said to describe the depth, minutiae, and exceptions of biology concepts, it is also true for the teaching styles in undergraduate versus graduate classrooms. As an undergraduate biology major, my lessons focused on reading before class, paying attention during lectures, and applying the material on exams. This is drastically different from the way we learn about science beyond our bachelor’s degrees. As an undergraduate, I performed well in my classes because they offered isolated, structured learning that was always supported by a written form of communication. I would pre-read the chapters, take notes in a silent lecture hall, and demonstrate my knowledge on written exams. This structure was so accessible that I was a paid notetaker for my biology classes through the Department for Disability Access and Advising (D2A2). To me, this proved that there was no difference between me and my peers—I felt as though I had “conquered” my disability. This temporary comfort came to a crashing halt the moment I stepped foot into a scientific conference.
Scientific conferences of any size present a major hearing obstacle. At the forefront of this obstacle is the presentation of “spoken science,” e.g., posters, talks, and audience questions. All of these formats rely on an ability to rapidly absorb and process auditory information, often in noisy environments, and each format possesses its own challenges. For those who rely on lip-reading to bridge the gap in spoken communication, these challenges are compounded by a singular common factor: people. Depending on the individual, they may speak at an unusual speed, have facial hair, have an accent, speak quietly, or mumble; perhaps they face their slides or data when they speak (this is my personal arch nemesis). Or perhaps they are a perfect speaker with pristine enunciation, but they are far away, standing on a stage with an overhead light that obscures their facial features, using a bad microphone, or worse, no microphone at all. However, these situations are not devoid of visual aids! Poster sessions contain posters, and talks contain PowerPoint slides. While this is true, these visual aids communicate only certain parts of the entire message. Posters are edited for extreme brevity, which often means that written words are removed in favor of graphical abstracts and data. Similarly, the PowerPoints that support a talk are often data-forward, meaning that text is reduced to bullet points or discarded entirely. Although the reduction of text makes for more direct posters and more interesting talks, it excludes individuals who may be more reliant on reading text to reinforce understanding.
The current state of hearing assistance
One of the massive barriers to science careers that deaf and HOH people face is a lack of scientific signs in American Sign Language (ASL). For those unfamiliar with ASL, much of ASL communication is done through improvisation and combined signs; there are actually very few designated signs for words, and some words are simply spelled out. For example, if I wanted to say ‘cytoskeleton’ in a conversation, I would not be able to sign it. I would have to fingerspell, meaning that every time I wanted to say it, I would make the sign for each letter: C-Y-T-O-S-K-E-L-E-T-O-N. Obviously, in a scientific conversation where many of the words communicated would be scientific in nature and thus not represented by a sign, this results in a very long, exhausting exchange. In an effort to mitigate this gap, there are online platforms that accept submissions to coin new ASL signs based on community feedback (see ASLcore.org and ASLClear.org).
While there are resources and aids available to deaf and HOH students—from interpreters, to microphones worn by speakers that can connect to an individual’s hearing aids, to several kinds of captioning services—they are designed to cover a broad spectrum of hearing disorders, and the hearing needs of any one individual student may not be captured. There are services that employ live captioning to provide a text companion to spoken word, but my personal experience with these technologies has been both frustrating and fruitless. Captioning services on streaming platforms like Zoom are catastrophically inaccurate when trying to decipher scientific conversations (I invite you to turn your captions on during your next meeting). As a graduate student instructor this fall, I took a deep dive into the captioning services available through multiple streaming platforms and found that there is limited ability to add vocabulary words to the program glossary. With the number of scientific words that are not captured by the software and the restricted ability to expand the glossary, it would be impossible to use captioning on these platforms to cover more than one or two classes, let alone a conference.
As a first-year PhD student, I was excited to learn about CART (Communication Access Realtime Translation), a revolutionary speech-to-text service that is connected not to captioning software but to a live human being. The speaker wears a microphone that feeds to the technician, who translates the speech into text using a keyboard based on phonetic sounds instead of letters (i.e., stenography). I tried CART services for a seminar course offered by my department to see if they would improve my comprehension. However, I found that CART, too, had limitations: it was time-consuming to set up the program before class, and I had to explain to each speaker that they would be wearing two microphones so that I could ‘hear’ (and that, yes, they had to use both). Only 1 out of 15 seminar speakers submitted their slides to the CART service request the night before their talk, and there were still many words that the technician was unfamiliar with. In the end, I abandoned the service because it was too distracting, and more often than not, I had to fill in the spaces where the technician didn’t know the word either. I think that this technology has a lot of promise, but it’s not going to be useful until there are technicians who are well-versed in a broad array of scientific terminology. Interestingly, I received many positive comments about this technology from fellow students who were non-native English speakers, which indicates to me that it could help many scientists who would benefit from a visual aid to understand a topic.
Navigating sound during a pandemic: “Can you hear me now?”
As hundreds of lectures, seminars, and entire conferences shifted to an online format last spring, it seemed that online learning would be an equalizer. No longer would there be background noise or the need to “turn around” to see the slides, and captioning services exist on many of the mainstream video-conferencing platforms (although I have discussed their limitations above). However, video conferencing comes with a new set of challenges: unstable internet connections, a plethora of unfavorable audio qualities, users neglecting their cameras, and background noise. An unexpected complication was our return to laboratories following lockdown; when scientists join meetings from their lab space, all of the aforementioned challenges are compounded by the speaker wearing a mask, which muffles the sound through fabric.
The future of hearing assistance
Deaf and HOH people are among the least represented groups in STEM careers; I myself have struggled to find ways to make my career more accessible to me. Old stigmas still exist, such as the outdated belief that it would be a safety hazard to employ deaf and HOH persons, and technology is balanced on the edge of a confusing juxtaposition between uselessness and innovation. I dream about attending talks accompanied by live subtitles, generated by technologies that are familiar with the scientific words being used. I truly think that CART technology is on the brink of something great, but it will never reach its full potential until it is staffed by stenographers who are also scientists. I haven’t seen any sign language interpreters at the conferences I have attended so far, but I think this is another area that deserves attention. If the reader is looking to reach a wider hearing and non-hearing audience, expand the written word that accompanies your talk or lecture:
- Provide transcripts (if you have practiced your talk and have a good idea of what you’ll be saying).
- Add comprehensive lists of keywords to your slides (include acronyms, abbreviations, and jargon).
- Add visual cues to your slides.
- Explore pre-recorded talk transcripts (many applications generate transcripts, which can then be edited for accuracy and uploaded alongside the talk).
I hope that one of the takeaways from the pandemic is the continued recording of talks, even those that happen in person, so that transcripts can be provided and a wider audience reached. Despite the challenges and frustrations of being a HOH student and scientist, I have found that most people are happy to make meaningful changes to accommodate me. My lab-mates are very conscientious about facing me when they speak and using clear enunciation. In mask-wearing times, this also means being patient while repeating this process several times in a row, with increasing volume. I am very grateful to be surrounded by people who care about me and my inclusion. I am hopeful that the future of science is more diverse and accessible, and I believe that those changes come from education, inclusion, and acceptance.
For more information on the needs and challenges of deaf and HOH scientists, please see the following articles:
https://www.sciencemag.org/careers/2007/03/deaf-needs-hearing-impaired-scientists
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6234809/pdf/cbe-17-es10.pdf
About the Author:
Zie Craig (they/them/theirs) is a PhD student in the Molecular, Cellular, and Developmental Biology Department at the University of Michigan. They work in Ann Miller’s Lab, investigating Anillin-regulated contractility during cell-shape changes. Outside of the lab, they enjoy playing with their two cats, playing Dungeons & Dragons, horseback riding, and baking.