An app that assists novice ASL interpreters with imagery and text for unfamiliar terms during simultaneous interpretation.

20 Weeks
Winnie Chen
Kevin Oh
User Research
Usability Testing
UX & UI Design


Interpreters are a vital part of accessibility and inclusion for deaf and hard-of-hearing (DHH) individuals. However, interpretation is a highly stressful job requiring a lot of experience, knowledge, and energy. Through research, we found that it is especially difficult for novice interpreters, who often start out interpreting unfamiliar topics with terms they might not know.

Therefore, we wondered:
How might we assist novice interpreters to better prepare for their interpretation session?


SignSavvy, an app that assists interpreters by providing them with live imagery and text when unfamiliar terms are used in simultaneous interpretation sessions. It also helps them schedule and keep track of their sessions, and learn vocabulary that might arise in future sessions.


Real-Time Assistance

When an unfamiliar term comes up during a simultaneous interpretation session, SignSavvy explains it with synonyms or images, helping interpreters understand the word as quickly as possible. Since some words are better explained with synonyms while others are clearer as images, users can select Auto Mode to let the AI choose which to display.


Individual View

Once users start a session, the AI analyzes the conversation, picks out vocabulary that interpreters might not know, and displays it in the user's preferred mode.
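SignSavvy is still a concept, but the core flagging step can be illustrated with a minimal sketch: compare each word in the live transcript against everyday words and the vocabulary the user has marked as known, and surface the rest. All names and word lists below are hypothetical stand-ins for the AI's actual prediction.

```python
# Illustrative sketch only: flag terms in a live transcript segment that the
# interpreter may not know. The word sets are hypothetical placeholders for
# the user's session setup and saved vocabulary.

KNOWN_VOCAB = {"breadboard", "circuit", "sensor"}       # words the user marked as known
COMMON_WORDS = {"the", "is", "a", "and", "to", "with"}  # everyday words, never flagged

def flag_unfamiliar_terms(transcript_segment: str) -> list:
    """Return candidate terms to display, in order of appearance."""
    flagged = []
    for word in transcript_segment.lower().split():
        token = word.strip(".,!?")  # drop surrounding punctuation
        if token not in COMMON_WORDS and token not in KNOWN_VOCAB and token not in flagged:
            flagged.append(token)
    return flagged

print(flag_unfamiliar_terms("Connect the sensor to the microcontroller"))
# ['connect', 'microcontroller']
```

In the real concept this filtering would be done by the AI using the user's subject field and stated familiarity, not a fixed word list.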


Team Interpreting

When interpreters work in teams, the off-interpreter can switch to the list view. This makes it easier to verify accuracy and look up vocabulary for the on-interpreter when needed.


Share with the Deaf Individual

Not only the interpreter but also the deaf individual can benefit from seeing the information. Interpreters can share their screen by tapping the screen-mirroring icon.


Provide Feedback

After each session, users can mark which information was helpful or unhelpful. The AI learns from this input and improves its suggestions over time. Users can also add words to their vocab list to study later.
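The feedback loop described above can be sketched as a simple scoring scheme. This is a hypothetical illustration, not the product's actual algorithm: assume each topic keeps a running score per display mode, and Auto Mode picks the higher-scoring one.

```python
# Illustrative sketch only (all names are hypothetical): after each session the
# user marks suggestions helpful or unhelpful, and Auto Mode uses per-mode
# scores to choose what to display next time.

from collections import defaultdict

# topic -> {display mode -> score}
scores = defaultdict(lambda: {"image": 0, "synonym": 0})

def record_feedback(topic, mode, helpful):
    """Nudge a display mode's score for a topic up or down based on feedback."""
    scores[topic][mode] += 1 if helpful else -1

def auto_mode(topic):
    """Pick the display mode with the higher score for this topic."""
    topic_scores = scores[topic]
    return max(topic_scores, key=topic_scores.get)

record_feedback("physical computing", "image", True)
record_feedback("physical computing", "synonym", False)
print(auto_mode("physical computing"))  # image
```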


Add Sessions & Set Up Assistance

When interpreters receive a new job, they can add an upcoming session by entering general information. To set up vocabulary assistance, users can add the subject field and preferred display mode. They can also enter a few terms they already know to help the AI better predict what to show.


Practice Beforehand

Based on the user's input, the AI generates a list of vocabulary that might come up in the session. Users can remove words they already know, which helps the AI gauge their vocabulary level and better predict unfamiliar words during the session.


Review Vocabulary

Users' saved words can be sorted by topic, date, or alphabetical order under the Vocabulary tab. Users can review and learn words in their free time to better prepare for upcoming sessions.
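As a minimal sketch of the sorting behavior (field names and sample data are hypothetical), the three sort orders map naturally onto sort keys:

```python
# Illustrative sketch only: sort a user's saved words by topic, date, or
# alphabetical order, as in the Vocabulary tab. Data shown is made up.

from datetime import date

saved_words = [
    {"word": "solder", "topic": "physical computing", "saved_on": date(2021, 5, 3)},
    {"word": "axon",   "topic": "biology",            "saved_on": date(2021, 4, 28)},
    {"word": "anode",  "topic": "physical computing", "saved_on": date(2021, 5, 1)},
]

def sort_vocab(words, by="alphabetical"):
    """Return the saved words sorted by the chosen criterion."""
    keys = {
        "alphabetical": lambda w: w["word"],
        "topic": lambda w: (w["topic"], w["word"]),  # group by topic, then A-Z
        "date": lambda w: w["saved_on"],
    }
    return sorted(words, key=keys[by])

print([w["word"] for w in sort_vocab(saved_words)])  # ['anode', 'axon', 'solder']
```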


Create Contact

Users can create profiles for their clients and coworkers. This allows interpreters to take notes on each individual's style and preferences, so they can quickly refresh their memory before the next session with that person.


We spent the first 10 weeks conducting primary and secondary research on the interpretation service to better understand the problem area. We began by creating a detailed study guide, including our research questions, methods, and interview protocol, so that we could approach the problem strategically.

Research Questions


What challenges do interpreters face when they are delivering simultaneous interpretation?


How can technology be used to assist interpreters?


What are the factors that influence the message delivery in an interpretation process?

Research Methods

Direct Observation

We attended the UW Presidential Address, where interpreters and DHH (Deaf and Hard-of-Hearing) individuals were present, and observed their interactions for 1.5 hours. There were two interpreters, switching on and off every 20 minutes. We later learned that working in pairs is standard practice for any assignment longer than one hour. Interpreting is mentally taxing, and sign language interpreting adds a physical dimension because it is a visual language, so interpreters alternate to reduce physical and mental fatigue. In addition, the off-interpreter can monitor and help the on-interpreter when necessary.

We also found that, despite the interpreters' proficiency, external factors outside their control can make the interpretation process difficult, such as a speaker talking too fast or loud surroundings. Several times the crowd got so loud that the interpreter couldn't hear the speaker.

UW Presidential Address: Interpreter signing to the DHH

Semi-Structured Interviews

To get a more comprehensive understanding of the interpretation service, we interviewed 12 individuals in total: 9 interpreters, 1 deaf individual, and 2 research experts in the field. We wanted to understand the experiences and challenges of all stakeholders.

Interview with a DHH Linguistic Professor and Her Interpreter

Cultural Probes

When we asked participants during interviews about the challenges they faced in the interpretation process, we found it was often hard for them to recall on the spot, as they had grown used to everything. Therefore, to spark interpreters' thoughts and help them remember things they might otherwise forget to mention, we used a set of probes. We created three types of context cards and asked each participant what superpower they wished they had in each situation. Through this, we surfaced more difficulties within specific scenarios and identified some potential design opportunities.

Probes We Created for the Activity
Participant Doing Probe Activity

Affinity Mapping

To make sense of all the data we collected, we used an affinity diagram to organize and group different data points based on their similarities. Through that, we found some common themes.

Synthesizing Our Research Findings

Identify the Problem

One of the biggest challenges we found is that simultaneous interpretation is very stressful because interpreters are constantly multitasking: they must hear the speaker, analyze the information, and translate for deaf and hard-of-hearing individuals, all while communicating with their partner. In addition, factors such as not being able to hear the content or unfamiliarity with the subject can make the process even harder.

Therefore, we decided to focus on the during-interpretation stage and wondered:

How might we assist interpreters
in minimizing stress during an interpretation process?

Based on our insights, we came up with four design principles we wanted our response to adhere to:


Stress-Free: Help relieve stress and intensity during fast-paced simultaneous interpretation.


Informative: Provide information and context to interpreters ahead of time to help them better prepare for the upcoming session.


Assistive: Facilitate the interpretation process instead of replacing interpreters.


Personal: The design should consider individual users and their needs, and also uphold the human quality of those individuals involved in the interaction.


We ideated as many concepts as possible. From 20+ concepts, we narrowed down to 6 that best responded to our HMW statement and most aligned with our design principles. Eventually, we decided to go with IDEA 4, the AR Captioning & Dictionary concept, as we believed it was among the most innovative and had the most potential.


Through research, we found that it is particularly difficult when interpreters are interpreting unfamiliar topics. Therefore, we hypothesized that when an unfamiliar term occurs during simultaneous interpretation, providing interpreters with contextual information can make the process less stressful.

Concept 1 - Assisted with Imagery

Some of the questions we had for our initial prototype:
1. What form of captions is better? Paragraph or keyword?
2. How do interpreters think about the speed of captions? 
3. Which types of contextual information are more helpful? Caption, image, or ASL signing video?
4. Is it distracting for interpreters to look at the content while interpreting?

“ASL is like painting in a 3D space. You describe the shape and size, texture, the abstract info rather than using linear words.”
— Participant 3

How We Did it

We made 3 concept variations and tested them with 5 participants. Because sign language is a visual language, besides showing interpreters captions and a video of how to sign the word, we also included an imagery option so interpreters could look at a picture and sign from it.

We mimicked the simultaneous interpretation session through role-playing, using an iPad as the interface and placing it in the participant's line of sight to test each concept.

Caption Only
ASL Signing Video
Concept Testing with an ASL Interpreter Participant

What We Learned


Video clips of sign language words are not helpful, because one word can be signed multiple ways depending on the context, and there is not enough time for interpreters to watch the video and mimic it.


Images that show a term's relationships are more helpful than images of what it literally is or means.


Preference for the type of contextual information varies by person and situation.

These insights informed our later design:



AR technology that assists novice interpreters with imagery and captioning to help them deliver successful interpretations.

Highlight Terminology & Provide with Corresponding Imagery

During a simultaneous interpretation session, SignSavvy uses real-time captioning to analyze the conversation, highlights words the interpreter might not know, and provides corresponding imagery based on user input to help interpreters better understand the context.

Set Preferences

Before interpreters start the session, they can enter their intent, subject field, and familiarity with the topic area to help the artificial intelligence better predict and determine what to show.


The storyboard below demonstrates the device's application in real life.

Jane has recently passed her interpreter exam, and she is now officially a certified interpreter.

Jane receives her first interpretation job for the next morning at the University of Washington requested by a deaf student for office hours about her physical computing class.

Jane is a bit concerned because, despite her interpretation training, she has no idea what physical computing is, and she wants to do well on her first job.

The next day, Jane gets ready to leave for work. She puts on her Augmented Reality contact lenses, which will assist her with her interpretation job. She hopes they will help her interpret even an unfamiliar topic.

Jane arrives on campus and enters the building where the job is scheduled.

This is Isabelle. She is the deaf student that requested an interpreter.

Jane and Isabelle have a conversation in ASL and Isabelle tells Jane that she is stuck with one of her physical computing projects.

After Jane has a basic understanding of the context, she inputs her preferences so that the artificial intelligence can make better predictions and suggestions during interpretation.

This is Professor Harrington. She is finally ready for Isabelle to come in.

During the conversation, Jane does not know the technical language and terms being used. Luckily, she is wearing her AR lenses, which process what is being said in real time.

Jane can see captions, highlighted specialized terms that are not familiar to her, and a corresponding visual to help her understand what that highlighted word is and how it is being used in the context.

Isabelle and Professor Harrington end up having a fruitful conversation. Isabelle now understands her homework, making the interpretation job a success.

What We Learned

Initially, we chose augmented reality for its non-intrusiveness. However, even though we set up the prototype to approximate what interpreters would see through AR glasses, it was still hard for them to imagine the experience without an actual AR device. During testing, many participants expressed concern that AR technology felt far removed from their everyday lives.

Participants also mentioned that receiving information about an upcoming session ahead of time is very important for interpreters' preparation.

In addition to images of unfamiliar terms, synonyms can help interpreters grasp what a term means in context even without knowing the term itself.

What's more, some participants had trouble entering their familiarity with the topic in the preference settings, since there was no standard for determining whether they were novice, general, or expert.

Therefore, we made 4 major design decisions:


We made a rough sketch of the app's flow and, based on that, created a mid-fidelity prototype for usability testing.


We conducted in-depth usability tests with 2 interpreter participants. We made an interactive prototype in Figma and displayed it on an iPhone for participants to use. We asked participants to think aloud so that we could hear their thoughts, concerns, and frustrations while interacting with the prototype.

These 2 participants were new to our project. They were thrilled by the idea of an app designed for interpreters, since there was currently no such product on the market.

Even though the prototype was only wireframes, the feedback we received was insightful and actionable. Participants not only gave us detailed feedback on the interface but also provided fresh perspectives on the app's structure and functionality to better fit their current workflow.

Usability Testing with an Interpreter

What We Learned

Here are 3 key things we learned from the testing:


Deaf individuals can also benefit from seeing contextual information. 


Interpreters often work in pairs, so we considered how the app might look different when an interpreter is on or off.


Users want more autonomy. Instead of relying on the system and AI, give users more flexibility to edit the recommendations and their preferences.

We then turned all this great feedback into actionable steps to further refine the app's UX, UI, and other functions.



How might we assist novice interpreters to better prepare for their interpretation session?


SignSavvy, an app that provides live imagery and text when unfamiliar terms occur during simultaneous interpretation sessions. It also helps interpreters schedule and keep track of upcoming sessions, and learn new vocabulary that might arise in future sessions.


Information Confidentiality

Since everything interpreters learn from a session is confidential, we need to make sure the conversation is erased after the session, keeping only the vocabulary for learning purposes.

User Privacy

As interpreters will document some client information in SignSavvy for future reference, we need to make sure the app is secure so that clients' personal information cannot leak.

Design for Different Languages

"I want the app right now. There is currently no app designed specifically for interpreters."

Many of our participants said this to us. Right now, SignSavvy only has an English version; however, sign language interpreters work all over the world. Designing for different languages is important so that more interpreters can use SignSavvy.

Develop & Test

It would be great to collaborate with engineers and build out the product so we can test it in different contexts and environments, improving not only the interface and user experience but also the algorithm. Ultimately, we hope SignSavvy can truly serve and help interpreters, especially those just starting out.


SignSavvy is a special project for me. It gave me an opportunity to explore a whole new field, interpretation services, and to understand a new language, American Sign Language (ASL). I respect every sign language interpreter. They are an important part of accessibility and inclusion, advocating for deaf and hard-of-hearing individuals. I also fell in love with ASL. It is a beautiful language.

"ASL is like painting in a 3D space. You describe the shape and size, texture, the abstract information rather than using linear words."

- Participant 3

One of the biggest challenges of this project was recruiting participants. It was hard to start because none of us knew any interpreters. Luckily, we received great help from the UW Interpreter Coordinator and some Ph.D. researchers, who connected us with people in the field. It was also tough to find enough participants (for interviews and usability testing) since interpreters are very busy. So I learned the importance of maintaining relationships.

After the initial round of interviews, we followed up with all participants, asking if they would like to take part in usability testing. We also updated them on our design and made hand-drawn, customized postcards to thank them for their time and input. Because of this, many of our participants were willing to come back and help with our project again.

Customized Thank You Cards for Participants
