SignSavvy

An app that assists novice ASL interpreters with imagery and text during simultaneous interpretation

PROJECT TYPE
Design Research
Case Study
UX/UI
Duration
20 Weeks
Tools
Figma
Illustrator
Photoshop
Lots of sticky notes
Teammates
Cecilia Zhao
Winnie Chen
My Roles
Participant Interviews & Research
Data Synthesis
Ideation & Design
Concept & Usability Testing
Photography
UX & UI Design
Problem
Interpretation is a highly demanding job that often requires interpreters to work in specialized subject areas with terminology and language they may be unfamiliar with.
Response
SignSavvy is an app that assists novice ASL interpreters by providing them with live imagery and text when unfamiliar terms come up in interpretation sessions. It also helps them organize and keep track of their scheduled sessions and learn vocabulary that might arise in future sessions, helping interpreters build a repertoire of language in specialized fields.
In-SESSION

REAL-TIME ASSISTANCE

When difficult or technical vocabulary is used during a simultaneous interpretation session, SignSavvy displays the word with corresponding synonyms or an image to help interpreters quickly understand it in the context of the conversation. Not only does this help interpreters if they mishear, it also lets them see the spelling if they need to fingerspell.

In-SESSION

TEAM INTERPRETING

Interpretation sometimes happens in teams, especially for sessions lasting longer than an hour. The role of the off-interpreter is to be another set of eyes and ears and assist the on-interpreter with cues as needed. When interpreters work in teams, the off-interpreter can switch to the list view. This makes it easier to verify accuracy and look up vocabulary for the on-interpreter if needed.

In-SESSION

LIVE SHARING WITH DHH INDIVIDUAL

Interpreters can share their screen with their users by tapping the screen-casting icon. Deaf and hard-of-hearing people, who are often used to lacking full access to information, might want to be included and may benefit from seeing what the interpreter sees.

Post-SESSION

SAVE VOCABULARY

After each session, interpreters can select which words were helpful and save them to their vocab list. They can also deselect words they don't want to show up in future sessions.

Post-SESSION

REVIEW VOCABULARY

Saved words from previous sessions are sorted by topic, date, or alphabetical order under the vocabulary tab. Users can review and learn vocabulary during their free time to be better prepared for upcoming sessions.

Pre-SESSION

REVIEW PROFILES
& PAST SESSIONS

Interpreters can create profiles for their users and coworkers. This allows them to take notes on each individual's style, needs, or language level so they can keep track of that information for future sessions. It also saves past sessions with that individual and the vocabulary used in those sessions.

Pre-SESSION

SCHEDULE SESSIONS
& SET PREFERENCES

When interpreters receive a new job, they can add an upcoming session to the schedule and enter details and preferences.

To set up vocabulary assistance, they can add the subject field and a few keywords to help the AI better predict what to show.
Since some words are better explained with synonyms and others with images, users can select their preferred display mode. If they turn auto mode on, the AI will choose which mode to display depending on what explains the word best.

Pre-SESSION

PREPARE BEFOREHAND

Based on the user's input and previous sessions, the AI generates a list of vocabulary predictions that might occur in the session. Interpreters can remove words they already know. This helps the AI identify their vocabulary level and better predict which words to show.

Context

Why Interpreters?

Interpreters are a vital part of accessibility and inclusion for deaf and hard of hearing individuals. However, interpretation is a very stressful job requiring a lot of experience, knowledge, and energy. It is especially difficult for novice interpreters who start out interpreting an unfamiliar topic with terms they do not know. Because of this, there are very few interpreters despite the high demand.

Process

Research Questions

After doing secondary research on the role of interpreters and deaf culture, we wondered…

What challenges do interpreters face when they are delivering simultaneous interpretation?

How might technology be used to assist interpreters?

What are factors that can affect the message delivery in interpretation?

Participants

For this entire project, we worked with...

12 Interpreters
1 Deaf Linguistics Professor who uses interpreters for her classes
2 Deaf Students who use interpreters
3 Research Experts on accessibility for deaf and hard of hearing people

Research Methods

Direct observation at a live event with interpreters

Our team attended our university’s presidential address to see how interpreters work at a live event. We observed and documented how they interacted with the speaker, their partner interpreter, the deaf/hard-of-hearing section, and the rest of the audience. Afterwards, we got to speak with one of the interpreters and one deaf individual to understand why they did certain things and how they thought it went.

UW presidential address event

Semi-structured Interviews

We conducted semi-structured interviews with 12 participants to find answers to our research questions. After setting up an interview protocol and a list of questions, we reached out to many ASL interpreters in the greater Seattle area and asked about their experiences and challenges as interpreters. This method was how we got most of our data and insights.

Interview with a deaf professor

Card sorting probe activity

For some of the interpreters we interviewed, we also conducted a card sorting/writing activity. It asked the participant to come up with a superpower that would solve or help with a challenge in a specific scenario or context during their job, and then explain why it would help them. By doing this, we were able to identify specific challenges and wants within those scenarios and contexts in interpreting.

For the first few rounds, we provided the scenario through a context card (a description or image). In the last rounds, we had the interpreter come up with their own context card in case we had missed any important scenarios they had in mind.


In the context of a business meeting with multiple people, my superpower would be teleportation so that I can teleport next to the person speaking while signing so that the deaf individual can see who is speaking while I interpret. -P2

Template for probe activity
Context card examples
A participant doing the probe activity

Data Synthesis

Affinity Diagramming

After each interview or activity, we reviewed our notes/transcripts/recordings and “extracted” qualitative data points (quotes and observations) by writing them down on sticky notes. By posting and grouping all the sticky notes accumulated on large boards, we were able to identify common themes and turn those into actionable insights.

Journey Mapping

From our time working with our participants, we gained a good sense of the interpreter’s process. We created a journey map to visualize interpreters’ interactions and procedures, and highlight touch points and pain points in their process.

We divided their process into three sections: pre-interpretation, during interpretation, and post-interpretation. With our limited time and research, we decided to narrow our focus to pre-interpretation and during interpretation since those areas had more pain points.

Key Insights

1

Interpretation is optimal when the interpreters are well-prepared and better informed.

We don't get the information ahead of time sometimes…we just have to do the best we can skimming through the information.    -P7

We're often on our phone as we're Googling stuff... and they (off interpreter) can tell when we don't know the word. Our team will google it really fast and then sign it to us.   -P3

2

Interpretation is stressful because it takes a heavy cognitive load and requires multitasking and multi-level processing.

It's a ton of cognitive load. It’s just so many things you're translating— you're constantly signing something and listening for what you're supposed to do next.   -P2

Typically work in teams because of the physically tiring and just constant thinking and interpreting. They have proven after 20 minutes of interpreting that the interpreter will start to omit information.   -P10

3

Technology can assist but not replace interpreters.

It's so personalized. These machines can’t.. Each deaf person does it differently. There's just so many variables going on. How you express it and consume is so rooted in a person's creative mind and how they embody it and their technical output. I mean, it's one of the most human experiences.   -P3

4

Despite the proficiency of the interpreters, there are external factors outside of their control that can make the interpretation process difficult.

Sometimes we can't hear
or maybe the speaker has an accent and is difficult to understand.   -P7


In group conversations, they will be talking over each other even though they know an interpreter is there.   -P8

Design Principles

Informative

Provides information and context to interpreters ahead of time to help them be better prepared for the interpretation session.

Stress-free

Helps relieve stress and intensity during fast-paced simultaneous interpretation.

Assistive

Helps facilitate smooth interpretation for interpreters instead of replacing them.

Personal

Considers each interpreter's personal and situational needs while upholding the human quality of all individuals involved in the interaction, going beyond the one-size-fits-all approach.

Ideation

Keeping our principles in mind, we came up with our how-might-we (HMW) statement:
How might we assist interpreters by equipping them with information they need for their interpretation sessions?

We then ideated without keeping business constraints or feasibility in mind, since we wanted a full range of ideas to explore. After several ideation sprints, our team narrowed down to these six ideas and fleshed them out.

Of those six, we ultimately decided to explore idea 5 because we thought it had the most potential to assist interpreters during their sessions in a meaningful way despite the speculative nature of the AR and AI technology.

Concept Testing 1

What we did

We came up with three different variations of our idea 5 concept without the dictionary feature and tested those with five participants.

To do this, we mimicked several simultaneous interpretation sessions using the Wizard of Oz method and role-playing. We used a laptop with a prepared slide deck to act as the AR “interface,” placing it in the participant's line of sight to simulate what the interpreter would see when using this AR technology. Cecilia controlled the “interface,” I acted as the deaf individual (whom the interpreter looks at and signs to), and Winnie acted as the speaker by reading prepared paragraphs out loud. The participant would then sign what Winnie was reading to me (the deaf individual) with the aid of the interface.

We observed and video-recorded how participants signed and how they reacted to the different variations of visual aids. After each paragraph was read, we discussed with the participant which visual aids were helpful and which parts were not.

What we learned

1

Video clips of sign language words are not helpful.

  • Not enough time to look at the clip and mimic it
  • Video definitions are often not accurate
  • ASL terminology can vary depending on slang, dialects, and context
  • Visually distracting

2

Highlighted key words are very helpful.

  • Nice to have the word visible so that the interpreter can see the spelling
  • Helpful for words that are unfamiliar to the interpreter
  • People fingerspell some terms because the deaf community might not have developed a sign for them

3

Expert interpreters do not need this assistive visual aid.

  • Expert interpreters are familiar with vocabulary used in their interpreting jobs from years of experience
  • In jobs with two interpreters (one on, one off), the on-interpreter relies on the off-interpreter for support if they miss, mishear, or misunderstand anything

4

Images that show the relationship or meaning of a word in the context of the conversation are more helpful than images that only show what it is.

I've interpreted organic chemistry and physics, and these sort of sciences where a lot of times what we're talking about is the relationship between categories of things in the world. So me understanding what kind of thing that is and what relationship it has to the other things is what is most useful to me.
-P12

5

Confidentiality between interpreters and their users is very important.

Privacy is always a concern when they (deaf people) have to have someone else (interpreter) in the room. That sucks. Especially if you know them or... if you go to a deaf event and you were just at their physical. That sucks. So as an interpreter you have to be trustworthy.
-P3

  • Interpreters have a code of ethics to maintain the privacy of the user or patient

6

Interpreting can have high stakes.

I've heard many horror stories of experiences with unqualified interpreters in dire situations with terrible outcomes, even deaths.
-P11

Design Decisions 1

1

Use images instead of ASL video clips


Images can be understood at a glance, whereas video clips take too long to watch and mimic, are often inaccurate, vary by dialect and context, and are visually distracting.

2

Have minimal captioning with one highlighted term at a time


Interpreters can keep track of the words used in the conversation and see the spelling in case they need to fingerspell.

3

Target our product to novice interpreters


Interpreters who are interpreting alone for an unfamiliar subject or topic can benefit, while experts do not need it.

4

Add a user input stage for preferences at the beginning of an interpretation session


Interpreters can select the topic and familiarity level to help the AI know which words to highlight.

5

Incorporate confidentiality into the design


Any transcript data stored will be inaccessible and automatically wiped 20 minutes after a session in order to protect people’s privacy and rights.

Speculative Design

AR technology that assists novice interpreters with imagery and captioning in order to help deliver successful interpretation

Highlights Terminology & Provides Corresponding Imagery

SignSavvy helps interpreters understand unfamiliar terms during an interpretation session by analyzing the conversation and providing visual captioning with difficult terms highlighted. For each term it highlights, it shows a corresponding image to give the interpreter a better understanding.

Set Preferences

Before interpreters start their session, they can input preferences and contextual information about the session to help the artificial intelligence (AI) predict and determine what images to show. With machine learning that collects data on the vocabulary being used, SignSavvy can predict the direction of the conversation as it builds and show images that are even more helpful to the interpreter.

Storyboard

Concept Testing 2

What we did

We presented our speculative design concept and storyboard to five participants to get feedback. This was the first time we showed and explained our design to our participants. We wanted to know if this technology would be something interpreters could see themselves using in the future.

Concept Testing with a Research Expert

What we learned

1

Initially, we chose augmented reality because of its non-intrusiveness. Although participants thought it was a cool and interesting concept, they couldn't really imagine using AR technology in their jobs today. During testing, many participants expressed the concern that AR technology is a bit out of touch with their everyday lives.

2

Participants also reiterated that receiving information about the upcoming session is very important for interpreters to be better prepared.

3

In addition to images of unfamiliar terms, synonyms can also help interpreters quickly grasp what a term means in the context of a conversation.

4

Some participants had trouble entering their familiarity with the topic in the preference settings, since there is no set standard for determining whether they are novice, general, or expert. It also wasn't very clear how entering these preferences would actually produce a better image or synonym.

Design Decisions 2

1

Adapt our existing AR concept into an app


AR technology is a bit foreign and out of touch for interpreters to use today, but they are always using their phones to check email, manage their schedules, and Google terms. An app can fit easily into their workflow.

2

Get rid of captions but still show the highlighted word


In transitioning from an AR interface to a physical phone screen, we realized that even a small amount of captioning would be too much for the interpreter to read on such a small device.

3

Expand our scope to the full experience (before, during, and after the interpretation session)


Instead of focusing solely on assisting interpreters during the session, we also want to help them prepare beforehand and learn from previous sessions afterwards.

4

Add synonyms as one of the display modes


Previously, we had two display modes (word only, and word + image). Interpreters have varying preferences for the display of contextual information depending on the word, topic, situation, and the people they are interpreting for. Showing synonyms instead of an image could be helpful in certain situations.

5

Replace intent and familiarity with entering keywords in preferences


Having users enter keywords relating to the subject/topic of the session beforehand will help the AI better understand the type of conversation that might happen and provide better images and synonyms.

UX & UI Design

An app that assists novice ASL interpreters with imagery and text during simultaneous interpretation

User Scenario Storyboard

Sketches and Wireframes

Low fidelity wireframes by me
Mid fidelity wireframes by me and Cecilia

Usability Testing

What we did

We conducted in-depth usability testing with two interpreter participants. We made an interactive prototype in Figma and displayed it on an iPhone for participants to use. We asked participants to think aloud so that we knew their thoughts, concerns, and frustrations while interacting with the prototype.

The two participants were new to our project. They were thrilled by the idea of an app designed for interpreters since there are currently no products like it available. They gave us detailed feedback on the interface and provided fresh perspectives on the app's structure and functionality to make it fit in better with their current workflow.

Usability Testing with an interpreter

What we learned

1

Deaf and hard-of-hearing individuals also benefit from, and might want to see, the information the interpreter is seeing.

2

Since interpreters often work in pairs, we should consider what information would assist the off interpreter and how the app might look/function differently for them.

3

Interpreters should have a say in what vocabulary shows up in future sessions.

Design Decisions 3

1

Add a screen mirroring function so it can be shared with the deaf individual, partner interpreter, or a bigger screen


Deaf people, who are often used to lacking full access to information, might want to be included in seeing the visual assistance. Mirroring or casting to a bigger screen in the room can be helpful if it is in their line of sight.

2

Add an in-session "list" view that shows the full list of highlighted words for the off-interpreter


When an off-interpreter is using the app, they can switch to the list view to see a live list of highlighted words from that session.

3

After each session, prompt the interpreter to choose which words were helpful and which ones they want to save


Interpreters can choose which words they want to show up again in future sessions and which to save to their vocab section.

"Final" Interfaces

Reflection

If I did it again...

After finishing this project, I realized there was a blindspot in our design. Currently the interaction model requires the interpreter to create a session, save it, click back into it, then start that session. The interpreter cannot go into live in-session mode unless they create an upcoming session beforehand. What would the interaction be like for interpreters who forgot or didn't have time to set up a session, and want to go right into using the real-time assistance? Although this wouldn't be the typical use case for an interpreter, a good design should also work for the edge cases. I believe that interpreters would appreciate the freedom to go straight into in-session mode without having to create a session. I can also see how interpreters might think that this is just a scheduling app without the main feature (in-session assistance) being available from the first screen they see.

If I were to continue with this project, I would add a way for the user to go straight into in-session mode if they wanted to and allow them to fill in the job information afterwards. Ideally, the interpreter would create a session once they receive a job and prepare before that session, but that doesn't always happen.

Design Implications

In every project that has the potential to affect people, I believe that as designers we carry the responsibility to question and carefully consider the implications and potential problems/unintended outcomes that might arise from the design. These are a few questions I came up with about how SignSavvy might affect people.

  • How might this negatively affect how interpreters learn on the job?
    For example, will assistive technology prevent interpreters from relying on their own memory and skill?
    Will they rely too much on the tech?
  • How would DHH people feel about interpreters using assistive tech?
    Would DHH individuals have a say in whether their interpreter uses this or not?
  • Would using this technology be an indicator to others that the interpreter is a novice?
    How might that change people's behavior?
  • Will more people actually become interpreters with this technology available?
  • Can this technology be expanded to other areas of communication or learning?
  • How might this technology affect how humans interact with one another? (eye contact, dependency on mobile devices, etc.)

I do not yet know the answers to these questions. Obviously we weren't able to test at a large scale, but these questions are important to ask and think about. If this were to be developed and put out into the real world, we would need to find answers to them.

Takeaways

Before this project, I thought design research was valuable, but only to a certain extent. Now I have learned that research should always happen if the time and resources are available. Each time we went out and interviewed or tested, we learned something new that changed our direction. Diligent research helps us understand not only the people we are designing for, but also the thoughts, attitudes, and routines that drive how they interact with the world.

Another takeaway is that asking the right questions early in the design process is crucial, because those questions lead to valuable answers which act as the foundation for the design to be built on.

The most valuable takeaway from this project is not coming up with a cool product or design. It is how much I was able to learn from diving into an unfamiliar topic: deaf culture, sign language, and interpreters. I got to learn how others navigate and interact with the world differently. I also grew to love interpreters, not just for how well they handle their difficult job, but even more for their passion and dedication to accessibility and inclusion for DHH people. Because I learned more about deaf culture and DHH people, I have the perspective to look into their world and be a more knowledgeable and empathetic designer and person.


ASL is like painting in a 3D space. You describe the shape and size, texture, the abstract information rather than using linear words.
-P3
Customized Thank You cards for participants
See the full case study here
CASE STUDY PDF