SignSavvy

An app that assists novice ASL interpreters with imagery and text during simultaneous interpretation

PROJECT TYPE
Design Research
Case Study
UX/UI
Duration
20 Weeks
Tools
Figma
Illustrator
Photoshop
Lots of sticky notes
Teammates
Cecilia Zhao
Winnie Chen
My Roles
Participant Interviews & Research
Data Synthesis
Ideation & Design
Concept & Usability Testing
Photography
UX & UI Design
Problem
Interpretation is a highly demanding job that often requires interpreters to work in specialized subject matters, using expertise and language they may be unfamiliar with.
Response
SignSavvy is an app that assists novice ASL interpreters by providing live imagery and text when unfamiliar terms come up in interpretation sessions. It also helps them organize and keep track of their scheduled sessions and learn vocabulary that might arise in future sessions, building a repertoire of language in specialized fields.
IN-SESSION

REAL-TIME ASSISTANCE

When difficult or technical vocabulary comes up during a simultaneous interpretation session, SignSavvy displays the word with corresponding synonyms or an image to help interpreters quickly understand it in the context of the conversation.

Not only does this help interpreters if they mishear a word, it also lets them see the spelling if they need to fingerspell.
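As a rough sketch of the underlying idea, here is how the in-session highlighting could work, assuming a speech-to-text transcript feed, an AI-generated candidate term list for the session's topic, and the interpreter's known-word set. All names and data below are hypothetical illustrations, not part of the actual prototype:

    # Hypothetical sketch: surface unfamiliar technical terms from a live
    # transcript so each can be shown with its spelling, a synonym, and an image.
    import re

    # Candidate terms the AI predicts for this session's topic (made-up data)
    SESSION_TERMS = {
        "anticoagulant": ("blood thinner", "anticoagulant.png"),
        "hypertension": ("high blood pressure", "hypertension.png"),
    }
    KNOWN_WORDS = {"hypertension"}  # words the interpreter marked as familiar

    def unfamiliar_terms(transcript_chunk):
        """Yield (term, synonym, image) for candidate terms the interpreter doesn't know."""
        for token in re.findall(r"[a-z']+", transcript_chunk.lower()):
            if token in SESSION_TERMS and token not in KNOWN_WORDS:
                yield (token,) + SESSION_TERMS[token]

    for term, synonym, image in unfamiliar_terms("The patient was prescribed an anticoagulant"):
        print(term, synonym, image)  # -> anticoagulant blood thinner anticoagulant.png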

Context

Why Interpreters?

Interpreters are a vital part of accessibility and inclusion for deaf and hard of hearing (DHH) individuals. However, interpretation is a very stressful job requiring a lot of experience, knowledge, and energy. It is especially difficult for novice interpreters, who often start out interpreting unfamiliar topics full of terms they do not know. Because of this, there are very few interpreters despite the high demand.

Process

Initial Research

Research Questions

As we defined our users, we wanted to scope out our problem space with research questions. After doing secondary research on the role of interpreters and deaf culture, we wondered…

What challenges do interpreters face when they are delivering simultaneous interpretation?

What are factors that can affect the message delivery in interpretation?

Participants

We reached out to our school's Disability Services Office to get contact information and cold-email interpreters and experts in the area. Despite their busy schedules, our participants were happy to help.

10 Interpreters
1 Deaf Linguistics Professor
2 Research Experts

Research Methods (Discover)

Card Sorting Probe Activity (5)
Goal: Identify specific pain points and potential solutions by having participants brainstorm what superpowers they would want in specific interpreting contexts.

Data Synthesis (Define)

Journey Mapping
Goal: Visualize interpreters' typical workflow and highlight touchpoints and pain points in their process.

Key Insights

1

Interpretation is optimal when the interpreters are well-prepared and well-informed.

We don't get the information ahead of time sometimes…we just have to do the best we can skimming through the information.    -P7

We're often on our phone on the job as we're Googling stuff   -P3

2

Interpretation is stressful because it imposes a heavy cognitive load and requires multitasking and multi-level processing.

It's a ton of cognitive load. It’s just so many things you're translating— you're constantly signing something and listening for what you're supposed to do next.   -P2

Typically work in teams because of the physically tiring and just constant thinking and interpreting. They have proven after 20 minutes of interpreting that the interpreter will start to omit information.   -P10

3

Technology can assist but not replace interpreters.

It's so personalized. These machines can’t.. Each deaf person does it differently. There's just so many variables going on. How you express it and consume is so rooted in a person's creative mind and how they embody it and their technical output. I mean, it's one of the most human experiences.   -P3

4

Regardless of an interpreter's proficiency, there are external factors outside of their control that can make the interpretation process difficult.

Sometimes we can't hear or maybe the speaker has an accent and is difficult to understand.   -P7

In group conversations, they will be talking over each other even though they know an interpreter is there.   -P8

Design Principles

Informative

Provides information and context to interpreters ahead of time to help them be better prepared for the interpretation session.

Stress-free

Helps relieve stress and intensity during fast-paced simultaneous interpretation.

Assistive

Helps facilitate smooth interpretation for interpreters instead of replacing them.

Personal

Considers each interpreter's personal and situational needs while upholding the human quality of all individuals involved in the interaction, going beyond the one-size-fits-all approach.

How might we assist interpreters by equipping them with information they need for their interpretation sessions to offset their cognitive load?

Ideation

After several ideation sprints, our team narrowed down to six ideas. Of those six, we ultimately decided to explore idea 5 because it was a nonintrusive intervention that aligned most closely with our principles and our goal of assisting interpreters, despite the speculative nature of the AR and AI technology.

Other Ideas

Concept Testing 1

We tested three variations of idea 5 using the Wizard of Oz method, role-playing an interpretation session to simulate what interpreters would see when using this AR technology. At the end, we asked participants what they thought of each variation.

Goal:
To observe how interpreters respond to each variation and gauge how helpful the visual aids are.

Participants

5 Interpreters
Concept Testing with an ASL Interpreter

What we learned

How we responded

1

Video clips of sign language words are not helpful, but images are.

We quickly found that the video definition clips were useless: they were visually distracting, often inaccurate, and almost impossible to mimic.

However, images could communicate the meaning of the highlighted term at a glance.


ASL is like painting in a 3D space. You describe the shape and size, texture, the abstract information rather than using linear words.   -P3

2

Highlighted key terms are very helpful for fingerspelling.

Fingerspelling occurs often in ASL, since many technical terms don't have a sign developed by the deaf community. Seeing the spelling helps the interpreter fingerspell the term to the DHH individual.

It is also helpful when interpreters don't know the term, or couldn't hear or understand what was spoken.

3

Confidentiality between interpreters and their users is very important.

Interpreters have a code of ethics to maintain the privacy of the user or patient.


Privacy is always a concern when they (deaf people) have to have someone else (interpreter) in the room. If you go to a deaf event and you were just at their physical. That sucks. So as an interpreter you have to be trustworthy.
-P3

3

Any transcript data from the conversation is inaccessible and automatically wiped after the session to protect people's privacy and rights.
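As a minimal sketch of what that rule could mean in an implementation (hypothetical; our actual deliverable was a Figma prototype), transcript data would live only in memory for the duration of a session and be destroyed the moment it ends:

    # Hypothetical sketch: session transcripts are held in memory only and
    # wiped when the session ends, so no conversation data is retained.
    class Session:
        def __init__(self, topic):
            self.topic = topic
            self._transcript = []  # in memory only; never written to disk

        def add_chunk(self, text):
            self._transcript.append(text)

        def end(self):
            """Wipe all conversation data when the session closes."""
            self._transcript.clear()

    session = Session("cardiology appointment")
    session.add_chunk("The patient was prescribed an anticoagulant.")
    session.end()
    assert session._transcript == []  # nothing remains after the session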

Concept Testing 2

What we learned

How we responded

1

AR technology is a bit out of touch.

While our AR concept fascinated participants, they couldn't really imagine using AR technology, especially without a working device or prototype, which made the concept difficult to evaluate.

However, an app could easily fit into their workflows, since interpreters already use their phones on the job.

1

Adapt our existing AR concept into a mobile app.

We wanted to ground our speculative concept in a more practical one. We also decided to remove captions, since reading multiple words on a small device would be difficult.

2

The more interpreters know about the topic of the session beforehand, the better.

Interpreters reiterated that preparing for upcoming sessions is an important process, and part of that preparation is becoming familiar with the topic.

Luckily, because our concept shifted to an app, we could support their routines before and even after sessions, such as learning specialized vocabulary.

2

Help interpreters build a repertoire of language in specialized fields by facilitating vocabulary learning.

3

DHH individuals might also want to see and benefit from the information the interpreter is seeing.

DHH people are often already excluded from information that is accessible to others. And just as it is difficult for interpreters to constantly fingerspell, it is also difficult for the DHH individual to constantly read fingerspelling.

3

Add a screen mirroring function so live information can be shared with DHH individuals.

Screen mirroring would allow DHH individuals to process the information more quickly and feel included in the process. It could also reduce the need for fingerspelling, since the interpreter could simply point to the device whenever technical vocabulary comes up.

UX & UI Design

While we gathered and synthesized our learnings from Concept Testing 2, we simultaneously designed the user experience and the interfaces.

Storyboard by Winnie
Rough sketch wireframes by Cecilia
Low fidelity wireframes by me
Mid fidelity wireframes by me and Cecilia

Usability Testing

After designing the interfaces and an interactive prototype in Figma, we conducted usability testing with two new participants. They were thrilled and impressed by the idea of a product designed for interpreters, who are often overlooked when it comes to inclusive technology.

Goals:

  • See if the UI communicates its functionality to a first-time user
  • See if the app's structure is intuitively navigable
  • Get feedback on the existing features
  • Validate the need for this product within their current workflows

Participants

2 Interpreters
Usability Testing with an interpreter

What we learned

How we responded

1

Knowledge levels in each interpreter vary depending on the topic and amount of experience.

The highlighted terms should be ones the interpreter does not know. Because knowledge levels differ, we had to personalize the experience based on each interpreter's familiarity with various topics and words.

1

Have interpreters input conversation topic and keywords for the AI to gauge their knowledge level.

When interpreters input the session topic and relevant keywords beforehand, the AI generates a list of vocabulary that might occur in the session. They can then remove words they already know from this list, which helps the AI gauge the interpreter's vocabulary level and decide which terms to highlight during the session.
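Here is a minimal sketch of that pre-session flow, with a generate_vocabulary() placeholder standing in for the AI component. All names and data are hypothetical, for illustration only:

    # Hypothetical sketch: build the in-session highlight list by generating
    # candidate vocabulary for a topic, then removing what the interpreter knows.
    def generate_vocabulary(topic, keywords):
        # Placeholder for the AI model; returns terms likely to occur in session.
        canned = {"cardiology": ["anticoagulant", "hypertension", "stent", "echocardiogram"]}
        return set(canned.get(topic, [])) | set(keywords)

    def build_highlight_list(topic, keywords, known_words):
        """Terms to highlight in session = candidate terms minus known words."""
        return generate_vocabulary(topic, keywords) - set(known_words)

    highlights = build_highlight_list(
        topic="cardiology",
        keywords=["pacemaker"],
        known_words=["hypertension"],  # removed by the interpreter during review
    )
    print(sorted(highlights))  # ['anticoagulant', 'echocardiogram', 'pacemaker', 'stent']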

2

Interpreters often work in pairs. We should consider what role the app would play in that scenario.

For any session longer than an hour, interpreters work in pairs and switch roles every 20 minutes so they don't start omitting information due to cognitive overload. The role of the "off" interpreter is to be another set of eyes and ears, assisting the "on" interpreter by feeding them any information or terms they might have missed or signed incorrectly.

2

Add an in-session "list" view that shows all the highlighted words for the "off" interpreter.


When an "off" interpreter is using the app, they can switch to a vertical list view showing a live list of the highlighted words in that session. This would help the "off" interpreter track the conversation and feed information to the "on" interpreter.

Reflection

If I did it again...

When we designed this concept, we catered the experience to the typical use case for an interpreter, who is able to schedule a session and prepare beforehand. Currently, the interaction model requires the interpreter to create a session, save it, click back into it, and then start it.

What should the interaction be like for interpreters who forgot or didn't have time to set up a session, and want to go right into using the real-time assistance?

Although this wouldn't be the typical use case, a good design should also work for the edge cases. I believe interpreters would appreciate the freedom to go straight into in-session mode when they forgot to set one up ahead of time. I can also see how interpreters might mistake this for just a scheduling app if the main feature (in-session assistance) isn't available from the first screen they see.

Therefore, if I were to continue with this project, I would add a way for users to go straight into in-session mode and fill in the job information afterwards.

Design Implications

In every project that has the potential to affect people, I believe that as designers we carry the responsibility to question and carefully consider the implications, potential problems, and unintended outcomes that might arise from the design. These are a few questions I came up with about how SignSavvy might affect people.

  • How might this negatively affect how interpreters learn on the job?
    For example, will assistive technology prevent interpreters from relying on their own memory and skill?
    Will they rely too much on the tech?
  • How would DHH people feel about interpreters using assistive tech?
    Would DHH individuals have a say in whether their interpreter uses this or not?
  • Would using this technology be an indicator to others that the interpreter is a novice?
    How might that change people's behavior?
  • Will more people actually become interpreters with this technology available?
  • Can this technology be expanded to other areas of communication or learning?
  • How might this technology affect how humans interact with one another? (eye contact, dependency on mobile devices, etc.)

I do not know the answers to these questions, and obviously we weren't able to test at a large scale, but these questions are important to ask and think about. If this product were developed and put out in the real world, we would need to find answers to them.

Takeaways

Before this project, I thought design research was valuable only to a certain extent. Now I have learned that research should always happen when time and resources allow. Each time we went out and interviewed or tested, we learned something new that changed our direction. Diligent research helps us understand not only the people we are designing for, but also the thoughts, attitudes, and routines that drive how they interact with the world.

Another takeaway is that asking the right questions early in the design process is crucial, because those questions lead to valuable answers that act as the foundation the design is built on.

The most valuable takeaway from this project is not coming up with a cool product or design. It is how much I was able to learn from diving into an unfamiliar topic: deaf culture, sign language, and interpreting. I got to learn how others navigate and interact with the world differently. I also grew to love interpreters, not just for how well they handle their difficult job, but for their passion and dedication to accessibility and inclusion for DHH people. Because I learned more about deaf culture and DHH people, I have the perspective to look into their world and be a more knowledgeable and empathetic designer and person.

Customized thank-you cards for participants