VTour

A service that helps renters remotely connect with a local to allow for a trusted virtual tour of future living spaces.

DURATION
9 Weeks

TEAM
Noya Dolev
Pan Li

MY ROLE
User Research
UX & UI Design
Usability Testing
Video Editing

BACKGROUND

Finding a place to live has always been a stressful and laborious process. It became even more challenging during the COVID-19 pandemic, as many people were unable to visit and check out places in person. Therefore, we decided to explore the remote house-hunting experience. After conducting primary and secondary research, we found that there is a lack of trust during the remote process, as many factors are hard to evaluate remotely.

Therefore, we wondered:
How might we enhance the house-hunting experience and help renters trust the process remotely?

RESPONSE

VTour is a service that helps renters find trusted viewers in the area to tour the listings they are interested in on their behalf. It includes the following features to best foster renters' trust during the remote house-hunting process.

Matchmaking Survey

find a viewer who is similar to YOU

A set of questions we ask both renters and viewers about their lifestyle preferences, which will result in a matching percentage. The higher the percentage, the more similarities there are between the renter and the viewer. The goal is to help renters gain more confidence by letting a viewer who has similar preferences represent them when going on a tour.
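
To make the matching idea concrete, here is a minimal sketch, in Python, of how a match percentage could be computed, assuming each survey answer is a simple question-to-choice pair and all questions are weighted equally. The function name and data shape are hypothetical and not part of the actual VTour design.

```python
def match_percentage(renter_answers: dict, viewer_answers: dict) -> float:
    """Share of survey questions where the renter and viewer chose the same answer.

    Hypothetical sketch: answers are keyed by question id and every
    question carries equal weight.
    """
    shared = renter_answers.keys() & viewer_answers.keys()
    if not shared:
        return 0.0
    matches = sum(1 for q in shared if renter_answers[q] == viewer_answers[q])
    return round(100 * matches / len(shared), 1)


# Example: 2 of 3 shared lifestyle questions match -> 66.7%
renter = {"pets": "yes", "noise": "quiet", "smoking": "no"}
viewer = {"pets": "yes", "noise": "lively", "smoking": "no"}
print(match_percentage(renter, viewer))  # 66.7
```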

viewer profile

details about
your viewer

Aside from general information, the viewer profile also includes reviews from other renters and the viewer's matchmaking survey answers, helping renters better understand the match percentage.

personalized checklist

make sure they check everything you care about

The checklist is designed to help renters communicate what they are looking for and to guide viewers in touring homes in a more professional way. It is automatically generated based on the imported listing and the renter's survey answers. Renters can add and edit any criteria to make sure all their needs are communicated to the viewer.
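
As a rough illustration of how such a checklist could be auto-generated, here is a small Python sketch that derives items from an imported listing and the renter's survey answers. The field names and rules are assumptions made for illustration, not the service's actual logic.

```python
def generate_checklist(listing: dict, survey: dict) -> list[str]:
    """Hypothetical sketch: build checklist items from an imported listing
    and the renter's survey answers; renters can still add or edit items."""
    items = [
        "Confirm the overall condition matches the listing photos",
        "Check water pressure and look for signs of property damage",
    ]
    # Listing-driven items (field names are assumptions)
    if listing.get("has_parking"):
        items.append("Verify the assigned parking spot and its location")
    if listing.get("pets_allowed"):
        items.append("Ask about the pet deposit and any breed restrictions")
    # Survey-driven items
    if survey.get("noise") == "quiet":
        items.append("Listen for street and neighbor noise")
    if survey.get("commute") == "transit":
        items.append("Time the walk to the nearest bus or train stop")
    return items


listing = {"has_parking": True, "pets_allowed": False}
survey = {"noise": "quiet", "commute": "transit"}
for item in generate_checklist(listing, survey):
    print("-", item)
```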

virtual tour

check out the place with the viewer

On the day of the inspection, renters can join the tour remotely and go through it with the viewer. Unlike virtual tours run by leasing offices, the viewer is a neutral third party, which allows them to give feedback without bias.

Overall Report

A SUMMARY OF THE place

Some people want to know every detail about the place, while others might not. The overall report is a summary of the place, filled out by the viewer and sent to the renter after the tour. It has a score and comments on the aspects the renter specifically cares about. If the renter wants more details, they can check the attached checklist.
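
As a simple illustration of what the report data could look like, here is a hypothetical Python structure; the field names and the 1-5 scoring scale are assumptions, not part of the published design.

```python
from dataclasses import dataclass, field


@dataclass
class OverallReport:
    """Hypothetical structure for the viewer's post-tour summary."""
    listing_id: str
    overall_score: int                      # assumed 1-5 scale
    aspect_comments: dict[str, str] = field(default_factory=dict)
    checklist_url: str = ""                 # link to the detailed checklist


report = OverallReport(
    listing_id="listing-042",
    overall_score=4,
    aspect_comments={
        "noise": "Quiet street, but thin walls on the bedroom side.",
        "light": "Great afternoon light in the living room.",
    },
    checklist_url="https://example.com/checklists/042",
)
print(f"{report.listing_id}: {report.overall_score}/5")
```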

RESEARCH

We spent the first 2 weeks conducting primary and secondary research to better understand the problem area. We began by creating a detailed study guide, including our research questions, methods, and interview protocol, so that we could approach the problem strategically.

Research Questions

1

What are the challenges renters encounter during the house-hunting process?

2

What are the common platforms and products used for remote housing search?

3

What are the important factors renters consider when searching for housing?

Research Methods

Secondary Research

Our secondary research focused mainly on how people usually find a home to rent and which factors they consider essential during the process. We found that today, any renter can easily access millions of apartment listings through hundreds of digital databases and rental search engines. Renters often find these options overwhelming, inefficient, and time-consuming.

We also found that there are many factors to consider when looking for a place to live, including the community, property damage, and the place's overall feel and atmosphere. These factors are extremely hard to evaluate remotely, without being physically present.

Surveys conducted by RentPath between 2017 and 2018

Semi-Structured Interviews

Once we had a general understanding of the industry, we wanted to get deeper insights into the problem space. Therefore, we interviewed 12 participants, mostly students and young professionals.

We held 45-minute interviews with all participants, asking questions about their experiences in order to learn about the challenges they face when looking for housing.

Card Sorting

During the interviews, we asked 6 participants to rank housing search factors from most to least important based on their own house-hunting experience. Our goal was to learn which factors renters consider most when looking for housing.
Although each participant had their own personal preferences, price, commute, and location were the three most important factors for most of our participants.

Card Sorting Results

Observation Analysis

We asked the other 6 participants to imagine that they would be moving to Los Angeles in June for a new job and needed to start looking for a place to live. We had them share their screens with us and think out loud so that we could go through the process with them and ask questions as they went. The goal was to learn what the process of finding housing looks like and the challenges renters face along the way.

Screen Sharing with One of Our Participants

Understand the Industry

It was also important for us to understand the current solutions on the market so that we could identify the gap. Therefore, we analyzed 10 competitors in terms of their strengths, weaknesses, and included features to find our design opportunities.

Parts of the Competitive Analysis

DATA SYNTHESIS

Affinity Mapping

We used affinity mapping to find patterns across all the data points from our research. We then grouped them into categories and came up with 6 insights.

Narrow Down
the Scope

We then combined these insights into 4 areas of opportunity:
"Research," "Comparison," "Trust," and "Remote."

1

Research: How might we optimize the search process for renters in order to increase efficiency when looking for housing?

2

Comparison: How might we help users in their decision-making process to eventually come up with the right fit for them?

3

Trust: How might we establish trust and transparency when going through the remote house-hunting experience?

4

Remote: How might we establish trust and transparency when going through the remote house-hunting experience?

To narrow down the scope, we decided to focus on the "trust" and "remote" aspects. We acknowledged that "research" and "comparison" are still important during the house-hunting process, but many platforms are already working on them. We therefore thought it was more interesting and challenging to look into how "trust" and "remote" intersect. Based on that, we came up with the HMW statement:

How might we enhance the house-hunting experience and help renters trust the process remotely?

Based on our insights, we came up with four design principles that we wanted our response to adhere to:

1

Stress-Free: Help relieve stress and intensity during fast-paced simultaneous interpretation.

2

Informative: Provide information and context to interpreters ahead of time to help them better prepare for the upcoming session.

3

Assistive: Facilitate interpreters in the interpretation process instead of replacing them.

4

Personal: The design should consider individual users and their needs, and also uphold the human quality of those individuals involved in the interaction.

IDEATION

We began by generating as many ideas as possible. From 20+ concepts, we narrowed down to 6 ideas that better responded to our HMW statement and aligned more closely with our design principles. Eventually, we decided to go with IDEA 4, the AR Captioning & Dictionary concept, as we believed it was one of the most innovative ideas and had the most potential.

CONCEPT TESTING 1

Through research, we found that interpretation is particularly difficult when interpreters are working with unfamiliar topics. Therefore, we hypothesized that when an unfamiliar term comes up during simultaneous interpretation, providing interpreters with contextual information can make the process less stressful.

Concept 1 - Assisted with Imagery


Some of the questions we had for our initial prototype:
1. What form of captions is better? Paragraph or keyword?
2. What do interpreters think about the speed of the captions?
3. Which types of contextual information are more helpful: caption, image, or ASL signing video?
4. Is it distracting for interpreters to look at the content while interpreting?

“ASL is like painting in a 3D space. You describe the shape and size, texture, the abstract info rather than using linear words.”
— Participant 3

How We Did It

We made 3 concept variations and tested them with 5 participants. Because sign language is a visual language, in addition to showing interpreters captions and how to sign a word, we included an imagery option so that interpreters could look at a picture and sign from it.

We mimicked the simultaneous interpretation session through role-playing, using an iPad as the interface and placing it in the participant's line of sight to test each concept.

Caption Only
ASL Signing Video
Image
Concept Testing with an ASL Interpreter Participant

What We Learned

1

Video clips of sign language words are not helpful because one word can have multiple ways of signing, depending on the context, and there is not enough time for interpreters to look at the video and mimic it.

2

Images that show a term's relationships are more helpful than images of what the term literally is or means.

3

Preferences for the type of contextual information vary by person and situation.

These insights informed our later design:

SPECULATIVE DESIGN

SignSavvy

AR technology that assists novice interpreters with imagery and captioning in order to help deliver successful interpretation.

Highlight Terminology & Provide with Corresponding Imagery

During the simultaneous interpretation session, SignSavvy uses real-time captioning to analyze the conversation, highlights words that interpreters might not know, and provides the corresponding imagery based on user input, to help interpreters better understand the context.
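
To make the highlighting step concrete, below is a minimal Python sketch of how a live caption could be scanned for terms the interpreter may not know. The glossary, familiarity threshold, and function name are assumptions for illustration; the actual SignSavvy concept relies on AI-driven prediction rather than a fixed word list.

```python
# Hypothetical domain glossary: term -> (imagery hint, assumed difficulty 0-1)
GLOSSARY = {
    "microcontroller": ("photo of an Arduino board", 0.7),
    "breadboard": ("diagram of a solderless breadboard circuit", 0.6),
    "soldering": ("photo of a soldering iron joining wires", 0.5),
}


def highlight_terms(caption: str, familiarity: float) -> list[tuple[str, str]]:
    """Return (term, imagery hint) pairs for words likely unfamiliar to an
    interpreter with the given self-reported familiarity (0 = novice, 1 = expert).
    Sketch only: the real concept would use AI prediction, not a fixed list."""
    flagged = []
    for word in caption.lower().replace(",", " ").replace(".", " ").split():
        entry = GLOSSARY.get(word)
        if entry and entry[1] > familiarity:
            flagged.append((word, entry[0]))
    return flagged


caption = "Connect the microcontroller to the breadboard before soldering."
for term, hint in highlight_terms(caption, familiarity=0.55):
    print(f"{term}: show {hint}")
# A familiarity of 0.55 flags "microcontroller" and "breadboard" but not "soldering".
```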

Set Preferences

Before interpreters start the session, they can input their intent, subject field, and familiarity with the topic area to help the artificial intelligence better predict and determine what to show.
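
A small sketch of how those pre-session preferences might be represented, assuming exactly the three inputs described above; the class name and the mapping from familiarity levels to a numeric scale are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class SessionPreferences:
    """Hypothetical container for the pre-session inputs described above."""
    intent: str          # e.g. "office hours", "medical appointment"
    subject_field: str   # e.g. "physical computing"
    familiarity: str     # "novice", "general", or "expert"

    def familiarity_score(self) -> float:
        # Map the self-reported level to the 0-1 scale used for highlighting
        return {"novice": 0.3, "general": 0.6, "expert": 0.9}[self.familiarity]


prefs = SessionPreferences("office hours", "physical computing", "novice")
print(prefs.familiarity_score())  # 0.3
```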

STORYBOARD

The storyboard below demonstrates the device's application in real life.

Jane has recently passed her interpreter exam, and she is now officially a certified interpreter.

Jane receives her first interpretation job, scheduled for the next morning at the University of Washington and requested by a deaf student for office hours about her physical computing class.

Jane is a bit concerned because, despite her interpretation training, she has no idea what physical computing is, and she wants to do well on her first job.

The next day, Jane gets ready to leave for work. She puts on her augmented reality contact lenses, which will assist her with her interpretation job. She hopes this will help her interpret even an unfamiliar topic.

Jane arrives on campus and enters the building where the job is scheduled.

This is Isabelle. She is the deaf student that requested an interpreter.

Jane and Isabelle have a conversation in ASL, and Isabelle tells Jane that she is stuck on one of her physical computing projects.

After Jane has a basic understanding of the context, she inputs her preferences so that the artificial intelligence can make better predictions and suggestions during interpretation.

This is Professor Harrington. She is finally ready for Isabelle to come in.

During the conversation, Jane does not know the technical language and terms being used. Luckily, she is wearing her AR lenses, which process what is being said in real time.

Jane can see captions, highlighted specialized terms that are not familiar to her, and a corresponding visual to help her understand what each highlighted word is and how it is being used in context.

Isabelle and Professor Harrington end up having a fruitful conversation. Isabelle now understands her homework, making the interpretation job a success.


What We Learned

Initially, we chose augmented reality technology because of its non-intrusiveness. However, even though we tried to set up the prototype to be as close as possible to what interpreters would see through AR glasses, it was still hard for interpreters to imagine how it would look without an actual AR device. During testing, many participants expressed concern that AR technology is still a bit far from their everyday lives.

Participants also mentioned that receiving information about the upcoming session is very important for helping interpreters be better prepared.

In addition, aside from showing images of unfamiliar terms, synonyms also help interpreters understand what a term means in context without prior knowledge of it.

What's more, some participants had trouble entering their familiarity with the topic in the preference settings, since there was no standard for determining whether they were novice, general, or expert.

Therefore, we made 4 major design decisions:

DESIGN

We made a rough sketch of the app's flow and, based on that, created a mid-fidelity prototype for usability testing.

USABILITY TESTING

We conducted in-depth usability tests with 2 interpreter participants. We made an interactive prototype in Figma and displayed it on an iPhone for participants to use. We asked participants to think aloud so that we could hear their thoughts, concerns, and frustrations while they interacted with the prototype.

These 2 participants were new to our project. They were thrilled by the idea of an app designed for interpreters, since there is currently no such product on the market.

Even though the prototype was only wireframes, the feedback we received was insightful and actionable. Participants not only gave us detailed feedback on the interface but also provided fresh perspectives on the app's structure and functionality to make it fit better into their current workflow.

Usability Testing with an Interpreter

What We Learned

Here are 3 key things we learned from the testing:

1

Deaf individuals can also benefit from seeing contextual information. 

2

Interpreters often work in pairs. We should consider how the app might look different when the interpreter is actively interpreting versus off duty.

3

Users want more autonomy. Instead of relying on the system and AI, give users more flexibility to edit the recommendations and their preferences.

We then turned all of this feedback into actionable steps to further refine the app's UX, UI, and other functions.

FINAL DESIGN

Challenges

How might we assist novice interpreters in better preparing for their interpretation sessions?

Response

SignSavvy, an app that provides live imagery and text when unfamiliar terms occur during simultaneous interpretation sessions. It also helps interpreters schedule and keep track of their upcoming sessions, and learn new vocabulary that might arise in future sessions.

NEXT STEPS

Information Confidentiality

Since everything interpreters learn from a session is confidential, we need to make sure that the conversation is erased after the session and that only the vocabulary is kept for learning purposes.

User Privacy

As interpreters will be documenting some client information in SignSavvy for future reference, we need to make sure the app is secure so that clients' personal information cannot be leaked.

Design for Different Languages

"I want the app right now. There is no such app designed specifically for interpreters currently."
Many of our participants had said to us. Right now, SignSavvy only has an English version; however, sign language interpreters are all over the world. Designing for different languages is important so that more interpreters can use SignSavvy.

Develop & Test

It would be great to collaborate with engineers and build out the product so that we can test it in different contexts and environments, improving not only the interface and user experience but also the underlying algorithm. Ultimately, we hope SignSavvy can truly serve and help interpreters, especially those just starting out.

FINAL THOUGHTS

SignSavvy is a special project for me. It gave me an opportunity to explore a whole new field, interpretation services, and to learn about a new language, American Sign Language (ASL). I respect every sign language interpreter. They are an important part of accessibility and inclusion, advocating for deaf and hard-of-hearing individuals. I also fell in love with ASL. It is a beautiful language.

"ASL is like painting in a 3D space. You describe the shape and size, texture, the abstract information rather than using linear words."

- Participant 3

One of the biggest challenges of this project was recruiting people. It was hard to start because none of us knew any interpreters. Luckily, we received great help from the UW Interpreter Coordinator and some Ph.D. researchers, who connected us with people in the field. It was also tough to find enough participants to interview and to run usability tests with, since interpreters are very busy. From this, I learned the importance of maintaining relationships.

After the initial round of interviews, we followed up with all participants, asking if they would like to take part in the usability testing. We also updated them on our design and made hand-drawn, customized postcards to show appreciation for their time and input. Because of this, many of our participants were willing to come back and help with our project again.

Customized Thank You Cards for Participants
