
Styler—the hands-free hairstyling app


Styling hair can be a hassle—especially when you're following a hair tutorial. Between the combs, brushes, and hair dryers in your hands (or the hair product coating your fingertips), interacting with a phone screen becomes a tricky, messy task.


Styler is a gesture-based app that lets users watch and interact with hair tutorials without touching the screen. The eight gestures used to control the app are: 

1) Play

2) Pause

3) Rewind/Previous Step

4) Fast-forward/Next Step

5) Zoom In

6) Zoom Out

7) Open/Close Overview

8) Open/Close Help Menu


School (solo)


Sep-Oct 2021 (8 weeks)


Figma, Dimension, Premiere


Gesture-based interaction, Wizard of Oz testing, green screen filming, brand design, prototyping


Product demo


My process fell into the following stages: A) user research to understand common pain points, B) gesture design, C) app design, and D) creation of the product demo.

The problem—styling is messy!


Interview with target user

The original project brief didn't ask us to do any research; instead, it provided three prompts: a cooking app, a music-streaming app, or a car navigation system—all gesture-based. 


That said, I had something different in mind: hairstyling (I was the only student to delve into a problem space all my own). 

To keep up with my classmates (who were going straight into gesture design), I interviewed one family member about her experience following hair tutorials and found four pain points:

1) Difficulty when interacting with one's hair in addition to a phone screen

2) Difficulty with the size of the phone screen (not easy to see)

3) Always "going back" to re-watch parts of the tutorial because she missed a step

4) A habit of reading the step's description/watching the whole video before starting

Problem Statement

How might we design and onboard users to a gesture-based app interface so they can interact with hair tutorials remotely and seamlessly?

The solution—Styler

It was clear to me from the interview that hairstyling (at least where tutorials are concerned) could really benefit from a gesture-based interface. So, I created Styler, a gesture-based hairstyling app, and began the lengthy and difficult process of designing its gestures. 

Gesture design

I began with ten gestures: Play, Pause, Rewind, Fast-forward, Previous Step, Next Step, Zoom In, Zoom Out, Open/Close Overview, and Open/Close Help Menu.


A few of them came to me easily, such as Pause (a "stop" motion), Play (a point motion, as if to say "go"), and Help Menu (an extended hand). Others were more ambiguous. I ended up creating a single gesture for both the Rewind and Previous Step functions, and did the same for Fast-forward and Next Step. These would later be differentiated modally within the UI. 
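The modal disambiguation described above can be sketched in code: one physical gesture resolves to different actions depending on which mode the app is in. This is an illustrative sketch only—the gesture and mode names below are my assumptions, not taken from the project.

```typescript
// Hypothetical types; the actual gestures and modes in Styler
// are not specified here, so these names are placeholders.
type Gesture = "swipeLeft" | "swipeRight";
type Mode = "videoPlayback" | "stepOverview";
type Action = "rewind" | "previousStep" | "fastForward" | "nextStep";

// One gesture, two meanings: the app's current mode decides
// whether a backward gesture means Rewind or Previous Step.
function resolveGesture(gesture: Gesture, mode: Mode): Action {
  if (gesture === "swipeLeft") {
    return mode === "videoPlayback" ? "rewind" : "previousStep";
  }
  return mode === "videoPlayback" ? "fastForward" : "nextStep";
}
```

Collapsing two functions onto one gesture this way halves the number of backward/forward gestures users must memorize, at the cost of making the current mode something the UI must always communicate clearly.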

Four iterated gestures

Gesture representation

Representing these gestures in a 2D format was even trickier. The lack of depth and space made the gestures much less concrete and easier to confuse. I tried three different styles for each gesture (one with a photographed hand, one in only outlines, and one against a colored background with a phone nearby for reference) in the hopes that one of them would clear up the confusion.

A round of testing with my classmates gave me an idea of which gestures and representations worked and which didn't. As you'll notice from my final prototype, I ended up going with a representation halfway between the outlined-hand idea and the solid-color idea.


Gesture design and representation testing results

Task flows

Task flows were an important step towards fleshing out the functions and features of my app. Since I didn't have a brief to refer to, I returned to my interview notes, picking out the actions my participant took while following hair tutorials.


Task-flowing these actions allowed me to track such a user's journey through my app, raising new questions and revealing opportunities to consolidate. One such opportunity was the aforementioned combination of Rewind with Previous Step, and Fast-forward with Next Step.

UI sketches

Next came the visual design of it all. For this particular project, we were encouraged to look outward at the UI of similar apps and use them as a basis for our own designs. The UI on the far left is the one I chose to build off of. You can read my rationale below it.

Using it as inspiration, I sketched the approximate look of four main screens. As you'll see in my wireframes, the visual design sticks close to these sketches, but there are some notable changes—such as the orientation of the slider in "Step Overview" and the organization of the homepage. 

Echo and semantic feedback

One of my UI sketches is labeled "Echo Feedback," but what is it? Echo feedback describes the reaction of the interface to whatever the user is doing—in my case, making a gesture. Styler's users need to have some kind of indication that the app is paying attention. Semantic feedback, on the other hand, is a visual cue notifying users that they've successfully completed a task.
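The echo/semantic cycle described above can be modeled as a small state machine: the interface alternates between "I'm listening/working" (echo) and "that succeeded" (semantic) states. The state names below are my own labels for the visuals described in this section, not terms from the project.

```typescript
// Each state corresponds to a visual in the wireframes:
// circle (echo) -> hand (semantic) -> filling hand (echo) -> checkmark (semantic).
type FeedbackState =
  | "listening"   // echo: a circle shows the app is paying attention
  | "recognized"  // semantic: the circle becomes a hand (gesture detected)
  | "executing"   // echo: the hand fills with color as the action runs
  | "confirmed";  // semantic: a checkmark marks the completed task

// Advance the cycle one step; after confirmation it restarts.
function nextFeedbackState(state: FeedbackState): FeedbackState {
  switch (state) {
    case "listening": return "recognized";
    case "recognized": return "executing";
    case "executing": return "confirmed";
    case "confirmed": return "listening";
  }
}
```

Framing the feedback as an explicit cycle makes the design rule easy to check: every echo state must be followed by a semantic state that resolves it, so the user is never left guessing whether their gesture landed.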

In this UI sketch, the three dots represent my echo feedback. In my wireframes, it's a single circle. When the circle turns into a hand, that's semantic feedback. The cycle starts over when the hand starts to fill with color, becoming echo feedback as the user is shown that their gesture is being carried out. The checkmark at the end of the countdown is the matching semantic feedback. See how it works in the wireframes below:


I ultimately created four tasks to show off my gesture design. I started with very long flows and quickly realized I was biting off more than I could chew. For my final design, I narrowed my scope to shorter, user-testing-friendly tasks. 


Wireframes, set #1 (click to access the Figma file)


Wireframes, set #2 (click to access the Figma file)


Onboarding, wireframes

Styler's onboarding is especially important for two reasons: 1) it's the user's first experience with the app and sets a precedent for future interactions, and 2) if done correctly, it can ease the app's learning curve and leave users feeling invested in the experience.


Given that I had eight gestures to "teach," I had to consider the onboarding flow's length, cognitive load, potential mis-rememberings on the user's part, and general pacing. I'm still rather dissatisfied with my onboarding experience (I think it's too long and too wordy); with more time, that's where I would focus my efforts.



Onboarding wireframes (click to access the Figma prototype)

Wizard of Oz testing

When it came to user testing, I was initially stumped. How could we test a gesture-based technology that wasn't, well, real? The answer: "Wizard of Oz" testing. In these tests, the tester shares the prototype from their screen and acts as the "mouse," doing whatever the user tells them to. 

At first, Wizard of Oz testing was a little awkward. It almost felt as if my participant was more hesitant to try things because she was not truly the one "in control." After a bit of encouragement, she gave me some very valuable feedback, including: 

Wizard of Oz user-testing script

1) Practice what you preach: using gestures right off the bat in onboarding, instead of a tapped "Next"

2) Lotta gestures: either reduce the number of gestures or make them easier to remember

3) Information overload: the onboarding isn't too long, but it packs in way too much info


Style guide

I knew from the start that I wanted a minimalistic design. Since there's already so much going on with the gestures (e.g., their semantic and echo feedback), I figured I'd keep the color palette bright but laid-back: blues, pinks, and yellows.

Styler style guide

Product video, green screening

Until this project, I'd never used a green screen and scarcely understood how one worked. Luckily, it wasn't all that complicated—just a bit tedious. 

First, I set up a tripod and recorded myself making the gestures in front of a green piece of paper. Then, I imported the footage into Premiere and used the "Ultra Key" effect to cut everything but my hand out of the image. Lastly, I created a 3D background in Adobe Dimension and placed the hand outline over it, achieving the look seen in my product video. 


Screenshot from Styler product video

Put a bow on it!

Beyond teaching me how to design gesture-based technology, this project allowed me to exercise a broad range of skills, from wireframing to Wizard of Oz testing to green screening. I'm most proud of the app itself: I think it solves a real problem, though I could have done more research to back it up.

With more time, I would interview more users, clean up my hand gestures (they still look a little sketchy and choppy), re-record my VO (due to my recording environment, I couldn't speak loudly, and the audio suffered as a result), and iterate on my onboarding flow (it still feels a little clunky, especially toward the end, with the introduction of modes).
