User Research at Saturn: Driving Innovation through User Empathy

By Joe Parry

July 8, 2024

🛠 Building to Solve a Personal Problem

Saturn emerged from a genuine problem that Dylan and his classmates observed firsthand while attending Staples High School. This circumstance represents the purest form of user research: solving one’s own problems. To build a great product that brings value to people, it’s essential to put yourself in their shoes and see the world through their eyes. When that person is yourself and those around you, the exchange of information from user research to product building is immediate, allowing you to directly address and solve your own challenges.

Initially, the direct experience of students at Staples drove the development of Saturn. An intimate understanding of students’ needs at a single high school with a particular set of conditions meant that user research was naturally integrated into the product development process. As the product has scaled and evolved, the distance between user problems and product development has grown, prompting a more focused effort on understanding how users experience the product and addressing their needs. A continued focus on user research ensures that as the product changes, it continues to resonate with users and solve the right problems. Maintaining this close connection with our target audience has become increasingly critical, especially since the team is no longer in high school themselves.

As Saturn has grown and expanded into dozens, then hundreds, and now thousands of high schools across the US, what started as informal product feedback (general observations and conversations with classmates) has evolved into the user research practice we employ at scale today.

⚖️ Our Approach to User Research at Scale

We integrate user research into three primary phases of the product development process: ideation and brainstorming, design prototyping, and post-launch feedback. 

  1. 🧠Ideation and brainstorming: Prior to designing a new feature or user experience, we’ll ideate on various problems we could solve. These could arise from things we might want to see in the product ourselves, from a principled direction we want to take as a business, or from feedback we’ve received from users in the past. As an example: we’ve long heard a desire from users for a groups feature to organize clubs and teams on the platform; we’ve recently completed development of this feature and are beginning to roll it out.

    It’s important to conduct research up front to make sure we are focused on the right problems, and not potentially solving for a problem that doesn’t exist; after all, we have finite resources and must allocate them judiciously! At this stage we’ll distribute surveys to our users to solicit input, have open forum discussions to see how various ideas resonate, and gauge how users are reacting to the direction.


  2. 🎨Design prototyping: Once we have a clear-ish direction on the problem to solve, we move into the design phase, where we explore several visual options for what the experience could look like. Typically at this stage we will have several viable candidates for the final product, and this is another good juncture to get reactions from our users on preferences, and to catch any blind spots we may have had during the design phase. We’ll typically do this informally by showing our community of beta test users various prototypes in Figma and synthesizing the feedback for design to incorporate and iterate upon.

  3. ♻️Post-Launch Feedback: After rolling out a feature or change, along with looking at data to see how users are engaging with the product, we also continue to gather qualitative feedback from a broader set of users in production. This ongoing research helps us understand how users are experiencing the final product and allows us to make necessary adjustments based on real-world usage.

🤔 Controlling for User Bias

One acute challenge we face in conducting user research is that we can receive biased feedback if we’re not diligent about accounting for it. We have a champagne problem here — our most engaged users LOVE the product, and are eager to give back and help add to the experience by participating in research and product feedback efforts. As a result, if we’re not careful to source insights from a diverse and representative sample of users, we can subject ourselves to an echo chamber of one-sided feedback, only to find, once we act on the research and build something, that the broader adoption isn’t there.

For example, our highly engaged community of beta testers, which tends to skew toward organized, type A user personas (they are calendar power users, after all), may request a set of highly customizable features because they would actually use them. This doesn’t mean we shouldn’t cater to power users and build features to accommodate their preferences, but we must be careful not to mistake this as a broader demand from our entire user base. To address this, we try to be intentional about recognizing when user bias is acceptable and when it’s not. For instance, user bias may be less critical when gauging usability reactions to a design prototype than when brainstorming potential new features. To control for bias in practice, we gather feedback from a diverse subset of our users across various dimensions when the sample distribution is a relevant input.
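
To make that concrete, here’s a minimal sketch of what sampling across dimensions can look like. This isn’t our actual tooling, and the column names (like engagement_tier) are hypothetical, but it captures the idea: sample within each stratum rather than taking whoever volunteers first.

```python
# A minimal sketch (not Saturn's actual tooling) of drawing a stratified
# research sample so feedback isn't dominated by the most engaged personas.
# The column names, like "engagement_tier", are hypothetical.
import pandas as pd

users = pd.DataFrame({
    "user_id":         range(1, 9),
    "grade":           [9, 9, 10, 10, 11, 11, 12, 12],
    "engagement_tier": ["power", "casual", "power", "casual",
                        "power", "casual", "power", "casual"],
})

# Sample evenly within each tier instead of taking whoever volunteers first,
# so beta-tester enthusiasm doesn't drown out the casual majority.
sample = users.groupby("engagement_tier").sample(n=2, random_state=0)
print(sample)
```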

Saturn’s team on a User Research call

🏭 Our Process: Tactics we use to derive user insights

Now down to the nuts and bolts. Each week, we take a question that is top of mind for our team and conduct a research “mission”. This mission could map to any of the three phases in the lifecycle detailed above (ideation, design, post-launch), which also informs the criteria for the audience we select to participate in the mission. Once we’ve dialed in the objective of the mission and the audience criteria, we carry out the research through various channels:

  1. 📜Opt-in surveys embedded in the product — This method is best for when we want to form a quantitative baseline and source a higher volume of feedback on a concrete, specific question, such as “How many groups are you a member of at your school?” This method also allows us to target users across a wide variety of user attributes and usage patterns, so we can slice the results and identify differences that stand out between user personas (see the sketch after this list).

  2. 🖥️User interviews and focus groups over Zoom — This method is best for more open-ended, exploratory discussions, where we want to discuss ideas with our users and gauge reactions in a more flexible environment.

  3. 📳Group chats and Instagram polls — We engage a close community of beta testers / volunteers through a series of group chats and our Instagram. These channels are best for getting a quick, low-touch signal on a question that we need an answer to, and where user bias is less of a concern. An example could be “Does your school prom include sophomores or only juniors and seniors?”

  4. 🏫School visits and in-person events — We periodically host events on or near school campuses in our network, and these are a great opportunity to get unfiltered product feedback as well as observe users interacting with the app in real time.

  5. 🗣️AI audio survey tools — More recently we’ve been experimenting with AI-based survey tools like Alpharun, wherein we pose questions in a survey format but the user responds via audio; the feedback is then synthesized into an AI-generated summary with common themes and takeaways. This method acts as a hybrid of sorts between a survey and a live user call over Zoom, and is time-saving for both parties.
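
As promised in tactic 1, here’s a rough sketch of the kind of slicing we do on survey results. The data, column names, and persona buckets are hypothetical rather than our production pipeline, but it shows the gist: join responses with user attributes, bucket users into personas, and compare answers across the slices.

```python
# A minimal sketch (hypothetical data and column names, not our production
# pipeline) of slicing in-product survey results by user attributes.
import pandas as pd

# Each row: one survey response joined with the responding user's attributes.
responses = pd.DataFrame({
    "user_id":         [1, 2, 3, 4, 5, 6],
    "grade":           [9, 10, 11, 12, 11, 9],
    "weekly_sessions": [2, 14, 30, 5, 22, 1],
    "num_groups":      [0, 2, 5, 1, 4, 0],  # "How many groups are you a member of?"
})

# Bucket usage so we can compare casual users against power users.
responses["persona"] = pd.cut(
    responses["weekly_sessions"],
    bins=[0, 5, 20, float("inf")],
    labels=["casual", "regular", "power"],
)

# Slice the answer by persona to spot differences that stand out.
print(responses.groupby("persona", observed=True)["num_groups"].mean())
```

If, say, power users report belonging to far more groups than casual users, that difference shapes how broadly we interpret the demand before committing to a build.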

Saturn User Research Calendar / Roadmap

🚙Research in Action: An illustrative example 

One recent example that underscores the importance of staying close to our users occurred on a school visit. We hosted an ice cream social at a nearby school to showcase and test our public events functionality on the calendar with a real-world audience. Prior to the event, we placed it as a public event on the school’s calendar, and to enter, students were required to show that they had found the event and RSVP’d. After RSVP’ing yes, a modal would appear prompting users to also add the event to an external calendar (Apple Calendar or Google Calendar).

As we observed students coming through the line, we began to notice something interesting. When the external calendar prompt appeared, many students were tapping the Add to Gcal button. When we inquired further, we discovered that most of them didn’t actually use Google Calendar, but were simply tapping the option out of confusion about how to dismiss the modal. Back at HQ, we had been debating the utility of this integration feature and whether it could be removed to streamline the product; but when we looked at the data, we saw a meaningful number of users adding events to their Gcal, and so we had opted to keep it, believing users were getting value from it. After repeatedly seeing users tap the option as a means to dismiss the modal, we realized that the data was misleading, and that observing the user behavior firsthand was the confirmation we needed to proceed with removing the feature.
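
For the technically inclined, here’s a hypothetical sketch of the instrumentation pitfall at play. The event names are illustrative, not our real analytics schema, but they show why raw tap counts misled us: a tap alone can’t distinguish genuine intent from an escape hatch, while a completion signal can.

```python
# A hypothetical sketch of the pitfall above (event names are illustrative,
# not Saturn's real analytics schema). A bare "gcal_tap" can't distinguish
# genuine intent from users tapping the button just to escape the modal;
# logging whether the export actually completed disambiguates the two.
events = [
    {"user": 1, "action": "gcal_tap", "export_completed": True},
    {"user": 2, "action": "gcal_tap", "export_completed": False},  # tapped to dismiss
    {"user": 3, "action": "gcal_tap", "export_completed": False},  # tapped to dismiss
    {"user": 4, "action": "modal_dismissed", "export_completed": False},
]

taps = [e for e in events if e["action"] == "gcal_tap"]
completed = [e for e in taps if e["export_completed"]]

print(f"raw tap rate:      {len(taps) / len(events):.0%}")       # looks like demand
print(f"completed exports: {len(completed) / len(events):.0%}")  # the real signal
```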

Saturn team at a campus event