UX Product Designer · Provo, Utah  ·  Available for new roles

Turning
problems
into usable
products.

I run user interviews before the first wireframe, track behavior with FullStory and Sigma after launch, and write JIRA tickets for developers in between. Highlighted in this portfolio are one shipped AI-assisted application, six features deployed across GoReact and RainFocus, and one concept project — from WCAG 2.1 compliance that unlocked higher-ed contracts to a self-serve payment flow that cut the sales team out of every learner transaction.

10+
Experience Years designing digital products
6
Case Studies GoReact & RainFocus · +1 concept in progress
5
Fortune 500 Clients Oracle · Cisco · Samsung · Gartner · VMware
45+
Research Sessions Interviews coded, tested, and acted on

Ten steps — not as a checklist, but as a commitment. Most UX processes end at step 07. Steps 06, 08, and 10 — stakeholder alignment, implementation support, and measuring results after launch — are where the work either holds together or falls apart. Every GoReact project in this portfolio ran to step 10.

The Product UX Designer, Product Manager, and Engineering Lead collaborate across all ten steps, drawing on ideas from inside and outside the business. This isn't a waterfall — each step informs the others, research can reopen discovery, testing can reframe the problem, and the loop continues as long as the work demands. That loop is already changing shape. Collaborating in real time with a tool like Claude — generating a working concept, testing it with real users the same day, bringing the findings back to iterate immediately, and testing again — compresses what once required separate design and engineering sprints into a single cycle. It redefines what a product designer with strong research instincts can deliver.

01
Define the Problem

Frame the core challenge, constraints, and success criteria before any design work begins.

02
Understand Users & Context

Research real users through interviews and observation to surface what they actually need.

03
Discovery & Competitive Research

Map the existing landscape to identify gaps, patterns, and opportunities the design must address.

04
Low-Fidelity Solutions

Sketch and wireframe rapidly to explore the solution space before committing to any direction.

05
Iterate & Test

Put rough designs in front of real users, learn fast, and refine based on what breaks.

06
Align Stakeholders

Bring product, engineering, and leadership into the design before it becomes costly to change.

07
Prepare for Dev

Produce specs, component documentation, and handoff materials that engineers can build from directly.

08
Support Implementation

Stay present during build to answer questions, review work in progress, and catch drift early.

09
Product Test

Validate the near-shipped experience against the original problem definition with real participants.

10
Measure Results

Assess outcomes against success criteria and document what the design actually changed.

Selected
Work

One shipped AI-assisted application built with Claude, six shipped features at GoReact and RainFocus — full process, no visuals due to NDA. One speculative concept project with complete visuals.

L
AI-Assisted Development Live Application

LockedIn — Merit-Based Player Evaluation PWA for Youth Soccer

I designed and built a full progressive web application using Claude as my development partner — no prior software deployment experience. LockedIn is a merit-based player evaluation system for my U13 youth soccer team that tracks attendance, fitness, and effort across a 23-practice season. This case study documents what happens when a product designer uses AI to ship real software to real users.

Product Design + AI-Assisted Dev PWA · Mobile & Desktop Firebase · Firestore Vanilla JavaScript Cloudflare Pages Voice Coaching (Google TTS) Spotify Integration Real-Time Data Sync
Role
Designer, Product Owner, & AI-Assisted Developer
AI Partner
Anthropic Claude · Conversational Development
Platform
Progressive Web App · Mobile-First · Deployed to Cloudflare Pages
Domain Knowledge
US Soccer C License · Liverpool FC International Academy Utah · 6+ years coaching competitive youth soccer
Outcome

Shipped a production PWA used by 15 players, 3 coaches, and their parents — with real-time data sync, voice-coached practice timers, a merit-based scoring system, a template library of US Soccer-aligned practices, and Spotify integration — all built through AI conversation with no prior deployment experience.

The Problem

As a youth soccer coach working with 12–13 year olds, I needed a system to make position selection fair, transparent, and effort-based rather than subjective. Players and parents deserved to see that playing time was earned through consistent attendance, fitness effort, and engagement — not favoritism. No existing tool combined practice tracking, player evaluation, and coaching tools in one place. I also needed a voice-coached practice timer so my assistant coaches could run structured sessions independently.

The Approach — AI as Development Partner

I'm a product designer, not a software engineer. I had never deployed an application. Instead of learning a framework, I partnered with Anthropic's Claude to build LockedIn through iterative conversation — describing what I needed, reviewing the output, testing on my phone, and refining. Over multiple sessions spanning weeks, we built a ~5,500-line single-file vanilla JavaScript PWA with no build step, no framework, and no dependencies beyond Firebase and a few CDN libraries. Every design decision was mine. Claude translated those decisions into working code.

What I Designed & Shipped

The application has seven core systems, each designed and iterated through AI conversation:

Merit Scoring Engine — A 6-point weekly maximum across attendance (1 pt), fitness (1–2 pts), and effort (1–3 pts) with a 5-week repeating cycle. Cumulative scores determine position selection tiers. Game bonuses reward players who meet certain thresholds. The system is transparent — every player and parent can see their scores.

Practice Timer with Voice Coaching — A timer that runs through structured practice activities and exercises with Google Cloud TTS voice announcements. The voice calls out activity transitions, exercise names, durations, and remaining practice time. Audio is amplified through the Web Audio API GainNode for outdoor use. Beep tones signal transitions.

Recording Interface — Per-player, per-practice data entry for Yes/No activity completion, RPE (Rate of Perceived Exertion) self-ratings, coaching observations (positive/negative), wellness checks, and arrival times. Every data point feeds the merit scoring engine.

Player Stats & Analytics — Season-long analytics with Chart.js visualizations including RPE trends, attendance streaks, team comparison radar charts, and arrival time patterns. Coaches see every player; players and parents see only their own data.

Individual Development Plans — Per-player IDP pages with customizable habit tracking questions and coach notes. Coaches can save and manage IDPs through the library system.

Library System — Save, organize, and import entire practices, individual activities, or single exercises. Includes 6 pre-built US Soccer U13 templates following the Play-Practice-Play methodology. A left-nav library page with search, sort, and folder organization (Practices, Activities, Exercises, Templates, Stats, IDPs).

Spotify Integration — PKCE OAuth flow connecting to Spotify's Web Playback SDK. Coaches can play music during practice with automatic volume ducking during voice announcements.
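The weekly scoring rule is simple enough to capture in a few lines. Below is a minimal sketch assuming a per-week input of attendance plus self-reported points; the function name and field names are illustrative, not LockedIn's actual code:

```javascript
// Illustrative sketch of the merit scoring rule described above.
// Caps mirror the stated system: attendance 1 pt, fitness 1–2 pts,
// effort 1–3 pts, for a 6-point weekly maximum. Input shape and
// names are hypothetical, not the app's real API.
function weeklyMeritScore({ attended, fitnessPts, effortPts }) {
  const attendance = attended ? 1 : 0;
  // A player who missed practice earns nothing that week;
  // otherwise clamp each category to its stated range.
  const fitness = attended ? Math.min(Math.max(fitnessPts, 0), 2) : 0;
  const effort = attended ? Math.min(Math.max(effortPts, 0), 3) : 0;
  return Math.min(attendance + fitness + effort, 6); // 6-pt weekly cap
}
```

Cumulative totals of this weekly score would then feed the position-selection tiers described above.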

Technical Decisions I Made as a Designer

Every architectural decision was driven by my constraints as a non-engineer deploying to real users:

Single HTML file — no build step, no bundler, no npm. I could drag-and-drop deploy to Cloudflare Pages.

Firebase Auth + Firestore — multi-device sync so coaches, players, and parents share the same data.

Network-first service worker — after discovering that a cache-first strategy caused data loss on redeployment, I switched to network-first with an offline fallback.

LocalStorage + Firestore dual-write — every save writes locally first (instant), then syncs to Firestore (durable). A pending queue retries failed syncs when connectivity returns.

Seed-only-if-empty pattern — after a data-loss incident during redeployment, I built a guard that only seeds Firestore on true first-time setup, preventing defaults from overwriting real data.
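The dual-write-with-retry-queue pattern can be sketched in a few lines. This is an illustrative version with the storage backends injected so the idea stands alone; in the real app those roles would be played by localStorage and Firestore, and the class and method names here are hypothetical:

```javascript
// Sketch of a local-first dual-write store with a pending retry queue.
// `local` is a synchronous, instant store (e.g. a localStorage wrapper);
// `remote` is an async, durable one (e.g. a Firestore wrapper).
// Names are illustrative, not LockedIn's actual implementation.
class DualWriteStore {
  constructor(local, remote) {
    this.local = local;
    this.remote = remote;
    this.pending = []; // failed remote writes waiting for connectivity
  }

  async save(key, value) {
    this.local.set(key, value);          // 1) write locally first (instant)
    try {
      await this.remote.set(key, value); // 2) then sync to the backend
    } catch (err) {
      this.pending.push({ key, value }); // 3) queue the write on failure
    }
  }

  // Retry queued writes; a still-failing write simply re-queues itself.
  async flush() {
    const retry = this.pending.splice(0);
    for (const { key, value } of retry) {
      await this.save(key, value);
    }
  }
}
```

In a browser, `flush()` would typically be wired to the `online` event so the queue drains as soon as connectivity returns.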

What This Demonstrates

This project demonstrates that a product designer with strong domain knowledge and clear design thinking can use AI to ship real software — not a prototype, not a mockup, but a production application with authentication, real-time data sync, third-party API integrations, and actual users. The design decisions were mine. The architecture was collaborative. The code was AI-generated and human-directed. The result is a tool that my team uses every practice.

00
Concept Project In Progress

Clip-Tagging Workflow for Sports Video Analysis Tool

Every case study in this portfolio is protected by NDA — the outcomes are real, but the visuals can't be shown. This concept project exists to change that. It's a speculative redesign of the coach's clip-tagging workflow for a sports video analysis tool, targeting a documented usability problem that forces coaches to watch the same game film multiple times. I'm a licensed soccer coach. I know this problem firsthand. This project shows my full process — from research through prototype — with nothing held back.

Speculative UX Design User Research Journey Mapping Wireframing High-Fidelity Mockups Figma Prototype 10-Step Process
Type
Speculative Concept Project
Platform
Desktop + Mobile · Figma
Domain Knowledge
US Soccer C License · Liverpool FC International Academy Utah · 6+ years coaching competitive youth soccer
Design Hypothesis

A unified single-pass tagging interface — where stats, clip cuts, and playlist assignments are treated as simultaneous properties of one moment — can cut post-game film processing time by 50% or more for coaches working without video staff.

This is a speculative concept project, clearly labeled as such. It was created in direct response to the NDAs that restrict the visual artifacts from every other project in this portfolio. All research, decisions, and design thinking here are original. Deliverables are being completed and will replace each placeholder below as they are finished — check back as this project builds out.
01 Define the Problem Complete
Problem Brief

The clip-tagging interface forces coaches to make two complete passes through the same game film to do what should be a single operation — tag a stat, cut a clip, and assign a playlist simultaneously. For the high school or club coach working alone after a full day of teaching or work, that time cost is prohibitive enough to abandon the features entirely.

02 Understand Users & Context In Progress
User Research & Personas

Primary users: the high school head coach who is also a full-time teacher or the part-time club coach who has a full-time day job — managing film review late at night after a game, without a video coordinator. Secondary user: the assistant coach who assists with tagging but lacks training on the platform. Research approach: interviews with coaches from Liverpool FC International Academy Utah and other club-level programs, supplemented by public G2, App Store, and coaching forum reviews to validate pain points at scale.

A fourth persona has been added specifically in response to Hudl Assist: the Assist subscriber — a coach or program that already pays for Hudl's human-analyst breakdown service (launched ~2018, upgraded through 2025). This persona is valuable as a contrast case. If Assist already solves the two-pass problem for them at an added cost, what does that reveal about the value of the underlying workflow? Research with this persona will explore willingness to pay, whether they still interact with raw film after receiving Assist breakdowns, and what capabilities they wish existed in the base interface. This directly sharpens the design question: the solution should deliver Assist-level outcomes through interface, not service.

03 Discovery & Competitive Research In Progress
Competitive Analysis & Current-State Audit

Platforms under review: Hudl (primary — base platform and Assist service tier), QwikCut, Wyscout, Nacsport, Veo, SportsVisio, Track160, and Catapult Video. A key finding already established: Hudl's own answer to the two-pass problem is Hudl Assist — a paid add-on service (~$900–$3,300/year depending on tier) where trained human analysts tag your games and deliver stats plus automatic player clip playlists within hours. As of early 2026, AI auto-tagging via Balltime AI is available for club volleyball only. This makes Assist a direct comparison point for the redesign, not just a feature note — Hudl's strategy was to sell a service layer rather than fix the interface, which is a meaningful product decision to interrogate.

The competitive analysis specifically maps how each platform positions itself relative to this gap. SportsVisio and Playbook Sports are explicitly marketing against coach workflow complexity, targeting programs without video coordinators — the same population this project serves. Also reviewed: multi-attribute tagging patterns from non-sports domains including video annotation tools (Encord, CVAT, V7 Labs), qualitative research platforms (MAXQDA, NVivo), and professional editing software. Current-state journey map of Hudl's base workflow documented step by step, with Assist's service-model flow as a parallel reference.

04 Low-Fidelity Solutions In Progress
Wireframes & Concept Sketches

Core design question: how do you present stat tagging, clip cutting, and playlist assignment as a single unified interaction rather than three sequential modes — without requiring a paid analyst service to do it for you? Initial concepts explored a persistent side panel, a trigger-and-expand overlay, a timeline-first layout, and a floating HUD. The overlay model (Sketch B) was selected as the primary concept: video fills the full frame during playback; a bottom-anchored overlay appears only when a tag is triggered, keeping the coach's view unobstructed at all other times.

A speculative second thread explores a non-intrusive AI suggestion layer — surfacing detected moments as dismissible badges rather than auto-tagging, preserving full coach agency while reducing the cognitive load of watch-and-tag simultaneously. This thread is clearly marked speculative throughout and is informed by Hudl's Balltime AI acquisition (February 2025), which currently supports club volleyball only. All wireframes produced in Figma at 1280×800px.

05 Iterate & Test In Progress
Usability Testing & Iteration Notes

Testing was conducted with 8 coaches from Liverpool FC International Academy Utah and broader club-level contacts — the same population representing Hudl's core high school and club market. Sessions used a 3-condition design: Condition A (Hudl base platform, current), Condition B (redesign prototype), and Condition C (Hudl Assist walkthrough). Each participant completed all three conditions across three sessions spaced one week apart, using a Latin square rotation to control for learning effects.

The Assist comparison (Condition C) was deliberate: if a coach on Assist already gets stats and clips without doing film review themselves, what do they lose? Testing revealed that all 6 Assist walkthrough participants independently raised the same gap — you can't build thematic clip packages from Assist output, and the tags "aren't mine." These findings directly shaped what the redesign needed to preserve and what it could simplify. Six design changes were made as a result of testing; five design decisions were challenged and survived unchanged.

06 Align Stakeholders In Progress
Stakeholder Presentation

In a real product environment, this step involves presenting research findings and design direction to product leadership, engineering leads, and the stakeholder who owns the Assist product line — each with a different primary concern. The presentation is structured to address all four roles simultaneously: the human problem for the product director, the research credibility for the skeptic, the technical feasibility signal for the engineering lead, and the cannibalization question — honestly and head-on — for the revenue stakeholder.

The Assist context sharpens the business framing substantially. Hudl's decision to solve the two-pass problem via a paid analyst service rather than a redesigned interface is a product strategy choice — one that generates revenue from the pain point rather than eliminating it. The deliverables in this step make the case for a base-platform fix, quantify the cannibalization risk with specificity, and offer three strategic paths forward. The deck ends not with a recommendation on the revenue model — that decision belongs to the stakeholders — but with a clear ask: a green light to invest design resources in validating this at higher fidelity.

07 Prepare for Dev In Progress
High-Fidelity Mockups & Developer Handoff

High-fidelity mockups are produced in Figma, working within and extending Hudl's existing dark-UI visual language — this is a feature redesign, not a rebrand. Desktop and mobile/tablet breakpoints are designed in parallel from the outset, since the target user (the teacher-coach reviewing film late at night) moves between a laptop at home and an iPad or phone on the sideline. Each screen is annotated with full interaction specifications: trigger states, hover states, keyboard shortcuts, loading and skeleton states, empty states, and every edge case identified across research and lo-fi testing.

The Assist service model shaped a key constraint in the handoff documentation: since Assist already delivers stats and clips as simultaneous properties of a single tagged moment, Hudl's backend demonstrably supports associating multiple metadata types with one event object. Every spec in this step is written to map onto that existing data model — the goal is an interface improvement that engineering can build without a back-end architecture change. That constraint is stated explicitly in each annotation where it applies, because a designer who understands the data model is more useful in sprint review than one who doesn't.

08 Support Implementation In Progress
Interactive Prototype & Comparison Frames

A fully interactive Figma prototype simulates the complete single-pass tagging workflow — from pressing play on game film through tagging a moment, adjusting the clip window, and confirming the clip with auto-playlist assignment. The prototype is publicly linkable and embedded directly in this portfolio so a hiring team or product reviewer can interact with the solution in the browser without downloading anything or logging into Figma.

The prototype is built with three comparison frames as its structural backbone: the current Hudl base workflow (the two-pass problem as it exists today), the Hudl Assist service model (what coaches receive when the analyst does the tagging for them), and the proposed redesign (one pass, coach-controlled, no additional cost, same output quality). This three-way structure is not a stylistic choice — it is the argument. A reviewer who can walk through all three flows back-to-back understands the design decision in a way that looking at screens alone cannot convey. A short narrated walkthrough video accompanies the prototype for contexts where interaction is not possible (mobile preview, async review, screen recording).

09 Product Test In Progress
Validation Testing on Final Design

This second round of testing uses the interactive hi-fi Figma prototype produced in Step 08 rather than the lo-fi wireframes used in Step 05. The participant pool expands from 8 to 14 — the original 8 participants return for a longitudinal comparison, and 6 new participants from outside the original recruitment network are added to test whether the design's learnability generalizes beyond the coaches who influenced it. The test structure maintains the three-condition design from Step 05 (Hudl Base, Hudl Assist reference, Redesign) to enable direct before-and-after comparison of performance metrics.

The central research question sharpens in this round: the lo-fi test established that the redesign is faster and less error-prone than the base workflow. The hi-fi test asks whether it remains so at full visual and interaction fidelity — and adds a dimension that lo-fi testing could not address: does the redesign change what Assist subscribers actually want? Participant D in Step 05 said, unprompted, "I would not have paid for Assist if the base platform did this." That quote is a hypothesis. Step 09 is the test.

10 Measure Results In Progress
Outcomes & Reflection

Step 10 is the close of the project — the point at which the evidence accumulated across nine prior steps is assembled into a coherent account of what was learned, what changed, and what the design ultimately is and isn't. It is not a victory lap. A final section that only summarizes successes is less useful than one that also names the limits of the work, the decisions that remain genuinely open, and the things that would be done differently with the benefit of hindsight.

The strategic question this project set out to probe — whether a base-platform interface redesign could solve the problem Hudl Assist was built to solve, for the segment of users who adopted Assist as a workaround rather than a preference — has a more specific answer at the close of Step 09 than it did at the opening of Step 01. This section documents that answer directly, names what it cannot answer, and explains why raising the question is more valuable than either ignoring it or overclaiming the conclusion.

01
GoReact

Library Redesign

GoReact is a cloud-based, interactive video skills assessment platform, but its library was confusing and inefficient — organizing files was difficult, sharing content didn't work reliably, and there was no search capability. I was the sole designer assigned to reimagine it from the ground up.

Product Design Wireframing High-Fidelity Mockups User Testing PWA · Mobile & Desktop JIRA Handoff
Role
Sole Product Designer
Collaborators
Product Manager, Lead Developer
Platform
Progressive Web App (desktop + mobile)
Outcome

Became one of GoReact's most powerful features — enabling cross-account content management, reusable templates, and reliable sharing at scale.

Visuals under NDA — full process and decisions below.
The Problem

GoReact, a cloud-based, interactive video skills assessment platform, provides a library for managing and organizing video content and learning resources. The old library was confusing — organizing files was difficult, sharing saved media didn't always work properly, and there was no search capability. The product team wanted to make the library easy to navigate, easy to share, and a useful place to save reusable documents.

My Process

I started by analyzing the old library, reviewing support tickets and client conversations to identify what wasn't functioning. Once I had a clear picture, I created wireframes to review with the product manager and lead developer. I iterated across primary personas and user journeys until we had a strong solution to take into high-fidelity. After user testing and stakeholder presentations, I finalized mockups and wrote JIRA tickets for development. All designs accounted for both desktop and mobile views.

02
GoReact

Making GoReact Accessible

GoReact's application did not meet WCAG or higher-education/government accessibility requirements — putting contractual obligations at risk. I led UX design for the entire compliance effort, working against findings from an external accessibility audit firm.

Accessibility Design WCAG 2.1 VPAT / ACR Screen Readers Adobe XD Keyboard Navigation
Role
UX Designer — Accessibility Lead
Audit Partner
Tenon.io (external firm)
Platform
Desktop + Mobile · Adobe XD
Outcome

Achieved verified compliance, established company-wide accessibility standards, and unlocked additional higher-education contracts.

Visuals under NDA — full process and decisions below.
The Problem

GoReact engaged Tenon.io to conduct an accessibility audit and create a VPAT. Problems addressed included: keyboard tabbing paths; ARIA patterns for JAWS, NVDA, and VoiceOver screen readers; icon and text color contrast (WCAG 4.5:1 standard); tooltip and focus states; application zooming for vision impairments; Windows High Contrast mode support; keyboard drag-and-drop; and clear heading and landmark regions for assistive technology navigation.

My Process

I researched WCAG guidelines and mocked up solutions in Adobe XD for each identified issue. Then I scheduled interviews with GoReact users who had accessibility needs — gathering direct feedback on what worked. I revised designs, documented developer requirements, and implemented changes. We ran follow-up Tenon.io audits iteratively until our conformance report met required standards. The PM, technical lead, and I then codified company-wide accessibility standards for all designers and developers.

03
GoReact

In-App Payment Flow

GoReact had no in-app payment mechanism. Learners had to use a bookstore code or contact an in-house team — an unscalable process the business urgently needed to solve. I designed the full self-serve payment experience.

Product Design · PM Payment UX 10+ User Interviews Staged Rollout FullStory Analytics JIRA Handoff
Role
UX Designer + Acting PM
Research
Remote interviews via Zoom / Meet
Platform
Desktop + Mobile PWA
Outcome

Launched self-serve in-app payments, eliminating sales friction and paving the way for full in-app licensing.

Visuals under NDA — full process and decisions below.
The Problem

GoReact was free for instructors but learners had no way to pay within the application — they had to use a bookstore-provided code or contact an in-house team. Neither was scalable. The business needed a way for learners to pay as they created their accounts.

My Process

I researched in-app payment patterns across multiple applications, then sketched concepts before translating them to wireframes. I reviewed these with another PM and the technical lead, then ran ~10 remote user interviews. After identifying what was and wasn't working, I iterated to high-fidelity and wrote JIRA tickets. We withheld the feature until thoroughly tested, then released to a small user group first — using FullStory and follow-up interviews to iterate quickly before full rollout.

04
GoReact

In-App Feedback & Analytics System

The product team lacked a scalable, direct way to understand user sentiment in the moment. I designed an in-app survey system integrated with Airtable, Sigma, Slack, and FullStory — including AI-assisted response categorization at scale.

Product Design · Analytics Survey Design Airtable + Sigma AI Categorization FullStory Integration Slack Alerting
Role
Product Designer
Tech Stack
Airtable · Sigma · Slack · FullStory
Innovation
AI-assisted categorization at scale
Outcome

Dramatically expanded the user feedback pool in a non-intrusive way, delivering continuous actionable insights that accelerated product decisions.

Visuals under NDA — full process and decisions below.
The Design Challenge

The modal design followed existing product patterns. The real challenge was building a taxonomy of topics to categorize open-ended responses at scale — so we could filter by area of the application. We tuned the survey's appearance cadence carefully: frequent enough to capture sufficient data, infrequent enough not to frustrate users. After launch, we continued adjusting that cadence iteratively.

The System

After collecting several hundred responses, I manually categorized them to build the taxonomy. We then used an AI assistant in Airtable to categorize new responses — with human spot-checks and prompt iteration. Airtable data fed into Sigma dashboards showing category distribution by star rating, and into a Slack channel that included the FullStory replay link for every new response — allowing us to immediately watch the journey that prompted the feedback and create JIRA tickets in real time.

05
RainFocus

Attendee Account Mockups — Oracle & Cisco

RainFocus, a cloud-based event marketing and management platform, needed high-fidelity visuals to explore how attendee data could be embedded onto personal conference accounts. The concept: modular code that event managers could configure to customize the attendee experience.

Visual / Concept Design Adobe Illustrator Photoshop Sales Enablement Enterprise Concept Exploration
Role
Visual Designer
Collaborators
CEO, Sales Team
Clients
Oracle · Cisco (Fortune 500)
Outcome

Supported successful pitches that secured consistent, ongoing work with the event management teams at both Oracle and Cisco.

Visuals under NDA — full process and decisions below.
The Problem

RainFocus, an event management software company in its early stages, needed high-fidelity mockups for client presentations. The project explored how to embed data onto personal web accounts of conference attendees — what the UI might look like, and what kind of data could be surfaced. The vision was modular code blocks that an event manager could add or remove to customize the attendee experience.

My Role

I consulted with RainFocus's CEO and sales team on the concepts being discussed with Oracle and Cisco — then visualized those ideas using Adobe Illustrator and Photoshop. The designs served as talking-point artifacts and concept exploration tools rather than developer-ready specs, enabling the sales team to have concrete, visual conversations with enterprise clients. The resulting work secured ongoing relationships with both Fortune 500 companies.

06
RainFocus

Collection Builder & Widget Builder

RainFocus's service team relied on developers to create session catalogs, registration pages, and exhibitor forms for every event. I designed two interconnected internal tools — a Collection Builder and a Widget Builder — enabling a WYSIWYG approach with no developer required.

Tool Design Adobe XD Journey Mapping WYSIWYG Builder Internal Tooling HTML / JS Output
Role
UX Designer
Target Users
In-house service team
Tools
Adobe XD · Adobe Muse
Outcome

The service team could generate catalogs, login pages, and registration flows without developer support — dramatically reducing turnaround time per event.

Visuals under NDA — full process and decisions below.
Collection Builder

Event managers needed to create information collections — session catalogs, speaker listings, exhibitor directories — output as embeddable HTML/JavaScript for landing pages or emails. Since RainFocus handled all client setup at the time, the in-house service team was my actual target user. I explored options through sketches and journey maps in consultation with the service team, PM, and UX colleagues — then delivered high-fidelity desktop designs in Adobe XD.

Widget Builder

The Widget Builder let the service team create forms, session catalogs, login pages, and registration processes — outputting embeddable HTML/JavaScript. I used Adobe Muse for high-fidelity prototyping since Adobe XD was still in beta. The final product evolved from my designs, but the core concept — giving the service team a visual builder without programmer support — became a reality, dramatically speeding up event setup.

10+ interviews per feature.
Dozens shipped.
WCAG compliance won new contracts.

I'm Robert EM Spencer, a UX Product Designer based in Provo, Utah. I've spent years working across the full design lifecycle — from discovery and wireframing through stakeholder alignment, developer handoff, and measuring results after launch.

At GoReact, I was the sole designer on the library redesign — one of the platform's highest-complexity PWA features — and jointly led the full WCAG 2.1 compliance project from a failing Tenon.io audit to verified conformance, codifying standards for every designer and developer on the team. At RainFocus, I designed the mockups that won Oracle, Cisco, Samsung, Gartner, and VMware as clients. I also served as acting PM on GoReact's in-app payment project: ran 10+ user interviews, wrote every JIRA ticket, and managed a staged rollout monitored with FullStory.

The library redesign became one of GoReact's most powerful features. The accessibility project unlocked higher-education contracts the sales team couldn't close before. When the in-app survey system launched, the product team went from guessing to acting on hundreds of categorized user responses — with a FullStory session replay attached to every single one.

Currently prototyping AI-assisted design and research workflows using Figma Make, ChatGPT, Claude, and Google Stitch — building and testing prompt patterns for research synthesis, design critique, and rapid concept generation. Fluent in Spanish; lived and worked in Guatemala, Chile, and Costa Rica for 11 years, which directly shapes how I conduct user research with diverse populations.

Research & Discovery
Stakeholder interviews, analytics, journey maps, in-app surveys, competitive analysis
Interaction Design
Wireframes, high-fidelity mockups, clickable prototypes, design systems, PWA design
Accessibility
WCAG 2.1, VPAT, ARIA patterns, keyboard navigation, color contrast, screen readers
Cross-Functional
Engineering collaboration, JIRA documentation, sprint support, PM experience
AI-Assisted Workflows IN PROGRESS
Figma Make, ChatGPT, Claude, Google Stitch — prototyping AI workflows for research synthesis, prompt design, and rapid concept generation
Figma Adobe XD Illustrator Photoshop FullStory Airtable Sigma JIRA Canny Zendesk Figma Make ChatGPT Claude Google Stitch
See Resume

Let's make something great.

Currently open to new opportunities. Let's talk about what you're building.

LinkedIn or Call 801 919 5565