Amazon
Redesigning Associate Sortation Workflows for Network Clarity
TL;DR
Role: Lead Researcher, Logistics Experience Team (mobile, desktop, and remote tools)
Focus: Fulfillment workflows across associates, gate personnel, and mid- and long-haul drivers
Methods: Field ethnography, heuristic evaluations, card sorting, usability testing, SUS benchmarking, service blueprinting, stakeholder workshops
Impact: A net-new mobile app, a redesigned remote tool, an overhauled desktop system, and a unified UX strategy implemented across Amazon Logistics
Project Context
Amazon Logistics relies on a high-volume fulfillment network spanning sortation and fulfillment centers, each with inbound and outbound docks, varying levels of automation, and complex, interdependent workflows. Package movement across this network involves both human and robotic performers, requiring systems that work agnostically across roles, tools, and facility types. As part of this “Flow Initiative”, I led UX research across three key logistics tools:
Mobile Sortation Tool: A net-new mobile app for sortation associates to scan, box, and route packages
Desktop Cart Tool: A system-wide overhaul for grouping boxed shipments into carts for outbound trailer loading
Remote Gate Tool: An updated browser tool for dock agents to manage trailer arrivals and gate assignments
Beyond product-level research, I created interconnected strategic service maps that visualized task flows, feedback loops, and system dependencies across all tools. I also developed proprietary Jobs-to-be-Done (JTBDs) to unify the design vision across personas regardless of whether tasks were performed by people or machines. This case study captures over a year of embedded systems research, focused on improving alignment, clarity, and orchestration across Amazon’s fulfillment ecosystem.
Ecosystem Context
The Challenge
The core challenge was that different tools across the fulfillment workflow, while functionally distinct, shared overlapping responsibilities around package movement, verification, and trailer readiness. These overlaps weren’t clearly mapped, leading to:
Duplicated or ambiguous task flows, where the same scan, verify, or handoff step was performed by different roles using different tools
Misaligned logic between tools, especially between mobile, desktop, and remote interfaces, resulting in delays, confusion, and redundant work
Lack of visibility across roles, with each performer (associate, dock agent, operations manager, system) unaware of what had already been completed
Inconsistent mental models, as terminology and information architecture (IA) differed between tools despite similar container/box/product flow objectives
Most critically, existing tools were designed in silos, with no shared framework to coordinate interactions across facilities with different levels of human-machine mix. The work needed to account not just for user friction, but for systemic breakdowns in coordination, timing, and feedback across the entire sortation-to-trailer lifecycle.
My Role & Approach
Mobile Sortation Tool
For this net-new mobile app, I led research from the ground up:
Conducted on-site ethnographic studies with first-time and experienced associates to map their scan, box, and routing behaviors
Co-created task flows with product teams to reduce ambiguity in shift-start moments and trailer handoff steps
Ran card sorting and usability tests on early mobile prototypes, surfacing issues in information prioritization and confidence around "what’s next"
Captured SUS benchmarks post-launch, achieving a score of 87.5, indicating strong usability for a brand-new system; the standard scoring arithmetic is sketched below
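For context on that figure: SUS is scored from ten 5-point Likert items, where odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 onto a 0-100 range. A minimal TypeScript sketch of that standard arithmetic, using an invented respondent rather than study data:

```typescript
// Standard SUS scoring: ten 5-point Likert items (1 = strongly disagree,
// 5 = strongly agree). Odd-numbered items are positively worded and score
// (response - 1); even-numbered items are negatively worded and score
// (5 - response). The summed contributions are scaled by 2.5 to 0-100.
function susScore(responses: number[]): number {
  if (responses.length !== 10) {
    throw new Error("SUS requires exactly ten item responses");
  }
  const adjusted = responses.map((r, i) =>
    i % 2 === 0 ? r - 1 : 5 - r // 0-indexed, so even index = odd-numbered item
  );
  return adjusted.reduce((sum, v) => sum + v, 0) * 2.5;
}

// Illustrative only: one hypothetical respondent, not study data.
console.log(susScore([5, 2, 4, 1, 5, 2, 5, 1, 4, 2])); // 87.5
```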
Desktop Cart Tool
This system-wide overhaul of SCARTA, the desktop cart tool, required reframing legacy logic and task ownership:
Led cognitive walkthroughs with long-time sortation staff to trace how cart logic and scan sequences caused misroutes
Used eye-tracking and heatmaps to identify visual hierarchy breakdowns in high-density scan workflows
Conducted cross-product mapping to realign cart creation logic with trailer assignment and mobile sortation tool inputs
Facilitated workshops with design, product, and operations teams to co-define new IA and feedback logic
Remote Gate Tool
For this browser-based dock management tool, I worked closely with gate agents and upstream system owners:
Observed dock agents and drivers during trailer verification and gate assignment to understand task timing, queue pressures, and physical/digital context
Identified real-time visibility gaps that delayed gate clearance or caused duplicate verification
Conducted heuristic audits of the interface, uncovering missing feedback cues and status clarity issues
Helped redesign task models to align better with desktop cart tool and mobile sortation tool terminologies
Cross-Ecosystem “Flow Initiative”
At the systems level, I supported a unified UX vision that bridged human and automated performers:
Created strategic service blueprints connecting all tool workflows, highlighting task handoffs, feedback delays, and redundancy patterns
Developed Jobs-to-be-Done (JTBDs) framed around outcomes instead of roles—ensuring alignment across human, robotic, and hybrid task execution
Worked with Ops leadership and CXOs to influence roadmap priorities based on real-world breakdowns, not product silos
Defined shared information architecture and terminology systems adopted across all three tools (illustrated in the sketch below)
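To make the shared terminology system concrete, here is a hypothetical sketch, with all names invented for illustration rather than drawn from the shipped vocabulary: one canonical set of actions that each tool maps its own UI labels onto, so the same word cannot mean different things in different interfaces.

```typescript
// Hypothetical sketch: one canonical action vocabulary shared by all three
// tools, so "verify" means the same thing whether a human or a bot performs it.
type CanonicalAction = "scan" | "verify" | "assign" | "load" | "handoff";

type Tool = "mobileSortation" | "desktopCart" | "remoteGate";

// Each tool maps its UI labels onto the shared vocabulary instead of
// inventing overlapping terms of its own; labels below are invented examples.
const terminologyMap: Record<Tool, Partial<Record<CanonicalAction, string>>> = {
  mobileSortation: { scan: "Scan package", verify: "Verify label", handoff: "Hand off to cart" },
  desktopCart: { assign: "Assign cart", load: "Stage for trailer", verify: "Verify contents" },
  remoteGate: { verify: "Verify trailer", assign: "Assign gate" },
};
```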
Mobile Sortation On-Site Ethnography
Desktop Cart Eye-Tracking Setup
Remote Gate Drive-Along
Impact and Reflections
Cross-Ecosystem Insights
Recurring patterns across tools revealed deeper system-level breakdowns:
Task ambiguity during handoffs: Sortation associates, cart handlers, and gate agents performed interdependent actions (e.g., verify, scan, assign), yet lacked clarity on handoff status, leading to redundant work and stalled workflows (a shared status-model sketch follows this list).
Inconsistent information architecture (IA): Each tool had distinct navigation and logic models—even for shared actions—forcing users to relearn similar tasks and creating mental overhead.
Terminology drift across tools: Labels like “verify,” “load,” and “handoff” had conflicting meanings in different interfaces, especially across human-robot mixed environments.
Feedback loop opacity: Users across roles couldn’t see the state of upstream/downstream steps, eroding confidence and increasing error-checking behaviors.
Mismatch in automation context: High-autonomy sites relied on bots, while others used human judgment—but the tools weren’t designed to flex between both extremes.
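As referenced above, a minimal sketch of the kind of shared handoff status model that addresses these visibility and handoff gaps; the states and fields here are hypothetical illustrations, not the production schema:

```typescript
// Hypothetical sketch: one handoff lifecycle visible to every role, whether
// the step was performed by a human associate or a robotic performer.
type HandoffStatus =
  | "sorted"       // package scanned and boxed by a sortation associate
  | "carted"       // box grouped into a cart via the desktop cart tool
  | "staged"       // cart staged at the outbound dock
  | "gateAssigned" // trailer verified and gate assigned by a dock agent
  | "loaded";      // cart loaded onto the outbound trailer

interface HandoffRecord {
  packageId: string;
  status: HandoffStatus;
  performedBy: "human" | "robot"; // the same states flex across automation mixes
  updatedAt: Date;
}

// Any downstream tool can ask "was this step already done?" instead of
// redoing the work, which is exactly the visibility the silos lacked.
function isPastSortation(record: HandoffRecord): boolean {
  return record.status !== "sorted";
}
```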
Mobile Sortation Tool
Shift-start screens were cognitively dense, leading to decision fatigue, especially for new associates unfamiliar with package routing logic.
Next-step uncertainty during cart or container handoffs caused delays and hesitation—surfacing a broader need for flow consistency across tools.
IA lacked guidance hierarchy, exacerbating task confusion and scan errors in busy environments.
Desktop Cart Tool
Legacy cart assignment logic created downstream errors: Carts were assigned without accounting for package density or destination alignment, causing inefficient loading (see the grouping sketch after this list).
Eye-tracking revealed attention drop-offs, guiding prioritization of visual hierarchy and key action zones.
SCARTA’s IA conflicted with SAM (the mobile sortation tool) and the Remote Gate Tool, creating cognitive load at the system level.
Task flows didn’t flex for automation density, making the same UI inefficient across high- and low-autonomy sites.
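As noted in the first finding above, here is a hypothetical sketch of destination-aware cart grouping, the kind of alignment the legacy assignment logic lacked. It is a simplified illustration under assumed capacity rules, not the actual SCARTA algorithm:

```typescript
// Hypothetical sketch, not the SCARTA algorithm: bucket boxes by destination
// first, then fill carts up to capacity, so a cart never mixes destinations.
interface Box {
  id: string;
  destination: string;
  volume: number;
}

function assignCarts(boxes: Box[], cartCapacity: number): Box[][] {
  // Bucket boxes by destination so each cart aligns with one trailer lane.
  const byDestination = new Map<string, Box[]>();
  for (const box of boxes) {
    const group = byDestination.get(box.destination) ?? [];
    group.push(box);
    byDestination.set(box.destination, group);
  }

  // Greedily fill carts within each destination group, respecting capacity.
  const carts: Box[][] = [];
  for (const group of byDestination.values()) {
    let cart: Box[] = [];
    let used = 0;
    for (const box of group) {
      if (used + box.volume > cartCapacity && cart.length > 0) {
        carts.push(cart);
        cart = [];
        used = 0;
      }
      cart.push(box);
      used += box.volume;
    }
    if (cart.length > 0) carts.push(cart);
  }
  return carts;
}
```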
Remote Gate Tool
Unclear completion cues and trailer visibility delays made gate agents repeat work or delay assignments.
Terminology mismatches with SCARTA and SAM (the desktop cart and mobile sortation tools) created confusion at the moment of trailer intake.
Dock logic lacked transparency, frustrating both new and experienced gate agents during trailer check-in and assignment.