AI · Amazon Customer Service · 2024
Amazon Customer Care Center + AI
Four AI tools. Built for every CS team. Each one earned through research.

Every day, millions of customers contact Amazon about problems: a package that never arrived, a charge they don't recognize, a device that stopped working. The people who help them are called Customer Service Associates, or CSAs. My team designed and owned the UX platform those associates work on, the Amazon Customer Care Center (AC3), the framework that powers every business unit at Amazon CS: retail, devices, digital, shipping, and more. When leadership decided to invest in AI to make that work faster and more consistent, my team was the natural owner: we had the relationships, the research, and the platform to make it real.

~78% faster to resolve · 80%+ time savings on automation · 7 marketplaces · ~30s saved per contact
Role
Sr. UX Design Manager
Timeline
2023–2025
Scope
Global · All Amazon CS markets
01
THE PROBLEM

A faster tool. But still too much time lost in every contact.

When a customer contacts Amazon with a problem, a Customer Service Associate, or CSA, is the person who resolves it. How long that takes, how consistent the answer is, and how the customer feels at the end all depend on the tools the associate is using.

A previous project I led had already made significant improvements to the core associate experience: faster to learn, faster to use, more consistent outcomes. The details are in the AC3 case study.

But even with a better tool, too much of each contact was still consumed by manual work. Associates were stopping mid-conversation to look up policies. They were asking customers questions Amazon's own data could already answer. They were navigating through a series of steps to reach a resolution that was, in most cases, already predictable. Through regular site visits, where we watched associates handle real contacts and interviewed customers immediately after, consistent themes emerged across teams and regions.

The associate
"I'm spending half the contact on things the system should already know."
Associates were navigating to the same resolutions repeatedly, asking questions Amazon's data could answer, and pausing mid-contact to look up policy, breaking the human connection they were there to provide.
The customer
"Why do I have to tell you all of this again?"
Customers were repeating context they had already given, to the bot, to a previous agent. Every repeated question signaled that Amazon wasn't listening.
The associate
"I already know what this resolution is. Why does it take so long to get there?"
Routine contacts followed predictable patterns. The system had what it needed. But associates still navigated step by step, and the workflows were error-prone.
The customer
"It's broken. I need a replacement. Why is this complicated?"
Customers had given Amazon everything needed to act. The workflow still made them answer questions the system could answer itself. Some had learned which answers got them what they wanted, creating bad data.
02
HOW WE WORKED

Why my team was the right team for this.

My team's position was unusual. We owned the UX framework, the shared design layer and component system that every Amazon CS business unit builds on. If a designer at any CS vertical needed to ship a new feature, they used our patterns. That gave us something most product teams don't have: a clear view across all of them.

Designers on my team were embedded in retail, devices, digital, shipping, and Amazon Business, close enough to see the real problems their teams were facing. That network gave us a direct path to gather requirements, test early directions, and get signal from business leaders before we'd over-invested in any one approach. When themes emerged from our ongoing site visits, where we watched associates handle contacts and interviewed customers, we had the context to understand what we were actually seeing and the relationships to act on it quickly. My science counterpart was a close partner throughout. We watched prototype tests together, connected what we observed to what the models needed, and pushed the direction forward. We set the vision. They helped us make it real.

Ongoing site visits and field research
Regular visits to customer service sites, watching associates handle real contacts, interviewing customers immediately after their experience. A practice I've kept throughout my career and track formally as a design leadership commitment.
Embedded designers across every vertical
Designers on my team sat inside retail, devices, shipping, digital, and Amazon Business. They surfaced the real problems. They helped test early concepts. And they made sure what we built actually worked for the teams they supported.
A close science partnership
My science counterpart was in the room when prototypes were tested. Research shaped what the models did. Model constraints shaped what we designed. It ran both ways, and that's what made it work.
03
DESIGN PRINCIPLES

The AI had to earn trust before it could ask for it.

Every initiative here lived in a high-stakes context. A wrong AI suggestion could mean the wrong resolution, the wrong policy applied, or a customer commitment Amazon couldn't honor. The easy path is to put a chat interface on everything and call it AI. But a chatbot is just a hammer, and not every problem is a nail. We took a different approach. The associate tool already had a structure associates trusted: every contact was identified, categorized, and resolved through a consistent set of steps and screens. Rather than bolt AI onto the side as a separate chat window, I directed the team to surface AI intelligence through those same familiar patterns. The AI makes a suggestion with its supporting evidence visible. The associate reviews it and decides. Nothing is hidden, nothing is forced, and the manual path is always one click away.

Show the reasoning, not just the answer
Every AI suggestion surfaced its evidence through the match and solve cards, the core UI patterns associates use to identify the customer's issue and take action on it. The AI shows its work using patterns the associate already knows. The associate decides.
The human always makes the call
No AI feature executed a high-stakes action without associate confirmation. On routine contacts, complex ones, everything in between, the system surfaces, the human decides.
The manual path is always there
An AI suggestion was never a gate. Dismiss it and the standard flow was right there. Trust couldn't be forced, so we made the cost of not trusting as low as possible.
Only surface what the system is sure about
We defined confidence thresholds with the science team for every feature. When the model wasn't confident enough, no suggestion appeared. A missing suggestion is better than a wrong one.
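The threshold principle above can be sketched as a small gating function. This is purely illustrative: the feature names and threshold values are invented, not Amazon's actual configuration.

```python
# Hypothetical sketch of "only surface what the system is sure about".
# Feature names and threshold values are invented for illustration;
# in practice each threshold was defined jointly with the science team.

FEATURE_THRESHOLDS = {
    "cs_helper": 0.80,
    "context_summary": 0.70,
    "predictive_resolution": 0.90,  # higher-stakes actions demand more certainty
}

def surface_suggestion(feature: str, suggestion: str, confidence: float):
    """Return the suggestion only if the model clears the feature's bar.

    Below threshold, return None: a missing suggestion is better than a
    wrong one, so the associate simply sees the standard manual flow.
    """
    threshold = FEATURE_THRESHOLDS.get(feature, 1.0)  # unknown feature: never surface
    if confidence >= threshold:
        return suggestion
    return None
```

The design consequence is that the UI needs no "low confidence" state at all: a suggestion either appears with its evidence, or the screen looks exactly as it always has.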
04
THE FEATURES

Four design initiatives. Each solving a real problem. All of them working together.

Each of these came from something real we observed, not a product roadmap looking for AI use cases. And because my team owned the framework, each one was built to be used by any team across any business unit, not just the team that asked for it first. They also work together: an associate using Predictive Resolution is better served if they already trust CS Helper. Each layer of AI capability made the next one easier to accept.


CS Helper
Give associates an answer, not a search result.

Research surfaced two problems happening simultaneously. Associates were leaving contacts mid-conversation to search documentation, breaking the human moment at exactly the wrong time. And because different associates interpreted policy differently, customers were getting inconsistent answers depending on who picked up.

Initial concept, searchable documentation panel
Key takeaways from exploration
Still broke the flow
Associates had to context-switch to a search panel, know what to look for, read the result, and apply it. Just as disruptive as before, only slightly faster.
Too much interpretation
Policy documents are dense. Associates still had to read, understand, and apply the information themselves, which is exactly where inconsistency came from.
Still inconsistent
Different associates found different documents and interpreted them differently. Customers still got different answers depending on who picked up.
Iteration, CS Helper panel alongside match cards
Key takeaways from iteration
Closer, but still separate
The floating chat was more accessible than a search panel, but still felt like a separate tool. Associates had to switch context to open and interact with it.
Suggestions surfaced proactively
CS Helper began suggesting responses based on the contact context, a key step toward reducing the manual effort of policy lookup.
Still needed refinement
Positioning and integration needed work. The goal was for CS Helper to feel like it belonged, not like an add-on sitting on top of the interface.
Final design, embedded AI assistant with suggested responses
CS Helper, collapsed, suggestion in chat
CS Helper, expanded with policy context
How the final design addressed this
Always available
CS Helper sat alongside the contact as a dedicated panel, accessible without navigating away, always available at any point in the conversation.
Answers, not links
Plain language question in, policy-grounded answer out. Associates got the exact answer they needed for this contact, cited to the actual policy document.
Consistent across every associate
The same question gets the same answer regardless of tenure or experience, eliminating the variance that created unpredictable customer outcomes.
Before CS Helper
~65%
Of associates regularly accessed the central policy repository during contacts, navigating away from the contact to find answers, averaging 2–3 policy lookups per contact.
After CS Helper
~18s
Average handle time reduction per contact in cohort testing. Associates stayed present and got policy answers without leaving the contact, and gave more consistent answers as a result.

Context Summarizer
Arrive knowing the situation. Don't make them repeat it.

The previous tool was a full case management system, a dense, data-heavy interface where associates had to navigate through complete contact histories, transcripts, and customer records before they could begin helping. When contacts had long histories, associates either spent significant time reading through everything, or skipped it entirely and started from scratch. Either way, the customer paid for it.

Before, heavy case management tool (previous system)
Key takeaways from exploration
Information overload
Full transcripts and raw data were overwhelming. Associates either spent valuable time reading before accepting, or gave up and started from scratch.
Wrong timing
Showing history inside the contact meant associates were still catching up while the customer was already talking. They needed to arrive prepared, not get prepared mid-contact.
No synthesis
The tool showed what happened, not what it meant. Associates still had to piece together the story themselves from raw logs and data fields.
Iteration, summary drawer inside the contact
Key takeaways from iteration
Less to read
An AI-generated summary replaced the raw log: associates got a readable narrative instead of data fields. A clear improvement over the previous system.
Still inside the contact
Testing showed that placing the summary inside the contact still meant associates were catching up after picking up. The timing was better, but not solved.
The key insight
Associates said the same thing in testing: "I wish I knew this before I picked up." That drove the decisive change: surface it before the contact is accepted, not after.
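The timing change is small in code but large in experience: generate and attach the summary when the contact is offered, not after it is accepted. A minimal sketch, with invented field names and a caller-supplied summarizer standing in for the actual model:

```python
# Illustrative sketch of surfacing the AI summary pre-accept.
# Field names and the summarize callable are hypothetical.

def offer_contact(contact: dict, summarize) -> dict:
    """Build the incoming-contact card the associate sees BEFORE accepting.

    Because the summary is attached at offer time, the associate picks up
    already knowing who the customer is and what has been tried, instead
    of catching up while the customer is talking.
    """
    return {
        "customer": contact["customer"],
        "reason": contact["reason"],
        "summary": summarize(contact["history"]),  # shown pre-accept
        "state": "incoming",
    }
```

After the associate accepts, the same summary simply persists at the top of the contact view; nothing is regenerated mid-conversation.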
Final design · State 1, AI summary before accepting the contact

Before the associate accepts, they see who is calling and the full AI-generated summary of context from prior contacts. No suggested action, just what they need to know.

Final design · State 2, Summary persists at top during the contact

After accepting, the summary strip remains at the top above the match cards, always visible, always scannable.

How the final design addressed this
Arrive prepared
The summary appeared before the associate accepted, so by the time they picked up, they already knew who the customer was, what happened, and what had been tried.
The first word counts
Instead of diagnostic questions, associates could open with something genuinely helpful. Customers stopped being asked to repeat themselves.
Suggested action included
When the AI had enough confidence, a suggested resolution appeared alongside the summary, so the associate arrived prepared and often already knew the answer.
Before
45–90s
Time associates spent reviewing contact history before they could meaningfully begin; time the customer waited with nothing happening.
After
~30s saved
Per contact in cohort testing. Associates arrived prepared instead of catching up. Customers stopped repeating themselves.

Human-in-the-Loop
Keep the bot in the conversation. Put a human behind it.

Amazon's automated chatbot was handling a growing volume of contacts without a human involved. But bots have limits: ambiguous situations, edge cases, anything that requires real judgment. When the bot hit those limits, the only option was to hand the contact off entirely to a live associate, forcing the customer to start over. The customer experienced it as a disruption. The efficiency gains evaporated.

How the contact flow changed
Before: Customer → Chat bot → hits a limit → hard handoff to CSA → customer experience breaks
After: Customer → Chat bot → CSA approves silently → bot continues → customer never knows
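The "after" flow can be sketched as a small approval gate. Everything here is illustrative: the class, the routing strings, and the approval rule are invented stand-ins for the real system.

```python
# Minimal sketch of the silent human-in-the-loop handoff described above.
# All names are hypothetical; the real bot and routing APIs are not shown.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str   # e.g. "Issue full refund · $8.99"
    confidence: float

def handle_bot_limit(action: ProposedAction, associate_approves) -> str:
    """When the bot hits its limit, ask an associate to approve silently.

    The customer never sees this step: on approval the bot continues the
    conversation; on rejection the contact routes to the associate, and
    the judgment stays human either way.
    """
    if associate_approves(action):
        return "bot_continues"        # seamless for the customer
    return "handoff_to_associate"     # full human takeover

# Usage: an associate approving a high-confidence refund proposal.
result = handle_bot_limit(
    ProposedAction("Issue full refund · $8.99", confidence=0.94),
    associate_approves=lambda a: a.confidence >= 0.9,
)
# result == "bot_continues"
```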
Initial concept, bot asks the associate to approve within the chat conversation
Key takeaways from exploration
Buried in the chat
Associates felt the suggestion was just another chat message. It was hard to distinguish the bot's proposed action from the conversation itself: who was acting, the associate or the bot?
Unclear ownership
Testing showed associates weren't sure if they were confirming the bot's action or taking their own. The interface blurred the line between the bot and the associate's judgment.
Speed vs clarity tradeoff
Acting fast enough to keep the bot experience seamless left too little time to evaluate the proposal. Associates either felt rushed, or slowed down and lost the efficiency gain entirely.
Final design, dedicated approval surface, same pattern as predictive resolution
How the final design addressed this
Right in their face
Associates asked for clarity, so we gave it to them. The approval surface sat at the top of the screen, unmissable. No hunting for it in a chat thread.
Same pattern, instant recognition
The same widget structure as predictive resolution: contact data, the bot's proposed action, confirm/override. Associates recognized it immediately and acted in seconds.
Seamless for the customer
The customer never knew a human was involved. The conversation continued uninterrupted. The judgment stayed human.
Before
Hard break
When the bot couldn't resolve the contact, the customer experienced a full handoff, starting over with a live associate with no context carry-over.
After
80%+
Time savings across 7 global marketplaces. The customer experience stayed seamless. Leadership prioritized scaling across more use cases immediately.

Predictive Resolution
Everything the associate needs, before the contact starts.

For a large category of contacts, Amazon already had all the information needed to resolve them before the associate even picked up. The item was missing. Amazon knew what it was, when it shipped, where it was delivered, and what the policy said. But the associate still had to navigate through a manual series of questions to reach a resolution that was already predictable. Some customers had even learned which answers produced the outcome they wanted, which generated inaccurate data and made it harder for Amazon to understand and fix the real underlying problems.
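The shift from questionnaire to prediction can be sketched as a lookup over data Amazon already holds. This is a toy illustration only: the field names, the single missing-item rule, and the evidence shape are all invented.

```python
# Illustrative sketch: predict the resolution from known order, delivery,
# and policy facts instead of walking a radio-button questionnaire.
# All field names and the single rule shown here are hypothetical.

def predict_resolution(contact: dict):
    """Map known facts to a suggested resolution plus its evidence.

    Returns (suggestion, evidence) so the associate can verify the
    "receipts" at a glance, or None when no pattern is recognized,
    in which case the manual step-by-step workflow applies as before.
    """
    if (contact.get("reason") == "missing_item"
            and contact.get("delivered")
            and contact.get("replacement_eligible")):
        evidence = {
            "item": contact["item"],
            "delivered": contact["delivered_on"],
            "policy": "no return needed",
        }
        return "Send replacement", evidence
    return None  # fall back to the manual workflow
```

Because the suggestion is derived from system data rather than customer answers, there is nothing for a customer to game, and the resolution record reflects what actually happened.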

Before, manual workflow: static radio button questions, no AI
Step 1, Navigate to contact type
Step 2, Work through radio button questions
Key takeaways from exploration
Static, repetitive workflows
Associates navigated the same sequences of radio button questions for every contact, even when the customer had already provided all the information needed to act.
Gameable by design
Because the outcome depended on the answers given, customers (and some associates) learned which selections produced the desired result, generating unreliable data that made defect tracking impossible.
No intelligence, no time savings
The workflow didn't use any of the data Amazon already had. Every contact started from zero, regardless of what the customer had already said, or what history existed.
First AI concept, AI pre-fills the match and solve cards
Key takeaways from exploration
AI in the loop
The pre-filled cards showed that AI could predict the right contact type and resolution; the underlying capability was real. But the presentation created more problems than it solved.
Clear lesson: separate it
The AI suggestion needed its own dedicated surface, distinct from the standard flow, with its own evidence. The associate needed to know they were looking at AI, not their own navigation.
Receipts were the missing piece
The data to build trust was there, charge date, contact reason, prior contacts, confidence level. It just needed to be surfaced clearly alongside the suggested action.
Final design, dedicated widget with contact data, resolution, and receipts
How the final design addressed this
Purpose-built surface
The predictive resolution widget owned the top of the screen. Unmissable. Associates knew exactly what they were looking at before the contact even started.
The receipts were right there
Contact reason, charge amount, prior contacts, confidence level. The evidence was scannable in seconds, enough to verify without reading a word of prose.
Verification, not navigation
The associate's job shifted from navigating to a resolution to confirming one. Same outcome, fraction of the time, with the full manual flow always available below.
Before
Step-by-step
Associates navigated step by step through the contact workflow to reach a resolution the system already had the data to predict, introducing time, error, and inconsistency.
After
~78%
Faster to resolve in cohort testing. VP leadership restructured roadmaps around scaling it. The POC result wasn't just a metric, it was a mandate.
05
SCALING WHAT WORKED

Built once. Available to everyone.

With AI experiments running across retail, devices, digital, shipping, and Amazon Business simultaneously, there was a real risk of fragmentation: the same problem solved five different ways, five separate times, none of them able to learn from the others. Because my team owned the UX framework, we were in a position to prevent that. Any design pattern that proved out in one business unit could be evaluated for whether it belonged in the shared platform, available to every team, not just the one that built it first.

I set up a structured cross-team review process across four organizations. Designers from each vertical brought their work. We identified what was genuinely reusable, refined it, and graduated it into the framework. The result: teams didn't have to reinvent. They could onboard onto proven patterns and focus on what was unique to their context.

20+ AI interaction patterns identified, validated across verticals, and built into the shared platform, so every team onboarding to AC3 could use them without starting from scratch.
— Cross-org UX sync outcome
06
OUTCOMES

Results that moved roadmaps.

Every initiative here was measured against real contacts with real associates, not simulated scenarios or internal demos. The results were strong enough that leadership didn't just note them. They restructured roadmaps around them.

~78%
Faster to resolve contacts, predictive resolution
80%+
Bot automation time savings, 7 global marketplaces
~30s
Per-contact time savings, context summarizer
20+
Reusable AI patterns shipped to platform
07
SEE IT IN ACTION

See it in context.

Two interactive prototypes built to the level of fidelity we used for associate testing. Each one walks through a contact type that happens thousands of times a day at Amazon.

Scenario 1
Missing item
A customer calls about a USB charger that was marked delivered but isn't there. See the context summarizer, match cards, and solve flow in action.
Open demo
Scenario 2
Repeat charge dispute
A frustrated customer contacts about a Prime Video charge they've raised before. CS Helper surfaces the policy, the summarizer provides context, and the AI suggests a resolution.
Open demo
Next case study
AC3 Scaling: how we built the team behind the platform
View case study →