Proving the Amazon Customer Care Center framework worked was only half the problem. The other half: how do you scale UX quality across 35 organizations and 2,000+ pages of functionality when dozens of teams are moving fast and there isn't a designer on every project? This is the story of how I built the team, the infrastructure, and the quality systems that made it possible.
The POC landed just before annual planning. That mattered.
The Amazon Customer Care Center, or AC3, had just been proven: a new UX framework for Amazon's global customer service platform, validated through seven prototypes, a biometric study, and a proof of concept that cut associate training from three weeks to three hours and handle time by 60%. Business leaders had seen the numbers. The decision came down: deprecate the old tool entirely. Every contact type, every market, every team would migrate to the new platform. Every Amazon customer who contacts customer service about a missing package, an unexpected charge, or a return would be helped by an associate using AC3.
That moment, the POC landing just before Amazon's annual operational planning cycle, was the unlock. The results went directly into budget conversations. What had been a team of five was about to grow. And every business unit, now tasked with executing the migration, started planning to hire their own embedded designers.
Every org wanted their own embedded designer. I made the case for a different model.
With every org now chartered to migrate to AC3, the natural move was for each one to hire their own embedded UX designer. That would have meant a designer in Retail, a designer in Devices, a designer in Shipping: each operating independently, each building their own interpretation of the framework. I made the case against it. An embedded model would have fragmented the design language we'd worked hard to build, created inconsistent outcomes for associates, and left each designer creatively isolated. I went to each org leader directly and made the argument: centralize the headcount into a shared studio, and everyone wins. The platform stays coherent, the designers grow faster, and every problem one team solves becomes available to all of them.
We owned design across the highest-volume parts of Amazon CS, which meant solving real problems deep in each vertical while staying connected enough to inform the framework.
Before making any hiring decisions, I dug into routable contact volume data across the entire business. The answer was clear: 95% of Amazon's total CS contact volume lived in just four parts of the business. Structuring the team around that data meant we weren't spreading effort evenly across an org chart. We were placing design capacity where it would have the most impact on the most customers.
Embedding designers in Retail, Devices, Shipping, and Amazon Business meant our team was close enough to those problems to solve them properly, understanding the specific workflows, contact types, and edge cases that made each vertical different. And because those same designers were part of a shared studio, what they learned in one vertical fed back into the framework. The same pattern that solved a problem in Retail could be available to Devices the next week.
We focused our attention on the problems that needed it most, trusting the framework to handle the rest.
Even with 15 designers, we couldn't personally triage every request across 35 organizations. We believed the framework, if documented and accessible enough, could handle the common, lower-complexity problems on its own. That freed the team to focus on the high-ambiguity work where our direct involvement would have the most impact. I needed a model that made this transparent: one that showed business partners exactly what to expect from each level of engagement, and why.
The insight behind the model: UX payoff scales exponentially with investment, but only for problems past a threshold of ambiguity. Below that threshold, most problems had already been solved. The patterns existed. The right answer was to make those patterns accessible and let teams move independently.
The engagement model I used to align org leaders: UX investment vs. UX payoff
I secured dedicated headcount to build the infrastructure that let teams move independently.
During financial and product planning, I made the case for a resource that went beyond documentation: what became the Builder Center. I laid out the onboarding timelines, the number of teams moving to AC3, the UX gap, and the bottleneck risk if every team had to wait on the Framework UX team. The only way to enable truly federated ownership was to invest in the infrastructure that let teams build well independently.
The hardest thing to document wasn't the visual rules. It was the mental model shift. In the old world, you put data on the page because it existed. In AC3, every piece of information has to earn its place. That principle had to be as accessible as the component specs, grounded in the research that produced it, and maintained in partnership with the research team.
As adoption grew, the Framework team's investment in AAA accessibility (screen reader support, contrast standards, keyboard navigation) compounded across every team building on the system. Every builder got accessibility compliance for free. The Framework team also used this focus to develop vertical-agnostic capabilities like Email composer and Chat, platform-level work that no single vertical would have prioritized but every vertical benefited from.
I wrote the document that identified the cracks. Then fixed them.
As the team grew from five to fifteen, I recognized that the quality mechanisms built for a small team weren't scaling. Designers in different verticals were solving similar problems without visibility into each other's work. I put the risks and opportunities in writing. The response was cross-functional "pizza teams" (small enough to feed with two pizzas).
Small, cross-functional squads of designers, writers, researchers, and operations, organized around problem domains rather than vertical org charts. Each squad developed deep expertise in their space, staying consistent within their domain while feeding into the shared platform.
I stood up a meeting that stopped divergence before it started.
Pizza teams solved alignment within domains. But a different risk persisted: business verticals independently designing solutions for the same contact type, with no visibility into each other's work. Projects would launch with diverging approaches. By the time anyone noticed, the work was built.
I created the Experience Quality meeting, a recurring cross-functional forum bringing together leaders from business, product, tech, operations, and UX across all verticals. The admission criteria were specific: high potential reach across customer service associates, and high potential for pattern reusability at the platform level.
Documentation tells you how. The gate ensures you did.
Self-service works until it doesn't. Teams were building faster than they were learning, and work was shipping with inconsistencies the Builder Center alone couldn't prevent. I added a gate: a bar raiser design review that every piece of work had to pass before it shipped. Not a rubber stamp. A real check against AC3 standards across three dimensions: visual consistency, structural integrity, and conceptual alignment with the Listen · Match · Solve model.
When quality criteria can be expressed as rules, and rules as code, the design team stops being a bottleneck and becomes a standard-setter. That was the goal from the beginning.
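To make "rules as code" concrete, here is a minimal sketch of what one automated check might look like. Everything in it (the field names, the rule names, the checkFieldSpec function) is a hypothetical illustration written in TypeScript, not the actual AC3 implementation; the only external standard it encodes is WCAG's 7:1 AAA contrast ratio for normal text.

    // Hypothetical sketch only: none of these names come from the AC3 codebase.
    interface FieldSpec {
      label: string;
      justification?: string;    // the documented reason this data earns its place
      contrastRatio: number;     // computed foreground/background contrast
      keyboardReachable: boolean;
    }

    interface Violation {
      field: string;
      rule: string;
      message: string;
    }

    // Each check mirrors one written standard from the design documentation.
    function checkFieldSpec(field: FieldSpec): Violation[] {
      const violations: Violation[] = [];

      // "Every piece of information has to earn its place."
      if (!field.justification) {
        violations.push({
          field: field.label,
          rule: "earn-its-place",
          message: "No documented reason for this field to appear on the page.",
        });
      }

      // WCAG AAA contrast for normal text requires at least a 7:1 ratio.
      if (field.contrastRatio < 7) {
        violations.push({
          field: field.label,
          rule: "aaa-contrast",
          message: `Contrast ratio ${field.contrastRatio}:1 is below the 7:1 AAA minimum.`,
        });
      }

      // Keyboard navigation is part of the accessibility bar.
      if (!field.keyboardReachable) {
        violations.push({
          field: field.label,
          rule: "keyboard-nav",
          message: "Field cannot be reached without a pointer.",
        });
      }

      return violations;
    }

The point is the shape, not the specifics: each check maps one-to-one to a written standard, so the review gate and the documentation cannot drift apart.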
Every Amazon customer contact. One platform. One standard.
By 2023, AC3 handled 100% of contact volume across 23+ marketplaces in 25+ languages. Handle time held its gains as more complex use cases were added. The platform became learnable. The quality system I built kept it that way as it scaled past what any design team could cover directly.
The studio model held because the case for it was grounded in what was right for both the designers and the work. The engagement model protected the team's focus. The Builder Center made federated ownership real. Pizza teams kept quality consistent within domains. The XQ meeting caught divergence early. The bar raiser, automated into the codebase, ensured standards didn't erode as the organization scaled.
The most important design decisions weren't in the product. They were in the system that made good product decisions the default.