Seller Central had been built by dozens of teams over many years. The data existed. The tools existed. But sellers had to navigate a system that reflected Amazon's org chart, not their actual problems. Canvas was a bet that generative AI could change that: ask a question, get a purpose-built workspace in response.
You could see the org chart in the design.
Amazon Seller Central had grown alongside Amazon itself. Every new business unit that needed to expose data or tools to sellers built their own piece of it. The result was a platform where the information architecture was a direct reflection of Amazon's internal org structure, not how sellers actually thought about running their business.
A seller trying to understand a slump in sales had to visit their advertising dashboard, their inventory page, their pricing tools, and their account health page separately, synthesize the information manually, and form their own conclusions. The platform told you what the data was. It didn't help you understand what it meant or what to do about it.
Stop organizing data. Start answering questions.
The insight behind Canvas was simple: instead of asking sellers to navigate to their data, let them ask a question, and have the system build a focused workspace around the answer. Generative AI made this technically possible. But technically possible is not the same as usable at scale.
There was already a lightweight Canvas prototype when my team got involved. The concept worked in a demo. But there were no rules. No framework for what a canvas should contain, how it should behave, what it should allow sellers to do, or how the AI should decide what to surface. It was the wild west. My team's job was to define the framework that would make Canvas work across every business stream, not just the one it was originally designed for.
One framework. Every business stream.
My team owned day-to-day seller operations across five major business areas: Listings, Finance, Compliance, Brand, and Fulfillment by Amazon. That coverage was what made our involvement in Canvas critical. A framework only works if it works everywhere, not just for one type of seller or one type of problem.
We took the Canvas concept and stress-tested it across every vertical we owned. What does a canvas look like when a seller is facing a policy violation? When they're planning for a product launch? When their sales are declining and they don't know why? When they need to build a promotion strategy across multiple programs simultaneously? Each scenario had different data needs, different action types, and different levels of urgency. The framework had to handle all of them consistently.
Individual data points rarely told the whole story. The combination did.
A seller asking why their sales dropped didn't have a finance problem, or a listings problem, or an advertising problem. They had all three, and the story only made sense when the data was read together. Finance showed the decline. Listings showed which products had gone stale. Advertising showed where spend had dropped off. Each stream told a fragment. The canvas told the whole thing.
The UX team's job was to figure out which fragments belonged together for which type of question, and how to display them so the relationship between them was obvious. We validated this the way UX teams do: watching real sellers try to answer real questions. We ran sessions at Accelerate and in earlier testing, observing where sellers got stuck, what they reached for that wasn't there, and what felt immediately legible versus what required explanation.
What emerged were patterns. Not templates. Patterns. Diagnostic questions needed trend data plus contributing factors, displayed together so the cause was visible. Action-oriented questions needed ranked recommendations plus a single clear next step. Urgent questions needed the alert first, always, before anything else. Planning questions needed progress tracked across multiple dimensions at once.
Those patterns became the composition rules. And the rules were what made Canvas scalable. The same framework could handle a restock question for a small seller and a promotional planning question for a brand managing hundreds of products, because the rules adapted the assembly to the question type, not to the seller.
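To make that concrete, here is a minimal sketch of how composition rules like these could be expressed as data. Everything in it is illustrative: the question types, module names, and `composeCanvas` function are assumptions made for the sketch, not Canvas's actual implementation.

```ts
// Hypothetical encoding of composition rules as data. All names here
// (QuestionType, Module, COMPOSITION_RULES, composeCanvas) are
// illustrative, not the real Canvas codebase.
type QuestionType = "diagnostic" | "action" | "urgent" | "planning";

type Module =
  | "alert"            // urgent notice, always surfaced first
  | "trend"            // time-series view of the metric in question
  | "factors"          // contributing factors, shown alongside the trend
  | "recommendations"  // ranked suggestions
  | "nextStep"         // the single clear action that closes the canvas
  | "progress";        // progress tracked across multiple dimensions

// Each rule fixes the module order for a question type, independent of
// which seller is asking or which business stream the data comes from.
const COMPOSITION_RULES: Record<QuestionType, Module[]> = {
  diagnostic: ["trend", "factors", "nextStep"],
  action:     ["recommendations", "nextStep"],
  urgent:     ["alert", "factors", "nextStep"],
  planning:   ["progress", "recommendations", "nextStep"],
};

// Assembling a canvas becomes a lookup: classify the question,
// then lay out modules according to the rule for that type.
function composeCanvas(questionType: QuestionType): Module[] {
  return COMPOSITION_RULES[questionType];
}
```

The point of expressing the rules as data rather than per-scenario layouts is the scalability claim above: adding a new business stream means supplying new data sources, not new rules.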
Five questions. Five canvases. One framework.
We designed and tested five canvas scenarios that collectively covered the range of seller problems we encountered across business streams. Each one starts from a real question a seller would ask, assembles relevant data from across the platform, and ends with a specific action the seller can take.
Real sellers. Real questions. Real feedback.
Amazon Accelerate is Amazon's annual conference for third-party sellers. In 2025, we brought Canvas to the conference and put it in front of real sellers, with real accounts, asking real questions about their actual businesses. Not a focus group, not a usability test with scripted tasks. Sellers at a conference booth, doing what they actually do.
The feedback confirmed the core thesis. Sellers immediately understood the interaction model. The concept of asking a question and getting a focused workspace in return required almost no explanation. What they wanted more of was breadth: more types of questions answered, more business areas covered, more actions available at the end of each canvas.
Try it yourself.
Two prototypes, built to the fidelity we used at Amazon Accelerate. The desktop version covers all five canvas scenarios. The mobile version shows how Canvas adapts to a smaller surface without losing the core interaction model.