Your Org Chart Was Not Built for AI
By Twisha Shah-Brandenburg (MDM 2014)
March 18, 2026

The modern corporation was designed for humans making decisions inside hierarchies. It was not designed for networks of people and AI agents making decisions together.
Consider a financial services firm using AI to flag potentially fraudulent transactions. In a traditional control model, flagged cases route upward for layered review before action is taken. By the time a decision is finalized, the customer has already experienced a frozen account, or worse, a fraudulent transfer has cleared.
Traditional management assumes leaders at the top have the best information and enough time to review decisions before they are executed.
AI breaks that assumption. By the time approval reaches the top of the hierarchy, the context has already shifted.
From Control to Distributed Authority
The instinct is to solve this by moving faster up the same chain. But the better answer is to redesign where decisions get made.
How Does Distributed Authority Work?
Give teams the authority to act within clear boundaries, closer to the work. Leadership sets the limits and defines what’s out of bounds. Teams make the calls. Escalation happens only when someone crosses a line.
In practice, this means defining risk thresholds, identifying which decisions AI can resolve automatically, specifying when humans must review, and reserving only the highest-impact scenarios for executive escalation.
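The routing logic described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a real fraud system; the thresholds, field names, and the `route` function are all invented for the example.

```python
# Illustrative sketch of a tiered decision-routing policy.
# All names and thresholds here are hypothetical, not from any real system.
from dataclasses import dataclass

@dataclass
class FlaggedTransaction:
    amount: float
    risk_score: float  # 0.0 (benign) to 1.0 (near-certain fraud)

def route(txn: FlaggedTransaction,
          auto_threshold: float = 0.2,
          escalate_threshold: float = 0.9,
          high_impact_amount: float = 250_000) -> str:
    """Return who owns the decision, per leadership-defined boundaries."""
    # Leadership sets the limits once; the routing runs at machine speed.
    if txn.risk_score < auto_threshold:
        return "ai_auto_resolve"           # AI clears the flag automatically
    if txn.risk_score >= escalate_threshold or txn.amount >= high_impact_amount:
        return "executive_escalation"      # highest-impact scenarios only
    return "frontline_human_review"        # the team decides within bounds

print(route(FlaggedTransaction(amount=120.0, risk_score=0.1)))  # ai_auto_resolve
```

The point of the sketch is that escalation is the exception, encoded as an explicit boundary, rather than the default path every decision must travel.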
This isn’t less control. It’s smarter control.
When decisions move at machine speed, centralized approval becomes the slowest part of your system. Structured autonomy becomes your advantage.
Why the Old Playbook Fails
Organizations rarely collapse overnight. They lose adaptability.
Modern corporations were built for scale and control: clear reporting lines, distinct authority, and functions optimized in isolation. That model worked when environments were more stable and work moved at human speed.
Today, people and intelligent systems share decision-making across the enterprise. Forecasts trigger supply chain shifts before humans review them. AI agents negotiate pricing. Decisions emerge from interactions, not from the org chart. Yet most organizations still operate as if centralized planning can steer it all.
And this is what happens next:
Productivity rises while engagement falls. Service speed increases while satisfaction stalls.
Customer experiences break across handoffs nobody owns. Customers move toward companies that design better AI experiences.
Innovation initiatives launch but fail to scale because the connective tissue between teams has atrophied.
The org chart remains intact, but learning erodes. Leadership teams cannot see the whole because it is distributed across too many platforms and actors.
The New Questions Leaders Must Answer
Your systems now learn from AI-assisted behavior. Customers draft messages with AI tools. Employees rely on copilots. Bots and AI agents interact with your platforms. In many cases, companies cannot distinguish between a human decision and an AI acting on someone’s behalf.
That is not a technical footnote. It shifts accountability. If recommendation systems learn from automated browsing rather than human intent, metrics may rise while relevance declines. If AI agents distort feedback loops, the strategy begins to optimize for synthetic behavior. When something fails, responsibility becomes difficult to trace.
The question is no longer simply, “What does the customer need?”
It is:
- Who is authorized to make decisions in this system?
- Where must human judgment remain non-delegable?
- How will we detect when automated behavior distorts learning?
- Who is accountable when AI operates within policy but produces harm?
These are not abstract governance questions. They shape the trajectory of your business.
How Design Helps Organizations Leverage AI
Most organizations are preparing for AI the way they once prepared for digital transformation: upgrading infrastructure, modernizing platforms, and investing in tools.
Necessary, but not sufficient.
When companies went digital, the hard part was not the website. It was reworking journeys, governance, incentives, and cross-functional coordination. Design made those systems coherent.
AI demands the same discipline, but at the level of decision-making itself. Consider an AI decision that increases conversion but erodes accessibility or regulatory trust. A narrow fix tunes the model. A design-led response does something different.

First, it clarifies decision intent.
What outcome are we optimizing for beyond conversion? Revenue, yes. But also accessibility compliance, long-term trust, and brand equity.

Second, it defines guardrails before scale.
What thresholds trigger human review? What populations require bias checks? What signals override automated recommendations?

Third, it makes the system legible.
Who can explain how the model works in plain language? Can frontline teams understand and challenge it? Can a regulator trace the decision path?

Fourth, it installs feedback loops.
Where does human judgment re-enter? How do we capture edge cases? How quickly can we correct drift?
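The four design moves above can be made concrete as a reviewable policy artifact. The sketch below is purely illustrative: the structure and every field name are assumptions invented for this example, not an established framework.

```python
# Hypothetical policy object capturing the four design moves:
# intent, guardrails, legibility, and feedback loops.
# All keys and values are illustrative assumptions.
guardrail_policy = {
    "decision_intent": {  # what the system optimizes for, beyond conversion
        "primary_metric": "conversion",
        "constraints": ["accessibility_compliance", "long_term_trust"],
    },
    "guardrails": {  # defined before the system scales
        "human_review_threshold": 0.8,  # model-confidence cutoff for review
        "bias_check_populations": ["new_customers", "assistive_tech_users"],
        "override_signals": ["regulator_inquiry", "accessibility_failure"],
    },
    "legibility": {  # who can explain and challenge the model
        "plain_language_owner": "product_lead",
        "decision_trace_retained_days": 365,
    },
    "feedback_loops": {  # where human judgment re-enters
        "edge_case_queue": "weekly_triage",
        "drift_check_interval_days": 7,
    },
}

# Because the policy is data, a frontline team or a regulator can audit it
# before any model change ships:
assert "accessibility_compliance" in guardrail_policy["decision_intent"]["constraints"]
```

Writing the policy down as an artifact, rather than leaving it implicit in model code, is what makes the system legible and challengeable.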
This is craft at the right altitude. Design does not just polish interfaces. It builds decision frameworks. It structures tradeoffs. It defines escalation paths. It operationalizes transparency.
In the digital era, design improved interactions. In the AI era, design shapes how decisions are made, monitored, and corrected. That is not aesthetic work. It is governance by design. And it is how organizations prevent speed from outpacing responsibility.
These accountability decisions do not belong inside the algorithm. They belong at the leadership level because durable growth depends on trust, judgment, and the ability to learn faster than your competitors.
In mixed human-AI systems, accountability must be intentionally designed and led. Without it, responsibility diffuses, learning degrades, and performance eventually follows.
A Personal Note
I share this perspective because my own leadership has been reshaped. A decade ago, I moved beyond seeing design purely as craft. Early in my career I focused on improving interfaces and journeys. My time at the Institute of Design gave me tools to think systemically about organizations: how decisions flow, how incentives shape behavior, and how technology, people, and policy interact. That shift helped me see organizations less as hierarchies to manage and more as living systems to steward.

Twisha Shah-Brandenburg (MDM 2014), Service Design Leader, Target
This did not happen by accident. My education at the Institute of Design and my continued engagement with its Master of Science in Strategic Design Leadership fundamentally changed how I lead today at the forefront of AI-enabled organizations. The grounding in behavioral insight, systems thinking, and adaptive leadership gave me frameworks that trained me to ask not just what people need, but who counts as a participant when humans and agents act together, and how to maintain coherence when authority is distributed.
If you are navigating AI acceleration and wrestling with transparency, accountability, and trust, I have found that what is needed is not another tool. It is a new way of seeing. You do not need to become a designer, but you do need to become the kind of leader this moment requires.
The question is not whether your organization will adopt AI. It is whether its leaders will be ready to steward it.
I explore these ideas further on my Substack, Making an Impact, where I write about leadership, the future of design in organizations, and how individuals can create meaningful change inside complex systems. Subscribe to follow the work and join the conversation, and join me on April 1 at ID for the Delivery Fellowship event, How to Strengthen Product Strategy with Design.