Uber Employees Built an AI Clone of Their CEO and Cyberpunk Culture Loves the Irony
When the suits in a glass tower get copied by code, the street-level storytellers lean in with wry applause.
A product manager in a fluorescent-lit conference room feeds a slide deck to a bot that talks like the boss, and the team leaves feeling calmer and a little more cunning. The image is equal parts corporate theater and a cyberpunk screenplay where authority can be rehearsed, versioned, and iterated on before it ever meets flesh.
Most outlets framed this as another internal productivity hack and a sign that engineers will lean on tooling to get presentations tighter before executive reviews. That is true, but the more consequential story is how the practice turns leadership into a design problem and accelerates cultural shifts that matter to small companies, creative collectives, and anyone who builds products at the intersection of tech and image management.
This article relies mostly on contemporary press reports following a podcast comment by Uber CEO Dara Khosrowshahi, and it reads those reports against what cyberpunk aesthetics and industry mechanics actually mean for startups and studios. (businessinsider.com)
When a CEO becomes a rehearsal partner: what happened and why it landed
Dara Khosrowshahi told Steven Bartlett on The Diary of a CEO podcast that some teams at Uber built a so-called "Dara AI" to rehearse presentations before meeting him. The anecdote confirms a bottom-up pattern of engineers building bespoke tools to shave friction off high-stakes moments. (techcrunch.com)
This is not fantasy. The revelation was reported across business and tech outlets within days of the podcast, and it arrived alongside a detail Khosrowshahi shared that about 90 percent of Uber’s software engineers use AI in daily workflows, with roughly 30 percent being power users. Those numbers change how the internal culture evolves at scale. (businessinsider.com)
Why cyberpunk fans are reading this as confirmation, not parody
Cyberpunk has always trafficked in simulations of authority, from corporate avatars to synthetic public faces. An AI CEO clone converts a trope into everyday practice. The aesthetic payoff is subtle and unnerving: the corporation becomes layerable, its responses reproducible, and its aura editable by engineers who were never elected to govern. People who collect vinyl records and nurse a nostalgia for neon will also appreciate that power can now be rehearsed on a laptop, which is excellent for drama and inconvenient for control.
Crew dynamics change when power can be sandboxed. Teams get one fewer excuse to blame hierarchies. The bot is blunt, caring only about signals and metrics, which explains why some managers love it and others pretend not to notice. Engineers named it in a way that sounds like a boutique gym membership, which is faintly absurd and therefore delightful.
Industry context: who else is turning leaders into datasets and why the moment is now
Large cloud providers, AI model makers, and platform companies have made the primitives of this work cheap and accessible. Foundational models from major vendors plus off-the-shelf voice and fine-tuning services mean an internal engineer can stitch together a plausible executive mimic in days to weeks. The wider AI arms race among big tech and model providers accelerates this because the compute and tooling are now commodities.
Startups and studios are watching because the marginal cost of deploying an internal persona is low while the productivity upside for fast feedback loops is high. That combination is why CFOs are quietly asking whether to buy more GPUs instead of hiring more heads. (businessinsider.com)
The core mechanics: what engineers actually trained and when
Reports say the Dara AI was trained primarily on public materials such as interviews, earnings calls, and internal cues that replicate the CEO's tenor and priorities. Engineers tuned the model to anticipate common lines of questioning and to pressure-test slide decks, which makes the bot function as a rehearsal adversary rather than a PR tool. The project surfaced publicly on February 24, 2026, when the podcast aired and journalists picked up the story. (techcrunch.com)
Building this kind of mimic typically combines a large language model with curated prompt engineering and, where needed, a lightweight voice synthesis stack. The result is not perfect impersonation but a serviceable facsimile good enough to reveal weak arguments and sloppy slides. That pragmatic messiness is the point: it forces clarity.
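The core of such a mimic is less exotic than it sounds: most of the work is assembling a persona prompt from public material and handing it to whatever hosted chat model the team already uses. The sketch below shows that assembly step only; the executive name, priorities, and excerpts are invented placeholders, not Uber's actual data, and the API call itself is deliberately omitted.

```python
# Hypothetical sketch: composing a persona "system prompt" from public
# materials (interview transcripts, earnings-call excerpts). All names and
# text below are illustrative placeholders, not real training data.

def build_persona_prompt(name, priorities, excerpts, max_excerpt_chars=2000):
    """Compose a system prompt asking a chat model to respond in the
    style of a named executive, grounded in quoted public material."""
    quoted = "\n---\n".join(e.strip()[:max_excerpt_chars] for e in excerpts)
    lines = [
        f"You are a rehearsal stand-in for {name}.",
        "Interrogate the presenter's slide deck the way this executive would:",
    ]
    lines += [f"- Prioritize: {p}" for p in priorities]
    lines += [
        "Ground your tone in these public excerpts:",
        quoted,
        "Be blunt. Flag weak arguments, missing metrics, and vague asks.",
    ]
    return "\n".join(lines)

prompt = build_persona_prompt(
    name="Alex Example",  # placeholder executive, not a real person
    priorities=["unit economics", "safety metrics", "clear asks"],
    excerpts=["We grow where the math works.", "Show me the cohort data."],
)
# This string would be sent as the system message to whichever hosted
# chat-completion service the team uses; that call is out of scope here.
```

The design choice worth noting is that grounding on quoted public excerpts, rather than fine-tuning on private material, sidesteps some of the provenance questions raised later in this piece.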
A social media pull line that hits the feed
Employees rehearsing their boss in software feels like a future in which authority comes with a rehearsal mode and a changelog.
Practical implications for businesses with 5 to 50 employees, with real math
A five to 50 person firm can replicate this pattern without raising a board meeting. Assume a small agency runs 10 client pitches a month and each pitch currently costs five hours of prep time across team members. If an internal AI rehearsal tool reduces prep time to two hours, that saves three hours per pitch. At a blended labor rate of 60 dollars per hour, that is 180 dollars saved per pitch, or 1,800 dollars a month for 10 pitches.
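The back-of-envelope math above is simple enough to check in a few lines; all inputs are the illustrative figures from the example, not measured data.

```python
# Savings math from the agency example: 10 pitches/month, prep cut from
# 5 hours to 2 hours, at a blended labor rate of $60/hour.
pitches_per_month = 10
hours_before, hours_after = 5, 2
blended_rate = 60  # dollars per hour

saved_hours_per_pitch = hours_before - hours_after        # 3 hours
saved_per_pitch = saved_hours_per_pitch * blended_rate    # $180 per pitch
saved_per_month = saved_per_pitch * pitches_per_month     # $1,800 per month
print(saved_per_pitch, saved_per_month)  # 180 1800
```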
Building a private model for voice and persona could range from a few thousand to tens of thousands of dollars in initial engineering and monthly inference costs depending on usage. A cheaper path is a hosted persona tooling service that charges per 1,000 tokens; that converts the fixed cost into variable spend and makes pilots affordable. Those numbers make the ROI easy to test in one quarter.
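To see why the hosted, per-token path makes pilots cheap, it helps to put rough numbers on the variable spend. Every figure below is an assumption for illustration; substitute your vendor's actual pricing and your own usage before drawing conclusions.

```python
# Hypothetical cost model for a hosted, per-token persona service.
# All inputs are assumptions chosen to illustrate fixed-vs-variable cost.
price_per_1k_tokens = 0.01     # dollars; assumed blended input+output rate
tokens_per_rehearsal = 50_000  # assumed: a long deck plus several Q&A rounds
rehearsals_per_month = 30

monthly_spend = (rehearsals_per_month * tokens_per_rehearsal / 1000
                 * price_per_1k_tokens)
print(round(monthly_spend, 2))  # 15.0 dollars per month
```

Under these assumptions the variable spend is trivial next to the labor savings in the pitch example, which is the point: a pilot can fail cheaply.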
The cost nobody is calculating and the cultural ROI
Operational savings are obvious but cultural costs are subtle. When leadership is emulable, incentives shift from convincing people to aligning with a model's priors. That produces faster consensus in meetings but can also hollow out dissent. For creative firms, the loss of friction can be a tax on originality. The trick is to treat the AI persona as a rehearsal device, not the final gatekeeper, which requires active governance and rotation of human reviewers.
Risks that matter and open questions for governance
Legal exposure includes rights of publicity and consent if voices or likenesses are synthesized without clear authorization. There are also internal trust issues if employees weaponize persona models to game performance reviews or simulate approvals. The tech community is debating how to certify synthetic personas and whether training on nonpublic material creates liability. Companies should draft brief policies for model provenance and access control before any tool becomes a de facto executive proxy. (webpronews.com)
What cyberpunk culture will do with this besides write excellent fan fiction
Writers and designers will incorporate this new normal into narratives and product art. The image of a CEO whose responses can be forked and branched becomes a design motif for future UI explorations. Expect an uptick in speculative fiction and UX experiments that treat corporate personas like software releases, which is exactly the sort of grimly entertaining future a cyberpunk reader can applaud while sipping espresso.
Closing note on what small leaders should actually change tomorrow
Install a lightweight policy, run a two week pilot with one persona use case, and require a named human approver for any synthetic output used externally. That preserves speed without surrendering accountability.
Key Takeaways
- Small teams can pilot an internal executive persona cheaply and test ROI within one quarter using hosted models.
- Treat persona models as rehearsal tools, not decision makers, and enforce human approval gates.
- Legal and cultural costs are real; draft provenance and access policies before broad rollout.
- Cyberpunk sensibilities show this will reshape workplace optics as much as processes.
Frequently Asked Questions
How expensive is it to build an internal CEO-style rehearsal bot for a small company?
Costs vary widely, but a minimal pilot using hosted services can be done for a few thousand dollars in tooling and setup, with variable inference costs thereafter. Building an in-house fine-tuned model raises the upfront spending into the mid five figures and requires engineering time for safe deployment.
Will using a CEO clone save my team time on presentations?
Yes, a rehearsed AI adversary can eliminate common clarity issues and reduce slide prep. Practical pilots typically show time savings on the order of tens of percent, depending on how much revision was happening before.
Are there legal risks to cloning a manager’s voice or style?
Yes, rights of publicity, employment law, and consent issues can arise if voice or likeness is synthesized without authorization. Document consent and keep models private to lower legal exposure.
Can a small company rely entirely on persona models for approvals?
No, dependence on a synthetic proxy for final approvals creates governance blind spots. Always require a human sign off for decisions that carry legal or financial weight.
What governance steps should a 10 person startup take first?
Adopt a short policy that covers dataset provenance, access control, and an approval workflow. Run a two week pilot, measure time saved, and assign a human owner to oversee the model’s outputs.
Related Coverage
Readers interested in how AI changes organizational behavior may want to explore reporting on worker agentization, synthetic media regulation, and how creative teams are adapting generative tools for concepting and iteration. Coverage of AI ethics boards and tooling for model provenance will also be useful for teams planning pilots on a budget.
SOURCES:
- https://www.businessinsider.com/uber-employees-use-ai-clone-ceo-prepare-meetings-presentations-2026-2
- https://techcrunch.com/2026/02/24/uber-engineers-built-ai-version-of-boss-dara-khosrowshahi/
- https://tech.yahoo.com/ai/articles/uber-employees-created-ai-clone-200000586.html
- https://www.ndtv.com/feature/uber-ceo-reveals-employees-are-using-his-ai-clone-before-approaching-him-11137875
- https://www.webpronews.com/uber-employees-built-an-ai-clone-of-its-ceo-to-rehearse-presentations-and-its-brutally-honest/