Larian Will Continue Using Generative AI but Not for Art or Writing
A studio that makes one of the most celebrated role-playing games in a generation just told worried fans it will not ship AI-made art or dialogue, and yet it will keep experimenting with the same tools that caused the uproar.
A concept artist looks at a screen full of rough images and says aloud that none of this will be in the game. A writer rolls their eyes, not because the placeholder text is bad, but because the iteration that makes a line sing takes human labor. That tension played out publicly when Larian Studios answered angry questions from fans and former employees in a January 9 2026 Reddit AMA, and then tried to translate those answers into a clear policy. The obvious reading is that the studio backed down to placate outrage; the overlooked fact is that Larian has sketched a middle road that matters for how the AI industry sells tools to creators and to enterprises choosing where to invest in models and data governance.
Larian’s move looks like reputation management on the surface, but beneath it is a test case for operational limits on generative AI in creative production. The company made two simple promises: no generative AI will produce final concept art for Divinity, and no AI-written dialogue will appear in the game. This reassurance was posted publicly after a flurry of reporting and community reaction. (theverge.com)
Why this decision matters beyond fandom
Most companies respond to backlash by issuing vague reassurances. Larian did not. The studio tied its policy to specific production milestones and to the provenance of training data, signaling a practical framework for when AI can be allowed in a content pipeline and when it cannot. That approach forces AI vendors and platform teams to answer a real question: can models be constrained to proprietary, fully owned training sets and audited outputs at scale? If the answer is no, enterprises will either build in house or avoid the tech entirely. (80.lv)
The competitive landscape game studios are watching
Big publishers such as EA and Take-Two have publicly embraced widespread AI pilots, while some indie studios and publishers have banned AI use outright. Larian’s choice puts it in the middle ground and raises a commercial pressure point for toolmakers like Unity, Epic, and the model-hosting vendors: provide on-premises training options or lose customers who insist on data ownership. Industry observers will watch how this affects licensing, model fine tuning, and contract language around intellectual property.
What Larian actually said and when it happened
The initial public spark came from an interview in mid December 2025 where CEO Swen Vincke described experiments with generative tools across departments, naming uses such as idea exploration and placeholder material. Coverage noted pushback and faster than expected escalation to a detailed AMA. GameSpot reported the original comments on December 16 2025 and tracked the follow up. (gamespot.com)
In the January 9 2026 AMA the studio narrowed the line: “There is not going to be any GenAI art in Divinity,” Vincke wrote, and writing director Adam Smith said that AI-generated text had scored a 3 out of 10 in their trials, too low for production. That blunt assessment framed the studio’s final public policy. (pcgamer.com)
The core story the AI industry should care about
Larian’s position is not a technology rejection. It is a governance model. The company will still use generative AI to speed up ideation, to clean up certain technical artifacts, and to run internal experiments, but it will not let those artifacts become creative outputs without explicit provenance and ownership of the training material. That conditional stance forces model providers to offer stronger audit trails and tooling for customers that want absolute control over training data and lineage. (windowscentral.com)
Larian’s compromise is a practical demand: if you want studios to rely on your models for content, show the receipts for every pixel and sentence.
The cost nobody is calculating
For studios that insist on proprietary training, the math shifts dramatically. Hosting a multimodal model suitable for high fidelity art or narrative generation internally can cost 500,000 to 2,000,000 dollars in one-time engineering and hardware setup, and then 50,000 to 200,000 dollars per month for inference and maintenance, depending on usage. If a studio runs AI for ideation only, costs can drop to a few thousand dollars per month by using lightweight models and batching jobs, but the trade-off is lower quality and more human iteration. For publishers deciding whether to buy third party services or to build, Larian’s stance means comparing total cost of ownership against legal risk and reputational risk, not just model accuracy.
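Those ranges imply a simple build-versus-ideation comparison. The sketch below uses the article's illustrative figures over a hypothetical three-year horizon; the 5,000-dollar monthly ideation figure and the horizon itself are assumptions, not vendor quotes.

```python
# Rough total-cost-of-ownership sketch for the options discussed above.
# All figures are illustrative ranges from the text, in US dollars;
# the three-year horizon and ideation-only monthly cost are assumptions.

def tco(one_time: float, monthly: float, years: int) -> float:
    """Total cost of ownership over a planning horizon."""
    return one_time + monthly * 12 * years

# In-house multimodal model: low and high ends of the quoted ranges.
inhouse_low = tco(one_time=500_000, monthly=50_000, years=3)    # 2,300,000
inhouse_high = tco(one_time=2_000_000, monthly=200_000, years=3)  # 9,200,000

# Ideation-only: lightweight hosted models, "a few thousand per month."
ideation = tco(one_time=0, monthly=5_000, years=3)  # 180,000

print(f"in-house: {inhouse_low:,.0f} to {inhouse_high:,.0f}")
print(f"ideation-only: {ideation:,.0f}")
```

Even at the low end, the in-house path costs more than ten times the ideation-only path over the same horizon, which is why the legal and reputational risk columns, not model accuracy, tend to decide the build-versus-buy question.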
Practical implications for businesses and tool vendors
Medical and legal customers already demand auditable models; creative firms will follow. For any company that relies on creative output, a practical deployment pattern emerges: use small, cheap models for riffing and concept boards; keep artists and writers for ownership and final output; invest in a model ops platform that records data provenance and generates immutable logs for each generated asset. For example, a mid-sized studio that replaces two junior concept artists with AI will save about 120,000 dollars per year in salaries but will incur at least 60,000 dollars per year in additional cloud and compliance costs while taking on legal risk if training provenance is unclear. The financial math is not just salary versus compute; it is insurance, PR, and possible litigation.
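The "immutable logs for each generated asset" piece of that pattern can be sketched as a hash-chained, append-only record: each entry names the model and training set behind an asset, and editing any past entry breaks verification. The class and field names below are hypothetical, not any shipping model-ops product's schema.

```python
# Minimal sketch of an append-only provenance log for generated assets.
# Assumption: each record captures the model, its training-set identifier,
# and a SHA-256 hash chain that makes after-the-fact edits detectable.
import hashlib
import json
import time

class ProvenanceLog:
    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, asset_id: str, model_id: str, training_set_id: str) -> dict:
        """Append one provenance entry and chain it to the previous one."""
        entry = {
            "asset_id": asset_id,
            "model_id": model_id,
            "training_set_id": training_set_id,
            "timestamp": time.time(),
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self._records.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited record breaks verification."""
        prev = "0" * 64
        for entry in self._records:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = ProvenanceLog()
log.record("concept_042.png", "studio-model-v1", "owned-art-2025")
print(log.verify())  # True; mutate any recorded field and it returns False
```

This is the shape of the "exportable audit log" studios are starting to ask vendors for: a studio can hand the record list to an auditor, who re-verifies the chain without trusting the vendor's database.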
The risks and open questions that remain
The severity of downstream harms is unclear. Proprietary training reduces one class of legal exposure but does not eliminate bias, hallucination, or subtle copying of artist style. Human oversight also scales poorly: in practice, 10 to 20 percent of generated stubs are discarded, creating hidden waste. There is also a people risk: casual experimentation can erode trust among creative staff if usage is perceived as surveillance or premature replacement. This is a labor relations problem disguised as a technology debate.
Why now is different
Generative model capacity, licensing pressure, and a wave of high profile disputes over scraped training data converged in late 2025, forcing vendors to offer more controlled options and studios to formalize policies. Larian’s public clarity is important because it creates a template for conditional adoption that other creative industries can reference when negotiating enterprise contracts. The question for the AI industry is whether vendors will meet that demand or keep selling convenient, opaque APIs.
A short forward look for product leaders
Expect a market bifurcation: one set of tools aimed at rapid, opaque creativity for consumer apps, and a second set for regulated creative production built around provenance, on-prem training, and granular auditing. The latter will be smaller in unit sales but higher in contractual value to studios that demand evidence and control.
Key Takeaways
- Larian will not ship AI-generated art or dialogue in Divinity and framed its policy publicly on January 9 2026. (theverge.com)
- The studio still uses generative AI for ideation and internal tooling, creating demand for provenance and in house training options. (80.lv)
- Writing director Adam Smith judged AI text experiments as scoring a 3 out of 10, underscoring current quality limits for narrative use. (pcgamer.com)
- Vendors that provide auditable models and train on customer data will win enterprise deals in creative industries, but at higher implementation cost. (gamespot.com)
Frequently Asked Questions
Will Larian ever use AI generated art in future games?
The company left the door open to AI-generated in-game assets only if models are trained exclusively on data the studio owns and with transparent provenance. That creates a higher barrier than using third party APIs.
Does this mean AI is useless for game development?
No. Larian and other studios find value in AI for ideation, automation of routine technical tasks, and rapid experimentation. The key difference is whether those outputs are finalized into shipped creative content.
What should tool vendors change after this story?
Vendors should add features for data lineage, customer owned training pipelines, and exportable audit logs so studios can prove where training material came from and demonstrate control.
How will this affect artists and writers?
Short term, it preserves jobs that produce final assets. Medium term, it raises expectations for skills that combine creative craft with model literacy, so training budgets will shift toward hybrid tooling skills.
Is this a PR move or a durable policy?
The AMA and follow ups included technical caveats and explicit governance language, which suggests a durable policy rather than a single PR line. Time will test whether internal practice matches public words.
Related Coverage
Readers interested in how enterprise buyers will demand model provenance should look for reporting on model ops platforms and legal cases about training data consent. Coverage of other studios that have banned AI entirely will help contrast outcomes for talent retention and product speed. Finally, follow pieces on how middleware vendors are adapting to provide on-prem model training and auditing tools.
SOURCES:
- https://www.theverge.com/games/859551/baldurs-gate-3-larian-studios-gen-ai-concept-art-reddit-ama
- https://www.pcgamer.com/games/rpg/larians-head-writer-has-a-simple-answer-for-how-ai-generated-text-helps-development-it-doesnt-thanks-to-its-best-output-being-a-3-10-at-best-worse-than-his-worst-drafts/
- https://www.windowscentral.com/gaming/larian-ceo-swen-vincke-says-it-isnt-using-generative-ai-for-divinity-art-anymore-but-its-still-experimenting-with-it
- https://80.lv/articles/larian-says-it-ll-keep-using-generative-ai-but-not-for-art-or-writing
- https://www.gamespot.com/articles/baldurs-gate-divinity-dev-reveals-how-it-uses-generative-ai/1100-6537001/