Supreme Court Poised to Reject Thaler’s Latest Bid to Copyright an AI Image
After a long legal run, the high court is likely to leave intact a line of rulings saying machines cannot be authors under current law.
A lawyer closes a folder, a clerk stamps a file, and an artist refreshes an image that an algorithm spat out three minutes earlier. The human in the room feels oddly exposed: did more of the credit go to the coffee or to the code? That tension underpins the latest chapter in Dr. Stephen Thaler’s yearslong effort to have a machine recognized as an author for a work generated autonomously by software he built. The scene belongs in a courtroom drama, except the principal actor is a server rack, and the plot is statutory interpretation rather than melodrama.
The plain reading of recent court decisions is the obvious takeaway: U.S. law still treats authorship as a human-only status. The less obvious and more consequential angle is how that narrow ruling shifts the battleground to design, contracts, and product strategy for AI companies that monetize generative output. That is the story that matters to entrepreneurs, platform operators, and legal teams, because ownership rules will now be written more in terms of corporate governance than court opinions.
What the courts have actually done and why the Supreme Court is paying attention
A federal appeals panel in Washington recently affirmed that the Copyright Act requires human authorship, rejecting Dr. Thaler’s registration for the AI-created image titled A Recent Entrance to Paradise. This ruling follows an earlier district court denial and sits alongside a comparable Federal Circuit decision that machines cannot be named inventors on patents. Summaries of the appellate opinion and its reasoning are available via legal analysts at Crowell & Moring. (crowell.com)
Where the petition stands right now
Thaler filed a petition asking the Supreme Court to review the D.C. Circuit’s decision, arguing that the Copyright Act’s plain text does not expressly require a human author. The petition was docketed and distributed for a conference where justices consider whether to take the case. The procedural timeline and filings are tracked on SCOTUSblog, which shows the petition, briefs, and conference scheduling. (scotusblog.com)
How the federal government weighed in and what that signaled to the court
The Department of Justice filed a brief urging the Supreme Court to deny review, arguing that the appellate court’s human authorship reading is correct and consistent with the statute’s structure and purpose. The DOJ brief frames the dispute as narrow: the Copyright Office will still register works created with human expressive input even when AI tools are used. That is exactly the sort of administrative backstop the court finds comforting, which roughly translates to: leave this to specialists and Congress. IPWatchdog covered the government brief and its practical emphasis on statutory coherence. (ipwatchdog.com)
Why this ruling matters more to product teams than it does to philosophers
If the Supreme Court leaves the lower-court ruling in place, platforms and developers cannot rely on registration to lock down exclusive rights for output that was produced without meaningful human creative input. That does not ban commercial use of AI output; it just means that the easiest route to exclusive enforcement is unavailable. For companies building generative-image marketplaces, the calculus changes: legal exclusivity must come from contracts, licensing terms, or technical provenance systems rather than statutory copyright registration.
Consider a midmarket stock image company that sells exclusive licenses at $300 per image. If 10 percent of its catalog shifts to purely AI-generated content and those images cannot be registered, potential infringement recoveries shrink and insurance premiums go up. A conservative back-of-the-envelope calculation: in every 1,000-image batch, that is 100 images at $300 apiece, or $30,000 of top-line revenue where exclusivity becomes harder to police, not counting legal overhead. That is the kind of arithmetic that turns policy into business decisions, and yes, someone in the finance team will now ask whether the machine needs an employment contract. The short answer is no, but the question deserves the laugh. (Also, maybe buy better coffee for the on-call engineer.)
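The back-of-the-envelope above can be made explicit in a few lines. This is a purely illustrative sketch using the article's hypothetical figures; none of the numbers are real company data.

```python
# Illustrative revenue-at-risk calculation for a catalog where a slice of
# exclusive-license images becomes unregistrable. Figures mirror the
# article's hypothetical ($300/license, 10% AI-generated share).

def revenue_at_risk(batch_size: int, ai_share: float, price_per_license: float) -> float:
    """Top-line revenue tied to images whose exclusivity is harder to police."""
    affected_images = batch_size * ai_share
    return affected_images * price_per_license

exposure = revenue_at_risk(batch_size=1_000, ai_share=0.10, price_per_license=300.0)
print(f"Revenue at risk per 1,000-image batch: ${exposure:,.0f}")  # prints $30,000
```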
Legal certainty about who owns what in generative systems will drive whether companies compete on licensing, on exclusivity, or on trust.
What this decision leaves unresolved for creators and AI vendors
Courts so far have not set a bright line for how much human contribution is enough to claim authorship. The Copyright Office’s January 2025 report said outputs of generative AI can be protected only when a human has determined sufficient expressive elements, a practical standard that leaves room for interpretation in close cases. That report will be the reference point for future filings and agency practice. (newsroom.loc.gov)
Why startups and platforms should retool contracts and UI flows now
For companies that sell or license generative content, the defensible move is contract work. Terms of service, contributor agreements, and clearly documented prompt histories will become primary evidence of human authorship. Platforms should also implement verifiable logs that show a named person curated, selected, or substantially edited an output. Those metadata trails can be cross-referenced in a registration application to show human creative input rather than relying on novel statutory readings.
A concrete scenario: a design studio charges $5,000 for a custom campaign and documents two hours of iterative prompt engineering plus 20 minutes of Photoshop-level edits by a named artist. That documentation is likely sufficient to claim the result as authored by a human for registration purposes, converting a product that would otherwise be unenforceable into one where statutory remedies are available. Keep the logs; assume a judge will ask for them during discovery. If this sounds like extra bookkeeping, that is because it is.
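The kind of verifiable, named-person log described above can be sketched simply. The schema here is hypothetical, invented for illustration; no registry or agency prescribes this format, and the hashing step is just one common way to make a log tamper-evident.

```python
# Hypothetical provenance log: timestamped events tying a named human's
# prompts, selections, and edits to a specific generated asset.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class ProvenanceEvent:
    actor: str    # named human contributor
    action: str   # e.g. "prompt", "select", "edit"
    detail: str   # prompt text or edit description
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ProvenanceLog:
    asset_id: str
    events: list = field(default_factory=list)

    def record(self, actor: str, action: str, detail: str) -> None:
        self.events.append(ProvenanceEvent(actor, action, detail))

    def digest(self) -> str:
        """SHA-256 of the serialized log, usable as a tamper-evidence anchor."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

log = ProvenanceLog(asset_id="campaign-042")
log.record("J. Rivera", "prompt", "moody dusk palette, third iteration")
log.record("J. Rivera", "edit", "20 min of manual retouching")
print(len(log.events), log.digest()[:12])
```

Persisting the digest alongside the asset lets a later registration application point to a record that was fixed at creation time rather than reconstructed during litigation.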
The cost nobody is calculating yet
Investors often prize speed and scale in AI models, but this legal posture means companies must allocate engineering resources to provenance, compliance, and contract tooling. A single senior engineer could cost $200,000 a year; building a robust provenance and rights-management system could require two to three engineers for 6 to 12 months, a notional upfront cost of $300,000 to $600,000 plus product and legal integration. That expense is a direct tax on the business model for companies that had planned to monetize unregistered, purely machine-generated works at scale.
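The cost envelope above is simple to parameterize. This sketch uses the article's notional figures (a $200,000 fully loaded engineer; the stated $300,000 floor corresponds to roughly three engineers for six months) and is not a real staffing model.

```python
# Notional build-out cost for a provenance and rights-management system,
# using the article's figures: ~$200k/year per senior engineer.
ANNUAL_COST = 200_000  # article's per-engineer estimate, fully loaded

def build_cost(engineers: int, months: int, annual_cost: float = ANNUAL_COST) -> float:
    """Upfront engineering cost, excluding product and legal integration."""
    return engineers * annual_cost * (months / 12)

low = build_cost(engineers=3, months=6)    # $300,000
high = build_cost(engineers=3, months=12)  # $600,000
print(f"${low:,.0f} to ${high:,.0f}, plus product and legal integration")
```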
Risks and open questions that will keep litigators busy
The biggest legal unknown is where courts will draw the line between prompt engineering that is "sufficiently creative" and mere instructions. Another open question is whether Congress will intervene with a statutory fix, although legislative timelines are slow relative to industry pace. Also unresolved is whether foreign regimes with different authorship doctrines will create arbitrage that firms can exploit for global licensing. The debate is moving faster than federal statutes can follow, which is why risk management matters more than doctrinal purity.
A practical way forward for legal teams and product leaders
Legal teams should start by mapping every product flow that produces generative content and labeling whether a human is the author under the Copyright Office’s guidance. Product leaders should prioritize metadata capture, user-facing prompts that secure assignment or license rights, and clear contributor agreements. That combined approach buys commercial certainty and avoids relying on a single unpredictable judicial outcome.
What happens next
If the Supreme Court declines review it will effectively leave the D.C. Circuit’s human authorship rule in place as the controlling law for the near term. That result pushes the market to rely on contracts, documentation, and product controls to manage rights in generative outputs. Courts, regulators, and Congress will continue to tinker with the doctrine, but product teams cannot afford to wait for them.
Key Takeaways
- If the Supreme Court leaves the lower-court ruling intact, purely AI-created works remain ineligible for registration under current law.
- The practical consequence is a shift from statutory copyright enforcement to contracts and provenance systems as the primary means of commercial control.
- Businesses should invest in metadata capture and clear assignment or licensing agreements to protect commercially valuable outputs.
- Expect litigation focused on how much human creativity in prompts or edits suffices for copyright protection.
Frequently Asked Questions
What does it mean if the Supreme Court denies review of Thaler’s petition?
A denial leaves the D.C. Circuit decision in place as binding precedent in that circuit and a strong signal that courts elsewhere will follow the Copyright Office's human-authorship approach in similar cases. It does not create new statutory law, but it makes litigation and business responses more predictable.
Can a company still claim ownership of AI outputs through contracts?
Yes. Contracts, license agreements, terms of service, and documented assignment clauses remain fully operative and are often the strongest tool for commercial exclusivity for AI-generated material. Those private arrangements can provide remedies even when statutory registration is unavailable.
How much human involvement is needed for registration?
There is no bright line yet; the Copyright Office and courts evaluate whether a human determined expressive elements in the final work. Substantial edits, selection, arrangement, or creative integration typically strengthen a registration claim.
Will Congress fix this with new law?
Legislative change is possible but uncertain and likely slow. Meanwhile, businesses should assume current administrative guidance and court rulings will govern operations for the next several years.
Should startups pause AI feature rollouts because of this?
No. Pause is not necessary, but product and legal strategy must include rights management, provenance capture, and clear user agreements to avoid downstream enforcement problems.
Related Coverage
Readers interested in the policy side should follow developments at the Copyright Office's AI initiative and upcoming litigation that tests human contribution thresholds. Coverage of patent law's parallel fight over AI inventorship also offers useful lessons for structuring ownership in lab and R&D environments.
SOURCES: https://www.scotusblog.com/cases/case-files/thaler-v-perlmutter/, https://ipwatchdog.com/2026/01/28/doj-urges-supreme-court-deny-cert-thalers-latest-bid-copyright-work-created-ai/, https://www.crowell.com/en/insights/client-alerts/dc-circuit-rejects-copyrightability-of-artwork-created-autonomously-by-ai, https://newsroom.loc.gov/news/copyright-office-releases-part-2-of-artificial-intelligence-report/s/f3959c36-d616-498d-b8f9-67641fd18bab, https://arstechnica.com/tech-policy/2023/08/us-judge-art-created-solely-by-artificial-intelligence-cannot-be-copyrighted/