Poll: Two-thirds of New Yorkers use AI chatbots, even though many do not trust the results
How a superficial convenience boom is forcing product teams, regulators, and enterprise buyers to rethink what “adoption” actually means for AI
A subway commuter in Midtown asks a chatbot whether a lingering cough requires a doctor and gets a confident paragraph that sounds authoritative enough to take the place of a call to a clinic. Across town, a nonprofit communications director runs the same tool to rewrite a grant pitch and files the result as a first draft. These are everyday scenes that the numbers in a new New York poll try to capture, and they feel true in a way that makes executives both excited and vaguely defensive.
The obvious reading of the poll is simple: people will use tools that make immediate tasks easier, even when they know the tools are imperfect. The subtler business implication is less discussed but more consequential: when adoption and trust diverge, product roadmaps, liability models, and monetization strategies must be designed for an audience that treats AI like a fast but flaky assistant rather than a trusted oracle. This gap is where the industry will either win predictable value or create costly missteps for clients.
What the poll actually measured and why it matters now
The Siena Research Institute found that 67 percent of New Yorkers say they have used AI chatbots, with 44 percent using AI at least weekly and nearly half reporting increased use since last year. The survey was conducted March 3 to March 14, 2026, and highlights a city-sized experiment in real time. The headline number matters because New York is both a large consumer market and a bellwether for industries that depend on dense urban usage patterns. (Source: Siena Research Institute)
Why the trust gap should make product teams change their playbook
Most products are built assuming that adoption implies endorsement. That assumption breaks down when two-thirds of users log in but many prefer search engines or human advice for verification. Product managers should stop optimizing only for time saved and start measuring how many recommendations are verified by users and how often the tool’s output is corrected or ignored. That is less glamorous to present in quarterly results, but it is the metric that determines retention and enterprise liability.
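Those verification and correction measurements can be operationalized in a few lines. The sketch below uses an illustrative logging schema; none of these field names come from the poll or any particular analytics product, they simply show what "measure how output is treated, not just whether it is used" looks like in practice:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One chatbot exchange, as a product analytics team might log it (illustrative schema)."""
    verified_externally: bool   # user cross-checked via search, documentation, or a human
    corrected_or_ignored: bool  # user edited the answer or abandoned it

def trust_metrics(log: list[Interaction]) -> dict[str, float]:
    """Measure how users treat the output, not just whether they used the tool."""
    n = len(log)
    return {
        "verification_rate": sum(i.verified_externally for i in log) / n,
        "correction_rate": sum(i.corrected_or_ignored for i in log) / n,
    }

# Four hypothetical interactions: heavy use, mixed trust.
log = [
    Interaction(verified_externally=True, corrected_or_ignored=False),
    Interaction(verified_externally=True, corrected_or_ignored=True),
    Interaction(verified_externally=False, corrected_or_ignored=False),
    Interaction(verified_externally=False, corrected_or_ignored=True),
]
print(trust_metrics(log))  # both rates are 0.5 in this toy log
```

A rising verification rate alongside flat usage is exactly the adoption-without-endorsement pattern the poll describes, and it is visible only if the product logs it.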
How competitors will respond and what to watch for
Major platform vendors will pursue two strategies in parallel: improve quality with larger models and guard against reputational risk with transparency features and citations. Firms like OpenAI, Google, Anthropic, and Microsoft are all chasing the same user behavior, but they differ on where they place trust signals and how they integrate human oversight. Expect feature wars around source attribution, confidence scores, and easy escalation to human agents. The result will be a sharper product-market fit for vendors who solve verification rather than simply increasing raw capability.
Numbers that anchor the debate
The broader trend is consistent with national surveys showing modest everyday use of chatbots for specific tasks and limited reliance for news and critical information. Few Americans report getting news regularly from chatbots, and a sizable share of users say the tools make it hard to know what is true. Those patterns mean urban adoption in New York is not an outlier but a concentrated instance of a national dilemma. (Source: Pew Research Center)
Health advice is a pressure test
When people choose convenience over official channels, stakes rise. Polling from health research groups shows most people who use AI for medical information remain skeptical of its accuracy, and recent reporting documents cases where chatbots provided incorrect medical or voting information. The combination of heavy use and low trust in high-risk domains creates an industry problem that product and compliance teams can no longer delegate to PR. (Sources: KFF and AP News)
Adoption without trust is a high-velocity user behavior that exposes platform weaknesses faster than any synthetic benchmark.
Concrete scenarios businesses should run today
Imagine a regional retail chain with 100,000 customers in New York. If 67 percent of those customers try a chatbot for simple order questions, roughly 67,000 customers may interact with AI at least once. If 5 percent of those chatbot-using customers contact support each month, that creates 3,350 monthly support interactions, and if an AI resolves 60 percent of those correctly the company avoids 2,010 human-handled tickets. If a human ticket costs about 20 dollars to handle, that is a nominal monthly saving of about 40,200 dollars before accounting for bot maintenance, monitoring, and mistake recovery. Those headline savings look good until a handful of misresolved tickets cause chargebacks or regulatory complaints, at which point the math flips quickly. This is budget modeling, not wishful thinking.
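The arithmetic in that scenario can be sanity-checked in a few lines. Every input is an assumption taken from the text, and the error-rate and failure-cost figures at the end are purely illustrative placeholders, not poll data:

```python
# Back-of-envelope model for the hypothetical retail chain above.
customers = 100_000
chatbot_adopters = round(customers * 0.67)        # 67,000 customers try the chatbot
monthly_tickets = round(chatbot_adopters * 0.05)  # 3,350 monthly support interactions
bot_resolved = round(monthly_tickets * 0.60)      # 2,010 tickets the bot handles correctly
cost_per_human_ticket = 20                        # dollars per human-handled ticket

gross_saving = bot_resolved * cost_per_human_ticket
print(gross_saving)  # 40200

# The flip side: even a small error rate with expensive failures erodes the margin.
# Both figures below are illustrative assumptions, not measured values.
bot_error_rate = 0.03        # assumed share of bot-handled tickets resolved wrongly
cost_per_failure = 250       # assumed chargeback / remediation cost per bad resolution
failure_cost = round(bot_resolved * bot_error_rate) * cost_per_failure
net_saving = gross_saving - failure_cost
print(failure_cost, net_saving)  # 15000 25200
```

Under these assumptions, sixty bad resolutions a month wipe out more than a third of the gross saving, which is the "math flips quickly" point in concrete terms.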
Regulatory pressure New York is already signaling
New York regulators have publicly warned that chatbots can produce inaccurate answers for civic processes, and state offices have tested common tools with discouraging results. That oversight is a prompt for vendors to bake audit logs, provenance, and safe fallback paths into enterprise offerings rather than treat them as optional extras. Expect compliance teams to demand incident-response SLAs and model documentation from providers when negotiating contracts in the next 12 to 24 months. (Source: CNBC reporting on New York Attorney General findings)
Risks and open questions that should keep boards awake
The central risks are legal liability for bad advice, brand damage from repeat misinformation, and the cost of overengineered verification that kills margins. Another open question is whether users will shift from generalist chatbots to niche, certified domain assistants for finance, legal, and medical queries. That market bifurcation would reward firms that invest in curated knowledge bases and human-in-the-loop workflows. Also, the user behavior curve is unpredictable; just because New Yorkers try tools does not mean they will pay for reliable versions. That will make pricing strategy both delicate and interesting, like raising the rent on a rent-controlled apartment without offending the tenant.
Where this could lead next for the AI industry
Vendors who accept that adoption is not the same as trust will design products for verification, institutional integration, and accountable failure modes. Those features will become the premium that enterprises pay for, and the companies that delay will find monetization harder than expected.
Key Takeaways
- Two-thirds of New Yorkers have used AI chatbots, but significant portions still trust traditional sources more for critical information.
- Product roadmaps must prioritize verification features and escalation paths to human experts to convert casual use into commercial value.
- Short-term cost savings from automation can be offset by liability and brand risks unless provenance and monitoring are built in.
- Regulatory action in New York will accelerate demand for auditable AI features and tighter vendor contracts.
Frequently Asked Questions
How worried should a mid-size company be about customers using chatbots for support?
A mid-size company should prepare for meaningful traffic diverted to chatbots and plan for monitoring and escalation. Treat chatbots as triage tools and measure resolution accuracy; plan contingencies for chargebacks and error remediation.
Can adding source citations to AI responses fix the trust problem?
Citations help but are not a panacea; they reduce friction for verification but require the linked sources to be reliable and current. Businesses should combine citations with clear confidence indicators and easy escalation to live agents.
Will regulators force companies to stop using chatbots for sensitive tasks?
Regulation is likely to impose standards, not blanket bans, focusing on auditable processes and consumer protections for high-risk domains. Expect requirements for transparency, record keeping, and human oversight rather than an outright prohibition.
Should enterprises build their own chatbots or buy from major cloud providers?
The choice depends on data sensitivity and the need for customization; building offers control and provenance but costs more, while buying provides speed and scale but raises questions about shared model behavior. Contracts should explicitly cover liability, update cadence, and access to model performance logs.
How can small businesses monetize improved trust in chatbot outputs?
Small businesses can monetize by offering premium verified responses, white glove human review services, or subscription access to domain-certified assistants. Price these services based on reduced error rates, faster resolution times, and demonstrable customer retention gains.
Related Coverage
Readers interested in how enterprise contracts are changing with AI should explore negotiations over liability clauses and model transparency. Coverage of domain-specific assistants in healthcare and finance will show which industries are moving fastest to paid, verified AI services. Follow reporting on the intersection of AI usability and consumer protection for the clearest signals of where vendors must invest.
SOURCES: https://sri.siena.edu/2026/04/14/ny-split-on-pros-and-cons-of-ai-by-43-37-nyers-say-disadvantages-are-too-great/, https://www.pewresearch.org/short-reads/2025/10/01/relatively-few-americans-are-getting-news-from-ai-chatbots-like-chatgpt/, https://www.kff.org/health-information-and-trust/press-release/poll-most-who-use-artificial-intelligence-doubt-ai-chatbots-provide-accurate-health-information/, https://www.cnbc.com/2024/11/01/ai-chatbots-arent-reliable-for-voting-questions-government-officials.html, https://apnews.com/article/0ea249aa0db3fa351efa2a76af3a2348