Why are enterprises still hesitant to adopt AI? Tech Mahindra’s Kunal Purohit answers
What looks like a rush to embrace AI is often a cautious shuffle in the boardroom. Here is what leaders are actually worrying about.
A chief technology officer at a mid-market bank scrolls through dashboards that glow with pilot results and bright charts. The pilots look promising, the vendor demos sparkle, and the C-suite has asked for a roadmap by next quarter. Two months later the vendor is on month six of integration, the pilot data is not compatible with live streams, and the roadmap is back in the drawer. That scene is more common than any press release about a new model.
The obvious interpretation is that enterprises need better models and more cloud credits. The less visible reality is that organizations are struggling with operating models and governance that were not designed for intelligent, agentic systems, and that gap is the real brake on adoption. Tech Mahindra executive Kunal Purohit framed the issue around human and system readiness during a keynote at Techspectations 2026, arguing that AI can accelerate work but only if enterprises fix fundamentals first. Onmanorama. (onmanorama.com)
The industry backdrop most leaders are skipping over
Cloud providers and model makers push endless productivity narratives while system integrators warn about integration bills. The competitive set now includes Microsoft, Google, OpenAI, and major consulting houses that package AI into business outcomes, but adoption is not a single technology decision. It is a reengineering of data flows, compliance, and roles. Tech Mahindra’s own research and commentary frames the dilemma as less about model accuracy and more about enterprise operating models that cannot absorb autonomous agents. Tech Mahindra. (techmahindra.com)
Why pilot success rarely translates into enterprise value
Many pilots are run in curated environments with hand patched data pipelines and a small circle of enthusiasts. Once the pilot tries to run across regions and real product lines, the seams split. Consulting firms have documented that only a small fraction of organizations have successfully scaled AI beyond isolated pilots, and that measurable enterprise value concentrates where leadership aligns investment with operating redesign. Boston Consulting Group. (bcg.com)
Kunal Purohit’s five practical barometers for readiness
Purohit’s address distilled what practitioners already feel in their bones. First, AI must be grounded in organizational context. Second, outcomes need near-deterministic behavior to build trust. Third, security and role reimagination are mandatory. Fourth, data quality must be an ongoing process. Fifth, sustained change management is non-negotiable. These are not slogans. They are operational requirements that materially shift project cost and timeline assumptions. Onmanorama. (onmanorama.com)
AI will compress repetitive execution and elevate engineering judgement.
The math that CFOs and CIOs should stop skipping
Imagine a pilot that shows a 40 percent improvement in a single invoice processing task. That looks great until integration adds 200 percent to the project budget and data harmonization requires a six-person team for 12 months. Leaders who reallocate 3 to 5 percent of revenue to AI and align cross-functional ownership see different outcomes than those who treat AI as an IT line item. Companies that scale AI deliberately invest at materially higher levels, which correlates with measurable value creation across units. BCG. (bcg.com)
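That back-of-envelope math is worth writing down. The sketch below uses the 40 percent improvement and the 200 percent integration overrun from the example; the task cost, pilot budget, and team cost are hypothetical figures chosen only to illustrate the shape of the calculation.

```python
# Hypothetical back-of-envelope check for the scenario above. Only the
# 40 percent improvement and 200 percent integration overrun come from
# the article; every dollar figure is illustrative.

def pilot_net_value(annual_task_cost, improvement, pilot_budget,
                    integration_multiplier, team_cost_per_year):
    """First-year savings minus integration and staffing costs."""
    savings = annual_task_cost * improvement
    total_cost = (pilot_budget * (1 + integration_multiplier)
                  + team_cost_per_year)
    return savings - total_cost

# A 40% gain on a $2M/year invoice process, a $500k pilot whose budget
# grows 200% in integration, and a six-person data team at ~$900k/year:
net = pilot_net_value(2_000_000, 0.40, 500_000, 2.0, 900_000)
# net is -$1.6M in year one: the pilot "win" stays underwater until
# integration and staffing costs amortize.
```

The point of the exercise is not the exact numbers but the structure: integration and staffing enter the equation at full weight, while the pilot's headline improvement applies only to the task it actually touched.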
A blunt example: a retailer with 1 to 2 million transactions a day cannot rely on a pilot that handled 50,000 transactions and a manually scrubbed dataset. The incremental cost to get the model into live throughput is not marginal. Expect infrastructure and governance, not model training, to generate most of the bill. One hopes the CFO likes surprises, but history shows they prefer fewer of them. Dry aside: nobody ever built a budget out of optimistic vendor slides and wishful thinking.
Security, ethics, and accountability are not checkbox items
Purohit warned that AI agents need defined roles and authentication because an agent with unchecked data access is an operational risk. This is a technical point and a legal one. Organizations must design identity and access policies that assume agents will act at scale, and they must instrument every output for provenance and auditability. That work sits squarely in the intersection of security engineering and legal compliance, not marketing. Onmanorama. (onmanorama.com)
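What "defined roles and authentication" for agents means in practice can be sketched in a few lines. The roles, permissions, and function names below are hypothetical, not any vendor's API; the point is that every agent action passes a role check and leaves an audit record, whether it succeeds or not.

```python
# Minimal sketch: an AI agent treated as an identity with scoped roles
# and a provenance trail. All names here are illustrative assumptions.

from datetime import datetime, timezone

ROLE_PERMISSIONS = {
    "invoice_agent": {"read:invoices", "write:payment_queue"},
    "support_agent": {"read:tickets"},
}

audit_log = []  # in production this would be an append-only store

def agent_can(agent_role, permission):
    return permission in ROLE_PERMISSIONS.get(agent_role, set())

def perform(agent_id, agent_role, permission, action):
    """Run an agent action only if its role grants the permission,
    recording every attempt (allowed or denied) for auditability."""
    allowed = agent_can(agent_role, permission)
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id, "role": agent_role,
        "permission": permission, "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{agent_id} lacks {permission}")
    return action()

# A permitted action executes; a denied one raises but is still logged.
perform("agent-7", "invoice_agent", "read:invoices", lambda: "ok")
```

The design choice worth copying is that the audit entry is written before the permission decision takes effect, so denied attempts are as visible as successful ones.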
Where governance meets engineering in the real world
Gartner has argued that without AI engineering practices and production-level rigour, pilots frequently stall and fail to deliver value. Organizations need signal-level monitoring, model maintenance plans, and an operational playbook for drift and data lineage. Without those, production is a slow-motion accident. Gartner. (gartner.com)
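As one concrete instance of signal-level monitoring, here is a self-contained sketch of the Population Stability Index, a widely used drift statistic comparing live inputs against a training baseline. The threshold and bin count are conventional defaults, not prescriptions from the sources above.

```python
# Illustrative drift monitor: Population Stability Index (PSI) between
# a training-time baseline and live traffic for one feature.

import math

def psi(expected, actual, bins=10):
    """PSI over equal-width bins. By common convention, a value above
    0.2 signals material drift and triggers a retraining review."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fraction(values, b):
        left, right = lo + b * width, lo + (b + 1) * width
        count = sum(1 for v in values
                    if left <= v < right or (b == bins - 1 and v == hi))
        return max(count / len(values), 1e-6)  # floor avoids log(0)

    return sum(
        (bin_fraction(actual, b) - bin_fraction(expected, b))
        * math.log(bin_fraction(actual, b) / bin_fraction(expected, b))
        for b in range(bins)
    )
```

A playbook entry then reads: compute PSI per feature on a schedule, page the owning team above the threshold, and record the decision, which is exactly the kind of operational rigour the pilots that stall never built.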
Practical scenarios firms should test this quarter
A bank should run a two-month experiment that includes both pilot and production constraints. Define the error budget, integrate with live data feeds, and assume a 30 to 50 percent increase in infrastructure cost in the first year. Staffing should include ML engineers, data engineers, and a product owner with clear SLA authority. If the model cannot be cost-justified within 18 months, it likely needs redesign. This is not sexy, but it is decisive.
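The 18-month bar can be made into a simple payback test. The function below folds the assumed 30 to 50 percent infrastructure uplift into the monthly run cost; the dollar inputs in the usage line are hypothetical.

```python
# Payback-period check against the 18-month cost-justification bar.
# The infrastructure uplift range comes from the scenario above; the
# example dollar amounts are illustrative assumptions.

def payback_months(monthly_savings, upfront_cost,
                   monthly_run_cost, infra_uplift=0.4):
    """Months to recover the upfront cost after inflating run cost by
    the assumed first-year infrastructure uplift (30-50 percent)."""
    net_monthly = monthly_savings - monthly_run_cost * (1 + infra_uplift)
    if net_monthly <= 0:
        return None  # never pays back as scoped; redesign the use case
    return upfront_cost / net_monthly

# Illustrative: $100k/month savings, $600k to build, $40k/month to run
# at a 50 percent uplift -> 15 months, inside the 18-month bar.
months = payback_months(100_000, 600_000, 40_000, infra_uplift=0.5)
```

A `None` result is the useful output here: it says the use case fails before the horizon question even arises, which is the redesign signal the paragraph above describes.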
Risks and open questions that will shape adoption
The greatest risk is organizational fragility: AI amplifies existing weaknesses such as fragmented data ownership and weak change management. The second risk is regulatory attention, which will mature faster than internal governance in many sectors. The third is complacent vendor contracts that place data and compliance liability on buyers. All three are solvable, but only with investment, time, and executive attention. Witty aside: expecting an algorithm to fix messy processes is like hiring a personal trainer and then asking them to use magic to remove the sugar from the fridge.
The near horizon in practical terms
Expect 2026 to be a year where intent becomes the primary interface for engineering and where agentic design moves from labs into production in pockets. The winners will be firms that treat AI as an operating model challenge as much as a technology play. Tech Mahindra points to language that captures this transition and to internal examples where productivity gains are real when governance is baked in early. Tech Mahindra. (techmahindra.com)
Key Takeaways
- Enterprises resist AI because pilots expose structural gaps in data, governance, and operating models that are costly to fix.
- Scaling requires investment in AI engineering, production governance, and cross-functional ownership to turn pilot wins into enterprise value.
- Security and role-based authentication for agents must be designed up front, not retrofitted after a production incident.
- Firms that budget for integration and ongoing model operations will capture disproportionate value from AI investments.
Frequently Asked Questions
What is the main reason AI pilots fail to become production projects?
Operational mismatch and under engineered integration are the most common causes. Pilots often run on curated data with manual fixes that do not scale into real time enterprise systems.
How much should a company expect to budget for AI beyond the model cost?
Plan for significant infrastructure and data engineering spend that can equal or exceed model development. Typical budgets should account for monitoring, governance, and staffing for ongoing model operations.
Can small teams adopt AI safely without large budgets?
Yes, but small teams must be realistic about scope and start with one production-oriented use case that includes governance and a measurable SLA. Avoid toy pilots that never face messy real-world data.
How should security teams treat agentic AI differently?
Treat agents as identities with rights and responsibilities and require provenance tracking on outputs. Agents need the same role-based access and audit trails that human users have.
When will AI become invisible infrastructure in enterprises?
Adoption will be gradual and uneven across sectors, with pockets of scaled use in the next 12 to 36 months where companies have already invested in data platforms and governance.
Related Coverage
Explore how AI changes software delivery life cycles and why platform strategies matter for scaling. Read about the shift from coder centric workflows to human centric orchestration and what that means for talent and training budgets on The AI Era News.
SOURCES: https://www.onmanorama.com/news/business/2026/02/27/enterprises-still-hesitant-adopt-ai-tech-mahindra-kunal-purohit-techspectations-2026.html, https://www.techmahindra.com/insights/views/closing-enterprise-ai-gap-experimentation-execution/, https://www.bcg.com/publications/2023/scaling-ai-pays-off, https://www.mckinsey.com/featured-insights/week-in-charts/ai-at-work-but-not-at-scale, https://www.gartner.com/en/documents/6960566. (onmanorama.com)