
Unlocking the Potential of Artificial Intelligence in the Workplace

Artificial intelligence in the workplace has arrived, and it carries transformative potential akin to the steam engine’s impact on the 19th-century Industrial Revolution. With the emergence of powerful large language models (LLMs) from key players like Anthropic, Cohere, Google, Meta, Mistral, and OpenAI, we’ve transitioned into a new era of information technology. Research by McKinsey estimates the long-term AI opportunity could add $4.4 trillion in productivity growth potential through corporate applications.

Here lies the challenge: AI’s long-term potential is vast, but immediate returns remain less clear. Over the next three years, 92 percent of companies plan to increase their AI investments. Yet, while nearly all companies are investing, only 1 percent of leaders describe their organizations as “mature” in deployment, defined as AI being fully integrated into workflows and driving substantial business outcomes. The critical question for business leaders is how to strategically deploy capital and guide their organizations towards AI maturity.

This research report, inspired by Reid Hoffman’s book Superagency: What Could Possibly Go Right with Our AI Future, explores a related question: How can companies leverage AI to amplify human agency and unlock new levels of creativity and productivity in the workplace? AI promises enormous positive and disruptive change. While this transformation will take time, leaders should not hesitate. Instead, they must advance boldly now to avoid becoming uncompetitive in the future. The history of major economic and technological shifts shows such moments can determine a company’s trajectory. The internet, born over 40 years ago, led to trillion-dollar companies like Alphabet, Amazon, Apple, Meta, and Microsoft and fundamentally altered work and access to information. AI is at a similar nascent stage; the risk for business leaders lies not in thinking too grandly, but too conservatively. Examples of artificial intelligence and machine learning already highlight the diverse capabilities reshaping a wide range of sectors.


This report examines companies’ technological and business preparedness for AI adoption, based on survey findings (see “About the survey” sidebar below). It concludes that employees are largely ready for AI. The primary obstacle to success is leadership.

About the survey

To create our report, we surveyed 3,613 employees (managers and independent contributors) and 238 C-level executives in October and November 2024. Of these, 81 percent were from the United States, with the remainder from five other countries: Australia, India, New Zealand, Singapore, and the United Kingdom. The surveyed employees held diverse roles, including business development, finance, marketing, product management, sales, and technology.

All survey findings discussed in the report, except for two sidebars detailing international variations, pertain exclusively to US workplaces. This approach ensures statistically significant conclusions regarding the US workplace from US employee and C-suite responses. Analyzing global findings separately facilitates comparison between US responses and those from other regions.

Due to rounding, percentages may not always sum to 100 percent.

Three-quarters of US survey respondents work for organizations with at least $100 million in annual revenue, and half are at companies exceeding $1 billion annually. All US C-suite respondents are from organizations with annual revenues of at least $1 billion. Regarding workforce size, 20 percent of US respondents are at companies with fewer than 10,000 employees, 49 percent at companies with 10,000 to 50,000, and 31 percent at those with over 50,000.

The analysis extended significantly beyond surveys, including interviews with dozens of C-level executives and industry experts to understand their perspectives on AI’s transformative potential and their strategies for navigating this transition. Discussions with experts from Stanford HAI, the Digital Economy Lab at HAI, and McKinsey’s leading AI experts further enriched the report. Our survey and research primarily focused on generative AI (gen AI); however, participants may not have consistently distinguished between gen AI and other AI forms.

Additionally, we developed a comprehensive database of over 250 potential AI use cases, building on the 63 gen AI use cases identified by McKinsey’s Digital Practice. This database integrates proprietary McKinsey research on personal productivity, industry reports, and secondary research from sources including the US government’s Federal AI Use Case Inventories, NASA, press articles, and public interviews with technology leaders.

The Transformative Power of AI

Imagine a world where machines not only handle physical tasks but also think, learn, and make autonomous decisions. This world includes humans, working alongside machines in a state of “superagency” that boosts individual productivity and creativity (see “AI superagency” sidebar). This is the transformative promise of AI, a technology poised to exceed the impact of past innovations like the printing press or the automobile. AI moves beyond the simple automation of routine tasks to the automation of cognitive functions. Unlike previous inventions, AI-powered software can adapt, plan, guide, and even make decisions, making it a catalyst for unprecedented economic growth and societal change across nearly all aspects of life. It is set to reshape how we interact with technology and each other.

Scientific discoveries and technological innovations are stones in the cathedral of human progress.

Reid Hoffman, cofounder of LinkedIn and Inflection AI, partner at Greylock Partners, and author

Many breakthrough technologies, such as the internet, smartphones, and cloud computing, have reshaped daily life and work. AI distinguishes itself by offering more than just access to information. It can summarize, code, reason, engage in dialogue, and make choices. AI can lower skill barriers, enabling more people to gain proficiency in various fields, across languages, and at any time. AI holds the potential to fundamentally alter how people access and use knowledge, leading to more efficient and effective problem-solving and fostering innovation for broader benefit.

AI superagency

What impact will AI have on humanity? Reid Hoffman and Greg Beato’s book Superagency: What Could Possibly Go Right with Our AI Future (Authors Equity, January 2025) delves into this question. The book highlights how AI could amplify human agency and potential, envisioning a human-led approach to our AI future.

Superagency, a term coined by Hoffman, describes a state where individuals, empowered by AI, significantly enhance their creativity, productivity, and positive impact. Even those not directly interacting with AI can benefit from its wider effects on knowledge, efficiency, and innovation.

AI is the latest in a line of transformative “supertools”—including the steam engine, internet, and smartphone—that have reshaped our world by augmenting human capabilities. Like its predecessors, AI has the potential to democratize access to knowledge and automate tasks, provided it is developed and deployed safely and equitably.

Over the past two years, AI has progressed dramatically, and enterprise-level adoption has accelerated due to reduced costs and increased access to capabilities. Numerous notable AI innovations have emerged (Exhibit 1). For instance, we’ve seen a rapid expansion in “context windows,” the short-term memory of LLMs. A larger context window allows an LLM to process more information simultaneously. For example, Google’s Gemini 1.5 processed one million tokens in February 2024, while Gemini 1.5 Pro handled two million tokens by June of the same year. Overall, five major innovations are driving the next wave of business impact: enhanced intelligence and reasoning, agentic AI, multimodality, improved hardware and computational power, and increased transparency.
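To make the context window idea concrete: models measure input in tokens rather than characters or pages, so a practical first check before sending a large document to an LLM is whether it fits at all. The minimal Python sketch below assumes the open-source tiktoken tokenizer; the encoding name and window sizes are illustrative placeholders, not vendor specifications.

```python
# A minimal sketch: check whether a document fits a model's context window.
# Assumes the open-source tiktoken tokenizer (pip install tiktoken); the
# encoding name and the window sizes below are illustrative, not vendor specs.
import tiktoken

def fits_in_context(text: str, context_window_tokens: int,
                    encoding_name: str = "cl100k_base") -> bool:
    """Return True if the text tokenizes to no more tokens than the window."""
    encoding = tiktoken.get_encoding(encoding_name)
    n_tokens = len(encoding.encode(text))
    print(f"{n_tokens:,} tokens vs. a {context_window_tokens:,}-token window")
    return n_tokens <= context_window_tokens

# Example: compare a long report against two hypothetical window sizes.
report = open("annual_report.txt").read()  # placeholder document
fits_in_context(report, context_window_tokens=128_000)
fits_in_context(report, context_window_tokens=2_000_000)
```

A larger window removes much of the document-splitting plumbing earlier applications needed, which is one reason the rapid expansion matters for enterprise workflows.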

Intelligence and reasoning are improving

AI is becoming significantly more intelligent. Performance on standardized tests is one indicator. OpenAI’s GPT-3.5, the model behind the original ChatGPT released in 2022, performed well on high-school exams (e.g., 70th percentile on SAT math, 87th percentile on SAT verbal) but often struggled with broader reasoning. Today’s models approach the intelligence level of people with advanced degrees. GPT-4 can pass the Uniform Bar Examination, ranking in the top 10 percent of test-takers, and answers 90 percent of questions correctly on the US Medical Licensing Examination.

The development of reasoning capabilities represents a major leap forward for AI. Reasoning enhances AI’s ability for complex decision-making, moving beyond basic comprehension to nuanced understanding and the capacity to create step-by-step plans to achieve goals. For businesses, this means reasoning models can be fine-tuned and integrated with domain-specific knowledge to provide highly accurate, actionable insights. Models like OpenAI’s o1 or Google’s Gemini 2.0 Flash Thinking Mode offer reasoning capabilities, providing users with a human-like thought partner rather than just an information retrieval engine.
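As a simple illustration of treating such a model as a thought partner rather than a retrieval engine, the sketch below asks a reasoning-capable model for an explicit step-by-step plan. It uses the OpenAI Python SDK purely as an example interface; the model name is a placeholder for whichever reasoning model an organization has access to, and the business prompt is illustrative.

```python
# A minimal sketch of prompting a reasoning-capable model for a step-by-step
# plan. Uses the OpenAI Python SDK (pip install openai) only as an example;
# the model name is a placeholder and the business prompt is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

task = (
    "A key supplier has raised prices by 12 percent. Draft a step-by-step "
    "plan for renegotiating the contract, including the data to gather first "
    "and the trade-offs to weigh at each step."
)

response = client.chat.completions.create(
    model="o1",  # placeholder: substitute any reasoning-capable model
    messages=[{"role": "user", "content": task}],
)

print(response.choices[0].message.content)
```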


Agentic AI is acting autonomously

I’ve always thought of AI as the most profound technology humanity is working on . . . more profound than fire or electricity or anything that we’ve done in the past.

Sundar Pichai, CEO of Alphabet

The growing ability to reason allows models to take autonomous actions and complete complex tasks across workflows. This signifies a profound advancement. For instance, in 2023, an AI bot could support call center agents by synthesizing data to suggest responses. In 2025, an AI agent can converse with a customer, plan subsequent actions like processing payments, checking for fraud, and initiating shipping, and execute them autonomously.

Software companies are integrating agentic AI capabilities into their core products. Salesforce’s Agentforce, for example, is a new layer enabling users to build and deploy autonomous AI agents for complex tasks like simulating product launches and orchestrating marketing campaigns. Marc Benioff, Salesforce cofounder, chair, and CEO, describes this as creating a “digital workforce” where humans and automated agents collaborate to achieve customer outcomes.
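The difference between a copilot and an agent is essentially a loop around the model: plan the next step, call a tool, observe the result, and repeat until the goal is met. The sketch below illustrates that pattern for the customer-service example described above; the planner and the tools (fraud check, payment, shipping) are hypothetical stubs, and a production agent would add guardrails, audit logging, and escalation to a human.

```python
# A minimal sketch of an agentic loop (plan -> act -> observe -> repeat) for
# the customer-service example above. The planner and tools are hypothetical
# stubs; a real agent would add guardrails, logging, and human escalation.
from typing import Callable, Optional

def check_fraud(order_id: str) -> str:
    return f"order {order_id}: no fraud signals"       # stub tool

def process_payment(order_id: str) -> str:
    return f"order {order_id}: payment captured"       # stub tool

def initiate_shipping(order_id: str) -> str:
    return f"order {order_id}: shipment created"       # stub tool

TOOLS: dict[str, Callable[[str], str]] = {
    "check_fraud": check_fraud,
    "process_payment": process_payment,
    "initiate_shipping": initiate_shipping,
}

def plan_next_step(goal: str, history: list[str]) -> Optional[str]:
    """Stand-in for an LLM planner: pick the next unfinished step, or stop."""
    for step in ("check_fraud", "process_payment", "initiate_shipping"):
        if not any(h.startswith(step) for h in history):
            return step
    return None  # goal reached

def run_agent(goal: str, order_id: str) -> list[str]:
    history: list[str] = []
    while (step := plan_next_step(goal, history)) is not None:
        observation = TOOLS[step](order_id)             # act, then observe
        history.append(f"{step} -> {observation}")
    return history

for entry in run_agent("fulfill the customer's order safely", "A-1042"):
    print(entry)
```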

Multimodality is bringing together text, audio, and video

Current AI models are advancing towards more sophisticated and diverse data processing across text, audio, and video. The quality within each modality has improved significantly over the last two years. Google’s Gemini Live now offers enhanced audio quality and lower latency, enabling human-like conversations with emotional nuance. OpenAI’s Sora demonstrations show the capability to translate text descriptions into video.

Hardware innovation is enhancing performance

Advances in hardware and resulting increases in computational power continue to boost AI performance. Specialized chips facilitate faster, larger, and more versatile models. Enterprises can now adopt AI solutions requiring high processing power, enabling real-time applications and scalability. For example, an e-commerce company could enhance customer service with AI-driven chatbots leveraging advanced GPUs and TPUs. Distributed cloud computing ensures optimal performance during peak traffic, while integrated edge hardware allows deploying models to analyze photos for more accurate insurance claims processing.

Transparency is increasing

AI, like most transformative technologies, grows gradually, then arrives suddenly.

Reid Hoffman, cofounder of LinkedIn and Inflection AI, partner at Greylock Partners, and author

AI is gradually becoming less risky, although greater transparency and explainability are still needed. These factors are crucial for improving AI safety and mitigating potential bias, which are essential for widespread enterprise deployment. While significant progress is required, new models are rapidly improving. Stanford University’s Center for Research on Foundation Models (CRFM) reports notable advances in model performance, with its Transparency Index showing Anthropic’s score increasing by 15 points to 51 and Amazon’s more than tripling to 41 between October 2023 and May 2024.

Beyond LLMs, other AI and machine learning (ML) forms are improving explainability, allowing outputs of models supporting critical decisions (e.g., credit risk) to be traced back to their data sources. This enables testing and monitoring critical systems almost constantly for bias and other issues arising from model drift and shifting data inputs, even in systems initially well-calibrated.

All of this is vital for detecting errors and ensuring compliance with regulations and company policies. Companies have enhanced explainability practices and implemented necessary checks, but continuous evolution is required to keep pace with growing model capabilities.
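One concrete version of the continuous monitoring described above is a drift check: compare the distribution a model sees today against the distribution it was calibrated on. The sketch below implements the population stability index (PSI), a common metric for this, in NumPy; the synthetic credit-score data and the 0.1/0.25 alert thresholds are illustrative conventions, not a formal standard.

```python
# A minimal sketch of drift monitoring with the population stability index
# (PSI). The synthetic data and the 0.1 / 0.25 thresholds are illustrative
# rules of thumb, not a formal standard.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Compare today's score distribution against the calibration baseline."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf               # catch outliers
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    curr_pct = np.histogram(current, edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)            # avoid log(0)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Example: credit-risk scores at calibration time vs. this month's scores.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=10_000)
current_scores = rng.beta(2.6, 5, size=10_000)          # inputs have shifted

value = psi(baseline_scores, current_scores)
status = "stable" if value < 0.1 else "investigate" if value < 0.25 else "drifted"
print(f"PSI = {value:.3f} ({status})")
```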

Achieving AI superagency in the workplace involves more than just mastering technology. It equally requires supporting people, establishing processes, and managing governance. The following sections explore the non-technological factors shaping AI deployment.

Employees are Ready for AI; Now Leaders Must Step Up

Employees are the key to making their organizations AI powerhouses. They are more prepared to embrace artificial intelligence in the workplace than business leaders perceive. They are more familiar with AI tools, desire more support and training, and are more likely to believe AI will replace a significant portion of their work soon. It is now critical for leaders to step forward. They have more opportunity than they realize, so the onus is on them to be bold and capture AI’s value now.

People are using [AI] to create amazing things. If we could see what each of us can do 10 or 20 years in the future, it would astonish us today.

Sam Altman, cofounder and CEO of OpenAI

Beyond the tipping point

Our survey shows nearly all employees (94 percent) and C-suite leaders (99 percent) report some familiarity with gen AI tools. However, business leaders underestimate employees’ extensive use of gen AI. C-suite leaders estimate only 4 percent of employees use gen AI for at least 30 percent of their daily work, while employees self-report this figure is three times higher (Exhibit 2). Furthermore, only 20 percent of leaders believe employees will use gen AI for over 30 percent of daily tasks within a year, while employees are twice as likely (47 percent) to hold this belief (see sidebar “Who is using AI at work? Nearly everyone, even skeptical employees”).

The good news is that our survey suggests three ways companies can accelerate AI adoption and move towards maturity.

Leaders can invest more in their employees

As noted, employees anticipate AI will dramatically impact their work. Now, they want companies to invest in the training necessary for success. Nearly half of surveyed employees want more formal training, viewing it as the best way to boost AI adoption. They also desire access to AI tools through betas or pilots and suggest incentives like financial rewards and recognition can improve uptake.

However, employees are not receiving the necessary training and support. More than a fifth report minimal to no support (Exhibit 3). Employees outside the United States also express a desire for more training (see sidebar “Global perspectives on training”).

Who is using AI at work? Nearly everyone, even skeptical employees

Our research analyzed employee attitudes toward AI using archetypes: “Zoomers,” “Bloomers,” “Gloomers,” and “Doomers.” We found 39 percent identify as Bloomers (AI optimists wanting to collaborate), 37 percent as Gloomers (skeptical, favoring regulations), 20 percent as Zoomers (want rapid deployment with few guardrails), and 4 percent as Doomers (fundamentally negative view) (exhibit).

Even skeptics are familiar with AI; 94 percent of Gloomers and 71 percent of Doomers report some gen AI familiarity. Approximately 80 percent of Gloomers and about half of Doomers are comfortable using gen AI at work.

Global perspectives on training

To gain insight into global AI adoption, we examined trends across Australia, India, New Zealand, Singapore, and the United Kingdom. Generally, employees and C-suite leaders in this “international” group share similar AI views with their US peers, though experiences differ in key areas like training.

Many international employees worry about insufficient training despite reporting significantly more support than US employees. Some 84 percent of international employees feel they receive significant or full organizational support for AI skills, compared to just over half of US employees. International employees also have more opportunities to participate in developing gen AI tools at work than their US counterparts, with differences of at least ten percentage points in activities such as providing feedback, beta testing, and requesting specific features (exhibit).

C-suite leaders can help millennials lead the way

Many millennials, aged 35 to 44, serve as managers and team leaders. Our survey indicates they report the most experience and enthusiasm for AI, positioning them as natural champions for transformational change. Millennials are the most active generation of AI users. Some 62 percent of 35- to 44-year-old employees report high AI expertise, compared to 50 percent of 18- to 24-year-old Gen Zers and 22 percent of baby boomers over 65 (Exhibit 4). By leveraging this enthusiasm and expertise, leaders can empower millennials to play a crucial role in AI adoption.

Given their managerial roles, many millennials can effectively support their teams in becoming more adept AI users, thereby helping their companies progress towards AI maturity. Two-thirds of managers report fielding team questions about AI tools at least weekly, and a similar percentage recommend AI tools to their teams to solve problems (Exhibit 5).


Since leaders have the permission space, they can be bolder

In many transformations, employees are resistant to change, but AI presents a different scenario. High employee readiness and familiarity provide business leaders with the opportunity to act boldly. Leaders can learn from employees about their current AI usage and their vision for future work transformation. They can provide essential training and empower managers to scale AI use cases beyond pilots.

It is critical that leaders seize this moment. It is the only way to increase the likelihood of their companies achieving AI maturity. But they must move swiftly, or they risk falling behind.

Delivering Speed and Safety

AI technology is advancing at an unprecedented pace. ChatGPT was released approximately two years ago; OpenAI reports over 300 million weekly users and adoption by over 90 percent of Fortune 500 companies. The internet did not reach this level of usage until the early 2000s, nearly a decade after its inception.

Soon after the first automobiles were on the road, there was the first car crash. But we didn’t ban cars—we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road.

Bill Gates, cofounder of Microsoft

A majority of employees identify as AI optimists, with Zoomers and Bloomers constituting 59 percent of the workforce. Even Gloomers, one of the less optimistic segments, report high gen AI familiarity, with over a quarter planning to increase their AI use next year.

Business leaders must embrace this speed and optimism to prevent their companies from being left behind. Yet, despite the excitement and initial experimentation, 47 percent of C-suite leaders feel their organizations are developing and releasing gen AI tools too slowly, often citing talent skill gaps as a primary reason for the delay (Exhibit 6).

Leaders are attempting to address the need for speed by increasing AI investments. Of surveyed executives, 92 percent anticipate boosting AI spending in the next three years, with 55 percent expecting increases of at least 10 percent from current levels. However, spending alone is no longer sufficient; companies now expect results. As the initial gen AI hype fades, business leaders face growing pressure to demonstrate ROI from their deployments.

We are at a turning point. The initial AI excitement may be waning, but the technology continues to accelerate. Bold and purposeful strategies are required for future success. Leaders are taking the first steps: One quarter of surveyed executives have defined a gen AI roadmap, and over half have a draft in refinement (Exhibit 7). Given the rapid technological changes, roadmaps and plans will constantly evolve. The key for leaders is to make clear choices about which valuable opportunities to pursue first and how to collaborate with peers, teams, and partners to realize that value.

The dilemma of speed versus safety

A significant hurdle remains: Regulation and safety are often viewed as insurmountable challenges rather than opportunities. Leaders wish to increase AI investments and accelerate development but struggle with ensuring AI safety in the workplace. Data security, hallucinations, biased outputs, and misuse (e.g., creating harmful content, enabling fraud) are challenges that cannot be ignored. Employees are acutely aware of AI’s safety issues; their top concerns include cybersecurity, privacy, and accuracy (Exhibit 8). The question is, what will it take for leaders to address these concerns while simultaneously moving forward at light speed?

Employees trust business leaders to get it right

While employees recognize the risks, including the potential for AI to replace a significant portion of their work, they place high trust in their own employers to deploy AI safely and ethically. Notably, 71 percent of employees trust their employers to act ethically in AI development. In fact, they trust employers more than universities, large technology companies, or tech start-ups (Exhibit 9).

This finding aligns with a broader trend in our research: employees show higher trust in their employers (73 percent) to “do the right thing” than in other institutions such as the government (45 percent). This trust should empower leaders to confidently navigate the speed-versus-safety dilemma. This confidence extends outside the United States, even though employees in other regions may desire more regulation (see sidebar “Global perspectives on regulation”).

Global perspectives on regulation

A significant percentage of international C-suite leaders surveyed across five regions (Australia, India, New Zealand, Singapore, and the United Kingdom) identify as Gloomers, favoring greater regulatory oversight. Between 37 and 50 percent of international C-suite leaders self-identify as Gloomers, compared to 31 percent in the United States. This may reflect a greater acceptance of top-down regulation in many countries outside the US. Half or more of surveyed global C-suite leaders worry that ethical use and data privacy issues are hindering employee gen AI adoption.

However, our research indicates that attitudes towards regulation are not suppressing the economic expectations of business leaders outside the United States. More than half of international executives (versus 41 percent of US executives) aim for their companies to be among the first AI adopters, with those in India and Singapore particularly bullish (exhibit). This desire among international business leaders to be first movers is likely driven by expected revenue from AI deployments. Some 31 percent of international C-suite leaders anticipate AI delivering a revenue uplift exceeding 10 percent in the next three years, compared to just 17 percent of US leaders. Indian executives are the most optimistic, with 55 percent expecting a 10 percent or greater revenue uplift over the next three years.

Risk management for gen AI

In Superagency, Hoffman argues that new capabilities naturally bring new risks, which should be managed rather than necessarily eliminated. Leaders must contend with external threats like intellectual property (IP) infringement and AI-enabled malware, alongside internal threats arising from the adoption process. The initial step in building fit-for-purpose risk management is a comprehensive assessment to identify vulnerabilities across a company’s operations. Leaders can then establish robust governance, implement real-time monitoring, and ensure continuous training and regulatory adherence.

One powerful control is respected third-party benchmarking, which can enhance AI safety and trust. Examples include Stanford CRFM’s Holistic Evaluation of Language Models (HELM) initiative, offering comprehensive benchmarks for fairness, accountability, transparency, and societal impact, and MLCommons’s AILuminate tool kit. Organizations like the Data & Trust Alliance unite large companies to create cross-industry metadata standards for enterprise AI model transparency.

While benchmarks have significant potential to build trust, our survey indicates only 39 percent of C-suite leaders use them to evaluate their AI systems. When used, benchmarks primarily focus on operational metrics (e.g., scalability, reliability, cost) and performance metrics (e.g., accuracy, precision, latency). Ethical and compliance concerns are a lower priority: only 17 percent of benchmarking C-suite leaders prioritize measuring fairness, bias, transparency, privacy, and regulatory issues (Exhibit 10).

The focus on operational and performance metrics reflects an understandable desire to prioritize immediate technical and business outcomes. However, neglecting ethical considerations can create future problems. When employees distrust AI systems, adoption suffers. While benchmarks don’t eliminate all risk or guarantee full efficiency, ethics, and safety, they are a valuable tool.
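To illustrate what adding an ethical dimension to benchmarking can look like in practice, the short sketch below scores a batch of model decisions on an operational metric (accuracy) alongside a simple fairness metric (demographic parity difference, the gap in approval rates between groups). The records and the 0.10 review threshold are purely illustrative; real programs would pair in-house checks like this with established suites such as HELM.

```python
# A minimal sketch of benchmarking a model on an operational metric (accuracy)
# and a simple fairness metric (demographic parity difference). The records
# and the 0.10 review threshold are purely illustrative.
from collections import defaultdict

records = [  # (group, model_decision, correct_decision); 1 = approve
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 0, 1), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
]

accuracy = sum(pred == truth for _, pred, truth in records) / len(records)

approvals = defaultdict(list)
for group, pred, _ in records:
    approvals[group].append(pred)
rates = {group: sum(vals) / len(vals) for group, vals in approvals.items()}
parity_gap = max(rates.values()) - min(rates.values())

print(f"accuracy = {accuracy:.2f}")
print(f"approval rate by group = {rates}")
flag = "  <- review for bias" if parity_gap > 0.10 else ""
print(f"demographic parity gap = {parity_gap:.2f}{flag}")
```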

Even companies excelling in all three areas of AI readiness—technology, employees, and safety—are not necessarily scaling or delivering expected value. Nevertheless, leaders can leverage ambitious goals to transform their companies with AI. The next section explores how.

Embracing Bigger Ambitions

Most organizations that have invested in AI are not seeing the desired returns. They are not capturing AI’s full economic potential. About half of C-suite leaders at companies that have deployed AI describe their initiatives as still developing or expanding (Exhibit 11). They have had ample time to progress further. Our research shows over two-thirds of leaders launched their first gen AI use cases more than a year ago.

This is a time when you should be getting benefits [from AI] and hope that your competitors are just playing around and experimenting.

Erik Brynjolfsson, professor at Stanford University and director of the Digital Economy Lab at the Stanford Institute for Human-Centered Artificial Intelligence (HAI)

Pilots often fail to scale for numerous reasons, including poorly designed strategies. However, a lack of bold ambition can be equally crippling. This section examines current AI investment patterns across industries and highlights the potential for those willing to aim higher.


AI investments vary by industry

Different industries exhibit distinct AI investment patterns. Among the top 25 percent of spenders, companies in healthcare, technology, media and telecom, advanced industries, and agriculture lead (Exhibit 12). Industries spending less include financial services, energy and materials, consumer goods and retail, hardware, engineering and construction, and travel, transport, and logistics. The consumer industry, despite having the second-highest potential for value realization from AI, appears least willing to invest, with only 7 percent of respondents in the top quartile of gen AI spending based on self-reported revenue percentage. This hesitation might be linked to the industry’s low average net margins in mass markets, requiring higher confidence thresholds for costly organization-wide technology upgrades.

In some industries, employees are cautious

Employees in the public sector, aerospace and defense, and semiconductor industries express considerable skepticism about AI’s future development. In the public sector and aerospace and defense, only 20 percent of employees anticipate a significant AI impact on daily tasks within the next year, compared to roughly two-thirds in media and entertainment (65 percent) and telecom (67 percent). Moreover, our survey shows only 31 percent of social sector employees trust their employers to develop AI safely, the lowest confidence level among all industries (cross-industry average is 71 percent).

This relative caution in these sectors likely reflects near-term challenges from external constraints such as stringent regulatory oversight, outdated IT systems, and lengthy approval processes.

There’s a lot of headroom in some functions

Our research reveals that functional areas with the greatest economic potential from AI are also those where employee outlook is moderate. Employees in sales and marketing, software engineering, customer service, and R&D contribute roughly three-quarters of AI’s total economic potential, yet their self-reported optimism is middling (Exhibit 14). This could be because these functions have already piloted AI projects, leading employees to more realistic views of AI’s benefits and limitations. Alternatively, the high economic potential might fuel concerns about job replacement. Regardless, leaders in these functions should consider investing more in employee support and empowering change champions to improve sentiment.

Gen AI has not delivered enterprise-wide ROI, but that can change

Across all industries, surveyed C-level executives report limited returns on enterprise-wide AI investments. Only 19 percent see revenues increasing more than 5 percent, another 39 percent report a moderate 1 to 5 percent increase, and 36 percent report no change (Exhibit 15). Only 23 percent see AI leading to any favorable change in costs.

Despite this, company leaders are optimistic about the value they can capture in the coming years. A full 87 percent of executives expect revenue growth from gen AI within three years, and about half believe it could boost revenues by more than 5 percent in that timeframe (Exhibit 16). This suggests significant positive changes are anticipated over the next few years.

Big ambitions can help solve big problems

To drive revenue growth and improve ROI, business leaders may need to commit to transformative AI possibilities. As the AI hype recedes and focus shifts to value, attention is increasingly on practical applications that can build competitive advantages.

[It] is critical to have a genuinely inspiring vision of the future [with AI] and not just a plan to fight fires.

Dario Amodei, cofounder and CEO of Anthropic

To assess companies’ progress in this shift, we examined three categories of AI applications: personal use, business use, and societal use (see sidebar “AI’s potential to enhance our personal lives”). We mapped over 250 applications from our work and public examples to understand the spectrum of impact, from localized use cases to transformations with universal impact. Our conclusion? Since most companies are early in their AI journeys, most AI applications are localized use cases still in pilot stages (Exhibit 17).

In many cases, this localized focus is appropriate. However, creating AI applications capable of revolutionizing industries and generating transformative value requires something more. Robotics in manufacturing, predictive AI in renewable energy, drug development in life sciences, and personalized AI tutors in education—these are the types of transformative efforts that can yield the greatest returns. These initiatives didn’t arise from a reactive mindset. They are the product of inspirational leadership, a unique future vision, and a commitment to transformative impact. This kind of courage is needed to develop AI applications that can revolutionize industries.

It is in [the] collaboration between people and algorithms that incredible scientific progress lies over the next few decades.

Demis Hassabis, cofounder and CEO of Google DeepMind

To truly harness AI’s potential, companies must challenge themselves to envision and implement more breakthrough initiatives. Success in the AI era depends not solely on technology deployment or employee willingness but also on visionary leadership. The necessary components are present: highly capable and rapidly advancing technology, and employees who are more ready than leaders perceive. Leaders have more opportunity than they realize to deploy AI swiftly in the workplace. To do this, leaders must expand their ambitions towards systematic change, establishing the groundwork for genuine competitive differentiation. To be more ambitious about AI, companies need to increase the proportion of transformative initiatives in their portfolios. The next section examines the obstacles leaders must overcome and how they can do so.

AI’s potential to enhance our personal lives

Beyond the business context, individuals are increasingly using AI in their personal lives. Previous research analyzed AI’s potential impact across 77 personal activities, considering age, gender, and working status in the United States. While there is limited desire to automate certain activities like leisure, sleeping, and fitness, the data indicates significant opportunities for AI, combined with other technologies, to assist with chores or labor-intensive tasks. As of 2024, our research identified approximately one hour of such daily activities with the technical potential for automation. By 2030, expanded use cases and continued improvements in AI safety could increase automation potential to three hours per day. When individuals use AI-enabled tools—such as an autonomous vehicle for commuting or an interactive personal finance bot—they can repurpose time for personal fulfillment or other productive pursuits.

Using human-centric design and leveraging gen AI’s “emotional intelligence” potential are unlocking new personal AI applications that extend beyond basic efficiencies. Individuals are starting to use conversational and reasoning AI models for counseling, coaching, and creative expression. For example, people are using conversational AI for advice and emotional support or to realize artistic visions with only verbal prompts. Furthermore, supporting the idea that AI superagency will advance society, AI has the potential to be a democratizing force, making previously expensive or exclusive experiences—like animation generation, career coaching, or tax advice—accessible to much wider audiences.

Technology Is Not the Barrier to Scale

Without question, AI presents a rare and phenomenal opportunity. Almost 90 percent of leaders anticipate AI deployment will drive revenue growth in the next three years. However, securing that growth requires corporate transformation, an area where businesses historically struggle, with nearly 70 percent of transformations failing.

As we build this next generation of AI, we made a conscious design choice to put human agency both at a premium and at the center of the product. For the first time, we have access to AI that is as empowering as it is powerful.

Satya Nadella, chairman and CEO of Microsoft

To join the minority of companies that succeed, C-level executives must self-reflect. They must embrace the vital role of their leadership. C-suite leaders in our survey are more than twice as likely to cite employee readiness as a barrier to adoption as they are to blame their own role. Yet, as previously noted, employees indicate they are quite ready.


This section examines how leaders can take charge, acknowledging that the AI opportunity demands more than technology implementation; it requires a strategic transformation. There are undeniable operational headwinds facing AI adoption. To address these, leadership teams must commit to rewiring their enterprises. The best artificial intelligence programs are part of the toolkit leaders need to consider.

The operational headwinds that slow execution

Business adoption of AI faces several operational headwinds. Our interviews and research identified five of the most challenging: aligning leadership, addressing cost uncertainty, workforce planning, managing supply chain dependencies, and meeting the demand for explainability.

Leadership alignment is a challenging but critical first step

Achieving consensus among senior leaders on a strategy-led gen AI roadmap is complex. The key is recognizing that leadership alignment is not simple or to be assumed. It requires ongoing engagement from senior leaders across business domains, each with potentially distinct objectives and risk tolerances. Together, leaders must clearly define where value lies, how AI will drive this value, and how risk will be managed. They must jointly establish metrics for performance evaluation and investment adjustments. To facilitate alignment, they might appoint a gen AI value and risk leader or create an enterprise-wide leadership and orchestration function. These actions can enhance collaboration among business, technology, and risk teams. Although challenging, aligning leadership is crucial to ensure AI projects are cohesive, mitigate liability, and deliver transformative business outcomes.

Cost uncertainty makes it difficult for enterprises to predict ROI

Many companies are still deciding whether to use off-the-shelf AI solutions from vendors or to customize them (“shape”). Shaping can be more costly but offers greater potential for differentiation. Furthermore, while leaders can budget for AI pilots, the full cost of building and managing AI applications at scale remains uncertain. Planning for a limited pilot differs greatly from assessing costs for a mature solution used daily by most employees. These factors force difficult trade-offs. To keep pace with AI development, technology leaders must prioritize accelerated decision-making.
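One reason at-scale costs are hard to predict is that usage-based pricing compounds: tokens per request, requests per employee, and headcount all multiply. The back-of-the-envelope sketch below contrasts a pilot with an enterprise rollout; every figure is a hypothetical placeholder rather than a vendor price, and the point is the shape of the calculation, not the numbers.

```python
# A back-of-the-envelope sketch comparing pilot vs. at-scale inference costs.
# Every figure below is a hypothetical placeholder, not a vendor price.
def annual_cost(employees: int, requests_per_day: int, tokens_per_request: int,
                price_per_million_tokens: float, workdays: int = 250) -> float:
    tokens_per_year = employees * requests_per_day * tokens_per_request * workdays
    return tokens_per_year / 1_000_000 * price_per_million_tokens

pilot = annual_cost(employees=50, requests_per_day=10,
                    tokens_per_request=2_000, price_per_million_tokens=5.0)
at_scale = annual_cost(employees=20_000, requests_per_day=25,
                       tokens_per_request=2_000, price_per_million_tokens=5.0)

print(f"pilot:    ${pilot:>12,.0f} per year")
print(f"at scale: ${at_scale:>12,.0f} per year ({at_scale / pilot:,.0f}x the pilot)")
```

Even this toy model shows why a successful pilot budget says little about the cost profile of a mature, enterprise-wide deployment.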

Workforce planning is more difficult than ever

Managing uncertainty about the future workforce is challenging. Employers are unsure how many AI experts they will need, what specific skills are required, whether that talent exists, how quickly they can hire, and how to remain attractive employers for in-demand talent. Conversely, they don’t know how rapidly AI might reduce demand for other skills, necessitating workforce rebalancing and retraining.

Supply chain dependencies can wreak havoc

Fragile supply chains expose enterprises to disruptions and technical, regulatory, and legal issues. The AI supply chain is global, with R&D concentrated in China, Europe, and North America, and semiconductor/hardware manufacturing in East Asia and the United States. Current geopolitics add complexity. Additionally, models and applications are increasingly developed in open-source forums spanning many countries.

Demand for greater explainability is a central challenge

Safe AI deployment is increasingly a prerequisite. Yet, most LLMs often function as “black boxes,” not revealing why or how they arrived at a specific response or what data was used. If AI models cannot provide clear justifications for their responses, recommendations, or decisions—for example, explaining why a credit card application was denied—they will not be trusted for critical tasks.
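As a deliberately simplified illustration of the kind of justification a credit decision requires, the sketch below trains a small logistic regression “deny/approve” model on synthetic data and reports which features pushed one application toward denial, using the model’s coefficients as reason codes. The data and feature names are invented, and real credit systems rely on far more rigorous, validated explanation methods.

```python
# A simplified sketch of decision justification ("reason codes"): per-feature
# contributions from a logistic regression credit model for one applicant.
# Data and feature names are synthetic; real systems need validated methods.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["income", "debt_to_income", "late_payments", "credit_age_years"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(features)))
# Synthetic ground truth: high debt and late payments drive denials (1 = deny).
y = (X[:, 1] + X[:, 2] - 0.5 * X[:, 0]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

applicant = scaler.transform(X[:1])                      # explain one case
contributions = model.coef_[0] * applicant[0]            # per-feature push
decision = "deny" if model.predict(applicant)[0] == 1 else "approve"

print(f"decision: {decision}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    direction = "denial" if c > 0 else "approval"
    print(f"  {name:>17}: {c:+.2f} toward {direction}")
```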

These AI-specific headwinds are significant but manageable. Companies are making progress. For instance, they might use dynamic cost planning or secure dedicated infrastructure such as NVIDIA GPU clusters. Chief HR officers (CHROs) are developing training programs to upskill existing workforces and support employees transitioning roles. However, lasting success requires more.

To capture AI value, leaders must rewire their companies

McKinsey’s Rewired framework outlines six foundational elements for sustained digital transformation: road map, talent, operating model, technology, data, and scaling (Exhibit 18). Successful implementation cultivates a culture of autonomy, leverages modern cloud practices, and builds multidisciplinary agile teams.

While these six elements are universally applicable, AI introduces important nuances for leaders:

  • Adaptability. AI technology advances so rapidly that organizations must quickly adopt new best practices to stay competitive. These practices can involve new technologies, talent, business models, or products. A modular approach, for example, helps future-proof tech stacks. As natural language becomes an integration medium, AI systems are becoming more compatible, enabling businesses to swap, upgrade, and integrate models and tools with less friction. This modularity helps enterprises avoid vendor lock-in and quickly utilize new AI advancements without constantly rebuilding their tech stacks (a minimal sketch of this pattern follows this list).
  • Federated governance models. Managing data and models requires balancing team autonomy for developing new AI tools with centralized risk control. Leaders can directly oversee high-risk or high-visibility issues, such as setting policies and processes to monitor models and outputs for fairness, safety, and explainability. They can also delegate monitoring of performance-based criteria like accuracy, speed, and scalability to business units.
  • Budget agility. Given technological advancements and the opportunity to select an optimal mix of LLMs, small language models (SLMs), and agents, business leaders should maintain flexible budgets. This allows enterprises to simultaneously optimize AI deployments for cost and performance.
  • AI benchmarks. These tools are powerful for quantitatively assessing, comparing, and improving the performance of different AI models and systems. If technologists adopt standardized public benchmarks—and if more C-level executives use benchmarks, including ethical ones—model transparency and accountability will improve, increasing AI adoption, even among more skeptical employees.
  • AI-specific skill gaps. Notably, 46 percent of leaders identify skill gaps as a significant barrier to AI adoption. Leaders must attract and hire top talent, including AI/ML engineers, data scientists, and AI integration specialists. They also need to create an appealing environment for technologists, offering experimentation time, access to cutting-edge tools, opportunities in open-source communities, and a collaborative engineering culture. Upskilling existing employees is equally critical: Research from McKinsey’s People and Organizational Performance Practice emphasizes tailoring training to specific roles, like bootcamps for technical teams on library creation and prompt engineering classes for functional teams.
  • Human centricity. To ensure fairness and impartiality, business leaders must incorporate diverse perspectives early and often in the AI development process and maintain transparent communication with teams. Currently, less than half of C-suite leaders (48 percent) would involve non-technical employees in early AI tool development stages (ideation, requirement gathering). Agile pods and human-centric practices like design-thinking and reinforcement learning from human feedback (RLHF) will help leaders and developers create AI solutions that all people want to use. In agile pods, technical team members work alongside employees from business functions (HR, sales, product) and support functions (legal, compliance). Leaders can also address employee concerns about job losses by being transparent about new skill requirements and headcount changes. Forums for employees to provide input, voice concerns, and share ideas are valuable for maintaining a transparent, human-first culture.
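As a concrete illustration of the modular approach mentioned in the adaptability point above, the sketch below defines a small provider interface so that application code depends on an abstraction rather than on any one vendor, making models easy to swap or upgrade. The class and method names are illustrative and not drawn from any particular framework.

```python
# A minimal sketch of a modular model-provider abstraction: application code
# depends on a small interface, so backends can be swapped with little friction.
# Class and method names are illustrative, not from any particular framework.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorAModel:
    def complete(self, prompt: str) -> str:
        return f"[vendor A] response to: {prompt}"       # would call vendor A's API

class OpenSourceModel:
    def complete(self, prompt: str) -> str:
        return f"[open source] response to: {prompt}"    # would call a local model

def summarize_ticket(model: ChatModel, ticket_text: str) -> str:
    """Application code depends only on the ChatModel interface."""
    return model.complete(f"Summarize this support ticket:\n{ticket_text}")

# Swapping providers is a one-line change, not a rewrite.
for backend in (VendorAModel(), OpenSourceModel()):
    print(summarize_ticket(backend, "Customer cannot reset their password."))
```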

Meeting the AI Future

The pace of AI advancement in the last two years is stunning. Some react to this speed by viewing AI as a threat to humanity. But what if we follow Reid Hoffman’s advice and imagine “what could possibly go right with AI”? Leaders might realize that all the necessary components for AI superagency in the workplace are falling into place.

Learn from yesterday, live for today, hope for tomorrow.

Albert Einstein, theoretical physicist

They might notice that employees are already using AI and eager to use it more. They may find that millennial managers are powerful change champions ready to support their peers. Instead of focusing on the 92 million jobs projected to be displaced by 2030, leaders could plan for the estimated 170 million new jobs and the new skills they will require.

This is the crucial moment for leaders to set bold AI commitments and address employee needs through on-the-job training and human-centric development. As leaders and employees collaborate to reimagine their businesses from the ground up, AI can evolve from a productivity enhancer into a transformative superpower—an effective partner that increases human agency. Leaders who replace fear of uncertainty with imaginative possibility will discover new AI applications, not just for optimizing existing workflows but as a catalyst to solve larger business and human challenges. Early AI experimentation focused on proving technical feasibility through narrow use cases, such as automating routine tasks. Now, the horizon has shifted: AI is poised to unleash unprecedented innovation and drive systemic change that delivers real value.

Glossary

The following terms in this report are defined specifically for its context.

Adoption and deployment: Deployment typically refers to the extent to which an organization rolls out a technology product (whether developed in-house or purchased off the shelf), and adoption reflects how extensively these products are used to generate measurable business value. Given the emerging nature of AI, many companies are simultaneously deploying and adopting, iterating as they go. Therefore, this report often uses adoption and deployment interchangeably to refer to the overall uptake of AI tools.

Agentic AI: Systems with autonomy and goal-directed behavior capable of making independent decisions, planning, and adapting to achieve specific objectives without direct, ongoing human input.

Application programming interface (API): Intermediary software components that allow two applications to talk to each other; a structured way for AI systems to programmatically access (usually external) models, data sets, or other pieces of software.

Artificial intelligence (AI): The ability of software to perform tasks that traditionally require human intelligence, mirroring some cognitive functions usually associated with human minds.

Deep learning: A subset of machine learning that uses deep neural networks, which are layers of connected “neurons” whose connections have parameters or weights that can be trained. Deep learning is especially effective at learning from unstructured data such as images, text, and audio.

Digital workforce: A collaborative ecosystem where humans and automated agents work together, leveraging digital platforms, AI, and cloud computing to enhance productivity, efficiency, and scalability across various industries.

Employee: A worker in a corporate setting, either a manager or independent contributor. Examples of the type of employees represented in this report include people working in product management, marketing, technology, business development, sales, and finance.

Foundation models: Deep learning models trained on vast quantities of unstructured, unlabeled data that can be used for a wide range of tasks out of the box or adapted to specific tasks through fine-tuning. Examples of these models are DALL-E 2, GPT-4, PaLM, and Stable Diffusion.

Generative AI (gen AI): AI that is typically built using foundation models and has capabilities that earlier forms lacked, such as the ability to generate content. Foundation models can also be used for nongenerative purposes (for example, classifying user sentiment as negative or positive based on call transcripts).

Graphics processing units (GPUs): Computer chips originally developed for producing computer graphics, such as for video games, that are also useful for deep learning applications. In contrast, traditional machine learning usually runs on central processing units (CPUs), normally referred to as a computer’s “processor.”

Hallucination: A scenario where an AI system generates outputs that lack grounding in reality or a provided context. For instance, an AI chatbot may fabricate information or present a false narrative.

Large language models (LLMs): A class of foundation models that can process massive amounts of unstructured text and learn the relationships between words or portions of words, known as tokens. This enables LLMs to generate natural-language text, performing tasks such as summarization or knowledge extraction. GPT-4 (which underlies ChatGPT) and the Llama family of models from Meta are examples of LLMs.

Modality: A high-level data category such as numbers, text, images, video, and audio.

Multimodal capabilities: The ability of an AI system to process and generate various types of data (text, images, audio, video) simultaneously, enabling complex tasks and rich outputs.

Productivity (from labor): The ratio of GDP to total hours worked in the economy. Labor productivity growth generally comes from increases in the amount of capital available to each worker, the education and experience of the workforce, and improvements in technology.

Prompt engineering: The process of designing, refining, and optimizing input prompts to guide a gen AI model toward producing desired and accurate outputs.

Reasoning AI: AI systems that perform logical thinking, step-by-step planning, problem solving, and decision making using structured or unstructured data, going beyond pattern recognition to draw conclusions and solve complex problems.

Superagency: A state where individuals, empowered by AI, amplify their creativity, productivity, and positive impact. Even those not directly engaging with AI can benefit from its broader effects on knowledge, efficiency, and innovation.

Unstructured data: Data that lack a consistent format or structure (for example, text, images, video, and audio files) and typically require more advanced techniques to extract insights.

Acknowledgments

Hannah Mayer is a partner in McKinsey’s Bay Area office, where Lareina Yee is a senior partner, Michael Chui is a knowledge developer and senior fellow, and Roger Roberts is a partner.

The authors wish to thank Alex Panas, a senior partner in the Boston office; Eric Kutcher, a senior partner in the Bay Area office; Kate Smaje, a senior partner in the London office; Noshir Kaka, a senior partner in the Mumbai office; Robert Levin, a senior partner in the Boston office; and Rodney Zemmel, a senior partner in the New York office, for their contributions to this report.

The authors were inspired by the impact delivered by our QuantumBlack, AI by McKinsey, colleagues, led by Alex Singla and Alex Sukharevsky, and our gen AI lab leaders, especially Carlo Giovine and Stephen Xu.

The research was led by consultants Akshat Gokhale, Amita Mahajan, Begum Ortaoglu, Estee Chen, Hailey Bobsein, Katharina Giebel, Mallika Jhamb, Noah Furlonge-Walker, and Sabrina Shin. This report was edited by executive editors Kristi Essick and Rick Tetzeli.

Thank you to Reid Hoffman, along with his chief of staff Aria Finger and representatives at Superagency: What Could Possibly Go Right with Our AI Future publisher Authors Equity, especially Deron Triff, for their ongoing collaboration. Working together with Hoffman, who brings the distinctive perspective of being both an investor in and a mentor to the creators of AI, we looked at a central question: How can businesses win with AI in the medium and long terms? We benefited from working sessions with Hoffman, CEOs, and AI industry thought leaders.

We would like to thank the members of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) who challenged our thinking and provided valuable feedback.

This report contributes to McKinsey’s ongoing research on AI and aims to help business leaders understand the forces transforming ways of working, identify strategic impact areas, and prepare for the next wave of growth. As with all McKinsey research, this work is independent and has not been commissioned or sponsored in any way by any business, government, or other institution. The report and views expressed here are ours alone. We welcome your comments on this research at SuperagencyReport@McKinsey.com. Learn more about our gen AI insights and sign up for our newsletter.

To meet this more ambitious era, leaders and employees must ask themselves big questions. How should leaders define their strategic priorities and steer their companies effectively amid disruption? How can employees ensure they are ready for the AI transition coming to their workplaces? Questions like the following ones will shape a company’s AI future:

For business leaders:

  • Is your strategy ambitious enough? Do you want to transform your whole business? How can you reimagine traditional cost centers as value-driven functions? How do you gain a competitive advantage by investing in AI?
  • What does successful AI adoption look like for your organization? What success indicators will you use to evaluate whether your investments are yielding desired ROI?
  • What skills define an AI-native workforce? How can you create opportunities for employees to develop these skills on the job?

For employees:

  • What does achieving AI mastery mean for you? Does it extend to confidently using AI for personal productivity tasks such as research, planning, and brainstorming?
  • How do you plan to expand your understanding of AI? Which news sources, podcasts, and video channels can you follow to remain informed about the rapid evolution of AI?
  • How can you rethink your own work? Some of the most innovative ideas often emerge from within teams, rather than being handed down from leadership. How would you redesign your work to drive bottom-up innovation?

These questions have no easy answers, but a consensus is emerging on how to best address them. For example, some companies deploy both bottom-up and top-down approaches to drive AI adoption. Bottom-up actions help employees experiment with AI tools through initiatives such as hackathons and learning sessions. Top-down techniques bring executives together to radically rethink how AI could improve major processes such as fraud management, customer experience, and product testing.

These kinds of actions are critical as companies seek to move from AI pilots to AI maturity. Today only 1 percent of business leaders report that their companies have reached maturity. Over the next three years, as investments in the technology grow, leaders must drive that percentage way up. They should make the most of their employees’ readiness to increase the pace of AI implementation while ensuring trust, safety, and transparency. The goal is simple: capture the enormous potential of gen AI to drive innovation and create real business value.
