The Wharton School of the University of Pennsylvania and GBK Collective have published a study on adopting Generative AI in the USA. The report “Growing Up: Navigating Gen AI’s Early Years” (registration required) has some surprising results.
For example, 72% say they use GenAI at least once a week, up from 37% in 2023. Marketing & Sales teams are the fastest-growing group, with usage tripling to 62%. Importantly, there is a slight drop in concerns over job replacement from 75% to 72%.
The 76-page report is compiled from 802 15-minute quantitative tracking surveys conducted in the USA only in July 2024. All respondents worked at companies with over 1,000 employees and classed themselves as senior decision makers in HR, IT/Business Intelligence, Legal, Marketing/Sales, Operations, Product/Engineering, Purchasing/Procurement, Finance/Accounting or General Management.

Stefano Puntoni, Sebastian S. Kresge Professor of Marketing at the Wharton School and Co-Director of AI at Wharton, said, “Generative AI has rapidly evolved from a tool of experimentation to a core driver of business transformation.
“Companies are no longer just exploring AI’s potential—they are embedding it into their strategies to scale growth, streamline operations, and enhance decision-making. The novelty phase is over. We’re now starting to see the integration of AI into various business processes, as companies look to unlock its long-term value across the enterprise.”
What are some key facts from the report?
With 76 pages, this report offers much to choose from, some of it contradictory and some of concern. It is unclear, and does not appear to have been asked, how many respondents are using private or public instances of GenAI. From the responses, it is quite possible that many are using a mix, which raises questions about data security, privacy, and compliance.
Growth and adoption
Growth and adoption are improving significantly, with the headline statistic of weekly usage almost doubling to 72%. Other interesting numbers include:
- GenAI spending is predicted to increase 25% in 2025 and is already up 130% over 2023. 72% say budgets will increase in the next year, most over $5 million. However, 57% anticipate the increase in spending will slow to between 1% and 10%. The report authors believe that is due to a lack of reportable ROI.
- Most (78%) have seen GenAI integrated across business functions. However, the study seems to have focused on LLM usage, with no mention of small language models (SLMs). The latter is where departments and business functions are beginning to deploy private GenAI. It seems a strange question for the report to have missed.
- A long table of where GenAI is used shows document and proposal writing/editing (64%) and data analysis and analytics (62%) are the most used areas. This is likely because of tools such as Microsoft Copilot in Microsoft 365 and Grammarly.
- The least used are legal contract generation (41%) and customer loyalty and retention programs (46%). Oddly, these are two areas with plenty of case studies showing potential benefits. In the legal space, the use of AI has given rise to a number of bespoke solutions.
- Purchasing/procurement are the heaviest users (94%), an increase of 44% over 2023. Product development/engineering (78%) are next, with an increase of 38%. Management has also increased its use by 44%, with usage standing at 69%.
Who is leading GenAI adoption?
As with most new technologies, the temptation is to rely on consultants to drive early adoption. What is interesting from the survey is that there are already signs of maturity, with internal teams leading deployments.
- 46% say their GenAI strategy is led by a single internal team, up from 34% in 2023. That move to a single team has impacted multi-team deployments, which dropped from 47% last year to 45% this year. Consultant- and partner-led engagements were just 7% last year and have dropped to 6%.
- The Chief AI Officer (CAIO) role has also risen rapidly, with 46% of companies saying they have the role. Unfortunately, the report did not look at whether this was a conversion of the Chief Data Officer (CDO) into the CAIO. It also failed to show the impact on the C-Suite of adding a new role, or to provide a structural chart showing the relationships between the CISO, CAIO, CDO, CFO, CTO, and Chief Compliance Officer.
It’s all good news, or is it?
The report gives the impression that there are many high-fiving happy people around GenAI. It’s as if the technology has moved through the hype cycle to maturity, but that is far from the case. Those who were curious last year (53%) are less so now (44%) because they have become converts. The same is true of those with worries (down from 19% to 10%) and the sceptics (down from 23% to 16%).
The impact on employee skills is interesting. 90% believe GenAI enhances employees’ skills, up 10% from last year. At the same time, 72% see it replacing employees’ skills, down from 75% last year. What is not clear is which skills. If GenAI is doing the report writing/editing and data analytics, what are employees gaining?
Teaching employees prompt engineering might sound like a good idea, but effective prompt engineering takes time. It is also questionable how many employees actually need it as a skill. A better UI would be a wiser investment, as the sketch below illustrates.
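To show what that means in practice, here is a minimal sketch in Python. It is an illustration only: call_model() is a hypothetical stand-in for whichever GenAI endpoint (public or private) a company has approved, and the template wording is an assumption, not anything from the report.

```python
# Hypothetical sketch: the prompt engineering is written once and embedded in a
# helper, so employees only supply the text, not a hand-crafted prompt.

REPORT_EDIT_TEMPLATE = """You are an editor for internal business documents.
Rewrite the text below so that it is concise and uses plain business English.
Keep all figures and dates unchanged.

Text:
{text}
"""

def call_model(prompt: str) -> str:
    """Placeholder for whichever approved GenAI service is in use."""
    raise NotImplementedError("Wire this up to your organisation's GenAI endpoint")

def edit_report(text: str) -> str:
    # The prompt-engineering effort lives here, once, rather than being a
    # skill every employee has to learn and maintain.
    prompt = REPORT_EDIT_TEMPLATE.format(text=text)
    return call_model(prompt)
```

The point is not the template itself but where the effort sits: once the prompt is embedded behind a tool or UI, most employees never need to see it.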
Access and security
What is raising concern is the apparent lack of restrictions on the use of GenAI. Larger businesses, with $2B+ in annual revenue, have far more restrictions than smaller companies. The averages across all three sizes of business are:
- Any employee can use it without restrictions – 36%
- Only some employees can use it, without any restrictions – 15%
- Any employee can use it with some restrictions – 28%
- Only some employees can use it, with some restrictions – 14%
- No employee can ever use it; it is completely banned for work purposes – 1% ($2B+ companies only)
It is not just restrictions on access that are of concern, but also the lack of policies and guidelines on GenAI use. Are people using public GenAI or private instances trained only on corporate data? How are they asking questions? What data are they providing when asking for documents to be written? How are data privacy and security being handled?
All of this is covered on page 42 of the report and it is seriously disappointing. The biggest issue is the lack of clarity on public versus private GenAI. Without that understanding, many of these policies make little sense. In fact, many of these policies will be hard to enforce, and data leakage will be hard to detect, without controls on how users access GenAI.
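As an illustration of the kind of control the report never probes, a minimal sketch of a gateway that screens prompts before they reach a public GenAI service might look like the following. Everything in it is an assumption for illustration: the blocked terms, the redaction pattern, and the screen_prompt() name are hypothetical, not drawn from the report.

```python
import re

# Hypothetical gateway sketch: screen prompts before they leave the company
# for a public GenAI service. Patterns and blocklist are illustrative only.

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
BLOCKED_TERMS = {"customer list", "salary", "source code"}

def screen_prompt(prompt: str) -> str:
    """Refuse prompts containing restricted terms and redact obvious personal data."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            raise ValueError(f"Prompt blocked: contains restricted term '{term}'")
    # Redact email addresses before the prompt leaves the organisation
    return EMAIL_PATTERN.sub("[REDACTED EMAIL]", prompt)
```

Without something sitting at a choke point like this, a policy that tells staff not to paste sensitive data into public GenAI is effectively unenforceable, which is exactly the gap the report leaves open.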
Enterprise Times: What does this mean?
As already noted, this is a large (76-page) report. It contains a lot of data and, helpfully, the questions that were asked. Anyone seeking to use this report to guide their use of GenAI should read it carefully rather than simply taking the numbers and responses as written.
This report clearly shows that GenAI is continuing to achieve adoption across enterprises. However, there is no clear indication of the ROI that organisations are achieving or what they should expect. Without ROI, investment will dry up, and there is already some indication of that.
Another area that needs more clarification than this report gives is data protection and compliance. It is glossed over with a couple of questions and no context on how it is being implemented. Nor is there any indication of how organisations are preventing data leakage caused by the use of GenAI.
As useful as some of the numbers in this report are, it is a perfect example of why quantitative reports create more questions than answers. It should not have been hard for the report authors to identify the grey areas and gather some qualitative data.