Six Key Predictions for Artificial Intelligence in the Enterprise

November 7, 2023

As we head into 2024, AI continues to evolve at breakneck speed. The adoption of AI in large organizations is no longer a matter of “if,” but “how fast.” Companies have realized that harnessing the power of AI is not only a competitive advantage but also a necessity for staying relevant in today’s dynamic market. In this blog post, we’ll look at AI within the enterprise and outline six key predictions for the coming year. At a glance, they are:

  1. Companies Will Be Shocked at the Cost of Generative AI
  2. Prompt Engineering Will Not Be the Be-All and End-All of Gen AI
  3. Everyone Works in AI Now
  4. Smaller, Open Source LLMs Will Overtake Larger, Generic LLMs
  5. Businesses Will Know Which Use Cases to Test, But Not Which Ones Will Deliver ROI
  6. Point Solutions Will Continue to Consolidate, Leading to the Preference for End-to-End Platforms

Without further ado, let’s dive in!

Prediction #1: Companies Will Be Shocked at the Cost of Generative AI

ClearML recently conducted a research survey of 1,000 Fortune 1000 C-level executives in charge of their organization’s Gen AI initiatives. One of the questions we asked was which Generative AI cost drivers respondents had considered for developing, deploying, maintaining, and running generative AI in their enterprise as part of total cost of ownership (TCO).

Based on survey answers, we found that most respondents believe their Gen AI costs are centered around model development, training, and systems infrastructure – in other words, the costs associated with making a model work: human capital, the tools and systems to run it, and the app/UI for users. 59% of executives overseeing AI reported that tools, systems, and infrastructure integration costs (APIs, integrations, monitoring tools, etc.) are the top AI cost drivers; 48% reported that model development and training costs (human capital/talent) are a top cost driver; and 42% indicated application development for the user interface as a top cost driver.

Key Gen AI Cost Drivers

This is an excellent example of the gap between a company’s vision and reality. We believe respondents are underestimating both how messy data can be and the heavy lifting needed for data prep, as well as the cost of heavy usage at company-wide scale. It’s worth noting that this is even more challenging if their company is using AI as a Service.

Similarly, respondents are underestimating the time required by subject matter experts (SMEs) to work with the team to ensure the model is accurate and “good enough” to roll out either internally or externally. Most importantly, a shockingly low 8% of respondents said they would limit the number of models and/or access to Gen AI to better manage their budgets. That suggests most are not thinking about running costs at all, which we expect is going to be a huge surprise for them.
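To put the running-cost blind spot in perspective, here is a back-of-envelope sketch of what company-wide inference alone can cost. Every figure in it – user counts, query volumes, token counts, and per-token prices – is an illustrative assumption (the prices are roughly in line with late-2023 GPT-4 API list rates), not a survey result.

```python
# Back-of-envelope estimate of monthly Gen AI inference spend at company scale.
# All figures are illustrative assumptions, not survey results.

users = 5_000                  # employees with access to the model
queries_per_user_per_day = 10  # average daily prompts per user
workdays_per_month = 21

input_tokens_per_query = 1_000  # prompt plus retrieved context
output_tokens_per_query = 500   # generated response

# Assumed per-1K-token prices, roughly in line with late-2023 GPT-4 API rates.
price_per_1k_input = 0.03   # USD
price_per_1k_output = 0.06  # USD

queries_per_month = users * queries_per_user_per_day * workdays_per_month
cost_per_query = (input_tokens_per_query / 1_000) * price_per_1k_input \
               + (output_tokens_per_query / 1_000) * price_per_1k_output

monthly_cost = queries_per_month * cost_per_query
print(f"{queries_per_month:,} queries/month -> ${monthly_cost:,.0f}/month")
# 1,050,000 queries/month -> $63,000/month, before data prep, SME time,
# infrastructure, and monitoring are even counted.
```

Under these assumptions, a single company-wide use case burns roughly three-quarters of a million dollars a year on inference alone – exactly the line item only 8% of respondents are planning to control.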

Moreover, our previous survey on Gen AI adoption in the enterprise found that although AI and ML adoption is now a key revenue and ingenuity engine within the enterprise, an astonishing 59% of C-level leaders are inadequately resourced to deliver on business leadership’s expectations of Generative AI innovation. They lack the budget and resources needed to drive adoption successfully across the enterprise and create value. Clearly, something’s got to give.

Prediction #2: Prompt Engineering Will Not Be the Be-All and End-All of Gen AI

Prompt Engineering is great for solving very specific and narrow use cases, but unfortunately, that doesn’t scale. And if you are using something like OpenAI’s API, your prompts can break whenever the API or the underlying model changes. Moreover, it doesn’t help you adopt Gen AI across organizational functions. So despite the hype surrounding prompt engineering, its dominance may be transient for several reasons:

  • Advancements in AI Models: Future AI models are expected to become more intuitive and proficient at understanding natural language. As AI research progresses, these models will require less explicit and meticulously engineered prompts to generate accurate and contextually relevant responses. This reduction in prompt dependency will make prompt engineering less critical.
  • Improved Generative AI Capabilities: New iterations of AI language models, such as GPT-4 and beyond, are already demonstrating enhanced prompt-crafting capabilities. These models can craft more effective and context-aware prompts themselves, diminishing the need for manual intervention in the form of prompt engineering.
  • Interoperability Challenges: The effectiveness of prompts is closely tied to specific algorithms and language models. This creates challenges when using prompts across different AI models or versions. As AI models continue to evolve, maintaining prompt compatibility and adapting them to new systems will become increasingly cumbersome.
  • The Evolving Role of AI Engineers: While prompt engineering is currently a specialized skill, future AI systems are expected to demand a shift in the role of AI engineers. They will increasingly focus on problem formulation and domain expertise rather than fine-tuning prompts. This transition will de-emphasize prompt engineering.
  • Complexity and Domain Expertise: Many AI tasks require a deep understanding of the specific domain or problem at hand. Instead of perfecting prompts, future AI practitioners will benefit more from the ability to formulate well-defined problems and communicate them effectively to AI systems. Problem formulation will emerge as a more enduring and versatile skill.
  • Overemphasis on Prompt Crafting: Excessive emphasis on crafting the perfect prompt can sometimes divert attention from the core problem itself. This overemphasis may reduce one’s sense of control over the creative process and detract from exploring innovative solutions. In the future, problem formulation is likely to take precedence over prompt engineering.
  • AI Ecosystem Changes: The AI ecosystem is dynamic and continually evolving. As new techniques and models emerge, the relative importance of various AI-related skills will shift. Problem formulation is poised to gain prominence as AI applications diversify and mature.
  • Enhanced User-Friendly Interfaces: Future AI systems are expected to feature more user-friendly interfaces that reduce the need for users to craft complex prompts. This simplification will broaden AI adoption and reduce the reliance on prompt engineering.

All of these factors collectively suggest that prompt engineering’s prominence in the AI landscape is likely to diminish over time. In its stead, we believe fine-tuning will take center stage as part of an end-to-end, model-agnostic LLM approach. This approach solves for multiple use cases, giving your organization economies of scale, as well as:

  • Improved Steerability: Fine-tuning allows businesses to make the model follow instructions more effectively, such as generating concise outputs or responding consistently in a given language. For instance, developers can use fine-tuning to ensure that the model always responds in German when prompted to use that language.
  • Reliable Output Formatting: Fine-tuning enhances the model’s ability to consistently format responses—an essential aspect for applications that require a specific response format, such as code completion or composing API calls. Developers can use fine-tuning to more reliably convert user prompts into high-quality JSON snippets that can be seamlessly integrated with their own systems.
  • Alignment with Brand: Fine-tuning is a great way to refine the qualitative feel of the model’s output, including its tone, making it better aligned with a company’s brand. Businesses with recognizable brand voices can use fine-tuning to keep the model consistent with their tone.

Lastly, fine-tuning also empowers businesses to shorten their prompts while maintaining similar performance. Early testers have reduced prompt sizes by incorporating fine-tuned instructions directly into the model itself, thereby accelerating each API call and reducing costs.
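To make this concrete, here is a minimal sketch of what fine-tuning for steerability and output formatting might look like with the OpenAI Python SDK (v1.x). The training examples and file name are hypothetical, and a real job needs a larger dataset; the same pattern applies when fine-tuning open-source models with other toolchains.

```python
# Minimal fine-tuning sketch using the OpenAI Python SDK (v1.x).
# The training examples and file name below are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Chat-formatted training examples: one teaches the model to always answer
# in German, the other to always return a strict JSON snippet.
# (A real fine-tuning job requires at least 10 examples.)
examples = [
    {"messages": [
        {"role": "system", "content": "Antworte immer auf Deutsch."},
        {"role": "user", "content": "What are your support hours?"},
        {"role": "assistant", "content": "Unser Support ist Mo-Fr von 9 bis 17 Uhr erreichbar."},
    ]},
    {"messages": [
        {"role": "system", "content": "Reply only with valid JSON."},
        {"role": "user", "content": "Order 3 widgets for ACME"},
        {"role": "assistant", "content": json.dumps({"item": "widget", "qty": 3, "customer": "ACME"})},
    ]},
]

# Fine-tuning data is uploaded as a JSONL file, one example per line.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

uploaded = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=uploaded.id, model="gpt-3.5-turbo")
print(job.id)  # poll this job; the result is a custom model name you can call
```

Once the job completes, the resulting model can be called with a much shorter prompt – the behavioral instructions now live in the weights rather than in every request.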

Prediction #3: Everyone Works in AI Now

When asked how many users will need access to the Generative AI model per business use case, an astounding 85% reported they expect between 501 and more than 10,000 users to need access to a Generative AI model within their respective use case or across multiple use cases:

Number of Users Needing Access to Gen AI Models

Given the extensive number of users needing access to Gen AI models, it was imperative to understand what percentage of employees were expected to use Gen AI in the first year of testing preliminary business use cases and rollout. 

It’s clear that AI business leaders have an optimistic outlook on Generative AI adoption across their workforce, with 50% of leaders indicating that 11-25% of employees, and an additional 18% reporting that 26-50% or more of employees, will use Generative AI in the first year of testing or rollout:

Percentage of Employees Using Gen AI - First Year

Now that we know what percentage of employees are expected to use Gen AI in its first year of testing and rollout, we wanted to know the percentage in year two after rollout. AI and ML leaders see additional cross-organizational adoption in the second year after initial adoption: 30% of respondents indicated that 16-25% of employees are expected to use Gen AI, 50% stated that 26-50% of employees will use it, and an additional 14% reported they expect an astounding 51-75% of employees to use Gen AI as part of their day-to-day work in the second year:

Percentage of Employees Using Gen AI - Second Year

As a follow-up to AI leaders’ forecasts on internal employee adoption of Gen AI, we asked them what percentage of their employees they expected would eventually use this technology. Respondents plainly foresee vast Generative AI adoption across use cases, departments, and business units over time: 40% anticipate that 26-50% of their workforce will eventually use Generative AI in their discipline, an additional 39% expect 51-75% of their entire workforce to use it, and 12% indicate that a staggering 76-90% of their employees will eventually be using Gen AI:

Ultimate Employee Adoption of Gen AI

Prediction #4: Smaller, Open Source LLMs Will Overtake Larger, Generic LLMs

Smaller, open-source large language models (LLMs) have been gaining momentum, and several factors contribute to their potential to overtake larger, generic LLMs:

  • Specialization and Customization: Smaller open-source LLMs can be tailored to specific domains, industries, or use cases. Unlike larger models that serve a wide range of applications, smaller LLMs can provide more precise and domain-specific results. This customization makes them more attractive for organizations seeking targeted language generation.
  • Resource Efficiency: Larger LLMs demand substantial computational resources and infrastructure, making them less accessible to smaller organizations or those with budget constraints. Smaller models can operate efficiently on modest hardware, reducing costs and democratizing access to advanced natural language processing.
  • Faster Training and Inference: Smaller LLMs have shorter training times and faster inference speeds. This agility is crucial for real-time applications, chatbots, and other interactive systems, where low latency is a priority. Large models often struggle with speed due to their immense size.
  • Reduced Ethical and Environmental Concerns: The colossal size of generic LLMs raises ethical questions about the environmental impact and energy consumption of training such models. Smaller LLMs are more environmentally friendly, aligning with sustainability goals and ethical considerations.
  • Community Collaboration: Open-source models thrive on community contributions. Smaller LLMs often encourage active collaboration, enabling developers and researchers to address issues and improve the model continually. A strong community can lead to rapid advancements and fine-tuning.
  • Privacy and Data Control: Smaller models can be trained on organization-specific data, ensuring data privacy and control. In contrast, larger models may require sharing data with third parties for training, raising privacy concerns.
  • Niche and Emerging Markets: Smaller LLMs are well-suited for niche or emerging markets where specialized knowledge is crucial. Industries like healthcare, law, or finance benefit from LLMs that understand the specific terminologies and contexts unique to their field.
  • Low Latency and Edge Computing: Edge computing, where processing occurs closer to the data source, benefits from smaller LLMs due to their lower resource requirements and faster response times. This is valuable for applications like IoT devices and autonomous systems.
  • Diversity and Inclusion: Smaller LLMs can support languages and dialects that might be overlooked by larger, generic models. This inclusivity is essential for preserving linguistic diversity and supporting underrepresented languages.
  • Experimentation and Research: Smaller LLMs can serve as testing grounds for new ideas and research. Researchers and developers can experiment with innovative approaches and linguistic theories more effectively on models that are easier to manage.

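To make the resource-efficiency point concrete, here is a minimal sketch of running a smaller open-source model locally with the Hugging Face transformers library. The model name is just one example of a roughly 7B-parameter open model; any similarly sized instruction-tuned model – ideally one fine-tuned on your own domain – follows the same pattern.

```python
# Running a smaller open-source LLM locally with Hugging Face transformers.
# Requires: pip install transformers accelerate torch
# The model below is one example of a ~7B-parameter open model; swap in
# whichever model fits your domain, hardware, and license requirements.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",
    device_map="auto",  # place weights on GPU(s) if available
)

# Mistral's instruct models expect the [INST] ... [/INST] prompt format.
prompt = "[INST] Summarize the key risks in this contract clause: ... [/INST]"
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```

Because the model runs inside your own infrastructure, the privacy, latency, and cost arguments above follow directly: no data leaves your environment, and inference cost is bounded by hardware you already control.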
In summary, smaller open-source LLMs offer practical advantages in terms of customization, efficiency, and community collaboration, positioning them to excel in specific use cases and potentially outpace larger, generic counterparts. If your company would like to leverage those benefits, please request a demo of ClearGPT, ClearML’s Gen AI platform.

Prediction #5: Businesses Will Know Which Use Cases to Test, But Not Which Ones Will Deliver ROI

When it comes to business use cases for Gen AI, the majority of respondents highlighted five critical use cases:

  • Strategy, analysis, and planning (corporate planning, risk management, finance) – 43%, the leading use case
  • Feature for customers within the product – 40%
  • External chatbot/automation to handle low-level tasks (customer support, sales, etc.) – 38%
  • Content generation (sales, marketing, HR, etc.) – 37%
  • Content recommendation/generation engine for enabling talent (customer support or sales representatives) – 32%

Top Gen AI Business Use Cases

The question becomes: which of these use cases, if any, will deliver payback on an organization’s investment in AI? In our first survey this year, we found that 57% of respondents’ boards expect a double-digit increase in revenue from AI/ML investments in the coming fiscal year, while 37% expect a single-digit increase. Meanwhile, 59% of C-level leaders say they lack the necessary budget and resources for successful Generative AI adoption, hindering value creation; 66% of respondents face challenges in quantifying the impact and ROI of their AI/ML projects on the bottom line due to underfunding and understaffing; and 42% need more expert machine learning personnel to ensure success. We’re having a hard time understanding how a double-digit increase in revenue would actually happen when the majority of respondents say they don’t have enough budget, can’t quantify impact, and need more resources.

Prediction #6: Point Solutions Will Continue to Consolidate, Leading to the Preference for End-to-End Platforms

Our first survey in 2023 found that 88% of respondents indicated their organization is seeking to standardize on a single AI/ML platform across departments versus using different point solutions for different teams. There are several reasons why the AI point solutions market is steadily consolidating into a preference for end-to-end platforms:

  • Streamlined Integration: Many organizations are seeking comprehensive and unified AI solutions to simplify the integration of AI tools into their existing systems. Platform solutions offer a one-stop-shop for various AI capabilities, reducing the complexity of connecting disparate tools.
  • Enhanced Efficiency: Platforms provide a more efficient way to manage and deploy AI tools. Users can access a wide array of AI functionalities from a single interface, streamlining workflows and reducing the time and effort needed for managing multiple standalone tools.
  • Seamless Collaboration: As AI projects often require cross-functional collaboration, platform solutions facilitate seamless interaction between different teams and stakeholders. A centralized platform encourages knowledge sharing, enhances transparency, and fosters effective cooperation.
  • Interoperability: The shift towards platform solutions promotes interoperability between different AI technologies. Users can leverage a variety of tools within the same ecosystem, ensuring compatibility and data consistency across the AI landscape.
  • Scalability: AI platform solutions are designed to accommodate growing needs. They offer scalable options, allowing organizations to expand their AI capabilities as their requirements evolve. This scalability is crucial in a dynamic business environment.
  • Reduced Complexity: Managing multiple standalone AI tools can be daunting, particularly for smaller businesses. Platform solutions simplify AI adoption by offering a centralized dashboard where users can access and configure the tools they need.
  • Cost Efficiency: Investing in individual AI tools can be costly, both in terms of acquisition and integration. Platform solutions often provide a more cost-effective approach, bundling multiple tools into a single package with potentially lower licensing fees.
  • Enhanced User Experience: Consolidated platforms offer a cohesive user experience. Users don’t have to navigate between different tools with varying interfaces and settings, making the AI journey more user-friendly and intuitive.
  • Robust Support and Updates: Platform providers typically offer robust customer support and regular updates. This ensures that users have access to the latest AI capabilities, bug fixes, and improvements, all from a single source.
  • Strategic Focus: By opting for platform solutions, organizations can shift their focus from managing a plethora of AI tools to strategically leveraging AI for business growth. This transition allows them to concentrate on innovation and value generation rather than tool maintenance.
  • Competitive Advantage: The consolidation into platform solutions enables organizations to stay competitive by harnessing a wide range of AI capabilities effectively. It empowers them to adapt to evolving market trends and customer needs swiftly.
  • Data Synergy: Platform solutions can facilitate data synergy by providing a centralized data management hub. This enables businesses to derive deeper insights from their data, fostering more informed decision-making.

In conclusion, the consolidation of the AI tool market into platform solutions offers organizations a wide range of benefits, including streamlined integration, efficiency, scalability, and cost-effectiveness. As the demand for AI continues to grow, these platform solutions are likely to remain at the forefront, providing a holistic approach to AI adoption and application across various industries.

Next Steps

Get started with ClearML by using our free tier servers or by hosting your own. Read our documentation here. You’ll find more in-depth tutorials about ClearML on our YouTube channel, and we also have a very active Slack channel for anyone who needs help.

Interested in learning more about how to leverage AI & machine learning in your organization? Request a demo of our Gen AI platform, ClearGPT. If you need to scale your ML pipelines and data abstraction or need unmatched performance and control, please request a demo of ClearML.
