The AI trend may seem to be following the same trajectory of hype and adoption as previous enterprise tech trends such as cloud computing and machine learning, but it differs in significant ways, including:

  • AI requires massive amounts of compute for the processes that let it digest and recreate unstructured data.
  • AI is changing how some organizations look at organizational structure and careers.
  • AI content that can be mistaken for photographs or original artwork is shaking up the artistic world, and some worry it could be used to influence elections.

Here are our predictions for five AI trends, most of them centered on generative models, to keep an eye on in 2024.

AI adoption increasingly looks like integration with existing applications

Many generative AI use cases coming to market for enterprises integrate with existing applications rather than creating completely new ones. The most high-profile example is the proliferation of copilots, meaning generative AI assistants. Microsoft has embedded Copilot across its 365 suite, companies such as SoftServe provide copilots for industrial work and maintenance, and Google offers copilots for everything from video creation to security.

But all of these copilots are designed to sift through existing content or to generate content that reads like what a human would write for work.

SEE: Is Google Gemini or ChatGPT better for work? (TechRepublic)

Even IBM asked for a reality check about trendy tech, pointing out that tools like Google’s 2018 Smart Compose are technically “generative” but weren’t considered a change in how we work. A major difference between Smart Compose and contemporary generative AI is that some AI models today are multimodal, meaning they can create and interpret pictures, videos and charts.

“We’ll see a lot of innovation about that (multimodality), I would argue, in 2024,” said Arun Chandrasekaran, distinguished VP, analyst at Gartner, in a conversation with TechRepublic.

At NVIDIA GTC 2024, many startups on the show floor ran chatbots on Mistral AI’s large language models, since open models can be used to create custom-trained AI with access to company data. Using proprietary training data lets the AI answer questions about specific products, industrial processes or customer services without feeding proprietary company information back into a trained model that might release that data onto the public internet. There are many other open models for text and image generation, including Meta’s Llama 2, Stability AI’s suite of models, which includes Stable LM and Stable Diffusion, and the Falcon family from Abu Dhabi’s Technology Innovation Institute.

“There’s a lot of keen interest in bringing enterprise data to LLMs as a way to ground the models and add context,” said Chandrasekaran.

Customizing open models can be done in a few ways, including prompt engineering, retrieval-augmented generation and fine-tuning.

AI agents

Another way AI might integrate with existing applications more in 2024 is through AI agents, which Chandrasekaran called “a fork” in AI progress.

AI agents automate the tasks of other AI bots: instead of prompting individual models separately, the user provides one natural language instruction to the agent, which puts its team of bots to work assembling the different commands needed to carry out the instruction.
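That delegation pattern can be reduced to a toy sketch. Everything below is invented for illustration: real agents would route sub-tasks to LLM-backed specialists rather than string functions, but the shape of the orchestration is the same.

```python
# Toy AI agent: one natural-language instruction is split into sub-tasks,
# and each sub-task is routed to the "specialist" bot whose keyword it
# mentions. Bot names and routing rules are invented for illustration.

def summarize_bot(task: str) -> str:
    return f"summary of: {task}"

def schedule_bot(task: str) -> str:
    return f"calendar entry for: {task}"

SPECIALISTS = {"summarize": summarize_bot, "schedule": schedule_bot}

def agent(instruction: str) -> list[str]:
    """Decompose one instruction and dispatch each piece to a specialist."""
    results = []
    for subtask in instruction.split(" and "):
        for keyword, bot in SPECIALISTS.items():
            if keyword in subtask.lower():
                results.append(bot(subtask.strip()))
    return results

print(agent("Summarize the Q3 report and schedule a review with the team"))
```

The point of the pattern is that the user issues one instruction; the agent, not the user, decides which models to invoke and in what order.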

Intel Senior Vice President and General Manager of the Network and Edge Group Sachin Katti referred to AI agents as well, suggesting at a prebriefing ahead of the Intel Vision conference (held April 9–11) that AI agents delegating work to one another could do the tasks of entire departments.

Retrieval-augmented generation dominates enterprise AI

Retrieval-augmented generation lets an LLM check its answers against an external source before responding. For example, the AI may verify its answer against a technical manual and give the user footnotes that link directly to that manual. RAG is intended to increase accuracy and reduce hallucinations.
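In code, the RAG pattern boils down to: retrieve the most relevant snippet, then ground the prompt in it and cite the source. The manual snippets, section IDs and keyword-overlap retriever below are all placeholders (production systems use embedding-based search and a real LLM call, which is stubbed out here).

```python
import re

# Hypothetical technical-manual snippets keyed by section ID.
MANUAL = {
    "valve-3.1": "Close the inlet valve before servicing the pump.",
    "filter-3.2": "Replace the filter every 500 operating hours.",
}

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, corpus: dict[str, str], k: int = 1) -> list[tuple[str, str]]:
    """Rank snippets by naive keyword overlap with the question."""
    q = tokens(question)
    ranked = sorted(corpus.items(), key=lambda kv: len(q & tokens(kv[1])), reverse=True)
    return ranked[:k]

def answer(question: str) -> str:
    sources = retrieve(question, MANUAL)
    context = "\n".join(text for _, text in sources)
    # A real LLM call would consume this grounded prompt; here we just
    # return it along with footnote-style citations back to the manual.
    citations = ", ".join(section for section, _ in sources)
    return f"Answer using only this context:\n{context}\nQ: {question}\n[Sources: {citations}]"

print(answer("How often should the filter be replaced?"))
```

The citations are what distinguish RAG in practice: the user can follow the footnote back to section 3.2 rather than trusting the model’s recall.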

RAG gives organizations a way to improve the accuracy of AI models without causing the bill to skyrocket, and it tends to produce more accurate results than the other common ways to add enterprise data to LLMs, prompt engineering and fine-tuning. It is a hot topic in 2024 and is likely to remain one.

Organizations express quiet concerns about sustainability

AI is used to create climate and weather models that predict disastrous events. At the same time, generative AI is energy- and resource-heavy compared to conventional computing.

What does this mean for AI trends? Optimistically, awareness of these energy-hungry processes will push companies to build more efficient hardware to run them or to right-size usage. Less optimistically, generative AI workloads may continue to draw massive amounts of electricity and water. Either way, generative AI may become part of national discussions about energy use and grid resiliency. AI regulation today mostly focuses on use cases, but in the future its energy use may fall under specific regulations as well.

Tech giants address sustainability in their own ways: Google purchases solar and wind energy in certain regions, while NVIDIA touts saving energy in data centers, while still running AI, by using fewer server racks with more powerful GPUs.

The energy use of AI data centers and chips

The 100,000 AI servers NVIDIA is expected to ship to customers this year could consume 5.7 to 8.9 TWh of electricity a year, a fraction of the electricity used in data centers today. This is according to a paper by PhD candidate Alex de Vries published in October 2023. But if NVIDIA alone adds 1.5 million AI servers to the grid by 2027, as the paper speculates, those servers would use 85.4 to 134.0 TWh per year, a much more serious impact.
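Both ranges fall out of back-of-the-envelope arithmetic. The per-server draw of 6.5 to 10.2 kW assumed below is our inference from the reported totals, not a figure quoted above:

```python
# Back-of-the-envelope check of the fleet-level figures, assuming each AI
# server draws a constant 6.5-10.2 kW around the clock (an assumption
# chosen to reproduce the reported totals, not a quoted hardware spec).
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_twh(servers: int, kw_per_server: float) -> float:
    """Annual electricity use of a server fleet, in terawatt-hours."""
    kwh = servers * kw_per_server * HOURS_PER_YEAR
    return kwh / 1e9  # 1 TWh = 1 billion kWh

# This year's expected shipments
print(f"{annual_twh(100_000, 6.5):.1f}-{annual_twh(100_000, 10.2):.1f} TWh")
# Speculative 2027 fleet
print(f"{annual_twh(1_500_000, 6.5):.1f}-{annual_twh(1_500_000, 10.2):.1f} TWh")
```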

Another study found that generating 1,000 images with Stable Diffusion XL emits about as much carbon dioxide as driving 4.1 miles in an average gas-powered car.

“We find that multi-purpose, generative architectures are orders of magnitude more expensive than task-specific systems for a variety of tasks, even when controlling for the number of model parameters,” wrote the researchers, Alexandra Sasha Luccioni and Yacine Jernite of Hugging Face and Emma Strubell of Carnegie Mellon University.


In the journal Nature, Microsoft AI researcher Kate Crawford noted that training GPT-4 used about 6% of the local district’s water.

The roles of AI specialists shift

Prompt engineering was one of the hottest skill sets in tech in 2023, with people rushing to claim six-figure salaries for instructing ChatGPT and similar products to produce useful responses. The hype has faded somewhat, and, as mentioned above, many enterprises that heavily use generative AI now customize their own models. Prompt engineering may increasingly fold into software engineers’ regular tasks going forward, not as a specialization but simply as one part of how they perform their usual duties.

Use of AI for software engineering

“The usage of AI within the software engineering domain is one of the fastest growing use cases we see today,” said Chandrasekaran. “I believe prompt engineering will be an important skill across the organization in the sense that any person interacting with AI systems — which is going to be a lot of us in the future — have to know how to guide and steer these models. But of course people in software engineering need to really understand prompt engineering at scale and some of the advanced techniques of prompt engineering.”

How AI roles are allocated will depend largely on the individual organization. Whether most people doing prompt engineering will hold prompt engineering as their job title remains to be seen.

Executive titles related to AI

A January 2024 survey of data and technology executives by MIT’s Sloan Management Review found that organizations were sometimes cutting back on chief AI officers. There has been some “confusion about the responsibilities” of hyper-specialized leaders like AI or data officers, and 2024 is likely to normalize around “overarching tech leaders” who create value from data and report to the CEO, regardless of where that data comes from.

SEE: What a head of AI does and why organizations should have one going forward. (TechRepublic)

On the other hand, Chandrasekaran said chief data and analytics officers and chief AI officers are “not prevalent” but have grown in number. Whether the two will remain roles separate from the CIO or CTO is difficult to predict; it may depend on which core competencies organizations prioritize and whether CIOs find themselves juggling too many other responsibilities at the same time.

“We are definitely seeing these roles (AI officer and data and analytics officer) show up more and more in our conversations with customers,” said Chandrasekaran.

On March 28, 2024, the U.S. Office of Management and Budget released guidance for the use of AI within federal agencies, which included a mandate for all such agencies to designate a Chief AI Officer.

AI art and glazing against AI art both become more common

As art software and stock photo platforms embrace the gold rush of easily generated images, artists and regulators are looking for ways to identify AI content to prevent misinformation and theft.

AI art is becoming more common

Adobe Stock now offers tools to create AI art and marks AI art as such in its catalog of stock images. On March 18, 2024, Shutterstock and NVIDIA announced a 3D image generation tool in early access.

OpenAI recently promoted filmmakers using the photorealistic Sora AI. The demos were criticized by artist advocates, including Fairly Trained AI CEO Ed Newton-Rex, formerly of Stability AI, who called them “Artistwashing: when you solicit positive comments about your generative AI model from a handful of creators, while training on people’s work without permission/payment.”

Two possible responses to AI artwork are likely to develop further over 2024: watermarking and glazing.

Watermarking AI art

The leading watermarking standard comes from the Coalition for Content Provenance and Authenticity, which OpenAI (Figure A) and Meta have worked with to tag images generated by their AI. However, the watermarks, which appear either visually or in metadata, are easy to remove. Some say watermarks won’t go far enough to prevent misinformation, particularly around the 2024 U.S. elections.

Figure A

Metadata on an image generated by DALL-E shows the image’s provenance.

SEE: The U.S. federal government and leading AI companies agreed to a list of voluntary commitments, including watermarking, last year. (TechRepublic)
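The fragility of metadata-based watermarks is easy to demonstrate: provenance data such as a C2PA manifest commonly travels in an XMP packet inside the image file, and anything that re-encodes the image simply drops those bytes. Below is a minimal sketch of checking for that packet; a real verifier would use a C2PA library and validate cryptographic signatures, which this does not.

```python
# Naive check for an embedded XMP metadata packet, one common carrier for
# provenance data. Detecting the marker bytes is trivial, and so is losing
# them: re-encoding or screenshotting the image strips the metadata.

XMP_MARKER = b"<x:xmpmeta"

def has_xmp_packet(data: bytes) -> bool:
    """True if the raw file bytes contain an XMP packet opening tag."""
    return XMP_MARKER in data

# Example usage against a local file:
# with open("generated.png", "rb") as f:
#     print(has_xmp_packet(f.read()))
```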

Poisoning original art against AI

Artists looking to prevent AI models from training on original art posted online can use Glaze or Nightshade, two data poisoning tools made by researchers at the University of Chicago. Data poisoning adjusts artwork just enough to render it unreadable to an AI model. More tools like these are likely to appear as both AI image generation and the protection of artists’ original work remain in focus through 2024.

Is AI overhyped?

AI was so popular in 2023 that it was inevitably overhyped going into 2024, but that doesn’t mean it isn’t being put to practical use. In late 2023, Gartner declared generative AI had reached “the peak of inflated expectations,” a known pinnacle of hype before emerging technologies become practical and normalized. The peak is followed by the “trough of disillusionment” before a rise back up the “slope of enlightenment” and, eventually, to productivity. Arguably, generative AI sitting at the peak or in the trough means it is overhyped for now. However, many other technologies have passed through the hype cycle before, and many eventually reach the “plateau of productivity” after the initial boom.
