Generative AI

What it is and why it matters

Generative AI consumes existing data, learns from it, then generates data with similar characteristics. For example, it can generate text, images, audio, video and computer code.

The evolution of generative AI

Traditional AI and machine learning systems recognize patterns in data to make predictions. Generative AI goes beyond predicting – it generates new data as its primary output. Imagine receiving the full text of a speech just seconds after giving a chatbot (like ChatGPT) a few words describing your idea, generating music, art or images from text-based descriptions, or developing a business strategy through conversational, back-and-forth “prompting” with a generative AI tool.

Where did it all start?

Contrary to popular opinion, generative AI is not new – it’s built from technologies we’ve used for decades, including AI, machine learning and statistical methods. Three core generative AI technologies are digital twins, large language models and synthetic data generation.

While the origins of generative AI could be traced farther back, we’ll start with 1966 and a chatbot named ELIZA.

Joseph Weizenbaum built ELIZA to imitate Rogerian psychotherapists, who mirror back what a patient says – a feat ELIZA accomplished through simple pattern matching. It was one of the first programs to attempt the Turing Test, an imitation game that tests a machine’s ability to exhibit intelligent, humanlike behavior.

As methods of analyzing unstructured text data evolved, the 1970s through the 1990s saw growth in semantic networks, ontologies, recurrent neural networks and more. From 2000 through 2015, language modeling and word embeddings improved, and Google Translate emerged.

In 2014, Ian Goodfellow and colleagues developed the generative adversarial network (GAN), setting up two neural networks to compete (i.e., train) against each other. One network generated data while the other tried to determine if the data was real or fake. Transformer models were introduced in 2017. They included a self-attention mechanism that allowed them to weigh the importance of different parts of the input when making predictions. Architectures like BERT and ELMo became popular, too.
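
To make the transformer idea concrete, here is a minimal pure-Python sketch of the scaled dot-product self-attention described above. The function names and toy matrices are our own inventions; real transformers add learned projection matrices, multiple attention heads and far larger dimensions.

```python
import math

def softmax(scores):
    """Turn raw scores into weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted
    average of the value rows V, weighted by softmax(q . k / sqrt(d))."""
    d = len(K[0])
    outputs, weights = [], []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)            # how much this query "attends" to each key
        weights.append(w)
        outputs.append([sum(wi * row[j] for wi, row in zip(w, V))
                        for j in range(len(V[0]))])
    return outputs, weights
```

The attention weights are exactly the "importance of different parts of the input" the paragraph mentions: a query that aligns with a key receives a larger weight for that key's value.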

Generative pre-trained transformer (GPT) models appeared next, with the first GPT model arriving in 2018. This model was trained on large quantities of text data from the internet. With 117 million parameters, it could generate text similar in style and content to the training data. By 2023, GPT-based large language models had evolved to the point where they could perform proficiently on difficult exams, like the bar exam.

The rapid rise of generative AI tech

Generative AI is a disruptive technology whose impact has been compared to discoveries like electricity and the printing press. With the potential to drastically boost productivity, conversational AI models like ChatGPT have rocketed in popularity among business and everyday users – and raised concerns about data privacy, bias in AI, ethics and accuracy. The global market for generative AI is expected to grow to $110.8 billion by 2030.

Policymakers use digital twin technology to determine how new tax measures could affect citizens

Determining “winners” and “losers” of potential tax changes before implementing regulations is crucial for Belgium’s Federal Public Service Finance. When it needs fast, accurate answers, FPS Finance uses Aurora, a digital twin of the calculator that processes the country’s income taxes, to simulate future tax reforms. Better simulations mean better-informed policymakers – and better results.

Generative AI in today’s world

Embracing trustworthy artificial intelligence

Consumers have more trust in organizations that demonstrate responsible and ethical use of AI. Learn why it’s essential to embrace trustworthy AI systems designed for human centricity, inclusivity and accountability.

Benefits and risks of generative AI

Curious about how generative AI works and what you need to consider before using it? Get an introduction to the technology, learn about a framework for adopting generative AI tools, and consider whether and how to adopt the technology.

Exploring the use of AI in education

Students have used generative AI for creating content and graphics, writing code, building mobile apps and solving problems. While generative AI can be fun and useful, we need humans to spot and correct wrong answers or “hallucinations.”

Unreal realities: The state of generative AI

Can the explosion of generated imagery create an unreality that sets humans up for failure? Learn the true meaning of the term “deepfake,” discover how deepfakes can be used for good, and see how emerging techniques can help detect and identify generated media.

Popular AI tools and how they’re used

There are many popular AI tools in the news. But did you know there are more than 1,500 in the market, including generative AI tools?

See which tools are most prevalent today and how they’re applied across industries.

Who's using generative AI?

Generative AI spans a wide range of industries and business functions across the world. As it grows in popularity, the technology has simultaneously triggered excitement and fear among individuals, businesses and government entities. Let’s look at how some industries are using generative AI today.


Banking

Banks and other financial services organizations can use generative AI to improve decisions, mitigate risks and enhance customer satisfaction. When generative AI models are trained to learn patterns and spot anomalies, they can flag suspicious activities in real time. By creating simulated data for stress testing and scenario analysis, generative AI can help banks predict future financial risks and prevent losses. And virtual assistants (like chatbots) can provide humanlike customer service 24/7.


Insurance

Insurers can use synthetic data for pricing, reserving and actuarial modeling. For example, insurance companies can use synthetic data that resembles historical policy and claims information to train and test pricing models – helping them assess how different pricing strategies would perform without using sensitive personal information from customers. Synthetic data can also help in evaluating low-probability events like earthquakes or hurricanes.

Life sciences

There are many promising applications for generative AI in life sciences. In drug discovery, it can speed up the process of identifying new potential drug candidates. In clinical research, generative AI has the potential to extract information from complex data to create synthetic data and digital twins, which are representative of individuals (a way to protect privacy). Other applications include identifying safety signals or finding new uses for existing treatments.


Manufacturing

Manufacturers can use generative AI to help optimize operations, maintenance, supply chains – even energy usage – for lower costs, higher productivity and greater sustainability. A generative AI model will learn from existing performance, maintenance and sensor data, forecasts, external factors and more, then provide recommended strategies for improvement.

Public sector

Natural language processing (NLP) and chatbots can help public sector workers respond faster to citizen needs, such as improving emergency services to those in flood-prone areas or assisting underserved neighborhoods. Generative AI techniques – such as predictive models and simulations – can analyze vast amounts of historical data, public sentiment and other indicators, then generate recommendations to reduce congestion, improve infrastructure planning and fine-tune resource allocation.


Retail

In retail, success requires understanding shopper demand, designing shopping experiences that engage customers, and ensuring reliable and stable supply chain execution. Some retailers, for example, are using generative AI with digital twin technology to give planners a glimpse of potential scenarios – like supply chain disruptions or resource limitations. This is made possible through sophisticated AI simulation and data modeling.

“The results of generative AI, at their core, are a reflection of us, humans. ... Consumers must continue to apply critical thinking whenever interacting with conversational AI and avoid automation bias (the belief that a technical system is more likely to be accurate and true than a human).”

Reggie Townsend, VP of the SAS Data Ethics Practice

Considerations for generative AI models

Models are expensive to run – requiring tremendous amounts of compute power and data. You should carefully evaluate the ROI before implementing a generative AI model. There are ethical considerations, too. Where did the data come from – and who owns it? Is it trustworthy? Do you understand precisely how the model was built?

How generative AI works

Some popular examples of generative AI technologies include DALL-E, an image generation system that creates images from text inputs, ChatGPT (a text generation system), the Google Bard chatbot and Microsoft's AI-powered Bing search engine. Another example is using generative AI to create a digital representation of a system, business process or even a person – like a dynamic representation of someone’s current and future health status.

There are three main types of generative AI technologies: digital twins, large language models and synthetic data generation.

Digital twins

Digital twins are virtual models of real-life objects or systems built from data that is historical, real-world, synthetic or from a system’s feedback loop. They’re built with software, data, and collections of generative and non-generative models that mirror and synchronize with a physical system – such as an entity, process, system or product. Digital twins are used to test, optimize, monitor or predict. For example, a digital twin of a supply chain can help companies predict when shortages may occur.
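
As an illustration only – not any vendor's actual product – a digital twin can be as simple as a model you step forward in time. This toy inventory sketch (all names and numbers are hypothetical) "predicts" the first day a supply chain node runs out of stock:

```python
def simulate_inventory(start, daily_demand, restock_qty, restock_every, days):
    """Toy 'digital twin' of a stock level: step the model forward one
    day at a time and report the first day inventory is exhausted."""
    stock = start
    for day in range(1, days + 1):
        stock -= daily_demand               # demand drains the stock
        if day % restock_every == 0:
            stock += restock_qty            # periodic replenishment
        if stock <= 0:
            return day                      # predicted shortage day
    return None                             # no shortage in the horizon
```

A real digital twin would synchronize continuously with sensor and transaction data, but the principle is the same: run the virtual model ahead of the physical system to see problems before they happen.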

Large language models

A large language model (LLM) is a powerful machine learning model that can process and identify complex relationships in natural language, generate text and hold conversations with users. These natural language processing models rely on techniques like deep learning and neural networks and are trained on massive amounts of text data; the resulting models can have billions of parameters. OpenAI’s ChatGPT is an example of a popular large language model.
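
Real LLMs use deep neural networks with billions of parameters, but the core idea – model which token tends to follow which, then sample from that model to generate text – can be sketched in miniature with a simple bigram model (a toy of our own, not how any production LLM works):

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count which word follows which in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, n, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break                      # no known continuation
        out.append(rng.choice(followers))
    return " ".join(out)
```

An LLM does the same thing at vastly larger scale, using learned neural representations instead of raw word counts, which is what lets it generalize far beyond its training text.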

Synthetic data generation

Synthetic data generation refers to on-demand, self-service or automated data generated by algorithms or rules rather than collected from the real world. Synthetic data is often generated to meet conditions lacking in real data. It reproduces the same statistical properties, probabilities, patterns and characteristics as the real-world data it’s derived from. Many organizations use synthetic data to preserve privacy or to overcome other challenges with collecting and using real-world data, such as cost, time-intensive data preparation processes or bias.
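
A minimal sketch of the idea, assuming the real data is roughly Gaussian: fit the distribution's parameters, then sample fresh records that share them. The function name is our own, and production tools model far richer structure (correlations, categories, rare events) than a single fitted distribution:

```python
import random
import statistics

def generate_synthetic(real_data, n, seed=0):
    """Fit a simple Gaussian to the real data, then sample n new values
    that share its mean and spread -- no real record is copied."""
    rng = random.Random(seed)
    mu = statistics.mean(real_data)
    sigma = statistics.stdev(real_data)
    return [rng.gauss(mu, sigma) for _ in range(n)]
```

Because the synthetic values are drawn from the fitted distribution rather than copied, they preserve the statistical shape of the original data without exposing any individual's actual record.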

Many other technologies enable and support generative AI:

An algorithm is a list of step-by-step instructions designed to accomplish a specific task or solve a problem. Many computer programs are a sequence of algorithms written in a way the computer can understand. As algorithms begin to supplement or replace human decisions, we must explore their fairness and demand transparency into how they’re developed.

Artificial intelligence makes it possible for machines to learn from experience, adjust to new inputs and perform humanlike tasks. AI often relies heavily on deep learning and NLP. Through such technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns.

Deep learning is a subset of machine learning that trains a computer to perform humanlike tasks, such as recognizing speech, identifying images and making predictions. It improves the ability to classify, recognize, detect and describe using data. Deep learning models like GANs and variational autoencoders (VAEs) are trained on massive data sets and can generate high-quality data. Newer techniques like StyleGANs and transformer models can create realistic videos, images, text and speech.

Machine learning is a method of data analysis that automates analytical model building. It’s a branch of artificial intelligence that trains a machine how to learn. Machine learning is based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention.

Natural language processing is a branch of artificial intelligence that helps computers understand, interpret and manipulate human language. NLP draws from many disciplines, including computer science and computational linguistics, to fill the gap between human communication and computer understanding.

Neural networks are computing systems with interconnected nodes that work much like neurons in the human brain. Neural networks use algorithms to recognize hidden patterns and correlations in raw data, cluster and classify it, and continuously learn and improve over time.

Reinforcement learning is when an algorithm discovers through trial and error which actions produce the greatest rewards. A machine learning method, reinforcement learning relies on a reward signal as its feedback mechanism while it gradually learns the best (or most rewarding) policy or goal. It’s often used for robotics, gaming and navigation.
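
A classic minimal example of this trial-and-error loop is the multi-armed bandit with an epsilon-greedy policy. This is a simplified sketch of our own (a bandit has no states, so it is only the reward-feedback core of reinforcement learning, not a full framework):

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, eps=0.1, seed=0):
    """Learn by trial and error which 'arm' pays best: mostly exploit
    the current best estimate, but explore a random arm eps of the time."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n            # how often each arm was tried
    values = [0.0] * n          # running estimate of each arm's reward
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n)                       # explore
        else:
            arm = max(range(n), key=lambda i: values[i]) # exploit
        reward = rng.gauss(true_means[arm], 1.0)         # noisy feedback
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values, counts
```

Over time the reward signal steers the policy toward the genuinely best action, even though no one ever labels an action "correct" – exactly the feedback mechanism described above.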

5 steps for fine-tuning a model

Generative AI relies on many different AI algorithms and technologies to generate data with probability distributions and characteristics similar to the data it learns from. Rather than building from scratch, you can follow these five steps to fine-tune a pre-trained foundational large language model.

1. Define the task.

Choose a suitable pre-trained large language model and clearly define the task for which it’s being fine-tuned. This could be text classification, named entity recognition, text generation, etc.

2. Prepare the data.

Gather and preprocess your task-specific data – steps like labeling, formatting and tokenization. Create training and validation (and possibly test) data sets.

3. Fine-tune.

Train the model on your task-specific data, using the training data set to update the model's weights. Monitor the model's performance on the validation set to prevent overfitting.

4. Evaluate and test.

After training, evaluate your fine-tuned model on the validation set, making necessary adjustments based on results. When satisfied, test the model on the test set to get an unbiased estimate of performance.

5. Deploy.

When you're confident in the model's performance, deploy it for its intended use. This could involve integrating the model into an application, website or another platform.
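
The split/train/monitor/keep-best pattern at the heart of steps 2-4 can be sketched on a toy stand-in model. This is not how a real LLM is fine-tuned (that involves a deep learning framework, tokenizers and GPU compute); it only illustrates the workflow with a tiny logistic classifier and made-up names:

```python
import math
import random

def fine_tune_toy(data, epochs=200, lr=0.5, seed=0):
    """Sketch of steps 2-4: split the task data, train on the training
    set, monitor validation loss, and keep the best checkpoint."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    split = int(0.8 * len(data))              # step 2: train/validation split
    train, val = data[:split], data[split:]
    w, b = 0.0, 0.0                           # toy "model": one weight, one bias
    best = (float("inf"), w, b)
    for _ in range(epochs):                   # step 3: train on task data
        for x, y in train:
            p = 1 / (1 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x             # gradient step on log loss
            b -= lr * (p - y)
        val_loss = sum(                        # step 4: evaluate on validation set
            (1 / (1 + math.exp(-(w * x + b))) - y) ** 2 for x, y in val
        ) / len(val)
        if val_loss < best[0]:                # keep the best-performing checkpoint
            best = (val_loss, w, b)
    return best  # (validation loss, weight, bias)
```

Keeping the checkpoint with the lowest validation loss, rather than the final epoch, is the simple guard against overfitting that step 3 calls for.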

What is synthetic data?

Data is essential for building models, but high-quality data can be hard to find, biased or expensive. One way to solve those issues is by using synthetic data, which is created artificially (often with algorithms). If we use real-world data sets to generate additional, synthetic data – with appropriate properties for building good machine learning models – we can train models for virtually any purpose, like researching a rare disease.

Next steps

See how AI solutions can augment human creativity and endeavors.

Connect with SAS and see what we can do for you.