
Generative AI

What it is and why it matters

Generative AI (GenAI) consumes existing data, learns from it, then generates data with similar characteristics. For example, it can generate text, images, videos, audio and computer code.

GenAI is changing the world

Traditional AI and machine learning systems recognize patterns in data to make predictions. But generative artificial intelligence goes beyond predicting – it generates new data as its primary output. Imagine receiving the full text of a speech just seconds after giving a chatbot like ChatGPT a few words describing your idea. Generating music, art or images from text-based descriptions. Or developing a business strategy through conversational, back-and-forth prompting with a generative AI tool. Bloomberg Intelligence found that GenAI could become a $1.3 trillion market by 2032.

Real-world applications of generative AI

Generative AI is expected to reshape our future in predictable and unimaginable ways. In this explainer video, you’ll hear real-world examples of generative AI spanning industries and business cases using large language models (LLMs), synthetic data generation and digital twins. You’ll also learn about some important considerations and risks of adopting generative AI technology, including bias, hallucinations, data privacy and security.


    The evolution of generative AI

    While it has taken the world by storm, generative AI is not new – it’s built from technologies we’ve used for decades, including AI, machine learning and statistical methods. The origins of generative AI can be traced further back, but we’ll start with 1966 and a chatbot named ELIZA.

    Joseph Weizenbaum, who built ELIZA, designed it to imitate Rogerian psychotherapists who mirror what the patient says. ELIZA used pattern matching to accomplish this feat. ELIZA was one of the first programs to attempt the Turing Test – an imitation game that tests a machine’s ability to exhibit intelligent behavior like a human. 
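ELIZA’s mirroring can be sketched in a few lines of pattern matching. The rules below are illustrative stand-ins, not Weizenbaum’s originals:

```python
import re

# Each rule pairs a pattern with a template that reflects the match back
# as a question - the core of ELIZA's Rogerian mirroring trick.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a mirrored response using simple pattern matching."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."
```

There is no understanding here at all – only string manipulation – which is exactly why ELIZA was such a striking early entry in the Turing Test conversation.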

    As methods of analyzing unstructured text data evolved, the 1970s through the 1990s saw growth in semantic networks, ontologies, recurrent neural networks and more. From 2000 through 2015, language modeling and word embeddings improved, and Google Translate emerged.

    In 2014, Ian Goodfellow and colleagues developed the generative adversarial network (GAN), setting up two neural networks to compete (i.e., train) against each other. One network generated data while the other tried to determine if the data was real or fake. Transformer models were introduced in 2017. They included a self-attention mechanism that allowed them to weigh the importance of different parts of the input when making predictions. Architectures like BERT and ELMo became popular, too.
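The self-attention mechanism can be illustrated with a deliberately tiny sketch. Here the token vectors serve directly as queries, keys and values – real transformers learn separate projection matrices for each role, so treat this as a conceptual toy:

```python
import math

def softmax(scores):
    """Exponentiate and normalize so the attention weights sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(tokens):
    """Each output vector is a weighted average of all input vectors,
    weighted by how strongly the query token 'attends' to each one."""
    dim = len(tokens[0])
    outputs = []
    for query in tokens:
        # Score the query against every token (scaled dot product).
        scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(dim)
                  for key in tokens]
        weights = softmax(scores)
        outputs.append([sum(w * tok[d] for w, tok in zip(weights, tokens))
                        for d in range(dim)])
    return outputs
```

The key idea survives even in this toy: every position in the input can influence every other position, with learned (here, computed) weights deciding how much.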

    Generative pre-trained transformer (GPT) models appeared next, with the first GPT model arriving in 2018. This generative model was trained on large quantities of text data from the internet. With 117 million parameters, it began generating text similar in style and content to the training data. By 2023, large language GPT models had evolved to a point where they could perform proficiently on difficult exams, like the bar exam.

    Generative AI in today’s world


    The race to success with generative AI

    Organizations around the world are racing to implement GenAI technology in pursuit of strong business results. See how different regions and industries stack up in this GenAI report based on a global survey of 1,600 organizations.


    Benefits and risks of using generative AI

    Ready to learn more about how generative AI works and what to consider before using it? Get tips for adopting generative AI tools and learn about AI development, governance and deployment – including ethical considerations.


    Tips for dealing with AI-generated content

    Can average content consumers distinguish real from fake? To mitigate potential perils, ethical content creators should assume responsibility for labeling AI-generated content – and consumers should stay alert to what's happening in this space.


    Unreal realities: The state of generative AI

    Can the explosion of generated imagery create an unreality that sets humans up for failure? Learn the true meaning of the term “deepfake,” discover how deepfakes can be used for good, and see how emerging techniques can help detect and identify generated media.

    Policymakers use digital twin technology to determine how new tax measures could affect citizens

    Determining “winners” and “losers” of potential tax changes before implementing regulations is crucial for Belgium’s Federal Public Service Finance. When it needs fast, accurate answers, FPS Finance uses Aurora, a digital twin of the calculator that processes the country’s income taxes, to simulate future tax reforms. Better simulations mean better-informed policymakers – and better results.

    Who's using generative AI?

    Generative AI spans a wide range of industries and business functions across the world. As it grows in popularity and sparks the development of a range of specialized AI assistants, the technology has simultaneously triggered excitement and fear among individuals, businesses and government entities. See how some industries are using GenAI today.

    Banking

    Banks and other financial services organizations can use generative AI to improve decisions, mitigate risks and enhance customer satisfaction. When generative AI models are trained to learn patterns and spot anomalies, they can flag suspicious activities in real time. By creating simulated data for stress testing and scenario analysis, generative AI can help banks predict future financial risks and prevent losses. And virtual assistants (like chatbots) can provide humanlike customer service 24/7.

    Insurance

    Insurers can use synthetic data for pricing, reserving and actuarial modeling. For example, insurance companies can use synthetic data that resembles historical policy and claims information to train and test pricing models – helping them assess how different pricing strategies would perform without using sensitive personal information from customers. Synthetic data can also help in evaluating low-probability events like earthquakes or hurricanes.
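As a minimal sketch of the idea, the snippet below fits a lognormal shape to a handful of real claim amounts and then samples artificial ones. The function name and the lognormal assumption are illustrative – actuarial models in practice are far richer:

```python
import math
import random
import statistics

def synthesize_claims(real_claims, n, seed=42):
    """Fit a lognormal distribution to real claim amounts,
    then sample n new, artificial claims from it (toy sketch)."""
    logs = [math.log(c) for c in real_claims]
    mu, sigma = statistics.mean(logs), statistics.stdev(logs)
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]
```

The synthetic claims preserve the broad statistical shape of the originals without exposing any customer’s actual record – which is the privacy argument for synthetic data.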

    Life sciences

    There are many promising applications for generative AI in life sciences. In drug discovery, it can speed up the process of identifying new potential drug candidates. In clinical research, generative AI has the potential to extract information from complex data to create synthetic data and digital twins, which are representative of individuals (a way to protect privacy). Other applications include identifying safety signals or finding new uses for existing treatments.

    Manufacturing

    Manufacturers can use generative AI to help optimize operations, maintenance, supply chains – even energy usage – for lower costs, higher productivity and greater sustainability. A generative AI model will learn from existing performance, maintenance and sensor data, forecasts, external factors and more, then provide recommended strategies for improvement.

    Public sector

    Natural language processing (NLP) and chatbots can help public sector workers respond faster to citizen needs, such as improving emergency services to those in flood-prone areas or assisting underserved neighborhoods. Generative AI techniques – such as predictive models and simulations – can analyze vast amounts of historical data, public sentiment and other indicators, then generate recommendations to reduce congestion, improve infrastructure planning and fine-tune resource allocation.

    Retail

    In retail, success requires understanding shopper demand, designing shopping experiences that engage customers, and ensuring reliable and stable supply chain execution. Some retailers, for example, are using generative AI with digital twin technology to give planners a glimpse of potential scenarios – like supply chain disruptions or resource limitations. This is made possible through sophisticated AI simulation and data modeling.

    “The results of generative AI, at their core, are a reflection of us, humans. ... Consumers must continue to apply critical thinking whenever interacting with conversational AI and avoid automation bias (the belief that a technical system is more likely to be accurate and true than a human).”

    – Reggie Townsend, VP of the SAS Data Ethics Practice

    Ethical considerations for generative AI use in business

    The impact of generative AI – a disruptive technology – has been compared to that of discoveries like electricity and the printing press. With the potential to drastically boost productivity, conversational AI models have rocketed in popularity while raising concerns about AI ethics, data privacy, accuracy, hallucinations and bias. Because its evolving capabilities mimic human intelligence, GenAI has created waves of AI anxiety and sparked debates about how it should be used and governed.

    Learn why it’s essential to embrace trustworthy AI systems designed for human centricity, inclusivity and accountability.


    How generative AI works

    Some popular examples of generative AI technologies include DALL-E, an image generation system that creates images from text inputs, ChatGPT (a text generation system), the Google Bard chatbot and Microsoft's AI-powered Bing search engine. Another example is using generative AI to create a digital representation of a system, business process or even a person – like a dynamic representation of someone’s current and future health status.

    There are three main types of generative AI technologies: digital twins, large language models and synthetic data generation.


    Many other technologies enable and support generative AI:

    An algorithm is a list of step-by-step instructions designed to accomplish a specific task or solve a problem. Many computer programs are a sequence of algorithms written in a way the computer can understand. As algorithms begin to supplement or replace human decisions, we must explore their fairness and demand transparency into how they’re developed.
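For example, Euclid’s algorithm for finding the greatest common divisor is a classic short list of step-by-step instructions:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a
```

Every algorithm, however sophisticated, reduces to this kind of unambiguous recipe a computer can follow.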

    Artificial intelligence makes it possible for machines to learn from experience, adjust to new inputs and perform humanlike tasks. AI often relies heavily on deep learning and NLP. Through such technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns.

    Data management is essential for ensuring trusted, ethical and bias-free outputs. It’s particularly critical for AI, machine learning tasks and LLMs that are trained on huge data sets then used to understand and generate content.

    Deep learning is a subset of machine learning that trains a computer to perform humanlike tasks, such as recognizing speech, identifying images and making predictions. It improves the ability to classify, recognize, detect and describe using data. Deep learning models like GANs and variational autoencoders (VAEs) are trained on massive data sets and can generate high-quality data. Newer techniques like StyleGANs and transformer models are good at creating realistic videos, images, text and speech.

    Machine learning is a method of data analysis that automates analytical model building. It’s a branch of artificial intelligence that trains a machine how to learn. Machine learning is based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention.

    Natural language processing is a branch of artificial intelligence that helps computers understand, interpret and manipulate human language. NLP draws from many disciplines, including computer science and computational linguistics, to fill the gap between human communication and computer understanding.

    Neural networks are computing systems with interconnected nodes that work much like neurons in the human brain. Neural networks use algorithms to recognize hidden patterns and correlations in raw data, cluster and classify it, and continuously learn and improve over time.
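A single node of such a network is simple arithmetic – a weighted sum passed through an activation function. This toy sketch shows one neuron; real networks connect thousands or billions of them in layers:

```python
import math

def sigmoid(x: float) -> float:
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One node: weighted sum of inputs plus a bias, passed
    through an activation function."""
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)
```

Learning, in this picture, is just the process of adjusting the weights and bias so the outputs match the training data.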

    Reinforcement learning is when an algorithm discovers through trial and error which actions produce the greatest rewards. A machine learning model, reinforcement learning relies on a reward signal for its feedback mechanism as it gradually learns the best (or most rewarding) policy or goal. It’s often used for robotics, gaming and navigation.
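The trial-and-error idea can be sketched with an epsilon-greedy strategy on a two-armed bandit. The reward probabilities here are made up for illustration:

```python
import random

def run_bandit(reward_probs, steps=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy: explore a random arm with probability epsilon,
    otherwise exploit the arm with the best estimated reward so far."""
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)     # pulls per arm
    values = [0.0] * len(reward_probs)   # estimated reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(reward_probs))   # explore
        else:
            arm = max(range(len(reward_probs)), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, counts
```

Over time the agent’s reward estimates converge toward the true probabilities, and it spends most of its pulls on the better arm – the reward signal alone steers its behavior.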

    Implementing generative AI models

    Models are expensive to run – requiring tremendous amounts of computing power and data. You should carefully evaluate the ROI before implementing a generative AI model and consider the distinctions between different types of models, such as foundation models and domain models. There are ethical considerations, too. Where did the data come from – and who owns it? Is it trustworthy? Do you understand precisely how the model was built?


    5 steps for fine-tuning a model

    Generative AI relies on many different AI algorithms and technologies to generate data that has similar probabilistic distributions and characteristics to the data from which it learns. Rather than building from scratch, you can follow these five steps to fine-tune a pre-trained foundational large language model.

    1. Define the task.

    Choose a suitable pre-trained large language model and clearly define the task for which it’s being fine-tuned. This could be text classification, named entity recognition, text generation, etc.

    2. Prepare the data.

    Gather and preprocess your task-specific data – labeling, formatting and tokenizing it as needed. Create training and validation (and possibly test) data sets.

    3. Fine-tune.

    Train the modified model on your task-specific data, using the training data set to update the model’s weights. Monitor the model’s performance on the validation set to prevent overfitting.

    4. Evaluate and test.

    After training, evaluate your fine-tuned model on the validation set, making necessary adjustments based on results. When satisfied, test the model on the test set to get an unbiased estimate of performance.

    5. Deploy.

    When you're confident in the model's performance, deploy it for its intended use. This could involve integrating the model into an AI application, website or another platform.
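The five steps above can be sketched end to end with a deliberately tiny stand-in model: a two-weight logistic classifier plays the role of the pre-trained network. A real workflow would load an actual LLM checkpoint and use a fine-tuning library; everything here (the data, the weights, the function names) is invented for illustration:

```python
import math

# Step 1 (define the task): a toy binary classifier. The fixed values below
# stand in for "pretrained" weights loaded from a checkpoint.
PRETRAINED = {"w": [0.1, -0.1], "b": 0.0}

def predict(model, x):
    """Logistic output in (0, 1) for a two-feature input."""
    z = model["w"][0] * x[0] + model["w"][1] * x[1] + model["b"]
    return 1.0 / (1.0 + math.exp(-z))

def fine_tune(train, val, epochs=100, lr=0.5):
    """Steps 3-4: gradient updates on training data, then a
    validation-set accuracy check."""
    model = {"w": list(PRETRAINED["w"]), "b": PRETRAINED["b"]}
    for _ in range(epochs):
        for x, y in train:
            err = predict(model, x) - y          # gradient of logistic loss
            model["w"][0] -= lr * err * x[0]
            model["w"][1] -= lr * err * x[1]
            model["b"] -= lr * err
    accuracy = sum((predict(model, x) >= 0.5) == bool(y)
                   for x, y in val) / len(val)
    return model, accuracy

# Step 2 (prepare the data): tiny task-specific train/validation splits.
train = [([0.0, 1.0], 0), ([1.0, 0.0], 1), ([0.2, 0.9], 0), ([0.9, 0.1], 1)]
val = [([0.1, 0.8], 0), ([0.8, 0.2], 1)]

# Steps 3-4: fine-tune and evaluate. Step 5 (deploy) would wrap `predict`
# in an application or service.
model, acc = fine_tune(train, val)
```

The structure carries over directly to real fine-tuning: start from pretrained weights, nudge them with task-specific gradients, monitor a held-out set, and only then deploy.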

    Discover how organizations are using GenAI tools

    Many organizations are adopting generative AI, with banking and insurance industries leading the way in adoption for daily business operations. This infographic highlights some early successes – such as managing risk, improving employee satisfaction, saving on operational costs and achieving higher customer retention.


    Next steps

    See how GenAI solutions can augment human creativity and endeavors.

    A data and AI platform

    With SAS® Viya®, there’s no such thing as too much information. Learn about the quickest way to get from a billion points of data to a point of view.