Generative AI
What it is and why it matters
Generative AI (GenAI) consumes existing data, learns from it, then generates data with similar characteristics. For example, it can generate text, images, videos, audio and computer code.
GenAI is changing the world
Traditional AI and machine learning systems recognize patterns in data to make predictions. But generative artificial intelligence goes beyond predicting – it generates new data as its primary output. Imagine receiving the full text of a speech just seconds after giving a chatbot like ChatGPT a few words describing your idea. Imagine generating music, art or images from text-based descriptions, or developing a business strategy through conversational, back-and-forth prompting with a generative AI tool. Bloomberg Intelligence found that GenAI could become a $1.3 trillion market by 2032.
Real-world applications of generative AI
Generative AI is expected to reshape our future in predictable and unimaginable ways. In this explainer video, you’ll hear real-world examples of generative AI spanning industries and business cases using large language models (LLMs), synthetic data generation and digital twins. You’ll also learn about some important considerations and risks of adopting generative AI technology, including bias, hallucinations, data privacy and security.
The evolution of generative AI
While it has taken the world by storm, generative AI is not new – it’s built from technologies we’ve used for decades, including AI, machine learning and statistical methods. The origins of generative AI can be traced further back, but we’ll start with 1966 and a chatbot named ELIZA.
Joseph Weizenbaum, who built ELIZA, designed it to imitate Rogerian psychotherapists who mirror what the patient says. ELIZA used pattern matching to accomplish this feat. ELIZA was one of the first programs to attempt the Turing Test – an imitation game that tests a machine’s ability to exhibit intelligent behavior like a human.
As methods of analyzing unstructured text data evolved, the 1970s through the 1990s saw growth in semantic networks, ontologies, recurrent neural networks and more. From 2000 through 2015, language modeling and word embeddings improved, and Google Translate emerged.
In 2014, Ian Goodfellow and colleagues developed the generative adversarial network (GAN), setting up two neural networks to compete (i.e., train) against each other. One network generated data while the other tried to determine if the data was real or fake. Transformer models were introduced in 2017. They included a self-attention mechanism that allowed them to weigh the importance of different parts of the input when making predictions. Architectures like BERT and ELMo became popular, too.
Generative pre-trained transformer (GPT) models appeared next, with the first GPT model arriving in 2018. This generative model was trained on large quantities of text data from the internet. With 117 million parameters, it began generating text similar in style and content to the training data. By 2023, large language GPT models had evolved to a point where they could perform proficiently on difficult exams, like the bar exam.
Generative AI in today’s world
Who's using generative AI?
Generative AI spans a wide range of industries and business functions across the world. As it grows in popularity and sparks the development of a range of specialized AI assistants, the technology has simultaneously triggered excitement and fear among individuals, businesses and government entities. See how some industries are using GenAI today.
The results of generative AI, at their core, are a reflection of us, humans. ... Consumers must continue to apply critical thinking whenever interacting with conversational AI and avoid automation bias (the belief that a technical system is more likely to be accurate and true than a human).

Reggie Townsend, VP of the SAS Data Ethics Practice
Ethical considerations for generative AI use in business
A disruptive technology, generative AI has been compared to discoveries like electricity and the printing press. With the potential to drastically boost productivity, conversational AI models have rocketed in popularity while raising concerns about AI ethics, data privacy, accuracy, hallucinations and bias. Due to its evolving capabilities that mimic human intelligence, GenAI has created waves of AI anxiety and sparked debates about how it should be used and governed.
Learn why it’s essential to embrace trustworthy AI systems designed for human centricity, inclusivity and accountability.
How generative AI works
Some popular examples of generative AI technologies include DALL-E, an image generation system that creates images from text inputs, ChatGPT (a text generation system), the Google Bard chatbot and Microsoft's AI-powered Bing search engine. Another example is using generative AI to create a digital representation of a system, business process or even a person – like a dynamic representation of someone’s current and future health status.
There are three main types of generative AI technologies: digital twins, large language models and synthetic data generation.
Many other technologies enable and support generative AI:
An algorithm is a list of step-by-step instructions designed to accomplish a specific task or solve a problem. Many computer programs are a sequence of algorithms written in a way the computer can understand. As algorithms begin to supplement or replace human decisions, we must explore their fairness and demand transparency into how they’re developed.
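To make "a list of step-by-step instructions" concrete, here is a classic example: binary search, which finds a value in a sorted list by repeatedly halving the search range. The function name and data are invented for illustration.

```python
def binary_search(items, target):
    """Step-by-step instructions for finding a target in a sorted list."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2          # 1. Look at the middle element
        if items[mid] == target:
            return mid                   # 2. Found it: report the position
        elif items[mid] < target:
            low = mid + 1                # 3. Too small: search the right half
        else:
            high = mid - 1               # 4. Too large: search the left half
    return -1                            # 5. Range exhausted: target is absent

print(binary_search([2, 5, 8, 12, 16], 12))  # → 3
```

Every run follows the same transparent, inspectable steps – which is exactly why we can (and should) audit algorithms that influence human decisions.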
Artificial intelligence makes it possible for machines to learn from experience, adjust to new inputs and perform humanlike tasks. AI often relies heavily on deep learning and NLP. Through such technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns.
Data management is essential for ensuring trusted, ethical and bias-free outputs. It’s particularly critical for AI, machine learning tasks and LLMs that are trained on huge data sets and then used to understand and generate content.
Deep learning is a subset of machine learning that trains a computer to perform humanlike tasks, such as recognizing speech, identifying images and making predictions. It improves the ability to classify, recognize, detect and describe using data. Deep learning models like GANs and variational autoencoders (VAEs) are trained on massive data sets and can generate high-quality data. Newer techniques like StyleGANs and transformer models are good at creating realistic videos, images, text and speech.
Machine learning is a method of data analysis that automates analytical model building. It’s a branch of artificial intelligence that trains a machine how to learn. Machine learning is based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention.
Natural language processing is a branch of artificial intelligence that helps computers understand, interpret and manipulate human language. NLP draws from many disciplines, including computer science and computational linguistics, to fill the gap between human communication and computer understanding.
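A first step in most NLP pipelines is tokenization – breaking raw text into units a computer can count and compare. This minimal sketch (the pattern and sentence are illustrative only) lowercases text, splits it into word tokens and tallies them:

```python
import re
from collections import Counter

def tokenize(text):
    """A very simple NLP step: lowercase text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

sentence = "Computers can be trained to understand human language."
tokens = tokenize(sentence)
print(tokens)                        # the individual word tokens
print(Counter(tokens).most_common(2))  # the most frequent tokens
```

Real NLP systems layer far more on top – subword tokenizers, embeddings, parsers – but they all begin by turning human language into structured units like these.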
Neural networks are computing systems with interconnected nodes that work much like neurons in the human brain. Neural networks use algorithms to recognize hidden patterns and correlations in raw data, cluster and classify it, and continuously learn and improve over time.
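The "interconnected nodes" idea can be shown in a few lines. In this toy sketch (all weights are made-up numbers, not trained values), each node computes a weighted sum of its inputs and squashes it through an activation function, and two hidden nodes feed one output node:

```python
import math

def neuron(inputs, weights, bias):
    """One node: a weighted sum of inputs passed through an activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid squashes output into (0, 1)

def tiny_network(x1, x2):
    """Two hidden nodes feed a single output node."""
    h1 = neuron([x1, x2], [0.8, -0.4], 0.1)
    h2 = neuron([x1, x2], [-0.3, 0.9], -0.2)
    return neuron([h1, h2], [1.2, 0.7], -0.5)

print(round(tiny_network(0.5, 0.25), 3))
```

Training a real network means adjusting those weights automatically from data – scaled up to millions or billions of nodes and connections.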
Reinforcement learning is when an algorithm discovers through trial and error which actions produce the greatest rewards. A machine learning model, reinforcement learning relies on a reward signal for its feedback mechanism as it gradually learns the best (or most rewarding) policy or goal. It’s often used for robotics, gaming and navigation.
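The trial-and-error idea can be illustrated with a classic toy problem: two slot machines with hidden payout rates, where the learner must discover which one rewards more. This epsilon-greedy sketch (payout rates and settings are invented for illustration) usually exploits the best-known machine but occasionally explores:

```python
import random

random.seed(42)
true_payout = [0.3, 0.7]          # hidden reward probabilities of two "arms"
estimates, pulls = [0.0, 0.0], [0, 0]

for step in range(2000):
    # Trial and error: mostly exploit the best-known arm, sometimes explore.
    if random.random() < 0.1:
        arm = random.randrange(2)                       # explore at random
    else:
        arm = 0 if estimates[0] >= estimates[1] else 1  # exploit the leader
    reward = 1 if random.random() < true_payout[arm] else 0
    pulls[arm] += 1
    # Update the running average reward (the feedback signal) for that arm.
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]

print(estimates)  # the estimates approach the true payout rates
```

The same reward-driven loop, with far richer states and actions, drives reinforcement learning in robotics, gaming and navigation.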
Implementing generative AI models
Models are expensive to run – requiring tremendous amounts of computing power and data. You should carefully evaluate the ROI before implementing a generative AI model and consider the distinctions between different types of models, such as foundation models and domain models. There are ethical considerations, too. Where did the data come from – and who owns it? Is it trustworthy? Do you understand precisely how the model was built?
5 steps for fine-tuning a model
Generative AI relies on many different AI algorithms and technologies to generate data that has similar probabilistic distributions and characteristics to the data from which it learns. Rather than building from scratch, you can follow these five steps to fine-tune a pre-trained foundational large language model.
1. Define the task.
Choose a suitable pre-trained large language model and clearly define the task for which it’s being fine-tuned, such as text classification, named entity recognition or text generation.
2. Prepare the data.
Gather and preprocess your task-specific data through steps like labeling, formatting and tokenization. Create training and validation (and possibly test) data sets.
3. Fine-tune.
Train the model on your task-specific data, using the training data set to update the model's weights. Monitor the model's performance on the validation set to prevent overfitting.
4. Evaluate and test.
After training, evaluate your fine-tuned model on the validation set, making necessary adjustments based on results. When satisfied, test the model on the test set to get an unbiased estimate of performance.
5. Deploy.
When you're confident in the model's performance, deploy it for its intended use. This could involve integrating the model into an AI application, website or another platform.
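The five steps above can be sketched end to end at a deliberately tiny scale. This toy example is a hypothetical stand-in for a real LLM pipeline: it "fine-tunes" a small classifier whose starting weights play the role of pre-trained parameters, and all data, weights and learning settings are invented for illustration.

```python
import math
import random

random.seed(0)

# Step 1 - Define the task: binary classification of two-feature examples.
# Step 2 - Prepare the data: a small labeled set split into train/validation.
data = [([x, x * 0.5 + random.uniform(-0.2, 0.2)], 1 if x > 0 else 0)
        for x in [random.uniform(-1, 1) for _ in range(40)]]
train, val = data[:30], data[30:]

# "Pre-trained" weights stand in for a foundation model's parameters.
w, b = [0.1, -0.1], 0.0

def predict(features):
    z = sum(wi * xi for wi, xi in zip(w, features)) + b
    return 1 / (1 + math.exp(-z))       # probability that the label is 1

# Step 3 - Fine-tune: update the weights on the task-specific training data.
for epoch in range(50):
    for features, label in train:
        p = predict(features)
        for i in range(2):              # gradient step on each weight
            w[i] -= 0.1 * (p - label) * features[i]
        b -= 0.1 * (p - label)

# Step 4 - Evaluate: accuracy on the held-out validation examples.
accuracy = sum((predict(f) > 0.5) == bool(y) for f, y in val) / len(val)
print(f"validation accuracy: {accuracy:.2f}")

# Step 5 - Deploy: the tuned predict() is ready to be called by an application.
```

A real fine-tuning run swaps this linear model for a billion-parameter LLM and this loop for a training framework, but the workflow – define, prepare, tune, evaluate, deploy – is the same.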