Prompt engineering is the process of crafting and refining prompts to elicit desired responses or behaviors from people or systems. In contexts such as human-computer interaction, machine learning, and education, it involves designing prompts that clearly communicate a task, instruction, or cue so that the recipient produces the intended result.
Let’s say you’re making spaghetti marinara for dinner. Sauce from a jar is perfectly fine. But what if you buy your tomatoes and basil from the farmers market to make your own sauce? Chances are it will taste a lot better. And what if you grow your own ingredients in your garden and make your own fresh pasta? A whole new level of savory deliciousness.
Just as better ingredients can make for a better dinner, better inputs to a generative AI (gen AI) model can make for better results. These inputs are called prompts, and the practice of writing them is called prompt engineering. Skilled prompt engineers design prompts that work well with a gen AI tool's other inputs and settings, eliciting better answers from the underlying model so it can better perform tasks such as writing marketing emails, generating code, analyzing and synthesizing text, engaging with customers via chatbots, creating digital art, and composing music, among hundreds, if not thousands, of other current applications.
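To make the idea concrete, here is a minimal sketch of the difference a more carefully engineered prompt can make. It assumes the OpenAI Python SDK and an API key in the environment; the model name and both prompts are illustrative, and any hosted gen AI model could be substituted.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A vague prompt versus a more carefully engineered one for the same task.
vague_prompt = "Write a marketing email."
engineered_prompt = (
    "Write a 120-word marketing email announcing our spring sale to returning "
    "customers. Use a friendly tone, mention the 20% discount code SPRING20, "
    "and close with a single call to action."
)

for prompt in (vague_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

Running both prompts against the same model typically shows the second, more specific prompt producing output that needs far less editing, which is the everyday payoff of prompt engineering.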
Gen AI has an important role to play in the future of business and society. But where does prompt engineering fit in? And how do you write a good prompt? Read on to find out.
First things first: a refresher on gen AI. Gen AI applications are typically built on foundation models, which contain expansive artificial neural networks inspired by the billions of connected neurons in the human brain. Foundation models are a product of deep learning, so named for the many layers within these networks. Deep learning has powered many recent advances in AI, including tools you probably already use, such as Alexa or Siri, but foundation models represent a significant evolution within it. Unlike previous deep-learning models, foundation models can process massive and varied sets of unstructured data, and applications built on them can perform tasks such as answering questions and classifying, editing, summarizing, and drafting new content.
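As a loose illustration of the "deep" in deep learning, the toy network below simply stacks several layers in sequence. This sketch assumes PyTorch, and the layer sizes are arbitrary; foundation models use far larger, transformer-based architectures, but the layered principle is the same.

```python
import torch
from torch import nn

# A toy "deep" network: nothing more than several layers stacked in sequence.
# Foundation models stack many far larger, transformer-based layers,
# but the underlying idea of depth is the same.
model = nn.Sequential(
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 256),
)

x = torch.randn(8, 512)   # a batch of 8 input vectors
print(model(x).shape)     # torch.Size([8, 256])
```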