System Prompts
In this lesson, we will explore how system prompts influence AI outputs and how different parameters can be adjusted to refine results. We will interact with AI tools like Anthropic’s Console and OpenAI’s Playground to compare models, experiment with system prompts, and analyze responses.
Exercise: Experimenting with System Prompts
Start with either Anthropic's Console or OpenAI's Playground.
Anthropic has a library of system prompts you can browse.
Choose a Prompt
Create a system prompt relevant to the project you've worked on so far.
Define the type of response you want from the AI and the inputs needed from the user (a sample prompt is sketched below).
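For example, a prompt for this step might look like the following, written as a Python constant so it can be reused in the API sketches later in this lesson. The project (a recipe-suggestion feature) and the wording are hypothetical; substitute your own.

```python
# A hypothetical system prompt for a recipe-suggestion feature.
# Both the project and the wording are examples; write your own.
SYSTEM_PROMPT = """You are a friendly cooking assistant inside a meal-planning app.
The user will give you a list of ingredients they have on hand.
Reply with up to three recipe ideas, each with a one-sentence description.
Keep the tone encouraging and avoid recipes that require special equipment."""
```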
Change Parameters and Test Across Platforms
Input the same system prompt in both OpenAI’s Playground and Anthropic’s Console.
Adjust parameters such as temperature (which controls how varied or random the output is) and max tokens (which caps the length of the response) to observe differences; a code sketch for running the same comparison through the APIs follows below.
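The Playground and Console are the fastest way to run this comparison, but the same experiment can be scripted. Here is a minimal sketch using the official openai and anthropic Python SDKs; the model names, parameter values, and sample input are assumptions, so adjust them to your setup.

```python
# Minimal sketch: send the same system prompt to both providers with
# matching parameters. Assumes the openai and anthropic packages are
# installed and OPENAI_API_KEY / ANTHROPIC_API_KEY are set in the
# environment. Model names and values are examples.
from openai import OpenAI
from anthropic import Anthropic

SYSTEM_PROMPT = "You are a friendly cooking assistant inside a meal-planning app."  # or reuse the fuller prompt above
USER_INPUT = "I have eggs, spinach, and feta. What can I make?"  # example user input

openai_client = OpenAI()
openai_response = openai_client.chat.completions.create(
    model="gpt-4o",    # example model name
    temperature=0.7,   # higher values produce more varied output
    max_tokens=300,    # caps the length of the response
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": USER_INPUT},
    ],
)
print("OpenAI:", openai_response.choices[0].message.content)

anthropic_client = Anthropic()
anthropic_response = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model name
    temperature=0.7,
    max_tokens=300,
    system=SYSTEM_PROMPT,  # Anthropic takes the system prompt as a separate field
    messages=[{"role": "user", "content": USER_INPUT}],
)
print("Anthropic:", anthropic_response.content[0].text)
```

One structural difference worth noticing: OpenAI's API passes the system prompt as a message with the role "system", while Anthropic's API takes it as a separate system field. This kind of detail is useful to know when discussing implementation with engineers.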
Analyze Outputs and Iterate
Compare the AI responses for clarity, relevance, and creativity, then revise your system prompt and test again; a simple temperature sweep like the one sketched below can make the comparison more systematic.
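To make the temperature question in the reflection below concrete, you can sweep the parameter and print the outputs side by side. This sketch continues from the previous one (it reuses anthropic_client, SYSTEM_PROMPT, and USER_INPUT), and the chosen temperature values are just examples.

```python
# Hypothetical temperature sweep: same prompt, same input,
# three different temperature settings, one provider.
for temp in (0.0, 0.5, 1.0):
    response = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",  # example model name
        temperature=temp,
        max_tokens=300,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": USER_INPUT}],
    )
    print(f"--- temperature={temp} ---")
    print(response.content[0].text)
```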
Reflection Questions:
• How did the AI’s response change when adjusting the temperature?
• What differences did you notice between OpenAI’s and Anthropic’s outputs?
• How would you modify your system prompt to achieve more reliable results?
By the end of this exercise, we will have a deeper understanding of how system prompts shape AI behavior and how to fine-tune them for specific applications. This is especially valuable for a designer working closely with engineers to refine product output. The entire process, from defining prompts to analyzing and iterating on results, mirrors the workflow we would use when collaborating with engineers to improve a product that integrates large language models (LLMs). By mastering these steps, we can better communicate our needs, optimize AI performance, and contribute to more effective and intelligent product development.