Zero-shot prompting means giving the large language model only the task instruction, with no examples; the model relies solely on its pretrained knowledge to produce a response.
Example:
Prompt: "Write a short poem about the ocean."
Model Response: "The waves dance beneath the moon, singing songs of endless blue, where the sky meets the sea, a horizon of dreams anew."
One-shot prompting includes a single example of the desired output in the prompt, giving the large language model one pattern to imitate.
Example:
Prompt: "Write a short poem about the ocean. Example: 'The forest whispers in the night, secrets hidden from the light.' Now, write about the ocean."
Model Response: "The ocean hums a deep, dark tune, mysteries held beneath the moon."
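A one-shot prompt like the one above can be assembled by prepending a single worked example to the instruction. The sketch below is framework-agnostic and illustrative; the `one_shot_prompt` helper and its "Example: ... Now, ..." formatting are conventions chosen here, not a required API.

```python
def one_shot_prompt(instruction: str, example: str) -> str:
    """Build a one-shot prompt: the task, one worked example, then the task again.

    The labels and separators are an illustrative convention; any clear
    formatting that distinguishes the example from the instruction works.
    """
    return f"{instruction} Example: '{example}' Now, complete the task."

prompt = one_shot_prompt(
    "Write a short poem about the ocean.",
    "The forest whispers in the night, secrets hidden from the light.",
)
print(prompt)
```

The resulting string would then be sent to the model through whatever client or API you use.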
Few-shot prompting includes several examples of the desired output in the prompt, letting the large language model infer the expected style and format from the pattern they establish.
Example:
Prompt: "Write a short poem. Example 1: 'The forest whispers in the night, secrets hidden from the light.' Example 2: 'The mountain stands tall and grand, a silent guardian of the land.' Now, write a poem about the ocean."
Model Response: "The ocean whispers to the shore, tales of depth and distant lore."
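The few-shot pattern generalizes naturally to any number of examples. This is a minimal sketch of that assembly; the `few_shot_prompt` helper and its "Example N:" numbering are assumptions made here for illustration, not part of any specific library.

```python
def few_shot_prompt(instruction: str, examples: list[str]) -> str:
    """Build a few-shot prompt: numbered examples followed by the task.

    With an empty list this degenerates to a zero-shot prompt, and with
    one example to a one-shot prompt, showing the three techniques as
    points on the same spectrum.
    """
    shots = " ".join(
        f"Example {i}: '{text}'" for i, text in enumerate(examples, start=1)
    )
    return f"{shots} Now, {instruction}".strip()

prompt = few_shot_prompt(
    "write a poem about the ocean.",
    [
        "The forest whispers in the night, secrets hidden from the light.",
        "The mountain stands tall and grand, a silent guardian of the land.",
    ],
)
print(prompt)
```

In practice, more examples generally steer the model's output more reliably toward the demonstrated format, at the cost of a longer prompt.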