Automatic prompting is one of the hottest topics in LLM research.
4 research papers on auto-prompting, plus one bonus:
▪️ AutoPrompt
▪️ LLMs are Human-Level Prompt Engineers
▪️ Large Language Models are Zero-Shot Reasoners
▪️ Automatic Chain of Thought Prompting in Large Language Models
🎁 Bonus!
🧵
1. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
Proposes AutoPrompt, a method that automates prompt creation via a gradient-guided search over trigger tokens.
arxiv.org/pdf/2010.15980.pdf
2. LLMs are Human-Level Prompt Engineers
Proposes the Automatic Prompt Engineer (APE) method, which generates candidate instructions and selects the best one.
APE outperforms the prior LLM baseline and performs comparably to or better than human-written instructions.
arxiv.org/pdf/2211.01910.pdf
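A minimal sketch of APE's search-and-select step. Assumptions: in the paper, candidate instructions are generated by an LLM from input-output demos and scored with the target model; here `candidates`, `dev`, and `toy_score` are hypothetical stand-ins.

```python
def ape_select(candidates, dev_set, score_fn):
    """Return the candidate instruction with the highest total score
    on a small dev set (APE's selection loop, heavily simplified)."""
    return max(
        candidates,
        key=lambda inst: sum(score_fn(inst, x, y) for x, y in dev_set),
    )

# Toy scorer (assumption, not the paper's): reward instructions
# that mention the task keyword.
def toy_score(instruction, x, y):
    return 1.0 if "antonym" in instruction.lower() else 0.0

dev = [("hot", "cold"), ("up", "down")]
cands = ["Repeat the input.", "Write the antonym of the word."]
best = ape_select(cands, dev, toy_score)
```

In the real method the scorer runs the model on each dev example under each instruction, so selection cost grows with candidates × examples; the paper prunes with a cheap filtering pass first.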
3. Large Language Models are Zero-Shot Reasoners
Proposes Zero-shot-CoT, a zero-shot, template-based prompting method that elicits chain-of-thought reasoning simply by appending "Let's think step by step."
arxiv.org/pdf/2205.11916.pdf
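The template is simple enough to sketch. The two trigger phrases below are the ones reported in the paper; the question text is made up.

```python
def reasoning_prompt(question: str) -> str:
    # Stage 1: trigger step-by-step reasoning from the model.
    return f"Q: {question}\nA: Let's think step by step."

def answer_prompt(question: str, reasoning: str) -> str:
    # Stage 2: feed the generated reasoning back and extract
    # the final answer with a second template.
    return (
        f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
        "Therefore, the answer is"
    )

p = reasoning_prompt("If I have 3 apples and eat 1, how many remain?")
```

Two model calls total: one to produce the reasoning chain, one to distill it into an answer.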
4. Automatic Chain of Thought Prompting in Large Language Models
Proposes Auto-CoT, a method that automates the construction of chain-of-thought demonstrations (Manual-CoT) while achieving comparable results.
Auto-CoT makes CoT prompting more scalable and less dependent on hand-crafted examples.
openreview.net/pdf?id=5NTt8GFjUHkr
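A rough sketch of Auto-CoT's demo selection, with one stated substitution: the paper clusters questions via k-means over Sentence-BERT embeddings and samples one per cluster; here a greedy max-diversity pick over token-set (Jaccard) similarity stands in so the example stays self-contained.

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap as a cheap similarity proxy (assumption;
    the paper uses sentence embeddings)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def select_demo_questions(questions, k):
    """Greedily pick k mutually dissimilar questions; each would then
    be answered with Zero-shot-CoT to build the demonstration set."""
    selected = [questions[0]]
    while len(selected) < k:
        rest = [q for q in questions if q not in selected]
        # Pick the question least similar to anything already chosen.
        selected.append(
            min(rest, key=lambda q: max(jaccard(q, s) for s in selected))
        )
    return selected

demos = select_demo_questions(
    ["add two numbers", "add three numbers", "what color is the sky"], 2
)
```

Diversity matters here because one faulty auto-generated reasoning chain hurts less when the demos cover distinct question types.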
🎁 Bonus! Why think step-by-step? paper
Discusses how humans and language models can reason through a series of mental steps to reach accurate inferences,
and relates this to chain-of-thought reasoning in LLMs.
arxiv.org/pdf/2304.03843.pdf