Selection (genetic algorithm)
Computer Science
Natural Language Processing
Artificial Intelligence
Authors
Viet-Tung Do,Van-Khanh Hoang,Duy‐Hung Nguyen,Shahab Sabahi,Jeff Yang,Hajime Hotta,Minh-Tien Nguyen,Lê Thái Hùng
Source
Journal: Cornell University - arXiv
Date: 2024-04-03
Citations: 1
Identifier
DOI: 10.48550/arxiv.2404.02717
Abstract
Large Language Models (LLMs) can perform various natural language processing tasks with suitable instruction prompts. However, designing effective prompts manually is challenging and time-consuming. Existing methods for automatic prompt optimization lack either flexibility or efficiency. In this paper, we propose an effective approach to automatically select the optimal prompt for a given input from a finite set of synthetic candidate prompts. Our approach consists of three steps: (1) clustering the training data and generating candidate prompts for each cluster using an LLM-based prompt generator; (2) synthesizing a dataset of input-prompt-output tuples for training a prompt evaluator to rank the prompts based on their relevance to the input; (3) using the prompt evaluator to select the best prompt for a new input at test time. Our approach balances prompt generality and specificity and eliminates the need for resource-intensive training and inference. It demonstrates competitive performance on zero-shot question-answering datasets: GSM8K, MultiArith, and AQuA.
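The test-time step (3) can be sketched as follows. This is a minimal, hypothetical illustration: the toy `score` function (token overlap between prompt and input) stands in for the trained prompt evaluator described in the abstract, and all function names, candidate prompts, and the example question are illustrative assumptions, not the paper's actual implementation.

```python
def score(prompt: str, question: str) -> float:
    """Toy relevance score: fraction of the question's tokens that
    also appear in the prompt. A stand-in for a learned evaluator."""
    p_tokens = set(prompt.lower().split())
    q_tokens = set(question.lower().split())
    return len(p_tokens & q_tokens) / max(len(q_tokens), 1)

def select_prompt(candidates: list[str], question: str) -> str:
    """Return the candidate prompt the evaluator ranks highest
    for this input (step 3 of the pipeline)."""
    return max(candidates, key=lambda p: score(p, question))

# Hypothetical candidate prompts, one per training-data cluster (step 1).
candidates = [
    "Solve the arithmetic word problem step by step.",
    "Answer the multiple-choice algebra question.",
]
question = "A farmer has 3 fields; solve this word problem step by step."
best = select_prompt(candidates, question)
```

In the paper's setting, the evaluator is trained on synthesized input-prompt-output tuples (step 2) rather than computed from surface overlap, but the selection interface is the same: rank a finite candidate set per input and keep the top-ranked prompt.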