Computer science
Pipeline transport
Programming language
Quality (concept)
Engineering
Philosophy
Epistemology
Mechanical engineering
Authors
Shreya Shankar, Haotian Li, Parth Asawa, Madelon Hulsebos, Yiming Lin, J.D. Zamfirescu-Pereira, Harrison Chase, Will Fu-Hinthorn, Aditya Parameswaran, Eugene Wu
Identifiers
DOI:10.14778/3685800.3685835
Abstract
Large language models (LLMs) are being increasingly deployed as part of pipelines that repeatedly process or generate data of some sort. However, a common barrier to deployment is the frequent and often unpredictable errors that plague LLMs. Acknowledging the inevitability of these errors, we propose data quality assertions to identify when LLMs may be making mistakes. We present SPADE, a method for automatically synthesizing data quality assertions that identify bad LLM outputs. We make the observation that developers often identify data quality issues during prototyping prior to deployment, and attempt to address them by adding instructions to the LLM prompt over time. SPADE therefore analyzes histories of prompt versions over time to create candidate assertion functions and then selects a minimal set that fulfills both coverage and accuracy requirements. In testing across nine different real-world LLM pipelines, SPADE efficiently reduces the number of assertions by 14% and decreases false failures by 21% when compared to simpler baselines. SPADE has been deployed as an offering within LangSmith, LangChain's LLM pipeline hub, and has been used to generate data quality assertions for over 2000 pipelines across a spectrum of industries.
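To make the abstract's "coverage and accuracy" selection step concrete, below is a minimal Python sketch of the idea. It shows a few hypothetical candidate assertion functions (the kind that could be synthesized from prompt-edit history, e.g., a prompt edit "respond in one sentence" suggesting a length check) and a greedy set-cover-style selector that keeps a small set of assertions flagging all known bad outputs while bounding false failures on known-good ones. All function names, example data, thresholds, and the greedy heuristic itself are illustrative assumptions, not the paper's actual implementation, which formulates selection as an optimization problem.

```python
# Illustrative sketch only: hypothetical candidate assertions plus a greedy
# selector. Names, data, and the heuristic are assumptions for exposition.
from typing import Callable

# Candidate assertions: each returns True if the LLM output passes the check.
def no_placeholder_text(output: str) -> bool:
    return "TODO" not in output and "[insert" not in output.lower()

def is_single_sentence(output: str) -> bool:
    return output.strip().count(".") <= 1

def mentions_no_competitors(output: str) -> bool:
    return "competitor" not in output.lower()

CANDIDATES: dict[str, Callable[[str], bool]] = {
    "no_placeholder_text": no_placeholder_text,
    "is_single_sentence": is_single_sentence,
    "mentions_no_competitors": mentions_no_competitors,
}

def select_assertions(
    candidates: dict[str, Callable[[str], bool]],
    bad_outputs: list[str],
    good_outputs: list[str],
    max_false_failure_rate: float = 0.25,
) -> list[str]:
    """Greedily pick assertions that together flag every known bad output
    (coverage), skipping assertions that fail too many known-good outputs
    (accuracy, i.e., bounded false failures)."""
    # Accuracy filter: drop candidates with too many false failures.
    accurate = {
        name: fn for name, fn in candidates.items()
        if sum(not fn(o) for o in good_outputs) / max(len(good_outputs), 1)
        <= max_false_failure_rate
    }
    uncovered = set(range(len(bad_outputs)))
    chosen: list[str] = []
    while uncovered:
        # For each remaining assertion, the uncovered bad outputs it catches.
        gains = {
            name: {i for i in uncovered if not fn(bad_outputs[i])}
            for name, fn in accurate.items() if name not in chosen
        }
        best = max(gains, key=lambda n: len(gains[n]), default=None)
        if best is None or not gains[best]:
            break  # remaining bad outputs are caught by no candidate
        chosen.append(best)
        uncovered -= gains[best]
    return chosen

if __name__ == "__main__":
    bad = ["TODO: fill this in.", "Buy now. Our competitor is worse. Act now."]
    good = ["Our product ships worldwide."]
    print(select_assertions(CANDIDATES, bad, good))
    # -> ['no_placeholder_text', 'is_single_sentence']
```

The greedy loop here is a simple stand-in for the paper's minimal-set selection; the point it illustrates is the trade-off the abstract names: fewer assertions (each must add coverage of bad outputs) without raising false failures on good outputs.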