Differentially Private In-Context Learning

Ashwinee Panda, Tong Wu, Jiachen Wang, Prateek Mittal

The Third Workshop on Trustworthy Natural Language Processing

TLDR: An important question in deploying large language models (LLMs) is how to augment LLMs with private data. We propose Differentially Private In-Context Learning (DP-ICL) to enable LLMs to adapt to new tasks while maintaining privacy guarantees. DP-ICL performs private inference by establishing a noisy consensus over an ensemble of exemplars using the Report-Noisy-Max mechanism.
Abstract: An important question in deploying large language models (LLMs) is how to augment LLMs with private data. We propose Differentially Private In-Context Learning (DP-ICL) to enable LLMs to adapt to new tasks while maintaining privacy guarantees. DP-ICL performs private inference by establishing a noisy consensus over an ensemble of exemplars using the Report-Noisy-Max mechanism. We evaluate DP-ICL on four benchmarks and find that it achieves performance comparable to non-private ICL (less than 2% degradation).
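
A minimal sketch of the noisy-consensus idea for a classification task, not the authors' implementation: disjoint subsets of private exemplars each produce one in-context prediction for the query, the per-label votes are tallied, and the consensus label is released via Report-Noisy-Max (Laplace noise added to each count, arg-max reported). The helper names (`llm_predict`, `dp_icl_classify`) and the calling conventions are illustrative assumptions.

```python
import numpy as np

def report_noisy_max(vote_counts, epsilon, sensitivity=1.0, rng=None):
    """Report-Noisy-Max: add independent Laplace(sensitivity/epsilon) noise
    to each vote count and return the index of the largest noisy count."""
    rng = rng or np.random.default_rng()
    noisy = np.asarray(vote_counts, dtype=float) + rng.laplace(
        scale=sensitivity / epsilon, size=len(vote_counts)
    )
    return int(np.argmax(noisy))

def dp_icl_classify(query, exemplar_subsets, labels, llm_predict, epsilon):
    """Hypothetical DP-ICL inference sketch: each disjoint exemplar subset
    casts one vote via in-context prediction; the consensus label is
    released privately with Report-Noisy-Max."""
    counts = np.zeros(len(labels))
    for subset in exemplar_subsets:
        pred = llm_predict(prompt_exemplars=subset, query=query)  # one vote per subset
        counts[labels.index(pred)] += 1
    return labels[report_noisy_max(counts, epsilon)]
```

Because each private exemplar appears in at most one subset, changing a single exemplar shifts at most one vote, which is what bounds the sensitivity of the vote histogram in this sketch.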