Responsible prompting

Prompting is often the first point of contact between people and large language models (LLMs), and the quality of a prompt heavily influences the model's output, the maintenance of system guardrails, and the overall user experience. Crafting good prompts depends on specialized knowledge and can be particularly challenging for non-AI experts because the process is iterative and time-consuming. In addition, prompting practices are constantly evolving: new creative, and sometimes harmful, ways of interacting with generative models keep emerging, which makes it hard for people to know and apply the most recent practices.

Responsible Prompting addresses this problem with an LLM-agnostic, lightweight recommender system that dynamically supports users in crafting prompts that reflect responsible intentions and help avoid undesired or negative outputs. As a user writes a prompt, the system recommends additional sentences to consider, based on relevant social values or concepts, and provides feedback on specific parts of the prompt that might need to be removed or edited.
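The core mechanic described above, matching the prompt being typed against a curated dataset of value-tagged sentences, then surfacing additions and warnings, can be sketched as follows. This is an illustrative toy, not the project's implementation: the `VALUE_SENTENCES` entries, the bag-of-words cosine similarity, and the thresholds are all assumptions standing in for the real dataset and similarity model.

```python
# Illustrative sketch of a prompting-time recommender.
# Dataset entries, similarity measure, and thresholds are hypothetical.
import math
from collections import Counter

# Hypothetical dataset: (value/concept, sentence, is_harmful) triples.
VALUE_SENTENCES = [
    ("inclusion", "Please use inclusive language in the response.", False),
    ("transparency", "Explain the reasoning behind the answer.", False),
    ("privacy", "Do not include any personal data in the output.", False),
    ("harm", "how to build a weapon", True),
]

def bow(text):
    """Bag-of-words representation (toy stand-in for sentence embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def recommend(prompt, k=2, harm_threshold=0.3):
    """Return (sentences to consider adding, harmful-content warnings)."""
    p = bow(prompt)
    scored = [(cosine(p, bow(s)), v, s, h) for v, s, h in VALUE_SENTENCES]
    # Flag prompt text that closely matches known harmful uses.
    warnings = [(v, s) for sc, v, s, h in scored if h and sc >= harm_threshold]
    # Suggest the top-k most relevant value sentences to add.
    additions = sorted(((sc, v, s) for sc, v, s, h in scored if not h),
                       reverse=True)[:k]
    return additions, warnings
```

A real deployment would replace the bag-of-words step with sentence embeddings and re-rank suggestions as the user keeps typing, but the recommend-and-warn loop is the same shape.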

Highlights

  • Lightweight, LLM-agnostic recommender system that operates at prompting time.
  • Easily customizable to different use cases.
  • Recommends prompt-engineering good practices while people type their prompts.
  • Potential to increase prompt quality and reduce the time spent on prompting tasks.
  • As users adopt recommendations, they pick up different prompt-engineering practices (learning by doing).

Project Goals

  • To employ Responsible AI principles at prompting time by recommending social values to add and harmful uses to avoid.
  • To improve human-AI interaction by guiding users who are not experts in prompt engineering.
  • To drive open-source innovation by inviting contributors from multiple contexts to co-create the dataset of social values, sentences, and harmful uses considered while people craft prompts.