
This post has been de-listed

It is no longer included in search results and normal feeds (front page, hot posts, subreddit posts, etc.). It remains visible only via the author's post history.

Has anyone done negative prompting for LLMs?
Post Body

Has anyone done negative prompting for LLMs?

I read this paper on how to apply classifier-free guidance (CFG) to LLMs: https://arxiv.org/pdf/2306.17806

Classifier-Free Guidance (CFG) [37] has recently emerged in text-to-image generation as a lightweight technique to encourage prompt-adherence in generations. In this work, we demonstrate that CFG can be used broadly as an inference-time technique in pure language modeling. We show that CFG (1) improves the performance of Pythia, GPT-2 and LLaMA-family models across an array of tasks: Q&A, reasoning, code generation, and machine translation, achieving SOTA on LAMBADA with LLaMA-7B over PaLM-540B; (2) brings improvements equivalent to a model with twice the parameter-count; (3) can stack alongside other inference-time methods like Chain-of-Thought and Self-Consistency, yielding further improvements in difficult tasks; (4) can be used to increase the faithfulness and coherence of assistants in challenging form-driven and content-driven prompts: in a human evaluation we show a 75% preference for GPT4All using CFG over baseline.

and I was wondering if anyone has tried this technique?
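
For context, here is a minimal sketch of the core decoding step the abstract describes, assuming a Hugging Face causal LM (GPT-2 is used only as a stand-in; the model name, the two prompts, and the guidance strength gamma are illustrative choices, not taken from the paper or the post). The idea: run the model once on the conditional prompt and once on a weaker "negative"/unconditional prompt, then combine the next-token logits as uncond + gamma * (cond - uncond).

```python
# Minimal sketch of classifier-free guidance at decoding time.
# Assumptions: Hugging Face transformers + torch are installed; "gpt2",
# the prompts, and gamma=1.5 are illustrative, not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Write a short poem about the ocean:"   # conditional context
negative = "Write a short poem:"                 # weaker / negative context
gamma = 1.5                                      # guidance strength

cond_ids = tokenizer(prompt, return_tensors="pt").input_ids
uncond_ids = tokenizer(negative, return_tensors="pt").input_ids

generated = []
with torch.no_grad():
    for _ in range(40):  # generate up to 40 new tokens
        cond_logits = model(cond_ids).logits[:, -1, :]
        uncond_logits = model(uncond_ids).logits[:, -1, :]

        # CFG combination in logit space:
        # logits = uncond + gamma * (cond - uncond)
        logits = uncond_logits + gamma * (cond_logits - uncond_logits)

        next_token = torch.argmax(logits, dim=-1, keepdim=True)  # greedy, for simplicity
        generated.append(next_token.item())

        # Append the chosen token to BOTH contexts so they stay in sync
        cond_ids = torch.cat([cond_ids, next_token], dim=-1)
        uncond_ids = torch.cat([uncond_ids, next_token], dim=-1)

print(tokenizer.decode(generated))
```

With gamma = 1 this reduces to ordinary decoding; gamma > 1 pushes generations toward the prompt and away from the negative context, which is what makes CFG usable as a form of negative prompting for LLMs.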

Author
Account Strength: 70%
Account Age: 2 years
Verified Email: Yes
Verified Flair: No
Total Karma: 2,827
Link Karma: 1,211
Comment Karma: 1,616
Profile updated: 5 days ago
Posts updated: 2 days ago

Subreddit

Post Details

We try to extract some basic information from the post title. This is not always successful or accurate; please use your best judgement and compare these values to the post title and body for confirmation.
Posted: 1 week ago