LLM Reasoning Steps

The thing I use LLMs for the most is writing code, and I've been trying to figure out a way to just give one the code and a description of a new feature and have it work out what code needs to change. The ultimate goal would be to have it make a commit with the code changes - I just haven't come up with a good method to guarantee that the changes from the LLM will be complete and contain all the code necessary for that feature.

To that end, I've been reading papers (shoutout to Emergent Mind) to see what other people have figured out.

I came across a paper that's basically about forcing LLMs to do some reasoning about the question before answering - it has a few really good prompt engineering ideas. The notable ones I took away were Repeat State and Self-Verification.

Self-Verification: "Humans will check if their answers are correct when answering questions. Therefore, before the model gets the answer, we add a self-verification process to judge whether the answer is reasonable based on some basic information." - the paper.
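
As a rough sketch (not from the paper), that verification step could look something like this - call_llm here is a hypothetical placeholder for whatever model/API you're actually using, and the prompt wording is just a guess:

```python
# Minimal sketch of a self-verification pass. call_llm is a hypothetical
# placeholder for a real model call, and the prompt wording is an assumption.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # swap in a real LLM/API call here

def answer_with_self_verification(question: str) -> str:
    draft = call_llm(f"Question: {question}\nAnswer step by step.")
    verdict = call_llm(
        "Here is a question and a proposed answer.\n"
        f"Question: {question}\n"
        f"Proposed answer: {draft}\n"
        "Check whether the answer actually addresses the question and is "
        "internally consistent. Reply 'OK' if it is, otherwise explain what's wrong."
    )
    if verdict.strip().startswith("OK"):
        return draft
    # Feed the critique back in for one retry; a real loop would cap retries.
    return call_llm(
        f"Question: {question}\n"
        f"A previous answer was rejected because: {verdict}\n"
        "Write a corrected answer."
    )
```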

That's an excellent idea: What if you had the LLM generate tests for the code first and then had it write the code? If the project doesn't already have tests, the LLM could generate them, and suddenly, you have some test coverage!

I've played around with having an LLM generate tests for code and found it can be tricky, though part of the problem may have been that I wasn't giving it enough context about the problem it was solving (and I didn't completely understand how mocks worked 😅).

There's another paper I'm going to re-read that also suggested guiding and validating LLM output by generating tests, so it's definitely an idea to keep in mind.
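
Roughly, the tests-first loop I have in mind looks like this - again with a hypothetical call_llm placeholder, made-up file names, and assuming the generated tests are plain pytest:

```python
# Sketch of a tests-first loop: generate tests, generate code, run the tests.
# call_llm and the file names are assumptions, not a real workflow.
import pathlib
import subprocess

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # swap in a real LLM/API call here

def implement_feature(feature_description: str, existing_code: str) -> str:
    tests = call_llm(
        "Write pytest tests (code only, no prose) for this new feature:\n"
        f"{feature_description}\n\nExisting code for context:\n{existing_code}"
    )
    pathlib.Path("test_new_feature.py").write_text(tests)

    code = call_llm(
        "Write the implementation that makes these tests pass:\n"
        f"{tests}\n\nExisting code:\n{existing_code}"
    )
    pathlib.Path("new_feature.py").write_text(code)

    # A failing run is the signal to re-prompt with the test output.
    result = subprocess.run(
        ["pytest", "test_new_feature.py"], capture_output=True, text=True
    )
    return "tests pass" if result.returncode == 0 else result.stdout
```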

Repeat State: "Similar to repeated readings, we include a small summary of the current state after a long chain of reasoning, aiming to help the model simplify its memory and reduce the interference of other texts in the CoT."

This idea is super interesting: generally, LLMs favor recency in the prompt, so they'll start 'forgetting' things from the beginning of the chat. I've found that they remember when you explicitly tell them to.
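
Here's a sketch of how that 'repeat state' injection could work in a chat loop - call_llm is a hypothetical helper that takes a message list, and summarizing every third turn is an arbitrary choice:

```python
# Sketch of periodically injecting a 'current state' summary into the context.
# call_llm is a hypothetical chat-style helper; the details are assumptions.

def call_llm(messages: list[dict]) -> str:
    raise NotImplementedError  # swap in a real chat-completion call here

def chat_with_repeat_state(turns: list[str], summarize_every: int = 3) -> list[dict]:
    messages: list[dict] = []
    for i, user_msg in enumerate(turns, start=1):
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": call_llm(messages)})
        if i % summarize_every == 0:
            # Have the model restate where things stand, then keep that summary
            # in the context so early details aren't lost to recency bias.
            summary = call_llm(messages + [{
                "role": "user",
                "content": "Briefly summarize the current state of this task.",
            }])
            messages.append({"role": "user", "content": f"Current state: {summary}"})
    return messages
```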

I decided not too long ago that LLMs have limited JSX problem-solving ability, but in reality there's just a lot of noise, and it'd take a lot of time to isolate just the relevant logic to give to the LLM - and at that point it'd be simplified code anyway.
What if, after giving it the code, you included a 'repeat state' step (sketched below) to have it:

1. pay special attention to and isolate the relevant code
2. get an understanding of how the code works and what it does
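
Something like this, with the same hypothetical call_llm placeholder and prompt wording that's purely an assumption:

```python
# Sketch of a 'repeat state' step before asking for code changes.
# call_llm is a hypothetical placeholder for a real model call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # swap in a real LLM/API call here

def change_code_with_repeat_state(code: str, feature: str) -> str:
    # Step 1: isolate the relevant code and restate how it works.
    state = call_llm(
        f"Here is the codebase:\n{code}\n\n"
        f"For the feature '{feature}', quote only the parts that are relevant "
        "and explain in a few sentences how they currently work."
    )
    # Step 2: ask for the change with that summary kept in the prompt.
    return call_llm(
        f"Current state of the relevant code:\n{state}\n\n"
        f"Now write the code changes needed to implement: {feature}"
    )
```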

To be honest, this feels like letting the LLM 'think out loud'.
