This is something I've been wanting to test out for a while, now that OpenAI has a GPT API available: Could it be used to determine if a post is from a scammer?
I had some free time today so I tried it out and the results were surprising. Below is a summary.
Method:
I looked back through the history of scammers I've banned, along with legitimate F4M posts that were not deleted. I gave them to ChatGPT and asked very simply, "Is this reddit post from a scammer?" (n = 10 scam posts, 10 legitimate posts.)
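For anyone curious, here's a rough sketch of how that same check could be run through the API rather than the ChatGPT interface; the model name, exact prompt wording, and response handling are placeholders, not necessarily what I used:

```python
# Minimal sketch: ask the chat completions API whether a post looks like a scam.
# Model name, prompt wording, and the yes/no parsing are placeholders.
import openai  # pre-1.0 interface shown

openai.api_key = "YOUR_API_KEY"

def looks_like_scam(post_text: str) -> str:
    """Return the model's verdict text for a single post."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep the classification as repeatable as possible
        messages=[{
            "role": "user",
            "content": "Is this reddit post from a scammer? "
                       "Answer yes or no, then explain briefly.\n\n" + post_text,
        }],
    )
    return response["choices"][0]["message"]["content"]
```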
My hypothesis was that it wouldn't do very well given just the text of the post. When assessing a post myself, I look at numerous factors: the username, the age of the account, post history, and other criteria I won't disclose publicly. Without that additional context, I didn't think there would be enough information to make even a semi-clear determination.
Results:
To my surprise, ChatGPT actually did a pretty good job of determining whether a post was from a scammer. This was an informal test with a small sample size, but it identified 100% of the scam posts as scams. On the other hand, it flagged roughly 25% of the legitimate posts as scams. Granted, the posts it incorrectly flagged were all overtly sexual in nature, and some of them raised enough red flags on my end that I required verification from the user before allowing the post here.
I won't post screenshots of the legitimate ads since those belong to someone, but here are examples of posts it (correctly) deemed scams:
One interesting case: it considered the use of the !unlock command to be a sign of a scammer. After telling it to ignore that, it flagged the post as likely legitimate: https://i.imgur.com/MK6Ctwl.png
This is all to say that I think it's worth exploring the possibility of a reddit bot that checks new posts, auto-removes anything flagged, and holds it for manual review before letting it be posted. I'm also interested in how GPT-4 performs; I'm on the waitlist for API access to it.
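As a very rough sketch of what such a bot might look like, combining PRAW with the kind of API call above; the credentials, subreddit stream, and naive "starts with yes" check are placeholders, not a working moderation policy:

```python
# Rough sketch of an auto-review bot: stream new submissions, ask the model,
# and pull anything flagged so a human mod sees it before it stays up.
# Credentials, subreddit, and the "yes" check are placeholders.
import praw

reddit = praw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    username="MOD_BOT_ACCOUNT",
    password="PASSWORD",
    user_agent="scam-screening sketch by u/MOD_BOT_ACCOUNT",
)

for submission in reddit.subreddit("r4rSeattle").stream.submissions(skip_existing=True):
    # reuses looks_like_scam() from the earlier sketch
    verdict = looks_like_scam(f"{submission.title}\n\n{submission.selftext}")
    if verdict.strip().lower().startswith("yes"):
        submission.mod.remove()  # hold for manual mod review
        submission.report("Flagged as a possible scam by the GPT screen")
```

Given the false positive rate above, a report-only mode (skipping the remove call) might be the safer starting point.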
Pricing is another item to consider. We don't get many F4M posts here, but I also don't have much desire to sink a ton of money into a reddit board for trying to find hookups. Additionally, this type of content may be against OpenAI's content policy, as evidenced by the warnings in the screenshots, and not allowed anyway. Regardless, I found the experiment interesting and thought you all might too. If future iterations prove successful, it may just mean the creation of a powerful tool against reddit scammers.
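For a sense of scale, a back-of-envelope estimate, assuming gpt-3.5-turbo's roughly $0.002 per 1K tokens pricing and guessing at token counts and post volume:

```python
# Back-of-envelope cost. Price reflects gpt-3.5-turbo at the time of writing;
# token count and monthly volume are guesses, not subreddit data.
PRICE_PER_1K_TOKENS = 0.002  # USD
TOKENS_PER_CHECK = 600       # prompt + post + response, roughly
POSTS_PER_MONTH = 300        # placeholder volume

cost_per_check = TOKENS_PER_CHECK / 1000 * PRICE_PER_1K_TOKENS
print(f"~${cost_per_check:.4f} per post, ~${cost_per_check * POSTS_PER_MONTH:.2f} per month")
```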
To be continued?