I run a small development team that does PoCs for one of our clients, a large bank. Recently we did an ML demo, built on a much simpler ML system, that turned their Employee Handbook into an interactive "chatbot".
That is, you could ask it "How many PTO days do I get?" or "What are my dental plan options?" and it would answer, citing the correct passages in the handbook. Nothing truly groundbreaking, but it really sparked an idea with them... which is the whole point. Love it when doing my job actually works.
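(For anyone curious what that kind of retrieve-and-cite setup roughly looks like, here is a minimal sketch. Everything in it, the handbook snippets, section names, and the TF-IDF scoring, is a hypothetical illustration and not our actual system.)

```python
# Hypothetical sketch: find the handbook passage most relevant to a question
# and return it along with its reference. Not the real demo.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

handbook = {
    "PTO Policy, p. 12": "Full-time employees accrue 20 PTO days per year...",
    "Benefits: Dental, p. 34": "We offer two dental plan options: basic and premium...",
}

sections = list(handbook.keys())
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(handbook.values())

def answer(question: str):
    # Score the question against every handbook section and return the best match.
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    best = scores.argmax()
    return handbook[sections[best]], sections[best]

print(answer("How many PTO days do I get?"))
```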
This led to the inevitable "ok, can you do that with a ChatGPT-type model?", mainly to make it less "formal," if you will, and have it respond conversationally. Use case: take the mortgage loan application system and make a virtual assistant for it.
Okay... we can do that.
Then came the kicker: "But... it needs to be context-sensitive. It has to give different, but correct, answers to the customer, the loan officer, the underwriter, etc. There is information each persona should have but the others should not. How do we make sure it gives only the authorized answer to a question?"
We dubbed this the "Santa Clause Problem."
See, our imaginary ChatGPT instance knows everything about Christmas. It can answer any question you want about it.
But how do you get it not to spill the beans that Santa is not real to the 4 YO asking that question, while still giving the right answer to an adult?
More importantly, how do you make sure it does not LEAK the information over a series of questions? Or synthesize an answer from related data?
We considered a few options:
- Train it to lie.
Have it withhold information based on the user's classification. The leakage risk seems high here, because it KNOWS the answer but has a block on giving it. Carefully worded cross-examination might leak information.
And you have to flag information as related to each of the roles you want separated, plus any concepts you don't want it addressing.
- Train instances on different data sets. The dataset for the customer only contains data about the customer's portion of the loan process. Same for the loan agent, the underwriter, etc.
This too seems problematic, because the AI may be able to infer an answer from what information it does have, putting together seemingly unrelated facts to conclude that Santa must not be real.
It also has the problem of needing data curation (see the sketch after this list).
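To make that curation burden concrete, here is a minimal sketch of what per-role tagging could look like if done as filtering in a retrieval pipeline, so the model only ever sees passages the current persona is cleared for. All of the passages, role names, and helper functions are hypothetical illustrations, not anything from the actual project.

```python
# Hypothetical sketch: every passage is tagged with the personas allowed to
# see it, and only authorized passages are ever placed in the model's context.
PASSAGES = [
    {"text": "Customers may lock their rate for 45 days after application.",
     "roles": {"customer", "loan_officer", "underwriter"}},
    {"text": "Underwriting requires a debt-to-income ratio below 43%.",
     "roles": {"loan_officer", "underwriter"}},
    {"text": "Internal risk-scoring thresholds for manual review.",
     "roles": {"underwriter"}},
]

def authorized_context(role: str) -> list[str]:
    """Return only the passages this persona is cleared to see."""
    return [p["text"] for p in PASSAGES if role in p["roles"]]

def build_prompt(role: str, question: str) -> str:
    # The chat model is prompted only with the authorized passages.
    context = "\n".join(authorized_context(role))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("customer", "What DTI ratio do I need?"))
```

The appeal of this kind of approach is that the model cannot leak or synthesize from what it never sees; the cost is that somebody has to tag every passage correctly for every persona.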
We are still kicking around the idea... got smart guys on my team, we shall see if they come up with a viable solution.