First of all, sorry if this isn't the right subreddit for this type of question. I'm not exclusively looking into OpenAI models. If you think there's a better sub for this, please let me know.
Hey! I'm currently working on a project that will rely heavily on multiple LLM function calls. While function calls seem pretty straightforward in theory, I've found that most models I've tested don't have great support for them, and some are downright unable to figure out "what I want from them".
So far, I've tested Mixtral-8x7B-Instruct-v0.1 on Anyscale, which triggered function calls very easily but tended to stop responding correctly once a function call had been executed. I've also tried claude-3-sonnet, gpt-4-turbo-preview and gpt-3.5-turbo-0125 on OpenRouter, with even worse results than Mixtral: often the function wasn't invoked correctly, e.g. malformed JSON or the predefined parameters being ignored completely. The SDK I'm using is openai-node, the official OpenAI TypeScript SDK.
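For context, this is roughly the shape of my setup, stripped down to a placeholder get_weather tool (not my actual schema), going through OpenRouter's OpenAI-compatible endpoint:

```typescript
import OpenAI from "openai";

// OpenRouter exposes an OpenAI-compatible API, so the official SDK works
// by just overriding the base URL.
const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "openai/gpt-4-turbo-preview",
    messages: [{ role: "user", content: "What's the weather in Berlin?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "get_weather", // placeholder tool for illustration
          description: "Get the current weather for a city",
          parameters: {
            type: "object",
            properties: {
              city: { type: "string", description: "City name" },
              unit: { type: "string", enum: ["celsius", "fahrenheit"] },
            },
            required: ["city"],
          },
        },
      },
    ],
    tool_choice: "auto",
  });

  const toolCall = completion.choices[0].message.tool_calls?.[0];
  if (toolCall) {
    // Arguments come back as a JSON string; this parse is where the
    // malformed-JSON failures show up with some models.
    const args = JSON.parse(toolCall.function.arguments);
    console.log(toolCall.function.name, args);
  }
}

main();
```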
I haven't tried the OpenAI models directly via the OpenAI API yet, mainly because OpenRouter also provides access to them, but if native function-calling support is better there, I'm happy to try that as well.
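If I understand correctly, switching from OpenRouter to the OpenAI API directly should only be a client-config change, something like:

```typescript
import OpenAI from "openai";

// Same SDK, same chat.completions calls; only the client construction changes
// when talking to api.openai.com directly instead of OpenRouter.
const openaiDirect = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
```

(with the model name losing the "openai/" prefix, e.g. just "gpt-4-turbo-preview"), so it shouldn't be much effort to test.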
My main question is: does anyone have experience with this and can give me some recommendations or ideas on how to proceed here? Are there models that are optimized for multiple function calls? Are there SDKs with better support for function calls than the official OpenAI one?
Any resources on optimizing this would be helpful!
Thanks!