Provide a stronger LLM for Bevel Intelligence (higher paid tier, or allow us to provide our own API key to access a stronger model)
planned (soon)
Quinn Comendant
Bevel Intelligence is GREAT. But at the moment, it's just a toy, limited by its weak underlying LLM (large language model). For example, notice in the attached screenshot how it believes an RHR of 62.2 bpm is higher than 64.7 bpm. It's so dumb.
If Bevel's AI were backed by a stronger LLM, it would, pardon my French, completely kick ass. These stronger models exist now, and I want Bevel to use them now.
I know you'll probably upgrade the model incrementally as stronger models become available, but because of the scale of Bevel's operations you will always choose a cost-effective model, whereas I want to use the *strongest model available* at any point in time. This option could be offered to users willing to pay a higher subscription fee, or you could let us pay for our own token usage by providing our own LLM API keys.
Leah marked this post as planned (soon)
Amanda marked this post as planned (tbd)
Thanks Quinn Comendant! More coming soon :)