Provide a stronger LLM for Bevel Intelligence (higher paid tier, or allow us to provide our own API key to access a stronger model)
complete
Quinn Comendant
Bevel Intelligence is GREAT. But at the moment, it’s just a toy, limited by its use of a weak underlying LLM (large language model), e.g., notice in the attached screenshot how it believes a RHR of 62.2 bpm is higher than 64.7 bpm. It's so dumb.
If Bevel’s AI were backed by a stronger LLM, it would – pardon my French – completely kick ass. These stronger models exist now, and I want Bevel to use them now.
I know you’ll probably upgrade the model incrementally as stronger models become available, but because of the scale of Bevel’s operations you will always choose a cost-effective model, whereas I want to use the strongest model available at any point in time. This option could be provided for users willing to pay a higher subscription fee, or by allowing us to pay for our own token usage with our own LLM API keys.
Leah
marked this post as
complete
Bevel Intelligence is now improved in Bevel 3.0. Please let us know your thoughts as you try it out!
Thomas Warnick
I would still like to have the opportunity to use an LLM of my choice, as I can then ensure that the data is processed in Germany/Europe.
Leah
Merged in a post:
Bevel API/CLI/MCP
Vladut Muresan
Hey,
It would be amazing to have an open Bevel API for us to use with our openclaw. I think it would be nice, at least for us Pro paying users.
Thanks
Leah
Merged in a post:
Support GROQ as a Provider for Bevel Intelligence
Snake C
It would be great to support GROQ as an optional provider for Bevel Intelligence. GROQ has a free tier with no credit card required and uses rate limits rather than upfront payment, which could make Bevel Intelligence more accessible for users who want a lower-cost or BYOK option. Adding GROQ could also give users more flexibility in model/provider choice while helping Bevel offer a scalable alternative for lightweight AI usage.
Erik
Would love to pay more for a better AI. I’ve found a prompt that works pretty well, but I definitely still get some hallucinations sometimes.
Den R
Excuse me, but it is crap. I’m using the trial now, and I don’t want to buy PRO because of the stupid ChatGPT under the hood. Standard Gemini or DeepSeek are much, much better.
I’d like to buy Pro because the app is great! Thanks for this! I hope you complete this task ASAP. Thank you, team! 🫶
Robert
Even during training sessions in Apple Health, the current LLM incorrectly recognizes the intervals and draws the wrong conclusions. A final sprint is labeled “expiration”, and things like that.
In addition, the LLM encourages significantly more training than is generally recommended. When running, for example, this is dangerous for joints and tendons.
It often agrees with you even when you say nonsense yourself. Sounds like ChatGPT 4.0 (the LLM known for such errors).
Amanda
marked this post as
in progress
Leah
marked this post as
planned (soon)
Amanda
marked this post as
planned (tbd)
Thanks Quinn Comendant! More coming soon :)