Introducing the Empower-Functions Family

We are excited to announce the launch of our new family of empower-functions models, offering enhanced flexibility and performance to meet diverse customer needs.

Daiyi Yang
May 21, 2024
1 min

Previously, we launched the empower-functions model as a drop-in replacement for GPT-4, providing much faster response times and lower costs for real-world use cases involving tool usage. We're grateful to see the model being used effectively in a variety of scenarios, including customer support, voice agents, and user intent classification.

However, we've received feedback requesting more flexibility, such as the ability to run models locally, as well as cheaper and faster options for simpler use cases or those with stricter response-time requirements. To address these needs, we are proud to introduce the empower-functions family of models in three sizes:

- Llama3-Empower-Functions-Small: Based on the Llama3-8B model, this is our fastest and most cost-effective option (2x faster and cheaper than the original empower-functions model). We also provide a 4-bit quantized GGUF version, which can run fully locally in 7.5 GB of RAM.

- Empower-Functions-Medium: Previously known as the empower-functions model, this is based on the Mixtral-8x7B model. It offers balanced performance and cost, making it suitable for most use cases.

- Llama3-Empower-Functions-Large: Based on the Llama3-70B model, this provides the best performance, excelling in complex scenarios that require a thorough understanding of context and prompts.

Enhancements and New Features

In addition to offering different model sizes, we have made several iterations to improve model quality through a mix of SFT (supervised fine-tuning) and DPO (direct preference optimization); see here for details. We have also introduced new features such as sequential calling, extending the capabilities of the v1 empower-functions model.

Updated Pricing

As part of our commitment to being the best place for running LoRAs of state-of-the-art models, we have also updated our pricing to allow users to host LoRAs as cost-effectively as possible. The new pricing takes effect today:

- Up to 8B: $0.2 per million tokens

- 8.1B to 16B (including Mixtral 8x7B): $0.5 per million tokens

- 16.1B and up (including Mixtral 8x22B): $1.2 per million tokens
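To make the tiers concrete, here is a small sketch that maps a model's parameter count to its per-million-token price and computes a total cost. The function names are just for illustration; only the tier boundaries and prices come from the list above.

```python
def price_per_million(params_b: float) -> float:
    """Per-million-token price in USD for a model with `params_b`
    billion parameters, following the tiers above."""
    if params_b <= 8.0:
        return 0.2
    if params_b <= 16.0:  # Mixtral 8x7B bills in this tier
        return 0.5
    return 1.2            # 16.1B and up, e.g. Mixtral 8x22B

def total_cost(params_b: float, tokens: int) -> float:
    """Total cost in USD for processing `tokens` tokens."""
    return price_per_million(params_b) * tokens / 1_000_000

# Example: 3 million tokens through the 8B Small model costs about $0.60.
print(round(total_cost(8, 3_000_000), 2))
```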


All our models are available on the Empower platform today. The model sources can be found in our Hugging Face repository, and we offer a pip package and a set of examples for easy prompting and running of the models in our GitHub repository. We hope you enjoy the new family of empower-functions models. Please feel free to contact us with any feedback or inquiries. Thank you!
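As a rough sketch of what prompting one of these models for tool use might look like, the snippet below assembles a chat request with a tool definition in the standard OpenAI function-calling format. The model id, the tool, and the commented-out endpoint details are all assumptions for illustration, not Empower's documented API; see the pip package and examples in the GitHub repository for the supported interface.

```python
import json

def build_chat_request(model: str, user_message: str, tools: list) -> dict:
    """Assemble an OpenAI-style chat-completion request body with tools.
    Whether the Empower endpoint accepts this schema verbatim is an
    assumption made for this sketch."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": tools,
    }

# Hypothetical tool definition, purely for illustration.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

request = build_chat_request(
    "llama3-empower-functions-small",  # model id is an assumption
    "What's the weather in Tokyo?",
    [weather_tool],
)
print(json.dumps(request, indent=2))

# To send it, an OpenAI-compatible client could be pointed at the
# Empower API (base URL and auth below are assumptions):
# from openai import OpenAI
# client = OpenAI(base_url="https://app.empower.dev/api/v1", api_key="...")
# response = client.chat.completions.create(**request)
```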

Ready to start?

Deploy and serve your first fine-tuned LLM in 1 minute for free!
