I think small LLMs are underrated and overlooked. Exceptional speed without compromising performance.
In the race for ever-larger models, it's easy to forget just how powerful small LLMs can be: blazingly fast, resource-efficient, and surprisingly capable. I am biased, because my team builds these small open-source LLMs, but the potential to create an exceptional user experience (fastest responses) without compromising on performance is very much achievable. I built Arch-Function-Chat, a collection of fast, device-friendly LLMs that achieve performance on par with GPT-4 on function calling, and can also chat.

What is function calling? The ability for an LLM to access an environment and perform real-world tasks on behalf of the user's prompt. And why chat? To help gather accurate information from the user before triggering a tool call (manage context, handle progressive disclosure, ...).
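To make that concrete, here is a minimal sketch of one function-calling round trip using the OpenAI-compatible chat API. The endpoint URL, model name, and get_weather tool are placeholders I made up for illustration, not the actual Arch-Function-Chat setup:

```python
# Minimal function-calling sketch against an OpenAI-compatible endpoint.
# The base_url and model name are placeholders: point them at wherever
# you serve a function-calling model (e.g. a local inference server).
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

# 1. Describe the tool the model is allowed to call (hypothetical weather lookup).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    return f"72F and sunny in {city}"  # stub standing in for a real weather API

# 2. The model either replies in chat (e.g. a clarifying question to gather
#    missing arguments) or emits a structured tool call.
response = client.chat.completions.create(
    model="arch-function-chat",  # placeholder model name
    messages=[{"role": "user", "content": "What's the weather in Seattle?"}],
    tools=tools,
)
message = response.choices[0].message

# 3. If the model requested a tool call, parse its arguments and execute it.
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    print(get_weather(**args))
else:
    print(message.content)  # plain chat turn back to the user
```

The chat side matters in step 2: if the user had only said "What's the weather?", a model that can converse asks for the city instead of guessing, and only then emits the tool call.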