Liquid AI’s LFM2-24B: A 24B-Parameter MoE That Runs on 32GB RAM and Makes Cloud APIs Look Overpriced
Liquid AI’s new sparse MoE model activates only 2.3B of its 24B parameters per token, delivering server-class performance on consumer hardware and challenging the cloud-only paradigm.