Two years after its release, Meta's 8B model remains the default choice for fine-tuning, raising pointed questions about whether innovation in open-weight LLMs has stagnated.