Microsoft fully open-sources its Phi-4 LLM to win over developers
In a significant move, Microsoft has fully open-sourced Phi-4, its "small" large language model. For an LLM to be considered open source, both the model and its weights (the model's internal parameters) must be made available for any purpose. Phi-4 was announced in December but was made fully available to the public today.
Open-source models and their weights allow developers to customize a model further. In contrast, OpenAI does not share the models or weights for recent releases such as GPT-3, GPT-4, or its "o" series models.
Microsoft calls the model "small" because it was trained with 14 billion parameters, versus Meta's Llama 3.1, which was trained with 70 billion. Despite the smaller parameter count, Phi-4 outperforms OpenAI's GPT-4o-mini and Meta's Llama-3.3, scoring highly on benchmarks for science, math, and coding.