• Microsoft’s new Phi-3 Mini AI model offers capabilities similar to GPT-3.5 in a far smaller package, making it well suited to handheld devices.
  • With 3.8 billion parameters, Phi-3 Mini is the first in a series of lightweight AI models Microsoft is debuting for better portability and performance.
  • The model can run locally on low-power hardware, saving energy and cost compared with cloud-based processing.

Microsoft has launched the first of three lightweight AI models, Phi-3 Mini, to attract a wider client base with cost-effective options, according to Reuters. The latest model features 3.8 billion parameters and is currently available on Ollama, Hugging Face, and Azure.

About Phi-3 Mini

Phi-3 Mini is the first of three small language models (SLMs) the company plans to release. Microsoft is betting its future on a technology that is anticipated to have a profound effect on society and the way people work.

Eric Boyd, Executive Vice President of Microsoft Azure AI Platform, claims that Phi-3 Mini is as capable as models like GPT-3.5, “just in a smaller form factor”. As a compact AI model, it should run better on devices like laptops and phones and cost less to operate. To train Phi-3, Microsoft had a large language model (LLM) generate children’s-book-style texts from a list of 3,000 words; the resulting model improves on its predecessor, reasoning and coding more effectively.

This could position it to compete with Meta’s Llama 3 8B, which is starting to catch up to GPT-4 in certain areas. The more powerful an AI model is, the more power and compute it consumes. Because Phi-3 Mini can run locally on low-power hardware instead of outsourcing its workload to pricey cloud-based processing centres, its smaller size is an advantage over its competitors.

It’s small enough to be deployed on a smartphone, and it sits beneath the 7-billion-parameter Phi-3 Small and the 14-billion-parameter Phi-3 Medium.

Sébastien Bubeck, Microsoft’s vice president of GenAI research, said: “Phi-3 is not slightly cheaper, it’s dramatically cheaper, we’re talking about a 10x cost difference compared to the other models out there with similar capabilities.”

According to Microsoft, Phi-3 Mini is available immediately on Hugging Face, a machine-learning model platform; Ollama, a framework for running models locally; and Azure’s AI model catalogue.
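Since Phi-3 Mini ships through Ollama, a local setup can be sketched with an Ollama Modelfile. This is a minimal illustration only: the `phi3` model tag, the sampling value, and the system prompt below are assumptions, not details from Microsoft’s announcement.

```
# Ollama Modelfile — a minimal sketch for running Phi-3 Mini locally.
# Assumes the model is published under the "phi3" tag in Ollama's library.
FROM phi3

# Illustrative sampling setting; tune to taste.
PARAMETER temperature 0.3

# Hypothetical system prompt for an on-device assistant.
SYSTEM "You are a concise assistant running fully on-device."
```

Built with `ollama create my-phi3 -f Modelfile` and run with `ollama run my-phi3`, inference happens entirely on local hardware, which is the cost advantage the article describes.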


More information

According to Microsoft, SLMs are designed to handle simpler tasks, which makes them easier to adopt for businesses with limited resources.

The SLM has also been optimized for Nvidia’s graphics processing units (GPUs) and will be accessible through Nvidia’s software tool, Nvidia Inference Microservices (NIM).

Microsoft invested $1.5 billion in the UAE-based AI startup G42 last week. It has also partnered with the French startup Mistral AI to make its models available through the Azure cloud computing platform.