How to run Mistral AI?

  • Mistral AI is a French company selling AI products. It was founded in April 2023 by former employees of Meta Platforms and Google DeepMind. Two models have been published and are available as open weights, and three models are available via API only.
  • Mistral AI's open models are released under the Apache 2.0 license, meaning they can be used, modified, and distributed with few restrictions. Additional tools, such as Ollama and LM Studio, are needed to run them locally.

Mistral AI, a French company, was founded in April 2023 by former employees of Meta Platforms and Google DeepMind, and has released both open-weight and API-only models as a response to proprietary models. Its open models carry the Apache 2.0 license and can run locally, with few restrictions, with the help of additional tools.

What is Mistral AI?

Mistral AI, a French company, was founded in April 2023 by former employees of Meta Platforms and Google DeepMind. Mistral AI is a young company that specialises in AI and machine learning solutions, focusing on developing advanced algorithms and technologies to tackle complex problems across various industries, including finance, healthcare, and technology.

Two models, Mistral 7B and Mixtral 8x7B, have been published and are available as open weights. Three models, Mistral Small, Mistral Medium and Mistral Large, are available via API only, which means these models are closed-source and accessible only through Mistral's application programming interfaces (APIs).

Also read: French AI startup Mistral shakes things up with surprise release of LLM that’s better than ChatGPT

Alongside the launch of Mistral Large, Mistral AI also launched a chatbot called Le Chat, a counterpart to ChatGPT, seeking to replicate OpenAI's successful path. Microsoft announced a new partnership with the company in February to expand its presence in the rapidly evolving AI industry.

How to run Mistral AI?

Mistral AI's open models are released under the Apache 2.0 license, meaning they can be used with few restrictions. Let's learn how to install one on a local machine without much need for coding.

Also read: How to create a large language model (LLM)?

The world of large language models (LLMs) is often dominated by cloud-based solutions, so additional tools are needed to run them locally. Ollama, for example, offers an exciting option for running LLMs locally, with support for the Mistral model. LM Studio uses quantised versions of models, making it easy for users to download a model and run it on a laptop.
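As an illustration of the Ollama route: once a model has been pulled (for example with `ollama pull mistral`), Ollama serves a local REST API, by default on port 11434. The sketch below assumes that server is running on its default port; the helper names are our own.

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes a locally running Ollama server)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="mistral"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_mistral(prompt):
    """Send a prompt to the locally running Mistral model and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (with the server running):
#   ask_mistral("Summarise the Apache 2.0 license in one sentence.")
```

Because everything runs on localhost, no API key or internet connection is needed once the model weights have been downloaded.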

Take LM Studio as an example: first visit the official LM Studio website to download the Windows or Mac version of the installer. It is a small tool with a download size of about 400 MB.

Once it is downloaded and installed following the instructions, search for Mistral 7B in the search box. Press enter to see the Mistral 7B variants, then choose one version to download; the file size is around 5 GB.

After the Mistral AI model is loaded onto the local system, you can interact with it and ask questions; the response time will depend on the system's processing power and memory.

In a software environment like LM Studio, the Local Inference Server lets you run machine learning models on your own hardware, and API calls are the method by which you send data to and receive data from these models.
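A minimal sketch of such an API call: LM Studio's Local Inference Server exposes an OpenAI-compatible endpoint, by default on port 1234. The example below assumes that server is running with a Mistral model loaded; the function names are our own.

```python
import json
import urllib.request

# LM Studio's default local server address (an assumption; check the app's Server tab)
SERVER_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(question):
    """Build an OpenAI-style chat-completion request body."""
    return {
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.7,
    }

def chat(question):
    """POST the request to the local server and return the model's answer text."""
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(build_chat_request(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Usage (with the server running):
#   chat("What is Mixtral 8x7B?")
```

Because the endpoint follows the OpenAI request format, existing client code written for cloud APIs can often be pointed at the local server with only the base URL changed.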


Monica Chen

Monica Chen is an intern reporter at BTW Media covering tech-trends and IT infrastructure. She graduated from Shanghai International Studies University with a Master’s degree in Journalism and Communication. Send tips to m.chen@btw.media
