French AI company Mistral has released a full AI-powered coding stack designed to help large organizations incorporate generative AI into their software development.
At the core of the release is an updated model, Codestral 25.08, which the company says makes enterprise coding faster and more secure while giving customers full control and customizability.
Over the past year, AI-based code assistants have become popular, but Mistral points out that many large companies, especially those in the finance, healthcare, and defense sectors, have had problems adopting them.
Most products are cloud-based and cannot be run offline or in private environments, which rules them out for tightly regulated sectors.
The new Mistral stack addresses these concerns and is positioned as a complete, self-contained system. Customers can run it in the cloud, on-premises, or in private virtual networks.
Companies can customize the model to suit their own codebases and policies, with access to all model weights and a user-friendly interface for adjusting weights and settings.
Experience of Use and Configuration
The new stack offers four components, each of which is already available:
Code Completion: fast, accurate suggestions. In the new Codestral 25.08, the suggestion acceptance rate has grown by 30%, and errors have decreased.
Semantic Search: the Codestral Embed tool is said to provide best-in-class search across large codebases. Mistral argues that it outperforms comparable offerings from OpenAI and Cohere.
Agentic Workflows: the Devstral system carries out multi-step tasks such as refactoring, testing, and PR management.
IDE Integration: everything ships with native JetBrains and VS Code plugins.
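To illustrate how embedding-based semantic search over a codebase works in general (a conceptual sketch, not Mistral's implementation; the toy vectors below stand in for embeddings a model such as Codestral Embed would produce), the query and each code snippet are mapped to vectors, and snippets are ranked by cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, index):
    """Rank indexed snippets by similarity to the query vector.

    `index` maps snippet names to their precomputed embedding vectors.
    """
    ranked = sorted(index.items(),
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked]

# Toy 3-dimensional "embeddings" standing in for real model output.
index = {
    "retry_with_backoff()": [0.9, 0.1, 0.0],
    "parse_config()":       [0.1, 0.9, 0.1],
    "render_template()":    [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # e.g. the embedding of "how do we retry failed requests?"
print(search(query, index))  # most similar snippet first
```

In a real deployment, the index would hold model-generated embeddings for every function or file, typically in a vector database rather than an in-memory dict.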
The centerpiece of the update is Codestral 25.08, a model the company claims is designed for real-world applications.
The model can generate fill-in-the-middle code completions and works well in production environments: response latency is low, and it can be deployed in different ways without infrastructure-specific adaptations.
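Fill-in-the-middle means the model receives both the code before the cursor (the prefix) and the code after it (the suffix), and generates only the part in between. A minimal sketch of such a request, assuming Mistral's `/v1/fim/completions` endpoint and the `codestral-latest` model name (check the current API documentation for exact names), might look like:

```python
import json

def build_fim_request(prefix, suffix, model="codestral-latest", max_tokens=64):
    """Build the JSON body for a fill-in-the-middle completion request.

    The model is asked to generate the code that belongs between
    `prefix` and `suffix`.
    """
    return {
        "model": model,
        "prompt": prefix,   # code before the cursor
        "suffix": suffix,   # code after the cursor
        "max_tokens": max_tokens,
    }

body = build_fim_request(
    prefix="def is_even(n):\n    return ",
    suffix="\n\nprint(is_even(4))",
)
# Sending it would require an API key (not executed here), e.g.:
#   requests.post("https://api.mistral.ai/v1/fim/completions",
#                 headers={"Authorization": f"Bearer {API_KEY}"},
#                 json=body)
print(json.dumps(body, indent=2))
```

The editor plugin fills `prompt` and `suffix` from the text around the cursor, which is what lets the model complete code mid-file rather than only appending at the end.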
In addition, the autonomous coding agent Devstral has made strides. On SWE-Bench Verified, a benchmark for AI code comprehension, it scores up to 61.6%, surpassing models such as Claude 3.5 and GPT-4.1 mini.
The advantage of Devstral is that it can run on consumer hardware, such as a high-end gaming PC or a Mac with 32 GB of RAM.
User Friendly, Enterprise Ready
All of this sits inside Mistral Code, which plugs into JetBrains and VS Code and combines inline suggestions, auto-generated commit messages, and instant semantic search. Developers can now pose questions such as
“How do we deal with Stripe timeouts?” inside their IDE and receive relevant snippets from their own projects.
For IT and security teams, the Mistral Console provides complete visibility and control, enabling compliance with enterprise-grade features such as Single Sign-On and audit logs.
Already in Use
The system is already in use at companies such as Capgemini and Abanca. Matthieu Chereau, Director of Technology & Innovation at Capgemini, said: “In regulated industries (such as energy and defense), the stack from Mistral is a true enabler to accelerate our developments with large language models.”
Because of Europe's strict data-privacy regulations, the major Spanish bank Abanca runs all of these models in a self-hosted manner.
Always on the lookout for the best tools to bring to its clients, Capgemini also had VP Alban Alev explain how Mistral Codestral helps:
“Using Mistral Codestral has reshaped our AI development approach. After some initial support, Codestral has become an indispensable product in our daily work.”
How to Get Started
Mistral AI's complete coding stack is available today. Teams can start with autocomplete and semantic search and adopt agentic workflows over time.
Mistral also offers customized deployments, including on-premises, on request. Developers can obtain API keys through the console.