- Service uses AI agents powered by IBM’s watsonx and Granite Time Series models to analyse network data and predict issues
- Aims to reduce reactive troubleshooting by detecting hidden problems early across on-premises, private and public cloud environments
What happened
IBM’s new service, called Network Intelligence, provides companies with a single point of reference to fix network issues faster by using AI agents that can analyse data, identify potential problems and suggest remediation steps. The service addresses the growing complexity of IT environments that now comprise a mixture of on-premises data centres, private clouds, multiple public clouds and software-as-a-service applications.
The AI agents use pre-trained models that study network designs, telemetry, traffic flows, alarms and time-series data to spot hidden issues and early warning signs, proposing likely causes and recommending actions to help teams respond. The underlying technology draws on IBM’s watsonx platform and the company’s Granite Time Series Foundation Models, compact AI models developed by IBM Research.
These models are specifically tuned for networking and trained on massive amounts of telemetry, alarms and flow data from different types of environments. IBM claims this gives the system a deeper understanding of network behaviour compared to rule-based systems, generic machine learning or large language models, enabling better detection of issues that typically go unnoticed and earlier warnings before performance degradation occurs.
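To make the idea of learning a network's own behaviour concrete, here is a minimal, generic sketch of anomaly detection over latency telemetry. It uses a rolling z-score as a stand-in for the far richer time-series foundation models described above; nothing here reflects IBM's actual models or APIs, and all names are illustrative.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag telemetry points that deviate sharply from recent behaviour.

    Instead of a fixed alert limit, the detector maintains a dynamic
    baseline (mean and spread of the last `window` samples) and flags
    values more than `threshold` standard deviations away from it.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Steady latency around 20 ms, then a sudden spike at index 30.
telemetry = [20.0 + 0.1 * (i % 5) for i in range(30)] + [85.0]
print(detect_anomalies(telemetry))  # → [30]
```

The point of the sketch is the shift in approach: the baseline is derived from the data itself rather than configured by an operator, which is the property IBM attributes to its networking-tuned models, applied at far greater scale and across many signal types.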
Why it’s important
Network complexity has become a critical pain point for enterprises pursuing multi-cloud strategies. As organisations distribute workloads across multiple cloud providers and maintain legacy on-premises infrastructure, visibility and troubleshooting have become exponentially more difficult. Traditional monitoring tools, which rely on predefined rules and thresholds, struggle to keep pace with the dynamic nature of modern distributed systems.
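The weakness of predefined thresholds can be seen in a toy example: a fixed limit fires on hard breaches but says nothing about a steady degradation trend building underneath it. This is a hypothetical illustration (the 100 ms limit and function names are assumptions, not from any monitoring product):

```python
def static_threshold_alerts(latencies_ms, limit=100.0):
    """Classic rule-based monitoring: alert only when a fixed limit is crossed."""
    return [i for i, v in enumerate(latencies_ms) if v > limit]

# Latency creeps from 20 ms up to 95 ms — a clear degradation trend,
# yet a static 100 ms threshold never fires, so nothing is flagged.
creeping = [20.0 + 2.5 * i for i in range(31)]  # 20 → 95 ms
print(static_threshold_alerts(creeping))  # → []
```

A trend-aware detector would surface this drift long before the limit is breached, which is precisely the gap proactive, model-driven tools aim to close.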
IBM’s pitch is that Network Intelligence helps companies move from reacting to outages towards preventing them: cutting alert noise, simplifying root-cause analysis and catching rare or complex issues that older monitoring tools miss. This proactive approach could significantly reduce downtime costs and improve user experience, particularly for organisations running business-critical applications across hybrid environments.
The use of foundation models specifically trained on networking data represents a shift in how AI is being applied to infrastructure management. Rather than relying on general-purpose large language models, IBM has developed domain-specific models that understand the patterns and anomalies particular to network behaviour. If successful, this approach could establish a template for applying AI to other complex infrastructure domains, from storage systems to security operations.

IBM is offering different subscription plans, along with a free tier for companies that want to test the service in limited production environments before committing fully.