The AI tipping point has arrived
AI has shifted from model training to enterprise-scale execution. In 2026, generative AI, agentic systems, and real-time inferencing are rapidly moving from pilot to integration into core operations, products, and customer experiences, representing the next major industry opportunity. From public safety, intelligent transportation, and smart manufacturing to smart cities, AI will be powering the physical world.
As AI scales into production across industries, a structural gap has emerged. AI applications must operate closer to where the data is generated: at the network edge, across factories, cities, and vehicles. Service providers are uniquely positioned to close this gap, managing the distributed infrastructure required to deliver AI capabilities securely at national and global scale. The operator network is no longer simply a transport layer; it has become the central fabric of the digital economy, enabling AI applications for which persistence, security, and simplicity are foundational. Organizations that build a distributed AI foundation now will lead the next decade of real-time, AI-infused intelligence.
Transforming the service provider edge
This shift requires a new architecture at the service provider edge. Operators are adopting AI grids to leverage their existing networks to offer managed services for applications like physical AI with carrier-grade reliability, sovereignty, and compliance. Cisco today announced a new Cisco AI Grid with NVIDIA reference architecture to power these services, with AT&T as the first to bring these inferencing capabilities to market.
Infrastructure not built for distributed AI
However, realizing the promise of distributed AI requires overcoming several infrastructure constraints that enterprises increasingly rely on service providers to address. First, predictability. Real-time AI applications, such as robotics, autonomous systems, video analytics, and public safety, demand millisecond precision, which requires AI workloads to run locally or in close proximity to the data. Second, security. As AI extends across thousands of distributed endpoints, security also needs to be distributed and fused into the network. Third, operational complexity. Managing hybrid environments across data centers, edge locations, and access networks increases operational complexity, integration challenges, and time to value. AI innovation is accelerating faster than infrastructure modernization. Without a unified, secure, and intelligent architecture purpose-built for distributed AI, organizations risk fragmented deployments and adoption delays.
Cisco AI Grid with NVIDIA for service providers: Infrastructure for the AI era
We are introducing the Cisco AI Grid with NVIDIA, a full-stack AI architecture that enables service providers to deliver real-time AI inferencing services across distributed networks. Integrated with Cisco Mobility Services Platform and built on NVIDIA's AI Grid reference architecture, the grid helps operators participate in the AI value chain, starting with wireless and then expanding into heterogeneous access technologies.
Cisco AI Grid with NVIDIA integrates key foundational capabilities into a unified platform. It combines distributed compute powered by Cisco UCS servers with the NVIDIA RTX PRO 6000 Blackwell Server Edition; intelligent networking built on Cisco Nexus switching and Silicon One-based routing optimized for AI traffic; and embedded security across infrastructure layers. Additionally, the reference architecture aligns with Cisco Mobility Services Platform, incorporating inferencing applications, observability, orchestration, and lifecycle management. Purpose-built for high-impact use cases, including public safety, video intelligence, and transportation, the Cisco AI Grid with NVIDIA becomes a monetizable platform for service providers, while delivering predictability and operational simplicity to enterprises.
"The AI grid is the next major opportunity for telecom operators as they turn the network into a distributed AI platform. Together, Cisco AI Grid with NVIDIA accelerated computing and their Mobility Services Platform give telcos a full-stack path to turn secure, real-time AI inference at the network edge into highly monetizable, AI-native enterprise services," said Chris Penrose, Global Head of Business Development for Telco, NVIDIA.
A platform approach to transform connectivity into embedded AI services
The Cisco platform approach transforms AI from an integration challenge into a scalable business model. Instead of building distributed AI environments from scratch, organizations leverage pre-validated designs and a unified operations framework integrated with Cisco Mobility Services Platform. Operators can deliver AI inferencing services directly from edge locations to connected endpoints, supporting low-latency, high-precision applications across autonomous vehicles, robotics, IoT, and industrial automation. This reduces risk, standardizes deployment, and enables predictable scale across compute, networking, and security. Rather than remaining connectivity providers, service providers become active participants in the AI economy: monetizing the network, delivering AI as a service, enabling ecosystem innovation, and capturing new revenue streams without building entirely new platforms.
Cisco Mobility Services Platform operates one of the world's largest global IoT platforms, supporting 293 million mobile IoT subscribers-including 130 million connected vehicles-across leading service providers and more than 31,000 enterprises worldwide. The platform processes 2 petabytes of data and 123 million API calls daily, providing the scale and operational foundation for mission-critical services. Cisco AI Grid with NVIDIA builds on this foundation, extending Cisco Mobility Services Platform to enable service providers, developers, and ecosystem partners to deliver distributed AI inferencing services at the network edge.
Global service providers leading the way
Leading operators are already capturing this opportunity.
AT&T is bringing new edge AI inferencing capabilities to enterprises and developers, built on its dedicated IoT core powered by Cisco AI Grid with NVIDIA. This enables organizations to run AI inference closer to where data is generated-enabling real-time decision-making across distributed environments. Through this integration, AT&T combines business-grade connectivity, advanced network features, localized AI compute, and zero-trust security to support mission-critical use cases-the first successful deployment being near real-time AI inferencing for public safety at AT&T's Discovery District.
"Scaling AI services that are both highly secure and accessible for enterprises and developers is a core pillar of our IoT connectivity strategy," said Shawn Hakl, SVP, Head of Product, AT&T Business. "By combining AT&T's business-grade connectivity, localized AI compute, and zero-trust security while working with members of the NVIDIA Inception program and harnessing Cisco AI Grid with NVIDIA infrastructure and Cisco Mobility Services Platform, we're bringing real-time AI inference closer to where data is generated-accelerating digital transformation and unlocking new business opportunities."
SoftBank Corp. is advancing its Telco AI Cloud vision to build social infrastructure for the AI era by leveraging its telecommunications foundation. Technologies such as Cisco AI Grid with NVIDIA highlight how distributed AI inferencing can be enabled at the network edge. By bringing AI capabilities closer to where data is generated, SoftBank supports real-time industrial AI for robotics, autonomous transportation, and public safety-unlocking new growth and reinforcing its evolution into an AI platform provider.
"Cisco AI Grid with NVIDIA is a key step toward enabling distributed, real-time AI at the edge. At SoftBank, our Telco AI Cloud vision integrates AI computing with nationwide telecom infrastructure to deliver scalable, low-latency AI services. This aligns with the shift toward AI-native telecom platforms, and we look forward to exploring how these technologies can accelerate the industry's transformation," said Ryuji Wakikawa, Vice President and Head of the Research Institute of Advanced Technology at SoftBank Corp.
Building a vibrant partner ecosystem
Infrastructure creates potential; applications create value. Cisco is fostering a growing ecosystem of independent software vendors (ISVs) and developers building AI-powered solutions on Cisco AI Grid. This ecosystem forms a virtuous cycle: operators gain differentiated services, ISVs gain scalable infrastructure, and enterprises gain AI capabilities without complex integration work. Partners such as Linker Vision demonstrate the impact, delivering AI-powered video inferencing for smart cities, retail analytics, and industrial automation. By leveraging secure AI Grid within mobile operator networks, applications process data at the edge with millisecond latency, scale across thousands of endpoints, and deliver real-time insights.
The urgency to act now
The AI infrastructure opportunity is massive and time-sensitive. Early movers will secure ecosystem partnerships, build durable customer relationships, and monetize edge assets ahead of competitors. Late adopters risk being confined to commoditized connectivity while others capture the AI value layer. Cisco AI Grid with NVIDIA provides a secure, modular, and pre-validated path forward. It streamlines implementation, brings operational consistency across distributed environments, and ensures infrastructure keeps pace with rapidly growing AI workloads. Additionally, it delivers predictable outcomes through programmable service access, reliable application performance, and trusted data governance.
The AI era is here. The question is not whether to build distributed AI infrastructure-but how fast. With a secure AI Grid from Cisco, organizations can move decisively, monetize intelligently, and lead confidently. The infrastructure powering the next decade of AI starts now-and it starts with Cisco. Contact us for more information.
