OpenAI made a series of moves this past week that signal a decisive pivot toward enterprise AI agent deployment. The headline: multiyear strategic partnerships with McKinsey, BCG, Accenture, and Capgemini to roll out its Frontier platform at scale. Alongside that, OpenAI retired five older models from ChatGPT, expanded its thinking mode context to 256K tokens, shipped interactive code blocks, and added new security controls. Taken together, it’s the clearest picture yet of where OpenAI is placing its bets.
Frontier Alliances: The Enterprise Play
The centerpiece announcement is Frontier Alliances — multiyear deals with four of the world’s largest consulting firms to deploy OpenAI’s Frontier enterprise AI agent platform.
The partnerships break down by role:
- McKinsey & Company and Boston Consulting Group — strategy and operating model design, helping enterprises figure out where AI agents fit in their organizations
- Accenture and Capgemini — end-to-end systems integration, data architecture, and cloud infrastructure to actually build and deploy the agents
Frontier itself is OpenAI’s platform for building, deploying, and managing AI agents — what OpenAI calls “AI coworkers.” It functions as a semantic layer that connects CRMs, data warehouses, ticketing tools, and internal applications, allowing AI agents to navigate business software, execute workflows, and make decisions across an organization’s entire tech stack.
The platform uses open standards with no vendor lock-in. It works with ChatGPT, Atlas, Codex, and third-party agents. For regulated industries, it includes governance controls: identity management, permission boundaries, and audit trails.
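Frontier's actual interfaces aren't public in enough detail to show real calls, but the governance idea is concrete enough to sketch. The following is a hypothetical illustration of permission boundaries plus an audit trail; the agent names, tool identifiers, and data shapes are all made up for the example.

```python
# Conceptual sketch of agent governance: per-agent permission
# boundaries plus an append-only audit trail. Not Frontier's API;
# agent names and tool IDs are invented for illustration.

from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

# Per-agent allowlists: which tools each agent identity may invoke.
PERMISSIONS = {
    "billing-agent": {"crm.read", "invoices.read"},
    "support-agent": {"crm.read", "tickets.write"},
}

def invoke_tool(agent: str, tool: str) -> bool:
    """Allow the call only if the agent's allowlist covers the tool; log either way."""
    allowed = tool in PERMISSIONS.get(agent, set())
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "tool": tool,
        "allowed": allowed,
    })
    return allowed

print(invoke_tool("billing-agent", "invoices.read"))  # within the boundary
print(invoke_tool("billing-agent", "tickets.write"))  # denied: not allowlisted
```

The point of the audit-trail half is that denied calls get logged too, which is what makes the record useful for the regulated industries the platform targets.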
Early customers include Intuit, State Farm, Thermo Fisher, and Uber. OpenAI also deploys “Forward Deployed Engineers” — embedded technical staff who work alongside consulting teams during implementation.
Why This Matters More Than Another Model Release
The consulting partnerships reveal OpenAI’s diagnosis of the AI adoption gap: the bottleneck isn’t model capability. It’s implementation. Most enterprises know AI agents can automate workflows. What they lack is the organizational change management, systems integration, and governance frameworks to make it happen.
By partnering with McKinsey and BCG on strategy, OpenAI is inserting itself into the earliest phase of enterprise AI decisions — before companies have chosen a platform, before they’ve scoped their agent deployments, before competitors get a seat at the table. Accenture and Capgemini then handle the build-out, creating a full pipeline from strategy to production.
This is a direct response to Anthropic’s enterprise gains. Claude Code and Claude Cowork have been steadily gaining traction in enterprise environments, and the recent Claude Sonnet 4.6 release — with near-flagship performance at mid-tier pricing — makes Anthropic’s offering increasingly hard to ignore. OpenAI’s answer isn’t a better model (though GPT-5.3-Codex remains competitive). It’s a better go-to-market machine.
For organizations evaluating AI agent platforms, the practical implication is that Frontier will soon come pre-packaged with implementation support from the consulting firms most enterprises already work with. That’s a powerful distribution advantage, regardless of benchmark comparisons.
Retiring GPT-4o and Older Models
On February 19, OpenAI retired five models from ChatGPT: GPT-4o, GPT-4.1, GPT-4.1 mini, o4-mini, and GPT-5 (both Instant and Thinking variants). API access remains unchanged for now, but the consumer-facing retirement signals that OpenAI is consolidating around its latest model generation.
This cleanup makes sense. With GPT-5.3-Codex as the flagship and GPT-5.2 as the general-purpose workhorse, maintaining older models in the ChatGPT interface created confusion without adding value. For developers still using these models via API, the retirement is a signal to start planning migration.
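One way to plan that migration is a thin alias layer in application code, so a retired model name routes to a successor without touching call sites. This is a hedged sketch: the mapping below (everything to GPT-5.2) is an assumption for illustration, not OpenAI's documented migration path.

```python
# Hypothetical migration shim: route retired model names to current
# replacements so call sites need no changes when a model is retired.
# The replacement choices below are assumptions, not OpenAI guidance.

import warnings

# The five models retired from ChatGPT, mapped to an assumed successor.
RETIRED_MODELS = {
    "gpt-4o": "gpt-5.2",
    "gpt-4.1": "gpt-5.2",
    "gpt-4.1-mini": "gpt-5.2",
    "o4-mini": "gpt-5.2",
    "gpt-5": "gpt-5.2",
}

def resolve_model(requested: str) -> str:
    """Return a supported model ID, warning if the requested one is retired."""
    if requested in RETIRED_MODELS:
        replacement = RETIRED_MODELS[requested]
        warnings.warn(
            f"{requested} is retired from ChatGPT; routing to {replacement}",
            DeprecationWarning,
        )
        return replacement
    return requested

print(resolve_model("gpt-4o"))   # routed to the assumed successor
print(resolve_model("gpt-5.2"))  # current model passes through unchanged
```

Centralizing the mapping also gives one place to grep when the API-side retirement eventually follows the ChatGPT one.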
Expanded Context and Product Updates
Several product updates shipped alongside the strategic announcements:
256K thinking context (February 20): ChatGPT’s thinking mode now supports 256K total tokens — 128K input plus 128K max output — up from 196K previously. For complex analysis tasks that require extensive reasoning, this removes a constraint that previously forced users to break problems into smaller pieces.
Interactive code blocks (February 19): Users can now write, edit, and preview code inline in ChatGPT. Diagrams and mini-applications render directly in the chat interface, and a split-screen view allows side-by-side code review. This blurs the line between ChatGPT as a conversational tool and ChatGPT as a development environment.
20-file uploads: The upload limit doubled from 10 to 20 files per conversation, with broader file type support.
Security features: OpenAI added Lockdown Mode for high-risk users and Elevated Risk labels across ChatGPT, Atlas, and Codex. As AI agents gain more autonomy and access to enterprise systems, these kinds of governance features move from nice-to-have to essential.
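The 256K figure above splits into a 128K input budget and a 128K output budget, so a long prompt can still fail even when the total seems to fit. A quick pre-flight check can be sketched as below; it uses a crude roughly-four-characters-per-token heuristic rather than a real tokenizer, so treat the numbers as illustrative.

```python
# Rough budget check for the assumed 256K thinking-mode window
# (128K input + 128K output). The ~4 chars/token estimate is a
# heuristic for English text, not a real tokenizer.

INPUT_LIMIT = 128_000
OUTPUT_LIMIT = 128_000

def approx_tokens(text: str) -> int:
    """Very rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def fits_budget(prompt: str, max_output_tokens: int = OUTPUT_LIMIT) -> bool:
    """Check whether the prompt and requested output fit the assumed split."""
    return approx_tokens(prompt) <= INPUT_LIMIT and max_output_tokens <= OUTPUT_LIMIT

print(fits_budget("Summarize this report."))  # small prompt fits
print(fits_budget("x" * 600_000))             # ~150K estimated tokens: too large
```

For production use, a real tokenizer would replace `approx_tokens`; the structure of the check stays the same.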
The Competitive Picture
OpenAI’s week of announcements doesn’t exist in a vacuum. February 2026 has been one of the most competitive months in AI history:
- Claude Opus 4.6 and Sonnet 4.6 from Anthropic
- Gemini 3.1 Pro from Google, leading on 12 of 18 benchmarks
- MiniMax M2.5 matching frontier SWE-bench scores at a fraction of the cost
- Qwen 3.5 offering open-weight models covering 201 languages
In this environment, OpenAI’s Frontier Alliance strategy is a bet that distribution and enterprise integration matter more than marginal benchmark advantages. If the consulting partnerships land major enterprise deployments before competitors establish their own channels, the model-level performance differences become secondary. In a commoditizing market, the platform that enterprises are already using wins — even if it’s not always the one with the highest benchmark score.
For businesses navigating this landscape, the takeaway is familiar but increasingly urgent: build for model flexibility. The enterprise AI market is moving too fast to lock into any single provider. The organizations that will benefit most are those building model-agnostic architectures that can take advantage of whichever platform — Frontier, Claude Cowork, Vertex AI, or something else entirely — proves most effective for their specific workflows.
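That architecture advice can be made concrete with a small sketch: business logic codes against a provider interface, and each vendor gets a thin adapter behind it. The `EchoProvider` below is a stand-in for local testing, not a real SDK wrapper; a production adapter would wrap the vendor's client library behind the same `complete` method.

```python
# Minimal model-agnostic layer: workflows depend on an interface,
# never a vendor SDK, so swapping providers is a one-class change.
# Provider names here are placeholders, not real SDK calls.

from dataclasses import dataclass
from typing import Protocol

class ChatProvider(Protocol):
    """Anything with a complete() method can back a workflow."""
    def complete(self, prompt: str) -> str: ...

@dataclass
class EchoProvider:
    """Stand-in adapter for testing; a real one would call a vendor SDK."""
    name: str

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

def run_workflow(provider: ChatProvider, ticket: str) -> str:
    """Business logic sees only the interface, so the vendor is swappable."""
    return provider.complete(f"Triage this ticket: {ticket}")

print(run_workflow(EchoProvider("frontier"), "login failure"))
print(run_workflow(EchoProvider("claude"), "login failure"))
```

Because `ChatProvider` is a structural protocol, adapters for Frontier, Claude Cowork, or Vertex AI need no shared base class, only a matching `complete` signature.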
Official announcements: Frontier Alliances | Fortune coverage | CNBC coverage | TechCrunch
