Most enterprises are running AI pilots right now. Some are testing Salesforce agents. Others are experimenting with ServiceNow automation or deploying Microsoft Copilots. The pilots are going well. Teams are excited. ROI looks promising. So leadership asks the natural question: should we expand this to the whole organization? And that's when the real decision arrives. Because scaling a pilot means committing to a platform, and committing to a platform means making architectural choices that will either enable or constrain your AI capabilities for years to come. The difference between a smart platform choice and an expensive mistake often comes down to one thing: whether you asked the right questions before you scaled.

In Today’s Email:

In today's email, we're addressing the question that comes right after your AI pilots start showing promise: where do we build from here? We'll cover:

  1. Why your platform choice is really an architectural decision that will shape your AI capabilities for years

  2. The four approaches most enterprises are considering, and the tradeoffs each one involves

  3. A framework for assessing which path makes sense for your organization's maturity and capabilities

  4. What you should be doing in the next 90 days to make this decision well

News

Before We Dive In: The Platform Race Intensifies

Last week, OpenAI CEO Sam Altman issued an internal "code red" memo to employees, signaling the intensity of competition in the AI platform space. The trigger was Google's Gemini 3 launch in November, which outperformed ChatGPT on several key benchmarks, combined with Anthropic's release of Claude Opus 4.5 on November 24. According to reports from The Verge and The Information, OpenAI is fast-tracking the release of GPT-5.2, originally scheduled for later in December, with a potential launch as early as December 9 (I'm writing this on 12/8, although it won't actually reach you until 12/10, so the release might be old news by then). The memo reportedly stated that OpenAI's new reasoning model is "ahead of Gemini 3" in internal evaluations, but the accelerated timeline has delayed other initiatives, including advertising plans and AI agent development. This is the second "code red" declaration in the AI space in three years. The first came in December 2022, when Google issued its own urgent response to ChatGPT's surprise launch. The roles have now reversed, with OpenAI scrambling to catch up to Google's momentum. For enterprises watching this race, the message is clear: the AI platform landscape is moving faster than any vendor roadmap can predict, which makes your choice of platform architecture even more critical.

Why Platform Choice Matters Now (Not Later)

When ChatGPT launched, it showed executives something they hadn't quite believed was possible: AI that could actually understand context, follow complex instructions, and produce work that didn't immediately look like garbage. The demos were impressive. The potential seemed obvious. And every enterprise software vendor took note.

What followed was predictable. Salesforce announced Einstein GPT, then Agentforce. ServiceNow added AI to their workflow platform. SAP built AI into their ERP systems. Microsoft put Copilot everywhere. Oracle, Workday, and every other major vendor followed with their own AI features. The message was clear: you don't need to figure out AI on your own because we've already built it into the platforms you're using.

For most organizations, this sounded like the perfect solution. Your vendors handle the complexity, you get AI capabilities without hiring a data science team, and everything integrates with your existing systems. It's the classic "nobody ever got fired for buying IBM" logic, updated for the AI age.

But here's what the vendor pitches don't emphasize: when you choose an enterprise AI platform, you're not just choosing software. You're choosing an architecture that will determine what kinds of AI capabilities you can deploy, how quickly you can adapt to new developments, and whether you'll be able to take advantage of better models when they arrive next year or the year after that.

The hidden cost of getting this wrong shows up in stages. First, your pilot works great, so you expand it. Then you discover limitations in what the platform can actually do. Then you realize you can't easily switch to better models or add capabilities the vendor doesn't support. By the time you understand you need a different approach, you've invested millions and trained hundreds of people on a platform that can't deliver what you actually need. The switching costs become prohibitive, so you either accept the limitations or start a painful migration process.

This is why platform choice matters right now, while you're still running pilots and before you've committed to enterprise-wide deployment. The decision you make in the next few months will either give you flexibility to evolve as AI capabilities advance, or lock you into a vendor's roadmap and architectural decisions. There's no neutral choice and no way to delay the decision. Not choosing a platform is choosing to let individual teams make their own choices, which creates its own set of problems.

The Four Platform Approaches

Most enterprises are evaluating one of four basic approaches to building their AI capabilities. Each has legitimate use cases, and each comes with tradeoffs that may not be obvious until you're deep into implementation.

Extend Your Core Systems

This is the path most enterprises consider first. You're already running Salesforce for CRM, or SAP for ERP, or Oracle for financials. Now these vendors are adding AI capabilities directly into the platforms you already use. Salesforce calls it Agentforce. SAP is embedding AI across their suite. Oracle is doing the same. The pitch is compelling: get AI capabilities without adding new systems, maintain your existing workflows, and leverage the data that's already in your core platforms.

The advantages are real. Your teams already know these systems. The data is already there. Integration is handled by the vendor. Training requirements are lower because people are working in familiar interfaces. For organizations where IT complexity is already a problem, staying within existing systems can make a lot of sense.

The tradeoffs show up over time. You're limited to whatever AI capabilities your vendor chooses to build. If they're slow to adopt new models or approaches, you wait. If their architecture makes certain things difficult or impossible, you're stuck. And you're tightly coupled to that vendor's technology choices, pricing model, and roadmap. This works well when your use cases align closely with what the vendor offers. It becomes constraining when you need something different.

Workflow-First Platforms

ServiceNow, Microsoft Power Platform, and similar tools take a different angle. Instead of embedding AI into business applications, they provide platforms for building automated workflows that can incorporate AI capabilities. The idea is that most business processes can be modeled as workflows, and AI agents can be deployed within those workflow frameworks. Salesforce, with its Agentforce platform, bridges both approaches: it extends your core systems while also offering workflow-first capabilities.

The strength of this approach is flexibility within structure. You can build custom workflows that match your actual business processes rather than adapting to how a vendor thinks your business should work. These platforms typically offer extensive integration capabilities, so you can connect AI to multiple back-end systems.

The challenges emerge at scale. Workflow platforms can become complex quickly as you add more processes and integrations. Maintaining dozens or hundreds of automated workflows requires coordination and governance that many organizations underestimate.

AI-First Platforms

Some organizations are choosing to build directly on foundation model providers like Anthropic, OpenAI, or Google. Instead of getting AI as a feature of another platform, they're treating AI capabilities as the foundation and building everything else around it. This might mean using Claude or GPT-4 through APIs, or working with specialized AI platform companies that provide orchestration and tooling on top of foundation models.

The advantage is maximum flexibility. The tradeoff is obvious: you're building more yourself. This requires internal AI/ML expertise, development resources, and the ability to handle integration, security, monitoring, and all the other operational concerns that vendor platforms handle for you.

This approach makes sense for organizations with strong technical teams, unique requirements that vendor platforms don't address well, or strategic conviction that AI capabilities will be core to their competitive advantage.
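In practice, the "build on foundation models" approach usually starts with a thin abstraction between your business logic and any one vendor's API, so swapping models later is a configuration change rather than a rewrite. A minimal sketch of that idea; all class and function names here are hypothetical, and the real vendor call is stubbed out:

```python
# Hypothetical sketch: a thin provider-agnostic layer so the rest of your
# stack never imports a specific vendor SDK directly. Names are
# illustrative, not any real library's API.
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Interface your internal tooling codes against."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class AnthropicProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        # In production this would call the vendor's API via its SDK.
        raise NotImplementedError("wire up the real SDK here")

class StubProvider(ModelProvider):
    """Deterministic stand-in for tests and local development."""
    def complete(self, prompt: str) -> str:
        return f"stub-response:{len(prompt)}"

def draft_outreach(provider: ModelProvider, customer_notes: str) -> str:
    # Business logic depends only on the interface, so switching vendors
    # (or models) doesn't touch this function.
    return provider.complete(f"Draft outreach based on: {customer_notes}")

print(draft_outreach(StubProvider(), "renewal due in Q2"))
```

The point isn't the specific classes; it's that the coupling to any one model lives in exactly one place.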

Hybrid Approach

More sophisticated organizations are pursuing a hybrid strategy: use vendor platforms where they make sense, build custom AI solutions where needed, and create an orchestration layer that allows different systems to work together. This might mean using Salesforce for customer-facing AI, building custom agents on Claude for internal operations, and connecting everything through a unified workflow engine.

The benefit is that you get the best of both worlds. The cost is coordination complexity. Someone has to design the overall architecture, maintain the integration layer, manage multiple vendor relationships, and ensure everything works together reliably. This requires mature IT governance, clear architectural standards, and the organizational capability to execute on a more complex technology strategy. Most organizations pursuing this approach are already operating at higher levels of technical maturity.
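What that orchestration layer looks like in miniature: a router that owns the mapping from request type to the system responsible for it, giving you one integration point to maintain instead of point-to-point wiring between every system. This is an illustrative sketch with made-up handlers, not any vendor's API:

```python
# Hypothetical sketch of a hybrid orchestration layer. Handler functions
# stand in for real integrations (vendor platform vs. in-house agents).

def vendor_platform_agent(request: dict) -> str:
    # e.g., a customer-facing agent hosted on a vendor platform
    return f"vendor platform handled: {request['task']}"

def custom_internal_agent(request: dict) -> str:
    # e.g., an internal-operations agent built in-house
    return f"custom agent handled: {request['task']}"

ROUTES = {
    "customer_facing": vendor_platform_agent,
    "internal_ops": custom_internal_agent,
}

def route(request: dict) -> str:
    """Single entry point: the routing table is the only place that
    knows which system owns which domain."""
    handler = ROUTES.get(request["domain"])
    if handler is None:
        raise ValueError(f"no handler for domain {request['domain']!r}")
    return handler(request)

print(route({"domain": "internal_ops", "task": "summarize tickets"}))
```

The coordination complexity the section describes lives in keeping that routing table, and the contracts behind it, accurate as both sides evolve.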

The Assessment Framework

The right platform choice for your organization depends less on which vendor has the best marketing and more on honest answers to a few critical questions about where you are now and where you're trying to go.

Where is your AI maturity right now?

If you worked through the AIOps Maturity Model assessment from a few weeks ago, you already have a sense of where your organization sits on the spectrum from experimentation to systematic deployment. This matters more than almost anything else when choosing a platform approach.

Your maturity level should guide your platform ambition. Trying to build AI-first architectures when you're still at Level 2 usually leads to expensive failures. Settling for limited vendor platforms when you're at Level 4 or 5 wastes your hard-won capability.

What problems are you actually trying to solve?

This sounds obvious, but most organizations skip past it too quickly. They know they want "AI agents" or "AI automation" without getting specific about what business problems those agents need to solve.

The specificity of your answer matters. "We want AI to help our sales team" is too vague. "We want AI agents that can analyze customer interaction history, identify upsell opportunities, and draft personalized outreach that our sales reps can refine" is specific enough to evaluate whether a given platform can deliver.

Do you need agents or better automation?

The term "AI agent" gets used loosely, and the distinction between agents and automation matters when choosing platforms. Automation follows predefined rules and workflows (deterministic). Agents make decisions, adapt to context, and operate with more autonomy (probabilistic). Most agentic platforms blend both capabilities. If you're trying to deploy agents that can operate more independently, understand complex context, and make judgment calls, you need platforms designed around agentic architectures.

Ask yourself what level of autonomy you actually need. If the answer is "AI should help humans complete tasks faster," that's automation. If the answer is "AI should handle entire categories of work with minimal human intervention," that's agents. The platform requirements are different.
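The deterministic-versus-probabilistic distinction can be made concrete in a few lines. A hedged sketch, with a deterministic stand-in for what would be a model call in a real agent:

```python
# Hypothetical sketch: automation vs. agent. The decide() policy below is a
# deterministic stand-in; a real agent would call a model at that point.

def automation(ticket: dict) -> str:
    """Automation: a fixed rule maps inputs to one predefined outcome."""
    if ticket["priority"] == "high":
        return "escalate"
    return "queue"

def agent(ticket: dict, decide, max_steps: int = 5) -> list:
    """Agent: a policy chooses the next action from evolving context,
    looping until it judges the work done (or a step budget runs out)."""
    actions = []
    for _ in range(max_steps):
        action = decide(ticket, actions)  # the judgment call
        actions.append(action)
        if action == "resolve":
            break
    return actions

# Stand-in policy for local testing; a real agent would query an LLM here.
def fake_policy(ticket, history):
    return "gather_context" if not history else "resolve"

print(automation({"priority": "high"}))         # -> escalate
print(agent({"priority": "low"}, fake_policy))  # -> ['gather_context', 'resolve']
```

Notice the structural difference: the automation's behavior is fully enumerable in advance, while the agent's action sequence depends on whatever the policy decides at each step, which is exactly why agents need different monitoring and guardrails.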

What's your technical capability to build versus buy?

This is the question most executives want to avoid because the honest answer is often uncomfortable. Building and operating AI systems requires specific capabilities: people who understand machine learning, engineers who can integrate AI with existing systems, and operations teams who can monitor and maintain AI deployments.

Some organizations have these capabilities. Most don't, at least not yet. And there's no shame in that. The question is whether you're willing to build that capability or whether you need to rely on vendors to provide it.

The Decision Path

Based on your answers to those assessment questions, a clearer picture should be emerging about which platform approach makes sense for your organization right now. Here's how to think through the decision without pretending there's one right answer for everyone.

If you're at Maturity Level 1 or 2: Start with what you know

If you're at Maturity Level 3 or 4: Match platform to ambition

If you're at Maturity Level 5: You're probably building custom already

Red flags that suggest you're choosing the wrong approach

Certain patterns indicate that an organization's platform choice doesn't match their actual situation or needs. Watch for these warning signs:

  1. You're at Level 1 or 2 but planning to build custom AI-first architectures. This almost always ends badly. You don't yet know enough about what you need to make good architectural decisions, and you'll waste time and money building infrastructure before you understand your actual requirements.

  2. You're at Level 4 or 5 but staying with vendor platforms primarily because switching seems hard. This is the sunk cost fallacy in action. The pain of switching grows over time, so delaying the decision doesn't reduce the pain; it increases it.

  3. Your AI roadmap includes capabilities that your chosen platform doesn't support, but you're planning to "figure it out later" or assuming the vendor will eventually add what you need. Hope is not a strategy. If your platform can't support your roadmap, either change your platform or change your roadmap.

  4. You're choosing platforms based on vendor relationships rather than technical fit.

  5. Your platform choice is being driven by a single pilot that went well, without broader assessment of whether that platform will serve your other use cases. One successful pilot does not validate an enterprise platform strategy.

  6. You're planning a hybrid approach but you don't have the organizational capability to manage that complexity.

The right time to recognize these red flags is before you've committed to enterprise-wide deployment, not after. Most platform mistakes are obvious in retrospect. The skill is seeing them before they become expensive problems.

What This Means for Your Next 90 Days

Platform decisions often happen slowly and then all at once. You run a few pilots, they show promise, someone asks about scaling, and suddenly you're in procurement conversations about enterprise licenses and multi-year commitments. The organizations that navigate this well are the ones that use the pilot phase to gather intelligence about platform fit before they're forced to make large-scale decisions.

Here's what you should be doing in the next 90 days, regardless of which platform approach you're leaning toward:

  1. Map your current platform commitments

  2. Assess your internal AI and ML capability

  3. Run small experiments before committing

  4. Build escape hatches even in vendor solutions

  5. Have the governance conversation now (Governance-by-design)
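As one concrete example of an escape hatch, keeping prompts and evaluation cases in a plain, vendor-neutral format you own, rather than inside a platform's proprietary builder, means those assets survive a platform migration. A minimal sketch; the file layout and field names are illustrative assumptions, not any platform's schema:

```python
# Hypothetical sketch of a portable prompt library: plain data you control,
# exportable as JSON, consumable by any platform that accepts raw text.
import json

PROMPT_LIBRARY = {
    "upsell_draft": {
        "template": "Given this interaction history: {history}\n"
                    "Identify upsell opportunities and draft outreach.",
        "eval_cases": [
            {"history": "bought starter plan, asked about API limits",
             "must_mention": "API"},
        ],
    },
}

def render(prompt_id: str, **kwargs) -> str:
    """Render a prompt from the portable library; the result is raw text,
    so no single vendor owns the asset."""
    return PROMPT_LIBRARY[prompt_id]["template"].format(**kwargs)

# Exporting to JSON is the escape hatch: the library round-trips cleanly
# and can be re-imported into whatever platform comes next.
portable = json.dumps(PROMPT_LIBRARY, indent=2)

print(render("upsell_draft", history="bought starter plan"))
```

The same principle applies to evaluation data and integration contracts: whatever you would need in order to leave, keep a copy of it in a format you control.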

The platform wars are just beginning. Salesforce, Microsoft, Google, ServiceNow, SAP, and every other major enterprise vendor are racing to embed AI into their offerings and convince you that their approach is the one you should bet on. They'll all have impressive demos, customer testimonials, and roadmaps that promise to address any concern you raise.

None of them are lying. Each platform approach works well for some organizations in some situations. The question is not which platform is best in the abstract. The question is which approach matches where your organization is right now and where you're trying to go. That depends on your maturity level, your technical capability, your specific use cases, and your tolerance for vendor lock-in versus operational complexity.

The organizations that get this right are the ones that think carefully about platform choice before they're forced into rapid decisions by successful pilots and executive pressure to scale. They use the pilot phase to learn not just whether AI works, but what kind of platform architecture will actually serve their needs. They're honest about their capabilities and constraints. And they build in flexibility even when committing to specific platforms, because they know the AI landscape is changing faster than any vendor roadmap can predict.

The organizations that get this wrong are the ones that treat platform selection as a procurement decision rather than an architectural one. They choose based on existing vendor relationships or whoever has the best demo. They skip the hard questions about maturity, capability, and real requirements. And they discover 18 months later that they've locked themselves into platforms that can't deliver what they actually need.

There's no neutral choice here. Deciding not to choose a platform is still a choice, one that typically leads to uncoordinated experimentation and technical debt. Choosing to move fast without thinking through the implications is a choice that often leads to expensive do-overs. The skill is making deliberate choices based on clear-eyed assessment of where you are and where you're going.

Your AI platform decision matters more than most of the technology choices you'll make this year. It will shape what you can build, how quickly you can adapt, and whether you'll have options or constraints as AI capabilities continue to evolve. The good news is that you don't have to get it perfect. You just have to be thoughtful enough to avoid the obvious mistakes and maintain enough flexibility to adjust as you learn.

The next 90 days are your window to get this right. Use them well.

AI Agents Accelerator

Get AI Agents for FREE. Master powerful AI Agents (no coding), automate workflows to save time, scale your business, and earn more. Join 4,000+ entrepreneurs, AI agency owners and professionals rec...
