Strategy Before Shiny Objects

If there’s one pattern I keep seeing, it’s this: a leader hears about an AI use case, a vendor demos something slick, and suddenly the room is full of enthusiasm — and the project starts the next week. I get the excitement. I’m a fan of new tech too. But from my time in BI and from what I’ve learned digging into AI, the smarter move is almost always the same: strategy first, tools second.

Buying or building shiny tech without a strategy is expensive theater. It looks like progress. It feels innovative. But without clear goals, data that can support the solution, and a plan to change how work actually gets done, it’s unlikely to deliver sustained value.


Why strategy matters

AI can automate work, surface insights, and scale personalization — but it won’t decide which problems are worth solving. Too many organizations chase the cool tool and then wonder why the results are underwhelming. The real value comes from aligning technology to a measurable business outcome and preparing the org to adopt the solution.

Analogy: don’t buy a Ferrari before you learn how to drive. The car won’t magically make you a better driver; it just sits in the driveway looking impressive until you know how to handle it.


A practical, repeatable framework

Here’s a simple framework I use (and have found useful when talking with execs and technical teams) to make sure strategy comes first:

  1. Define the business problem and success metrics.

    Be specific. “Improve customer experience” is too vague. “Reduce churn by 15% in Q4” is actionable. Pick 1–2 KPIs you’ll measure.

  2. Map the current process and pain points.

    Document how the work gets done today, who touches it, and where the bottlenecks or frustrations are. This helps identify where automation or prediction could help.

  3. Assess data readiness.

    Inventory where the required data lives, how clean it is, and whether definitions are consistent. If you can’t trust the baseline metrics, AI will only multiply the noise. (A minimal readiness-check sketch follows this list.)

  4. Run the “should vs. could” test.

    Just because you could apply AI doesn’t mean you should. Favor tasks that are repetitive, rules-based, and measurable, and avoid using AI for high-stakes judgments without human oversight.

  5. Design a narrow pilot.

    Start with a small, well-scoped experiment: clear input, output, and measurement period. Use a simple success/fail criterion and a short runway (6–12 weeks for many pilots).

  6. Put governance and human-in-the-loop rules in place.

    Decide where humans will review outputs, how you’ll monitor model performance, who owns outcomes, and how you’ll surface problems like bias or drift. (A simple drift-check sketch also follows this list.)

  7. Plan change management and integration.

    Outputs must be embedded into daily workflows. If a model flags a high-risk customer but no one knows to act on it, the business value disappears. Train teams and update processes.

  8. Measure, learn, iterate, and scale.

    If the pilot meets your criteria, iterate to improve, then scale thoughtfully. If not, capture learnings — failed pilots can still reduce risk and inform future strategy.
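
To make step 3 concrete, here’s a minimal readiness check in Python. Everything specific in it is an assumption for the sketch, not a prescription: the pandas approach, the file name, and the customer_id key column.

    import pandas as pd

    # Minimal readiness report: row count, duplicate keys, and per-column
    # null rates. What counts as "good enough" depends on your use case.
    def readiness_report(df: pd.DataFrame, key: str = "customer_id") -> dict:
        return {
            "rows": len(df),
            "duplicate_keys": int(df[key].duplicated().sum()),
            "null_rate_by_column": df.isna().mean().round(3).to_dict(),
        }

    df = pd.read_csv("customers.csv")  # hypothetical extract
    print(readiness_report(df))

If the duplicate count or the null rates surprise you, that’s the readiness conversation to have before any model work starts.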

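For the monitoring piece of step 6, one common lightweight drift check is the population stability index (PSI), which compares the distribution a model was trained on against what it sees in production. The bin count and the 0.2 alert threshold below are widely used rules of thumb rather than requirements, and the data here is simulated:

    import numpy as np

    def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """Population stability index between two samples of one feature."""
        edges = np.histogram_bin_edges(expected, bins=bins)
        expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        # Floor empty buckets at a tiny probability to avoid log(0).
        expected_pct = np.clip(expected_pct, 1e-6, None)
        actual_pct = np.clip(actual_pct, 1e-6, None)
        return float(np.sum((actual_pct - expected_pct)
                            * np.log(actual_pct / expected_pct)))

    rng = np.random.default_rng(0)
    train_scores = rng.normal(0.0, 1.0, 5000)  # simulated training data
    live_scores = rng.normal(0.3, 1.1, 5000)   # simulated production data
    if psi(train_scores, live_scores) > 0.2:   # common "significant shift" cutoff
        print("Drift detected: flag for human review")

The point isn’t the specific statistic; it’s that someone owns the check, it runs on a schedule, and a threshold breach routes to a human.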

Mini case example

A logistics company I followed wanted faster delivery routing. The flashy option was an enterprise routing AI. Instead of jumping in, they defined success (reduce route time by 10%), mapped current routes, and discovered the data was scattered across three systems with inconsistent location formats. They ran a 6-week pilot: clean the location data and run optimized routes for one region. Result: a 7% route-time reduction in the pilot region and clear ROI to justify the full rollout. The difference? They fixed the foundation before buying the Ferrari.
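
That unglamorous cleanup step is where most of the effort goes. As a purely hypothetical illustration of what “inconsistent location formats” can mean in practice (the depot-code formats and rules here are invented, not taken from the actual company):

    import re

    # Hypothetical: three systems store the same depot differently,
    # e.g. "NYC-01", "nyc 01", "NYC01". Normalize to one canonical form.
    def normalize_location(raw: str) -> str:
        cleaned = re.sub(r"[\s\-_]+", "", raw.strip().upper())
        match = re.fullmatch(r"([A-Z]+)(\d+)", cleaned)
        if not match:
            raise ValueError(f"Unrecognized location format: {raw!r}")
        city, num = match.groups()
        return f"{city}-{int(num):02d}"

    assert normalize_location("nyc 01") == "NYC-01"
    assert normalize_location("NYC01") == "NYC-01"
    assert normalize_location("Nyc-1") == "NYC-01"

Boring work, but it’s exactly the kind of foundation-fixing the story credits for the pilot’s measurable result.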


Final thought

Shiny tech sells headlines. Strategy delivers outcomes. If you want AI to create real business value, start with the problem, not the product. Define your success, check your data, and pilot narrowly. The rest (the fancy demos, the buzzwords) comes later.
