You have bottlenecks. Manual processes that eat up hours. Customer inquiries that take too long. Data sitting in spreadsheets that nobody analyzes. Someone on your leadership team has suggested AI as the answer, and maybe they are right. But maybe they are not.
This is not a trivial distinction. According to the RAND Corporation, more than 80% of AI projects fail, twice the failure rate of non-AI technology projects.1 And the most common root cause is not bad technology. It is that the organization misunderstood what problem needed to be solved with AI in the first place.1 They picked the wrong problem, or they picked the right problem and applied the wrong tool.
Before you spend a dollar on AI, you need a clear-eyed way to determine whether your problem is actually an AI problem. This article gives you that framework.
The Three Buckets: Automation, AI, or Something Else Entirely
Most business bottlenecks fall into one of three categories, and the right solution depends on which category your problem belongs to, not on what technology is getting the most press coverage.
Bucket 1: Rule-Based Automation (You Probably Don't Need AI)
If your process follows clear, predictable rules, and a human could write a step-by-step instruction manual for it, you likely need robotic process automation (RPA) or basic workflow software, not AI. Think invoice processing with consistent formats, routing emails based on keywords, or generating standardized reports from structured data.
IBM's research draws the distinction clearly: RPA is process-driven, while AI is data-driven. RPA follows the exact steps an end user defines. AI finds patterns in data that humans may not see.2 Confusing the two is one of the most expensive mistakes companies make. You do not need a machine learning model to move data between two spreadsheets. You need a script.
McKinsey's research shows that many of the cost reductions companies attribute to "AI" are actually coming from straightforward process automation in functions like HR, IT, and legal.3 These are valuable improvements. They just are not AI.
Bucket 2: Machine Learning and Predictive AI (Pattern Recognition at Scale)
This is where AI genuinely earns its keep. If your problem involves recognizing patterns in large, messy datasets (predicting which customers will churn, detecting fraud in financial transactions, optimizing supply chain logistics, or forecasting demand), machine learning is likely the right tool.
The key characteristics of a good machine learning problem are: the data is too complex or voluminous for humans to analyze manually, the patterns change over time and require continuous learning, and the decisions need to be made at a speed or scale that humans cannot match.
McKinsey estimates that 75% of the $2.6 to $4.4 trillion in annual economic value from generative AI alone falls in four areas: customer operations, marketing and sales, software engineering, and R&D.4 And when it comes to traditional AI and ML, the highest-value applications are in supply chain optimization, where respondents most commonly report revenue increases of more than 5%.3
Bucket 3: Generative AI (Creating, Summarizing, Conversing)
Generative AI is a different animal. It excels at creating content, summarizing information, answering questions in natural language, and handling tasks where the output is text, images, or code rather than a prediction or classification.
Deloitte's State of Generative AI report found that the most advanced GenAI initiatives target IT (28%), operations (11%), marketing (10%), and customer service (8%).5 Customer-facing chatbots, internal knowledge bases, content creation, and code generation are the use cases delivering real returns.
But here is the critical nuance: generative AI is not a replacement for analytical AI. If you need to predict which product will sell best next quarter, you need machine learning. If you need to draft 50 personalized sales emails, you need generative AI. Different tools, different problems.
Five Questions to Ask Before Calling It an AI Problem
Before investing in any AI initiative, run your business problem through these five filters. If you can answer "yes" to at least four, you likely have a genuine AI opportunity. If not, simpler solutions will probably serve you better, and sooner.
1. Is the task too complex for fixed rules?
If someone on your team could write a complete decision tree covering every scenario, you do not need AI. You need automation. AI becomes necessary when the logic is too nuanced, when exceptions outnumber rules, or when the "right answer" depends on context that changes. Fraud detection is a good example: the patterns shift constantly, which is why rule-based systems miss sophisticated fraud while machine learning catches it.
2. Do you have enough data, and is it accessible?
AI models need training data, and the amount matters. For traditional machine learning, the standard rule of thumb is ten times as many data points as the number of features (variables) your model tracks.6 For deep learning and complex tasks, you may need thousands or even tens of thousands of labeled examples per category.6
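These rules of thumb are easy to turn into a quick sizing check. A minimal sketch (the 10x factor and the per-category floor are the heuristics cited above, not hard limits):

```python
def minimum_training_rows(num_features: int, factor: int = 10) -> int:
    """Rule-of-thumb floor for traditional ML: ~10x data points per feature."""
    return num_features * factor

def enough_labeled_examples(examples_per_class: dict, floor: int = 1000) -> bool:
    """Complex classification tasks: want >= `floor` labeled examples per category."""
    return all(count >= floor for count in examples_per_class.values())

# A churn model tracking 40 customer attributes:
print(minimum_training_rows(40))  # 400 rows as a bare minimum
# Two labeled classes, both above the 1,000-example floor:
print(enough_labeled_examples({"churned": 1200, "retained": 9500}))  # True
```

If your dataset clears these floors comfortably, data volume is probably not your constraint; if it misses them by an order of magnitude, it probably is.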
But volume alone is not sufficient. RSM's 2025 Middle Market AI Survey found that 32% of mid-market companies cite data quality as a top barrier to AI implementation.7 Industry experience consistently shows that companies with mature data practices achieve significantly better AI outcomes. If your data is scattered across disconnected systems, inconsistently formatted, or riddled with gaps, fixing data infrastructure should come before any AI project.
3. Is the volume high enough to justify the investment?
AI is not cost-effective for low-frequency tasks. If your customer service team handles 30 inquiries a day, a chatbot may cost more to build and maintain than the labor it saves. If they handle 3,000, the math changes dramatically.
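The volume math can be made concrete with a simple break-even sketch. All figures below are hypothetical placeholders for illustration, not benchmarks:

```python
def chatbot_breakeven(build_cost: float, annual_maintenance: float,
                      cost_per_human_inquiry: float, deflection_rate: float,
                      inquiries_per_day: float, workdays: int = 260) -> float:
    """Year-one net value: labor saved on deflected inquiries minus bot costs."""
    deflected = inquiries_per_day * workdays * deflection_rate
    savings = deflected * cost_per_human_inquiry
    return savings - (build_cost + annual_maintenance)

# Hypothetical: $60K build, $20K/yr upkeep, $4 per human-handled inquiry,
# bot deflects 50% of inquiries.
print(chatbot_breakeven(60_000, 20_000, 4.0, 0.5, 30))     # -64,400: loses money
print(chatbot_breakeven(60_000, 20_000, 4.0, 0.5, 3_000))  # +1,480,000: easily pays off
```

Same bot, same costs; the only variable that changes is volume. That is the whole argument of this question in one function.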
BCG found that leading companies focus on depth over breadth. They prioritize an average of 3.5 use cases compared with 6.1 for less advanced peers,9 and they expect more than twice the ROI on each one. The lesson: pick problems where AI can operate at a scale that justifies the upfront and ongoing cost.
4. Can you define what "success" looks like with a specific metric?
If you cannot articulate a measurable outcome (reduce response time by 40%, cut manual processing costs by $200K annually, improve forecast accuracy from 65% to 85%), you are not ready for an AI project. You are ready for a strategy conversation.
RAND's research found that one of the primary causes of AI project failure is deploying models optimized for the wrong metrics, or models that do not fit into the overall business workflow.1 Without a clear success metric tied to a real business outcome, even a technically successful AI model will feel like a failure.
5. Is the process stable enough to build on?
AI amplifies whatever process it is built on. If the underlying workflow is broken, inconsistent, or undocumented, AI will scale the dysfunction. Harvard Business Review warned in 2025 about the "AI experimentation trap": companies funding scattered AI pilots disconnected from real business value,10 repeating the same mistakes they made during digital transformation a decade ago.
Before automating a process with AI, make sure the process itself is sound. Sometimes the bottleneck has nothing to do with technology. It may be an organizational design problem, a communication breakdown, or a simple matter of hiring one more person.
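The five filters above reduce to a simple checklist. A sketch (the four-of-five threshold comes from the guidance at the top of this section):

```python
FILTERS = [
    "too_complex_for_fixed_rules",   # Q1
    "enough_accessible_data",        # Q2
    "volume_justifies_investment",   # Q3
    "measurable_success_metric",     # Q4
    "stable_underlying_process",     # Q5
]

def is_ai_problem(answers: dict) -> str:
    """Four or more 'yes' answers suggest a genuine AI opportunity."""
    yes_count = sum(bool(answers.get(q, False)) for q in FILTERS)
    if yes_count >= 4:
        return "likely a genuine AI opportunity"
    return "look at simpler solutions first"

print(is_ai_problem({
    "too_complex_for_fixed_rules": True,
    "enough_accessible_data": True,
    "volume_justifies_investment": False,  # low volume, but everything else holds
    "measurable_success_metric": True,
    "stable_underlying_process": True,
}))  # likely a genuine AI opportunity
```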
Quick Decision Guide: Is This an AI Problem?
- The process follows clear, writable rules with consistent inputs: rule-based automation (RPA, scripts, workflow software), not AI.
- The problem is recognizing patterns in large, messy, shifting datasets at a speed or scale humans cannot match: machine learning and predictive AI.
- The output is text, images, or code (drafting, summarizing, answering questions conversationally): generative AI.
- The root cause is an inconsistent process, a staffing gap, or organizational design: fix the process first; no technology will paper over it.
Where AI Actually Delivers, and Where It Does Not
McKinsey's eight years of AI research paint a consistent picture of where the technology creates measurable value and where it falls short. Understanding these patterns can save you months and significant budget.
High-Value AI Use Cases
- Customer operations: Contact center automation, real-time agent guidance, and conversational AI. McKinsey estimates generative AI could reduce human-serviced contacts by up to 50% in industries like banking and telecom.4
- Marketing and sales: Personalized content, churn prediction, cross-selling models, and demand forecasting. This is where companies most commonly report meaningful revenue increases.3
- Supply chain and inventory: Demand forecasting, logistics optimization, and procurement analytics. This function reports the highest share of revenue increases above 5%.3
- Software engineering: Code generation, testing automation, and technical documentation. Companies consistently report cost savings here.3
- Risk and fraud detection: Financial transaction monitoring, compliance screening, and anomaly detection. Deloitte found that cybersecurity AI initiatives are far more likely to exceed ROI expectations, with 44% delivering above-expected returns.5
Where AI Underperforms Expectations
- Low-volume, high-judgment decisions: Strategic planning, executive hiring, or M&A evaluation. AI works best with lots of data and clear feedback loops. Rare, high-stakes decisions with limited historical data are better served by experienced humans with good frameworks.
- Processes that are broken or undocumented: If the underlying workflow is inconsistent, AI will not fix it. It will scale the inconsistency. Gartner predicts that over 40% of agentic AI projects will be canceled by end of 2027,11 primarily due to escalating costs and unclear business value.
- Tasks where simple automation works: Data entry, file transfers, report formatting, and standardized email routing. These are automation problems, not AI problems. Solving them with AI is like hiring a surgeon to apply a bandage. Technically possible, but expensive and unnecessary.
- Problems that are really about people or processes: If your sales pipeline is slow because your CRM is poorly configured, or customer complaints are rising because of a staffing shortage, AI will not address the root cause. It will paper over it.
The Data Readiness Test
Even when you have confirmed that your problem is a genuine AI problem, there is one more gate to pass: data readiness. Jumping into AI without adequate data is like building a house on sand.
Research from Atlan's AI readiness assessment shows that data maturity goes beyond volume. What matters is the ability to reliably feed your models with clean, relevant information.8 A practical checklist:
- Availability: Is the data you need actually collected today? If the answer is "we would need to start tracking that," you are 6-12 months away from being AI-ready for this use case.
- Quality: Is the data consistent, complete, and accurate? An AIIM industry survey found that 77% of organizations rated their data as average, poor, or very poor in quality and readiness for AI.12
- Accessibility: Can the data be accessed programmatically, or is it locked in PDFs, email threads, or someone's personal spreadsheet? Data silos are the silent killer of AI initiatives.
- Volume: Do you have enough examples for the model to learn patterns? For traditional ML, aim for at least 10x the number of features. For text classification, 1,000+ labeled examples per category is a practical minimum.6
- Governance: Do you have policies governing who can access this data, how it can be used, and how long it is retained? Deloitte found that regulatory compliance jumped from 28% to 38% as the primary obstacle to AI deployment in a single year.13
If you scored poorly on three or more of these items, your first "AI project" should actually be a data infrastructure project. It is less glamorous, but every AI initiative you pursue afterward will benefit from it.
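The checklist translates directly into a scoring sketch. The dimension names mirror the list above, and the three-item threshold comes from the preceding paragraph:

```python
DIMENSIONS = ["availability", "quality", "accessibility", "volume", "governance"]

def data_readiness_verdict(scores: dict) -> str:
    """scores maps each dimension to True (ready) or False (weak)."""
    weak = [d for d in DIMENSIONS if not scores.get(d, False)]
    if len(weak) >= 3:
        return "start with a data infrastructure project (weak: " + ", ".join(weak) + ")"
    return "data readiness is sufficient to proceed"

print(data_readiness_verdict({
    "availability": True, "quality": False, "accessibility": False,
    "volume": True, "governance": False,
}))  # start with a data infrastructure project (weak: quality, accessibility, governance)
```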
An Honest Conversation About AI ROI
The ROI picture for AI is more nuanced than most vendors will tell you. McKinsey's latest data shows that only 39% of organizations attribute any EBIT impact to AI, and among those, most report less than 5% of earnings coming from AI-driven improvements.3 That does not mean AI lacks value. It means the value is concentrated in specific, well-chosen use cases, not in broad, unfocused adoption.
BCG's 2025 research quantifies the upside when AI is done right: leaders achieve 1.7x revenue growth, 3.6x three-year total shareholder return, and 1.6x EBIT margin compared to laggards.14 But "doing it right" means being ruthlessly selective. Those leaders pursue roughly half as many use cases as their peers, focus on the highest-impact opportunities, and commit to measuring outcomes from day one.9
The companies getting the most value from AI are not the ones doing the most AI. They are the ones choosing the right problems.
For mid-market companies in particular, partnership-based approaches (working with specialized vendors or consultants who build alongside your team) succeed roughly 67% of the time, while purely internal builds succeed only one-third as often.15 The right partner will tell you when AI is not the answer. That honesty is itself a form of value.
Start Here: Three Steps This Week
You do not need a six-month strategy engagement to start thinking clearly about whether your problems are AI problems. Three things you can do this week:
Step 1: List your top five operational bottlenecks. Not technology wishes. Actual pain points that cost you time, money, or quality every week. Be specific: "Processing supplier invoices takes 12 hours per week across three people" is useful. "We need to be more data-driven" is not.
Step 2: Run each bottleneck through the decision guide above. For each one, determine whether it is a rule-based automation problem, a pattern-recognition problem that needs AI, or a process/organizational problem that needs fixing before any technology is applied.
Step 3: For the one or two that pass the test, assess your data readiness. Do you have the data, in sufficient quantity and quality, to feed an AI solution? If yes, you have a viable AI opportunity. If no, you have a clear next step: get the data right first.
The most important thing is intellectual honesty. Some of your biggest problems will turn out to be automation problems. Some will be process problems. And one or two will be genuine AI opportunities that could transform how your business operates. The goal is to tell the difference before you spend the money.
If you want help sorting through which of your bottlenecks are AI problems and which are not, a conversation about your specific situation is the best starting point. And we are happy to talk, even if the answer turns out to be "you don't need AI for this."
Sources
- RAND Corporation. Ryseff, J., De Bruhl, B. F., Newberry, S. J. "The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed." 2024. rand.org
- IBM. "What is Robotic Process Automation (RPA)?" 2024. ibm.com
- McKinsey & Company. "The State of AI: How Organizations Are Rewiring to Capture Value." March 2025. mckinsey.com
- McKinsey & Company. "The Economic Potential of Generative AI: The Next Productivity Frontier." June 2023. mckinsey.com
- Deloitte. "State of Generative AI in the Enterprise." 2024. deloitte.com
- Shaip / DataRobot. "How Much Training Data Do You Really Need for Machine Learning?" 2024. shaip.com; datarobot.com
- RSM US LLP. "RSM Middle Market AI Survey 2025." rsmus.com
- Atlan. "AI Readiness Assessment: Your 2026 Implementation Guide." atlan.com
- BCG. "Where's the Value in AI?" October 2024. bcg.com
- Harvard Business Review. "Beware the AI Experimentation Trap." August 2025. hbr.org
- Gartner. "Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027." June 2025. gartner.com
- Gartner / AIIM. "AI & Automation Trends: 2024 Insights & 2025 Outlook." aiim.org
- Deloitte. "State of Generative AI in the Enterprise: Regulatory Compliance Trends." 2024. deloitte.com
- BCG. "AI Leaders Outpace Laggards with Double the Revenue Growth and 40% More Cost Savings." September 2025. bcg.com
- BCG / MIT. "State of AI in Business 2025." Via workos.com