Is This Even AI?
A practical classification framework for distinguishing AI systems from traditional software — essential groundwork before any governance work begins.
Your procurement team just got pitched an "AI-powered" approval workflow. Your HR platform quietly rolled out a "candidate fit score." Your customer support tool now has a "smart reply assistant." Your board is asking what your AI governance posture looks like — and you need an answer.
The problem is that not all of these are actually AI. And treating them as if they are — or aren't — is one of the most common and costly mistakes organizations make when they start building a governance program.
Before you can govern AI, you need to know what you're governing. That starts here.
Why Misclassification Is a Real Risk
Most governance frameworks assume you already know what's in scope. They skip straight to policies, risk tiers, and compliance checklists. But if you haven't correctly identified which systems are AI and which are just software with a marketing label, everything built on top of that foundation is wrong.
Misclassification runs in two directions — and both cause harm.
Over-classification means treating rule-based automation as AI. Governance resources get spread across tools that don't need oversight, while the systems that actually learn from data, shape decisions, and affect people get less attention than they deserve. Teams become desensitized to governance processes. The word "AI" loses meaning.
Under-classification means treating genuine AI as ordinary software. A hiring tool that's quietly reproducing historical bias never gets a fairness review. A generative assistant drafting customer communications never gets a content policy. A vendor contract for an ML-based fraud detection system gets reviewed by procurement instead of legal. By the time something goes wrong, there's no audit trail, no clear accountability, and no defense.
A consistent, practical classification habit is the first line of defense against both.
The Core Distinction: Rules vs. Learning
The most important question in AI classification isn't about the marketing label. It's about how the system actually works — and there are two fundamentally different answers.
A rules-based system does exactly what it was programmed to do. The logic is fixed. Someone — a developer, an administrator, an analyst — wrote explicit instructions: if condition A, then action B. The system follows those instructions every time, without deviation, without learning, without producing anything its designers didn't anticipate.
An AI system works differently. Instead of following explicit rules, it was trained on data — examples, outcomes, patterns — and developed its own internal model of how to respond. That internal model can produce outputs no one explicitly programmed. It can behave differently as conditions change. And critically, it can be wrong in ways that are difficult to predict or explain.
This distinction determines the nature of the governance risk. With a rules-based system, you can audit the rules. You know exactly why it did what it did. With an AI system, the logic is often opaque, the outputs can surprise you, and the system's behavior can shift over time without anyone changing a line of code.
That's why classification matters — and why "AI-powered" on a vendor slide isn't enough to answer the question.
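To make the contrast concrete, here is a minimal sketch using a hypothetical expense-approval example; the function names, threshold, and toy data are invented for illustration. The rules-based version follows logic someone wrote by hand. The learned version derives its cutoff from historical examples, so its behavior shifts whenever the data behind it shifts.

```python
# Hypothetical illustration: rules-based vs. learned behavior.

# Rules-based: an administrator wrote this logic. It never changes on its own,
# and every possible outcome was anticipated by whoever wrote the rule.
def approve_rules_based(amount: float) -> bool:
    return amount <= 5000  # explicit, auditable threshold

# "Learned": the decision boundary is derived from historical examples.
# No one typed the cutoff; it falls out of the data, and it moves
# whenever the system is retrained on different history.
def train_threshold(history: list[tuple[float, bool]]) -> float:
    approved = [amt for amt, ok in history if ok]
    rejected = [amt for amt, ok in history if not ok]
    # Midpoint between the largest approved and smallest rejected amount.
    return (max(approved) + min(rejected)) / 2

history = [(1200.0, True), (4800.0, True), (9500.0, False), (15000.0, False)]
learned_cutoff = train_threshold(history)

def approve_learned(amount: float) -> bool:
    return amount <= learned_cutoff

print(approve_rules_based(7000))  # False, and you can point to the rule that says why
print(approve_learned(7000))      # depends entirely on what the history contained
```

The auditability gap is the point: the first function can be explained by reading one line; the second can only be explained by examining the data it learned from.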
The Four-Question Classification Framework
Run any system through these four questions in order. They'll tell you whether it's AI, and if so, how much governance attention it needs.
Question 1 — Does it learn from data?
Training, fine-tuning, online learning — any process where the system builds an internal model from examples rather than following rules someone explicitly wrote. If yes, the system is potentially AI. If no, stop here. What you're looking at is automation, not AI, and your existing software controls are likely sufficient.
Question 2 — Does it generate outputs that weren't explicitly programmed?
A system can learn from data and still only produce a fixed set of pre-defined outputs — lower risk. But if it generates novel text, decisions, scores, predictions, or recommendations that no one hard-coded, that's where AI governance begins to apply. A yes to both questions 1 and 2 is your minimum threshold for classification.
Question 3 — Does it affect people's money, jobs, rights, or access to services?
This is the stakes question. A system that recommends articles on your company intranet carries very different risk than one that scores job applicants, adjusts customer pricing, routes insurance claims, or flags employees for review. The higher the consequences for real people, the higher the governance priority.
Question 4 — Can users see how it works and meaningfully override it?
A system that shows its reasoning keeps human judgment in the loop. A system that produces outputs without explanation and makes overrides difficult or invisible is operating more like an autonomous decision-maker. That demands stronger governance, not lighter.
The threshold rule: A yes to questions 1 and 2 puts a system in scope for AI governance. Questions 3 and 4 determine the risk tier and urgency of review.
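For teams that want the threshold rule as a repeatable checklist, here is a minimal sketch of the four questions as a classification function. The question fields mirror the wording above; the tier labels and the exact priority logic are illustrative assumptions, not a prescribed scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class Answers:
    learns_from_data: bool            # Q1: built from examples, not hand-written rules
    unprogrammed_outputs: bool        # Q2: generates outputs no one hard-coded
    affects_people: bool              # Q3: money, jobs, rights, access to services
    opaque_or_hard_to_override: bool  # Q4: little visibility, weak human override

def classify(a: Answers) -> str:
    # Threshold rule: yes to Q1 and Q2 puts the system in scope for AI governance.
    if not (a.learns_from_data and a.unprogrammed_outputs):
        return "Not AI: rules-based automation, existing software controls apply"
    # Q3 and Q4 set priority, not scope. Tier names here are illustrative only.
    if a.affects_people and a.opaque_or_hard_to_override:
        return "Governed AI: high risk tier, review urgently"
    if a.affects_people or a.opaque_or_hard_to_override:
        return "Governed AI: elevated risk tier, schedule a review"
    return "Governed AI: lower risk tier, standard review cadence"

# Example: a candidate scoring tool that learns from data, produces unprogrammed
# scores, affects applicants, and offers recruiters little visibility.
print(classify(Answers(True, True, True, True)))
```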
Applying the Framework: Systems You'll Encounter
Here's how the four questions apply to tools your organization is likely already using or evaluating.
ChatGPT, Claude, Copilot, or any large language model tool
Trained on vast datasets, generates novel text outputs, can produce anything from a helpful summary to a legally consequential draft. Clearly AI. Governance requirements depend on how it's being used. A team member using it to brainstorm internally is different from it drafting customer-facing communications or legal documents.
A candidate fit score in your hiring platform
If the score comes from a machine learning model trained on past hiring decisions, this is AI — and it's high-stakes AI. It affects who gets opportunities. It may be reproducing historical bias. It almost certainly requires a fairness review, documentation, and human oversight controls. The fact that a recruiter makes the final decision doesn't change the classification.
A smart reply assistant in your customer support tool
If it's generating responses based on a language model, it's AI. If agents are sending those responses with minimal review, it's influencing official communications — which means it needs content guidelines, a review protocol, and clarity on when escalation is required.
An automated fraud detection service
If it analyzes transaction patterns, generates risk scores, and updates its own logic over time, it's AI. Whether it's high-risk AI depends on how consequential those scores are and how transparent the scoring process is to the humans acting on it.
A smart approval workflow in your procurement platform
If an administrator configured the routing rules in a setup screen and the logic never updates unless someone manually changes it, this is rules-based automation — not AI. Your existing controls are sufficient, regardless of what the vendor calls it.
On vendor language: "AI-powered," "intelligent," and "adaptive" are marketing terms, not technical classifications. A tool can use all three and still be a rules engine underneath. Apply the four questions regardless of what the pitch deck says.
Three Scenarios That Show What's at Stake
Scenario 1 — When Over-Classification Wastes the Resources You Need
A shared services team enables a new smart approval feature in their procurement platform. It automatically routes purchase orders based on dollar amount, cost center, and vendor type — logic an administrator set up in a configuration screen that never changes unless someone edits it manually.
Because the vendor markets it as intelligent, someone flags it for an AI governance review. Weeks pass. The governance team works through the intake process, the risk assessment, the legal review. Resources that should be protecting the organization are spent on a routing tool.
Meanwhile, the HR technology team deployed a machine learning-based candidate scoring system six months ago. No one flagged it. No one reviewed it. It has never been classified as AI.
The classification: The procurement tool is rules-based automation. The hiring scorer is governed AI. Getting either classification wrong has real consequences.
Scenario 2 — When Under-Classification Hides the Risk
A company's talent platform introduces a candidate fit score displayed next to each applicant's profile. Recruiters quickly learn to sort by score. The top candidates get calls. The rest don't.
The score is generated by a machine learning model trained on five years of the company's historical hiring data. It learned which profiles led to hires — and it's now applying that pattern to every new applicant.
What the model also learned: the company historically hired more men than women for technical roles, and more candidates from a small set of universities. That pattern is now embedded in the score.
Because the feature was introduced as a recommendation helper rather than an AI system, it was never reviewed. No fairness testing. No documentation. No oversight structure. No one outside the HR tech team knows how it works.
When a rejected candidate files a discrimination complaint, the organization has no audit trail, no explanation for how scores were generated, and no record of any human review of the system's logic.
The classification: Governed AI. Learning from data, generating consequential outputs, affecting real people's opportunities, operating without visibility or meaningful override. Every one of the four questions returns a yes.
Scenario 3 — When a Productivity Tool Becomes a Policy Problem
A customer support manager rolls out a generative AI reply assistant to help agents handle volume. Agents type a short description of the customer's issue; the assistant drafts a full response. It saves time. The drafts sound professional. Agents start sending them with minimal edits.
Over time, the assistant starts including details that aren't quite right — refund terms more generous than policy allows, security assurances the company can't back up, commitments to follow-up timelines that no one tracks. Because agents are under pressure and the drafts sound authoritative, these responses go out.
Six months in, a legal review surfaces dozens of written commitments to customers that conflict with official policy. The organization has no record of which responses were AI-generated, no content guidelines that were ever applied, and no escalation protocol that was ever triggered.
The classification: Governed AI. The consequences weren't visible until they accumulated — which is exactly why governance needs to happen before deployment, not after.
Building Your AI Inventory
Classification isn't a one-time exercise. It's a habit that runs every time a new tool is introduced, every time a vendor pushes a platform update, and every time a smart feature appears in software your team already uses.
When conducting an AI inventory, start broad — include anything that might qualify, then apply the four questions to filter.
Cast the net over:
- Any LLM-based tools or copilots, including ones embedded in existing platforms
- ML models in production, whether built internally or sourced from vendors
- Automated scoring, ranking, or decision tools
- Anything a vendor describes as intelligent, adaptive, or predictive
- Any tool that generates text, images, code, or recommendations
Filter out:
- Rule-based systems with fixed logic and no learning component
- Static databases and reporting tools
- Standard statistical analytics
- Any tool where a human explicitly programmed every possible output
What remains is your AI inventory. Every system on that list needs an owner, a provisional risk tier, and a review cadence. That list is also the foundation for everything else in your governance program — risk assessments, vendor reviews, policy scope, regulatory compliance mapping.
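As a sketch of what each inventory entry might capture, assuming a simple in-house record rather than any particular governance tool, the fields below mirror that sentence: the system, an owner, whether it passed the question 1 and 2 threshold, a provisional tier, and a review cadence. The names, tier labels, and example rows are illustrative.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InventoryEntry:
    system: str             # the tool or model, as users know it
    vendor_or_internal: str
    owner: str              # the person accountable for review
    in_scope: bool          # result of questions 1 and 2
    provisional_tier: str   # set by questions 3 and 4; illustrative labels
    review_cadence_days: int
    last_reviewed: date | None = None

inventory = [
    InventoryEntry("Candidate fit score", "HR platform vendor", "HR tech lead",
                   in_scope=True, provisional_tier="high", review_cadence_days=90),
    InventoryEntry("Smart approval workflow", "Procurement platform", "Shared services",
                   in_scope=False, provisional_tier="n/a", review_cadence_days=0),
]

# In-scope systems still waiting on a first review.
overdue = [e for e in inventory if e.in_scope and e.last_reviewed is None]
for entry in overdue:
    print(f"{entry.system}: tier={entry.provisional_tier}, owner={entry.owner}")
```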
Without it, you're governing in the dark.
Quick Reference
| Question | Yes means... |
|---|---|
| Does it learn from data? | Potentially in scope — keep going |
| Does it generate unprogrammed outputs? | In scope for AI governance |
| Does it affect people's money, jobs, or rights? | Higher priority — review sooner |
| Is it opaque and hard to override? | Higher risk tier — govern more closely |
Pro Tip from the Method 9 Team
The most dangerous AI systems in most organizations aren't the ones anyone is worried about. They're the ones that arrived as recommendation helpers or productivity features, got waved through as ordinary software, and have been quietly shaping consequential decisions ever since.
When a new smart feature appears — in a pitch, in a platform update, in a colleague's message — ask three questions before moving on: Does it learn from data? Does it affect anyone's money, job, or opportunities? Can the people using it see how it works and push back when it's wrong?
A double "yes" followed by a shaky "not really" on the third question is a red flag. Treat it as governed AI until you have a reason to do otherwise.
Continue Learning
This is a free preview module. Method 9 members access the full library of compliance frameworks, assessment tools, and implementation templates.
Explore Membership