Nobody knows what's in your tool stack
Ask your head of sales what tools the marketing team uses. Ask your VP of marketing what the CS team pays for. Ask your finance lead how many active SaaS subscriptions you have.
You'll get three different answers. None of them will be right.
The average B2B company with 50 employees runs 15-30 SaaS tools. That's not a guess. We see it in every audit we run. And in almost every case, the team underestimates the number by 30-40%.
Here's why that matters: you're paying for tools nobody uses, missing tools you actually need, and running a stack that can't support AI because nothing connects to anything.
This guide walks you through a 5-step audit that fixes all three.
Why tool stacks get out of control
It starts innocently.
Marketing signs up for an email tool. Sales buys a prospecting database. CS adopts a ticketing system. Engineering spins up monitoring. Each decision makes sense in isolation.
Then someone leaves. Their tools stay. A new hire brings their preferred tools. Now you have two prospecting databases and nobody knows which one is the source of truth.
Multiply this by 3-5 years and you get what we call tool stack debt. Same concept as technical debt, but for your operational infrastructure.
Common symptoms:
- Multiple tools doing the same job
- Manual data entry between systems
- "Nobody knows how that integration works"
- Reports that take days to pull because data lives in four places
- New hires asking "which tool do I use for X?" and getting different answers
Sound familiar? You're not alone. Let's fix it.
The 5-step tool stack audit
Step 1: Inventory everything
Before you can fix your stack, you need to see it.
How to do it:
- Pull financial records. Check your expense management tool (Brex, Ramp, Expensify) for every SaaS subscription. Include annual contracts, monthly subscriptions, and those "free trials" that started billing 8 months ago.
- Check SSO and admin panels. Your Google Workspace or Okta admin panel shows every app your team has connected. This catches tools that don't run through procurement.
- Survey department leads. Send each department lead a simple question: "List every tool your team uses daily, weekly, and rarely." Give them 24 hours.
- Check credit card statements. SaaS charges under $50/month slip through every approval process. They add up.
What you're building: A master spreadsheet with every tool, its cost, the department that owns it, and the number of active users.
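If you keep the master spreadsheet as a CSV, a few lines of Python can roll it up by owning department. This is a minimal sketch, not a prescribed schema -- the column names (`tool`, `cost_per_year`, `department`) are illustrative, so adjust them to match your own export.

```python
import csv
from collections import defaultdict

def summarize_inventory(path):
    """Roll up the master inventory CSV into total annual spend per department.

    Assumes (illustratively) columns named 'department' and 'cost_per_year'.
    """
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["department"]] += float(row["cost_per_year"])
    return dict(totals)
```

Run it against your inventory file and you get a per-department spend map, which makes the "who owns what" conversation much shorter.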
Most companies find 20-40% more tools than they expected. We once ran an audit for a 60-person B2B company that thought they had 18 tools. The actual count was 34.
Step 2: Measure actual usage
Having a license isn't the same as using it.
How to do it:
- Check admin dashboards. Most SaaS tools show last login dates and activity metrics in their admin panel. Pull 90-day active user counts.
- Categorize each tool:
  - Active: Used by 70%+ of licensed users, weekly or more
  - Underused: Used by 30-70% of licensed users
  - Ghost: Used by fewer than 30%, or not used in the last 30 days
- Calculate waste. For every ghost tool, multiply the per-seat cost by the number of inactive seats. For underused tools, calculate the cost of unused licenses.
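The categorization thresholds and the waste math in this step are mechanical enough to script. A minimal sketch -- the function names are ours, and the 30%/70%/30-day cutoffs mirror the ones above:

```python
def categorize(active_users, licensed_seats, days_since_last_use):
    """Bucket a tool as 'active', 'underused', or 'ghost' per the thresholds above."""
    if licensed_seats == 0:
        return "ghost"
    usage = active_users / licensed_seats
    if usage < 0.30 or days_since_last_use > 30:
        return "ghost"
    if usage < 0.70:
        return "underused"
    return "active"

def wasted_spend(per_seat_cost_per_year, active_users, licensed_seats):
    """Annual cost of the seats nobody is using."""
    return per_seat_cost_per_year * max(licensed_seats - active_users, 0)
```

For example, a $1,500/seat/year tool with 12 licensed seats and 2 active users wastes $15,000 a year on its own.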
What you'll find: Most companies waste 20-35% of their SaaS spend on unused or underused licenses. For a company spending $200K/year on SaaS, that's $40K-70K.
Real examples we've seen:
- A Series B startup paying $18,000/year for a sales intelligence tool that 2 out of 12 reps used
- Three separate project management tools (Asana, Monday, Linear) across a 40-person company
- A $6,000/year analytics tool that one person used once a quarter for a board report
Step 3: Map overlaps
Now look for tools that do the same thing.
How to do it:
- Group tools by function. Create categories: CRM, marketing automation, email, project management, analytics, support, billing, communication, documentation, security.
- Flag duplicates. Any category with 2+ tools is a red flag. Sometimes it's justified (different teams, different needs). Usually it's not.
- Check for feature overlap between adjacent tools. Your CRM might have email marketing features. Your support tool might have a knowledge base. Your project management tool might have docs. You might be paying for standalone tools that duplicate features you already have.
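Once your inventory assigns each tool a category, flagging duplicates is a simple grouping exercise. A sketch, assuming an illustrative list of (name, category) pairs:

```python
from collections import defaultdict

def flag_duplicates(tools):
    """Return every category that has 2+ tools -- your overlap candidates.

    `tools` is an iterable of (tool_name, category) pairs.
    """
    by_category = defaultdict(list)
    for name, category in tools:
        by_category[category].append(name)
    return {cat: names for cat, names in by_category.items() if len(names) > 1}
```

Each flagged category is a conversation to have, not an automatic cut -- the justified-duplicate cases still need a human judgment call.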
Common overlaps we find:
- CRM + email marketing (HubSpot does both, but teams also have Mailchimp)
- Slack + project management (teams using Slack channels as task trackers alongside Asana)
- Google Docs + knowledge base + wiki (Notion, Confluence, and Google Docs all doing the same thing)
Step 4: Identify gaps
This is the step most audits skip. Cutting waste is easy. Finding what's missing is harder -- and more valuable.
How to do it:
- Map your data flows. For each critical business process (lead to close, onboarding, renewal, support), trace how data moves between tools. Where does it break? Where is someone copying data manually?
- Ask each department: "What task takes the longest because you don't have the right tool or integration?" You'll hear things like:
  - "I spend 2 hours every Monday pulling data from Salesforce and Stripe into a spreadsheet for the pipeline review"
  - "We can't see support tickets in the CRM so AEs don't know when their accounts have issues"
  - "Billing changes don't sync to the CRM so our revenue numbers are always wrong"
- List integration gaps. Which tools should talk to each other but don't? This is your integration debt.
What to look for: The gaps usually aren't missing tools. They're missing connections between the tools you have.
Check our tools directory to see if integrations exist for the connections you need.
Step 5: Score AI readiness
This is the step that separates a cost-cutting exercise from a strategic audit.
How to do it:
- For each tool, answer:
  - Does it have an API? (yes/no)
  - Is the API well-documented? (yes/no)
  - Does an MCP server exist for it? (yes/no)
  - Can data be extracted programmatically? (yes/no)
- Score each tool 0-4 based on the answers above. Tools scoring 0-1 are AI blockers. They hold data hostage.
- Identify your AI foundation candidates. Which tools contain the most valuable data? Usually: CRM, billing, support, product analytics. If those four score 3-4, you're in good shape. If any of them score 0-1, that's your first project.
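The four yes/no answers translate directly into a score. A minimal sketch of the 0-4 scoring and the blocker cutoff described above (function names are ours):

```python
def ai_readiness(has_api, well_documented, has_mcp_server, exportable):
    """Score a tool 0-4: one point per 'yes' on the four questions above."""
    return sum(map(bool, (has_api, well_documented, has_mcp_server, exportable)))

def is_ai_blocker(score):
    """Tools scoring 0-1 hold data hostage."""
    return score <= 1
```

Run this across your foundation candidates (CRM, billing, support, product analytics) first -- a blocker in any of those four is your first project.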
Why this matters now: AI agents are only as useful as the data they can access. If your CRM data is trapped in Salesforce with no API connection to billing or support, an AI agent can't answer "which accounts are at risk of churning?" That question requires CRM + billing + support + product data in one place.
The AI readiness test
Every tool in your stack can be scored on four factors that determine whether AI can actually work with it. Not in theory -- in practice, right now, with the tools and agents available today.
The four readiness factors:
- API availability -- Does the tool expose a REST or GraphQL API at all? Some tools, especially legacy or SMB-focused ones, don't. No API means no programmatic access, full stop.
- API quality and documentation -- An API that exists but has no documentation, inconsistent endpoints, or rate limits so aggressive they break real workflows is nearly as bad as no API. Quality matters.
- MCP server availability -- Does the tool have a Model Context Protocol server, either official or community-maintained? MCP servers are what make a tool plug-and-play with AI agents. Without one, connecting AI to that tool means custom integration work.
- Data exportability -- Can you get your data out reliably, in a structured format, on demand? CSV export on request from the finance team is not the same as a self-serve bulk export or a live data feed.
Score each tool 0-4, one point per factor.
A score of 4/4 means AI can work with this tool today with minimal setup. A score of 1/4 means AI is effectively blocked from accessing whatever data lives in that tool.
What scores look like in practice:
HubSpot scores 4/4 -- a fully documented API, great developer docs, an official MCP server, and self-serve CSV export on any object. You can connect an AI agent to HubSpot in an afternoon.
Your legacy ERP likely scores 1/4 -- an API technically exists but was documented in 2014 and half the endpoints are broken, no MCP server exists, and bulk export requires a finance team request with a three-day turnaround. AI can't do much with that.
A modern tool like Linear or Notion might score 3/4 -- solid API, good docs, no official MCP server yet, but community servers exist that cover most use cases.
What low AI readiness actually means for your operations:
Low-readiness tools hold data hostage. If your pricing history lives in a tool with no API and no export, every AI use case that needs pricing context -- deal analysis, churn risk scoring, margin review -- is blocked at the source. You can build the most sophisticated AI workflow in the world and it stalls the moment it needs data from that tool.
The MCP server question is especially important to understand. Tools with MCP servers are genuinely plug-and-play. You point an AI agent at the server and it can query the tool in natural language, pull structured data, and write back results. Tools without MCP servers require you to build a custom integration layer -- API calls, auth handling, data normalization -- before AI can touch them. That's not impossible, but it's weeks of work, not an afternoon.
Run this scoring exercise across your top 15-20 tools. The pattern you'll see is that your most modern, developer-friendly tools cluster at 3-4, and your oldest or most niche tools cluster at 0-2. That gap is your AI readiness gap.
The integration matrix: mapping your data flows
Knowing how AI-ready each tool is in isolation is useful. Knowing how your tools connect to each other is more important. This is where the integration matrix comes in.
What it is: a simple grid that maps which tools in your stack talk to which other tools, and how. It sounds basic. Most teams have never built one, and when they do, they're usually surprised by what they find.
How to build it:
Take your top 10 tools by usage or data volume. List them down the left side of a spreadsheet. List the same tools across the top. For each cell in the grid, mark one of four states:
- Native integration -- the tools connect directly, usually maintained by one of the vendors
- Zapier / webhook -- connected via a third-party automation layer
- Manual export -- data moves between them only when someone manually exports and imports
- No connection -- the tools don't talk at all
Fill in the whole matrix. It takes about an hour if you involve the people who actually manage the tools.
What the matrix reveals:
Look for rows and columns where most cells say "no connection" or "manual export." Those are your isolated tools -- data silos that exist outside your connected stack. No AI can bridge those gaps without custom work.
Look for tools that have lots of native integrations. Those are your connective tissue tools -- the ones where breaking or changing them would ripple across everything else.
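The matrix itself can live in a spreadsheet, but the silo check above is easy to automate. A sketch -- tool names and connection states here are illustrative, and the four states match the ones in the grid:

```python
# Each pair of tools maps to one of: "native", "zapier", "manual", "none".
# (Illustrative data -- replace with your own matrix.)
matrix = {
    ("CRM", "Support"): "native",
    ("CRM", "Billing"): "none",
    ("Billing", "Support"): "manual",
}

def isolated_tools(matrix, tools):
    """Tools whose every connection is manual or missing -- your data silos."""
    isolated = []
    for tool in tools:
        links = [state for pair, state in matrix.items() if tool in pair]
        if all(state in ("none", "manual") for state in links):
            isolated.append(tool)
    return isolated
```

In this illustrative matrix, billing is the silo: its only links are "none" and "manual", so any question that needs billing data alongside CRM or support data hits a wall.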
A real example of why this matters:
If your billing platform has no connection to your CRM, you can't answer "which accounts are at risk of churn?" without a manual export. Someone downloads a CSV from billing, cross-references it with CRM data, and spends 20 minutes building a view that answers a question that should take seconds. An AI agent could answer that question in 20 seconds -- but only if the connection exists. If it doesn't, the AI hits a wall at exactly the moment it needs to be useful.
This kind of gap is more common than it should be. Billing isolated from CRM. Support isolated from product analytics. Finance isolated from sales pipeline. Each one is a question your team can't answer quickly.
A note on what "connected" actually means for AI:
Most native integrations are read-only sync jobs -- data flows one direction on a schedule. That's better than nothing, but it's not the same as an MCP server connection. An MCP server lets an AI agent query the tool in natural language, pull specific records, and write data back. It's interactive, not just a pipeline. When you're thinking about where to invest in better connections, prioritize MCP server coverage for your highest-value tools.
Most B2B stacks have 3-5 completely isolated tools when you map this honestly. Those are your first targets. You don't need to fix everything -- you need to fix the ones where the missing connection is blocking real business questions.
Stack consolidation: what to cut and what to keep
Once you have AI readiness scores and an integration matrix, the obvious next question is: what should we cut? This is where the audit gets harder, because consolidation decisions are not as clean as they look on paper.
Why consolidation is harder than it looks:
Every tool has data in it. Migrating that data cleanly -- especially historical data -- takes real time and carries real risk of loss or corruption. Every tool has workflows built around it. Sales reps have their HubSpot sequences set up. The ops team has their Zapier automations pointing at the old tool. Finance has reporting built on exports from the current system. Cutting a tool doesn't just mean canceling a subscription; it means unraveling and rebuilding whatever depends on it.
There's also the habit layer. Teams that have used a tool for two years have workflows, shortcuts, and muscle memory built around it. Switching costs are real even when the new tool is objectively better.
A practical decision framework:
Keep a tool if it scores high on AI readiness (3-4/4) OR it's deeply embedded in active workflows with no clean migration path available right now. High AI readiness means the tool is already an asset for where you're going. Deep embedding means cutting it creates more risk than value in the near term.
Cut a tool if it scores low on AI readiness (0-1/4) AND it has low actual usage (check login data and seat utilization, not what people say they use) AND a capable replacement already exists in your stack or is cheap to adopt. All three conditions need to be true. Low readiness alone isn't enough if the team genuinely depends on it.
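The keep/cut rules above reduce to a short decision function. A sketch -- the added "review" bucket for everything that passes neither test is our own assumption, not part of the framework above:

```python
def keep_or_cut(readiness, deeply_embedded, usage_rate, replacement_available):
    """Apply the keep/cut framework above.

    Keep: high AI readiness (3-4) OR deeply embedded in active workflows.
    Cut: low readiness (0-1) AND low usage AND a capable replacement exists.
    Everything else falls into a 'review' bucket (our assumption) for a
    closer, human look.
    """
    if readiness >= 3 or deeply_embedded:
        return "keep"
    if readiness <= 1 and usage_rate < 0.30 and replacement_available:
        return "cut"
    return "review"
```

Note the cut branch requires all three conditions, matching the framework: low readiness alone isn't grounds for cancellation if the team genuinely depends on the tool.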
The "don't consolidate for the sake of it" warning:
You'll sometimes find two tools in your stack doing what looks like the same job. Before you cut one, check whether different teams genuinely use them for different reasons. Marketing and sales both using different analytics tools might look like redundancy, but if marketing needs content attribution and sales needs pipeline velocity, they may legitimately need different things. Forcing consolidation to hit a "fewer tools" number will create friction without creating value. The goal is not a short list. The goal is no wasted spend and no stranded data.
The right sequence:
Start with ghost tools -- tools with near-zero usage that are just costing money. Canceling them carries essentially no risk and no migration work. Do this first.
Then look at overlapping tools where one is clearly inferior -- same job, one team uses one, another team uses the other, and there's an obvious winner. Consolidate these with the teams involved, not top-down.
Then, once you've cleared the noise, look at AI readiness investment for your core stack. These are the tools that run your business and score 2/4 or below. The question is whether to replace them or invest in building the integration layer they're missing.
What not to do:
Don't try to migrate everything at once. One major tool migration at a time is the practical limit for most ops teams running alongside the rest of the business. Don't make consolidation decisions without the teams that actually use the tools -- you'll miss dependencies, create resistance, and probably have to reverse the decision six months later.
The goal isn't fewer tools. The goal is the right tools, connected, with no data stranded in silos.
What to do with the results
You now have four outputs:
- A complete inventory with costs
- A usage map showing waste
- An overlap analysis
- An AI readiness score
Here's the action plan.
Quick wins (do this week)
- Cancel ghost tools. No meeting needed. Just cancel them.
- Downgrade underused tools to lower tiers or fewer seats.
- Expected savings: 15-25% of SaaS spend.
Medium-term (next 30 days)
- Consolidate overlapping tools. Pick one, migrate, cancel the other.
- This usually requires a department-level decision. Don't try to force it across departments.
Strategic (next quarter)
- Close integration gaps. Build the connections between your critical tools.
- Improve AI readiness scores for your core data tools.
- This is where you either DIY it or bring in help.
How Shyft's free scan automates this
We built the free AI scan because we were tired of running this audit manually.
Here's what it does:
- Identifies your current tool stack
- Maps integrations and gaps
- Scores AI readiness for each tool
- Shows available MCP servers and AI skills for your tools
- Gives you a prioritized action plan
It takes 5 minutes. No sales call required.
If the scan surfaces problems you want help fixing, our audit service goes deeper. We spend 1-2 weeks inside your stack, mapping every data flow, integration, and gap. You get a full report with a build plan.
From there, we can build the infrastructure that connects everything. MCP servers, data pipelines, AI agents. You own it all. No lock-in.
The real cost of not auditing
Let's make this concrete.
A 50-person B2B company spending $250K/year on SaaS tools with:
- $50K in wasted licenses (ghost tools, unused seats)
- 10 hours/week of manual data entry across tools (that's $25K/year in labor cost)
- 2 missed churn signals per quarter because CRM doesn't connect to support ($200K in lost ARR)
The audit costs nothing. The waste costs everything.
Do the audit. Cut the waste. Close the gaps. Then make your stack ready for AI.
Take the free scan | Browse our tools directory | See our services