A few months ago I was catching up with a consultant friend who works with small and mid-sized companies. He told me about a client, a founder running a small services company, who had asked him, almost sheepishly, “Should we be doing something with AI?” The founder had seen the headlines, watched competitors mention it on their websites, and had a vague sense that he was falling behind. But when my friend asked what problem he’d want AI to solve, the founder didn’t have an answer. He just knew he was supposed to be thinking about it.
That conversation stuck with me because I’ve been on the other side of it. I led a project applying large language models to automate parts of a manual data onboarding process that was time-intensive and inconsistent. My role was on the strategic side, figuring out where LLMs could actually help, evaluating proofs of concept, and mapping out how we’d get from experiment to something real. It taught me a lot about what AI adoption actually looks like when you’re a small team without a dedicated ML group or an innovation budget.
Since then, I’ve paid attention to how small companies and startups are approaching AI, and I keep seeing the same three situations.
The company that wants to but doesn’t know how. These are founders and leaders like my friend’s client. They’re bought in on the potential but stuck on the practical side. They don’t have an AI expert on staff, they’re not sure what’s real versus what’s marketing, and they don’t know where to start. So they don’t start, or they start by buying a tool that someone on the team saw in a demo without a clear picture of what success looks like.
The company that’s heard too many horror stories. These leaders have read about hallucinations, about companies getting burned by AI-generated content that turned out to be wrong, about data privacy lawsuits, about bias. They’re cautious by nature (which is usually an asset) and they’ve decided the risk outweighs the reward. They haven’t yet realized that this position gets harder to maintain as AI becomes embedded in the tools they already use, whether they realize it or not. They may also sense that they’re falling behind competitors on AI, but that risk still feels abstract to them.
The company that’s already all in, no guardrails. This one worries me the most, honestly. These are teams where individual contributors are already using ChatGPT or Copilot or Claude to write code, draft customer emails, generate reports, and nobody has talked about what that means at an organizational level. The work is getting done faster, which feels like a win, until you start asking questions about accuracy, consistency, and what happens when that AI-assisted output becomes the foundation for decisions that affect customers.
I recognize these patterns because I’ve been adjacent to all three, sometimes within the same organization at different stages.
The thing all three have in common
I believe the gap between these three positions is smaller than it looks. A company that’s afraid of AI and a company that’s using it impulsively actually need the same thing: a practical way to think about what they’re doing with AI and why. It doesn’t have to be complicated.
When we started our AI data onboarding project, we were essentially a small team evaluating how to bolt LLM capabilities onto an existing platform. We didn’t have the budget or time to hire a machine learning team or spin up a research division. We identified a specific, well-understood problem (a manual onboarding process that was time-intensive and produced inconsistent results), evaluated whether current LLM capabilities could meaningfully help, and scoped out what an implementation path would look like.
That project gave me a working mental model for how small companies can approach AI without either ignoring it or diving in blind.
Start with the problem, not the technology
This sounds obvious, but it’s where most small companies go wrong in both directions. The cautious ones evaluate AI as a general concept (“Is AI safe? Is it reliable?”) instead of evaluating it against a specific workflow. The enthusiastic ones adopt tools because they’re exciting, then go looking for places to use them.
The question that actually matters is narrow and practical: “We have this specific process that takes this many hours and has this error rate. Can AI improve it meaningfully?” If the problem isn’t concrete yet, the solution won’t be either.
Treat accuracy as an implementation detail, not a dealbreaker
A lot of the hesitation I see around AI comes from the assumption that output needs to be perfect to be useful. That assumption kills adoption at cautious companies and creates blind spots at aggressive ones.
AI output doesn’t need to be perfect. It needs to be verifiable. That’s a different standard, and it’s one that small teams can actually design for.
On our project, this was a condition of the proof-of-concept phase. The AI could do the heavy lifting of the repetitive, pattern-matching work, but the workflow needed to keep a human in the review loop before anything moved forward. The value was that it could turn a multi-day manual process into a review-and-approve workflow that took a fraction of the time, even when the AI’s output needed corrections.
If you’re designing AI-assisted processes with the assumption that someone will verify the output, you’ve addressed the accuracy concern without waiting for the technology to become infallible (which it won’t).
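The "design for verification" idea can be sketched in a few lines. This is a minimal review-and-approve loop, not our actual implementation: every name is invented, and `mock_llm_extract` stands in for whatever LLM call you’d really make. The point is the shape: the AI proposes, a human gate decides what gets committed.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    record_id: str
    ai_output: dict       # what the model produced
    approved: bool = False

def mock_llm_extract(raw_record: str) -> dict:
    # Stand-in for a real LLM call that turns a messy row
    # into structured fields.
    name, email = raw_record.split(",")
    return {"name": name.strip(), "email": email.strip()}

def propose(records: list[str]) -> list[Proposal]:
    """AI does the repetitive extraction; nothing is committed yet."""
    return [Proposal(record_id=str(i), ai_output=mock_llm_extract(r))
            for i, r in enumerate(records)]

def review(proposals: list[Proposal], reviewer) -> list[dict]:
    """Human checkpoint: only approved (possibly corrected) output
    moves forward. `reviewer` returns the corrected dict, or None
    to reject the record entirely."""
    committed = []
    for p in proposals:
        corrected = reviewer(p.ai_output)
        if corrected is not None:
            p.approved = True
            committed.append(corrected)
    return committed
```

Even this toy version captures the economics: the model handles every record, and the human’s job shrinks to scanning proposals and rejecting or fixing the bad ones.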
Think about scale before you need to
When one person on your team starts using AI tools to do their job better, that’s an experiment. When five people are doing it, that’s a practice. When it’s embedded in how you deliver to clients, that’s infrastructure. Each of those stages has different implications, and the transitions sneak up on you.
The individual experiment phase is where most small companies are right now, and it’s fine. People are figuring out what works. The risk is that nobody is paying attention to what happens when those individual experiments become team-level habits, because that’s when you start dealing with consistency questions. Is everyone prompting the same way? Are we all using the same tools? What happens when one person’s AI-generated output becomes another person’s input? Can we reproduce the results in the same way every time?
Most companies I’ve seen don’t need a formal AI governance policy at first (please don’t waste time on a 40-page document nobody will read). What helps is someone asking: “If we’re going to do this, how do we want to go about it?” Even a short conversation about shared prompts, preferred tools, and where human review is required goes a long way.
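"Shared prompts" can be as lightweight as a single file the whole team edits. Here’s a sketch (task names and wording are invented): one registry of templates, one function everyone uses to build prompts, so output stays comparable across people and over time.

```python
# A tiny shared prompt registry: one place the team refines prompts,
# instead of everyone improvising their own phrasing.
TEMPLATES = {
    "summarize_report": (
        "Summarize the following report in 3 bullet points "
        "for a non-technical client:\n\n{text}"
    ),
    "draft_reply": (
        "Draft a polite reply to this customer email. "
        "Flag anything you are unsure about with [CHECK]:\n\n{text}"
    ),
}

def build_prompt(task: str, text: str) -> str:
    """Build a prompt from the shared template for this task.
    Unknown tasks fail loudly rather than silently improvising."""
    if task not in TEMPLATES:
        raise KeyError(f"No shared template for task: {task}")
    return TEMPLATES[task].format(text=text)
```

The design choice worth noticing is the loud failure on unknown tasks: it forces the conversation ("do we need a new template?") instead of letting a one-off prompt quietly become someone’s private habit.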
Take security seriously without taking it personally
Small companies tend to be either cavalier about security (“we’re too small to be a target”) or paralyzed by it (“we can’t use anything cloud-based”). AI tools introduce some legitimate security considerations that land somewhere in between.
When you paste client data into a general-purpose AI tool, where does that data go? What are the retention policies? If you’re using an API, what are the terms of service around training on your inputs? I wish we could dismiss these questions as hypothetical or as fear-mongering. We can’t, but they’re also not exotic: they’re the same kind of due diligence you’d do (or should do) before adopting any third-party tool that touches sensitive information.
The practical move is straightforward: read the terms of service for the tools your team is using, understand what data flows where, and make a conscious decision about what’s acceptable for your business. Most major AI providers offer enterprise tiers or API configurations with stronger data protections. For a lot of small companies, the answer is as simple as “use the API instead of the free consumer tier.”
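One concrete (and deliberately simple) way to act on "make a conscious decision about what data flows where" is a pre-flight scrubber that strips obvious identifiers before text leaves your systems. This is only an illustration: real redaction is much harder than two regexes, and the patterns here are examples, not a complete solution.

```python
import re

# Hypothetical pre-flight scrubber: replace obvious client
# identifiers with placeholders before sending text to a
# third-party AI tool.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub(text: str) -> str:
    """Replace email addresses and US-style phone numbers
    with generic placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```

Even a crude filter like this makes the data-flow decision explicit instead of accidental, which is the whole point.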
Bringing it back
I don’t think AI adoption for small companies is as complicated as it’s been made to seem. The companies that are afraid of it can start with bounded, specific use cases where the risk is low and the value is observable. The companies that are already charging ahead can slow down just enough to put some basic structure around what they’re doing. And the companies that haven’t started yet can take comfort in the fact that the first step is genuinely small: pick one painful process, try applying AI to it, and see what happens – with guardrails, a human in the loop, and a clear way to measure whether it actually helped.
The hardest part of AI adoption at a small company is getting past the idea that you need to have it all figured out before you begin. You don’t. You just need a real problem, a bit of forethought, and a willingness to iterate.