When I walked into The Ignition Group for the first time, the company had about a dozen paying clients and a proof of concept that had been duplicated for each of them. In one of our first conversations, the founder was straightforward with me: “We know this setup isn’t going to work long-term. That’s why we need you.”
He was right, and he knew it before I did. The founder had built something clients wanted – a healthcare data platform that let health insurance companies compare and research provider information. He’d taken the proof of concept to people he had relationships with, and several bought it on the spot. They wanted it immediately, so he did the rational thing: he copied the entire system for every client. Database, front-end, reports, each running in its own environment. A dozen clients, a dozen copies of everything.
It worked. Clients were paying and revenue was real. But when it was time to update the data, the cracks in the system showed.
This is Part 1 of a three-part series about the architecture decisions behind a healthcare data SaaS platform — from proof of concept through acquisition. Part 1 covers the migration from a dozen duplicated client databases to a single multi-tenant system. Part 2 covers the product architecture and automation that made the platform scalable. Part 3 covers the cloud migration, system reliability, and what survived through acquisition.
Twelve databases and the update grind
Each client subscribed to a unique subset of the data we provided. On a regular basis, when fresh data came in, the data team had to update a master database, then extract the right subset of data points for each client, import those subsets into each client’s individual database, and optimize everything for reporting. Across a dozen clients, with different subscriptions, different data configurations, and different edge cases.
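The shape of that cycle is easy to sketch. Everything below is an illustrative stand-in, not the company's actual tooling; the point is the outer loop, where every step after the master refresh repeats once per client database.

```python
# Minimal sketch of the old per-client update cycle. All names and data
# structures are hypothetical; what matters is that extract/import/optimize
# work scales linearly with the number of clients.
from dataclasses import dataclass, field

@dataclass
class Client:
    name: str
    subscription: set                       # data points this client pays for
    database: dict = field(default_factory=dict)  # their own copy of the data

def run_update_cycle(master: dict, clients: list) -> int:
    """Refresh once, then repeat per client. Returns the number of import jobs."""
    jobs = 0
    for client in clients:                  # the linear-scaling loop
        subset = {k: v for k, v in master.items()
                  if k in client.subscription}    # extract the subscribed subset
        client.database = subset                  # import into the client's database
        jobs += 1                                 # (plus per-database optimization)
    return jobs

master = {"provider_a": 1, "provider_b": 2, "provider_c": 3}
clients = [Client("acme", {"provider_a"}),
           Client("globex", {"provider_b", "provider_c"})]
print(run_update_cycle(master, clients))    # 2 jobs for 2 clients; 12 for 12
```

Two clients means two extract-and-import jobs; a dozen clients means a dozen, each with its own configuration and edge cases.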
The process took weeks. It was prone to error. Processing would stall partway through and have to be restarted. The data team was dedicated and skilled, but they were fighting the architecture every cycle.
And that was just the data side. Every time we needed to push a UI change or a report update, it had to be deployed to each client’s environment separately. Twelve deployments for one change. The system did allow for per-client customization, which was a selling point. But the maintenance cost was scaling linearly with every new client we onboarded – and growth was a priority.
The founder understood all of this. It’s why he brought me on. We needed a way to make data and application updates easy, fast, and reliable. The current architecture had a ceiling, and we were getting close to it.
Six weeks before I said a word
In my first conversations with the founder and the data team, the consensus was clear: we needed “fewer” databases. Maybe a single front-end for all clients. Everyone felt the pain, and everyone had ideas about the direction.
I didn’t propose a solution right away. Instead, I spent about six weeks talking to the technical contractors, learning from the data team, and analyzing the existing system. There was a lot to understand: data flows, client-specific configurations, and which pain points were structural and which were merely operational. I wanted to understand which problems were symptoms and which were root causes before I committed us to a path.
During that analysis, I considered and rejected one option that looked attractive on the surface: database-switching per client. The idea was to run a single application that connected to a different database depending on which client was logged in. It would have solved the deployment problem, moving to one UI and one codebase. It would also have kept client-data isolation easy. But it wouldn’t have touched the core issue. Data updates would still need to happen per database. Maintenance would still scale with client count, as would total data storage size. The quarterly grind would continue.
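The rejected design reduces to a connection router, sketched here with hypothetical names. One application serves every client, but each request still lands on that client's own database, so the N databases, and the N-fold data update, remain.

```python
# Sketch of the rejected "database-switching" option. One codebase and one
# deployment, but a separate database per client behind the router, so data
# updates still touch every database. Names and DSNs are illustrative.
class DatabaseRouter:
    def __init__(self, connection_strings: dict):
        # client_id -> that client's database connection string
        self.connection_strings = connection_strings

    def connection_for(self, client_id: str) -> str:
        # Solves deployment duplication, not update duplication.
        return self.connection_strings[client_id]

router = DatabaseRouter({"acme": "db://acme-prod", "globex": "db://globex-prod"})
print(router.connection_for("acme"))   # db://acme-prod
```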
About six weeks in, I went back to the founder and the data team with a recommendation: I believed we could solve the main issues by moving to a single multi-tenant database. One database, one application, one deployment. Update the data once, and have it flow automatically to every subscribing client. They thought I was… ambitious.
Working backward from the business problems
Saying “multi-tenant” is easy. Making it work for a healthcare data company with a dozen clients who each expected data isolation, unique subscriptions, and reliable quarterly delivery was the hard part. That required working through a specific set of requirements.
Client and user security was non-negotiable. Each client could only see the data and features they had subscribed to. Some clients also supplemented our standard data with their own proprietary provider network data to use for comparison in our reports, and that data had to be completely invisible to other clients. The security model, both for subscription boundaries and client-uploaded data, had to be baked into the data layer, not bolted on top.
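One way to picture "baked into the data layer, not bolted on top" is a read path that applies both boundaries on every query, so no caller can forget the filter. This is a simplified sketch under assumed names, not the platform's actual schema: shared rows carry no owner, while client-uploaded rows carry that client's id.

```python
# Hypothetical sketch of data-layer tenant security. Every read enforces two
# boundaries at once: the subscription boundary (only products the client pays
# for) and the ownership boundary (client uploads are invisible to everyone else).
def visible_rows(rows, client_id, subscribed_products):
    return [
        r for r in rows
        if r["product"] in subscribed_products     # subscription boundary
        and r["owner"] in (None, client_id)        # uploads stay private
    ]

rows = [
    {"product": "hospital_rates", "owner": None,   "value": 1},  # shared data
    {"product": "hospital_rates", "owner": "acme", "value": 2},  # acme's upload
    {"product": "physician_dir",  "owner": None,   "value": 3},  # shared data
]

# globex subscribes to hospital_rates: sees the shared row, never acme's upload
print(visible_rows(rows, "globex", {"hospital_rates"}))
```

In a real database the same idea would live in views or row-level security policies rather than application code, but the principle is identical: the filter is part of every read, not a UI feature.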
A subscription system was the business enabler. With the old architecture, changing what a client had access to meant modifying their individual database, essentially a small project for each change. I wanted clients’ data access and feature entitlements to be configurable on the fly, without rebuilding anything. If sales sold a new data product or a client upgraded their subscription, that change should be a configuration update, not an engineering task.
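The difference between "a small project" and "a configuration update" can be shown in a few lines. The schema below is a hypothetical simplification, not the real entitlement model; the point is that an upgrade is a data change, not an engineering task.

```python
# Sketch of subscriptions as pure configuration (illustrative schema). Each
# client's data access and feature entitlements are rows of configuration,
# so sales can change them without touching code or any per-client database.
subscriptions = {
    "acme":   {"products": {"hospital_rates"},
               "features": {"reports"}},
    "globex": {"products": {"hospital_rates", "physician_dir"},
               "features": {"reports", "exports"}},
}

def grant(client_id, product):
    subscriptions[client_id]["products"].add(product)   # the entire "upgrade"

grant("acme", "physician_dir")
print("physician_dir" in subscriptions["acme"]["products"])   # True
```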
A data structure that could support fast production pushes was the whole point. The old process – extract subsets from a master database, import into each client database, optimize for reporting – took weeks every cycle. I wanted a structure where we could move data from back-end processing to production daily, or more frequently if the business needed it. One update, automatically available to every subscribing client based on their subscription configuration.
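The target update path can be sketched as publish-once, derive-per-client. Again the names are assumptions, not the actual implementation: data lands in one shared store, and each client's view is computed at read time from their subscription, so the per-client extract/import step disappears entirely.

```python
# Sketch of the multi-tenant update path. One write per data product per cycle;
# client views are derived from subscription configuration, never materialized
# per client. All identifiers here are illustrative.
shared = {}   # product -> rows, a single copy serving every client

def publish(product, rows):
    shared[product] = rows                 # the one update, once per cycle

def client_view(subscribed_products):
    # Each client sees exactly the shared products they subscribe to.
    return {p: shared[p] for p in subscribed_products if p in shared}

publish("hospital_rates", [{"npi": "123", "rate": 950}])
publish("physician_dir",  [{"npi": "123", "name": "Dr. A"}])
print(client_view({"hospital_rates"}))     # only the subscribed product
```

Under this shape, publishing daily instead of quarterly is just running `publish` more often; no downstream per-client work is created by the higher cadence.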
Every one of these requirements traced back to a specific business problem the team had told me about in those early conversations. While I’d love to say everything I designed was theoretically elegant, the architecture was driven by what would make the data team’s life manageable and let the company onboard clients without each new deal creating more operational drag.
Building it with live clients
We spent about six months building the new database, the subscription system, and the processes the data team would use to populate and manage it. Testing had to be thorough. We were replacing the core of a system that paying clients depended on.
The UI security layer and a new front-end application to replace the proof-of-concept interface took another two to three months. We built them in parallel with the database work, but the UI couldn’t fully come together until the subscription and security models were solid underneath.
All told, it was about eight to nine months from the initial architecture decision to having the new system live. And throughout that entire period, the existing dozen client environments were still running and still needed to be maintained. We couldn’t pause delivery to rebuild. The data team was still grinding through their update cycles on the old system while we built the new one alongside it.
What changed
The most immediate impact was the 75% reduction in time-to-availability for new data. What had taken weeks of per-client extraction, import, and optimization became a single update that flowed to subscribing clients through the subscription system. The data team went from spending most of each cycle wrestling with the update process to having that process handled by the architecture itself.
Client onboarding changed fundamentally. Under the old model, bringing on a new client meant duplicating the entire environment – database, UI, reports – and configuring it from scratch. A technical project every time. Under the new model, onboarding was a configuration exercise: set up the client’s subscription, define their data access, assign their users. The time and effort dropped dramatically.
The subscription system became something I hadn’t fully anticipated – a business lever. Sales could offer new data products and feature tiers without requiring engineering work for each deal. When a client wanted to expand their subscription, it was a settings change. That flexibility made the company more responsive to the market and easier to grow.
And the platform grew. The system that started with a dozen clients on duplicated databases would eventually support 75 clients on a single multi-tenant architecture by the time the company was acquired.
The decision that mattered most
Looking back, the technical decision to move to a single multi-tenant database was important. But the decision that mattered more was spending six weeks understanding the problem before committing to a solution.
The database-switching option I rejected would have been faster to implement. It would have solved the visible symptom of duplicate deployments, and it would have felt like progress. But it would have left the data team in the same update grind, still extracting and importing per client, still falling behind as the client base grew. The core scaling problem would have remained, and we would have had to solve it later on a larger and more complex system.
The six weeks of analysis gave me the clarity to see that the operational bottleneck wasn’t application deployment. It was data processing. The multi-tenant decision solved both problems, but it was the data problem that justified the investment and the risk of a major architectural change with live clients depending on the system.
For founders and early engineering leaders weighing similar decisions: match your architecture to the constraint that’s actually limiting your growth. The obvious pain point and the real bottleneck aren’t always the same thing. Taking the time to understand which is which can be the difference between a solution that buys you six months and one that carries you through a 6x increase in your client base.