01 — Context
How we got here
The last decade produced an extraordinary proliferation of database technology. The rise of cloud computing made it possible to run a globally distributed database with no infrastructure team. The emergence of serverless reduced operational overhead to near zero. And the explosion of different data models — relational, document, real-time, graph, vector — gave developers the right tool for every kind of problem.
But each of these advances came from a different company, built on a different platform, exposed through a different API, and managed through a different console. The richness of choice that made modern software better also created a new class of problem: fragmentation at the infrastructure layer.
By 2025, it was not unusual for a single product — a SaaS application, a data platform, a consumer app — to be backed by three, four, or five distinct database systems simultaneously. Relational databases for structured transactional data. Document stores for flexible schema content. Real-time event databases for collaboration features. Vector databases for AI-powered search. Each serving a genuine purpose. Each requiring independent management.
The databases got better. The tooling to manage them together did not keep pace.
The single-database era
Most applications run on one database — typically MySQL or PostgreSQL. The operational model is simple: one server, one schema, one console. The team knows it well. Credentials are stored in one place. Backups follow one procedure.
Specialised databases become mainstream
MongoDB, Redis, and Elasticsearch move from niche to standard. Multi-database architectures become common as teams reach for the best tool for each job. Credential sprawl begins. Each database demands a different skill set and a different operations playbook.
The serverless database explosion
Supabase, PlanetScale, Neon, Firebase, and MongoDB Atlas reach maturity simultaneously. Each offers a fully managed, cloud-native database with a polished console. Adoption accelerates. The average engineering team's database estate doubles. So does the number of consoles, API keys, and billing accounts they maintain.
The fragmentation problem becomes acute
AI-driven development accelerates application complexity. Teams ship more features, faster, using more specialised infrastructure. The cognitive cost of managing a five-database estate — each with its own interface, its own permission model, its own query syntax — begins to measurably slow teams down. The need for a unifying layer becomes urgent.
02 — The problems
Six problems that cost engineering teams every day
These are not abstract complaints. They are concrete, daily frictions that accumulate into significant lost time, security risk, and operational debt. Each one has been validated through conversations with the teams and developers who encounter it day after day.
Problem 01
Interface fragmentation
Every database provider ships its own management console with its own navigation model, its own terminology, its own query editor, and its own data presentation format. A developer who is fluent in the Supabase table editor must re-learn an entirely different UI paradigm when they open MongoDB Atlas, and again when they open the Firebase console, and again in the AWS RDS console. This re-learning is not a one-time cost. It happens every time they switch contexts, every day.
Problem 02
Context switching overhead
Debugging an issue that spans database systems — finding the user record in one database, the associated event log in a second, and the cached state in a third — requires opening multiple browser tabs, authenticating into multiple systems, and mentally correlating data across completely different visual formats. Research consistently shows that context switching imposes a significant cognitive penalty. In database management, that penalty is paid constantly, by every member of the engineering team.
Problem 03
Credential sprawl and insecurity
Connection strings, service role keys, MongoDB URIs, Firebase service account JSON files, and MySQL passwords end up distributed across .env files, team wikis, shared Notion pages, Slack messages, and local desktop clients. The more databases a team runs, the more places these secrets live. The more places they live, the higher the probability that one of them ends up somewhere it should not — committed to a repository, shared in a chat message, or pasted into an AI tool. Each new database provider added to a stack increases the credential attack surface.
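To make the sprawl concrete, here is a minimal sketch (in Python, with invented variable names and an assumed five-provider stack) of the connection configuration such an application typically carries. The specific providers are not the point; the point is that every entry is a production secret with its own copies and its own rotation story.

```python
import os

# Purely illustrative: the variable names and providers below are assumptions,
# but the shape is typical of an application backed by several managed databases.
DATABASES = {
    "postgres": os.environ.get("SUPABASE_DB_URL"),           # relational, transactional data
    "mongodb": os.environ.get("MONGODB_ATLAS_URI"),          # flexible-schema documents
    "redis": os.environ.get("REDIS_URL"),                     # cached state, real-time features
    "firebase": os.environ.get("FIREBASE_SERVICE_ACCOUNT"),   # path to a service-account JSON
    "mysql": os.environ.get("PLANETSCALE_DATABASE_URL"),      # serverless MySQL
}

# Every value is a production secret. Each laptop, CI pipeline, and wiki page
# that needs one of them becomes another copy to track — and eventually rotate.
```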
Problem 04
Absence of cross-database visibility
There is no standard tool that allows a team to look at the health of their entire database estate in one view. Monitoring is fragmented across provider-specific dashboards, third-party APM tools, and manually maintained spreadsheets. When something goes wrong — a slow query, a growing table, an index that should exist but does not — the team is likely to find it reactively, after a user reports a problem, rather than proactively from a unified view of all their databases.
Problem 05
Onboarding friction
When a new engineer joins a team that runs five database systems, they must learn five different consoles, obtain credentials for five different systems, understand five different permission models, and develop fluency in five different query interfaces. This is a significant and entirely avoidable onboarding cost. It delays time-to-productivity and creates gaps in institutional knowledge when team members leave and their mental model of each database console leaves with them.
Problem 06
No unified data operations layer
There is no standard way to perform common data operations — searching for a specific record, auditing a data change, seeding a staging environment from production data — across database providers. Teams build bespoke scripts, maintain custom tooling, and develop tribal knowledge for each database system they run. When a provider changes their API, the bespoke tooling breaks. When the engineer who wrote the tooling leaves, the knowledge of how to maintain it leaves too.
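As a sketch of what that bespoke tooling tends to look like, the example below finds one user's records across a relational database and a document store using two separate client libraries. The connection strings, table, collection, and field names are assumptions for illustration; a real script of this kind grows another branch, another query syntax, and another credential for every additional provider.

```python
import os

import psycopg2                  # pip install psycopg2-binary
from pymongo import MongoClient  # pip install pymongo


def find_user_everywhere(email: str) -> dict:
    """Look up one user across two separately managed databases."""
    results = {}

    # Relational store: a hypothetical `users` table in Postgres.
    pg = psycopg2.connect(os.environ["POSTGRES_DSN"])
    with pg, pg.cursor() as cur:
        cur.execute(
            "SELECT id, email, created_at FROM users WHERE email = %s",
            (email,),
        )
        results["postgres"] = cur.fetchone()
    pg.close()

    # Document store: a hypothetical `events` collection in MongoDB.
    mongo = MongoClient(os.environ["MONGODB_URI"])
    results["mongodb"] = list(
        mongo["app"]["events"].find({"user_email": email}).limit(10)
    )
    mongo.close()

    return results


# Two client libraries, two query syntaxes, two credentials to manage,
# and the script breaks whenever either schema or provider changes.
```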
03 — The cost
What fragmentation actually costs
The cost of database fragmentation is not just measured in the time engineers spend navigating multiple consoles. It is measured in the security incidents that result from credential sprawl, the delayed incident response caused by the absence of a unified view, and the onboarding costs that compound every time the team grows. The following is a conservative estimate of the daily operational overhead imposed by a typical five-database engineering environment.
Estimated daily time cost per engineer — five-database environment
Estimates based on self-reported time tracking across teams managing three or more database systems. Actual costs vary by team size, database complexity, and existing tooling.
Task-by-task: what multi-database management looks like today versus with Tellus
| Task | Without Tellus | With Tellus |
|---|---|---|
| Find a user record that may exist in any of three databases | Open three browser tabs, authenticate into each console, run separate queries, manually correlate results. 12–18 minutes. | Query across all three projects from a single interface. Results in one view. 2–3 minutes. |
| Check database health across all systems before a release | Review provider-specific metrics dashboards one at a time. No unified health signal. 25+ minutes. Gaps in coverage are likely. | Run a single AI health scan from the Tellus dashboard. Unified score with issue prioritisation. 3 minutes. |
| Onboard a new engineer to the data layer | Share five sets of credentials, explain five different console UIs, document five different operational procedures. 1–2 weeks to independent operation. | Add engineer to Workspace with assigned role. One interface to learn, credentials managed centrally. 1–2 days to independent operation. |
| Rotate credentials after a suspected exposure | Identify every location where the credential is stored (often incomplete). Update each one. Risk of missing a location in a .env file or team wiki. | Update credential in one Tellus project. Centrally stored. No risk of missing a secondary copy in a text file. |
| Debug a slow operation that spans two database systems | Reproduce in both consoles simultaneously. Correlate timing and data across two different visual interfaces. High cognitive overhead. | Query both databases from the same interface with the same query model. Results visible side by side. |
04 — Security
The credential sprawl security crisis
Credential sprawl is not merely an inconvenience. It is a significant and growing security risk. Every database credential that exists outside of a tightly controlled secrets management system is a potential breach vector. And in the multi-database world, those credentials multiply.
A single engineering team running five database systems is managing, at minimum, five connection strings or API keys. In practice, the number is far higher. Each database typically has separate credentials for production, staging, and local development. Each of those may have further variations for different team roles — read-only access for analysts, full access for senior engineers, restricted access for automated pipelines. The total credential count can easily reach thirty or forty secrets for a modest engineering team.
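A rough worked calculation shows how quickly the count climbs. The multipliers below are assumptions for illustration, not survey figures:

```python
databases = 5      # e.g. Postgres, MongoDB, Redis, Firebase, MySQL
environments = 3   # production, staging, local development
role_variants = 2  # e.g. a read-only analyst key plus a full-access service key

total_secrets = databases * environments * role_variants
print(total_secrets)  # 30 — before per-engineer credentials or CI tokens are counted
```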
These secrets have to live somewhere. In practice, they live in many places simultaneously: in .env files on developer laptops, in CI/CD pipeline configuration, in shared team wikis, in Slack threads where they were posted for quick access, and in local database client applications where they are saved as named connections.
The risk surface grows with every additional copy. The probability that any single one of these copies ends up in a place it should not — committed to a version-controlled repository, shared in a public forum, or visible in a screenshot — increases with every team member, every developer machine, and every operational context in which the credential is needed.
Where production credentials commonly end up — and why each location is a risk
Estimated proportion of teams storing credentials in each location
Survey data from engineering teams at companies with five to two hundred employees. Teams reported all locations where database credentials for production systems are known to be stored.