If you're running multitenancy on Laravel Octane using the pattern most quick-start guides show, you have a bug. Either you're leaking the previous tenant's PDO into the next request (a security incident waiting for a paging shift), or you're burning a fresh TCP + TLS + auth handshake on every request (latency you'll feel the moment your tenant database lives in another region).
This isn't a theoretical problem. It's the first thing that breaks when you move a hand-rolled Laravel multitenant app from FPM to Octane. Packages like Stancl/tenancy paper over a lot of this with bootstrappers, but if you're rolling your own tenant switching (or you've inherited a codebase that does), the Octane migration is where it bites you. I ran into this building NightOwl, a performant dashboard on top of a self-hosted monitoring agent. Customers run the agent in their Laravel app and own the PostgreSQL the agent writes to; our API reads from those customer-owned databases on demand. Here's what actually works.
The naive approach (and why it leaks)
Most quick-start guides for "manual" Laravel multitenancy walk you through some variant of this:
public function handle(Request $request, Closure $next)
{
    $app = ConnectedApp::find($request->route('app'));

    Config::set('database.connections.tenant', [
        'driver' => 'pgsql',
        'host' => $app->db_host,
        'database' => $app->db_name,
        'username' => $app->db_user,
        'password' => $app->db_password,
        // ...
    ]);

    DB::purge('tenant');
    DB::reconnect('tenant');

    return $next($request);
}
Under FPM this works fine, but not for the reason most people think. FPM workers handle many requests over their lifetime — they're not fresh processes per request. What saves you is that Laravel's Application and DatabaseManager are reinstantiated per request by the framework's bootstrap. Each new DatabaseManager starts with an empty $connections array, so the previous tenant's PDO simply doesn't exist anymore from the framework's point of view. The leak gets garbage-collected for free, but only because the manager itself does.
Under Octane, that free reset goes away. The application container persists across requests, and so does the DatabaseManager and its cached connections. The same naive code is now wrong in two different ways depending on whether you remember the DB::purge line.
Forget DB::purge: the second request to the same worker hits a live PDO that's still pointing at the first tenant's database. Your Config::set mutated the config repository, but the DatabaseManager had already resolved the old config into a Connection wrapper holding its own copy of the original config. Octane's default DisconnectFromDatabases listener calls disconnect(), not purge() — it closes the PDO but keeps the Connection wrapper around. When the next request queries, the wrapper reconnects using its stored config (tenant A's), not the new one you just Config::set. Eloquent queries on tenant happily run against the wrong database. This is the leak, and it's specifically a leak through the Connection object's stored config — not the config repository.
Remember DB::purge: now every request pays the full TCP + TLS + auth handshake to your tenant Postgres. In our setup, cross-region + TLS + SCRAM auth runs in the tens of milliseconds; on slower paths I've seen it stretch past 100ms. Whatever the exact number, it's on every request before your application code even starts. You moved to Octane to reduce per-request overhead and instead added it back.
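The stale-config leak in the first failure mode is easy to see in a tinker session on a worker that has already served one tenant. The database names here are hypothetical; the point is that Connection::getConfig() reads the Connection's own stored copy, not the live config repository:

```php
// Worker previously resolved the 'tenant' connection for tenant A
// (database "tenant_a_db"). Now we mutate the repository for tenant B:
Config::set('database.connections.tenant.database', 'tenant_b_db');

config('database.connections.tenant.database');
// the repository sees the new value, "tenant_b_db"

DB::connection('tenant')->getConfig('database');
// the already-resolved Connection still answers "tenant_a_db" —
// queries on 'tenant' will reconnect against tenant A's database
```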
What you actually want
Three things, in order of importance:
- Tenant isolation is per-request, not per-worker. Two consecutive requests to the same worker, for two different tenants, must hit two different databases. No exceptions.
- The TCP + TLS + auth handshake is amortized across requests. Once a worker has connected to tenant A, the next request for tenant A on that worker reuses the existing PDO.
- The pool is bounded. A worker that has served 50 different tenants over its lifetime shouldn't be holding 50 open PDOs. Some kind of eviction has to happen.
That's the spec. The naive approach with DB::purge satisfies (1) but breaks (2) and (3) entirely; without the purge it accidentally satisfies (2) and breaks (1), which is worse.
The shape of the fix
The fix is a per-worker LRU cache of tenant connections, plus a trick to swap the canonical connection name without closing the underlying PDO.
The middleware keeps a private static array $cache keyed by app ID. Static properties on a class survive across requests under Octane because the worker process is reused (this is the whole point of Octane). Under FPM the static $cache also persists across requests on the same worker, but it doesn't matter — Laravel reboots its DatabaseManager per request, so any cached entry points at a connection name the new manager has never heard of, hasLiveConnection() returns false, and you reconnect. Same observable behavior, different mechanism.
Each cache entry stores a credential fingerprint, not the PDO itself. The PDO lives where Laravel's DatabaseManager already keeps it: in DatabaseManager::$connections, keyed by connection name. We give each tenant its own connection name (nightowl_tenant_{appId}) so the DatabaseManager does the actual lifetime tracking. The static cache only tracks ordering and credential identity.
Then at the end of each request's setup, we alias the canonical connection name (nightowl in our case) to whichever tenant connection this request needs. Eloquent models declare protected $connection = 'nightowl' and never know about the tenant-named connections. The aliasing happens by mutating DatabaseManager::$connections directly via reflection. The underlying PDO stays open across the switch.
Here's the core of it:
class ConnectTenantDatabase
{
    private const ALIAS = 'nightowl';
    private const MAX_CACHED_TENANTS = 10;

    private static array $cache = [];

    public function handle(Request $request, Closure $next): Response
    {
        $app = $this->resolveApp($request);

        if (! $this->activateConnection($app)) {
            return response()->json([
                'error' => 'Unable to connect to tenant database',
            ], 503);
        }

        return $next($request);
    }

    private function activateConnection(ConnectedApp $app): bool
    {
        $config = $app->getDatabaseConfig();
        $fingerprint = sha1(serialize($config));
        $name = self::connectionName($app->id);

        $cachedFingerprint = self::$cache[$app->id] ?? null;

        if ($cachedFingerprint !== null && $cachedFingerprint !== $fingerprint) {
            // Credentials rotated. Drop the stale connection.
            $this->disposeConnection($name);
            unset(self::$cache[$app->id]);
        }

        config(["database.connections.{$name}" => $config]);

        $manager = app('db');

        if (! $this->hasLiveConnection($manager, $name)) {
            try {
                $manager->connection($name)->getPdo();
            } catch (\Exception $e) {
                unset(self::$cache[$app->id]);

                return false;
            }
        }

        // Touch the cache entry so it becomes most-recently-used.
        unset(self::$cache[$app->id]);
        self::$cache[$app->id] = $fingerprint;

        $this->aliasNightowlTo($manager, $name);
        $this->evictOverflow();

        return true;
    }

    private static function connectionName(string $appId): string
    {
        return self::ALIAS.'_tenant_'.$appId;
    }
}
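Two helpers the class references, hasLiveConnection and disposeConnection, aren't shown. A plausible sketch of both, assuming the same reflection access to DatabaseManager::$connections used by the aliasing below:

```php
// Hypothetical sketches, not the production code verbatim.
// hasLiveConnection: is there a resolved Connection for this name that
// still has a PDO attached? getRawPdo() reads the property without
// triggering a reconnect (it may return a Closure for a lazy connection,
// which counts as live: it will connect on first use).
private function hasLiveConnection(DatabaseManager $manager, string $name): bool
{
    $connections = $this->connectionsRef()->getValue($manager);

    return is_array($connections)
        && isset($connections[$name])
        && $connections[$name]->getRawPdo() !== null;
}

// disposeConnection: close the PDO and forget the wrapper.
// DatabaseManager::purge() is disconnect() plus removal from $connections.
private function disposeConnection(string $name): void
{
    app('db')->purge($name);
}
```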
A few things are doing heavy lifting here that aren't obvious from the surface read.
The reflection trick
This is the load-bearing optimization. Aliasing is implemented by mutating DatabaseManager::$connections directly:
private function aliasNightowlTo(DatabaseManager $manager, string $tenantName): void
{
    $ref = $this->connectionsRef();

    $connections = $ref->getValue($manager);

    if (! is_array($connections) || ! isset($connections[$tenantName])) {
        return;
    }

    $connections[self::ALIAS] = $connections[$tenantName];

    $ref->setValue($manager, $connections);
}
The reflection property is cached on the class so we pay the introspection cost once per worker, not once per request.
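connectionsRef() itself is small. A sketch, assuming the ReflectionProperty is held in a static on the middleware:

```php
// Built once per worker, reused for every alias swap afterwards.
private static ?\ReflectionProperty $connectionsRef = null;

private function connectionsRef(): \ReflectionProperty
{
    if (self::$connectionsRef === null) {
        self::$connectionsRef = new \ReflectionProperty(
            \Illuminate\Database\DatabaseManager::class,
            'connections'
        );
        // No-op from PHP 8.1 onward, harmless on older versions.
        self::$connectionsRef->setAccessible(true);
    }

    return self::$connectionsRef;
}
```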
The alternative is DB::purge('nightowl') followed by Config::set('database.connections.nightowl', $config) followed by DB::connection('nightowl'). That works, but purge closes the underlying PDO if no other reference exists, and the next request will pay the full handshake to re-establish it. Aliasing via reflection points the canonical name at an already-open PDO instead. The handshake stays paid.
This is the kind of thing you'd never reach for if you didn't need it. We didn't, until we measured the per-request handshake cost on a cross-region tenant database and watched it dominate the latency budget.
The alternative worth naming is the model-side version: override getConnectionName() on every Eloquent model to return the current tenant's connection name, and skip the reflection entirely. We didn't pick it because models would have to know multitenancy exists — every new model has to remember the override, and any third-party trait or package that hardcodes getConnection() quietly breaks tenant isolation. Aliasing the canonical name in the manager keeps the models dumb, which has been worth the reflection.
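For reference, the rejected model-side version is roughly a one-method base class. TenantContext here is hypothetical, standing in for whatever request-scoped object knows the active tenant's connection name:

```php
// Rejected alternative: every tenant model must extend this (or remember
// the override), and anything that bypasses it silently leaks.
abstract class TenantModel extends \Illuminate\Database\Eloquent\Model
{
    public function getConnectionName(): ?string
    {
        // Hypothetical request-scoped holder of the active connection name.
        return app(TenantContext::class)->connectionName();
    }
}
```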
The credential fingerprint
sha1(serialize($config)) is doing one specific job: detecting credential rotation. If a customer changes their tenant database password and the new credentials propagate to our platform DB, the next request for that tenant will compute a different fingerprint than what's cached. The middleware sees the mismatch, disposes the stale connection (which will fail to authenticate going forward anyway), and opens a fresh one with the new credentials.
Without the fingerprint check, the cached PDO would keep working until the database closed it on its end (which could be hours), and meanwhile new requests for the same tenant would be trying to use the stale connection. The fingerprint converts a "we'll discover this when it breaks" into "we discover this on the next request".
md5, crc32, or xxhash would all work for this — the comparison is non-cryptographic, you just need a stable digest. I picked sha1 out of habit; if your linter complains about it for being cryptographically deprecated, swap in xxh128 or md5 and don't think about it again.
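The rotation check is easy to convince yourself of in isolation (config values here are hypothetical):

```php
<?php
// Any field change, including a password rotation, produces a
// different fingerprint on the next request for that tenant.
$config = [
    'driver'   => 'pgsql',
    'host'     => 'db.customer.example',
    'database' => 'nightowl_agent',
    'username' => 'nightowl_ro',
    'password' => 'old-secret',
];

$before = sha1(serialize($config));

$config['password'] = 'new-secret'; // customer rotates credentials

$after = sha1(serialize($config));

assert($before !== $after);     // mismatch => dispose + reconnect
assert(strlen($after) === 40);  // sha1 hex digest: stable, cheap to compare
```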
LRU eviction
The cache is bounded at MAX_CACHED_TENANTS = 10 per worker. The eviction logic exploits a property of PHP arrays that's easy to miss:
private function evictOverflow(): void
{
    while (count(self::$cache) > self::MAX_CACHED_TENANTS) {
        $evictedAppId = (string) array_key_first(self::$cache);

        unset(self::$cache[$evictedAppId]);

        $this->disposeConnection(self::connectionName($evictedAppId));
    }
}
PHP arrays maintain insertion order; they always have, because they're ordered hash maps under the hood. Every time a tenant is touched, we unset and re-insert the key, which moves it to the end of the array. array_key_first (available since PHP 7.3) therefore returns the least-recently-used app. No need for a separate doubly-linked list or priority queue.
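The unset-plus-reinsert trick in isolation, with hypothetical tenant IDs:

```php
<?php
// Touch order: a, b, c, then a again.
$cache = [];

$cache['a'] = 'fp-a';
$cache['b'] = 'fp-b';
$cache['c'] = 'fp-c';

// "Touching" tenant a: unset + re-insert moves the key to the end.
unset($cache['a']);
$cache['a'] = 'fp-a';

// The front of the array is now the least-recently-used tenant.
assert(array_key_first($cache) === 'b');
assert(array_keys($cache) === ['b', 'c', 'a']);
```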
When the cache overflows, the oldest entry is evicted, the corresponding connection is purged from the DatabaseManager, and the underlying PDO is closed. Memory and connection slot both return to the pool.
The connection math
This is where you decide if the pattern actually fits your deployment.
For a worker that caches up to N tenants, and a deployment with W workers, the upper bound on simultaneously-open tenant PDOs is N × W. Each of those PDOs holds one connection slot in the target tenant database. If you have one tenant database per customer, this is bounded by your customer count regardless. If multiple customers share a database (rare in self-hosted models, common in pool-based multitenancy), N × W can multiply against max_connections quickly.
Concrete example. We run Octane with FrankenPHP, workers=auto on a 4-core box, which gives roughly 4 workers. MAX_CACHED_TENANTS = 10. Upper bound: 40 open tenant PDOs total across the API process. Each tenant database has its own max_connections budget. For a single-tenant-per-customer model the math doesn't bind anywhere interesting because each tenant Postgres only has to support a peak of 4 connections from us (one per worker that's recently served them).
If your model packs many customers into one Postgres, you want PgBouncer between the API and Postgres, with transaction-level pooling. The middleware doesn't care — the PDO talks to PgBouncer, PgBouncer multiplexes onto a smaller pool of real Postgres connections. We deferred PgBouncer entirely for now because each customer brings their own Postgres, and the math doesn't justify the operational complexity until something else forces it.
The cap of 10 is arbitrary and tuned to our request distribution. Tenants that haven't been touched in 10 requests across a worker get evicted. If your traffic pattern has fatter tails (one customer dominates, then another), bump it up. If your worker memory is tight, drop it. The right number is whatever keeps your hit rate above 90% in production.
The rollback path
The same middleware code works under FPM with no changes — see the mechanism note in "The shape of the fix" above. The practical consequence is that you keep a real Octane rollback path: if something else in Octane misbehaves in production, you can flip the runtime back to FPM (or php artisan serve as a stopgap — serve gives each request a fresh script execution, a third execution model with its own characteristics) without touching the multitenancy code. The static cache becomes dead weight under FPM, the handshake gets paid every time, and everything else works.
Worth knowing: the LRU eviction logic runs under FPM too, but it's cleaning up DatabaseManager instances that have already been garbage-collected by the time the next request boots a new one. Harmless dead code in the FPM path that becomes load-bearing the moment you switch back to Octane.
What's missing from this pattern
Honest tradeoffs:
The credential fingerprint detects rotation but doesn't detect target host changes that resolve to a different physical database. If a customer points their db_host at a new DNS name that happens to resolve to the same IP, the fingerprint changes and we'll reconnect (correct). If they don't change config but their DNS points somewhere new, the fingerprint stays the same and we'll keep using the old connection until something breaks. This is fine for our threat model (customer-controlled DNS, customer-controlled credentials, no scenario where they swap databases under us silently) but it might not be fine for yours.
We don't health-check cached connections. The first sign that a stale PDO is dead is when a query fails. Most Postgres deployments will close idle connections after some interval, and we'll see the failure on the next request. The middleware currently treats this as a request-level failure rather than transparently reconnecting. Adding a "if query fails with gone away, drop and retry once" wrapper is on the list, but in practice the cap of 10 and the typical traffic distribution mean most cached connections are used often enough not to age out. Failure is also bounded on the way in: we set PDO::ATTR_TIMEOUT = 5 in the connection options, so a dead tenant host fails fast instead of hanging the worker.
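The retry-once wrapper we haven't built yet might look something like this. A sketch only: causedByLostConnection comes from Laravel's own DetectsLostConnections trait, which matches the usual "server has gone away" / "server closed the connection" message catalogue:

```php
use Illuminate\Database\DetectsLostConnections;
use Illuminate\Database\QueryException;

trait RetriesDeadTenantConnections
{
    use DetectsLostConnections;

    // Run $query against the tenant connection; if the cached PDO died
    // while idle, pay one fresh handshake and retry exactly once.
    private function withReconnectRetry(string $name, \Closure $query): mixed
    {
        try {
            return $query();
        } catch (QueryException $e) {
            if (! $this->causedByLostConnection($e)) {
                throw $e; // a real query error, not a dead socket
            }

            app('db')->purge($name);      // drop the dead Connection + PDO
            app('db')->connection($name); // fresh handshake

            return $query(); // a second failure propagates to the caller
        }
    }
}
```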
Octane's OperationTerminated listener runs DisconnectFromDatabases::class by default, which would close all open connections at the end of every request. We remove that specific listener in config/octane.php while keeping FlushOnce and FlushTemporaryContainerInstances — we still want the container scrubbed between requests, we just don't want the database connections closed. The LRU is the only intentional cross-request static in our middleware, and it's the whole point of running Octane here. If you adopt this pattern, you have to make the same change. If you forget, you'll wonder why the cache is empty on every request.
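In config/octane.php, the change is one line in the OperationTerminated listener list. Class names as shipped by laravel/octane; check the stub your installed version published:

```php
// config/octane.php (fragment)
'listeners' => [
    // ...
    OperationTerminated::class => [
        FlushOnce::class,
        FlushTemporaryContainerInstances::class,
        // DisconnectFromDatabases::class,  // removed: the LRU middleware
        //                                  // owns tenant connection lifetime
    ],
    // ...
],
```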
Why not Stancl/tenancy?
Reasonable question — Stancl is the dominant Laravel multitenancy package and it has Octane support. I didn't end up using it for NightOwl, for reasons specific to the shape of this particular API.
Tenant ownership is inverted. Stancl's main mode assumes the platform creates and manages tenant databases (php artisan tenants:create provisions a DB, runs tenant migrations, optionally seeds it). NightOwl's customers bring their own PostgreSQL. We never provision, never migrate from this side (the agent package handles tenant migrations on the customer's infra), and we never see the database outside of read-only API requests. The onboarding model Stancl is built around doesn't map.
Tenant identification is path-based. Stancl's bootstrappers are designed around InitializeTenancyByDomain / BySubdomain / RequestData. We route on /data/{app}/... and resolve the tenant from a route parameter against a row of encrypted credentials. You can do this with InitializeTenancyByRequestData, but at that point you're paying for the package without using its primary affordance.
The Octane optimization is the whole point. Stancl supports Octane, but to my reading its bootstrappers still reinitialize the tenant context on every switch. The LRU + reflection-aliasing pattern isn't something it gives you out of the box, so you'd have to layer it on top and fight the package's lifecycle to do so. For a thing whose only job is "swap Eloquent's default connection per request", layering on Stancl looked like more work than skipping it.
Surface area we don't need. Stancl ships tenant events, queue isolation, cache scoping, asset scoping, central-vs-tenant context, the tenancy() helper. For an API that's literally "look up app → activate its PDO → run queries", that's a lot to learn, debug, and audit. A smaller dependency surface is also a smaller security review surface, which matters when self-hosted customers evaluate us.
Honest disclosure: I didn't formally bench Stancl against the custom middleware. I came in knowing the shape the data API had to be (path-routed, BYO Postgres, read-only) and the custom middleware was the obvious thing to write. A more rigorous answer would have measured both.
If you're building a typical SaaS where you own customer databases, you should probably just use Stancl. The case for rolling your own is narrow: you control the tenant identification, you don't manage tenant DB lifecycle, and you have a performance or security constraint that Stancl's abstractions get in the way of. Three out of three for NightOwl. Probably not for most apps.
Where this fits
If you're running Laravel multitenancy on FPM and you're happy, you don't need any of this. The naive Config::set + DB::purge pattern works fine and the per-request handshake cost is small enough not to matter.
If you've moved to Octane (or you're evaluating it) and your tenant databases are in the same region as your app, you'll see a small win from this pattern, maybe enough to justify the complexity, maybe not.
If you've moved to Octane and your tenant databases are in another region (or you're going through any kind of long-haul connection), you need something like this. The handshake cost dominates request latency and there's no other way to amortize it.
The pattern reused here (per-worker static cache + reflection-based aliasing + LRU eviction) generalizes beyond tenant database connections. Anywhere you have a per-request resource that's expensive to acquire and safe to share across the same request boundary, you can apply it. We picked databases because that was our actual bottleneck. Yours might be HTTP clients with TLS, gRPC channels, or anything else with a fat connection setup cost.
None of this is in the Octane docs, which is where I got stuck for a while. If you're running into the same wall, the middleware above is the version that's been holding up in production for us.