Why Access is the Most Underestimated Risk in Digital Infrastructure

By Philip Mataras

Most organizations plan for data loss. The bigger risk is access loss—when the data is intact but unreachable.

When X went down last week (the third Cloudflare-related outage in three months), I noticed something in the conversation that stuck with me. Nobody was worried about their posts being deleted. The content was fine. The problem was simpler: millions of people couldn’t get to it.

That distinction, between data loss and access loss, is the reason we built ar.io: a critical access layer that keeps society’s most important data reachable when primary systems fail.

Once you start looking for it, the pattern shows up everywhere. Azure’s West US 2 region failed in January. Data wasn’t destroyed, just unreachable for eight hours. Verizon had a nationwide outage that same week. Meta announced Workplace is shutting down in June. Not hacked, not breached, just discontinued. Millions of enterprise users now have nine months to migrate or lose access to everything they built on that platform.

The data survives. Access doesn’t. And I don’t think most organizations have fully absorbed what that means.

I spent almost six years at KPMG working on enterprise infrastructure. A lot of that time was spent thinking about data protection: backup strategies, disaster recovery plans, business continuity frameworks. The mental model was always the same. Protect against loss. Replicate everything. Assume the worst thing that can happen is deletion or corruption.

That mental model exists for good reason. Deletion happens. Ransomware is a real and growing threat — the kind that encrypts everything and holds it hostage until you pay. Hardware fails. Human error wipes out databases. These risks deserve every bit of attention they get.

But there’s another category of failure that is far more common and often more costly for businesses, institutions, and citizens alike: access loss. The data remains intact, but the path to it fails. An outage. A dependency going down. A subscription lapsing. A vendor deciding to shut down a product. The data is fine. You just can’t reach it.

When I left KPMG and started building ar.io, this is what kept showing up. Not catastrophic deletion events, but access failures. And I started wondering why our risk frameworks didn’t weight them equally.

With that in mind, we’ve just reintroduced ar.io with a clearer message for enterprises and institutions: always access. Because explaining “permanent decentralized storage with distributed access” to a CIO who’s never touched this technology misses the point. The point is certainty.

The numbers tell the story

Between August 2024 and August 2025, AWS, Azure, and Google Cloud experienced over 100 combined service outages. Not a hundred instances of data loss, but a hundred instances of access disruption. Organizations now average 86 outages per year. That’s more than one per week.

When AWS went down for 15 hours last October, four million users couldn’t reach their applications. When CrowdStrike pushed a bad update in July 2024, 8.5 million Windows systems crashed. The cost to Fortune 500 companies alone: $5.4 billion. The data wasn’t touched. Access was the failure.

According to recent research, 91% of mid-size and large enterprises now report that a single hour of downtime costs them over $300,000. For nearly half, it exceeds $1 million per hour. Globally, downtime costs the world’s largest companies $400 billion annually.

What should concern anyone thinking seriously about infrastructure risk is this: analysts are no longer treating outages as edge cases. Forrester is predicting at least two major multi-day hyperscaler outages in 2026. Their reasoning is worth paying attention to. Cloud providers are prioritizing investment in AI infrastructure while legacy systems receive less attention. Your uptime is competing with their AI roadmap. Right now, it’s not winning.

Subscription fragility makes this worse

When your data lives on a platform, your access depends on the subscription continuing. Stop paying, lose access. Vendor changes terms, lose access. Vendor decides the product is no longer strategic, as Meta did with Workplace, and suddenly you’re on someone else’s timeline.

This isn’t a flaw in the model. It is the model. Subscription-based infrastructure creates access fragility by design. As long as the business case makes sense to the provider, you have access. When it doesn’t, you don’t, and you rarely get a vote.

The same logic applies to centralized infrastructure more broadly. The data may be yours. The access often isn’t.

What comprehensive data protection actually looks like

I’ve come to think of it as three layers.

  • First, protection against deletion and corruption: data that can’t be altered or destroyed once it’s stored.

  • Second, protection against ransomware: storage that’s immutable by design, so there’s nothing to encrypt or hold hostage.

  • Third, protection against access loss: multiple independent paths to your data, so no single outage or business decision cuts you off.

Most backup strategies address the first two. Almost none address the third.

100% access to critical data should be a baseline expectation. The fact that it isn’t tells you something about how our infrastructure was designed: for a world where access was assumed. That assumption is breaking down.

This is what we set out to solve with ar.io. Permanent, immutable storage already exists. What was missing was distributed access: multiple independent paths to data, operated by different parties in different locations, so no single failure can cut you off.

Today, hundreds of independently operated access points serve data through ar.io. When one path fails, others keep working. Not because every failure was predicted, but because the system was designed assuming failures would happen.
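
To make “multiple independent paths” concrete, here’s a minimal client-side sketch in TypeScript. The gateway hostnames, timeout, and function name are illustrative assumptions, not an official ar.io client; the point is simply that no single endpoint is load-bearing.

    // Minimal sketch: fetch permanent data through several independently
    // operated gateways, falling back when one path fails.
    // Gateway hostnames and the 5-second timeout are assumptions for illustration.
    const GATEWAYS = [
      "https://arweave.net",
      "https://permagate.io",
      "https://ar-io.dev",
    ]; // independent operators = independent failure domains

    async function fetchWithFallback(txId: string): Promise<Response> {
      let lastError: unknown;
      for (const gateway of GATEWAYS) {
        try {
          // Arweave-style gateways serve stored data at /<transaction-id>
          const res = await fetch(`${gateway}/${txId}`, {
            signal: AbortSignal.timeout(5_000), // don't hang on a dead path
          });
          if (res.ok) return res; // first healthy path wins
          lastError = new Error(`${gateway} responded ${res.status}`);
        } catch (err) {
          lastError = err; // outage, DNS failure, timeout: try the next path
        }
      }
      // Reached only if every independent path fails at the same time
      throw new Error(`All gateways failed; last error: ${String(lastError)}`);
    }

Sequential fallback is the simplest version; a real client might race gateways in parallel or weight them by observed health. Either way, the design starts from the assumption that some path is always failing.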

That assumption changes how you architect systems. You stop asking “what if this fails?” and start asking “when this fails, what still works?”

I don’t think access risk is going away. If anything, it’s accelerating. More services depend on fewer providers. More business decisions are made by platforms you don’t control. More critical operations run on infrastructure that assumes continuous availability, an assumption the data increasingly shows is wrong.

The organizations that get ahead of this won’t be the ones scrambling when those predictions come true. They’ll be the ones who already built for all three failure modes: deletion, corruption, and access loss.

Because the data will probably survive.

The real question is whether you’ll be able to reach it.