From Protocol:
Cloudflare is ready to launch a new cloud object storage service that promises to be cheaper than the established alternatives, a step the company believes will catapult it into direct competition with AWS and other cloud providers. The service will be called R2 — “one less than S3,” quipped Cloudflare CEO Matthew Prince in an interview with Protocol ahead of Cloudflare’s announcement Tuesday morning. Cloudflare will not charge data-egress fees for customers using R2, taking direct aim at the fees AWS charges developers to move data out of its widely popular S3 storage service.
R2 will run across Cloudflare’s global network, which is best known for providing anti-DDoS services to its customers by absorbing and dispersing the massive amounts of traffic that accompany denial-of-service attacks on websites. It will be compatible with S3’s API, which makes it much easier to move applications already written with S3 in mind, and Cloudflare said that beyond the elimination of egress fees, the new service will be 10% cheaper to operate than S3.
Cloudflare has always framed itself as a disruptor; R2 lives up to its reputation.
Cloudflare’s Evolution
I already wrote earlier this year about Cloudflare’s unique advantages in a world where the Internet is increasingly fragmented, thanks to the distributed nature of its service, and why that positioned the company to compete with the major cloud providers in the long run. What is worth referring to with this announcement, though, is this clip I posted of Prince’s initial launch of Cloudflare at TechCrunch Disrupt 2010, particularly this bit from the Q&A:
So from a competitive standpoint, obviously you’re intruding on some of the stuff that the bigger boys are doing, and they’ve been at this for a long time. What’s to stop them from coming in and replicating your model?
There are companies that are doing things at the high end of the market, and they make very fat margins doing it. I’m really a big fan of Clay Christensen, he was a business school professor of mine, and I like the idea of businesses that come in from below. The big incumbents have an Innovator’s Dilemma trying to come down and deal with a company like ours, but we welcome the competition. We think we make a really great product. It’s designed for a certain type of users that are very different than the users that a larger company might be trying to attract.
Prince was spot-on about the competitive response of incumbents to Cloudflare’s offering for the long-tail of websites: it never came, because Cloudflare was serving a new market. This is how Christensen defined new market disruption in The Innovator’s Solution:
The third dimension [is] new value networks. These constitute either new customers who previously lacked the money or skills to buy and use the product, or different situations in which a product can be used — enabled by improvements in simplicity, portability, and product cost…We say that new-market disruptions compete with “nonconsumption” because new-market disruptive products are so much more affordable to own and simpler to use that they enable a whole new population of people to begin owning and using the product, and to do so in a more convenient setting.
That’s not the end of the story, though: new market disruptors don’t stand still, but can leverage the huge runway provided by the new market to build up their product capabilities in a way that eventually threatens the incumbent. Christensen continued:
Although new-market disruptions initially compete against nonconsumption in their unique value network, as their performance improves they ultimately become good enough to pull customers out of the original value network into the new one, starting with the least-demanding tier. The disruptive innovation doesn’t invade the mainstream market; rather, it pulls customers out of the mainstream value network into the new one because these customers find it more convenient to use the new product.
This was Cloudflare Workers, edge compute functionality that was a great match for Cloudflare’s CDN offering, but certainly not a competitor for AWS’s core offerings. Back to Christensen:
Because new-market disruptions compete against nonconsumption, the incumbent leaders feel no pain and little threat until the disruption is in its final stages.
This is where R2 comes in.
The AWS Transformation
In a vacuum, most businesses would prefer making a fixed cost investment instead of paying on a marginal use basis. Consider Spotify’s music-streaming business: one of the company’s core challenges is that the more customers Spotify has the more it has to pay music labels — streaming rights are a marginal cost. A streaming service like Netflix, on the other hand, that spends up front for its own content, gets to keep whatever increased revenue that content drives for itself. This same logic applies to computing capacity: buying your own servers is, in theory, cheaper than renting compute from a service like AWS.
When it comes to compute, however, reality is very different than theory.
- First, usage may be uneven, whether because a business is seasonal, hit-driven, or anything in-between. That means compute capacity has to be built out for the worst-case scenario, even though most resources will sit idle most of the time.
- Second, compute capacity is likely growing — hopefully rapidly, in the case of a new business. Building out infrastructure, though, is not a linear process: new capacity comes online all at once, which means a business has to overbuild for its current needs so that it can accommodate future growth, which again means that most resources sit idle most of the time.
- Third, compute capacity is complex and expensive. That means there are both huge fixed costs that have to be invested before the compute can be used, and also significant ongoing marginal costs to manage the compute already online.
This is why AWS was so transformative: Amazon would spend all of the up-front money to build out compute capacity for all of its customers, and then rent it on-demand, solving all of the problems I just listed:
- Customers could scale their compute up-or-down instantly in response to their needs.
- Customers could rent exactly as much compute as they needed at any moment in time, even as they were able to seamlessly handle growth.
- AWS would be responsible for all of the up-front investment and ongoing maintenance, and because they would operate at such scale, they would get much better prices from suppliers than any individual company could on its own.
It’s impossible to overstate the extent to which AWS changed the world, particularly Silicon Valley. Without the need to buy servers, companies could be started in a bedroom, creating the conditions for the entire angel ecosystem and the shift of traditional venture capital to funding customer acquisition for already proven products, instead of Sun servers for ideas in PowerPoints. Amazon, meanwhile, suddenly had a free option on basically every startup in the world, positioning itself to levy The Amazon Tax on every company that succeeded.
The scale of these options became clear to the world in 2015 when Amazon broke out AWS’s financials for the first time; I called it The AWS IPO:
The big question about AWS, though, has been whether Amazon can keep their lead. Data centers are very expensive, and Amazon has a lot less cash and, more importantly, a lot less profit than Google or Microsoft. What happens if either competitor launches a price war: can Amazon afford to keep up?
To be sure, there were reasons to suspect they could: for one, Amazon already has significantly more scale, which means their costs on a per-customer basis are lower than Microsoft’s or Google’s. Perhaps more important is the corporate culture that results from a “your-margins-are-my-opportunity” mindset: Amazon can stomach a few percentage points of margin on a core business far more comfortably than Microsoft or Google, fat off of software and advertising margins, respectively. Indeed, when Google slashed prices in the spring of 2014, Amazon immediately responded and proceeded to push prices down further still, just as it had ever since AWS’s inception (the price cuts in response to Google were the 42nd for the company). Still, the question remained: was this sustainable? Could Amazon afford to compete?
This is why Amazon’s latest earnings were such a big deal: for the first time the company broke out AWS into its own line item, revealing not just its revenue (which could be teased out previously) but also its profitability. And, to many people’s surprise, and despite all the price cuts, AWS is very profitable: $265 million in profit on $1.57 billion in sales last quarter alone, for an impressive (for Amazon!) 17% net margin.
Those numbers last quarter are up to $4.2 billion in profit on $14.9 billion in revenue for a net margin of 28%: Amazon has increased its margins, even as Microsoft and Google have increased their focus on the cloud. A big reason is that Microsoft in particular has pursued different customers, coaxing existing businesses to the cloud, thanks in part to their early focus on hybrid solutions.1 Google, meanwhile, has been even further behind, particularly in terms of matching AWS’s sheer breadth of services (even if they weren’t always technically great), making it harder for businesses already used to AWS to shift.
The egress fees R2 is targeting, though, have played a big role as well.
S3’s Egress Pricing
S3 is the OG Amazon Web Service: it launched on March 14, 2006, with this press release:
Amazon Web Services today announced “Amazon S3(TM),” a simple storage service that offers software developers a highly scalable, reliable, and low-latency data storage infrastructure at very low costs…
Amazon S3 is storage for the Internet. It’s designed to make web-scale computing easier for developers. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers…
S3 lets developers pay only for what they consume and there is no minimum fee. Developers pay just $0.15 per gigabyte of storage per month and $0.20 per gigabyte of data transferred.
Prices have, as you might expect, come down over the ensuing 15 years:
- A gigabyte of storage today is $0.023, a decrease of 85%
- Moving data into S3 is free, a decrease of 100%
- Moving a gigabyte out of S3 is $0.09, a decrease of 55%
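Those cited declines check out with a few lines of arithmetic, using S3’s 2006 launch prices and today’s base rates:

```python
# Back-of-the-envelope check of the price declines cited above,
# from S3's 2006 launch prices to today's base rates.

LAUNCH_STORAGE = 0.15   # $/GB-month at launch in 2006
LAUNCH_TRANSFER = 0.20  # $/GB transferred (in or out) at launch

TODAY_STORAGE = 0.023   # $/GB-month, current base rate
TODAY_INGRESS = 0.00    # moving data in is now free
TODAY_EGRESS = 0.09     # $/GB moved out, current base rate

def decrease(old: float, new: float) -> int:
    """Percentage decrease from old price to new price, whole percent."""
    return round((old - new) / old * 100)

print(decrease(LAUNCH_STORAGE, TODAY_STORAGE))   # storage: 85
print(decrease(LAUNCH_TRANSFER, TODAY_INGRESS))  # ingress: 100
print(decrease(LAUNCH_TRANSFER, TODAY_EGRESS))   # egress: 55
```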
These numbers are the base rates; prices vary based on storage tier, whether or not you use Amazon’s CDN, and, more importantly, whether or not you have a long-term contract with AWS (more on this in a moment). What is consistent across all of those variables, though, is the difference between the cost of moving data into AWS and the cost of moving data out; a blog post from earlier this year called the difference AWS’s “Hotel California”:
Another oddity of AWS’s pricing is that they charge for data transferred out of their network but not for data transferred into their network…We’ve tried to be charitable in trying to understand why AWS would charge this way. Disappointingly, there just doesn’t seem to be an innocent explanation. As we dug in, even things like writes versus reads and the wear they put on storage media, as well as the challenges of capacity planning for storage capacity, suggest that AWS should charge less for egress than ingress.
But they don’t.
The only rationale we can reasonably come up with for AWS’s egress pricing: locking customers into their cloud, and making it prohibitively expensive to get customer data back out. So much for being customer-first.
Even if companies are careful to not make any of their back-end services AWS-specific, the larger you grow the more data you have on AWS, and moving that data off is an eye-watering expense. And so, when another company builds a service that looks interesting — like, say, Cloudflare Workers — it’s easier to simply wait for Amazon’s alternative and build using that, and oops, now you’re even more locked into AWS!
What is happening in terms of the value chain is straightforward: Amazon paid fixed costs for its infrastructure, and is charging for it on a marginal basis; all of the upside here accrues to AWS, as seen in the service’s margins. That is also an important part of AWS’s retention strategy: for most AWS customers the easiest solution to rising costs is to simply sign a long-term contract, dramatically decreasing their prices (again, Amazon has the margin to spare) while ensuring they stay on AWS that much longer, accumulating that much more data and relying on that many more AWS-specific services. Hotel Seattle, as it were.
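The scale of that lock-in is easy to illustrate. This is a rough sketch using the $0.09/GB base egress rate; the data sizes are hypothetical, and 1 TB is counted as 1,000 GB:

```python
# Rough illustration of the "Hotel California" dynamic: the one-time
# bill to move an accumulated data set off S3 at the base egress rate.
# Data sizes are hypothetical; real bills vary with tier and contract.

EGRESS_PER_GB = 0.09  # $/GB, base rate

def exit_cost(terabytes: float) -> float:
    """One-time egress bill to move `terabytes` out of S3, in dollars."""
    return terabytes * 1_000 * EGRESS_PER_GB  # decimal TB: 1 TB = 1,000 GB

for tb in (10, 100, 1_000):
    print(f"{tb:>5} TB -> ${exit_cost(tb):,.0f}")
```

At a petabyte, the bill to simply retrieve your own data approaches six figures, which is the point: the more data you accumulate, the less plausible leaving becomes.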
That blog post, by the way, was co-written by Prince; in it he made the case that, based on Cloudflare’s understanding of bandwidth costs, AWS was earning a 7,959% margin on US/Canada egress fees. Prince’s conclusion at the time was that AWS ought to join the Bandwidth Alliance and discount or waive egress fees when sending data to Cloudflare (which doesn’t cost AWS anything, thanks to an industry-standard private network interface), but two months on, the true point of Prince’s post was clearly this week’s announcement.
R2’s Low-End Disruption
From the Cloudflare blog:
Object Storage, sometimes referred to as blob storage, stores arbitrarily large, unstructured files. Object storage is well suited to storing everything from media files or log files to application-specific metadata, all retrievable with consistent latency, high durability, and limitless capacity.
The most familiar API for Object Storage, and the API R2 implements, is Amazon’s Simple Storage Service (S3). When S3 launched in 2006, cloud storage services were a godsend for developers. It didn’t happen overnight, but over the last fifteen years, developers have embraced cloud storage and its promise of infinite storage space.
As transformative as cloud storage has been, a downside emerged: actually getting your data back. Over time, companies have amassed massive amounts of data on cloud provider networks. When they go to retrieve that data, they’re hit with massive egress fees that don’t correspond to any customer value — just a tax developers have grown accustomed to paying.
Enter R2.
The reason that Cloudflare can pull this off is the same reason why S3’s margins are so extraordinary: bandwidth is a fixed cost, not a marginal one. To take the most simplified example possible, if I were to have two computers connected by a cable, the cost of bandwidth is however much I paid for the cable; once connected I can transmit as much data as I would like for free — in either direction.
That’s not quite right, of course: I am constrained by the capacity of the cable; to support more data transfer I would have to install a higher capacity cable, or more of them. What, though, if I already had built a worldwide network of cables for my initial core business of protecting websites from distributed denial-of-service attacks and offering a content delivery network, the value of which was such that ISPs everywhere gave me space in their facilities to place my servers? Well, then I would have massive amounts of bandwidth already in place, the use of which has zero marginal costs, and oh-by-the-way locations close to end users to stick a whole bunch of hard drives.
In other words, I would be Cloudflare: I would charge marginal rates for my actual marginal costs (storage, and some as-yet-undetermined-but-promised-to-be-lower-than-S3 rate for operations), and give away my zero marginal cost product for free. S3’s margin is R2’s opportunity.
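That economic logic can be sketched with a simple bill comparison. Note that R2’s storage rate had not yet been published; the roughly-10%-below-S3 figure below is an assumption based on Cloudflare’s claim, and the workload is invented:

```python
# Sketch of why zero egress matters more than the storage rate,
# comparing a month at S3 base rates against R2 under assumed pricing.
# R2's final rates were not yet published; these are illustrative only.

S3_STORAGE = 0.023             # $/GB-month, base rate
S3_EGRESS = 0.09               # $/GB, base rate
R2_STORAGE = S3_STORAGE * 0.9  # assumption: ~10% below S3
R2_EGRESS = 0.0                # the headline feature: free egress

def monthly_bill(stored_gb, egress_gb, storage_rate, egress_rate):
    """Total monthly cost in dollars for a given storage/egress mix."""
    return stored_gb * storage_rate + egress_gb * egress_rate

# Hypothetical workload: 50 TB stored, read out in full twice a month.
stored, egress = 50_000, 100_000
print(f"S3: ${monthly_bill(stored, egress, S3_STORAGE, S3_EGRESS):,.0f}")
print(f"R2: ${monthly_bill(stored, egress, R2_STORAGE, R2_EGRESS):,.0f}")
```

For any read-heavy workload the egress line dominates the storage line, which is exactly the margin Cloudflare is giving away.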
Modular Disruption
Cloudflare, at least in AWS terms, remains a minnow; the company had $152 million in revenue last quarter, 10 percent of AWS’s revenue upon its unveiling six years ago. Prince, though, is thinking big; from that Protocol article:
“We are aiming to be the fourth major public cloud,” Prince said. Cloudflare already offers a serverless compute service called Workers, and Prince thinks that adding a low-cost storage service will encourage more developers and companies to build applications around Cloudflare’s services.
That is one way this could play out: R2 is a compelling choice for a certain class of applications that could be built to serve a lot of data without much compute. Moreover, by virtue of using the S3 API,2 R2 can also be dropped into existing projects; developers can place R2 in front of S3, pulling out data as needed, once, and getting free egress forever-after.
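That “R2 in front of S3” pattern is essentially a read-through cache: serve from R2 when possible, and on a miss fetch from S3 (paying egress once) and write the object into R2 so every later read is egress-free. The in-memory dicts below stand in for the two buckets; real code would use an S3-compatible client against both services, so this is an illustration of the flow, not an implementation:

```python
# Minimal sketch of placing R2 in front of S3 as a read-through cache.
# Dicts stand in for the R2 and S3 buckets; a real version would make
# S3-API calls against both services.

class ReadThroughStore:
    def __init__(self, r2: dict, s3: dict):
        self.r2 = r2              # stand-in for the R2 bucket
        self.s3 = s3              # stand-in for the S3 bucket
        self.s3_egress_bytes = 0  # egress actually paid to AWS

    def get(self, key: str) -> bytes:
        if key in self.r2:             # hit: free egress from R2
            return self.r2[key]
        obj = self.s3[key]             # miss: fetch from S3 once...
        self.s3_egress_bytes += len(obj)
        self.r2[key] = obj             # ...and cache it in R2
        return obj

store = ReadThroughStore(r2={}, s3={"logs/1": b"x" * 1024})
for _ in range(3):
    store.get("logs/1")
print(store.s3_egress_bytes)  # egress paid for one fetch only: 1024
```

Each object incurs S3 egress exactly once; after that, reads cost nothing, which is why the pattern amortizes the exit tax instead of paying it forever.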
Still, AWS is far more than storage; the second AWS product was EC2 — Elastic Compute Cloud — which lets customers rent virtual computers that by definition are far more capable than a necessarily limited edge computing service like Workers, along with a host of database offerings and the sort of specialized services I mentioned earlier. Not all of these will necessarily translate well to Cloudflare’s distributed infrastructure, either.
Again, though, Cloudflare’s distributed nature is the entire reason the company’s cloud ambitions are so intriguing: R2 may be a direct competitor for S3, but that doesn’t mean that anything else about Cloudflare’s cloud ambitions has to be the same. Go back to Christensen and The Innovator’s Solution:
Modularity has a profound impact on industry structure because it enables independent, nonintegrated organizations to sell, buy, and assemble components and subsystems. Whereas in the interdependent world you had to make all of the key elements of the system in order to make any of them, in a modular world you can prosper by outsourcing or by supplying just one element. Ultimately, the specifications for modular interfaces will coalesce as industry standards. When that happens, companies can mix and match components from best-of-breed suppliers in order to respond conveniently to the specific needs of individual customers.
As depicted in figure 5–1, these nonintegrated competitors disrupt the integrated leader. Although we have drawn this diagram in two dimensions for simplicity, technically speaking they are hybrid disruptors because they compete with a modified metric of performance on the vertical axis of the disruption diagram, in that they strive to deliver rapidly exactly what each customer needs. Yet, because their nonintegrated structure gives them lower overhead costs, they can profitably pick off low-end customers with discount prices.
This is where zero egress costs could be an even bigger deal strategically than they are economically. S3 was the foundation of AWS’s integrated cloud offering, and remains the linchpin of the company’s lock-in; what if R2, thanks to its explicit rejection of data lock-in, becomes the foundation of an entirely new ecosystem of cloud services that compete with the big three by being modular? If you can always get access to your data for free, it becomes a lot more plausible to connect that data to best-of-breed compute options built by companies focused on doing one thing well, instead of simply waiting for Amazon to offer up their pale imitation that doesn’t require companies to pay through the nose to access.
Moreover, like any true disruption, it will be very difficult for Amazon to respond: sure, R2 may lead Amazon to reduce its egress fees, but given the importance of those fees to both AWS’s margins and its lock-in, it’s hard to see them going away completely. More importantly, AWS itself is locked-in to its integrated approach: the entire service is architected both technically and economically to be an all-encompassing offering; to modularize itself in response to Cloudflare would be suicidal.
At the same time, this is also why Cloudflare’s success in becoming the fourth cloud, should it happen, will likely be additive to the market: companies on AWS are by-and-large not going anywhere, but there are new companies being formed all of the time, and a whole bunch of companies that have yet to move to the cloud, as well as the aforementioned Internet fragmentation that plays to Cloudflare’s advantage. Here it is a benefit to Cloudflare that it is a relatively small company: opportunities that seem trivial to giants will be big wins, giving the company the increasing scale it needs to flesh out its offerings and build its new cloud ecosystem. Success is not assured, but the strategy is sound enough to make Prince’s late professor proud.
- Amazon resisted hybrid for a long time, because it’s a technically terrible solution relative to just moving everything to the cloud, which is to say that Amazon made the same mistake all of Microsoft’s competitors make: relying on a “better product” instead of actually meeting customers where they are and solving their needs ↩
- Copying APIs is a favorite tactic of Amazon when it comes to open source projects. ↩
Originally published on Stratechery.