In the early 2010s, I worked for a subsidiary of the New York Times. ConsumerSearch published professional reviews of everything from home appliances to big-ticket purchases. At the time, the old saying certainly rang true: “The brightest minds in the world are engaged in figuring out how to get their site into Google’s top 5.” My day job could be described as writing code to boost our search rankings.

Akamai was one of the secrets to our success. A review site is naturally image-heavy, and getting our images pushed out toward the edge meant faster page load times. Run that data through Google’s proprietary ranking algorithm, and we saw page load times correlate with page rank. That success led us to experiment with bolder (at the time) uses of CDNs to cache content, CSS, and scripts. Via Google’s scoring, we understood how critical speed was for success.

In the last few years, we’ve seen the rise of edge computing. The term “edge computing” has become something of a catch-all for running server-side code at some location between the big clouds and a user’s own equipment. IoT devices, 5G towers, CDNs, and telecommunications POPs have all become grounds for executing server-side workloads.

CDNs and Edge Computing

Content delivery networks (CDNs) are a natural fit for edge computing. They are strategically located geographically. They generally offer substantial computing power as well as broad network pipes. And distributing customer code onto these platforms is a well-worn pattern (unlike, say, 5G towers).

However, there is a notable difference between delivering static material like image files and executing a customer-supplied application in an edge context. Executing an application has a few more requirements:

  • The application must be cordoned off and securely executed so that it cannot be used maliciously as an attack vector.
  • The application requires runtime access to the hardware, including CPU and memory.
  • The application must have a well-defined (ideally short) execution cycle.

A generation of such applications has already been developed, with Cloudflare Workers and Vercel Edge Functions as two examples. Both, though, are bespoke JavaScript-only solutions that constrain developers to a specific set of tools. Other solutions based on Docker containers or virtual machines have appeared, but because of the slowness of the underlying technologies, neither approach seems to have panned out in a big way.

This is where WebAssembly brings a unique value proposition.

Born in the Browser

In the previous section, I noted that running applications at the edge comes with requirements regarding security, access to system resources, and a defined execution cycle. These are core features of WebAssembly (often abbreviated Wasm).

Originally built for the browser, WebAssembly was intended to solve a specific problem: Run programs written in languages other than JavaScript inside of the browser, and let JavaScript interact with those programs.

This had been tried before. Java applets, Microsoft Silverlight, Adobe Flash… each of those provided an in-browser runtime for a non-JavaScript language. But they all fell short because of their proprietary technology (seen as anathema on the open web) and their focus on just one language or language family. WebAssembly was born of the desire to “do better” — and it was a cooperative effort from the browser teams at Mozilla, Microsoft, Apple, and Google.

The design requirements for WebAssembly included:

  • A secure runtime environment that would protect the browser (and the browser’s user) from attack via a WebAssembly app.
  • A platform- and architecture-neutral format that could make use of the system resources (CPU and memory), but in a safe and cross-platform way.

The WebAssembly working group at the W3C defined a specification, and a vast number of programming languages began supporting WebAssembly as a compilation target.

WebAssembly has the Edge

Throughout the history of computing, we’ve seen many technologies outgrow their initial purpose. The internet itself was intended as a defense communication network. And the Web was a forum for trading physics papers. Java was going to power embedded devices. Ruby was a shell scripting language. And WebAssembly was for the browser.

Those key design characteristics — a secure runtime, safe access to system resources, and multi-language support across a huge variety of architectures and operating systems — opened other possibilities.

And one such possibility is using WebAssembly as a compute format inside of CDNs.

WebAssembly is an excellent vehicle for serverless functions. A concept popularized by AWS Lambda, a serverless function is an application that does not run all the time (like a server), but instead executes just-in-time to handle a single event. For example, a web server starts, handles hundreds of thousands of requests over a long lifecycle, and shuts down only for maintenance, reboots, and other extraordinary circumstances. In contrast, a serverless HTTP function starts when a new request comes in, processes that request, and then shuts down. Its lifecycle may be only milliseconds.
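To make that lifecycle concrete, here is a minimal sketch in Rust. The types and names (`Request`, `Response`, `handle`) are illustrative assumptions, not any particular edge runtime’s API: the point is that the entire “application” is a single function mapping a request to a response, which the host instantiates per event and tears down afterward.

```rust
// Hypothetical serverless-style handler (names are illustrative,
// not a real runtime's API). The whole application is one function
// that maps a request to a response.

struct Request {
    path: String,
}

struct Response {
    status: u16,
    body: String,
}

// No long-lived state: the handler's lifetime is a single request.
fn handle(req: &Request) -> Response {
    match req.path.as_str() {
        "/hello" => Response {
            status: 200,
            body: "Hello from the edge".to_string(),
        },
        _ => Response {
            status: 404,
            body: "Not found".to_string(),
        },
    }
}

fn main() {
    let req = Request { path: "/hello".to_string() };
    let resp = handle(&req);
    println!("{} {}", resp.status, resp.body); // prints "200 Hello from the edge"
}
```

In an edge deployment, a function like this would be compiled once to a `.wasm` binary (for example, with Rust’s `wasm32-wasi` target) and the host runtime would invoke the handler once per incoming request.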

Serverless functions are perfect for the CDN case because they make minimal use of system resources, and use those resources for only fractions of a second (or perhaps minutes at the top end).

WebAssembly, as a format, is a strong serverless technology for two big reasons: First, WebAssembly binaries are small, and that means less memory usage. Second, WebAssembly binaries can cold-start in under one millisecond. In contrast, containers and virtual machines take seconds or even minutes to cold start. WebAssembly has a huge speed advantage.

Add to that the WebAssembly security sandbox and its multi-platform support, and you have the ideal runtime characteristics for the edge. A developer can create their serverless function in the language of their choice, compile it once on their own workstation, and deploy it to the edge knowing it will execute in a secure runtime.

And for edge operations, because the binaries are platform and operating system neutral, edge providers can move applications as close to the end user as possible without special hardware considerations.

Kubernetes Runs Wasm (on the Edge)

The last few years have seen Kubernetes, the orchestration technology, move to the edge. After all, Kubernetes provides all the right tools for taking an application and spreading it to all of the places it needs to run, then maintaining that application over its lifespan.

The problem: Even while Kubernetes itself performs well on the edge, containers (the usual runtime fare of Kubernetes) consume too many resources and are too slow to really make the best use of edge resources.

Now, Kubernetes can schedule WebAssembly as well as containers. Projects such as the open source SpinKube make it easy to extend any Kubernetes cluster with WebAssembly runtime classes. With WebAssembly’s near-instant startup times, small binary sizes, and cross-platform/cross-OS capabilities, serverless-style applications can be deployed to the CDN edge with ease.
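As a sketch of what that scheduling looks like, a workload can opt into a Wasm runtime through Kubernetes’ standard `runtimeClassName` field. The class name and image below are assumptions for illustration; the actual RuntimeClass name depends on the shim your cluster installs.

```yaml
# Illustrative only: assumes a cluster where SpinKube (or a similar
# containerd Wasm shim) has already registered a RuntimeClass.
apiVersion: v1
kind: Pod
metadata:
  name: wasm-hello
spec:
  runtimeClassName: wasmtime-spin-v2   # assumed name, provided by the shim install
  containers:
    - name: app
      image: registry.example.com/wasm-hello:latest  # hypothetical OCI image wrapping a .wasm binary
```

Because the scheduling hook is an ordinary RuntimeClass, the rest of the Kubernetes machinery — Deployments, Services, autoscaling — works unchanged for Wasm workloads.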

We are seeing the early indications that WebAssembly is making its way onto the edge. Because it supports so many languages (over two dozen) and because the format is standardized by the venerable W3C, WebAssembly is clearly here for the long haul. As a compute engine, it is ideal for CDN-style cases. There is no doubt that these early forays into CDN computing will soon become mainstream.

We’ve come a long way from merely pushing pictures to the edge. Compute at the edge brings us much closer to true distributed computing. That doesn’t just mean better performance; it also means better reliability and even security.

Photo credit: Expanalog on Unsplash