Living on the Edge

Cloud service delivery endpoints are moving, little by little, toward the network edge and closer to clients.

Why is that?

Because content delivery is no longer limited to human clients. Services are consumed by swarms of automated services talking to each other through API calls. And the whole IoT megatrend is creating even more demand for very low-latency communication across the globe.

And since you cannot increase the speed of light, or relocate the existing warehouse-scale data centers we all rely on in one form or another, the only way to make execution faster is to move it closer to the clients. That includes even the dynamic parts of services, which have traditionally been served from backends (or origins, in CDN lingo), with every request passing through the whole infrastructure.

Delivering content at scale

Content delivery networks (CDNs) are not a new invention. Akamai was founded almost two decades ago, in 1998. Amazon CloudFront was announced a decade later, in 2008, and is now arguably Akamai's main competitor. Based on statistics from the end of 2016, CloudFront dominates the market in the number of websites served (roughly 1.25M websites against Akamai's 100k), but Akamai still handles more traffic (about 35% against CloudFront's 10%).

For high-volume websites, using a CDN has long been the de facto strategy for delivering static content: images, videos – anything that does not change over time. That includes assets which are not directly visible to end users, like JavaScript distributed to run locally in browsers, stylesheets, and static HTML elements.

This was especially the case when it was expensive to have large capacity on premises. CDN providers were the ones with the thickest pipes, and they still are. Some really big internet companies were early adopters of CDN solutions almost two decades ago.

Also check out the interesting content delivery approach Netflix has taken with its Open Connect. If your content is interesting enough and your volume is massive, this is what you do.

From static to dynamic

But delivering static content with a CDN is simply no longer enough. You also need the ability to serve content that is customized based on dynamic inputs.

The concept is called Edge Computing, and it pushes content decisions closer to the consumer layer. Computing and logical decisions happen near the logical extremities of the network, rather than in the core data centers.

Generating dynamic content at the edge

The following is a simplified AWS architecture mockup of a content delivery strategy that includes dynamic response capabilities right at the edge.

In this architecture, several pathways for serving responses and payloads to consumers are drafted out. Static content can be heavily cached right at the edge, and with targeted purges, caching can be very effective.
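To make targeted purges concrete, here is a minimal sketch using the AWS SDK for Node.js; the distribution ID and paths are placeholders, not values from this architecture:

    'use strict';

    var AWS = require('aws-sdk');
    var cloudfront = new AWS.CloudFront();

    // Targeted purge: invalidate only the paths that changed, leaving the
    // rest of the cache warm. Distribution ID and paths are placeholders.
    var params = {
        DistributionId: 'E1EXAMPLE',
        InvalidationBatch: {
            CallerReference: String(Date.now()), // must be unique per request
            Paths: {
                Quantity: 2,
                Items: ['/index.html', '/css/site.css']
            }
        }
    };

    cloudfront.createInvalidation(params, function (err, data) {
        if (err) console.error(err);
        else console.log('Invalidation started: ' + data.Invalidation.Id);
    });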

And actually, you can utilize CloudFront in front of API Gateway to do caching as well, but it comes with a couple of gotchas you need to be aware of. In some cases you might want to cache API responses for requests with specific parameters for some time, to reduce latency and load.
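As one hedged example of that caching side, an origin-response function can stamp a short Cache-Control header on selected API responses so CloudFront keeps them at the edge for a while; the path prefix and the TTL below are illustrative assumptions:

    'use strict';

    // Origin-response trigger: mark selected API responses as cacheable
    // for 60 seconds. The /prices path prefix and the TTL are illustrative
    // assumptions. CloudFront honors Cache-Control from the origin, within
    // the cache behavior's configured TTL bounds.
    exports.handler = function (event, context, callback) {
        var cf = event.Records[0].cf;
        var response = cf.response;

        if (cf.request.uri.indexOf('/prices') === 0) {
            response.headers['cache-control'] = [
                { key: 'Cache-Control', value: 'max-age=60' }
            ];
        }

        callback(null, response);
    };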

With dynamic content, the strategy is to let through to the dynamic backend only those requests that actually need to be processed there. One very simple use case is avoiding a (possibly unintentional) denial-of-service situation in which backends have to process every request, including those that are invalid or not allowed. Moving these decisions right to the edge once again reduces backend load and decreases latency, as responses in these cases are nearly instant.

So the edge layer not only provides the ability to serve responses faster, it also helps protect backend systems.
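As a rough sketch of that idea – the header check below is an illustrative placeholder, not a real authentication scheme – a viewer-request function can short-circuit obviously invalid requests with an immediate 403, so they never reach the origin:

    'use strict';

    // Viewer-request trigger: reject requests missing an Authorization
    // header before they ever reach the origin.
    exports.handler = function (event, context, callback) {
        var request = event.Records[0].cf.request;
        var auth = request.headers['authorization'];

        if (!auth || !auth[0].value) {
            // Generate the response right at the edge; the origin never
            // sees this request.
            return callback(null, {
                status: '403',
                statusDescription: 'Forbidden',
                headers: {
                    'content-type': [{ key: 'Content-Type', value: 'text/plain' }]
                },
                body: 'Forbidden'
            });
        }

        // Otherwise let CloudFront continue processing the request.
        callback(null, request);
    };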

So, as a solution: one of AWS's recent announcements, Lambda@Edge.

Not a silver bullet

Lambda@Edge was announced at re:Invent 2016.

Lambda@Edge opens up new possibilities, but it is certainly not without its limitations. Think of it as an additional layer for building faster and more resilient infrastructure. You can do all of these things without it – and in simple use cases you probably even should. But for massive-scale services, it might just do the trick.

Lambda@Edge has a limited runtime and footprint. The functions have no write access to the local file system and cannot make network calls to external services. You are also limited to Node.js as the implementation language, and currently no built-in libraries are available.

Most of these limitations stem directly from the functions being run at the edge locations – as the name implies. The computing resources you would have at the massive regional data centers are simply not available there. It is a tradeoff: simplicity and speed for the things where they matter most.

But still a bullet

Lambda@Edge can currently do at least the following (a small sketch follows the list):

  • Inspect cookies, headers, and authorization tokens
  • Add, drop, and modify headers
  • Do redirects or rewrite URLs based on the above
  • Generate new HTTP responses
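To make the list concrete, here is a minimal sketch of a viewer-request function that inspects a cookie and redirects accordingly; the cookie name and target path are illustrative assumptions:

    'use strict';

    // Viewer-request trigger: inspect the Cookie header and redirect
    // viewers who have opted into a beta. Cookie name and target path
    // are illustrative assumptions.
    exports.handler = function (event, context, callback) {
        var request = event.Records[0].cf.request;
        var cookies = request.headers['cookie'] || [];

        var inBeta = cookies.some(function (c) {
            return c.value.indexOf('beta=true') !== -1;
        });

        if (inBeta) {
            // Generate a redirect response right at the edge.
            return callback(null, {
                status: '302',
                statusDescription: 'Found',
                headers: {
                    location: [{ key: 'Location', value: '/beta/index.html' }]
                }
            });
        }

        callback(null, request);
    };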

Lambdas can be triggered at four different points in the request/response sequence (again, a sketch follows the list):

  • When CloudFront receives a request from a viewer (viewer request).
  • When CloudFront forwards a request to the origin (origin request).
  • When CloudFront receives a response from the origin (origin response).
  • Before CloudFront returns the response to the viewer (viewer response).
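For instance, a viewer-response function can modify headers on every response just before it is returned to the viewer. A minimal sketch, with the specific security headers as illustrative choices:

    'use strict';

    // Viewer-response trigger: stamp security headers on every response
    // just before CloudFront returns it to the viewer. The chosen headers
    // are illustrative.
    exports.handler = function (event, context, callback) {
        var response = event.Records[0].cf.response;

        response.headers['strict-transport-security'] = [
            { key: 'Strict-Transport-Security', value: 'max-age=31536000' }
        ];
        response.headers['x-content-type-options'] = [
            { key: 'X-Content-Type-Options', value: 'nosniff' }
        ];

        callback(null, response);
    };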

This might not sound like much out of the box, but it is quite a useful toolset for a multitude of use cases. It allows you to make latency-sensitive decisions right at the edge and take load away from the dynamic backends.

More detailed AWS documentation is available here.

Having a holistic content delivery strategy

One of the most important things is to have a good understanding of the services you are delivering and their requirements.

And requirements not only in the technical sense, but also business requirements, targeted service levels, and financial targets. All of this ultimately forms a strategy which you need to fulfil with a set of tools.

The newly released AWS Lambda@Edge feature is a valuable asset in that toolbelt, and I am personally waiting for it to gain more features in the near future, making it an even more useful and capable tool for optimizing service delivery strategies for complex services with strict requirements.


The author works as Chief Technologist for Managed Cloud Services at Cybercom Group. Cybercom is a Nordic-based IT consultancy offering managed services and solutions to its clients in the connected world.