Madhuri Yechuri - unikernels and event-driven serverless platforms

Oct 27, 2017

Emit is the conference on event-driven, serverless architectures.

Madhuri Yechuri is the founder of Elotl. She talked with us about the shrink wrap needs of applications, and how serverless ranks on cost savings, agility, security, and observability.

Madhuri then delved into some surprising benefits of using unikernels in a serverless world: shorter cold starts, better security, and smaller package sizes. Could unikernels replace containers as the engine for FaaS?

The full video is below, or scroll on ahead to read the transcript.

More videos:

The entire playlist of talks is available on our YouTube channel here: Emit Conf 2017

To stay in the loop about Emit Conf, follow us at @emitconf and/or sign up for the Serverless.com newsletter.

Transcript

Madhuri: Good morning everybody, and thank you so much for attending this session about unikernels and their relevance to event-driven serverless platforms. As for the agenda for this talk, I'm going to give a little bit of an introduction about myself, talk about the application deployment paradigms of the past, present, and future, and then talk about the pain points that the serverless event-driven paradigm is trying to solve and how it has the potential to solve them.

And after that, we are going to look at the shrink wrap needs of applications that are running in a serverless event-driven fashion. By shrink wrap needs, I mean: what does a function that's running in an event-driven serverless fashion expect from the underlying infrastructure? How is it going to shrink wrap my function? We will look at these shrink wrap needs and evaluate the existing offering out there for shrink wrap, which is containers, and see how containers address them. Then we'll talk about unikernels, see a demo of one, and compare the metrics of unikernels and containers to see how unikernels fit in, or do not fit in, or have the potential to fit in, to meet the shrink wrap needs of event-driven serverless applications. We'll follow that up with acknowledgements and Q&A. During the talk, if you have any questions, please raise your hand so that we can address them as we move along.

A little bit about myself. I've been a systems engineer for 17 years. I started out with database server technologies at Oracle, then worked at VMware on the virtualization layer and the management infrastructure platform, and then spent some time in the container ecosystem at a company called ClusterHQ. We were the first providers of tooling for applications that have state associated with them: how do you deal with stateful applications that are running in production deployments? And most recently, I've been playing around with unikernels.

So let's look at the application deployment paradigms of the past, present, and future. In the past, we used to happily deploy our large monolithic applications in our secure, private cloud, and we would shrink wrap them using virtual machines. And that's what we knew, and that's what worked fantastically, and that's what we were comfortable with, and that was what was secure. And as of today, we are moving from large monolithic applications to small microservices, and we are not just deploying them in our secure, private cloud, but we are also comfortable deploying them in our public cloud in addition to our private cloud. And we are moving towards containers as the shrink wrap model of choice and away from virtual machines.

And moving forward, what does the future entail for our nice little applications? We are not just looking at microservice applications, we are looking at microservices and nanoservices in the form of serverless event functions. And we are not just looking at private cloud, we are looking at private cloud, public cloud, and also remote IoT edge devices, mounted in locations like an oil rig in the middle of the Gulf of Mexico, for example. And this is where the application deployment paradigms get really interesting, because we're not just looking at these three deployment options in isolation, we are looking at a mix and match of them. So we're looking at hybrid cloud that stretches between private and public, hybrid cloud that stretches between public cloud and a remote IoT edge device, and a hybrid cloud that potentially stretches between all three.

And are containers the only option we have for shrink wrapping our applications to deploy them on all these permutations and combinations? We are all here at Emit Conf, so we all understand the pain points of deploying always-on microservices. As a customer, you're spending a lot of money paying for your compute nodes whether your microservice is up and running or not. And as an infrastructure provider, you have to keep these instances up and running, which costs you resource utilization with respect to CPU, memory, and network. Whether or not you're exposing that spend to your end customer, you as an infrastructure provider are still paying for those resources. So there are a lot of issues with always-on microservices, especially looking forward towards deploying them on these hybrid deployment paradigms of IoT plus private cloud, IoT plus public cloud, etc.

So serverless has the potential to solve these pain points, because you only pay for what you're using. As an end user, your infrastructure bills are really low, and as an infrastructure provider, you potentially do not have to keep the compute nodes that run these microservices up and running, nor do you need to spend time and money on orchestration frameworks for initial placement, load balancing, and health checks of instances that are sitting idle, not running any microservices. So a serverless event-driven platform solves all these pain points of always-on microservices.

So now that we understand that we need to move towards the event-driven serverless model for cost savings for the end user and for the infrastructure platform provider, what are the needs of this lightweight microservice or nanoservice event-driven function for shrink wrapping itself in order to run on the platform? The shrink wrap model needs to be lightweight for the application. By lightweight, I mean in three states: when the app is sitting on disk; when the app is in transit from disk to its eventual compute node, whether for the first startup or for updates; and when the app is up and running on the compute node, occupying CPU, memory, and potentially network resources. It needs to be lightweight because we want to deploy it across these hybrid cloud scenarios. So the first need is that the app's shrink wrap must be super lightweight.

And the second need is that the shrink wrap needs to be super agile. By agile, I mean it needs to be reusable and recyclable. Reusable means that if your function has started and stopped but is going to start back up again pretty soon, we shouldn't be destroying the shrink wrap and recreating it; the resource utilization and time overhead of recreating the shrink wrap should be minimal. Recyclable means that as soon as your function has stopped running, and you know you won't need to run it again for a long period of time, you should be able to reclaim its CPU, memory, and network resources as soon as possible so that you can give them to future functions. This is not a big concern for private cloud and public cloud, where you have access to a lot of resources, but if you think about an IoT edge device, you have limited resources, so you should be able to reclaim them really quickly. So that is the second need of the application from its underlying shrink wrap.

So the third need is security. When the app is sitting on disk, when it is in transit from disk to its target compute node, and when it's up and running, occupying CPU cycles, RAM, and network bandwidth, it needs to be secure. That's what the application expects from its shrink wrap.

And the fourth requirement is observability. How observable is the event-driven function that's running in its shrink wrap? Can I reuse the existing application performance management (APM) tools that are available in the market, or do we need to build new tools because the app's behavior is completely different from what those tools were built for? So observability is the fourth need. These are the four main requirements an application has of its underlying shrink wrap in order to run smoothly in an event-driven serverless model.

So let's take a sample application: a Node.js webserver that listens on port 8002 and responds with a "Hello World" to incoming requests. Let's take this simple Node.js webserver app, evaluate it on all four axes, and collect metrics to see how the app and its shrink wraps perform.
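
For reference, a webserver like the one described can be written in a few lines of Node.js. This is a minimal sketch, not the actual source used in the demo:

```javascript
// Minimal sketch of the app under test; the talk doesn't show the
// actual source, so the details here are assumptions.
const http = require('http');

const PORT = 8002;

http.createServer((req, res) => {
  // Answer every request with a plain-text "Hello World".
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
}).listen(PORT, () => {
  console.log(`Server running at http://localhost:${PORT}/`);
});
```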

So the first shrink wrap we're going to look at is, obviously, the container. Containers are being used in production for event-driven serverless models nowadays, and there is a reason for that. So let's see how containers perform on all four axes. The first metric we look at is the on-disk size of the application as a container image. For the Node.js webserver, the Docker base image chosen was Alpine, because of its relatively small on-disk size; it's one of the smallest base images available. The resulting on-disk image size is about 50MB, which is pretty good. It's better than the couple hundred MB of the default Ubuntu- or CentOS-based images. So our Node.js webserver is small enough, and that's pretty good.
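
The talk doesn't show the image definition, but a hedged sketch of an Alpine-based Dockerfile for this app might look like the following (the base image tag and file names are assumptions):

```dockerfile
# Hypothetical Dockerfile for the demo app; the talk doesn't show the
# actual one. An Alpine-based Node image keeps the on-disk size small.
FROM node:8-alpine

WORKDIR /app
COPY server.js .

EXPOSE 8002
CMD ["node", "server.js"]
```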

And the next metric we're going to look at is start time: how quickly you can provision the application, which is an indicator of cold start time. So how quickly can you start your event-driven function when the request comes in? The start time for the Node.js webserver was around one second, which is not too bad. The experiment was done on an Ubuntu 16.04 server with 4 gigs of RAM. The reason for choosing this server is that I wanted a representative machine midway between an average public cloud instance and an average IoT device, so that it's a good indicator of how the app is going to perform. A start time of one second is pretty good.
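
The measurement methodology isn't shown in the talk; one plausible way to approximate container cold start is to time from launch until the server first answers. The image and container names below are assumptions:

```bash
# Hypothetical measurement, not the talk's actual script: time from
# container launch until the first successful HTTP response.
start=$(date +%s%N)
docker run -d --rm --name hello -p 8002:8002 node-hello-alpine
until curl -sf http://localhost:8002/ > /dev/null; do sleep 0.05; done
end=$(date +%s%N)
echo "cold start: $(( (end - start) / 1000000 )) ms"
docker stop hello
```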

And what is the resource overhead associated with running our Node.js webserver as a Docker container? The memory overhead associated with the Node.js container was around 270MB, which is low enough that it's not a big concern when you're running on a regular compute node, say a [inaudible 00:13:45] small or large instance, but it could be a concern when you're deploying to a remote edge device that has 1 gig of RAM, for example. And the last metric is: how secure is your app while it's up and running? If we look at a regular container that's based on any Linux, your app is exposed to all of the underlying base image's Linux security vulnerabilities, which is both a good and a bad thing. Good in the sense that Linux is in production in so many places, so if it works for so many other people in production, it should be good enough for you. And bad in the sense that your app, especially your serverless event-driven app, has been custom-built to do one thing and one thing only, so it doesn't need the backing of an entire Linux. You are bringing along a whole lot of extra baggage to run your teeny-tiny single function. That is the bad part of it.

And as far as observability's concerned, you can use traditional APM tools like Amazon CloudWatch, for example, or you can use the up-and-coming APM tools that are being custom-built for event-driven serverless applications, like IOpipe, to monitor your apps. So the observability factor is also pretty good. And no wonder containers are the default shrink wrap of choice for all of the major serverless platforms out there: AWS Lambda uses them, Google Cloud Functions uses them, Azure Functions uses them. It makes sense, because looking at these metrics, containers are performing really well.

So let's move on to thinking about whether containers are the only option for shrink wrapping our event-driven serverless applications, or whether there is another option that could be used in addition to containers. Let's look at what a unikernel is. A unikernel is a single-process, single-address-space application, where you take your app and statically compile it with only the parts of the OS that your app needs. For example, if your app is a stateless app, like the Node.js webserver we just looked at, you don't require the entire file-system component of the operating system to be present in your executable. So it's a single-process, single-address-space application that's statically compiled with only the parts of the OS that it actually needs and uses, and it doesn't have any extra things baked into it. It doesn't have a shell. There is no way to fork another process, because it's a single process. You could have a multi-threaded application, but it will be a single process.

So let's look at a demo of a Node.js webserver that is running in a unikernel. Out here, we are looking at the hello.js webserver. It prints "attempting to run the webserver in a unikernel" to the console, listens on port 8002, which is the default port, and responds with a "Hello World". It also prints out a message saying the server is running on localhost at port 8002. I'm using a simple script to start up the app as a unikernel. You can see that the app is up and running, it prints "attempting to start the webserver in a unikernel", and it's listening on port 8002. That is the IP of the application running as a unikernel. To validate that the app is actually running, if we call the IP at port 8002, we should get the "Hello World" back. So how is this unikernel application running on the Ubuntu server? It simply starts up as a teeny-tiny VM using QEMU. So you can use an emulated virtual machine to run your application on your default Linux server, for example. The startup of the app is no different from starting up a container.
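
The exact build and launch commands depend on the unikernel toolchain and aren't shown in the talk, but a QEMU launch of a unikernel image generally looks something like this (the image name, memory size, and exact flags are assumptions):

```bash
# Hypothetical QEMU launch of a unikernel image; exact flags vary by
# toolchain. -m fixes a small guest memory, -nographic sends the guest
# console to stdout, and hostfwd forwards host port 8002 into the guest
# so the webserver can be reached from the Ubuntu server.
qemu-system-x86_64 -m 256M -nographic \
  -kernel hello-node.img \
  -netdev user,id=n0,hostfwd=tcp::8002-:8002 \
  -device virtio-net-pci,netdev=n0
```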

So now let's compare how this app, shrink wrapped as a unikernel, performs on the metrics we are interested in. If we look at the on-disk image size: the application in an Alpine container was 50-something MB, while the on-disk image size after shrink wrapping as a unikernel is 20-something MB. So it's teeny-tiny. The difference in image size doesn't matter much on a private cloud or a public cloud machine, but if you want to think about hybrid cloud opportunities, then yes, it matters a lot, especially with respect to image updates, etc.

And the startup time of the application shrink wrapped in a unikernel was much less than one second. This makes a big difference when you're looking at cold starts. If you do not want to keep a pre-warmed function container or shrink wrap up and running, but want to rely on cold starts, then the lower start time is pretty good.

And the third metric is the application runtime memory overhead, which is an indicator of how much resource overhead is required to run your app as a unikernel. Out here, the container-shrink-wrapped application outperformed the unikernel. For the unikernel application, the runtime memory overhead was around 600MB, more than double the container figure. So on this metric, the container outperformed the unikernel.
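
One way these overheads can be approximated (again an assumption, not the talk's stated methodology) is to read Docker's per-container memory figure on one side and the resident set size of the backing QEMU process on the other:

```bash
# Hypothetical measurement, not the talk's methodology.
# Container side: per-container memory usage as reported by Docker.
docker stats --no-stream --format '{{.Name}}: {{.MemUsage}}'

# Unikernel side: resident set size (KB) of the backing QEMU process
# (assumes a single qemu-system process is running).
ps -o rss= -p "$(pgrep -f qemu-system | head -n 1)"
```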

So the fourth metric is security vulnerabilities. When your app is up and running as a unikernel, just by the fact that the application was compiled with only the parts of the OS that it needs, the attack surface is much smaller. So it's more secure simply because there isn't much to attack. You don't have a shell to SSH into. And you avoid things like the VENOM attack on the hypervisor side, where a guest could break out through QEMU's virtual floppy disk controller. In our case, it's a Node.js webserver, so you do not need a floppy or CD-ROM driver, and the app is not compiled with one. You avoid a lot of attacks just because the attack surface is really, really small.

And the final metric is observability. For application performance monitoring, there aren't enough products out in the market yet to observe apps that are running in unikernels. That doesn't mean they're not observable, it just means the products haven't been built yet. So with respect to observability, the container is more observable right now than the unikernel, and there is room for growth for unikernels in observability. Yeah.

Man 1: Just a quick question on that last one. What would be the difference between just going like [inaudible 00:21:33] that Node app that you had there and any [inaudible 00:21:36] or any other APM out there? Like, there's other APMs out there for remote servers today and they still work, right?

Madhuri: Yes, yeah. You can definitely add in a third-party tool, but there aren't products out there that have been custom-built for observing event-driven functions running in a unikernel, for example. So you can definitely take the existing ones and use them with Node.js, for example.

So looking at the highlighted red borders, those are the metrics where one outperformed the other. It kind of makes us realize that, "Hey, unikernels have the potential to be useful as a shrink wrap in addition to containers." There might be scenarios where containers are more suitable, and there might be scenarios where unikernels are more suitable. But they definitely show a lot of promise as a potential shrink wrap format for running event-driven serverless functions.

So the takeaways from this talk, hopefully, are that in moving from monolithic to microservice apps to nanoservice apps, the serverless event-driven model is the right way to go, because it demonstrates cost savings to customers and to infrastructure providers as well. Containers are a great fit as a shrink wrap model for the application deployment paradigms that exist right now. Moving forward, looking at the hybrid deployment paradigms across private cloud, public cloud, and IoT, unikernels show a lot of promise as a potential shrink wrap format in addition to containers. I don't believe they are a replacement for containers, but there could definitely be use cases where one is a better fit than the other.

Thank you so much to Nick and Casey at serverless.com for helping organize the talk. And thank you for listening. I'd like to open it up to Q&A. Any other questions? Yeah.

Man 2: Why is the memory overhead so much higher for the [inaudible 00:24:05]?

Madhuri: That's a really good question. The memory overhead is coming from QEMU. We took the default QEMU to run an emulated VM, so there is a lot of overhead that can be chopped off, because you don't need a heavyweight emulator like QEMU to run a unikernel; you just need a monitor that knows that what's running above it is a single-process, single-address-space entity. There's been some really good work at IBM Research on a project called Solo5 and its ukvm monitor. ukvm can be used instead of QEMU, and it is conscious of the fact that what's running above it is a unikernel and not a heavyweight virtual machine. So if you use ukvm, you eliminate that overhead. Does that answer your question? Yeah.

Man 3: Do you think that in general the unikernel would be expected to use less memory or more than a container-based model?

Madhuri: So if you use ukvm, it should use less memory than a container model, but the ukvm work is very much in its infancy right now, so there is a lot of potential for improvement there, as is the case with containers as well. If you look at the image size: the image size for the Alpine container with Node.js was 50-something MB, and there are ways to bring that down, because there are products in the container market being developed to cherry-pick which parts of the Linux OS you want to compile into your base image. So there is room for improvement on both sides. The metrics that were collected are with the products that are available today in the market, but there's potential on both sides to converge on these metrics. Does that answer your question?

David: Thank you very much.
