- Extracting the most efficiency out of compute resources remains a key focus of IT
- Containers, whether hosting full applications or microservices, are the current emerging focus for gaining more efficiency than virtual machines
- In the long run, serverless will also receive more attention
- Serverless is a powerful paradigm; there are maturity gaps today, but all players should be doing some incubation with it
A few years ago, I was engaged with a large Communications Service Provider (CSP) about the idea of breaking network stacks into microservices. In our introductory discussions, the CSP boiled down both their reason for wanting to pursue it and what was in it for Network Operating System (NOS) providers:
- Their reason: they pay for the same functionality from multiple vendors, and even for multiple implementations within a single vendor; IPsec tunnels, for example. It would be more efficient, more effective, and less expensive for them to scale up a single implementation, as needed, as a microservice.
- Their rationale for what was in it for NOS suppliers: currently, a NOS supplier's cut of the spend on network functions is limited by the number of hardware boxes purchased. In a microservices world, if a NOS supplier were awarded the business, it would get a bigger piece of the pie.
Putting aside the validity of the above claims, I did some quick research on microservices, and as a one-time software developer, I immediately recognized the appeal. Five years ago, many people had not even heard the term “microservice”, and fewer understood it. Even today, while everyone has heard the term, there are immaturities in microservice development paradigms, and there were many more a few years ago. The timing may not have been right back then for a large refactoring of existing codebases; arguably, refactoring existing codebases is never a good idea, especially when pursuing a fundamentally different value proposition / value chain.
While any number of approaches to microservices could be pursued, the two most common today are containers and serverless. Containers are possibly the most platform-agnostic approach, especially with Kubernetes as the orchestration layer. However, there is a shortage of Kubernetes skills, and it remains complex from an operations perspective. Tool vendors are rushing in to fill the gaps, and the current market momentum is in this direction. Therefore, as NFV pivots to “cloud native” approaches, we can expect a great deal of interest in containerized network functions, and of course, some are already available. Even in the containerized world, though, we face the question of whether a VNF has been reimagined for a containerized / microservices world, or is simply a code blob now executing in a container: a perennial question at all stages of the NFV journey.
Looking further out is the serverless approach, which may not achieve serious traction in the telecom/enterprise markets until 2025 or beyond. Serverless will ultimately yield greater compute efficiencies than containers, so with cost still a critical focus for many CSPs and Enterprises, it is bound to get a look at some point. A quick recap: a virtual machine executes a full operating system plus one or more applications; a container does not need to carry a full operating system and can focus on application code; and a serverless function is not a full application but a single function. Serverless functions are spun up on demand, are stateless, are charged via micro-billing in a public cloud environment, have short lifetimes, and are event-driven (it is widely assumed, for example, that IoT will drive serverless adoption/growth).
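To make the recap concrete, here is a minimal sketch of a serverless-style function in Python. The `handler(event, context)` shape follows the common convention used by platforms such as AWS Lambda; the event payload and its field names are invented for illustration:

```python
import json

def handler(event, context):
    """A single-purpose, stateless function: spun up on demand when an
    event arrives, it processes the payload and exits. The developer
    manages no operating system and no long-lived application process."""
    # Hypothetical IoT-style event payload (field names are illustrative)
    reading = event.get("temperature_c", 0)
    alert = reading > 30
    return {"statusCode": 200, "body": json.dumps({"alert": alert})}

# Simulate the platform invoking the function once per event; nothing
# survives between invocations, which is the stateless model in action
if __name__ == "__main__":
    result = handler({"temperature_c": 35}, None)
    print(result["body"])
```

Each invocation is independent and billed individually; the platform, not the developer, decides when instances of the function exist.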
If we stick to the idea that a serverless component is a single function, it may be smaller in scope than a network “function” / microservice needs to be. In theory, serverless components could be shared across multiple network tasks. The obvious concern with code segmentation of this granularity is the management, operations, and development complexity of so many small code chunks, in addition to the performance overhead of communication among them. In a public cloud environment, this granularity can also lead to billing complexity: does the cloud user understand what was billed for, and where is the crossover between the cost of a public cloud and the cost of a private cloud? Entrepreneurs are targeting some of these challenges.
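The public-versus-private crossover question can be framed as simple arithmetic. The sketch below uses entirely hypothetical prices (real per-invocation pricing also includes per-GB-second execution charges and varies by provider); it only shows where a break-even calculation starts:

```python
def breakeven_invocations(cost_per_million_invocations: float,
                          monthly_server_cost: float) -> float:
    """Invocations per month at which pay-per-use micro-billing
    matches the flat monthly cost of an equivalent private server."""
    return monthly_server_cost / cost_per_million_invocations * 1_000_000

# Hypothetical prices, for illustration only
per_million = 0.25   # $ per million invocations (assumed)
server = 200.0       # $ per month for an equivalent private server (assumed)
print(f"Break-even at {breakeven_invocations(per_million, server):,.0f} "
      "invocations/month")
```

Below the break-even volume, micro-billing wins; above it, a dedicated server may be cheaper, which is precisely the crossover a cloud user needs to understand.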
I have had the opportunity to do some serverless programming myself. While my efforts are not at a scale from which Enterprise-grade insights could be gleaned, they are enough for me to have experienced some of the magic. I am a serverless fan. That said, it is clear there are maturity issues and capability gaps. Some of these gaps were highlighted at CloudExpo 2019 last week, as I discussed in the report “CloudExpo 2019, Santa Clara, USA: Cloud Chaos & Path Forward”. The two biggest capability gaps are cold-start latency (the time to spin up a microservice not already held in memory) and stateless execution. There are workarounds for cold start, including some solutions that address the issue directly, but nonetheless, the issue probably needs more universal attention. Stateless execution is a bigger challenge. Because of the stateless model, serverless tends to be thought of as an event-reaction paradigm: an event happens, a microservice/serverless function is spun up for one-time processing, and then the function dies. Serverless functions can access persistent storage, but the performance probably would not meet many stateful use cases. I can think of some other workarounds, but nonetheless, serverless functions are generally considered stateless. To be clear, stateless has its benefits; it just does not meet all needs.
In the networking world, telecom companies arguably have not even nailed the virtual machine paradigm for Network Function Virtualization, let alone the current hot cloud trend of Kubernetes (container orchestration). So it would seem a little premature to be discussing serverless, especially when there are capability gaps. None of that should stop us from considering what the future might look like, and why.
In 2019, it is hard to imagine any large-scale CSP/Enterprise going all in on serverless. Five to ten years from now, though, I would expect some network functions to be serverless, and there to be general interest in all forms of cloud-native microservices. For developers, serverless has the appeal of isolating chunks of code from changes in other chunks of code, as well as code reuse: write once, use many, by many. For business leaders, serverless allows a higher degree of over-provisioning, which makes it fundamentally less expensive than virtual machines or containers: more workloads on fewer servers, which is one compelling reason cloud providers like Amazon care about it. When there are compelling economic advantages, shift happens. From my perspective, serverless delivers many of the promises made by object-oriented programming, albeit the two are quite different, and I am not suggesting they have the same capabilities. In making such a statement, I am referring mostly to reusable building blocks, which remain a development aspiration whether developers are cutting and pasting code (not desirable, but often done) or taking a more structured approach to reuse.
Given that NFV may need a bit of a reboot anyway, it would be advisable for CSPs to put some microservices on containers, especially the stateful ones, but also to start working with the serverless paradigm. It is a powerful paradigm, and its capabilities will likely improve over time. Even with the current installed base of virtual machines (sometimes also sitting under containers) and the growing installed base of containers, serverless is worth a look, as are serverless network functions (sNF?). There are certainly gaps in the technology, and unless your company is a new cloud-first CSP, broadly adopting serverless is going to be hard. But for everyone, CSPs, Enterprises, and network operating system / NFV suppliers alike, some serverless incubation should be occurring.
The need to gain greater efficiency has been a constant business focus going back at least as far as Adam Smith, and surely before. Greater efficiency is unlikely to fade as a business focus, and serverless is the next step in compute efficiency beyond containers. There are immaturities and gaps today, but cloud native is a fast-moving world. CSPs, Enterprises, and network operating system suppliers cannot afford to ignore serverless as a future approach with at least some applicability, and with compelling economic and development appeal.