Hot and Hotables: What is hot, what could become hot

The Important

  • Many feel their privacy is under attack. There will be technological, entrepreneurial, and public sector responses
  • At a time when the general-purpose CPU had been declared the winner in seemingly all IT domains, specialized silicon is on the rise: AI/ML, storage, networking, and more.
  • IT professionals need to learn the principles of the Cloud era and apply them to the work they are doing.
  • Many interesting issues surround segment routing in networking
  • Multiple high-speed fabrics are being touted for use inside the data center
  • Smartphones are reaching adoption levels in developed countries where growth has slowed, which begs the question: what’s next?

Introduction

With so much going on in tech, it is a challenge for technology professionals to distill and summarize major trends, so they can integrate them into what is relevant to the specific work they do. This is fundamentally the mission of the “Hot and Hotables” framework. It is a mission that necessarily requires iteration over time as insights and new information emerge. This article introduces some of The Important themes, already integrated into the framework.

Distributed Applications

The Internet is fundamentally a tool for distributed empowerment. The vision is of a platform where anyone in the world can receive and contribute value, in both a social and an economic sense.

Yet, we find ourselves in a world where millions of Internet users have voluntarily exchanged information about themselves in return for free Internet-based services, primarily social media but not exclusively. This state of the world was humming along seemingly without a problem, until:

  • Internet platforms started to aggressively monetize customer information
  • Facebook was marched before Congress to discuss the 2016 elections and the Cambridge Analytica relationship, among other things
  • High-profile tech celebrities started to distance themselves from Facebook
  • Jeff Bezos’s personal wealth went into the stratosphere, even after splitting a substantial chunk with his ex-wife
  • The U.S. government accused TikTok, the post-Covid hit application in the Western world, of being a tool of the Chinese government (TikTok had previously been a big hit in Asia)

The above are just a sampling of issues that have impacted consumer/voter confidence in Internet platforms; each is subject to interpretation, and none of them represents an assertion that Internet platforms are intending to do harm. What they represent is the exploration of a new reality, a new normal, that all are trying to navigate: politicians, Internet businesses, and Internet users/consumers/voters.

While some governments are rushing into this new world with regulation, for example the General Data Protection Regulation (GDPR), tech entrepreneurs are suggesting there is another way: truly distributed applications that treat privacy as central to the value proposition.

Blockchain is one of the leaders in this tech push, best known for the Bitcoin cryptocurrency. Blockchain is a fascinating and powerful new approach to distributing and validating information, while providing levels of privacy that at first glance seem robust but, in reality, are still being understood by everyday Internet users. In addition, as discussed in the article “Blockchain: The Important”, Blockchain clearly has strengths and weaknesses. Entrepreneurs are currently on a journey to discover which problems Blockchain is a good fit for, and to develop new technologies for those where it is not.
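
As a minimal sketch of the distribute-and-validate idea, and nothing more than that, the toy hash chain below (in Go, with made-up field names, standing in for no particular blockchain) shows why tampering with a chained ledger is detectable: each block carries the hash of its predecessor, so edits break the chain when it is re-validated.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Block is a toy record: real blockchains add timestamps, signatures,
// consensus mechanisms, and much more.
type Block struct {
	Data     string
	PrevHash string
	Hash     string
}

func hashBlock(data, prevHash string) string {
	sum := sha256.Sum256([]byte(data + prevHash))
	return hex.EncodeToString(sum[:])
}

func appendBlock(chain []Block, data string) []Block {
	prev := ""
	if len(chain) > 0 {
		prev = chain[len(chain)-1].Hash
	}
	return append(chain, Block{Data: data, PrevHash: prev, Hash: hashBlock(data, prev)})
}

// validate re-walks the chain; any edited block breaks the hash linkage.
func validate(chain []Block) bool {
	prev := ""
	for _, b := range chain {
		if b.PrevHash != prev || b.Hash != hashBlock(b.Data, prev) {
			return false
		}
		prev = b.Hash
	}
	return true
}

func main() {
	var chain []Block
	chain = appendBlock(chain, "alice pays bob 5")
	chain = appendBlock(chain, "bob pays carol 2")
	fmt.Println("valid:", validate(chain)) // true

	chain[0].Data = "alice pays bob 500"   // tamper with history
	fmt.Println("valid:", validate(chain)) // false
}
```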

The new world of distributed applications has the potential to be among the most disruptive new areas of Internet investment, especially as significant areas of human activity rush online (E-Everything) in a post-pandemic world.

Artificial Intelligence and Augmented Experiences

While general artificial intelligence / artificial general intelligence appears to be some way off, the graphics processing unit and open-source software have catapulted machine learning into a new era.

As a society, we tend to mix up concepts such as artificial intelligence, narrow artificial intelligence, general artificial intelligence, science fiction fantasies of the Borg & robotic takeovers, and the practical benefits of mining large amounts of data for social and business value. Smart people, companies, and governments have already realized the value of mining the huge amounts of data thrown off by the Internet.

Data is the petroleum of the machine learning engine, and therefore data ownership is an issue of some interest, intersecting with the earlier discussion in this article of large Internet platforms and distributed applications.

Putting aside who owns and who can access data, the new era of pervasive and accessible machine learning is already yielding new disruptive value. With GPUs powering this revolution, it is perhaps not surprising that image processing has seen significant advancements. There is, though, a new breed of silicon startups and approaches looking to develop specialized chips for both machine learning training and AI/ML inference, two different activities in the ML pipeline. Exactly how far these new silicon approaches will move the ball forward, or how well they might expand the scope of excellence in AI/ML, is a matter of speculation at this point. However, it is an area of technology investment that is not to be overlooked. To say that I watch the evolution of these technologies with some interest would be an understatement. To go from a world where GPUs were the accidental tourists to a world where engineers are designing silicon specifically for AI/ML is, to say the least, a milestone in evolution. And who knows, with NVIDIA now the most valuable silicon play in the world, perhaps they will surprise by continuing to demonstrate that the GPU remains the best architecture for AI/ML.
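
To make the training-versus-inference distinction concrete, here is a deliberately tiny sketch (a hypothetical one-weight model in Go, not any real framework or chip): training loops over data to fit a parameter, the compute-hungry phase that training silicon targets, while inference simply applies the fitted parameter to new inputs, the cheaper, latency-sensitive phase that inference engines target.

```go
package main

import "fmt"

// train fits w in y ≈ w*x by simple gradient descent: the expensive,
// data-hungry phase of the ML pipeline.
func train(xs, ys []float64, epochs int, lr float64) float64 {
	w := 0.0
	for e := 0; e < epochs; e++ {
		for i := range xs {
			pred := w * xs[i]
			grad := 2 * (pred - ys[i]) * xs[i]
			w -= lr * grad
		}
	}
	return w
}

// infer applies the already-learned parameter to a new input: far cheaper,
// and often run on different silicon than training.
func infer(w, x float64) float64 { return w * x }

func main() {
	xs := []float64{1, 2, 3, 4}
	ys := []float64{2, 4, 6, 8} // underlying relationship: y = 2x
	w := train(xs, ys, 200, 0.01)
	fmt.Printf("learned w = %.3f, infer(5) = %.3f\n", w, infer(w, 5))
}
```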

Whether Virtual Reality and Augmented Reality belong in this categorization of technology is open to debate. However, I have put them here because clearly they have the potential to reshape human experiences, both through fully artificial/virtual experiences, and experiences where “reality” is augmented by technology. To say that there are a few entrepreneurs involved in these technologies is also an understatement. In some of the dashboard and database examples already being demonstrated, it does not take much imagination to construct a future state of image processing, natural language processing, machine learning, and virtual/augmented reality technologies coming together in powerful ways. Science fiction fantasies may well be on the cusp of becoming science realities.

Cloud

Cloud disrupted all of IT on two fundamental dimensions: customer experience and operations excellence. Every company in the world must now ask itself whether it can match these disruptions, or whether it must hand over its IT jewels to cloud platforms. The sometimes usage-based models of the cloud are, on one hand, an experimenter’s dream come true and, on the other hand, an enterprise CFO’s nightmare come true. This is not to say that billing models cannot be constructed to deal with this fundamental tension. The more important observation is the obvious one: Cloud changed everything.

From the perspective of a technologist, DevOps, with continuous development, integration, and delivery, is one of the big disruptors: the journey to operations being run by software developers. That is a radical journey for some companies, while others were of course “born in the cloud”. On top of DevOps’ shoulders sits Site Reliability Engineering (SRE), a term most people in our industry came into contact with through Google, and a set of operations principles that has been of particular interest to networking professionals, at least.
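
As one small, concrete illustration of that operations mindset, consider the error-budget arithmetic popularized by SRE practice; the sketch below (in Go, with purely illustrative numbers) converts an availability objective into the downtime an operations team can “spend” over a month before the objective is breached.

```go
package main

import (
	"fmt"
	"time"
)

// errorBudget converts an availability target (e.g. 0.999) over a period
// into the downtime that can be consumed before the objective is missed.
func errorBudget(target float64, period time.Duration) time.Duration {
	return time.Duration((1 - target) * float64(period))
}

func main() {
	month := 30 * 24 * time.Hour
	for _, slo := range []float64{0.99, 0.999, 0.9999} {
		fmt.Printf("SLO %.4f -> downtime budget per month: %v\n", slo, errorBudget(slo, month))
	}
}
```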

An operations paradigm where basic IT units, compute, storage, and network, are seemingly as simple as possible, but are made to dance through the powers of operations software, is still a radical journey for many Enterprises and their technology suppliers. It is also a journey that has nuance. As RIFT proponent Tony Przygienda recently articulated in a podcast, just because flash memory is simple to interface with doesn’t mean it is simple technology internally. The journey to networking being as “simple” as compute and storage is a journey where the question is constantly being asked: is networking simple at every level of granularity, or simply simple to interface to? That is an interesting and important question, especially for those who believe in the old saying that you cannot reduce complexity, you can only move it around. The truth surely is to be found somewhere in the middle, because ultimately, radical simplicity is the right response to overwhelming complexity.

Networking

Networking is the area of technology I have spent my adult professional life immersed in, the area I know best, and the area I comment on most frequently. The Hot and Hotables framework reflects my comfort level in this broad area of technology.

Major inflections in networking today are: controllers, programmability, segment routing, and pluggable optics (point-to-point and PON). These naturally intersect with other areas of networking investment such as 5G, unified infrastructure, and Edge Compute.

Optical pluggables are an area being watched with great interest. Most of today’s attention is on 400ZR/400ZR+, optical pluggables with the potential to reshape data center interconnect over short distances (under 80-120 km). Another area receiving less attention, but nonetheless interesting, is the world of symmetrical 10G PON, as imagined by optical technology suppliers such as TiBit.

Segment Routing, and network programming for segment routing, are the center of attention in networking today. Definitely for Service Provider networking, and potentially all server-to-server and server-to-handset networking. See “Segment Routing: The Journey Back Home”. There is so much to analyze and comment on here. It is a rich field of inquiry.

In the short term, the industry focus will be on the transition from IP/MPLS & MPLS-TP to Segment Routing for MPLS (SR-MPLS). At what rate is it actually happening, where will it happen first, what are the immediate drivers, etc.? In the medium to longer term, the attention will be on the journey to Segment Routing for IPv6 (SRv6), which implies a long-awaited journey to IPv6, one the networking industry has seemingly delayed for decades already. Will SRv6 end up being the bridge to, and Trojan horse for, IPv6? Fascinating question.
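
For readers new to the idea, a rough sketch of what SRv6 “network programming” steers on: the path is expressed as an ordered list of IPv6 segment identifiers carried with the packet, and a Segments Left counter selects the active segment that becomes the packet’s destination at each SR hop. The Go sketch below uses simplified field names and ignores the full Segment Routing Header encoding; it is conceptual only.

```go
package main

import (
	"fmt"
	"net/netip"
)

// SRH is a simplified stand-in for the IPv6 Segment Routing Header:
// a SID list plus a Segments Left pointer into it. Real SRv6 also carries
// flags and TLVs; the SID list is encoded in reverse order, as here.
type SRH struct {
	Segments     []netip.Addr
	SegmentsLeft int
}

// active returns the segment currently steering the packet, i.e. the
// address an SR-aware node places in the IPv6 destination field.
func (h SRH) active() netip.Addr { return h.Segments[h.SegmentsLeft] }

// advance moves to the next segment, as an SR endpoint does when the
// packet reaches the active SID.
func (h *SRH) advance() { h.SegmentsLeft-- }

func main() {
	h := SRH{
		Segments: []netip.Addr{
			netip.MustParseAddr("2001:db8::3"), // final destination
			netip.MustParseAddr("2001:db8::2"), // waypoint B
			netip.MustParseAddr("2001:db8::1"), // waypoint A
		},
		SegmentsLeft: 2,
	}
	for {
		fmt.Println("steer towards:", h.active())
		if h.SegmentsLeft == 0 {
			break
		}
		h.advance()
	}
}
```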

Compute

Just as it appeared that the general-purpose CPU and complex instruction set computing paradigm had conquered all other approaches to compute, the Graphics Processing Unit catapulted NVIDIA into being the most valuable chip company in the world, and all manner of specialized silicon is now entering the conversation.

In short, while the general-purpose processor was the centerpiece of IT innovation over the last couple of decades, it could not sustain productivity improvements in areas that by themselves became big enough to support specialist silicon investment and adoption.

Networking has notably been driven by systems-supplier-developed network processing units (NPUs), and more recently by what the industry currently refers to as merchant silicon (Broadcom, etc.). Systems-supplier-developed NPUs have been as critical to packet networks as DSPs have been to the coherent revolution in optical. However, merchant silicon has had a tremendous impact.

Today, all the major router vendors use merchant silicon for their Telco-focused access products, especially in the 4G/5G domain. Merchant silicon has been extensively used in hyperscaler datacenter networking, and the further adoption of Spine/Leaf architectures has the potential to drive merchant silicon into new segments of networking.

More broadly than the merchant silicon revolution, we now see specialized silicon being touted in areas such as machine learning training, machine learning inference engines, and the data processing unit. We are rediscovering that the general-purpose CPU may not be the best answer for all compute problems.

As intriguing as the coming silicon developments are, they have been overshadowed by the software revolution. Software paradigms that seek improved statistical utilization of horizontally scaled chip and system architectures have led to hypervisors, virtual machines, containers, and serverless. Containers are where the current industry focus is, with serverless being too immature for the comfort level of most, today.

As a person who cut his programming teeth on assembly/assembler-level languages, I’ve seen many high-level programming languages come and go. Having worked in networking for many decades, I’ve seen the C language dominate, as it has in many other areas of IT as well. The Python revolution has been amazing, of course. In a post-Oracle-owned-Java world, an open-source programming language has emerged at the right time for the sentiments of many IT professionals. There is still a plethora of programming/scripting languages in use today: Java, JavaScript, C, C++, Swift, Ruby, C#, PHP, and more.

While Python has seemingly become the lingua franca of the cloud and data science worlds, the Google-developed Go language is getting significant interest as a kind of systems programming language with some of the safety of Java and Python. Other capabilities of interest include concurrency and simplicity. Simplicity from a hyperscaler? Well, there is a shock 🙂 Not really, of course. If you are not getting the simplicity thing, you are not getting where IT is going. My favorite quote from any relatively recent tech talk is this one: “Adding features to Go would not make it better, just bigger.” Memorize some abstraction of this and quote it often. A product/service/solution is not finished when you have shoved as much as possible into it; it is finished when you have taken everything possible out of it. Everything should be as simple as possible, but no simpler. Go is a shot across the bow of IT complexity: greatness in strategy, products, and all things comes from saying “NO”, often, loudly, and with perseverance over time. Have clarity about the problem, ruthlessly pursue the best answer, and then say no to old approaches, developed for old problems, that confuse and complicate the new approach. That is an approach to IT that many technology suppliers, and their largest customers, are going to struggle with. It is also the approach to IT that may well separate the winners from the losers over the next decade.
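
Since concurrency and simplicity are the capabilities called out above, here is a minimal example of Go’s goroutine-and-channel style, a generic fan-out/fan-in worker pattern with nothing networking-specific about it:

```go
package main

import (
	"fmt"
	"sync"
)

// A small fan-out/fan-in pipeline: goroutines for concurrency,
// channels for communication, a WaitGroup to know when work is done.
func main() {
	jobs := make(chan int)
	results := make(chan int)
	var wg sync.WaitGroup

	// Fan out: three workers pull jobs until the channel is closed.
	for w := 0; w < 3; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- j * j // stand-in for real work
			}
		}()
	}

	// Produce jobs, then signal completion by closing the channel.
	go func() {
		for j := 1; j <= 5; j++ {
			jobs <- j
		}
		close(jobs)
	}()

	// Close results once all workers have finished.
	go func() {
		wg.Wait()
		close(results)
	}()

	// Fan in: collect results on the main goroutine.
	for r := range results {
		fmt.Println("result:", r)
	}
}
```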

Storage

Storage is not an area I have looked at in detail for many years, not since the early days of storage area networks, now a couple of decades ago. One clear revolution has been flash memory, and it has been critical in many data center and even individual/personal use cases.

Not surprisingly, given my concentration in networking, the area of storage technology that most piques my interest is NVMe over Fabrics (NVMe-oF), which has the aspiration of creating high-performance storage networks with latencies that come close to direct-attached storage, allowing flash devices to be shared between servers.

Devices

As Mary Meeker has observed, smartphones are now used by the majority of people in developed countries and therefore may no longer be the offsetting source of revenue growth they have been during Service Providers’ long-run secular transition from usage-based voice to flat-rate data.

That singular insight leaves all in the SP ecosystem wondering what’s next. Will it be IoT, or something else? This is far from a trivial question.

Summary / Conclusion

Hot and Hotables is a way of distilling some of the important trends in IT, with a focus on networking, the area I know best. What are the important takeaways? There are many, but a few of them include:

  • There is going to be some reaction to the concentration of information and commerce in a few tech titans. We do not know at this time what all the responses will be, including responses from government, but we do know there is already an entrepreneurial response: distributed applications that integrate privacy as a core value proposition.
  • AI/ML has already experienced a step-function improvement in training times, and therefore a step-function improvement in applicability. A new generation of AI/ML-specific silicon promises another significant improvement.
  • Cloud changes everything: customer experience, operations excellence, software architecture, and the way we think about complexity and simplicity. IT professionals need to learn the principles of the Cloud era and apply them to the work they are doing.
  • At a time when specialized silicon is breaking out all over IT, networking is heading in the direction of adopting merchant silicon everywhere. What’s next is an interesting question.
  • At a time when the Cloud-era principle of simplicity is being adopted everywhere, Segment Routing appears on the networking scene. Will it live up to advertising? Will it combine with controller-based intelligence in interesting ways? Will programmability eliminate or reduce the usage of command-line interfaces? All fascinating questions.
  • There are any number of data center fabric technologies/solutions being touted, often for compute, but now for storage as well. Are these fabrics incremental to, or subtractive from, more general-purpose networking technologies?
  • Smartphones are reaching adoption levels in developed countries where growth has slowed, which begs the question: what’s next?

Information tech has a way of surprising. Just when you feel things are getting boring, it grabs you by the collar and shakes you up. It has always been this way, and, at least for the foreseeable future, it will remain that way.
