Will Explainable AI Be The Competitive Battlefront?

The Important

  • The lack of explainability of the most accurate AI/ML algorithms is an inhibitor to usage
  • Entrepreneurs are stepping in to address this problem
  • Explainability may become an area of differentiation in AI/ML solutions

Quick and free subscription to the leading Cloud, Network, and Security Newsletter, CloudNetSec from bohcay: link

Discussion

When I started looking at Artificial Intelligence (AI) / Machine Learning (ML) a few years ago, it was common to hear speakers comment that one of the challenges with ML is that it can be a black box: you don't really know why the algorithm is making the choices it does. This is especially the case with neural networks, which can be among the most accurate approaches, but the least explainable.

Figure 1. Accuracy vs Explainability, source: towardsdatascience.com

Fast forward to this week's Global Big Data Conference, and it is clear that entrepreneurs are jumping on this problem, promising solutions that explain even neural networks.

Imagine a situation where an algorithm denies the allocation of resources or priority queuing to a network flow. Then the network user who creates the flow calls the network operator/IT manager and asks why (or a question that resolves to why). Now imagine the network operator/IT manager cannot provide an answer. Embarrassment ensues.

These are the kinds of scenarios that have analysts reaching for a tried-and-true regression: something they feel they understand, and something they feel they can explain to others, warts and all, even if it is less accurate.
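To make that concrete, here is a minimal sketch of why the regression feels defensible, assuming scikit-learn and an entirely made-up set of flow features (the feature names, data, and labels below are hypothetical, purely for illustration): every coefficient tells the operator how much each attribute of the flow pushed the decision toward deny or grant.

```python
# Hypothetical sketch: a logistic regression deciding whether a flow gets
# priority queuing. Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["avg_packet_rate", "burstiness", "payload_size", "past_violations"]

# Synthetic training data: flows that are bursty or have past violations
# tend to be denied priority (label 0 = deny, 1 = grant).
X = rng.normal(size=(500, 4))
y = ((-1.5 * X[:, 1] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=500)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explaining a single denial is just reading the per-feature contributions:
flow = np.array([[0.2, 1.8, -0.3, 2.1]])   # the flow the user called about
contributions = model.coef_[0] * flow[0]    # log-odds contribution per feature
for name, contrib in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:>18}: {contrib:+.2f} toward 'grant priority'")
print("decision:", "grant" if model.predict(flow)[0] == 1 else "deny")
```

The operator can point at the burstiness and past-violations terms and give the caller a straight answer, which is exactly what the black-box alternatives make hard.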

Putting aside those marketing something less than AI as "AI": the further we go down this road, and the more we reach for the most accurate approaches to making decisions, the greater the challenge of explaining how those decisions are being made.
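For the black-box end of the spectrum, one common family of techniques behind explainability tools is post-hoc, model-agnostic probing: hold the trained model fixed, perturb one input at a time, and report how the prediction moves. A rough sketch of that idea follows (the MLP and the flow features are placeholders of my own, not any particular vendor's method):

```python
# Rough sketch of a post-hoc, per-feature sensitivity explanation for a
# black-box classifier. The MLP and features stand in for any opaque model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
features = ["avg_packet_rate", "burstiness", "payload_size", "past_violations"]

X = rng.normal(size=(500, 4))
y = ((-1.5 * X[:, 1] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=500)) > 0).astype(int)

black_box = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                          random_state=1).fit(X, y)

def explain(model, x, baseline, names):
    """Report how P('grant priority') changes when each feature of flow x
    is replaced by its baseline (average) value, one at a time."""
    p_actual = model.predict_proba([x])[0, 1]
    deltas = {}
    for i, name in enumerate(names):
        x_masked = x.copy()
        x_masked[i] = baseline[i]
        p_masked = model.predict_proba([x_masked])[0, 1]
        deltas[name] = p_actual - p_masked   # > 0: feature pushed toward 'grant'
    return p_actual, deltas

flow = np.array([0.2, 1.8, -0.3, 2.1])
prob, deltas = explain(black_box, flow, X.mean(axis=0), features)
print(f"P(grant priority) = {prob:.2f}")
for name, d in sorted(deltas.items(), key=lambda kv: kv[1]):
    print(f"{name:>18}: {d:+.2f}")
```

The point is not this particular recipe, but that the explanation is bolted on after the fact, which is why it becomes a product and a differentiator rather than a property of the model itself.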

I can imagine some segments of the network, and some network customer segments, where explainability is not important: the customer is not sophisticated, the impact of a portion of the network going down is not critical, the kinds of decisions being made are not difficult, etc. OTOH, I can imagine some network segments and some network customer segments where understanding why decisions are being made will be critical, not just for explaining the scenario above, but also for learning and as input to other analysis.

As we think forward to a world where network / IT operations people are moved up the value stack, relieved of real-time, computationally hard grunt work, we have to ask how they will be able to perform value-adding work when they have no idea why the algorithms in the network are doing what they are doing.

Where there is a problem, there is the opportunity for a solution. There are stand-alone companies starting up to address this problem, in addition to other problems such as on-chip accelerators for AI, optimizing AI/ML to reduce the resources used in a cloud environment, and training deep learning models. Where network / IT product / solution / service providers place their investments, and what they decide to build vs acquire on the tools/explainability front, is interesting to consider.

My intuition is that explainability, and integration with other analysis, are potential areas of differentiation as we head in the direction of self-driving IT / networking.

Quick and free subscription to the leading Cloud, Network, and Security Newsletter, CloudNetSec from bohcay: link