How Safe is Your Embedded Software Environment?

The disclosure of the Spectre and Meltdown CPU vulnerabilities in 2018 brought a new level of focus to embedded environments. The exclamation mark was the assertion that a major U.S. CPU manufacturer advised Chinese tech companies of the vulnerabilities before it notified the U.S. government, thus mixing geopolitics with tech. One of the vulnerabilities exploited [instruction] branch prediction, while the other exploited unauthorized memory access. Both put a spotlight on low-level CPU vulnerabilities.

Not surprisingly, there have been a number of short- and long-term reactions. At the recent Arm TechCon, Arm put a significant focus on its security efforts. Arm faces a tradeoff between what might be the optimal architecture and what it can do in the short term with current architectures.

In the short term, Arm is implementing memory tagging, pointer authentication, and privileged execution of secure OSes. In the long term, Arm is working with the University of Cambridge on CHERI (Capability Hardware Enhanced RISC Instructions) based architectures that aim to provide fine-grained control over what an executing process has access to. From the University of Cambridge website: “The CHERI memory-protection features allow historically memory-unsafe programming languages such as C and C++ to be adapted to provide strong, compatible, and efficient protection against many currently widely exploited vulnerabilities. The CHERI scalable compartmentalization features enable the fine-grained decomposition of operating-system (OS) and application code, to limit the effects of security vulnerabilities in ways that are not supported by current architectures.”
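To make the capability idea concrete, here is a minimal, purely illustrative software model of a CHERI-style capability in C. On real CHERI hardware the base, bounds, and permissions travel with the pointer in a tagged capability register, and the check happens inside the load/store instruction itself, trapping on violation; the struct and function names below are invented for this sketch.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy model only: a "fat pointer" carrying bounds and permissions.
 * All names here (capability, cap_load, cap_store) are hypothetical. */
enum { CAP_READ = 1, CAP_WRITE = 2 };

typedef struct {
    uint8_t *base;   /* start of the region this capability grants */
    size_t   length; /* size of the region in bytes */
    int      perms;  /* CAP_READ and/or CAP_WRITE */
} capability;

/* Checked load: 0 on success, -1 if out of bounds or unreadable. */
int cap_load(const capability *cap, size_t offset, uint8_t *out) {
    if (!(cap->perms & CAP_READ) || offset >= cap->length)
        return -1;
    *out = cap->base[offset];
    return 0;
}

/* Checked store: 0 on success, -1 if out of bounds or unwritable. */
int cap_store(capability *cap, size_t offset, uint8_t value) {
    if (!(cap->perms & CAP_WRITE) || offset >= cap->length)
        return -1;
    cap->base[offset] = value;
    return 0;
}
```

The classic C buffer overflow is exactly a store one byte past `length`; in this sketch it returns an error, whereas CHERI hardware would fault at the offending instruction rather than silently corrupt memory.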

CHERI is built on the idea that processes should not be able to execute code or change data in ways that lead to vulnerabilities; for example, a process should not be able to alter the instruction pipeline in unsafe ways or access the memory of other processes. This could be a significant step forward in embedded environment security. When I recently asked Arm executives about availability, they indicated there would not be generally available offerings for a number of years. One of the key issues is that this kind of architecture might require a major change to software programming models and practices.
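The pointer authentication listed among the short-term measures can also be sketched in software. Arm's actual PAC feature signs a pointer with the QARMA block cipher and hides the signature in unused upper address bits; the toy keyed checksum below only illustrates the sign/authenticate flow and is in no way cryptographically secure. The function names are invented for the sketch.

```c
#include <stdint.h>

/* Toy model of pointer authentication. Assumption: only the low 48
 * bits of a 64-bit virtual address are meaningful, so the top 16
 * bits can hold a signature. The XOR-fold below merely stands in
 * for a real keyed cipher such as QARMA. */
static const uint64_t ADDR_MASK = 0x0000FFFFFFFFFFFFULL;

/* 16-bit keyed checksum of the 48 address bits (NOT secure). */
static uint64_t toy_mac(uint64_t ptr, uint64_t key) {
    uint64_t x = (ptr & ADDR_MASK) ^ key;
    return (x ^ (x >> 16) ^ (x >> 32)) & 0xFFFF;
}

/* Sign: stash the signature in the unused top 16 bits. */
uint64_t pac_sign(uint64_t ptr, uint64_t key) {
    return (ptr & ADDR_MASK) | (toy_mac(ptr, key) << 48);
}

/* Authenticate: recompute and compare; return the stripped pointer
 * on success, 0 on mismatch (real hardware instead yields an
 * address guaranteed to fault when dereferenced). */
uint64_t pac_auth(uint64_t signed_ptr, uint64_t key) {
    uint64_t ptr = signed_ptr & ADDR_MASK;
    if ((signed_ptr >> 48) != toy_mac(ptr, key))
        return 0; /* signature mismatch: pointer was tampered with */
    return ptr;
}
```

The point of the mechanism is that an attacker who overwrites a signed return address or function pointer without knowing the key produces a value whose signature no longer verifies, so the corrupted pointer is caught before it is used.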

Arm is stuck in the kind of bind that all of us in tech have seen many times in our careers: what can we do given the limitations of the currently installed base of technology versus what will it take to move the customer base to a totally new way of doing things? Innovator's dilemma? There is no implied criticism of Arm here. We have all seen this before, and it is a difficult situation, especially when all the benefits of a new approach have not yet been proven out and all the migration issues have not been worked out. It is on issues like this that great management teams earn their generous compensation packages. It takes talent, time, and perseverance to navigate transitions like this.

As we move to an IoT-driven world of 1 trillion+ devices, embedded security will only get more important. Even for traditional networking and security systems companies, securing control, data, service, and management plane processing is important, especially when general-purpose, non-fixed-pipeline processing is involved.

Maybe the short-term responses around memory tagging, pointer authentication, and similar approaches are enough for now. Will they be enough in the future? I doubt this issue is going away anytime soon. What is important for any technology strategy is assessing the state of embedded processing vulnerabilities and having an explicit roadmap for addressing them, a roadmap that is well understood and perhaps even well communicated to customers, though even that decision is laced with challenging management considerations.
