
Team82 Research

OPC UA Deep Dive Series (Part 5): Inside Team82’s Research Methodology

July 25th, 2023

Executive Summary

  • Team82 shares its research methodology as it examines the security and attack surface of the OPC UA protocol

  • We explain our research environment, the commercial and custom fuzzers we used, and vulnerabilities in common OPC UA implementations

  • We found and disclosed around 50 OPC UA vulnerabilities across 15 protocol stacks, affecting hundreds of OPC UA products.

  • We also developed about a dozen unique exploit techniques that are universal, affecting multiple vendors, and pushed for changes to the specification.

In previous installments of Team82’s Deep Dive Series we discussed the hidden risks associated with OPC UA, and examined different components of OPC UA-based products including servers, clients, SDKs, and protocol stack libraries. We learned that OPC UA protocol stacks can be the weak link in the global supply chain because they are used by many vendors’ products with little to no modification. 

In Part 5, we explain our research methodology, how we approached this problem—including our preparation for ZDI’s Pwn2Own competition—how we set up the research environment, utilized fuzzers, and dove into the specification to find bugs and vulnerabilities in common protocol implementations. We will also discuss our results including the vulnerabilities we found, and the open-source tools we developed to uncover those issues. Finally, we will share how this type of research helped both vendors and developers to improve the overall security of their products, and pushed the specifications towards a more secure future.

Table of Contents

  1. Preparing the Environment

  2. Mapping the Targets

  3. Building a Basic OPC UA Client

  4. Finding Vulnerabilities in OPC UA

  5. Manual Research and Diving into the Specifications

  6. Open-Source OPC UA Projects

  7. Results

Team82’s OPC UA Research Methodology

Preparing the Environment

Like most things in life, preparation is key. So before we could start looking for vulnerabilities, we had to build a working setup for each and every OPC UA server we researched. While this took quite a long time, it was a crucial step because it allowed us to interact with and understand different OPC UA server and client implementations. 

In our setup, we used two Intel NUC mini PCs running the VMware ESXi hypervisor to install all of the OPC UA servers as virtual machines, mostly Windows 10- or Ubuntu Linux-based. In addition, we “borrowed” a powerful server from the Claroty R&D department for our fuzzing efforts.

Our setup included two Intel NUCs running VMware ESXi and dozens of Windows 10/Ubuntu Linux virtual machines.

We made a clean baseline template of Windows 10 and Ubuntu Linux and duplicated it multiple times. This gave us a lot of flexibility down the road because we could easily spin up new machines as needed.

Mapping the Targets

Next we started to install the targets. Luckily for us, ZDI and Dale Peterson did a very good job over the years selecting popular OPC UA servers/clients/gateways/protocol stack libraries for the Pwn2Own ICS competitions (2020, 2022, 2023). Our initial list included:

Protocol Stack Libraries

OPC UA Servers



We observed overlap between some of the targets because some products offer both server and client capabilities built on the same underlying protocol stack library, each with its own specific implementation. An OPC UA implementation means a specific code flow and behavior, for example, how a particular message is parsed and handled. So by understanding what’s under the hood, we could reduce our research effort and focus on a few specific implementations instead of dozens of targets. 

Our next goal was to categorize and analyze the different OPC UA servers and understand their underlying OPC UA protocol stacks. Sadly, there is no surefire way of knowing which protocol stack each product uses. However, there are a few ways to try, including: 

  • Explore the OPC UA libraries (dll/so) and extract from them metadata and strings, and search for similarities in:

    • Library names

    • Copyright strings

    • Function names

    • Logging/debug strings

    • Hardcoded strings (magics, passwords, etc.)

  • Reverse-engineer the main functions and try to find similarities in code flows
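To make the string-matching idea concrete, here is a minimal Python sketch of scanning a library for known stack-identifying strings. The marker strings below are illustrative examples we chose, not a complete fingerprint database, and `fingerprint` is a hypothetical helper, not part of any released tool:

```python
from pathlib import Path

# Illustrative marker strings; a real database would hold many more per stack.
MARKERS = [
    (b"OPC Foundation", "OPC Foundation stack (C/C++ or .NET)"),
    (b"Unified Automation", "Unified Automation ANSI C stack"),
    (b"open62541", "open62541"),
    (b"node-opcua", "node-opcua"),
]

def fingerprint(lib_path: str) -> list[str]:
    """Scan a library (.dll/.so) for stack-identifying strings."""
    data = Path(lib_path).read_bytes()
    return [name for marker, name in MARKERS if marker in data]
```

A raw substring scan like this is crude (it ignores sections and encodings), but combined with reverse engineering it is often enough to narrow a product down to one or two candidate stacks.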

OTORIO did something very similar, and they did it well, so their Black Hat research on this topic is worth reading.

OTORIO’s research into mapping similarities in OPC UA protocol stacks.

We knew that during the early days of OPC UA, the OPC Foundation released some basic OPC UA protocol stacks as open-source libraries. We also tried to match which products were derived from which OPC Foundation libraries. This wasn’t straightforward because in many cases the vendors heavily modified the code and/or added their own. Eventually we came up with the following:

OPC Foundation C/C++ Stack

  • Unified Automation - ANSI C Stack

  • UA Automation C++ Server (+Proprietary)

  • Softing Secure Integration Server (+Proprietary)

  • Softing edgeAggregator (+Proprietary)

  • Softing edgeConnector (+Proprietary)

OPC Foundation .NET Stack

OPC Foundation Java Stack

  • Prosys OPC UA SDK for Java (+Proprietary)

  • Prosys OPC UA Simulation Server (+Proprietary)

  • Prosys OPC UA Browser (+Proprietary)

Eclipse Milo - Java

  • Inductive Automation Ignition (+Proprietary)

Unknown / Proprietary

  • PTC Kepware KepServerEx - C

  • Node-opcua - Node JS

  • Open62541 - C

  • OPC UA rust - Rust

That’s better. Now we can focus on the underlying protocol stacks. 

When looking for protocol-level vulnerabilities, the holy grail is a protocol-stack vulnerability, because the flaw exists in the stack and not in the application. This means that every product that uses the vulnerable protocol stack could be vulnerable as well. While it is still possible for a similar vulnerability to affect multiple products even when they do not share the same protocol stack (especially with logical vulnerabilities in a protocol), a shared protocol stack increases the likelihood considerably. 

Building a Basic OPC UA Client

While it is easy to use a ready-made library for client communication, we wanted to build our own client from scratch. There are multiple reasons why:

  • We wanted to learn and understand the protocol; the best way to learn is by getting your hands dirty.

  • No library could give us the level of control we wanted for building a malformed/malicious payload, which could be crucial in our exploitation process. 

  • It’s ours, so we could customize it and build our own framework from it.

Before starting to write code, we first examined a couple of OPC UA clients and observed their interaction with different OPC UA servers. We chose to check:

We used the clients to connect to different servers and recorded the network traffic to see the minor differences between the OPC UA protocol implementations. Wireshark’s OPC UA dissector was really helpful in understanding the overall structure. We repeated this process while playing with different features of the clients until we felt comfortable enough to write our own client :)

Example of an OPC UA client: Unified Automation’s UaExpert.

We built the client from the ground up, using Python’s construct library for most structures. Now we could interact with all of our targets and even craft very specific payloads to reach code flows we found interesting. For example, we could craft a MSG packet whose header declares a huge packet size while actually sending a small packet, all to see how the server reacts to such “malformed” packets.
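As an illustration of that kind of header lie, here is a minimal sketch of building an OPC UA binary Hello (HEL) message using Python’s stdlib struct module (our framework uses the construct library; the `claimed_size` knob here is our own illustrative addition, not part of the protocol):

```python
import struct

def build_hello(endpoint_url: str, claimed_size=None) -> bytes:
    """Build an OPC UA binary 'HEL' (Hello) message.

    claimed_size lets us lie about the MessageSize header field while
    sending fewer bytes, to observe how a server handles the mismatch.
    """
    url = endpoint_url.encode()
    # Hello body: ProtocolVersion, ReceiveBufferSize, SendBufferSize,
    # MaxMessageSize, MaxChunkCount (UInt32 each), then EndpointUrl
    # as an Int32 length-prefixed string.
    body = struct.pack("<IIIII", 0, 65536, 65536, 0, 0)
    body += struct.pack("<i", len(url)) + url
    size = 8 + len(body) if claimed_size is None else claimed_size
    # Header: MessageType (3 bytes), ChunkType 'F' (final), MessageSize (UInt32 LE)
    return b"HEL" + b"F" + struct.pack("<I", size) + body

# Declare a 4 GB message while sending only a few dozen bytes.
pkt = build_hello("opc.tcp://127.0.0.1:4840", claimed_size=0xFFFFFFFF)
```

Sending `pkt` to a server under test shows immediately whether its transport layer validates MessageSize against the bytes actually received.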

Using our framework, we can easily modify any parameter in the OPC UA protocol.

Throughout the research process we used this client as a framework and started to add different “recipes” to it: attack flows that exploit a specific vulnerability. Over time, as we found more vulnerabilities, we added more and more exploits to the framework, which we plan to release as an open-source project. 

Our OPC UA exploit framework enabled us to fine-tune our exploits and reach very specific code flows in the OPC UA targets.

The framework handles all the OPC UA session management, allowing researchers to focus on the exploit logic itself.

Finding Vulnerabilities in OPC UA

With our setup ready, all the targets and their underlying protocol stack implementations mapped, and a fully capable OPC UA client/framework in hand, it was time to move on to the fun part: finding vulnerabilities.

To find vulnerabilities we decided to go in three directions:

  • Fuzzers: Run different fuzzers in the background: network-based, source-code/coverage-based, and closed-binary-based.

  • Manual: Research the targets and find memory corruptions by hunting for dangerous functions such as memcpy, sprintf, and others.

  • Specification: Dive into the specification and find esoteric features that could potentially be abused in popular protocol stack implementations.

We first focused on the fuzzers because we wanted them to run in the background while we moved to manual vuln hunting.

Fuzzing OPC UA

Fuzzing is the process of automatically generating payloads, sending them to an application, observing how the application behaves, and hoping for crashes :) There are several methods of fuzzing, many of which we used in our OPC UA journey; the common denominator is the ability to send many payloads, ranging from tens per second all the way to tens of thousands, and look for crashes.

Beyond crashing the targets and finding vulnerabilities, it was also important to us to build a unique corpus, a set of inputs for a fuzzing target, that we could later use to feed all the other fuzzers and test against all the targets. Each corpus entry is supposed to trigger a specific code flow, so by collecting and reusing them all, we could increase our coverage and even blindly replay them against the targets to see how they react. Eventually we collected tens of thousands of unique corpus entries that we used in different ways.
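Merging corpus files across several fuzzers is mostly a deduplication problem. A simple stdlib-only sketch of the idea (not our actual tooling) might look like this:

```python
import hashlib
from pathlib import Path

def collect_corpus(input_dirs, out_dir) -> int:
    """Merge queue/corpus dirs from multiple fuzzers into out_dir,
    dropping exact duplicates by content hash. Returns the number of
    unique entries kept."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    seen = set()
    for d in input_dirs:
        for f in Path(d).iterdir():
            if not f.is_file():
                continue
            data = f.read_bytes()
            digest = hashlib.sha1(data).hexdigest()
            if digest not in seen:
                seen.add(digest)
                # Name each entry by its hash so reruns are idempotent.
                (out / digest).write_bytes(data)
    return len(seen)
```

Hash-based dedup only removes byte-identical inputs; coverage-based minimization (e.g. afl-cmin-style tools) goes further by dropping inputs that add no new code paths.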

Our OPC UA harness with libFuzzer burning some CPUs.

Network-Based Fuzzer

We wrote a network-based fuzzer on top of BooFuzz. It’s a somewhat “dumb” fuzzer with no code coverage, but it turned out to be a good investment: it found a couple of bugs that we later exploited at Pwn2Own ICS 2022. We released it as an open-source project.
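The core idea of such a “dumb” network fuzzer fits in a few lines. Here is a minimal stdlib-only sketch of the approach (the real tool builds on BooFuzz and adds protocol-aware block definitions, target monitoring, and restarts; `mutate` and `fuzz_once` are illustrative helpers of our own):

```python
import random
import socket

def mutate(payload: bytes, n_flips: int = 4, seed=None) -> bytes:
    """XOR a few random bytes of a recorded OPC UA message (dumb mutation)."""
    rng = random.Random(seed)
    data = bytearray(payload)
    for _ in range(n_flips):
        # XOR with a nonzero value so each flip actually changes the byte.
        data[rng.randrange(len(data))] ^= rng.randrange(1, 256)
    return bytes(data)

def fuzz_once(host: str, port: int, template: bytes) -> bool:
    """Send one mutated message; a refused/reset connection hints at a crash."""
    try:
        with socket.create_connection((host, port), timeout=2) as s:
            s.sendall(mutate(template))
            s.recv(1024)
        return True
    except OSError:
        return False
```

In practice the loop around `fuzz_once` also needs to detect hangs, restart the target, and save the payload that preceded each failure, which is exactly the plumbing BooFuzz provides.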

Our network fuzzer was working around the clock to find bugs.

Coverage-Based Fuzzing

By far the fastest (and arguably most effective) approach is memory/coverage-guided fuzzing. Instrumentation is inserted into the program during compilation, which lets the fuzzer track which code branches each payload reaches. Instead of trying inputs blindly, the fuzzer mutates them according to an algorithm that tries to reach as many code branches as possible.

The first step in coverage-guided fuzzing is identifying the main logic you want to fuzz, for example, a function that receives an OPC UA payload and parses it into an OPC UA object. This function must be memory neutral: it must not allocate memory without freeing it, and it must not change global state that could affect future fuzzing iterations. 

We compiled an old ANSI C-based OPC UA protocol stack we found with libFuzzer and AFL++, and created a simple harness that takes input from STDIN and invokes the main parsing flow. Similarly, we used FuzzSharp to fuzz the .NET protocol stack.

Example of our ANSI C stack harness compiled with AFL.

We also used Unicorn to fuzz closed-source targets: we started emulation at specific parsing functions and let Unicorn fuzz them.

Manual Research and Diving into the Specifications

While the fuzzers were burning CPU cycles in the background, we could turn our attention to manual research. This involved doing a lot of reverse engineering work to untangle complex code flows and also looking at the OPC UA specifications to find esoteric features and nuances that could potentially be exploited. 

Here are a couple of examples of features that might be abused:

  • Let’s say the specification says it’s possible for a client to read a large OPC UA object. What happens if one client is deleting this object while another client is trying to read it at the same time?

  • What happens if a client asks the server to execute millions of methods (OPC UA Method) and then suddenly disconnects? (For example, see CVE-2022-1748)

We tried to hunt for and abuse exactly these types of features. We classified the feature “abuse” into four categories we tried to exploit: denial-of-service conditions, information leaks, remote code execution, and authentication/authorization bypasses. We approached each category a bit differently and looked for different things in each; here are a couple of ideas with some examples:

Denial of Service

Information Leak

  • Uninitialized memory

  • Unprotected functions (OPC Foundation UA .NET Standard CVE-2022-33916)

  • Undocumented functions

  • Hardcoded passwords/keys

  • Buffer overflows: Read

  • Logs and stack traces (OPC Foundation OPC UA Server .NET Standard CVE-2023-31048)

Remote Code Execution

  • Buffer overflows: Write, for example, by exploiting dangerous functions such as memcpy, sprintf, and others (PTC Kepware KepServerEx CVE-2020-27263)

  • Use-after-free bugs (PTC Kepware KepServerEx CVE-2020-27267)

  • Writing files: for example, by exploiting Zip Slip vulnerabilities.

  • Logical bugs such as command injections.

  • For web-based applications, we also searched for Web-related vulnerabilities such as cross-site scripting (XSS), authentication bypasses, and others.

Authentication Bypass

  • Hardcoded passwords/keys

  • Certificate-parsing issues (OPC UA .NET Standard Trusted Application Check Bypass CVE-2022-29865)

Open-Source OPC UA Projects

We also worked hard to evaluate and disclose vulnerabilities in OPC UA open-source projects. Since we had developed unique attacks against OPC UA functionality, all we had to do was set up the open-source software (OSS) protocol stacks and run our OPC UA exploit framework against them. After finalizing the tests, we created detailed technical reports and started the disclosure process.

Our persistent efforts to contact all vulnerable OPC UA projects.

It was a long process contacting all the maintainers (Snyk helped us, thanks!), but eventually we managed to get to all vendors and privately disclose our findings.

CVE-2022-25897: unlimited monitored items (resource exhaustion), a denial-of-service vulnerability in Eclipse Milo.


Results

We researched OPC UA on and off for a long time, mostly because we wanted to participate in Pwn2Own, which turned out to be a very good incentive for us and benefited the affected vendors. The results:

  • Pwn2Own ICS: We participated and demonstrated our OPC UA exploits at three Pwn2Own competitions: Pwn2Own ICS 2020, 2022, 2023

  • CVEs: We found and reported ~50 OPC UA vulnerabilities/CVEs across ~15 protocol stacks, affecting hundreds of OPC UA products.

  • Exploit Techniques: We developed ~12 unique exploit techniques that are universal, affecting multiple vendors, and pushed for changes to the specification.

  • Open-Source Tools: We released an open-source tool, our OPC UA network fuzzer, and will soon release our OPC UA exploit framework.

  • OPC UA Specifications: We helped to improve the specifications and pushed the vendors toward better and more secure products.

During our research we developed ~8 unique attack methods that are considered generic, meaning they affect multiple OPC UA protocol stacks, so we had to disclose the same exploit technique to multiple vendors. For example, a “chunk flooding” DoS exploit we developed affected more than seven different protocol stacks from different vendors.
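To give a flavor of that technique, the chunk abuse at its core can be sketched with a simplified chunk builder (the header layout follows the OPC UA binary mapping; real exploitation also requires an established secure channel, a security token, and sequence headers, all omitted here, and `msg_chunk` is an illustrative helper of our own):

```python
import struct

def msg_chunk(secure_channel_id: int, body: bytes, final: bool = False) -> bytes:
    """Build one OPC UA 'MSG' chunk.

    Chunk type 'C' marks an intermediate chunk. A server must buffer
    intermediate chunks until the final 'F' chunk arrives, so a stream
    of 'C' chunks that never ends forces it to keep allocating memory.
    """
    chunk_type = b"F" if final else b"C"
    # Header: MessageType (3 bytes) + chunk type (1 byte) +
    # MessageSize (UInt32 LE) + SecureChannelId (UInt32 LE).
    size = 12 + len(body)
    return b"MSG" + chunk_type + struct.pack("<II", size, secure_channel_id) + body
```

Defenses against this class of attack are exactly what the transport negotiation fields MaxMessageSize and MaxChunkCount exist for; several vulnerable stacks simply did not enforce them.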

Through our research we had the opportunity to participate in multiple Pwn2Own competitions and even take first place as Pwn2Own 2023’s Master of Pwn. For each competition we had to prepare for months and we found dozens of bugs, even though we didn’t use all of the bugs in the competition. For example, here is a Sankey flow diagram of the bugs we found for Pwn2Own 2022:

Sankey flow diagram of the bugs we found for Pwn2Own 2022.

What’s Next?

In the next part of our OPC UA Deep Dive series, we will unveil our exploit framework. This one-of-a-kind framework will be made publicly available on our GitHub repository, and we encourage researchers and vendors to use it to test the security of OPC UA implementations. 

Researchers and vendors will find the framework useful because the attacks it contains exploit specific functions within OPC UA implementations, and it was the centerpiece tool used throughout the research supporting the OPC UA Deep Dive series. 

In Part 5 of our series, we explained our research environment as well as the results of our work, including some of the vulnerabilities that have been patched after coordinated disclosures between Team82 and affected vendors. 

OPC UA Deep Dive series

Part 1: History of the OPC UA protocol

Part 2: What is OPC UA?

Part 3: Exploring the OPC UA Protocol

Part 4: Targeting Core OPC UA Components

Part 6: A One-of-a-Kind OPC UA Exploit Framework
