In fuzz testing, interoperability means that the system under test (SUT) is in the correct state to receive fuzz test data for efficient and comprehensive testing. Defensics® is a generational, model-based fuzzer that recognizes the protocol being tested. It knows how every message should be constructed, the type of each field, and the meaning of fields like checksums, lengths, and session identifiers, as well as the order of the messages and the relations between fields in different messages. It can therefore produce very high-quality test cases with specific anomalies in specific locations, which means it can drive the target software into a wide variety of states to probe for vulnerabilities.
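To make the idea concrete, here is a minimal sketch of what a model-based fuzzer's knowledge of field relationships buys. The frame layout (2-byte length, payload, CRC32 checksum) is a hypothetical TLV-style protocol invented for illustration, not an actual Defensics protocol model: because the model knows the length and checksum depend on the payload, it can place an anomaly inside the payload while keeping those dependent fields valid, so the parser accepts the frame and processes the anomalous content.

```python
import struct
import zlib

def build_message(payload: bytes, anomalous: bool = False) -> bytes:
    """Build a frame for a hypothetical protocol: 2-byte big-endian
    length, payload, 4-byte CRC32 over length + payload.

    With `anomalous=True`, an overlong run of 0xFF bytes is injected
    into the payload, but the length and checksum are recomputed so
    the frame still passes basic framing checks -- the anomaly reaches
    the deeper parsing logic instead of being rejected at the edge.
    """
    if anomalous:
        payload = payload[:1] + b"\xff" * 1024 + payload[1:]
    body = struct.pack(">H", len(payload)) + payload
    return body + struct.pack(">I", zlib.crc32(body))
```

A dumb random fuzzer flipping bits in a captured frame would almost always corrupt the checksum and be discarded immediately; recomputing dependent fields is what lets anomalies probe deeper states.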
Figure 1: Defensics interoperability testing view
Before it makes sense to start fuzzing, however, it is important to verify interoperability between the fuzzer and the SUT: that is, to confirm the SUT is in a state where it can process normal input as well as anomalous input. If the SUT is not in the correct state, fuzzing might be inefficient, or it might not execute at all.
The protocol messages, including the messages sent and the expected responses, make up a test case. In Defensics terminology, a message without anomalies is called a valid message; thus, a test case with only valid messages is called a valid test case, or a valid case for short.
For example, one valid test case for the Bluetooth AVRCP protocol has a Get Capability request message and an expected Capability response. The test sequence includes the request message, response message, and messages needed to set up the L2CAP connection to the test target.
Figure 2: The Get Capability test sequence in the Defensics Bluetooth AVRCP test suites
In many protocols, the role of the communication component affects the message types and the expected direction of the communication. For example, during server direction testing, it is assumed that the client side starts the communication. The server role can also be called controller, gateway, central, or access point, depending on the protocol. During client direction testing, the assumption is that the test target starts the communication.
Defensics typically has its own test suite for each role. In the AVRCP example, the Defensics Bluetooth AVRCP test suite testing a server target is on the left, and the Defensics Bluetooth AVRCP controller test suite testing a client target is on the right. All protocol messages tested by the Defensics test suite are listed on the test suite’s datasheet.
Figure 3: The Defensics Bluetooth AVRCP controller test suite showing an AVRCP Get Capability valid test case in test case view
When users run the interoperability check against a test target, checking is performed by running a valid test case from each test group. If the test target gives the expected response to the valid test case, the interoperability check for that test case passes. If a valid case fails, the test target might not support the feature, the feature might not be enabled, the test target or test suite configuration might be incorrect, or the timeouts might be too tight.
In any case, when a valid test case does not pass, the SUT is not in the correct state to receive the anomalized message, or will not receive all anomalized messages (in the case of a multimessage test case). Based on interoperability check results, failed test groups can be excluded from the test plan to reduce testing time.
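The check-then-exclude logic described above can be sketched as a short loop. This is an illustrative outline of the procedure, not Defensics code; `run_valid_case` stands in for whatever mechanism actually sends the group's valid test case and evaluates the response.

```python
def interoperability_check(groups, run_valid_case):
    """Run the valid test case of each test group against the target.

    `run_valid_case(group)` returns True when the target gives the
    expected response. Groups whose valid case fails are excluded from
    the fuzzing plan: if the valid message cannot complete, anomalized
    variants of it would never reach the target in a meaningful state,
    and running them only wastes testing time.
    """
    plan = []
    for group in groups:
        if run_valid_case(group):
            plan.append(group)
        else:
            print(f"excluding {group}: valid case failed")
    return plan
```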
Figure 4: Interoperability testing with the Defensics Bluetooth LE link layer test suite showing that encryption is not supported by the test target
Test targets often do not support all protocol messages, due to optional messages in the protocol, messages used only with certain types of test targets, and different protocol versions. There might even be mutually exclusive test cases: if the test target uses one type of message transport channel (like TCP), another type of channel (like UDP) will not be supported at the same time.
Client direction testing is more challenging for interoperability, since it might require user action to start communication and is often more limited in supported features. For example, with Bluetooth clients, the user might need to select the connection and start the action. In those cases, manual action might lead to timeouts, and scripting is required to make testing efficient.
It might be possible to increase interoperability by changing the protocol parameters, the expected response, messaging order, or timing of the messaging between fuzzer and test target. The following sections summarize configuration methods provided by Defensics to respond to various needs of the test targets.
Valid test cases are provided in an XML-formatted file called a sequence file. A test suite might supply multiple sequence files to perform interoperability checking against different protocol features and functionality of the SUT. For example, valid cases for the various message transport channels can be found in different user-selectable sequence files. There might also be different sequence files for different protocol features.
Figure 5: The Defensics Bluetooth OBEX server test suite has its own sequence files to test MAP, PBAP, OPP, and FTP features with or without single response mode
Some test suites supply only a single sequence file, and some do not supply any user-editable sequences. Reload settings—reloading the test suite with a new protocol model and test sequences—can also be used to alter the valid test case sequence. For example, changing the transport channel setting from its default of UDP could reload the test suite with only the TCP valid test cases.
Often, it is enough to find one correct configuration to achieve maximum interoperability and full test coverage. But when a test target supports multiple transport channels and features, all available test sequences must be run with different settings to achieve maximum test coverage.
A sequence may include multiple outgoing protocol messages, which might hold parameters that are specific to a test suite or test target. Some are obvious, like address and port, but many other parameters could appear as configurable test suite settings. When there are too many optional parameters to provide them all as individual settings, the test suite developer must make a choice: the most common parameters are provided as settings, and the rest can be changed by manually editing the sequence file.
Parameter default values are set to known good values with minimal requirements on the test target. This is a design choice that maximizes interoperability. Often, the value is zero in valid test cases, and other values appear in anomaly test cases.
Figure 6: The Defensics Bluetooth L2CAP test suite configuration settings with default values
Parameters are not the only challenge. The timing and order of messages can also cause interoperability problems. Timing also affects testing speed: for example, an increased input timeout will slow down fuzzing whenever an input message does not appear due to an anomaly. The designer might choose speed first, but to reduce the number of skipped cases, it is sometimes necessary to increase the timeout value for slow-responding test targets.
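The speed-versus-interoperability trade-off comes down to how long the fuzzer waits for each response. The sketch below is a generic illustration, not Defensics internals: since anomalized inputs often leave the target silent, every silent test case costs the full timeout, so a generous value helps slow targets pass the valid cases but slows the whole fuzz run proportionally.

```python
import socket
from typing import Optional

def await_response(sock: socket.socket, timeout_s: float) -> Optional[bytes]:
    """Wait up to `timeout_s` seconds for a response from the target.

    Returns the received bytes, or None when nothing arrives in time.
    A None can mean the target (correctly or incorrectly) dropped an
    anomalized message, or that the timeout is too tight for a
    slow-responding target -- which is why the value is configurable.
    """
    sock.settimeout(timeout_s)
    try:
        return sock.recv(4096)
    except socket.timeout:
        return None
```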
Figure 7: Timing configurations on the Defensics Bluetooth test suite
Test sequences often wait for response messages from the test target. There might be a requirement that the response must be of a known type and hold certain parameters to be considered a valid response. Some parameters from the response might be needed for further communication, like a session ID or QoS level. Typically, test suites are lenient about response messages, but if the target does not respond as expected, the user has to edit the test sequence in the sequence file.
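A lenient response check like the one described might look as follows. This is a conceptual sketch with invented field names (`type`, `session_id`, `qos_level`), not a Defensics API: only the message type is required to match, while parameters needed by later messages in the sequence are captured when present rather than demanded.

```python
from typing import Optional

def validate_response(resp: dict, expected_type: str) -> Optional[dict]:
    """Leniently validate a decoded response message.

    Only the message type must match; anything else is tolerated.
    Parameters that later messages in the sequence may need (session
    ID, QoS level) are extracted opportunistically. Returns the
    captured parameters, or None when the type does not match.
    """
    if resp.get("type") != expected_type:
        return None
    return {k: resp[k] for k in ("session_id", "qos_level") if k in resp}
```

A stricter check that required every parameter would fail more targets at the interoperability stage; leniency is the design choice that keeps valid cases passing across varied implementations.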
Interoperability might seem like a secondary concern in fuzz testing, since by design the test data is invalid, malformed, or unexpected. But to maximize test coverage, the test target must be in the correct state to receive that data, and ensuring the correct state requires interoperability between the test suite and the test target.
The Defensics team creates meaningful test sequences for interoperability with a wide range of implementations, but sometimes interoperability testing fails without any obvious reason. If the test suite documentation does not provide a solution, our support team is always happy to help with interoperability problems. Since there are often no golden samples for test targets, our developers are interested in receiving feedback from the field to continuously improve interoperability.