VetIoT: On Vetting IoT Defenses Enforcing Policies at Runtime
Smart homes are powered by numerous programmable IoT platforms. Despite tremendous innovation, these platforms often suffer from safety and security issues. One class of defense solutions dynamically enforces safety and security policies, which capture the expected behavior of the IoT system. Although many proposed solutions build on this runtime approach, they all remain under-vetted. The primary reason lies in their evaluation approach: they are mostly self-evaluated in isolation, using a virtual testbed combined with manually orchestrated test scenarios that rely on user interactions with the platform's UI. Such hand-crafted, non-uniform evaluation setups limit not only reproducibility but also comparative analysis of efficacy results. Closing this gap the traditional way requires enormous upfront manual effort, which deters researchers from any large-scale comparative empirical evaluation. Therefore, in this paper, we propose a highly automated, uniform evaluation platform, dubbed VetIoT, to vet defense solutions that hinge on runtime policy enforcement. Given a defense solution, VetIoT easily instantiates a virtual testbed inside which the solution is empirically evaluated. VetIoT replaces manual UI-based interactions with an automated event simulator, and manual inspection of test outcomes with an automated comparator. We developed a fully functional prototype of VetIoT and applied it to three runtime policy enforcement solutions: Expat, Patriot, and IoTGuard. VetIoT reproduced their individual prior results and assessed their efficacy via stress testing and differential testing. We believe VetIoT can foster future research and evaluation.
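To make the evaluation pipeline concrete, the following is a minimal Python sketch of the event-simulator-plus-comparator idea described above. All names here (Event, simulate_events, compare_outcomes, differential_test, toy_defense) are illustrative assumptions for exposition, not VetIoT's actual API.

```python
# Sketch of automated event simulation, outcome comparison, and
# differential testing for runtime policy enforcement (hypothetical names).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    device: str      # e.g., "smoke_detector"
    attribute: str   # e.g., "smoke"
    value: str       # e.g., "detected"

# A defense is modeled as a function from an event trace to the list of
# actions it allows; everything else is treated as blocked by policy.
Defense = Callable[[List[Event]], List[str]]

def simulate_events(scenario: List[Event]) -> List[Event]:
    """Event simulator: replays a scripted scenario, replacing manual
    UI-based interactions (a real simulator would inject these into a
    virtual testbed)."""
    return list(scenario)

def compare_outcomes(observed: List[str], expected: List[str]) -> bool:
    """Automated comparator: replaces manual inspection by checking whether
    the defense allowed exactly the expected actions."""
    return sorted(observed) == sorted(expected)

def differential_test(d1: Defense, d2: Defense, scenario: List[Event]) -> bool:
    """Differential testing: do two defenses agree on the same trace?"""
    trace = simulate_events(scenario)
    return sorted(d1(trace)) == sorted(d2(trace))

# Toy policy: never allow unlocking the door while smoke is detected.
def toy_defense(trace: List[Event]) -> List[str]:
    smoke = any(e.attribute == "smoke" and e.value == "detected" for e in trace)
    requested = ["unlock_door"]
    return [] if smoke else requested

scenario = [Event("smoke_detector", "smoke", "detected")]
print(compare_outcomes(toy_defense(simulate_events(scenario)), expected=[]))  # True
```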