A Secure Design Pattern Approach Toward Tackling Lateral-Injection Attacks

10/23/2022
by Chidera Biringa, et al.

Software weaknesses that create attack surfaces for adversarial exploits, such as lateral SQL injection (LSQLi) attacks, are usually introduced during the design phase of software development. Security design patterns are sometimes applied to tackle these weaknesses; however, due to the stealthy nature of lateral-based attacks, traditional security patterns are insufficient to address these threats. Hence, we present SEAL, a secure design that extrapolates architectural, design, and implementation abstraction levels to delegate security strategies toward tackling LSQLi attacks. We evaluated SEAL on a case-study software system, assuming the role of an adversary and injecting several attack vectors tasked with compromising the confidentiality and integrity of its database. Our evaluation demonstrated SEAL's capacity to address LSQLi attacks.
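The abstract does not detail SEAL's implementation, but the class of weakness it targets is dynamic SQL whose text can be influenced indirectly. As a general, hedged illustration (not the paper's method), the sketch below contrasts string-concatenated SQL with bind variables using Python's standard sqlite3 module; the table, function names, and payload are invented for the example:

```python
import sqlite3

# Toy in-memory database standing in for the case-study system's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name):
    # Dynamic SQL built by string concatenation: any value that reaches
    # this string -- even indirectly, as in lateral injection -- can
    # alter the query's structure.
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # Bind variables keep data out of the SQL text, so an injected
    # fragment is treated as a literal value, not as syntax.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_vulnerable(payload))  # tautology leaks every row
print(find_user_safe(payload))        # no user named "x' OR '1'='1"
```

The vulnerable variant returns all rows because the payload rewrites the WHERE clause into a tautology; the parameterized variant returns nothing, since the same payload is matched as an ordinary string.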

Related research

01/24/2014  Using Neural Network to Propose Solutions to Threats in Attack Patterns
In the last decade, a lot of effort has been put into securing software ...

01/30/2022  Making Secure Software Insecure without Changing Its Code: The Possibilities and Impacts of Attacks on the DevOps Pipeline
Companies are misled into thinking they solve their security issues by u...

12/10/2019  V0LTpwn: Attacking x86 Processor Integrity from Software
Fault-injection attacks have been proven in the past to be a reliable wa...

03/08/2013  Security Assessment of Software Design using Neural Network
Security flaws in software applications today has been attributed mostly...

01/13/2023  PMFault: Faulting and Bricking Server CPUs through Management Interfaces
Apart from the actual CPU, modern server motherboards contain other auxi...

07/07/2023  From Lemons to Peaches: Improving Security ROI through Security Chaos Engineering
Traditional information security presents a poor ROI: payoffs only manif...

02/23/2023  More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models
We are currently witnessing dramatic advances in the capabilities of Lar...
