A multi layered approach to prevent data leakage Essay Example

  • Pages: 17 (4405 words)
  • Published: November 14, 2017
  • Type: Article

The enterprise lacks protection in the area of databases. Skilled malicious hackers are no longer primarily interested in persuading millions of individuals to open email attachments that spread through an infected machine's email address book. Instead, they have adopted a more professional strategy: targeting and infiltrating networks to acquire data that can be used or sold for financial profit.

The databases in an enterprise contain valuable and sensitive information such as customer data, transaction details, financial performance numbers, and human resource data. However, databases are often not well protected compared to other areas. Although perimeter and network security measures offer some protection against certain attacks, there are still vulnerabilities specific to databases that can be exploited by attackers.

The database is vulnerable to being breached due to the complexity of database management systems and their constant development of new features. These vulnerabilities are discovered by ethical hackers, non-ethical hackers, and ordinary users. Once found, they are reported to the database management system vendors, who work to patch them. However, this patching process often takes months or even years, creating a window in which hackers can exploit the flaw and breach the database.

The scenario is reminiscent of the bank robber Willie Sutton, who, when asked why he robbed banks, famously answered "that's where the money is." Databases are attacked for the same reason: that is where the data is. The defense can borrow from banking as well: just as current ATM systems limit the amount that can be withdrawn per transaction and per day, rather than trying to stop every street robbery, database defenses can limit the volume of data any single query or session can extract.

A layered approach to preventing data leakage

Implementing a layered approach can effectively prevent data leakage. This strategy should begin with robust protection at the source by securing sensitive information in critical databases. Additionally, it should incorporate monitoring and blocking capabilities at the database query level to restrict access for both internal and external users, including database administrators, based on their respective profiles. An enterprise solution should encompass monitoring and blocking mechanisms at multiple layers, such as the application layer, database layer, and file system layer. It should also be able to promptly escalate threat warnings across the applications, databases, and file systems involved in handling sensitive information. These components can then conduct thorough analysis and enforce stricter policies for any access requests targeting sensitive data.

Identifying and securing major databases that store sensitive information, such as credit card numbers and customer data, is typically a straightforward task. This initial measure is crucial because many information breaches, even those resulting from stolen laptops or the transmission of sensitive information via email, often begin with queries made to critical databases containing this sensitive information. By adopting this approach, it is possible to effectively minimize the leakage of sensitive data from centralized data stores to diverse distributed data stores.

1. New patterns of attack

There are new patterns of attack emerging, just as there are new attackers. While external hacking, accidental exposure, lost or stolen backup tapes, and lost or stolen computers still pose a significant risk of data leakage, database attacks have become more sophisticated. These attacks often involve authorized insiders who abuse privileges, hack application servers, and use SQL injection to extract critical data. Even well-protected databases can be vulnerable if applications offer broad access privileges that go beyond individual permissions. Attacks from outside users injecting SQL commands into seemingly harmless input fields are particularly dangerous, as they compromise database security from outside corporate networks. Insider abuse, by contrast, is a long-established method of attack that typically involves a trusted but untrustworthy employee with broad access privileges. Many organizations have policies and processes in place to govern access to sensitive data but struggle to find practical, cost-effective solutions for detecting or blocking activities that violate those policies.

Database attacks are frequently initiated by insiders. Database breaches, often carried out by organized criminals working alongside authorized insiders, target valuable concentrations of vital business information. These attacks have immediate and significant impacts on businesses, and the damage to a company's reputation, as well as to personal reputations, can be long-lasting. Database breaches have become a growing aspect of IT risk. There is increasing recognition of the "insider threat," particularly the threat posed by users with privileged access, who are responsible for a considerable number of data breaches. According to CERT's annual research, internal users account for up to 50% of breaches. The 2006 FBI/CSI report on the insider threat reveals that two-thirds of surveyed organizations, both commercial and government, experienced losses due to internal breaches; some attribute up to 80% of the damage to internal breaches. The report also notes that at the time of the breach, 57% of implicated insiders had privileged access to the breached data. This evidence demonstrates that perimeter and network security measures alone are insufficient to prevent such breaches. The consolidation of valuable information and the professionalization of computer crime have driven the frequent launching of database attacks through insiders who possess complete authorization to access and steal information. Both the Computer Security Institute and FBI surveys provide detailed findings on the rising occurrence of such attacks. Infrastructure security solutions such as perimeter-based defenses, access controls, and intrusion detection prove ineffective against authorized insiders. The reality remains that no screening and authorization process can achieve perfection.

Unauthorized behaviors by authorized and unauthorized users

Database attacks, carried out by both authorized and unauthorized users, depict unauthorized behaviors. It is crucial to recognize that authorized insiders pose a significant threat to the safety and integrity of information. Even with a flawless screening process, no permission-based, asset-centric security system can fully eliminate this inherent vulnerability.

The problem is worsening, as business enterprises and security companies are still in the early stages of addressing the resurgence of threats to their information assets. More information is being shared with customers, partners, and suppliers through Web portals connected to important databases. Companies are extensively integrating customer-facing applications such as customer relationship management, service provisioning, and billing, spreading critical information throughout organizations. Furthermore, more businesses are outsourcing and offshoring critical processes to new "insiders" who have not undergone the organization's own internal screening procedures. In industries like pharmaceuticals and genetic research, increasing automation of intellectual property management can leave valuable corporate assets exposed in easily accessible databases. Any organization, public or private, is therefore susceptible to public embarrassment, financial loss, and government investigation when crucial information is stolen or compromised.

The more complex an application becomes, the more likely it is to harbor hidden holes. Attack an application frequently, and exploitable holes are likely to be discovered. Databases further complicate the issue, as they are intricate systems that exchange information with other applications, whether vendor-supplied or internally developed using APIs.

2. Evolving security requirements

New security requirements are evolving, moving away from solely safeguarding devices and identifying specific users. Emphasis is now placed on policies governing user interactions and information protection. Effective policy management techniques and technologies are needed to alert on, or restrict, access and activities that do not adhere to prescribed guidelines.

It is crucial for organizations to have multiple layers of defense to safeguard their sensitive data. The rise in internet usage has resulted in increased risks for businesses. If companies fail to secure their online infrastructures and entry points, they are jeopardizing their networks. While firewalls are widely utilized, they are inadequate in preventing hackers from pilfering confidential information. It is now recognized by organizations that implementing a robust online security policy is vital to instill confidence among customers and business partners regarding the protection of their data.

Blocking based on data volume access

Data-layer security requires the ability to detect unauthorized data access by external parties or authorized individuals, whether through direct database access or over networks, including the Internet. Alerts and blocking mechanisms continuously monitor transactions that may contain sensitive data and take appropriate action, comparing current usage with historical usage patterns established by enterprise administration. Patterns typically include categories such as credit card numbers, Social Security numbers, and patient record identifiers, which must be monitored and blocked if transmitted in violation of policy.
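As a sketch, the pattern categories named above might be watched for in outbound result sets roughly like this. The regular expressions and the ten-item threshold are illustrative assumptions, not from the essay; a production system would tune the patterns and add checks such as Luhn validation for card numbers:

```python
import re

# Hypothetical patterns for the sensitive categories to monitor.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_outbound(payload: str) -> dict:
    """Count matches of each sensitive pattern in an outbound result set."""
    return {name: len(rx.findall(payload))
            for name, rx in SENSITIVE_PATTERNS.items()}

def violates_policy(payload: str, max_items: int = 10) -> bool:
    """Block responses carrying more sensitive items than policy allows."""
    return any(count > max_items for count in scan_outbound(payload).values())
```

Scanning responses rather than requests is what lets the data layer catch a query that is individually authorized but returns an abnormal volume of sensitive items.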

3. Limitations of traditional approaches

Currently, there are various technologies used to secure databases. Like other areas of IT security, no single tool can provide complete protection against all threats and abuses. It is advised to use a combination of tools to achieve sufficient security. Traditional defenses based on perimeters and assets are ineffective in environments where perimeters are unclear and constantly changing. Attacks target data instead of assets, and the most probable threats come from authorized insiders who have the ability to bypass or disable defenses.

Perimeter-based defenses are crucial for IT security and more important than ever, but they do not adequately protect critical information stored in databases. They are ineffective against insider attacks from individuals fully authorized to operate within the defended perimeters. Even if an organization's trust in its authorized personnel is justified, perimeters no longer provide the level of protection they once did. Given the security risks posed by mobile systems, wireless networks, peer-to-peer networks, high-capacity USB drives, portable hard drives, and other mobile storage devices, all of which offer ways to move information undetected across network boundaries, perimeter defenses have limited effectiveness.

Designing and maintaining identity management and access controls pose a considerable challenge.

Unfortunately, it is common practice in enterprises to use group usernames and passwords and to leave ex-employees' privileges unrevoked. This approach is vulnerable to attacks such as SQL injections that exploit escalated privileges. Role-based access controls and permissions, as opposed to behavior-based ones, are difficult to design and maintain. Additionally, "permission inflation" gradually weakens protections over time as individuals acquire new permissions through job role changes. Access controls also tend not to apply to application access, as in the case of SQL injection.

Monitoring with network appliances can provide alerts (and, if deployed in-line, prevention) on network access to the database, but it does not protect against insiders with local access privileges. These appliances often require network reconfiguration; used in-line, they can become a network bottleneck, and they cannot inspect encrypted traffic without expensive additional hardware. This class of network-based appliance monitors network traffic to detect SQL statements and analyzes them against policy rules, generating alerts for unauthorized database access and attacks. However, because the appliance only monitors the network, it cannot observe local database activity, leaving the database vulnerable to insiders with local access or those who can bypass the appliances. For sufficient coverage, an appliance must be installed at every choke point on the network where the database is accessed, effectively encircling the database from all sides.

For mission-critical databases that are often interconnected with various applications such as ERP, CRM, BI, and billing, this greatly increases the already high cost.

Slow and imperfect protection with Intrusion detection and audit

Intrusion detection cannot distinguish between authorized and unauthorized queries on database servers. On networks, intrusion detection defends only against certain types of attack on transmitted information. While audits are necessary for a robust IT policy or regulatory compliance program, they require significant time and effort and impact system performance. Audits offer slow and imperfect defense against internal attacks unless the audit data is accurate and clearly linked to data rather than infrastructure. Moreover, the audit data itself must be safeguarded from attack. Perimeter defenses, employee screening, and security measures focused on information cannot prevent accidents such as the loss of a laptop containing credit card numbers.

Information-centric security enables managers and auditors to comprehend the extent of lost information and direct notification and remediation initiatives.

Native database audit tools: These tools offer a comprehensive record of database activity and are useful for forensic analysis. However, they significantly impact the database's performance and only provide post-event analysis without preventive capabilities or separation of duties. Furthermore, disabling these tools is easy. While most DBMSs have built-in auditing features, they are often not used due to their negative effect on performance. Additionally, as DBAs manage these tools, there may be a conflict of interest.

Protecting information against intruders, insiders, and malicious users is essential. Strong protection can be achieved through policy-driven encryption of database fields, especially when combined with a multilayer security approach. Encryption systems should have separate policies and be integrated into the overall security strategy to ensure data is accessible only to authorized individuals and is never exposed. Encryption is crucial for securing customer transactions and confidential information in databases. A comprehensive security program should include secure, automated encryption management for critical data across all platforms. The cryptographic architecture should be flexible and modular enough to adapt to different enterprise scenarios. Balancing security and usability is always challenging, as each company has unique requirements, shaped by business policies and compliance considerations, that influence the choice of data protection methods. Establishing an effective strategy for safeguarding data that aligns with your company's needs goes beyond mere compliance requirements.
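The policy-driven aspect, deciding which fields of which tables get encrypted, can be sketched as follows. The table name, column names, and the toy keystream cipher are all illustrative; a real deployment would use a vetted cipher such as AES-GCM from a cryptography library, with keys held by a separate key-management service:

```python
import hashlib

# Hypothetical policy: which columns of which tables must be encrypted.
ENCRYPTION_POLICY = {"customers": {"card_number", "ssn"}}

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher (NOT secure): XOR against a hash-derived
    keystream. A stand-in for a real cipher such as AES-GCM."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def write_row(table: str, row: dict, key: bytes) -> dict:
    """Encrypt only the fields the policy marks as sensitive."""
    protected = ENCRYPTION_POLICY.get(table, set())
    return {col: xor_keystream(key, val.encode()).hex()
            if col in protected else val
            for col, val in row.items()}
```

Applying `xor_keystream` again to the decoded ciphertext recovers the plaintext, since XOR is its own inverse; the point of the sketch is the policy layer, not the cipher.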

4. Solutions for multi-tiered applications

The use of asset-centric approaches to securing databases for multi-tiered applications can actually be counterproductive, diverting time, effort, and attention toward solutions that are unlikely to prevent information loss and corruption. Advanced technologies such as multi-level applications, multi-tier storage, and service-oriented architectures (SOA) often involve privileged access to critical databases, making them more complex and vulnerable. Mapping information onto infrastructure assets in these environments is intricate and constantly changing. Asset-focused policies, alerts, security logs, and reports are tied to that mapping and may not provide the necessary protection, or serve as adequate documentation, for data-focused policies and regulatory compliance.

Who is the true user?

Security systems for stored data currently operate in real time, intrusively in-line with the data they safeguard. These systems can take the form of separate server appliances, run alongside applications on the same host machine, or run alongside data-service machines such as database servers. When applications, rather than authenticated users, request sensitive encrypted data, the "legitimate user" may simply be the application itself. Even when the application supplies an actual username along with the data request, the security system cannot determine whether that user is a hacker or has unlawfully acquired legitimate credentials.

Even if the real user is not identified, a behavioral policy can restrict access.

Data security systems do not make use of application security events detected elsewhere in the same timeframe. When auditing event forensics via log files, it is common practice to correlate with those events, but this typically occurs long after the events have taken place. Some approaches use probability analysis across concurrent processes to determine the real application user accessing the data. Other solutions can fully track the user, but they require application awareness through an application API or a specific plug-in for each application environment. Behavioral policies that restrict data access, by contrast, analyze access patterns without needing to identify the true end user.

5. Solutions for Web-based Applications

Buffer overflows, SQL injection, and Cross-Site Scripting (XSS) are well-known security vulnerabilities commonly found in web applications. Despite their long existence, attackers continuously discover ways to exploit these vulnerabilities to gain unauthorized access and administrative privileges to databases.

To address buffer overflows, intrusion prevention systems can be implemented to help mitigate the associated risks. SQL injection attacks are particularly prevalent because modern databases use Structured Query Language (SQL) for data management. In this type of attack, the perpetrator closes off a valid request with a single quote and a ";", then appends an additional statement containing the command they wish to execute.

By taking advantage of poorly configured databases, attackers can deceive them into carrying out unauthorized actions by attaching malicious code alongside legitimate code.
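The difference between a query built by string concatenation and a parameterized one can be shown with a short sketch using Python's built-in sqlite3 module; the table and its rows are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

def find_user_unsafe(name: str):
    # VULNERABLE: attacker input is spliced into the SQL text, so
    # name = "x' OR '1'='1" changes the statement and returns every row.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver passes the value separately from
    # the statement, so quote characters in the input stay inert data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

The unsafe variant leaks the whole table to the classic `' OR '1'='1` payload; the safe variant simply finds no user with that literal name.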

Cross-site scripting occurs when a web application collects harmful data from users. This data is often acquired through hyperlinks containing malicious content. Users typically come across these links on external websites or encounter them while using instant messaging services, browsing web forums or reading email messages.

Attackers frequently encode the malicious part of the hyperlink in hexadecimal or another encoding so that it looks less suspicious to the user who clicks it. Once collected by the web application, this data is used to generate output pages for users. The output page presents the malicious data as if it were legitimate content from the website, echoing back the original data the application received.
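The standard defense is to escape untrusted data at output time, so that echoed-back markup renders as inert text rather than executing. A few lines sketch the idea; the render function is a hypothetical stand-in for a real template engine, which typically applies such escaping automatically:

```python
import html

def render_comment(user_input: str) -> str:
    # Escaping on output turns markup characters into HTML entities,
    # so a script-bearing "comment" is displayed instead of executed.
    return f"<p>{html.escape(user_input)}</p>"
```

Whatever encoding trick delivered the payload, the escaped output can no longer introduce new tags into the page.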

Latency issues with traditional application firewalls

Web application firewalls are often the most convenient method of safeguarding against these types of exploit. Code audits, conducted internally or by external experts, can also identify and resolve SQL vulnerabilities. Most application firewalls, whether installed as separate reverse-proxy server machines, on the same host machine as the application, or alongside network firewall machines, operate in real time, in-line with the applications they protect. This introduces delay as the application firewall analyzes the traffic, records the activity, alerts IT operations and/or network firewalls about suspected attacks, and forwards the traffic to the application. Additional delay occurs when examining HTTPS traffic: Secure Sockets Layer ("SSL") sessions used in HTTPS are terminated and decrypted before inspection, and in some implementations the traffic is re-encrypted before being sent to the web, application, and/or database servers for final HTTPS termination. Application firewalls are not configured to exploit security events or behavioral anomalies identified elsewhere in the environment at approximately the same time, although it is common practice to correlate those events when conducting forensic audits of log files long after the events have occurred.

Web application firewalls combined with an escalation system provide a highly effective protection against both external and internal attacks through automated, synchronized threat monitoring and response. Below is a description of an escalation system that can dynamically switch Web application firewalls between different protection modes.

6. Behavioral policy layers can restrict data access

Control database queries that return thousands of credit card numbers. Unlike monitoring tools that examine only inbound database commands, this method detects unauthorized or suspicious actions by monitoring traffic both to and from database servers. This enables the solution to quickly detect a database query that returns thousands of credit card numbers, a deviation from typical data access patterns.

Understanding the true extent of data theft can be achieved through the use of a Policy Engine. This engine monitors outbound responses from the database and can identify suspicious data access patterns based on the volume of returned records. Data Usage Policies are commonly utilized to detect activities by authorized users that deviate from normal business processes. The information gathered by the Data Usage Policy Engine is also valuable in comprehending the full scope of data theft, thereby reducing breach disclosure efforts and costs. In addition, access and security exception policies offered by a solution can monitor inbound database commands for unauthorized actions, including database changes, failed logins, and SELECT operations carried out by privileged users.
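A minimal sketch of such a volume-based check follows, assuming a per-user history of result sizes and a simple three-sigma rule; both the history source and the rule are illustrative assumptions, not prescribed by the essay:

```python
from statistics import mean, stdev

def is_anomalous(rows_returned: int, history: list[int], k: float = 3.0) -> bool:
    """Flag a query whose result size exceeds the user's historical
    mean by more than k standard deviations (hypothetical policy)."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    return rows_returned > mu + k * max(sigma, 1.0)
```

A Policy Engine would feed this check from the outbound side of the database connection, then log the actual row counts so the scope of any theft can be reconstructed later.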

Control the amount of data that is accessed

Protection rules govern the amount of data users can access within specific time periods. Item access rates determine how many database records, file blocks, or web transactions a connection may access during a time window. These rates can be set based on the number of rows a user may retrieve from a database column. If a query result exceeds the item access rate, the request is denied before the result is sent to the user.

Prevent the user from accessing the query result. The method for detecting intrusion in a database relies on an intrusion detection profile. This profile consists of item access rates that define the maximum number of rows a user can access within a specific time period. If a query exceeds the item access rate defined in the user's authorization profile, transmission of the query result to the user is blocked.

Data inference policy rules encompass a variant of traditional intrusion detection called inference detection. Inference detection entails identifying specific patterns of information access, even when the user has proper authorization. The records of executed queries are gathered and compared to the inference pattern to assess if a series of accesses in the record align with the inference policy. If a match is found, the access control system is notified to modify the user's authorization, thereby classifying the received request as unauthorized.
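One simple way to test whether a user's recent accesses match an inference pattern is an ordered-subsequence check, sketched here with hypothetical access names:

```python
def matches_inference_pattern(access_log: list[str], pattern: list[str]) -> bool:
    """True if the pattern occurs as an ordered subsequence of the
    user's recent accesses, e.g. querying names, then departments,
    then salaries: each step authorized, the combination inferential."""
    it = iter(access_log)
    # `step in it` advances the iterator, so order must be preserved.
    return all(step in it for step in pattern)
```

On a match, the access control system would be notified to tighten the user's authorization, as described above.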

Machine-learning can be used to predict future intrusions by analyzing accepted behavioral patterns and past intrusions.

The access restrictions on data can benefit from this machine-learning approach.

7. A multi-layered data defense system

A layered approach to security is necessary to address the constant threat of new and innovative intrusion attempts from insiders and outsiders. Deploying multiple layers of protection ensures that if one or two fail, other barriers remain to withstand the attack or at least delay the criminal. Criminals often choose the easiest targets, so making a system more difficult to breach can deter them and lead them to pursue more vulnerable targets.

Data-layer protection is a method that oversees all requests to access sensitive data, including credit card and Social Security numbers, patient identifiers, or any custom patterns. It detects exceptions and anomalies in real time by comparing against policies and historical data, while providing a complete audit trail to ensure compliance. A sustainable solution must be adaptable, offering the ability to balance the level of protection with database performance and other operational requirements.

A multi-layer security advisory framework is a system that effectively handles certain types of attack. It defines five risk-of-attack levels (Threat Levels); when triggered, each level results in specific actions by local servers in the same policy domain. Data security events are collected from sensors at the various system layers (web, application, database, and file system), and the Threat Level is communicated to connected systems within a data flow. The Threat Level can additionally be adjusted based on factors such as time of day and day of week.

A Score-card to keep track of usage abnormalities

A score-card is maintained for each entity (such as a user or service account/proxy user, IP address, application, or process) and each data object (such as a database column or file) that has a history of processing sensitive data. The score-card summarizes current and historical information about each entity's data access patterns. It also includes a "finger-print" indicating any historical deviations from the acceptable access patterns for select/insert/update/delete (s/i/u/d) operations. A high score-card value triggers a more extensive analysis before access to the sensitive data is granted. The protection policy across multiple system layers can be dynamically and automatically modified based on the results of that prevention analysis. Additionally, the score-card tracks when a remote system must reconnect to the central system to renew or recharge its capability to encrypt and decrypt data. The policy may require the local system to operate standalone for a specific duration, or to perform a fixed number of crypto operations, before renewing the central password connection. This behavior mimics a rechargeable key box and can automatically disable local access to sensitive data if the local system is stolen, cloned, or otherwise compromised.
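The score-card bookkeeping described above might be sketched like this; the field names, deviation weights, and threshold are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ScoreCard:
    """Per-entity summary of sensitive-data usage. The 'fingerprint'
    counts historical deviations per s/i/u/d operation type."""
    entity: str
    fingerprint: dict = field(default_factory=lambda: {op: 0 for op in "siud"})
    score: float = 0.0

    def record_deviation(self, op: str, weight: float = 1.0) -> None:
        self.fingerprint[op] += 1
        self.score += weight

    def needs_deep_analysis(self, threshold: float = 5.0) -> bool:
        # A high score triggers extended analysis before access is granted.
        return self.score >= threshold
```

In a full system the score would also feed the dynamic policy changes and key-recharge decisions described above.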

Escalation in a multi-node security system is a technique that enables collaborative processing and control of application-layer security. This is achieved by utilizing loosely and tightly interconnected nodes, such as application firewalls, application monitors, and data security enforcement points. Additionally, operational and escalation rules are applied. For instance, if a SQL Injection attack occurs at the application layer, the Web Application Firewall can automatically transition from monitoring mode to inline mode to block specific requests. This offers a dynamic and automated adjustment of the protection policy.

The escalation in a multi-layer security system involves the dynamic and automatic alteration of the protection policy in various system layers. This alteration includes changing the protection policy for data at one or more system layers. The modification is based on the results of the link prevention analysis. For instance, if there is an SQL Injection attack at the application layer, it can automatically elevate the alert level of the connected backend databases (System Threat Level). This higher alert level can activate a protection policy that enables additional logging and alerting, and potentially blocks certain requests when the scorecard is unbalanced.
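A minimal sketch of such a shared Threat Level with per-level actions follows; the five levels and the actions assigned to them are illustrative, modeled on the framework's description:

```python
# Hypothetical actions activated at each of the five Threat Levels.
ACTIONS = {
    1: {"log_sampled"},
    2: {"log_all"},
    3: {"log_all", "alert"},
    4: {"log_all", "alert", "inline_firewall"},
    5: {"log_all", "alert", "inline_firewall", "block_sensitive"},
}

class PolicyDomain:
    def __init__(self):
        self.threat_level = 1
        self.subscribers = []  # callbacks for connected servers in the domain

    def report_event(self, severity: int) -> None:
        # An attack seen at one layer (e.g. SQL injection at the app
        # layer) raises the shared Threat Level for every node.
        if severity > self.threat_level:
            self.threat_level = min(severity, 5)
            for notify in self.subscribers:
                notify(self.threat_level)

    def active_actions(self) -> set:
        return ACTIONS[self.threat_level]
```

A web application firewall subscribed to the domain would switch from monitoring to in-line mode when the level reaches the tier that includes `inline_firewall`.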

Balance performance and protection

Data-layer protection must be flexible enough to accommodate various protection requirements, including performance, scalability, and operational needs. A multi-layer solution can effectively balance performance with the level of protection against internal threats while minimizing the adjustments required to the database and associated programs. This approach to data security offers real-time, policy-driven data protection, with customizable balancing between zero performance impact and full protection against internal threats to data at rest.

Selectively activating intrusion analysis, triggered by access to specific data columns or files, allows a more thorough analysis to run only where it is needed. This is particularly advantageous when only a small number of items are sensitive to intrusion, since most queries are not directed at those items. Selective activation of intrusion detection therefore saves time and processor power.

Dynamically switch between monitor and in-line operation

The Leakage Prevention solution can dynamically switch between operating as an in-line database gateway and as a passive monitoring device. This enables it to block the output of transactions that violate security policies. In addition, the solution can initiate other enforcement actions, such as transaction blocking, automated logouts of database users, VPN port shutdowns, and real-time alerts.


The proposed Multi Layered Approach presents a comprehensive solution for preventing Data Leakage and addressing the fundamental requirements of organizations in protecting their critical data. It allows for real-time detection and blocking of leakage of sensitive company information, including thorough analysis of all sensitive data leaving the database. This ensures that companies can respond promptly to policy violations. Furthermore, the approach minimizes the risk of fraud caused by insiders who abuse their privileges. By analyzing behavior against established policies and access history, anomalous activities can be identified even if they are carried out by authorized users. This empowers organizations to achieve a high level of security.
