Transaction Management And Concurrency Control Computer Science Essay Example

  • Pages: 3 (650 words)
  • Published: August 1, 2018
  • Type: Case Study

As the connection of networks and databases increases, the significance of a reliable database management system becomes evident. When selecting the appropriate system, it is important to consider Transaction and Concurrency Control, Recovery and Backup, and Security as key functions. The protection, backup, and security of databases containing valuable company information are essential to prevent data loss and unauthorized access.

Both Oracle and Microsoft have incorporated robust features into their database products to fulfill this requirement. This paper aims to assess the characteristics, functionality, and ease of management offered by these two databases.

Table of Contents

Introduction

Overview

SQL Server Overview

Oracle Overview

Transaction Management and Concurrency Control

a) Overview of Transaction Management and Concurrency Control

b) SQL Server TM and CC

c) Oracle TM and CC

d) Comparison

Backup and Recovery

a) Introduction to Backup and Recovery

b) SQL Server B and R

c) Oracle B and R

d) Comparison

Security

a) Overview

b) SQL Server Security

c) Oracle Security

d) Comparison

Conclusion

Introduction

The purpose of this paper is to evaluate the features of transaction and concurrency control, recovery and backup, and security in Microsoft SQL Server and Oracle. Through analyzing these aspects, the goal is to enhance understanding of database functionality and gain knowledge about the similarities and differences between these two systems.

Introduction to Database Management Systems

Microsoft SQL Server is a relational database server that primarily utilizes T-SQL and ANSI SQL. ANSI SQL, a standardized version of SQL published by the American National Standards Institute, serves as the foundation for various SQL dialects, including T-SQL. T-SQL is a proprietary extension that includes specific keywords for operations such as creating and altering database schemas, inputting and editing data, and managing and monitoring servers. Any application interacting with SQL Server communicates through T-SQL statements. The main difference between T-SQL and basic SQL lies in its local variables, control-of-flow language, modifications to the DELETE and UPDATE statements, and supporting functions for date, string, and mathematical processing.
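As a sketch of those extensions, the following hypothetical batch uses local variables, control-of-flow, and built-in date and string functions; the Orders table and its columns are invented for illustration:

```sql
-- Hypothetical T-SQL batch: local variables, control-of-flow,
-- and date/string helper functions.
DECLARE @OrderCount INT;
DECLARE @Today DATETIME;

SET @Today = GETDATE();

SELECT @OrderCount = COUNT(*)
FROM Orders                                   -- hypothetical table
WHERE OrderDate >= DATEADD(day, -7, @Today);  -- orders from the last week

IF @OrderCount > 100
    PRINT 'Busy week: ' + CAST(@OrderCount AS VARCHAR(10)) + ' orders';
ELSE
    PRINT 'Quiet week';
```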

The first version of SQL Server, called Version 1.0, was released in 1989 and it was based on Sybase SQL Server. However, Microsoft later decided to end their co-licensing agreement with Sybase and began developing their own version of SQL Server. The latest release, SQL Server 2008, was launched on August 6, 2008. It includes various improvements in terms of speed and functionality that will be further discussed in subsequent sections.

Oracle Database, developed by Oracle Corporation, is a relational database management system that allows users to store and run functions and stored procedures using either the proprietary language extension PL/SQL or the object-oriented language Java.

The initial release of Oracle V2 took place in November 1979, offering essential query and join functionalities but lacking transaction support. The most recent version of Oracle Database is 11g, which was released in 2007 and incorporates various notable enhancements that will be further examined.

Transaction management and concurrency control

Overview

A transaction refers to a single logical unit of work that interacts with or modifies the contents of a database. It encompasses an action or series of actions performed by a user or application. The outcome of a transaction is the transformation of the database from one consistent state to another, indicating either success or failure. In case of failure, the transaction is terminated and the database reverts to its previous consistent state. The Database Management System ensures that all updates pertaining to the transaction are executed, thereby ensuring stability in case of failures.

Transactions adhere to four fundamental properties: Atomicity, Consistency, Isolation, and Durability (ACID). Atomicity signifies that a transaction is treated as a single indivisible unit of work. Consistency guarantees that data remains coherent even after an unsuccessful transaction or system crash. Isolation conceals the effects of an incomplete transaction from other transactions. Durability ensures that the changes made by a successful transaction are permanent.

Concurrency control involves managing and controlling simultaneous database operations in order to prevent interference and maintain consistency. Problems such as lost updates, inconsistent analysis, and uncommitted dependencies can be resolved through concurrency control. The two main techniques used for concurrency control are locking and timestamping. [3]

SQL Server TM and CC

SQL Server ensures fulfillment of the ACID requirements through the utilization of transaction management, locking, and logging. In SQL Server, an explicit transaction is established by using the BEGIN TRANSACTION and COMMIT TRANSACTION commands. ROLLBACK TRANSACTION allows for the rollback of a transaction to either the initiation point or another save point within the transaction. SAVE TRANSACTION enables the set up of a savepoint within the transaction, which facilitates division of the transaction into logical units that can be revisited in case certain parts of the transaction are conditionally cancelled.
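The transaction commands above can be combined as in the following sketch; the Accounts table and the transfer logic are hypothetical:

```sql
-- Sketch of an explicit transaction with a savepoint (names hypothetical).
BEGIN TRANSACTION;

UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;

SAVE TRANSACTION AfterDebit;          -- named savepoint inside the transaction

UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

IF @@ERROR <> 0
    ROLLBACK TRANSACTION AfterDebit;  -- undo only the credit, keep the debit

COMMIT TRANSACTION;
```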

Locking is an essential aspect of ensuring transactional integrity and database consistency in SQL Server. It offers both optimistic and pessimistic concurrency controls. Optimistic concurrency control allows transactions to execute without locking resources, assuming that resource conflicts are unlikely but not impossible. On the other hand, pessimistic concurrency control locks resources for the duration of a transaction. SQL Server has the capability to lock various resources, including RIDs, keys, pages, extents, tables, and databases. It employs different lock modes, such as shared, update, exclusive, intent, and schema locks. Shared locks enable concurrent read operations that don't alter data (e.g., SELECT statements). Update locks prevent deadlock situations when multiple sessions are reading, locking, and potentially updating resources later. Exclusive locks are utilized for data modification operations (e.g., INSERT, UPDATE, DELETE) and ensure that multiple updates on the same resource simultaneously are not possible. Intent locks establish a lock hierarchy and comprise intent shared, intent exclusive, and shared with intent exclusive locks. Schema locks come into play when a schema-dependent operation on a table is executed and encompass schema modification and schema stability locks. [4]
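One way to observe these resources and lock modes in practice is the sys.dm_tran_locks dynamic management view, available since SQL Server 2005:

```sql
-- Inspect locks currently held or requested on the server.
SELECT resource_type,     -- e.g. RID, KEY, PAGE, EXTENT, OBJECT, DATABASE
       request_mode,      -- e.g. S, U, X, IS, IX, SIX, Sch-S, Sch-M
       request_status     -- GRANT, WAIT, or CONVERT
FROM sys.dm_tran_locks;
```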

When two transactions each hold a lock on an object that the other is waiting for, a deadlock occurs. SQL Server periodically scans for sessions waiting on lock requests and detects such cycles; the SET DEADLOCK_PRIORITY command lets a session influence which transaction is chosen as the deadlock victim. To set the maximum time that a statement waits on a blocked resource, the SET LOCK_TIMEOUT command can be used. By default, no timeout period is enforced.
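Both settings are per-session; a minimal sketch:

```sql
-- Prefer this session as the deadlock victim, and give up on
-- blocked statements after 5 seconds instead of waiting forever.
SET DEADLOCK_PRIORITY LOW;
SET LOCK_TIMEOUT 5000;   -- milliseconds; the default of -1 waits indefinitely
```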

Oracle TM and CC

Oracle Database offers two isolation levels, ensuring consistency and performance. Statement-level read consistency guarantees that a query sees only data as of the moment it started and does not encounter any modified or dirty data during execution. Transaction-level read consistency extends this guarantee to all queries within a transaction. Oracle accomplishes this by utilizing rollback segments, which store previous versions of data that have been modified by either recently committed or uncommitted transactions. By doing so, Oracle ensures consistent views and avoids any potential issues with phantom data.

Oracle Real Application Clusters (RAC) employs cache-to-cache block transfer to move read-consistent block images between instances, using high-speed, low-latency interconnects to answer remote data block requests.

Oracle Database provides three isolation levels: read committed, serializable, and read-only. Users can select the suitable isolation level for a transaction based on the application type and workload using the following statements: SET TRANSACTION ISOLATION LEVEL READ COMMITTED; SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; and SET TRANSACTION READ ONLY. The default isolation level for all subsequent transactions in a session can be changed with the ALTER SESSION statement.
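For example, in a standard Oracle session:

```sql
-- Set the isolation level for one transaction
-- (must be the first statement of that transaction).
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Or change the default for the rest of the session.
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
```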

The default transaction isolation level in Oracle Database is Read committed. It allows each query executed within a transaction to see data that has been committed before the query began. However, this level of isolation does not prevent other transactions from modifying the data that is read by a query. Therefore, there is a possibility of non-repeatable reads and phantoms when the same query is executed twice within a transaction, as other transactions may change the data between the two query executions. Despite this, Read committed is suitable when there are not many conflicting transactions expected and it can offer higher throughput potential.

Serializable transactions see only changes that were committed when the transaction began, plus changes made within the transaction itself through INSERT, UPDATE, and DELETE statements. These transactions do not encounter non-repeatable reads or phantoms. This isolation level is suitable for large databases and short transactions that update only a few rows, and it is most effective when there is a low chance of two concurrent transactions modifying the same rows or when long-running transactions are mostly read-only. In a serializable transaction, a data row can be modified only if it can be determined that prior changes were committed before the current transaction started. Oracle Database uses control information in the data block to indicate which rows hold committed and uncommitted changes; the amount of history retained depends on the INITRANS parameter of CREATE TABLE and ALTER TABLE. To ensure sufficient recent history information, set higher INITRANS values for tables that will have many transactions updating the same blocks. If a serializable transaction fails with the "cannot serialize access" error, the application can commit the work done up to that point, roll back and execute different statements, or undo the entire transaction.
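A sketch of tuning INITRANS for a frequently updated table; the table and columns are hypothetical:

```sql
-- Reserve more per-block transaction history for a "hot" table,
-- so serializable transactions can verify prior commits.
CREATE TABLE hot_counters (
    id    NUMBER PRIMARY KEY,
    hits  NUMBER
) INITRANS 4;

-- An existing table can be adjusted the same way
-- (affects blocks used after the change).
ALTER TABLE hot_counters INITRANS 8;
```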

Read-only transactions do not permit INSERT, UPDATE, or DELETE statements and see only changes that were committed when the transaction started.

Oracle Database utilizes locks to manage concurrent access to data resources. Latches, which are low-level serialization mechanisms, safeguard shared data structures in the System Global Area. When executing SQL statements, Oracle automatically acquires the required locks, employing the least restrictive level of restrictiveness to ensure optimal data concurrency and integrity. Additionally, users can lock data manually. There are two types of locking: exclusive and share lock modes. Exclusive lock mode prevents sharing of the associated resource and is acquired for data modification; only the first transaction that locks the data may modify it until the lock is released. Share lock mode permits sharing of the associated resource depending on the operations performed. Users can obtain share locks to prevent writer access when reading data, and multiple transactions can hold share locks on the same resource. All locks established by statements within a transaction persist until the transaction is completed or undone.

Due to row locks being acquired with the highest degree of restrictiveness, lock conversion is not required or performed. Oracle will automatically convert the restrictiveness of table locks from lower to higher when necessary. Lock escalation is when multiple locks are held at one level of granularity, and the database will raise the locks to a higher level of granularity. For instance, converting numerous row locks into a single table lock. Oracle Database never escalates locks since this would increase the likelihood of deadlocks occurring. A deadlock occurs when multiple users are awaiting data locked by each other, potentially halting transaction progress. Oracle detects deadlocks automatically and resolves the issue by rolling back one of the statements. To avoid deadlocks caused by users, tables should be locked in the same order for transactions accessing the same data.

There are three main categories of locks in Oracle Database: DML locks, DDL locks, and Internal locks and latches.

DML locks protect data, such as tables and rows, ensuring data integrity for multiple users. The finest level of locking is row locking, which provides the highest level of concurrency and throughput. Whenever a transaction modifies a row using INSERT, UPDATE, DELETE, or SELECT with the FOR UPDATE clause, it acquires an exclusive row lock for that specific row. If a transaction holds a row lock, it also holds a table lock for the corresponding table. Table locking is primarily used for concurrency control with respect to DDL operations. Table locks are required when a table is modified by the INSERT, UPDATE, DELETE, SELECT with FOR UPDATE, or LOCK TABLE DML statements. These statements necessitate table locks to reserve DML access to the table for the ongoing transaction and to prevent conflicting DDL operations. For partitioned tables, table locks can be acquired at both the table and subpartition levels. Table locks can be held in various modes, ranging from least restrictive to most restrictive: row share (RS), row exclusive (RX), share (S), share row exclusive (SRX), and exclusive (X).
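For illustration, a SELECT ... FOR UPDATE on a hypothetical accounts table acquires exclusive row locks on the selected rows, along with a table-level lock that blocks conflicting DDL:

```sql
-- Lock the selected rows until COMMIT or ROLLBACK
-- (table and columns hypothetical).
SELECT id, balance
FROM accounts
WHERE branch = 'MAIN'
FOR UPDATE;
```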

The table lock types and their concurrency degrees are as follows. The least restrictive is the row share table lock, which allows the highest degree of concurrency for a table. This lock indicates that the transaction has locked rows in the table and intends to update them. It is specified as LOCK TABLE table_name IN ROW SHARE MODE.
A slightly more restrictive lock is the row exclusive table lock. This lock indicates that the transaction holding the lock has made one or more updates to rows in the table or issued a SELECT FOR UPDATE statement. It is specified as LOCK TABLE table_name IN ROW EXCLUSIVE MODE.
A share table lock is acquired for a table specified in the LOCK TABLE table_name IN SHARE MODE statement.
A share row exclusive lock is more restrictive and is acquired for a table specified in the LOCK TABLE table_name IN SHARE ROW EXCLUSIVE MODE statement.
The most restrictive locks are exclusive table locks, which are specified as LOCK TABLE table_name IN EXCLUSIVE MODE.

DDL locks safeguard the schema objects, such as table definitions, while internal locks and latches ensure the security of internal data structures like data files. Only the modified or referenced individual schema objects are locked during DDL operations, and the entire data dictionary remains unlocked. There are three categories of DDL locks: exclusive DDL locks, share DDL locks, and breakable parse locks. Exclusive and share DDL locks persist until the DDL statement is executed and the automatic commit is finalized.

Exclusive DDL locks are needed for most DDL operations to ensure that other DDL operations targeting the same object do not interfere. If a conflicting DDL lock is already in place, the operation waits until it is released. DDL operations also acquire DML locks on the schema object being altered.

The use of share DDL locks is necessary for certain DDL operations to ensure concurrent data access. These operations include AUDIT, NOAUDIT, COMMENT, CREATE (OR REPLACE) VIEW/ PROCEDURE/ PACKAGE/ PACKAGE BODY/ FUNCTION/ TRIGGER, CREATE SYNONYM, and CREATE TABLE (excluding CLUSTER usage).

Breakable parse locks are acquired for a SQL statement and each schema object it references. These locks are created during the parse phase of SQL statement execution and are held as long as the shared SQL area for the statement is in the shared pool. A parse lock is not restrictive to any DDL operation and can be broken to allow conflicting DDL operations.

Latches and internal locks protect internal database and memory structures that users cannot access directly. Latches serve as simple, low-level serialization mechanisms that safeguard shared data structures in the system global area, and their implementation is dependent on the operating system. Internal locks, on the other hand, are higher-level and more complex mechanisms that encompass various types such as dictionary cache locks, file and log management locks, and tablespace and rollback segment locks. Dictionary cache locks, which are very short-lived, are applied to dictionary caches while they are being modified or used. These locks ensure that parsed statements do not see inconsistent object definitions. They can be either shared or exclusive, with shared locks released once the parse is completed and exclusive locks persisting until the DDL operation is finished.

Both file and log management locks serve to protect various files and are often held for an extended period due to their function of indicating the status of the files.

Tablespace and rollback segment locks protect tablespaces and rollback segments. All instances must agree on the status of a tablespace, online or offline, and locking a rollback segment guarantees that only one instance can perform write operations on that segment.

Comparison

Microsoft SQL Server has improved by enabling the locking of smaller amounts of data at a time. It now uses row-level locking, meaning it only locks the rows that are being changed. However, it lacks a multi-version consistency model like Oracle, which can cause reads and writes to block each other to maintain data integrity. Unlike SQL Server, Oracle's database maintains a snapshot of the data to prevent queries from hanging or performing "dirty reads."

Backup and recovery

Overview

Database backup and recovery mechanisms are crucial for organizations as they guarantee readiness in case of a failure. Failures can happen for different reasons, including transaction failure, system failure, media failure, or communication failure. Transaction failures may result from deadlocks, time-outs, protection violations, or system errors and can be resolved by partially or completely rolling back, depending on the seriousness. System failures can be rectified by restarting or rolling back to the most recent consistent state. Restore/roll forward functions aid in restoring the database after a media failure.

SQL Server B and R

A SQL Server database comprises two kinds of physical files: MDF files, which store all the data, and LDF files, which record every data alteration. These logs enable undo operations and backups. When and how the log file is cleared, or truncated, depends on the database recovery model. SQL Server can manage multiple databases with different recovery model configurations: simple, full, or bulk-logged.

Under the simple recovery model, the log is truncated automatically and log history is not retained, so recovery depends on performing full backups. A full backup restores all data but does not allow recovery to a specific point in time.

The full recovery model is used for databases that need a transaction log history. The transaction log keeps track of all data change operations; if the log file runs out of space the database stops functioning, so the auto-grow option can be enabled.

Running in full recovery mode allows for the availability of differential and transaction log backups. A differential backup copies all data changes since the previous full backup, with each new full backup resetting the differential backup. On the other hand, transaction log backups copy all data changes since the previous full or transaction log backup. Despite their small size and speed, transaction log backups have a drawback in terms of recovery: if any log backup is damaged or unusable, the data cannot be recovered beyond the last good backup. [7]
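Under the full recovery model, the three backup types can be taken as in this sketch; the database name and file paths are hypothetical:

```sql
-- Full backup: the baseline for all later restores.
BACKUP DATABASE Sales TO DISK = 'D:\backup\sales_full.bak';

-- Differential backup: all changes since the last full backup.
BACKUP DATABASE Sales TO DISK = 'D:\backup\sales_diff.bak'
    WITH DIFFERENTIAL;

-- Transaction log backup: all changes since the last full or log backup.
BACKUP LOG Sales TO DISK = 'D:\backup\sales_log.trn';
```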

Oracle B and R

There are several methods for backing up Oracle databases:

  • Export/import
  • Cold or off-line backups
  • Hot or on-line backups
  • RMAN backups

Exporting the database extracts logical definitions and data to a file. Cold backups involve shutting down the database and backing up all data, log, and control files. Hot backups put tablespaces into backup mode and back up the files. Additionally, the control files and archived redo log files must be backed up. RMAN backups utilize the "rman" utility to backup the database. It is recommended to use and test multiple methods to ensure secure database backups.

On-line backups are only possible when the system is open and the database is in ARCHIVELOG mode. Off-line backups can be performed when the system is off-line and don't require the database to be in ARCHIVELOG mode. Restoring from off-line backups is easier because it doesn't require recovery, but on-line backups are less disruptive and don't require database downtime. Point-in-time recovery is only available in ARCHIVELOG mode. [8]
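A minimal hot-backup sketch, assuming the database is in ARCHIVELOG mode; the tablespace name is hypothetical:

```sql
-- Put the tablespace into backup mode, then copy its data files
-- with an operating-system utility before ending backup mode.
ALTER TABLESPACE users BEGIN BACKUP;
-- ... copy the tablespace's data files at the OS level ...
ALTER TABLESPACE users END BACKUP;

-- SQL*Plus command to confirm the database is in ARCHIVELOG mode.
ARCHIVE LOG LIST;
```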

Comparison

Starting with version 10g, Oracle Database introduced the Automatic Storage Management (ASM) feature, which largely automates storage management: the DBA assigns storage devices to a database instance and ASM handles the placement and storage of files. In SQL Server, storage management must be performed manually, either with the Share and Storage Management console in SQL Server 2008 or by purchasing a separate tool.

Meanwhile, Oracle's Flash Recovery feature automates the management of backup files, using the Flash Recovery area as a centralized storage location for all recovery-related files in the Oracle database. The DBA can modify the storage configuration without causing downtime. SQL Server also offers backup file management through a backup wizard, but it does not perform this task automatically. In SQL Server 2008, improvements were made to backup compression, reducing disk I/O and storage requirements for online backups and thereby increasing speed. Overall, there appears to be a tradeoff between the speed of SQL Server and the enhanced functionality of Oracle.

Oracle and SQL Server have different approaches to backups and data recovery. In Oracle, backups are fully self-contained, while in SQL Server, the DBA has to manually recreate the system database using the install CD. Oracle makes use of the Data Recovery Advisor (DRA) tool to automatically diagnose data failures, display repair options, and carry out repairs upon user request. Additionally, Oracle's Flashback technology allows for instantaneous recovery of dropped tables and logical data corruptions. On the other hand, SQL Server relies on rebuilding the transaction log, running repairs to address any corruptions, and ensuring the logical integrity of data remains intact. [9]

Security

Overview

Security is a crucial aspect of a database management system for any organization. In Dr. Osei-Bryson's lecture notes, security breaches are classified into unauthorized data observation, incorrect data modification, and data unavailability. Unauthorized data observation exposes confidential information to unauthorized users. Incorrect data modification, whether intentional or unintentional, can severely affect database consistency and lead to unreliable data. Data unavailability can be expensive for an organization depending on its usage.

Three requirements for a data security plan are secrecy and confidentiality, database integrity, and availability. Secrecy and confidentiality ensure that unauthorized parties cannot access the data. Maintaining database integrity safeguards the data from incorrect or inappropriate modifications. Availability involves preventing and minimizing any damage caused by data unavailability.

Database management systems have an access control mechanism in place to ensure that users can only access the necessary data for their tasks. A security administrator grants users specific authorizations to determine what actions they can perform on each object. The database administrator is in charge of creating accounts, assigning security levels, and granting or revoking privileges.

SQL Server Security

A recent White Paper commissioned by Microsoft states that security is a crucial component of SQL Server's package. Microsoft SQL Server 2008 offers various security features, such as policy-based management for applying policies to database objects. These policies consist of a series of conditions that help enforce business and security rules.

Oracle Security

Oracle 11g offers strong authentication options such as PKI, Kerberos, and RADIUS for all database connections except SYSDBA or SYSOPER connections. Tablespace encryption is another option available for encrypting entire tablespaces, particularly useful for handling large amounts of data. For enhanced security, the transparent data encryption master key can be stored in an external hardware security module. Additionally, Oracle 11g provides improved password protection, secure file permissions, optional default audit settings, and controls on network callouts from the database. [11]

Comparison

Transparent data encryption in SQL Server encrypts and decrypts data in the database engine without requiring extra application programming. The feature is included with SQL Server 2008, whereas in Oracle Database 11g it carries an additional charge of $10,000 per processor. SQL Server 2008 allows registration of Extensible Key Management and Hardware Security Module vendors, keeping key management separate from the database; this separation adds an extra layer of defense by keeping the keys apart from the data. Additionally, SQL Server 2008 supports auditing through an Auditing object, enabling administrators to capture and log all activity on the database server.
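Enabling transparent data encryption in SQL Server 2008 follows a key hierarchy; a minimal sketch with hypothetical names and password:

```sql
-- Key hierarchy: master key -> certificate -> database encryption key.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword!1';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

USE Sales;  -- hypothetical user database
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;

-- Encryption proceeds in the background once enabled.
ALTER DATABASE Sales SET ENCRYPTION ON;
```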

According to the National Institute of Standards and Technology's National Vulnerability Database, Oracle products experienced more than 250 security vulnerabilities over a span of four years, while SQL Server had none reported in the same period. The report did not specify the severity or type of the vulnerabilities, nor the specific affected products, but it does suggest a growing trend of vulnerability in Oracle's products.

Microsoft Update is a user-friendly and simple patching solution for SQL Server. According to Computerworld, Oracle's patch management system is described as causing "excruciating pain" and reveals that "two-thirds of Oracle DBAs don’t apply security patches." Currently, Oracle seems to be lagging behind in patch management.

SQL Server provides several mechanisms to restrict highly privileged users' access to sensitive data, such as the auditing object, individual permissions, module signing, and policy-based management. Oracle, on the other hand, controls privileged access with a separate tool called Database Vault, priced at $20,000 per processor.

Conclusion

This review compared the transaction management and concurrency control, backup and recovery, and security functions of Microsoft SQL Server and Oracle 11g. It found several similarities in functionality, but also significant differences in database management philosophy: SQL Server excels in speed and security, while Oracle has focused on enhancing high-level functionality and automation. Examining the practical application of these DBMS functions in two separate systems gave me a better understanding of them.
