Why Cryptography Is Important Computer Science Essay Example

The study of secrecy is often associated with cryptography, but nowadays it primarily centers on encryption. Encryption refers to the conversion of plain text into cryptic text as a means of safeguarding against unauthorized access. This process also encompasses decrypting the cryptic text in order to make it comprehensible at the receiving end. Figure 1 depicts the fundamental flow of commonly employed encryption algorithms.


A cryptographic system is composed of cryptographic algorithms and key management processes that are used in a specific application context. This system ensures the required level of security by employing network protocols and data encryption algorithms.

Cryptography dates back to around 1900 B.C., when an Egyptian scribe used non-standard hieroglyphs in an inscription. It developed after the invention of writing and was used for diplomatic correspondence and war strategies. With advancements in computer communications, new cryptographic techniques emerged rapidly. In the field of data and telecommunications, cryptography is essential for ensuring secure communication over untrusted networks like the Internet.

Some specific security requirements exist within the context of any application-to-application communication, including:

Authentication: verifying someone's identity. At present, the Internet primarily depends on inadequate host-to-host authentication methods that revolve around names or addresses.

Privacy and confidentiality: protecting a message so that only the intended recipient can read it.

Integrity: assuring that the message received is unchanged and identical to the original.

Non-repudiation: providing proof of who actually sent a message, so the sender cannot later deny having sent it.

Cryptography protects information from theft or tampering and aids in user authentication. It includes three main cryptographic schemes: secret key (or symmetric) cryptography, public-key (or asymmetric) cryptography, and hash functions. These types are further described below. In all cases, the unencrypted data is referred to as plaintext; it is encrypted into ciphertext, which can be decrypted back into its original usable plaintext form.

Cryptography ensures the security of information in many ways:

• Protecting against both external and internal hackers
• Protecting against industrial espionage
• Securing e-commerce
• Securing bank accounts and electronic transfers
• Ensuring the protection of intellectual property
• Avoiding liability

Threats to the security of information arise from several sources:

• The pervasive use of email and networks
• The storage of sensitive information on Internet-connected systems
• Insecure technologies such as wireless networks
• The shift towards an increasingly digital society
• The lack of adequate legal measures to protect the privacy of email

Steganography, which comes from the Greek term for hidden writing, is the art of concealing a message within another to render it undetectable. The core idea of steganography is to ensure that the transmitted message remains invisible to casual onlookers. Indeed, those who are not intended recipients must remain completely oblivious to its presence.

The difference between steganography and cryptography lies in how a message is concealed. In cryptography, the message is visibly scrambled and decoding it requires the correct key; in steganography, the hidden message may be easy to decode once found, but most people never realize it exists. Combining the techniques provides two layers of security: some computer programs first encrypt a message with cryptography and then hide the encrypted content within an image using steganography.

SKC, or Secret Key Cryptography, uses a single key for both encrypting and decrypting data.

Public Key Cryptography (PKC) utilizes a pair of distinct keys, one for encryption and the other for decryption.

Hash functions use a mathematical transformation to irreversibly "encrypt" information, producing a fixed-length digest from which the original data cannot be recovered.

Figure 1: The three types of cryptography (http://www.garykessler.net/library/images/crypto_types.gif)

Secret key cryptography, which is also referred to as symmetric encryption, involves the use of a single key for both encrypting and decrypting data. In this process, the sender applies the key (or a predetermined set of rules) to encrypt the plaintext and transmits the resulting ciphertext to the recipient. The recipient then utilizes the same key (or ruleset) to decrypt the message and retrieve the original plaintext.

Both the sender and the receiver must have access to the secret key in order to use this cryptographic method; securely distributing that key to both parties is the biggest difficulty of this approach.
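
As a simple sketch of this single-key idea, the following Python snippet uses the Fernet construction from the third-party cryptography package (an assumption; any symmetric library would serve): the same key that encrypts the message is the one that decrypts it.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()               # the single shared secret key
    f = Fernet(key)

    ciphertext = f.encrypt(b"meet at noon")   # sender encrypts with the key
    plaintext = f.decrypt(ciphertext)         # receiver decrypts with the SAME key
    print(plaintext)                          # b'meet at noon'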

There are two primary categories of secret key cryptography schemes: stream ciphers and block ciphers. Stream ciphers incorporate a feedback mechanism to modify the key and operate on individual bits, bytes, or computer words. In contrast, block ciphers encrypt data in blocks using the same key for each block. When applying the same key to a plaintext block, a block cipher will consistently produce the same ciphertext, whereas a stream cipher will generate varying ciphertexts for the same plaintext.

There are two main categories of stream ciphers: self-synchronizing and synchronous. Self-synchronizing ciphers determine each bit in the keystream based on the previous n bits. This type is called "self-synchronizing" because the decryption process can stay synchronized with the encryption process by keeping track of its position within the n-bit keystream. However, a garbled bit during transmission can result in multiple garbled bits at the receiving end, which is a drawback of self-synchronizing ciphers.

On the other hand, synchronous stream ciphers generate the keystream independently of the message stream, using the same keystream generation function at both the sender and the receiver. Unlike self-synchronizing ciphers, synchronous ciphers do not propagate transmission errors; however, their keystream is periodic and will eventually repeat.
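
The following toy Python sketch illustrates the synchronous case: the keystream is derived only from the key (here, hypothetically, by hashing the key with a counter) and never from the message, and XOR-ing with it both encrypts and decrypts. It is an illustration of the concept, not a secure cipher.

    import hashlib
    from itertools import count

    def keystream(key: bytes):
        # Toy synchronous keystream: a function of the key and a counter only,
        # independent of the message stream at both sender and receiver.
        for counter in count():
            block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            yield from block

    def xor_stream(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ k for b, k in zip(data, keystream(key)))

    key = b"shared secret"
    ciphertext = xor_stream(b"attack at dawn", key)
    print(xor_stream(ciphertext, key))   # b'attack at dawn' -- the same operation decrypts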

The most significant modes in which block ciphers can operate are the following:

The Electronic Codebook (ECB) mode is the simplest and most straightforward implementation. It uses a secret key to encrypt a plaintext block, resulting in a ciphertext block. Notably, if two identical plaintext blocks are used, they will always produce the same ciphertext block. However, this mode is vulnerable to various brute-force attacks.

Cipher Block Chaining (CBC) mode uses a feedback mechanism in encryption. It XORs the plaintext with the preceding ciphertext block before encryption to prevent identical plaintext blocks from producing the same ciphertext.
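
A toy Python sketch makes the contrast between ECB and CBC concrete. The "block cipher" below is only a stand-in (a simple XOR with the key, chosen purely for illustration), but it shows why two identical plaintext blocks yield identical ciphertext blocks under ECB and different ones under CBC.

    import os

    BLOCK = 8  # toy 8-byte block size

    def toy_encrypt_block(block: bytes, key: bytes) -> bytes:
        # Stand-in for a real block cipher: XOR with the key (illustration only)
        return bytes(b ^ k for b, k in zip(block, key))

    def ecb_encrypt(blocks, key):
        return [toy_encrypt_block(b, key) for b in blocks]

    def cbc_encrypt(blocks, key, iv):
        out, prev = [], iv
        for b in blocks:
            # XOR the plaintext block with the previous ciphertext block, then encrypt
            mixed = bytes(x ^ y for x, y in zip(b, prev))
            prev = toy_encrypt_block(mixed, key)
            out.append(prev)
        return out

    key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
    blocks = [b"SAMEDATA", b"SAMEDATA"]          # two identical plaintext blocks

    print(ecb_encrypt(blocks, key))              # identical ciphertext blocks: patterns leak
    print(cbc_encrypt(blocks, key, iv))          # different ciphertext blocks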

Cipher Feedback (CFB) mode is a self-synchronizing stream cipher implemented as a block cipher. It enables the encryption of data in smaller units than the block size, which is useful for encrypting interactive terminal input. In 1-byte CFB mode, each incoming character goes into a shift register with the same size as the block. After encrypting the character, it gets transmitted within the block. On decryption, any additional bits in the block (beyond one byte) are discarded.

Output Feedback (OFB) mode is a block cipher implementation that behaves much like a synchronous stream cipher. It prevents the same plaintext block from always producing the same ciphertext block by using an internal feedback mechanism that is independent of both the plaintext and ciphertext streams.

Currently, there are multiple secret key cryptography algorithms being utilized.

The Data Encryption Standard (DES) was the first encryption standard approved by the U.S. government (the standard is now maintained by the National Institute of Standards and Technology, NIST). It originated from IBM's Lucifer algorithm and became an official standard in 1977. Over time, however, DES has been found to have several weaknesses, and its short 56-bit key now makes it an insecure block cipher.

The 3DES (Triple DES) encryption standard is an enhanced version of DES. It employs the same encryption technique as DES but with three iterations for heightened security. Nevertheless, it is widely recognized that 3DES is comparatively slower than alternative block cipher methods.

AES, the Advanced Encryption Standard, is the NIST-selected replacement for DES. NIST opened a public competition in 1997, and the winning Rijndael algorithm (pronounced roughly "Rain Doll") was adopted as the standard in 2001 and has since become the dominant encryption standard. Currently, a brute-force attack, in which the attacker tries every possible key, is the only effective approach for breaking AES encryption. Like DES, AES is a block cipher.

Blowfish, developed by Bruce Schneier, a well-known cryptologist and president of Counterpane Systems, is a widely used public-domain encryption algorithm. Introduced in 1993, it is a cipher with a variable-length key and a 64-bit block size. While it can be optimized for hardware applications, its main usage lies in software.

Twofish is a block cipher that uses 128-bit blocks and keys of either 128, 192, or 256 bits. It is designed to offer both security and versatility, making it suitable for different platforms such as large microprocessors, 8-bit smart card microprocessors, and dedicated hardware. Bruce Schneier led the team responsible for creating this algorithm, which was among those considered during Round 2 of the AES process.
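
As a hedged sketch of how one of these block ciphers is used in practice, the snippet below encrypts a short message with AES in CBC mode via the third-party cryptography package (assumed to be installed); the key, the IV, and the PKCS7 padding are generated locally for the example.

    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)   # 256-bit AES key
    iv = os.urandom(16)    # 128-bit initialization vector for CBC

    padder = padding.PKCS7(128).padder()
    padded = padder.update(b"secret message") + padder.finalize()

    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    unpadder = padding.PKCS7(128).unpadder()
    plaintext = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize()) + unpadder.finalize()
    print(plaintext)       # b'secret message'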

Public-key cryptography is regarded as the most significant advancement in cryptography in the past 300-400 years. In 1976, Martin Hellman and Whitfield Diffie of Stanford University publicly introduced modern PKC. Their publication described a cryptographic system featuring two keys that enables secure communication between two entities over an unsecured communication channel without the need for a shared secret key.

PKC relies on the existence of one-way functions: mathematical functions that are easy to compute but whose inverses are comparatively difficult to compute. Here are two simple examples:

Multiplication versus factorization: multiplying the prime numbers 3 and 7 immediately yields the product 21. In contrast, determining which pair of primes was multiplied together to produce a given number such as 21 takes longer; the answer is eventually found, but factoring requires far more work than multiplication. The gap becomes enormous with primes of around 400 digits each, whose product has approximately 800 digits.

Exponentiation versus logarithms: it is easy to calculate 3^6 = 729. However, given only the number 729 and asked to find integers x and y such that log_x 729 = y, it takes considerably more time and effort to determine the two integers.

These examples, although simple, illustrate the two function pairs used in PKC: the ease of multiplication and exponentiation versus the relative difficulty of factoring and computing logarithms. In PKC, a mathematical trick is used to build a trap door into the one-way function so that the inverse calculation becomes easy if certain information is known.
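
A short Python sketch makes the asymmetry of these function pairs visible. The helper functions below are hypothetical illustrations: multiplication and exponentiation are instant, while factoring and finding a logarithm fall back on exhaustive search.

    def factor(n: int):
        # Naive trial division: fine for 21, hopeless for an 800-digit RSA modulus
        for p in range(2, int(n ** 0.5) + 1):
            if n % p == 0:
                return p, n // p
        return n, 1

    def brute_force_log(base: int, target: int) -> int:
        # Find y with base**y == target by trying every exponent in turn
        y, value = 0, 1
        while value != target:
            value *= base
            y += 1
        return y

    print(3 * 7)                    # multiplication: instant
    print(factor(21))               # factoring: (3, 7), but the cost explodes with size
    print(3 ** 6)                   # exponentiation: instant (729)
    print(brute_force_log(3, 729))  # logarithm: 6, found only by exhaustive search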

Generic PKC, or asymmetric cryptography, uses a pair of mathematically related keys to encrypt and decrypt information. Although the two keys depend on each other, knowing one of them does not make it feasible to determine the other. Both keys are required for the scheme to work, and it does not matter which of the two is applied first.

Public Key Cryptography (PKC) involves the use of two keys: a public key and a private key. The public key is intended for open sharing, while the private key must remain confidential. With this system, message transmission becomes straightforward. Let's consider an example where Ram intends to send a message to Bobby. Ram encrypts the data using Bobby's public key, enabling Bobby to decrypt it with his private key. This method also ensures the ability to authenticate the sender's identity. For instance, Ram can encrypt some text using her private key. By decrypting it with Ram's public key, Bobby can verify that Ram sent the message and prevent her from denying involvement (non-repudiation).

Currently, the cryptographic algorithms utilized in public key systems are:

RSA is the most common Public Key Cryptography (PKC) implementation. Developed by MIT mathematicians Ronald Rivest, Adi Shamir, and Leonard Adleman, it is widely used in software products for key exchange, digital signatures, and encryption of small data blocks. RSA employs a variable-size encryption block and key. The key pair is derived from a very large number, n, which is the product of two prime numbers chosen according to specific rules; these primes can each be more than 100 digits long, giving an n with approximately twice as many digits as its prime factors. The public key includes n together with a value derived from one of the factors of n; an attacker cannot feasibly determine the prime factors of n (and therefore the private key) from this information alone, which is what makes RSA secure. RSA's safety rests on the difficulty of factoring the large composite number n (a prime number itself has only trivial factors), but advances in computing power now enable numbers of more than 200 digits to be factored, which could eventually threaten schemes like RSA.
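
The arithmetic behind RSA can be sketched with deliberately tiny primes (a textbook example, not a usable key size): the public key is (e, n), the private key is (d, n), and modular exponentiation performs both encryption and decryption.

    # Toy RSA with tiny primes -- real RSA primes are hundreds of digits long
    p, q = 61, 53
    n = p * q                  # 3233, the public modulus
    phi = (p - 1) * (q - 1)    # 3120
    e = 17                     # public exponent, chosen coprime to phi
    d = pow(e, -1, phi)        # 2753, the private exponent (modular inverse; Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
    recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
    print(ciphertext, recovered)       # 2790 65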

Diffie-Hellman (D-H): Developed by Whitfield Diffie and Martin Hellman, D-H is used specifically for secret-key key exchange and is not intended for authentication or digital signatures.
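
A minimal sketch of the D-H exchange, using the small textbook values p = 23 and g = 5: each party keeps a private number, exchanges only the public result of a modular exponentiation, and both arrive at the same shared secret.

    p, g = 23, 5                      # public prime modulus and generator

    a = 6                             # Ram's private value
    b = 15                            # Bobby's private value

    A = pow(g, a, p)                  # Ram sends A = g^a mod p   (8)
    B = pow(g, b, p)                  # Bobby sends B = g^b mod p (19)

    shared_ram = pow(B, a, p)         # (g^b)^a mod p
    shared_bobby = pow(A, b, p)       # (g^a)^b mod p
    print(shared_ram, shared_bobby)   # 2 2 -- both sides derive the same secret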

The Digital Signature Algorithm (DSA) is a cryptographic algorithm specified in NIST's Digital Signature Standard (DSS), enabling the creation and utilization of digital signatures for message authentication.

Taher Elgamal developed a key exchange system known as ElGamal, which is a form of public key cryptography (PKC) similar to Diffie-Hellman. ElGamal is commonly used for the purpose of exchanging keys.

Elliptic Curve Cryptography (ECC) is a public-key algorithm based on elliptic curves. ECC can provide security comparable to RSA and other PKC methods while using significantly smaller keys, which makes it well suited to low-power computing devices such as smart cards and PDAs.

Hash functions, also referred to as message digests or one-way encryption, are algorithms that operate without a key (Figure 1C). Instead, they compute a fixed-length hash value from the plaintext in such a way that neither the contents nor the length of the plaintext can be recovered. Hash algorithms are frequently employed to generate a digital fingerprint of a file's contents, which serves to verify that unauthorized individuals or viruses have not tampered with the file. Operating systems also commonly use hash functions to protect stored passwords. In short, hash functions provide a measure of a file's integrity.
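
The fingerprint idea is easy to demonstrate with Python's standard hashlib module; the file contents below are made up for the example, but any change to them changes the digest.

    import hashlib

    def fingerprint(data: bytes) -> str:
        # One-way, fixed-length digest: the original data cannot be recovered from it
        return hashlib.sha256(data).hexdigest()

    original = b"quarterly-report.pdf contents"
    tampered = b"quarterly-report.pdf contents (modified)"

    print(fingerprint(original))
    print(fingerprint(original) == fingerprint(original))   # True  -> integrity verified
    print(fingerprint(original) == fingerprint(tampered))   # False -> the file was altered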

Some commonly used hash algorithms today include:

Message Digest (MD) algorithms are a group of byte-oriented algorithms that generate a 128-bit hash value from a message of any length.

MD2 is specifically designed for systems that have limited memory, such as smart cards.

MD4, which was developed by Rivest, is similar to MD2 but specifically designed for fast processing in software.

MD5 is a cryptographic scheme created by Rivest as a response to vulnerabilities found in MD4. It is similar to MD4 but involves more data manipulation, resulting in slower processing times. Despite some weaknesses being revealed by Hans Dobbertin in 1996, MD5 has been widely utilized in various products.

Secure Hash Algorithm (SHA) is the algorithm family specified in NIST's Secure Hash Standard (SHS). SHA-1, initially released in FIPS 180-1, generates a 160-bit hash value. The later SHA-2 family (SHA-224, SHA-256, SHA-384, and SHA-512) produces hash values ranging from 224 to 512 bits in length; together with SHA-1, these make up the five algorithms of the SHS framework.

RIPEMD is a series of message digests that were created from the RIPE (RACE Integrity Primitives Evaluation) project. RIPEMD-160 was formulated by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel with the intention to function as a substitute for 128-bit hash functions. It was specifically designed to cater to processors with 32 bits.

The collection also encompasses other variations like RIPEMD-256, RIPEMD-320, and RIPEMD-128.

HAVAL is a hash algorithm developed by Y. Zheng, J. Pieprzyk, and J. Seberry that provides varying levels of security and can generate hash values of different lengths: 128, 160, 192, 224, and 256 bits.

Whirlpool is a recently developed hash function created by V. Rijmen and P.S.L.M. Barreto. It operates on messages shorter than 2^256 bits and produces a 512-bit message digest. Unlike MD5 and SHA-1, Whirlpool has a distinctive design intended to resist the attacks that have been applied to those hash functions (see below).

Tiger is a cryptographic hash function that was developed by Ross Anderson and Eli Biham. It was specifically designed to be secure, efficient on 64-bit processors, and serve as a suitable replacement for other hash functions such as MD4, MD5, SHA, and SHA-1 in various applications. Tiger has different variations including Tiger/192, which generates a 192-bit output and is compatible with 64-bit architectures. Additionally, there are Tiger/128 and Tiger/160 variants that produce hash lengths of 128 bits and 160 bits respectively to ensure compatibility with the aforementioned hash functions.

Why is it necessary to have multiple cryptographic schemes? Can’t we achieve everything we need using just one?

Each scheme is optimized for specific applications. For example, hash functions are ideal for maintaining data integrity. If any modification is made to a message's contents, the receiver will calculate a different hash value than what was originally sent by the sender. As it is highly improbable for two different messages to produce the same hash value, data integrity is highly assured.

Using secret key cryptography is an effective way to encrypt messages and maintain privacy. To encrypt a message, the sender can generate a distinct session key for each message; the receiver must possess that same session key in order to decrypt it.

The uses of public-key cryptography include key exchange, non-repudiation, and user authentication. For user authentication purposes, the receiver can acquire the session key encrypted with the sender's private key to verify that only the sender transmitted the message. Nonetheless, public-key cryptography is seldom employed for message encryption due to its slower speed compared to secret-key cryptography.

Figure: http://www.garykessler.net/library/images/crypto_3ways.gif

The figure demonstrates the integration of these functions in a hybrid cryptographic scheme that produces a secure transmission containing both a digital signature and a digital envelope. In the illustration, the sender of the message is Ram and the receiver is Bobby.

A digital envelope is composed of an encrypted message and an encrypted session key. Ram utilizes secret key cryptography to encrypt her message by employing the session key, which she generates randomly for each session. Additionally, she encrypts the session key using Bobby's public key. Together, the encrypted message and encrypted session key constitute the digital envelope. Upon receiving it, Bobby decrypts the encrypted message by first recovering the session secret key with his private key.

The digital signature is created in two steps. First, Ram calculates the hash value of her message; next, she encrypts that hash value with her private key. On receiving the digital signature, Bobby decrypts it with Ram's public key to recover the hash value calculated by Ram. Bobby then applies the same hash function to Ram's original message, which he has already decrypted (see the previous paragraph). If the resulting hash value differs from the value provided by Ram, Bobby concludes that the message has been tampered with; if the hash values match, Bobby can trust that the received message is identical to the one Ram sent.

This scheme ensures nonrepudiation by demonstrating that Ram sent the message. If Bobby can verify that the message has not been altered using Ram's public key, then only Ram could have generated the digital signature. Additionally, Bobby has proof that he is the intended recipient because he can successfully decrypt the message, indicating that he possesses the correct private key for decrypting the session key.
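
A rough sketch of this sign-then-verify flow, combining a standard hash (hashlib) with the toy RSA numbers used earlier, can illustrate the idea; the modulus is far too small for real use, and reducing the digest modulo n is a simplification made only so the example runs.

    import hashlib

    # Ram's toy RSA key pair (illustrative only)
    n, e, d = 3233, 17, 2753          # n = 61 * 53; e is public, d is private

    def sign(message: bytes) -> int:
        # Hash the message, shrink the digest to fit the toy modulus, "encrypt" with d
        h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
        return pow(h, d, n)

    def verify(message: bytes, signature: int) -> bool:
        # "Decrypt" the signature with the public key and compare against a fresh hash
        h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
        return pow(signature, e, n) == h

    msg = b"Transfer 100 rupees to Bobby"
    sig = sign(msg)
    print(verify(msg, sig))                        # True: the message is unmodified
    print(verify(b"Transfer 9000 rupees", sig))    # False: tampering is detected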

An article in the industry literature (circa 9/98) argued that 56-bit keys no longer provide sufficient protection for DES, given how much faster computers are today than in 1975, and suggested that we should use 56,000-bit keys instead. The writer, however, overlooked the fact that each added bit doubles the number of possible key values: a 57-bit key has twice as many values as a 56-bit key (since 2^57 is two times 2^56). So we do not have to settle for weak cryptography just because 56,000-bit keys are impractical; a 66-bit key, for example, has 1,024 times as many possible values as a 56-bit key.

Nevertheless, the question remains regarding the precise significance of key length in relation to its effect on security level.

The size of the key is crucial in cryptography because it determines how difficult it is to decrypt encoded data. Larger keys offer increased security, since advancements in computing have made it simpler to break ciphertext through brute force than by attacking the underlying mathematical algorithms. A brute-force attack systematically tries every possible key against the ciphertext; if the resulting plaintext is meaningful, a valid key has probably been found. The Electronic Frontier Foundation (EFF) used exactly this approach in its attack on DES.
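
The arithmetic behind these claims is simple enough to check directly:

    # Each added bit doubles the key space (illustrative arithmetic only)
    for bits in (56, 57, 66, 128):
        print(f"{bits}-bit key -> {float(2 ** bits):.3e} possible keys")

    print(2 ** 57 // 2 ** 56)    # 2    : one extra bit doubles the key space
    print(2 ** 66 // 2 ** 56)    # 1024 : ten extra bits multiply it by 2**10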

The widespread adoption of cryptography in e-commerce relies heavily on Certificates and Certificate Authorities (CA). Merely using secret and public key cryptography is not enough to address the trust issues that arise between customers and vendors in evolving e-commerce relationships. One concern is how a website can obtain another party's public key. Verifying the authenticity of a sender's public key also poses challenges for recipients, who must ensure that senders are legitimately using their public keys. Additionally, complications arise when dealing with expired or compromised public keys, necessitating a revocation process.

Regarding certificates, we possess a general comprehension of their purpose. Whether it be a driver's license, credit card, or SCUBA certification, each has its own function. They validate our identity, enable specific actions, contain an expiration date, and display the governing authority.

Driving licenses can be used as a simple analogy to understand certificate chains. Similar to how my driver's license, issued by the State of Vermont, establishes identity and permissions for driving, certificates also establish identity, permissions, and the authority that issues them. Certificates may also include additional information such as organ donor status.

When I drive outside of Vermont but within the U.S., other jurisdictions trust the authority of Vermont to issue this "certificate" and rely on the information it contains. When I travel abroad, however, a country such as Canada accepts any U.S.-issued license rather than recognizing the Vermont license specifically, and some countries may not acknowledge a Vermont driver's license as sufficient evidence of my driving ability at all.

This analogy demonstrates how certificates themselves can contain further certificates within the certificate chain.

Certificates for electronic transactions are digital documents with distinct functions.

Establishing identity involves connecting a public key to an individual, organization, corporate position, or any other entity.

Assigning authority is accomplished by establishing the actions that are allowed or forbidden according to this certificate.

For data confidentiality, it is important to securely store the symmetric key of the session by encrypting it.

A certificate usually includes several details such as a public key, a name, an expiration date, the name of the issuing authority, a serial number, relevant issuance policies and usage guidelines, the issuer's digital signature, and potentially additional information.

Figure: http://www.garykessler.net/library/images/crypto_cert.gif

The figure shows a sample abbreviated certificate of the kind commonly found in browsers. While this specific certificate is issued by GTE Cybertrust, numerous root-level certificates come pre-installed in browsers. When a browser accesses a secure website, the server sends its public key certificate to the browser, which verifies the certificate's signature using the issuer's public key that it already has stored. If there is a match, the certificate is deemed valid and the website it identifies is considered "trusted."
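
In practice, the certificate a server presents can be inspected with Python's standard ssl module; the hostname below is a placeholder chosen for illustration.

    import socket
    import ssl

    hostname = "www.example.com"   # hypothetical host used only for illustration
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()           # parsed certificate fields
            print(cert["subject"])             # who the certificate identifies
            print(cert["issuer"])              # the certificate authority that signed it
            print(cert["notAfter"])            # expiration date
            print(cert["serialNumber"])        # serial number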
