Smart Card System

The concept of "client/server" involves separate logical entities, typically on separate machines, cooperating over a network to complete a task; it goes beyond simple communication between a client and a server. The client/server approach uses both asynchronous and synchronous messaging techniques, with the help of middleware, for network-based communication, and pairs a client (the user interface) with a server (database I/O) to provide distributed capabilities.
Sigma has used this technique successfully for more than 15 years to port its products across various platforms and databases while keeping them marketable and improving their functionality over time. Sigma's client/server product uses an asynchronous method in which messages are sent to request actions and response messages return the requested information.

This product's approach involves sending CPU-intensive processing requests to the server, which performs the requested actions and returns the results to the client. Sigma's architecture prioritizes re-usability and portability. Currently, Sigma utilizes a standard I/O routine that is separate from the user interface. This architecture supports character-based screens and various databases, with the user interface independent from database access, and it aligns with the architecture commonly used in a GUI client/server environment. A typical example of a client/server application is the File Server, where clients request files from the server. The entire file is transmitted to the client, but this requires multiple message exchanges across the network.

The conventional client/server model is demonstrated by a Database Server. In this arrangement, clients send SQL requests to the server; the server executes each SQL statement and sends the results back to the client. To enable this communication, clients frequently use Open Database Connectivity (ODBC) as a standardized SQL interface for sending requests to the server. Another suitable option for transaction processing environments within the client/server model is Remote Procedure Call (RPC).
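On the Java platform adopted later in this report, the same SQL-request pattern can be expressed through JDBC, Java's counterpart to ODBC. The following is only a minimal sketch; the data source name, table, and columns are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DatabaseServerClient {
        public static void main(String[] args) throws Exception {
            // The client ships SQL text to the database server; the server executes
            // it and returns only the result rows, not an entire file.
            Connection con = DriverManager.getConnection(
                    "jdbc:odbc:labdb", "user", "password");   // hypothetical data source
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT id, name FROM users");
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
            rs.close();
            stmt.close();
            con.close();
        }
    }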

The creation of a Transaction Server is possible using Remote Procedure Call (RPC). Clients can make a remote procedure call and provide parameters. With just one message, the Transaction Server can execute compiled database statements and send the results back to the client. This distributed processing helps reduce network traffic and enhances performance. In addition, limiting database modifications to locally executing applications can increase site autonomy.
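In Java terms the same idea can be modelled as a remote interface, for example through Java RMI; the interface and method below are purely illustrative and not part of Sigma's product:

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical transaction-server interface: the client makes one remote call
    // with its parameters, the server runs its precompiled database statements and
    // returns only the outcome, which keeps network traffic low.
    public interface TransactionServer extends Remote {
        boolean transfer(String fromAccount, String toAccount, long amountCents)
                throws RemoteException;
    }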

Remote Procedure Call (RPC) is, at bottom, a mechanism for communication between programs. Beneath it, TCP hands each datagram to IP along with the Internet address of the remote computer. IP's main role is to find a route for the datagram and deliver it to that destination; it pays no attention to the datagram's contents or to the TCP header.

IP adds its own header to the datagram so that gateways and other intermediate systems can forward it. This header contains the 32-bit source and destination Internet addresses (e.g., 128.6.4.194), a protocol number, and a checksum. The source Internet address is the address of the sending machine and the destination Internet address is the address of the receiving machine; the protocol number tells the receiving IP layer to hand the datagram up to TCP.

While TCP is the most commonly used protocol for IP traffic, other protocols can also use IP, so IP needs to be told which protocol to hand the datagram to. The checksum allows the receiving IP layer to verify that the header was not damaged in transit; TCP and IP each have their own checksums. IP must be able to confirm that the header survived transmission intact, otherwise it could deliver a datagram to the wrong destination.

TCP adds its own safety margin by calculating a separate checksum over the TCP header and data. IP addresses have end-to-end significance: they identify the original source and final destination and stay the same as the packet crosses the network. As a packet traverses each router, the router examines its routing table for an entry whose network number matches the destination IP address. If a match is found, the packet is forwarded to the next-hop router listed for that destination network. If there is no match, the router either forwards the packet to its default gateway or discards it. Sending packets to a default router assumes that it holds more network information in its routing table and can direct them toward their final destinations. This LAN-to-Internet connection method is widely employed when connecting the PCs on a local area network.

Every PC uses the router as its default gateway to reach the Internet from the LAN. In a host's routing table, the destination network for the default route is given as 0.0.0.0 and the next-hop router is the IP address of the default gateway. On the LAN itself, the next-hop router is addressed by its MAC (Media Access Control) address. Note that MAC addresses change as a packet passes through each router, while the IP addresses remain unchanged.
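A minimal sketch of this forwarding decision, assuming a 255.255.255.0 subnet mask and made-up addresses:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class RoutingDemo {
        public static void main(String[] args) {
            // destination network -> next-hop router (example addresses only)
            Map<String, String> table = new LinkedHashMap<String, String>();
            table.put("128.6.4.0", "128.6.4.1");
            table.put("0.0.0.0", "192.168.1.254");             // default gateway entry

            System.out.println(nextHop(table, "128.6.4.194")); // matches 128.6.4.0
            System.out.println(nextHop(table, "10.1.2.3"));    // falls back to the default
        }

        // Compare the network part of the destination with each table entry;
        // if nothing matches, use the default route (0.0.0.0) or drop the packet.
        static String nextHop(Map<String, String> table, String destination) {
            String network = destination.substring(0, destination.lastIndexOf('.')) + ".0";
            String hop = table.get(network);
            if (hop != null) {
                return hop;
            }
            String defaultHop = table.get("0.0.0.0");
            return defaultHop != null ? defaultHop : "DROP";
        }
    }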

Subnet masks divide a network into smaller subnetworks. Subnetting makes network design more complex, but it can reduce network traffic and make the internetwork easier to manage. In a LAN, each node has its own CPU yet can access data and devices anywhere on the LAN.

In a LAN, multiple users can share expensive devices like laser printers and also exchange data. The LAN allows users to communicate with each other through methods like email and chat sessions. LANs differ from one another based on the following characteristics: topology, protocols, and media. Topology refers to the arrangement of devices on the network, whether in a ring or straight line. Protocols are the rules and encoding specifications for transmitting data, and they determine whether the network follows a peer-to-peer or client/server architecture. Finally, devices in a LAN can be connected using twisted-pair wire, coaxial cables, or fiber optic cables.

Some networks communicate using radio waves instead of physical connecting media.
1.3.1. File Transfer Protocol
FTP enables basic file sharing between hosts. It uses TCP to establish a virtual connection for control information and a separate TCP connection for data transfers; the control connection follows the TELNET protocol as the hosts exchange commands and replies.
1.3.2. User Datagram Protocol
UDP offers a simple and unreliable message service for transaction-oriented services.

The UDP header contains a source port identifier and destination port identifier, enabling the targeting of specific applications and services between hosts.
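A minimal Java sketch of the datagram model; the destination port number selects the receiving application, and the host name and port below are placeholders:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class UdpSend {
        public static void main(String[] args) throws Exception {
            byte[] data = "hello".getBytes("US-ASCII");
            DatagramSocket socket = new DatagramSocket();        // ephemeral source port
            InetAddress server = InetAddress.getByName("localhost");
            // Destination port 9876 identifies the target service on the server host.
            DatagramPacket packet = new DatagramPacket(data, data.length, server, 9876);
            socket.send(packet);   // no acknowledgement: UDP makes no delivery guarantee
            socket.close();
        }
    }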

TCP offers applications reliable stream delivery and a virtual connection service by using sequenced acknowledgements and retransmitting packets when necessary. TCP uses a 32-bit sequence number that counts bytes in the data stream. Each TCP packet carries the starting sequence number of the data in that packet and an acknowledgement number giving the sequence number of the next byte expected from the remote peer. This information is used to implement a sliding-window protocol.
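All of this bookkeeping is hidden from the application, which simply sees a reliable byte stream. A minimal Java client sketch, with a placeholder host and port:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class TcpClient {
        public static void main(String[] args) throws Exception {
            // Sequence numbers, acknowledgements, and the sliding window are handled
            // by the TCP stack; the application only reads and writes bytes.
            Socket socket = new Socket("localhost", 7000);       // placeholder host/port
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println("GET STATUS");              // request goes out as an ordered stream
            System.out.println(in.readLine());      // reply arrives complete and in order
            socket.close();
        }
    }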


Forward and reverse sequence numbers are completely independent, and each TCP peer must track both its own sequence numbering and the numbering being used by the remote peer.

1.4. Winsock 2.0 Architecture

Windows Sockets version 2.0 (WinSock 2) formalizes the API for several protocol suites in addition to TCP/IP, including ATM, IPX/SPX, and DECnet, and allows them to operate simultaneously.

WinSock 2 is fully compatible with the existing version 1.1, with some further clarification. This allows all existing WinSock applications to run without any modifications, except for WinSock 1.1 applications that use blocking hooks, which need to be rewritten to function without them.

In addition to enabling the coexistence of multiple protocol stacks, WinSock 2 also enables the development of network protocol independent applications. A WinSock 2 application can select a protocol based on its service requirements and adapt to variations in network names and addresses using the mechanisms provided by WinSock 2.

1.4.1. The WinSock 2 Architecture

The WinSock 2 architecture offers increased flexibility. It allows simultaneous support of multiple protocol stacks, interfaces, and service providers. The new architecture adds a layer below the top-level DLL and a standard service provider interface, enhancing flexibility. WinSock 2 follows the Windows Open Systems Architecture (WOSA) model, which separates the API from the protocol service provider. In this model, the WinSock DLL provides the standard API, and each vendor installs its own service provider layer below it.

The API layer communicates with a service provider through a standardized Service Provider Interface (SPI), and it can handle multiple service providers at the same time.

1.5. Transmission Control Protocol

TCP was originally designed to recover from node or line failures, in which case the network propagates updated routing tables to all router nodes. Because this update process takes time, TCP is slow to initiate recovery. The TCP algorithms are not tuned for packet loss caused by traffic congestion.

The traditional approach to addressing traffic problems on the Internet has been to increase line and equipment speed. However, TCP (Transmission Control Protocol) handles data differently by treating it as a stream of bytes with assigned sequence numbers. Each TCP packet includes a header indicating the starting byte and data amount. The receiver can detect missing or out-of-sequence packets, and TCP acknowledges received data while retransmitting lost data. Error recovery in TCP occurs directly between the Client and Server machines.

There is no official standard for monitoring issues in the middle of the network. However, each network has implemented some improvised tools. TCP/IP, which is used to ensure communication between systems of all vendors, is fully standardized on the LAN. But, in larger networks that span long distances and utilize phone lines, the situation is more unpredictable. New technologies emerge and become outdated within a short period of time.

The National Information Superhighway is being built by competing cable TV and phone companies, so there is no single standard for citywide, nationwide, or worldwide communications. The original TCP/IP design copes with this technological uncertainty: data can be sent across LANs, internal corporate SNA networks, or a cable TV service, and machines connected to any of these networks can communicate with other networks through vendor-supplied gateways. Data packet transmission involves a series of handshaking sequences in which the sending side of the end-node/repeater point-to-point connection makes a request on its local port and the other side acknowledges it.

The end node sends a control signal to request permission to transmit a data packet; the exchange is controlled by the repeater. If the end node has a data packet ready to send, it transmits either a Request_Normal or a Request_High signal; if not, it transmits the Idle_Up signal. The repeater polls all local ports to determine which end nodes are requesting to send a data packet and at what priority level (normal or high).

The repeater chooses the next end node with a pending high priority request. Ports are chosen in the order in which they appear. If there are no pending high priority requests, then the next normal priority port is chosen (in order). This selection results in the selected port receiving the Grant signal. Once the end node detects the Grant signal, packet transmission begins. The repeater also sends the Incoming signal to all other end nodes, informing them of a potential incoming packet.

The repeater decodes the destination address from the transmitted frame while it is being received. When an end node receives the Incoming control signal, it stops transmitting requests and listens for the data packet on the media. After decoding the destination address, the repeater delivers the packet to the addressed end nodes and any promiscuous nodes. Nodes that do not receive the data packet receive the Idle_Down signal from the repeater.

After receiving the data packet, the end node(s) revert to the state they were in before the packet arrived, which can mean sending an Idle_Up signal or requesting to send a data packet of their own.
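A simplified sketch of the selection rule described above, scanning ports in order and granting pending high-priority requests before normal ones; this only illustrates the arbitration logic and is not actual repeater firmware:

    import java.util.List;

    public class RepeaterArbiter {
        enum Request { NONE, NORMAL, HIGH }

        private int nextPort = 0;                       // round-robin starting position

        // Pick the port that receives the Grant signal: prefer pending high-priority
        // requests, otherwise take the next pending normal-priority request.
        int selectPort(List<Request> ports) {
            int port = scan(ports, Request.HIGH);
            if (port < 0) {
                port = scan(ports, Request.NORMAL);
            }
            if (port >= 0) {
                nextPort = (port + 1) % ports.size();   // continue round-robin afterwards
            }
            return port;                                // -1 means every port is idle
        }

        private int scan(List<Request> ports, Request wanted) {
            for (int i = 0; i < ports.size(); i++) {
                int candidate = (nextPort + i) % ports.size();
                if (ports.get(candidate) == wanted) {
                    return candidate;
                }
            }
            return -1;
        }
    }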
1.7. Conclusion
WinSock 2 has a completely redesigned architecture that offers increased flexibility. The new architecture enables simultaneous support for multiple protocol stacks, interfaces, and service providers.

It is suitable for the Win32 platform but also designed to be compatible with Win95 without any conflicts. The 32-bit wsock32.dll is included in Windows NT and Windows 95 and works with the Microsoft TCP/IP stack. These 32-bit environments also have a winsock.dll file that acts as a "thunk-layer" so that 16-bit WinSock applications can run on the 32-bit wsock32.dll.

In Windows environments that are 16-bit, such as Windows version 3.1 and Windows for Workgroups 3.11, Microsoft's Win32s installs a thunk layer called wsock32.dll alongside any vendor's WinSock DLL being used. LANs have the capability to transmit data at high speeds, which surpasses telephone line capabilities. However, they do have limitations in terms of distance and the number of connected computers. Commonly used protocols in LAN transactions include File Transfer Protocol (FTP), User Datagram Protocol (UDP), and Transmission Control Protocol (TCP). UDP allows for faster data transmission compared to TCP, but TCP offers better data security and integrity. The client-server architecture consists of both client machines and server machines with two available forms.

The first type of client/server application is the File Server, where clients request files from the File Server. This leads to the complete file being sent to the client, but it requires multiple message exchanges over the network. Another typical client/server application is the Database Server, where clients send SQL requests to the server. The Database Server then executes each SQL statement and sends the results back to the client. Throughout its journey in a network, the packet maintains a constant source and destination IP address. The packet may either be forwarded to the router designated as the default gateway or dropped by the router.

The belief is that packets are sent to a default router because it is expected to have more network information in its routing table, enabling it to correctly route the packet to its final destination.

Chapter 2: Data Encryption and Cryptography Technology

2.1. Introduction To Encryption And Cryptography Technology

Encryption converts data, or plaintext, into ciphertext that unauthorized individuals cannot easily comprehend. Decryption is the reverse process of converting the ciphertext back into its original, understandable form. Both conversions apply a cryptographic algorithm. Most encryption algorithms are based on complex one-way mathematics and typically rely on the difficulty of factoring the very large numbers (keys) used for encryption; these large numbers are products of large prime numbers.

Many encryption programs use a single key for both encrypting and decrypting messages, a technique known as symmetric cryptography. This method is fast and easy to use for encrypting messages and folders, making it ideal for protecting local data. Cryptography is the scientific field concerned with information security. Examples of cryptography techniques include microdots and concealing words within images. However, cryptography is most commonly associated with converting plain text into encrypted text, and then decrypting it back into plain text. Those who work in this field are known as cryptographers. Cryptography primarily aims to achieve four objectives: confidentiality - ensuring that the information cannot be understood by unauthorized individuals.

Integrity means that the information cannot be altered in storage or in transit without the change being detected. Non-repudiation means that the creator or sender of the information cannot later deny having created or transmitted it. Authentication allows the sender and receiver to confirm each other's identity and the origin or destination of the information.

DES (Data Encryption Standard) is a product cipher used by the U.S. Government. It operates on 64-bit data blocks and uses a 56-bit key. Triple DES is a variation of DES that also operates on 64-bit data blocks. There are different forms of Triple DES, some utilizing two 56-bit keys and others utilizing three.

The DES "modes of operation" can also be used with triple-DES. Some individuals refer to E(K1,D(K2,E(K1,x))) as triple-DES. This technique is designed for encrypting DES keys and IVs for "Automated Key Distribution"; its official name is "Encryption and Decryption of a Single Key by a KeyPair." Others use the term "triple-DES" for E(K1,D(K2,E(K3,x))) or E(K1,E(K2,E(K3,x))). Key-encrypting keys may be a single DEA key or a DEA key pair; key pairs should be used when additional security is required.
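A minimal sketch of DES encryption through the standard Java cryptography API discussed later in this chapter; requesting "DESede" instead of "DES" would give triple-DES, and the key handling is deliberately simplified:

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class DesDemo {
        public static void main(String[] args) throws Exception {
            // Generate a 56-bit DES key ("DESede" would yield a triple-DES key).
            SecretKey key = KeyGenerator.getInstance("DES").generateKey();

            // ECB mode is shown only for brevity; it is the simplest mode of operation.
            Cipher cipher = Cipher.getInstance("DES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ciphertext = cipher.doFinal("admin password".getBytes("UTF-8"));

            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] plaintext = cipher.doFinal(ciphertext);
            System.out.println(new String(plaintext, "UTF-8"));
        }
    }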

In small networks, it is relatively simple to protect privacy using the symmetric algorithm DES (Data Encryption Standard), as it only requires exchanging secret encryption keys among each party. However, as the network grows, this method becomes expensive and difficult to manage. Additionally, DES has the disadvantage of requiring the sharing of a secret key.

In order for secure communication to occur, each person involved must trust the other with their secret key and promise not to share it with anyone else. This trust is necessary because each person needs a unique key for every person they communicate with, and must therefore entrust each of those people with one of their secret keys. Because of this requirement, secure communication is typically limited to individuals who have some kind of pre-existing relationship, whether personal or professional.

There are two important issues that the DES encryption method does not address: authentication and non-repudiation. With shared secret keys, neither party can prove what the other person has done. This means that either party could modify data without being detected by a third party.

The RSA algorithm, invented in 1977 by Ronald L. Rivest, Adi Shamir, and Leonard Adleman, has been utilized in various cryptographic schemes and protocols worldwide. One such scheme is the RSAES-OAEP encryption scheme, which combines the RSA algorithm with the OAEP method. This encryption scheme, along with the RSASSA-PSS signature scheme with appendix, is recommended for new applications.

The scheme was developed by the creators of OAEP, Mihir Bellare and Phillip Rogaway, with improvements made by Don B. Johnson and Stephen M. Matyas.

2.3.2. RSASSA-PSS

RSASSA-PSS (RSA Signature Scheme with Appendix - Probabilistic Signature Scheme) is an asymmetric signature scheme that combines the RSA algorithm with the PSS encoding method. The PSS encoding method was created by Mihir Bellare and Phillip Rogaway. In the effort to incorporate RSASSA-PSS into the P1363a standards activity, Bellare, Rogaway, and Burt Kaliski (the editor of IEEE P1363a) made certain changes to the original version of RSA-PSS to simplify implementation and integration into existing protocols.

The following is an example of RSA encryption. Plaintexts in this case are positive integers up to 2^512. Keys are quadruples (p,q,e,d), with p a 256-bit prime number, q a 258-bit prime number, and d and e large numbers satisfying the condition that (de - 1) is divisible by (p-1)(q-1). We define E_K(P) = P^e mod pq and D_K(C) = C^d mod pq. Classic and modern number-theoretic algorithms can readily compute all the quantities involved (for example, Euclid's algorithm computes greatest common divisors, and tests such as the Fermat test find probable primes). E_K can easily be computed from the pair (pq,e), but as of now there is no known practical way to compute D_K from the pair (pq,e). Therefore, whoever generates K can make the pair (pq,e) public.

Anyone can send a secret message to him; he is the only one who can read the messages. The main advantage of RSA public-key cryptography is enhanced security and convenience. Private keys never require transmission or disclosure to anyone. In a secret-key system, on the other hand, the secret keys must be transmitted (either manually or through a communication channel), and there is a possibility that an adversary can discover the secret keys during their transmission.
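The arithmetic of the example can be reproduced directly with Java's BigInteger class. The sketch below uses tiny primes so that it runs instantly; they are far too small for real security:

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class RsaToyDemo {
        public static void main(String[] args) {
            SecureRandom rnd = new SecureRandom();
            // The text uses a 256-bit p and a 258-bit q; 32-bit primes keep the demo readable.
            BigInteger p = BigInteger.probablePrime(32, rnd);
            BigInteger q = BigInteger.probablePrime(32, rnd);
            BigInteger n = p.multiply(q);                                  // pq
            BigInteger phi = p.subtract(BigInteger.ONE)
                              .multiply(q.subtract(BigInteger.ONE));       // (p-1)(q-1)

            BigInteger e = BigInteger.valueOf(65537);
            while (!phi.gcd(e).equals(BigInteger.ONE)) {
                e = e.add(BigInteger.valueOf(2));      // keep e odd until coprime with phi
            }
            BigInteger d = e.modInverse(phi);          // so that (de - 1) is divisible by phi

            BigInteger plaintext = BigInteger.valueOf(42);
            BigInteger ciphertext = plaintext.modPow(e, n);   // E_K(P) = P^e mod pq
            BigInteger recovered = ciphertext.modPow(d, n);   // D_K(C) = C^d mod pq
            System.out.println(recovered);                    // prints 42
        }
    }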

The JavaTM Cryptography Architecture (the Java Security API) is a Java core API built around the java.security package (and its subpackages). This API enables developers to integrate both low-level and high-level security functionality into their Java applications. The first release of Java Security appeared in JDK 1.1.

In JDK 1.1, the Java Cryptography Architecture (JCA) provides a subset of this functionality, including APIs for digital signatures and message digests, along with abstract interfaces for key and certificate management and access control. Additional APIs for X.509 v3 certificates and other formats will be added in future JDK releases. The Java Cryptography Extension (JCE) extends the JCA API to cover encryption and key exchange, creating a complete, platform-independent cryptography API. The JCE is shipped separately because of U.S. export restrictions. The JCA was designed around the principles of implementation independence and interoperability, and of algorithm independence and extensibility. These principles let developers use cryptographic concepts without worrying about which implementations or algorithms are in use. Where algorithm independence is not possible, the JCA offers standardized algorithm-specific APIs for developers to use.
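A minimal sketch of the digest and signature services the JCA exposes, using standard algorithm names supplied by the default provider:

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.MessageDigest;
    import java.security.Signature;

    public class JcaDemo {
        public static void main(String[] args) throws Exception {
            byte[] message = "lab access record".getBytes("UTF-8");

            // Message digest: the caller names the algorithm, the provider supplies it.
            MessageDigest sha = MessageDigest.getInstance("SHA-1");
            byte[] digest = sha.digest(message);
            System.out.println("digest length: " + digest.length);

            // Digital signature with a freshly generated DSA key pair.
            KeyPair keys = KeyPairGenerator.getInstance("DSA").generateKeyPair();
            Signature signer = Signature.getInstance("SHA1withDSA");
            signer.initSign(keys.getPrivate());
            signer.update(message);
            byte[] signature = signer.sign();

            Signature verifier = Signature.getInstance("SHA1withDSA");
            verifier.initVerify(keys.getPublic());
            verifier.update(message);
            System.out.println("signature valid: " + verifier.verify(signature));
        }
    }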

The JCA (Java Cryptography Architecture) allows developers to specify the specific implementations they require when implementation-independence is not desired. The JCE (Java Cryptography Extension) 1.2.1 is a package that provides a framework and implementations for various cryptographic functions, such as encryption, key generation, key agreement, and Message Authentication Code (MAC) algorithms. It supports different types of encryption, including symmetric, asymmetric, block, and stream ciphers. Additionally, the software supports secure streams and sealed objects. The JCE 1.2.1 is designed to allow the integration of other qualified cryptography libraries as service providers and seamlessly add new algorithms. Qualified providers include those approved for export and those certified for domestic use only.

Qualified providers are signed by a trusted entity. JCE 1.2.1 is an add-on to the JavaTM 2 platform, which already provides digital signature and message digest interfaces and implementations. This version of JCE serves as a non-commercial reference implementation showing how the JCE 1.2.1 APIs can be used. A reference implementation is essentially a proof-of-concept implementation of an API specification: it demonstrates that the specification is workable and gives other implementations something to run compatibility tests against. Compared with a commercial-grade product, a non-commercial implementation like this one may lack a comprehensive toolkit, advanced debugging tools, high-quality documentation, and regular maintenance updates. Because the Java 2 platform already includes interfaces and implementations for digital signatures and message digests, JCE 1.2 was developed to extend the cryptographic APIs and implementations available on the Java 2 platform, specifically those subject to U.S. export regulations.

According to U.S. export control regulations, JCE 1.2 was released as an extension to the Java 2 platform. JCE 1.2.1 has important features such as a pure Java implementation and a pluggable framework architecture that only allows qualified providers to be plugged in. The software is exportable in binary form only and is distributed by Sun Microsystems for both domestic and global users. The jurisdiction policy files specify that there are no restrictions on cryptographic strengths.

2.6. Conclusions

Software implementations of the Data Encryption Standard (DES) are readily available. Various individuals have generously made DES code available for download via ftp, including Stig Ostholm [FTPSO], BSD [FTPBK], Eric Young [FTPEY], Dennis Furguson [FTPDF], Mark Riordan [FTPMR], and Phil Karn [FTPPK]. Patterson's book [PAT87] also provides a Pascal listing of DES, and Antti Louko ([email protected]) has implemented DES with BigNum packages in [FTPAL]. As a result, we can use the DES algorithm to encrypt security elements such as the administrator password, user passwords, and the server's database. The RSA algorithm is widely recognized and extensively used for encryption, and numerous RSA-related documents can be found on the Internet. Different cryptographic schemes and protocols based on RSA are incorporated in products worldwide; the RSAES-OAEP encryption scheme and the RSASSA-PSS signature scheme are particularly recommended.

We will analyze the encoding methods they use and select a suitable option for encrypting our server's database. The "Java Cryptography Architecture" (JCA) is a framework for accessing and developing cryptographic functionality on the Java platform. It encompasses the parts of the JDK 1.1 Java Security API related to cryptography (currently, almost the entire API), together with a set of conventions and specifications described in its documentation. It introduces a "provider" architecture that allows multiple, interoperable implementations of cryptography. The JavaTM Cryptography Extension (JCE) 1.2.1 is used to add encryption to our system. This package requires JavaTM 2 SDK v 1.2.1 or later, or JavaTM 2 Runtime Environment v 1.2.1 or later, together with the Java Cryptography Architecture (JCA) APIs. We need encryption for the administrator database and the smart card in the Control Access of Lab Computer project in a client/server environment, so we are studying JCE 1.2.1 and making use of its cryptographic strengths.
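As a rough sketch of how the JCE could protect an administrator record in this project, a serializable record can be sealed with a triple-DES key; the record class and its fields are hypothetical:

    import java.io.Serializable;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SealedObject;
    import javax.crypto.SecretKey;

    public class AdminRecordDemo {
        // Hypothetical administrator record to be stored encrypted in the server database.
        static class AdminRecord implements Serializable {
            String userId = "admin";
            String password = "secret";
        }

        public static void main(String[] args) throws Exception {
            SecretKey key = KeyGenerator.getInstance("DESede").generateKey(); // triple-DES key

            Cipher sealing = Cipher.getInstance("DESede/ECB/PKCS5Padding");
            sealing.init(Cipher.ENCRYPT_MODE, key);
            SealedObject sealed = new SealedObject(new AdminRecord(), sealing); // encrypted record

            Cipher opening = Cipher.getInstance("DESede/ECB/PKCS5Padding");
            opening.init(Cipher.DECRYPT_MODE, key);
            AdminRecord restored = (AdminRecord) sealed.getObject(opening);     // decrypted again
            System.out.println(restored.userId);
        }
    }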

Chapter 3: Smart Card Technology

3.1. About The Java Card Technology

The Java Card specifications allow JavaTM technology to be executed on devices with limited memory, such as smart cards. Additionally, the Java Card API enables applications designed for one smart card platform utilizing Java Card technology to function on any other compatible platform.

Java smart card technology has become more efficient as a result of these two new technologies. The Java Card Application Environment (JCAE) is licensed to smart card manufacturers on an OEM basis, with over 90 percent of worldwide smart card manufacturing capacity being represented. There are several unique benefits to using Java Card technology, such as platform independence and the capability for multiple applications to run on a single card.

The Java programming language's design allows for small, downloadable code elements, making it secure and easy to run multiple applications on a single card. Card issuers can also install applications after the card has been issued, letting them respond dynamically to the changing needs of their customers. Java Card technology offers flexibility in programming smart cards: for example, the card issuer can change the frequent-flyer program associated with a card without issuing a new card. In addition, the Java Card API is compatible with international standards such as ISO 7816 and industry-specific standards such as EMV.
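A minimal card-side applet skeleton written against the standard javacard.framework API; the class name, instruction byte, and response data are placeholders:

    import javacard.framework.APDU;
    import javacard.framework.Applet;
    import javacard.framework.ISO7816;
    import javacard.framework.ISOException;

    // The same applet bytecode can run on any card that implements the Java Card API.
    public class AccessApplet extends Applet {
        private static final byte INS_GET_ID = (byte) 0x20;   // placeholder instruction

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new AccessApplet().register();
        }

        public void process(APDU apdu) {
            if (selectingApplet()) {
                return;                                        // nothing to do on SELECT
            }
            byte[] buffer = apdu.getBuffer();
            switch (buffer[ISO7816.OFFSET_INS]) {
                case INS_GET_ID:
                    buffer[0] = (byte) 0x42;                   // placeholder response byte
                    apdu.setOutgoingAndSend((short) 0, (short) 1);
                    break;
                default:
                    ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
            }
        }
    }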

3.2. Smart Cards

Smart cards are small cards with an embedded chip that offer a range of benefits. They function as access control devices, ensuring that personal and business data is available only to authorized users. Smart cards can also be used for purchasing or exchanging value, and they provide data portability, security, and convenience. There are two types of smart cards. "Intelligent" smart cards contain a central processing unit (CPU) capable of storing information and making decisions according to the card issuer's requirements.

"Intelligent" cards provide a "read / write" capability, allowing the addition and processing of new information. The other type is referred to as a memory card, which is primarily used for storing information and contains stored value that can be spent in various transactions such as payphone, retail, vending, or related activities. Both types of cards are equipped with an integrated circuit chip that adds intelligence and safeguards the stored information from damage or theft. Due to these features, smart cards are significantly more secure compared to magnetic stripe cards, which carry the information on the

exterior of the card and are susceptible to easy copying.

There are also contactless smart cards and contactless smart card readers.
