A Distributed File System (DFS) is an application in which clients can access and manipulate objects stored on a server as if they were local. When a user requests an object, the server sends a copy to the user's computer; the copy is cached there while the data is being processed and is then returned to the server. A DFS combines file and directory services from multiple servers into a global directory, connecting all servers and making files available to end-users. It allows multiple clients to access the same data simultaneously while keeping files updated, so that everyone sees the latest version and conflicts are reduced. To guard against data-access failures, a DFS uses either file or database replication. Examples of DFS include:
Microsoft's Distributed File System
IBM/Transarc's Distributed File System
Later in the discussion, we will talk about NFS as an example.
gn: justify">
The Distributed File System involves several key concepts, such as Naming and Transparency, File Replication, Remote File Access, Caching, Fault Tolerance, and Security. Before delving into the specifics of the Network File System, it is worth briefly touching on some of these essential concepts.
File Replication
File replication is an effective way to improve file availability and performance across multiple machines. It entails placing several copies of the same file on different, failure-independent machines, so that one replica is unaffected by the failure of another. The details of replication can be hidden from users, which has its own benefits: the naming scheme maps a replicated file name to a particular replica, so the replicas must be distinguishable by unique lower-level names while remaining invisible at the higher level. The primary challenge associated with file replication lies in keeping the replicas up to date.
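To make the update problem concrete, here is a minimal Python sketch (hypothetical host names, in-memory dictionaries instead of real disks or networking, not any real DFS API) of how a write could be propagated to every replica of a file so that the copies stay consistent.

```python
# Minimal sketch (not an actual DFS API): propagating an update to every
# replica of a file so that all copies stay consistent.
replicas = {
    "/shared/report.txt": ["serverA", "serverB", "serverC"],  # hypothetical hosts
}

store = {}  # (host, path) -> contents, stands in for each machine's disk

def write_file(path, data):
    """Update the file on every machine that holds a replica."""
    for host in replicas.get(path, []):
        store[(host, path)] = data   # in a real system this is a network call

def read_file(path, preferred_host):
    """Read from the nearest replica; any copy should return the same data."""
    return store.get((preferred_host, path))

write_file("/shared/report.txt", b"version 2")
print(read_file("/shared/report.txt", "serverB"))  # b'version 2'
```

A real DFS must also decide what happens when one replica is unreachable during the write, which is exactly why updating the replicas is the hard part.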
Naming and Transparency
In a Distributed File System, Naming and Transparency are vital concepts. Naming is the mapping between logical and physical objects: users refer to files or objects as logical entities with names, while the system treats them as blocks of data on disk. In a conventional file system this mapping reveals where on the disk a file is stored; a transparent Distributed File System adds a further layer of abstraction that also hides where in the network the file is located. Name mapping in a Distributed File System is usually classified into two notions: Location Transparency and Location Independence.
Location transparency is the concept of hiding a file's physical storage location, so its name does not reveal where it is stored.
Location independence refers to the situation where the file name remains unchanged even if its physical storage location is changed.
These two notions describe different degrees of naming transparency: location independence is the stronger property, because a location-independent name remains valid even when the file is moved, whereas a location-transparent name merely avoids revealing where the file currently resides.
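As an illustration, the following Python sketch (hypothetical server and path names, not any real DFS API) models a naming service: the logical name reveals nothing about where the file is stored, and because the mapping can be changed without renaming the file, the name is also location independent.

```python
# Minimal sketch (hypothetical data): a naming service that maps logical file
# names to physical locations.  The name /docs/spec.txt reveals nothing about
# where the file lives (location transparency); if the mapping can change
# while the name stays the same, the name is also location independent.
name_service = {
    "/docs/spec.txt": ("fileserver1", "/export/docs/spec.txt"),
}

def resolve(logical_name):
    return name_service[logical_name]          # -> (host, physical path)

def migrate(logical_name, new_host, new_path):
    # Location independence: the file moves, the logical name does not change.
    name_service[logical_name] = (new_host, new_path)

print(resolve("/docs/spec.txt"))               # ('fileserver1', '/export/docs/spec.txt')
migrate("/docs/spec.txt", "fileserver2", "/data/docs/spec.txt")
print(resolve("/docs/spec.txt"))               # same name, new location
```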
Remote File Access
To access files stored in a Distributed File System, users can request the specific file from the server by using the naming scheme to determine its location. The remote-service mechanism handles sending requests to the server and receiving processed results. Users employ Remote Procedure Call (which will be further discussed) to send their requests to the server.
Caching
The idea of caching in a Distributed File System is simple. If requested data is not already cached, a copy is transferred from the server to the user's computer; subsequent accesses to that data can then be performed locally without further network traffic. Keeping recently accessed disk blocks in the cache removes the need for repeated network accesses, so there is no longer a one-to-one correspondence between file accesses and traffic to the server. Users see a single master copy of each file on the server, but copies of its blocks are scattered across different caches. When cached data is modified, the changes must eventually be propagated back to the master copy to keep it consistent.
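The following Python sketch (in-memory dictionaries standing in for the server and the network) illustrates the idea: blocks are fetched from the "server" once, repeated accesses are served locally from the cache, and modified blocks are written back to the master copy.

```python
# Minimal sketch (no real networking): a client-side cache of file blocks.
# Blocks are fetched from the server once, served locally afterwards, and
# dirty blocks are written back to the master copy on flush.
server_blocks = {("/data/log.txt", 0): b"hello"}   # the server's master copy
cache = {}                                          # (path, block_no) -> data
dirty = set()

def read_block(path, block_no):
    key = (path, block_no)
    if key not in cache:                 # cache miss: one network transfer
        cache[key] = server_blocks[key]
    return cache[key]                    # cache hit: no server traffic at all

def write_block(path, block_no, data):
    cache[(path, block_no)] = data
    dirty.add((path, block_no))          # remember to synchronize later

def flush():
    for key in dirty:                    # push changes back to the master copy
        server_blocks[key] = cache[key]
    dirty.clear()

print(read_block("/data/log.txt", 0))    # first read goes to the "server"
write_block("/data/log.txt", 0, b"hello, world")
flush()                                  # master copy now matches the cache
```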
The Network File System (NFS) is a well-known Distributed File System that we will examine as an example.
NETWORK FILE SYSTEM
The Network File System is a distributed file system that operates as a client/server application. It enables users to access, view, store, and update files on a remote computer as if they were working on their own machine. To use NFS, the client must have NFS client software installed and the server must have NFS server software installed; this configuration provides transparent file access. Communication between client and server typically takes place over TCP/IP, which establishes a reliable connection before data is transferred.
NFS is a dependable file server technology originally developed by Sun Microsystems. It allows users to mount part of, or an entire, remote file system onto their local directory tree. The accompanying diagram depicts a common scenario in which an NFS server shares files with its clients.
The z/OS™ Network File System
(Source: http://www-1.ibm.com/servers/eserver/zseries/zos/nfs/index.html)
NFS DESIGN AND ARCHITECTURE
NFS aims to provide a reliable, efficient, and seamless method for users in widespread communities to access file servers. The following are key features and design principles of NFS.
- Transparency: users and applications can access files at a remote location as if they were local, without being aware of whether the files are stored locally or remotely.
- Fast recovery: NFS is designed to recover quickly from system failures and network problems, restoring service promptly and minimizing disruption for users.
- Portability: NFS works across different machines and operating systems and can be moved to a wide range of hardware and OS platforms, including mainframes.
- Network protocol independence: NFS can run over various transport protocols, so it can take advantage of both current and future protocols rather than being tied to a single one.
- High performance: performance is a crucial aspect of NFS; users should be able to access remote files as effortlessly as local ones.
- Flexible security: NFS offers several security options, letting users and administrators choose the most appropriate one for their environment and allowing future security mechanisms to be integrated.
All of these features help reduce costs by sharing resources throughout the global enterprise.
The mechanism of the NFS protocol is described below.
NFS relies on two protocols, namely RPC and XDR, to facilitate communication between sender and receiver. Here is an overview of both protocols.
Remote Procedure Call (RPC)
The Network File System uses the Remote Procedure Call (RPC) protocol to communicate between client and server. RPC operates as a session-layer protocol, establishing a connection between processes on the client and server hosts. With RPC, a host can invoke procedures on a remote host as if they were executing locally.
External Data Representation (XDR)
XDR is a presentation-layer protocol that converts data between the native formats of different computers and operating systems, ensuring that clients and servers exchanging data through RPC interpret it consistently.
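As a rough illustration of what XDR-style encoding looks like, the sketch below uses Python's struct module to pack an integer and a string into a fixed, big-endian, 4-byte-aligned form that any receiver can decode regardless of its native byte order. It mimics XDR's conventions but is not a full XDR implementation.

```python
import struct

# Minimal sketch of the idea behind XDR: every value is encoded in a single,
# machine-independent form (big-endian, padded to 4-byte units) so that any
# client and server pair can decode it, whatever their native byte order.
def xdr_int(value):
    return struct.pack(">i", value)                 # 4-byte big-endian integer

def xdr_string(text):
    data = text.encode("ascii")
    pad = (4 - len(data) % 4) % 4                   # pad to a 4-byte boundary
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

# Encode an example argument list: a (hypothetical) handle id and a file name.
message = xdr_int(42) + xdr_string("report.txt")
print(message.hex())

# The receiver decodes the same bytes regardless of its architecture.
(handle_id,) = struct.unpack(">i", message[:4])
(name_len,) = struct.unpack(">I", message[4:8])
name = message[8:8 + name_len].decode("ascii")
print(handle_id, name)                              # 42 report.txt
```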
Transparency and the Proxy Pattern
The transparency of NFS is accomplished through the RPC and XDR protocols, which handle the communication between clients and servers. The diagram below illustrates the client/server relationship. Both the client and the server have their own stubs, which interact whenever the client requests a service; the XDR protocol provides the encoding and decoding functions that connect the two stubs. A diagram illustrating this process is shown below.
The diagram above clearly displays the interaction between the stubs. NFS utilizes the proxy pattern to present remote objects as if they were local objects.
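The proxy idea can be sketched in a few lines of Python (no real RPC or XDR here; the "network" is a direct method call, and the class and path names are hypothetical): the client stub exposes the same interface as a local object, while the server stub does the actual work.

```python
# Minimal sketch (no real RPC library): the proxy pattern as NFS uses it.
# The client stub offers the same interface as a local file object, but each
# call is marshalled, "sent" to the server stub, and the reply unmarshalled.
class ServerStub:
    """Stands in for the server side; in reality this runs on another host."""
    def __init__(self):
        self.files = {"/export/readme.txt": b"remote contents"}

    def handle(self, request):
        op, path = request                      # unmarshal the request
        if op == "read":
            return self.files[path]             # marshal and return the reply

class ClientStub:
    """Looks like a local object to the caller - this is the proxy."""
    def __init__(self, server):
        self.server = server                    # would be a network connection

    def read(self, path):
        return self.server.handle(("read", path))

remote_fs = ClientStub(ServerStub())
print(remote_fs.read("/export/readme.txt"))     # used as if it were local
```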
NFS relies on four primary services or protocols: nfs, mountd, nsm, and nlm. Each of them is explained below.
Nfs
The nfs protocol itself is responsible for file management: looking up, reading, and writing files, authentication, and obtaining file statistics (attributes).
Mountd
This protocol is responsible for mounting file systems exported via nfs so that they can be accessed. The server receives requests such as mount and unmount and keeps track of all information about the exported file systems.
Nsm (Network Status Monitor)
This protocol monitors the status of the network and its nodes, collecting information about the state of each machine. It also provides notification when a machine reboots or restarts, so that other services can recover accordingly.
Nlm (Network Lock Manager)
This protocol provides a locking system that prevents multiple clients from modifying the same data at the same time. It keeps track of the files currently in use and applies locks to them as needed.
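On a client machine, such locking is normally requested through the standard advisory-locking interface; on an NFS mount the kernel forwards the request to the server's lock manager. The sketch below, using Python's fcntl module on a hypothetical NFS-mounted path (the file is assumed to exist; Unix-like systems only), shows a read-modify-write protected by an exclusive lock.

```python
import fcntl  # POSIX-only; on an NFS mount the request reaches the lock manager

# Minimal sketch (hypothetical path, file assumed to exist): take an exclusive
# advisory lock before a read-modify-write, so two clients cannot update the
# same data at the same time.
with open("/mnt/nfs/shared/counter.txt", "r+") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)        # blocks until no other client holds the lock
    value = int(f.read() or "0")
    f.seek(0)
    f.truncate()
    f.write(str(value + 1))              # safe: we are the only writer right now
    f.flush()
    fcntl.lockf(f, fcntl.LOCK_UN)        # release the lock for other clients
```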
The architecture of NFS is as follows:
The NFS architecture is layered, which improves portability. It consists of three main layers: the UNIX file-system interface, the Virtual File System (VFS), and the NFS service layer. The first layer provides the ordinary read, write, open, and close calls together with file descriptors. The second layer serves two important functions.
- First, it separates file-system-generic operations from their specific implementations by defining a clean VFS interface. This makes it possible to support several different file-system implementations, and therefore transparent access to different types of locally mounted media, on a single machine.
- Second, the VFS is based on a file-representation structure called a vnode, which contains a numerical designator that uniquely identifies a file across the network. The kernel maintains one vnode structure for each active node (file or directory).
The Virtual File System differentiates between local and remote files. Additionally, it distinguishes local files based on their file-system types.
The third and final layer, the NFS service layer, implements the NFS protocol itself.
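The following Python sketch (not the real kernel interface; class names and identifiers are illustrative) captures the idea of the VFS layer: each vnode records which file-system implementation owns the file, and generic operations are dispatched to the local or NFS implementation accordingly.

```python
# Minimal sketch (not the real kernel interface): the idea behind the VFS
# layer.  Generic operations are defined once, and each vnode records which
# file-system implementation should actually carry them out.
class LocalFS:
    def read(self, path):
        return f"read {path} from the local disk"

class NFSClient:
    def read(self, path):
        return f"sent an NFS read request for {path} to the server"

# Each "vnode" pairs a network-wide identifier with the file system that owns it.
vnodes = {
    "/usr/local/bin/tool": {"id": 101, "fs": LocalFS()},
    "/home/alice/notes":   {"id": 202, "fs": NFSClient()},
}

def vfs_read(path):
    vnode = vnodes[path]
    return vnode["fs"].read(path)       # dispatch to the right implementation

print(vfs_read("/usr/local/bin/tool"))  # handled locally
print(vfs_read("/home/alice/notes"))    # handled by the NFS service layer
```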
The figure below illustrates the layered architecture of NFS.
(Source: http://www.media.kyoto-u.ac.jp/edu/lec/jnakamu/lecture/y98/miy98/part2/p871bp4.htm)
EXPORTING AND MOUNTING IN NFS
Servers can export their file systems, allowing clients to access these shared files by mounting them onto their local disks.
EXPORTING
Exporting makes a server's file systems, or parts of them, available to clients, which can then mount the exported files onto their local directory trees; the server services the clients' requests and returns the results. There are specific rules governing what can be exported, outlined below:
- Either an entire file system or only a subtree of it can be exported; for example, a server could export /home or just /home/users.
- A subdirectory of an exported file system cannot itself be exported unless it resides on a separate device.
- Likewise, the parent of an exported file system can only be exported if it is located on a different device.
- Only local file systems can be exported.
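The "separate device" rules above can be checked mechanically. The Python sketch below (a hypothetical helper, Unix-like systems only) compares the st_dev values reported by os.stat to decide whether a subtree lies on a different device than an already exported file system.

```python
import os

# Minimal sketch of the "different device" rule above: a directory lies on a
# different device than another exactly when their st_dev values differ.
def on_same_device(path_a, path_b):
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev

def may_export_subtree(already_exported, subtree):
    """A subtree of an exported file system may be exported only if it
    resides on a separate device."""
    return not on_same_device(already_exported, subtree)

# "/" and "/tmp" are used only because they exist on most Unix-like systems.
print(may_export_subtree("/", "/tmp"))   # False if both live on the same disk
```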
Making remote files usable involves two steps: exporting and mounting. Once the exporting process is complete, mounting finishes the job on the client side.
MOUNTING
Clients can access exported files or objects only after mounting them onto their local directory "tree"; no access is possible before mounting. There are three types of mounts: predefined, explicit, and automatic.
Predefined mounts are described in the /etc/filesystems file, which records the host name, the local and remote paths, and other details. They are used for mounts that are needed routinely and are not expected to change.
Explicit mounts are performed intentionally, typically by the root user, to satisfy a specific need. They are used for occasional, unplanned mounts and for special tasks.
Automatic mounts are handled by the AutoFS facility, which mounts an object on demand when a user or program attempts to access a directory that is not yet mounted.
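As an illustration of how a predefined-mount table could be acted on, the Python sketch below (hypothetical hosts and paths) builds standard NFS mount commands of the form mount -t nfs host:remote local; the commands are only printed here, since actually mounting requires root privileges.

```python
# Minimal sketch (hypothetical hosts and paths): how a predefined-mount table,
# in the spirit of /etc/filesystems, could be turned into NFS mount commands.
# The commands are only printed; actually running them requires root.
predefined_mounts = [
    {"host": "fileserver1", "remote": "/export/home",  "local": "/home"},
    {"host": "fileserver1", "remote": "/export/tools", "local": "/usr/tools"},
]

def mount_command(entry):
    # Standard form of an NFS mount on Linux: mount -t nfs host:remote local
    return ["mount", "-t", "nfs",
            f"{entry['host']}:{entry['remote']}", entry["local"]]

for entry in predefined_mounts:
    print(" ".join(mount_command(entry)))
```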
The SCO Network File System (NFS) software enables administrators to remotely mount directories located on hosts across the network and access them as if they were local.
The Filesystem Manager allows you to administer both local filesystems and locally mounted remote filesystems.
The FileSystem Manager interface allows remote mounting of objects, such as directories, and their utilization as if they were local. This interface shows different types of filesystems, which include mounted, unmounted, and root filesystems.
(Source: http://osr5doc.ca.caldera.com:457/NetAdminG/fsD.aboutGUI.html)
NFS Servers and Clients
The NFS server makes its disk file systems available for sharing by exporting them; it verifies client requests and returns a file handle in response. The NFS client, in turn, supplies full pathnames of the files and directories it wants mounted.
NFS Security
There are two main ways NFS can be attacked: eavesdropping, the unauthorized interception of data during transmission, and an impostor attack, in which an unauthorized party poses as a legitimate user or host to gain access to the network.
Advantages and Disadvantages of NFS
NFS offers several benefits, including the ability to store data centrally: all data accessed by users can be kept on a central host. For instance, user accounts can be hosted on server 1, and other hosts on the network can then mount them from server 1.
One possibility is to store data that takes up more disk space on a single host and have clients mount the required files onto their local host. This allows for the storage and management of programs and files for a particular department on one host, while the accounting and finance department can have its own dedicated host.
Comparison of NFS and NTFS
In terms of scalability, NFS supports a broader range of processors and platforms than NTFS. NFS treats a file as a simple byte stream, whereas NTFS represents each file as a structured object made up of attributes.
Every file in NTFS is described by a record in a structure called the Master File Table (MFT).
NTFS employs a B+ tree structure for each directory in order to store the file name index.
CONCLUSION
Overall, NFS is easy to install and implement, making it a highly portable file system. However, it can suffer from consistency problems at times and does not scale well to large numbers of clients.
BIBLIOGRAPHY
Websites:
http://cs.gmu.edu/~menasce/osbook/distfs/sld096.html
http://www.media.kyoto-u.ac.jp/edu/lec/jnakamu/lecture/y98/miy98/part2/p871bp4.htm
http://www-1.ibm.com/servers/eserver/zseries/zos/nfs/index.html
http://www.cs.wisc.edu/~sschang/OS-Qual/fs/distributed_file_systems.htm
Books:
Silberschatz, Abraham; Galvin, Peter; Gagne, Greg. Applied Operating System Concepts. New York; Chichester: Wiley, 2000.