The document provides an overview and comparison of several distributed file systems, including Google File System (GFS), Kosmos File System (KFS), Hadoop Distributed File System (HDFS), GlusterFS, and Red Hat Global File System. It describes the architecture, features, status, limitations, and notable uses of each file system. The document is presented as a slideshow, with each file system covered across multiple slides containing details about its design and implementation.


Distributed File System Review

Schubert Zhang May 2008

File Systems
Google File System (GFS)
Kosmos File System (KFS)
Hadoop Distributed File System (HDFS)
GlusterFS
Red Hat Global File System
Lustre
Summary

Slide 2

Google File System (GFS)

Slide 3

Google File System (GFS)


A file system oriented to specific applications:
Search engines
Grid computing applications
Data mining applications
Other applications that generate and process data

Workload characteristics:
Performance, scalability, reliability, and availability requirements
Large distributed data-intensive applications
Large/huge files (tens of MB to tens of GB in size)
Primarily write-once/read-many; appending rather than overwriting
Mostly sequential access
The emphasis is on high sustained throughput of data access rather than low latency

System requirements:
Inexpensive commodity hardware that may often fail
Adequate memory for the master server
Gigabit Ethernet (GE) network interfaces

Architecture:
Single master, multiple chunkservers, accessed by multiple clients
Usually a client and a chunkserver run on the same machine
Fixed-size chunks (usually 64 MB), with chunk metadata held in master memory (see the estimate below)
Files are replicated at chunk granularity (usually 3 replicas)
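To see why 64 MB chunks keep the master's memory footprint manageable, here is a back-of-the-envelope sketch in Java. The ~64 bytes of metadata per chunk is the figure quoted in the GFS paper; folding replica locations into that figure and ignoring namespace entries are simplifying assumptions.

```java
/**
 * Rough estimate of GFS master memory use.
 * Assumes ~64 bytes of in-memory metadata per 64 MB chunk (GFS paper figure);
 * namespace entries are ignored, so treat the result as a lower bound.
 */
public class MasterMemoryEstimate {
    static final long CHUNK_SIZE = 64L * 1024 * 1024;  // 64 MB chunks
    static final long BYTES_PER_CHUNK_META = 64;       // per-chunk metadata in RAM (assumption)

    static long metadataBytes(long rawStorageBytes, int replicas) {
        // Only unique chunks are tracked; replica locations are folded into
        // the same 64-byte estimate for simplicity.
        long logicalBytes = rawStorageBytes / replicas;
        long chunks = (logicalBytes + CHUNK_SIZE - 1) / CHUNK_SIZE;  // ceiling division
        return chunks * BYTES_PER_CHUNK_META;
    }

    public static void main(String[] args) {
        long onePiB = 1L << 50;
        System.out.printf("1 PiB raw, 3 replicas -> ~%d MB of master metadata%n",
                metadataBytes(onePiB, 3) / (1024 * 1024));
    }
}
```

Even petabyte-scale storage needs only a few hundred MB of chunk metadata, which is why keeping all metadata in the master's memory is feasible.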

Slide 4

Google File System (GFS)


Single master server (metadata server):
Namespaces (files and chunks)
File access control info
Mapping from files to chunks
Locations of chunk replicas
Metadata kept in memory; namespaces and mappings persisted on disk via checkpoints and an operation log
Namespace management and locking
Metadata HA and fault tolerance
Replica placement: rack-aware replica placement policy (an illustrative chooser is sketched below)
Chunk creation, re-replication, rebalancing
Chunkserver management (heartbeat and control)
Chunk lease management
Garbage collection
Minimize the master's involvement in all operations
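The slides name only the rack-aware placement goal; the chooser below is an illustrative sketch that spreads replicas across racks and prefers servers with free space. The ChunkServer record and the selection criteria are assumptions for illustration, not GFS's actual algorithm.

```java
import java.util.*;

/** Illustrative rack-aware replica placement: spread copies across racks. */
public class RackAwarePlacement {
    record ChunkServer(String id, String rack, long freeBytes) {}

    /** Pick `replicas` servers, preferring distinct racks, then free space. */
    static List<ChunkServer> choose(List<ChunkServer> servers, int replicas) {
        List<ChunkServer> byFreeSpace = new ArrayList<>(servers);
        byFreeSpace.sort(Comparator.comparingLong(ChunkServer::freeBytes).reversed());

        List<ChunkServer> chosen = new ArrayList<>();
        Set<String> usedRacks = new HashSet<>();
        // First pass: at most one replica per rack.
        for (ChunkServer s : byFreeSpace) {
            if (chosen.size() == replicas) break;
            if (usedRacks.add(s.rack())) chosen.add(s);
        }
        // Second pass: fill up if there are fewer racks than replicas.
        for (ChunkServer s : byFreeSpace) {
            if (chosen.size() == replicas) break;
            if (!chosen.contains(s)) chosen.add(s);
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<ChunkServer> servers = List.of(
                new ChunkServer("cs1", "rackA", 500L), new ChunkServer("cs2", "rackA", 900L),
                new ChunkServer("cs3", "rackB", 700L), new ChunkServer("cs4", "rackC", 300L));
        System.out.println(choose(servers, 3));  // picks cs2 (rackA), cs3 (rackB), cs4 (rackC)
    }
}
```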
Slide 5

Google File System (GFS)


Large number of chunkservers:
No cache for file data
Lazy chunk allocation
Leases; data replication chain
Block checksums
Chunk state reports
P2P replication: replication pipelining and cloning

Large number of clients:
Linked into each application
Interact with the master for metadata operations
Data-bearing communication goes directly to the chunkservers
No cache for file data, but metadata is cached
Translate an operation's file offset to a chunk index (see the sketch below)
Applications/clients must work around the limitations of the GFS implementation
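The offset-to-chunk-index translation performed by the client library is plain arithmetic; a minimal sketch, assuming the usual 64 MB chunk size:

```java
/** Minimal sketch of the client-side offset-to-chunk translation (64 MB chunks assumed). */
public class ChunkIndex {
    static final long CHUNK_SIZE = 64L * 1024 * 1024;

    /** Which chunk of the file holds this byte offset. */
    static long chunkIndex(long fileOffset) {
        return fileOffset / CHUNK_SIZE;
    }

    /** Offset of that byte inside its chunk. */
    static long chunkOffset(long fileOffset) {
        return fileOffset % CHUNK_SIZE;
    }

    public static void main(String[] args) {
        long offset = 200L * 1024 * 1024;         // byte 200 MB of the file
        System.out.println(chunkIndex(offset));   // 3 (chunks 0..2 cover the first 192 MB)
        System.out.println(chunkOffset(offset));  // 8388608, i.e. 8 MB into chunk 3
    }
}
```

The client then asks the master for the locations of that chunk index and talks to a chunkserver directly for the data.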
Slide 6

Google File System (GFS)


Cluster scale and performance
Thousands of disks on over a thousand machines
Hundreds of TB or several PB of storage
Hundreds or thousands of clients

Limitations
No standard API such as POSIX; file system operations are not fully integrated
Some performance issues depend on the application and client implementation
GFS does not guarantee that all replicas are byte-wise identical; it only guarantees that the data is written at least once as an atomic unit
Record append is atomic at least once, so GFS may insert padding or duplicate records in between (readers must filter these; see the sketch below)
An application/client may read a stale chunk replica (readers must deal with it)
If an application write is large or straddles a chunk boundary, it may pick up fragments from different clients
Needs tight cooperation with applications
Hard links and soft links are not supported
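Because record append is at-least-once, the GFS paper leaves padding and duplicate records for readers to filter, typically using checksums and unique record IDs. The sketch below shows one way an application-level reader might do this; the Record type and its id field are hypothetical, not part of GFS.

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Illustrative reader-side handling of at-least-once record append:
 * the record format (id + payload) is an application-level convention,
 * not something GFS provides.
 */
public class DedupingReader {
    record Record(long id, byte[] payload) {}

    private final Set<Long> seenIds = new HashSet<>();

    /** Returns true if the record is new and should be processed. */
    boolean accept(Record r) {
        if (r == null) return false;   // padding / unparsable region: skip
        return seenIds.add(r.id());    // duplicate append: already seen, skip
    }

    public static void main(String[] args) {
        DedupingReader reader = new DedupingReader();
        Record r = new Record(42L, new byte[] {1, 2, 3});
        System.out.println(reader.accept(r));  // true  (first occurrence)
        System.out.println(reader.accept(r));  // false (duplicate)
    }
}
```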
Slide 7

Google File System (GFS)


Needs further components for a complete system:
Chubby (distributed lock service and consistency)
BigTable (a distributed storage system for structured data)
etc.

Slide 8

Kosmos File System (KFS)


An open source implementation of the Google File System
[Architecture diagram: many clients for distributed computing (each application linked with the KFS client library) exchange location/metadata signaling with a KFS meta-data server (with HA), and exchange block signaling and block data streams directly with many KFS block servers, which store their blocks on the local Linux file system.]

Slide 9

Kosmos File System (KFS)


Architecture:
Meta-data server = Google FS master
Block server = Google FS chunkserver
Client library = Google FS client

Workload characteristics:
Primarily write-once/read-many workloads
A few million large files, each on the order of a few tens of MB to a few tens of GB in size
Mostly sequential access

Implemented in C++; client APIs for C++, Java, and Python

Slide 10

Kosmos File System (KFS)


Valued Stuff
Client write cache (Google said this is not necessary)
FUSE support: KFS exports a POSIX file interface; Hadoop does not (GFS does not, either)
Monitoring tools and shell
Deployment scripts
Job placement and local-read optimization
Can be integrated with Hadoop: replace HDFS and use Hadoop's MapReduce (patch to Hadoop-JIRA-1963)
KFS supports atomic append, HDFS does not
KFS supports rebalancing, HDFS does not

Status and Limitations


Not well implemented yet; no real users
We failed to build a usable program from it
Similar limitations to Google FS
Slide 11

Kosmos File System (KFS)


Client support for FUSE
[Diagram: FUSE-based client data path. A client application (e.g. the shell command ls) issues file operations on /mnt/kfs (the FUSE mount); they pass through glibc, the VFS, and the FUSE kernel module to libfuse (the FUSE user-space programming library) and the KFS client library, which sends metadata operations to the KFS meta-data server and block I/O to the KFS block servers; block servers keep their data on a local ext3 file system.]

Slide 12

Hadoop Distributed File System (HDFS)


An open source implementation of the Google File System
HDFS relaxes a few POSIX requirements to enable streaming access to file system data
Originated as infrastructure for Apache Nutch
Moving computation is cheaper than moving data
Portability across heterogeneous hardware and software platforms; implemented in Java
Java client API (see the read example after this list)
C language wrapper for the Java API
HTTP browser interface

Architecture (master/slave):
Namenode = Google FS master server
Datanodes = Google FS chunkservers
Clients = Google FS clients
Blocks = Google FS chunks
Namenode safe mode
Persistence of file system metadata, like Google FS (periodic checkpoints not yet supported)

Communication protocols: RPCs
Staging: client-side data buffering (like a POSIX implementation)
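The slides note only that a Java client API exists; a minimal read against Hadoop's FileSystem API might look like the sketch below. The file path is made up, and details can vary across Hadoop versions.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

/** Minimal HDFS read via the Java client API; the path is illustrative. */
public class HdfsCat {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();       // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);           // connects to the configured Namenode
        Path file = new Path("/user/demo/sample.txt");  // hypothetical file
        try (FSDataInputStream in = fs.open(file)) {
            IOUtils.copyBytes(in, System.out, 4096, false);  // stream file contents to stdout
        } finally {
            fs.close();
        }
    }
}
```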

Slide 13

Hadoop Distributed File System (HDFS)

Slide 14

Hadoop Distributed File System (HDFS)


Status and Limitations:
Similar limitations to Google FS
Appending writes to files are not yet supported
User quotas and access permissions are not yet implemented
The replica placement policy is not complete
Periodic checkpoints of metadata are not yet supported
Rebalancing is not yet supported
Snapshots are not yet supported

Who's using HDFS:
Facebook (implements a read-only FUSE layer over HDFS; 300 nodes)
Yahoo! (1000 nodes)
Various non-commercial usage (log analysis, search, etc.)
Slide 15

GlusterFS
Gluster targets specific tasks such as HPC clustering, storage clustering, enterprise provisioning, database clustering, etc.
Products: GlusterFS, GlusterHPC

Slide 16

GlusterFS

Slide 17

GlusterFS

Slide 18

GlusterFS

[Diagram: clients and storage server cluster. An application (e.g. shell ls) issues POSIX calls that pass through the VFS and the FUSE kernel module (fuse.ko) to libfuse and the GlusterFS client, which talks to the namespace bricks (replicated with AFR) and the file data bricks (AFR, stripe, etc.) in the storage server cluster.]

Slide 19

GlusterFS
Architecture
Different from the GoogleFS family: no meta-data server, no master server
A user-space logical volume management approach
Server node machines export disk storage as bricks
The brick nodes store distributed files in the underlying Linux file system
File namespaces are also stored on storage bricks, just like the file data bricks, except that the namespace copies of the files have zero size
Bricks (file data or namespace) support replication
NFS-like disk layout

Interconnect
InfiniBand RDMA (high throughput)
TCP/IP

Features
FUSE support, complete POSIX interface
AFR (mirroring)
Self-heal
Stripe (note: not well implemented)
Slide 20

GlusterFS
Valued Stuff
Easy to set up for a moderate cluster
FUSE and POSIX
Scheduler modules for balancing (see the toy scheduler below)
Flexible performance tuning
Design: stackable modules (translators), loaded as run-time .so files; not tied to I/O profiles, hardware, or OS

Well tested, with different representative benchmarks
Performance and simplicity are better than Lustre
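Since there is no metadata master, placement of new files is decided by a client-side scheduler module. The toy round-robin chooser below only illustrates that idea; it is not GlusterFS's actual ALU or round-robin translator code, and the brick names are made up.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Toy round-robin brick scheduler, illustrating how file placement can be
 * decided on the client side without a metadata master. Not GlusterFS code.
 */
public class RoundRobinScheduler {
    private final List<String> bricks;
    private final AtomicLong next = new AtomicLong();

    RoundRobinScheduler(List<String> bricks) {
        this.bricks = bricks;
    }

    /** Pick the brick that will store a newly created file.
     *  The path is unused here; a hash-based scheduler would use it. */
    String pickBrick(String path) {
        int i = (int) (next.getAndIncrement() % bricks.size());
        return bricks.get(i);
    }

    public static void main(String[] args) {
        RoundRobinScheduler s = new RoundRobinScheduler(List.of(
                "server1:/export/brick1", "server2:/export/brick1", "server3:/export/brick1"));
        System.out.println(s.pickBrick("/data/a.log"));  // server1:/export/brick1
        System.out.println(s.pickBrick("/data/b.log"));  // server2:/export/brick1
    }
}
```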

Limitations
Lacks a global management function; there is no master
The AFR function depends on configuration and lacks automation and flexibility
New bricks currently cannot be added automatically
If a master component were added, it would be a better cluster FS

Who's using GlusterFS
Indian Institute of Technology Kanpur: 24-brick GlusterFS storage on InfiniBand
Other small cluster projects

Slide 21

Red Hat Global File System


Red Hat Cluster Suite: this is a shared-storage solution, which is a traditional approach
Depends on Red Hat Cluster Suite components
Configuration and management: Conga (luci and ricci)
GLVM, DLM, GNBD, SAN/NAS/DAS

Slide 22

Red Hat Global File System


Deploy
GFS with a SAN (superior performance and scalability)
GFS and GNBD with a SAN (performance, scalability, moderate price)
GFS and GNBD with directly connected storage (economy and performance)

Slide 23

Red Hat Global File System


GFS functions:
Making a file system
Mounting a file system
Unmounting a file system
GFS quota management
Growing a file system
Adding journals to a file system
Direct I/O
Data journaling
Configuring atime updates
Suspending activity on a file system
Displaying extended GFS information and statistics
Repairing a file system
Context-Dependent Path Names (CDPN)

Cluster volume management: aggregates multiple physical volumes into a single logical device across all nodes in a cluster, providing a logical view of the storage to GFS
Lock management
Cluster management, fencing, and recovery
Cluster configuration management
Slide 24

Red Hat Global File System


Status
It is a shared-storage solution; this solution is far from our target
A little too complicated and not easy to manage
High performance and scalability require high-end storage hardware and networking (e.g. a SAN)
The implementation is not simple

Slide 25

Lustre
From Sun Microsystems
Targets 10,000s of nodes, PB of storage, 100 GB/sec throughput
Lustre is kernel software that interacts with storage devices
A Lustre deployment must be correctly installed, configured, and administered to reduce the risk of security issues or data loss
It uses Object-Based Storage Devices (OSDs) to manage entire file objects (inodes) instead of blocks

Components:
Meta Data Servers (MDSs)
Object Storage Targets (OSTs)
Lustre clients

Lustre is a little too complex for us to use, but it seems to be a verified and reliable file system.
Slide 26

Lustre OSD Architecture

Slide 27

Summary

Shared Cluster Parallel Cloud

Slide 28

Summary
Cluster volume managers, SAN file systems, cluster file systems, Parallel NFS (pNFS), Object-based Storage Devices (OSD), global/parallel file systems

Distribution/cluster/parallel level:
Volume level (block based)
File or file-system level (file, block, or object based, the latter for OSD)
Database or application level

Located directly at the storage or in the network


Slide 29

Summary: Traditional/Historical

Block level: volume management
EMC PowerPath (PPVM), HP Shared LVM, IBM LVM, MACROIMPACT SAN CVM, Red Hat LVM, Sanbolic LaScala, VERITAS

File/file-system level:
Local disk FS
Distributed: NAS, Samba, AFP, DFS, AFS, RFS, Coda
SAN FS

App/DB level: RDBMS, email systems

Advanced/Recent: File/FS level
Distributed: WAFS (NAS extension), NFM, GlobalFS, SANFS, ClusterFS
Slide 30
