BeeGFS transparently spreads user data across multiple servers. By increasing the number of servers and disks in the system, you can scale the performance and capacity of the file system to the level you need, seamlessly from small clusters up to enterprise-class systems with thousands of nodes.
Key Benefits
- Distributed File Contents and Metadata:
One of the most fundamental concepts of BeeGFS is the strict avoidance of architectural bottlenecks. Striping file contents across multiple storage servers is only one part of this concept; another important aspect is the distribution of file system metadata (e.g. directory information) across multiple metadata servers. Large systems and metadata-intensive applications in general can greatly benefit from the latter feature. (A minimal sketch of the striping idea follows this list.)
- HPC Technologies:
Built on highly efficient and scalable multithreaded core components with native InfiniBand support, file system nodes can serve InfiniBand and Ethernet (or any other TCP-enabled network) connections at the same time and automatically switch to a redundant connection path if one of them fails (a conceptual failover sketch also follows this list).
- Easy to use:
BeeGFS requires no kernel patches (the client is a patchless kernel module, the server components are userspace daemons), comes with graphical cluster installation tools, and allows you to add more clients and servers to the running system whenever you want.
- Client and Servers on any Machine:
No specific enterprise Linux distribution or other special environment is required to run BeeGFS. BeeGFS clients and servers can even run on the same machine, which can improve performance for small clusters or networks. BeeGFS requires no dedicated file system partition on the servers; it uses existing partitions, formatted with any of the standard Linux file systems, e.g. XFS or ext4. For larger networks, it is also possible to create several distinct BeeGFS file system partitions with different configurations.
- Highly Concurrent Access:
Simple remote file systems like NFS not only have serious performance problems under highly concurrent access; they can even corrupt data when multiple clients write to the same shared file, which is a typical use case for cluster applications. BeeGFS was specifically designed with such use cases in mind to deliver optimal robustness and performance under high I/O load (this shared-file write pattern is sketched below).
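To make the striping idea from the first point more concrete, here is a minimal Python sketch of round-robin chunk placement. It only illustrates the concept, not BeeGFS's actual implementation; the chunk size and target names below are made up.

```python
# Minimal sketch of round-robin striping: a file is split into fixed-size chunks
# that rotate across storage targets, so a large read or write hits several
# servers in parallel. Chunk size and target names are hypothetical.

CHUNK_SIZE = 512 * 1024                                          # hypothetical chunk size
TARGETS = ["storage01", "storage02", "storage03", "storage04"]   # hypothetical storage targets

def locate(offset: int) -> tuple[str, int]:
    """Map a byte offset in a file to (storage target, offset within that chunk)."""
    chunk_index = offset // CHUNK_SIZE
    target = TARGETS[chunk_index % len(TARGETS)]
    return target, offset % CHUNK_SIZE

# Example: a 2 MiB file starting at offset 0 spreads across all four targets.
for off in range(0, 2 * 1024 * 1024, CHUNK_SIZE):
    print(off, locate(off))
```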
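The multi-path behaviour described under HPC Technologies boils down to trying a preferred network path and falling back to another when it fails. The sketch below shows that idea with plain TCP sockets; the addresses and port are hypothetical, and this is not BeeGFS's connection code.

```python
# Conceptual failover: try the preferred path (e.g. the InfiniBand/IPoIB address)
# first and fall back to the Ethernet address of the same server if it fails.

import socket

PATHS = [
    ("10.0.0.10", 8003),    # hypothetical InfiniBand (IPoIB) address of a storage server
    ("192.168.1.10", 8003), # hypothetical Ethernet address of the same server
]

def connect_with_failover(paths, timeout=2.0):
    """Return a socket to the first path that accepts a connection."""
    last_error = None
    for host, port in paths:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_error = err  # this path is down: try the next one
    raise ConnectionError(f"all paths failed: {last_error}")
```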
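Finally, the shared-file workload mentioned under Highly Concurrent Access typically looks like several processes ("ranks") writing disjoint regions of one common file. The sketch below shows that access pattern with os.pwrite; the rank count and region size are made up, and in a real cluster job each rank would be a separate process on a separate node.

```python
# Shared-file write pattern: every rank writes its own non-overlapping region of
# one common file. A parallel file system must keep such concurrent writes
# consistent; region size and rank count here are purely illustrative.

import os

RECORD_SIZE = 1024 * 1024   # hypothetical per-rank region size
NUM_RANKS = 4

def write_my_region(path: str, rank: int) -> None:
    payload = bytes([rank % 256]) * RECORD_SIZE
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        # pwrite targets an absolute offset, so ranks never overlap each other.
        os.pwrite(fd, payload, rank * RECORD_SIZE)
    finally:
        os.close(fd)

if __name__ == "__main__":
    for rank in range(NUM_RANKS):
        write_my_region("shared_output.dat", rank)
```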
https://www.nextplatform.com/2016/02/24/arm-open-source-feed-buzz-around-hpc-file-system/