SAN MATEO, Calif.–(BUSINESS WIRE)–Hammerspace, the pioneer of the global data environment, today unveiled the performance capabilities that many of the world’s most data-intensive organizations depend on for high-performance data and storage in decentralized workflows. Hammerspace upends previous notions of how unstructured data architectures work, delivering the performance needed to free workloads from data silos, eliminate copy proliferation, and provide applications and users with direct access to data, regardless of where it is stored.
Hammerspace enables organizations to take full advantage of the performance capabilities of any server, storage system, and network anywhere in the world. This enables a unified, fast, and efficient global data environment for the entire workflow, from data creation to processing, collaboration, and archiving across edge devices, data centers, and public and private clouds.
1) High performance between data centers and to the cloud: saturate available Internet or private links
Instruments, applications, compute clusters, and workforces are increasingly decentralized. With Hammerspace, all users and applications have globally shared, secure access to all data, regardless of storage platform or location, as if it were all on a local NAS.
Hammerspace overcomes data gravity to make remote data fast to use locally. Modern data architectures require data placement to be as local as possible to match the latency and performance requirements of the user or application. Hammerspace’s parallel global file system orchestrates data automatically and by policy in advance, making data present locally without wasting time waiting for data placement. And data placement happens fast: using dual 100 GbE networks, Hammerspace can intelligently orchestrate data at 22.5 GB/s to where it’s needed. This level of performance allows workflow automation to orchestrate data in the background on a granular, per-file basis, by policy, so you can start working with data as soon as the first file is transferred, without waiting for the entire dataset to be moved locally.
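As a back-of-the-envelope check, the orchestration figure above lines up with the aggregate capacity of dual 100 GbE links. The sketch below (an illustration, assuming the quoted figure is expressed in gigabytes per second) computes the theoretical link maximum and the implied utilization:

```python
# Hypothetical sanity check, not vendor code. Assumes the 22.5 figure is
# GB/s (gigabytes per second), the unit consistent with dual 100 GbE links.
links = 2
link_gbps = 100                            # gigabits/s per 100 GbE link
theoretical_GBps = links * link_gbps / 8   # 25.0 GB/s aggregate
observed_GBps = 22.5                       # figure quoted in the release
utilization = observed_GBps / theoretical_GBps
print(f"theoretical: {theoretical_GBps} GB/s, utilization: {utilization:.0%}")
# prints: theoretical: 25.0 GB/s, utilization: 90%
```

At roughly 90% of the theoretical line rate, the claim of "saturating" the links is plausible once protocol overhead is accounted for.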
Unstructured data workloads in the cloud can take full advantage of as many compute cores as are allocated, and of the bandwidth needed for the job, even saturating the cloud network that connects the compute environment to applications. A recent analysis of EDA workloads in Microsoft Azure showed that Hammerspace scales performance linearly, taking full advantage of the network configuration available in Azure. This high-performance cloud file access is needed for compute-intensive use cases, including processing genomic data, rendering visual effects, training machine learning models, and implementing high-performance computing architectures in the cloud.
High-performance capabilities between data centers and to the cloud in version 5 software include:
Support for Backblaze, Zadara and Wasabi
Continuous system-wide optimization to increase scalability, improve back-end performance, and improve resiliency in very large distributed environments
New Hammerspace management GUI, with user-customizable tiles, better admin experience, and increased observability of activity within shares
Increased scale, doubling the number of Hammerspace clusters supported in a single global data environment from 8 to 16 locations
2) High performance on the interconnect within the data center: saturate Ethernet or InfiniBand networks within the data center
Data centers need massive performance to ingest data from instruments and large compute clusters. Hammerspace helps reduce friction between resources, getting the most out of your compute and storage environment, by reducing idle time waiting for data to be ingested into storage.
Hammerspace supports a wide range of high-performance storage platforms that organizations have in place today. The power of the Hammerspace architecture lies in its ability to saturate even the fastest storage and networking infrastructures, orchestrating direct I/O and scaling linearly across otherwise incompatible platforms to maximize global throughput and IOPS. It does this while offering the performance of a parallel file system combined with the ease of standards-based global NAS connectivity and out-of-band metadata updates.
In a recent test deploying just 16 medium-configuration DSX server nodes, the Hammerspace file system took full advantage of the storage to reach 1.17 Tbps, the maximum throughput the NVMe storage could manage, with file sizes of 32 KB and low CPU usage. Tests demonstrated that performance would scale linearly to extreme levels if additional storage and networking were added.
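A quick arithmetic check (illustrative code, using only the figures quoted in this release) shows how the aggregate result squares with the linear-scaling claim: dividing the measured 1.17 Tbps across the 16 DSX nodes lands almost exactly on the 73.12 Gbps single-server figure cited later in this announcement.

```python
# Sanity check of the linear-scaling claim: aggregate throughput divided by
# node count should land near the single-server NVMe figure (73.12 Gbps)
# quoted elsewhere in this release.
aggregate_tbps = 1.17                         # measured aggregate, Tbps
nodes = 16                                    # DSX nodes in the test
per_node_gbps = aggregate_tbps * 1000 / nodes
print(f"{per_node_gbps:.2f} Gbps per node")   # prints: 73.12 Gbps per node
```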
Enhancements to high-performance interconnects within the data center in software version 5 include:
20% increase in metadata performance to speed up file creation in primary storage use cases
Accelerated collaboration on shared files in high-tenancy environments
RDMA support for global data over NFS v4.2, delivering high performance combined with the simplicity and open standards of NAS protocols for all data in the global data environment, regardless of location
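For readers who want to try NFS v4.2 over RDMA with the stock Linux client, a mount along these lines is the standard mechanism. This is generic Linux NFS client syntax, not Hammerspace-specific; the server name, export path, and mount point are placeholders:

```shell
# Hypothetical example: mount a share over NFS v4.2 with RDMA transport
# (RoCE or InfiniBand) using the in-kernel Linux NFS client.
# 20049 is the IANA-registered port for NFS over RDMA.
sudo mount -t nfs -o vers=4.2,proto=rdma,port=20049 \
    nfs-server:/share /mnt/share
```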
3) High-performance server-local I/O: provide applications with the maximum theoretical I/O subsystem performance of cloud instances, virtual machines, and bare-metal servers
High-performance use cases, edge environments, and DevOps workloads all benefit from harnessing the full performance of the local server. Hammerspace takes full advantage of the underlying infrastructure, delivering 73.12 Gbps from a single NVMe-based server, nearly the same performance through the file system as the same hardware would achieve with direct kernel access. The Hammerspace Parallel Global File System architecture separates the metadata control plane from the data path and can use the built-in parallel file system client of NFS v4.2 on Linux, resulting in minimal overhead in the data path.
For servers running at the edge, Hammerspace gracefully handles situations where edge or remote sites are disconnected. Because file metadata is global across all sites, local read/write continues until the site reconnects, at which time the metadata synchronizes with the rest of the global data environment.
David Flynn, Founder and CEO of Hammerspace and former Co-Founder and CEO of Fusion-IO
“Technology generally follows a continuum of incremental advances from previous generations. But every now and then, a leap forward is taken with paradigm-shifting innovation. Such was the case at Fusion-IO, when we invented the concept of highly reliable, high-performance SSDs that eventually evolved into NVMe technology. Another paradigm shift now awaits: creating high-performance global data architectures that integrate instruments and sensors, edge sites, data centers, and multiple cloud regions.”
Eyal Waldman, co-founder and former CEO of Mellanox Technologies, member of the advisory board of Hammerspace
“Innovation at Mellanox focused on increasing data center efficiency by providing the highest throughput and lowest latency possible in the data center and the cloud, delivering data to applications faster and unleashing system performance. I see high-performance access to global data as the next step in innovation for high-performance environments. The challenge of fast networks and fast computers was well solved for years, but making remote data available in these environments was a poorly solved problem until Hammerspace came to market. Hammerspace helps take cloud and data usage to the next level of decentralization, where the data resides.”
Trond Myklebust, Linux kernel NFS client maintainer and CTO of Hammerspace
“Hammerspace helped drive the IETF process and wrote enterprise-grade code based on the standard, making NFS4.2 enterprise-grade parallel performance NAS a reality.”
Jeremy Smith, technical director of Jellyfish Pictures
“We wanted to see if the technology really stood up to all the hype around RDMA-to-NFS v4.2 performance. The interconnectivity provided by RoCE/RDMA is truly exceptional. It was an obvious choice.”
Mark Nossokoff, Research Director at Hyperion Research
“The data consumed by both traditional HPC modeling and simulation workloads and modern AI and HPDA workloads is generated, stored, and shared across a disparate range of resources, such as the edge, HPC data centers, and the cloud. Current HPC architectures struggle to keep up with the challenges presented by such a distributed data environment. By addressing the key areas of large-scale collaboration while supporting system performance capabilities and minimizing potentially costly data movement in HPC cloud environments, Hammerspace aims to provide a key missing ingredient that many HPC users and system architects are looking for.”
Hammerspace provides a global data environment that spans on-premises data centers and public cloud infrastructure, enabling the decentralized cloud. With its origins in Linux, NFS, open standards, and flash, and its deep leadership in file system and data management technology, Hammerspace offers the world’s first and only solution for connecting global users with their data and applications on any existing data center infrastructure or public cloud service, including AWS, Google Cloud, Microsoft Azure, and Seagate Lyve Cloud.