Lozen and Performance: How Fast Is It? (Hint: Pretty Fast.)

When we talk about Lozen™ with customers and partners, some of the most frequent questions we get are about performance.

“How much latency is there?”

“How much bandwidth do I need?”

“Is it really practical to access my mainframe data from the cloud?”

To help answer these questions, we installed Lozen on a typical mainframe and captured performance statistics. The rest of this post describes the environment in which we ran these tests, the tests themselves, and the results.

The key takeaway from these tests is that Lozen delivers excellent throughput with modest CPU utilization. The test yielded 87% of local filesystem performance at a blazing 2.6 GB/s transfer rate when referencing z/OS VSAM data from another platform. This exceeded even our high expectations.

The Test Environment

Our mainframe test environment was a current-generation IBM z15 8562-T02 server. The LPAR we used for testing had ten dedicated general-purpose processors, five dedicated zIIP processors, and a total of 1TB of memory. The storage devices used in the test were multiple IBM DS8980F DASD arrays, and the network leveraged multiple OSA-Express7S network adapters, with each network port capable of 25Gbps. The software environment was z/OS 2.5 with the latest version of the IBM Java 11 JVM. A Db2 subsystem and several CICS transaction processing systems were also installed and running in the LPAR, although the system was otherwise quiet during our tests.

The client system we used was a large 40-core Intel-based Linux system running Red Hat Enterprise Linux (RHEL) version 8 with a typical mix of common open-source and Red Hat applications and middleware. The server had 256GB of main memory, 8TB of NVMe solid-state disk, and a Fibre Channel connection to an external SAN environment. For networking, the system included two Intel E810-CAM2 100G network interface cards, providing a total of four high-speed network ports configured as a single 802.3ad (LACP) bond. The network was configured with jumbo Ethernet frames. As was the case for z/OS, the system was quiet during our testing.
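For readers who want to set up a similar client-side network, the sketch below shows one way to build the bond on RHEL 8 with NetworkManager. It is illustrative only: the interface names (ens1f0 and so on) are assumptions, and your port names and LACP settings will differ.

# Create an 802.3ad (LACP) bond and attach the four 100G ports to it.
nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet con-name bond0-p1 ifname ens1f0 master bond0
nmcli con add type ethernet con-name bond0-p2 ifname ens1f1 master bond0
nmcli con add type ethernet con-name bond0-p3 ifname ens2f0 master bond0
nmcli con add type ethernet con-name bond0-p4 ifname ens2f1 master bond0

# Enable jumbo Ethernet frames on the bond, then bring it up.
nmcli con mod bond0 802-3-ethernet.mtu 9000
nmcli con up bond0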

The Test Data

For our testing, we defined 1GB, 10GB, and 100GB extended-format VSAM KSDS (keyed) files. For maximum performance, the VSAM objects were defined across multiple candidate volumes using VSAM’s data striping and System-Managed Buffering options. We were also careful to allocate the index components on volumes separate from the data to improve performance.

For the keyed VSAM files, we specified 256-byte records with the key in the first 8 bytes of each record. We used IBM’s File Manager product to populate the records with random data, setting each key to a sequential number and the data portion to random character strings. The largest allocation yielded a file containing over 350 million records.
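To make the record layout concrete, here is a small Linux sketch that builds similarly shaped records: an 8-byte, zero-padded sequential key followed by 248 bytes of random characters, for 256 bytes in total. This is purely illustrative and is not how File Manager populates VSAM data sets.

# Illustrative only: emit 1,000 records shaped like the test data.
for i in $(seq 1 1000); do
  printf '%08d' "$i"                                   # 8-byte sequential key
  tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 248      # 248 random characters
done > sample-records.dat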

Local Performance

To judge local performance, we used the IBM IDCAMS “PRINT” command, reading the entire VSAM object sequentially from beginning to end. Several runs helped us home in on optimum settings, and we were able to achieve a total throughput of slightly over 3.1 GB/s.

At this I/O rate, we found that CPU utilization in our environment averaged 22% and peaked at approximately 40% on a single general-purpose processor engine. We did not observe any significant performance differences when accessing the 1GB, 10GB, or 100GB files: I/O rates and throughput remained consistent, and elapsed times grew in proportion to the size of the file.
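To put those elapsed times in perspective: at a sustained rate of roughly 3.1 GB/s, reading the 100GB file end to end takes on the order of 100 GB ÷ 3.1 GB/s ≈ 32 seconds, while the 1GB file finishes in roughly a third of a second at the same rate.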

The Network Test

For our networked test, we installed and configured Lozen on z/OS. We created external links pointing to our test VSAM datasets, and we set up the network so that any NFS client could connect. We disabled debug logging and other features that can affect performance, and we assigned the Lozen server to a top-priority WLM service class to ensure it had access to all the resources it needed.

On our Red Hat Linux client, we installed no special VirtualZ software, opting instead to use the standard NFS driver bundled with RHEL, which in this case offered an implementation of the NFSv4.2 specification. We mounted the directory containing our VSAM test data with no special options.
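For reference, the mount looked something like the sketch below. The host name and export path are placeholders, not actual Lozen values; the mount point matches the command that follows. On RHEL 8, the client negotiates the highest NFS version both sides support, which here was v4.2, so no version option was needed.

# Host name (zoshost) and export path (/lozen) are placeholders.
sudo mkdir -p /mnt/Z
sudo mount -t nfs zoshost:/lozen /mnt/Z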

Once mounted, we used a simple Linux command that would open and sequentially read the target mainframe dataset from beginning to end:

cp /mnt/Z/vsam100g /dev/null

This command reads our mainframe VSAM file sequentially from beginning to end, in VSAM KSDS key order, giving us a simple way to benchmark I/O performance.

We ran several instances of this command and averaged the results, which are summarized in the table below. Run-to-run variation was small, generally under 5%.
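If you want to take a similar measurement yourself, the sketch below shows one approach. It substitutes dd for cp because dd reports throughput directly, and it drops the client page cache between runs so that each pass reads over the network rather than from local memory; the file name matches our earlier example.

# Repeat the sequential read five times and average the reported rates.
for i in 1 2 3 4 5; do
  sync
  echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null   # discard cached pages
  dd if=/mnt/Z/vsam100g of=/dev/null bs=1M status=progress # prints a GB/s summary
done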

The punchline is that in the configuration we tested, we were able to get about 87% of local filesystem performance when referencing z/OS VSAM data from another platform. Considering the networked nature of the test, we are very pleased with this result.

The Results

Test                                    Throughput
Local read (IDCAMS PRINT on z/OS)       slightly over 3.1 GB/s
Networked read (RHEL NFS via Lozen)     about 2.6 GB/s (roughly 87% of local)

Learn More

To learn more about how to unlock the power of real-time, read-write mainframe data access with Lozen, visit the VirtualZ Computing website.
