At VirtualZ, we are often asked about latency in the context of deploying Lozen™ for real-time, read-write data access. Much like our approach to security, latency was a top consideration when we designed Lozen, our groundbreaking data access solution.
What is Latency?
Latency is a key measure of a network’s performance. It is the time it takes for data to pass from one point on a network to another, and it often refers to delays incurred while network data is processed.
Many factors influence latency and end-to-end response time, including network bandwidth, volume of data, and file size, as well as mainframe capacity and whether zIIP engines are available and how busy they are.
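As a rough illustration of how these factors interact, end-to-end response time can be modeled as one round trip plus the time to move the payload over the link. The function below is a simplified sketch with made-up numbers, not a Lozen benchmark:

```python
def response_time_ms(payload_bytes: int, bandwidth_mbps: float, rtt_ms: float) -> float:
    """Rough end-to-end estimate: one network round trip plus transfer time."""
    transfer_ms = payload_bytes * 8 / (bandwidth_mbps * 1_000_000) * 1_000
    return rtt_ms + transfer_ms

# A 4 KB keyed record vs a 2 GB sequential pull over a 1 Gbps link with 10 ms RTT:
print(round(response_time_ms(4_096, 1_000, 10), 2))       # → 10.03 (latency-dominated)
print(round(response_time_ms(2_000_000_000, 1_000, 10)))  # → 16010 (bandwidth-dominated)
```

The point of the sketch: for small keyed reads, round-trip latency dominates; for bulk transfers, bandwidth and data volume dominate.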
Why is Latency Important?
A responsive, high-performing network underpins the success of most IT initiatives. Anything that increases latency therefore also undermines the success of these initiatives and end-user satisfaction.
This is highly visible with modernization initiatives like digital transformation or application re-platforming. As organizations migrate applications to the cloud and distributed platforms, for example, mainframe data access demands intensify. Ensuring highly responsive, secure data access is a key piece of the puzzle.
Latency is often determined by the characteristics and nature of the application. For example, a keyed lookup of a few records in a multi-terabyte file transfers only those records across the network and performs very well. If the same application instead opens the file and reads every record sequentially, it transfers vastly more data, consuming far more bandwidth and time.
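The difference between the two access patterns can be sketched with a local stand-in. This example simulates a keyed, fixed-length dataset in memory (the record layout and helper names are illustrative, not Lozen’s API) and counts the bytes each approach would move:

```python
import io

RECLEN = 80  # hypothetical fixed record length for this sketch

def make_dataset(n: int) -> io.BytesIO:
    """Simulate a keyed, fixed-length dataset as an in-memory byte stream."""
    return io.BytesIO(b"".join(f"{k:08d}".encode().ljust(RECLEN) for k in range(n)))

def keyed_read(ds: io.BytesIO, key: int):
    """Jump straight to the record by key: moves a single record."""
    ds.seek(key * RECLEN)
    return ds.read(RECLEN), RECLEN

def sequential_scan(ds: io.BytesIO, key: int):
    """Read record after record until the match: moves everything before it too."""
    ds.seek(0)
    moved = 0
    for rec in iter(lambda: ds.read(RECLEN), b""):
        moved += RECLEN
        if int(rec[:8]) == key:
            return rec, moved

ds = make_dataset(50_000)
_, keyed_bytes = keyed_read(ds, 49_999)
_, scan_bytes = sequential_scan(ds, 49_999)
print(keyed_bytes, scan_bytes)  # → 80 4000000
```

Both calls return the same record, but the sequential scan moves 4 MB to find what the keyed read fetched in 80 bytes; over a network, that gap becomes bandwidth and response time.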
Lozen’s Approach to Latency
We designed Lozen with latency in mind.
- At the most foundational level, Lozen eliminates the need to move or replicate data, because the data can stay where it is, safely and securely, on the mainframe.
- Lozen’s structure allows applications to do sophisticated things like search for records by key, eliminating the need to transfer massive amounts of data across the network.
- Lozen is built on industry-standard protocols like NFS and benefits from their inherent capabilities, such as caching and optimized buffer sizes, which help with latency and performance. When many applications concurrently access the same set of data, a record is fetched across the network only the first time it is referenced. It is then cached locally on whatever platform you’re using, and subsequent accesses run at memory or local-disk speed, eliminating any further network dependency or latency.
- We also designed Lozen to run on specialty processors (zIIPs). This helps with performance and cost — adding capacity in an efficient, low-cost way and reducing MIPS consumption. (Read more about our zIIP engine architecture here.)
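The caching behavior described above can be sketched with a local memoization stand-in. The `fetch_record` function and its counter are hypothetical; the point is that only the first reference incurs a remote round trip:

```python
from functools import lru_cache

remote_fetches = {"count": 0}  # how often we actually cross the network

@lru_cache(maxsize=None)
def fetch_record(key: int) -> str:
    """Hypothetical record fetch: only a cache miss reaches the remote server."""
    remote_fetches["count"] += 1
    return f"record-{key}"

for _ in range(1_000):
    fetch_record(7)              # 999 of these are served from the local cache
print(remote_fetches["count"])   # → 1
```

A thousand references, one network trip: everything after the first hit comes back at local memory speed, which is the effect NFS client caching provides.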
Today, there are many options for high-bandwidth connections. Cloud providers offer optimized, accelerated connectivity that, combined with Lozen, makes them suitable for enterprise applications. Lozen also performs very well in on-premises and co-located distributed environments, or when connected to your core network.
Whatever your needs, you can deploy Lozen with confidence that latency was treated as a critical design consideration. With Lozen’s unique approach, custom and packaged applications running anywhere, whether in the cloud, on distributed platforms, or on mobile devices, have real-time, read-write, efficient access to always-in-sync data on the IBM zSystems platform.
Listen to our CTO, Vince Re, talk in more detail about our approach to latency.
To learn more about how to unlock the power of real-time, read-write IBM zSystems data access with Lozen: