The Mainframe at 60
We thought we would celebrate the mainframe’s 60th anniversary – and have a little fun – by bringing back one of the least accurate predictions – ever – about anything:
“I predict that the last mainframe will be unplugged on March 15, 1996.”
– Stewart Alsop, March 1991
We gathered quotes from widely diverging individuals and organizations: some large, some small; those in the mainframe ecosystem and those who wouldn’t miss a beat if the last mainframe were unplugged tomorrow.
We’ll focus on execution at the end of this piece, but keep this running as a background partition until then: everything you read between here and there will reinforce the importance of:
- Running the right workloads – and only the right workloads – on the mainframe.
- Running the mainframe and its workloads as effectively as you possibly can.
- Sustaining a qualified and experienced team so that your most important IT asset can continue to thrive.
“Mainframes have more staying power than most understand”
David Linthicum, formerly of Cloud Technology Partners (now HPE) and Deloitte, and author of “An Insider’s Guide to Cloud Computing,” is as prominent a cloud evangelist as this writer has seen over the past ten years. He states:
…not all applications and data sets belong on the cloud, especially ones that reside on mainframes.
Although I have often been laughed out of a meeting for this opinion, adoption patterns have proved me correct. It’s often spun as a pushback on cloud computing itself, but it’s just pragmatism.
The truth is that we’re going to find applications and data sets (more than you think) that are not viable to move to the cloud. I call this the point of saturation: when we’re done moving most of the applications to the cloud that are practical to relocate. I’ve had mainframe applications in mind when saying that, and for good reason. [i]
Let’s look at some of those workloads and whom they serve.
Mainframe Staying Power: Our Continued Dependence
According to some reports, mainframes are used today by two-thirds of Fortune 500 companies. Mainframes are found in 45 of the top 50 global banks, eight of the world’s top 10 insurers, seven of the top 10 retailers, and eight of the top 10 telecom companies. Their reliability, security, and scalability make them a popular choice for critical business operations such as real-time fraud detection, credit card processing, and core banking. [ii]
Let’s add travel: at every tech conference where we’ve so much as heard the mainframe mentioned, someone points out that if you booked a hotel, car, or flight, you almost certainly touched a mainframe.
Many of these systems are decades old and there is no plan to invest in changing these mission-critical workloads – because the business case just isn’t there.
In addition to commercial applications, the Social Security Administration, Internal Revenue Service, and Centers for Medicare and Medicaid Services remain dependent on mainframe systems.
The Mainframe Continues to Drive Results for IBM
…the star of the IBM show is a mainframe, the Z16, and it grew revenue a whopping 16% for one of the most expensive and arguably most secure, capable, and reliable platforms in the market. For those new to technology, back in the 1980s, everyone, including most at IBM, thought the mainframe was virtually dead. [iii]
Never mind just “not dead”: the mainframe remains a profitable contributor to IBM, and continues to draw investment for use on some of the most important emerging technologies in today’s marketplace (which we’ll cover a little later in this article).
While the number of customers trends incrementally smaller over time, the number of MIPS under management has continued to grow steadily.
Beyond IBM itself, the mainframe has a bustling ecosystem around it, with regular and significant M&A activity.
“…a funny thing happened on the road to obsolescence”
People kept using mainframes. No matter how attractive cloud platforms become or how imperative modern features are to leading businesses, the mainframe continues to offer a compelling value proposition. Mainframes often host applications that can’t be moved to the cloud because it would be either too cost-prohibitive due to the substantial work needed to refactor applications or too risky due to the possibility of breaking system dependencies.
The trick is getting the mainframe to communicate with modern applications, and this is where leading enterprises are getting creative. [iv]
As early as 2018, this writer heard a CTO-level conference speaker from a prominent consultancy say, “We’re way past ‘cloud good/mainframe bad.’”
“We’re All Hybrid Now”
According to Kyndryl’s survey, as companies seek to transform their mainframe environment:
- 95% of respondents are moving at least some of their mainframe applications to cloud or distributed platforms, on average 37% of their workloads. Only 1% of respondents plan to move all workloads fully off the mainframe.
- 90% of respondents state the mainframe remains essential to their business operations due to its high levels of security, reliability, and performance while having the flexibility to move to cloud platforms for efficiency. [v]
Kyndryl’s findings support the point: we’re all hybrid now. And rightly so.
Indeed, our job is getting the most from disparate platforms – and, we’ll add, doing so through fact-based decisions that get past the emotions and the biases, using the right platform for the right purpose – whether that’s:
- Protecting the authoritative source of data
- Presenting that data to a customer, employee, or citizen in an ergonomic fashion
If data is the new oil, discoveries are growing at a double-digit rate every year.
…many organisations still rely heavily upon mainframes with data sitting on them around the 12EB (yes, exabyte) mark and growing at approximately 10-20% per year.
This is especially true in financial services where still a lot of the ‘crown jewels’ are kept. [vi]
The Mainframe and Data
How much data is on your mainframes? How many years of transactional data? Ask any data scientist and they will tell you the path to creating powerful machine learning models is shortened if a company has a rich set of historical information upon which to reason. If you historically have been siloed, you weren’t able to blend other rich data sets with that transactional data, so bringing that history to your estate in the cloud unlocks untold insights into the patterns of your business. [vii]
There’s a phrase used in AI/ML discussions: “the unfair advantage of data.”
The premise is that:
- At the outset of a pursuit, between two competing organizations, the better data scientists will produce better models
- Over time, the organization or the pursuit with access to more and better data will end up with the better models
Let’s consider the mainframe’s value in model building when we combine these massive repositories of data with the processing available on the mainframe.
Real-Time AI Inferencing on the Mainframe
“What’s cool about this latest announcement is we’ve now integrated AI inferencing particularly optimized for fraud detection in real-time inside the chip,” he said. The difference here is that there is usually a lag between the fraud being detected and the consumer learning about it. IBM wants to change that with the z16. “Now, that set of customers can access data [that fraud could be occurring] in real-time inside the chip. It’s all enabled by our own VLSI chip called Telum that we announced last [year], but this will be the first system shipping with that chip.” – Rick Lewis, IBM [viii]
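The operational shift described above is from batch fraud review after the fact to scoring each transaction inline, before it commits. Here is a minimal sketch of that pattern; the model, features, and threshold are invented for illustration and stand in for the accelerator-backed inferencing IBM describes:

```python
# Illustrative sketch only: a toy in-transaction fraud check. The "model"
# is a hypothetical linear score, not IBM's on-chip inferencing.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    merchant_risk: float  # 0.0 (trusted) .. 1.0 (high risk)

def fraud_score(txn: Transaction) -> float:
    # Toy linear model standing in for a trained classifier.
    return min(1.0, 0.4 * txn.merchant_risk + 0.6 * min(txn.amount / 10_000, 1.0))

def authorize(txn: Transaction, threshold: float = 0.8) -> bool:
    # Inline scoring: the decision happens before the transaction commits,
    # rather than in a batch job that flags fraud hours later.
    return fraud_score(txn) < threshold

print(authorize(Transaction(amount=25.00, merchant_risk=0.1)))    # low risk: approve
print(authorize(Transaction(amount=9500.00, merchant_risk=0.9)))  # high risk: decline
```

The point of putting inferencing on the chip is that this inline check fits inside the transaction’s latency budget instead of adding a round trip to a separate scoring service.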
Elsewhere, IBM goes on to identify an insurance use case:
We are bringing artificial intelligence (AI) to emerging use cases that our clients (like Swiss insurance provider La Mobilière) have begun exploring, such as enhancing the accuracy of insurance policy recommendations, increasing the accuracy and timeliness of anti-money laundering controls, and further reducing risk exposure for financial services companies. [ix]
The Mainframe’s 60th Anniversary: Not a Retrospective
April 7 marks a significant chapter in mainframe history—its 60th anniversary. This occasion is far more than a historical commemoration; it symbolizes a pivotal moment for the industry to project forward. It is a reflection of the mainframe’s resilience and its ability to adapt through decades of technological change and evolving business needs. As we celebrate this milestone, the emphasis should be on the future, imagining the evolving role of mainframes in spearheading the next phase of digital transformation. [x]
This observation from the Futurum Group is perfect to introduce the mainframe’s support of technologies which, 60 years ago, were mere imagination relative to their current form.
Not Just AI/ML, but Other Emerging Technologies
According to McKinsey, the worldwide number of IoT-connected devices is projected to increase to 43 billion by 2023. As a result, billions of devices will perform trillions of transactions. There is a pressing concern regarding managing the processing capacity and securing these transactions. We’re talking about 10 or 100 times the current storage and computing capacity. Mainframes are suited to rise to the occasion at such a scale. They are designed to handle such a load. Hence, developing the IoT landscape will improve the scope of use of mainframe technology. [xi]
The Internet of Things brings technology to all of us in some of the most personal ways (think healthcare applications here). To be sure, a great many IoT workloads will be executed “at the edge,” with data and processing that never return even to a cloud, let alone to a corporate data center. But for other workloads, according to HCL, the mainframe can respond at the scale demanded.
Blockchain
One of the most attention-grabbing new uses in which mainframes now excel is blockchain. The mainframe’s advantages over x86 servers in response time, transaction throughput, scalability, and particularly security make it an ideal blockchain host.
That security advantage is decisive. The blockchain model is entirely dependent on transaction records being carried in a chain of data blocks that, once assembled, cannot be changed. Because of their superior processing power, mainframes can protect 100% end-to-end encryption without degrading performance. In fact, IBM claims that its mainframes encrypt data 18 times faster than x86 platforms at just 5% of the cost. [xii]
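The immutability the quote depends on comes from a simple structural property: each block carries the hash of its predecessor, so altering any block invalidates every block after it. A minimal sketch, with an invented block layout rather than any real ledger format:

```python
# Minimal hash-chain sketch illustrating why assembled blocks "cannot be
# changed": tampering with any block breaks every downstream link.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}

def verify_chain(chain: list) -> bool:
    # Recompute each link; a tampered block no longer matches the hash
    # its successor recorded.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("pay A -> B 100", block_hash(chain[-1])))
chain.append(make_block("pay B -> C 40", block_hash(chain[-1])))

print(verify_chain(chain))          # True: untampered
chain[1]["data"] = "pay A -> B 999"
print(verify_chain(chain))          # False: tampering detected
```

Verifying and encrypting these links at scale is exactly the kind of cryptographic throughput where the mainframe’s hardware acceleration is claimed to pay off.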
The combination of mainframe processing power with historically superior security makes blockchain a promising workload for IBMz.
Let’s move to ever more stringent compliance requirements.
The Mainframe’s Value in Meeting Ever-Stricter Compliance
Regulations are increasing and becoming stricter over time, increasing the cost of compliance. With complex hybrid-cloud environments, stitching together all the information needed for an audit is complicated and time-consuming.
With IBM z16, IBM is introducing a new software product, IBM Z Security and Compliance Center, which automates the collection and validation of evidence against a set of controls across the system, operating system, middleware, and applications. The first release will focus on PCI DSS and NIST 853 security controls, with more planned. The solution includes a centralized interactive dashboard displaying compliance posture in real time. [xiii]
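The pattern being automated is straightforward: collect system facts once, validate them against a catalog of controls, and surface pass/fail posture on a dashboard. A hypothetical sketch of that pattern; the control names and checks below are invented for illustration and are not the actual PCI DSS or NIST controls, nor the IBM product’s API:

```python
# Hypothetical sketch of automated compliance checking: validate collected
# system facts against a control catalog. Controls and checks are invented.
from typing import Callable, Dict

CONTROLS: Dict[str, Callable[[dict], bool]] = {
    "passwords-expire":  lambda facts: facts["password_max_age_days"] <= 90,
    "tls-everywhere":    lambda facts: facts["min_tls_version"] >= 1.2,
    "audit-log-enabled": lambda facts: facts["audit_logging"] is True,
}

def assess(facts: dict) -> Dict[str, bool]:
    # One pass over the catalog yields the dashboard's pass/fail view.
    return {name: check(facts) for name, check in CONTROLS.items()}

# Facts as an evidence collector might report them.
facts = {"password_max_age_days": 60, "min_tls_version": 1.2, "audit_logging": False}
posture = assess(facts)
print(posture)  # audit-log-enabled fails; the other two pass
```

The value over a manual audit is that the evidence is gathered and re-validated continuously, so the dashboard reflects current posture rather than a point-in-time snapshot.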
One analyst’s view of the future
What is the future of mainframes? Forrester’s Brent Ellis believes mainframes will continue to exist in the enterprise for the foreseeable future, if for no other reason than that migrations take a long time. “We have software vendors that are committed to 2050 for supporting different software packages in this environment,” he says. “The longevity of the platform is really amazing to me.” [xiv]
* * * * *
At the outset of this article, we noted that everything you read between there and here – about current workloads and their continued growth – would reinforce the importance of:
- Workload Placement Strategy
- Operating effectiveness
- Sustainability of a high-quality team
With over 35 years on the mainframe, GTSG is called upon to assist clients with all three issues. Three short illustrations:
Workload placement strategy: One of our clients relies on the mainframe to manage sensitive data for millions of customers. They engaged us, looking to leverage the cloud to improve the agility and effectiveness of several key workflows. We found disagreement among senior executives: some saw the mainframe’s value and would selectively move elements of the workload to new platforms over time; some would decommission quickly, risking operations during the transition.
GTSG led the client through a series of classic consultative structured decision-making processes. The client identified representative Business Process Flows, for which we developed Solution Alternatives with benefits and considerations identified for each, with a recommended course of action.
In addition to “answering the question,” we helped the client resolve conflict with fact-based alternatives and recommendations. We provided a deliverable that serves as the client’s guide to action through the multi-year effort.
The answer in this case was, “No, we can’t just ‘turn the mainframe off’ – but we can begin to add capability through integration with public cloud capabilities.”
Operating effectiveness: a client outsourced to a Tier 1 mainframe managed services provider struggled to locate a sufficiently technical mainframe resource to oversee the contract. Experienced mainframers are in short supply, particularly among outsourcers, resulting in what we call “run/operate” vs. “manage/improve” behavior.
The client and GTSG decided together to begin with a performance and workload management engagement. Our team brought, collectively, hundreds of years of experience across hundreds of shops in the mainframe towers relevant to this particular client. They began looking at the largest consumers of capacity – HSM, DB2, CICS, and other high-consuming services – asking a deceptively simple question, “What seems off?”, which only works at this level of experience. The results were staggering – you can see them here – and the client extended our involvement to its parent organization in Europe.
Sustaining a high-quality team: a client reached out to us looking to plan for the impending retirements of their team. They chose not to outsource the environment wholesale: rather, they chose to work with us on an extended transition so that their team could finish their careers on their terms. We’ve been engaged there for a couple of years now, our team understands their environment (eliminating transition costs down the road), and we add resources as required – entirely on the client’s timeline.
* * * * *
We hope this analysis has been of value to you. To discuss further, please write us at Mainframe@GTSG.com. Thank you.