Notes from Gartner IOCS: GTSG Focus Areas

Over three thousand infrastructure & operations practitioners gathered in Las Vegas in December to hear from analysts, and to network with each other, at the infrastructure world’s premier technology-agnostic exchange of best practices. As not everyone could attend, we took notes.

The Value of Gartner’s Market Reach

GTSG is recognized in Gartner research for its strategy, workload migration, disaster recovery, and mainframe expertise. Over 100 of our consultants, architects, technical subject matter experts, and technical project managers “go deep” on mission-critical client engagements ranging from weeks to years, whereas each Gartner analyst may have 1,000 interactions with a broad range of organizations annually. Gartner then synthesizes these interactions into trends and best practices applicable within and across industry segments.

GTSG engages with dozens of these analysts each quarter. We benefit from their unparalleled market view and collective insight; our clients benefit from the knowledge that our methods are consistent with Gartner-recommended practices and solution paths.

In this second of two posts on the event, we’ll provide some quick takeaways in areas relevant to our practice.

  • Workload Placement Strategy
  • Technical Debt
  • Edge
  • Disaster Recovery
  • Midsize Enterprise Impact
  • Sustainability
  • And, of course, our roots – the mainframe

Workload Placement Strategy

Analyst Chris Saunderson presented “Delivering Great Hybrid IT Workload Placements: Cloud vs. On-Premises vs. Colocation vs. Edge.”

He noted the evolution:

  • the simple days of the past when all workloads ran on-prem with a DR site
  • today’s complex infrastructure mesh, in which on-prem, colocated, IaaS/PaaS/SaaS cloud, hosted, micro data center, edge, and IoT workloads all move data among one another
  • tomorrow’s goal of “The Right Workload, in the Right Place, for the Right Reasons, at the Right Time.”

He showed the distribution of workloads over time: on-prem data center workloads have declined continuously since the advent of colocation and SaaS a generation ago, to housing perhaps half of workloads today. That decline appeared, perhaps (it’s a short sampling period), to be leveling off with the advent of GenAI and HPC demand.

Chris also discussed methodology, depicting optimal workload placement as a complex function of organizational factors, preferences, and drivers; workload attributes and requirements; and current constraints, industry trends, and benchmarking.
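
As a thought experiment only, here is a minimal sketch of how such a placement function could be expressed as a weighted score per venue. The criteria, weights, venues, and ratings below are our own illustrative placeholders, not Gartner’s or Chris’s model.

    # Minimal sketch (illustrative only) of a weighted workload-placement score.
    # Criteria, weights, and venue ratings are hypothetical placeholders.

    WEIGHTS = {
        "latency_sensitivity": 0.25,   # workload requirement
        "data_gravity": 0.20,          # where the data already lives
        "compliance": 0.20,            # regulatory / organizational constraint
        "cost_pressure": 0.20,         # organizational driver
        "elasticity_need": 0.15,       # workload attribute
    }

    # How well each venue satisfies each criterion for one workload (0-10).
    VENUES = {
        "on_prem":      {"latency_sensitivity": 8, "data_gravity": 9, "compliance": 9,
                         "cost_pressure": 4, "elasticity_need": 3},
        "colocation":   {"latency_sensitivity": 7, "data_gravity": 7, "compliance": 8,
                         "cost_pressure": 5, "elasticity_need": 4},
        "public_cloud": {"latency_sensitivity": 5, "data_gravity": 4, "compliance": 6,
                         "cost_pressure": 7, "elasticity_need": 9},
        "edge":         {"latency_sensitivity": 9, "data_gravity": 6, "compliance": 5,
                         "cost_pressure": 4, "elasticity_need": 5},
    }

    def placement_score(ratings):
        """Weighted sum of criterion ratings for one venue."""
        return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

    for venue in sorted(VENUES, key=lambda v: placement_score(VENUES[v]), reverse=True):
        print(f"{venue:12s} {placement_score(VENUES[venue]):.2f}")

In practice, the weights come out of the organizational drivers and preferences, and each workload gets its own set of ratings; the point is simply that the decision is multi-factor, not a default.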

His recommendation is to adopt a hybrid IT approach, blending options to capitalize on the strengths of each rather than defaulting to a preferred venue. The right mix will differ for each organization. We must seek continuous improvement, adjusting workload placements to improve outcomes.

GTSG agrees: the answer is always in the analysis. There is no shortcut through the detail.

We further believe that a change in workload placement strategy is akin to any other expenditure: an investment requiring a return, whether that return is measured in financial terms or in more qualitative business terms. To discuss our approach to the right strategy for your organization, write us at Partners@GTSG.com.

Technical Debt

Roger Williams opened “Tactical Technical Debt Reduction and Its Consequences” by asking who was using the cloud as the automatic solution for technical debt, then asked, “How’s that working out for you?” Based on the chuckles from the audience, the answer appears to have been, “Not particularly well, thanks for asking.”

He provided a straightforward comparison of the “project” approach to sourcing infrastructure vs. the acquisition of cloud services:

  • The project approach is good for capex cost optimization and “sweating assets” when necessary (if you can live with the MTBF) but can lead to higher technical debt requiring more transformation. It carries a set capacity.
  • Cloud services reduce technical debt from N/N-1-2-3 to N/N-1 in the vast majority of cases and drive more standardization and less customization, at the cost of greater lock-in but with greater elasticity (a minimal sketch of this version-lag framing follows this list).
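
For illustration only, the version-lag idea can be expressed in a few lines of Python; the asset names, versions, and the N-1 threshold below are hypothetical, not from Roger’s session.

    # Illustrative sketch: flagging technical debt by version lag, echoing the
    # session's N / N-1 / N-2 / N-3 framing. Assets and thresholds are made up.

    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        latest_version: int    # the current release generally available ("N")
        running_version: int   # the version actually deployed

        @property
        def lag(self) -> int:
            return self.latest_version - self.running_version

    def debt_flag(asset: Asset) -> str:
        if asset.lag <= 1:
            return "acceptable (N or N-1)"
        return f"technical debt (N-{asset.lag}): plan remediation"

    fleet = [
        Asset("hypervisor-cluster-a", latest_version=8, running_version=8),
        Asset("db-platform-b", latest_version=15, running_version=12),
    ]

    for a in fleet:
        print(f"{a.name}: {debt_flag(a)}")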

Gartner believes that over the next several years, disruption in the server virtualization market will result in more than 60% of enterprises accelerating their public cloud migration and exploring the re-virtualization of existing virtual workloads.

For years, GTSG has assisted clients with the analysis, business case, and successful migration away from platforms where vendor pricing became untenable, particularly after an acquisition. To discuss your options to deal with this market disruption, write us at Partners@GTSG.com.

Edge

Several sessions covered edge computing:

  • The wild, wild edge computing marketplace: Tom Bittman discussed
    • the growth of hyperscaler distributed cloud solutions (modest)
    • the penetration of edge deployments in top tier retailers (extensive)
    • the distributed nature of data (over half outside the data center or cloud by next year), and
    • the rise of machine learning at the edge within two years (half of edge deployments will utilize it).

The number of IoT devices will continue to grow rapidly, and the volume of data they produce will grow faster still.

  • Examples of Edge Computing, Everywhere: Tom also provided industry profiles that showed transportation, manufacturing and natural resources, and energy and utilities as the top adopters – but even the bottom industry on his list, wholesale, showed 50% of CIO survey respondents expecting to adopt within three years.
  • Tom also presented on edge computing strategy, and Jeff Hewitt ran a complementary session on building the business case to solve a problem Jeff crisply defined: “Inadequate business acumen, a frequently reported I&O failing, leads to incorrect prioritization, missed targets, inadequate value realization and missed opportunities.” We must address this potential shortcoming in our own organizations by translating the four major drivers of edge workloads – latency reduction, autonomy, data/bandwidth and privacy/security – into business terms (a simple illustration follows this list).
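
As a simple illustration of that translation, and not Jeff’s framework, here is a sketch that restates the four drivers as business-facing language for a hypothetical candidate workload; the workload and wording are ours.

    # Illustrative sketch: translating the four edge drivers into business-facing
    # statements for a candidate workload. The workload and phrasing are hypothetical.

    EDGE_DRIVERS = {
        "latency_reduction": "faster response at the point of activity (e.g., checkout, line control)",
        "autonomy":          "operations continue when the WAN link or central site is down",
        "data_bandwidth":    "process data locally instead of paying to haul it all back",
        "privacy_security":  "sensitive data stays on site, narrowing regulatory exposure",
    }

    def business_case(workload: str, relevant_drivers: list[str]) -> str:
        lines = [f"Edge candidate: {workload}"]
        for driver in relevant_drivers:
            lines.append(f"  - {EDGE_DRIVERS[driver]}")
        return "\n".join(lines)

    print(business_case("in-store video analytics",
                        ["latency_reduction", "data_bandwidth", "privacy_security"]))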

GTSG has built a straightforward strategy engagement that can help frame an organization’s approach to edge adoption. For more on this, please write us at Partners@GTSG.com.

Disaster Recovery

Stanton Cole opened “Top Mistakes to Avoid When Implementing Disaster Recovery in the Public Cloud” by asking via poll, “Do you think your Disaster Recovery is done right?” The last we saw on the screen, 73% of 116 respondents said “no.”

He believes the top mistakes are

  • Not starting with a strategy that includes goals, assessment, criticality tiers (“everyone has the most important”), an Application Recovery Matrix and a Ransomware Response Plan, and identification of RTOs and RPOs – the basics (a minimal matrix sketch follows this list)
  • Not aligning the cloud strategy with DR: resilience should be built into the approach to cloud
  • Not using cloud as part of the DR strategy: Stanton identified operations quality, change management, logical design, physical design, and implementation quality as critical factors of resilient systems
  • Misjudging DR in the cloud: preconceptions about the cloud’s inherent resilience, and failing to base the approach on criticality
  • Not understanding what types of failures to plan for – from service, to fault domain, to full data center, to availability zone, to region, to geography
  • Not continuously evolving DR alongside production, which can propel you forward to resilience
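
To make the “basics” in the first bullet concrete, here is a minimal sketch (ours, not Stanton’s) of an application recovery matrix: criticality tiers with target RTO/RPO in minutes, plus a check of tested results against tier targets. All names and numbers are illustrative.

    # Minimal sketch of an application recovery matrix with tier-based RTO/RPO
    # targets. Tiers, targets, and applications are illustrative placeholders.

    from dataclasses import dataclass

    # Tier targets in minutes: (RTO, RPO)
    TIER_TARGETS = {
        1: (60, 15),      # mission critical
        2: (240, 60),
        3: (1440, 240),   # recover within a day
    }

    @dataclass
    class AppRecoveryEntry:
        app: str
        tier: int
        tested_rto_min: int
        tested_rpo_min: int

        def gaps(self) -> list[str]:
            rto_target, rpo_target = TIER_TARGETS[self.tier]
            issues = []
            if self.tested_rto_min > rto_target:
                issues.append(f"RTO {self.tested_rto_min}m exceeds tier-{self.tier} target {rto_target}m")
            if self.tested_rpo_min > rpo_target:
                issues.append(f"RPO {self.tested_rpo_min}m exceeds tier-{self.tier} target {rpo_target}m")
            return issues

    matrix = [
        AppRecoveryEntry("payments", tier=1, tested_rto_min=45, tested_rpo_min=30),
        AppRecoveryEntry("reporting", tier=3, tested_rto_min=600, tested_rpo_min=120),
    ]

    for entry in matrix:
        print(entry.app, entry.gaps() or "meets tier targets")

Even at this level of simplicity, the matrix forces the conversation about which applications truly belong in the top tier.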

Notes from Stanton’s advice for “when you return to the office”:

  • Don’t try to change everything immediately; gather critical information (what you have and what you need), build the vision, and determine where you can start
  • Over the next 12 months, coordinate tabletop walkthroughs to validate that implemented designs work as expected, identifying any gaps, and then test the implemented workloads

GTSG’s DR experience dates from immediately post-9/11 in the financial services sector and extends across industries today. We help with everything from building the business impact analysis to program design, building application recovery designs and procedures (ARDs/ARPs), and tabletop exercises through actual recovery. If our experience is of interest to you, please write us at Partners@GTSG.com.

Midsize Enterprise Impact

Mike Cisek presented several sessions on the midsize environment he knows as well as anyone. Mike always frames his message by defining the mid-sized enterprise, focusing on the highly constrained resource pool of versatilists (with limited specialists) who are responsible for both run and change.

The three presentations were:

The 6 Building Blocks of Infrastructure and Operations Maturity in Midsize Enterprises:

Optimizing I&O maturity is the most effective means to maximize the output of a small I&O team. Working from Gartner’s IT Score for I&O, Mike calls out the six target activities that impact smaller organizations: rationalizing new requests, optimizing IT service management processes, supporting IT services, building effective teams and culture, developing the I&O strategy, and managing I&O finance and budgeting.

An I&O Structure for Midsize Enterprises to Drive Execution:

Despite 90% of survey respondents believing that MSE IT is much more than a “cost center to be managed,” MSE CIOs and leaders spend 75% of their time focused on internal operations and process.

He recommends that these organizations of 10 to 30 FTEs be structured for operational efficiency, focused on improving user productivity and delivering against business-focused metrics and reports.

Architectural simplicity is linked to operational efficiency: a single vendor ecosystem is operationally expedient; it aids with upskilling, outsourcing, and process maturity; and, done right, architecture proactively solves recoverability.

Mike illustrated with a diagram depicting team members residing organizationally where run work requires, but “matrixed” into virtual roles for grow and transform activities.

Leadership Vision for 2024: Midsize Enterprise Technology Leader:

Here, Mike

  • correlates trends (pursuit of operational excellence, security vendor consolidation, the requirement for diversified infrastructure investment) with
  • challenges (execution at scale, data governance, managing cloud costs and operations, managing cloud technical challenges), then
  • recommends actions which include maintaining a high level of cost visibility and discipline; adoption of a data-driven culture; and making intentional workload placement decisions.

Sustainability

Autumn Stanish opened “The I&O Leader’s Guide to Sustainable Technology” by noting that pressure is rising due to state and federal regulation, CIO priority alignment with the business, GenAI’s energy consumption, and the shortage of resources. She identified gaps in the current state – manual measurements leaving blind spots, a lack of priority-setting, and the sometimes misleading nature of vendor information.

Improvement recommendations cover the data center (product energy efficiency and temperature control), the digital workplace (endpoint carbon footprint and circularity), and cloud (workload placement and tools and services). One of Autumn’s client illustrations involved the implementation of DCIM, enabling a server consolidation that returned a 75% power saving in its facilities and deferred the need for a new building.

She prescribed sustainability insights dashboards whose outcomes include rightsizing hardware configurations to avoid wasted value, dynamic workload placement for the highest level of efficiency, moving from “always on” to “always available,” fine-tuning data center temperature, optimizing asset lifecycles, and identifying “zombie” equipment.
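
Two of those outcomes, zombie identification and rightsizing, lend themselves to a simple illustration. The sketch below is ours, with hypothetical servers and thresholds, and is only meant to show the kind of signal such a dashboard might surface.

    # Illustrative sketch: flagging "zombie" servers and rightsizing candidates
    # from utilization metrics. Servers and thresholds are hypothetical.

    servers = [
        # name, avg CPU %, peak CPU %, network I/O (GB/day)
        ("app-prod-01",  42.0, 85.0, 120.0),
        ("legacy-batch",  1.5,  3.0,   0.2),
        ("dev-sandbox",   6.0, 20.0,   1.0),
    ]

    def classify(avg_cpu: float, peak_cpu: float, net_gb_day: float) -> str:
        if avg_cpu < 2 and peak_cpu < 5 and net_gb_day < 1:
            return "zombie candidate: verify owner, then decommission"
        if peak_cpu < 30:
            return "rightsizing candidate: smaller instance or consolidation"
        return "keep as-is"

    for name, avg_cpu, peak_cpu, net in servers:
        print(f"{name:12s} {classify(avg_cpu, peak_cpu, net)}")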

She also cautioned us against what she called “disingenuous” vendor behaviors taxonomized as greenwashing, “greenwishing,” greenhushing (when an enterprise underreports or hides its credentials to avoid scrutiny), and greenflation.

GTSG, with our partner BRUNS-PAK, can help you assess where you are and make real progress on your sustainability journey. Please write us at Partners@GTSG.com to discuss.

Mainframe

Dennis Smith now covers mainframe strategy, which is of key interest to GTSG. The mainframe was critical to our origin, and mainframe services remain a substantial part of our work.

The core of his messaging is around the need to develop a strategy. He categorizes strategies as “retire,” “caretake,” or “leverage.” (Dennis noted that about one-third of end users in the presentation inhabited each category.)

He distinguishes modernization on the platform from migration away from it.

  • Technology swap is for the “retire” category; it incorporates replatform, refactor, and rehost approaches
  • Modernization is for both caretake and leverage, incorporating rehosting, outsourcing or even insourcing (something we’ve also done for clients here at GTSG on a large scale).

Key points:

  • Hybrid is the future. It won’t end well if you have a mainframe and you’re not including it in your strategy. (Dennis is neither selling the mainframe nor advocating migration away from it. He is simply stating that if you have one, you need to modernize it; you can’t just “stand pat”.)
  • While the number of mainframe installations may decrease, usage (MIPS consumed) increases within the enterprises that retain it. He believes the mainframe is ideal for low latency requirements with high throughput, security integrated with hardware and software, and added functionality with no drop in performance.
  • He shares mainframe modernization (again, not re-platforming) approaches, which include improving the developer experience, DevOps implementation, API enablement, AI/ML incorporation, automation, the hybrid data grid (bringing real-time to systems of record while providing interactive access to applications and developers), and sustainability (consider the mainframe vs. horizontal server buildout, and study which is cheaper). A generic illustration of API enablement follows this list.
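
That illustration is below: a thin REST facade over a placeholder transaction call, written with Flask. It is a generic sketch of the API-enablement pattern, not a specific product’s interface; the endpoint, function, and data are invented for the example.

    # Generic sketch of API enablement: a thin REST facade over a hypothetical
    # mainframe transaction call, so distributed applications consume the system
    # of record through an API rather than screen scraping. Names are placeholders.

    from flask import Flask, jsonify

    app = Flask(__name__)

    def call_legacy_transaction(account_id: str) -> dict:
        """Placeholder for the actual transaction invocation (via whatever
        integration layer your site already uses); hard-coded for this sketch."""
        return {"account_id": account_id, "balance": "1234.56", "currency": "USD"}

    @app.route("/accounts/<account_id>/balance")
    def account_balance(account_id: str):
        return jsonify(call_legacy_transaction(account_id))

    if __name__ == "__main__":
        app.run(port=8080)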

It’s the “caretake” category Dennis cautions against: if you don’t have a plan to retire or to leverage the platform for its strengths, you’re going to have to figure that out by 2030 because

  • you’re going to struggle for even more scarce resources,
  • and yes, you’ll be vulnerable to price increases, but
  • most importantly, you’re going to sacrifice capability you’ll need; it’s more dangerous from a business perspective than from a technology standpoint

The workload analysis is critical. We must go “house to house,” Dennis says, application by application. Almost every modernization or migration failure he’s seen has been rooted in the inability or failure to perform this analysis successfully. GTSG agrees, based on our experience both doing the analysis and observing how clients make decisions: data-driven is the key.
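
As one way to picture that application-by-application analysis, here is a sketch of a per-application inventory record with a rough disposition hint. The fields, scoring, and applications are our own illustrative placeholders, not Dennis’s framework.

    # Illustrative sketch: a per-application "house to house" inventory record
    # with a rough disposition hint. Fields, rules, and applications are made up.

    from dataclasses import dataclass

    @dataclass
    class MainframeApp:
        name: str
        mips_share_pct: float        # share of installed MIPS consumed
        latency_sensitive: bool
        batch_window_critical: bool
        release_cadence_per_year: int
        skills_available: bool

        def disposition_hint(self) -> str:
            if self.latency_sensitive or self.batch_window_critical:
                return "leverage: modernize in place (APIs, DevOps, automation)"
            if self.release_cadence_per_year == 0 and not self.skills_available:
                return "retire/replatform candidate: low change, thin skills"
            return "needs deeper analysis"

    inventory = [
        MainframeApp("core-banking", 35.0, True, True, 6, True),
        MainframeApp("old-reporting", 2.0, False, False, 0, False),
    ]

    for a in inventory:
        print(f"{a.name}: {a.disposition_hint()}")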

* * * * *

We hope these notes have been of value to you. If you want to talk further, please write to us at Partners@GTSG.com.