DATA DRIVEN TRANSFORMATION: The case of a manufacturing organization leveraging big data and data mesh to drive competitive advantage

Written by: Dr. Anurag Vij

We recently wrote about using a product-centric approach (vs. traditional projects) to drive digital transformation. Whilst there are many factors underlying a successful digital transformation, enterprise data, and how it is leveraged across the analytical planes of the enterprise, is critical to the success of any transformative effort at scale. This post shares a point of view on applying a similar product-centric approach to big data through a data mesh architecture, which we believe will in time become part of the DNA of every successful enterprise, giving it the ability to continuously evolve and transform.

Although it’s logically possible to take an industry-agnostic approach to enterprise data, analytics are only meaningful and drive the expected business outcomes when they possess real-time contextual awareness. For that reason, instead of discussing enterprise data without a business context, I will take the example of a regional manufacturing organization (referred to as RMO through the rest of this article) that produces industrial chemicals. RMO is pursuing several transformative efforts that rely heavily on analytics, insights, and the business decisions that can be made by leveraging the enterprise data spread across its multinational and complex organization. This transformation is spearheaded by RMO’s recently appointed Chief Data Officer (CDO), with whom I sat down.

RMO has been in business for a few decades and has implemented several industry-recognized best practices: driving lean manufacturing, implementing Six Sigma, and continuously training its workforce on its evolving processes and business methodologies. Although the broad consensus across the organization is that its operations are lean and among the best in the world, the leadership believes it needs to strive for new methods of driving competitive advantage that emerge from the quality of its products, the speed of production, and the cost of production, with data at the heart of all things.


RMO has outlined a 2×5 matrix across the shop floor and the supply chain, comprising a total of ten OKRs that RMO believes will create competitiveness across quality, speed, and cost. For brevity, I am only listing the objectives and the data-led priorities that define the key results:

Shop Floor:

a. Improve Quality (5 points): Implement real-time production process monitoring, equipment fault monitoring and root cause analysis, and variability monitoring.

b. Increase Yield (2 points): In addition to process and equipment monitoring, implement the ability to assess process complexity and effectiveness in the context of variability and its impact on yield (reducing defective products as a percentage of total products produced).

c. Improve equipment uptime and performance, as defined in OEE (Overall Equipment Effectiveness) (5 points): Implement fault prediction and predictive maintenance.

d. Reduce Waste (3 points): Implement streamlined inventory management, recycling, scaling and substitution, and minimize production line stoppages.

e. Improve worker wellness by reducing leaks and accidents (10 points): Implement IIoT (Industrial Internet of Things) sensors in conjunction with SCADA systems to enhance fault and leak prediction analytics that drive swift decisions to avoid hazardous situations, implemented through dashboards and frontline worker IoT devices.
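OEE, referenced in objective (c), is conventionally computed as the product of availability, performance, and quality. The sketch below illustrates the standard calculation; the shift figures are illustrative numbers, not RMO data.

```python
# Overall Equipment Effectiveness (OEE) = availability x performance x quality.
# All figures below are illustrative, not RMO's actual production data.

def oee(planned_time_min: float, downtime_min: float,
        ideal_cycle_time_min: float, total_count: int,
        good_count: int) -> dict:
    """Compute OEE components from one shift's production counters."""
    run_time = planned_time_min - downtime_min
    availability = run_time / planned_time_min          # uptime vs planned time
    performance = (ideal_cycle_time_min * total_count) / run_time  # actual vs ideal pace
    quality = good_count / total_count                  # good units vs total units
    return {
        "availability": availability,
        "performance": performance,
        "quality": quality,
        "oee": availability * performance * quality,
    }

# Example: a 480-minute shift with 47 minutes of stoppages,
# a 1.0-minute ideal cycle time, 400 units produced, 388 of them good.
metrics = oee(480, 47, 1.0, 400, 388)
print({k: round(v, 3) for k, v in metrics.items()})
```

Fault prediction and predictive maintenance then attack the availability term directly, which is why OKR (c) pairs them with the OEE target.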

Supply Chain:

a. Achieve near real-time demand forecasting and order management: Analytics based on sales, predictive forecasts, supplier inventories, and geopolitical and other factors that impact demand, together with integrated and automated supplier order management.

b. Benchmark supplier performance: Implement analytics across a common set of KPIs for suppliers and define performance benchmarks that feed into supplier-side improvements, incentives, and future selection.

c. Achieve near real-time multisite inventory management: Remove data silos in inventory tracking, order management, transfers, and purchase decisions across sites to reduce stockouts and waste and improve turnarounds.

d. Transportation optimization (10 points): Reduce freight spending and increase turnarounds through analytics that create optimized and efficient load plans.

e. Support and Returns efficiency (5 points): Implement predictive customer satisfaction, transportation analytics, and inventory management.
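To make supply-chain objective (b) concrete, here is a minimal sketch of KPI-based supplier benchmarking: normalize each KPI so that higher is better, weight them, and rank. The KPI names, weights, and figures are illustrative assumptions, not RMO's actual scorecard.

```python
# Illustrative supplier benchmarking across a common KPI set (supply-chain OKR b).
# KPI names, weights, and figures are made-up assumptions for the sketch.

SUPPLIERS = {
    "Supplier A": {"on_time_delivery": 0.96, "defect_rate": 0.020, "cost_index": 1.00},
    "Supplier B": {"on_time_delivery": 0.91, "defect_rate": 0.012, "cost_index": 0.95},
    "Supplier C": {"on_time_delivery": 0.99, "defect_rate": 0.035, "cost_index": 1.10},
}

WEIGHTS = {"on_time_delivery": 0.5, "defect_rate": 0.3, "cost_index": 0.2}

def score(kpis: dict) -> float:
    # Higher is better; invert the KPIs where lower is better.
    return (WEIGHTS["on_time_delivery"] * kpis["on_time_delivery"]
            + WEIGHTS["defect_rate"] * (1 - kpis["defect_rate"])
            + WEIGHTS["cost_index"] * (1 / kpis["cost_index"]))

ranked = sorted(SUPPLIERS, key=lambda s: score(SUPPLIERS[s]), reverse=True)
print(ranked)
```

In practice the KPI set, weights, and normalization would be agreed with procurement and reviewed periodically; the point is that a common, explicit scoring function makes benchmarks comparable across suppliers.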


Given that RMO has been in operation for decades and that its business and organizational structure (Graphic 1) has evolved over time, its enterprise data is fragmented across various business units and legacy, monolithic systems, sits behind complex governance barriers, and is of questionable quality.


Graphic 1

The RMO architectural team started by defining five core principles for RMO enterprise data strategy:

  1. A cloud-first strategy that caters for hybrid and multi-cloud scenarios.
  2. A data-driven culture that fosters open, collaborative, and ever-evolving participation of the entire workforce.
  3. Data as a product, to achieve scalability and quality with a domain-driven design (DDD) where data domain nodes follow the domain boundaries (vs. technology boundaries).
  4. A self-serve data platform that prioritizes business use-cases over technical complexity.
  5. A federated data governance model.

Let’s double-click on each of these to understand the thought process behind their selection and how they will help RMO deliver on the ten prioritized OKRs:

1. A cloud-first strategy: While the organization has significant investments in legacy systems and tools such as SCADA solutions, RMO understands the benefits of cloud-based solutions, which offer much higher scalability, reliability, availability, and security, as well as new capabilities such as edge computing. Further, RMO operates in regulated environments in certain countries, including some with data residency restrictions, which motivated RMO’s decision to choose a hybrid strategy.

Consider the example of data collected through SCADA solutions from waste management plants, which can now be processed in real time together with production process data, inventory data, and transportation data to increase recycling speed and efficiency. A cloud-first strategy will help RMO implement advanced scenarios over time, including the use of edge computing for faster cycle and decision times.

2. A data-driven culture: To innovate quickly, RMO believes in the power of data democratization. For that to manifest, the workforce across the organization must continuously interact to learn and improve on the desired business outcomes. Many of the outlined OKRs require automation and the use of AI, which can only be achieved through a data-driven culture across the enterprise.

An enterprise-wide unified data strategy, together with a data-driven culture, will enable RMO to make more informed decisions, as in the case of benchmarking supplier performance, which will further drive performance improvements, incentives, and even supplier selection.

3. Data domains and data as a product: To manage complexity, RMO chooses to use bounded contexts, wherein the domain influences the boundaries of the data product. This in turn drives clarity on which data and underlying code is owned, managed, and governed by which teams, and where the dependencies are that need orchestration across other domains and data products. A data product must serve a specific business need. As needs evolve to deliver successfully on the OKRs, which define the North Star of business success, the data products must evolve too. A data product may produce insights to serve business needs on its own, or by leveraging, integrating, or making sense of data from other data products.
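The ownership and dependency ideas above can be sketched as a simple data product "contract": each product declares its domain, its accountable team, what it serves, and what it consumes. All names here are illustrative, not RMO's actual products.

```python
# A minimal sketch of a data product "contract" in a data mesh.
# Product, domain, and team names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str                   # e.g. "yield-analytics"
    domain: str                 # bounded context that owns it
    owner_team: str             # team accountable for data, code, and quality
    output_ports: list = field(default_factory=list)  # datasets/APIs it serves
    upstream: list = field(default_factory=list)      # data products it consumes

yield_analytics = DataProduct(
    name="yield-analytics",
    domain="shop-floor",
    owner_team="production-engineering",
    output_ports=["yield_by_line_daily", "variability_alerts"],
    upstream=["process-telemetry", "equipment-health"],
)

# Dependencies that cross domain boundaries are explicit in the contract,
# so orchestration and governance can reason about them automatically.
print(yield_analytics.upstream)
```

Making the upstream dependencies explicit is what lets the mesh orchestrate across domains without centralizing ownership.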

As an example, driving improvements in yield for an already lean manufacturing process requires deep analysis and predictive models that look at multiple input and output variables and their variability effects. A data mesh architecture that drives integration across the various data products in the enterprise caters for such use-cases while keeping data owned and managed by those who understand it best (Graphic 2).

4. A self-serve data platform: For teams to autonomously own and manage their data products, a self-serve data platform is required. RMO chooses to align each data domain with one data landing zone, and each data product within it with one resource group. The data landing zone provides capabilities such as networking, monitoring, metadata services, data lake services, ingestion and processing, data integration, and reporting through its resource groups. Furthermore, each data landing zone and the data management landing zone align with underlying subscriptions. The data management and analytics scenario templates inherit their respective policies from the hosting data management landing zone, which simplifies management, provisioning, integration, and testing.
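The alignment described above (one landing zone per domain, one resource group per data product, subscriptions underneath) can be sketched as a small registry. The domain, subscription, and resource group names are illustrative assumptions, not RMO's actual Azure topology.

```python
# Sketch of the domain -> landing zone -> resource group alignment described above.
# All names are illustrative, not a real Azure topology.

topology = {
    "shop-floor": {               # data domain -> one data landing zone
        "subscription": "sub-shopfloor-prod",
        "resource_groups": {      # one resource group per data product
            "rg-yield-analytics": "yield-analytics",
            "rg-equipment-health": "equipment-health",
        },
    },
    "supply-chain": {
        "subscription": "sub-supplychain-prod",
        "resource_groups": {
            "rg-demand-forecast": "demand-forecast",
        },
    },
}

def landing_zone_for(product: str) -> str:
    """Find which domain's landing zone hosts a given data product."""
    for domain, zone in topology.items():
        if product in zone["resource_groups"].values():
            return domain
    raise KeyError(product)

print(landing_zone_for("demand-forecast"))
```

Keeping this mapping one-to-one is what makes policy inheritance and provisioning predictable: a new data product means one new resource group in an already-governed landing zone.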

The shop floor produces multiple products, with interdependencies across some and serial manufacturing cycles across others. To drive improvements in quality, teams need to continuously monitor data emerging from multiple systems and look for root causes of variability. This in turn may require teams to provision resources, such as compute cycles or visualization services, on demand for large data sets. A self-serve data platform as part of the data mesh architecture enables such requirements with on-demand elasticity while staying cost efficient.

5. A federated data governance model: To drive autonomous decision making at the data product level while ensuring that each data owner can trust others and their data products, RMO implemented an enterprise-level data governance body. The data management landing zone, which uses the data management capability from the data management and analytics scenario, provides a federated model of governance for the self-serve platform and the data domains within it. The data management landing zone encompasses all data domains across the enterprise. It provides shared resources for all data landing zones, a common architecture for data products, central visibility of data ownership, and consistent data access and privacy policies, and it ensures data quality.

In addition, RMO chose Azure Purview as its data governance service to simplify automated data discovery, lineage identification, and classification, and to build a unified map of its data assets across a hybrid environment, inclusive of legacy on-premises systems. In other words, RMO will now be able to discover and manage data across its legacy and modern systems. With features such as the Data Catalog, RMO is able to document key business terms and their definitions to create a common vocabulary across the enterprise. This is critical to RMO’s need to move rapidly from ideation to proof through minimum viable products (MVPs) to creating data products that serve the defined OKRs.


Graphic 2

Underlying these architectural decisions are an identity-driven data access model built on the principle of least privilege (through managed identities, user-assigned managed identities, and nested security groups), the Microsoft Zero Trust security model, and an underpinning network design that includes network isolation through private endpoints and private network communication. This design caters well for uniform data access while providing centralized data governance and auditing.

While there are various ways to achieve the outcomes RMO is targeting, a unified enterprise data model that puts empowerment and ownership with those who understand their data best, and that leverages a data mesh architecture, is well suited to an organization such as RMO.


Industry leaders in manufacturing are moving towards Industry 4.0, which leverages data from sensors, robots, processes, and simulations to enable smarter and faster decision making, opening up possibilities for new business models that companies can explore. Establishing a data-driven culture; appointing and empowering leadership positions such as the CDO; defining a prioritized list of OKRs that keeps the entire organization focused on building the data products that deliver the maximum business benefit; and underpinning all of this with well-thought-through architectural and governance principles are critical to the success of a data-driven transformation for any organization.


Sincere gratitude to the following amazingly talented leaders at Microsoft for their contributions and reviews of this article: Andreas Wasita (Managing Architect), Angus Foreman (Chief Architect), Danny Tambs (Managing Architect), Darren Dillon (CTO), Hany Azzam (Specialist Sales Lead).



Testing for Large Transformation Programs

A Strategic Tool to Enable Velocity and Quality

By: Sidharth Sabat & Ankush Rathore

Part 1: Test Principles, Test Practices, and the Test Operating Model

Executive Overview

Over the last couple of years, Microsoft Industry Solution Delivery (ISD) has on average observed the percentage mix of migrated application types at 50% Lift and Shift (Rehost), 30% Workload Migration (Re-platform and Refactor), and 20% Clean Deployment (Rearchitect and Rebuild).

However, the percentage mix of applications for a given organization depends on several criteria, such as Application Portfolio Complexity (Complex, Medium, and Simple), Level of Functional Automation, Cloud Operational Maturity, and the overarching compliance and security readiness and requirements.

A recent large transformation and migration program undertaken by ISD revealed a percentage mix heavily skewed towards Workload Migration: it stood at almost 95%, while Lift and Shift migrations constituted a little less than 5% of the application portfolio. This case study serves as the premise for this whitepaper, which aims to share the key testing principles, practices, and operating model used to achieve velocity and quality.


Migrating applications to the cloud brings unique challenges in terms of technology and management. When the migration involves workloads for a large organization, where the number of workloads runs into the hundreds or thousands, the challenges grow manifold due to the additional complexities of alignment, communication, managing multiple stakeholders, navigating the organizational ecosystem, and other dependencies.

Although a cloud migration effort involves changes to servers and other infrastructure components and only minimal configuration updates to the application, the business value is derived from how effectively and efficiently the application works on the cloud. To drive that business value, every migration program needs to build a comprehensive cloud migration test strategy targeting the unique needs of the program.

The approach elaborated below is not “the only approach” recommended by ISD; however, the authors are keen to share with the community the key practices and principles that proved to hit the sweet spot in achieving the program objectives of quality migration at scale.

There is no silver bullet or one-size-fits-all solution when it comes to defining a test strategy that ensures (1) the quality of the migration process, (2) migration of applications to the cloud at scale, and (3) smooth operations on the cloud. The program team needs to carefully weigh the objectives and challenges to channel the effort into building a strategy that addresses the majority of them.

Migration to the cloud in principle involves multiple applications belonging to multiple teams within an organization. These application teams are generally accustomed to an existing process and are often resistant to change. Hence, building an overarching testing strategy for the entire program ensures that a consistent and repeatable process is followed across the different application and platform teams whose applications are part of the migration journey to the cloud.

It is quite natural for an application team to push for comprehensive testing of their application prior to production deployment; however, this directly challenges program velocity. The balance lies in identifying and testing “just enough” and “just in time” across all areas to aid velocity, and in bringing efficiency to handovers between dependent teams to achieve leanness in operation and the desired velocity.

Test Principles

The balance between testing and migration velocity can be achieved by focusing on the principles below:

  • Repeatable use across various application teams
    • Migration of legacy workloads to the Azure cloud involves applications from various app teams, who are accustomed to different software development methodologies. It is of utmost importance to have a consistent and repeatable testing process to align the application teams and the migration team to the program objective.
  • Lightweight testing
    • Migrating workloads to the Azure cloud from different application teams simultaneously requires significant co-ordination between the various platform, security, compliance, and operations teams to meet compliance needs; keeping the testing process lightweight brings the required optimization to the program.
  • Risk-based approach
    • Migration of workloads to the Azure cloud from on-prem hardly requires any change to application functionality; hence, following a risk-based approach to identify minimal yet fit-for-purpose test scenarios can meet the quality goals with minimum effort.
  • Minimum effort to operationalize
    • While defining the test process, it is necessary to employ expertise to bring minimum variance to the testing process, in order to keep the effort required for acceptance and incorporation by the application and platform teams to a minimum.

Test Practices

Listed below are the recommended key testing practices that should be considered when building the test operating model (discussed in the next section) and creating an overarching test strategy across migration workloads:

  • Practice #1:
    • Align the test strategy across migration treatment types, i.e. Rehost, Re-platform, and Refactor.
  • Practice #2:
    • Fit-for-purpose testing to aid migration at scale: identify the ~20% of critical scenarios for testing to minimize risk and optimize speed.
  • Practice #3:
    • Categorize test types into mandatory and non-mandatory testing.
  • Practice #4:
    • Build decision trees for non-mandatory test types for faster decision making.
  • Practice #5:
    • Build a lean questionnaire to gather early information from application and platform teams to identify test types and scope.
  • Practice #6:
    • Templatize test artifacts to ensure consistency and establish a rhythm in evidencing that quality goals are met for application cutover.
  • Practice #7:
    • Build decision trees to determine the involvement of dependent services such as test data masking, service virtualization, and environment determination.
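Practices #3 to #5 can be sketched together: answers from a lean questionnaire feed a simple decision function that selects the non-mandatory test types for a workload. The questions, thresholds, and test-type names below are illustrative, not ISD's actual criteria.

```python
# Illustrative sketch of Practices #3-#5: questionnaire answers drive a
# decision tree that adds non-mandatory test types on top of the mandatory set.
# Questions, thresholds, and test-type names are assumptions for the sketch.

MANDATORY = ["smoke", "connectivity", "infrastructure-validation"]

def select_tests(answers: dict) -> list:
    tests = list(MANDATORY)
    # Rehost rarely changes code paths; deeper tests only when the treatment warrants.
    if answers.get("treatment") in ("replatform", "refactor"):
        tests.append("functional-regression")
    if answers.get("handles_pii"):
        tests.append("data-masking-validation")
    if answers.get("external_integrations", 0) > 0:
        tests.append("integration")
    if answers.get("peak_tps", 0) > 100:
        tests.append("performance")
    return tests

# Example questionnaire response for one workload.
answers = {"treatment": "replatform", "handles_pii": True,
           "external_integrations": 2, "peak_tps": 40}
print(select_tests(answers))
```

Encoding the decision tree this way keeps scoping decisions fast, consistent, and auditable across the many application teams in the program.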

Test Operating Model 

The Test Operating Model brings consistency to the migration journey and helps align the different application teams to the program objectives. The Operating Model can be categorized into four major streams of work as follows:


Discovery:

Focuses on activities related to discovering the current testing capabilities of the application under consideration and determining a high-level testing scope.

  • Participate in discovery interviews
  • Co-ordinate the collection and storage of test information via the test questionnaire
  • Upload and maintain a high-level understanding prior to the application migration plan
  • Participate in design and planning discussions
  • Identify the scope of testing
  • Engage dependent teams (across application, middleware, and infrastructure dependencies)

Governance:

Focuses on coaching application teams, standardizing test artifacts, removing blockers, monitoring progress, quality assurance, and process improvement.

  • Co-ordinate planning activities with the respective application team
  • Perform a lightweight assessment to determine the inclusion of non-mandatory tests
  • Review the test plan
  • Coach application teams on the test process
  • Monitor progress and govern planned test activities
  • Provide resources to address resource constraints within the corresponding application team
  • Gather feedback and work towards improvement of the process and standardization of test artifacts

Delivery:

Focuses on the activities required to complete test planning, test execution, test reporting, and test evidencing in the appropriate environment for application cutover.

  • Finalize, review, and approve the test plan
  • Align the testing timeline to the migration window provided
  • Upload the completed test cases to the testing tools
  • Execute test cases in the non-production environment
  • Defect management, test status reporting, and evidencing
  • Test execution in the production environment during cutover
  • Test execution of any multi-cloud test scenarios post cutover

Dependent Services:

Focuses primarily on dependent activities involving teams that need to be engaged based on application requirements. Listed below are a few of the dependent services that are common across industries; however, these may vary based on the organizational environment.

  • Involve the test data masking team to endorse and provide the masking scripts based on the application requirements
  • Involve the service virtualization team to stub integration points when the application cannot integrate with the required systems to conduct integration testing
  • Involve the release team when the application needs to integrate with real-time systems to conduct testing
  • Involve the required security teams to provide guidance and conduct security scans and security testing to ensure the compliance needs for the assets are met


This whitepaper is the first in a series of proposed publications, drafted with the objective of sharing the test principles, test practices, and test operating model that served as key enablers of velocity and quality in the referenced case study and helped drive business synergies across the various application and infrastructure teams. The authors are equally keen to hear feedback from the community on what other principles and practices have worked for their programs.


Authors: Ankush Rathore, Anurag Vij, and Francois Magnin, Azure Cloud & AI Practice, Microsoft Industry Solutions

“After two failed migrations over seven years, ‘mainframe migration’ became the two most dreaded words in the organization. Third attempt was not only opposed by the business unit heads but was outright rejected by the steering committee…” – Group CIO, A Regional Bank


Mainframes figure prominently in the history of computing and remain viable for highly specific workloads. They continue to be used across industries to run vital information systems, particularly in large, high-volume, transaction-intensive IT environments.

The typical mainframe operations are:

·      Online: Workloads include transaction processing and database management.

·      Batch: Jobs that run without user interaction, typically on a regular schedule.

·      Job control language (JCL): Specifies the resources needed to process batch jobs.

·      Initial program load (IPL): IPLs are used to recover from downtime. An IPL is similar to booting the operating system on a Windows VM.

A system first designed in the late 1950s as scale-up servers to run high-volume online transactions and batch processing continues to be an Achilles’ heel for big enterprises. It is the last and biggest hurdle holding organizations back from a 100% move to the cloud.


The reliability, availability, and processing power of mainframes have taken on almost mythical proportions, and the table below should help distinguish the myths from the reality (1).



The mainframe has stood as the high-powered safehouse of critical enterprise applications for decades. It remains at the very heart of many organizations’ core line-of-business applications and workflows and is viewed as a symbol of resilience.

Mainframes have been a proven platform that scales well and provides reliable performance, assuming there are programmers and developers available to design, run, and maintain mainframe programs. In a robust mainframe operating environment with long-established operating procedures, including JCL (Job Control Language) statements, programs can run based on usage, measured in MIPS (millions of instructions per second), and provide extensive usage reports for chargebacks.

However, rapidly evolving cloud technologies that provide significant business, economic, operating, and personnel advantages over mainframes build a solid case for organizations to seek a mainframe-free state.

Business Value: Enterprises unlock business value for themselves and their customers by driving continuous innovation. Innovation in a digital world requires a move to a modern DevOps environment. Mainframes are typically monolithic platforms whose workloads and surrounding processes don’t lend themselves to agile development or rapid innovation cycles. A move to a cloud platform enables an organization’s IT and business units to partner closely to create opportunities for innovation and supportive services that drive transformation.

Azure, as a leading cloud platform, provides a hyperscale environment for mission-critical workloads, promising multiple 9s of availability, optimized for local or geo-based replication services, and backed by commitment-based service level agreements (SLAs). Moving to Azure or similar cloud-based platforms gives organizations the ability to innovate faster, develop and operate better, and, consequently, unlock business value faster to stay ahead of the competition.

Economic Value: The pandemic has had a multi-fold acceleration impact on the digital transformation of organizations across industries. In the current environment, organizations are pushed to innovate their future businesses at unprecedented rates while surviving intense competition and operating in the present. Every dollar saved is thereby an extra dollar of economic headroom for faster transformation.

The perception that mainframes provide operational stability, and thereby cost savings, is an anti-pattern and must be avoided. To start with, mainframes are subject to very expensive monthly hardware and software contracts, where extra capacity comes at an added cost while not providing the elasticity that cloud services can. There are several studies and examples of a 5,000 MIPS environment being nearly tenfold more expensive than leveraging cloud services such as Azure.

The operating and maintenance costs of mainframe environments increase by 4–7% annually (source: BMC). The lack of pay-for-use models, CAPEX and OPEX commitments, and the rising cost of sourcing an ever-decreasing workforce with mainframe technical and development expertise further make it a losing proposition.
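Because that increase compounds year over year, even the lower end of the range is material over a typical planning horizon. A quick calculation under the cited 4–7% range:

```python
# Compound the cited 4-7% annual cost growth over a ten-year horizon.
# The base cost of 1.0 represents today's run rate (illustrative).

def cost_after(base: float, annual_growth: float, years: int) -> float:
    return base * (1 + annual_growth) ** years

for growth in (0.04, 0.07):
    print(f"{growth:.0%}/yr for 10 yrs -> {cost_after(1.0, growth, 10):.2f}x")
```

At 4% annually the run rate grows to roughly 1.5x over a decade; at 7% it nearly doubles, before accounting for the elasticity and pay-for-use savings foregone.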

Lastly, and perhaps most importantly, a mainframe environment stifles rapid experimentation, learning, and innovation.

Personnel Considerations: The industry has seen a constant attrition of mainframe knowledge over the years. As a prominent CIO noted,

“Even if I wanted to run mainframes, the lack of people available with quality technical skills and the abilities to manage them makes it almost impossible for me to do so.”

The decreasing number of mainframe developers who can develop and maintain mainframe applications not only makes these environments more expensive to operate but also creates an environment that is change-averse and hinders the creation of new value on existing mainframes. In contrast, modern mainframe-free operating environments, with a large pool of deep technical skills, help remove data silos, create new value within the organization and its ecosystem, and help businesses compete better while making them future-proof.


Once an organization has understood the trade-offs, migration decisions must be made with thoughtfulness and planning. The lifetime of mainframes within organizations typically exceeds the tenure of their oldest employees, which has made mainframes deeply embedded in many organizations’ core business processes and their staff’s ways of working. As deeply entrenched as this has made mainframes, both technically and process-wise, it has also created what is commonly termed technical debt and process debt. Whilst all organizations carry a certain amount of technical and process debt from the past, and must consider it in their business cases as they plan the modernization of their environments, with mainframes this debt can be multifold and hence requires a thorough and well-bought-in case for change. Several other factors, such as technological choices, project management, and change management, must be considered and planned for too.

The Case for Change: Typically, mainframe migrations are driven by central IT teams. Some level of reluctance is usually encountered from the business units, for whom such a large change not only requires considerable incremental (and often undesired) effort and resources but also poses a threat to their businesses and priorities should things not go as planned. A lack of strong buy-in and ongoing commitment from key stakeholders, who are crucial for providing critical inputs during the planning, implementation, and adoption of new systems, remains a top reason why mainframe migrations fail. The case for change must be strong, with complete stakeholder buy-in and ongoing support as a critical dependency for success.

Technical and Process Debt: A myriad of applications, business processes, and code created over decades, from simple patches to JCLs, has found deep roots within the organization. Untangling this, technically and process-wise, requires deep discovery of applications across the portfolio as well as identification of technical and process dependencies. Depending on the industry, this could also mean looking at regulatory or other compliance requirements across the countries in which the organization operates. Many mainframe migrations fail for lack of a deep discovery and mapping exercise during planning; the gaps are discovered only during execution, in many (unfortunate) instances requiring a complete rollback.

Technological Choices: As planning teams carry out thorough discovery activities, it’s critical to understand each application’s purpose, business rules, underlying code, available documentation and other support resources, and its technical dependencies. These factors help determine the treatment these applications should be given during the migration process, ranging from Re-platforming, Re-factoring, or Re-architecting to Retiring (and repurchasing). Planned post-migration steps and strategies to test and mitigate issues concerning the scale or redundancy of these applications must also be a key part of the discovery and planning process. This requires the right capabilities, access to application owners, and a well-thought-out planning process. Given the complexity, many organizations lead with a lift-and-shift or Re-platforming approach, wanting to move applications as-is to cloud services. This not only leads to many failed mainframe migrations but also largely defeats the purpose of moving to the cloud, technically and economically.

There are several possible approaches to a mainframe migration, depending on the application stack and the underlying legacy hardware it sits on. These range from emulating the hardware in Cloud while the stack is ported as-is, followed by treatment of middleware, application code, data, and so on, to treating the stack first and then porting it to Cloud services. Many of these applications may be treated differently even while they inherit the same databases. A key factor to determine during the process is the critical data dependencies that keep business functions and the required analytics and reporting intact. Failure to do so impacts core business functions and results in unfortunate cases of rollbacks.
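To make the treatment decision concrete, the discovery attributes described above can feed a simple triage over the application portfolio. The sketch below is purely illustrative: the attribute names, thresholds, and decision order are our assumptions for this example, not a prescribed Microsoft rubric, and a real assessment would weigh many more factors (compliance, batch windows, shared data dependencies, and so on).

```python
# Hypothetical portfolio triage: map discovered application attributes to a
# candidate migration treatment. All fields and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class AppProfile:
    name: str
    business_critical: bool   # a core business function depends on it
    has_documentation: bool   # current docs / support resources exist
    loc_cobol: int            # rough size of the legacy code base
    saas_alternative: bool    # an off-the-shelf replacement exists
    shared_database: bool     # inherits a database shared with other apps

def recommend_treatment(app: AppProfile) -> str:
    if app.saas_alternative and not app.business_critical:
        return "retire-and-repurchase"
    if not app.has_documentation:
        # Undocumented code needs reverse engineering before a deeper
        # refactor/rearchitect decision can responsibly be made.
        return "replatform-then-assess"
    if app.loc_cobol < 50_000 and not app.shared_database:
        return "rearchitect"   # small and decoupled: rebuild cloud-native
    return "refactor"          # large or data-coupled: convert code, keep behavior

portfolio = [
    AppProfile("billing", True, True, 400_000, False, True),
    AppProfile("hr-reports", False, True, 20_000, True, False),
]
for app in portfolio:
    print(f"{app.name} -> {recommend_treatment(app)}")
```

Even a toy rule set like this makes the planning conversation explicit: each branch corresponds to a question the discovery exercise must be able to answer, which is exactly where undiscovered dependencies otherwise surface during execution.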

Project and Change Management: Research shows that over 60% of projects either fail completely or fail to achieve the originally anticipated outcomes, and the failure rate for mainframe projects is reportedly even higher (2). Mainframe projects require deep levels of project management. Where most mainframe projects fail, two key areas are either underestimated or left unaddressed: (a) deep project management embedded within the application teams that ties back into the larger project framework, and (b) a change management program that runs in parallel to drive acceptance, readiness, and the plan-to-scale of the application teams. Lack of attention to these areas is a definite way to fail a mainframe migration.


Microsoft and its partner ecosystem take pride in helping hundreds of customers with their mainframe migrations and in accelerating their digital transformation journeys. Microsoft Industry Solutions believes deeply in a migration framework grounded in the core principles of: (a) Discovery and Assessment, (b) Building a strong business case, (c) Making smart technological choices leveraging Microsoft's deep partner ecosystem, and (d) Leading with world-class project and change management.

Discovery and Assessment: Based on the key business drivers a client has set for modernizing their mainframe workloads, Microsoft Industry Solutions (IS) conducts a Discovery and Assessment phase, leveraging one or more of its partners and its global intellectual property, to determine the best migration strategy. Given there is no one-size-fits-all for modernizing mainframe workloads, this uniquely crafted migration strategy serves the specific business outcome the organization is seeking within the requisite timelines.

Building a Business Case: Business Value Advisors (BVAs) work closely with the client to build a robust business case that helps the client secure internal buy-in and form stakeholder coalitions. In addition to supporting the requisite financial approvals, the business case helps bring organizational stakeholders, from Boards to Business Unit Heads, along on the journey from the get-go.

Making Technological Choices: Based on the migration strategy, the IS team works with its key partners and the client's stakeholders to determine and finalize the technological choices. These choices are often validated through proofs of concept (POCs) ranging from simple to complex, depending on the application or workloads under treatment. POCs not only validate the underlying assumptions and expected outcomes but also help determine mitigation strategies for anti-patterns, code gaps, and scale challenges. Depending on the technological choice, automated tools such as AI-based code reverse engineering are often used to deconstruct mainframe applications to achieve speed and scale.

World-class Project and Change Management: IS firmly believes that both strong project management and change management are fundamental to driving any large migration or transformation effort, and they are equally, if not more, important to mainframe migrations. Embedding project management into application teams, aligning them closely with application owners, helps drive a predictable project that can actively discover and mitigate forward-looking issues and avoid unwanted surprises to the overall program. A thorough adoption and change management program helps build user awareness, secure buy-in, establish user readiness, and create the ability to scale.


While mainframe migrations can be a daunting task, whether an organization is looking to become mainframe-free or has a path to operating in a hybrid environment while growing its MIPS for certain workloads, the strategies outlined above are core to a successful migration project. A business case that is built upon a well-discovered and well-assessed environment, secures stakeholder buy-in and support, informs the right technological choices, and is executed with world-class project and change management offers the highest possibility of success.




The Microsoft Delivery Approach: A Product Centric Approach for Accelerated Customer Outcomes

The Microsoft ISD team is helping thousands of customers across the globe to accelerate their digital transformation journeys through Product Centric principles and approaches.

An article by: Ankush Rathore & Anurag Vij, Azure Cloud and AI Practice, Microsoft Industry Solutions

Fixed outcomes vs. Business agility? 

Consider the age-old methodology: a hospital plans to upgrade its patient management system by detailing the desired specifications and feature set, identifying the right partner, and managing the project over multiple years with pre-defined milestones, only to go live a few years later and realize how much has changed during the course. Compare this with a hospital that aspires to provide meaningful experiences at every stage of a patient's journey, from enrollment to recovery, and leads the planning and development of an underlying platform that accommodates rapidly changing dynamics, ranging from technological evolution to pandemics.

Organizations that adopt the latter approach stand to benefit from being able to adapt to changes and evolving constraints in the surrounding environment, make course corrections as expectations shift or learnings emerge along the way, and, overall, realize faster business outcomes for the benefit of their customers.

The recent rapid acceleration in the digital transformation of organizations makes this approach even more important as digital agendas evolve swiftly. In 2020 alone, Microsoft saw many of its customers achieve two years of digital transformation in just two months. Organizations around the world have started to change the way they think about and respond to customer needs. Projects are no longer multi-year planning, development, testing, and implementation cycles. Rather, each customer need is viewed as a product in itself and is approached with rapid prototyping, parallel dev-test processes, and minimum viable product (MVP) testing, followed by production releases. The trend is not just a symptomatic representation of COVID-19 times but has evolved rapidly over the last few years. To note a few studies:

  • In an October 2018 Gartner study [1], it was observed that company IT departments had started the transition from Projects to Products.
  • In a July 2019 study [2], Gartner forecasted that by 2024 more than three-quarters of digital business leaders would have benefited from product management practices, up from the third that had already done so in 2018. It was also projected that by 2024, 80% of IT organizations would have undergone radical restructuring and changes to their missions as they embraced product-centric operating models.
  • In an April 2020 Gartner survey [1], 85% of respondents said their organization had adopted or planned to adopt a product-centric model, further confirming the market trends and movements.

From running Projects to delivering Products 

A product-centric model rests on five key principles:

a)   Outcome Focused: Focus on specific desired outcomes, not on a set of predefined outputs that may or may not lead to the desired outcome.

b)   Continuous Value Realization: Business and customer value must be realized in an ongoing fashion, like a flywheel in motion, versus an approach that pivots on fixed scope, duration, quality, and budget consumption.

c)    Adaptive Approach: An approach that's agile enough to adapt to changes in the surrounding environment or factors, versus one that's stringent, pre-defined, and prescriptive and does not take market and customer movements into account.

d)   Perpetual Development: A development build-and-release approach based on demand cycles versus an artificially prescribed end-date.

e)   Value Driven: Investments are based on ongoing value realization and growth versus promises of a future ROI. 

This product-centric model has been followed by the largest technology companies throughout the world and by almost every product company that operates in the Cloud or offers services in the Cloud.

Microsoft Delivery Approach: Rapid Customer and Business Outcomes

Microsoft has had one of the longest runs of releasing enterprise products in the commercial market and has used the product-centric approach for speed to market and for demonstrating agility when faced with change, aside from earning trust currency and customer satisfaction. With these years of learnings, Microsoft distilled the product-centric model into an operational entity, the Product Centric Operational Model, as shown in Figure 1.

Figure 1: Product Centric Operational Model

Microsoft’s Industry Solution Delivery (ISD) team is chartered with helping Microsoft’s customers realize their full potential by driving rapid business outcomes in their respective industries. Microsoft ISD leverages this Product Centric Operational Model and helps its customers respond to market movements with its best-of-breed, industry-focused talent that operationalizes this knowledge in real time. It’s worth noting that this approach requires a significant and quite profound mindset shift, akin to the change from a fixed mindset to a growth mindset. Given that this shift represents a big change in how enterprises view their change management, project charters, measures of success, and resource allocation, Microsoft ISD works closely with its customers through the Microsoft Delivery Approach, which helps organizations manage four critical areas for success with a product-centric approach:

a)   Project centric to Product centric: Assisting organizations in moving away from a predefined scope with a predetermined beginning and end, toward desired outcomes predicated on business value, speed, continuous learning, and agility to adapt.

b)   Recognizing outputs to Recognizing outcomes: Helping organizations lead with a model that drives results-based value and recognition for outcomes delivered, moving away from activity-based recognition of predetermined outputs.

c)    Inside-out to Outside-in: Enabling organizations to lead with their customers’ perspective and needs, versus leading with internal thinking, past experiences, and intuition.

d)   Individual based staffing to Team based staffing: Helping organizations adopt agile models that focus on forming core feature teams, offering stable entities, progressive efficiencies, and continuous learning.

Table 1: Mindset Shifts

Whilst a product-centric mindset is at the heart of the Microsoft Delivery Approach, there are four other key principles that truly bring it to life:

a)   Start fast, deploy for use, and iterate quickly: To fail early, learn rapidly, and realize instant value.

b)   Deliver using cross-functional delivery teams: To develop functional redundancies and manage single points of failure.

c)    Seek data driven continuous improvement: To generate telemetry for benchmarking delivery and conducting metric-based retrospectives.

d)   Secure and quality by design: To generate value with inherent quality and security at each MVP.

Table 2: Delivery Principles

A senior executive of a large insurance company, a Microsoft customer, recently shared with us: 

“It’s been a painful learning for me and my organization, but I am glad we finally switched out of driving a series of fixed-scope, fixed-fees projects with our technology partners such as Microsoft.

Our teams today are literally working as if we are a technology product company and not a traditional internal IT department for an insurer. Sprints are keeping us honest to the rapid outcomes the business expects, we can experiment and fail fast, and most importantly, we are not burning massive holes in our budget before realizing course corrections are required.

Digital transformation is not so much about technology as it is about driving a wholescale change in how people and processes work in tandem for technology to become a competitive differentiator.

I can clearly see how organizations that adopt such agile methodologies, supported by the people and process shifts required, will stand a robust chance to win.”

The Microsoft ISD team is helping thousands of customers across the globe accelerate their digital transformation journeys through the unique Microsoft Delivery principles and approach. And yet, no approach is set in stone. We continue to push ourselves to keep our growth mindsets on and learn as we work through our customers’ journeys in their respective industries. Watch this space as we share specific industry scenarios leveraging the Microsoft Delivery Approach in future posts.


1.      Swanton, B., Hotle, M., Wan, D., “Survey Analysis: IT Is Moving Quickly From Projects to Products”, Gartner, ID G00373896, October 23, 2018 (refreshed April 20, 2020).

2.      Wan, D., “A Day in the Life of a Digital Product Manager”, Gartner, ID G00400672, July 31, 2019.