Mainframe Modernization

Think big, start small, scale fast!

Drew Jaegle
Staff Technology Engineer

Got mainframe batch jobs that need to migrate to the cloud? Looking for insights on how to start? Take a peek at the approach State Farm is using to migrate its P&C Claims mainframe applications.


For many decades, the mainframe has provided critical solutions supporting State Farm Insurance operations. In today’s world, mainframe technologies like COBOL and JCL have fallen out of favor with the next generation of IT professionals, which in turn has increased the people risk associated with enhancing and supporting these solutions. While it would be nice to rewrite everything in cloud-native technologies, that is neither practical nor an effective use of limited IT development capacity. So, what to do?

The State Farm Property & Casualty Claims IT area is adopting an automated refactoring approach to migrate thousands of mainframe jobs and programs. This approach leverages the AWS Mainframe Modernization Blu Age service to transform our mainframe assets. The goal is to accelerate the migration to cloud-based technologies, which in turn helps mitigate people risk; that risk grows each day as the mainframe-skilled workforce approaches retirement. We expect this migration pattern to be one of two approaches, the other being re-architect. As we embarked on this journey, we quickly realized that we needed a repeatable process and tooling to ensure the success of this endeavor.

The Journey

The remainder of this article will cover the highlights of our journey as depicted in this diagram. Each of these stages has helped us identify and implement techniques to make the journey repeatable and scalable.

Mainframe Conversion Lifecycle

State Farm®, Mainframe Modernization Journey, 2023


During this stage of the journey, our focus was on identifying all mainframe assets along with the product teams who own them. Luckily, we could leverage existing configuration management and organizational datastores to put together our initial list. We augmented this data with runtime statistics (e.g., when a job/program was last used), which helped the product teams evaluate the need to migrate each asset to AWS. Once the data was ready, we engaged each product team in a review of their assets with a simple goal: determine which assets will migrate, which will be retired, and which will be re-architected. About 30% of mainframe assets are targeted to migrate using the AWS Mainframe Modernization Blu Age approach.
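The review described above can be sketched in a few lines of code. This is a minimal, hypothetical example (the field names, dates, and five-year staleness threshold are illustrative, not our actual rules) showing how runtime statistics can seed a proposed disposition for each asset before the product team review:

```python
from datetime import date, timedelta

# Hypothetical asset records combining the configuration management
# inventory with runtime statistics (when each asset last ran).
ASSETS = [
    {"name": "CLMJOB01", "type": "JCL",   "last_run": date(2023, 5, 1)},
    {"name": "CLMPGM02", "type": "COBOL", "last_run": date(2016, 2, 9)},
    {"name": "CLMPGM03", "type": "COBOL", "last_run": date(2023, 4, 20)},
]

def propose_disposition(asset, today=date(2023, 6, 1), stale_years=5):
    """Suggest a starting disposition for the product team review.

    Assets unused for years are candidates to retire; everything else
    defaults to 'review' so the owning team decides migrate vs. re-architect.
    """
    if today - asset["last_run"] > timedelta(days=365 * stale_years):
        return "retire-candidate"
    return "review"

for asset in ASSETS:
    print(asset["name"], propose_disposition(asset))
```

The automation only proposes; the owning product team still makes the final migrate/retire/re-architect call in the review.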

The next step was to start grouping the migrating assets into conversion waves. Our first wave focused on a small subset of read-only jobs that would allow us to gain experience with Blu Age, refine our processes, and build automation. We expect to perform multiple conversion waves over the next several years.
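As a rough sketch of the wave-planning idea (job names, the read-only flag, and the wave size are all hypothetical), the migrating assets can be ordered so read-only jobs land in the earliest waves, then chunked:

```python
# Hypothetical list of migrating jobs; real planning also weighs
# dependencies, job flows, and team readiness.
MIGRATING = [
    {"job": "CLMEXT01", "read_only": True},
    {"job": "CLMUPD02", "read_only": False},
    {"job": "CLMEXT03", "read_only": True},
    {"job": "CLMUPD04", "read_only": False},
]

def plan_waves(assets, wave_size=2):
    """Order read-only jobs first, then split into fixed-size waves."""
    ordered = sorted(assets, key=lambda a: not a["read_only"])
    return [ordered[i:i + wave_size] for i in range(0, len(ordered), wave_size)]

waves = plan_waves(MIGRATING)
# waves[0] holds the read-only extract jobs; update jobs come later.
```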


In this stage we take the input from the engage stage and use it to gather the source code for a specific conversion wave. We quickly realized that a manual process to locate and extract source code was not going to scale, so we built a code crawler tool that locates and exports various mainframe assets (e.g., COBOL, copybooks, JCL), which can then be packaged and shared with AWS for conversion. The tool also helps identify missing source code, which is then triaged in partnership with the product teams.
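One piece of what a crawler like this checks can be sketched simply. This hypothetical example (member names and the simplified COPY regex are illustrative; a real crawler handles continuation lines, REPLACING clauses, and more) flags copybooks referenced by COBOL members but absent from the extract:

```python
import re

# Simplified match for "COPY <name>" statements in COBOL source.
COPY_RE = re.compile(r"\bCOPY\s+([A-Z0-9-]+)", re.IGNORECASE)

def find_missing_copybooks(cobol_sources, exported_copybooks):
    """cobol_sources: {member name: source text};
    exported_copybooks: set of copybook names found in the extract.
    Returns {member: [missing copybook names]} for triage."""
    missing = {}
    for member, text in cobol_sources.items():
        referenced = set(COPY_RE.findall(text))
        gaps = referenced - exported_copybooks
        if gaps:
            missing[member] = sorted(gaps)
    return missing

sources = {"CLMPGM01": "       COPY CLMREC01.\n       COPY CLMREC02."}
print(find_missing_copybooks(sources, {"CLMREC01"}))
# {'CLMPGM01': ['CLMREC02']}
```

Anything reported here goes back to the owning product team before the wave is packaged for AWS.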

For the packaging process, we leveraged AWS CodeCommit repositories for sharing existing source code as well as receiving the converted source code back from AWS. This has provided us with a simple and transparent code-sharing process based on Git. We also used this mechanism to provide documentation and test data, which are important for analysis and validation purposes.


Once AWS receives the existing source code, they leverage various Blu Age tools to perform analysis and code conversion tasks. The actual code conversion process is relatively quick; much of the work lies in performing functional equivalency testing (a.k.a. regression testing). This is where the provided input/output test data is used to run the converted code and compare the outputs. This type of testing allows AWS to validate that the converted code operates the same as the existing mainframe code.

After the converted code is validated, the AWS Blu Age team commits it to the appropriate CodeCommit repositories, and we pull it into our internal GitLab repositories. This was another area where we invested time to develop automation that pulls the CodeCommit source code, performs some transformation to align with a GitLab template project, and commits the result to GitLab. This was an important step to ensure consistency in how the converted code base is managed and deployed.
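The "align with a template" step amounts to re-homing files. This is a minimal sketch, assuming a hypothetical layout (the `java`/`config` source folders and the `src/main/...` template targets are illustrative, not our actual template project):

```python
import shutil
from pathlib import Path

def align_to_template(codecommit_dir: Path, gitlab_dir: Path) -> None:
    """Copy converted source from the CodeCommit working copy into the
    directory layout a GitLab template project expects, preserving
    relative paths under each mapped folder."""
    mapping = {
        "java": "src/main/java",        # converted Java source
        "config": "src/main/resources", # runtime configuration
    }
    for src_name, dest_rel in mapping.items():
        src = codecommit_dir / src_name
        if not src.is_dir():
            continue
        for f in src.rglob("*"):
            if f.is_file():
                target = gitlab_dir / dest_rel / f.relative_to(src)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)
```

Keeping this mapping in one place is what lets every converted application land in GitLab with the same shape, so a single CI/CD pipeline can handle all of them.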


A consistent pattern for staging converted source code in GitLab allows us to have a standardized CI/CD pipeline that performs code quality checks and deploys applications to the AWS Mainframe Modernization (M2) service. We invested in using Terraform to provision the M2 environment and supporting infrastructure components. We also leveraged AWS Step Functions and Lambda to provide a consistent and repeatable application deployment engine for M2 apps. Both of these investments reduced the complexity and friction for product teams deploying M2 applications.
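To make the deployment-engine idea concrete, here is a heavily simplified, hypothetical Amazon States Language definition built as a Python dict (the Lambda function names and account/region in the ARNs are placeholders, and a real workflow would loop on a Choice state rather than check status once):

```python
import json

# Illustrative Step Functions definition: create an M2 deployment via one
# Lambda task, wait, then check status via another Lambda task.
definition = {
    "Comment": "Deploy a converted application to AWS Mainframe Modernization",
    "StartAt": "CreateDeployment",
    "States": {
        "CreateDeployment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:m2-create-deployment",
            "Next": "WaitForDeployment",
        },
        "WaitForDeployment": {"Type": "Wait", "Seconds": 30, "Next": "CheckStatus"},
        "CheckStatus": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:m2-check-status",
            "End": True,
        },
    },
}

print(json.dumps(definition, indent=2))
```

Driving every application through one shared state machine like this is what keeps M2 deployments repeatable across product teams.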


This is probably the most labor-intensive part of the modernization process and can be very time-consuming, especially for mainframe assets that haven’t been changed in years. Given that our testing approach is built on the premise of regression testing, it is critical that we can run the existing job flows in a test environment with the same test data we use for the converted jobs. We are investing in automation to run the existing and converted jobs against the same test data, gather the output from each run, and perform an automated comparison of the results. This will help reduce some of the manual work in this labor-intensive stage.
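The core of that automated comparison can be sketched as a record-by-record diff. This is a hypothetical, simplified version (real output datasets also need field masking for timestamps, run IDs, and other expected differences):

```python
def compare_outputs(mainframe_records, converted_records, max_report=5):
    """Compare output records from the existing mainframe job and the
    converted job; return the first few differences for triage."""
    diffs = []
    if len(mainframe_records) != len(converted_records):
        diffs.append(
            f"record count {len(mainframe_records)} != {len(converted_records)}"
        )
    for i, (old, new) in enumerate(zip(mainframe_records, converted_records)):
        if old != new:
            diffs.append(f"record {i}: {old!r} != {new!r}")
        if len(diffs) >= max_report:
            break
    return diffs

print(compare_outputs(["A", "B"], ["A", "X"]))
# ["record 1: 'B' != 'X'"]
```

An empty result means the converted job is functionally equivalent on that test data; anything else becomes a triage item for the product team.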

The Learnings

We are in the late stages of our first conversion wave and have learned A LOT! Here are some of the key learnings:

  • Don’t underestimate the rich source of metadata about your mainframe assets; leverage it to help guide your journey
  • Convert a job flow rather than a single job to minimize any platform hops
  • Start with read-only/data extract type of jobs to build confidence in the conversion process and tooling
  • Keep your system and program interfaces intact to minimize the ripple effect of the modernization activities
  • Leverage your existing job scheduling solution (e.g., Control-M) to enable hybrid job management, which allows for incremental modernization
  • If you can write it down and consistently execute it, AUTOMATE, AUTOMATE, AUTOMATE!

What’s next?

While it can be overwhelming to modernize millions of lines of code, it can be done! We have found that investing in a repeatable process, along with supporting automation, greatly reduces the “where do I start?” syndrome. It is also helping us plan for future conversion waves, including increasing our confidence in how to scale up.

Good luck and hope you have a successful Mainframe Modernization journey!

To learn more about technology careers at State Farm, or to join our team, visit

Information contained in this article may not be representative of actual use cases. The views expressed in the article are personal views of the author and are not necessarily those of State Farm Mutual Automobile Insurance Company, its subsidiaries and affiliates (collectively “State Farm”). Nothing in the article should be construed as an endorsement by State Farm of any non-State Farm product or service.