How Far We Have Come
Over the last seven years, State Farm® has strived to improve our software delivery process and increase the frequency at which we deploy changes to production.
The journey to improve started with a challenge to get code from a workstation to production in less than 24 hours.
The first step in addressing this challenge was to value stream map our software delivery process. We discovered that taking a single change to production involved over 150 processes, 45 handoffs between teams, and 15 specialized roles. All of this added up to approximately 1,500 hours, of which 1,000 hours were simply spent waiting.
We took two approaches to address the issues discovered in the value stream map:
- Reduce the issues engineers encounter as they deliver code to test environments and perform related activities.
- Simplify the change process by reducing handoffs and automating manual tasks.
As an organization we have made huge strides in how we deliver change. For example, we encourage teams to make small and frequent changes to production and use far fewer handoffs between specialized teams for tasks like testing, scanning, and maintaining the application. These efforts have resulted in the average lead time to production being approximately 24 hours.
The Next Mountain to Climb
With many manual processes removed, increased team autonomy, and the use of the public cloud, State Farm teams are delivering software change with increased efficiency and speed. Not only has the pace changed dramatically, but the way we develop, deliver, and maintain software has evolved as well. However, these increased efficiencies have introduced other, unique challenges in the software delivery process.
Seeing is Believing
At State Farm, there are approximately 1,200 product teams working on technologies that range from the mainframe to the public cloud. Each of these product teams has different requirements, methods of delivery, and skillsets.
As teams navigate the software development lifecycle, various tools are used to accomplish tasks such as source code management, security scans, and deployments. Each of these tools contains a wealth of data that can provide insights into the actions being performed. We decided to use that data to paint an end-to-end picture. The goal was to take a data-driven approach to improve our understanding of how teams navigate the DevOps lifecycle, pinpoint issues that may slow down the process, and identify opportunities to enhance how we work in the future. By leveraging APIs and webhook functionality, and by wrapping command line interfaces (CLIs), we were able to collect additional data for analysis.
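One way to collect data from tools that don't emit events natively is to wrap their CLIs. A minimal sketch of that idea is below; the event fields and the `cli.invocation` type are illustrative, not the actual schema used at State Farm:

```python
import subprocess
import time

def run_and_emit(tool, args, emit):
    """Wrap a CLI invocation: run the tool, then emit an event
    describing the invocation. `emit` is any callable accepting a
    dict (in practice it would publish to the eventing framework)."""
    start = time.time()
    result = subprocess.run([tool, *args], capture_output=True, text=True)
    emit({
        "type": "cli.invocation",          # hypothetical event type
        "tool": tool,
        "args": args,
        "exit_code": result.returncode,
        "duration_secs": round(time.time() - start, 3),
    })
    return result.returncode

# Collect the emitted event locally by wrapping a harmless command.
collected = []
run_and_emit("echo", ["hello"], collected.append)
```

Because the wrapper only needs a callable, the same code can append to a list in a test or publish to a broker in production.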
We chose to build an eventing framework that would provide a means of broadcasting these events, building relationships across the events and tooling, and visualizing the events as they occur. The goal with the framework was to create a strong foundation upon which we could implement solutions that leverage real-time data in ways that transform how we deliver change in technology solutions at State Farm.
The Eventing Framework
The framework started with a few architectural principles:
- Our tools and systems should self-report when important events occur.
- Events should not be coupled to a process or system. Anyone can react to an event, and the publisher shouldn't need to care who does.
- Events should be tied to a code change and remain visible throughout the lifecycle of that change.
- Our event-driven framework should scale with the demand of our developers.
- Minimize the need for manual controls by providing APIs and CLIs for scripting.
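The principle of tying events to a code change suggests a message shape where every event carries an identifier for the change it belongs to. A minimal sketch, with all field names and values assumed for illustration:

```python
from dataclasses import dataclass, field, asdict
import time
import uuid

@dataclass
class DevOpsEvent:
    """Illustrative event message: the commit SHA ties the event to a
    code change so it can be followed across tools for that change's
    whole lifecycle."""
    source: str      # the tool that self-reported (e.g. "scm", "scanner")
    type: str        # what happened (e.g. "scan.completed")
    commit_sha: str  # the code change this event belongs to
    payload: dict = field(default_factory=dict)
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)

evt = DevOpsEvent(source="scanner", type="scan.completed",
                  commit_sha="abc123", payload={"findings": 0})
record = asdict(evt)  # ready to serialize and publish
```

Keeping the change identifier in every message is what later allows events from unrelated tools to be correlated.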
As events occur within our DevOps tooling, they are published to the eventing framework. The framework provides the following capabilities:
- Broadcasting – Event messages are broadcast to subscribed consumers.
- Visualization - The framework provides the ability for product teams to build dashboards that display the events they are interested in.
- Correlation - The framework helps build connectivity between these events and various tools within the organization.
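The broadcasting capability can be pictured as a simple publish/subscribe bus. The sketch below is an in-process toy (a real deployment would use a scalable message broker); the event types are made up:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process broadcaster. Publishers stay decoupled from
    subscribers: anyone can react to an event, and the publisher
    doesn't care who does."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subs[event_type].append(handler)

    def publish(self, event_type, event):
        # Broadcast the event to every subscribed consumer.
        for handler in self._subs[event_type]:
            handler(event)

bus = EventBus()
seen = []                                   # a dashboard might collect these
bus.subscribe("deploy.completed", seen.append)
bus.publish("deploy.completed", {"env": "prod", "commit": "abc123"})
```

The same publish call fans out to however many consumers exist, which is what keeps events decoupled from any one process or system.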
What the Future Holds
Over the last year we have worked with several areas across the organization to collect events from the software development lifecycle, such as those related to source control management tooling, security scans, and deployments.
We can leverage the information contained in these event messages to compile it into a graph database. This helps us build relationships between the actions performed throughout the software delivery process and the source code, artifacts, and infrastructure involved.
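The idea of relating delivery actions back to code, artifacts, and infrastructure can be sketched with a toy in-memory graph standing in for a real graph database. All node names and relationship labels here are illustrative:

```python
from collections import defaultdict

# Adjacency list standing in for a graph database: nodes are commits,
# artifacts, environments, etc.; edges record how one relates to another.
edges = defaultdict(list)

def relate(src, relation, dst):
    edges[src].append((relation, dst))

# Hypothetical events compiled into relationships:
relate("commit:abc123", "BUILT_INTO", "artifact:app-1.4.2")
relate("artifact:app-1.4.2", "DEPLOYED_TO", "env:prod")
relate("commit:abc123", "SCANNED_BY", "scan:9001")

def trace(node):
    """Walk outward from a node, returning everything it led to —
    e.g. follow a code change through build, scan, and deployment."""
    reached = []
    for relation, dst in edges[node]:
        reached.append((relation, dst))
        reached.extend(trace(dst))
    return reached

lineage = trace("commit:abc123")
```

A production graph database would answer the same question with a traversal query instead of a Python walk, but the relationship model is the same.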
As we continue our journey, the goal is to automate our development workflows, governance, and standards, ultimately improving the developer experience through transparency of policies and requirements. We continue to explore codifying our policy definitions across different topics so we can leverage this event data to guide teams as they navigate the delivery process.
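A codified policy is ultimately just a rule evaluated against collected event data. The sketch below shows one hypothetical policy (a production deploy requires a passing security scan for the same commit); the rule, event types, and fields are invented for illustration:

```python
def scan_passed_before_deploy(events):
    """Hypothetical policy check: every production deploy request must
    reference a commit with a passing security scan on record."""
    scanned = {e["commit"] for e in events
               if e["type"] == "scan.completed" and e["passed"]}
    for e in events:
        if e["type"] == "deploy.requested" and e["env"] == "prod":
            if e["commit"] not in scanned:
                return False, f"commit {e['commit']} has no passing scan"
    return True, "ok"

history = [
    {"type": "scan.completed", "commit": "abc123", "passed": True},
    {"type": "deploy.requested", "commit": "abc123", "env": "prod"},
]
ok, reason = scan_passed_before_deploy(history)
```

Because the policy is code, the same check can run automatically as events arrive and surface its verdict to the team in real time rather than during a later review.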
To learn more about technology careers at State Farm, or to join our team, visit https://www.statefarm.com/careers.