Automation is a vital aspect of DevOps work. Teams benefit from automating manual tasks, such as defining changes to code builds, and from reducing bottlenecks in processes like event prioritization during incident monitoring.
“When beginning the journey of building a new product, we make heavy upfront investments in automation,” SailPoint DevOps Director Marty Bowers said.
Those investments help Bowers and his team stay nimble and release product quickly, while also granting them the ability to repeat and scale the processes that work for them.
Though automation is popular among DevOps teams across industries, it isn’t one-size-fits-all. Each DevOps team has different goals and metrics for success, so their approaches to automation will differ.
For instance, DevOps Engineer Jake Newton and his team at Liquibase wanted to move away from manually updating Amazon Machine Images (AMIs), so they built virtual machine templates with the automation-friendly image builder Packer. The team also began running automated tests in parallel.
When adopting a new resource, automation-related or otherwise, the DevOps teams we spoke with stressed the importance of measuring side effects. Mastering a tool doesn’t happen overnight, and production time, product quality and other parts of a business could be negatively impacted if an ineffective solution is chosen. Bowers said DevOps pros should consistently evaluate the influence their latest approaches have on other areas of the business.
Taking risks can bring big rewards, but as a company scales, those risks should be more calculated, as there’s often more at stake. Bowers said DevOps teams should be mindful of the effects their choices in new tools and processes have on other key stakeholders.
What DevOps best practices have been most impactful for your team?
When beginning the journey of building a new product, we make heavy upfront investments in automation and continuous integration and delivery. These strategies allow the DevOps team to stay lean while getting features into our customers’ hands as quickly and safely as possible. As the team has matured and gone through this cycle multiple times, our focus has moved toward repeatability and reusability of code to expedite the overall process. And of course, we continually iterate on and improve our observability over all our SaaS products.
We’ve moved more from bleeding-edge to cutting-edge when it comes to our methods, tools and strategies.”
How does your team balance a need to utilize best practices versus a desire to test new resources?
Over the past several years, I’d say we’ve moved more from bleeding-edge to cutting-edge when it comes to our methods, tools and strategies. I think this transition naturally happens as a team or an organization matures. As our customer base grows, expectations around our Customer Satisfaction Score (CSAT), quality, uptime and stability have also grown. We have to weigh the risk of anything new that could compromise one or more of those expectations against the impact it yields. So being mindful of that risk and thoroughly testing is our only path forward.
Experimentation is a key part of discovering the tools that work best for any DevOps team. Liquibase DevOps Engineer Jake Newton shared how a discovery process was built into his team’s story point estimation. Newton and his colleagues built in extra time for their work processes to give them opportunities to explore more efficient ways of reaching their production goals.
What DevOps best practices have been most impactful for your team?
Automated testing and continuous integration. We started with a lot of manual testing and single-branch builds. Over time, we moved to parallel automated testing and multi-branch builds, which reduced our test cycle from around 48 hours to under four.
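The move from serial to parallel test runs can be sketched in miniature. This is a hypothetical illustration, not Liquibase’s actual pipeline: the suite names and four-worker count are placeholder assumptions, with `xargs -P` standing in for the parallelism a real CI system would provide.

```shell
# Hypothetical sketch: fan a test suite out across parallel workers.
# Each "suite" name is a stand-in for a real test job; xargs -P 4 keeps
# four jobs running at once instead of one after another.
printf '%s\n' suite_a suite_b suite_c suite_d suite_e suite_f |
  xargs -P 4 -I{} sh -c 'echo "{} passed"'
```

The same fan-out idea is what cuts a long serial cycle down: total wall-clock time approaches the longest single job rather than the sum of all jobs.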
We established the same consistent workflows and processes for all components and microservices. We implemented consistent branch naming: all work originates from a Jira ticket, and all implementation happens on a feature branch that references that ticket. Commits pushed to the branch trigger an automated build of that branch. We also use an isolated database per build for testing.
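A workflow like this is often enforced with a small check in CI or a git hook. The sketch below is hypothetical: the `PROJ` project key and the `feature/` prefix are illustrative assumptions, not Liquibase’s actual convention.

```shell
#!/bin/sh
# Hypothetical pre-push check: a branch must be a feature branch that
# references a Jira ticket, e.g. feature/PROJ-123-short-description.
# "PROJ" and the naming pattern are illustrative assumptions.
check_branch() {
  case "$1" in
    feature/PROJ-[0-9]*) echo "ok: $1" ;;
    main|master)         echo "ok: $1" ;;
    *) echo "rejected: $1 does not reference a Jira ticket" >&2
       return 1 ;;
  esac
}
```

Wired into a hook, it might run as `check_branch "$(git rev-parse --abbrev-ref HEAD)"`, failing the push before a misnamed branch ever reaches CI.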
By taking a little bit of extra time with each story, we’re able to chip away at product and non-product tech debt.”
How does your team balance a need to utilize best practices versus a desire to test new resources?
We encourage spike work by using branch-based development and by employing ephemeral instances with tools like Docker. These practices let our development team do exploratory testing while still maintaining a stable codebase.
We also try to pad story points a little so that we can investigate better tools or strategies instead of always working in the same way. Rather than manually updating Amazon Machine Images, we now use Packer templates, for example. We’ve also started transitioning our infrastructure over to Terraform to make it easier to maintain, scale and implement best practices. By taking a little bit of extra time with each story, we’re able to chip away at product and non-product tech debt with each sprint as opposed to trying to address it all at one time.
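A Packer template of the kind Newton describes replaces hand-edited AMIs with a declarative, repeatable build. This is a minimal sketch under stated assumptions: the region, instance type, base-image filter, and `provision.sh` script are placeholders, not Liquibase’s actual configuration.

```hcl
# Hypothetical Packer template: bake an AMI from a base image plus a
# provisioning script, instead of updating machine images by hand.
source "amazon-ebs" "app" {
  region        = "us-east-1" # assumption
  instance_type = "t3.micro"  # assumption
  source_ami_filter {
    filters = {
      name                = "amzn2-ami-hvm-*-x86_64-gp2"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    owners      = ["amazon"]
    most_recent = true
  }
  ssh_username = "ec2-user"
  ami_name     = "app-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.app"]
  provisioner "shell" {
    script = "./provision.sh" # hypothetical setup script
  }
}
```

With a template like this checked into version control, `packer build .` produces a fresh, timestamped AMI on every run, so image updates become a reviewable code change rather than a manual task.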