The continued spotlight on DevOps is understandable: the theory is that it bridges the traditional gap between development and operations teams, so that they can work together in a seamless, end-to-end way. This contributes to faster, more efficient software releases, which in turn improves time-to-market for products, services and enhancements to internal projects, and can even improve compliance with legislation or industry standards.
By Sven Erik Knop, Principal Solutions Engineer at Perforce Software.
When it works well, DevOps is also a natural partner to other methodologies, such as Agile, Continuous Integration and Continuous Delivery. These all share common attributes: rapid feedback that enables a faster response to change and customer requirements, and greater collaboration and transparency among contributors.
The theory is sound, but practical implementation is a different matter, and the truth is that many organizations are still finding their way with DevOps. The good news is that some organizations have forged ahead with DevOps adoption and, as they share their experiences, there is a growing body of knowledge that can be passed along to their peers.
While DevOps is going to differ from organization to organization, some common threads have emerged. For instance, central to most DevOps visions is having a single, unified view of a software project. Often referred to as the ‘single source of truth’, the idea is to have everything in one place: not just code, but all the components of the environments in which that code runs (applications, plus production and pre-production environments). The goal is to improve communication, visibility and the ability to unearth and address issues early on.
The ‘single source of truth’ also gives teams the ability to recreate a production environment from what is stored there (typically in a version control system), so that they can see how an application behaves in a realistic production environment, rather than discovering problems only after release, which is far from ideal if the software is already in customers’ hands.
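To make the idea concrete, here is a minimal sketch of recreating an environment from a versioned manifest. The manifest format, component names and version numbers are all hypothetical; the point is only that when every component is pinned in the single source of truth, rendering the manifest at any point in history reproduces the same environment.

```python
import json

# Hypothetical environment manifest as it might be stored in version control:
# every component of the production environment is pinned to an exact version.
MANIFEST = json.loads("""
{
  "app":      {"image": "shop-frontend", "version": "2.4.1"},
  "database": {"image": "postgres",      "version": "14.2"},
  "cache":    {"image": "redis",         "version": "6.2.6"}
}
""")

def render_environment(manifest):
    """Turn a versioned manifest into concrete run commands.

    Because the manifest lives in version control, rendering the manifest
    from any revision reproduces the environment as it was at that revision.
    """
    return [
        f"run {spec['image']}:{spec['version']}  # role: {role}"
        for role, spec in sorted(manifest.items())
    ]

for line in render_environment(MANIFEST):
    print(line)
```

In practice the "run commands" would be container or provisioning calls, but the principle is the same: the environment is derived from versioned data, not assembled by hand.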
Problems can be corrected and, because all versions are in the system, it is possible to roll back to a previous version and pinpoint where a problem was introduced. Also, giving developers the ability to run tests on code as part of their everyday work, rather than waiting for the operations team to create test cases, means that problems can be spotted and fixed early. This is also good news for the operations team, because consistency and error prevention are built into the process from an early stage. Plus, developers can experiment safely and be more innovative, knowing that they can roll back to a previous version without changing the final production environment.
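The rollback behaviour described above can be sketched in a few lines. This is not how any particular version control system is implemented; it is a toy model showing why keeping every version makes experimentation safe: a bad change is never destructive, because the known-good state can always be restored.

```python
class VersionedFile:
    """Toy model of version-control-style history with rollback.

    Every change is kept in an append-only history, so any previous
    state can be restored and inspected to find where a problem began.
    """

    def __init__(self, initial):
        self._history = [initial]   # append-only list of versions

    def commit(self, content):
        self._history.append(content)

    def head(self):
        return self._history[-1]

    def rollback(self, steps=1):
        """Restore an earlier version by committing it again, so the
        rollback itself is also recorded in the history."""
        target = self._history[-(steps + 1)]
        self._history.append(target)
        return target

f = VersionedFile("v1: working build")
f.commit("v2: risky experiment")
print(f.rollback())   # restores the known-good version
```

Note that rolling back appends rather than deletes: the failed experiment stays in the history, which is exactly what makes it possible to study where things went wrong.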
Good DevOps practice includes as much automation and self-service as possible, particularly within large-scale projects. Version control systems can automate a lot of the work involved, such as notifying the release automation system that a release is ready based on the latest change. With that foundation in place, it becomes easier to spin up production environments, or servers to host applications, because the manual effort has been removed. This is why we often see infrastructure as code (IaC) alongside DevOps implementations. IaC is a virtualised approach in which communication services, memory, network interfaces and processors are governed by software-defined configurations.
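A version-control trigger of the kind described above might look like the following sketch. The branch name, event name and payload fields are illustrative assumptions, not the API of any real release automation system; the network call is omitted so the example stays self-contained.

```python
import json

# Hypothetical post-commit trigger: when a change lands on the release
# branch, build the notification that a release automation system would
# receive. Field names here are illustrative, not a real API.
RELEASE_BRANCH = "main"

def on_commit(branch, change_id, description):
    """Return a release notification for release-branch commits,
    or None when no automation should fire."""
    if branch != RELEASE_BRANCH:
        return None
    return json.dumps({
        "event": "release-candidate",
        "change": change_id,
        "description": description,
    })

print(on_commit("main", 1042, "Fix checkout rounding error"))
print(on_commit("feature/x", 1043, "Work in progress"))  # no trigger fires
```

Most version control systems offer some hook or trigger mechanism along these lines; the value is that release automation reacts to the single source of truth rather than to a manual hand-off.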
There are a few caveats: it is likely that contributors will have their own preferred platforms, systems, toolsets and workflows. For instance, in the electronics market, the binary file assets created by analog designers are not intuitively designed for easy sharing. Likewise, embedded software managers might have code, library and object files spread across different platforms, each with their own configuration requirements. Expect to see similar disparities in other industries, such as financial services, pharmaceutical and life sciences and semiconductor design.
So, it is probably going to be very important that the tool being used for the ‘single source of truth’ – typically a version control system – can support a variety of users and working environments, providing the transparency that the whole group needs. However, this tool needs to avoid imposing new workflows and technologies on workers (as we all know, people don’t like having their favourite tools taken away from them, or being told to use new ones). Make sure there are clean integrations with the main tools that users prefer, whether ‘out of the box’ or easily created using an open API.
Furthermore, this single source of truth needs to be able to support a wide variety of digital assets, binary and non-binary. Depending on the industry, this might easily include art files, CAD drawings and support documentation. Failing to include all the files associated with a project could not only affect the final product, but could also derail compliance. On this point, the single source of truth can also make work easier for external auditors, while reducing the amount of time that internal teams need to spend collating compliance information (because it is already stored in the single source of truth).
For this reason, the system should also provide an immutable history of events: in other words, the facts cannot be changed retrospectively. When a piece of code is checked in, for instance, the record of that event cannot be altered, regardless of what happens to that code later.
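One common way to make a history tamper-evident is hash chaining, where each record embeds a hash of the one before it. The sketch below is a simplified illustration of that general technique, not the mechanism of any specific product: altering any past entry breaks every subsequent hash, so retrospective edits are detectable.

```python
import hashlib

def _digest(prev_hash, event):
    """Hash of this event chained to the hash of the previous record."""
    return hashlib.sha256((prev_hash + event).encode()).hexdigest()

class EventLog:
    """Append-only event history with a tamper-evident hash chain."""

    def __init__(self):
        self._entries = [("genesis", "0" * 64)]   # (event, chained hash)

    def record(self, event):
        prev_hash = self._entries[-1][1]
        self._entries.append((event, _digest(prev_hash, event)))

    def verify(self):
        """Recompute the chain; any altered record breaks every later hash."""
        prev_hash = "0" * 64
        for event, stored in self._entries[1:]:
            if stored != _digest(prev_hash, event):
                return False
            prev_hash = stored
        return True

log = EventLog()
log.record("check-in: change 101")
log.record("check-in: change 102")
print(log.verify())   # True: history is intact

# Retrospectively rewrite the first check-in record...
log._entries[1] = ("check-in: change 666", log._entries[1][1])
print(log.verify())   # False: tampering is detected
```

The same idea, in more robust forms, underpins the integrity guarantees of many version control and audit systems.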
It is also important not to focus just on the initial stages of software development, but to look at the entire digital asset lifecycle, with a unified pipeline that enables automation and testing at every stage of a digital asset’s life.
These stages include ideation (such as a feature request), definition, design (including requirements), development, testing, deployment, release and maintenance. This end-to-end view makes it easier to ensure that nothing is accepted that deviates from the ‘single source of truth’ (or, if it does, it is easier to spot) – for instance, a developer being asked to work on something that was not specified in the original set of requirements.
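A simple traceability gate can enforce exactly this. In the hypothetical sketch below, the `REQ-` identifiers and the `KNOWN_REQUIREMENTS` set stand in for requirements recorded in the single source of truth; any change that does not trace back to one is flagged rather than silently accepted.

```python
# Sketch of a traceability gate: a change is accepted into the pipeline
# only if it references a requirement recorded in the single source of
# truth. Requirement IDs and the "REQ-" convention are illustrative.
KNOWN_REQUIREMENTS = {"REQ-101", "REQ-102", "REQ-205"}

def accept_change(description, referenced_requirement):
    """Gate a change at any pipeline stage: unspecified work is flagged."""
    if referenced_requirement not in KNOWN_REQUIREMENTS:
        return (False, f"rejected: {referenced_requirement!r} is not in the "
                       "single source of truth")
    return (True, f"accepted: {description} "
                  f"(traces to {referenced_requirement})")

print(accept_change("Add export button", "REQ-101")[1])
print(accept_change("Undocumented tweak", "REQ-999")[1])
```

Because the same check can run at every stage – commit, build, test, release – deviations surface as soon as they appear, rather than at the end of the pipeline.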
Finally, consider how fast a project can scale and whether the ‘single source of truth’ can accommodate escalating growth, users and complexity. After all, what starts off as an idea for a simple application can easily evolve into something more ambitious, with more features and involving gigabytes, if not terabytes, of digital files in the development process.
DevOps is, of course, fundamentally a methodology and is primarily about processes, user behaviour and cultural adoption. That said, some of the most successful DevOps projects in which we have been involved have embraced the role that supporting technology tools play in achieving good DevOps. While creating a ‘single source of truth’ and other technology approaches are only part of the recipe for DevOps success, they make a significant contribution.