ABSTRACT / PROBLEM STATEMENT:
With the changing business landscape and increased internet and smartphone penetration, businesses must evolve rapidly to face challenges from new-age entities. The DevOps framework helps enterprises do just that, significantly reducing time-to-market by combining the twin functions of development and operations in the software development domain.
This is not an easy task: to incorporate a DevOps-driven development philosophy, organizations must radically transform their technology, processes, and support culture.
There are some key challenges that need to be kept in mind before embarking on this journey:
Using a lift-and-shift approach to create new environments means that applications may not be optimized to take advantage of the new environment, leading to inefficiency.
Legacy deployment methodologies do not allow built-in security across layers, leading to regulatory implications and legal and contractual liabilities. Misconfiguration and lack of change control lead to data breaches, identity and access management issues, and API vulnerabilities.
Version control at the application level has existed for many years, but in a traditional environment, versioning of the application environment itself is missing. This prevents reliable rollbacks when unexpected changes cause an impact, and failed deployments lead to ineffectiveness and inefficiency with a direct impact on the business.
SETTING UP THE CONTEXT:
WHAT IS DEVOPS?
DevOps is a step forward from Agile-driven development, where developers and operations teams work together to build, test, deploy, and monitor applications with speed, quality, and control. So, the next big question is: “Can DevOps be used for any kind of software project?” Well, DevOps is relevant to almost any kind of software project regardless of architecture, platform, or purpose. Common use cases include cloud-native and mobile applications, application integration and modernization, and multi-cloud management. Successful DevOps implementations generally rely on an integrated set of solutions, or a “toolchain,” to remove manual steps, reduce errors, increase team agility, and scale beyond small, isolated teams.
HOW HAS IT INFRASTRUCTURE HISTORICALLY BEEN PROVISIONED?
In a legacy on-premises development environment, setting up IT infrastructure was a manual process involving:
● Physical installations of servers.
● Manual configuration of hardware to project-specific requirements and installation of the operating system, databases, application servers, and other tools.
● Finally, the application must be deployed to the hardware.
● Only then can your application be launched.
As you can imagine, this process has multiple drawbacks:
● Long procurement lead times: With multiple approvals needed for asset purchase and the hardware delivery lead time, it can take a long time to acquire the necessary hardware.
● Staffing overhead: Setting up physical servers requires hiring resources to perform the setup work. A corporation will need network engineers to set up the physical network infrastructure, storage engineers to maintain physical drives, and many others to maintain all of this hardware. That leads to more overhead, management, and costs.
● High maintenance charges: Maintaining data centers means paying maintenance and security employees, HVAC and electricity expenses, and many other costs.
● Limited availability: Achieving high availability of applications is a challenge. A corporation would have to build a backup data center, which could increase real estate and the other costs mentioned above.
THEN CAME CLOUD COMPUTING!
This was no less than a tech revolution since it fundamentally changed the way businesses purchased and used infrastructure.
Affordable, scalable, and easy to procure, it seemed like a godsend for enterprises! But even this has its limitations, especially for enterprises that want or need to keep data centers on-premises.
Moreover, cloud computing alone was not enough to automate application deployment and application version control. Luckily, those enterprises now have a new solution to their infrastructure woes: Infrastructure as Code.
Infrastructure as Code (IaC) refers to the management of data centers through code rather than through a manual process such as physical hardware configuration. The technology is used widely in cloud computing as it helps to solve issues with utility computing and second-generation web frameworks. It can also be used in on-premises data centers, allowing IT teams to manage their infrastructure as if it were in the cloud.
So, let’s get into a bit of detail regarding what Infrastructure as Code (IaC) is all about!
WHAT IS INFRASTRUCTURE AS CODE/INFRA-OPS IN DEVOPS?
Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning as the DevOps team uses for source code.
IaC is a key DevOps practice and is used in conjunction with continuous delivery.
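The “descriptive model” idea can be sketched in a few lines of Python: the environment is ordinary data, and only the serialization needs care so that version-control diffs stay readable. All resource names and sizes below are hypothetical.

```python
import json

# Illustrative sketch: an application environment described as plain data, so it
# can be committed to the same version control system as the application source.
environment = {
    "network": {"vpc_cidr": "10.0.0.0/16", "subnets": ["10.0.1.0/24", "10.0.2.0/24"]},
    "virtual_machines": [
        {"name": "app-01", "cpu": 4, "memory_gb": 16},
        {"name": "app-02", "cpu": 4, "memory_gb": 16},
    ],
    "load_balancer": {"listeners": [443], "targets": ["app-01", "app-02"]},
}

def serialize(env: dict) -> str:
    """Render the model deterministically so version-control diffs stay meaningful."""
    return json.dumps(env, indent=2, sort_keys=True)
```

A real IaC tool adds a language, a provider layer, and state management on top of this, but the core contract is the same: the environment is text that can be reviewed, diffed, and rolled back.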
If you simply try to break this term down the way the term DevOps breaks down, i.e., as Infrastructure Operations, you may want to re-think it. That said, any term can be adopted within your organisation if the appropriate definitions are applied and accepted. We have defined it as “AEP”, i.e., Application Environment Provisioning.
Infrastructure Operations is the layer consisting of the management of the physical and virtual environment, which may very well be on-premises or in the cloud.
BENEFITS OF “APPLICATION ENVIRONMENT PROVISIONING” (AEP)
● Faster time to production/market
AEP dramatically speeds the process of provisioning infrastructure for development, testing, and production (and for scaling or taking down production infrastructure as needed). Because it codifies and documents everything, AEP can even automate provisioning of legacy infrastructure, which might otherwise be governed by time-consuming processes (like pulling a ticket).
● Improved consistency—less ‘configuration drift’
Configuration drift occurs when ad-hoc configuration changes and updates result in mismatched development, test, and deployment environments. This can cause issues at deployment, security vulnerabilities, and risks when developing applications and services that need to meet strict regulatory compliance standards. AEP prevents drift by provisioning the same environment every time.
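As a minimal illustration, drift detection amounts to diffing the declared model against the observed state; the settings and values below are hypothetical.

```python
# Illustrative sketch: detecting configuration drift by comparing the declared
# environment model against what is actually running.
declared = {"java_version": "11", "max_heap_mb": 2048, "tls": "1.2"}
observed = {"java_version": "11", "max_heap_mb": 4096, "tls": "1.2"}  # ad-hoc edit

def detect_drift(declared: dict, observed: dict) -> dict:
    """Return settings whose observed value no longer matches the declared one."""
    return {
        key: (declared.get(key), observed.get(key))
        for key in declared.keys() | observed.keys()
        if declared.get(key) != observed.get(key)
    }

drift = detect_drift(declared, observed)
# An AEP workflow remediates by re-applying the declared model,
# rather than patching the running environment by hand.
```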
● Faster, more efficient development
By simplifying provisioning and ensuring infrastructure consistency, AEP lets teams confidently accelerate every phase of the software delivery lifecycle. Developers can quickly provision sandboxes and CI/CD environments. QA can quickly provision full-fidelity test environments. Operations can quickly provision infrastructure for security and user-acceptance testing. And when the code passes testing, the application and the production infrastructure it runs on can be deployed in one step.
● Protection against churn
In organizations without AEP, provisioning is typically delegated to a few skilled engineers or IT staffers to maximize efficiency. If one of these specialists leaves the organization, others are sometimes left to reconstruct the process. AEP ensures that provisioning intelligence always remains with the organization.
● Lower costs and improved ROI
In addition to dramatically reducing the time, effort, and specialized skill required to provision and scale infrastructure, AEP enables developers to spend less time on plumbing and more time developing innovative, mission-critical software solutions.
● Speed and simplicity
AEP allows you to spin up an entire infrastructure architecture by running a script. Not only can you deploy virtual servers, but you can also launch pre-configured databases, network infrastructure, storage systems, load balancers, and any other service that you may need. You can do this quickly and easily for development, staging, and production environments. This can make your software development process much more efficient (more about this later). Also, you can easily deploy standard infrastructure environments in other regions where your provider operates so they can be used for backup and disaster recovery. You can do all this by writing and running code.
● Configuration consistency
Standard operating procedures can help maintain some consistency in the infrastructure deployment process. But human error may leave subtle differences in configurations that may be difficult to debug.
AEP completely standardizes the setup of infrastructure so there is a reduced possibility of any errors or deviations. This will decrease the chances of any incompatibility issues with your infrastructure and help your applications run more smoothly.
● Minimization of risk
Imagine having a lead engineer be the only one who knows the ins and outs of your infrastructure setup. Now imagine that engineer leaving your company. What would you do then? There’d be a bunch of questions, some fear and panic, and many attempts at reverse engineering.
Not only does AEP automate the process, but it also serves as documentation of the proper way to instantiate infrastructure, and as insurance against employees leaving your company with institutional knowledge.
Configurations are bound to change to accommodate new features, additional integrations, and other edits to the application’s source code. If an engineer edits the deployment protocol, it can be difficult to pin down exactly what adjustments were made and who was responsible. Because code can be version-controlled, AEP allows every change to your server configuration to be documented, logged and tracked.
These configurations can be tested, just like code. So if there is an issue with the new setup configuration, it can be pinpointed and corrected much more easily, minimizing risk of issues or failure.
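Testing a configuration like code can be as simple as running assertion-style rules against it before it is applied. The rules below are hypothetical policy checks, not a real compliance standard.

```python
# Sketch of validating an environment configuration the way unit tests validate code.
def validate(config: dict) -> list:
    """Return a list of human-readable violations; an empty list means the config passes."""
    errors = []
    # Note: string comparison of versions is a simplification for this sketch.
    if config.get("tls_min_version", "1.0") < "1.2":
        errors.append("TLS below 1.2 is not allowed")
    if not config.get("backups_enabled", False):
        errors.append("backups must be enabled")
    if config.get("instance_count", 0) < 2:
        errors.append("at least 2 instances required for high availability")
    return errors

good = {"tls_min_version": "1.2", "backups_enabled": True, "instance_count": 2}
bad = {"tls_min_version": "1.0", "instance_count": 1}
```

Run in a CI pipeline, a check like this rejects a bad configuration before it ever reaches an environment, which is exactly where pinpointing and correcting an issue is cheapest.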
● Increased efficiency in software development
Developer productivity drastically increases with the use of AEP. On-Premises architectures can be easily deployed in multiple stages to make the software development life cycle much more efficient. Developers can launch their own sandbox environments to develop in.
QA can have a copy of production that they can thoroughly test. Security and user acceptance testing can occur in separate staging environments. And then the application code and infrastructure can be deployed to production in one move. Infrastructure as Code allows your company to use Continuous Integration and Continuous Deployment techniques while minimizing the introduction of human errors after the development stage. You can also include in your AEP script the spinning down of environments when they are not in use. This will shut down all the resources that your script created, so a corporation won’t end up with a bunch of orphan components that everyone is too afraid to delete. This will further increase the productivity of your engineering staff by keeping the infrastructure clean and organized.
● Cost savings
Automating the infrastructure deployment process allows engineers to spend less time performing manual work and more time executing higher-value tasks. Because of this increased productivity, your company can save money on hiring costs and engineers’ salaries. As mentioned earlier, your AEP script can automatically spin down environments when they’re not in use, which will further save on computing costs.
HOW AEP IS BUILT TO ADDRESS THE PROBLEM STATEMENT
“Application Environment” consists of both physical/virtual hardware and software components, which can be defined in terms of 3 main areas, i.e.,
● Infrastructure,
● Configuration, and
● Dependencies
Infrastructure is the most important element of the environment, as it defines where the application will run, the specific configuration needs, and how dependencies need to interact with the application.
The configuration is the next most important aspect of the application environment. Configuration dictates both how the application behaves in a given infrastructure and how the infrastructure behaves in relation to the underlying application. Dependencies are all the different modules or systems an application depends on, from libraries to services or other applications.
Another often overlooked aspect of the application environment in the financial domain is hardening, which spans two layers: server hardening across the operating system and middleware, and application hardening across the application code. Having an AEP solution brings in capabilities like application onboarding, self-service infrastructure provisioning, and on-demand test environment provisioning, along with the ability to scale up the application environment as business demands.
Addressing these business challenges with a completely integrated, automated application environment provisioning solution meant breaking down the 3 key areas mentioned above and automating each of them, i.e., infrastructure provisioning automation and configuration management automation, while also providing the ability to perform application and server hardening as part of the provisioning workflow.
In order to achieve this, the following steps were followed:
1. Getting the silos together and setting the process workflow:
The approach involves creating a ticket-based system to moderate incoming infrastructure requests, enabling self-service provisioning and application onboarding automation. This is done by integrating with infrastructure management tools for IP address management, storage management, network management, and firewall management, as these are silos that often work individually to manage the infrastructure and network-level dependencies required for an application to work smoothly. A ticket-based system for AEP removes the overhead of approvals and documentation required if done manually, while maintaining audit compliance through change requests.
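The flow above can be sketched conceptually as a request passing through each silo in turn, with an audit trail accumulated along the way. The three step functions are stand-ins: in practice they would call the IP address management, storage management, and firewall management tools, and the values they set are hypothetical.

```python
# Conceptual sketch of the ticket-driven provisioning flow across silos.
def allocate_ip(ticket):
    ticket["ip"] = "10.0.1.25"  # hypothetical address returned by an IPAM tool
    return ticket

def allocate_storage(ticket):
    ticket["volume_gb"] = ticket.get("volume_gb", 100)  # default request size
    return ticket

def open_firewall(ticket):
    ticket["open_ports"] = [443]  # hypothetical rule pushed to firewall management
    return ticket

SILO_STEPS = [allocate_ip, allocate_storage, open_firewall]

def process_ticket(ticket: dict) -> dict:
    """Run one provisioning request through each silo in order, keeping an audit trail."""
    ticket["audit"] = []
    for step in SILO_STEPS:
        ticket = step(ticket)
        ticket["audit"].append(step.__name__)
    ticket["status"] = "provisioned"
    return ticket
```

The audit list is what makes the ticket defensible at compliance time: every silo touch is recorded as part of the change request.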
2. Automate Infrastructure Provisioning
The core aspect of any application environment is the infrastructure. By introducing IaC using Terraform scripts for virtual provisioning and Digital Rebar workflows for physical provisioning, and by introducing Ansible to run workflows for storage commissioning, corporations are able to get application infrastructure commissioned in a matter of hours instead of days.
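A provisioning workflow typically drives Terraform non-interactively through its standard init/plan/apply cycle. The sketch below only assembles the commands (it does not execute them); the working directory and plan file name are hypothetical, while the Terraform flags (`-chdir`, `init -input=false`, `plan -out`, `apply` with a saved plan) are real CLI options.

```python
# Sketch of how an automation workflow might build Terraform invocations.
def terraform_commands(workdir: str, plan_file: str = "tfplan") -> list:
    """Build the ordered CLI invocations a workflow would run against `workdir`."""
    return [
        ["terraform", f"-chdir={workdir}", "init", "-input=false"],
        ["terraform", f"-chdir={workdir}", "plan", "-input=false", f"-out={plan_file}"],
        ["terraform", f"-chdir={workdir}", "apply", "-input=false", plan_file],
    ]

cmds = terraform_commands("envs/uat")
# A workflow engine would execute each with subprocess.run(cmd, check=True),
# gating the apply step behind the change-request approval.
```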
3. Automate Application Middleware Installations
Applications, i.e., application code, require middleware on which they can be deployed. While every organization follows its own practices for middleware installation, adherence to these practices is required to maintain compliance. To achieve this, custom scripts and workflows are created which can be clubbed together or used individually while automating provisioning requests.
4. Automate Security Hardening
One of the most important aspects of an application environment in the financial domain is the set of application hardening parameters, which includes hardening of application code, middleware components, and OS parameters. By creating custom workflows in UrbanCode Deploy, management of application hardening was made possible as part of the provisioning request. This not only gives the business the ability to apply hardening as part of provisioning, but also allows it to roll back, configure, alter, and version the hardening changes for each environment uniquely using the component version feature of UrbanCode Deploy.
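The versioning idea can be illustrated with a small sketch: each environment records which hardening baseline version it runs, so rolling back is just re-applying an older version. The baseline contents and version numbers below are hypothetical, not real compliance settings.

```python
# Sketch of versioned hardening baselines, mirroring the component-version idea.
BASELINES = {
    "1.0": {"ssh_root_login": "no", "tls_min": "1.2"},
    "1.1": {"ssh_root_login": "no", "tls_min": "1.3", "password_max_age_days": 60},
}

def apply_baseline(environment: dict, version: str) -> dict:
    """Stamp an environment with a specific hardening baseline version."""
    environment["hardening"] = dict(BASELINES[version])  # copy, don't alias the baseline
    environment["hardening_version"] = version
    return environment

env = apply_baseline({"name": "uat"}, "1.1")
env = apply_baseline(env, "1.0")  # rollback is simply re-applying the older version
```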
By following steps 1 to 4, we were able to create application environment blueprints that included all 3 main areas of the environment, enabling teams to modify, scale, and replicate environments on the go.
HOW DATAMATO LEVERAGES THE URBANCODE FAMILY, IBM RFT, TERRAFORM, AND ANSIBLE TO ACCELERATE APPLICATION ENVIRONMENT PROVISIONING
● IBM UrbanCode Family (UDeploy)
➔ UrbanCode Deploy is a tool for automating application deployments through your environments. It is designed to facilitate rapid feedback and continuous delivery in agile development while providing the audit trails, versioning, and approvals needed in production.
● IBM UrbanCode Family (Blueprint Designer)
➔ The Blueprint Designer is a separately installed component that is included with the UrbanCode Deploy product suite. While UrbanCode Deploy drives deployment automation of applications into existing environments, Blueprint Designer accelerates application testing and deployment by enabling the design and provision of new environments in the cloud, and application deployments to those environments.
➔ You can establish a CI/CD pipeline using Blueprint Designer templates to create and destroy short-term test environments to quickly test your application changes. Additionally, you can provision and manage long-term production environments. Each blueprint can represent a full-stack environment, including infrastructure, middleware and application layers.
● IBM RFT
➔ Rational Functional Tester is an object-oriented automated functional testing tool capable of performing automated functional, regression, GUI, and data-driven testing. RFT supports a wide range of applications and protocols, such as HTML, Java, .NET, Windows, Eclipse, SAP, Siebel, Flex, Silverlight, Visual Basic, Dojo, GEF, and PowerBuilder applications.
● Terraform
➔ Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular cloud service providers as well as custom in-house solutions.
➔ Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.
● Ansible
➔ Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems and can configure both Unix-like systems and Microsoft Windows.
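The incremental execution plans described above for Terraform can be illustrated conceptually as a diff between current and desired state. This is not Terraform’s actual algorithm, and the resource names are hypothetical; it only shows the create/update/destroy classification idea.

```python
# Conceptual sketch of deriving an execution plan from desired vs. current state.
def plan(current: dict, desired: dict) -> dict:
    """Classify each resource as create, update, or destroy."""
    return {
        "create": sorted(desired.keys() - current.keys()),
        "destroy": sorted(current.keys() - desired.keys()),
        "update": sorted(k for k in current.keys() & desired.keys()
                         if current[k] != desired[k]),
    }

current = {"vm-app": {"cpu": 2}, "vm-old": {"cpu": 1}}
desired = {"vm-app": {"cpu": 4}, "vm-db": {"cpu": 8}}
changes = plan(current, desired)
```

Because the plan is computed before anything is executed, it can be reviewed and approved, which is what makes incremental infrastructure changes auditable.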
By building AEP, the challenges in the problem statement above have been mitigated. Application and infrastructure provisioning that initially took more than 3 to 4 weeks now comes down to a matter of hours.
AEP is a highly productive form of configuration management that focuses on automating cloud IT infrastructure management. Once AEP is in place, it can be used to achieve CI/CD-level automation for changes to a project’s infrastructure. AEP also improves communication and transparency around infrastructure changes. AEP requires a set of dependencies, like hosting platforms and automation tools, that are widely available from modern hosting companies.
Infrastructure as Code can simplify and accelerate your infrastructure provisioning process, help you avoid mistakes and comply with policies, keep your environments consistent, and save your company a lot of time and money. Your engineers can be more productive and focus on higher-value tasks. And you can better serve your customers.
If you are not doing this yet, it is time to start building your own AEP.