TL Consulting Group


Application Modernisation with VMware Tanzu

The Need for Accelerating Application Modernisation:
Building innovative, modern apps and modernising existing software are key imperatives for organisations today. Modern apps are essential for deepening user engagement, boosting employee productivity, offering new services, and gaining new data-driven insights. But to maximise the impact of modern apps, organisations need to deliver them rapidly, fast enough to keep up with swiftly changing user expectations and emerging marketplace opportunities.

According to Google's CIO Guide to App Modernisation, 70-80% of C-level executives report that their IT budgets are spent on managing legacy applications and infrastructure, with legacy systems consuming almost 76% of IT spend. Despite this large investment in legacy applications, most businesses fail to see their digital transformation plans through to a satisfactory conclusion. At the same time, the constantly changing digital behaviours of consumers, and the evolution of viable, reduced-opex, self-sustaining infrastructure models better suited to today's pace of technological change, are the primary drivers pushing application modernisation up the CIO/CTO's list of priorities. According to a study conducted by Google, public cloud adoption alone can reduce IT overheads by 36-40% when migrating from traditional IT frameworks. Application modernisation can help reduce them further: it frees up the IT budget to make space for innovation and to explore new business value opportunities. Lastly, this digital transformation brings greater agility, flexibility, and transparency while opening operations up to the benefits of modern technologies such as AI, DevSecOps, intelligent automation, and IoT.
Challenges to Adoption: As per the State of Kubernetes survey 2021, enterprises face five key challenges with cloud-native and Kubernetes adoption, with lack of experience and expertise in implementation and operations at the top of the list. As more and more businesses move rapidly towards cloud-native practices to enable agility and faster time to market, the operational impact on the business can vary over time. While these challenges bring complexity, it is less complicated and cheaper to address them from the very beginning, as part of the cloud strategy.

The VMware Tanzu portfolio: Containers are very lightweight: a container image can be created and a container deployed in a matter of seconds. VMware's decades of experience in virtualisation and its quest for innovation drove the introduction of the VMware Tanzu portfolio. The VMware Tanzu portfolio empowers developers to rapidly build modern apps across a multi-cloud landscape while simplifying operations by using Kubernetes as the underlying platform. VMware Tanzu is an essential component of the growing VMware App Modernisation portfolio, which provides enterprises the tools and technology needed to build new applications and modernise their existing application suites. Using the Tanzu portfolio, organisations can rapidly, and continuously, deliver the modern apps that are vital for achieving their strategic goals.

Fast-tracking modern app delivery: Tanzu helps developers deliver modern apps with a quick turnaround and greater reliability. Organisations can use that speed to better address quickly evolving business requirements and changing priorities.

Flexibility with Kubernetes: With Tanzu, organisations can run Kubernetes in their private clouds, in on-premises datacentres, in public clouds, and at the edge.
This flexibility helps organisations align application and cloud decisions with technical and operational requirements.

Simplified Operations: Deploying and managing applications across multiple clouds and environments brings new operational challenges. Tanzu provides tools to manage, govern, and secure all Kubernetes clusters centrally, irrespective of where they reside. As a result, operations teams can meet application security and reliability expectations while controlling costs.

Stronger DevOps Collaboration: Tanzu helps alleviate the tension between rapid development goals and stable operations. It transforms the DevOps relationship by giving operations teams what they need to support fast release cycles.

VMware Tanzu Value Proposition: The core principles underlying the vision for VMware Tanzu are entirely consistent with VMware's promise to help customers run any app on any cloud and to drive Kubernetes adoption, ensuring that businesses do not need to invest in additional code or training.

How can TL Consulting help organisations with the modernisation journey with VMware Tanzu? Cloud-native adoption requires a mindset shift, one that drives culture and process change across the organisation, in its IT landscape and in its technology choices throughout the stack, with IT at the focal point of the enterprise's business strategy. This transformation requires new applications to be developed and delivered with a quick turnaround and with greater reliability and quality. Transforming an existing application into a modern app is a complex process with little or no guaranteed path to success. A successful transformation requires changes not only to your organisation's technology but also to its people-centred assets: culture, process, and leadership need to change to keep up with the new ecosystem. Because cloud native is still so new, most organisations lack the experience to navigate the transformation road on their own.
It's all too easy to get lost. TL Consulting is well positioned, with certified and experienced professionals, to help your organisation define and drive your vision with a "customer first" approach and a cloud-native philosophy. We will understand your business objectives, long-term strategies, and the risks involved, apply a pragmatic assessment matrix, and formulate a tailor-made transformation roadmap. We will also assist in the design, architecture, and implementation of the transformation to deliver highly reliable, secure modern apps with a faster time to market.

Service Offerings: TL Consulting brings its consulting and engineering personnel to application modernisation adoption and implementation by providing the range of services below.

Summary: Adopting and implementing a cloud-native transformation is not an easy feat. Careful thought and planning are required to adopt a cloud-native strategy and roadmap. Enterprise architects, CTOs, and CIOs thinking about transforming their organisation to support the cloud-native world should consider standardising their platform and services on a cloud-native platform like VMware Tanzu to gain the maximum benefit from the transformation. While adopting cloud-native applications can be exciting, the journey demands careful planning and disciplined execution to succeed.



Pressure on teams to modernise applications

As many organisations move towards a cloud-native approach, the need to modernise applications using new platforms and products is inevitable. But are the expectations on teams too much? With agile delivery being the norm, teams are empowered to experiment, align capacity to continuously learn, and are encouraged to fail fast. That said, there is increasing pressure on teams to cut corners and adapt to tools and engineering standards as they deliver. In TL Consulting's opinion, this is when most teams fail to adopt Kubernetes and other modern technology correctly. Issues begin to appear right through the build pipeline, most commonly with security, multi-cloud integration, compliance, governance, and reliability.

Embedding modern engineering standards
Organisations often opt for a lift-and-shift approach to reduce OPEX and/or CAPEX. However, the underlying code is often not mature enough to be decoupled correctly and housed within a container. This requires considerable rework and creates an anti-pattern for software engineering teams. Instead, to move from the traditional 3-tier architecture and implement new technical stacks, new development principles for cloud applications, such as the Twelve-Factor App, need to be embraced. Further levels of DevSecOps automation and infrastructure as code need to become the engineering standard too.

The Twelve-Factor App
The Twelve-Factor App is a methodology providing a set of principles for enterprise engineering teams. As with microservices architecture, teams can leverage the similarities of these principles to embed engineering strategies. This does require highly skilled engineers to create models that can be adopted and reused by development teams.

Engineering support
With these types of expectations placed on immature development teams, the pressure and demand on resources impact performance and quality.
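One Twelve-Factor principle, storing configuration in the environment rather than in code, can be illustrated with a minimal sketch. The setting names and defaults below are assumptions for illustration, not part of any prescribed standard:

```python
import os

def load_config(env=os.environ):
    """Read service configuration from the environment (Twelve-Factor,
    factor III). Defaults here are illustrative; a real service might
    fail fast on missing mandatory settings instead of defaulting."""
    flags = env.get("FEATURE_FLAGS", "")
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost:5432/app"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "feature_flags": flags.split(",") if flags else [],
    }

# The same build artefact picks up different settings per environment:
staging = load_config({"DATABASE_URL": "postgres://staging-db/app",
                       "LOG_LEVEL": "DEBUG"})
```

The point of the pattern is that the deployable artefact never changes between environments; only the environment does.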
From our experience, we have found that even Big 4 banks require assistance to modernise applications, seeking external support from platforms and products, e.g. VMware Tanzu, to modernise their app portfolios. VMware Tanzu is an abstraction layer on top of Kubernetes platforms that enables enterprises to streamline operations across different cloud infrastructures. Tanzu provides ease of management, portability, resilience, and efficient use of cloud resources. It is important to note that to successfully implement the likes of Tanzu's suite of products, an organisation needs to establish a DevSecOps culture and mature governance models.

Embracing DevSecOps
TL Consulting has found that many organisations need guidance when embedding a culture shift towards DevSecOps. Teams must have a security-first mindset. The norm therefore should not be limited to security testing, such as Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST), but should instead focus on securing applications by design and automating security practices and policies across the SDLC. After all, the goal is to standardise teams' daily activities and build secure software into cloud-native engineering workflows.

Infrastructure as code (IaC)
As IT infrastructure has evolved, leveraging IaC can now be invigorating for teams. Engineers can spin up fully provisioned environments that scale, are secure, and are cost-effective. However, if DevSecOps and infrastructure automation orchestration are not aligned, CI/CD pipelines and cloud costs will be difficult to control. To achieve sustainable processes and practices, implementing a DevSecOps culture with mature governance models will help keep cloud costs optimised.

Conclusion
Providing teams with capacity and implementing modern technology platforms will not, by themselves, overcome the engineering challenges faced when modernising applications.
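The core idea behind declarative IaC, comparing declared state with actual state and computing the changes needed to reconcile them, can be sketched in a few lines. The resource names and attributes are made up for illustration:

```python
def plan(desired, actual):
    """Compute the actions needed to reconcile actual infrastructure
    state with the declared (desired) state -- the reconcile loop at
    the heart of declarative IaC tooling. Resource names are illustrative."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

desired = {"vpc": {"cidr": "10.0.0.0/16"}, "cluster": {"nodes": 3}}
actual = {"vpc": {"cidr": "10.0.0.0/16"}, "cluster": {"nodes": 2}, "old-vm": {}}
changes = plan(desired, actual)
```

Because the plan is derived from state rather than scripted steps, applying it repeatedly is idempotent, which is what makes provisioned environments reproducible and their cost controllable.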
To modernise applications requires an established DevSecOps culture, robust governance models, and highly skilled teams. Additionally, each team needs to understand the application(s) under their control to determine what needs to be automated. For example:
- the purpose of the application and the customer experience
- the architecture and design of the application and its dependencies
- application workflows and data privacy policies
- compliance with government-managed data (if applicable)
- business security policies and procedures
- cloud security policies and procedures which impact the application
- the application infrastructure employed
The modern platforms, products, and tools therefore become enablers to optimise cloud-native adoption, not solutions in themselves. This is where onsite education, guidance, and support from experts, and subscription models like A Cloud Guru, can be highly beneficial for leaders and engineers. If you are facing challenges implementing DevSecOps or adopting modern technology platforms such as Kubernetes, contact us.



Road to a Cloud Native Journey

Author: Ravi Cheetirala, Technical Architect (Cloud & DevSecOps) at TL Consulting

"Cloud native" is the new buzzword in modern application development. It is an evolving application build pattern. The technology is relatively new to the market; thus, our understanding of the architecture is still primitive and keeps changing over time with technological advancements in cloud and containers. Understanding the cloud-native approach and strategy helps build a shared understanding among developers, engineers, and technology leaders, so that teams can collaborate with each other more effectively.

The Need for Cloud Application Modernisation:
In today's IT landscape, 70-80% of C-level executives report that their IT budgets are spent on managing legacy applications and infrastructure, with legacy systems consuming almost 76% of IT spend. Despite the large investment in legacy applications, most businesses fail to see their digital transformation plans through to a satisfactory conclusion. On the other hand, the constantly changing digital behaviours of consumers, and the evolution of viable, reduced-opex, self-sustaining infrastructure models better suited to today's pace of technological change, are the primary drivers pushing application modernisation up the CIO/CTO's list of priorities. According to a study conducted by Google, public cloud adoption alone can reduce IT overheads by 36-40% when migrating from traditional IT frameworks. Application modernisation can help reduce them further: it frees up the IT budget to make space for innovation and to explore new opportunities for business value. Lastly, this digital transformation brings greater agility, flexibility, and transparency while opening operations up to the benefits of modern technologies such as AI, DevSecOps, intelligent automation, and IoT.
Kickstarting the Cloud Native Journey
Beyond the upfront investments, after creating buy-in, application modernisation entails several considerations for CIOs and, more importantly, a game plan to manage the massive amount of change that comes with such a large-scale transformation. However, moving away from the sunk costs of legacy IT can help enterprises take on a new trajectory of profitability and value. Here are four essential steps to a successful application modernisation roadmap.

Assess the legacy system landscape: The first and most crucial step of the application modernisation journey is an assessment of the legacy system landscape: identify the business-critical systems, applications, and business processes. High-value assets that need to be modernised as a priority can form the first tier of the legacy application modernisation process. Next, conduct business value and technical impact assessments. The outcomes of these assessments drive the journey further down the roadmap.

Pick your anchor applications: Once the assessment is complete and business services are identified, the team must shortlist its modernisation options from the legacy application suite. This list enables a more targeted implementation plan. Following this, an implementation framework needs to be developed and put in place, which will help you create a modernisation schedule. The assessment should also help determine the scope of the project and the team, technologies, and skills required.

Define the success criteria: Different application transformation approaches carry different costs and risks. In some instances, refactoring a legacy application costs much more than rebuilding the application on a new technical stack. Most of the time, organisations fail to define the target outcomes effectively.
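The business value and technical impact assessment used to pick anchor applications can be sketched as a simple weighted ranking. The weights, scores, and application names below are illustrative assumptions, not a prescribed methodology:

```python
def prioritise(portfolio):
    """Rank legacy applications by a weighted score of business value
    and technical debt, highest first. The 0.6/0.4 weighting is an
    illustrative choice, not a standard."""
    def score(app):
        return 0.6 * app["business_value"] + 0.4 * app["technical_debt"]
    return sorted(portfolio, key=score, reverse=True)

# Hypothetical portfolio, scored 1-10 on each dimension:
portfolio = [
    {"name": "payments", "business_value": 9, "technical_debt": 8},
    {"name": "reporting", "business_value": 4, "technical_debt": 6},
    {"name": "auth", "business_value": 8, "technical_debt": 3},
]
anchors = [app["name"] for app in prioritise(portfolio)]
```

The first tier of the modernisation roadmap would then start from the top of the ranked list.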
So, it is very important to measure the change, costs, and risks involved, along with the return on investment and the features we aim to improve, and to set new benchmarks for attaining agility and resilience while bringing an enhanced security and risk management strategy into the portfolio.

Structure the target operating model: The traditional operating structure, consisting of network engineers, system administrators, and database engineers, is no longer fit to support the modern digital transformation landscape, so the organisation must realign its IT operating model to the new landscape, alongside an upskilling/reskilling path. In the end, applications are ultimately maintained and supported by people, and your end-state operating model must account for ownership of microservices, who will configure and manage the production environment, and so on.

Benefits of Cloud Native Applications:
Drives innovation: With a new cloud-native environment, it is easy to drive digital transformation and to adopt new-age technologies like AI/ML and automation-driven insights, as these are readily available in most cloud environments and come with easy integration into applications.

Ship faster: In the current world, time to market is key to the success of any business. With DevOps and CI/CD capabilities, it is entirely possible to deploy changes very frequently (multiple times a day), whereas it can take months to deploy a change in traditional software development. Using DevOps, we can transform the software delivery pipeline with build automation, test automation, and deployment automation.

Optimised costs: Containers manage and secure applications independently of the infrastructure that supports them. Most organisations use Kubernetes to manage large volumes of containers. Kubernetes is an open-source platform that has become the standard for managing containerised workloads in the cloud.
Cloud-native applications use containers, so they benefit fully from containerisation. Alongside Kubernetes, there is a host of powerful cloud-native tools. This, along with an open-source model, drives down costs. Enhanced cloud-native capabilities such as serverless let you run dynamic workloads and pay per use for compute time in milliseconds. The resulting standardisation of infrastructure and tooling helps reduce cost further.

Improved reliability: Achieving high fault tolerance is hard and expensive with traditional applications. With modern cloud-native approaches like microservices architecture and Kubernetes in the cloud, you can more easily build applications to be fault-tolerant, with resiliency, autoscaling, and self-healing built in. Because of this design, even when failures happen you can easily isolate their impact so they don't take down the entire application. Instead of servers and monolithic applications, cloud-native microservices help you achieve higher uptime and thus further improve the user experience.

Foundational Elements of Cloud Native Applications:
In general, cloud-native applications are designed around microservices, containers, and declarative automation.
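The fault-tolerance idea above, a transient failure being absorbed rather than taking down the application, can be sketched as a retry with exponential backoff, the kind of resiliency a cloud-native client library or platform provides. The function and failure scenario are illustrative:

```python
import time

def with_retries(operation, attempts=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff -- a minimal sketch
    of built-in resiliency. Delays are kept tiny for demonstration."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # exhausted retries: surface the failure
            time.sleep(base_delay * (2 ** attempt))

# A hypothetical dependency that fails twice, then recovers:
calls = {"count": 0}
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"
```

With this pattern, the two transient failures are invisible to the caller; only a sustained outage propagates.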



The need for adoption

Embrace DevSecOps

Author: Ravi Cheetirala, Technical Architect (Cloud & DevSecOps) at TL Consulting

DevOps is a widely adopted cultural norm in modern software development. It enables enterprises to bring development teams, operations teams, and tools under a single streamlined process. In addition, its automation capabilities help organisations deliver software much faster, reducing costs and release cycle times. However, in many cases security is not prioritised as part of CI/CD practices, and so the move to DevSecOps has not been made. While DevOps has been a successful methodology, one of its key shortcomings is that it places little emphasis on security and governance, as its core focus is on agility and faster time to market. A recent survey conducted by GitLab (one of the popular DevOps vendors) found that more than 70% of organisations have not included security in their DevOps model. With the rise of cyber-attacks, most incidents occur by exploiting vulnerabilities in software, which indicates a compelling need to rearchitect the existing DevOps model into DevSecOps by adding additional layers of security and governance.

Market Insights on DevSecOps Adoption
GitLab's survey, conducted in the fall of 2021, offers some insights on DevOps and security. The chart below illustrates the various drivers for adopting DevSecOps. These findings demonstrate that improved security is a top priority for DevSecOps enablement.

Why do we need DevSecOps? As the market insights above show, more than 50% of organisations have chosen security as their primary driver for adoption. This is because conventional security measures are not good enough to keep up with the latest technology innovations; hence there is a pressing need for DevSecOps adoption to raise the bar on security.

What is DevSecOps?
DevSecOps is an extension of DevOps that adds additional measures at the security and governance layers, such as security testing, observability, and governance. Just like DevOps, the goal of DevSecOps is to deliver trusted, secure software faster.

Security adoption barriers in DevOps:
- Developers are focused on acceleration, not security. With DevOps adoption, developers deliver software faster; however, they tend to ignore security best practices. The risks include using unvetted third-party or open-source software downloaded from the internet without much scrutiny or consent.
- Conflicting interests between teams. Development teams usually rely on other teams for security and vulnerability testing, which is often planned as a separate phase of the project. The delivered software might pose multiple security threats and vulnerabilities, and security analysts are then assigned to review and address these issues. This creates a knowledge gap between teams and can end in delivering compromised software.
- Cloud and container security challenges. The wide adoption of containers and public cloud environments undoubtedly brings exceptional productivity, low cost, and an innovation lens to the organisation; however, it also brings new security risks and challenges. For instance, containers are operating-system agnostic and can run applications anywhere, but the lack of visibility into containers makes it difficult to scan them for vulnerabilities.
- Lack of skills and knowledge on security. There are fundamental knowledge gaps around security frameworks, as most security standards are industry-specific, which acts as a barrier to achieving a higher degree of efficiency with DevOps.
- The pitfall of DevOps' nature. The core of DevOps is collaboration between teams, and this interconnection involves the sharing of privileged information.
Teams share account credentials, tokens, and SSH keys. Systems such as applications, containers, and microservices also share passwords and tokens. This opens an opportunity for attackers to disrupt operations and steal information.

How to implement DevSecOps:
- Embed security in the pipelines. Implement security in the DevOps or CI/CD pipelines as an additional level of integration, including DAST, SAST, vulnerability, and image scanning tools, which help identify and resolve code vulnerabilities as soon as they appear.
- Identify the compliance requirements at the design stage. Understand the organisation's security framework and compare it with the industry's security guidelines during the early stages of design. This gap analysis helps in choosing the right tools for automation.
- Shift security left. Embed security in the early stages of the development cycle. As we move through the phases of the development process, security is carried along instead of being left to the end. This leads to better outcomes and fewer challenges: shift-left is a preventive approach rather than a reactive one.
- Automate as much as possible. The cornerstone of DevOps is automation; use those capabilities to automate security and governance by integrating the right tools into the CI/CD pipelines. DevSecOps tooling needs to run fully automated, without manual intervention.
- Validate cloud and container security standards. As a best practice, evaluate cloud security standards against organisational and industry security frameworks and identify gaps in the early stages. This ensures early detection of threats and organisational alignment.
- Create awareness and educate. Clearly delineate roles and responsibilities, create awareness of security best practices, provide education on industry security frameworks, and establish safe-coding guidelines from a security lens.
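The "embed security in the pipelines" step amounts to a gate that fails the build when a scan finds a known issue. The sketch below uses a toy in-memory advisory list; the package names and CVE identifier are made up, and a real pipeline would call an SCA/SAST tool instead:

```python
def gate(dependencies, advisories):
    """Fail the pipeline stage if any declared dependency appears in a
    known-vulnerability list -- a toy stand-in for the dependency-scanning
    stage of a DevSecOps pipeline."""
    findings = [
        (name, version, advisories[(name, version)])
        for name, version in dependencies.items()
        if (name, version) in advisories
    ]
    return ("fail", findings) if findings else ("pass", [])

# Hypothetical advisory database and application manifest:
advisories = {("libfoo", "1.2.0"): "CVE-XXXX-0001 (hypothetical)"}
result = gate({"libfoo": "1.2.0", "libbar": "2.0.1"}, advisories)
```

Running this on every commit is what moves vulnerability discovery from a late, separate security phase to the moment the dependency is introduced.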
Adopting security tooling is not always the best solution on its own, as it will be ineffective if teams do not know how to use it.

Establish a governance model: Creating a governance model is a vital part of implementing the DevSecOps model and getting the maximum outcome from it. Adopt observability and governance tools, which help create transparency across teams so that security and other application issues reported at any level can be identified and addressed.

How does DevSecOps fit into an organisational GRC framework? GRC (Governance, Risk management and Compliance) and DevSecOps use various skills, tools, and processes.



Rise of “Service Mesh” in Application Modernisation – White Paper.

The What, the Why and the How

Author: Ravi Cheetirala, Technical Architect (Cloud & DevSecOps) at TL Consulting

Learn how a service mesh brings safety and reliability to every aspect of service communication. Read on to find out more about the following:
- What is a service mesh?
- Key features of a service mesh
- Why do we need a service mesh?
- How does it work?
- Case study

What is a Service Mesh?
A service mesh is a programmable software layer that sits on top of the services in a Kubernetes cluster. It enables effective management of service-to-service communication, also called "east-west" traffic. The objective of a service mesh is to allow services to communicate with each other securely, share data, and redirect traffic in the event of application or service failures. Quite often a service mesh is an overlay of a network load balancer, an API gateway, and network security groups.

Key Features of a Service Mesh

Traffic routing: rate limiting, ingress gateway, traffic splitting, service discovery, circuit breaking, and service retry. A service mesh enables traffic routing between services in one or more clusters. It also helps resolve cross-cutting concerns such as service discovery, circuit breaking, and traffic splitting.

Securing the services: authentication, authorisation, encryption and decryption, zero-trust security. The service mesh can also encrypt and decrypt data in transit, removing that complexity from each of the services. The usual implementation for encrypting traffic is mutual TLS, where a public key infrastructure (PKI) generates and distributes certificates and keys for use by the sidecar proxies. The mesh can also authenticate and authorise requests made within and outside the app, sending only authorised requests to instances.
Observability: monitoring, event management, logging, tracing (M.E.L.T). A service mesh comes with a lot of monitoring and tracing plugins out of the box to understand and trace issues such as communication latency, service failures, and routing problems. It captures the telemetry data of service calls, including access logs, error rates, and the number of requests served per second, which forms the basis for operators and developers to troubleshoot and fix errors. Out-of-the-box plugins include Kiali, Jaeger, and Grafana.

Why do we need a Service Mesh?
Most new-age applications, and existing monoliths being transformed, are written in a microservices architecture style and deployed in a Kubernetes cluster as cloud-native applications, because these offer agility, speed, and flexibility. However, the exponential growth in the number of services in this architecture brings challenges in peer-to-peer communication, data encryption, securing the traffic, and so on. Adopting the service mesh pattern helps address these microservice application issues, particularly the traffic management between services, which otherwise involves a considerable amount of manual workarounds. A service mesh brings safety and reliability to every aspect of service communication.

How does it work?
Most service meshes are implemented using a sidecar pattern, where a sidecar proxy (commonly Envoy) is injected into the pods. Sidecars can handle tasks abstracted from the service itself, such as monitoring and security. The services, their respective Envoy proxies, and their interactions are together called the data plane of a service mesh. Another layer, called the control plane, manages tasks such as creating instances, monitoring, and implementing policies, for example network management or network security policies. The control plane is the brain behind the service mesh's operations.
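The telemetry described under Observability reduces, at its simplest, to aggregating per-request records into the error rate and latency figures an operator would chart. The status codes and latencies below are made-up sample data:

```python
def summarise(requests):
    """Aggregate per-request telemetry (HTTP status, latency in ms) into
    an error rate and average latency -- a highly simplified view of the
    metrics mesh sidecars export."""
    total = len(requests)
    errors = sum(1 for status, _ in requests if status >= 500)
    avg_latency = sum(ms for _, ms in requests) / total
    return {"error_rate": errors / total, "avg_latency_ms": avg_latency}

# Hypothetical telemetry for four calls to one service:
telemetry = [(200, 12.0), (200, 8.0), (503, 30.0), (200, 10.0)]
stats = summarise(telemetry)
```

In practice the mesh exports such aggregates continuously per service and per route, which is what tools like Grafana then visualise.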
A Case Study

Client profile: The client in question is a large online retailer with a global presence. The application is a legacy e-commerce platform built as a giant monolith. The client's architecture consists of a multi-channel (mobile and web) front-end application developed using React JS, tied together with a backend service developed using legacy Java/J2EE technology and hosted in their own data centre. There is an ongoing project to split this giant app into a microservices-based architecture using a modern technical stack, hosted on a public cloud. The client's organisation needed to set up a deployment platform that is highly available, scalable, and resilient. It also needed to be cost-effective and secure, and to support a high deployment frequency for releases and maintenance.

Project goals:
- Zero-downtime, no-outage deployments, with support for various deployment strategies to test new releases and features
- Improved deployment frequency
- Secure communication between the services
- Tracing of service-to-service communication response times and troubleshooting of performance bottlenecks
- Everything as code

Role of the service mesh in the project: The client was able to achieve these goals by adopting the service mesh pattern in their microservices architecture.
- Achieved zero-downtime deployments with 99.99% availability
- Enabled secure communication using the service mesh's TLS/mTLS features in a language-agnostic way
- Used traffic splitting to test new features and gauge sentiment in their customer base
- Conducted chaos testing using the service mesh's fault injection features
- Improved operational efficiency and optimised infrastructure cost
- Understood latency issues through distributed tracing
- Placed no additional burden on development teams to write code to manage any of this
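The traffic splitting used in the case study to test new features amounts to a weighted choice applied per request. The version names and the 90/10 canary split below are illustrative assumptions, and a real mesh applies the rule in the sidecar proxy rather than in application code:

```python
import random

def route(weights, rng=random.random):
    """Pick a service version according to traffic-split weights, e.g.
    a 90/10 canary -- the per-request routing rule a mesh applies.
    Version names and weights are illustrative."""
    r = rng() * sum(weights.values())
    upto = 0.0
    for version, weight in weights.items():
        upto += weight
        if r <= upto:
            return version
    return version  # guard against floating-point edge cases

# Simulate 10,000 requests against a 90/10 split:
random.seed(42)
counts = {"v1": 0, "v2": 0}
for _ in range(10_000):
    counts[route({"v1": 90, "v2": 10})] += 1
```

Shifting the weights gradually (90/10, then 50/50, then 0/100) is what enables the zero-downtime, test-in-production release strategies the case study describes.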
Conclusion
A service mesh provides a robust set of features that resolve the key challenges faced by DevOps teams and SREs running microservice applications on a cloud-native stack, by abstracting most of this functionality away from the services themselves. It is now a widely adopted pattern and a critical component in many Kubernetes implementations. TL Consulting can help solve these complex technology problems by simplifying IT engineering and delivery. We are an industry leader delivering specialised solutions and advisory in DevOps, Data Migration & Quality Engineering with Cloud at the core. If you want to find out more, please review our application modernisation services page or contact us.

