TL Consulting Group


Application Modernisation with VMware Tanzu

The Need for Accelerating Application Modernisation: Building innovative, modern apps and modernising existing software are key imperatives for organisations today. Modern apps are essential for deepening user engagement, boosting employee productivity, offering new services, and gaining new data-driven insights. But to maximise the impact of modern apps, organisations need to deliver them rapidly—fast enough to keep up with swiftly changing user expectations and emerging marketplace opportunities. According to Google's CIO Guide to App Modernisation, in today's IT landscape 70-80% of C-level executives report that their IT budgets are spent on managing legacy applications and infrastructure; legacy systems consume almost 76% of IT spend. Despite this large investment in legacy applications, most businesses fail to see their digital transformation plans through to a satisfactory conclusion. On the other hand, the constantly changing digital behaviours of consumers and the evolution of viable, reduced-opex, self-sustaining infrastructure models better suited to today's pace of technological change are the primary drivers pushing application modernisation up the CIO/CTO's list of priorities. According to a study conducted by Google, public cloud adoption alone can reduce IT overheads by 36-40% when migrating from traditional IT frameworks. Application modernisation can help reduce them further – it frees up the IT budget to make space for innovation and to explore new business value opportunities. Lastly, this digital transformation brings greater agility, flexibility, and transparency while opening operations up to the benefits of modern technologies like AI, DevSecOps, intelligent automation, IoT, etc.
Challenges to Adoption: As per the State of Kubernetes survey 2021, there are five distinct challenges enterprises face today with cloud-native/Kubernetes adoption, with lack of experience and expertise in implementation and operations topping the list. As more and more businesses move rapidly towards implementing cloud-native practices to enable agility and faster time to market, the operational impacts on the business can vary from time to time. While these challenges bring complexity, it would be less complicated and cheaper to address them from the very beginning as part of the cloud strategy. VMware Tanzu Portfolio: VMware's decades of experience in virtualisation and its quest to bring innovation drove the introduction of the VMware Tanzu portfolio. The VMware Tanzu portfolio empowers developers to rapidly build modern apps across a multi-cloud landscape while simplifying operations by using Kubernetes as the underlying platform. VMware Tanzu is an essential component of the growing VMware App Modernisation portfolio, which provides enterprises with the tools and technology required to build new applications and modernise their existing application suites. Using the Tanzu portfolio, organisations can rapidly – and continuously – deliver the modern apps that are vital for achieving their strategic goals. Fast-tracking modern app delivery: Tanzu helps developers deliver modern apps with a quick turnaround and greater reliability. Organisations can use that speed to better address quickly evolving business requirements and changing priorities. Flexibility with Kubernetes: With Tanzu, organisations can run Kubernetes in their private clouds, in on-premises datacentres, in public clouds, and at the edge.
This flexibility helps organisations better align application and cloud decisions with technical and operational requirements. Simplified Operations: Deploying and managing applications across multiple clouds and environments brings new operational challenges. Tanzu provides tools to manage, govern, and secure all Kubernetes clusters centrally, irrespective of where they reside. As a result, operations teams can meet application security and reliability expectations while controlling costs. Stronger DevOps Collaboration: Tanzu helps alleviate the tension between rapid development goals and stable operations. It transforms the DevOps relationship by giving operations teams what they need to support fast release cycles. VMware Tanzu Value Proposition: The core principles underlying the vision for VMware Tanzu are entirely consistent with VMware's promise to help customers run any app on any cloud and to drive Kubernetes adoption, ensuring that businesses don't need to invest in additional code or training. How can TL Consulting help organisations on the modernisation journey with VMware Tanzu? Cloud-native adoption requires a mindset shift that drives culture and process change across the organisation, in its IT landscape and in its technology choices throughout the stack, with IT being the focal point of the enterprise's business strategy. This transformation requires new applications to be developed and delivered with a quick turnaround and with greater reliability and quality. Transforming an existing application into a modern app is a complex process with no guaranteed path to success. A successful transformation requires not only the transformation of your organisation's technology but also of its people-centred assets: culture, process, and leadership need to change to keep up with your new ecosystem. Since cloud native is so new, most organisations lack the experience to handle the transformation road on their own.
It's all too easy to get lost. TL Consulting is well-positioned, with certified and experienced professionals, to help your organisation define and drive your vision with a "customer first" approach and a cloud-native philosophy. We will understand the business objectives, long-term strategies, and risks involved using a pragmatic assessment matrix, and formulate a tailor-made transformation roadmap. We will also assist in the design, architecture, and implementation of the transformation to deliver highly reliable, secure modern apps with a faster time to market. Service Offerings: TL Consulting brings its consulting and engineering personnel to application modernisation adoption and implementation by providing a range of services. Summary: Adopting and implementing a cloud-native transformation is not an easy feat. Careful thought and planning are required when adopting a cloud-native strategy and roadmap. Enterprise architects, CTOs, and CIOs thinking about transforming their organisation to support the cloud-native world should consider standardising their platform and services on a cloud-native platform like VMware Tanzu to gain the maximum benefit from the transformation. While adopting cloud-native applications can be exciting



How do Kubernetes and Containers Help Your Enterprise?

In today's world, the success of any organisation heavily depends on its ability to drive innovation and deliver it at speed. With IT being an enabler for this rapid delivery model, businesses are looking at Kubernetes and container adoption as an essential piece of technology for building, deploying, and managing their modern applications at scale. Containers provide an abstraction for the underlying applications and drive portability, making it possible to run anywhere – across multiple clouds and on-premises data centres. Furthermore, by providing uniform deployment, management, scaling, and availability services for all applications irrespective of their technology, Kubernetes offers significant advantages for your IT and development efforts. Kubernetes offers a range of benefits to executives and developers at various levels; here we will discuss some of those key advantages. The Need for Containers and Kubernetes: Keeping up with the latest technology trends and organisational goals towards digitalisation has been very tough for IT teams over the last few years. Conventional software models and traditional VM-based IT infrastructure cannot deliver these modern applications at scale. To deliver these new-age applications, one should adopt new software practices such as agile and DevOps, along with cloud-native architecture. Containers and Kubernetes are the two key building blocks of cloud-native architecture, widely used by organisations to deliver faster, more reliable, and more efficient software with a significant cost reduction across the application life cycle. Key Advantages: Lightweight: Containers are very lightweight when compared with traditional virtual machines. A container includes everything it needs to run, including its code, dependencies, and operating-system libraries.
Multiple containers can run inside a single node of a cluster; the VM hosts the OS and container runtime, and the team can still take advantage of all the capabilities of traditional infrastructure virtualisation. Speed: Due to their lightweight nature, we can create a container image and deploy a container in a matter of seconds. Once the image is ready, containers can be replicated and deployed quickly and easily as needed. Destroying a container is also a matter of seconds. This helps with quicker development cycles and operational tasks. Portability: Containers can run anywhere the container engine supports the underlying operating system—it is possible to run containers on Linux, Windows, macOS, and many other operating systems. Containers can run in virtual machines, on bare-metal servers, locally on a developer's laptop, and on all major public clouds. They can easily be moved between on-premises machines and the public cloud, and they continue to work consistently across all these environments. RedHat's market dynamics report shows how organisations benefit from container and Kubernetes adoption. Kubernetes for everyone: Kubernetes is well known for automating the configuration, deployment, and scaling of microservice-based applications implemented using containers. Microservices-based applications orchestrated by Kubernetes are highly automated in their deployment, management, and maintenance, making it possible to create applications that are highly responsive and adaptive to spikes in network traffic and demands on other resources. It offers significant advantages to IT executives and developers alike. Biggest Barriers to Kubernetes Adoption: Cost of Adoption: One of the biggest obstacles to wider Kubernetes (K8s) adoption is determining the cost of adoption and of running workloads in Kubernetes clusters.
Cost is the key factor for executives when deciding whether to leverage Kubernetes in their enterprise. A recent FinOps Foundation survey – 75% of whose respondents reported having Kubernetes in production – highlights Kubernetes cost management difficulties. It revealed that spending on Kubernetes is spiking beyond what deployments should likely require. The survey's subtitle isn't exactly subtle: "Insufficient — or non-existent — Kubernetes cost monitoring is causing overspend." Lack of Skills and Training: Another barrier to adoption is the lack of personnel skilled and experienced in containerisation and orchestration. Although Kubernetes and container adoption is growing rapidly, many organisations still face a steep learning curve to effectively build, deploy, and manage Kubernetes. This is due both to the technology's immaturity and to a lack of operational experience with it. Organisations are trying various approaches, such as pair programming, partners, education, and training, to overcome this barrier. Visibility and Monitoring: Enterprises deploying Kubernetes clusters that span multiple public clouds, traditional virtualised data centres, and/or managed services face an increasing amount of complexity. To realise the greatest benefits from Kubernetes, organisations need to be able to visualise their entire Kubernetes footprint, including all workloads (applications, containers, pods, nodes, namespaces, etc.), their dependencies, and how they interact with each other in terms of network bandwidth, response times, and memory utilisation, for cluster management and optimisation. Security and Compliance: While enterprises prioritise speed in software delivery, security and compliance are sometimes just an afterthought. Security is a major challenge in the container world, just as it is almost everywhere else in IT. Despite many changes and innovations so far, container security is still not on par with that of traditional infrastructure models.
Due to the unique nature of Kubernetes and containerised environments, one misconfiguration can easily be multiplied across many containers. A security breach of a container is almost identical to an operating-system-level breach of a virtual machine in terms of potential application and system vulnerability. How to Overcome These Challenges: Many organisations want to adopt and leverage the benefits of containers but struggle to justify the total time, resources, and cost needed to develop and manage them internally. One approach is to use VMware Tanzu to organise their Kubernetes clusters across all their environments, set policies governing access and usage permissions, and enable their teams to deploy Kubernetes clusters in a self-service manner. This enables infrastructure and operations teams to gain visibility into and command of their Kubernetes footprint while still empowering developers to use those resources with a focus on delivering solutions rather than worrying about infrastructure. Bottom Line: Evidently, Kubernetes adoption helps drive innovation and rapid software development with reliability
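As a sketch of the declarative model that underpins these benefits, a minimal Kubernetes Deployment manifest like the one below asks for three identical replicas of a containerised service, and Kubernetes continuously reconciles the cluster towards that state. The names, image, and resource figures are illustrative, not from any specific environment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # illustrative name
spec:
  replicas: 3                   # Kubernetes keeps three pods running, replacing any that fail
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: registry.example.com/web-frontend:1.0   # hypothetical image
        ports:
        - containerPort: 8080
        resources:
          requests:             # explicit requests/limits also aid the cost visibility discussed above
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
```

Applying a manifest like this with `kubectl apply -f deployment.yaml` and then deleting one of its pods demonstrates the self-healing behaviour: the Deployment controller immediately recreates the pod to restore the declared replica count.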



Pressure on teams to modernise applications

As many organisations move towards a cloud-native approach, the need to modernise applications using new platforms and products is inevitable. But are the expectations on teams too much? With agile delivery being the norm, teams are empowered to experiment, align capacity to continuously learn, and are encouraged to fail fast. That said, there is increasing pressure on teams to cut corners and adapt to tools and engineering standards as they deliver. In TL Consulting's opinion, this is when most teams fail to adopt Kubernetes and other modern technology correctly. Issues begin to appear right through the build pipeline, most commonly with security, multi-cloud integration, compliance, governance, and reliability. Embedding modern engineering standards: Organisations often opt for a lift-and-shift approach to reduce OPEX and/or CAPEX. However, the underlying code is frequently not mature enough to be decoupled correctly and housed within a container. This requires considerable rework and creates an anti-pattern for software engineering teams. Instead, to move away from the traditional 3-tier architecture and implement new technical stacks, new development principles for cloud applications, such as the Twelve-Factor App, need to be embraced. Other levels of DevSecOps automation and infrastructure as code need to become the engineering standard too. The Twelve-Factor App: The Twelve-Factor App is a methodology providing a set of principles for enterprise engineering teams. As with microservices architecture, teams can leverage the similarities of these principles to embed engineering strategies. This does require highly skilled engineers to create models that can be adopted and reused by development teams. Engineering support: With these expectations put on immature development teams, the pressure and demand on resources impact performance and quality.
From our experience, we have found that even the Big 4 banks require assistance to modernise applications and seek external support from platforms and products, e.g. VMware Tanzu, to modernise their app portfolio. VMware Tanzu is an abstraction layer on top of Kubernetes platforms which enables enterprises to streamline operations across different cloud infrastructures. Tanzu provides ease of management, portability, resilience, and efficient use of cloud resources. It is important to note that to successfully implement the likes of Tanzu's suite of products, an organisation needs to establish a DevSecOps culture and mature governance models. Embracing DevSecOps: TL Consulting has found that many organisations need guidance when embedding a culture shift towards DevSecOps. Teams must have a security-first mindset. The norm therefore should not be limited to security testing, such as Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST), but should instead focus on securing applications by design and automating security practices and policies across the SDLC. After all, the goal is to standardise teams' daily activities and to build secure software into cloud-native engineering workflows. Infrastructure as Code (IaC): As IT infrastructure has evolved, leveraging IaC can be invigorating for teams. Engineers can spin up fully provisioned environments that scale, are secure, and are cost-effective. However, if DevSecOps and infrastructure automation orchestration are not aligned, CI/CD pipelines and cloud costs will be difficult to control. To achieve sustainable processes and practices, implementing a DevSecOps culture with mature governance models will help keep cloud costs optimised. Conclusion: Providing teams with capacity and implementing modern technology platforms will not, by themselves, overcome the engineering challenges faced when modernising applications.
Modernising applications requires an established DevSecOps culture, robust governance models, and highly skilled teams. Additionally, each team needs to understand the application(s) under their control to determine what needs to be automated. For example:

- the purpose of the application and the customer experience
- the architecture and design of the application and its dependencies
- application workflows and data privacy policies
- compliance with government-managed data (if applicable)
- business security policies & procedures
- cloud security policies & procedures which impact the application
- the application infrastructure employed

The modern platforms, products, and tools therefore become enablers to optimise cloud-native adoption, not solutions in themselves. This is where onsite education, guidance, and support from experts, and subscription models like A Cloud Guru, can be highly beneficial for leaders and engineers. If you are facing challenges implementing DevSecOps or adopting modern technology platforms such as Kubernetes, contact us.
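As one concrete illustration of "securing applications by design", baseline security policy can be declared alongside the workload rather than bolted on afterwards. The sketch below (all names and the image are illustrative) pairs a restrictive pod securityContext with a default-deny Kubernetes NetworkPolicy:

```yaml
# Pod-level hardening declared with the workload (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
spec:
  containers:
  - name: api
    image: registry.example.com/payments-api:1.2   # hypothetical image
    securityContext:
      runAsNonRoot: true                 # refuse to start if the image runs as root
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]                    # drop all Linux capabilities by default
---
# Default-deny ingress for the namespace; traffic must be allowed explicitly
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}                        # applies to every pod in the namespace
  policyTypes:
  - Ingress
```

Because these policies are plain YAML, they can be version-controlled and validated automatically in the CI/CD pipeline, which is where the DevSecOps automation described above takes effect.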



Road to a Cloud Native Journey

Author: Ravi Cheetirala, Technical Architect (Cloud & DevSecOps) at TL Consulting. "Cloud Native" is the new buzzword in modern application development. It is an evolving application build pattern. The technology is relatively new to the market; thus, our understanding of the architecture is still primitive and keeps changing over time with technological advancements in cloud and containers. Understanding the cloud-native approach and strategy helps to build a shared understanding among developers, engineers, and technology leaders, so that teams can collaborate with each other more effectively. The Need for Cloud Application Modernisation: In today's IT landscape, 70-80% of C-level executives report that their IT budgets are spent on managing legacy applications and infrastructure; legacy systems consume almost 76% of IT spend. Despite this large investment in legacy applications, most businesses fail to see their digital transformation plans through to a satisfactory conclusion. On the other hand, the constantly changing digital behaviours of consumers and the evolution of viable, reduced-opex, self-sustaining infrastructure models better suited to today's pace of technological change are the primary drivers pushing application modernisation up the CIO/CTO's list of priorities. According to a study conducted by Google, public cloud adoption alone can reduce IT overheads by 36-40% when migrating from traditional IT frameworks. Application modernisation can help reduce them further – it frees up the IT budget to make space for innovation and for exploring new opportunities for business value. Lastly, this digital transformation brings greater agility, flexibility, and transparency while opening operations up to the benefits of modern technologies like AI, DevSecOps, intelligent automation, IoT, etc.
Kickstarting the Cloud Native Journey: Beyond the upfront investments, after creating buy-in, application modernisation entails several considerations for CIOs and, more importantly, a game plan to manage the massive amount of change that comes with such a large-scale transformation. However, moving away from the sunk costs of legacy IT can help enterprises take on a new trajectory of profitability and value. Here are four essential steps to a successful application modernisation roadmap. Assessment of the legacy system landscape: The first and most crucial step of the application modernisation journey should be an assessment of the legacy system landscape: identify the business-critical systems, applications, and business processes. High-value assets that need to be modernised as a priority can form the first tier of the legacy application modernisation process. Next, we need to carry out business value and technical impact assessments. The outcome of these assessments will drive the journey further down the roadmap. Pick your anchor applications: Once the assessment is complete and business services are identified, teams must shortlist their modernisation options from their legacy application suite. This list will enable a more targeted implementation plan. Following this, an implementation framework needs to be developed and implemented, which will help you create a modernisation schedule. The assessment should also help determine the scope of the project, the team, the technologies, and the skills required. Define the success criteria: Different application transformation approaches carry different costs and risks. For instance, refactoring a legacy application can cost much more than rebuilding the application on a new technical stack. Too often, organisations fail to determine the target outcomes effectively.
So, it is very important to measure the change, costs, and risks involved, along with the return on investment and the features we aim to improve, and to set new benchmarks for attaining agility and resilience while bringing an enhanced security and risk management strategy into the portfolio. Structure of the target operating model: The traditional operating structure, consisting of network engineers, system administrators, and database engineers, is no longer fit to support the new digital transformation landscape, so the organisation must realign its IT operating model accordingly, alongside an upskilling/reskilling path. In the end, applications are ultimately maintained and supported by people, and your end-state operating model must account for ownership of microservices, who will configure and manage the production environment, and so on. Benefits of Cloud Native Applications: Drives innovation: With a new cloud-native environment, it is easy to drive digital transformation and to adopt new-age technologies like AI/ML and automation-driven insights, as these are readily available in most cloud environments and come with easy integration into applications. Ship faster: In the current world, the key to the success of any business is time to market. With DevOps and CI/CD capabilities, it is very much possible to deploy changes frequently (multiple times a day), whereas it can take months to deploy a change with traditional software development. Using DevOps, we can transform the software delivery pipeline using automation: build automation, test automation, and deploy automation. Optimised costs: Containers manage and secure applications independently of the infrastructure that supports them. Most organisations use Kubernetes to manage large volumes of containers. Kubernetes is an open-source platform that is the standard for managing containerised resources in the cloud.
Cloud-native applications use containers, so they benefit fully from containerisation. Alongside Kubernetes, there is a host of powerful cloud-native tools. This, along with an open-source model, drives down costs. Enhanced cloud-native capabilities such as serverless let you run dynamic workloads and pay per use for compute time in milliseconds. The resulting standardisation of infrastructure and tooling further helps to reduce cost. Improved reliability: Achieving high fault tolerance is hard and expensive with traditional applications. With modern cloud-native approaches like microservices architecture and Kubernetes in the cloud, you can more easily build applications that are fault-tolerant, with resiliency, autoscaling, and self-healing built in. Because of this design, even when failures happen you can easily isolate their impact so they don't take down the entire application. Instead of servers and monolithic applications, cloud-native microservices help you achieve higher uptime and thus further improve the user experience. Foundational elements of Cloud Native applications: In general, cloud native applications are designed
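The autoscaling behaviour described above is itself declared as configuration. For example, a Kubernetes HorizontalPodAutoscaler can grow and shrink a Deployment between set bounds based on observed CPU load; this is a sketch, and the target name and thresholds are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-api            # hypothetical Deployment to scale
  minReplicas: 2                # never drop below two pods, for availability
  maxReplicas: 10               # cap spend during traffic spikes
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # add pods when average CPU exceeds 70%
```

The same declarative loop gives both the cost optimisation (scaling down during lulls) and the resiliency (scaling up under load) discussed in this section, with no change to application code.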



Reasons to Move, and Reasons Not to Move, to the Public Cloud

Public cloud adoption is more popular now than ever. Companies across all industries are modernising their environments to support remote work, lower costs, and improve reliability. In fact, Gartner predicts global public cloud end-user spending will increase by 23% in 2021. Despite this momentum, it's important to realise the public cloud isn't an ideal fit for every organisation. Many companies rushed into the cloud during the pandemic without fully understanding the implications. Now, issues are surfacing, and some businesses are reconsidering their migration altogether. This post explores the pros and cons of moving to the public cloud. Keep reading to learn whether the cloud makes sense for your business. What Is the Public Cloud? The public cloud is a framework that lets you access on-demand computing services and infrastructure through a third-party provider. In a public cloud environment, you'll share the same hardware, software, and network services as other companies, or tenants. It's different from a private cloud environment, where your company receives access to private, hosted infrastructure and services. To illustrate, it's like staying in a hotel versus renting a private cottage on Airbnb. A few of the top public cloud providers on the market include Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Alibaba Cloud, and IBM Cloud. Public cloud services can refer to infrastructure as a service (IaaS), software as a service (SaaS), and platform as a service (PaaS) models. Top Reasons for Public Cloud Adoption: Companies have a variety of reasons for migrating to the public cloud. Here are a few of them. Replacing the Data Centre and Lowering Computing Costs: Enterprises are increasingly moving away from data centres.
In fact, by 2025, 80% of enterprises will shut down their traditional data centers. Companies with aging data centers can avoid retrofitting facilities or building new ones by migrating to the public cloud and leveraging hosted infrastructure instead. This greatly reduces costly builds and minimizes operational expenses. Achieving Rapid Scalability The public cloud enables rapid scalability. You can significantly increase storage and compute power through the public cloud at a fraction of the cost of expanding your existing infrastructure. The public cloud is particularly useful for growing startups that need to be able to accommodate massive usage increases. It’s also ideal for organizations that experience seasonal spikes in sales. For example, an e-commerce provider might use the public cloud when ramping up production and sales around the holidays. By the same token, the public cloud provides flexibility to easily scale back down during lulls. Accessing Managed Services The service provider manages the underlying hardware and software in a public cloud deployment. They also typically provide security, monitoring, maintenance, and upgrades. This approach enables you to take a hands-off approach to managing infrastructure. Your IT team can focus on other business needs, with the expectation that the public cloud provider will keep your services up and running within the scope of the service-level agreement (SLA). Reducing IT Burden Right now, there’s a widespread IT staffing shortage. The issue is particularly bad in the data center industry, where 50% of data center owners and operators are having difficulty finding qualified candidates for open jobs. If your company’s having a hard time finding qualified IT workers, you may want to consider outsourcing operations to the public cloud. This can free your IT workers from grunt work and enable them to take on more valuable projects. At the same time, your IT team can still manage its public cloud environment. 
For example, they can still perform data governance and identity and access management (IAM). They just won’t have to worry about maintaining or upgrading any hardware or software.  Strengthening Business Continuity and Disaster Recovery (BC/DR) Another reason why companies migrate to the cloud is to improve their BC/DR posture. Business continuity involves establishing a plan to deal with unexpected challenges like service outages. Disaster recovery is all about restoring network access following an issue like a cyberattack or natural disaster. Companies often rely on the public cloud to establish BC and DR across two or more geographically separate locations. Running a BC/DR strategy through the cloud is much more efficient, as it prevents you from having to maintain a fully functioning recovery site 24/7. This approach drastically reduces costs. At the same time, using the public cloud can guarantee full operational BC/DR availability. This can provide the peace of mind that comes with knowing you can keep your business running when emergency strikes. Why Companies Avoid the Public Cloud Without a doubt, the public cloud offers several exciting advantages for businesses. But there are also a few major drawbacks to consider. Here are some of the top reasons why companies might avoid the public cloud. Higher Costs Companies often expect instant cost savings when migrating to the cloud. In reality, cloud services can sometimes be more expensive than on-premises data centers — at least at first. Oftentimes, companies fail to achieve true cost savings until they learn how to take full advantage of the public cloud. This can take months or years. It’s important to carefully break down cloud migration costs and ROI before moving to the public cloud to get an accurate understanding of the move’s short-, medium-, and long-term financial impact. In some cases, companies find they fare better with their existing setups. 
Data Ownership Concerns

Right now, there's an ongoing debate about who owns data in the public cloud. Some cloud providers attempt to retain ownership of some or all of the data they store. As such, many business leaders fear storing data in the public cloud, and some simply can't risk it. Instead, they choose to avoid the issue by using their own dedicated infrastructure. It's a good idea to talk with your team before migrating to the public cloud and conduct a security and privacy assessment first.

Reasons to Move, and Reasons Not to Move, to the Public Cloud


Rise of “Service Mesh” in Application Modernisation – White Paper.

Rise of “Service Mesh” in Application Modernisation
The What, the Why and the How

Author: Ravi Cheetirala, Technical Architect (Cloud & DevSecOps) at TL Consulting

Learn how a service mesh brings safety and reliability to every aspect of service communication. Read on to find out more about the following:

What is a Service Mesh?
Key Features of a Service Mesh
Why do we need a Service Mesh?
How does it work?
Case Study

What is a Service Mesh?

A service mesh is a programmable software layer that sits on top of the services in a Kubernetes cluster and enables effective management of service-to-service communication, also called “East-West” traffic. The objective of a service mesh is to allow services to communicate with each other securely, share data, and redirect traffic in the event of application or service failures. Quite often, a service mesh acts as an overlay to the network load balancer, API gateway and network security groups.

Key Features of a Service Mesh

Traffic Routing: rate limiting, ingress gateway, traffic splitting, service discovery, circuit breaking and service retry. A service mesh enables traffic routing between the services in one or more clusters. It also helps resolve cross-cutting concerns like service discovery, circuit breaking and traffic splitting.

Securing the Services: authentication, authorization, encryption and decryption, zero-trust security. The service mesh can also encrypt and decrypt data in transit, removing that complexity from each of the services. The usual implementation for encrypting traffic is mutual TLS, where a public key infrastructure (PKI) generates and distributes certificates and keys for use by the sidecar proxies. The mesh can also authenticate and authorize requests made within and outside the app, sending only authorized requests to service instances.
Observability: monitoring, event management, logging, tracing (M.E.L.T). A service mesh comes with many monitoring and tracing plugins out of the box to understand and trace issues such as communication latency, service failures and routing problems. It captures the telemetry data of service calls, including access logs, error rates and the number of requests served per second, which gives operators and developers the baseline they need to troubleshoot and fix errors. Some of the out-of-the-box plugins include Kiali, Jaeger and Grafana.

Why do we need a Service Mesh?

Most new applications, and many existing monoliths, are now being written or transformed in a microservice architecture style and deployed to a Kubernetes cluster as cloud-native applications, because this approach offers agility, speed, and flexibility. However, the exponential growth of services in this architecture brings challenges in peer-to-peer communication, data encryption, securing traffic and so on. Adopting the service mesh pattern helps address these issues, in particular the traffic management between services, which otherwise involves a considerable amount of manual workarounds. A service mesh brings safety and reliability to every aspect of service communication.

How does it work?

Most service meshes are implemented using a sidecar pattern, where a sidecar proxy (commonly Envoy) is injected into the pods. Sidecars handle tasks abstracted from the service itself, such as monitoring and security. The services, their sidecar proxies, and the interactions between them are collectively called the data plane of a service mesh. Another layer, called the control plane, manages tasks such as creating instances, monitoring, and implementing policies such as network management or network security policies. The control plane is the brain behind service mesh operations.
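To make this division of labour concrete, here is a toy Python sketch of the kind of behaviour a sidecar proxy takes off the application's hands: retries, a simple circuit breaker, and telemetry counters. It is an illustration of the pattern only, not Envoy's actual implementation or API.

```python
class SidecarProxy:
    """Toy model of a sidecar proxy: retries failed calls, trips a
    circuit breaker after repeated failures, and records telemetry."""

    def __init__(self, max_retries: int = 2, failure_threshold: int = 3):
        self.max_retries = max_retries
        self.failure_threshold = failure_threshold
        self.consecutive_failures = 0
        self.metrics = {"requests": 0, "errors": 0}

    def call(self, upstream):
        """Invoke `upstream` (a zero-argument callable standing in for a
        service-to-service request) through the proxy."""
        if self.consecutive_failures >= self.failure_threshold:
            raise RuntimeError("circuit open: upstream marked unhealthy")
        self.metrics["requests"] += 1
        last_error = None
        for _attempt in range(self.max_retries + 1):
            try:
                result = upstream()
                self.consecutive_failures = 0  # upstream is healthy again
                return result
            except Exception as exc:  # count every failed attempt
                self.metrics["errors"] += 1
                last_error = exc
        self.consecutive_failures += 1
        raise RuntimeError("upstream failed after retries") from last_error


proxy = SidecarProxy()
print(proxy.call(lambda: "200 OK"), proxy.metrics)
```

Because the proxy owns this logic, the application code behind it stays free of retry loops, health tracking and metrics plumbing, which is exactly the abstraction the sidecar pattern provides.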
A Case Study

Client Profile

The client is a large online retailer with a global presence. The application is a legacy e-commerce platform built as a giant monolith. The client's architecture consists of a multi-channel (mobile and web) front-end application developed in React JS, tied together by a backend service built on legacy Java/J2EE technology and hosted in their own data centre. There is an ongoing project to split this giant app into a microservice-based architecture using a modern technical stack, hosted on a public cloud. The client's organisation needed a deployment platform that ensures high availability, scalability and resilience. It also had to be cost-effective and secure, and support a high deployment frequency for releases and maintenance.

Project Goals

- Zero-downtime deployments, with support for various deployment strategies to test new releases and features.
- Improved deployment frequency.
- Secure communication between the services.
- Tracing of service-to-service response times and troubleshooting of performance bottlenecks.
- Everything as code.

Role of Service Mesh in the project

The client was able to achieve these goals by adopting the service mesh pattern in their microservice architecture:

- Achieved zero-downtime deployments with 99.99% availability.
- Enabled secure communication using the service mesh's TLS/mTLS features in a language-agnostic way.
- Used traffic splitting to test new features and gauge sentiment in their customer base.
- Conducted chaos testing using the service mesh's fault injection features.
- Improved operational efficiency and optimised infrastructure costs.
- Identified latency issues through distributed tracing.
- Placed no additional burden on development teams to write code to manage any of this.
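Two of the mesh features used in this project, traffic splitting and fault injection, can be sketched in plain Python to show the underlying idea. In a real mesh these are configured declaratively and enforced by the sidecar proxies; the subset names, rates, and the `rng` hooks below are hypothetical, added only to make the behaviour deterministic to test.

```python
import random

def pick_subset(weights: dict, rng=random.random):
    """Weighted routing: pick a destination subset, e.g. 90% 'stable'
    and 10% 'canary', the way a mesh splits East-West traffic."""
    total = sum(weights.values())
    point = rng() * total
    for subset, weight in weights.items():
        point -= weight
        if point < 0:
            return subset
    return subset  # fallback for floating-point edge cases

def inject_fault(handler, abort_rate=0.1, rng=random.random):
    """Fault injection for chaos testing: abort a fraction of requests
    before they ever reach the real handler."""
    def wrapped(*args, **kwargs):
        if rng() < abort_rate:
            raise RuntimeError("injected fault: HTTP 503")
        return handler(*args, **kwargs)
    return wrapped

# A draw of 0.95 falls in the canary's 10% share of the [0, 1) range.
print(pick_subset({"stable": 90, "canary": 10}, rng=lambda: 0.95))  # -> canary
```

Because the proxies perform this routing and fault injection, the canary rollouts and chaos tests in the case study required no application code changes.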
Conclusion

A service mesh provides a robust set of features that resolves the key challenges faced by DevOps engineers and SREs running microservice applications on a cloud-native stack, by abstracting most of this functionality away from the application. It is now a widely adopted pattern and a critical component of many Kubernetes implementations.

TL Consulting can help solve these complex technology problems by simplifying IT engineering and delivery. We are an industry leader delivering specialised solutions and advisory in DevOps, Data Migration & Quality Engineering with Cloud at the core. If you want to find out more, please review our application modernisation services page or contact us.



How to modernise legacy applications

Hosting applications on the cloud is a strategic objective for most organisations. There are many benefits to modernising legacy applications and implementing enablers such as automated deployments, auto-scaling and containerised architectures. These include lower running costs and better performance. However, there is a perception that many legacy systems and commercial off-the-shelf (COTS) applications cannot be modernised. Instead, organisations opt for a “Lift and Shift” approach, which not only requires a significant amount of rework and refactoring but does not deliver any of the benefits of the cloud.

Consider an alternative to lift and shift

While a “Lift and Shift” approach is an affordable option, there are often additional costs. These costs are generally not in the initial estimates. When estimating costs, the overall vision of the application and its lifecycle needs to be considered, as does the Total Cost of Ownership after deployment. When these factors are included, the cost will often be more than first expected. But higher cost is not the only factor to consider. A lift-and-shift approach often does not deliver the benefits of moving to the cloud, such as performance improvements and deployment efficiencies. As an alternative, monolithic applications can benefit from modern platforms such as Kubernetes without rearchitecting the solution, an option that few organisations consider or have the skills to accomplish. This provides the same benefits as a “Lift and Shift” while yielding a relatively mature cloud-native application.

A white paper and case study

In the following sections, we will explore key findings from a recent application modernisation service provided to a NSW Government agency. In this white paper we describe how we successfully migrated a legacy Oracle SOA application stack to containerised infrastructure.
We explore the common challenges, the solution design, the implementation and the business benefits.

Common Challenges in Modernising Monolithic Applications

A main difference between monolithic and microservices architectures, apart from the obvious scalability, flexibility and agility benefits that come with microservices, is that monolithic applications are built of layers and components that are tightly coupled. Putting all these layers and components in one Docker container does not at first sight seem like a viable option. Such an approach appears to add an external shell on top of the existing layers, further complicating the build process. There is also a scalability concern: what if the consumption of the different components is not uniform? In other words, only a few of the components may need to be replicated, rather than replicating complete layers. It would be a waste of infrastructure resources to replicate all the components when only a few are in high demand.

Solution Design Stage

Firstly, the engineering teams needed to assess the feasibility of decoupling the application components and explore different architecture design options. Secondly, we evaluated data segregation based on the needs of the Docker containers. The next step of the design stage weighs the different deployment models, highlighting their respective advantages and disadvantages; depending on the infrastructure, there are different options that can be considered. Another aspect that may need consideration is stateful versus stateless components: with technologies like Docker and Kubernetes, running stateless workloads is easier than running stateful ones. The Solution Design Stage is important for setting up the core foundation of the modernised application. Without this assessment, key issues with the code, technology and/or architecture will not be identified. In turn, the application will inherit the technical debt and not achieve the ROI of the project.
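Once components are decoupled, the non-uniform consumption problem above has a direct answer: each component gets its own autoscaling policy, so only the hot ones replicate. As a hedged sketch, here is a Kubernetes HorizontalPodAutoscaler manifest built as a plain Python dict (the 'checkout' component name and the thresholds are hypothetical):

```python
def hpa(deployment: str, min_replicas: int, max_replicas: int,
        cpu_target: int) -> dict:
    """Build a Kubernetes HorizontalPodAutoscaler manifest (as a dict)
    so a single high-demand component can scale independently of the
    rest of the application."""
    return {
        "apiVersion": "autoscaling/v2",
        "kind": "HorizontalPodAutoscaler",
        "metadata": {"name": f"{deployment}-hpa"},
        "spec": {
            "scaleTargetRef": {"apiVersion": "apps/v1",
                               "kind": "Deployment",
                               "name": deployment},
            "minReplicas": min_replicas,
            "maxReplicas": max_replicas,
            "metrics": [{
                "type": "Resource",
                "resource": {"name": "cpu",
                             "target": {"type": "Utilization",
                                        "averageUtilization": cpu_target}},
            }],
        },
    }

# Only the hypothetical 'checkout' component scales from 2 to 10 replicas
# under CPU pressure; quieter components keep a fixed replica count.
print(hpa("checkout", 2, 10, 70)["metadata"]["name"])  # -> checkout-hpa
```

This is precisely what a single monolithic container cannot do: it must replicate every layer at once, wasting the infrastructure the paragraph above describes.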
We often hear from other clients that TCO has risen due to poor analysis of an application's current state.

Implementation Stage

During the implementation stage there were many considerations to address. We needed robust continuous integration and continuous delivery pipelines to ensure stage gates are controlled and governed. This approach gave the teams the transparency that was unfortunately lacking in the existing technology stack. Infrastructure as code, cost-benefit analysis, team skill levels and workflows were among the other considerations, risks and issues to overcome. The image below shows a simplified version of the solution pipelines and technology stack (Figure 1).

The design that was implemented for our client needed to address three critical issues. The first was a manual activity requiring an engineer to switch a malfunctioning active node to a standby node. The second was overcoming the substantial costs of the previous lift-and-shift implementation: the cost of provisioning and maintaining the different environments for the platform exceeded that of running it on VMs. The last was scalability: adding another node group to the platform to handle extra load was an onerous process that required extensive planning prior to implementation. It is important to note that the infrastructure components and workloads were compliant with the mandated government policies and the government data centre models.

Outcomes and Business Benefits

Our client realised the immediate benefit of implementing an engineering model that leveraged an infrastructure-as-code pipeline and Kubernetes. Automated build/test/deploy pipelines, self-healing, auto-scaling and zero-downtime gradual deployments were just a few of the benefits that helped our client move towards a cloud-native approach.
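The zero-downtime gradual deployments mentioned above rest on a rolling-update strategy: with `maxUnavailable` set to 0, Kubernetes only removes an old pod once its replacement passes a readiness probe. A hedged sketch as a plain manifest dict follows; the app name, image, port and health-check path are all hypothetical placeholders, not details from the client project.

```python
def rolling_deployment(name: str, image: str, replicas: int = 3) -> dict:
    """Build a Kubernetes Deployment manifest (as a dict) configured for
    zero-downtime rolling updates: no pod is taken down before its
    replacement reports ready."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "strategy": {
                "type": "RollingUpdate",
                # Never drop below the desired count; add one spare pod.
                "rollingUpdate": {"maxUnavailable": 0, "maxSurge": 1},
            },
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    # Traffic shifts only after this probe succeeds.
                    "readinessProbe": {
                        "httpGet": {"path": "/healthz", "port": 8080},
                        "initialDelaySeconds": 5,
                    },
                }]},
            },
        },
    }

# Hypothetical service and image reference.
manifest = rolling_deployment("orders-api", "registry.example.com/orders-api:1.2.0")
```

Combined with the self-healing and auto-scaling controllers Kubernetes provides out of the box, this strategy is what removed the manual active/standby switchover described above.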
A Cloud-Native Partner

While most internal engineers know the business and product well enough to perform a “Lift and Shift”, modernising legacy applications effectively requires specialised DevOps knowledge. TL Consulting can provide this expertise, allowing your team to get as close as possible to cloud-native models when migrating your legacy systems to the cloud. If you want to find out more, please review our application modernisation services page or contact us below.

