TL Consulting Group


Demand for Kubernetes and Data Management

Transforming the Way We Manage Data

Data is the backbone of today's digital economy. With the ever-increasing volume of data generated every day, the need for efficient, scalable, and robust data management solutions is more pressing than ever. Enter Kubernetes, the revolutionary open-source platform that is changing the game for data management. Market research suggests that demand for Kubernetes in data management is growing rapidly, with a projected compound annual growth rate of over 30% by 2023.

With its ability to automate the deployment, scaling, and management of containerized applications, Kubernetes is giving organisations a new way to approach data management. By leveraging its container orchestration capabilities, Kubernetes makes it possible to handle complex data management tasks with ease and efficiency.

Stateful applications, such as databases and data pipelines, are the backbone of any data management strategy. Traditionally, managing these applications has been a complex and time-consuming task. With Kubernetes, stateful applications can be managed with ease, thanks to Persistent Volumes and Persistent Volume Claims.

Data pipelines, a critical component of data management, are transforming the way organizations process, transform, and store data. Kubernetes makes it possible to run data pipelines as containers, simplifying their deployment, scaling, and management. With Kubernetes' built-in Jobs support, these workflows can run as scheduled or triggered jobs orchestrated by the Kubernetes engine. This enables organizations to ensure the reliability and efficiency of their data pipelines, even as the volume of data grows.

Scalability is a major challenge in data management, but with Kubernetes it is there by design. The ability to horizontally scale the number of nodes in a cluster makes it possible to handle growing volumes of data with ease.
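To make the Jobs-based pipeline idea concrete, here is a minimal sketch of a nightly pipeline run. The image name, schedule, and PVC name are illustrative assumptions, not taken from any specific deployment.

```yaml
# Hypothetical scheduled data-pipeline run; names and schedule are illustrative.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-etl
spec:
  schedule: "0 2 * * *"          # run at 02:00 every day
  jobTemplate:
    spec:
      backoffLimit: 3            # retry a failed run up to 3 times
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: etl
            image: registry.example.com/etl-job:1.0   # hypothetical pipeline image
            volumeMounts:
            - name: staging
              mountPath: /data
          volumes:
          - name: staging
            persistentVolumeClaim:
              claimName: etl-staging-pvc   # a pre-created PVC for intermediate data
```

Kubernetes retries failed runs and reschedules the job's pod onto a healthy node, which is what gives pipelines their reliability as data volumes grow.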
This ensures that data management solutions remain robust and scalable, even as data volumes increase.

Resilience is another key requirement in data management. Traditionally, a single point of failure could bring down the entire system. With Kubernetes, failures are handled gracefully: failed containers are automatically rescheduled on healthy nodes. This provides peace of mind, knowing that data management solutions remain available even in the event of failures. Kubernetes also offers zero-downtime deployment in the form of rolling updates. This applies to databases as well, where an administrator can upgrade the database version without any impact on the service by rolling the update out to one workload at a time until all replicas are upgraded.

Complementing the resilience features, operations such as memory or CPU upgrades, which in the past were considered destructive changes requiring careful planning and change and release management, are now straightforward. Because Kubernetes relies on declarative management of its objects, such a change is often a single line of code, and it can be deployed like any other code change that progresses through environments using CI/CD pipelines.

Conclusion

In conclusion, Kubernetes is transforming data management. Gone are the days of regarding Kubernetes as a platform suitable only for stateless workloads, leaving databases running on traditional VMs. Many initiatives have adapted stateful workloads to run efficiently and reliably in Kubernetes, from the StatefulSets API and the Container Storage Interface (CSI) to Kubernetes operators that ensure databases can run securely in the cluster with strong resilience and scalability. With operators available for common database systems such as Postgres and MySQL, to name a few, daunting database operations such as automatic backups, rolling updates, high availability, and failover are simplified and handled in the background, transparent to the end user.
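As an illustration of the rolling-update and declarative-change points above, the sketch below (with hypothetical names and image tags) shows a Deployment where a memory or CPU upgrade is a one-line edit and a version upgrade rolls out one replica at a time:

```yaml
# Illustrative Deployment; names, tags, and sizes are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-db-proxy
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # upgrade one replica at a time
      maxSurge: 1
  selector:
    matchLabels:
      app: orders-db-proxy
  template:
    metadata:
      labels:
        app: orders-db-proxy
    spec:
      containers:
      - name: proxy
        image: registry.example.com/db-proxy:2.4   # bump this tag to roll a new version
        resources:
          requests:
            memory: "512Mi"  # a memory upgrade is a one-line change here
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
```

Committing an edit to `resources` or the image tag and letting a CI/CD pipeline apply it is exactly the declarative workflow described above.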
Today, with more database vendors either releasing or endorsing Kubernetes operators for their database systems, and enterprises successfully running databases in Kubernetes production environments, there is no reason to think that Kubernetes lacks the features needed to run production enterprise database systems. The future of data management looks bright, and we excitedly await what lies ahead thanks to the Kubernetes community's constant drive for innovation and the expansion of what is possible. To learn more about Kubernetes and our service offering, contact us.

VMWare - Tanzu Application Platform

Unlocking The Potential of Tanzu Application Platform

Unlocking the Potential of Tanzu Application Platform (TAP), a multicloud, portable Kubernetes PaaS

Cloud-native application architecture targets building and running software applications that exploit the flexibility, scalability, and resilience of cloud computing, following the twelve-factor methodology and a microservices architecture on self-service, agile infrastructure, offering an API-based, collaborative, and self-healing system. Cloud-native encompasses the various tools and techniques used by software developers today to build applications for the public cloud. Kubernetes is the de facto standard for container orchestration when building cloud-native applications, and it is undoubtedly changing the way enterprises manage their infrastructure and application deployments. At the core, however, there is still a clean separation of concerns between developers and operators. VMware's new Tanzu Application Platform, part of the Tanzu portfolio, addresses some of the fundamental issues with developer and operations collaboration and provides an effortless path to application deployments in a secure, modular, scalable, and portable Kubernetes environment.

What is Tanzu Application Platform (TAP)?

"A superior multi-cloud developer experience on Kubernetes. VMware Tanzu Application Platform is a modular, application-aware platform that provides a rich set of developer tooling and a prepared path to production to build and deploy software quickly and securely on any compliant public cloud or on-premises Kubernetes cluster." – VMware

Tanzu Application Platform simplifies workflows

Tanzu Application Platform simplifies workflows in both the inner loop and outer loop of cloud-native application development and deployment on Kubernetes.
A typical inner loop consists of developers writing code in their local IDE (integrated development environment), testing and debugging the application, pushing and pulling code from a source code repository, deploying to a development or staging environment, and then making additional code changes based on continuous feedback. The outer loop consists of the steps to deploy the application to a non-production or production environment and support it over time. For a cloud-native platform, the outer loop includes activities such as building container images, adding container security (vulnerability scanning, trust, and signing), and configuring continuous integration (CI) and continuous delivery (CD) pipelines. TAP creates an abstraction layer above the underlying Kubernetes, focusing on portability and reproducibility and avoiding lock-in where possible. Underneath, TAP provides strong support, with all the tools required to build and deploy applications, in the form of Accelerators and the Supply Chain Choreographer. TAP can be installed and managed on most managed Kubernetes services on the market, such as AKS (Azure), EKS (AWS), and GKE (Google Cloud), as well as on any other conformant, unmanaged Kubernetes cluster. Developers can even install it on a local Minikube instance. TAP also supports an out-of-the-box workflow for DevSecOps based on best-of-breed open-source tools, with strong support for customising these workflows with the enterprise-grade or commercial tools of your choice. TL Consulting brings its consulting and engineering personnel to application modernisation adoption and implementation by providing a range of services. If you need assistance with your Containers/Kubernetes adoption, please contact us via our Kubernetes consulting services page.
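As a rough illustration of the developer-facing abstraction TAP provides, a developer describes a workload in a small manifest and the supply chain handles build, scan, and deploy. The sketch below is an assumption-laden example: the repository URL is hypothetical, and the exact API group and version (`carto.run/v1alpha1` here, from the underlying Cartographer project) can vary between TAP releases.

```yaml
# Hedged sketch of a TAP/Cartographer Workload; repository URL is hypothetical.
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: demo-app
  labels:
    apps.tanzu.vmware.com/workload-type: web   # selects the supply chain to use
spec:
  source:
    git:
      url: https://github.com/example/demo-app  # hypothetical application repository
      ref:
        branch: main
```

From this single declaration, the platform's supply chain takes over the outer-loop steps (image build, scanning, and deployment) described above.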

Application Security in Kubernetes

“Shift Left” Application Security in Kubernetes with Open Policy Agent (OPA) and Tanzu Mission Control (TMC)

“Shift Left” Application Security in Kubernetes with Open Policy Agent (OPA) and Tanzu Mission Control (TMC)

To secure a Kubernetes environment, we must adopt the “shift left” security approach from the initial phases of development, rather than wait for deployment to complete and focus on security in later stages of the build. Kubernetes security is constantly evolving, with new features to strengthen both application and cluster security. Kubernetes offers several mechanisms to administer security within the cluster, including enforcing resource limits, API security, standardizing containers, and auditing. Here we discuss one such mechanism, which helps to implement shift-left security in a Kubernetes cluster.

What is OPA?

Open Policy Agent (OPA) is an open-source policy engine that provides a way of expressing policies declaratively as code, which helps offload some decision-making to the Kubernetes cluster's end users, such as developers and operations teams, without impacting the agility of development. OPA uses a policy language called Rego, which allows you to write policies as code for various services, such as Kubernetes, CI/CD, Chef, and Terraform, using the same language. OPA enforces separation of concerns by decoupling decision-making from the core business logic of the applications.

OPA Workflow:

OPA provides centralized policy management and generates policy decisions by evaluating input data against policies (written in Rego) and data (in JSON) through RESTful APIs. Some example policies we can enforce using OPA: which users can access which resources; which subnets egress traffic is allowed to; which node and pod (anti-)affinity selectors to include on Deployments; which clusters a workload must be deployed to; ensuring all images come from a trusted registry; and which OS capabilities a container can execute with.
Other common uses include implementing Kubernetes admission controllers to validate API requests, allowing or denying Terraform changes based on compliance or safety rules, and enforcing certain deployment policies (such as resource limits and metadata types of resources).

Creating Custom Policies using OPA in Tanzu Mission Control (TMC)

VMware Tanzu Mission Control is a centralized hub for simplified, multi-cloud, multi-cluster Kubernetes management. Tanzu Mission Control aims to help with the following Kubernetes operations: managing clusters on public cloud, private cloud, and edge; cluster lifecycle management on supported providers; managing security across multiple clusters; centralized policy management; access management; and cluster conformance. VMware Tanzu Mission Control provides centralized policy management for specific policies that you can use to govern your fleet of Kubernetes clusters. The policies include access controls, image registry policies, and resource limit policies. While these cover baseline policies, TMC also offers the ability to create custom policies using Open Policy Agent (OPA). Custom policies are somewhat open-ended and provide the opportunity to address aspects of cluster management that specifically suit the needs of your organization. As described above, OPA implements specialized policies that enforce and govern your Kubernetes clusters.

Closing thoughts:

Enterprises use OPA to enforce, govern, audit, and remediate policies across all IT environments. You can use OPA to centralize operational, security, and compliance aspects of Kubernetes in the context of cloud-native deployments, CI/CD pipelines, auditing, and data protection. Thus, OPA enables DevOps teams to shift control over application authorization further left, advancing the adoption of DevSecOps best practices.
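To illustrate, a trusted-registry rule like the one mentioned above can be expressed as a Gatekeeper ConstraintTemplate carrying the Rego policy; the template name and parameter below are illustrative assumptions, and the rule as written checks Pod specs only:

```yaml
# Illustrative Gatekeeper ConstraintTemplate embedding a Rego trusted-registry rule.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8strustedregistry
spec:
  crd:
    spec:
      names:
        kind: K8sTrustedRegistry
      validation:
        openAPIV3Schema:
          type: object
          properties:
            registry:          # trusted registry prefix, supplied per constraint
              type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8strustedregistry

        # Deny any container whose image does not start with the trusted prefix.
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not startswith(container.image, input.parameters.registry)
          msg := sprintf("image %v is not from the trusted registry", [container.image])
        }
```

A matching `K8sTrustedRegistry` constraint then binds the rule to namespaces and supplies the `registry` parameter, which is the shape TMC custom policies build on.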


Secrets management in Kubernetes using Sealed Secrets

Secrets management in Kubernetes using Sealed Secrets

Kubernetes has gained popularity due to its core nature of running an immutable infrastructure, where pods and containers can be destroyed and replaced automatically. This helps to ease deployment friction, as you declaratively describe the resources in a manifest file. Kubernetes manifest files can be stored in a source code repository like GitHub, and Kubernetes operations can then be managed easily using the GitOps methodology. However, one of the biggest challenges in Kubernetes is the secure storage and rotation of credentials and secrets, such as passwords, keys, and certificates. While Kubernetes offers basic secrets management capabilities, it doesn't help secure secrets needed both inside and outside of Kubernetes. Here we discuss one way to address this issue: Sealed Secrets.

Sealed Secrets:

A SealedSecret is a Kubernetes object that lets you store encrypted Kubernetes Secrets safely in version control. It consists of two main components: the Sealed Secrets controller (server side) and the kubeseal utility (client side). The first step in using Sealed Secrets is to install the controller in the target cluster using the sealed-secrets helm chart.
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
helm install sealed-secrets-controller --namespace kube-system --version 2.13 sealed-secrets/sealed-secrets

Install the kubeseal client on your machine:

wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.17.3/kubeseal-linux-amd64 -O /usr/local/bin/kubeseal

or, depending on your platform:

brew install kubeseal
yum install kubeseal

Create and encrypt the secret using kubeseal:

kubectl create secret generic db-password -n test --from-file=dbpassword.txt --dry-run=client -o yaml | kubeseal -o yaml > db-password.yaml

The output of the above command is:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: db-password
  namespace: test
spec:
  encryptedData:
    DB_PASSWORD: GBbjeKXfPSjqlTpXSxJWwFWd1Ag/6T6RS1b6lylLPDfFy4Xvk0YS+Ou6rED1FxE1ShhziLE8a7am0fbiA2YuJMSCMLAqc2VYcU3p3LS0QKXdWKelao7h5kLwue7rEnCnuKLSZXHuU6DV/yCBYIcCCz88dBmzE8ga1TARLsFRrZmq2EWgU/ON57tIexCEAyztWreJi1Qnf0uJZE56Zg3x1Fj7MJ4Z06pcSSAwY2v0yZ8UNo1qzdmTfkOg0sMXdaFwF9Nga83MPeXfyKdfiH6kAW+LjUbpWi4JHEK7elZswRCBtU6caKt2sxfmue38UbQw8AXL5TmECqwttuKADWictIfWWhCYnyaO7DQm7+a2kfKUaUHZlw8X3vJtoiXAO/cEFJv2+X29gmwvX24gixgD6yrnxpA+GBbjeKXfPSjqlTpXSxJWwFWd1+H1Fb4FWVs6m1PxehsrHDbVTk8kGVXDzV1KK9EjF+CIxQPhGEQTUVq4qMmLAnPKw8HQYmh73v1K/a2kfKUaUHZlw8X3vJtoiXAO/cEFJv2+X29gm
  template:
    data: null
    metadata:
      creationTimestamp: null
      name: db-password
      namespace: test

In the manifest file above, the database password is encrypted. Only the sealed-secrets controller within the cluster can decrypt the value, so the file can be safely stored in version control.


How to Optimise Kubernetes Costs?

How to Optimise Kubernetes Costs?

The increasing popularity of cloud-native applications has brought technologies like microservices and containers to the front line. Kubernetes is the container orchestration platform most enterprises prefer for automating the deployment, scaling, and management of containers. Many Kubernetes implementations focus on the technical aspects and pay little attention to the costs that accompany the benefits. In a recent survey from the Cloud Native Computing Foundation (CNCF), 68% of participants reported that their Kubernetes costs increased in the past year, with bills surging more than 20% year-on-year for most organisations. So, how do you optimise Kubernetes costs?

How much has your Kubernetes-related spend grown in the last 12 months? Source: FinOps Foundation survey

When looking at optimising infrastructure costs, enterprises consider various cost-management best practices, but Kubernetes requires a specialised approach. Here we discuss some key ways to reduce overall Kubernetes costs.

Size the infrastructure as per the need:

The first step in reducing consumption costs is to size the infrastructure correctly in terms of pods and nodes. While it is advisable to overprovision to cater for unusual spikes, leaving applications free to use unlimited resources can lead to unexpected repercussions. For instance, if a stateful database container consumes all the available memory in a node due to an application fault, other pods are left waiting indefinitely for resources. This can be prevented by setting up quotas at the pod and namespace levels. Additionally, it is good practice to enforce resource request limits at the container level, and to limit the number of pods running on a node, since running too many pods leads to inefficient resource utilisation. For this reason, most cloud providers set hard limits on their managed instances of Kubernetes.
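The quota and limit enforcement described above can be sketched as follows; the namespace name and the numbers are illustrative assumptions to adjust for your workloads:

```yaml
# Illustrative namespace-level caps; names and sizes are assumptions.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"               # cap the number of pods in the namespace
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a
spec:
  limits:
  - type: Container
    default:                 # applied when a container declares no limits
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 250m
      memory: 256Mi
```

With these in place, a misbehaving container can no longer starve its neighbours, and every pod carries a request the scheduler can bill against.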
Choosing the right tools:

A fundamental way of managing any cloud or infrastructure cost is monitoring utilisation and the associated costs over time. This gives users better insight into storage, memory, compute, and network traffic utilisation, and into how the associated costs are distributed between them. Whether on managed instances or bare-metal clusters, almost all clusters today support one tool or another for basic monitoring. For an enterprise with many clusters, it is advisable to use dedicated APM and monitoring tooling such as Dynatrace, New Relic, AppDynamics, Splunk, or Prometheus to get a proper drill-down into resources and utilisation. This enables SREs and Kubernetes admins to gain a more comprehensive view of the environment and optimise costs. Use the monitoring insights to analyse, create actions, and implement more concrete steps for better utilisation and cost optimisation.

Adopting best practices across the delivery pipeline:

DevOps is a proven practice that helps reduce the barriers between development and operations teams, allowing users to create robust and flexible deployments through pipelines. One way to reduce the time and effort needed to deploy containers to a Kubernetes cluster is to automate the build and deployment pipelines using CI/CD tooling. Practices like GitOps are tailor-made to facilitate continuous delivery: manifests are version-controlled in a source code repository, greatly reducing the team's deployment workload. An initial investment is needed to set up continuous integration to build, test, and publish containers, and continuous delivery to deploy those containers to the cluster. Tools like Harness and Argo CD significantly reduce the manual errors that can cause disruptions in the application, leading to less troubleshooting.
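As a sketch of the GitOps setup described above, an Argo CD Application can watch a manifest repository and keep the cluster in sync with it; the repository URL, path, and names here are hypothetical:

```yaml
# Illustrative Argo CD Application; repository and names are assumptions.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: orders-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-manifests  # hypothetical manifest repo
    targetRevision: main
    path: apps/orders
  destination:
    server: https://kubernetes.default.svc
    namespace: orders
  syncPolicy:
    automated:
      prune: true        # delete resources removed from Git
      selfHeal: true     # revert out-of-band cluster changes
```

With automated sync and self-heal enabled, manual deployment steps, and the errors that come with them, are largely eliminated.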
This reduced workload allows teams to focus on more valuable tasks such as developing functionality, fixing bugs, and improving the security posture of the environment.

Conclusion:

Kubernetes deployments and operations can be very costly if implemented and managed inefficiently. Many enterprises adopt Kubernetes without proper practices, tooling, or in-house experience. Without proper guidance, the platform often ends up unoptimised, and businesses that don't think about expenses up front face a heavy operational burden in the long run. Following the practices mentioned above can save a lot of unnecessary Kubernetes cost and encourages the implementation of best practices from the beginning.


C-Level Executives Should Learn the Kubernetes Way

Why C-Level Executives Should Learn the Kubernetes Way of Thinking

C-level executives lead their enterprises to deliver applications and services to customers with the same capabilities that Kubernetes and cloud-native architecture are best known for. So why learn the Kubernetes way of thinking? When executives recognize they need to scale, the scaling needs to occur in a way that won't muddle their existing business processes or lock them into one cloud provider. While they can't turn everything into an API at the click of a button, they can be on the lookout for processes that can only scale by creating additional organizational complexity and bottlenecks. To achieve the power and flexibility that Kubernetes APIs and containers deliver in cloud-native architecture, executives can use intelligent business processes and proven architectural models built to scale up and down automatically in a flexible multi-cloud environment. To use a retail analogy, retailers must have the staff in place to handle the Christmas rush, but they don't need that level of staffing for the remainder of the year. In the same way, with containers and cloud-native architectures, companies need to be able to adapt and scale up or down depending on the demand on an application or system at any given time. Creating the ability to expand capacity easily, or to repurpose staff and resources, allows for impressive results, and if set up correctly, Kubernetes manages this for you automatically in the cloud and gives you greater control.

Summary

The key takeaway from delivering cloud-native solutions is that Kubernetes can enable your business in the cloud, offering major benefits including faster time to market, improved scalability, and enhanced cost optimization. Ideally, you want the technology to underpin your business and deliver greater enablement and ROI.
Kubernetes was developed to create more business agility, allowing organizations to focus on achieving their business objectives rather than spending valuable time and effort on routine tasks and operations. If you need assistance with top-down management of Kubernetes, or with creating an approach to cloud-native, contact us or read more about our Kubernetes consulting.


Application Modernisation with VMWare Tanzu

APPLICATION MODERNISATION WITH VMWARE TANZU

The Need for Accelerating Application Modernisation:

Building innovative, modern apps and modernising existing software are key imperatives for organisations today. Modern apps are essential for deepening user engagement, boosting employee productivity, offering new services, and gaining new data-driven insights. But to maximise the impact of modern apps, organisations need to deliver them rapidly, fast enough to keep up with swiftly changing user expectations and emerging marketplace opportunities. As per Google's CIO guide to application modernisation, C-level executives report that 70-80% of their IT budgets are spent on managing legacy applications and infrastructure, with legacy systems consuming almost 76% of IT spend. Despite this large investment in legacy applications, most businesses fail to see their digital transformation plans through to a satisfactory conclusion. At the same time, the constantly changing digital behaviour of consumers, and the evolution of viable, reduced-opex, self-sustaining infrastructure models better suited to today's pace of technological change, are the primary drivers pushing application modernisation up the CIO/CTO's list of priorities. According to a study conducted by Google, public cloud adoption alone can reduce IT overheads by 36-40% when migrating from traditional IT frameworks. Application modernisation can help reduce costs further: it frees up the IT budget to make space for innovation and to explore new business-value opportunities. Lastly, this digital transformation brings greater agility, flexibility, and transparency while opening operations up to the benefits of modern technologies like AI, DevSecOps, intelligent automation, and IoT.
Challenges to the Adoption:

As per the State of Kubernetes survey 2021, enterprises face five different challenges today with cloud-native and Kubernetes adoption, with lack of experience and expertise in implementation and operations at the top of the list. As more and more businesses move rapidly towards cloud-native practices to enable agility and faster time to market, the operational impacts on the business can vary from time to time. While these challenges bring complexity, it is less complicated, and cheaper, to address them from the very beginning as part of the cloud strategy.

VMware Tanzu portfolio:

VMware's decades of experience in virtualisation, and its quest to bring innovation, drove the introduction of the VMware Tanzu portfolio. The VMware Tanzu portfolio empowers developers to rapidly build modern apps across a multi-cloud landscape while simplifying operations by using Kubernetes as the underlying platform. VMware Tanzu is an essential component of the growing VMware application modernisation portfolio, which provides enterprises with the tools and technology they need to build new applications and modernise their existing application suites. Using the Tanzu portfolio, organisations can rapidly, and continuously, deliver the modern apps that are vital to achieving their strategic goals.

Fast-tracking modern apps delivery: Tanzu helps developers deliver modern apps with a quick turnaround and greater reliability. Organisations can use that speed to better address quickly evolving business requirements and changing priorities.

Flexibility with Kubernetes: With Tanzu, organisations can run Kubernetes in their private clouds and on-premises data centres, in public clouds, and at the edge.
This flexibility helps organisations better align application and cloud decisions with technical and operational requirements.

Simplified Operations: Deploying and managing applications across multiple clouds and environments brings new operational challenges. Tanzu provides tools to manage, govern, and secure all Kubernetes clusters centrally, irrespective of where they reside. As a result, operations teams can meet application security and reliability expectations while controlling costs.

Stronger DevOps Collaboration: Tanzu helps alleviate the tension between rapid development goals and stable operations. It transforms the DevOps relationship by giving operations teams what they need to support fast release cycles.

VMware Tanzu Value Proposition: The core principles underlying the vision for VMware Tanzu are entirely consistent with VMware's promise to help customers run any app on any cloud and to drive Kubernetes adoption, ensuring that businesses don't need to invest in additional code or training.

How can TL Consulting help organisations with the modernisation journey with VMware Tanzu?

Cloud-native adoption requires a mindset shift that drives culture and process change across the organisation's IT landscape and technology choices throughout the stack, with IT as the focal point of the enterprise's business strategy. This shift requires new applications to be developed and delivered with a quick turnaround and with greater reliability and quality. Transforming an existing application into a modern app is a complex process with minimal guaranteed paths to success. A successful transformation requires not only the transformation of your organisation's technology but also of people-centred assets: culture, process, and leadership need to change to keep up with your new ecosystem. Since cloud-native is so new, most organisations lack the experience to handle the transformation road on their own.
It's all too easy to get lost. TL Consulting is well positioned, with certified and experienced professionals, to help your organisation define and drive its vision with a "customer first" approach and a cloud-native philosophy. We will understand the business objectives, long-term strategies, and risks involved, using a pragmatic assessment matrix, and formulate a tailor-made transformation roadmap. We will also assist in the design, architecture, and implementation of the transformation to deliver highly reliable, secure modern apps with a faster time to market.

Service Offerings: TL Consulting brings its consulting and engineering personnel to application modernisation adoption and implementation by providing a range of services.

Summary: Adopting and implementing a cloud-native transformation is not an easy feat. Careful thought and planning are required for a cloud-native strategy and roadmap. For enterprise architects, CTOs, and CIOs thinking about transforming their organisation to support the cloud-native world, one key consideration is standardising their platform and services on a cloud-native platform like VMware Tanzu to gain the maximum benefit from the transformation. While adopting Cloud Native applications can be exciting


How do Kubernetes and Containers Help Your Enterprise?

How do Kubernetes and Containers Help Your Enterprise?

In today's world, the success of any organisation depends heavily on its ability to drive innovation and deliver it at speed. With IT as the enabler of this rapid delivery model, businesses are looking at Kubernetes and container adoption as an essential piece of technology for building, deploying, and managing their modern applications at scale. Containers provide an abstraction for the underlying applications and drive portability, making it possible to run anywhere: across multiple clouds and in on-premises data centres. Furthermore, by providing uniform deployment, management, scaling, and availability services for all applications, irrespective of their technology, Kubernetes offers significant advantages for your IT and development efforts. Kubernetes offers a range of benefits to executives and developers alike; here we discuss some of the key advantages.

The Ultimate Need for Containers and Kubernetes:

Keeping up with the latest technology trends and organisational goals for digitalisation has been tough for IT teams over the last few years. Conventional software models and traditional VM-based IT infrastructure cannot deliver these modern applications at scale. To deliver new-age applications, one should adopt new software practices such as agile and DevOps, along with cloud-native architecture. Containers and Kubernetes are the two key building blocks of cloud-native architecture, widely used by organisations to deliver faster, more reliable, and more efficient software with significant cost reductions across the application life cycle.

Key Advantages:

Lightweight: Containers are very lightweight compared with traditional virtual machines. A container includes everything it needs to run: its system libraries, dependencies, and code.
Multiple containers can run inside a single node of a cluster; the VM hosts the OS and container runtime, and the team can still take advantage of all the capabilities of traditional infrastructure virtualisation.

Speed: Due to their lightweight nature, a container image can be built and a container deployed in a matter of seconds. Once the image is ready, containers can be replicated quickly and deployed easily as needed. Destroying a container also takes seconds. This enables quicker development cycles and faster operational tasks.

Portability: Containers can run anywhere the container engine supports the underlying operating system: on Linux, Windows, macOS, and many others. Containers can run in virtual machines, on bare-metal servers, locally on a developer’s laptop, and in all major public clouds. They can easily be moved between on-premises machines and public cloud, and they continue to work consistently across all these environments. RedHat’s market dynamics report illustrates how organisations benefit from container and Kubernetes adoption.

Kubernetes for ‘everyone’: Kubernetes is well known for automating the configuration, deployment, and scaling of microservice-based applications implemented using containers. Microservices-based applications orchestrated by Kubernetes are highly automated in their deployment, management, and maintenance, making it possible to create applications that are highly responsive and adaptive to spikes in network traffic and demands for other resources. This offers significant advantages to IT executives and developers alike.

Biggest Barriers to Kubernetes Adoption: Cost of Adoption: One of the biggest obstacles to wider Kubernetes (K8s) adoption is determining the cost of adopting and running workloads in Kubernetes clusters.
Cost is a key factor for executives deciding whether to leverage Kubernetes in their enterprise. A recent FinOps Foundation survey, in which 75% of respondents reported having Kubernetes in production, highlights Kubernetes cost-management difficulties. It revealed that spending on Kubernetes is spiking beyond what deployments should likely require. The survey’s subtitle isn’t exactly subtle: “Insufficient — or non-existent — Kubernetes cost monitoring is causing overspend.”

Lack of Skills and Training: Another barrier to adoption is the lack of personnel skilled and experienced in containerisation and orchestration. Although Kubernetes and container adoption is growing rapidly, many organisations still face a steep learning curve to effectively build, deploy, and manage Kubernetes, due both to the technology’s immaturity and to a lack of operational excellence with it. Organisations are trying various approaches, such as paired programming, partnerships, education, and training, to overcome this barrier.

Visibility and Monitoring: Enterprises deploying Kubernetes clusters that span multiple public clouds and/or traditional virtualised data centres or managed services face an increasing amount of complexity. To realise the greatest benefits, organisations need to be able to visualise their entire Kubernetes footprint, including all its workloads (applications, containers, pods, nodes, namespaces, etc.), their dependencies, and how they interact with each other in terms of network bandwidth, response times, and memory utilisation, for cluster management and optimisation.

Security and Compliance: While enterprises prioritise speed in software delivery, security and compliance are sometimes just an afterthought. Security is a major challenge in the container world, just as it is almost everywhere else in IT. Despite many changes and innovations so far, container security is still not on par with traditional models.
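Visualising the footprint described above starts with aggregating raw workload data into a per-namespace view. A small sketch of that aggregation step, assuming pod records have already been fetched (in practice from the Kubernetes API or a monitoring agent); the record fields are illustrative:

```python
from collections import Counter

def summarise_footprint(pods):
    """Aggregate a flat list of pod records into a cluster-footprint summary.

    Each record is a dict like {"namespace": ..., "node": ..., "memory_mb": ...}.
    """
    pods_per_namespace = Counter(p["namespace"] for p in pods)
    nodes_in_use = {p["node"] for p in pods}
    total_memory_mb = sum(p["memory_mb"] for p in pods)
    return {
        "pods_per_namespace": dict(pods_per_namespace),
        "nodes_in_use": len(nodes_in_use),
        "total_memory_mb": total_memory_mb,
    }
```

Real visibility tooling adds dependencies, network bandwidth, and response times on top of this kind of inventory, but the inventory is the foundation for both cost monitoring and optimisation.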
Due to the unique nature of Kubernetes and containerised environments, a single misconfiguration can easily be multiplied across many containers. A security breach of a container is almost identical to an operating-system-level breach of a virtual machine in terms of potential application and system vulnerability.

How to Overcome These Challenges: Many organisations want to adopt containers and leverage their benefits but struggle to justify the total time, resources, and cost needed to develop and manage them internally. One approach is to use VMware Tanzu to organise Kubernetes clusters across all environments, set policies governing access and usage permissions, and enable teams to deploy Kubernetes clusters in a self-service manner. This gives infrastructure and operations teams visibility into and command of their Kubernetes footprint while still empowering developers to use those resources, with a focus on delivering solutions rather than worrying about infrastructure.

Bottom Line: Evidently, Kubernetes adoption helps drive innovation and rapid, reliable software development.
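Because one misconfiguration replicates across every container stamped from the same template, automated policy checks pay off quickly. A minimal sketch of such a check over a pod spec in dict form; the specific rules are illustrative, and a real policy engine (for example OPA/Gatekeeper) would cover far more:

```python
def audit_pod_spec(pod_spec):
    """Return a list of policy findings for a pod spec (dict form)."""
    findings = []
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            findings.append(f"{c['name']}: runs privileged")
        if not sc.get("runAsNonRoot"):
            findings.append(f"{c['name']}: may run as root")
        if c.get("image", "").endswith(":latest"):
            findings.append(f"{c['name']}: uses mutable ':latest' tag")
    return findings
```

Running checks like this in the CI pipeline, before a template is ever deployed, stops a bad setting from being multiplied across the fleet.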



Pressure on teams to modernise applications

Pressure on teams to modernise applications As many organisations move towards a cloud-native approach, the need to modernise applications using new platforms and products is inevitable. But are the expectations on teams too much? With agile delivery the norm, teams are empowered to experiment, align capacity to continuously learn, and are encouraged to fail fast. That said, there is increasing pressure on teams to cut corners and adapt tools and engineering standards as they deliver. In TL Consulting’s opinion, this is when most teams fail to adopt Kubernetes and other modern technology correctly. Issues begin to appear right through the build pipeline, most commonly with security, multi-cloud integration, compliance, governance, and reliability.

Embedding modern engineering standards: Organisations often opt for a lift-and-shift approach to reduce OPEX and/or CAPEX. However, the underlying code is often not mature enough to be decoupled correctly and housed within a container. This requires considerable rework and creates an anti-pattern for software engineering teams. Instead, to move from the traditional three-tier architecture and implement new technical stacks, new development principles for cloud applications such as the Twelve-Factor App need to be embraced. Other levels of DevSecOps automation and infrastructure as code need to become the engineering standard too.

The Twelve-Factor App: The Twelve-Factor App is a methodology providing a set of principles for enterprise engineering teams. As with microservices architecture, teams can leverage these principles to embed engineering strategies. This does require highly skilled engineers to create models that can be adopted and reused by development teams.

Engineering support: With these expectations placed on immature development teams, the pressure and demand on resources impact performance and quality.
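One of the most commonly adopted Twelve-Factor principles is factor III, storing configuration in the environment rather than in code, which is what makes the same container image deployable to any environment. A minimal sketch, with illustrative variable names and local-development-only defaults:

```python
import os

def load_config(env=os.environ):
    """Read service configuration from the environment (Twelve-Factor, factor III).

    Variable names are illustrative; the defaults suit local development only,
    so production deployments must set the real values.
    """
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost:5432/dev"),
        "port": int(env.get("PORT", "8080")),
        "log_level": env.get("LOG_LEVEL", "info"),
    }
```

Because the container image never embeds environment-specific values, the same artefact moves unchanged from a developer’s laptop through test and into production.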
From our experience, even Big 4 banks require assistance to modernise applications and seek external support from platforms and products, e.g. VMware Tanzu, to modernise their app portfolios. VMware Tanzu is an abstraction layer on top of Kubernetes platforms that enables enterprises to streamline operations across different cloud infrastructures. Tanzu provides ease of management, portability, resilience, and efficient use of cloud resources. It is important to note that to successfully implement the likes of Tanzu’s suite of products, an organisation needs to establish a DevSecOps culture and mature governance models.

Embracing DevSecOps: TL Consulting has found that many organisations need guidance when embedding a culture shift towards DevSecOps. Teams must have a security-first mindset. The norm therefore should not be limited to security testing such as Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST), but should instead focus on securing applications by design and automating security practices and policies across the SDLC. After all, the goal is to standardise teams’ daily activities so that secure software is built into cloud-native engineering workflows.

Infrastructure as Code (IaC): As IT infrastructure has evolved, leveraging IaC can now be empowering for teams. Engineers can spin up fully provisioned environments that scale, are secure, and are cost-effective. However, if DevSecOps and infrastructure automation orchestration are not aligned, CI/CD pipelines and cloud costs will be difficult to control. To achieve sustainable processes and practices, implementing a DevSecOps culture with mature governance models will help keep cloud costs optimised.

Conclusion: Providing teams with capacity and implementing modern technology platforms alone will not overcome the engineering challenges faced when modernising applications.
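The core idea of IaC mentioned above is that an environment is rendered from versionable data rather than built by hand. A hedged sketch of that idea: the resource types and fields below are illustrative, not a real provider schema, but they show an environment defined entirely as data:

```python
import json

def render_environment(env_name, node_count):
    """Render a declarative, JSON-style description of an environment.

    Resource types and fields are illustrative; a real IaC tool (Terraform,
    Pulumi, etc.) would validate these against a provider schema.
    """
    return {
        "environment": env_name,
        "resources": [
            {"type": "k8s_cluster", "name": f"{env_name}-cluster", "nodes": node_count},
            {"type": "registry", "name": f"{env_name}-registry"},
        ],
    }

print(json.dumps(render_environment("staging", 3), indent=2))
```

Because the definition is plain data, it can be code-reviewed, diffed between environments, and gated by the same DevSecOps pipeline as application code, which is how IaC and cost governance stay aligned.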
To modernise applications requires an established DevSecOps culture, robust governance models, and highly skilled teams. Additionally, each team needs to understand the application(s) under its control to determine what needs to be automated. For example:
- the purpose of the application and the customer experience
- architecture and design of the application and its dependencies
- application workflows and data privacy policies
- compliance with government-managed data (if applicable)
- business security policies and procedures
- cloud security policies and procedures which impact the application
- the application infrastructure employed
The modern platforms, products, and tools therefore become enablers to optimise cloud-native adoption, not solutions in themselves. This is where onsite education, guidance, and support from experts, and subscription models like A Cloud Guru, can be highly beneficial for leaders and engineers. If you are facing challenges implementing DevSecOps or adopting modern technology platforms such as Kubernetes, contact us.



Road to a Cloud Native Journey

Road to a Cloud Native Journey Author: Ravi Cheetirala, Technical Architect (Cloud & DevSecOps) at TL Consulting. “Cloud native” is the new buzzword in modern application development. It is an evolving application build pattern. The technology is relatively new to the market; thus, our understanding of the architecture is still primitive and keeps changing over time with technological advancements in cloud and containers. Understanding the cloud-native approach and strategy builds a shared understanding among developers, engineers, and technology leaders so that teams can collaborate with each other more effectively.

The Need for Cloud Application Modernisation: In today’s IT landscape, 70-80% of C-level executives report that their IT budgets are spent managing legacy applications and infrastructure; legacy systems consume almost 76% of IT spend. Despite the large investment in legacy applications, most businesses fail to see their digital transformation plans through to a satisfactory conclusion. On the other hand, the constantly changing digital behaviours of consumers, and the evolution of viable, reduced-opex, self-sustaining infrastructure models better suited to today’s pace of technological change, are the primary drivers pushing application modernisation up the CIO/CTO’s list of priorities. According to a study conducted by Google, public cloud adoption alone can reduce IT overheads by 36-40% when migrating from traditional IT frameworks. Application modernisation can help with further reduction: it frees up the IT budget to make space for innovation and for exploring new opportunities for business value. Lastly, this digital transformation brings greater agility, flexibility, and transparency while opening operations up to the benefits of modern technologies like AI, DevSecOps, intelligent automation, IoT, etc.
Kickstarting the Cloud Native Journey: Beyond the upfront investments, after creating buy-in, application modernisation entails several considerations for CIOs and, more importantly, a game plan to manage the massive amount of change that comes with such a large-scale transformation. However, moving away from the sunk costs of legacy IT can help enterprises take on a new trajectory of profitability and value. Here are four essential steps to a successful application modernisation roadmap.

Assessment of the legacy system landscape: The first and crucial step of the application modernisation journey is an assessment of the legacy systems: identify the business-critical systems, applications, and business processes. High-value assets that need to be modernised as a priority can form the first tier of the legacy application modernisation process. Next, start with business-value and technical-impact assessments. The outcome of these assessments drives the journey further down the roadmap.

Pick your anchor applications: Once an assessment is complete and business services are identified, teams must shortlist their modernisation options from their legacy application suite. This list enables a more targeted implementation plan. Following this, an implementation framework needs to be developed and implemented, which will help you create a modernisation schedule. The assessment should also help in determining the scope of the project, the team, the technologies, and the skills required.

Define the success criteria: Different application transformation approaches carry different costs and risks. For instance, refactoring a legacy application can cost much more than rebuilding it on a new technical stack. Organisations often fail to determine the target outcomes effectively.
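The assessment-and-shortlisting step above amounts to scoring each application and ranking the portfolio. A toy sketch of that ranking; the scoring weights and criteria are illustrative, and a real assessment would use far richer inputs (dependencies, compliance exposure, team skills, run cost):

```python
def prioritise_apps(apps):
    """Rank applications for modernisation, highest priority first.

    Each app is a dict with illustrative 1-5 scores for business value and
    technical risk; business value is weighted more heavily here by assumption.
    """
    def score(app):
        return app["business_value"] * 2 + app["technical_risk"]
    return sorted(apps, key=score, reverse=True)
```

The top of the ranked list is a reasonable starting point for the first tier of anchor applications, with the weights tuned to the organisation’s own priorities.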
So it is very important to measure the change, costs, and risks involved, along with the return on investment and the features we aim to improve, and to set new benchmarks for attaining agility and resilience while bringing an enhanced security and risk management strategy into the portfolio.

Structure of the target operating model: The traditional operating structure of network engineers, system administrators, and database engineers is no longer fit to support the modern digital transformation landscape, so the organisation must realign its operating model, alongside an upskilling/reskilling path. In the end, applications are ultimately maintained and supported by people, and your end-state operating model must account for ownership of microservices, who will configure and manage the production environment, and so on.

Benefits of Cloud Native Applications: Drives Innovation: With a new cloud-native environment, it is easy to drive digital transformation and adopt new-age technologies like AI/ML and automation-driven insights, as these are readily available in most cloud environments and come with easy integration into applications.

Ship Faster: In the current world, the key to the success of any business is time to market. With DevOps and CI/CD capabilities, it is entirely possible to deploy changes very frequently (multiple times a day), whereas it can take months to deploy a change in traditional software development. Using DevOps, we can transform the software delivery pipeline using automation: build automation, test automation, and deploy automation.

Optimised Costs: Containers manage and secure applications independently of the infrastructure that supports them. Most organisations use Kubernetes to manage large volumes of containers. Kubernetes is an open-source platform that is the standard for managing resources in the cloud.
Cloud-native applications use containers, so they benefit fully from containerisation. Alongside Kubernetes, there is a host of powerful cloud-native tools. This, along with an open-source model, drives down costs. Enhanced cloud-native capabilities such as serverless let you run dynamic workloads and pay per use for compute time in milliseconds. Standardisation of infrastructure and tooling therefore helps to reduce cost further.

Improved Reliability: Achieving high fault tolerance is hard and expensive with traditional applications. With modern cloud-native approaches like microservices architecture and Kubernetes in the cloud, you can more easily build applications to be fault-tolerant, with resiliency, autoscaling, and self-healing built in. Because of this design, even when failures happen you can easily isolate their impact so they don’t take down the entire application. Instead of servers and monolithic applications, cloud-native microservices help you achieve higher uptime and thus further improve the user experience.

Foundational Elements of Cloud Native Applications: In general, cloud native applications are designed

