
Friday, March 27, 2026

Cloud Native Without Enforcement: Why principles are not enough

Cloud Native is often used as a direction, but rarely defined in a way that influences actual decisions. It appears in strategies and architecture documents, yet teams still make their own interpretations in practice. Platforms diverge, implementations vary, and architecture struggles to keep pace.

This is not caused by a lack of intent. It is caused by a lack of enforceable constraints. Most teams understand the direction and are willing to work towards it. They adopt containers, introduce automation, and try to align with what is described as Cloud Native. The issue is that none of this is bounded by clear, enforceable constraints. There are no hard rules that define what is allowed and what is not. As a result, teams are left to interpret principles on their own.

One team may run containers on a managed platform, another may run them on virtual machines. One team may use standardized pipelines, another may rely on manual steps. Both can argue they are working “Cloud Native,” because there is no shared definition that can be verified. In that situation, alignment becomes optional. Decisions are made locally, based on speed, familiarity, or immediate constraints, rather than on a consistent architectural baseline.

Without enforceable constraints, architecture cannot guide behavior. It can only describe an intention, while implementation continues to diverge.

As long as Cloud Native is described in general terms, it does not guide behavior. Statements such as “we use containers” or “we automate everything” do not determine what is allowed and what is not. They leave too much room for interpretation. In that space, teams optimize locally, often with valid reasons, but without overall consistency.

The result is predictable: multiple platforms, inconsistent deployment models, and unclear boundaries between workloads. Architecture becomes descriptive instead of directive.

This perspective is based on my practical experience in enterprise architecture within a multi-tenant government environment, where platform choices, integration patterns, and delivery models are not theoretical concerns but daily decisions. In that context, the gap between architectural intent and implementation becomes visible quickly.


The effect becomes visible when you look at how teams actually implement these principles. The following illustrates how this plays out in practice when no enforceable constraints are in place.

Without enforceable constraints

This diagram shows what happens when Cloud Native is defined as a direction, but not enforced through concrete rules. Teams interpret the principles in their own way, leading to different implementation choices and a fragmented platform landscape.


Cloud Native is defined at a high level, but not translated into rules that guide implementation. Teams interpret the direction based on their own context, priorities, and constraints. This leads to different approaches to platforms, deployment, and integration. Over time, these differences accumulate into a fragmented landscape where consistency is lost and architectural control becomes difficult to maintain.


With enforceable constraints

This diagram shows how clear, enforceable constraints translate Cloud Native from a general direction into consistent implementation. Workloads are classified, and rules are applied through the platform and delivery process, resulting in predictable and controlled outcomes across teams.


This does not remove flexibility, but it makes deviations explicit and manageable.


From intent to enforceable constraints

If Cloud Native is to have any impact, it needs to be defined in terms of what can be enforced. The question is not what we prefer, but what we are willing to require.

A first step is to make explicit that not every workload is Cloud Native. Treating all systems the same creates ambiguity and weakens decisions. A simple classification such as Cloud Native, transitional, and legacy removes that ambiguity. Each category comes with consequences. A Cloud Native workload is not just containerized. It runs on the designated platform, follows a defined deployment model, and meets requirements for isolation and lifecycle management. Transitional and legacy workloads are handled differently, without forcing them into a model they do not fit.
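As a minimal sketch of what such a classification could look like, the check below assigns a workload to a category based on the constraints it actually satisfies, not on labels. The category names follow the text; the field names and criteria are illustrative assumptions, not a prescribed schema.

```python
from enum import Enum

class WorkloadClass(Enum):
    CLOUD_NATIVE = "cloud-native"
    TRANSITIONAL = "transitional"
    LEGACY = "legacy"

def classify(workload: dict) -> WorkloadClass:
    """Classify a workload by the constraints it satisfies, not by labels."""
    # Cloud Native means: designated platform, pipeline-only deployment,
    # and the isolation baseline is met (all field names are illustrative).
    if (workload.get("platform") == "designated-k8s"
            and workload.get("deploys_via_pipeline")
            and workload.get("meets_isolation_baseline")):
        return WorkloadClass.CLOUD_NATIVE
    if workload.get("containerized"):
        # containerized, but not yet meeting the full set of constraints
        return WorkloadClass.TRANSITIONAL
    return WorkloadClass.LEGACY
```

The point is that "Cloud Native" becomes the outcome of verifiable checks rather than a self-declared label.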

Deployment is where most of the inconsistency becomes visible. As long as manual deployments are allowed, consistency remains optional. Requiring all changes to go through standardized pipelines creates a single path to production. This is where behavior becomes predictable and where policies can be applied in a consistent way. It also makes it clear how systems are built and operated.
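One way to make the single path to production concrete is a release gate that only admits changes carrying pipeline provenance. This is a hedged sketch: the `pipeline_id` field and the notion of an approved-pipeline list are assumptions for illustration.

```python
def admit_release(change: dict, approved_pipelines: set) -> tuple:
    """Admit a change only if it was produced by an approved pipeline."""
    pipeline = change.get("pipeline_id")
    if pipeline is None:
        # manual deployments carry no provenance and are rejected outright
        return False, "rejected: no pipeline provenance (manual deployment)"
    if pipeline not in approved_pipelines:
        return False, f"rejected: pipeline '{pipeline}' is not an approved path to production"
    return True, "admitted"
```

Whatever the mechanism, the effect is the same: there is exactly one way a change reaches production, so policies can be applied at that one point.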

Isolation needs to be defined beyond logical separation. In many environments, constructs such as namespaces are treated as sufficient, while actual isolation depends on network controls, access boundaries, and runtime constraints. Without a defined baseline, shared platforms are difficult to govern. Setting minimum requirements makes it clear which workloads are allowed and under which conditions.
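A baseline like this can be expressed as a small set of required controls and a diff against what a workload actually has. The control names below (network policies, scoped RBAC, resource limits) are plausible examples, not an authoritative list.

```python
# Illustrative minimum-isolation baseline for a shared platform; a real
# baseline would come from the platform team's standards.
REQUIRED_CONTROLS = {"network_policy", "rbac_scoped", "resource_limits"}

def isolation_gaps(workload_controls: set) -> set:
    """Controls a workload is missing relative to the baseline."""
    return REQUIRED_CONTROLS - workload_controls

def meets_baseline(workload_controls: set) -> bool:
    # a workload is admissible only when no required control is missing
    return not isolation_gaps(workload_controls)
```

A namespace alone would fail this check, which is exactly the point: logical separation is not the baseline, the listed controls are.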

Integration follows the same pattern. Without constraints, systems connect directly, driven by immediate needs. Over time, this leads to tight coupling and limited visibility. Defining APIs and messaging as the standard integration model introduces consistency and makes deviations explicit. It also aligns with established practices around controlled access and traceability, as reflected in frameworks such as ISO/IEC 27001 and the NIST Cybersecurity Framework.
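Making those deviations explicit can be as simple as flagging every integration that bypasses the standard model. The sketch below assumes each integration is described with a `kind` field; the values are illustrative.

```python
# APIs and messaging as the standard integration model (per the text);
# anything else is a deviation to be reviewed, not silently accepted.
ALLOWED_KINDS = {"api", "messaging"}

def integration_deviations(integrations: list) -> list:
    """Return integrations that deviate from the standard model."""
    return [i for i in integrations if i.get("kind") not in ALLOWED_KINDS]
```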




Maturity models, such as the one from the Cloud Native Computing Foundation (see also my earlier post on the CNCF Maturity Model), are widely used in the Cloud Native ecosystem to describe progress and capability. They provide structure, but they do not define enforceable boundaries. In practice, teams at the same maturity level can still make very different implementation choices. Without constraints, maturity does not lead to consistency.

The Cloud Native Maturity Model

The platform is where these constraints become real. It defines how workloads are deployed, how isolation is implemented, and how integrations are exposed. If teams are free to select their own platforms, differences in behavior will follow. By setting platform boundaries, architecture ensures that constraints are not only defined but also applied.

Enterprise Architecture: Making Cloud Native enforceable

This changes what enterprise architecture actually does. Instead of describing intent, it defines the conditions under which solutions are acceptable. That reduces interpretation and limits unnecessary variation. It also makes deviations visible, so they can be discussed and managed.

In reality, platform direction is often not fully established. Multiple solutions coexist, and teams move forward because delivery cannot wait. Architecture defines direction, while teams are already moving. Ignoring that does not help. Defining constraints that apply regardless of the final platform choice is what keeps control during that transition.

Enterprise architecture does not need to prescribe every detail. It needs to define boundaries that can be verified and enforced. Within those boundaries, teams remain free to design and deliver.

How to see this in practical terms?

In practice, this does not require a complete redesign. It starts by making a small number of decisions explicit. Define which workloads are allowed on which platform. Require all deployments to go through pipelines. Make integration standards non-optional. These constraints don't need to be perfect, but they need to be enforced.

Cloud Native is not a label or a technology choice. It is a set of constraints. Without those constraints, architecture describes intent. With them, it shapes outcomes.

Monday, December 29, 2025

Policy Belongs in the Pipeline

A practical perspective on build-time governance in CI/CD

Governance comes up quickly when people discuss modern CI/CD. Compliance too. Everyone agrees it matters.

And yet, when you look at how pipelines are actually built, policy enforcement is often surprisingly thin. Not absent, just fragile. That gap is usually not caused by bad intent. More often, responsibility simply ends up in the wrong place.

Where policy enforcement tends to drift

In many environments, policy enforcement gradually assumes familiar forms.

  • rules that live in documents instead of pipelines
  • scripts added late in the delivery flow
  • checks that only run after deployment
  • tools developers are expected to install locally

None of these is an unreasonable choice on its own. The problem is what they have in common.

When something fails, it becomes difficult to answer basic questions: where the rule was enforced, when it was evaluated, and why the pipeline made that decision.

That uncertainty is rarely a tooling problem. It is almost always an architectural one.

Stepping back from tools

At some point I stopped asking which tool would solve this best. That question tends to lead nowhere.

A more useful question turned out to be simpler: Where should policy enforcement actually live?

Not on developer machines. Not only at runtime. And not as an afterthought added to an otherwise finished pipeline.

The answer I keep coming back to is uncomplicated: policy enforcement belongs inside the CI/CD pipeline, at build time. Once you accept that, many design decisions stop being optional.

What changes when policy is embedded in the pipeline

Integrating policy enforcement into the pipeline forces transparency. Inputs must be explicit, and hidden assumptions can no longer be relied on. Hidden state becomes a liability.

Decisions must also be predictable. Running the same pipeline twice with identical inputs should produce the same outcome.

And when a rule is violated, the pipeline should stop immediately: no warnings, no deferrals, just a halt. That may seem strict, but without it, governance quickly becomes negotiable.
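A minimal sketch of those properties: policies as pure functions of explicit inputs (so identical inputs always yield identical decisions), with the first violation halting evaluation. The two example policies are assumptions for illustration, not rules from any particular framework.

```python
def no_latest_tag(build: dict):
    """Images must be pinned to a specific tag."""
    if build.get("image_tag") == "latest":
        return "image tag ':latest' is not allowed; pin a version"

def signed_commit(build: dict):
    """Only signed commits may be built."""
    if not build.get("commit_signed"):
        return "commit is not signed"

POLICIES = [no_latest_tag, signed_commit]

def enforce(build: dict):
    """Return the first violation (fail fast), or None if all pass.
    Pure function of its input: same build dict, same decision."""
    for policy in POLICIES:
        violation = policy(build)
        if violation:
            return f"{policy.__name__}: {violation}"
    return None
```

In a real pipeline step, a non-None result would translate into a non-zero exit code, halting the run on the spot.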



Why local enforcement keeps failing

One pattern that repeatedly causes trouble is policy enforcement that depends on local developer setups.

Different machines behave differently. Versions drift. People work around issues “just this once”.

Over time, ownership becomes unclear. Was the rule enforced by the pipeline, by the tool, or by the developer?

By enforcing policy only inside the pipeline, those questions largely disappear. There is one execution context. One decision point. One place to look.

Developers write code. Pipelines enforce policy. That separation turns out to be surprisingly powerful.

Explainability is not a nice-to-have

Another thing becomes obvious very quickly: pipelines that fail without explanation do not earn trust. “Policy check failed” is not an answer. It is a conversation starter, and usually an unproductive one. If work is blocked, teams need to understand why, immediately and in context. Not by reading a document. Not after escalation. But as part of the pipeline output itself.

Policy-as-code makes that possible, but only if explanation is treated as part of enforcement, not an add-on.
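The difference is between a bare boolean and a decision that carries its own explanation. The sketch below is illustrative: the registry name and CVE threshold are assumed values, and in an OPA-style setup the reasons would come from the policy engine's output rather than hand-built strings.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    allowed: bool
    reasons: list = field(default_factory=list)  # why the pipeline decided this

def check_image(image: dict) -> Decision:
    """Evaluate an image and explain every violation in context."""
    reasons = []
    if image.get("registry") != "registry.internal":  # assumed approved registry
        reasons.append(f"image comes from '{image.get('registry')}', "
                       "expected the approved internal registry")
    if image.get("critical_cves", 0) > 0:
        reasons.append(f"{image['critical_cves']} critical CVE(s) found, threshold is 0")
    return Decision(allowed=not reasons, reasons=reasons)
```

Printing `Decision.reasons` into the pipeline log is what turns a blocked build from a mystery into a to-do list.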

A deliberately small experiment

To test these ideas, I built a small reference implementation with a deliberately narrow scope. It enforces policy inside the pipeline, requires explicit input, fails fast, and explains its decisions. It does not try to be comprehensive, and it avoids abstractions that hide what is actually happening. The goal was never a complete solution; it was to make the trade-offs visible.

What stood out

Even in a limited setup, a few things became very clear.

  • pipeline tasks are stateless unless you make state explicit
  • pipeline definitions and pipeline execution are not the same thing
  • changing code does not automatically change behavior
  • governance only works when people understand it

None of this is new. But it is easy to overlook when governance is discussed in abstract terms.

Closing thought

There is a lot of talk about “shifting left”. But what matters beyond timing is responsibility: who enforces the rules, and at which point in the process.

If governance is truly important, it belongs in the delivery process itself: early, explicit, and visible.

And CI/CD pipelines are not just mechanisms for delivering software. They are governance boundaries.

The reference implementation discussed here is available as open source. Feedback and alternative perspectives are welcome. If you want to contribute, I’m most interested in:

  • alternative policy examples
  • clearer policy–pipeline contracts
  • cases where this approach breaks down

Pull requests, issues, and disagreement are all equally welcome.

Repo is located at: https://github.com/mnemonic01/opa-tekton-policies.git


Friday, June 14, 2024

How organizations can boost their Cloud Native Adoption: The CNCF Maturity Model

Introduction

Cloud Native has become important for building scalable and resilient applications in today's IT landscape. As organizations increasingly embrace cloud technologies, it is crucial to assess their maturity in implementing Cloud Native practices. To aid in this process, the Cloud Native Computing Foundation (CNCF) has developed the Cloud Native Maturity Model, which helps organizations evaluate their progress and guides them toward a successful Cloud Native strategy. In this article, I will dive into the CNCF Cloud Native Maturity Model and its significance in shaping the future of organizations' cloud strategies.

I recently (May 2024) started as an Enterprise Architect for the Dutch government in this focus area, and I think this model can help shape the Cloud Native strategy and future of many organizations, wherever they are in the journey: beginning or middle, it doesn't really matter. For those already far along, this may all seem obvious; for large organizations that have existed for many years, however, getting on the right path can be a struggle.


Understanding the CNCF Cloud Native Maturity Model

The CNCF Cloud Native Maturity Model acts as a framework for organizations to evaluate their Cloud Native capabilities across multiple dimensions. It offers a detailed set of criteria that enables organizations to assess their maturity levels in various domains, including:

  • Culture and Organization: This involves evaluating the organization's dedication to embracing Cloud Native practices, enhancing teamwork, and encouraging innovation.
  • Architecture: This handles assessing the organization's capability to create scalable, robust, and loosely coupled systems through microservices architecture.
  • Application Lifecycle: This concerns measuring how effectively the organization integrates automation, continuous integration/continuous delivery (CI/CD), and observability within their application development lifecycle.
  • Infrastructure Automation: This involves improving the organization's skill in automating infrastructure provisioning, management, and scaling with tools such as Kubernetes.
  • Observability: This entails appraising the organization's competence in monitoring, tracing, debugging, and analyzing applications in a distributed setting.
  • Security: This involves judging the organization's strategies for ensuring data protection, secure communications, and the adoption of best practices in securing cloud-native applications.

CNCF Maturity Model Flow

A flow in the journey to become more Cloud Native

Benefits of Using the CNCF Cloud Native Maturity Model

How can an organization benefit from this model? Becoming more agile, resilient, and customer-focused while reducing costs and driving innovation are some of those benefits.
Along the way, an organization will experience some of the following:

  • Self-Assessment: Organizations can use the maturity model to conduct a self-assessment and understand their current level of Cloud Native maturity. This helps identify areas that require improvement and prioritize actions accordingly.
  • Goal Setting: The maturity model provides a clear roadmap for organizations to set goals and define targets for their Cloud Native journey. It helps align the organization's strategy with industry best practices.
  • Benchmarking: The maturity model enables organizations to benchmark themselves against peers and industry leaders. This comparison provides insights into areas where improvements are needed to stay competitive in the market.
  • Decision Making: By evaluating their Cloud Native maturity, organizations can make informed decisions regarding technology adoption, resource allocation, and investment in training and upskilling.
  • Continuous Improvement: The maturity model serves as a continuous improvement tool, allowing organizations to track their progress over time. It promotes an iterative approach towards achieving higher levels of Cloud Native maturity.
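As a sketch of how such a self-assessment could be rolled up, the snippet below scores each dimension of the model on a 1-3 scale (loosely matching the Foundation/Adoption/Maturity levels described later) and takes the weakest dimension as the overall level. Both the scale and the weakest-link rule are my own simplifying assumptions, not part of the CNCF model itself.

```python
# The six assessment dimensions of the model, as listed above.
DIMENSIONS = ["culture", "architecture", "application_lifecycle",
              "infrastructure_automation", "observability", "security"]

def overall_level(scores: dict) -> int:
    """Overall maturity as the minimum across dimensions: an organization
    is only as mature as its weakest dimension (unscored defaults to 1)."""
    return min(scores.get(d, 1) for d in DIMENSIONS)
```

The weakest-link rule is a deliberate design choice: an expert-level CI/CD practice does not compensate for beginner-level security, so averaging would paint too rosy a picture.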

Implementing a Cloud Native Strategy

The CNCF Cloud Native Maturity Model is not just a measurement tool; it also guides organizations in formulating an effective Cloud Native strategy. Here are some key steps to consider when implementing a Cloud Native strategy:

Cloud Native Strategy

Levels of Maturity

The maturity model consists of different levels, which are a good indicator of how far an organization is in its Cloud Native adoption. The levels are:


Cloud-Native Foundation (Beginner)

Containerization:

Begin by containerizing existing applications using Docker or similar tools to create consistent runtime environments. Ensure that applications are stateless when possible.

Continuous Integration/Continuous Deployment (CI/CD):

Implement basic CI/CD pipelines to automate code builds, tests, and deployments.

Version Control:

Adopt a centralized version control system like Git and establish best practices for branching, merging, and code reviews.

Monitoring and Logging:

Implement basic monitoring and logging solutions to gain visibility into application performance and issues.


Cloud-Native Adoption (Intermediate)


Orchestration:
Deploy a container orchestration platform such as Kubernetes to manage containerized applications, including scaling, deployment, and management across multiple nodes. Utilize Helm charts for easier deployment and management of Kubernetes applications.

Microservices Architecture:
Refactor monolithic applications into microservices to enable individual component development, deployment, and scaling.

Advanced CI/CD:
Enhance CI/CD pipelines to support blue-green deployments, canary releases, and automated rollbacks.

Observability:
Implement comprehensive observability tools for logging, monitoring, and tracing.


Cloud-Native Maturity (Expert)

Serverless Architectures:
Explore serverless computing for specific use cases. Utilize platforms for event-driven, scalable applications.

Chaos Engineering:
Introduce chaos engineering practices to test the resilience and reliability of your systems. Simulate failures and improve system robustness.

Advanced Observability:
Integrate AI/ML for predictive analysis and automated anomaly detection. This helps in proactive monitoring and faster issue resolution.

Continuous Improvement:
Establish a culture of continuous improvement. Regularly review and refine processes, tools, and practices to stay aligned with evolving cloud-native technologies and business needs.


Culture

Culture is an important aspect of change. It doesn't change in the blink of an eye, and changing it can be a challenging path. The maturity model can help drive these cultural changes; think of the following:

Training and Development:
Offer continuous training and development opportunities for staff to enhance their skills in cloud-native technologies and practices.

Agile and DevOps Practices:
Promote a culture of agility and collaboration through DevOps practices. Encourage cross-functional teams to collaborate and iterate quickly and flexibly.

Feedback Loops:
Establish feedback loops to consistently gather insights from teams and stakeholders. This feedback can be utilized to make informed decisions and adjustments to the cloud-native strategy.

Governance and Compliance:
Ensure that governance and compliance measures are in place to comply with regulatory requirements and organizational policies.

Also consider that these aspects take time and can't all be implemented at once; introduce them carefully. Take the organization along on the journey and let people come up with their own good suggestions.




Conclusion

As organizations embrace the benefits of Cloud Native technologies, it becomes crucial to evaluate their maturity in implementing these practices. The CNCF Cloud Native Maturity Model offers a valuable framework for self-assessment, goal setting, benchmarking, decision making, and continuous improvement. By leveraging this model and implementing an effective Cloud Native strategy, organizations can unlock the full potential of cloud technologies and stay competitive in today's digital landscape.

This model is a good place to start: today!


Remember: Cloud Native is not just a buzzword; it is a transformative approach that can shape the future of organizations' IT strategies.





Don't let the Clouds rain on you!



