From Energy Efficient Cloud Infrastructure to Energy Aware Cloud Architecture!

Energy Aware Cloud Architecture

Cloud Service Providers (CSPs) have become increasingly aware of the climate impact of cloud computing and, as a result, are proposing measures to make their platforms more sustainable. However, when analyzing their approaches in more detail, sustainability is often reduced to three aspects:

  1. Optimizing the energy consumption of the infrastructure: reducing the overall energy consumption
  2. Using energy from sustainable sources: using green or carbon neutral energy sources
  3. Using equipment that has a sustainable life-cycle: using a green device supply chain

All hyperscale cloud providers translate the optimization of energy consumption into:

  • IT Operational efficiency
  • IT Equipment efficiency
  • Datacenter Infrastructure efficiency

Mapping this to cloud service models, it becomes clear that these approaches work bottom-up in the architecture stack, pay little attention to the application and data level, and focus mostly on infrastructure asset management.

(Figure: the cloud service models IaaS, PaaS and SaaS)

It is good to see CSPs looking at more efficient ways to organize their infrastructure, but the usage of that infrastructure seems underexposed. Efficient usage is one thing; reducing the need for consumption is another. CSPs can’t be blamed for our consumption, but the focus on energy alone, although important, borders on greenwashing: “What is not consumed does not need to be optimized or turned into something carbon neutral!”

All this points to the fact that the approach to cloud-native sustainability is still in its infancy. Bottom-up optimization focuses on efficiency and is becoming an established approach.

(Figure: Energy Management Impact. Source: EC, 2020, Energy-efficient Cloud Computing Technologies and Policies for an Eco-friendly Cloud Market.)

An element that is often overlooked in the bottom-up approach is the cost of organizing the energy optimization itself. It requires additional monitoring and operations management infrastructure, which in turn creates cloud computing and storage requirements, i.e. components that also consume energy. None of the CSPs mention the management overhead cost of this operations management infrastructure.

The top-down optimization, focusing on cloud usage, is still in research and aims to tune cloud efficacy instead of efficiency. The top-down approach involves:

  • Application and Data architecture
  • Application Usage

Application and data architecture has lately received a bit more attention.

Cloud Native

Firstly, through the cloud-native computing paradigm, selecting the right component for the right job has a positive effect. For example, the use of containerized micro-services reduces the need for full-fledged VMs, resulting in an infrastructure optimization: less compute and less storage. It is important not to overlook that this approach also introduces more network communication and more complex management, but the overall overhead is lower than the resource efficiency gains. Data traffic optimization can be found under the Fog Computing techniques currently being researched.

Cloud Programming Efficiency

Secondly, there is programming efficiency, i.e. the correct organization of algorithms in application code. Inefficient algorithms require more CPU cycles, more memory and more storage capacity. Static code analysis tools are currently used to check code for architectural qualities like maintainability. For the moment, checking algorithm efficiency is not commonly available in such tools and remains part of specialized tooling. Programming language and compiler efficiency checks are not mainstream at all. These would involve checking the CPU instruction sets used in the binary code, since different CPU instructions have different effects on the CPU’s energy consumption. These techniques are being developed under the umbrella of Energy-Aware Software Engineering.
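As a minimal illustration of why algorithm choice matters for energy consumption (a textbook example, not tied to any specific tooling), compare a naive exponential-time Fibonacci with a linear-time version: both produce the same result, but the first burns vastly more CPU cycles.

```python
# Two functionally identical Fibonacci implementations with very
# different resource profiles (illustrative textbook example).

def fib_naive(n: int) -> int:
    """Exponential number of recursive calls: O(2^n) work."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_iterative(n: int) -> int:
    """Linear time, constant memory: far fewer CPU cycles."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_naive(20) == fib_iterative(20) == 6765
```

Both return the same value, but the naive version performs tens of thousands of calls for n = 20; this is exactly the kind of inefficiency an energy-aware analysis would flag.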

Cloud Acceptable Use

Application Usage has not received any attention!

Finally, we have to point to the acceptable use of technology. Lately, a lot of focus has gone to the ethics of data privacy. As applications run on infrastructure that consumes natural resources, one should wonder whether a cost-benefit analysis of the usage of an application should not be considered: “It is not because we can that we should use an application and, by extension, the cloud”.

This opens a new debate: from less energy consumption towards the correct application of the available energy. Should we start taxing applications like we do with other consumer products, based on necessity? Alcohol (pleasure) being taxed more than water (basic need) could be translated into a tax on a social media application (pleasure) vs. a banking application (basic need)!

The following document describes Microsoft’s efforts for cloud sustainability:

The EC released the following report on an Eco-Friendly Cloud Market:

How Agile is your Organization?

Maximum flexibility requires maximum responsibility!

Being Agile as an organization and Agile application delivery have been on the radar of many CIOs and CTOs trying to reduce the time to market of the systems under their control. Often I see agility being interpreted by the other C-levels as a methodology for maximum project flexibility. However, maximum flexibility requires maximum responsibility to achieve business value, and that is not solely a technical issue, on the contrary!

Being effective in Agile delivery requires a lot of maturity in different areas during project delivery. We will be looking at six axes besides the typical technical one:

  1. Technical Environment
  2. Quality Management
  3. User Experience
  4. Team Dynamics
  5. Ownership Management
  6. Project Management
  7. Company’s Eco-System

To generate maximum value, the Agile methodology has guiding principles, and these have to be translated into operating procedures. Based on earlier work by M. Balbes, we propose a model to quantify this operational effectiveness by scoring a list of required capabilities on a maturity scale going from no capabilities to innovating and leading capabilities.
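As a minimal sketch of such a scoring model (the axis names, capabilities and 0-4 maturity scale below are illustrative assumptions, not the model from the Excel file), each capability could be scored and aggregated per axis:

```python
# Illustrative maturity scoring: 0 = no capability, 4 = innovating/leading.
from statistics import mean

scores = {
    "Technical Environment": {"Unit testing": 3, "Continuous integration": 2},
    "Quality Management": {"Defect management": 4, "Exploratory testing": 1},
}

def axis_maturity(capabilities: dict) -> float:
    """Average the capability scores of one axis."""
    return mean(capabilities.values())

for axis, caps in scores.items():
    print(f"{axis}: {axis_maturity(caps):.1f} / 4")
```

The per-axis averages then make it easy to spot which of the seven axes needs attention first.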

An Excel version to score your organization can be downloaded here:

Before we try to measure something, what are the Agile principles companies should adhere to?

  1. Deliver Value: Focus on continuously delivering value to the customer, key users …
  2. Embrace Change: Change is good and inevitable. Change avoids waste by adapting before it is too late.
  3. Business + ICT: Avoid Chinese walls between teams. Work closely together every day.
  4. Simplicity: Focusing on what is good enough to avoid gold-plating. Work smarter not harder.
  5. Frequent Delivery: Deliver versions frequently to keep feedback cycles short.
  6. Self-Organizing Teams: Motivated individuals will be able to identify how to organize the work and what they need.
  7. Communication: Transparent, open and face-to-face communication helps insights and clear understanding.
  8. Self-Emerging: Avoid analysis-paralysis and big design up-front. Design for what is needed now and adapt.
  9. Progress Monitoring: Measure progress as delivered software i.e. potential shippable product increments.
  10. Constant Pace: The team should work at a pace they can sustain without feeling pressured.
  11. Technical Excellence: Focus on the quality of artefacts and development process.
  12. Continuous Improvement: Be self-reflective and make incremental improvements to the development process.

For the different axes, an organization needs to have certain capabilities in place; below are some examples of these capabilities. The detailed list can be found in the attached Excel model.

Technical Environment capabilities check whether the principles of extreme programming are enabled. What is the company’s maturity for the following capabilities:

  • Unit testing, test driven development and technical testing approaches
  • Continuous integration methods
  • Pair programming – Spikes experimentation
  • Source control – Branching strategy approaches
  • Release management
  • Coding standards usage
  • Development process – Software Development Life Cycle
  • Shared code ownership
  • Software changeability enablement
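To illustrate the unit-testing and test-driven-development capability from the list above, here is a minimal sketch (the function under test is purely illustrative): the tests encode the expected behavior, and the implementation is written to satisfy them.

```python
# Minimal unit-testing sketch in the spirit of extreme programming.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 15), 85.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the tests programmatically instead of via `python -m unittest`.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Wired into a continuous integration pipeline, such tests are what makes the technical environment capabilities measurable rather than aspirational.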

Quality Management validates whether quality assurance is practiced rather than mere quality measurement. What is the company’s maturity for the following capabilities:

  • Quality management process
  • Code Quality – Internal – External Quality impact analysis
  • Team’s ownership of quality
  • Defect management
  • User acceptance testing – Exploratory testing

User Experience looks at the application of multi-disciplinary teams, since developers are not designers. What is the company’s maturity for the following capabilities:

  • Graphical Design – UX selection process
  • Embedded usability testing

Team Dynamics focuses on how people collaborate and how self-reflective they are. What is the company’s maturity for the following capabilities:

  • Team Structure – Charter agreements
  • Team discussion – Conflict resolution processes
  • Retrospectives – Stand-Ups organizations
  • Information radiators availability
  • Continuous improvements
  • Change acceptance process

Ownership Management’s goal is to see how well the link between business and IT is managed to enhance a project’s business value. What is the company’s maturity for the following capabilities:

  • Identified stakeholders management
  • User stories – Story sizing principles
  • Acceptance criteria – Owner acceptance process
  • Delivering value – User feedback management
  • Prioritization – Backlog – Release Cadence organization
  • Backlog – Change management

Project Management looks for transparency in reporting and enhanced collaboration. What is the company’s maturity for the following capabilities:

  • Kanban – Milestones overview
  • Decisions – Meeting minutes creation
  • Staffing alignment

Finally, we have the company’s Eco-System, or its environment. What is the company’s maturity for the following capabilities:

  • Risk Identification – Monitoring – Mitigation
  • Embedded learning culture
  • Change – Champions creation and management
  • Governance – Internal Change – External Change management

An overview presentation can be downloaded here:

Using Robotic Process Automation wisely – “If you are a hammer, everything looks like a nail” … but probably isn’t!

Recently, Robotic Process Automation (RPA) was embedded in MS Windows 11. Although I’m happy to see such a great capability being added, I fear the incorrect application of this technology. Back in 2016, when the first RPA tooling came to market, I made an overview of technologies capable of automating business processes. RPA is a solution in this area, but not the only one. Out of fear of “if you are a hammer, everything looks like a nail”, or in other words “it is not because we can do it with RPA that we should do it with RPA”, here are some insights into the alternatives to RPA and guidelines on when to apply RPA and when not.

You can download the deck here:

End-to-End Business Process Automation (BPA) is nothing new. It has been around for decades and comes in different flavors. The focus of this article is on automating a flow of activities across multiple systems. This distinguishes BPA from solutions that focus on a single activity or that stay within one system; the latter are typically shortcuts or macros embedded in office tools, or integration APIs where one system invokes an activity in another system. The difference lies in the fact that a process by definition has state. It knows the sequence of activities that make up the flow, it knows the current activity under execution, and it knows about the execution of the previous activities that led up to the current one.
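A minimal sketch of this defining property, process state, could look as follows (the activity names are illustrative assumptions): a process instance knows its flow, its current activity and the history that led up to it.

```python
# Minimal sketch of process state: flow, current activity, history.
from dataclasses import dataclass, field

@dataclass
class ProcessInstance:
    activities: list          # the defined sequence of activities
    current: int = 0          # index of the activity under execution
    history: list = field(default_factory=list)  # completed activities

    def complete_current(self) -> None:
        """Mark the current activity as done and move to the next one."""
        self.history.append(self.activities[self.current])
        self.current += 1

order = ProcessInstance(["validate order", "check stock", "ship", "invoice"])
order.complete_current()
order.complete_current()
print(order.activities[order.current])  # prints: ship
print(order.history)                    # prints: ['validate order', 'check stock']
```

A shortcut, macro or API call has none of this bookkeeping; the state is exactly what both BPMS and RPAS products manage for you.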

Within the group of BPA systems, we can identify two approaches with different characteristics and applications:

  • The Business Process Management Systems (BPMS)
  • The Robotic Process Automation Systems (RPAS)

What adds to the confusion is that most commercial products have become a hybrid between BPMS and RPAS but still it is good to understand the different approaches to business process automation.

Below is a comparison of the typical use-cases and characteristics of RPAS and BPMS:

RPAS

  • Products: BluePrism, Automation Anywhere, Power Automate
  • Typical use-case: integrating Line of Business (LOB) systems through the UI when there is no means to get information out of the LOB system through a system-to-system interface.
  • Characteristics:
    – Typically fewer building blocks and out-of-the-box integration components to build the full business process from A to Z.
    – Process complexity is limited to flow-chart-like flows; little support for hierarchical or nested processes.
    – Processes are executed atomically from begin to end; limited or no support for long-running processes that can be interrupted mid-execution. The duration of an activity is at the order of magnitude of seconds.
    – Elaborate components to integrate LOB systems at screen level: screen scraping (visual pixel level) or screen spying (API widget ID level).
    – Typically used when there are few, simple activities in the process to automate; the process is organized to overcome the UI integration. Important: this UI integration is not human-process interaction (see BPMS).
    – Hard to abstract the workflow from the systems it integrates with, since the UI is used for integration. This tight coupling through the UI requires a new flow per integrated system.

BPMS

  • Products: K2/NinTex, AgilePoint, Windows Workflow Manager
  • Typical use-cases:
    – The process is composed of activities executed in LOB systems (CRM, ERP) that can be targeted through system-to-system integration.
    – The process requires human intervention that can be handled through human-to-system integration: writing a custom UI for the human input and linking that UI to the process through normal integration strategies (web services, messaging, DB). Important: this human-process interaction is not UI integration (see RPAS).
  • Characteristics:
    – Typically used when there are many activities in the process and complex logic drives the process.
    – Processes can be interrupted mid-execution (long running) while waiting for an activity to complete. This interruption can be at the order of magnitude of hours, days, months or years.
    – Typically no components to integrate LOB systems at screen level (no screen scraping or screen spying).
    – Supports abstraction of the workflow from the systems it integrates with, since technical interfaces (APIs/web services) are used for the integration. Loose coupling allows flows to be reused as long as the systems respect the same technical interface. This abstraction (partner management) is typically done using middleware like a service bus (BizTalk).

To summarize: RPAS’ weaknesses are BPMS’ strengths and vice versa

  • RPAS:

– Gaps in supporting all types of business processes and all levels of complexity.

+ Good components for UI integration.

  • BPMS:

– Gaps in UI integration.

+ Good support for all complex long running business processes.

So is the solution an RPAS + BPMS combination?

Pros and cons:

  • Disadvantage: two licenses and two products
  • Advantage: combining the strengths of RPAS and BPMS

Trade-off factors drive the decision. Is UI integration required and can no alternative be found (web services, files, messages, DB)? Go for RPAS. Is the business process complex and long running? Go for BPMS.
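This trade-off can be sketched as a simple decision function (the boolean inputs are an illustrative simplification, not a formal selection method):

```python
# The trade-off from the text encoded as a simple decision function.
def choose_automation(ui_integration_required: bool,
                      complex_long_running_process: bool) -> str:
    """Pick an automation approach based on the two trade-off factors."""
    if ui_integration_required and complex_long_running_process:
        return "RPAS + BPMS combination"
    if ui_integration_required:
        return "RPAS"
    if complex_long_running_process:
        return "BPMS"
    return "Plain integration (web services, files, messages, DB)"

print(choose_automation(True, False))  # prints: RPAS
print(choose_automation(False, True))  # prints: BPMS
```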

RPAS-es and BPMS-es also have touchpoints with other technologies. Some points of attention and advice here:

  • Optical Character Recognition (OCR) vs. RPA: scanning vs. screen scraping; do not abuse RPA systems for OCR scanning!
  • Orchestrations vs. BPMS: atomic processes vs. interruptible processes; do not use orchestrations for human-process interaction.
  • Communication bus (messaging) vs. orchestrator: atomic requests vs. unit-of-work requests; do not use communication buses when collaboration between multiple systems is required to handle a request.

The distinction between BPMS-es and Orchestration Engines and the different approaches to system-to-system integration was not covered in this article but contains a lot of food for thought as well.

Building Evolutionary Architectures

Controlling the Fitness of Your Software Architecture!

In their book “Building Evolutionary Architectures”, N. Ford, R. Parsons and P. Kua introduce a new way to look at software architecture where changes to requirements become part of the business as usual. These concepts were already set in motion by Agile, DevOps and CI/CD but the authors add a refreshing concept to the mix.

How are we going to measure the suitability of our architecture against ever-changing requirements?

The proposed solution lies in measuring how far or how close the characteristics of the current architecture are from the ideal or expected characteristics. The solution was inspired by statistical models where we want to fit a curve to a set of data points by fitting a function. An expression of the suitability of the curve’s function is called the fitness of that function.

The same concept can be applied to software architectures. Required architectural characteristics, a.k.a. the technical *-abilities, can be measured by fitness functions. When the architecture changes, the impact is measured by these fitness functions, and as a result the changes become quantifiable and controllable.

An architectural fitness function provides an objective integrity assessment of some architectural characteristic. Combining the outcomes of the collection of fitness functions gives a view on the overall architecture.

Making an architecture capable of evolving requires three things:

  • An architecture that supports incremental change
  • An architecture that can be measured so the changes can be guided
  • An architecture with the appropriate level of coupling to allow for an optimal change process
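A minimal sketch of such a fitness function (the layer and package names are illustrative assumptions, not from the book): it measures the coupling characteristic by asserting that no module in a domain layer imports from an infrastructure package.

```python
# Minimal architectural fitness function; "domain" and "infrastructure"
# are assumed layer/package names for illustration.
import ast
from pathlib import Path

FORBIDDEN_PREFIX = "infrastructure"  # assumed package the domain must not use

def imports_of(source: str) -> set:
    """Collect the module names imported by a Python source file."""
    modules = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            modules.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module)
    return modules

def fitness_no_infra_in_domain(domain_dir: str) -> bool:
    """Fitness function: the domain layer must not import infrastructure."""
    for path in Path(domain_dir).rglob("*.py"):
        if any(m.startswith(FORBIDDEN_PREFIX)
               for m in imports_of(path.read_text())):
            return False
    return True
```

Run on every change as part of the build, such a check turns a coupling characteristic into a quantifiable, guarded property instead of a diagram convention.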

More details on the topic can be found in:

Cloud Target Operating Models are a prerequisite to drive Cloud Adoption!

We often see companies moving to the cloud for the sake of not losing to the competition, reversing the question: instead of “a business problem searches for technology to support it”, it becomes “technology searches for a problem to solve”.

Cloud computing can be an answer to a business question and has many advantageous properties if it is aligned with a digital transformation vision. Hence, Business-ICT alignment is key to making this happen. The logical steps would be:

  1. Come up with a Digital Strategy first.
  2. Translate the strategy into a Target Operating Model (TOM) for your business.
  3. Deduce a Cloud Strategy from the Digital Strategy.
  4. Translate the Cloud Strategy into a Cloud Target Operating Model (CTOM).

Doing some literature research and looking into the publications of Enamul Haque, I created an overview slide deck with a focus on Business-ICT alignment for cloud computing.

You can download the deck here:

The content first focuses on where cloud computing fits within a digital transformation. Digital transformation is driven by a customer focus and by allowing continuous business change. Next, the following questions are answered: 1. What technologies and predictions fit these drivers? 2. What are the key success factors and challenges of a digital transformation? Finally, the reasons behind a digital strategy are discussed.

The second part focuses on cloud computing. We start with the business drivers and technology drivers and then discuss some benefits and challenges. To fit cloud as a technology within any company, it must be supported by a cloud adoption framework in order to live up to its full potential.

The final part of the slide deck focuses on the CTOM where the adoption is initiated by a Cloud Center of Expertise (CCoE) delivering the processes for Cloud Service Management and Cloud Operations Management regulated through Cloud Governance.

The Cloud Target Operating Model is based on four processes:

  • Plan: Strategy to Portfolio (S2P)
  • Build: Requirement to Deploy (R2D)
  • Fulfil: Request to Fulfil (release/delivery) (R2F)
  • Run: Detect to Correct (D2C)