
Five Prerequisites for Citizen Development

In a previous article we talked about evolving Citizen Developers from City Dwellers to Townsmen … so they can become first-class citizens. We discussed the left shift of responsibilities from professional developers to Citizen Developers and explained that Low-Code/No-Code (L-C/N-C) platforms cannot exonerate an organization from all the responsibilities shifted to these Citizen Developers:

It is the tooling support of the L-C/N-C platforms
that evolve Citizen Developers from City Dwellers to Townsmen.

It is the organization’s (city) strategy and tactical focus
that turns them into first class citizens.

Next we introduced three prerequisites for success, the so-called D^3: Data, Devices and Delivery.

After some more research it became clear that some dimensions were still missing to cover all the angles of people – processes – technology.

Firstly, we want to extend Data with Digitalization. Data is a necessary but not a sufficient condition. Data as a collection of information is nothing without a clear view of where it fits within business processes. This is where Digitalization comes in, adding the process angle as the sufficient condition.

Secondly, to achieve an acceptable ROI on an L-C/N-C platform investment, a high level of reusability is important. Reusability focuses on technical building blocks like components and connections, but also on governance and policies. The idea that some things will be application-specific and handled by a Citizen Developer while others will be generic and handled at an organizational level introduces a dimension where we look at the distribution of reusable elements throughout the L-C/N-C platform.

These two additions mean that we now have a D^5 model:
Data – Devices – Delivery – Digitalization – Distribution

Data: data encapsulation/exposure is a prerequisite


For a Citizen Developer to develop applications for his business,
he needs access to his business data

  • Data source isolation: data specific to his business
  • Data availability: data accessible through APIs
  • Data storage: data stored in relation to his application
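As a minimal sketch, the three data prerequisites above could be illustrated as a small data-access layer. The class and field names here are hypothetical, not taken from any specific L-C/N-C product:

```python
class BusinessDataSource:
    """Hypothetical sketch of the three data prerequisites."""

    def __init__(self, business_unit):
        # Data source isolation: one store per business unit
        self.business_unit = business_unit
        self._records = []

    def store(self, record):
        # Data storage: records are kept in relation to the owning business
        self._records.append(dict(record, owner=self.business_unit))

    def api_read(self):
        # Data availability: data is only reachable through this API,
        # never through direct access to the underlying store
        return [dict(r) for r in self._records]

# Usage: a Citizen Developer in 'Sales' only sees Sales data
sales = BusinessDataSource("Sales")
sales.store({"customer": "ACME", "value": 1200})
print(sales.api_read())  # [{'customer': 'ACME', 'value': 1200, 'owner': 'Sales'}]
```

The point of the sketch is that the Citizen Developer never touches the raw store: isolation, availability and storage are all mediated by the data layer.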

Devices: suitable devices are a prerequisite


For a Citizen Developer to build applications,
he needs to be able to select a suitable device for the problem at hand
i.e. desktop, mobile, VR-AR, kiosk

  • Device availability: devices made available at short notice to all the users of his application
  • Device/License sharing: devices and licenses attributed to and revoked from users ad hoc
  • Device security policies: devices are managed to protect against exposure and loss of business data

Delivery (Deployment): platform governance and strategy is a prerequisite


For a Citizen Developer to manage the life cycle of his application,
he needs to be able to count on supporting processes being in place

  • Application Delivery TOM/SOM: application delivery processes clearly describe roles and responsibilities for Business and IT
  • Delivered Application Support Model: a support model for the delivered applications, which he can support directly or indirectly
  • Application DevSecOps policies: business risks and IT risks are controlled through DevSecOps tooling

Digitalization: process oriented automation is a prerequisite


For a Citizen Developer to contribute to the digitalization
he needs to be able to affect end-to-end processes he owns

  • Monetized Digitalization: the processes’ direct and indirect costs and revenues are quantified (FinOps)
  • Dematerialized Digitalization: process descriptions focus on activities (steps towards end results) before information (content carried between activities)
  • Optimized Digitalization: end-to-end processes are defined as a set of interrelated activities (process optimization, not isolated activity optimization)

Distribution: availability of reusable building blocks is a prerequisite


For a Citizen Developer to efficiently assemble his application
he needs to combine distributed building blocks

  • Distributed Components: reusable front-end components (UI elements) as well as reusable back-end components (Systems/Solutions)
  • Distributed Connectivity: reusable means of integration between components (data exchange, network connectivity, import/export)
  • Distributed Responsibilities: reusable cross-application responsibilities (generic governance) extending the application-specific responsibilities of his application (app-specific governance)
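A minimal sketch of the distribution idea: generic, organization-level building blocks reused by an application-specific flow. All function names and the expense rule below are hypothetical illustrations, not part of any real platform:

```python
def org_audit_log(event, log):
    # Distributed responsibility: generic governance shared by all applications
    log.append("AUDIT: " + event)

def org_email_connector(recipient, body, outbox):
    # Distributed connectivity: a reusable integration component
    outbox.append((recipient, body))

def approve_expense(amount, log, outbox):
    # Application-specific logic a Citizen Developer assembles
    # from the reusable blocks above
    decision = "approved" if amount <= 500 else "escalated"
    org_audit_log("expense %.2f %s" % (amount, decision), log)      # reuse governance
    org_email_connector("manager@example.com", decision, outbox)    # reuse connectivity
    return decision

log, outbox = [], []
print(approve_expense(120.0, log, outbox))  # approved
```

The Citizen Developer only writes `approve_expense`; auditing and mail delivery are distributed, reusable elements maintained at the organizational level.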

From Energy Efficient Cloud Infrastructure to Energy Aware Cloud Architecture!

Energy Aware Cloud Architecture

Cloud Service Providers (CSPs) have become increasingly aware of the climate impact of cloud computing and, as a result, are proposing measures to make their platforms more sustainable. However, when analyzing their approaches to sustainability in more detail, sustainability is often reduced to three aspects:

  1. Optimizing the energy consumption of the infrastructure: reducing the overall energy consumption
  2. Using energy from sustainable sources: using green or carbon neutral energy sources
  3. Using equipment that has a sustainable life-cycle: using a green device supply chain

All hyperscale cloud providers translate the energy consumption optimization into:

  • IT Operational efficiency
  • IT Equipment efficiency
  • Datacenter Infrastructure efficiency

Mapping this to cloud service models, it becomes clear that these approaches work bottom-up in the architecture stack, pay little attention to the application and data level, and focus mostly on infrastructure asset management.

IaaS PaaS SaaS

It is good to see CSPs looking at more efficient ways to organize their infrastructure, but the usage of that infrastructure remains underexposed. Efficient usage is one thing; reducing the need for consumption is another. CSPs can’t be blamed for our consumption, but the focus on energy alone, although important, borders on greenwashing: “What is not consumed does not need to be optimized or turned into something carbon neutral!”

All this points to the fact that approaches to cloud-native sustainability are still in their infancy. Bottom-up optimization focuses on efficiency and is becoming an established approach.

Energy Management Impact
EC, 2020, Energy-efficient Cloud Computing Technologies and Policies for an Eco-friendly Cloud Market.

In the bottom-up approach, an element that is often overlooked is the cost of organizing the energy optimization itself. It requires additional monitoring and operations management infrastructure, which in turn creates cloud computing and storage requirements, i.e. components that also consume energy. None of the CSPs mentions the management overhead of this operations management infrastructure.
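The overlooked overhead can be made explicit with a back-of-the-envelope calculation. The figures below are purely illustrative, not measured values from any CSP:

```python
def net_energy_saving(baseline_kwh, optimized_kwh, monitoring_kwh):
    """Gross saving from the optimization minus the energy the
    monitoring/operations-management stack itself consumes."""
    return (baseline_kwh - optimized_kwh) - monitoring_kwh

# Illustrative: a 15% optimization on 1000 kWh, with the monitoring
# infrastructure consuming 60 kWh of its own
print(net_energy_saving(1000, 850, 60))  # 90 -> the reported 150 kWh saving shrinks to 90 kWh
```

The reported gross saving is only the first term; honest accounting subtracts the cost of the optimization machinery.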

The top-down optimization, focusing on cloud usage, is still in research and aims to tune cloud efficacy instead of efficiency. The top-down approach involves:

  • Application and Data architecture
  • Application Usage

Application and data architecture has lately received a bit more attention.

Cloud Native

Firstly, through the cloud-native computing paradigm, selecting the right component for the right job has a positive effect. For example, the use of containerized microservices reduces the need for full-fledged VMs, resulting in an infrastructure optimization: less compute and less storage. It is important not to overlook that this approach also introduces more network communication and management complexity, but the overall overhead is lower than the resource efficiency gains. Data traffic optimization can be found under the Fog Computing techniques that are being researched.

Cloud Programming Efficiency

Secondly, there is programming efficiency, i.e. the correct organization of algorithms in application code. Inefficient algorithms require more CPU cycles, more memory and more storage capacity. Static code analysis tools are currently used to check code for architecture qualities like maintainability. For the moment, checking algorithm efficiency is not commonly available in such tools and remains part of specialized tooling. Programming language and compiler efficiency checks are not mainstream at all. These would involve checking the CPU instruction sets used in the binary code, as CPU instructions differ in their effect on the CPU’s energy consumption. These techniques are being developed under the umbrella of Energy Aware Software Engineering.
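A tiny illustration of algorithm efficiency as an energy lever: the same membership question answered with two different data structures requires vastly different amounts of CPU work, and fewer CPU cycles means less energy for the same result:

```python
import timeit

data = list(range(10_000))
as_list = data        # membership test: O(n) linear scan
as_set = set(data)    # membership test: O(1) hash lookup

# Same functional result, very different CPU work
t_list = timeit.timeit(lambda: 9_999 in as_list, number=1_000)
t_set = timeit.timeit(lambda: 9_999 in as_set, number=1_000)

print((9_999 in as_list) == (9_999 in as_set))  # True: identical answers
print(t_list > t_set)                            # True: the scan burns far more cycles
```

Static analysis tools rarely flag this kind of choice today, which is exactly the gap the article points at.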

Cloud Acceptable Use

Application Usage has not received any attention!

Finally, we have to point out the acceptable use of technology. Lately, a lot of focus has gone to the ethics of data privacy. As applications run on infrastructure that consumes natural resources, one should wonder whether a cost-benefit analysis of application usage should not be considered: “It is not because we can that we should use an application, and by extension the cloud”.

This opens a new debate: from less energy consumption towards the correct application of the available energy. Should we start taxing applications, as we do with other consumer products, based on necessity? Alcohol (pleasure) being taxed more than water (basic need) could be translated into a tax on a Social Media application (pleasure) vs. a Banking application (basic need)!

The following document is on Microsoft’s efforts for cloud sustainability:

The EC released the following report on an Eco-Friendly Cloud Market:

Using Robotic Process Automation wisely – “If you are a hammer, everything looks like a nail” … but probably isn’t!

Recently, Robotic Process Automation (RPA) was embedded in MS Windows 11. Although I’m happy to see such a great capability being added, I fear the incorrect application of this technology. Back in 2016, when the first RPA tooling came to market, I made an overview of technologies capable of automating business processes. RPA is a solution in this area, but not the only one. Out of fear of “if you are a hammer, everything looks like a nail”, or in other words “it is not because we can do it with RPA that we should do it with RPA”, here are some insights into the alternatives to RPA and guidelines on when to apply RPA and when not.

You can download the deck here:

End-to-End Business Process Automation (BPA) is nothing new. It has been around for decades and comes in different flavors. The focus of this article is on automating a flow of activities across multiple systems. This distinguishes BPA from solutions that focus on one activity or stay within one system. The latter are typically shortcuts or macros embedded in office tools, or integration APIs where one system invokes an activity in another. The difference lies in the fact that a process by definition has state: it knows the sequence of activities that make up the flow, it knows the current activity under execution, and it knows about the execution of the previous activities that led up to the current one.
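The notion of process state can be sketched in a few lines. This is a deliberately minimal illustration, not the data model of any real BPA engine:

```python
class ProcessInstance:
    """Minimal illustration of process state: the engine knows the activity
    sequence, the current activity, and the history that led up to it."""

    def __init__(self, activities):
        self.activities = activities  # the defined flow (the sequence)
        self.completed = []           # execution history
        self.position = 0             # index of the current activity

    def current(self):
        if self.position < len(self.activities):
            return self.activities[self.position]
        return None  # process finished

    def complete_current(self):
        self.completed.append(self.activities[self.position])
        self.position += 1  # state is preserved between activities

p = ProcessInstance(["receive order", "check stock", "ship"])
p.complete_current()
print(p.current(), p.completed)  # check stock ['receive order']
```

A macro or a single API call has none of this: it fires and forgets. It is precisely this persistent state that makes something a *process* rather than an isolated activity.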

Within the group of BPA systems we can identify two approaches with different characteristics and applications:

  • The Business Process Management Systems (BPMS)
  • The Robotic Process Automation Systems (RPAS)

What adds to the confusion is that most commercial products have become hybrids between BPMS and RPAS, but it is still good to understand the two different approaches to business process automation.

Below is a comparison of typical use-cases and characteristics of RPAS and BPMS:

RPAS

  • Products: BluePrism, Automation Anywhere, Power Automate
  • Typical use-case: to integrate Line of Business (LOB) systems through the UI when there is no means to get information from the LOB system through a system-to-system interface.
  • Characteristics:
– Typically fewer building blocks and out-of-the-box integration components to build the full business process A-Z.
– Process complexity is limited to flow-chart-like flows; not a lot of support for hierarchical or nested processes.
– Processes are executed atomically from begin to end. Limited or no support for long-running processes that can be interrupted mid-execution. The duration of an activity is at the order of magnitude of seconds.
– Elaborate components to integrate LOB systems at screen level: screen scraping (visual pixel level) or screen spying (API widget ID level).
– Typically used when there are few, simple activities in the process to automate and the process is organized to overcome the UI integration. => Important: this UI integration is not human-process interaction => SEE BPMS.
– Hard to abstract the workflow from the systems it integrates with, as the UI is used for integration. This tight coupling through the UI requires a new flow per system that is integrated.

BPMS

  • Products: K2/NinTex, AgilePoint, Windows Workflow Manager
  • Typical use-case: the process is composed of activities executed in LOB systems (CRM, ERP) that can be targeted through system-to-system integration; or the process requires human intervention that can be targeted through human-to-system integration: writing a custom UI to handle the human input and linking the UI to the process through normal integration strategies (web service, messaging, DB). => Important: this human-process interaction is not UI integration => SEE RPA.
  • Characteristics:
– Typically used when there are a lot of activities in the process and some complex logic to drive the process.
– Processes can be interrupted mid-execution (long-running), waiting for an activity to complete. This interruption can be at the order of magnitude of hours, days, months or years.
– Typically no components to integrate LOB systems at screen level, i.e. no screen scraping (visual pixel level) or screen spying (API widget ID level).
– Supports abstraction of the workflow from the systems it integrates with, as technical interfaces (APIs/web services) are used for the integration. Loose coupling allows flows to be reused as long as the systems respect the same technical interface. This abstraction (partner management) is typically done using middleware like a service bus (BizTalk).
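The loose coupling that BPMS-style integration enables can be sketched with a technical interface. The `CrmSystem` contract and `complaint_flow` below are hypothetical names for illustration only:

```python
from typing import Protocol

class CrmSystem(Protocol):
    """Technical interface: the flow depends on this contract,
    not on the screens of a concrete CRM product."""
    def create_ticket(self, summary: str) -> int: ...

def complaint_flow(crm: CrmSystem, summary: str) -> int:
    # Loosely coupled: any system honoring the interface can be plugged in,
    # so the flow is reusable across CRM products. UI-level (RPA) integration
    # would instead need a new flow per product.
    return crm.create_ticket(summary)

class InMemoryCrm:
    """Stand-in implementation of the contract, e.g. for testing."""
    def __init__(self):
        self.tickets = []
    def create_ticket(self, summary):
        self.tickets.append(summary)
        return len(self.tickets)  # ticket id

print(complaint_flow(InMemoryCrm(), "damaged goods"))  # 1
```

Swapping `InMemoryCrm` for another implementation changes nothing in the flow itself, which is exactly the abstraction that screen-level integration cannot offer.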

To summarize: RPAS’ weaknesses are BPMS’ strengths, and vice versa.

  • RPAS:

– Gaps in supporting all types of business processes and all levels of complexity.

+ Good components for UI integration.

  • BPMS:

– Gaps in UI integration.

+ Good support for all complex long running business processes.

So is the solution an RPAS + BPMS combination?

Pros and cons:

  • Disadvantage: two licenses and two products
  • Advantage: combining the strengths of RPAS and BPMS

Trade-off factors to make the decision: Is UI integration required and can no alternative be found (web services, files, messages, DB)? Go for RPAS. Is the business process complex? Go for BPMS.
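The trade-off above can be condensed into a rule of thumb. This is a hypothetical decision helper encoding the article's guideline, not a formal selection method:

```python
def choose_engine(needs_ui_integration, complex_long_running):
    """Rule of thumb for the RPAS vs. BPMS decision described above."""
    if needs_ui_integration and complex_long_running:
        # Both needs: combine, accepting two licenses and two products
        return "RPAS + BPMS"
    if needs_ui_integration:
        return "RPAS"
    if complex_long_running:
        return "BPMS"
    # Neither: a full process engine may be overkill
    return "simpler tooling (macros, direct API integration)"

print(choose_engine(True, False))  # RPAS
print(choose_engine(False, True))  # BPMS
print(choose_engine(True, True))   # RPAS + BPMS
```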

RPAS-es and BPMS-es also have touchpoints with other technologies. Some points of attention and advice here:

  • Optical Character Recognition (OCR) vs. RPA: scanning vs. screen scraping; do not abuse RPA systems for OCR scanning!
  • Orchestrations vs. BPMS: atomic processes vs. interruptible processes; do not use orchestrations for human-process interaction.
  • Communication Bus (Messaging) vs. Orchestrator: atomic requests vs. unit-of-work requests; do not use communication buses when collaboration between multiple systems is required to handle a request.

The distinction between BPMSes and Orchestration Engines, and the different approaches to system-to-system integration, were not covered in this article but contain a lot of food for thought as well.

Cloud Computing and the Internal Audit Function!

Discover why organizations move to the cloud and how this can have an impact on the internal audit function.

I presented a webinar on cloud computing:

https://home.kpmg/be/en/home/insights/2020/10/adv-cloud-computing-and-its-impact-on-the-internal-audit-function.html

Linked to that webinar, an article was published that first defines cloud computing along four axes:

  • Service provision model
  • Service access
  • Service resources
  • Service characteristics

Secondly, it explains why cloud computing is important:

  • Need for more ICT flexibility
  • Need to increase ICT speed
  • Need to reduce ICT costs
  • Need to reduce ICT risks

Next, it presents benefits and risks, and concludes with the internal audit challenges:

  • Definition of scope
  • Dependencies on third parties
  • Skills and expertise
  • Access to data

You can read the article on the KPMG website:

https://home.kpmg/be/en/home/insights/2020/11/ta-cloud-computing-and-the-internal-audit-function.html

… or download it here:

How to Quantify the Effects of Innovation?

An Adapted Valuation Model for Innovation! Innovation in IT Consultancy Services.

Innovation NPV

Having worked in the IT Consultancy Services industry for a bit over 20 years, innovation projects have been a major part of my job. Innovation is my passion, but I often see innovation projects approached as an art form rather than a science. This is often due to the limitations of traditional project management techniques and project valuation methods.

The drive to create a new innovation valuation model was triggered by three key questions:

  • Innovation is important but can it be managed?
  • Is innovation radically different compared to existing business processes?
  • Does innovation require special management techniques?

During the research, I first defined what innovation is, next I defined a new innovation process model, and finally I created a quantification model to put a value on innovation projects.

The innovation process model is based on 5 dimensions and 4 diamonds.

5 D's and 4 Diamonds Model