Digitalisation in a Post-Covid19 World (1/6)
There’s a lot of talk about digitalisation post-Covid19. I know from experience that mention of digitalisation or automation strikes fear into the hearts of many in leadership, and for sound, logical reasons – especially when everyone is predicting the mother of all downturns.
In fact, done well, digitalisation is a way for firms (i.e. leadership teams) to reduce the amount of money spent on Information Technology (or achieve better value per pound/euro/dollar), realign the goals of Business and IT, distribute risk, and build more resilience into the business.
A leader is to some degree at the mercy of experts. After all, you don’t know what you don’t know (Johari Window), so a reliance on expert advice is a necessity of life.
Sometimes experts pander to a bias (such as confirmation bias) in the leader, sometimes they deliberately pull the wool over the poor leader’s eyes, and sometimes the expert isn’t as expert as they themselves believe. So it is that the poor old leader is saddled with having to be savvy enough to know when they’re being sold a pup – not an easy thing.
This series on digitalisation has been produced to help leaders navigate six areas that comprise digitalisation: Cloud Platforms, Cloud Native Applications, DevOps / Site Reliability Engineering (SRE), Agile Delivery, the Operating Model, and… The C word (revealed in part 6). You won’t be an expert once you’ve read these articles, but you should have enough information to help navigate the field.
Prediction is easy. Accurate prediction, less so
In a recent webinar for the Financial Services Club, I suggested that the term “The New Normal” may, apart from already becoming hackneyed, be incorrect. Instead, we might be looking at a series of evolving normals. Thus, we are already in the Next Normal.
Why coin this term? I’m no medical expert, of course, but I have been speaking frequently to many who are, as well as to medical scientists, and have taken to reading a number of publications. One thing that has become clear from this autodidactic feast is that, despite the hyperbole about a vaccine being ready by September, we still don’t know what we’re dealing with or whether such a vaccine is viable. Can Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the virus that causes Covid19, mutate like, say, flu or other coronaviruses (of which four are reckoned to be responsible for roughly 20-30 percent of common colds), such that you can be re-infected despite having had it once? What are the longer-term effects? Why are some people affected more severely than others? The list goes on.
As lockdown cautiously lifts, will there be a second peak? One doctor I know remarked that we’ll know in two to three weeks. Well, quite!
Even if the “R” number is significantly reduced, is it fair, or tenable, to expect those with underlying health conditions, such as any form of diabetes or asthma, to return to work, especially if the only way for them to get there is by public transport?
The Next Normal
On 29 April 2020, Jes Staley, the CEO of Barclays, stated that the days of “putting 7,000 people in a building may be a thing of the past”. Mr Staley is right. Of course, one might operate a Blue Team / Green Team arrangement, alter shift patterns, or reorganise office space to encourage social distancing, as well as require all workers to wear face masks at whatever times the Health and Safety department deems critical. But how do you get several thousand people to a building’s floors? Anyone who has been to one of the large banks in Canary Wharf, in London, will immediately comprehend the immense challenge this entails. Maybe the days of regional offices will have to come back (perhaps with positive regional economic effects)?
However one configures the options, it seems remote working is here to stay in some form for the foreseeable future (leaving aside the question of what foreseeable might mean). Not only that, but there may well be repeating cycles of lockdown necessary until the virus is sufficiently vanquished.
It’s not all about the workforce. Your workforce are someone else’s customers and vice versa. So, whilst we need to think of what we provide to our workers, it’s also about how we service clients. Bluntly, this is about the whole ecosystem.
As mentioned earlier, digitalisation is a great way to ensure that risk is distributed (i.e. lessened) and that firms achieve better value per currency unit (“value for spend”, as one firm I’ve worked with eloquently puts it).
So how do you embrace digitalisation?
Let’s start with a modern definition.
Digitalisation is not just about making a process electronic, it’s about making the right process electronic in the right way.
Simply automating existing processes in their current flow is a mistake made many times in the past and one that leads to increased risk and wastage.
A great example of this sort of risk and wastage is the settlement process for wholesale securities. It can take up to two days to settle a transaction that is electronic across its entire lifecycle. This should take moments (and, in fact, does with cryptocurrencies, on which you can find out more by listening to Jannah Patchay on the Wicked Problems podcast).
Delay exists because the legacy manual process required time to physically move things around, do quill and ink ledger accounting, and ensure that trust mechanisms were in place. Yet, when it was made “digital” those same lags were embedded into the process, thus ensuring it remained slow and pregnant with counterparty risk.
So, digital is about reimagining processes and process flows to be efficient and then using the right tools to get the job done.
As you read the rest of this article, keep the following picture in mind:
This is a segment from a bigger picture based on the “flower maturity” model from Martin Walsh at Think Above Cloud, adapted by Toby Corballis to incorporate a specific element (more on this as the series unfolds).
To achieve proper digitalisation requires understanding that there are a number of aspects to digitalisation and that the balance between each aspect matters. In this series, I’m looking at each one, in turn, starting with…
Cloud Platforms
There are many great things about Cloud, and, in particular, Public Cloud – too many to cover in an article aimed at leadership within firms. Thus, we will limit the conversation here to Security, Scalability, and Resilience. These may sound like technical terms, and they are, but we don’t need to delve into the technical. Instead, we’ll keep to a high-level conceptual discussion that should give you enough to see the benefits clearly.
Firstly, when properly configured, Public Cloud is far and away more secure than self-owned infrastructure – be that one’s own datacentre or space in a commercial one. To be clear: anyone who says otherwise either doesn’t know how Public Cloud security works, doesn’t understand the technological difference between, say, a Cloud network and a private network, or has some other agenda.
Public Cloud resources are defined in code, rather than being discrete bits of infrastructure. If a problem is encountered, it can be fixed instantly across all resource instances, usually without the need for an outage. When the same thing happens in a legacy environment, this is not possible. Instead it’s necessary to rely on engineers to deploy a fix. Some will do a better job than others, and all represent a cost.
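For the technically curious, the idea of resources being “defined in code” can be sketched in a few lines. This is a deliberately toy illustration – the resource names and fields below are hypothetical and belong to no real provider’s API:

```python
# Toy illustration of "infrastructure as code": resources are data,
# so a fix is a one-line change to the definition, re-applied everywhere.
# All names here are hypothetical, not any real provider's API.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VirtualMachine:
    name: str
    image: str            # the machine image every instance is built from
    tls_min_version: str  # an example security setting

# One definition, many instances: every environment is stamped from the same code.
definition = VirtualMachine(name="web", image="base-2020.04", tls_min_version="1.1")
fleet = [replace(definition, name=f"web-{i}") for i in range(3)]

# A security fix is a change to the definition...
patched = replace(definition, tls_min_version="1.2")

# ...and rebuilding from the patched definition fixes the whole fleet at once,
# rather than relying on engineers to patch each machine by hand.
fleet = [replace(patched, name=f"web-{i}") for i in range(3)]
assert all(vm.tls_min_version == "1.2" for vm in fleet)
```

The point of the sketch is that the fix lives in one place – the definition – and every instance derived from it inherits the fix, which is exactly what a legacy estate of hand-built servers cannot do.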
In the Cloud, security is defined at the level of the individual resource (often referred to as Zero Trust). In a datacentre, engineers tend to define a perimeter on the assumption that if that is secure then so is everything inside it (a castle and moat system). This logic has been proven time and again to be flawed. Bad actors penetrate the perimeter and often hang around inside systems for months, observing, building a picture of the organisation from the inside, and testing for compromised resources, before doing whatever dastardly deed they decide on. I could go on, but suffice to say that Public Cloud is far more secure when done correctly. Of course, anything done badly is subject to compromise.
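The contrast between the castle-and-moat model and Zero Trust can also be shown conceptually. Again, this is a toy, not a real security framework – the functions and policies are invented for illustration:

```python
# Toy contrast between perimeter security and Zero Trust.
# Purely conceptual; the functions and policies are invented for illustration.

def perimeter_allows(request: dict, inside_network: bool) -> bool:
    """Castle and moat: one check at the edge, then everything inside is trusted."""
    return inside_network  # a bad actor who gets past the moat passes every check

def zero_trust_allows(request: dict, resource_policy: set) -> bool:
    """Zero Trust: every resource re-checks identity; being 'inside' grants nothing."""
    return request["identity"] in resource_policy

intruder = {"identity": "mallory"}

# Once past the perimeter, the intruder can roam freely...
assert perimeter_allows(intruder, inside_network=True)

# ...but under Zero Trust, each resource still refuses them,
# because "mallory" is not on the payroll system's own access policy.
payroll_policy = {"alice", "bob"}
assert not zero_trust_allows(intruder, payroll_policy)
```

This is why a breach of the perimeter is catastrophic in a datacentre but far less so in a well-configured Cloud estate: each resource makes its own access decision.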
Cloud platforms are scalable in ways that legacy systems are not. When the lockdown first happened, firms already operating in Public Cloud were able to adjust capacity within minutes by simply adjusting the dial on whatever resources they needed more of, be that Compute, Storage, Networks, Databases, etc. Literally, anything. As the resources scaled, so too did the security.
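“Adjusting the dial” amounts to declaring a desired capacity and letting the platform reconcile towards it. A hypothetical sketch of that idea (real providers expose this through autoscaling services, not code like this):

```python
# Toy sketch of cloud-style scaling: declare a desired capacity and let a
# reconciliation loop add or remove instances to match it.
# Hypothetical illustration only, not a real provider's autoscaling API.

def reconcile(running: list, desired: int) -> list:
    """Scale the fleet up or down until it matches the desired count."""
    running = list(running)
    while len(running) < desired:
        running.append(f"instance-{len(running)}")  # provision a new instance
    while len(running) > desired:
        running.pop()                               # decommission an instance
    return running

fleet = reconcile([], desired=2)      # normal load
fleet = reconcile(fleet, desired=10)  # lockdown surge: turn the dial up
assert len(fleet) == 10
fleet = reconcile(fleet, desired=4)   # demand falls: turn it back down
assert len(fleet) == 4
```

The legacy equivalent of turning that dial up is ordering hardware, waiting weeks for delivery and racking, and paying for the capacity whether or not the surge persists.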
In contrast, many legacy firms had to rely on “Superman” syndrome – people who work long hours (often at extra cost) to ensure systems are held together using a Sellotape and bandages solution. Even now, nine weeks in, I know of systems creaking at the seams that are only being held together by the heroics of a few knowledgeable people.
The fact that Public Cloud systems are code-based also increases resilience. If someone destroys a system, whether intentionally or accidentally, a perfect clone can be up and running in moments (even without a Disaster Recovery solution), and DR itself is a doddle.
Public versus Private Cloud
Why recommend Public Cloud over Private Cloud? The answer is actually very simple. There are three main Public Cloud providers, though there are others too. Each excels in different ways. Because Public Cloud systems are distributed over the Internet, there’s no reason to tether the firm to any one. Instead, it’s possible to cherry pick services from the panoply of providers, even if you simply stick with the main three.
There are those who advise going with one or another Public Cloud provider, often citing exclusivity discounts. Resist this – the deals are almost never in the client’s favour. Cloud salespeople are incentivised on consumption. It’s in their interest to drive exclusivity deals and over-consumption, and to obfuscate the lack of any real discount.
Public Cloud is distributed even within each provider. This distributed nature means that if your engineers think that, say, Google has the best Kubernetes implementation for your needs, Microsoft the best Active Directory Federated Services, and AWS the best Kafka, there’s no reason why you can’t use all three together. Many firms do, and do it successfully.
Another reason for distributing solutions across multiple Public Cloud providers is enhanced resilience. Cloud providers are fond of saying that their systems are fully resilient, but in fact this is untrue. As recently as 8 April 2020, Google Cloud Platform experienced a 90-minute outage. It’s not all GCP, of course. On 22 October 2019, AWS was subjected to a Distributed Denial of Service (DDoS) attack. Neither is Microsoft immune. A significant slowdown of Azure in EMEA went unreported for five hours because the company’s Primary Incident Manager (PIM) was based in the US, and so was asleep at the time it occurred, with services taking up to three days to return to normal. The point is that whilst Public Cloud can also have outages, if you plan your architecture well, you can pivot from one region to another, or one provider to another, in minutes when things do go awry, thus ensuring a high level of resilience.
Hopefully you’ve found this article informative. If so, please consider sharing it with colleagues and others in your network. In the next article, we’ll cover the second aspect of Digital Maturity: Cloud Native Applications. Don’t worry, it will also be aimed at leaders, not engineers.