July 2023, and it's been a little while since I penned an article. The new kid on the block is of course AI, or AGI, and you can't turn a metaphorical page without coming across an article on how either the A or the I is going to be the saviour of the human race or the harbinger of its demise. I wondered how a newsletter on data, risk and regulation could develop without being lost in the multitude of voices on the subject. AI is too close to those topics to be ignored, yet there's already too much being said, some of it ill-informed and much of it repeated.
Pausing for thought, I started Nick Bostrom's incredible book Superintelligence.
Given this book was written in 2014, its foresight, as well as its reach in terms of the breadth of interconnected topics, is quite incredible. Recommended. I won't do it justice here in a short article, so pick up a copy and dive in yourself. I do, however, want to focus on a single topic he covers in some detail: System Recalcitrance. It is defined as follows:
Optimization power and Recalcitrance. Bostrom proposed that we model the speed of a superintelligence takeoff as: Rate of change in intelligence = Optimization power / Recalcitrance. Optimization power refers to the effort applied to improving the intelligence of the system; recalcitrance refers to the resistance of the system to being optimized.
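The relationship can be sketched numerically. The code below is a minimal illustration, not Bostrom's actual model: the functional forms for optimization power and recalcitrance are invented assumptions, chosen only to show how constant versus declining recalcitrance changes the shape of the takeoff.

```python
# A minimal numerical sketch of the takeoff equation
#   dI/dt = optimization power / recalcitrance.
# The functional forms are illustrative assumptions only: we let
# optimization power equal the system's own intelligence (a
# self-improving system), and compare constant recalcitrance with
# recalcitrance that falls as the system gets smarter.

def simulate(recalcitrance_fn, steps=40, dt=0.1):
    """Integrate dI/dt = O(I) / R(I) with simple Euler steps."""
    intelligence = 1.0
    trajectory = [intelligence]
    for _ in range(steps):
        optimization_power = intelligence          # assumption: O(I) = I
        rate = optimization_power / recalcitrance_fn(intelligence)
        intelligence += rate * dt
        trajectory.append(intelligence)
    return trajectory

constant = simulate(lambda i: 5.0)        # R fixed: steady exponential growth
declining = simulate(lambda i: 5.0 / i)   # R falls with I: runaway takeoff

print(f"final intelligence, constant R:  {constant[-1]:.2f}")
print(f"final intelligence, declining R: {declining[-1]:.2f}")
```

With constant recalcitrance the growth is merely exponential; when recalcitrance falls as capability rises, the very same equation produces a much sharper, faster-than-exponential curve, which is the essence of the "fast takeoff" concern.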
In the context of this article, I'm using the term “system” to refer to the whole ecosystem of people, processes, data and technology that come together to make “an intelligent system”. The issue of Recalcitrance or resistance to change is of course not a new topic or one singularly related to AI adoption. Fear of change and difficulties with making and embedding change are as old as humanity itself. And maybe that is the area of focus we need to consider the most. The Human element of the intelligent system.
Two sayings spring to mind:
“…a bad workman always blames their tools”. Unknown
“…if I had six hours to cut down a tree I'd spend four hours sharpening my axe.” Abraham Lincoln
I've worked on projects that had the barest of governance and good practice, and yet were immensely successful. I've also worked on projects that had the best governance, certified staff and the support of the best technology, and yet failed by every measure to deliver a good outcome. It's clear to me that the tools themselves, whilst helpful, are nothing without the right people using them. What made the first project so successful despite the lack of tooling? I'd offer perhaps it was the energetic participants, a clear focus on the outcome, pragmatism, teamwork and an open-mindedness to the task in hand. There is no doubt that the tools help (a sharp axe makes a huge difference), but it is the person wielding the blows that makes the difference.
So when we consider System Recalcitrance and the human aspect of resistance to optimisation, what aspects of the human condition are going to be influential?
Resistance to change: Humans can be inherently resistant to change, often preferring to stick to familiar routines and ways of doing things. This resistance can manifest as a reluctance to adopt new technologies, processes, or ideas.
Fear of the unknown: Like all intelligent animals with a fight or flight response, we have a natural fear or apprehension when it comes to stepping into the unknown. This fear can hinder the exploration of new possibilities and make people hesitant to embrace innovative solutions or take risks.
Cognitive biases: Cognition is susceptible to various biases, such as Confirmation Bias (favouring information that confirms existing beliefs) and Status Quo Bias (preferring the current state of affairs). These biases can prevent leaders from critically evaluating alternative viewpoints. Additionally, Optimism Bias can lead to early adoption and the ultimate failure of an AI implementation through a failure to understand its true complexity or impacts, resulting in future AI-related projects being shelved - see Status Quo Bias and Fear of the unknown.
Inertia and complacency: We are inherently complacent, accepting the comfort of current performance when things seem to be functioning adequately, even if there are opportunities for improvement. This complacency can lead to a lack of motivation or effort to challenge the status quo.
Lack of vision or foresight: Sometimes, individuals or organisations may lack a clear vision or long-term perspective, focusing only on short-term goals or immediate benefits, preventing actions focussed on long-term goals or risk mitigation.
Organisational culture: The culture within an organisation can significantly influence direction. If the culture discourages risk-taking, stifles creativity, or values conformity over innovation, it can impede progress and maintain the status quo.
Misalignment of incentives: When incentives within a system do not align with the desired outcomes, individuals may prioritise their own interests over organisational or system improvement.
There’s a lot here that is common to all forms of change, not just resistance to AI adoption, but a few other common human traits may become more prevalent when considering AI:
Ethics - how do we view the ethical considerations of AI? Are we concerned with the effects of the AI itself and the data it is using (e.g., the ethical outcomes of using data for a purpose for which it was not intended), or with the ethical impacts on the AI itself (consider an AI that is deemed ‘conscious’ and the implications for that AI of being replicated many times as ‘workers’ and terminated when no longer required)?
Morals - how do we feel about the impacts of AI on our loved ones and our colleagues? Would we choose to not adopt AI if it resulted in huge improvement in efficiency at the expense of staff losing their jobs?
So let's park recalcitrance for the moment and take a quick look at the System Optimisation side of the coin. Recent history is littered with examples of misdirected optimisation, as well as countless examples where systems never even got off the ground.
Here are two early examples where AI systems were optimised to do the wrong thing.
What human traits gave rise to such deeply impactful examples of mis-optimisation?
Greed? “Let's sell this model even though we know it may be inaccurate?”
Fear? “We've invested a lot in this project and it needs to make money or we're bankrupt?”
Communication Breakdown? People in the organisations knew the models needed refinement but that information didn't reach the decision makers.
Optimism Bias? “It'll be fine…..”
Ignorance? The designers and managers didn't ask sufficiently detailed questions or test to a degree where they understood there was a problem.
Risk Bias? “The risks to poor deployment are felt elsewhere, not by me or my company”. See Greed.
Lack of Resources? Insufficient staff, technology or training data to identify issues before large scale productionisation.
Plain Old Bias? Benign Bias: “The model works on these faces so it will work on all faces.” Hostile Bias: “The model isn't biased towards me or people like me, so that's ok.”
So, ignoring the actual AI technologies themselves, of which there will be many, successful adoption is going to be influenced to a very large degree by the humans implementing them and the human traits that determine their decision making, both in terms of adoption speed (recalcitrance) and effectiveness (optimisation). But what of replacing these vague, opaque and variable human traits with something more deterministic - like an AI? This is a slight diversion from the subject of recalcitrance, but bear with me.
In the examples above, we saw how poorly trained models resulted in the continuation of human bias. But the reality is we already have human bias - in everyday life, in prison sentencing, in how police behave, in work. Is AI bias any different?
Here’s a useful article with a few further reading links that illustrate the downsides of AI adoption and an argument around ‘human bias’ and ‘computer bias’.
But the article doesn’t go far enough. AI is different in one very important way from a human judge: such an AI can be closely monitored for its inputs and outputs, its trends appreciated, considered and adjusted. So our reluctance (recalcitrance) to use AI because of bias should be tempered by the fact that humans are already biased. Don’t be afraid of the bias itself; be afraid of how that bias is measured, monitored and managed before, during and after adoption. In this example, would we let an AI loose with the totalitarian ability to sentence, or would we put a human in the loop and allow them to adjust the proposals (which themselves could be monitored for the level of ‘human bias override’)? Would we let the model run in the real world until it can be proven to be at least as unbiased as a human? (There’s the subject of an entire other article here on ‘intentional bias’, but let’s not go there today!)
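To make the monitoring idea concrete, here is a hypothetical sketch - all names, fields and data structures below are my own invention, not from any real sentencing system. It tracks the model's proposals alongside the human-in-the-loop decisions, reporting both the 'human bias override' rate and a simple bias measure: the gap in positive-outcome rates between groups.

```python
from collections import defaultdict

def monitor(decisions):
    """Each decision is a dict with 'group', 'model_outcome' (0/1) and
    'final_outcome' (0/1, after any human-in-the-loop adjustment)."""
    # Rate at which humans overrode the model's proposal.
    overrides = sum(1 for d in decisions
                    if d["model_outcome"] != d["final_outcome"])
    override_rate = overrides / len(decisions)

    # A crude bias measure on the model's raw proposals: the spread in
    # positive-outcome rates between groups (a demographic parity gap).
    totals, positives = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        positives[d["group"]] += d["model_outcome"]
    rates = {g: positives[g] / totals[g] for g in totals}
    parity_gap = max(rates.values()) - min(rates.values())
    return override_rate, parity_gap
```

A rising override rate or a widening parity gap is exactly the kind of measurable, adjustable signal that a human judge never provides.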
There’s one more thing I’d like to consider in the realm of AI system adoption, and that is the concept of ‘advantage’. The poverty gap and the technology gap are real and widening issues. The ability not only to access but to utilise technology is divided along discrete financial, cultural and geographical lines. Your success is closely linked to your opportunities and your circumstances. But equally, it is linked to the ability to choose how we adopt and adapt.
Here’s an article from The New York Times and an abstract if you don’t have access to a NYT subscription. (There’s also a link to the latest review from the Digital Poverty Alliance because they do good work).
Two things clearly arise: advantage and choice. Those with the advantage to adopt can choose to adopt. Those without cannot. But the choice extends further. With additional insight, education and opportunity, the way in which we interact with technology can also be chosen. There is a counter-intuitive aspect at play, where those with a clear choice adopted technology, but on terms that further favoured them: adopting the technology that added advantage, and rejecting the technology that disadvantaged. Access to technology demanded a further level of sophistication in choices that not everyone was able to perceive or exercise.
Change is Change. Right?
So cracking the human component of System Recalcitrance is clearly possible if we identify early the specific factors that relate to our problem and address them. None of the human traits we identified above were specific to AI.
System Optimisation is also, to a large degree, a function of the human ability, values and judgement that set a technology, an organisation or a person on a particular path.
So if “change is change” and resistance to or adoption of change is the same whether we are talking about AI or about switching to electric cars, do we need to consider AI as a special case? Is it any different to how we choose to choose?
There are two aspects that make AI so different. The Stakes. The Speed.
The personal, organisational and sovereign stakes are enormous. The successful adopters of AI will accelerate their advantage away from those who do not adopt. Those people who engage with and understand AI will be better placed to make personal and organisational choices, whilst those who reject AI will see the technology gap and the poverty gap widen. The organisations that successfully adopt AI into their technology and into their cultural practice and values will see accelerating technological advantages and improved efficiency. Those that cannot or will not will see their margins and profits eroded by more effective, efficient competitors. The sovereign stakes are equally high.
The speed of change will be like nothing we have seen before: technological advantage piled on technological advantage.
The gaps in advantage, poverty and opportunity will deepen, unless we choose not to let that happen.
So we come back to people. People, human beings, will choose how AI is adopted, how we engage with it and embed it, or not. The skills required to successfully adopt AI extend beyond the technological. It could be argued that if AI is there to enhance and accelerate human-like capability, then it is those capabilities themselves that need to be chosen to support the adoption. A team, perhaps, that is able to span commercialism, technological prowess, compassion, morals, pragmatism, vision and energy.
Those same skills and capabilities will be the ones we choose to make sure The Gap does not get too wide.
There is an arms race for AI skills and AI talent - but that misses the point. It will not only be the AI Technologists that decide the fate of our individual and collective futures.
Build your teams with care.