The pace of development right now is astonishing in every aspect of Artificial Intelligence: from the base capabilities of the LLMs themselves to the wide-ranging use cases they can serve, all multiplied by the sheer number of actors in the AI space - the AI capability providers, the developers building "AI as a Service", and the companies and countries adopting AI.
So it's no surprise we see AI expanding in every direction at an explosive pace, and as with every other technology, we will see varying degrees of success in those implementations. AI in that respect will be no different to any emergent technology: some technology suppliers will win and others will lose, and the same goes for technology adopters. Winning or losing will be driven partly by the success of the AI model in doing what was intended, but also by the organisation's willingness to adopt good governance, giving it the best chance of success and the best chance of avoiding pitfalls.
In an earlier article I referenced Nick Bostrom's excellent book, Superintelligence, in which he discusses the anticipated pace of change - recommended reading.
Which brings us to two important aspects of AI governance: Ethics and Responsibility. They sound similar, but they are not the same thing, and they need different approaches in an organisational context.
If we take the literal meaning of the two words first (from Merriam-Webster): -
Ethic(s)
a set of moral principles: a theory or system of moral values.
the principles of conduct governing an individual or a group
Responsibility
the quality or state of being responsible: such as moral, legal, or mental accountability
But what do those terms mean in a practical sense, in a process sense and specifically in the context of AI?
I choose to think of them as two stage gates on a track that we pass through each time we start a race.
The Starting Gun - Ethics
We ask ourselves the question: -
"Is what we are about to do, the hypothesis we are about to test 'ethical'?"
Each company will have its own views on how ethical it wants to be and how ethics are interpreted (that is a societal problem in itself), but at least asking the question is an acknowledgement of the need to ask it. Many companies will start down the long and winding AI road without ever asking how ethical their position is, and of course the question of ethics for any given use case has multiple perspectives. What is the impact on my staff? What is the impact on the target audience? What is the impact on the people who have supplied the data? What is the impact on society?
The answers to those questions should determine whether or not you even embark on the endeavour you are designing. Only when they meet the standards of your own ethical position should you fire the starting gun.
Lap One - Responsibility
Let's assume we've passed the ethical test and started the design of a new AI tool - for example, one that assesses the rate of tooth decay in any given population so as to inform healthcare policy. That is a use case with clear societal merit, little by way of immoral applications for a bad actor to exploit, and the potential to be widely or narrowly adopted and scaled. The question of responsibility arises when we need to ensure the model we build makes that prediction in a responsible way. What might an irresponsible model look like?
Feature Design: The model is not aware of certain demographic, ethnic, religious or geographic characteristics which influence the outcome of tooth decay, resulting in poor policy decisions and the possible under-allocation of resources to areas that need them.
Bias: The training data is acquired from dentists and is hence automatically biased towards people who go to the dentist, when in fact the highest rate of tooth decay may be within the population that doesn't go to the dentist.
Those are just two examples of how a model needs to be examined to ensure it is fair and accurate across the range of populations it is intended to serve. So how do we do that?
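Before reaching for a full platform, even a quick disaggregated-metrics check can surface the dentist-sampling bias described above. Here's a minimal sketch using the open-source Fairlearn library; the toy figures and column names are invented purely for illustration:

```python
import pandas as pd
from sklearn.metrics import mean_absolute_error
from fairlearn.metrics import MetricFrame

# Invented data standing in for observed vs predicted tooth-decay rates.
population = pd.DataFrame({
    "age_band": ["0-18", "0-18", "19-64", "19-64", "65+", "65+"],
    "region":   ["urban", "rural", "urban", "rural", "urban", "rural"],
})
y_true = [0.12, 0.30, 0.10, 0.25, 0.20, 0.40]  # observed decay rates
y_pred = [0.11, 0.18, 0.10, 0.15, 0.19, 0.22]  # model under-predicts rural decay

# Break the headline error metric out by demographic group.
mf = MetricFrame(
    metrics=mean_absolute_error,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=population,
)

print(mf.overall)       # the headline error hides the problem
print(mf.by_group)      # per-group error exposes the rural gap
print(mf.difference())  # worst-case gap between groups
```

If the per-group errors diverge sharply, that is the cue to go back and fix the training data before the model goes anywhere near a policy decision.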
Thankfully, a range of tools and approaches is available to us, provided we choose to use them. In particular, I find Microsoft has provided an excellent pool of resources for governance and technical approaches, so let's start there.
Responsible AI Impact Assessment Template - even if you are not embarking on an AI journey just yet, this template provides some great context for the questions and considerations you should be asking within your organisation.
The Responsible AI Dashboard - This video gets into a bit of detail, but is again a great introduction to the considerations of model monitoring and the inherent capabilities of the Microsoft products.
The Responsible AI Dashboard Demo - Here's an interactive version of that Responsible AI Dashboard.
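For a flavour of how that dashboard is driven in practice, here is a hedged sketch using the open-source responsibleai and raiwidgets packages that power it. The tooth-decay data and column names are invented for illustration:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Invented tooth-decay data: numeric features plus the target column.
cols = ["sugar_intake", "fluoridated_water", "dentist_visits", "decay_rate"]
train = pd.DataFrame([[3, 1, 2, 0.10], [8, 0, 0, 0.35], [5, 1, 1, 0.15],
                      [9, 0, 0, 0.40], [2, 1, 3, 0.08], [7, 0, 1, 0.30]],
                     columns=cols)
test = pd.DataFrame([[4, 1, 2, 0.12], [6, 0, 0, 0.28]], columns=cols)

# Fit a simple model on the features (everything except the target).
model = RandomForestRegressor(random_state=0).fit(
    train.drop(columns="decay_rate"), train["decay_rate"])

# Assemble the insights the dashboard will visualise.
rai = RAIInsights(model, train, test,
                  target_column="decay_rate", task_type="regression")
rai.explainer.add()       # feature-importance explanations
rai.error_analysis.add()  # find cohorts where the model is most wrong
rai.compute()

ResponsibleAIDashboard(rai)  # serves the interactive dashboard locally
```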
This article isn't a deep dive into technical approaches - that's an article for another day - so we'll stop there for a moment and make some assumptions: -
You've completed an Ethics review of your proposal and satisfied yourself it aligns with your company's and stakeholders' views and objectives;
You've completed an RAI Impact Assessment and satisfied yourself you understand the important components such as the use cases, fairness and human oversight requirements.
So you are off the starting blocks - the starting gun is ringing in your ears and you are off down the back straight, hugging the inside of the track on the first turn. But it's not so much a 200-metre sprint as a 3,000-metre steeplechase: there are obstacles to negotiate before we reach the finish line.
The Hurdles
I started by presenting the Ethics assessment and the Responsibility assessment as a linear process, one following the other. But whilst it's important to view it that way at the outset (why start something that you consider unethical?), it's really an interplay between the two.
Microsoft sums it up well: responsibility is embedded in the concept of 'explainability'.
The transparency of a model is crucial to understanding responsibility, and vice versa. Provided we have accepted the need to 'build responsible AI', we will have embarked on our development having put in place the tools to assess how responsible we are being, and to feed that back into our ethical judgement. Coming back to our race analogy: we've started the race and can see our first hurdle fast approaching. The model we've designed is biased; the training data is poor and the hypothesis is flawed. The essential step here is to come back to both concepts, Ethics and Responsibility. Is it responsible to publish a model you know to be biased? Is it ethical to publish a model whose outputs we know to be flawed, when those outputs will determine the level of healthcare a community receives? I hope the reader would agree the answer is 'no' to both of those questions (in the article referenced above, I cite some reasons why that may not always be the case!)
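One concrete way to put explainability to work at this hurdle is to inspect which features actually drive the model's predictions. A minimal sketch using the open-source shap library; the model, features and figures are again invented for the tooth-decay example:

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Invented feature frame and target for the tooth-decay example.
X = pd.DataFrame({"sugar_intake": [3, 8, 5, 9, 2, 7],
                  "fluoridated_water": [1, 0, 1, 0, 1, 0],
                  "dentist_visits": [2, 0, 1, 0, 3, 1]})
y = [0.10, 0.35, 0.15, 0.40, 0.08, 0.30]

model = RandomForestRegressor(random_state=0).fit(X, y)

# Model-agnostic explainer over the prediction function.
explainer = shap.Explainer(model.predict, X)
shap_values = explainer(X)

shap.plots.bar(shap_values)           # global: which features matter most overall
shap.plots.waterfall(shap_values[0])  # local: why this one prediction came out as it did
```

If the explanation shows the model leaning heavily on a proxy for 'visits the dentist', that is the bias hurdle made visible - and the cue to adjust the hypothesis and the training data.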
So we navigate (hopefully) the ethical hurdles by adjusting our hypothesis, getting better training data and identifying and resolving areas of bias. The race continues.
The Finish Line
A 3,000-metre steeplechase has 35 barriers to overcome, and with the right tools for assessing our model's accuracy, fairness, transparency, degree of error and numerous other factors, our AI runners can navigate them with ease.
The finish line is in sight. We're on the verge of implementing our AI Tool into the real world.
Real Users,
Real Data,
Real Scenarios,
Real World Impact.
It is at this stage that the work really begins. Whilst we have finished the race, it was only the start of a whole series of races! The model is now in the real world and needs constant monitoring and assessment. It will see new data and new scenarios it has never seen before, and it will be asked questions that did not fit the original hypothesis and whose answers will be difficult to predict.
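One simple guardrail for 'data it has never seen before' is a scheduled drift check that compares live feature distributions against the training set. Here's a hedged sketch using scipy's two-sample Kolmogorov-Smirnov test; the function name and the 0.05 threshold are illustrative choices, not standards:

```python
import pandas as pd
from scipy.stats import ks_2samp

def drifted_features(train_df: pd.DataFrame, live_df: pd.DataFrame,
                     alpha: float = 0.05):
    """Flag numeric features whose live distribution has drifted from training."""
    flagged = []
    for col in train_df.columns:
        stat, p_value = ks_2samp(train_df[col], live_df[col])
        if p_value < alpha:  # reject "same distribution" at the chosen level
            flagged.append((col, stat, p_value))
    return flagged

# e.g. run nightly and alert the governance reviewers when anything is flagged:
# alerts = drifted_features(training_features, last_24h_features)
```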
So the task of managing Ethical and Responsible AI covers the whole lifecycle, and with that overhead comes the need to fine-tune both the technical capability to assess responsibility on an ongoing and frequent basis, and the corresponding organisational governance and decision-making capability. Thankfully, those tools and approaches are starting to catch up with the need to use them. (More to come on Microsoft's AI Toolkit in another feature.)
Those entities that embrace the oversight requirements of Responsible AI will be the ones that foster and develop their capability, truly understand organisational and societal benefit, and prosper into the future.