In this two-part blog, we clear up some of the myths surrounding AI and explore how the public sector could use it to improve its working practices.
The promise of cognitive technologies to reduce mundane tasks is one of the reasons why the public sector is enthusiastic about new AI-based applications. Organisations are looking for – and finding – applications to improve their services.
Between November 2018 and April 2019, the Government Digital Service (GDS) and the Office for Artificial Intelligence (OAI) led a review into using AI in the public sector.
The findings revealed that leaders across the public sector would benefit from a better understanding of the technology, the opportunities it presents and the limitations of its use. In response, the GDS and the OAI published joint guidance to address these findings.
Understanding artificial intelligence is the first step in an organisation's journey to using AI. So what does it really mean?
There is not one AI technology
AI is constantly evolving, but generally involves machines using statistics to detect patterns in large data sets and repetitively performing tasks without constant human guidance. Current applications of AI focus on performing narrowly defined tasks; AI is not a general-purpose solution that can solve every problem.
Machine learning is a subset of AI and usually refers to systems that improve their performance on a given task over time through feedback and experience. It is the most widely used form of AI with recent advances due to:
- Improved algorithms
- Huge growth in data availability
- Increased computational power, especially within cloud computing
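To make "improving performance through feedback and experience" concrete, here is a toy sketch (our own illustration, not from the GDS/OAI review): a tiny model that learns a hidden decision threshold from labelled examples, and whose estimate improves as it sees more data.

```python
import random

# Toy machine-learning illustration: the "model" must discover a hidden
# decision rule (values at or above TRUE_THRESHOLD are class 1) purely
# from labelled examples. More experience -> a better estimate.
random.seed(0)
TRUE_THRESHOLD = 0.6  # the hidden rule the model must learn

def label(x):
    """Ground-truth labelling the model never sees directly."""
    return 1 if x >= TRUE_THRESHOLD else 0

def train(n_examples):
    """Estimate the threshold as the midpoint between the largest
    0-labelled and smallest 1-labelled example seen so far."""
    xs = [random.random() for _ in range(n_examples)]
    lo = max((x for x in xs if label(x) == 0), default=0.0)
    hi = min((x for x in xs if label(x) == 1), default=1.0)
    return (lo + hi) / 2

def accuracy(threshold, n_test=10_000):
    """Fraction of a fresh test set the learned threshold classifies correctly."""
    test = [random.random() for _ in range(n_test)]
    correct = sum((x >= threshold) == (label(x) == 1) for x in test)
    return correct / n_test

for n in (5, 50, 500):
    print(f"trained on {n:>3} examples: accuracy {accuracy(train(n)):.3f}")
```

With only a handful of examples the estimate is rough; with hundreds it converges on the true rule. Real machine-learning systems follow the same principle at vastly greater scale, which is why data availability and computational power have driven recent advances.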
Now that we have an understanding of what AI really is, we can look at what makes an AI project succeed. In this blog, we will discuss the first two of the four factors outlined below, each of which often has a significant impact on the success of an AI project:
- Data availability and quality
- People with the right skillsets
- Choosing the right AI solution
- AI ethics including fairness and explainability
Data: the sticking point
Lack of a coherent data infrastructure has made it difficult to share and access data between organisations. The Ministry of Justice’s (MoJ) Transforming Rehabilitation Strategy is an example of a programme that has not managed to create the right infrastructure to appropriately share data between organisations. Community Rehabilitation Companies, which run probation services, are not always informed whether a person released from prison has suffered from mental-health issues, making it difficult to deliver effective and personalised services.
Legacy systems also complicate the data landscape. In 2015, Whitehall announced the Crown Hosting programme, which intended to move departments’ legacy systems into an updated data centre, but take-up of the scheme was slow. The Department for Work and Pensions (DWP) intended to shift 250 of its systems to the data centre but ended up migrating just five.
Making sure the data held is properly managed is also a challenge. The impact of big data on our everyday lives is quite familiar, but less has been said about how it can be used to deliver high-quality public services. Without reliable data as evidence, organisations risk developing policies and services that do not address people’s real concerns.
People, not technology, enable innovation
When starting an AI project, it is important that the right people are involved. The team should be multidisciplinary, with a diverse combination of roles and skills.
Organisations may need specialist roles such as:
- A data architect who sets out the data vision and data design to meet user needs;
- A data scientist who has a good understanding of existing data sets and the target problems;
- A data engineer who integrates the delivery into business systems and processes;
- An ethicist who provides ethical judgements and assessments on the inputs;
- A domain knowledge expert who knows the target environment on which to deploy the results;
- An engineer with strong knowledge of DevOps, infrastructure and security design to support running the solution in production.
In the end, a great AI model is not enough by itself. It is critical to plan for and develop approaches to encourage adoption from day one. This may include having target users working with the cross-functional team early in the project, or encouraging users to use, interpret and challenge the outputs from the model.
Depending on the problem and the target users, organisations can choose from the three approaches below to deploying AI solutions:
- Replace. AI takes over the job completely. For example, having automated call handlers directing callers to the right operators.
- Divide and conquer. AI automates as many steps as possible, leaving humans to supervise or complete the remaining work. For example, machines could complete live translation of TV broadcasts, with experts revising the transcripts for later release.
- Collaborate. This is where the true promise of AI lies: humans working more effectively thanks to technology complementing their skills. An example of this would be CCTV camera systems flagging potential threats for review and tracking by police officers.
For each of these automation approaches, organisations should consider their priorities. A cost strategy uses technology to reduce costs, especially by reducing the workforce. A value strategy focuses on increasing value by complementing human work with technology, reassigning people to complete higher-value work. In reality, organisations will use a mixture of the two strategies for their AI solution.
Return next week for part two, where we discuss how to choose the right AI solution and the ethics surrounding it.