A TECHNOLOGICAL TAKEOVER

Could artificial intelligence take over the role of human risk management, asks Ian Manners, Partner and Head of Business Risk & Regulation at Ashfords.

Risk management is an integral aspect of the effective management of buildings and FM sites.

It can also be time-consuming, resource-heavy and expensive. The rise of artificial intelligence (‘AI’) raises the question of whether technology can relieve facilities managers of their health and safety responsibilities, providing simpler and more effective management of risks.

Under the Health and Safety at Work Act 1974, employers have a duty to manage risks and to protect the health, safety and welfare of their workforce and other people who might be affected by their work activities. Two legal duties, which underpin the requirements of health and safety law, are:

  1. Regulation 3 - The Management of Health and Safety at Work Regulations 1999: the duty to carry out a suitable and sufficient assessment of the risks which employees and non-employees are exposed to; and 
  2. Regulation 7 - The Management of Health and Safety at Work Regulations 1999: the duty to appoint a ‘competent’ person, who has the necessary skills, experience and knowledge to manage health and safety. 

These are ongoing duties, requiring proactive monitoring, adaptation and re-assessment, which until now has meant that organisations must rely on individuals with experience and expertise in the industry. So, can we realistically envisage AI taking on this responsibility?

AI applications can do in a fraction of a second what humans can: processing and analysing large volumes of complex data and identifying patterns and correlations which might not be immediately apparent to human analysts. Using learning algorithms, they can analyse historical data, including accident reports, weather conditions and employee behaviour, to predict risks and give people and organisations the information that can lead to accident prevention.

They can also use communication platforms to streamline the flow of information among managers, helping to identify and address potential risks and minimise the likelihood of misunderstandings and errors. All this (and more) can facilitate efficient resource allocation and management, saving time, cost and resources. But could AI go further and substitute for the role of a human health and safety manager?

What AI (at least presently) lacks is the nuanced contextual understanding of humans, which is particularly pertinent in a subject matter where human error is often the root cause. Where humans are alive to minor variables and the unique characteristics of a person, AI is limited to algorithmic predictions built on human modelling of past situations. These limitations are particularly evident in ‘dynamic’ risk assessments, where individuals respond in real time to new factors, assessing and addressing risks on the spot and adapting their methodologies as they go. Competent humans can quickly iterate their assessments to accommodate a vast array of environmental conditions and circumstantial changes, whereas AI might require retraining or manual intervention, potentially delaying the implementation of necessary safety measures.

More importantly, central to any effective health and safety management is the duty under Regulation 7 to appoint a competent person. In short, the satisfaction of this legal duty means that AI simply cannot be used as a replacement for an organisation’s competent person, at least on the present wording of the law. 

This is not to say that AI cannot be a useful tool, and it has been recognised as such by the regulators. The Government has released guidance for regulators, ‘Implementing the UK’s AI Regulatory Principles’, which sets out five principles for them to interpret and apply:

  1. Safety, security and robustness.
  2. Appropriate transparency and explainability.
  3. Fairness.
  4. Accountability and governance.
  5. Contestability and redress.

Indeed, the Health and Safety Executive is actively exploring ways in which it can leverage its data tools to unlock intelligence that will help organisations take an evidence-based approach to managing and improving health and safety risks.

Put simply, AI cannot deliver these principles alone. While ever more sophisticated AI applications will continue to be developed and used by organisations, and indeed by risk management consultants, as tools to support them in managing risk, we are still a long way from being in a position to replace the human role in risk management.

www.ashfords.co.uk
