Andrew Ward explains how he helped create a bespoke audit solution to support Balfour Beatty and Crossrail during construction of the new Woolwich station.
Audit schedules set out what quality processes and outputs should be measured, and when, so the business gains the maximum benefit. However, understanding the ‘what’ and the ‘when’ is often a big challenge.
Balfour Beatty had this problem during its contract with Crossrail to build the new Woolwich station in London. Halfway through the three-year project, with an audit schedule already in place, there was no evidence that a business-based approach was being applied. The quality team carried out a risk assessment and realised it needed to find a solution.
After some initial research it was clear that, despite claims that such work had been done previously, no suitable method for a business-based approach existed. However, the team quickly realised that a clean sheet meant a bespoke system could be implemented to meet the organisation’s needs.
Often when we analyse risk it is easy to overlook the multiple factors that affect the likelihood and consequences of an event. These factors can be defined as sub-topics and scored based on how they interact.
We wanted our technique to encompass several other deliverables to make it workable over the duration of the contract (and to apply to other works or situations). The scoring would have to be visible for the client to review and the layout of the data had to be set out in a way that would make the review easy. The process also had to be flexible enough to add in additional topics that could be scored without having to score the existing topics again.
The process we adopted had to ensure the factors affecting likelihood and consequence were predetermined and were consistent across each topic. It needed to be a quantitative system, with the results shown as numbers for comparison. The problem with applying a number to a risk assessment is that people tend to take the figure as absolute and avoid solving the problem. We overcame this challenge by setting up a range of comparisons.
Once we had set out what was wanted from the system we needed to design a process for getting the results. This was broken down into six stages. By using this staged approach the process could be systematically applied and additions or changes could be made later down the line.
Stage 1: production of a suitable audit list
Unfortunately, many audit schedules are far from complete and are prepared without a systematic approach. They may still be good schedules, but they lack a clear basis showing how they were produced.
The audit list needs to be identified in a systematic way to provide an evidential assessment for producing a positive and supportive audit programme. For the large contract with Crossrail, we reviewed the associated documents, including contract deliverables, specifications, design criteria, plans and strategies, as well as Crossrail’s own management system. This part of the process ensures the list of audit subjects is manageable. Experience has shown that exceeding about 40 topics leads to duplication and requires a lot of resources.
We produced a long list of audit subjects, including notes on the source, and outlined the scope of each. Then we noted where the contract mandated that specific audits be undertaken, to ensure they were addressed.
This list was lengthy so we reduced it by producing sets that could be broken down later if necessary.
For example, ‘cost control’ was used as an audit subject covering incoming invoice processing, recording of work done, invoice generation and the payment process. The main subject was later broken down into its constituent subjects.
Stage 2: identification of assessment factors
Next, we needed to identify the important factors in the assessment process. The audit list is bespoke to the audit programme, and each assessment factor is applied to every item in the audit list. These assessment factors are based on likelihood and consequences. The table below shows the assessment factors identified and applied to the construction work.
Stage 3: rating criteria for each factor
To carry out a successful audit risk evaluation, the identified assessment factors need to be compared to each other. We created a scale of one to five, with five being most important. In the exercise at Woolwich, as we were halfway through the contract, it was easier to keep the rating scales at either three or five points. The results are shown below as entered onto the spreadsheet.
Stage 4: setting the assessment factor rating criteria
Next, we numbered each assessment factor and developed a list of descriptions against a rating of one to five for each, with the highest value relating to maximum effect.
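The rating criteria above can be sketched in code as a simple lookup structure. This is a minimal illustration only: the factor names, thresholds and descriptions below are assumptions for the example, not the project's actual criteria.

```python
# Hypothetical assessment-factor rating criteria, recorded as a lookup
# table rather than a spreadsheet. Factor names and descriptions are
# illustrative assumptions only.
rating_criteria = {
    "effect_on_contract_value": {
        1: "cost of process under 1% of contract value",
        2: "cost of process under 3% of contract value",
        3: "cost of process under 5% of contract value",
        4: "cost of process under 10% of contract value",
        5: "cost of process 10% of contract value or more",
    },
    # A factor kept on a three-point scale, as the article describes.
    "effect_on_programme": {
        1: "no effect on programme",
        3: "affects programme; managed out with additional resource",
        5: "affects programme; delays completion",
    },
}

def describe(factor: str, rating: int) -> str:
    """Return the agreed description for a factor/rating pair."""
    return rating_criteria[factor][rating]
```

Keeping the descriptions alongside the numbers in one structure makes it easy to show reviewers exactly what each rating means, which is the point of Stage 4.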
Stage 5: applying the ratings
With the system set, it was time to apply the factors for the assessment. It is important to look at each topic individually against all the assessment factors and determine which value is most appropriate for that topic under each assessment criterion. In the spreadsheet below, ‘training and competency of staff’ has an assessment factor of two for its effect on contract value (cost of process under three per cent of the contract value) and a value of three for effect on programme (it affects the programme and needs to be managed out using additional resources).
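Stage 5 amounts to rating every topic against every factor and combining the ratings into a total score per topic. The sketch below assumes a simple sum of the ratings; the article does not state the combination rule, and the third factor and its value are invented for illustration.

```python
# Sketch of Stage 5: rate each audit topic against every assessment
# factor, then combine the ratings into one total score per topic.
# Factor names and the summing rule are illustrative assumptions.
FACTORS = ["effect_on_contract_value", "effect_on_programme", "effect_on_safety"]

def total_score(ratings: dict) -> int:
    """Sum the ratings a topic received across all assessment factors."""
    missing = [f for f in FACTORS if f not in ratings]
    if missing:
        # Every topic must be rated against every factor, per the process.
        raise ValueError(f"unrated factors: {missing}")
    return sum(ratings[f] for f in FACTORS)

# 'Training and competency of staff' from the worked example: 2 for
# effect on contract value, 3 for effect on programme. The safety
# rating is an assumed value to complete the illustration.
training = {
    "effect_on_contract_value": 2,
    "effect_on_programme": 3,
    "effect_on_safety": 4,
}
print(total_score(training))  # → 9
```

Requiring a rating for every factor before a total is produced mirrors the process's insistence that the factors be predetermined and consistent across topics.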
Stage 6: setting the importance boundary
As stated earlier, the process was designed to avoid heated discussion about values and numbers. This was resolved by grouping the total scores into bands to reflect the impact of the audit subject on the overall delivery of the project. The list below shows the immediacy of the subject to be audited, based upon the 18 months remaining on the contract.
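Banding the totals can be sketched as a simple threshold lookup. The band boundaries and the audit-timing labels below are assumptions for illustration; the article does not give the actual boundaries used at Woolwich.

```python
# Sketch of Stage 6: group total scores into bands so discussion focuses
# on the band (how soon to audit) rather than the raw number.
# Thresholds and labels are illustrative assumptions only.
BANDS = [
    (0, 8, "audit within the next 12 months"),
    (9, 12, "audit within the next 6 months"),
    (13, 15, "audit within the next 3 months"),
]

def importance_band(score: int) -> str:
    """Map a topic's total score to its importance boundary."""
    for low, high, label in BANDS:
        if low <= score <= high:
            return label
    raise ValueError(f"score {score} falls outside the defined bands")
```

Because topics are compared by band rather than by exact score, an argument over whether one rating should be a three or a four rarely changes the outcome, which is exactly the effect the banding is designed to achieve.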
The audit schedule review
Good practice dictates that all audit schedules should be regularly reviewed to reflect changes in the workplace and working methods. This assessment process allows for easy review by simply revisiting each stage to confirm its validity. Where changes are necessary, the areas that need to be improved are re-scored and can be easily compared to the previous audits scores. The process can then be built up to show a progressive and developing audit schedule that reflects the current concerns of the business or project.
Using this process to select audit topics means quality professionals can gain the maximum business benefit from the audit process. Often the way a schedule was produced lacks the evidence needed to withstand audit itself, but by carrying out this audit risk evaluation you can ensure the evidence is available and the process is flexible enough to reflect regular reviews.
By allocating numbers, the process gives clear visibility of the reason behind the decisions made. It is easily reviewed by the client or others, such as the certification body. While the allocation of each number is very much a personal choice against the criteria set at each stage, the conclusion is arrived at by the multiple application of those choices and this minimises the impact of an errant scoring decision. The process, overall, allows for easy review if circumstances change. In all, it is a useful technique to add to the quality toolbox.
Top tips for applying the process in practice
Over the years I have experienced risk workshops ranging from well-run, controlled and productive to a total waste of time. The lessons learned were applied when we developed the principles behind this process.
A big challenge was the numbers. We wanted to avoid fiery discussions about whether an item is ‘a three or four’, so the clearer the description of the stages of the assessment factor the better. The aim of the exercise is often lost when people argue and totally disagree.
I learned that you should never present an empty sheet. First, fill in your spreadsheet based on knowledge and discussions with key people. It is the application of a modified Pareto principle (also known as the 80/20 rule, in which 80 per cent of the consequences come from 20 per cent of the causes) – 80 per cent of the people will not understand the changes so you can cut your workload by reflecting this dynamic.
We also carried out an exercise to make the factors SMART (specific, measurable, achievable, realistic and timely) but found the work involved failed to add value to the overall process. As a result it was dropped and a short, general but clear statement was retained. Always be prepared for concerns regarding the scoring and be confident in your decisions. If you can show you rarely change the numbers to reflect what people want, then the importance boundary rating will become cemented.
One challenge was determining the audit topics; people do have favourite audit topics that they want on the list. In hindsight, I feel we should have carried out a small workshop to get the most out of the comments in the review.
Andrew Ward is an independent consultant at Get On With Business. Originally published at https://www.quality.org/knowledge/how-conduct-successful-audit-risk-evaluation