21st Century Auditing for 21st Century Auditing Failures

4 November 2011

Using technology to transform auditing

Now used by many organisations, and recently presented to 350 leaders of the Japanese auditing profession, this is the case for changing to an auditing approach that audits the real world and identifies its risks.

Using evidence collection and analysis software programmes that are specifically designed for the purpose and delivered over the internet provides the real way forward.

There have certainly been some stunning system and process failures over the last few years: from Deepwater Horizon to the Fukushima power plant to Toyota to Stafford Hospital, where the subsequent Francis inquiry found ‘shocking’ failures in care as the hospital focused on cutting costs and hitting government targets. To you and me, that’s the inappropriate use of ‘lean’: using a closed-system quality technique to manage an open system.

At Stafford Hospital, where was the auditing and what was it indicating, apart from giving the hospital a clean bill of health? Or how about the Deepwater Horizon disaster, where the subsequent inquiry concluded that:

“…a complex and interlinked series of mechanical failures, human judgements, engineering design, operational implementation and team interfaces” was to blame.

Why hasn’t auditing picked up the risk issues?

Have you ever wondered why, so often after poor performance, someone somewhere says ‘I knew that was going to happen’ or ‘I could have told you that’? Just as in the major inquiries above, the indicators of the risk of poor performance already existed before the poor performance or unintended event emerged from the system or process – before it became reality. By the time it happens it is too late; we cannot fully recover from the impact of that lost customer, lost money, lost sale, poor design, death, etc. All we can do is deal with the impact. So why is it that auditing rarely picks up these indicators? As you read this article, what level of risk is your organization facing through the potential failure of your ‘real-world’ systems or processes? Does your auditing truly tell you?

An analogy may help. Imagine you are driving your car. You take the odd look in the rear-view mirror to check that the police are not following you and that you aren’t going to be surprised by someone coming up behind. The mirror is small because looking in it only tells you where you have been. In comparison, the windscreen is huge, because you have to continuously take in and analyse the evidence around you to help you steer the car down the road without having an accident. Could you drive safely if you only used the rear-view mirror? Now imagine you are an auditor and the driver is a manager driving his or her system, process or department. What the auditor typically does is tell the manager where he or she has been by looking in the rear-view mirror – at the records, documents and data – to confirm (or otherwise) that the manager really has been down that road correctly: something often already known to the manager. How does that help the manager look through the windscreen, understand the current level of capability, and analyse it to steer the system, process or department into the future without crashing?

I was recently talking to a company that had been let down by a supplier, which had badly affected its ability to service customers. As a result, senior management questioned the value of quality, of the quality team, and of certification itself. After the blame game had been played for a while, it emerged that the risk indicators had all been there. But the audit team said: “We checked that people had been doing what they should, and they were. They were doing exactly what they should, i.e. what was documented, and they had the records to prove it.” Quality had operated in the documented world, not the real one, and auditing had been looking in the rear-view mirror. A double whammy!

Changing the audit model

Systems, processes and organizations are complex, living entities where much of what really happens is never documented; maps, procedures and records are not the real world, just useful pictures of it. Basing our auditing solely on these documents is therefore, by definition, flawed. What we need to do is understand ‘business as normal’ – the real world that delivers business objectives and creates risk. The question is: what auditing model will help us do this?

What do we mean by a new auditing model? Just think about it. Suppose 50 people work in a system or process, and each interacts with each of the other 49 ten times a day: almost 25,000 interactions every day. They do this five days a week for six months. That is over three million interactions, all of which will have an effect on the quality of the output, the efficiency and effectiveness of the process, and the outcome experienced by customers and other stakeholders. Most of these interactions are never recorded or documented, yet somehow we expect a human auditor to audit, understand and analyse all this evidence and then work out the potential impact or risk to the delivery of objectives: impossible – they need help! Relying on what has been documented and what the auditor sees during a visit therefore only scratches the surface of what is actually happening.
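As a rough sanity check on those numbers (a simple model assuming each of the 50 people has 10 interactions with each of the other 49, five days a week for roughly 26 weeks):

```python
# Rough model of the interaction volume described above.
# All figures are the article's illustrative assumptions.
people = 50
contacts_per_pair_per_day = 10           # interactions between each pair of people

daily = people * (people - 1) * contacts_per_pair_per_day   # just under 25,000
working_days = 5 * 26                    # five days a week for six months
total = daily * working_days

print(f"{daily:,} interactions per day; {total:,} in six months")
# → 24,500 interactions per day; 3,185,000 in six months
```

Even at this modest scale the evidence base runs into millions of interactions, which is the point: no human auditor can sample it meaningfully by hand.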

Intelligent use of technology to the rescue

This is where technology can step in – but not in the conventional way, where technology is seen merely as remote auditing tools and input devices for gathering data: looking at records online, teleconferencing, online workshops, webcams and so on. All these do is reinforce the current auditing model; they do not change it to solve the structural auditing issues outlined above. They are normally just technological versions of looking in the rear-view mirror, with little inbuilt intelligence. We need to see technology in broader terms: new uses of ICT systems and software programmes whose inbuilt intelligence collects and processes the data in a structured way, so that we can break through the constraints of what typically happens. This facilitates the creation of a new auditing model and addresses the problems above by auditing the real ‘business as normal’, presenting a new, forward-facing window on organisational capability and risk.

So what type of technology-based auditing methods could we use? I am not talking about surveys, which, although useful in some cases, do not really provide true evidence and often reinforce the existing auditing model. Using evidence collection and analysis software programmes that are specifically designed for the purpose and delivered over the internet is instead the real way forward. They can use the connectivity of modern software tools and large numbers of evidence sources to consistently collect behavioural, compliance and effectiveness evidence. They can also use a range of analysis options – for example an in-built analysis engine or Excel spreadsheets – to deliver absolutely consistent processing of the evidence. These approaches allow us, as auditors, to gather information and objective evidence from a very wide range of auditees (both inside and outside the scope) about what is really going on day by day, i.e. to collect and analyse large quantities of behavioural evidence.

Let me say that a different way: you don’t audit process maps and other documents to understand the risks and complexity of the real-world ‘business as normal’. You use one or more input devices and intelligent software programmes to collect real-world evidence of what different people experience of other people’s behaviour, and of the effect of that behaviour on business performance. Behaviour is a lead indicator of strategy execution and of the risk to delivering performance. Understanding behaviour makes what is currently invisible visible. With this new type of audit information, management can manage potential risks in steering their system, process, etc. into the future. It opens a new auditing window on the world, enabling auditors to support our organizations better: by looking through the windscreen, not just the rear-view mirror.

A case study in using an intelligent audit system

A real-life example: auditing a sales process. Technology-based auditing systems can be used to gather experiences from people within the process (the sales team, sales managers, administrators, etc.) and from those who are impacted by it (say customers, operations, finance, etc.). Input devices and software working together can gather data from auditees about the impact of other auditees’ behaviour – in effect, everyone is an auditee and an auditor at the same time, without realising it or being given that label. There is no auditor collecting this information in a traditional way; the auditor’s role is to identify the experiences they want feedback about and the level to which, when analysed together, these demonstrate organisational competence. Once this is loaded into the auditing system, the system gathers and analyses the evidence for them. Whilst technology-based input tools (the internet, mobile devices, teleconferences, etc.) can be used to collect the data, it is the software that runs on them that makes the audit truly independent and confidential. It also removes the danger of the auditor misinterpreting information and its business importance, whilst allowing far more people to be involved, each with shorter inputs.
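A minimal sketch of that ‘everyone is an auditee and an auditor’ model. The statements, the 0–4 rating scale and all names here are illustrative assumptions, not a description of any particular commercial product:

```python
from collections import defaultdict

# Hypothetical experience statements the auditor wants feedback on.
STATEMENTS = [
    "Sales forecasts I receive are accurate",
    "Handovers from sales to operations are complete",
]

# Each respondent rates other people's behaviour as they experience it,
# on an assumed 0-4 scale; ratings are stored without names, so the
# software, not the auditor, holds the raw evidence.
responses = defaultdict(list)

def submit(statement: str, rating: int) -> None:
    """Record one anonymous rating for a statement."""
    assert 0 <= rating <= 4
    responses[statement].append(rating)

# People inside and outside the process all act as evidence sources.
submit(STATEMENTS[0], 1)   # e.g. a sales manager
submit(STATEMENTS[0], 2)   # e.g. a finance analyst
submit(STATEMENTS[1], 3)   # e.g. an operations lead

def score(statement: str) -> float:
    """Average rating expressed as a percentage of the maximum."""
    ratings = responses[statement]
    return 100 * sum(ratings) / (4 * len(ratings))

print(score(STATEMENTS[0]))   # → 37.5
```

The design point is that the auditor defines the statements up front; the collection and scoring then run consistently and confidentially without the auditor handling individual answers.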

Technology-based analysis can consistently rate each experience or piece of objective evidence collected in terms of its importance to the delivery of objectives. The software can number-crunch vast amounts of data quickly and consistently – and certainly better than the average human – to produce a risk profile of system or process performance, such as that below:

Description %
1 Activities enhance market share 35
2 Sales teams build effective relationships 36
3 Net sales margins are optimised 39
4 Gross profit is maximised 33
5 Sales forecasting is accurate and managed 33
6 The process is managed 31
7 Process activities take place 35
8 Stock turnover is managed 19
9 Performance is managed against individual sales targets 37
10 Customers understand what we can and will deliver 32

If behaviours and outcomes that define minimum acceptable risk or capability are scored at 40%, what does this tell us about the chances of this process being a success?  The manager now knows what potential risk areas they face and it is up to him/her to decide what improvements to make to ensure that the process isn’t going to fail.  Far better this than giving him/her a tactical picture of what happened in the past and a list of non-conformances – or even worse, a ‘tick’ that everything is ‘OK’.
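Using the figures from the illustrative risk profile above, flagging every driver that falls below the assumed 40% minimum capability threshold is straightforward:

```python
# Scores from the risk profile above; 40% is the assumed minimum
# acceptable capability threshold from the text.
profile = {
    "Activities enhance market share": 35,
    "Sales teams build effective relationships": 36,
    "Net sales margins are optimised": 39,
    "Gross profit is maximised": 33,
    "Sales forecasting is accurate and managed": 33,
    "The process is managed": 31,
    "Process activities take place": 35,
    "Stock turnover is managed": 19,
    "Performance is managed against individual sales targets": 37,
    "Customers understand what we can and will deliver": 32,
}

THRESHOLD = 40
at_risk = {k: v for k, v in profile.items() if v < THRESHOLD}

print(f"{len(at_risk)} of {len(profile)} drivers below {THRESHOLD}%")
print(min(at_risk, key=at_risk.get))   # → Stock turnover is managed
```

Every driver in this example sits below the threshold, with stock turnover the weakest: exactly the forward-facing risk picture the article argues managers need.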

The software also allows analysis of this data in more detail, to show where potential risks to performance actually lie, by benchmarking data between departments, sites or types of people – for example, as below:

Performance Drivers

Overall result (drivers 1–11): 32.9%, 33.3%, 31.9%, 32.6%, 35.1%, 32.8%, 40%, 34.7%, 34.5%, 45.8%, 32.6%

By department/function/team (driver scores in order, total last; not every department was scored against every driver):

Branding and marketing: 21.2%, 20.7%, 20.4%, 20.6%, 18.8%, 21.7%, 19.1%, 22.9% – total 20.5%
Creative operations: no scores recorded
Engineering: 43.3%, 36.3%, 26.4%, 35.8%, 41.7%, 51.2%, 40%, 36.9%, 22.4%, 30%, 30% – total 37.5%
Global operations: 42.1%, 41.5%, 42.2%, 40%, 37.1%, 36.7%, 42.9%, 46.7% – total 40.8%
Mobility: 62.5%, 60%, 61.5%, 67.5%, 76.5%, 56.7%, 80%, 70%, 76.4%, 100%, 100% – total 71.5%
Product design: 29.4%, 18.9%, 19%, 31.6%, 35%, 25%, 34.3%, 31.1%, 15.2%, 26.7%, 30% – total 29%
Program management: 54.3%, 60%, 56.7%, 50%, 42.9%, 53.3%, 40%, 50%, 40% – total 50.2%
Quality: 34.7%, 50.3%, 50%, 40.9%, 38.1%, 34%, 34.3%, 40.5%, 59.4%, 31.6%, 26.7% – total 40.4%

Unlike traditional audit reporting, this information is not saying that the highlighted departments are the ones where the problem lies; it is saying that they are where the risk is emerging. The cause may well lie elsewhere, in other processes and functions, but this helps to pinpoint the start of the root cause analysis. This could not have been achieved within reasonable cost constraints without the use of technology and software to collect and process the data. Technology also helps the auditor interpret the results and report to management on the level of risk to the delivery of their objectives. At a lower level of detail, specific compliance evidence can also be collected where it is needed, either to complement or to focus traditional auditing methods.
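The benchmarking step can be sketched with the departments from the table above that have a full set of eleven driver scores. The simple unweighted mean used here is an assumption; the totals in the table itself may be weighted differently:

```python
from statistics import mean

# Per-driver scores (drivers 1-11) for the departments in the table
# above that have a complete set of eleven values.
departments = {
    "Engineering":    [43.3, 36.3, 26.4, 35.8, 41.7, 51.2, 40, 36.9, 22.4, 30, 30],
    "Mobility":       [62.5, 60, 61.5, 67.5, 76.5, 56.7, 80, 70, 76.4, 100, 100],
    "Product design": [29.4, 18.9, 19, 31.6, 35, 25, 34.3, 31.1, 15.2, 26.7, 30],
    "Quality":        [34.7, 50.3, 50, 40.9, 38.1, 34, 34.3, 40.5, 59.4, 31.6, 26.7],
}

# Benchmark: mean score per department, lowest first, to show where
# risk is *emerging* (not necessarily where its cause lies).
ranked = sorted(departments, key=lambda d: mean(departments[d]))
for dept in ranked:
    print(f"{dept:15} {mean(departments[dept]):.1f}%")
```

On these figures, Product design surfaces as the area where risk is emerging first, which is where the root cause analysis would start.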

But isn’t doing this unrealistic, and surely too costly?

No on both counts.  Using technology-based auditing systems in this way has been proven to usually cut the cost of auditing whilst delivering significantly more value. This is because it:

  • reduces the need for human auditors collecting data (they do other more value-adding and rewarding work)
  • reduces associated travel costs where these are incurred
  • carries out audit activity where previously it would not have happened because of a lack of audit resource
  • reduces audit fatigue, as these tools allow something to be audited once and the data used in different ways
  • uses a range of input devices, from PCs and laptops to widely available handheld devices
  • streams data automatically to provide valuable benchmarking information
  • automatically and consistently analyses the importance of the evidence collected, removing potential bias
  • increases the return on the auditing investment made, by providing the information management need to manage the business.
