Artificial Intelligence and the Future of Internal Audit

Artificial intelligence (AI) is a broad term that refers to technologies that make machines “smart.” Organizations are investing in AI research and applications to automate, augment, or replicate human intelligence (i.e., human analytical and/or decision-making capabilities), and the internal audit profession must be prepared to fully participate in organizational AI initiatives. There are many other terms related to AI, such as deep learning, machine learning, image recognition, natural language processing, cognitive computing, intelligence amplification, cognitive augmentation, machine augmented intelligence, and augmented intelligence. AI, as used here, encompasses all of these concepts.

AI is not new. According to the McKinsey Global Institute’s (MGI) discussion paper “Artificial Intelligence: The Next Digital Frontier,” the idea of AI dates back to 1950 when Alan Turing first proposed that a machine could communicate well enough to convince a human evaluator that it, too, was human.

It is critical that internal auditors pay attention to the practical application of AI in business, and develop competencies that will enable the internal auditing profession to provide AI-related advisory and assurance services to organizations in all sectors and across all industries. AI is dependent on big data and algorithms, and it can be intimidating, especially for internal audit activities and organizations that have yet to master big data. But internal auditors do not have to be data scientists or quantitative analysts to understand what AI can do for organizations, governments, and societies at large.

Types of AI

Type I – Reactive machines: This is AI at its simplest. Reactive machines respond to the same situation in exactly the same way, every time. An example of this is a machine that can beat world-class chess players because it has been programmed to recognize the chess pieces, know how each moves, and can predict the next move of both players.

Type II – Limited memory: Limited memory AI machines can look to the past, but the memories are not saved. Limited memory machines cannot build memories or “learn” from past experiences. An example is a self-driving vehicle that can decide to change lanes because a moment ago it noted an obstacle in its path.

Type III – Theory of mind: Theory of mind refers to the idea that a machine could recognize that others it interacts with have thoughts, feelings, and expectations. A machine embedded with Type III AI would be able to understand others’ thoughts, feelings, and expectations, and be able to adjust its own behavior accordingly.

Type IV – Self-awareness: A machine embedded with Type IV AI would be self-aware. An extension of “theory of mind,” a conscious or self-aware machine would be aware of itself, know about its internal states, and be able to predict the feelings of others.

What differentiates the types of AI that react similarly to a situation is the reasoning behind the reaction. In other words, a Type I self-driving vehicle would change lanes because it has been programmed to do so whenever there is an obstacle. A Type II self-driving vehicle would decide to change lanes when a pedestrian is in its path because it recognizes the pedestrian as an obstacle. A Type III self-driving vehicle would understand that the pedestrian expects the vehicle to stop, and a Type IV self-driving vehicle would know that it should stop because that is what it would want if it were in the path of another oncoming vehicle.

Most “smart machines” today are manifestations of Type I or Type II AI. Ongoing research and development initiatives will enable organizations to advance toward practical applications of Type III and Type IV AI.
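The Type I versus Type II distinction can be made concrete with a toy sketch (illustrative only; the agents, rules, and observation window below are hypothetical, not from the original article). A reactive agent maps the current observation to the same action every time, while a limited-memory agent also considers a short, transient window of recent observations that is never retained as learned experience:

```python
from collections import deque

def reactive_decision(obstacle_ahead: bool) -> str:
    """Type I: same input, same output, every time; no history."""
    return "change_lane" if obstacle_ahead else "stay"

class LimitedMemoryAgent:
    """Type II: keeps a short, transient window of recent observations."""
    def __init__(self, window: int = 3):
        # Old entries fall off automatically; nothing is "learned" or saved.
        self.recent = deque(maxlen=window)

    def decide(self, obstacle_ahead: bool) -> str:
        self.recent.append(obstacle_ahead)
        # React if an obstacle was seen a moment ago, even if the current frame is clear.
        return "change_lane" if any(self.recent) else "stay"

# Same current observation (no obstacle), different decisions:
agent = LimitedMemoryAgent()
agent.decide(True)               # obstacle noted a moment ago
print(reactive_decision(False))  # -> stay
print(agent.decide(False))       # -> change_lane (remembers the recent obstacle)
```

The limited-memory agent mirrors the self-driving example above: it changes lanes because a moment ago it noted an obstacle, even though its memory window soon discards that observation.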

Opportunities and Risks of AI

AI presents organizations with opportunities such as:
  • The ability to compress the data processing cycle.
  • The ability to reduce errors by replacing human actions with perfectly repeatable machine actions.
  • The ability to replace time-intensive activities with time-efficient activities (process automation), reducing labour time and costs.
  • The ability to have robots or drones replace humans in potentially dangerous situations.
  • The ability to make better predictions, for everything from predicting sales of certain goods in particular markets to predicting epidemics and natural catastrophes.
  • The ability to drive revenue and grow market share through AI initiatives.

At the same time, AI introduces risks, including:
  • The risk that unidentified human biases will be embedded in the AI technology.
  • The risk that human logic errors will be embedded in the AI technology.
  • The risk that inadequate testing and oversight of AI results in ethically questionable results.
  • The risk that AI products and services will cause harm, resulting in financial and/or reputational damage.
  • The risk that customers or other stakeholders will not accept or adopt the organization’s AI initiatives.
  • The risk that the organization will be left behind by competitors if it does not invest in AI.
  • The risk that investment in AI (infrastructure, research and development, and talent acquisition) will not yield an acceptable ROI.

AI in Internal Audit

Internal audit is adept at evaluating and understanding the risks and opportunities related to the ability of an organization to meet its objectives. Leveraging this experience, internal audit can help an organization evaluate, understand, and communicate the degree to which artificial intelligence will have an effect (negative or positive) on the organization’s ability to create value in the short, medium, or long term. Some critical and distinct activities of internal audit related to artificial intelligence are:

  • Risk assessment procedures.
  • Implementation of processes, policies, and procedures aligned with enterprise risk management.
  • Assurance on management of risks related to the reliability of the underlying algorithms and the data on which the algorithms are based.
  • Filling the understanding gap.
  • Utilisation of computer-assisted audit techniques (CAATs).
  • Reemphasizing cyber resilience and IT risks.
  • Proper adoption of the AI Auditing Framework.
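As a minimal sketch of the data-analytics side of a CAAT workflow (the journal-entry amounts, function name, and z-score threshold below are hypothetical, for illustration only), an audit team might flag transactions whose amounts deviate strongly from the population:

```python
import statistics

def flag_outliers(amounts, z_threshold=2.0):
    """Return indices of amounts whose z-score exceeds the threshold.

    A simple z-score test; real audit analytics would also consider
    vendor, timing, approval chains, and digit patterns (Benford's law).
    """
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)  # population standard deviation
    if stdev == 0:
        return []  # all amounts identical; nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_threshold]

# Hypothetical journal-entry amounts; the last entry is an obvious outlier.
journal = [120.0, 135.5, 128.0, 131.2, 119.9, 125.4, 9800.0]
print(flag_outliers(journal))  # -> [6], the entry warranting follow-up
```

Simple, transparent tests like this also illustrate the assurance point above: the reliability of any AI-driven analytic ultimately rests on the quality of the underlying data and the soundness of the algorithm applied to it.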

Closing Thoughts

The internal auditing profession cannot be left behind in what may be the next digital frontier. To prepare, internal auditors must understand AI basics, the roles that internal audit can and should play, and AI risks and opportunities. To meet these challenges, internal auditors should leverage the AI Auditing Framework to deliver systematic, disciplined methods to evaluate and improve the effectiveness of risk management, control, and governance processes related to AI.

As a diligent internal audit firm in Malaysia that eagerly pursues continuous improvement, IBDC believes that the adoption of AI technology is inevitable for moving forward and staying competitive. The open question is how quickly the environment will mature enough to make AI implementation practical in emerging markets and semi-efficient economies like Malaysia, where small and medium enterprises (SMEs) are the dominant business type. The key concerns are always cost versus benefit, as well as the ability to generate and develop quality, meaningful data for the AI to use.

Ultimately, we do not think that machines are going to replace chartered accountants in the audit profession; rather, AI will facilitate and complement human thought processes to support sounder judgement. However, those who utilise AI will eventually replace those who do not.

This article is summarised and amended with additional input from the original article Artificial Intelligence – Considerations for the Profession of Internal Auditing, Special Edition, issued by The Institute of Internal Auditors Global.