Technology

Leadership Strategies In Navigating Fraud In The Age of AI - Part II


by Sridhar Ramamoorti

As AI grows in use, the information it produces becomes less credible, bringing the consequences of truth decay to the finance and accounting world.

“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”
---The late theoretical physicist Stephen Hawking

Where Does This New AI Reality Put Us as Finance and Accounting Professionals?

Here is what I believe is happening in our profession of accounting.

We now operate in an environment where we collect data from what we hope are trustworthy sources, process it with what we hope are trustworthy systems, and extract meaning using what we hope are trustworthy evaluation criteria. Many hopes, not much reassurance. We seek truth in numbers at a time when "truth decay" is being nurtured, fed by systems vulnerable to manipulation by those wanting to create an artificial reality for financial fraud, gains in glory and power, or other designs. We are reaching the point where we can no longer assume, as we once did, that the information we receive is trustworthy. I find this development distressing. One of my former Andersen partners called an audit conducted in such circumstances "the spray and pray approach."

For those of us defining the meaning of data, we now need to question the meaning of meaning! What are the numbers really telling us? Are they true or not? How valid are our interpretations of data with no conscience behind it? Does my interpretation hold up in light of other factors? How is the absence of professional conscience affecting the culture, the ethical ecology, and organizational values and their governance? The people who provide financial information have generally been conscience-capable. But as conscience-incapable AI grows in use, the information coming out becomes less credible, leading to the consequences of truth decay as applied to the finance and accounting world. Because AI does not have a conscience, and more and more work is being done by AI, we must not only have healthy consciences ourselves, but also be vigilant about the role of conscience. This means thoughtful and regular attention to our ethics and our culture.


There are at least three tiers of finance and accounting "truth" involved in obtaining financial information for decisions. These tiers must be addressed in any leadership strategy aimed at making the organization more fraud-resistant. They are:

  1. The standards and protocols for collecting information. Do they include measures for meeting materiality standards, spotlighting suspicious anomalies, assuring a reasonable degree of accuracy, and telling the whole story?
  2. The standards and protocols for processing the information collected. How can we ensure that only facts (not opinions) emerge? How can we ensure that no relevant information is omitted and no misleading information is included? Where can we see the dangers of AI potentially lurking?
  3. The judgment used for interpreting the meanings that can be extracted from the information provided. Because interpretation requires judgment, I propose that the derivation of data meanings be isolated from AI applications. In other words, we intentionally include a "human in the loop" in each and every AI application, not only to ensure that the right interpretations and judgments occur, but to ensure proper accountability (a minimal illustrative sketch of such a gate follows this discussion). Judgment is a human activity that requires not only human intelligence, but also critical thinking, guided by conscience, in determining the strategies and tactics used to achieve corporate objectives. Presumably the interpretation process will include standards for psychological materiality (is this significant to key stakeholders beyond just its financial impact?). Financial materiality does not adequately measure the impact of key human experiences, e.g., disappearing coral reefs resulting in eco-grief, gun violence leading to fear and depression, water insecurity leading to a decline in the quality of life, losing trust in the capital markets and in government, and so on.
Some further thoughts about AI and psychological materiality:
  • Can AI invest itself meaningfully in questions of psychological materiality? Take the example of empathy. Expressions of empathy facilitate human understanding and solace, building productive relationships among coworkers. How can an artificially sourced expression of empathy accomplish this goal? How credible can it be?
  • The loss of emotional competence (including empathy) among leaders can be yet another result of AI-generated information. Genuine empathy is lost. Narcissism is encouraged. Toleration of quiet moments for self-expression dwindles. Attention spans shorten. Divided attention becomes weak attention, which becomes insufficient attention. And when dire situations arise, how much attention is required to deal with them?
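
To make the "human in the loop" idea concrete, here is a minimal illustrative sketch in Python of a gate that refuses to act on any AI-generated interpretation requiring judgment until a named human reviewer has signed off. Every name in it (Interpretation, requires_judgment, human_in_the_loop, and the sample finding) is hypothetical, invented for illustration; this is a thought experiment about accountability, not a prescription for any particular system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interpretation:
    """An AI-generated reading of financial data (hypothetical structure)."""
    source: str                        # e.g., "anomaly detector"
    claim: str                         # the meaning the AI extracted
    requires_judgment: bool            # does acting on this claim need human judgment?
    approved_by: Optional[str] = None  # set only after a named human signs off

def human_in_the_loop(interp: Interpretation, reviewer: str, approve: bool) -> Interpretation:
    """Route a judgment-required interpretation through a named human reviewer."""
    if interp.requires_judgment and approve:
        interp.approved_by = reviewer  # accountability attaches to a specific person
    return interp

def act_on(interp: Interpretation) -> None:
    """Refuse to act on judgment-required claims that lack human sign-off."""
    if interp.requires_judgment and interp.approved_by is None:
        raise PermissionError(f"No human has approved: {interp.claim!r}")
    print(f"Acting on: {interp.claim} (approved by {interp.approved_by or 'n/a'})")

# The gate blocks unreviewed AI output, then passes it once a person signs off.
finding = Interpretation("anomaly detector", "Vendor X invoices look inflated", True)
try:
    act_on(finding)   # raises: no human has exercised judgment yet
except PermissionError as err:
    print(err)
act_on(human_in_the_loop(finding, reviewer="the controller", approve=True))
```

The point of the sketch is the accountability property: nothing judgment-required can be acted on unless a specific, named person granted approval, which is exactly what a "human in the loop" arrangement is meant to guarantee.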

What We Leaders Can Do About Operating In An AI World

It is clear to me that the most effective organization-wide strategy to minimize the potential damage that AI and fraud can impose is to strengthen the culture and ethical ecosystem.  Specific steps include:
  1. Values Underlying Behaviors: Demonstrate through personal actions the behaviors desired within the organization and the values behind them.
  2. Trust and Trustworthiness: Build trust by being trustworthy and insisting on trustworthiness at every level of the organization.  This includes actions to reduce organizational politics to a level acceptable to you.  Communicate real and truthful information to the organization on a regular basis, including instances of bad news.
  3. AI Is Without Conscience: Learn all you can about artificial intelligence and its absence of conscience, as well as its productivity potential.
  4. Professionalism: Honor commitments made and cultivate professionalism in personal actions and expectations of others.
  5. Weekly Reminders: A good way to keep these strategies at the forefront is to embed them in your weekly list of what you want to accomplish.
  6. Responsible Use of AI: Implement AI responsibly by relegating its use to tasks that do not require judgment. Establish a list of AI-focused standards to guide those developing processes and procedures. Ensure "human in the loop" arrangements at every turn.
  7. Responsible AI Development and Deployment: Assure that AI research and development is guided by standards of transparency, accountability, and ethical considerations.
  8. AI Education and Awareness: Establish an AI education program for selected employees, to include its benefits, its risks, its deficiencies, its signals, and critical thinking skills for applying this knowledge.
  9. Minimizing Bias: Create systems development standards that are designed to minimize bias in information collection and processing and to promote fairness and the seeking of truth.
  10. Privacy and Confidentiality: Strengthen data privacy policy provisions to protect personally identifiable information (PII) from conscience-lacking intrusions of all kinds.
  11. Universal Standards and Collaboration: Foster global collaboration on AI standards that recognize the need for conscience-guided judgment on AI-generated information.
  12. Distinguishing Fact from Fiction: Demonstrate a lack of tolerance for opinions communicated as facts, and call them out when they occur.
Note: As a business leader, how do I act on any of these recommendations? Do I start an internal task force? Do I hire a consulting firm that can help make my organization more fraud-resistant?
It can be enormously useful and instructive to refer to the May 30, 2024 Corrigendum version of the European Union's Artificial Intelligence Act of 2024. See the "High-level summary of the AI Act" (EU Artificial Intelligence Act).

Concluding Thoughts
AI is a business resource with impressive benefits and equally impressive risks. Unleashed without standards to manage those risks, it can distort the meaning(s) of information used throughout the organization. It may be able to distinguish between what is legal and what is illegal (based on the facts of law), but it operates without a moral compass in "knowing" what is right and what is wrong. That is where human judgment enters the picture.

As collectors, interpreters, and users of financial information, it is incumbent on us to protect judgment-required issues from AI influence and to reserve them for the judgment with which human beings are endowed. Bottom line: the world may become awash with AI, but it is human beings who must reign supreme and remain in control. Going forward, the role of the Chief Artificial Intelligence Officer promises to become critically important.

Parting quotes:
“Machine intelligence is the last invention that humanity will ever have to make.”
---Nick Bostrom
“The problem with humanity is paleolithic emotions, medieval institutions, and Godlike technology.”
---Edward O. Wilson

Acknowledgments: Author Sri Ramamoorti would like to thank Mr. Jack Bigelow and Daven Morrison, MD, of the Behavioral Forensics Group LLC, as well as Ms. Sue Kirchner for their help with earlier versions of this article.