Technology

AI in Finance: Balancing Innovation, Accuracy, and Audit Readiness – A Q&A with Deloitte’s Ryan Hittner


Ryan Hittner shares practical steps finance leaders can take to implement AI confidently while maintaining trust and accuracy in their reporting processes.

As the finance and accounting landscape continues to evolve with the rapid advancement of artificial intelligence, leaders are seeking insights on how to navigate this transformation responsibly and effectively. In the lead-up to the AI in Finance and Accounting 2025: Managing Governance, Adapting the Workforce event taking place April 9–10, we spoke with Ryan Hittner, Audit & Assurance principal at Deloitte & Touche LLP, to discuss how organizations are leveraging both traditional and generative AI, what this means for audit readiness, and the critical importance of governance, transparency, and human oversight. In this Q&A, Ryan shares practical steps finance leaders can take to implement AI confidently while maintaining trust and accuracy in their reporting processes.

FEI Weekly: What are some ways that Artificial Intelligence (AI) is currently being used in the finance function? Are we starting to see wider adoption of AI?

Ryan Hittner: Organizations are rapidly exploring how to leverage the enhanced capabilities of traditional and Generative AI (GenAI) through applications that can aid professionals in the finance and accounting functions. This includes increasing the capabilities of more established intelligent automation solutions (e.g., three-way or two-way match controls to record certain transactions) as well as leveraging GenAI to assist with creating new content (e.g., research and drafting documents/memos) or enhancing existing content (e.g., contract review and extraction) in a more flexible and efficient manner.  

When implementing AI within the finance and accounting functions, it is important for organizations to consider how AI may have an impact on their audits, both by internal and external auditors, and adapt their governance and oversight of AI.   

FEI Weekly: What are some steps organizations can take to address AI’s impact on their audits?

Hittner: While AI can introduce a variety of risks to organizations, the risks for finance and accounting functions typically center on the accuracy of AI-generated outputs and transparency into AI-enabled processes. Because various stakeholders rely on the information coming out of the finance and accounting functions, organizations will not only need high confidence in their AI applications’ performance, but they will also have to demonstrate why they are confident. This is a complex task with no universal solutions, and it will require a diverse set of skills, experience, and tools. To move forward, however, organizations may consider implementing comprehensive procedures and controls across the following areas:

  • Human oversight and transparency – Maintaining human review in the AI lifecycle where possible and being transparent with stakeholders about where and how AI is being used.
  • Data management and maintaining an audit trail – Implementing data quality controls is vital to maintaining the accuracy and relevance of data used by AI models. Archiving the inputs fed into AI models and the outputs they generate, and implementing change management for changes to the AI models themselves or to the data sets they use, is important for maintaining an audit trail.
  • Testing and ongoing monitoring – Implementing robust development and validation testing measures to evidence to internal and external stakeholders that management is monitoring and addressing the risks associated with the use of AI models.
  • Documentation and reporting – These enable transparency and traceability and provide a clear record of key decisions around AI models.

FEI Weekly: Human oversight in AI remains critical to ensure control and accuracy. How does an organization balance human review with automation?

Hittner: Once an organization’s AI system shows consistent and reliable outputs through pre-deployment testing and a sustained period of performance, the organization can consider automating some of the human review process. This may include reviewing a sample of the AI system’s actions rather than performing a 100% review; that way, you can periodically assess the reliability of AI outputs without overwhelming human resources. Users can implement automated monitoring controls to identify anomalies or deviations, which then prompt human intervention as required. This balanced approach harnesses the advantages of both AI and human oversight, enhancing efficiency while upholding high standards of accuracy.
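To make the balance concrete, the routing logic described above can be sketched in a few lines of Python. This is a minimal illustration, not a Deloitte methodology: the 10% sample rate, the confidence threshold, and the `needs_human_review` function are all illustrative assumptions.

```python
import random

SAMPLE_RATE = 0.10           # assumed fraction of routine outputs sampled for review
CONFIDENCE_THRESHOLD = 0.80  # assumed floor; outputs below it always get human review

def needs_human_review(output: dict, rng: random.Random) -> bool:
    """Decide whether an AI output is routed to a human reviewer."""
    # Anomalies (here, low model confidence) always trigger human intervention.
    if output.get("confidence", 0.0) < CONFIDENCE_THRESHOLD:
        return True
    # Otherwise, review only a periodic random sample of actions.
    return rng.random() < SAMPLE_RATE
```

In practice, the anomaly check would be replaced by whatever automated monitoring controls the organization has implemented (deviation from historical ranges, rule violations, and so on), with the random sample providing the ongoing reliability check.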

FEI Weekly: Can you expand on the role data management plays in supporting transparency and accuracy?

Hittner: The quality and accuracy of data directly influence the performance and reliability of AI models. Incomplete or inaccurate data can lead to flawed outcomes, which can then result in financial losses, regulatory and compliance issues, or reputational damage. Organizations need to implement effective data management practices to not only verify the quality and accuracy of data but also to maintain an audit trail that enables transparency on AI model performance. As mentioned previously, these practices include several elements:

One element is data quality controls. This goes beyond verifying that data used by AI models is accurate, complete, and representative of the scenarios the AI will encounter; it also includes ensuring appropriate data integration and consistency. Data integration and consistency matter because organizations may combine data from various sources into a more comprehensive dataset to improve the effectiveness of their AI models.
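A data quality control of this kind can be as simple as a pre-ingestion validation pass. The sketch below is a hypothetical example; the required fields and the numeric rule are illustrative assumptions, not a prescribed control set.

```python
# Assumed required fields for an illustrative invoice-processing use case.
REQUIRED_FIELDS = ("invoice_id", "amount", "currency", "date")

def quality_issues(record: dict) -> list:
    """Return a list of data quality issues found in one input record."""
    issues = []
    # Completeness check: every required field present and non-empty.
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            issues.append(f"missing {field}")
    # Consistency check: amounts must be numeric and non-negative.
    amount = record.get("amount")
    if amount is not None and (not isinstance(amount, (int, float)) or amount < 0):
        issues.append("invalid amount")
    return issues
```

Records that return a non-empty issue list would be quarantined or corrected before reaching the AI model, and the issue counts themselves become evidence for the data quality control.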

Another element is archiving inputs and outputs. This is important because if you don’t track the inputs fed into AI models and the outputs generated, it can be difficult to facilitate effective review (i.e., to understand the origin and timing of data used by AI models and to trace outputs back to inputs to assess consistency and reliability) and to maintain control evidence. A leading practice is for organizations to record details such as the name/title of the dataset, its source, prompts, and timestamps for inputs, and to link them to the related outputs.
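The linkage described above can be sketched as an append-only audit log, where each record ties an output to the exact inputs that produced it. The field names and the hash-based link are illustrative assumptions for the sketch, not a prescribed schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(dataset_name: str, source: str, prompt: str, output: str) -> dict:
    """Build one audit-trail record linking an AI output to its inputs."""
    return {
        "dataset_name": dataset_name,
        "source": source,
        "prompt": prompt,
        "output": output,
        # Content hash ties the archived output to its exact inputs.
        "input_hash": hashlib.sha256(
            (dataset_name + source + prompt).encode("utf-8")
        ).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def append_to_log(path: str, record: dict) -> None:
    """Append one JSON line to the audit log (append-only by convention)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Because each line carries the dataset name, source, prompt, timestamp, and a hash of the inputs, a reviewer can later trace any archived output back to the data that produced it.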

The final element that I want to highlight is change management. As changes in AI models and their underlying inputs can lead to significant variations in performance and outputs, it is important to maintain detailed documentation of model changes, including the rationale for changes, the expected impact, and the results of any validation testing performed. This documentation supports a robust audit trail by providing a clear history of model evolution and implications of changes.
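A change-management record of the kind described above can be represented as a small structured object. This is a hypothetical sketch; the class name, fields, and example values are all illustrative assumptions.

```python
from dataclasses import asdict, dataclass, field
from datetime import date

@dataclass
class ModelChangeRecord:
    """One entry in an AI model's change history."""
    model_name: str
    version: str
    change_date: date
    rationale: str            # why the change was made
    expected_impact: str      # what the change is expected to do
    validation_results: dict = field(default_factory=dict)

# Illustrative entry for a hypothetical invoice-classification model.
entry = ModelChangeRecord(
    model_name="invoice-classifier",
    version="2.1.0",
    change_date=date(2025, 4, 1),
    rationale="Retrained on Q1 data to cover new vendor formats",
    expected_impact="Improved recall on non-standard invoices",
    validation_results={"accuracy": 0.97, "sample_size": 500},
)
```

Accumulating these records over time yields the clear history of model evolution that the audit trail requires.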

FEI Weekly: How important is it to continuously test and monitor AI systems? What are some of the benefits of ongoing monitoring?

Hittner: The key difference between AI models and other algorithmic models is that AI models change based on training or input data rather than solely through human intervention. As such, it is important to test and monitor an AI system’s performance on an ongoing basis rather than only when the system is implemented. While the testing metrics and methods may differ based on the use case and model type, it is important to evaluate the system’s outputs against expected results so that inaccuracies can be identified and fixed. Given that many organizations work with various vendors or providers for their AI systems, organizations may need to understand the testing and monitoring performed by the vendor or provider if they are not performing it themselves. Robust testing and ongoing monitoring procedures can be vital for finance and accounting functions leveraging AI solutions to demonstrate to internal and external stakeholders that they are regularly assessing the accuracy, reliability, and compliance of their AI systems.
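Evaluating outputs against expected results can be sketched as a simple accuracy check with an alert threshold. The threshold value, exact-match scoring, and function names below are illustrative assumptions; real monitoring would use metrics suited to the use case and model type.

```python
ACCURACY_THRESHOLD = 0.95  # assumed alert floor, for illustration only

def evaluate_batch(pairs) -> float:
    """Score a batch of (ai_output, expected_result) pairs; return accuracy."""
    pairs = list(pairs)
    if not pairs:
        raise ValueError("no results to evaluate")
    correct = sum(1 for out, expected in pairs if out == expected)
    return correct / len(pairs)

def monitor(pairs) -> dict:
    """One monitoring cycle: compute accuracy and flag drift below threshold."""
    accuracy = evaluate_batch(pairs)
    # In practice the alert would feed a dashboard or ticketing workflow.
    return {"accuracy": accuracy, "alert": accuracy < ACCURACY_THRESHOLD}
```

Running such a cycle on a recurring schedule, and archiving each result, produces the kind of evidence of ongoing monitoring that internal and external stakeholders look for.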

FEI Weekly: Why is it important for accounting and finance leaders to attend the AI in Finance and Accounting 2025: Managing Governance, Adapting the Workforce conference? What should they expect from Deloitte's session?

Hittner: As organizations explore how to leverage the enhanced capabilities of traditional AI and GenAI in their accounting and financial reporting, it is important to evaluate how these capabilities may impact their internal and external audits, and how to support the accuracy of AI-generated outputs and sufficient transparency around AI-enabled processes.

Join me and my colleagues, Morgan Dove and Mark Hughes, during our session, AI in Accounting and Financial Reporting: How might it impact your audit? We’ll provide an overview of AI in accounting and financial reporting and discuss leading practices for supporting accuracy and transparency when implementing AI. We encourage you to register now and look forward to seeing you on April 9th.