AI Washing: A New Risk on the CFO’s Radar

In boardrooms and on earnings calls alike, artificial intelligence is having its moment. But as AI continues to dominate corporate narratives, finance executives must contend with a growing challenge: distinguishing genuine innovation from “AI washing” and preparing for the regulatory scrutiny that’s quickly catching up.

What Is AI Washing?

Much like the greenwashing of the ESG era, AI washing refers to the practice of overstating, misrepresenting, or outright fabricating a company’s use of AI technologies. It’s often driven by pressure to appear cutting-edge to investors, customers, or regulators. But as Ryan Adams and Scott Lesmes of Morrison Foerster noted in their recent keynote, the consequences of AI washing are no longer theoretical—they’re already showing up in SEC enforcement.

In March 2024, the SEC brought its first enforcement actions related to AI disclosures. Two investment advisers paid penalties for misrepresenting their AI capabilities—one claiming to use “expert, AI-driven forecasts” that didn’t exist, the other promoting a machine learning platform that was never actually implemented. A third case targeted a startup CEO for touting AI-powered hiring tools that hadn’t been developed.

Whether public or private, companies making claims about AI must now answer a tough question: Can you substantiate what you’re saying?

SEC Signals and Disclosure Considerations

Though the SEC has not (yet) issued AI-specific rules, the agency has been clear that existing disclosure frameworks—particularly around risk factors, MD&A, and material cybersecurity incidents—already apply.

What’s new is the need to evaluate qualitative materiality. “Finance and accounting teams are used to focusing on quantitative thresholds,” said Adams. “But with AI, qualitative impacts—like reputational damage, ethical failures, or systemic model bias—can be just as material.”

For example, if your company is investing heavily in third-party AI solutions, that could introduce data privacy or IP risks. If you’re developing proprietary models, consider how their performance might impact product quality or customer trust. And if you're touting AI-driven improvements in efficiency or forecasting, be ready to support those claims with audit-ready evidence.

In short, risk factor disclosures should avoid boilerplate language and instead home in on risks specific to your business. Material use of AI in operations, decision-making, or strategy may also warrant discussion in MD&A—especially if it introduces capital expenditures, workforce changes, or competitive dynamics.

The Governance Imperative

For CFOs, the AI conversation must extend beyond compliance—it’s fundamentally about risk management. That includes establishing internal oversight, ensuring audit committees are informed, and setting clear lines of accountability.

Lesmes pointed out that many companies are forming cross-functional AI governance committees, often anchored in legal, finance, compliance, and IT. These teams are tasked with evaluating use cases, tracking third-party vendor risks, and ensuring ethical AI practices. While some organizations have begun appointing Chief AI Officers, most are still bolting AI oversight onto existing leadership roles.

At the board level, AI oversight is still emerging. Only 11–15% of S&P 500 companies currently disclose board-level involvement in AI oversight, though that number is expected to rise. For most, AI oversight is falling under the audit or risk committee’s purview—a natural fit given their role in internal controls and regulatory compliance.

What Finance Executives Should Do Now

With AI investment accelerating and regulatory expectations sharpening, finance and accounting leaders should take proactive steps:

  1. Inventory AI Use: Understand where AI is being used across the enterprise—from predictive analytics in FP&A to generative tools in marketing or HR.

  2. Assess Materiality: Evaluate which AI-related risks or dependencies are material to your financials, controls, or strategic outlook.

  3. Establish Oversight: Ensure there’s a clear governance structure that includes finance, legal, and compliance voices. Consider designating a lead AI risk officer.

  4. Vet Public Claims: Implement legal and finance review processes for investor-facing materials, especially earnings calls, press releases, and marketing collateral that reference AI.

  5. Train the Team: Ensure your finance team understands the basics of how AI works, its risks, and the implications for financial reporting and disclosure.

As Adams aptly put it, AI governance is about “creating a framework to confine the AI”—to ensure it is used responsibly and doesn’t expose the business to unnecessary risk. For finance leaders, that means approaching AI not just as a technology issue, but as a strategic, ethical, and regulatory one.

To learn more, register for FEI's 2025 Financial Leadership Summit.