Hall Chadwick Insights

When AI Writes Reports: New Ethics and Risks in Corporate Governance

 

1. AI in the Corporate Governance Landscape

In many companies, AI has evolved from a supporting tool into a capable colleague that drafts meeting minutes, summarizes financial reports, and prepares sustainability disclosures. It saves time, improves efficiency, and helps organize large amounts of unstructured data. As generative AI becomes part of governance processes, it enhances reporting speed and quality but also introduces new management risks. When the content of a report is inaccurate, when a model produces biased outcomes, or when data sources are unclear, the question of responsibility becomes a new governance issue.

2. Internal Controls and Data Trust Challenges

The introduction of generative AI is transforming the preparation of financial statements and sustainability disclosures, while also testing the effectiveness of internal control and audit systems. Although AI can improve efficiency, it also introduces new risks in data processing and report generation. These risks are not limited to technical errors; they challenge the design of governance frameworks and the boundaries of responsibility.

  1. Unclear Accountability: When AI-generated reports contain errors, biased calculations, or unverified data sources, it becomes difficult to determine who is responsible. This uncertainty undermines internal control systems that emphasize traceability and human verification.
  2. Challenges to Internal Control: The “black box” nature of generative AI makes it difficult for accountants to understand how results are derived or to verify their consistency. When companies rely on AI to automatically generate reports or conduct risk analysis, these processes lack verifiable control points, weakening accountability chains.
  3. Data Governance and Disclosure Risks: When used in sustainability reporting or ESG analysis, AI may rely on unlicensed datasets or generate biased content, compromising report credibility. In the future, companies will need to explain not only the results disclosed but also how data are generated and validated.
  4. Need for Model Audits: Accountants must not only verify financial information but also evaluate the design logic and data sources of AI models, increasing the complexity of audit work.
To address these risks, companies should take concrete action across four dimensions: governance design, professional support, ethical management, and international collaboration. The sections that follow examine each of these in turn.

3. Japan's Response and Taiwan's Challenges

  1. Policy and Practice in Japan: The Japanese government and industry are beginning to recognize how generative AI affects governance transparency. According to the Ministry of Economy, Trade and Industry’s 2024 Generative AI Utilization Guidebook (生成式AI活用ガイドブック), companies introducing AI should ensure explainability, traceability, and accountability, and establish internal oversight functions. Major corporations such as Hitachi and Fujitsu have created AI ethics committees integrating model risk management with internal control systems. The Financial Services Agency has also started studying how AI usage might affect the J-SOX reporting framework. Japan’s traditional governance culture, which values personal trust and self-discipline, is gradually being translated into formal, verifiable institutional mechanisms.
     
  2. The Situation in Taiwan: Taiwanese companies have invested in financial digitalization and sustainability reporting automation, but AI governance remains at an early stage. The Financial Supervisory Commission and the Accounting Research and Development Foundation are reviewing the compliance of AI-generated financial statements, yet no formal guidelines exist. Most firms still treat AI as an efficiency tool rather than a subject of governance, leaving accountability structures incomplete. As the IFRS S1 and S2 sustainability standards take effect in 2026, the question of whether AI-generated data can be verified will become increasingly important. Japan’s experience with model auditing and ethical oversight offers a valuable reference for Taiwan.

4. Professional Services and Future Directions

  1. Strengthening Governance Design: To integrate AI into a company’s governance framework, it is necessary to formally include “AI usage policies” within the internal control system. These should cover criteria for model selection, quality review of training data, procedures for verifying outputs, and accountability for corrections when errors occur. Large corporations may establish dedicated AI oversight committees or technology risk teams, while small and medium-sized enterprises can engage external consultants to implement basic monitoring and audit processes. Such measures ensure that AI usage is properly documented and help prevent accountability gaps caused by the technology’s black-box nature.
     
  2. Expanding Professional Roles: As generative AI becomes widely adopted, accounting firms and consulting companies are entering the field of “AI assurance services.” These services go beyond traditional financial audits, extending to model risk assessment, data validation, and algorithm transparency reviews. Some firms have begun developing AI model review checklists to help companies verify data compliance, ensure traceability of the generation process, and confirm the reproducibility of outputs. The emergence of such services allows companies to evaluate their systems from an external third-party perspective, strengthening public trust in reported information.
     
  3. Updating Ethics and Corporate Culture: AI governance ultimately depends on corporate culture. If management focuses solely on efficiency while neglecting information security and fairness, even the most comprehensive systems will fail to function effectively in the long run. Companies can begin by providing internal education and training to build awareness of data ethics, model bias, and transparency in disclosure. Some Japanese companies have already incorporated AI usage guidelines into their employee codes of conduct, requiring that generated content include source attribution and prohibiting the input of personal or client information. These practices move ethics from mere declarations to a tangible part of daily business operations.

5. Bringing Technology Back into the Framework of Responsibility

Japan and Taiwan are moving in different yet complementary directions in AI governance. Japan is building an institutional foundation through policy guidance and industry self-regulation, while Taiwan can stay aligned with global sustainability reporting standards by adopting frameworks and audit mechanisms at an early stage.

The widespread use of generative AI is reshaping the foundations of corporate governance, and the ability to maintain trust amid technological change will depend on how governance systems and ethical principles are designed. As AI technologies gradually enter decision-making, reporting, and auditing processes, companies need clearer rules on how AI is used and where accountability lies. Otherwise, efficiency gains may come at the cost of new governance risks.

Japan faces the challenge of keeping its institutional frameworks flexible, while Taiwan needs to establish transparent and traceable usage records as soon as possible. If both countries can share experiences in model auditing, data disclosure, and risk management, a more mature governance model may emerge. The integration of AI marks a new dimension in corporate governance. When technology becomes part of an institutional framework and is subject to continuous review, it can function as an element of governance rather than a source of risk.


Read more: The New Challenges of Taiwan–Japan Cooperation in the Age of AI