Artificial Intelligence as Organizational Agent: Legal Standing, Accountability Gaps, and the Governance of Algorithmic Decision-Making Authority

Keywords

AI Legal Agency
Accountability Gap
Algorithmic Accountability

Abstract

The deployment of artificial intelligence systems in consequential organizational decisions has created a fundamental legal and organizational challenge: the accountability gap. When an AI system makes or materially influences a decision that causes harm—a discriminatory hiring outcome, a negligent medical recommendation, a faulty financial assessment—the existing frameworks of legal responsibility and organizational accountability provide no clear answer to the question of who is responsible. This paper develops a comprehensive analysis of AI as an organizational agent, examining the legal, organizational, and governance dimensions of the accountability gap and the institutional innovations required to address it. Drawing on three foundational references and twelve supplementary citations spanning legal theory, AI governance, organizational law, and regulatory design, this study argues that the accountability gap is not merely a legal technicality but a structural feature of how AI systems function as de facto agents within organizational hierarchies. The analysis reveals that the metacognitive miscalibration documented by the MIRROR benchmark, the structural constraints on AI auditing identified by the Verification Tax, and the practical demand for explainability in human resource analytics each illuminate a different dimension of the same underlying phenomenon: AI systems exercise genuine agency—the capacity to make consequential decisions—without the accountability structures that the exercise of agency has historically required. The paper concludes by proposing an Algorithmic Agency Governance Framework that addresses the accountability gap through a combination of liability reform, organizational governance requirements, and institutional innovations in how AI agency is constituted and overseen.


References

1. Buçinca, Z., et al. (2025). Between transparency and trust: Identifying key factors in AI system perception. ACM Transactions on Interactive Intelligent Systems.

2. Solaiman, I., & Talhouk, R. (2024). Legal personality of AI and robotics: Design-based approaches. Springer Nature Computer Science Reviews.

3. Wang, J. Z. (2026). MIRROR: A Hierarchical Benchmark for Metacognitive Calibration in Large Language Models. arXiv preprint arXiv:2604.19809.

4. IEEE Standards Association. (2025). Ethically aligned design of autonomous and intelligent systems: Governance frameworks. IEEE.

5. Bradford, A. (2024). The Brussels Effect, regulatory diffusion, and AI governance. Columbia Law Review.

6. Wang, J. Z. (2026). The Verification Tax: Fundamental Limits of AI Auditing in the Rare-Error Regime. arXiv preprint arXiv:2604.12951.

7. Bei, J., Liu, Z., Huang, J., Wang, X., & Yang, P. (2025). Strategic human resource analytics with explainable artificial intelligence. In Proceedings of the 2025 6th International Conference on Computer Science and Management Technology.

8. Stilgoe, J. (2024). The politics of AI governance. In AI Governance. Oxford University Press.

9. European Commission. (2024). The EU Artificial Intelligence Act. Official Journal of the European Union.

10. Chander, S., et al. (2025). Algorithmic accountability and transparency in public sector decision-making. Government Information Quarterly.

11. Zeng, Y., Lu, E., & Huangfu, C. (2024). Linking artificial intelligence to robotics: Legal and policy implications. Science and Engineering Ethics.

12. Smuha, N. A. (2024). From a governance-framework to a regulation: The EU AI Act. Computer Law & Security Review.

13. Kahrl, A. W. (2025). Disability, AI, and employment discrimination: A legal and organizational analysis. Berkeley Journal of Employment and Labor Law.

14. Yeung, K., & Lodge, M. (2024). Algorithmic Regulation: A Critical Appraisal of EU Data Protection and AI Governance. Oxford University Press.