Legal & Tax Updates
The Philippine Supreme Court Introduces Human-Centered AI Governance Rules
The Supreme Court of the Philippines has formally adopted the Governance Framework on the Use of Human-Centered Augmented Intelligence in the Judiciary (A.M. No. 25-11-28-SC), a landmark resolution aimed at integrating advanced technology into the administration of justice. This framework, a key component of the Strategic Plan for Judicial Innovations 2022-2027 (SPJI), seeks to enhance operational efficiency and expand access to justice while strictly adhering to ethical standards. It establishes a comprehensive regulatory environment for the development and deployment of artificial intelligence (“AI”) tools, ensuring that technological progress remains anchored in the rule of law, transparency, and accountability.
Central to this policy is the concept of “Human-Centered Augmented Intelligence,” defined as technology that empowers human cognitive skills rather than supplanting human judgment. The framework’s scope is broad, governing the conduct of members of the Judiciary, court personnel, and “court users” (expressly including members of the Bar and litigants) as well as third-party vendors engaged in judicial technology. It mandates that AI be used primarily as a support mechanism, emphasizing that such use must be reasonable, proportional, and the least intrusive or restrictive means to achieve a legitimate judicial objective.
For legal practitioners and litigants, the framework introduces stringent transparency and disclosure mandates. Any use of AI tools in the preparation of court-bound documents, legal research, or document summarization must be clearly disclosed in plain language. These disclosures must specify: 1) the AI tool/s used, 2) the degree of AI involvement, 3) the extent of human control and oversight, 4) a statement that the AI output has been preserved by the user and is available in case of inquiry or request, 5) compliance with the Framework, and 6) that the user bears ultimate responsibility for the work or output produced.
Furthermore, developers of AI tools must disclose 1) the logic, 2) the limitations, and 3) the safeguards undertaken to ensure transparency and accountability. These disclosures ensure that human-centered augmented intelligence tools used in the Judiciary remain auditable and traceable.
Because human control must remain paramount, the framework explicitly prohibits AI from serving as the sole, primary, or determinative basis for any adjudicatory outcome. Legal reasoning and final conclusions must be independently formed by human decision-makers, who retain ultimate responsibility for all outputs produced by AI tools. The framework defines three levels of human involvement: Human-in-the-loop (HITL), where AI provides recommendations; Human-on-the-loop (HOTL), focusing on design and monitoring; and Human-in-command (HIC), ensuring comprehensive control over when and how a tool is used. Crucially, users cannot evade liability for ethical or legal breaches by claiming the error was the "fault" of the AI tool.

To oversee this transition, the Court has established a permanent AI Committee tasked with evaluating emerging tools, managing algorithmic bias, and responding to incident reports or complaints. The framework adopts a risk-management approach, classifying systems from "minimal-risk" to "prohibited," with the latter category including AI used for cognitive behavioral manipulation or unauthorized real-time biometrics-based identification and tracking of people.

As the legal landscape evolves, this framework serves as a vital guide for ensuring that innovation within the Philippine Judiciary is sustainable, ethically governed, and consistently protective of privacy and data protection. To this end, the Court has committed to fostering both domestic and international cooperation to adopt best practices from the AI governance frameworks of other jurisdictions, while simultaneously advancing the development of national AI policies.
