Legal & Tax Updates
The National Privacy Commission Issues Guidelines on the Application of the Data Privacy Act to Artificial Intelligence Systems Processing Personal Data
The National Privacy Commission (“NPC”) issued NPC Advisory No. 2024-04 (“Advisory” or “Guidelines”), which provides guidance on privacy concerns related to artificial intelligence (“AI”) systems, particularly when these systems process personal data.
The Advisory applies whenever personal data is processed at any stage of the AI lifecycle, including development, training, testing, and deployment, and regardless of whether the AI systems are used for automation, data analysis, or decision-making.
Personal Information Controllers (“PICs”) and their Personal Information Processors (“PIPs”) must comply with the Data Privacy Act of 2012 (“DPA”) and its Implementing Rules and Regulations (“IRR”) when processing personal data for AI systems. This includes adhering to core privacy principles, protecting data subjects’ rights, determining the appropriate lawful bases for data processing, and ensuring robust security measures.
The following is an overview of the key principles and obligations that PICs must observe in the development, deployment, and testing of AI systems.
Applying the DPA to AI Systems
Transparency in Data Processing
PICs are required to inform data subjects about the nature, purpose, and extent of their personal data processing. This includes:
- Explaining the purpose and inputs used by the AI system;
- Outlining any risks and expected outcomes of AI processing;
- Describing the impact of AI systems on data subjects; and
- Informing data subjects of any available dispute resolution mechanisms.
All information should be accessible and presented in simple, clear, and plain language. Technical terms should be explained in a way that the target audience can understand.
Accountability for AI Processing
PICs are accountable for the processing of personal data within AI systems. This responsibility extends to the outcomes of the AI systems and any actions carried out by subcontracted PIPs.
PICs must demonstrate compliance by maintaining necessary documentation and policies to show that they adhere to the DPA and its regulations. PICs must be able to prove that they have effective policies and procedures in place, including those for AI-related data processing. These measures should be regularly reviewed and updated.
To ensure the responsible and ethical development of AI systems, PICs should implement robust governance mechanisms which may include:
- Conducting Privacy Impact Assessments (“PIAs”)
- Integrating privacy-by-design and privacy-by-default principles
- Adopting industry security standards
- Continuously monitoring AI systems
- Creating a dedicated AI ethics board
- Regularly retraining AI systems to improve accuracy
- Enabling human intervention in decision-making when necessary
For AI systems that involve automated decision-making, additional safeguards must be put in place to allow meaningful human intervention, especially when decisions may significantly affect data subjects’ rights.
Fairness in AI Systems
PICs must ensure that personal data is processed in a manner that is fair and non-manipulative. They should implement mechanisms to identify, monitor, and reduce biases in AI systems, including addressing systemic, human, and statistical biases.
PICs should avoid practices such as “AI washing,” which is the overstatement of AI’s involvement to the detriment of data subjects. They must ensure that AI systems are used in a manner that respects the rights and freedoms of data subjects.
Accuracy of Personal Data
To maintain fairness in AI outputs, PICs must ensure that the personal data used is accurate, up-to-date, and relevant. Measures should be put in place to verify the accuracy of personal data and prevent the use of outdated or incorrect data in AI processes.
Data Minimization
PICs should apply the principle of data minimization, meaning they should only use personal data that is necessary for the development or deployment of AI systems. Data that does not directly contribute to the improvement or testing of the AI system should be excluded from processing by default.
Lawful Basis for Processing
Before processing personal data for AI systems, PICs must determine the appropriate lawful basis under Sections 12 and 13 of the DPA. Publicly available personal data remains protected under the DPA and must still meet the legal criteria for processing, even if it has been made accessible to the public.
Rights of Data Subjects in AI Systems
The processing of personal data in AI systems can impact data subjects’ ability to exercise their rights. PICs must implement mechanisms to facilitate the exercise of these rights, including the right to object, right to rectification, and right to erasure.
Mechanisms for Exercising Data Subject Rights
PICs must ensure effective means for data subjects to exercise their rights. This includes:
- Providing alternatives when a data subject’s rights cannot be given full effect
- Informing data subjects about the scope and consequences of exercising their rights
- Clearly explaining and transparently justifying the denial of requests that cannot feasibly be fulfilled
Exercising Rights Throughout the AI Process
Data subjects must be able to exercise their rights before, during, and after the development or deployment of AI systems. This includes having access to their data, correcting inaccuracies, and requesting data erasure when appropriate.
Meaningful Exercise of Rights
Incorporating personal data into AI systems does not automatically negate the right of data subjects to request access, rectification, or erasure. PICs are responsible for implementing measures to allow the meaningful exercise of these rights, even if the data is part of a larger dataset or AI model.