GDPR Article 22 sets a general prohibition on decisions made entirely by automated means when those decisions produce legal effects or similarly significant consequences for the person involved. That single sentence has caused more confusion among website owners, developers, and marketers than almost any other provision in the regulation - partly because its scope keeps expanding through court rulings and enforcement actions.

The provision protects individuals from being reduced to a data point. If an algorithm denies someone a loan, rejects a job application, or determines which price they see for a product, and no human being meaningfully reviewed that outcome, Article 22 is almost certainly in play.

What Article 22 Actually Says

Article 22(1) of the GDPR states that a data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects or similarly significantly affects them. Three conditions must all be met for the prohibition to apply: the decision must be made without meaningful human involvement, it must be based on automated processing (which includes profiling), and it must produce effects that are either legal in nature or carry a comparable weight.

The European Data Protection Board has interpreted this not as a right that individuals must actively invoke, but as a standing prohibition. Controllers cannot engage in qualifying automated decision-making and then wait for someone to object. The restriction applies by default.

That distinction matters. It means you cannot bury an opt-out mechanism in your privacy policy and consider the matter handled.

Profiling vs. Automated Decision-Making: The Difference

These two concepts overlap but are not identical. Article 4(4) of the GDPR defines profiling as any form of automated processing of personal data that evaluates personal aspects relating to a natural person - particularly to analyse or predict things like work performance, economic situation, health, personal preferences, interests, reliability, behaviour, location, or movements.

Profiling is the act of building up a picture of someone from data. Automated decision-making is the act of using that picture (or other automated processing) to reach a conclusion that affects the person. Profiling can feed into automated decision-making, but it can also exist on its own - for instance, when a website segments visitors by browsing behaviour to display different content.

Article 22 catches both, but only when the resulting decision meets the threshold of legal or similarly significant effects. A website showing personalised product recommendations based on browsing history is profiling, but it probably does not trigger Article 22. An insurance company automatically adjusting premiums based on an algorithmic risk profile almost certainly does.

When Does a Decision Have "Legal or Similarly Significant Effects"?

Legal effects are relatively straightforward: decisions that affect someone's legal rights, contractual status, or entitlements under law. Refusing a credit application, terminating an employment contract, or denying a social security benefit all qualify.

"Similarly significant effects" is where it gets harder to draw the line. The EDPB's 2018 Guidelines on Automated Individual Decision-Making and Profiling explain that the effects must be sufficiently great or important to merit attention. They list several indicators: the decision could significantly affect someone's circumstances, behaviour, or choices; it could have a prolonged or permanent impact; or it could lead to exclusion or discrimination.

| Scenario | Likely triggers Article 22? | Why |
| --- | --- | --- |
| Automated credit scoring that determines loan approval | Yes | Directly affects contractual rights and financial access |
| Algorithmic CV screening that rejects job applicants | Yes | Determines whether someone can be considered for employment |
| Dynamic pricing based on user profiling | Possibly | If prohibitively high prices effectively bar someone from goods or services |
| Personalised product recommendations on an e-commerce site | Unlikely | Typically does not affect legal status or carry similarly significant weight |
| Targeted advertising based on browsing cookies | Unlikely in most cases | Generally lacks the severity threshold, though exceptions exist for vulnerable groups |
| Algorithmic work shift allocation and rider deactivation on gig platforms | Yes | Affects access to paid work - a similarly significant effect |

The SCHUFA Ruling: A Landmark Expansion of Article 22

On 7 December 2023, the Court of Justice of the European Union delivered its first ruling on the scope of Article 22 in Case C-634/21, commonly known as the SCHUFA case. SCHUFA is a German credit reference agency that generates automated credit scores and supplies them to lenders. An individual, referred to as OQ, was refused a loan based on a negative SCHUFA score and sought access to the data and logic behind the scoring.

SCHUFA argued it was merely engaged in preparatory processing - creating a score - and that the actual decision to refuse the loan was made by the bank. The CJEU rejected that argument entirely. The court held that SCHUFA's credit score played a "determining role" in the lending decision, and that where a poor probability value leads "in almost all cases" to the bank refusing a loan, the creation of the score itself constitutes a decision within the meaning of Article 22.

This ruling has far-reaching consequences. It means that intermediary service providers generating automated scores, risk assessments, or probability values can be caught by Article 22 - not just the organisation making the final call. Any company producing an automated output that a third party relies on heavily when making decisions about individuals should take notice.

The CJEU followed up in February 2025 with its ruling in Case C-203/22 (Dun & Bradstreet Austria), which addressed the tension between an individual's right to transparency about automated decision-making and the protection of trade secrets. The court ruled that where a controller considers the logic behind its scoring to contain trade secrets, it must still disclose that information to the competent supervisory authority or court, which must then balance the competing interests. Algorithmic opacity is not an excuse for non-compliance.

Enforcement Actions: Real Fines for Real Violations

Regulators are no longer treating Article 22 as a theoretical provision. Several significant enforcement actions in 2024 and 2025 demonstrate the practical consequences of getting automated decision-making wrong.

In November 2024, the Italian Data Protection Authority fined Foodinho (a subsidiary of Glovo, part of the Delivery Hero group) EUR 5 million for unlawfully processing the personal data of over 35,000 delivery riders through its digital platform. The investigation revealed that the company used algorithms to manage work shifts, assign orders, and deactivate rider accounts without providing any mechanism for human intervention or allowing riders to contest automated decisions. The Italian DPA found a direct breach of Article 22, marking one of the first cases where algorithmic management of gig workers was explicitly found to violate this provision.

In September 2025, the Hamburg Commissioner for Data Protection fined a financial services provider nearly EUR 500,000 for failing to meet its obligations under Articles 22 and 13-15 of the GDPR. The company had used automated systems to assess creditworthiness in credit card applications, rejecting applicants who appeared to have good credit histories, without adequately explaining the logic behind those rejections or providing access to the underlying rationale when requested. For context on how GDPR fines are calculated under Article 83, the ten assessment criteria give regulators significant room to scale penalties based on the nature and severity of the violation.

These cases share a common thread: the fine is not just about using automated systems. It is about failing to be transparent, failing to provide meaningful information about the logic involved, and failing to offer affected individuals a genuine route to human review.

The Three Exceptions - and Their Limits

Article 22(2) permits solely automated decision-making in three specific circumstances. Each comes with strings attached.

Contractual necessity - the decision is necessary for entering into or performing a contract between the data subject and the controller. A fully automated credit check at the point of application could potentially rely on this basis, but the controller must demonstrate that automation is genuinely necessary for the contract, not merely convenient.

Authorised by law - EU or Member State law permits the processing and lays down suitable measures to safeguard the data subject's rights. Germany's Federal Data Protection Act (BDSG), for example, includes provisions on credit scoring under Section 31, though the CJEU in the SCHUFA ruling questioned whether those provisions fully satisfied GDPR requirements. See also what DSGVO means for German websites and the additional national obligations it introduces.

Explicit consent - the data subject has given their explicit consent. This is a higher bar than the standard GDPR consent threshold. It requires a clear, affirmative statement specifically addressing the automated decision-making. A general cookie consent banner does not qualify.

Whichever exception applies, Article 22(3) requires controllers to implement suitable safeguards. At minimum, the data subject must be able to obtain human intervention, express their point of view, and contest the decision.

What This Means for Cookies and Online Profiling

Most cookie-based tracking for analytics or marketing purposes does not directly trigger Article 22. Setting a _ga cookie to measure page views, or a _fbp cookie to track conversions, involves profiling in the broad GDPR sense, but the resulting activity - showing a targeted advert or analysing traffic patterns - rarely produces legal or similarly significant effects on the individual. Understanding cookie categories and their legal treatment is a useful starting point for mapping which of your cookies could feed into higher-risk processing.

That said, the line is not as clear as it once seemed. The EDPB has noted that targeted advertising based on extensive profiling could have significant effects on certain individuals, particularly vulnerable groups such as minors or people in financial difficulty. If cookie-based profiling feeds into decisions about pricing, access to services, creditworthiness, or insurance eligibility, Article 22 is potentially engaged.

Consider a practical example: a visitor browses a comparison website, and tracking cookies build a behavioural profile that is then shared with an insurance provider. If that profile directly influences the premium offered - or whether coverage is offered at all - the chain of processing from cookie to decision may fall under Article 22, especially after the SCHUFA ruling established that intermediary scoring can itself constitute a decision.

Regardless of whether Article 22 applies, profiling via cookies still requires a lawful basis under Article 6 of the GDPR and prior consent under Article 5(3) of the ePrivacy Directive for non-essential cookies. A cookie consent management platform that properly categorises tracking cookies and blocks them until consent is obtained is the baseline requirement - Article 22 adds an additional layer when the profiling feeds into consequential decisions.
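That consent-before-loading baseline can be sketched in a few lines. This is an illustrative sketch only, not the API of any real consent management platform: the category names, the `consentState` store, and `scriptsAllowedToLoad` are all hypothetical, and a production CMP would also need to persist consent and handle withdrawal.

```typescript
// Minimal sketch of a consent gate for non-essential cookies (hypothetical
// names throughout - not any real CMP's API).
type CookieCategory = "essential" | "analytics" | "marketing";

interface TrackedScript {
  src: string;
  category: CookieCategory;
}

// Essential cookies need no consent; everything else is blocked by default.
const consentState: Record<CookieCategory, boolean> = {
  essential: true,
  analytics: false,
  marketing: false,
};

// Only scripts in a category the visitor has consented to may load.
function scriptsAllowedToLoad(scripts: TrackedScript[]): TrackedScript[] {
  return scripts.filter((s) => consentState[s.category]);
}

const queue: TrackedScript[] = [
  { src: "/js/app.js", category: "essential" },
  { src: "https://example.com/analytics.js", category: "analytics" },
  { src: "https://example.com/ads.js", category: "marketing" },
];

// Before any consent choice, only the essential script is eligible.
console.log(scriptsAllowedToLoad(queue).map((s) => s.src));

// After the visitor grants analytics consent, that category unblocks too.
consentState.analytics = true;
console.log(scriptsAllowedToLoad(queue).map((s) => s.src));
```

The design point is that blocking is the default state: a script loads only after an affirmative consent flag flips, which mirrors the prior-consent requirement of Article 5(3) of the ePrivacy Directive.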

The EU AI Act: An Additional Layer of Obligation

From August 2024, the EU AI Act began its phased implementation alongside the GDPR. The two regulations overlap significantly when it comes to automated decision-making. Article 14 of the AI Act requires high-risk AI systems to be designed with human oversight capabilities, and Article 86 gives individuals the right to obtain clear explanations of the role an AI system played in a decision affecting them.

Credit scoring systems, recruitment algorithms, and insurance risk assessment tools are all classified as high-risk under the AI Act. Organisations deploying these systems must now comply with both Article 22 of the GDPR and the AI Act's requirements for transparency, human oversight, and bias monitoring. The Hamburg DPA's 2025 enforcement action explicitly noted that the AI Act creates additional obligations beyond the GDPR for users of automated decision-making systems.

For website owners, this dual regime is most relevant where AI-powered tools are integrated into customer-facing processes. If your site uses an AI chatbot that makes binding decisions about customer complaints, or an algorithm that automatically approves or rejects account applications, both the GDPR and the AI Act apply. Where high-risk processing is involved, a Data Protection Impact Assessment is almost certainly required under Article 35 of the GDPR before you begin.

How to Comply: Practical Steps for Website Owners

Start by auditing your processing activities. Identify every point where decisions about individuals are made without meaningful human review. "Meaningful" is the operative word here - the EDPB has stressed that a human rubber-stamping automated outputs does not count. The person reviewing the decision must have the authority, competence, and practical ability to change the outcome.

For each automated process identified, assess whether it produces legal or similarly significant effects. If it does, you need one of the three Article 22(2) exceptions, plus the mandatory safeguards. If it does not meet the Article 22 threshold, the general GDPR principles still apply - you still need a lawful basis, must provide transparency about the processing, and must respect the right to object to profiling under Article 21.
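The screening questions above can be written down as a simple checklist. The sketch below is a thinking aid under the assumptions stated in this section, not a compliance tool or legal advice; the field names and the `article22Applies` function are hypothetical.

```typescript
// Illustrative encoding of the three cumulative Article 22 conditions
// discussed in the text. Hypothetical names; not legal advice.
interface ProcessAssessment {
  // Can a human reviewer actually change the outcome (not rubber-stamp it)?
  meaningfulHumanReview: boolean;
  // Is the decision based on automated processing, including profiling?
  basedOnAutomatedProcessing: boolean;
  // Does it produce legal or similarly significant effects?
  legalOrSimilarlySignificantEffect: boolean;
}

function article22Applies(a: ProcessAssessment): boolean {
  // All three conditions must hold for the prohibition to bite.
  return (
    !a.meaningfulHumanReview &&
    a.basedOnAutomatedProcessing &&
    a.legalOrSimilarlySignificantEffect
  );
}

// Automated credit scoring with no human review: caught.
const creditScoring: ProcessAssessment = {
  meaningfulHumanReview: false,
  basedOnAutomatedProcessing: true,
  legalOrSimilarlySignificantEffect: true,
};

// Product recommendations: profiling, but below the effects threshold.
const recommendations: ProcessAssessment = {
  meaningfulHumanReview: false,
  basedOnAutomatedProcessing: true,
  legalOrSimilarlySignificantEffect: false,
};

console.log(article22Applies(creditScoring));   // true
console.log(article22Applies(recommendations)); // false
```

A checklist like this only maps the structure of the test; the hard judgment calls - whether human review is "meaningful", whether an effect is "similarly significant" - still require case-by-case legal assessment.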

Your privacy notice must include specific information about automated decision-making. Articles 13(2)(f) and 14(2)(g) of the GDPR require you to inform data subjects about the existence of automated decision-making, provide meaningful information about the logic involved, and explain the significance and envisaged consequences. After the D&B Austria ruling, "meaningful information" means more than a generic description - it must enable the individual to understand the rationale behind the decision. Individuals may also exercise their rights through a Data Subject Access Request to obtain the specific data used in an automated decision affecting them.

On the cookie side, ensure that any profiling cookies are properly categorised and subject to prior consent. Use a cookie scanning tool to identify all cookies on your site, including those set by third-party scripts that may be building profiles of your visitors. Kukie.io's scanner detects and categorises both first-party and third-party cookies, making it easier to map which cookies contribute to profiling activities that could engage Article 22.

The UK Position: Divergence Ahead

The UK GDPR retained a near-identical version of Article 22 after Brexit. The ICO's guidance on automated decision-making and profiling closely mirrors the EDPB's approach, requiring a Data Protection Impact Assessment for any processing that falls under Article 22 and mandating that controllers provide accessible routes to human intervention.

That picture may be shifting. The UK's Data (Use and Access) Act received Royal Assent on 19 June 2025, and the ICO has flagged that its guidance on automated decision-making is under review as a result. The broader UK policy direction - announced by the Chancellor in March 2025 - aims to cut regulation and reduce compliance costs for businesses, which could mean a more permissive approach to automated processing than the EU takes.

For now, organisations operating in both the UK and EU should apply the stricter standard. If a future UK framework loosens Article 22 protections, website owners serving EU visitors will still need to comply with the EU version for those users. California has taken a parallel path: the CCPA's automated decision-making rules, finalised in late 2025, impose their own notice and opt-out requirements for businesses targeting Californian residents.

Frequently Asked Questions

Does Article 22 apply to all types of profiling?

No. Article 22 only applies when profiling feeds into a decision that is made solely by automated means and that decision produces legal effects or similarly significant effects on the individual. Profiling for general analytics or personalisation typically falls outside its scope, though the general GDPR principles still apply.

Can I use automated credit scoring on my website if I get consent?

Explicit consent is one of the three exceptions under Article 22(2), but it must be specific to the automated decision-making, freely given, and informed. A generic cookie consent banner or a buried clause in terms and conditions does not meet this threshold. You must also still provide the right to human intervention, the right to express a point of view, and the right to contest the decision.

Do targeted advertising cookies trigger Article 22 of the GDPR?

In most cases, no. Displaying a targeted advert based on cookie profiling does not typically produce legal or similarly significant effects. Exceptions may arise where the pr