Sensitive data ruling by Europe’s top court could force broad privacy reboot

A close-up of a smartphone in its user’s hands. Image Credits: Witthaya Prasongsin / Getty Images

A ruling put out yesterday by the European Union’s top court could have major implications for online platforms that use background tracking and profiling to target users with behavioral ads or to feed recommender engines that are designed to surface so-called ‘personalized’ content.

The impacts could be even broader — with privacy law experts suggesting the judgement could dial up legal risk for a variety of other forms of online processing, from dating apps to location tracking and more. They also suggest fresh legal referrals are likely, as operators seek to unpack what could be complex practical difficulties arising from the judgement.

The referral to the Court of Justice of the EU (CJEU) relates to a Lithuanian case concerning national anti-corruption legislation. But the impact of the judgement is likely to be felt across the region as it crystalizes how the bloc’s General Data Protection Regulation (GDPR), which sets the legal framework for processing personal data, should be interpreted when it comes to data ops in which sensitive inferences can be made about individuals.

Privacy watchers were quick to pay attention — and are predicting substantial follow-on impacts for enforcement as the CJEU’s guidance essentially instructs the region’s network of data protection agencies to avoid a too-narrow interpretation of what constitutes sensitive data, implying that the bloc’s strictest privacy protections will become harder for platforms to circumvent.

In an email to TechCrunch, Dr Gabriela Zanfir-Fortuna, VP for global privacy at the Washington-based think tank the Future of Privacy Forum, sums up the CJEU’s “binding interpretation” as a confirmation that data that are capable of revealing the sexual orientation of a natural person “by means of an intellectual operation involving comparison or deduction” are in fact sensitive data protected by Article 9 of the GDPR.

The relevant bit of the case referral to the CJEU related to whether the publication of the name of a spouse or partner amounted to the processing of sensitive data because it could reveal sexual orientation. The court decided that it does. And, by implication, that the same rule applies to inferences connected to other types of special category data.

“I think this might have broad implications moving forward, in all contexts where Article 9 is applicable, including online advertising, dating apps, location data indicating places of worship or clinics visited, food choices for airplane rides and others,” Zanfir-Fortuna predicted, adding: “It also raises huge complexities and practical difficulties to catalog data and build different compliance tracks, and I expect the question to come back to the CJEU in a more complex case.”

As she noted on Twitter, a similarly non-narrow interpretation of special category data processing recently got the gay hook-up app Grindr into hot water with Norway’s data protection agency, leading to a fine of €10M, or around 10% of its annual revenue, last year.

GDPR allows for fines that can scale as high as 4% of global annual turnover (or up to €20M, whichever is greater). So any Big Tech platforms that fall foul of this (now) firmed-up requirement to gain explicit consent if they make sensitive inferences about users could face fines that are orders of magnitude larger than Grindr’s.

Ad tracking in the frame

Discussing the significance of the CJEU’s ruling, Dr Lukasz Olejnik, an independent consultant and security and privacy researcher based in Europe, was unequivocal in predicting serious impacts — especially for adtech.

“This is the single, most important, unambiguous interpretation of GDPR so far,” he told us. “It’s a rock-solid statement that inferred data, are in fact [personal] data. And that inferred protected/sensitive data, are protected/sensitive data, in line of Article 9 of GDPR.”

“This judgement will speed up the evolution of digital ad ecosystems, towards solutions where privacy is considered seriously,” he also suggested. “In a sense, it backs up the approach of Apple, and seemingly where Google wants to transition the ad industry [to, i.e. with its Privacy Sandbox proposal].”

Since May 2018, the GDPR has set strict rules across the bloc for processing so-called ‘special category’ personal data — such as health information, sexual orientation, political affiliation, trade union membership etc — but there has been some debate (and variation in interpretation between DPAs) about how the pan-EU law actually applies to data processing operations where sensitive inferences may arise.

This is important because large platforms have, for many years, been able to hold enough behavioral data on individuals to — essentially —  circumvent a narrower interpretation of special category data processing restrictions by identifying (and substituting) proxies for sensitive info.

Hence some platforms can (or do) claim they’re not technically processing special category data — while triangulating and connecting so much other personal information that the corrosive effect and impact on individual rights is the same. (It’s also important to remember that sensitive inferences about individuals do not have to be correct to fall under the GDPR’s special category processing requirements; it’s the data processing that counts, not the validity or otherwise of sensitive conclusions reached; indeed, bad sensitive inferences can be terrible for individual rights too.)

This might entail an ad-funded platform using a cultural or other type of proxy for sensitive data to target interest-based advertising or to recommend similar content it thinks the user will also engage with. Examples of such inferences could include using the fact a person has liked Fox News’ page to infer they hold right-wing political views; or linking membership of an online Bible study group to holding Christian beliefs; or the purchase of a stroller and cot, or a trip to a certain type of shop, to deduce a pregnancy; or inferring that a user of the Grindr app is gay or queer.
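To make the mechanics concrete, here is a minimal, purely illustrative sketch of that kind of proxy-based inference; every signal name and mapping below is hypothetical rather than any platform’s actual system. The point, per the CJEU’s reasoning, is that the output of such deductions is itself special category data under Article 9.

```python
# Purely illustrative: hypothetical proxy signals and inference rules, not any
# platform's actual system. The output of such deductions is special category
# data under Article 9 GDPR, whether or not the inference is correct.

# Hypothetical mapping from ordinary behavioral proxies to sensitive inferences.
PROXY_RULES = {
    "liked:fox_news_page": ("political_opinion", "right-leaning"),
    "member:online_bible_study_group": ("religious_belief", "christian"),
    "purchased:stroller_and_cot": ("health_data", "possible_pregnancy"),
    "installed:grindr": ("sexual_orientation", "gay_or_queer"),
}

def infer_sensitive_attributes(behavioral_signals):
    """Deduce special category attributes from ordinary activity signals."""
    inferences = {}
    for signal in behavioral_signals:
        if signal in PROXY_RULES:
            category, value = PROXY_RULES[signal]
            inferences[category] = value
    return inferences

# Making these deductions is itself Article 9 processing, so it needs a legal
# basis such as explicit consent, even if the conclusions reached are wrong.
profile = infer_sensitive_attributes(["liked:fox_news_page", "installed:grindr"])
print(profile)
```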

Recommender engines, meanwhile, typically work by tracking viewing habits and clustering users based on these patterns of activity and interest in a bid to maximize engagement with the platform. That’s how a big-data platform like YouTube can populate a sticky sidebar of other videos enticing you to keep clicking, or automatically select something ‘personalized’ to play once the video you actually chose to watch comes to an end. But, again, this type of behavioral tracking seems likely to intersect with protected interests and therefore, as the CJEU ruling underscores, to entail the processing of sensitive data.
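As a rough sketch of that clustering step (on hypothetical watch-time data, using off-the-shelf k-means; no real platform’s pipeline is implied): users who engage with similar content end up grouped together, and some of those groups can map onto protected characteristics even though nobody ever labeled them explicitly.

```python
# Rough sketch of behavioral clustering for recommendations, on hypothetical
# watch-time data; real recommender pipelines are far more complex.
import numpy as np
from sklearn.cluster import KMeans

# Rows = users, columns = hypothetical content topics; values = hours watched.
watch_time = np.array([
    [12.0, 0.5, 0.0, 3.0],   # user A
    [11.0, 1.0, 0.2, 2.5],   # user B
    [0.3,  9.0, 7.5, 0.0],   # user C
    [0.1,  8.5, 8.0, 0.4],   # user D
])

# Cluster users by viewing behavior; users with similar habits land together.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(watch_time)

# Each user then gets recommended whatever their cluster engages with most.
# If a cluster happens to correlate with, say, content aimed at gay men, the
# system has effectively inferred a protected characteristic without anyone
# ever labeling it.
for user, cluster_id in zip("ABCD", clusters):
    print(f"user {user} -> cluster {cluster_id}")
```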

Facebook, for one, has long faced regional scrutiny for letting advertisers target users based on interests related to sensitive categories like political beliefs, sexuality and religion without asking for their explicit consent — which is the GDPR’s bar for (legally) processing sensitive data.

The tech giant now known as Meta has so far avoided direct sanction in the EU on this issue, though, despite being the target of a number of forced consent complaints — some of which date back to the GDPR coming into application more than four years ago. (A draft decision by Ireland’s DPA last fall, apparently accepting Facebook’s claim that it can entirely bypass consent requirements to process personal data by stipulating that users are in a contract with it to receive ads, was branded a joke by privacy campaigners at the time; the procedure remains ongoing, as a result of a review process by other EU DPAs — which, campaigners hope, will ultimately take a different view of the legality of Meta’s consent-less tracking-based business model. But that particular regulatory enforcement grinds on.)

In recent years, as regulatory attention — and legal challenges and privacy lawsuits — have dialled up, Facebook/Meta has made some surface tweaks to its ad targeting tools, announcing towards the end of last year, for example, that it would no longer allow advertisers to target sensitive interests like health, sexual orientation and political beliefs.

However it still processes vast amounts of personal data across its various social platforms to configure “personalized” content users see in their feeds. And it still tracks and profiles web users to target them with “relevant” ads — without providing people with a choice to deny that kind of intrusive behavioral tracking and profiling. So the company continues to operate a business model that relies upon extracting and exploiting people’s information without asking if they’re okay with that.

A tighter interpretation of existing EU privacy laws, therefore, poses a clear strategic threat to an adtech giant like Meta.

YouTube’s parent, Google/Alphabet, also processes vast amounts of personal data — both to configure content recommendations and for behavioral ad targeting — so it too could be in the firing line if regulators pick up the CJEU’s steer to take a tougher line on sensitive inferences, unless it’s able to demonstrate that it asks users for explicit consent to such sensitive processing. (And it’s perhaps notable that Google recently amended the design of its cookie consent banner in Europe to make it easier for users to opt out of that type of ad tracking — following a couple of tracking-focused regulatory interventions in France.)

“Those organisations who assumed [that inferred protected/sensitive data, are protected/sensitive data] and prepared their systems, should be OK. They were correct, and it seems that they are protected. For others this [CJEU ruling] means significant shifts,” Olejnik predicted. “This is about both technical and organisational measures. Because processing of such data is, well, prohibited. Unless some significant measures are deployed. Like explicit consent. This in technical practice may mean a requirement for an actual opt-in for tracking.”
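In technical terms, such an opt-in could look like a hard gate in front of any processing capable of producing sensitive inferences. Below is a minimal sketch of the kind of measure Olejnik describes, with hypothetical function and store names throughout; it is an assumption about how such a gate might be built, not a description of any existing system.

```python
# Minimal sketch of an explicit-consent gate in front of profiling; all
# function and store names are hypothetical. Under Article 9(2)(a) GDPR,
# processing that can yield special category inferences needs explicit
# consent, so without a recorded opt-in the processing simply does not run.

class ConsentRequired(Exception):
    pass

# Hypothetical consent records keyed by (user_id, purpose); default is "no".
CONSENT_STORE = {
    ("user-123", "behavioral_profiling"): True,
}

def has_explicit_consent(user_id, purpose):
    """Return True only if a specific, informed, explicit opt-in is on record."""
    return CONSENT_STORE.get((user_id, purpose), False)

def run_behavioral_profiling(user_id, signals):
    if not has_explicit_consent(user_id, "behavioral_profiling"):
        # No valid opt-in: do not track, do not infer; fail closed.
        raise ConsentRequired(f"no explicit consent recorded for {user_id}")
    # ...profiling would happen here; this sketch just returns an empty profile.
    return {}
```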

“There’s no conceivable way that the current status quo would fulfil the needs of GDPR Article 9(2) paragraph by doing nothing,” he added. “Changes cannot happen just on paper. Not this time. DPAs just got a powerful ammunition. Will they want to use it? Keep in mind that while this judgement came this week, this is how the GDPR, and EU data protection law framework, actually worked from the start.”

The EU does have incoming regulations that will further tighten the operational noose around the most powerful ‘Big Tech’ online platforms and add more rules for so-called very large online platforms (VLOPs): the Digital Markets Act (DMA) and the Digital Services Act (DSA), respectively, are set to come into force from next year — with the goal of levelling the competitive playing field around Big Tech and dialling up platform accountability for online consumers more generally.

The DSA even includes a provision that VLOPs that use algorithms to determine the content users see (aka “recommender systems”) will have to provide at least one option that is not based on profiling — so there is already an explicit requirement for a subset of larger platforms to give users a way to refuse behavioral tracking looming on the horizon in the EU.

But privacy experts we spoke to suggested the CJEU ruling will essentially widen that requirement to non-VLOPs too. Or at least to those platforms that are processing enough data to run into the associated legal risk of their algorithms making sensitive inferences, even if they’re not consciously instructing them to (tl;dr: an AI black box must comply with the law, too).

Both the DSA and DMA will also introduce a ban on the use of sensitive data for ad targeting — which, combined with the CJEU’s confirmation that sensitive inferences are sensitive data, suggests there will be meaningful heft to an incoming, pan-EU restriction on behavioral advertising which some privacy watchers had worried would be all-too-easily circumvented by adtech giants’ data-mining, proxy-identifying usual tricks.

Reminder: Big Tech lobbyists concentrated substantial firepower to successfully see off an earlier bid by EU lawmakers, last year, for the DSA to include a total ban on tracking-based targeted ads. So anything that hardens the limits that remain is important.

Behavioral recommender engines

Dr Michael Veale, an associate professor in digital rights and regulation at UCL’s faculty of law, predicts especially “interesting consequences” flowing from the CJEU’s judgement on sensitive inferences when it comes to recommender systems — at least for those platforms that don’t already ask users for their explicit consent to behavioral processing which risks straying into sensitive areas in the name of serving up sticky ‘custom’ content.

One possible scenario is that platforms will respond to the CJEU-underscored legal risk around sensitive inferences by defaulting to chronological and/or other non-behaviorally configured feeds — unless or until they obtain explicit consent from users to receive such ‘personalized’ recommendations.
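One way that scenario could look in practice is sketched below, with hypothetical names throughout: the feed builder defaults to a plain chronological timeline and only switches to behavioral ranking when a valid, explicit opt-in is on record.

```python
# Hypothetical sketch of a consent-aware feed selector: a plain chronological
# timeline by default, behavioral ('personalized') ranking only after an
# explicit opt-in. All names here are illustrative.
from typing import Callable, Dict, List, Optional

def chronological_feed(posts: List[Dict]) -> List[Dict]:
    # Newest first; no profiling involved.
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

def personalized_feed(posts: List[Dict], score: Callable[[Dict], float]) -> List[Dict]:
    # Behavioral ranking; only reachable when explicit consent exists.
    return sorted(posts, key=score, reverse=True)

def build_feed(posts: List[Dict], has_explicit_consent: bool,
               score: Optional[Callable[[Dict], float]] = None) -> List[Dict]:
    if has_explicit_consent and score is not None:
        return personalized_feed(posts, score)
    return chronological_feed(posts)

# Without consent, the same posts simply come back newest-first.
posts = [{"id": 1, "created_at": 10}, {"id": 2, "created_at": 20}]
print(build_feed(posts, has_explicit_consent=False))
```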

“This judgement isn’t so far off what DPAs have been saying for a while but may give them and national courts confidence to enforce,” Veale predicted. “I see interesting consequences of this judgment in the area of recommendations online. For example, recommender-powered platforms like Instagram and TikTok likely don’t manually label users with their sexuality internally — to do so would clearly require a tough legal basis under data protection law. They do, however, closely observe how users interact with the platform, and mathematically cluster together user profiles with certain types of content. Some of these clusters are clearly related to sexuality, and male users clustered around content that is aimed at gay men can be confidently assumed not to be straight. From this judgment, it can be argued that such cases would need a legal basis to process, which can only be refusable, explicit consent.”

As well as VLOPs like Instagram and TikTok, he suggests a smaller platform like Twitter can’t expect to escape such a requirement thanks to the CJEU’s clarification of the non-narrow application of GDPR Article 9 — since Twitter’s use of algorithmic processing for features like so-called ‘top tweets’ or other users it recommends to follow may entail processing similarly sensitive data (and it’s not clear whether the platform explicitly asks users for consent before it does that processing).

“The DSA already allows individuals to opt for a non-profiling based recommender system but only applies to the largest platforms. Given that platform recommenders of this type inherently risk clustering users and content together in ways that reveal special categories, it seems arguably that this judgment reinforces the need for all platforms that run this risk to offer recommender systems not based on observing behaviour,” he told TechCrunch.

In light of the CJEU cementing the view that sensitive inferences do fall under GDPR article 9, a recent attempt by TikTok to remove European users’ ability to consent to its profiling — by seeking to claim it has a legitimate interest to process the data — looks like extremely wishful thinking given how much sensitive data TikTok’s AIs and recommender systems are likely to be ingesting as they track usage and profile users.

TikTok’s plan was fairly quickly pounced upon by European regulators, in any case. And last month — following a warning from Italy’s DPA — it said it was ‘pausing’ the switch, so the platform may have decided the legal writing is on the wall for a consent-less approach to pushing algorithmic feeds.

Yet given Facebook/Meta has not (yet) been forced to pause its own trampling of the EU’s legal framework around personal data processing, such alacritous regulatory attention almost seems unfair. (Or unequal at least.) But it’s a sign of what’s finally — inexorably — coming down the pipe for all rights violators, whether they’re long at it or just now attempting to chance their hand.

Sandboxes for headwinds

On another front, Google’s (albeit repeatedly delayed) plan to deprecate support for behavioral tracking cookies in Chrome does appear more naturally aligned with the direction of regulatory travel in Europe.

Question marks remain over whether the alternative ad targeting proposals it’s cooking up (under close regulatory scrutiny in Europe) will pass a dual review process factoring in both competition and privacy oversight. But, as Veale suggests, non-behavior based recommendations — such as interest-based targeting via whitelisted topics — may be less risky, at least from a privacy law point of view, than trying to cling to a business model that seeks to manipulate individuals on the sly, by spying on what they’re doing online.

Here’s Veale again: “Non-behaviour based recommendations based on specific explicit interests and factors, such as friendships or topics, are easier to handle, as individuals can either give permission for sensitive topics to be used, or could be considered to have made sensitive topics ‘manifestly public’ to the platform.”

So what about Meta? Its strategy — in the face of what senior execs have been forced to publicly admit, for some time now, are rising “regulatory headwinds” (euphemistic investor-speak which, in plainer English, signifies a total privacy compliance horrorshow) — has been to elevate a high-profile former regional politician, the ex U.K. deputy PM and former MEP Nick Clegg, to president of global affairs, in the hope that sticking a familiar face at its top table, one who makes metaverse ‘jam tomorrow’ job-creation promises, will persuade local lawmakers not to enforce their own laws against its business model.

But as the EU’s top judges weigh in with more jurisprudence defending fundamental rights, Meta’s business model looks very exposed: it sits on legally challenged ground whose claimed justifications are surely on their last spin cycle before a long-overdue rinsing kicks in, in the form of major GDPR enforcement. Its bet that Clegg’s local fame (or infamy) could score serious influence over EU policymaking, meanwhile, always looked closer to cheap trolling than a solid, long-term strategy.

If Meta was hoping to buy itself yet more time to retool its adtech for privacy — as Google claims to be doing with its Sandbox proposal — it’s left it exceptionally late to execute what would have to be a truly cleansing purge.

