Technology Law and Data Privacy Updates


Monthly Edition - October 2025

INDEX 

A. FOUNDER’S NOTE

B. NATIONAL UPDATES 

Crypto Updates

C. INTERNATIONAL UPDATES

United States of America

European Union

United Kingdom

Others

D. ABBREVIATIONS

FOUNDER’S NOTE

Welcome to this edition of Fountainhead Legal’s newsletter!

This month, India turned a fresh page in its digital playbook. MeitY released draft rules for the gaming industry following the implementation of the Promotion and Regulation of Online Gaming Act, 2025. MeitY has also proposed amendments to the existing intermediary rules under the Information Technology framework to regulate artificially generated content on social media platforms; the proposed amendments require synthetically generated content to be labelled. On the insurance front, IRDAI has rolled out a new fraud monitoring framework, an industry-wide blueprint to fight emerging cyber and digital scams. TRAI proposed steep daily penalties for late tariff filings, and the National Commodity Clearing Limited directed trading members to overhaul their communication systems to stamp out spam calls and messages. On the judicial front, while the Singapore High Court green-lit the restructuring scheme floated by the troubled exchange, the fight against WazirX continues at home, with Indian courts providing interim reliefs. In back-to-back rulings, the Bombay and Madras High Courts gave cryptocurrencies a rare legal spotlight, recognising exchanges as custodians of user assets and declaring digital currencies property under Indian law. While the Indian courts have provided momentary support, it looks set to be a long-drawn battle for Indian users.

Globally, regulators are just as busy tightening the screws. The European Union’s draft guidelines on the interplay between the Digital Markets Act and the GDPR make one thing clear: Big Tech must walk the twin tightropes of competition and privacy without falling on either side. California continued to lead the charge, becoming a laboratory for future digital governance with its Transparency in Frontier AI Act and the Opt Me Out Act, setting new benchmarks for AI safety and browser-level privacy. Across the Atlantic, U.S. courts laid down the law on fair use in AI model training, while a class action against Microsoft opened a new chapter in antitrust scrutiny of AI partnerships. Enforcement actions worldwide, too, sent an unmistakable signal: data negligence now carries real consequences. British, Dutch, and Guernsey regulators issued multi-million-pound and euro fines against firms for cybersecurity failures, unlawful profiling, and patient data breaches. Together, they underline a global reality: data protection and digital ethics have become operational necessities, not compliance afterthoughts.

As India prepares to implement the DPDP Act and awaits the release of its draft rules, and as global regulators align AI, privacy, and competition frameworks, the world is entering an era of rule-based innovation. The future belongs to organisations that build technology with transparency, accountability, and trust baked in, not bolted on later.

We hope you find these updates insightful and informative!

NATIONAL UPDATES

1. IRDAI introduces Guidelines updating its Fraud Detection and Monitoring Framework to include New Age Cyber Frauds[1]

IRDAI has released the Insurance Fraud Monitoring Framework Guidelines, 2025 (“Guidelines”), which will come into effect from April 1, 2026. The Guidelines mandate insurers, intermediaries, and distribution channels to adopt a Board-approved Anti-Fraud Policy setting out procedures to deter, prevent, detect, report, and remedy fraud across all operations; the policy must cover red-flag indicators, responsibilities of designated officers, internal reporting structures, whistle-blower protection, and annual fraud risk assessments. Insurers are also required to constitute a Fraud Monitoring Committee and an independent Fraud Monitoring Unit to oversee implementation, supported by staff training and vendor due diligence procedures. The Guidelines classify fraud into five categories, i.e., internal fraud, distribution channel fraud, policyholder or claims fraud, external fraud, and affinity or complex fraud, and place special emphasis on tackling cyber or new-age frauds carried out using digital tools or emerging technologies. Insurers must establish robust cybersecurity frameworks, continuously monitor risks, and participate in an industry-wide fraud intelligence system managed by the Insurance Information Bureau. All incidents of fraud must be reported to the regulator annually through Form FMR-1, within 30 days of the close of each financial year.

Insurers, intermediaries, and distribution channels must now set up dedicated fraud monitoring systems, adopt a Board-approved Anti-Fraud Policy, and train staff to detect and report red flags. They should also strengthen cybersecurity controls, enhance vendor due diligence, and ensure timely reporting of fraud cases through Form FMR-1. For intermediaries such as brokers, it is essential to detect fraud promptly and inform the insurer as well as law enforcement agencies. Early implementation of these measures will be crucial to meet the April 2026 compliance deadline.

2. Draft Rules for Promotion and Regulation of Online Gaming introduced[2]

Following the Promotion and Regulation of Online Gaming Act, 2025, MeitY has unveiled the Draft Promotion and Regulation of Online Gaming Rules, 2025 (“Draft Rules”), which provide for the establishment of the Online Gaming Authority of India (“Authority”), a dedicated regulator responsible for registering online games, maintaining a national registry of approved titles, and enforcing compliance. Additionally, game service providers must obtain a Certificate of Registration within 90 days of application and maintain a grievance redressal mechanism for users. The Authority is also empowered to suspend or cancel registrations for violations or material changes that convert a game into an online money game.

The Draft Rules place strong emphasis on transparency, user protection, and fair competition, mandating penalties for non-compliance and grievance appeal systems. Once finalized, these Draft Rules are expected to bring long-awaited clarity and regulatory discipline to India’s online gaming ecosystem. The deadline for submission of feedback was October 31, 2025.

The Draft Rules introduce several compliance requirements that gaming companies must prepare for. These include mandatory registration of all online games with the proposed Authority, maintaining a national registry entry, and setting up an internal grievance redressal system for users. Platforms must also ensure that their games do not qualify as online money games or risk suspension and penalties. Companies should begin reviewing their game design, payment structures, and user terms to ensure compliance once the Draft Rules are finalized.

3. MeitY proposes Draft Amendments to IT Rules for regulating AI-Generated Content[3]

MeitY has proposed the Draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 (“Draft Rules”), open for public consultation till November 13, 2025. The Draft Rules aim to combat the risks of deepfakes and misinformation by introducing a specific regulatory framework for ‘synthetically generated information’.

Beyond requiring visible labelling and embedded metadata, the Draft Rules provide that platforms enabling AI content creation must ensure labels cannot be removed or altered. Significant social media intermediaries will need to obtain user declarations on whether content is synthetic and deploy tools to verify these claims.

The Draft Rules could significantly increase compliance costs and technical challenges for large platforms, as they would need to build or integrate AI detection and verification systems at scale. These obligations may also impact how user content is moderated and labelled across different jurisdictions. Given that the Government has extended the deadline for public feedback from November 6, 2025 to November 13, 2025, affected intermediaries should use this opportunity to submit detailed representations highlighting practical concerns such as implementation timelines, technical feasibility, and the need for uniform verification standards.

4. National Commodity Clearing Limited directs Members to follow Anti-Spam Rules to curb Spam Calls and Messages[4]

The National Commodity Clearing Limited (“NCCL”), the clearing arm of the National Commodity & Derivatives Exchange, has issued a circular asking all its trading and clearing members to comply with the Telecom Commercial Communications Customer Preference Regulations, 2018, as recently amended by TRAI. The aim is to curb the rising problem of spam calls and messages and ensure that all business communications follow stricter telecom rules. Under the circular, members cannot use regular 10-digit mobile numbers for promotional, service, or transactional messages. Instead, they must use official 1600-series numbers for all outgoing calls and texts. Members must also get their digital assets, such as website links and mobile applications, approved or ‘whitelisted’ by telecom authorities. NCCL has further asked members to strengthen their internal systems to prevent misuse of registered numbers or templates and to fully cooperate with the Indian Cybercrime Coordination Centre (I4C) and TRAI for any reporting or compliance needs.

In practical terms, this initiative will require members to revamp their communication channels and align with telecom-approved systems, potentially increasing initial compliance efforts but significantly enhancing credibility and consumer trust. It will also reduce the risk of regulatory penalties and cyber fraud arising from misuse of communication channels. For end users, this move could mean fewer spam calls, more authentic service messages, and a more transparent communication environment within the commodities trading ecosystem.

5. Karnataka High Court upholds Use of Sahyog Portal for Content Takedown[5]

The Karnataka High Court has dismissed X Corp.’s (formerly Twitter) (“Company”) challenge to the Central Government’s use of the Sahyog Portal, an online platform created by the Government to coordinate and manage content takedown requests relating to unlawful or harmful online material. The Company had contended that the Sahyog Portal had no clear legal backing under the IT Act, and that any content removal requests should be routed only through the formal blocking process under Section 69A of the IT Act. The High Court upheld the validity of the Sahyog Portal, clarifying that takedown orders are issued under Section 79(3)(b) of the IT Act, which empowers authorities to require intermediaries to remove unlawful content after due notice. Unlike Section 69A, which allows preventive blocking of content, Section 79(3)(b) deals with reactive removal of specific posts or material that violate Indian law. It also underscored that safe harbour protection for intermediaries is not an unfettered right but a conditional safeguard dependent on timely and lawful compliance.

This ruling makes it clear that online platforms must respond quickly and responsibly to lawful takedown requests. By upholding the Sahyog Portal, the Court supported the Government’s goal of creating a faster and more coordinated system for tackling unlawful content. Importantly, the Government has since improved the process by requiring takedown orders to be issued only by senior officers and backed by detailed reasoning. This change not only ensures greater accountability within the Government but also gives intermediaries clearer, more transparent directions, reducing the risk of arbitrary or unclear takedown orders.

CRYPTO UPDATES

6. Singapore High Court greenlights Indian Exchange’s Restructuring Scheme[6]

The Singapore High Court has approved a restructuring plan proposed by Zettai Pte. Ltd., the Singapore-based parent company of WazirX, allowing it to move forward with its “socialisation of losses” scheme. The plan, sanctioned on October 13, 2025, enables the company to restructure its liabilities following the 2024 cyberattack by distributing losses among users and creditors in a structured and transparent manner.

The court endorsed the scheme with limited modifications, clarifying that creditors would release claims against the company’s officers only in respect of their role in preparing and implementing the scheme, excluding any acts of fraud, gross negligence, or wilful misconduct. This condition preserves accountability while enabling the restructuring to proceed.

The order enables Zettai’s restructuring and aims to restore stability to the WazirX ecosystem; however, the socialisation of losses model remains contentious. It effectively spreads the financial impact of the hack across all users, including those whose assets were never compromised. Such an approach raises serious concerns about fairness and accountability, as unhacked users should not bear the burden of losses caused by security lapses beyond their control. This standpoint has also been examined by the Bombay High Court in its recent judgment mentioned below.

7. Bombay High Court affirms Duty of Exchange to safeguard Crypto Assets[7]

While the Singapore court was in the process of scheduling its final hearing, the Bombay High Court dismissed an appeal filed by Zanmai Labs Pvt. Ltd (“Zanmai”), operator of the crypto exchange ‘WazirX’, against an arbitral tribunal’s order directing it to provide financial security to preserve the subject matter of an ongoing dispute. The case was brought by Bitcipher Labs LLP and Nextgendev Solutions Pvt. Ltd., two broker partners that had deposited substantial sums and held crypto assets on WazirX on behalf of their clients.

The court upheld, on a prima facie basis, the arbitral tribunal’s view that Zanmai acts as a custodian of users’ crypto assets and owes a duty of care toward them. Zanmai had argued that the compromised wallets were managed by Binance Holdings Ltd., and later by its Singapore-based parent, Zettai Pte Ltd., but the court noted that the Broker Agreement treated WazirX and Zanmai as synonymous, making Zanmai directly responsible for managing and safeguarding user assets. It also rejected Zanmai’s proposal to ‘socialise’ the losses from the 2024 cyber-attack across all users, holding that such a mechanism lacked contractual or legal support. While clarifying that the findings were prima facie and limited to the interim stage, the court upheld the arbitral tribunal’s directions as necessary to protect users’ interests pending final adjudication.

Although interim in nature, this judgment signals a noteworthy shift in how Indian courts are beginning to view crypto assets and exchange responsibilities. By recognising that a crypto exchange may act as a custodian of user holdings, the court has implicitly underscored the fiduciary obligations such entities owe their users. This decision reflects a clear judicial movement toward stronger accountability and structured protection for digital assets within India’s evolving regulatory landscape.

8. Madras High Court declares Cryptocurrency as Property[8]

The Madras High Court has recognized cryptocurrency as a form of property under Indian law. The case arose after an investor’s XRP holdings were frozen on the WazirX trading platform following the 2024 cyberattack. Seeking protection of her assets, the investor approached the court under Section 9 of the Arbitration and Conciliation Act, 1996, requesting interim relief pending arbitration. While granting the relief, the Court held that cryptocurrencies, though intangible, carry clear, transferable, and exclusive ownership enabled by private keys, making them capable of being owned, possessed, and held in trust. The court extensively referenced global precedents and philosophical concepts, as well as the recent Bombay High Court judgment on Zanmai’s appeal mentioned above, affirming that a virtual digital asset is a legitimate asset class recognized under the Income Tax Act, 1961, and pointed out that cryptocurrency holders can seek legal remedies when their rights are jeopardized.

While the final dispute remains to be adjudicated through arbitration, this decision marks a pivotal step toward legal recognition and protection of crypto investors in India. It also complements the Bombay High Court’s observation that WazirX acts as a custodian of user assets, signaling a broader judicial trend toward treating cryptocurrencies as legitimate, protectable property interests rather than speculative virtual tokens.

9. TRAI proposes graded Financial Penalties for delayed Tariff Reporting[9]

TRAI has released the draft Telecommunication Tariff (Seventy Second Amendment) Order, 2025 (“Draft Order”), proposing a stricter financial disincentive regime for service providers who delay reporting their tariff plans. The Draft Order aims to ensure timely compliance by proposing a graded penalty structure: a fine of INR 10,000 per day for the first 7 days of delay, increasing to INR 20,000 for each subsequent day, with a total cap of INR 5 lakh per violation. The Draft Order also removes a redundant clause to avoid duplicate penalties. Stakeholders were required to submit their comments on the Draft Order by October 31, 2025.
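
As a quick illustration of the proposed graded structure (a sketch based on the figures above, not an official TRAI tool), the disincentive for a given delay could be computed as:

```python
def tariff_delay_penalty(days_late: int) -> int:
    """Illustrative calculation of the Draft Order's proposed disincentive:
    INR 10,000 per day for the first 7 days of delay, INR 20,000 for each
    subsequent day, capped at INR 5,00,000 (INR 5 lakh) per violation."""
    first_week = min(days_late, 7) * 10_000      # days 1-7 at INR 10,000/day
    remainder = max(days_late - 7, 0) * 20_000   # day 8 onward at INR 20,000/day
    return min(first_week + remainder, 500_000)  # apply the INR 5 lakh cap

# A 10-day delay: 7 x 10,000 + 3 x 20,000 = INR 1,30,000
```

On these figures, the INR 5 lakh cap would be reached at roughly a month's delay.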

These proposed changes are designed to create a more disciplined and transparent telecom sector. For consumers, this translates into tighter regulatory supervision over service providers, ensuring they adhere to the rules and report accurately. This initiative contributes to building a more reliable and accountable digital ecosystem in India.

10. ECI issues AI and Synthetic Content Guidelines for Bihar Assembly Elections[10]

ECI has introduced the Advisory on responsible use and disclosure of synthetically generated information and AI-generated content during elections (“Advisory”), regulating the use of AI and synthetic media in political campaigns for the upcoming Bihar Assembly elections. To promote transparency and fair competition, all AI-generated, digitally altered, or synthetic content, including images, videos, and audio, must now be clearly labelled as ‘AI-Generated’, ‘Digitally Enhanced’, or ‘Synthetic Content’. The label must cover at least 10% of the visual area or, for audio, be announced during the first 10% of its duration. Each such post must also include metadata or captions disclosing the name of the creator or responsible organization.

INTERNATIONAL UPDATES

UNITED STATES OF AMERICA

11. AI Safety Law introduced[11]

California has passed the Transparency in Frontier Artificial Intelligence Act, 2025 (“Act”), establishing the first state-level framework for advanced AI systems. The Act requires major AI developers to publish safety frameworks, report critical incidents such as large-scale data leaks, unauthorized model access, or use of AI in weapon development, and safeguard whistle-blowers. Rather than imposing rigid technical controls, it promotes a transparency-based approach, focusing on risk reporting and accountability. The Act also allows alignment with future U.S. federal AI standards to avoid overlapping compliance burdens, positioning California as a model for state-level AI governance in the absence of national legislation.

The Act positions California as a pioneer in AI governance, creating a blueprint for state-level oversight in the absence of comprehensive federal regulation. By focusing on transparency and incident reporting, the Act sets a precedent for responsible AI development while maintaining flexibility for future alignment with national standards.

12. Built-In Browser Controls for Online Privacy made Mandatory[12]

California has enacted the Opt Me Out Act (AB 566, Lowenthal) (“Act”), a landmark legislation requiring all web browsers operating in the state to include a built-in privacy control that allows users to automatically signal their preference not to have their personal data sold or shared. This new feature, known as the Opt-Out Preference Signal (OOPS), enables users to communicate their privacy choices to every website they visit, eliminating the need for repetitive manual opt-outs and offering a single, seamless way to protect sensitive information such as browsing history and location data.

Set to take effect in January 2027, the Act strengthens Californians’ ability to exercise their rights under the California Consumer Privacy Act, 2018 by turning privacy preference into a default browser setting. It closes long-standing gaps in enforcement where opt-out requests were inconsistently honored, ensuring that privacy rights are automatic, practical, and enforceable. The Act is expected to serve as a national model, empowering individuals to take control of their digital identities while compelling businesses and browsers alike to prioritize user privacy by design.
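
Opt-out preference signals of this kind build on the existing Global Privacy Control (GPC) mechanism, under which a conforming browser transmits the HTTP request header `Sec-GPC: 1` with every request. As a minimal, hypothetical sketch (not something the Act prescribes), a website's server could detect such a signal like this:

```python
def request_opts_out(headers: dict[str, str]) -> bool:
    """Return True if the request carries a GPC opt-out preference signal.

    Under the GPC proposal, a conforming user agent sends `Sec-GPC: 1`.
    HTTP header names are case-insensitive, so normalise before lookup.
    """
    normalised = {name.lower(): value.strip() for name, value in headers.items()}
    return normalised.get("sec-gpc") == "1"

# A site honouring the signal would then suppress the sale or sharing of
# that user's personal information for the session.
```

The point of a browser-level default, as the Act envisions, is that this check succeeds on every request without the user lodging a separate opt-out with each business.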

13. Tractor supplying Company penalized for Ignoring Privacy Opt-Outs for Data Sharing[13]

The CPPA fined Tractor Supply Company (“Company”) USD 1.35 million for violating the California Consumer Privacy Act, 2018, marking its largest penalty to date. The CPPA’s investigation revealed that the Company failed to post a clear privacy notice, ignored opt-out requests (including via browser signals like Global Privacy Control), and shared personal data with vendors and third parties without contractual safeguards.

Under the enforcement order, the Company must honour all opt-out signals automatically, update its data-tracking inventory, revise third-party contracts, and train staff to handle privacy requests correctly. It must also publish annual privacy metrics for five years. The case underscores California’s aggressive stance on privacy compliance, confirming that companies must respect user choices across all data-sharing channels, not just direct customer interactions.

14. New York pushes for Stronger Data Security after Data Breaches at Accounting Firm[14]

The New York Attorney General has reached a settlement with Wojeski & Company (“Company”), a public accounting firm, over two data breaches that exposed the personal and financial information of more than 4,700 New Yorkers. The breaches, caused by a ransomware attack in 2023 and unauthorized data access in 2024, involved unencrypted Social Security numbers and delayed disclosure to clients, violating state data breach notification laws. As part of the settlement, the Company will pay a USD 60,000 penalty and adopt strict cybersecurity reforms, including data encryption, access restrictions, multi-factor authentication, employee training, and a formal incident response plan to ensure timely notifications in the future. The Attorney General emphasized that data protection must be a top priority for all businesses, warning that failure to safeguard personal information now brings severe financial, reputational, and legal consequences.

This enforcement action underscores that robust cybersecurity measures are no longer optional but essential for organizations handling sensitive data. With regulators increasingly holding businesses accountable for breaches, proactive steps such as encryption, access controls, and employee awareness are now the need of the hour to protect personal information and maintain consumer trust in the digital era.

15. E-commerce Platform settles FTC Charges over Deceptive Enrolment and Cancellation Practices[15]

Amazon.com Inc (“Company”) has been charged by the FTC with using deceptive practices to enrol consumers into Prime membership without clear consent and making cancellation deliberately difficult. The FTC found that the Company’s interface obscured key terms and employed manipulative design tactics, violating the Restore Online Shoppers’ Confidence Act, 2010.

Under the settlement order, the Company will pay USD 1 billion in civil penalties and USD 1.5 billion in consumer refunds, and will overhaul its subscription interface to include clear disclosures, a visible ‘decline’ option, and a one-click cancellation process, with oversight by an independent monitor. The settlement sets a strong precedent against manipulative digital design, affirming that convenience-driven platforms must uphold transparent and informed consumer choice.

16. District Courts lay down ground rules for Fair Use in AI model training

In recent decisions, including Bartz v. Anthropic[16] and Kadrey v. Meta Platforms, Inc.[17], the district courts have drawn key boundaries around copyright fair use in AI training. The courts held that training AI models on lawfully obtained copyrighted material can be transformative and qualify as fair use, provided the AI does not reproduce the originals or harm their market. However, using pirated or unauthorized data disqualifies developers from this protection.

Both courts applied the standard four-factor fair use test examining the purpose of use (where transformative AI training favored fair use), the nature of the original work (creative works weighed against fair use but not conclusively), the amount used (permitting full copying when essential for model training), and the effect on the market (finding no infringement if AI outputs did not replicate or harm the originals’ market value). However, since the companies involved had used pirated or unauthorized data repositories, the courts held that such conduct fell outside the scope of fair use, exposing them to potential damages and further trial.

These rulings underscore that copyright compliance in AI development requires lawful data sourcing, transparent dataset disclosure, and safeguards against unauthorized reproduction in outputs. The principles laid down could serve as an important reference point for ongoing cases in India, such as the OpenAI matter, where similar questions on the legality of training AI models on copyrighted news content are under judicial scrutiny.

17. Software Giant faces Class Action over Alleged Anticompetitive Control of AI[18]

A proposed class action lawsuit has been filed against Microsoft Corporation (“Microsoft”) in the District Court of California, alleging that Microsoft used its exclusive partnership with OpenAI LLC (“OpenAI”), the operator of ‘ChatGPT’, to artificially inflate prices and restrict competition in the consumer generative AI market.

According to the complaint, Microsoft’s early investment agreements granted it exclusive control over OpenAI’s compute resources via Azure, effectively allowing Microsoft, which competes with OpenAI through its own Copilot products, to limit OpenAI’s output and dictate pricing. The lawsuit claims this ‘compute chokehold’ inflated ChatGPT prices up to 200 times higher than competitors’, degraded product quality, and prevented timely model improvements. When OpenAI was finally permitted in mid-2025 to procure computing resources from Google, its API prices dropped by nearly 80% overnight, allegedly confirming the anticompetitive impact. The plaintiffs seek damages and injunctive relief to bar Microsoft from maintaining such exclusive control, arguing that the arrangement constitutes an unlawful restraint of trade under the Sherman Act, 1890. Although this case is still at an initial stage, it will be closely watched to understand how competition law applies to big tech partnerships that control access to essential AI infrastructure.

EUROPEAN UNION

18. Draft Guidelines on Balancing Competition and Privacy related Obligation for Large Online Platforms released[19]

The EDPB and the European Commission have jointly issued the Joint Guidelines on the Interplay between the Digital Markets Act and the General Data Protection Regulation (“Guidelines”), clarifying how the two legislations must operate together. The Guidelines make it clear that large online platforms or ‘gatekeepers’, such as major search engines, app stores, and messaging services, must comply with both frameworks simultaneously. While the DMA ensures fair competition and prevents abuses of dominance, the GDPR protects individuals’ privacy and data rights. The Guidelines warn that gatekeepers cannot invoke the DMA to justify processing personal data in ways that would breach the GDPR. Any data sharing, interoperability, or profiling activities must still meet GDPR requirements such as lawful basis, transparency, purpose limitation, and valid consent. To ensure coordinated enforcement, data protection and competition authorities will enhance cooperation and information sharing.

Feedback is invited till December 4, 2025.

19. Medical Firm penalized for Patient Data Breach[20]

The Office of the Data Protection Authority (“ODPA”) fined Medical Specialist Group LLP (“Firm”) GBP 100,000 after a major cyberattack exposed sensitive health information from its email system, impacting thousands of patients across Guernsey. Investigations uncovered that the Firm had failed to install critical security updates to its Microsoft Exchange email server for over a year, leaving the infrastructure vulnerable. In August 2021, cybercriminals exploited these gaps, stealing emails containing private patient details and using them for targeted phishing campaigns. The breach went undetected for months, and the Firm’s initial claim of having secured its systems was refuted by later technical assessments. The ODPA’s inquiry found that the Firm’s oversight of threat detection and incident investigation was inadequate, breaching local data protection laws on several counts.

Following the penalty, the Firm is required to pay GBP 75,000 within 60 days and the remaining GBP 25,000 after 14 months, with the latter amount waived if all prescribed remedial actions are completed on time. Measures already taken include shifting to a cloud-based email platform, enhanced system monitoring, and staff training on data security.

20. Credit Rating Agency fined for Unlawful Data Profiling[21]

The Dutch Data Protection Authority has imposed a Euro 2.7 million fine on Experian Nederland B.V. (“Company”) for illegally creating and selling consumer credit profiles without proper consent or a legal basis under the GDPR. The Company compiled extensive databases of personal and financial data from public records and private companies, which were then used by clients such as landlords, telecom providers, and lenders to evaluate creditworthiness and contract eligibility.

It was observed that consumers were often unaware their data had been collected or used to generate credit assessments that influenced financial decisions. The Company failed to provide sufficient transparency, justification, or access rights for individuals to correct errors. As part of the enforcement outcome, the Company must delete its entire database by the end of 2025 and cease data processing activities in the Netherlands. The Company accepted the decision and will not appeal, marking one of the strictest enforcement actions in Europe for non-consensual data profiling.

21. German Federal Labour Court clarifies GDPR liability in Employee Background Checks[22]

Germany’s Federal Labour Court has issued a key ruling on GDPR compliance in employee screening, holding that employers must have a valid legal basis when conducting online background checks. The case concerned a lawyer whose job application was rejected by a university after it discovered an old, non-final criminal conviction through an internet search. The court found that the university failed to inform the applicant about the categories of data being processed and lacked a lawful justification for collecting sensitive criminal information. The applicant was awarded Euro 1,000 in non-material damages for loss of privacy and dignity. The judgment clarifies that even publicly available online information cannot be processed without meeting GDPR principles of transparency and purpose limitation. For human resource departments, this decision underscores the need for clear applicant notices, specific justification for online searches, and transparency when sensitive data influences hiring decisions.

This ruling offers timely lessons for India, where the DPDP Act is yet to be fully implemented. Similar issues could arise as employers increasingly rely on online searches and digital profiles during recruitment. The case highlights the importance of transparency, consent, and purpose limitation, principles central to the DPDP Act. Indian organizations should proactively adopt these safeguards now to avoid future compliance risks once the law comes into force.

22. Not All Cookies or IP Addresses Are Personal Data, Holds Polish Court[23]

Poland’s Supreme Administrative Court (“NSA”) resolved a key issue concerning the classification of IP addresses and cookie IDs as personal data under the GDPR. The case concerned a company that processed a user’s IP address and cookie identifiers and was ordered by the Personal Data Protection Office to remove them and inform third parties of their deletion. The central issue in the dispute was whether, in the specific circumstances, these internet identifiers actually allowed an individual to be identified, making them ‘personal data’ under both the GDPR and Polish law.

The NSA observed that neither an IP address nor a cookie identifier can automatically be treated as personal data, regardless of whether the address is static or dynamic. Instead, the court emphasised a context-based, ‘reasonably likely’ test: authorities must assess whether identification of a natural person is realistically possible given the technology, cost, and time available at the time of processing. The NSA faulted the data protection authority for presuming identifiability without examining these conditions or considering whether the company could, in that context, tie the identifiers to a specific person. This reasoning aligns with both CJEU doctrine and prior Polish rulings, rejecting blanket assumptions in favour of a fact-based analysis in each case. By dismissing the cassation appeal, the NSA underlined that supervisory authorities must provide detailed justification and clear evidence when declaring that online identifiers amount to personal data.

This ruling offers much-needed relief to businesses by confirming that not all online identifiers, such as cookies or IP addresses, automatically qualify as personal data. It recognises that liability should depend on whether an individual can actually be identified using available technology and resources. The judgment brings practical clarity for companies, ensuring they aren’t unfairly penalised for using routine technical features that don’t, by themselves, enable user identification.

UNITED KINGDOM

23. Tribunal dismisses Major Tech Company’s Encryption Challenge[24]

The Investigatory Powers Tribunal has formally dismissed Apple Inc.’s (“Company”) high-profile case against the UK Home Office after the Government modified its Technical Capability Notice (“TCN”) demanding backdoor access to encrypted iCloud data. The original January 2025 notice required the firm to enable Government access to encrypted messages and backups protected by its Advanced Data Protection (ADP) feature. The Company argued that such a demand would compromise user privacy and warned it might withdraw encryption services from the UK, sparking international concern over civil liberties. Following intense criticism, the Home Office issued a revised TCN in October 2025, narrowing its request to data belonging only to UK-based users. Both parties agreed to discontinue the case in light of this modification. While the Company confirmed that its enhanced encryption will remain unavailable in the UK for now, it reiterated its global commitment to data security and privacy-by-design. The case leaves open the broader debate on how far Governments can compel access to encrypted data without undermining digital security standards.

24. Outsourcing Company fined for Cybersecurity failures[25]

The ICO imposed a GBP 14 million fine on Capita PLC (“Company”), one of Britain’s largest outsourcing firms, for failing to protect the personal data of over 6.6 million people during a cyber-attack in March 2023. The breach occurred after hackers gained access through a malicious file downloaded onto a company device. The Company took 58 hours to isolate the compromised system, allowing attackers to steal nearly one terabyte of personal and financial data and deploy ransomware. The stolen information included not just basic records but also sensitive data such as criminal records and financial details, causing substantial anxiety and distress for those affected.

The ICO investigation found that the Company had ignored known security weaknesses, failed to limit administrator access, and operated with an understaffed security team. The Company also offered minimal post-incident support to affected clients. The fine, originally set at GBP 45 million, was reduced in light of the Company’s cooperation and remedial actions, including enhanced access controls and engagement with the National Cyber Security Centre. The case serves as a reminder that timely response, regular testing, and strong governance are essential to safeguarding large-scale data environments.

OTHERS

25. Canada tightens Data Protection Rules in Hiring Practices[26]

The Commission d’accès à l’information (“CAI”) in Quebec, Canada has issued new guidance establishing stricter limits on personal data collection during recruitment. Employers may now gather only information essential to assessing a candidate’s suitability, such as name, contact details, and work or academic history, at the initial stage. Requests for references, medical information, or background checks must be deferred to later stages and tied directly to the role’s requirements.

Written candidate consent is mandatory before verifying references, qualifications, or criminal history. Employers using AI or psychometric tools must disclose their use upfront and ensure assessments remain relevant to job performance. The guidelines also restrict social media screening to professional profiles and require prompt destruction of applicant data once hiring concludes. The CAI emphasises that necessity, not convenience, remains the standard, and over-collection or irrelevant questioning may result in enforcement action. These rules promote fairness, transparency, and data minimization throughout the hiring process.

ABBREVIATIONS

AI – Artificial Intelligence
CCPA – California Consumer Privacy Act, 2018
DMA – Digital Markets Act, 2022
DPA – Data Protection Authority
DPDP Act – Digital Personal Data Protection Act, 2023
ECI – Election Commission of India
EDPB – European Data Protection Board
GDPR – General Data Protection Regulation (EU) 2016/679
ICO – Information Commissioner’s Office (UK)
IRDAI – Insurance Regulatory and Development Authority of India
IT Act – Information Technology Act, 2000
MIB – Ministry of Information and Broadcasting
MeitY – Ministry of Electronics and Information Technology
TRAI – Telecom Regulatory Authority of India

Authors:

  • Rashmi Deshpande
  • Aarushi Ghai
  • Vaibhav Gupta

[1] https://irdai.gov.in/web/guest/document-detail?documentId=7948519
[2] https://www.meity.gov.in/static/uploads/2025/10/18bae7782749f36ebb062fdb0b2607ea.pdf
[3] https://www.meity.gov.in/static/uploads/2025/10/9de47fb06522b9e40a61e4731bc7de51.pdf
[4] https://www.ncdex.com/public/uploads/circulars/Compliance%20with%20directives%20issued%20by%20Telecom%20Regulatory%20Authority%20of%20India(TRAI)_01102025_1759322048.pdf
[5] X Corp v. Union of India, WP 7405/2025
[6] In the matter of Zettai Pte. Ltd., HC/SUM 940/2025
[7] Zanmai Labs Pvt. Ltd. v. Bitcipher Labs LLP, Commercial Arbitration Petition (L) No. 11646 of 2025 and Zanmai Labs Pvt. Ltd. v. Nextgendev Solutions Pvt. Ltd., Commercial Arbitration Petition (L) No. 11975 of 2025
[8] Rhutikumari v. Zanmai Labs Pvt. Ltd, Original Application No.194 of 2025
[9] https://www.trai.gov.in/sites/default/files/2025-10/Draft_Tariff_16102025.pdf
[10] https://www.eci.gov.in/eci-backend/public/api/download?url=LMAhAK6sOPBp%2FNFF0iRfXbEB1EVSLT41NNLRjYNJJP1KivrUxbfqkDatmHy12e%2FzX%2FLARKC1lI3JwqUiIIk3e%2B7gOOlUZXPW%2BNP40Y7lTYHctZoALayD8g8gQjxekHZkSIq%2Fi7zDsrcP74v%2FKr8UNw%3D%3D
[11] https://www.gov.ca.gov/wp-content/uploads/2025/09/SB-53-Signing-Message.pdf
[12] https://cppa.ca.gov/announcements/2025/20251008_2.html
[13] https://cppa.ca.gov/pdf/20250930_tractor_supply_bd_sfo.pdf
[14] https://ag.ny.gov/press-release/2025/attorney-general-james-announces-settlement-accounting-firm-failing-protect-new
[15] https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-secures-historic-25-billion-settlement-against-amazon
[16] Bartz v. Anthropic, 2025 WL 1741691 (N.D. Cal. June 23, 2025)
[17] Kadrey v. Meta Platforms, 2025 WL 1752484 (N.D. Cal. June 25, 2025)
[18] Samuel Bryant, Dominique Cavalier, v. Microsoft Corporation, Case No. 3:25-cv-8733
[19] https://www.edpb.europa.eu/our-work-tools/documents/public-consultations/2025/joint-guidelines-interplay-between-digital_en
[20] https://www.odpa.gg/news/news-article/?id=744842ad-8dad-f011-bbd2-7ced8d13a51c
[21] https://www.autoriteitpersoonsgegevens.nl/actueel/experian-krijgt-boete-van-27-miljoen-euro-voor-privacyovertredingen
[22] https://www.bundesarbeitsgericht.de/entscheidung/8-azr-117-24/
[23] Appealed authority – Inspector General for Personal Data Protection, II SA/Wa 3993/21
[24] https://investigatorypowerstribunal.org.uk/judgement/apple-inc-v-secretary-of-state-for-the-home-department/
[25] https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2025/10/capita-fined-14m-for-data-breach-affecting-over-6m-people/
[26] https://www.cai.gouv.qc.ca/protection-renseignements-personnels/sujets-et-domaines-dinteret/operation-recrutemement-emploi
