Technology Law and Data Privacy Updates

Monthly Edition - June 2025

INDEX 

A. SUMMARY

B. NATIONAL UPDATES 

C. INTERNATIONAL UPDATES

United States of America

United Kingdom

European Union

Others

D. ABBREVIATIONS

SUMMARY 

Welcome to this edition of Fountainhead Legal’s newsletter, where we bring you the latest legal and regulatory developments shaping the rapidly evolving world of data privacy, technology, AI governance, and platform regulation in India and beyond.

This month, concerns around the misuse of investigative powers came to the forefront, following the Enforcement Directorate’s (“ED”) issuance, then withdrawal, of a summons to a senior advocate over a legal opinion provided in a professional capacity. The legal community swiftly and rightly responded, with bar associations warning that such actions undermine the independence of the legal profession and erode the democratic principle that individuals must be free to seek legal advice without fear of reprisal. While the ED ultimately withdrew the notice, the incident is far from trivial. It signals a troubling precedent where lawyers may be dragged into investigative proceedings merely for discharging their professional duties, weakening client confidence and the foundational protections offered by legal representation. We view this as a moment that calls for strong institutional reaffirmation of the independence, inviolability, and professional autonomy of legal counsel.

On the regulatory front, India’s data protection regime took a major step forward with the release of the Consent Management Framework under the DPDP Act. It marks a departure from symbolic checkbox compliance toward technical enforcement and system-level accountability, a shift that will require both strategic redesign of data ecosystems and operational alignment across stakeholders. The spotlight on AI governance is also intensifying. SEBI’s draft Guidelines for Responsible Use of AI/ML in Indian Securities Markets is a timely initiative. It reflects an early, proactive attempt to regulate AI before it becomes deeply embedded in capital market operations. The draft emphasises key principles such as fairness, explainability, model accountability, and bias control, offering a starting point for sector-specific AI governance. However, it raises important questions that stakeholders must engage with: How do we ensure explainability without forcing disclosure of proprietary models? Should compliance obligations be based on size and risk profile? In addition, the Government has also proposed amendments to the Telecom Cybersecurity Rules, 2024, introducing a centralised platform for mobile number validation and a tampered-IMEI database. These measures aim to strengthen digital identity verification and curb the circulation of unauthorised mobile devices, steps that signal India’s broader commitment to tightening security and trust in its digital infrastructure.

Judicial rulings this month affirm that privacy and platform rights cannot override legal accountability in the digital space. The Karnataka High Court upheld police access to user data during criminal investigations, while the Madras High Court allowed a contract suit against Google to proceed, clarifying that dominant platforms are not beyond civil scrutiny. In another significant ruling, the Madras High Court upheld restrictions on real money online gaming, holding that measures such as KYC requirements, Aadhaar-based login (with central approval), and night-time access limits are proportionate responses to demonstrated public harm. Collectively, these decisions make clear that digital businesses—particularly those operating in high-risk sectors—must align with established legal standards and regulatory safeguards.

Globally, there’s been increased debate over the pace of AI regulation. The proposed federal moratorium on state-level AI laws in the U.S. was recently dropped, paving the way for Texas to pass one of the most comprehensive AI laws to date. Conversely, in the EU, where the AI Act is taking shape, some Member States are advocating for a phased rollout to avoid derailing innovation with premature compliance burdens. These contrasting approaches, fast-track legislation versus gradual enforcement, reflect an evolving regulatory philosophy: not all AI use is equal, and compliance should match actual deployment maturity and risk. India, too, must weigh these considerations as it expands AI regulation beyond finance and into other critical sectors. A blanket, one-size-fits-all approach may hinder the very innovation the country seeks to promote. Instead, sectoral guidelines like SEBI’s, paired with adaptive enforcement and regulatory sandboxes, may offer a more effective, balanced path forward.

Internationally, enforcement actions continued to shape how businesses manage data and platform power. In Europe, regulators fined companies in the financial and employment sectors for improper data disclosure and unauthorised reliance on digital evidence. In Canada, courts reaffirmed that social media images are not fair game for facial recognition training, and South Korea launched a probe into mass data leaks and exploitation of minors in digital content. Across jurisdictions, the common thread is clear: legal systems are no longer struggling to catch up; they are actively defining the boundaries of responsible innovation. Whether through consent frameworks, AI accountability, or judicial interventions in platform governance, the digital rulebook is being rewritten in real time.

At Fountainhead Legal, we see this as an opportunity for companies to shift from compliance as a legal burden to compliance as a design principle. As laws evolve, organisations must build structures that are legally sound, operationally agile, and technologically resilient. Because in the digital age, what is lawful must also be ethical and what is innovative must also be accountable.

We are committed to supporting organizations on this journey. With our deep expertise in data privacy compliance and a strong understanding of regulatory nuances, we offer tailored solutions for each client’s unique needs. From drafting privacy policies and developing data protection frameworks to advising on cross-border data transfers and facilitating employee training programs, our team is equipped to guide clients through every stage of their compliance strategy.

We hope you enjoy our latest updates!

NATIONAL 

1.  Amendments Proposed to Telecommunications (Telecom Cyber Security) Rules, 2024 [1]

The Ministry of Communications published draft amendments (“Draft Rules”) to the Telecommunications (Telecom Cyber Security) Rules, 2024, introducing a comprehensive framework for telecom identifier validation and International Mobile Equipment Identity (“IMEI”)-based device monitoring. The Draft Rules aim to strengthen national cyber hygiene by tightening obligations on telecom operators and digital service platforms using telecom identifiers for user verification.

A key highlight of the Draft Rules is the proposed establishment of a Mobile Number Validation (“MNV”) Platform, which would enable authorised entities and Telecommunication Identifier User Entities (“TIUEs”), including digital service providers and intermediaries, to verify whether a customer’s telecom identifier (such as a mobile number) matches official telecom records. TIUEs will be required to pay a fee for each validation and must ensure compliance with applicable data protection laws when conducting these checks. Further, the Government proposes maintaining a central database of tampered or restricted IMEIs and mandating that any sale or purchase of used mobile devices in India be preceded by a check against this database, aiming to prevent the circulation of tampered or unauthorised devices.

Public feedback on the Draft Rules is open till July 23, 2025.

The Draft Rules mark a significant step toward strengthening digital identity assurance and telecom ecosystem integrity in India. By introducing a centralised MNV platform and mandating IMEI checks for used devices, the Government aims to curb identity fraud, reduce telecom-enabled financial crimes, and restrict the circulation of compromised handsets. These measures are especially relevant for digital platforms, fintech companies, and device resellers, who will need to integrate these checks into their onboarding and transaction processes. While the Draft Rules align with broader objectives under the DPDP Act and cybersecurity frameworks, careful attention must be paid to ensure operational feasibility, cost implications, and privacy safeguards during implementation.
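The Draft Rules do not yet publish a validation API or database schema, but the mandated pre-sale IMEI check can be sketched under stated assumptions. In the sketch below, the `TAMPERED_IMEIS` set stands in for the proposed central Government database, and all names are hypothetical illustrations rather than anything drawn from the Draft Rules:

```python
# Hypothetical pre-sale gate for a used-device marketplace: reject malformed
# IMEIs up front, then consult the (stand-in) tampered-IMEI database.
TAMPERED_IMEIS = {"358240051111110"}  # placeholder for the central database

def luhn_valid(imei: str) -> bool:
    """IMEIs are 15 digits whose last digit is a Luhn check digit."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:  # double every second digit (counting from the right)
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def may_sell_device(imei: str) -> bool:
    """A sale may proceed only if the IMEI is well-formed and not flagged."""
    return luhn_valid(imei) and imei not in TAMPERED_IMEIS
```

In practice the database lookup would be a call to the Government platform rather than a local set; the point of the sketch is simply that resellers would need a hard gate of this shape in their transaction flow.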

2. MeitY released Standards of Consent Management Systems under Data Privacy Regulation [2]

MeitY released the Business Requirements Document for India’s Consent Management System (“CMS”) under the DPDP Act. The CMS will form the technical backbone for consent handling by Data Fiduciaries across the personal data ecosystem. The CMS enables individuals whose data is being collected (i.e., data principals) to grant, update, renew, or withdraw consent in a purpose-specific, informed, and revocable manner. It strictly prohibits pre-checked or bundled consents and mandates clear, affirmative user action for each data processing purpose. It also supports multi-language access and accessibility-compliant interfaces. The core features include real-time consent validation, dashboards for data principals, metadata logging, and immutable audit trails to ensure accountability. Businesses collecting personal data (i.e., data fiduciaries) are required to notify users of changes, verify consent before processing, and respond to grievances or data access requests through the CMS. The CMS integrates with grievance redressal mechanisms and sets operational standards for secure logging, consent verification, and system-wide notifications. All actions must be recorded and cryptographically hashed to meet audit and compliance requirements under the DPDP Act.

This is a major milestone in operationalising India’s data protection regime, as it sets out the technical and functional blueprint for consent management across all sectors. By mandating purpose-specific, revocable, and auditable consent, the CMS framework will transform how businesses design user interfaces, data flows, and compliance systems. It also signals that organisations must prepare for robust consent governance, seamless integration with grievance mechanisms, and heightened accountability, moving beyond policy-level compliance to implementing secure, verifiable consent practices in their daily operations.
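The Business Requirements Document mandates an immutable, cryptographically hashed audit trail without prescribing an implementation. One common way to meet that requirement is a hash chain, illustrated in the minimal sketch below; the `ConsentLog` class and its field names are hypothetical and not part of the official CMS specification:

```python
import hashlib
import json
from datetime import datetime, timezone

def hash_record(record: dict, prev_hash: str) -> str:
    """Chain each consent event to the previous one, so altering any
    earlier entry invalidates every hash that follows it."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class ConsentLog:
    """Append-only log of purpose-specific consent events."""

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def record_event(self, principal_id: str, purpose: str, action: str) -> str:
        record = {
            "principal_id": principal_id,
            "purpose": purpose,  # one entry per processing purpose
            "action": action,    # e.g. "grant", "update", "renew", "withdraw"
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        prev = self.entries[-1][1] if self.entries else "0" * 64
        h = hash_record(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute the whole chain to confirm no entry was altered."""
        prev = "0" * 64
        for record, h in self.entries:
            if hash_record(record, prev) != h:
                return False
            prev = h
        return True
```

The design choice the sketch demonstrates is that tamper-evidence comes from the chaining, not from any single hash: an auditor who trusts the latest hash can detect modification of any earlier consent record.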

3. SEBI issues Draft Guidelines for Use of AI in Securities Markets [3]

SEBI released a consultation paper proposing Guidelines for Responsible Usage of AI/ML in Indian Securities Markets (“Draft Guidelines”), with an aim to promote safe and ethical use of AI and machine learning. The Draft Guidelines focus on five key areas. First, ‘Model Governance’, which requires regulated entities such as stockbrokers, mutual funds, and portfolio managers to establish strong oversight frameworks, conduct regular audits, maintain detailed documentation, and ensure accountability at the senior management level. Second, ‘Investor Protection and Disclosure’, which mandates clear communication to clients about the AI tools being used, including their purpose, limitations, risks, accuracy, and costs. Third, ‘Testing and Monitoring’, which requires maintaining separate testing and live environments, conducting shadow testing using real data, and retaining model records for at least five years. Fourth, ‘Fairness and Bias Control’, which emphasises testing AI systems for biases and training staff to identify and avoid discriminatory outcomes. Finally, ‘Data Privacy and Cybersecurity’, which requires compliance with data protection laws, implementation of robust security measures, and prompt reporting of any data breaches or system failures.

Public feedback on the Draft Guidelines is invited till July 11, 2025.

Stakeholders may consider making representations on several key areas such as challenges in ensuring AI model explainability and transparency, proportional compliance obligations for smaller entities, confidentiality concerns around proprietary AI algorithms, the need for clarity on liability in the event of AI system failures, and the practical constraints involved in testing and mitigating algorithmic bias.

4. Madras High Court upheld that Contractual Disputes are not barred by CCI Jurisdiction [4]

The Madras High Court dismissed Google’s application to reject a civil suit filed by an ed-tech company, challenging the contractual validity of Google’s Play Store billing policies. The court held that the claims raised by the Company were rooted in contract law and not barred under Section 61 of the Competition Act, 2002. The Company had sought to declare certain terms in the Developer Distribution Agreement, including Clause 15.3, as unenforceable, citing economic duress, abuse of dominant bargaining power, and violations of Sections 16 and 62 of the Indian Contract Act, 1872. Google argued that the suit was barred due to the exclusive jurisdiction of the Competition Commission of India over anti-competitive conduct. However, the court found that the dispute involved in personam contractual rights and did not require adjudication under competition law.

This decision affirms that civil courts may entertain claims involving dominant digital platforms where the relief sought pertains to specific contractual obligations. It reinforces the principle that platform dominance alone does not preclude judicial scrutiny under general contract law, especially where allegations involve unilateral and potentially unfair terms.

5. Bombay High Court provides Relief in Social Media Arrest Matter [5]

The Bombay High Court ordered the release of a 19-year-old engineering student arrested for sharing an Instagram post about the Bharat-Pakistan war. The student had shared the post on May 7, 2025, which allegedly caused religious tension. However, within two hours, she deleted the post and issued a public apology expressing her remorse. Despite this, an FIR was registered against her on May 9, 2025, under several provisions of the BNS, including those relating to promoting enmity and disturbing public peace, and she was taken into custody. The arrest led to her rustication by her college, preventing her from appearing for her semester examinations. The court criticised the police action as excessive, especially given her immediate apology and lack of any criminal background. The court ordered her release the same evening to enable her to attend her remaining exams and suspended the college’s rustication order, noting it had been passed in haste without giving her a fair chance to explain.

This case raises significant concerns around the criminalisation of online expression, especially by young individuals. It highlights the need for proportionate law enforcement in matters involving digital speech and reinforces that remedial steps, like deletion and apology, must be reasonably considered before invoking harsh measures like arrest and academic penalties.

6. Andhra Pradesh High Court reinforces Need for Due Process for Arrest in Social Media Abuse Case [6]

The Andhra Pradesh High Court directed that no arrest be made without following due process under Section 35(3) of the BNSS, which requires notice before arrest in offences punishable up to 7 years. The petitioner, in charge of a political party’s social media wing, faced multiple FIRs across the State for allegedly paying individuals to post abusive and derogatory content against rival political leaders on social media. He was booked under several provisions of BNS, including Sections 61, 79, and notably Section 111 on organised crime, alongside offences under the IT Act such as computer-related offences, privacy violation, obscene content, and sexually explicit content. The court held that mere allegations of coordinated abusive posts did not meet the threshold for ‘organised crime’ under Section 111 BNS, which requires proof of material benefit and prior charge sheets for similar offences within ten years, neither of which was established here. It further noted that confessions of co-accused had limited evidentiary value, especially when many posts predated the BNS. Urging the State to act against online toxicity, the court recommended issuing executive directions to block hate-filled language and suggested that digital platforms implement ‘auto-block’ mechanisms for offensive content.

This decision underscores that while offensive online content may attract liability under the IT Act, invoking harsher organised crime provisions requires strict compliance with statutory conditions. It reinforces proportional enforcement in digital offences and calls upon the State to curb online abuse through policy tools like executive directions and platform-based auto-block mechanisms rather than excessive criminal action.

7. Karnataka High Court allows Police to seek User Data from Fintech Company for alleged Online Betting Case [7]

The Karnataka High Court dismissed a petition filed by a fintech company challenging a notice sent by the Police seeking transaction data linked to alleged online betting. The company argued that, as a regulated system provider under the Payments and Settlement Systems Act, 2007 and an intermediary under the IT Act, it could not share user data without a court order, relying on the Bankers’ Books Evidence Act, 1891 (“BBE Act”) to claim that such disclosure required judicial authorisation.

The court held that the ‘Investigating Officer’ under the CrPC is a competent statutory authority to seek such information for investigation purposes. It noted that the applicable legislations do not bar sharing of user data with law enforcement and that the BBE Act merely governs evidentiary admissibility, not investigatory access. Further, Rule 3 of Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, also permits data sharing for lawful investigation. The court rejected the argument of blanket confidentiality, emphasising that privacy rights must yield to legitimate state interests in investigating cybercrimes.

This ruling reinforces that privacy protections cannot be used as a shield against legitimate criminal investigations. Regulated entities and digital intermediaries must recognise that statutory obligations to protect user data coexist with legal duties to cooperate with law enforcement. In cases involving serious offences such as online betting or cyber fraud, data disclosure—when properly requested under lawful authority—is not only permissible but essential to upholding public interest and ensuring effective enforcement.

8. SEBI extends Deadline for Compliance with Cybersecurity Framework [8]

SEBI extended the compliance deadline for its Cybersecurity and Cyber Resilience Framework (“CSCRF”) till August 31, 2025, for all regulated entities (“REs”) such as stockbrokers, stock exchanges, and mutual funds, except Market Infrastructure Institutions, KYC registration agencies, and Qualified Registrars to an Issue and Share Transfer Agents. The CSCRF, originally published on August 20, 2024, consolidates guidelines to introduce a unified framework focused on robust internal governance, real-time threat monitoring, encryption, and secure software practices. It also introduces a Cyber Capability Index and mandates 24×7 Security Operations Centres (“SOCs”) for large REs, with flexibility for smaller REs to use shared SOCs provided by exchanges.

SEBI granted the extension following industry requests for more time to implement the framework. Stock exchanges and depositories have been directed to disseminate the circular to their members and participants. The extension aims to ensure smoother adoption without compromising cybersecurity preparedness.

9. Madras High Court upholds Restrictions on Online Real Money Games[9]

The Madras High Court upheld the Tamil Nadu Online Gaming Authority (Real Money Games) Regulations, 2025 (“Regulations”), dismissing petitions filed by online gaming platforms that had challenged the restrictions imposed under these Regulations. The Regulations were issued under the Tamil Nadu Prohibition of Online Gambling and Regulation of Online Games Act, 2022. The court rejected the argument that the Regulations conflicted with the IT Act, holding that while online games use telecom networks, States have legislative authority over public health and safety under the Constitution. It found that real money games are not exclusively governed by the IT Act, and in the absence of a comprehensive regulatory framework under the Intermediary Rules, the State’s intervention was not inconsistent with central law. Petitioners also objected to mandatory KYC verification and Aadhaar-based authentication for initial login, citing privacy concerns. The court held that online game providers are permitted to use Aadhaar authentication with prior approval from the Central Government under the Aadhaar Act, 2016 and the rules thereunder. Further, while upholding the time-based restriction on accessing real money games, the court emphasised that the right to privacy is not absolute and may be reasonably restricted to protect users from harm. It observed that the measures are proportionate to the State’s legitimate aim of addressing online gaming addiction and its associated risks.

This ruling makes it clear that user privacy and business rights must align with the State’s duty to protect public health. Online gaming platforms cannot use privacy as a shield against reasonable safeguards like KYC or time-based restrictions. When there is evidence of harm, such measures are both lawful and necessary.

INTERNATIONAL 

UNITED STATES OF AMERICA

10. Senate passes Legislation to Regulate US Dollar-Backed Stablecoins[10]

On June 17, 2025, the Senate passed the Guiding and Establishing National Innovation for U.S. Stablecoins Act of 2025 (“Act”), which proposes a federal framework for regulating U.S. dollar-backed payment stablecoins. The Act was introduced in the House of Representatives on June 23, 2025, and is currently under review. The Act allows only authorised entities, such as insured banks, credit unions, their subsidiaries, and specially licensed non-bank institutions approved by the Office of the Comptroller of the Currency, to issue payment stablecoins. Each token must be backed one-to-one by safe and liquid assets like cash, insured deposits, or short-term U.S. Government securities. Issuing entities are required to submit monthly reserve reports, verified by independent auditors, and are strictly prohibited from using reserve funds for lending or investment purposes. In addition, the Act lays down robust compliance standards covering governance, cybersecurity, liquidity, and anti-money laundering. It also empowers both federal and state regulators to intervene swiftly if any stablecoin activity threatens financial stability or consumer protection.

If passed by the House and signed into law, the Act (commonly referred to as the “GENIUS Act”) would become the first comprehensive U.S. law governing fiat-backed stablecoins, offering legal clarity and stronger protections for users in the digital asset space.

11. Court orders OpenAI to preserve User Data in New York Times Copyright Case [11]

The U.S. District Court for the Southern District of New York directed OpenAI to retain all user-generated outputs, even those deleted at user request or under privacy laws, to preserve evidence for the case.

This order comes as a result of the New York Times Company (“NYT”) suing Microsoft and OpenAI for alleged copyright infringement. NYT claims that Microsoft is liable because it has integrated OpenAI’s models into its products like ‘Bing’ and ‘Copilot’, benefiting commercially from outputs generated using NYT content. OpenAI and Microsoft have a close commercial partnership, with Microsoft investing heavily in OpenAI and deploying its models across Microsoft platforms.

Although the matter is yet to be decided, this development underscores the challenge for AI companies in balancing legal data preservation with privacy commitments and highlights the exposure of major tech partners like Microsoft in AI copyright litigation.

12. Court upheld Age Verification for Adult Websites [12]

In a significant decision on June 27, 2025, the US Supreme Court upheld Texas’s law requiring online pornographic websites to verify that users are over 18. The case arose after Texas enacted House Bill 1181 (“Bill”) to prevent minors from accessing sexually explicit content online. In this matter, industry groups representing adult websites challenged the law, arguing that forcing users to provide Government IDs or undergo verification would violate their First Amendment rights and create a chilling effect on adults seeking lawful content. They contended that such requirements were overly burdensome, intrusive, and could discourage users from accessing legal material due to privacy concerns. However, the court disagreed. The court ruled that the Bill imposes only a minimal burden on adult users and serves a compelling state interest in protecting children from harmful content. The court reasoned that Texas was within its traditional authority to safeguard minors and that the age verification requirement did not unreasonably restrict adults’ access to legal pornography.

This decision is likely to have a ripple effect across the digital content industry, paving the way for similar age-gating laws in other states. It also raises important debates around privacy, user verification, and online freedom, signalling that platforms hosting explicit content must prepare for stricter compliance standards to continue operating legally in the US.

13. Trade Association challenges Arkansas’ New Social Media Laws [13]

A trade association representing online businesses, and advocating for free expression and open digital markets, has filed a lawsuit challenging Arkansas’s Acts 900 and 901 of 2025 (collectively, the “Acts”), which impose sweeping new requirements on social media platforms, particularly in relation to minors. These include restrictions on algorithms for minors, mandatory addiction risk audits, parental dashboards to monitor activity, and default content filters.

The association argues that the provisions of the Acts infringe free speech, force platforms to redesign core features, and are vague, overbroad, and a threat to platform autonomy and user privacy.

As the proceedings are ongoing, no final decision has been delivered; however, this case highlights the ongoing tension between state-level child safety laws and constitutional protections for online speech, with significant implications for how platforms design and moderate digital content.

14. Texas AI Regulations Move Forward as Federal Moratorium fails

On June 22, 2025, Texas enacted the Texas Responsible Artificial Intelligence Governance Act, 2025 (“TRAIGA”)[14], which will take effect from January 1, 2026. TRAIGA introduces comprehensive AI-specific obligations for both state agencies and private entities, prohibiting AI systems designed to manipulate behavior, discriminate, or create harmful deepfakes. The legislation mandates disclosure of AI use by public authorities, requires consent for biometric data collection, and establishes a regulatory sandbox under the State’s Artificial Intelligence Council. Enforcement lies with the Texas Attorney General, with penalties ranging from USD 80,000 to 200,000 per violation, following a 60-day cure period. The political landscape for state AI regulation has dramatically shifted. While the House initially passed a reconciliation bill in May 2025 proposing a 10-year moratorium on state laws regulating AI systems in interstate commerce, the Senate voted nearly unanimously on July 1, 2025, to remove this moratorium provision from the federal budget bill. This decisive 99-1 vote eliminates the immediate threat of federal pre-emption that had cast uncertainty over TRAIGA’s future enforceability [15].

With the federal moratorium off the table, TRAIGA’s effective date appears secure. This development resolves the previous tension between state-level innovation and federal pre-emption, allowing Texas to move forward as a pioneer in comprehensive AI governance. The legislation’s focus on data protection, responsible system design, and ethical AI deployment can now proceed without the regulatory uncertainty that had previously surrounded state AI laws.

15. Montana amends Data Privacy Law to Strengthen Safeguards for Minors and Online Consumers [16]

Montana amended the Montana Consumer Data Privacy Act, 2023 to include new consumer rights and stronger safeguards for minors. Now, consumers must be notified when their data is collected and can access, correct, delete, and obtain their data in a portable format. Businesses must provide clear opt-out options for data sales, targeted ads, and profiling, and publish accessible privacy notices detailing data use and user rights. For minors, the amendments prohibit processing their data for targeted advertising or profiling without consent, restrict geolocation tracking, and ban design features that prolong online engagement. Businesses must conduct data protection assessments where there is a heightened risk of harm and implement mitigation plans. Notably, the Montana Attorney General retains exclusive enforcement power, with penalties of up to USD 7,500 per violation and a 60-day cure period.

Montana’s updated law reflects a growing trend among States to tailor privacy protections to digital realities, especially concerning minors’ safety and automated processing. Companies offering online services in Montana should update privacy policies, design practices, and data governance procedures to ensure compliance before the October deadline.

UNITED KINGDOM

16. Data Use and Access Regulation received Royal Assent [17]

The Data (Use and Access) Act 2025 (“DUAA”) received Royal Assent, with phased implementation through June 2026. DUAA amends existing UK GDPR and data protection laws to create a more innovation-focused regulatory approach while maintaining privacy protections.

Key provisions include new Smart Data schemes enabling secure data sharing between organizations, enhanced data portability across sectors, and a statutory framework for Digital Verification Services. Unlike GDPR’s strict consent requirements, DUAA provides significant flexibility through exemptions for low-risk cookies (analytics, site optimization, security) from explicit consent requirements and relaxed rules around automated decision-making and legitimate interests processing. Organizations need only provide information and easy opt-out options rather than obtain explicit consent, and should prepare for the phased rollout. DUAA represents the UK’s strategic shift toward pragmatic data governance that balances commercial opportunities with robust privacy standards.

17. UK calls for Feedback on International Data Transfer Guidance [18]

On June 26, 2025, the UK Information Commissioner’s Office (“ICO”) launched a ‘Call for Views’ on its International Data Transfers: Guidance (“Guidelines”) under the UK GDPR. Open until August 7, 2025, the consultation invites businesses and stakeholders to share feedback on which parts of the Guidelines are helpful, where they face difficulties, and what tools or changes would improve compliance with data transfer rules. This consultation follows the ICO’s January 2025 letter to the Prime Minister, Chancellor, and Business Secretary, (“Letter”) where it highlighted that international data transfers support 40% of UK exports and promised to modernise data governance to boost economic growth. The ICO committed to publishing clearer guidance, streamlining adequacy assessments, and aligning with global data standards. It also outlined broader plans, including new AI regulation proposals, training programmes for SMEs, and innovation sandboxes to test emerging technologies safely. Additionally, the ICO addressed concerns in the digital advertising sector, stating it would review strict consent requirements under the privacy and electronic communications regulations that may limit adoption of privacy-preserving advertising models, and pledged to clarify low-risk use cases unlikely to trigger enforcement.

This step reflects the ICO’s wider strategy to balance regulatory certainty and innovation. By simplifying international data transfer rules while maintaining strong privacy protections, the ICO aims to position the UK as a competitive and trusted hub for global data flows.

EUROPEAN UNION

18. Belgian DPA dismisses Complaints on Cookie Banners [19]

The Belgian DPA’s Litigation Chamber dismissed 16 complaints across five cases filed by None of Your Business (“NOYB”), the European privacy advocacy group, regarding cookie banners. The Chamber found that NOYB did not hold valid mandates from individual users to file these complaints, noting that organisations like NOYB can act only with express authorisation from affected individuals and cannot file complaints unilaterally. In these cases, NOYB had used automated tools to submit complaints without securing proper consent from data subjects. The DPA observed that the complaints were part of NOYB’s internal projects, in which individuals were merely instructed on how to authorise filings, and that similar complaints had already been dismissed earlier without appeal. While the DPA acknowledged NOYB’s important role in advancing privacy rights, it emphasised that complaints must follow existing legal procedures. The DPA also expressed support for future legislative changes that could allow privacy organisations to file complaints independently to strengthen enforcement.

This decision highlights the procedural limits faced by privacy advocacy groups under current Belgian law, underscoring the need for clear mandates when representing data subjects in regulatory complaints.

19. Spanish DPA penalises Financial Non-Profit Company Over Data Misuse and AI Failures [20]

On June 7, 2025, a non-profit financial services company paid a penalty for disclosing a customer’s personal address in a transfer receipt. The payment followed an order issued by the Spanish Data Protection Agency (“AEPD”) on May 23, 2025, which closed the sanctioning procedure after the company agreed to the penalty and waived its right to appeal.

The case concerned a transfer made through the company’s online banking platform, in which the complainant’s home address was visible on the receipt shared with the transaction’s recipient. The AEPD found that this data was not required for the transaction and that including it served no clear purpose. On this basis, the AEPD held that the company should have limited the personal data processed to what was strictly necessary, and that it had failed to put in place proper controls to prevent unnecessary disclosure. This amounted to a breach of Article 5(1)(f) of the GDPR, which requires the integrity and confidentiality of personal data. The AEPD also reviewed complaints about marketing messages managed by an AI system. Although the user had objected multiple times, the system continued to send messages because it was programmed to respond only to the word ‘UNSUBSCRIBE.’ The AEPD acknowledged this limitation but clarified that current legislation does not yet require AI systems to interpret all types of human responses. On this basis, the company was ordered to pay a fine of EUR 42,000.

20. EU initiates Public Consultation on Data Retention Requirements for Criminal Investigations [21]

On June 20, 2025, the European Commission launched a public consultation and impact assessment on whether to introduce a common EU framework for retention of metadata by service providers to support criminal proceedings. This is an initiative at the assessment stage, not yet a draft guideline or legislative proposal. Currently, there is no EU-wide obligation requiring telecom companies, internet service providers, or digital platforms to retain metadata, such as IP addresses, timestamps, and subscriber details, for any specific period. As a result, critical data may no longer be available when law enforcement requests it, hindering investigations into serious crimes. Varying data retention laws across Member States further create legal and operational challenges, especially for service providers operating cross-border. Through this consultation, the Commission seeks input from a wide range of stakeholders, including telecom operators, digital platforms, law enforcement agencies, civil society groups, and EU citizens, on whether and how such data retention obligations should be designed.

Feedback can be submitted via an online questionnaire until September 12, 2025.

21. Italian Company held liable for Unlawful Use of Employee’s Private Messaging and Social Media Data [22]

The Italian Data Protection Authority (“Garante”) held Autostrade per l’Italia S.p.A. (“ASPI”), a company engaged in the road infrastructure and toll highway management sector, to be in violation of the GDPR for using an employee’s private social media and messaging data in disciplinary proceedings. The company had received screenshots of the employee’s Facebook posts, WhatsApp messages, and Messenger chats from colleagues and third parties. Although ASPI did not actively collect the data, the Garante found that using it still constituted ‘processing’ under the GDPR.

The company claimed it acted on a legitimate interest under Article 6(1)(f), but the Garante found the company had failed to conduct a required balancing test or consider less intrusive means. It held that the content, shared in closed or private settings, came with a reasonable expectation of confidentiality. The Garante found that ASPI had violated key privacy principles, including those requiring lawful, limited, and proportionate data use in the workplace. It imposed a fine of EUR 420,000 and emphasized that employers cannot rely on private digital communications for disciplinary action without a clear legal basis and appropriate safeguards.

This decision underscores that under the GDPR, processing is not limited to active collection. Merely being exposed to, receiving, or storing private personal data can create compliance risks if not backed by a lawful basis and necessary safeguards, especially in employment contexts.

OTHERS

22. South Korea launches Investigation into Data Leak by Global Pizza Brand [23]

On June 26, 2025, the PIPC, South Korea’s national data protection authority, launched an investigation into Papa John’s Korea Co. Ltd, a global pizza brand, after the company reported a major data leak. The PIPC oversees the enforcement of privacy laws and the protection of personal information in South Korea.

The company revealed that customer order data had been exposed online since January 2017 due to negligence in website source code management. The leaked data includes customers’ names, phone numbers, and addresses, raising serious privacy and security concerns. The exposure remained undetected for over eight years, highlighting systemic failures in website monitoring and data security practices. The PIPC will examine how the leak occurred, the scale of compromised data, and whether the company complied with technical and administrative security obligations under Korean privacy law. The probe will also assess if the company retained customer order information beyond the permissible retention period stated in its privacy policy. Further, PIPC has urged all businesses to strengthen website security by restricting administrator page access and managing URLs diligently, noting an increasing trend of data leaks arising from poor website management.

23. Canadian Court redefines Limits of Data Scraping and AI Training [24]

The Court of King’s Bench of Alberta upheld an enforcement order against Clearview AI Inc., a U.S.-based company offering facial recognition services built on scraped online images. The Office of the Information and Privacy Commissioner of Alberta (“Commissioner”) had found that the company collected images of individuals from public websites and social media without consent, in violation of Alberta’s Personal Information Protection Act (“PIPA”). The Commissioner directed the company to stop providing services in Alberta, delete the images and biometric facial arrays of Albertans, and cease further processing. The company challenged the order, arguing that PIPA did not apply to it as a foreign entity and that the images were ‘publicly available’ under PIPA. It also claimed that this interpretation infringed its freedom of expression under the Canadian Charter of Rights and Freedoms.

The court found that the company had a real and substantial connection to Alberta, given its marketing to local police agencies and its use of individuals’ data. It upheld the Commissioner’s interpretation that social media content does not fall within the ‘publicly available’ exception. The court also dismissed the Charter challenge, finding the privacy restrictions proportionate and justified.

24. Québec’s Latest Regulation on Intimate Images comes into force [25]

Québec brought into force the Act to Counter Non‑Consensual Sharing of Intimate Images, 2024, c. 37 (“Act”), to provide individuals with legal remedies against the unauthorized distribution of intimate content. The Act seeks to protect a person’s dignity, privacy, and reputation, particularly from harm caused by rapid online dissemination. The Act defines ‘intimate image’ to include any visual or audio recording, altered or not, depicting nudity or explicit sexual activity, where there was a reasonable expectation of privacy. Sharing covers a wide range of actions such as publishing, distributing, or advertising. It clarifies that prior consent to image creation does not waive privacy rights, and allows for revocation of consent, with exceptions for certain commercial or artistic contracts. Individuals can now apply to a judge or justice of the peace to obtain urgent orders requiring anyone with control over the image to stop sharing it, destroy it, or de-index links. These orders may be issued without notifying the alleged violator and can apply to unknown persons. The court may also compel disclosure of identifying information. Further, all hearings are held in camera, and public access to records is restricted unless ordered otherwise. Non-compliance with court orders may attract daily fines or imprisonment, with higher penalties for repeat offences. Collected penalties are directed to a victims’ assistance fund. The Act also introduces a presumption of civil liability against anyone who shares or threatens to share such images without consent.

ABBREVIATIONS
  • BNS – Bharatiya Nyaya Sanhita, 2023
  • BNSS – Bharatiya Nagrik Suraksha Sanhita, 2023
  • CrPC – Code of Criminal Procedure, 1973
  • DPA – Data Protection Authority
  • DPDP Act – Digital Personal Data Protection Act, 2023
  • GDPR – General Data Protection Regulation (Regulation (EU) 2016/679)
  • IT Act – Information Technology Act, 2000
  • KYC – Know Your Customer
  • PIPA – Personal Information Protection Act, SA 2003, c P-6.5
  • PIPA Regulation – Personal Information Protection Act Regulation, Alta Reg 366/2003
  • PIPC – Personal Information Protection Commission
  • SEBI – Securities and Exchange Board of India
  • SME – Small and Medium Enterprises
  • UK GDPR – United Kingdom General Data Protection Regulation

Authors:

  • Rashmi Deshpande
  • Aarushi Ghai
  • Shriya Haridas


[1] https://dot.gov.in/sites/default/files/Gazette%20Notification%20Draft%20Telecom%20Cyber%20Security%20Amendment%20Rules.pdf?download=1
[2] https://d38ibwa0xdgwxx.cloudfront.net/create-edition/7c2e2271-6ddd-4161-a46c-c53b8609c09d.pdf
[3] https://www.sebi.gov.in/reports-and-statistics/reports/jun-2025/consultation-paper-on-guidelines-for-responsible-usage-of-ai-ml-in-indian-securities-markets_94687.html
[4] Testbook Edu Solutions Pvt. Ltd. v. Google India Pvt. Ltd. & Ors. [C.S. (Comm Div.) No. 186 of 2023]
[5] Khadeejah Shahabuddin Shaikh v. State of Maharashtra & Ors W.P. No. 6684 of 2025 and Cri. App. No. 598 of 2025
[6] S. Bhargav Reddy v. State of Andhra Pradesh – Criminal Petitions Nos. 8059, 8114, 8320, 8545, 8550 and 8854 of 2024
[7] PhonePe Private Limited v. State of Karnataka & Ors [W.P. No. 3757 of 2023]
[8] SEBI | Cybersecurity and Cyber Resilience Framework (CSCRF) for SEBI Regulated Entities (REs)
[9] Play Games 24×7 Private Limited v. State Of Tamil Nadu [ W.P.Nos.6784, 6794, 6799, 6970, 8832 and 13158 of 2025]
[10] https://www.congress.gov/bill/119th-congress/senate-bill/1582
[11] The New York Times Company v. Microsoft Corporation et al. [Case No. 1:23-cv-11195]
[12] Free Speech Coalition, Inc. v. Paxton [No. 23-1122]
[13] NetChoice, LLC v. Griffin [Case No. 5:25-cv-05140-TLB] NetChoice, LLC v. Griffin, No. 5:2023cv05105 – Document 77 (W.D. Ark. 2025) :: Justia
[14] https://capitol.texas.gov/tlodocs/89R/billtext/pdf/HB00149F.pdf#navpanes=0
[15] https://www.senate.gov/legislative/LIS/roll_call_votes/vote1191/vote_119_1_00363.htm
[16] https://bills.legmt.gov/#/laws/bill/2/LC0372?open_tab=bill
[17] https://www.legislation.gov.uk/ukpga/2025/18/enacted
[18] Call for Views – International Transfers Guidance – Information Commissioner’s Office – Citizen Space
[19] https://www.autoriteprotectiondonnees.be/citoyen/actualites/2025/06/26/l-apd-explique-pourquoi-elle-classe-sans-suite-des-plaintes-de-noyb?mkt_tok=MTM4LUVaTS0wNDIAAAGbY6swmHSx-3FSALi6X_pGqYf2rUxzMBykmiN4uSTFxyMydF1IbGWMhadcuJo9RJ9zvcLgxF4pKghDEE_0YywubaFPWvirzx_HN4DONZ4VzZkIgQ
[20] https://www.aepd.es/informes-y-resoluciones/criterios-juridicos-aepd/debe-la-ia-entender-el-ejercicio-de-un-derecho-proteccion-de-datos?mkt_tok=MTM4LUVaTS0wNDIAAAGbTpIpJxGwisTFJmAXFVmOVy2iiE8s0nRcOlzJ9eWlnB8ph3KBOzHKVrDUJmciyzt0vfoZm8beCo3NreDYsVTtgHZkP4-vTHxCzSZ51SzmHK3uhQ
[21] Impact assessment on retention of data by service providers for criminal proceedings
[22] https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/10143261
[23] https://www.pipc.go.kr/np/cop/bbs/selectBoardArticle.do?bbsId=BS074&mCode=C020010000&nttId=11315&mkt_tok=MTM4LUVaTS0wNDIAAAGbY6swmdkoA-7qMBVKh2fsFqYXjK5Pi8Xp49swSUYgdP1F4u-5AAwfxoRpSTxEkwvUYIv8XIAQ2D_rebS0kzTLaIzwQ3B8zPpgzhZHO_AKcpnQQg
[24] Clearview AI Inc. v. Alberta (Information and Privacy Commissioner) [2025 ABKB 287]
[25] https://www.legisquebec.gouv.qc.ca/en/document/cs/P-9.0002
