Technology Law and Data Privacy Updates
Monthly Edition - September 2025
INDEX
India
- CERT-In issues Cybersecurity Guidelines for MSMEs
- RBI issues Master Directions Regulating Payment Aggregators
- RBI issues New Directions to Strengthen Authentication Mechanism in Digital Payment Transactions
- Draft Bill for Civil Drone Regulations released for Public Consultation
- Delhi & Bombay High Courts issue Orders Protecting Celebrities’ Personality Rights
- TRAI releases Draft Rules to Streamline Broadcasting and Cable Audits
United States of America
- Privacy Lawsuit over Period-Tracking App settled
- Violation of Privacy Laws leads to Disinvestment Order against Prominent Social Media App
- New Regulations to strengthen CCPA announced
- FTC secures USD 2.5 billion Settlement over Deceptive Subscription Practices
- Homebuyers Privacy Protection Regulation becomes Law, restricting Unsolicited Mortgage Offers
European Union
- Data Act enters into Force
- Advocate General opines against publishing Names of Doping Athletes Online
- EU-US Data Privacy Framework upheld
- EU Court clarifies when Pseudonymised Data constitutes Personal Data
- Italy becomes first EU country to enact National AI Law
- EU launches Stakeholder Consultation to Develop Guidelines for Transparent AI Systems
- EU Board issues Guidelines to harmonize DSA and GDPR Enforcement
United Kingdom
- Social Media Platform launches Consent-or-Pay Model with ICO Approval
- Nationwide Digital ID Scheme to enhance Service Access and Border Security announced
Others
- New Zealand enacts Privacy Amendment Act
- Singapore Ministry of Law launches Public Consultation on Guide for Using Generative AI in the Legal Sector
- Abu Dhabi introduces Rules to strengthen Data Protection Framework
FOUNDER’S NOTE
Welcome to this edition of Fountainhead Legal’s newsletter!
The Digital Personal Data Protection Rules, initially expected by September 28th, are still under legal vetting. This extended review period highlights the framework’s complexity and underscores that organizations must not relax their compliance efforts simply because the rules are not yet public. Data privacy remains critical, as data is the ‘new oil’, and inadequate compliance with data privacy provisions can seriously impact businesses. A case in point is the US Government’s Executive Order on TikTok, which requires TikTok’s US business to move into a new American-majority joint venture in order to avoid a nationwide ban driven by cross-border data privacy and national security concerns.
On the Indian front, courts have delivered several significant interim rulings to safeguard Personality Rights in the era of AI and deepfake technologies. The Delhi High Court granted interim relief to filmmaker Karan Johar, actress Aishwarya Rai Bachchan, and actor Abhishek Bachchan, prohibiting the unauthorized use of their name, image, and voice via AI tools, deepfakes, and merchandise. Similarly, the Bombay High Court granted ad-interim relief to legendary singer Asha Bhosle against entities, including an AI firm, for unauthorized cloning of her voice and image for commercial purposes. These rulings highlight the imminent risks that artists face in today’s rapidly evolving AI-driven world.
In terms of fresh regulations, the Indian Government has introduced the draft Civil Drone (Promotion and Regulation) Bill, 2025, which provides for the licensing and registration of drones along with data privacy safeguards for their operation. Further, the RBI issued new directions to strengthen authentication in digital payments, requiring two-factor authentication for all digital payment transactions by April 1, 2026.
Globally, data governance and consumer protection also saw key developments. In the US, Google LLC and Flo Health Inc. agreed to pay a combined USD 56 million to settle allegations that the Flo period-tracking app shared sensitive health data for advertising without proper consent. In the European Union, the EU Data Act came into effect on September 12, 2025, democratizing access to personal and non-personal data generated by connected products and services. Italy became the first EU country to enact comprehensive national AI legislation, establishing oversight and criminalizing the creation and dissemination of harmful AI-generated deepfakes.
We hope you find these updates insightful and informative!
On behalf of Team Fountainhead Legal, we wish you a very Happy Diwali! May your life and your data always remain bright, secure, and well-protected.
NATIONAL
1. CERT-In issues Cybersecurity Guidelines for MSMEs[1]
CERT-In has released 15 Elemental Cyber Defense Controls for Micro, Small, and Medium Enterprises (“Guidelines”) aimed at helping smaller businesses strengthen their cybersecurity posture by adopting foundational controls against common cyber threats. Recognising that MSMEs form the backbone of India’s economy yet often lack dedicated IT security resources, the framework provides a practical starting point to safeguard digital infrastructure.
The Guidelines introduce 15 core controls supported by 45 baseline recommendations, covering key areas such as asset management, network and email security, endpoint protection, secure configurations, patch management, incident response, continuous monitoring, data backup, and third-party risk management. MSMEs are encouraged to conduct annual cybersecurity audits through CERT-In-empanelled auditors and integrate these measures into their internal security policies. The Guidelines also emphasise the importance of employee training, secure password practices, multi-factor authentication, and timely vulnerability assessments to protect against malware, phishing, and ransomware attacks. CERT-In clarifies that these controls represent a minimum cybersecurity baseline, not a complete solution. Each MSME is advised to go beyond the baseline by tailoring security measures to its own operational risks and data sensitivity. The Guidelines stress that cybersecurity is a continuous process requiring regular reviews, policy updates, and proactive monitoring.
By implementing these elemental controls, MSMEs can reduce their exposure to cyber risks, ensure compliance with regulatory expectations, and contribute to building a more secure and trusted digital ecosystem for India’s growing business landscape.
2. RBI issues Master Directions Regulating Payment Aggregators[2]
The RBI issued the Reserve Bank of India (Regulation of Payment Aggregators) Directions, 2025 (“Directions”), introducing a consolidated framework to govern all entities facilitating digital payments between merchants and customers. The Directions replace earlier circulars and establish a unified regulatory structure that strengthens accountability, consumer protection, and financial stability across the payment ecosystem.
The Directions classify Payment Aggregators (“PA”) into three categories: PA-O (Online) for digital transactions conducted remotely, PA-P (Physical) for in-person transactions, and PA-CB (Cross-Border) for PAs handling international payments. All non-bank PAs are now required to obtain authorisation from the RBI and maintain the prescribed minimum net worth thresholds, ensuring that only financially sound and compliant entities operate in the market. The Directions also mandate strict merchant due diligence and KYC checks, with ongoing monitoring to prevent fraud, misuse, and money laundering.
In addition, the Directions introduce robust escrow account norms, requiring that funds collected on behalf of merchants be kept separate from business accounts, with clear rules governing permissible debits, credits, and settlement timelines. PAs must also implement strong governance mechanisms, including board-approved policies for dispute resolution, transaction monitoring, data protection, and consumer grievance redressal. Further, PAs are expected to adopt advanced cybersecurity controls, ensure transparent disclosures to users, and comply with periodic audit and reporting requirements.
3. RBI issues New Directions to strengthen Authentication Mechanism in Digital Payment Transactions[3]
The RBI has released the Reserve Bank of India (Authentication mechanisms for digital payment transactions) Directions, 2025 (“Directions”), introducing stricter standards to enhance the security of digital payments. The Directions mandate that all digital payment transactions implement two-factor authentication by April 1, 2026, ensuring that no single compromised factor can enable unauthorized access. One of the two factors must be dynamic, such as an OTP or biometric verification, to provide additional protection against fraud. While SMS-based OTPs will continue to be accepted, the RBI has also permitted alternative authentication methods such as device-based tokens, biometric recognition, or app-based prompts, provided they meet prescribed security and interoperability standards. For international card transactions conducted without the physical card being present, the RBI has mandated an additional factor of authentication whenever requested by the overseas merchant or acquirer. These measures are to be implemented by October 1, 2026, strengthening India’s defences against global payment fraud. To balance security and convenience, the Directions allow for risk-based authentication, under which issuers can apply enhanced verification only to transactions flagged as suspicious or high-risk.
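For illustration only, the minimal Python sketch below shows the principle the Directions describe: a transaction is authorised only when a static factor (here, a PIN) and a dynamic factor (here, a time-based OTP) are both verified, so that a single compromised factor is not enough. The function names and parameters are hypothetical and this is not an RBI-prescribed implementation.

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226-style dynamic truncation)."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(key: bytes, step: int = 30) -> str:
    """Time-based OTP: the kind of dynamic factor the Directions contemplate."""
    return hotp(key, int(time.time()) // step)

def verify_transaction(stored_pin_hash: str, entered_pin: str,
                       otp_key: bytes, entered_otp: str) -> bool:
    """Authorise only if BOTH factors check out: a static knowledge factor
    (PIN) and a dynamic factor (OTP), so one stolen factor is not sufficient."""
    pin_ok = hmac.compare_digest(
        stored_pin_hash, hashlib.sha256(entered_pin.encode()).hexdigest())
    otp_ok = hmac.compare_digest(totp(otp_key), entered_otp)
    return pin_ok and otp_ok
```

In practice, issuers and payment systems would layer such checks with device binding, tokenization, and the risk-based verification that the Directions permit.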
RBI has made it clear that consumers must be fully compensated for losses resulting from non-compliance by payment system participants. All regulated entities are required to provide authentication and tokenization services on a fair, non-exclusive basis, ensuring equal access across the digital payment ecosystem.
4. Draft Bill for Civil Drone Regulations released for Public Consultation[4]
The Ministry of Civil Aviation has unveiled the draft Civil Drone (Promotion and Regulation) Bill, 2025 (“Draft Bill”), aimed at establishing a comprehensive legal framework to encourage the safe, responsible, and innovative use of drones across India. The Draft Bill marks a significant step in balancing innovation with public safety, offering clear guidance for individuals, companies, and government agencies involved in drone operations.
At its core, the Draft Bill introduces a licensing and registration system for drones, mandating that every remotely piloted aircraft be registered and operated only by trained, certified individuals. It proposes the creation of a National Civil Drone Authority, which will oversee certification, airspace management, and policy implementation. Drones are classified by weight and purpose, ranging from recreational nano-drones to large commercial and defence-use models, each requiring specific operational permissions. To ensure public safety, the Draft Bill emphasizes geo-fencing, real-time tracking, and anti-collision mechanisms, prohibiting drone flights near airports, defence installations, or sensitive zones without prior authorization.
Beyond regulation, the Draft Bill also focuses on promoting the civil drone ecosystem. It introduces measures to encourage drone manufacturing, Research & Development, and skill development through Government incentives and pilot projects. Importantly, it requires operators to maintain insurance coverage for potential damages and data privacy safeguards for any information collected during drone operations.
The Draft Bill shows the Government’s push to make India a leader in drone technology while keeping safety and privacy in check. By setting up a central authority and clear rules for licensing and operation, it aims to bring structure to a fast-growing sector. The real test, however, will be in smooth coordination between agencies and ensuring that innovation is not slowed down by heavy regulation.
5. Delhi & Bombay High Courts issue Orders Protecting Celebrities’ Personality Rights[5]
The Delhi and Bombay High Courts have issued interim orders in four different cases, expanding judicial protection of personality rights. The Delhi High Court, through an ex parte interim order, granted protection to the personality rights of filmmaker and public figure Karan Johar after finding that various online entities had been misusing his name, image, voice, likeness, and nickname ‘KJo’ through AI tools, deepfakes, memes, merchandise, and social media content without his consent, often for commercial exploitation. In a separate case, the Delhi High Court granted similar protection to Bollywood actress Aishwarya Rai Bachchan and directed offending platforms, websites, and online entities to take down infringing material, disable associated URLs, and submit identifying data such as IP logs and subscriber information.
Similarly, interim protection was also granted by the Delhi High Court to actor Abhishek Bachchan, restraining illegal monetisation of the actor’s persona, which includes his name, image, likeness, signature, and the acronym ‘AB’. On the other hand, the Bombay High Court issued ad-interim relief in favour of legendary playback singer Asha Bhosle safeguarding the personality rights against widespread unauthorized exploitation across digital platforms upon observing that several entities were allegedly cloning her voice and using her image and persona for commercial purposes without consent.
These interim orders reflect the judiciary’s increasing attention to personality and publicity rights in the digital environment. By emphasizing the right of individuals, particularly public figures, to control the commercial and reputational use of their identity, the Courts underscored the importance of safeguarding personal likenesses against emerging challenges such as deepfakes and AI-generated impersonations.
6. TRAI releases Draft Rules to Streamline Broadcasting and Cable Audits[6]
TRAI has issued the Draft Telecommunication (Broadcasting and Cable) Services Interconnection (Addressable Systems) (Seventh Amendment) Regulations, 2025 (“Draft Regulations”), marking a significant effort to enhance transparency and streamline compliance within India’s complex broadcasting and cable distribution industry. The Draft Regulations are primarily focused on strengthening the audit-related provisions of the original Interconnection Regulations of 2017. For a non-industry professional, these regulations govern the fundamental business relationship, the ‘interconnection’, between Broadcasters (who create the TV channels) and Distributors (such as DTH and cable operators who deliver them to your home), specifically ensuring accurate reporting of the number of subscribers for billing and regulatory compliance. The aim is to reshape the industry by balancing innovation with robust accountability.
A major proposal is to shift the mandatory audit cycle from a calendar-year basis (January to December) to a financial-year basis (April to March). This change is designed to align regulatory compliance with the standard corporate and financial reporting schedules of the entities involved. Under the Draft Regulations, distributors must now share the completed audit report, which must be certified by a TRAI-empanelled auditor or by Broadcast Engineering Consultants India Limited, with all broadcasters by September 30 of each year. Furthermore, to reduce friction and prevent disputes, distributors are mandated to provide broadcasters with a minimum of 30 days’ advance notice regarding the audit schedule and the name of the appointed auditor, thereby strengthening broadcaster oversight and promoting accuracy in subscriber figures.
The core purpose of the amendments proposed by the Draft Regulations is to ensure accurate subscriber reporting and revenue assurance across the digital ecosystem. Notably, the Draft Regulations clarify timelines to prevent common industry disputes, and explicitly state that distributors who fail to complete these mandatory audits by the September 30, 2025 deadline will continue to face prescribed penalties. TRAI has opened the Draft Regulations for public feedback, inviting written comments from all stakeholders until October 6, 2025, signalling the regulator’s commitment to a consultative process before the regulations are scheduled to come into effect from April 1, 2026.
INTERNATIONAL
UNITED STATES OF AMERICA
7. Privacy Lawsuit over Period-Tracking App settled[7]
Google LLC (“Google”) and Flo Health Inc. (“Flo”) agreed to pay a combined USD 56 million to settle a class action lawsuit alleging that the Flo period-tracking app shared users’ sensitive health data without proper consent. The lawsuit claimed that between November 2016 and February 2019, Flo transmitted personal information about users’ menstrual cycles and pregnancies to third parties, including Google and Meta, for advertising purposes. Despite Flo’s assurances of confidentiality, users’ data was allegedly accessed through Software Development Kits (“SDKs”) integrated into the app. Google will contribute USD 48 million, while Flo will pay USD 8 million. Meta Platforms Inc., another co-defendant, did not settle and was found liable by a jury in August 2025; the company plans to appeal. The plaintiffs argued that Flo’s integration of third-party SDKs allowed companies to access intimate health data, violating the California Invasion of Privacy Act, 1967, which imposes statutory penalties of USD 5,000 per violation, potentially leading to damages in the billions of dollars. Google denied that any data was used for advertising and emphasized its policies against collecting health information through Google Analytics.
This settlement underscores the growing scrutiny of tech companies’ data practices, particularly concerning sensitive health information. It also highlights the importance of obtaining explicit user consent and maintaining transparency in data collection and sharing practices. For anyone using a health or wellness app, it is a reminder to read the app’s privacy policy carefully. The case sets a significant precedent for privacy litigation in the digital age.
8. Violation of Privacy Laws leads to Disinvestment Order against Prominent Social Media App[8]
The US President signed an Executive Order (“EO”) titled ‘Saving TikTok While Protecting National Security’ on September 25, 2025. The EO aims to separate TikTok’s control from its China-based parent company, ByteDance Ltd. (“ByteDance”), primarily because of national security and data privacy concerns. The decision comes after months of legal delays and negotiations on how to keep the app available to its 170 million American users without compromising national security.
Under the new framework, TikTok’s US business will operate through a newly formed joint venture, majority-owned and managed by American investors, with ByteDance retaining less than a 20% minority stake. The EO requires that all US user data be stored on American cloud servers, for which Oracle has been designated as the security partner, and that TikTok’s recommendation algorithms be retrained and overseen by US security officials. A new board of directors, comprising experts in cybersecurity and national security, will ensure the platform’s operations remain independent from foreign influence.
For tech companies, this EO signals that data localization, algorithm transparency, and ownership structure are now inseparable from market access. For India and other countries grappling with similar data sovereignty and national security concerns, the US approach demonstrates how technology governance can ensure keeping platforms alive while protecting user data and national interests.
9. New Regulations to strengthen CCPA announced[9]
The California Privacy Protection Agency (“CPPA”) has finalized new regulations to strengthen privacy protections under the California Consumer Privacy Act (“CCPA”). The new regulations address areas such as cybersecurity audits, risk assessments, automated decision-making technology (“ADMT”), and obligations for insurance companies. The process involved years of engagement with industry stakeholders, civil society, and the public, including multiple hearings and review of hundreds of comments, ensuring that the rules balance strong consumer protections with practical compliance for businesses.
The regulations establish clear deadlines and requirements for businesses of different sizes. Companies with annual revenues over USD 100 million must complete cybersecurity audits by April 1, 2028, while smaller businesses have extended timelines up to 2030. Risk assessments must be completed starting January 1, 2026, with formal attestations and summaries submitted to the CPPA by April 1, 2028. Businesses using ADMT to make major decisions affecting consumers are required to comply with related rules beginning January 1, 2027. These measures are designed to ensure companies carefully evaluate potential risks to consumer data and adopt strong safeguards against misuse.
Beyond compliance, the regulations emphasize transparency and accountability, giving consumers meaningful control over their personal information while guiding businesses through practical implementation pathways. With the regulations taking effect on January 1, 2026, and staggered deadlines for full compliance, this initiative solidifies California’s privacy standards while helping businesses understand their responsibilities and maintain trust in the digital economy.
10. FTC secures USD 2.5 Billion Settlement over Deceptive Subscription Practices[10]
The Federal Trade Commission (“FTC”) has finalized a landmark USD 2.5 billion settlement with Amazon Inc. (“Amazon”), resolving allegations that the company used deceptive practices to enroll consumers in its Prime subscription service without their consent and made it difficult for them to cancel. The settlement, announced on September 25, 2025, addresses violations of the federal Restore Online Shoppers’ Confidence Act (“ROSCA”), which mandates clear disclosure of subscription terms and easy cancellation processes. The FTC’s complaint highlighted that Amazon’s design choices, such as less prominent options to decline Prime enrollment and a convoluted cancellation process, misled consumers and hindered their ability to manage subscriptions effectively. In addition to the monetary payment, Amazon must implement significant changes to its subscription practices, including providing clearer disclosures about subscription terms and simplifying the cancellation process to ensure compliance with ROSCA. These measures aim to restore consumer trust and promote transparency in online subscription services.
This settlement underscores the FTC’s commitment to holding companies accountable for practices that undermine consumer confidence in online shopping. It serves as a reminder to businesses of the importance of adhering to consumer protection laws and maintaining transparent and fair practices in their operations.
11. Homebuyers Privacy Protection Regulation becomes Law, restricting Unsolicited Mortgage Offers[11]
The Homebuyers Privacy Protection Act (H.R. 2808) (“Act”) was signed into law on September 5, 2025, strengthening consumer privacy protections in residential mortgage transactions. The Act amends the Fair Credit Reporting Act (“FCRA”) to restrict the use of “trigger leads,” which occur when a lender or credit reporting agency shares a consumer’s credit information with third parties after a mortgage application, often resulting in unsolicited offers. Under the Act, credit reporting agencies can only share consumer credit reports in connection with a mortgage if the third party has obtained consumer consent, has originated a mortgage, serves the consumer’s current mortgage, or holds a current account for the consumer. The Act also sets clear timelines, with the provisions taking effect 180 days after enactment, on March 4, 2026. Additionally, the Act mandates a study by the Comptroller General to evaluate the impact of trigger leads received via text messages, with findings to be submitted to Congress within a year. These measures aim to enhance transparency, limit unsolicited marketing, and protect consumers’ personal financial information during the sensitive mortgage application process.
By curbing the widespread sharing of credit data for unsolicited mortgage offers, the Act positions itself as a landmark privacy safeguard, providing consumers with greater control over their personal information while promoting fair and responsible lending practices.
EUROPEAN UNION
12. Data Act enters into Force[12]
The EU Data Act, officially published as Regulation (EU) 2023/2854 (“Act”), entered into force on January 11, 2024 and became fully applicable on September 12, 2025. The Act is a central part of the European Union’s digital strategy, aimed at democratising access to data generated by connected products and services. It seeks to promote fair data-sharing practices, enhance competition, and foster innovation across all sectors of the economy. Unlike the GDPR, which governs only personal data, the Act applies to both personal and non-personal data generated by connected devices and digital services, making it a more comprehensive framework for Europe’s evolving data ecosystem. From September 2025, users, both individuals and businesses, gain explicit rights to access, use, and share the data generated by their own devices and digital services. The Act also introduces new obligations for manufacturers, service providers, and cloud operators, ensuring that data-sharing agreements are fair and transparent. It mandates that users can easily switch between different cloud or data service providers without facing restrictive barriers, marking a significant step towards a more open and competitive digital market.
Additionally, the Act empowers public authorities to access data in cases of public emergencies, such as natural disasters or cybersecurity incidents, but under strict safeguards to protect user privacy. The Act complements GDPR, reinforcing Europe’s commitment to a human-centric, innovation-friendly data economy that balances accessibility with strong privacy protections.
13. Publishing Names of Doping Athletes Online may violate Privacy Law[13]
The Advocate General of the Court of Justice of the European Union (“Advocate General”) has issued an opinion in NADA Austria and Others[14], stating that the online publication of the names of professional athletes found guilty of doping violations may contravene the GDPR. The case stems from an Austrian law that required the national anti-doping agency to publicly disclose athletes’ names, banned substances, exclusion periods, and reasons for sanctions. Several athletes challenged this rule, arguing that such publication disproportionately infringed their privacy and data protection rights. In his opinion, the Advocate General noted that making this information publicly available online, without restriction and for an unlimited audience, could go beyond what is necessary for transparency or deterrence, and proposed that limited disclosure, such as to relevant sporting bodies or through pseudonymised reporting, could achieve similar objectives while offering greater respect for privacy. The opinion is not binding but will inform the final decision of the CJEU.
14. EU-US Data Privacy Framework upheld[15]
The General Court of the European Union (“Court”) has dismissed an annulment action brought against the European Commission’s adequacy decision for the EU-US Data Privacy Framework (“DPF”), confirming that the United States ensures an adequate level of protection for personal data transferred from the EU. The decision reaffirms that organizations in the US certified under the DPF may continue receiving personal data from EU entities under Article 45 of the GDPR.
In reaching its decision, the Court addressed core objections raised by the challenger, including concerns about the independence and impartiality of the US Data Protection Review Court (“DPRC”) and the legality of bulk collection of data by intelligence authorities without prior authorization. The Court held that ex post judicial review via the DPRC, together with safeguards under US reform measures such as Executive Order 14086, suffices to bring US practices in line with essential equivalence to EU data protection standards.
The ruling offers legal certainty for transatlantic data flows, particularly for EU and US companies relying on the DPF. However, it is subject to appeal before the CJEU, and the European Commission retains the power to suspend, amend, or repeal the adequacy decision should the US framework deviate from required privacy safeguards.
15. Court clarifies when Pseudonymised Data constitutes Personal Data[16]
The CJEU has issued a ruling in European Data Protection Supervisor v. Single Resolution Board (“SRB”), providing significant clarification on the status of pseudonymised data under EU data protection law. The case involved the SRB transferring pseudonymised comments from stakeholders to Deloitte for analysis without informing the stakeholders, leading to a dispute over whether the data remained personal data under Regulation (EU) 2018/1725.
The CJEU concluded that pseudonymised data is not automatically considered personal data in all cases. Whether data is personal depends on the ability of the recipient to re-identify individuals using available means. If re-identification is not reasonably possible for the recipient, the data may not be personal data for them. However, the Court emphasized that the original data controller must assess the identifiability of data at the time of collection, not at the point of transfer.
This judgment underscores that pseudonymisation does not inherently anonymise data and that data controllers must carefully evaluate the identifiability of data before sharing it, ensuring compliance with transparency obligations.
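By way of illustration only, the short Python sketch below (with hypothetical names and data) shows the mechanics at issue: the controller replaces direct identifiers with random tokens and retains the key table, so the data remains re-identifiable, and therefore personal, in its hands, while a recipient that holds only the tokenised records and has no reasonable means of obtaining the key may be unable to identify anyone.

```python
import secrets

def pseudonymise(records, id_field="name"):
    """Replace direct identifiers with random tokens.
    Returns the shareable records and the key table the controller retains."""
    key_table = {}   # identity -> token, held only by the controller
    shared = []
    for record in records:
        token = key_table.setdefault(record[id_field], secrets.token_hex(8))
        redacted = dict(record)
        redacted[id_field] = token
        shared.append(redacted)
    return shared, key_table

# Hypothetical stakeholder comments, pseudonymised before transfer.
comments = [
    {"name": "Shareholder A", "comment": "Objects to the valuation method"},
    {"name": "Shareholder B", "comment": "No objection"},
]
shared, key_table = pseudonymise(comments)

# The controller can reverse the mapping, so the shared set stays personal
# data in its hands; a recipient without the key table may not be able to.
reidentify = {token: name for name, token in key_table.items()}
```

The point the Court makes is that identifiability is assessed relative to the means reasonably available to each party, which is why pseudonymisation is not the same as anonymisation.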
16. Italy becomes first EU country to enact National AI Law[17]
On September 17, 2025, Italy’s Parliament approved Law No. 132/2025, making Italy the first EU member state to establish comprehensive national legislation on AI. The law was published in the Official Gazette on September 25, 2025 and is scheduled to come into force on October 10, 2025. It aims to ensure AI is used responsibly, transparently, and safely across various sectors, including healthcare, public administration, justice, labour, intellectual property, and criminal law. It introduces oversight measures, mandates human supervision, and enforces transparency in AI applications. It also criminalises the creation and dissemination of harmful AI-generated content, including deepfakes, with penalties of up to five years in prison. Additionally, it establishes a venture capital fund of up to EUR 1 billion to support the AI, cybersecurity, and telecommunications sectors. Enforcement will be managed by the Agency for Digital Italy and the National Cybersecurity Agency. This pioneering move positions Italy at the forefront of AI governance in Europe.
17. Stakeholder Consultation to Develop Guidelines for Transparent AI Systems launched[18]
The European Commission has initiated a comprehensive stakeholder consultation aimed at developing clear guidelines and a code of practice for transparent AI systems. This move forms part of the broader EU effort to implement the Artificial Intelligence Act, 2024, ensuring that AI technologies are developed and deployed in a manner that is trustworthy, ethical, and accountable. By seeking input from governments, industry, academia, and civil society, the consultation aims to create a shared understanding of transparency requirements, helping both AI developers and users navigate the evolving regulatory landscape.
The proposed guidelines are expected to cover key aspects of AI transparency, including explainability, data quality, documentation, and user communication. They will provide practical frameworks for organizations to design AI systems that can be audited, monitored, and understood by end-users, thereby promoting responsible AI deployment across sectors. Additionally, the consultation emphasizes the importance of aligning transparency measures with ethical principles, enabling AI solutions to operate without bias and ensuring citizens’ rights are respected.
The consultation also seeks to foster innovation by encouraging collaboration among stakeholders to co-create actionable tools, templates, and best practices. By integrating transparency into AI design from the outset, the EU aims to enhance trust, protect consumers, and ensure that AI contributes positively to economic growth, public services, and societal well-being. Stakeholders are invited to submit their feedback, which will shape the final guidelines and support consistent, reliable, and transparent AI systems across Europe.
18. EDPB issues Guidelines to harmonize DSA and GDPR Enforcement[19]
On September 12, 2025, the EDPB released Guidelines 3/2025 (“Guidelines”), clarifying the relationship between the Digital Services Act, 2022 (“DSA”) and the GDPR. These guidelines aim to ensure that both frameworks work together effectively to protect individuals’ fundamental rights in the digital environment. The DSA focuses on creating a safer online space by regulating online platforms, while the GDPR governs the processing of personal data, including aspects like transparency, consent, and data minimization. The EDPB emphasizes that the DSA does not override the GDPR; instead, both regulations should be applied concurrently. Specific provisions of the DSA, such as those related to notice-and-action systems, recommender algorithms, and targeted advertising, involve the processing of personal data and thus fall under the purview of the GDPR. The guidelines provide practical guidance on how to interpret and apply these provisions in a way that aligns with data protection principles. Furthermore, the EDPB highlights the importance of cooperation between Digital Services Coordinators, the European Commission, and data protection authorities.
This collaboration is essential to ensure consistent enforcement and to avoid conflicts between the two regulatory frameworks. The Guidelines also stress the need for transparency and accountability in the development of codes of conduct under both the DSA and the GDPR, particularly in areas like online advertising. By fostering a coordinated approach, the EDPB aims to uphold individuals’ rights and promote a trustworthy digital ecosystem.
UNITED KINGDOM
19. Social Media Platform launches Consent-or-Pay Model with ICO Approval[20]
The UK’s Information Commissioner’s Office (“ICO”) approved Meta Platforms Inc.’s (“Meta”) new “consent-or-pay” model for Facebook and Instagram users. This approach allows users to either consent to personalized advertisements or pay a monthly subscription for an ad-free experience. The ICO highlighted that this model enhances transparency and user choice compared to the previous approach, where users were automatically subjected to targeted ads under standard terms. It stressed that meaningful consent and clear information about data use are essential to comply with UK data protection laws. Meta also adjusted its subscription pricing for UK users, lowering it to nearly half of the EU rate, ensuring that users can make an informed choice between paying for an ad-free experience or consenting to targeted advertising. The ICO expects Meta to monitor user decisions closely and assess the model’s impact to ensure ongoing compliance with privacy regulations and fair treatment of consumers.
While the ICO has approved the model, it will continue to observe its rollout and broader implications. The initiative reflects a shift in how online platforms balance commercial objectives with user autonomy and data protection, aiming to empower consumers to make free and informed choices regarding their personal data.
20. Nationwide Digital ID Scheme to Enhance Service Access and Border Security announced[21]
The UK government unveiled plans to introduce a free digital ID scheme for all citizens and legal residents. This initiative aims to streamline access to essential services such as driving licences, childcare, welfare, and tax records by eliminating the need for physical documents. The digital ID will be securely stored on users’ smartphones, similar to existing applications like the National Health Service App or contactless payment systems. While carrying the digital ID will not be mandatory, it will be required for proving the Right to Work, thereby preventing individuals without legal status from obtaining employment. The government emphasizes that the digital ID system will be designed with inclusivity in mind, ensuring accessibility for individuals who may not have access to smartphones. A public consultation will be launched later this year to gather input on the service’s delivery. The scheme is expected to be rolled out gradually, with full implementation anticipated by the end of the current Parliament. This move is part of the government’s broader strategy to enhance border security and simplify public service access for legal residents.
OTHERS
21. New Zealand enacts Privacy Amendment Act[22]
New Zealand’s Privacy Amendment Act (“Act”) was officially enacted, introducing significant changes to the Privacy Act 2020. The primary enhancement is the addition of Information Privacy Principle 3A (“IPP3A”), which mandates that agencies inform individuals when their personal information is collected indirectly, meaning from a source other than the individual themselves. This new requirement aims to increase transparency and empower citizens to better exercise their privacy rights.
Under IPP3A, agencies are required to notify individuals about the collection of their personal information unless specific exceptions apply. These exceptions include situations where informing the individual would prejudice the purpose of the collection, such as in cases of fraud investigations, or where it is not reasonably practical to do so, for example, if the agency does not hold contact details for the individual. The Privacy Commissioner has emphasized that these exceptions should be applied judiciously and that agencies must take reasonable steps to ensure individuals are informed as soon as reasonably practicable after the information has been collected.
The Act is set to come into effect on May 1, 2026, providing agencies with time to update their systems and processes to comply with the new requirements. The Privacy Commissioner has committed to providing guidance and support to help organizations meet these obligations, ensuring that the changes lead to a more transparent and accountable handling of personal information across New Zealand.
22. Singapore Ministry of Law launches Public Consultation on Guide for Using Generative AI in the Legal Sector[23]
The Singapore Ministry of Law (“MinLaw”) initiated a public consultation on a Draft Guide for Using Generative Artificial Intelligence in the Legal Sector (“Draft Guide”), which outlines principles for the responsible adoption of Generative Artificial Intelligence (“GenAI”). This initiative aims to provide legal professionals with practical guidance on the responsible, ethical, and effective use of GenAI tools, ensuring compliance with professional standards and enhancing service delivery. The consultation period concluded on September 30, 2025.
The Draft Guide outlines three core principles for integrating GenAI into legal practice. First, professional ethics: legal professionals remain ultimately responsible for all work products and must apply requisite knowledge, skill, and experience to provide competent advice and representation. Second, confidentiality: reasonable steps should be taken to ensure client information is protected when using GenAI tools. Third, transparency: consideration should be given to disclosing the use of GenAI tools to clients, upholding honesty and informing them of all information that may reasonably affect their interests.
The Draft Guide also emphasizes the importance of cybersecurity in adopting GenAI tools. Legal practices are advised to assess whether input data will be stored, used for model training, or could be inadvertently reproduced in outputs for unintended recipients. Clear assurances from GenAI providers regarding data retention and usage, along with data access controls and staff training, are recommended to safeguard client information. This consultation forms part of MinLaw’s broader efforts to support the digital transformation of Singapore’s legal sector, complementing initiatives like the Productivity Solutions Grant for the Legal Sector and the Legal Innovation and Future-Readiness Transformation pilot initiative.
23. Abu Dhabi introduces Rules to strengthen Data Protection Framework[24]
The Abu Dhabi Global Market (“ADGM”) enacted the Data Protection Regulations Substantial Public Interest Conditions Rules, 2025 (“Rules”). This initiative follows Consultation Paper No. 6 of 2025 and provides clearer guidance on processing sensitive personal data in sectors such as insurance and education. The new rules aim to align with global best practices while balancing robust data protection with public interest requirements. The Rules specify conditions under which organizations can process special categories of personal data without consent. In the insurance sector, insurers are allowed to process sensitive data when necessary for insurance purposes, provided the processing serves a substantial public interest. The Rules also cover safeguarding vulnerable individuals by permitting the processing of sensitive data without consent when necessary to protect children or adults at risk of emotional or physical harm, with clear criteria for determining when individuals over the age of 18 may be considered at risk.
Organizations operating within ADGM’s jurisdiction are required to comply with these updated rules by implementing safeguards to ensure that data processing is necessary and proportionate to the public interest purpose. The enactment of these rules reflects ADGM’s commitment to a strong data protection framework that promotes responsible use of personal data while protecting individual rights.
AI – Artificial Intelligence
CERT-In – Indian Computer Emergency Response Team
CJEU – Court of Justice of European Union
DPA- Data Protection Authority
DPDP Act – Digital Personal Data Protection Act, 2023
EDPB – European Data Protection Board
GDPR – General Data Protection Regulation (EU) 2016/679
IT Act – Information Technology Act, 2000
NAPIX – National Informatics Centre’s NIC API Exchange
NIC – National Informatics Centre
RBI – Reserve Bank of India
RTI – Right to Information Act, 2005
TRAI – Telecom Regulatory Authority of India
SEC – Securities and Exchange Commission
Authors:
- Rashmi Deshpande
- Aarushi Ghai
[1] https://www.cert-in.org.in/s2cMainServlet?pageid=GUIDLNVIEW02&refcode=CISG-2025-03
[2] https://www.rbi.org.in/scripts/BS_PressReleaseDisplay.aspx?prid=61218
[3] https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=12898&Mode=0
[4] https://www.civilaviation.gov.in/sites/default/files/202509/Draft%20Civil%20Drone%20%28Promotion%20and%20Regulation%29%20Bill%202025.pdf
[5] Karan Johar v. Ashok Kumar CS(COMM) 974/2025, Aishwarya Rai Bachchan v. Aishwaryaworld.com & Ors. CS(COMM) 956/2025, Abhishek Bachchan v. the Bollywood Tea Shops & Ors. CS(COMM) 960/2025 and Asha Bhosle v. Mayak Inc. 10 COMM(IP) Suit (L) No. 30262/2025
[6] https://www.pib.gov.in/PressReleasePage.aspx?PRID=2169887
[7] Frasco et al v Flo Health Inc et al, U.S. District Court, Northern District of California, No. 21-00757
[8] https://www.whitehouse.gov/presidential-actions/2025/09/saving-tiktok-while-protecting-national-security/
[9] https://cppa.ca.gov/announcements/2025/20250923.html
[10] https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-secures-historic-25-billion-settlement-against-amazon
[11] https://www.congress.gov/bill/119th-congress/house-bill/2808/text
[12] https://digital-strategy.ec.europa.eu/en/policies/data-act
[13] https://curia.europa.eu/jcms/upload/docs/application/pdf/2025-09/cp250128en.pdf
[14] Case C-474/24
[15] Case T-553/23; Philippe Latombe v European Commission
[16] Case C-413/23 P, EDPS v SRB
[17] https://www.gazzettaufficiale.it/eli/id/2025/09/25/25G00143/sg
[18] https://curia.europa.eu/jcms/upload/docs/application/pdf/2025-09/cp250128en.pdf
[19] https://www.edpb.europa.eu/news/news/2025/interplay-between-dsa-and-gdpr-edpb-adopts-guidelines_en
[20] https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2025/09/ico-statement-on-changes-to-meta-advertising-model/
[21] https://www.gov.uk/government/news/new-digital-id-scheme-to-be-rolled-out-across-uk
[22] https://www.privacy.org.nz/tuhono-connect/statements-media-releases/privacy-amendment-act-passes/
[23] https://www.mlaw.gov.sg/public-consultation-on-guide-for-using-generative-artificial-intelligence-in-the-legal-sector/
[24] https://www.adgm.com/media/announcements/adgm-enacts-new-substantial-public-interest-rules-under-data-protection-regulations-2021






