Technology Law and Data Privacy Updates
Monthly Edition - July 2025
INDEX
A. SUMMARY
- CERT-In issues Cyber Security Audit Policy Guidelines
- CERT-In updates Technical Guidelines on Bills of Materials for Technology Products
- Government reports Strong Public Feedback on Draft DPDP Rules, highlights Nationwide Cybersecurity Push
- Kerala High Court issues First-of-its-Kind AI Use Policy for District Judiciary
- Madras High Court orders 48-Hour Takedown of Non-Consensual Intimate Content
United States of America
- GENIUS Act signed into Law
- AI Action Plan to boost Global Competitiveness unveiled
- Modifications proposed to California Delete Act
- Court pauses Enforcement of Algorithmic Pricing Regulation in Retail Dispute
- Court certifies Nationwide Class in Alexa Privacy Lawsuit Against Amazon
- Partial Settlement finalized in Privacy Lawsuit Over Sharing of Reproductive Health Data
- Data Breach at Major Insurance Company prompts Statewide Notification and Risk of Litigation
European Union
- Guidelines to Protect Minors Online issued
- French Data Privacy Watchdog issues Recommendations on GDPR-Compliant AI Development
- EDPS approves Commission’s Data Protection Reforms setting Benchmark for Other Institutions
- Guidelines on Obligations for Gen-AI Models under the AI Act released
- Switzerland launches AI Model for Public Benefit
United Kingdom
- Stronger Online Protection Requirements for Children on Social Media enforced
- Guidelines issued on disclosing Documents to the Public while protecting Data Privacy
China
- Threshold and Requirement for Mandatory Appointment of Privacy Officers introduced
- Government pushes for Global AI Governance at 2025 World AI Conference
Others
SUMMARY
Welcome to this edition of Fountainhead Legal’s newsletter!
As the digital ecosystem becomes increasingly interconnected, the stakes for cybersecurity, privacy, and responsible AI have never been higher. This month’s newsletter captures the latest developments shaping regulatory frameworks, judicial interventions, and technological governance across India, Europe, the U.S., and beyond.
The digital frontier is heating up, and the world is racing to keep pace with technology while safeguarding citizens, data, and trust. In India, CERT-In is setting the stage with comprehensive cybersecurity audit and Bills of Materials guidelines, along with draft AI guidelines to ensure responsible adoption and risk management. Public feedback on the Draft DPDP Rules has been overwhelming, while landmark court orders are shaping the legal landscape, from enforcing rapid takedown of non-consensual intimate content to regulating AI use in the judiciary. Notably, AI is making cultural waves too: it was recently used to recreate the iconic Bollywood film Raanjhanaa, highlighting both the creative potential and ethical questions around generative AI. See our founder's comments on this issue, shared in discussion with Mint, below.
“Further, in a controversial case, the 2024 film The Brutalist used AI to alter actors’ performances in post-production, which sparked backlash despite the actors being aware of the use of AI tools, according to Rashmi Deshpande, founder, Fountainhead Legal.”
“The AI-generated alternative ending in Raanjhanaa marks a notable moment in Indian cinema and one of the earliest global examples of AI being used to rewrite narrative arcs. As AI becomes more accessible, such practices are expected to grow but raise serious concerns about creative integrity, legal rights, and consent, Deshpande said.”
– Mint | July 27, 2025
Across the United States, cryptocurrency and privacy are grabbing headlines. The GENIUS Act has created a federal framework for stablecoins, reflecting the growing regulatory heat around digital assets. Our founder weighed in on one of the hot topics in the crypto world, the ongoing WazirX controversy, emphasizing how users are being left without proper judicial remedies and underscoring the urgent need for clear consumer protections (for more detailed insights on our founder’s take, click here).
AI governance is also a top priority: the White House’s ‘America’s AI Action Plan’ encourages safe, inclusive, and transparent AI innovation, while California’s Delete Act updates aim to give consumers more control over their data. Courts are tackling major privacy and cybersecurity battles, from Alexa voice data to reproductive health apps, and even a massive breach at a major insurance company affecting 1.4 million individuals.
Globally, in the EU, the Commission has issued its first set of guidelines to protect minors online under the DSA, the EDPS has cleared the European Commission’s use of Microsoft 365 as compliant, and new AI Guidelines define obligations for general-purpose models, balancing innovation with accountability. Switzerland is breaking new ground with a fully transparent large language model for public use, setting a high bar for ethical and inclusive AI. Further, the UK is strengthening online child protections under the Online Safety Act, while the ICO issues practical guidance for safely disclosing documents without exposing personal data. Meanwhile, China is strengthening its privacy regulations by introducing thresholds for the mandatory appointment of privacy protection officers while simultaneously pushing for coordinated AI governance and equitable access. The UN is calling for inclusive frameworks to ensure AI benefits all nations, Singapore is strengthening its digital token service provider licensing framework, and Dubai and Nepal are taking swift regulatory steps to protect digital rights and curb risks.
From AI-powered cinema to courtroom battles over data privacy, and from global AI summits to the regulation of cryptocurrencies, these developments show a world where innovation and governance are racing side by side because in the digital age, staying ahead is not just about technology, it is about trust, fairness, and accountability.
At Fountainhead Legal, we see this as an opportunity for companies to shift from compliance as a legal burden to compliance as a design principle. As laws evolve, organisations must build structures that are legally sound, operationally agile, and technologically resilient. Because in the digital age, what is lawful must also be ethical and what is innovative must also be accountable.
We are committed to supporting organizations on this journey. With our deep expertise in data privacy compliance and a strong understanding of regulatory nuances, we offer tailored solutions for each client’s unique needs. From drafting privacy policies and developing data protection frameworks to advising on cross-border data transfers and facilitating employee training programs, our team is equipped to guide clients through every stage of their compliance strategy.
We hope you enjoy our latest updates!
NATIONAL
1. CERT-In issues Cyber Security Audit Policy Guidelines [1]
CERT-In has released the Comprehensive Cyber Security Audit Policy Guidelines (“Guidelines”) to standardize cyber security audits and strengthen resilience across organizations. The aim is to establish a uniform process for conducting cyber security audits and to improve organizations’ ability to prevent, detect, and respond to cyber threats. The Guidelines set out mandatory requirements for cyber security auditors, including eligibility criteria, empanelment processes, adherence to prescribed auditing methodologies, and maintenance of strict confidentiality over audit findings, with the aim of ensuring thorough assessments, consistent reporting, and prompt remediation of identified vulnerabilities. The Guidelines also define the roles and responsibilities of auditors, outline reporting obligations to CERT-In, and require compliance with applicable laws. Organizations are expected to cooperate fully during audits, provide accurate information, and take timely remedial action on identified vulnerabilities.
CERT-In has also clarified that non-compliance with the Guidelines may result in suspension or removal of auditors from the empanelled list, and violations could attract legal consequences under the IT Act. By laying down a clear compliance framework, CERT-In seeks to enhance cyber security preparedness, ensure consistency in audit quality, and support India’s broader cyber resilience objectives.
2. CERT-In updates Technical Guidelines on Bills of Materials for Technology Products [2]
CERT-In published Version 2.0 of its Technical Guidelines on SBOM, QBOM & CBOM, AIBOM, and HBOM (“Guidelines”), covering different types of Bills of Materials (“BOMs”) for technology products, namely Software (SBOM), Quantum (QBOM), Cryptographic (CBOM), AI (AIBOM), and Hardware (HBOM).
The Guidelines recommend that Government departments, public sector bodies, essential services, and software companies use SBOMs as a standard part of software procurement and development. SBOMs should include key details like component name, version, supplier, license, and known vulnerabilities. They must also follow standard formats such as Software Package Data eXchange or CycloneDX. The Guidelines also stress the need for secure sharing, access control, and using SBOMs during the entire software development process.
To improve vulnerability management, the Guidelines introduce two key tools: the Vulnerability Exploitability eXchange (VEX), which labels issues as ‘Not Affected’, ‘Affected’, ‘Fixed’, or ‘Under Investigation’ so that users can act promptly; and the Common Security Advisory Framework (CSAF) for consistent security notifications. The inclusion of QBOM and CBOM is also intended to prepare organisations for emerging threats, including those posed by quantum computing.
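The minimum SBOM fields and the four VEX statuses described above lend themselves to a simple record check. The sketch below is illustrative only: the field names mirror the Guidelines' wording, the VEX status labels are those listed above, and the JSON shape is loosely modelled on CycloneDX rather than being a prescribed format.

```python
import json

# Minimum details the Guidelines expect in each SBOM component entry
REQUIRED_FIELDS = {"name", "version", "supplier", "license", "vulnerabilities"}

# VEX exploitability statuses named in the Guidelines
VEX_STATUSES = {"Not Affected", "Affected", "Fixed", "Under Investigation"}

def validate_component(component: dict) -> list[str]:
    """Return a list of problems found in a single SBOM component record."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - component.keys()]
    for vuln in component.get("vulnerabilities", []):
        if vuln.get("vex_status") not in VEX_STATUSES:
            problems.append(f"invalid VEX status: {vuln.get('vex_status')!r}")
    return problems

# An illustrative component entry, loosely modelled on CycloneDX JSON
component = {
    "name": "openssl",
    "version": "3.0.13",
    "supplier": "OpenSSL Project",
    "license": "Apache-2.0",
    "vulnerabilities": [
        {"id": "CVE-2024-0727", "vex_status": "Fixed"},
    ],
}

print(json.dumps(component, indent=2))
print("problems:", validate_component(component))
```

In practice, organisations would generate such records with standard tooling that emits SPDX or CycloneDX output rather than hand-rolling them; the point here is simply that each component carries the name, version, supplier, licence, and vulnerability-status details the Guidelines call for.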
By standardising how BOMs are created and used, the Guidelines aim to improve transparency, trace supply chain risks, and strengthen trust in India’s digital systems. Technology developers, suppliers, and procurers should begin adopting these updated BOM practices to meet future compliance needs, enhance product security, and build stakeholder confidence.
3. Government reports Strong Public Feedback on Draft DPDP Rules, highlights Nationwide Cybersecurity Push [3]
On July 25, 2025, MeitY informed Parliament that the Draft Digital Personal Data Protection Rules, 2025 (“Draft DPDP Rules”), aimed at operationalising the DPDP Act, have drawn 6,915 inputs from citizens and stakeholders. The Draft DPDP Rules are intended to implement the DPDP Act’s privacy safeguards, balancing individual rights with lawful data processing needs.
Alongside this, the Government highlighted ongoing capacity-building and public awareness initiatives, including over 3,600 cybersecurity workshops under the ISEA programme, campaigns like Cyber Security Awareness Month, and the CyberShakti initiative to train women in cybersecurity. Key institutional measures, such as CERT-In for incident response, NCIIPC for critical infrastructure protection, and Cyber Swachhta Kendra for malware cleaning, continue to underpin India’s cyber resilience framework.
By combining regulatory consultation with extensive awareness and security infrastructure, the Government appears to be taking a dual-track approach, strengthening legal safeguards for personal data while building nationwide cyber hygiene and response capabilities. This positions India’s digital ecosystem for more robust privacy compliance and threat preparedness under the DPDP regime.
4. Kerala High Court issues First-of-its-Kind AI Use Policy for District Judiciary [4]
The Kerala High Court unveiled a landmark policy, Policy Regarding the use of Artificial Intelligence (AI) tools in the District Judiciary (“Policy”), the first such framework in India to set strict boundaries on AI in judicial functions. The Policy mandates that AI can only be used as an assistive tool for specific approved purposes and never as a substitute for judicial reasoning or decision-making. It applies to judges, court staff, interns, and law clerks, covering all AI tools, including generative AI like ChatGPT, Gemini, and DeepSeek. Notably, cloud-based AI services are prohibited unless explicitly approved, to prevent breaches of confidentiality. Judicial officers must meticulously verify all AI-generated outputs, maintain audit records of AI use, and undergo mandatory training on ethical, legal, and technical aspects of AI. Violations could trigger disciplinary proceedings. By introducing the Policy, the High Court seeks to harness the benefits of AI while safeguarding transparency, fairness, and the integrity of the judicial process, setting a precedent for responsible AI governance in the justice system.
5. Madras High Court Orders 48-Hour Takedown of Non-Consensual Intimate Content [5]
MeitY has been directed by the Madras High Court to take urgent steps to remove the petitioner’s Non-Consensual Intimate Images and Videos (“NCII”) from the internet. The petitioner, a practising advocate, alleged that intimate videos, recorded without her knowledge under the pretext of a romantic relationship, had been widely circulated across more than 70 websites, pornographic platforms, and social media channels. Despite lodging a criminal complaint under provisions of the IT Act, IPC/BNS, and the Tamil Nadu Prohibition of Harassment of Women Act, 1998, the content continued to be shared, causing her severe reputational damage, public shaming, and emotional distress.
Relying on the Delhi High Court’s 2023 guidelines on NCII redressal, the court emphasised that the right to dignity and privacy under Article 21 of the Constitution of India is a fundamental guarantee and that constitutional courts have a duty to act promptly in cases of such gross violations. It ordered MeitY to block, remove, and prevent further dissemination of the offending content by issuing necessary directions to intermediaries, websites, pornographic platforms, and telecom service providers, and to employ technological solutions such as hash-matching and AI-based detection tools.
The court mandated that this exercise be completed within 48 hours and kept the petition pending to issue a “continuing mandamus” to ensure that similar situations are addressed effectively in the future. The Director General of Police was also suo motu impleaded to facilitate awareness and sensitisation within the police force.
INTERNATIONAL
UNITED STATES OF AMERICA
6. GENIUS Act signed into Law [6]
As highlighted in our June 2025 Monthly Edition, the Guiding and Establishing National Innovation for U.S. Stablecoins Act (“GENIUS Act”) has now officially become law, following its signing on July 18, 2025. This marks a major milestone for digital asset regulation, creating a dedicated federal framework for the issuance and oversight of payment stablecoins. With the GENIUS Act now in force, the market gains a clear, uniform set of rules, replacing the patchwork of state-level requirements. While our June edition covered eligible issuers and core compliance obligations, the key development now is certainty: the GENIUS Act provides a defined path for authorised players to issue stablecoins, while pre-empting certain state laws to streamline oversight. However, the high bar for reserve, disclosure, and anti-money laundering requirements means established institutions are best positioned to benefit, while smaller entrants may face higher compliance hurdles.
The GENIUS Act is expected to spur greater institutional adoption of regulated stablecoins, enabling faster, lower-cost domestic and cross-border payments. The real impact will depend on upcoming agency rulemaking, which will set out the technical and operational details for compliance. These rules will shape how quickly and by whom the new regime is put into practice.
7. AI Action Plan to boost Global Competitiveness Unveiled [7]
The White House has released America’s AI Action Plan (“Plan”), a coordinated framework aimed at promoting safe, inclusive, and responsible AI development while strengthening the United States’ global competitiveness. The Plan seeks to align AI governance, foster innovation, and ensure that the benefits of AI are shared broadly across society.
Key measures include commitments to establish interoperable AI governance frameworks, encourage cross-border collaboration on AI safety research, and promote transparency and accountability in AI systems. The Plan emphasises developing shared technical standards, facilitating public–private partnerships, and expanding AI literacy and workforce training to prepare communities for an AI-driven economy. There is also a strong focus on ethical safeguards, addressing risks such as bias, misuse, and lack of transparency, while enabling innovation in sectors like healthcare, education, climate resilience, and public services.
If implemented effectively, the Plan could position the region as a leader in shaping global AI norms, reduce regulatory fragmentation, and strengthen trust in AI technologies. Its emphasis on cooperation and shared standards is expected to lower market entry barriers for innovators while giving policymakers the tools to mitigate AI-related risks. The next phase will involve translating these commitments into concrete national and regional actions, with measurable milestones to track progress.
8. Modifications proposed to California Delete Act [8]
The CPPA has issued a Notice of Modifications to Proposed Regulations (Accessible Deletion Mechanism) under the California Delete Act, 2023 (“Delete Act”).
To jog your memory, the Delete Act establishes the Delete Request and Opt-Out Platform (“DROP”), a single, centralised mechanism through which consumers can request deletion of their personal data from all registered data brokers, removing the need to contact each broker individually (for more detail, see our April 2025 Monthly Edition, where we covered the Delete Act and DROP).
While the core framework of DROP remains intact, the modifications introduce several refinements. These include updated definitions of “key identifiers” to improve the accuracy of matching, clearer protocols for verifying and processing deletion requests, enhanced requirements for recordkeeping and retention of unmatched requests, and more explicit technical standards for data formatting and field consistency. The revisions also streamline the audit process, clarify broker obligations when requests are partially matched, and specify acceptable methods for secure deletion. If adopted, these modifications will strengthen operational clarity, improve enforcement feasibility, and help ensure uniform application of the Delete Act across California’s data broker ecosystem.
The public consultation on these modifications will remain open until 5:00 P.M. Pacific Daylight Time on August 18, 2025.
9. Court pauses Enforcement of Algorithmic Pricing Regulation in Retail Dispute [9]
The District Court for the Southern District of New York has paused enforcement of the state’s newly enacted Algorithmic Pricing Disclosure Act (“Act”). The Act requires retailers to display prominent notices when prices are set using algorithms that rely on personal data.
The challenge, brought by the National Retail Federation (“NRF”), argues that the Act compels misleading speech, infringes constitutional protections, and overreaches by applying broadly to many forms of dynamic pricing. Algorithmic pricing refers to the use of automated systems, often incorporating AI, to adjust or personalise prices based on factors like customer location, purchase history, or browsing patterns. The NRF contends that these tools are frequently used to offer discounts or targeted promotions, not just to raise prices, and that the mandated disclosure risks painting lawful practices in a negative light. A combined hearing on the injunction motion and the state’s motion to dismiss is scheduled for September 4, 2025. If the court grants the injunction, the Act will remain unenforceable while the case proceeds; if denied, enforcement could begin within 30 days, requiring rapid compliance from affected retailers.
This case is poised to test the limits of state-mandated transparency in AI-driven pricing and could set a crucial precedent for balancing consumer protection with businesses’ right to algorithmic pricing practices.
10. Court certifies Nationwide Class in Alexa Privacy Lawsuit Against Amazon [10]
The District Court for the Western District of Washington has certified two nationwide classes in a lawsuit alleging that Amazon.com Inc. (“Amazon”)’s ‘Alexa’ devices collected, stored, and used voice recordings without sufficient disclosure or consent. The claims are brought under Washington’s Consumer Protection Act, 1961 (“Act”), focusing on whether Amazon misled registered Alexa users about how their audio data is retained and used.
The plaintiffs contend that Alexa records audio after detecting a “wake word,” retains transcripts and related metadata, stores snippets from inadvertent “false wakes,” and permits limited human review, all without sufficient disclosure or consent. They further allege that Amazon ‘re-purposed’ this data for commercial benefit, including targeted advertising and product development, in violation of the Act.
In certifying both a damages class and an injunctive relief class for registered Alexa users, the court concluded that common questions regarding Amazon’s disclosures and data practices predominated over individual issues and that a class action was the most efficient means of resolving the dispute. This step allows millions of Alexa owners to collectively pursue their privacy claims, significantly increasing Amazon’s potential exposure. By contrast, the court declined to certify proposed classes for unregistered household members under various state privacy and wiretap laws, citing the need for individualised assessments of consent and privacy expectations.
The ruling positions the case to proceed toward the merits stage, with formal notice to class members, further discovery, and potential settlement discussions or trial. Unlike Apple’s Siri litigation, which ended in a multimillion-dollar settlement before contested class certification, Amazon now faces unified damages and injunctive relief claims in court, a posture that not only heightens litigation risk but also increases the chance of a precedent-setting judgment on how voice-assistant data practices are regulated.
11. Partial Settlement finalized in Privacy Lawsuit Over Sharing of Reproductive Health Data [11]
A major privacy class action in California challenges how Flo Health, Inc. (“Flo Health”), the developer of a widely used menstrual and fertility tracking application, collected and disclosed users’ intimate health information [12]. The plaintiffs allege that Flo Health shared sensitive data, including menstrual cycle details, fertility indicators, and pregnancy status, with third parties such as Meta Platforms, Inc. (formerly Facebook, Inc.), Google LLC (“Google”), and Flurry, Inc., a mobile analytics provider. According to the complaint, these transmissions occurred without clear user consent and were used for marketing, analytics, and product development purposes. The District Court for the Northern District of California allowed the main privacy and consumer protection claims to go forward, finding that the alleged sharing of such personal information could be an unlawful invasion of privacy under state law. At the same time, the court dismissed certain claims, including some under federal communications statutes and unjust enrichment theories, either outright or with leave to amend. This partial dismissal narrowed the scope of the lawsuit but left its central allegations intact. The case is now set to focus on determining what data was shared, which companies received it, and how it was used.
In a recent development, Google has informed the court that it has reached a settlement with the plaintiffs, prompting the court to pause proceedings against it. The litigation continues against the other companies and could set an important precedent for how reproductive health applications and related technology companies handle sensitive health data in an era of heightened privacy concerns.
Privacy lawsuits involving sensitive health data can trigger significant financial, legal, and reputational consequences. They underscore why businesses must embed privacy compliance into product design, not only to meet legal requirements but also to maintain user trust and avoid costly disputes.
12. Data Breach at Major Insurance Company prompts Statewide Notification and Risk of Litigation [13]
On July 16, 2025, hackers successfully infiltrated Allianz Life Insurance Company of North America’s (“Allianz”) customer relationship management system, compromising sensitive personal and health information of approximately 1.4 million individuals. The breach, attributed to the notorious ShinyHunters threat group, involved social engineering tactics in which cybercriminals impersonated IT support staff to gain unauthorized access to Salesforce Data Loader systems. Following Allianz’s discovery of the breach on July 17[14], a federal class action lawsuit has been filed in Minnesota District Court. The compromised data included Social Security numbers, medical histories, financial records, insurance claim information, and other highly sensitive personal details that are now potentially circulating on the dark web.
The lawsuit presents a multifaceted legal challenge, alleging negligence, HIPAA violations, breach of fiduciary duty, and violations of Minnesota’s deceptive trade practices law. Particularly damaging to Allianz’s position is the extensive documentation showing that its parent company, Allianz SE, has been warning businesses about cyber risks for years, consistently identifying cyber incidents as the top global business risk since 2015. The complaint alleges multiple regulatory failures, including inadequate implementation of required administrative, technical, and physical safeguards under HIPAA, violations of the FTC Act for unfair data security practices, and failure to meet basic industry standards such as proper network segmentation, intrusion detection systems, and multi-factor authentication. The plaintiff, who spent hours responding to fraudulent activity and monitoring his accounts, represents the human cost of security failures that will likely plague victims for years to come. The lawsuit seeks monetary damages, mandatory credit monitoring services, and injunctive relief requiring enhanced security measures, potentially setting important precedents for corporate accountability in data protection.
For businesses handling sensitive personal information, this case underscores the essential need for comprehensive employee cybersecurity training, regular security audits, prompt breach notification protocols, and thorough vetting of third-party vendor security practices. As courts increasingly recognize that data breaches cause real, compensable harm to individuals, companies must prioritize robust cybersecurity measures not merely as IT concerns, but as fundamental business responsibilities with serious legal ramifications.
EUROPEAN UNION
13. Guidelines to Protect Minors Online issued [15]
The European Commission has published the Guidelines on the Protection of Minors Online (“Guidelines”) to enhance the safety of children and young people on online platforms, in line with the DSA. The Guidelines provide a framework for online services to protect minors from risks such as harmful content, cyberbullying, grooming, and manipulative commercial practices.
The recommendations target platforms accessible to minors, excluding micro and small enterprises. Key measures include setting minors’ accounts to private by default, adjusting recommender systems to limit exposure to harmful content, and giving minors greater control over their feeds. The Guidelines also advise allowing users to block or mute others, restricting features that encourage excessive use, and preventing the unauthorized sharing of minors’ content. Additionally, the Guidelines stress safeguarding minors from exploitative commercial practices, such as certain in-app purchases, virtual currencies, or loot boxes. Platforms are encouraged to improve moderation and reporting tools, offer parental controls, and implement reliable, non-intrusive age verification methods. The upcoming EU Digital Identity Wallets and age verification blueprints will serve as examples for compliant practices.
These guidelines are grounded in children’s rights and a risk-based approach, recognising that the level of risk may vary depending on the platform’s size, purpose, and audience. The Commission intends to use these guidelines to monitor compliance with Article 28(1) of the DSA and to support national regulators in enforcing protections for minors online.
14. French Data Privacy Watchdog issues Recommendations on GDPR-Compliant AI Development [16]
France’s data protection authority, the CNIL, has issued its Recommendations on the Development of Artificial Intelligence Systems (“Recommendations”) to guide GDPR-compliant AI projects. The Recommendations address issues such as the risk of personal data being memorised by AI models, secure development practices, and responsible data annotation. They include practical tools like a compliance checklist and a summary sheet, currently available in French. Developed after public consultation, the Recommendations form part of a strategic roadmap, which also foresees sector-specific AI guidelines, clearer allocation of responsibilities in the AI value chain, and technical tools like PANAME to audit AI models for personal data processing.
15. EDPS approves Commission’s Data Protection Reforms setting Benchmark for Other Institutions [17]
The EDPS has recently closed its investigation into the European Commission’s (“Commission”) use of Microsoft 365, concluding that it now complies with EU data protection rules.
This marks the end of a compliance journey that began in March 2024, when the EDPS ordered the Commission to fix serious shortcomings, ranging from vague processing purposes and weak safeguards for data sent outside the EU/EEA to insufficient clarity on how and when information could be disclosed. In response, the Commission reshaped its contractual and technical framework: processing is now limited to defined public-interest purposes, tighter controls govern onward transfers, and disclosure mechanisms have been clarified. A compliance report was filed in late 2024, followed by further detail in mid-2025.
Significantly, the Commission has reworked its licensing agreement with Microsoft so that other EU institutions and bodies can adopt the same protections. While the EDPS has encouraged them to do so, it stressed that this outcome is not a universal seal of approval for all Microsoft 365 uses, a reminder that compliance depends on configuration and context.
16. Guidelines on Obligations for Gen-AI Models under the AI Act released [18]
The Guidelines on the Scope of the Obligations for Providers of General-Purpose AI Models under the AI Act (“Guidelines”) have been released to clarify the compliance framework ahead of the AI Act’s obligations for general-purpose AI models, which come into effect on August 2, 2025.
The Guidelines define general-purpose AI models as models capable of performing a wide range of tasks and being integrated into various downstream AI systems. Providers of such models must meet documentation, transparency, copyright compliance, and cybersecurity obligations. Additional requirements apply to general-purpose AI models with systemic risk, which are determined either through meeting a compute threshold or being designated by the Commission. These models are subject to enhanced scrutiny and must implement risk mitigation, report serious incidents, and cooperate with the AI Office. The Guidelines also address open-source exemptions. Certain obligations, such as technical documentation, may be waived for models released under a qualifying free and open-source licence, provided the model does not pose systemic risks. However, such providers must still comply with EU copyright law and publish summaries of training content. The Guidelines set out strict conditions on licensing and prohibit any form of monetisation to claim exemption.
These Guidelines form a key component of the AI governance framework, offering legal clarity to developers and ensuring responsible AI development while preserving innovation through narrowly defined open-source exceptions.
17. Switzerland launches AI Model for Public Benefit [19]
The Swiss Federal Institute of Technology Zurich (ETH Zurich), the École Polytechnique Fédérale de Lausanne (EPFL), and the Swiss National Supercomputing Centre (CSCS) are preparing to release a fully transparent large language model later this summer, developed on the carbon-neutral “Alps” supercomputer. The model will be openly licensed and aimed at a wide range of users, including researchers, educators, public authorities, and private companies. Designed to work in more than 1,000 languages, the system will be available in two sizes (8 billion and 70 billion parameters) and trained on a diverse dataset of over 15 trillion tokens, with a significant share drawn from non-English sources. It will also support code and mathematical problem-solving, making it adaptable to academic, commercial, and civic applications. Importantly, the full model weights, source code, and training dataset details will be released, offering an unprecedented level of transparency and reproducibility in large-scale AI development.
By releasing a high-performance model with full transparency, these institutions set a benchmark for ethical and inclusive AI development. However, such openness also shifts responsibility to the broader community to ensure the technology is applied in ways that align with public interest, reinforcing the need for clear usage guidelines and shared accountability.
UNITED KINGDOM
18. Stronger Online Protections Requirements for Children on Social Media enforced [20]
The new online safety requirements under the Online Safety Act, 2023 (“Act”) came into effect on July 25, 2025, placing legal obligations on platforms to protect children from harmful content. The changes are aimed at making social media and other online services safer for users under the age of 18.
Platforms hosting pornography or content promoting suicide, self-harm, or eating disorders must now implement highly effective age assurance tools, such as facial scans, photo ID, or credit card verification, to prevent children from accessing such content. They are also required to restrict toxic algorithms that recommend harmful content and ensure feeds are safer for children by avoiding the promotion of bullying, violence, or dangerous online challenges.
Platforms must respond quickly to harmful content and provide child-friendly reporting tools and user support. The UK’s Office of Communications (Ofcom) is responsible for enforcement. Non-compliant services may face fines of up to 10% of global annual revenue or GBP 18 million, whichever is higher. These measures are part of the Government’s broader push to reclaim the digital environment for children and reduce exposure to online risks.
19. Guidelines issued on disclosing Documents to Public while protecting Personal Data [21]
The Information Commissioner’s Office (ICO) has released Disclosing documents to the public securely: hidden personal information and how to avoid an accidental breach (“Guidelines”) to assist organisations in safely disclosing documents to the public while safeguarding personal information. The Guidelines are especially relevant for public authorities handling Freedom of Information (FOI) requests and organisations responding to Subject Access Requests (SARs). They draw attention to the risks posed by hidden personal data, including metadata, hidden cells, worksheets, or active filters, which can inadvertently be shared in documents. To address these risks, the ICO provides practical steps for identifying, removing, or properly redacting such information to ensure compliance with data protection laws. The Guidelines emphasise several key practices. Organisations should conduct thorough document reviews to detect any hidden or embedded personal data prior to disclosure. Proper redaction is crucial, ensuring that personal information is completely removed rather than merely hidden. Selecting safe file formats can further reduce the risk of accidental data exposure. The Guidelines also encourage the use of tools, such as Microsoft Document Inspector, to scan for hidden data, and recommend post-disclosure reviews to learn from any incidents and improve future data-sharing practices.
These Guidelines replace the previous 2023 advisory, which followed significant data breaches involving organisations such as the Police Service of Northern Ireland and the Ministry of Defence. By adhering to these recommendations, organisations can better protect sensitive information, strengthen public trust, and maintain compliance with UK data protection regulations. However, the Guidelines are under review in light of the new Data (Use and Access) Act, 2025.
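The metadata risk highlighted by the ICO can be illustrated with a short Python sketch (a hypothetical illustration using only the standard library, not part of the ICO Guidelines): a .docx file is a ZIP archive, and author details often survive in its docProps/core.xml part even after the visible text has been redacted.

```python
import io
import zipfile
import xml.etree.ElementTree as ET


def find_hidden_metadata(docx_file):
    """Return author-related metadata embedded in a .docx file.

    A .docx file is a ZIP archive; personal data such as the author's
    name can linger in docProps/core.xml even after the visible text
    has been redacted. Accepts a file path or a file-like object.
    """
    fields = {}
    with zipfile.ZipFile(docx_file) as zf:
        if "docProps/core.xml" not in zf.namelist():
            return fields  # no core-properties part present
        root = ET.fromstring(zf.read("docProps/core.xml"))
        for el in root:
            # Drop the XML namespace prefix: '{...}creator' -> 'creator'
            tag = el.tag.rsplit("}", 1)[-1]
            if tag in ("creator", "lastModifiedBy") and el.text:
                fields[tag] = el.text
    return fields
```

A scan like this flags only two common fields; a real pre-disclosure review would also cover comments, tracked changes, and hidden spreadsheet content, as the Guidelines advise.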
CHINA
20. Threshold and Requirement for Mandatory Appointment of Privacy Officers introduced [22]
The CAC announced a new requirement under the Personal Information Protection Law of the People’s Republic of China, 2021 (PIPL). Organisations handling the personal data of one million or more individuals must appoint a Personal Information Protection Officer (PIPO) and submit the officer’s details to their local CAC office. Entities already meeting the threshold must complete reporting by August 29, 2025, while those crossing it in the future will have 30 working days to comply. Any subsequent changes, such as a replacement officer or updated contact details, must also be reported within 30 working days. Reports are to be filed via the CAC’s designated online platform. Failure to comply, whether by missing deadlines, omitting details, or providing inaccurate information, may result in penalties under the PIPL. The move strengthens China’s enforcement of data protection responsibilities and ensures clearer oversight of high-volume data processing activities.
21. Government pushes for Global AI Governance at 2025 World AI Conference [23]
Chinese Premier of the State Council, Li Qiang, speaking at the opening of the 2025 World AI Conference and High-Level Meetings on Global AI Governance in Shanghai, emphasized the urgent need for a coordinated international framework to manage AI. According to Premier Li, global consensus is essential to ensure that AI technologies are developed and used safely, responsibly, and inclusively across borders.
Highlighting the rapid advancements in AI, particularly large language models and multimodal systems, Premier Li noted that while these technologies create significant economic opportunities, they also introduce serious risks. He stressed that AI must remain under human control and should be regarded as an international public good that benefits all of humanity.
In a significant proposal, Premier Li called for the establishment of a global AI cooperation organization to facilitate multilateral collaboration. He underscored the importance of ensuring equitable access to AI, especially for countries in the Global South, and expressed China’s readiness to share its AI technologies, development experience, and open-source tools. Joint research and cooperative innovation were presented as key pathways to achieving global progress.
The conference, attended by over 1,000 representatives from Governments, industry, and academia, concluded with an action plan for global AI governance. This initiative signals China’s intent to play a central role in shaping the rules and cooperation mechanisms for the safe and inclusive deployment of AI worldwide.
OTHERS
22. Singapore enforces Regulations for Digital Token Service Providers
The Government of Singapore published the Financial Services and Markets (Digital Token Service Providers) Regulations 2025 (S 342/2025) (“DTSP Regulations”), which took effect on June 30, 2025. The DTSP Regulations are issued under the authority of the Financial Services and Markets Act 2022 and form part of Singapore’s ongoing framework to regulate the digital token sector. The DTSP Regulations establish clear rules for digital token service providers operating in Singapore, covering areas such as licensing requirements, operational transparency, risk management, and consumer protection. They also include provisions related to anti-money laundering compliance and the safeguarding of client assets. By introducing the DTSP Regulations, Singapore aims to create a secure and trustworthy environment for digital token services, balancing innovation in the digital economy with robust oversight and consumer protection.
23. UN Secretary-General calls for Inclusive Global Governance of AI [24]
Alongside Chinese Premier Li Qiang, the UN Secretary-General also delivered a video address at the World Artificial Intelligence Conference and High-Level Meetings on Global AI Governance held in Shanghai, China. In his remarks, he described the governance of AI as a defining test of international cooperation and urged global leaders to ensure that AI technologies benefit all, not just a few.
Highlighting the transformative potential of AI in achieving the Sustainable Development Goals, including advances in education, healthcare, and climate action, the Secretary-General warned that growing disparities in access threaten to leave developing nations behind. He stressed the need to create inclusive frameworks that allow all countries to shape the future of AI, grounded in science, human rights, and global solidarity.
The Secretary-General also announced that a forthcoming UN report will propose voluntary financing mechanisms to support AI capacity-building in developing nations. He expressed strong support for two key initiatives currently underway at the UN: the creation of an International Independent Scientific Panel on AI and a Global Dialogue on AI Governance. These efforts aim to promote transparent, science-based, and inclusive policymaking at a global scale.
24. Dubai strengthens Data Protection Regulations [25]
The DIFC enacted key amendments to its data protection regime, the DIFC Data Protection Law, DIFC Law No. 5 of 2020, through the DIFC Laws Amendment Law, DIFC Law No. 1 of 2025 (“Amendment Act”). These changes aim to strengthen individual rights and legal remedies while improving alignment with international data protection standards.
A major development under the Amendment Act is the introduction of a ‘Private Right of Action’, allowing data subjects to file claims directly before the DIFC courts for violations of their personal data rights. Additional amendments clarify the scope of application, including the extra-territorial effect of the law under Article 6, confirming that entities outside DIFC may still be subject to the law if they process personal data in connection with DIFC activities. The Amendment Act also revises provisions on data sharing to clarify how adequacy of third countries is assessed when transferring personal data outside the DIFC.
In parallel, the DIFC has enacted the DIFC Digital Assets Law, DIFC Law No. 2 of 2024, establishing a legal framework for the recognition, control, and transfer of digital assets. While not directly linked to data protection, the law reflects DIFC’s broader efforts to provide legal certainty for emerging technologies operating within the Centre.
25. Nepal issues Urgent Notice regarding Immediate Blocking of Telegram Application [26]
The Nepal Telecommunications Authority has issued an urgent directive instructing all internet service providers and telecom operators to immediately block the Telegram Application (“App”) within Nepal.
This directive has been issued in response to serious concerns that the App is being increasingly used for activities such as fraud, criminal activity, and the unauthorized dissemination of sensitive information. The notice stresses the immediate nature of the action and is aimed at curbing the misuse of the platform.
ABBREVIATIONS
- AI Act – Artificial Intelligence Act 2024
- BNS – Bharatiya Nyaya Sanhita, 2023
- CAC – Cyberspace Administration of China
- CERT-In – Indian Computer Emergency Response Team
- CPPA – California Privacy Protection Agency
- DIFC – Dubai International Financial Centre
- DIFC DPL – DIFC Data Protection Law, 2020
- DPDP Act – Digital Personal Data Protection Act, 2023
- DSA – Digital Services Act 2022
- EDPS – European Data Protection Supervisor
- FTC Act – Federal Trade Commission Act 1914
- GDPR – General Data Protection Regulation (EU) 2016/679
- HIPAA – Health Insurance Portability and Accountability Act 1996
- IPC – Indian Penal Code, 1860
- IT Act – Information Technology Act, 2000
- NCIIPC – National Critical Information Infrastructure Protection Centre
- MAS – Monetary Authority of Singapore
- PIPL – Personal Information Protection Law, 2021
- PIPO – Personal Information Protection Officer
- SEC – Securities and Exchange Commission
Authors:
- Rashmi Deshpande
- Aarushi Ghai
[1] https://www.cert-in.org.in/
[2] https://www.cert-in.org.in/
[3] https://www.pib.gov.in/PressReleasePage.aspx?PRID=2148944
[4] https://images.assettype.com/theleaflet/2025-07-22/mt4bw6n7/Kerala_HC_AI_Guidelines.pdf
[5] X v. Union of India & Anr., [WP No. 25017 of 2025]
[6] https://www.congress.gov/bill/119th-congress/senate-bill/1582
[7] https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/
[8] https://cppa.ca.gov/regulations/drop.html
[9] National Retail Federation v. James (S.D.N.Y., No. 1:25-cv-05500)
[10] Garner v. Amazon.com, Inc., [No. 2:22-cv-00975-RSL]
[11] https://www.classaction.org/media/frasco-et-al-v-flo-health-inc-et-al-google-notice-of-settlement.pdf
[12] Frasco et al. v. Flo Health, Inc., et al., [No. 3:21-cv-00757-JD (N.D. Cal. filed Feb. 1, 2021)]
[13] Simeon Taylor v. Allianz Life Insurance Company of North America, [Case No. 0:25-cv-3020]
[14] https://www.maine.gov/agviewer/content/ag/985235c7-cb95-4be2-8792-a1252b4f8318/0446bff3-a013-43ed-82fa-bca6bb157de1.html
[15] https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-protection-minors
[16] https://www.cnil.fr/en/ai-cnil-finalises-its-recommendations-development-artificial-intelligence-systems
[17] https://www.edps.europa.eu/press-publications/press-news/press-releases/2025/european-commission-brings-use-microsoft-365-compliance-data-protection-rules-eu-institutions-and-bodies
[18] https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act
[19] https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-language-model-built-for-the-public-good.html
[20] https://www.gov.uk/government/news/whats-changing-for-children-on-social-media-from-25-july-2025
[21] https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2025/07/new-guidance-on-disclosing-documents-to-the-public/?mkt_tok=MTM4LUVaTS0wNDIAAAGcN8HpCZHoD3B0St638dxBI3JfjTpReBbIhnnY5xtnSfDSs6ObFfPioT9nqlyqagtiCWiMoJYxTGm6F0o3PDE727bTC3MMfR4Vpw4JhsR4TcU81A
[22] https://www.cac.gov.cn/2025-07/18/c_1754553420421538.html
[23] https://english.www.gov.cn/news/202507/26/content_WS6884bea8c6d0868f4e8f4732.html
[24] https://press.un.org/en/2025/sgsm22741.doc.htm#:~:text=26%20July%202025-,Governing%20Artificial%20Intelligence%20a%20’Defining%20Test%20of%20International%20Cooperation’%2C,Global%20Dialogue%20on%20AI%20Governance
[25] https://www.difc.com/whats-on/news/difc-announces-enactment-of-amendments-to-select-difc-legislation-through-difc-law-amendment-law
[26] https://www.nta.gov.np/content/telegram-app-l-b-b-a