Technology Law and Data Privacy Updates

Monthly Edition - May 2025

SUMMARY 

Welcome to the latest edition of Fountainhead Legal’s Data Privacy and Technology Law newsletter.

This edition highlights landmark judicial pronouncements, regulatory strides, and enforcement actions that underscore a growing global consensus: digital access is a fundamental right, data privacy violations carry serious consequences, and emerging technologies must be harnessed responsibly within legal guardrails. In India, the Supreme Court affirmed the ‘Right to Digital Access’ for persons with disabilities, mandating accessible digital KYC processes and inclusive technology in financial services. SEBI has since issued guidelines to promote accessible financial platforms, while the RBI introduced its Digital Lending Guidelines 2025 to ensure responsible lending, mandatory app registration, customer disclosures, and grievance redressal in fintech. CERT-In’s recent advisory on critical cybersecurity vulnerabilities further underscores India’s focus on strengthening cyber resilience.

India’s enforcement and policy landscape is advancing rapidly. The Ministry of Home Affairs launched the ‘e-Zero FIR’ pilot via I4C to expedite cyber fraud FIRs involving losses above INR 10 lakh. The Maharashtra Police Cyber Crime Cell warned against unauthorized AI-generated content mimicking Studio Ghibli’s style, highlighting copyright and data protection risks. The Allahabad High Court clarified that merely ‘liking’ a social media post does not constitute publishing obscene content under Section 67 of the IT Act. Meanwhile, the RBI is developing a framework to ensure ethical AI use in banking, promoting transparency, fairness, and accountability.

Globally, regulators and Courts are intensifying scrutiny over data privacy, digital governance, and the ethical deployment of emerging technologies. In the United States, enforcement actions underscore persistent challenges in safeguarding sensitive health and personal data. BayCare Health System agreed to a USD 800,000 settlement following HIPAA security failures, while LexisNexis Risk Solutions disclosed a significant breach impacting over 360,000 individuals, highlighting risks linked to third-party data integrations. Meanwhile, Coinbase faces a class action under the Illinois Biometric Information Privacy Act for allegedly collecting and sharing facial biometric data without proper consent, illustrating rising legal risks in the fintech and cryptocurrency sectors.

In the European Union, the European Commission has escalated enforcement by referring five Member States—Czechia, Spain, Cyprus, Poland, and Portugal—to the EU Court of Justice for failing to effectively implement the Digital Services Act. This action stresses the importance of robust national-level enforcement to complement EU digital governance frameworks, ensuring safer and more transparent online environments. Meanwhile, the United Kingdom marked a regulatory milestone with the SRA’s approval of Garfield.Law Limited, the first fully AI-driven law firm authorized to provide regulated legal services. This signals a cautious yet progressive embrace of AI in legal practice, balancing innovation with consumer protection.

Together, these developments highlight a worldwide push towards stronger data protection enforcement, clearer regulatory frameworks for emerging technologies, and an emphasis on upholding fundamental privacy and ethical standards in the digital age.

Fountainhead Legal is committed to supporting organizations on this journey. With our deep expertise in data privacy compliance and a strong understanding of regulatory nuances, we offer tailored solutions for each client’s unique needs. From drafting privacy policies and developing data protection frameworks to advising on cross-border data transfers and facilitating employee training programs, our team is equipped to guide clients through every stage of their compliance strategy.

We hope you enjoy our latest updates!

NATIONAL 

1. Supreme Court recognises ‘Right to Digital Access’ as Fundamental Right[1]

In Pragya Prasun & Ors. v. Union of India & Ors., the Supreme Court held that inaccessible digital KYC systems violate the right to life under Article 21 of the Constitution of India and the statutory protections for persons with disabilities. In this matter, the petitioners, being persons with disabilities, were unable to complete mandatory KYC due to biometric and visual-based verification systems that excluded them from accessing essential services such as banking and telecommunications. The apex Court ruled that digital access is now integral to the right to dignity and autonomy and that failure to ensure accessibility constitutes exclusion and discrimination. It stressed the duty of public and private entities to provide ‘reasonable accommodation’ under the RPwD Act and comply with accessibility standards under the Web Content Accessibility Guidelines and the Guidelines for Indian Government Websites.

The Court issued detailed directions to ensure digital KYC is made inclusive. These include appointing nodal officers for accessibility compliance, conducting periodic accessibility audits, accepting thumb impressions and alternatives to blinking for live verification, enabling paper-based KYC as an alternative, capturing disability data in customer records, and ensuring all websites and applications comply with national accessibility standards. To give effect to these directions, the RBI was directed to amend the Master Directions – KYC Directions, 2016 to incorporate inclusive verification methods, promote awareness, and ensure KYC interoperability through the Central KYC Registry. The Court also called for establishing accessible grievance redressal systems, helplines, and human-led review of rejected KYC applications. Disability sensitisation training for staff was made mandatory, along with strict compliance monitoring by the RBI.

2. SEBI to include ‘Persons with Disability’ under KYC Purview[2]

On May 23, 2025, following the Supreme Court judgement discussed above, SEBI issued a circular on Accessibility and Inclusiveness of Digital KYC to Persons with Disabilities (“Circular”), directing all registered intermediaries, stock exchanges, associations of mutual funds, portfolio managers and the Bombay Stock Exchange administration to ensure that the digital KYC process is accessible to persons with disabilities. For clarification, SEBI released a revised FAQ[3] related to the Circular on the opening of accounts by persons with disabilities. The FAQ clarifies that persons with disabilities who are not minors and are of sound mind can independently open accounts. In the case of minors with disabilities, intermediaries may rely on a guardianship certificate issued by a local level committee under the National Trust for the Welfare of Persons with Autism, Cerebral Palsy, Mental Retardation and Multiple Disabilities Act, 1999.

The Circular permits digital KYC through accessible technologies and mandates assistance for video verification. To verify ‘liveliness’, the intermediary may use various factors such as live facial gestures, nodding of the head, real-time video recording, etc. Thumb impressions are acceptable for e-signatures if properly documented. Both the person and guardian must comply with standard KYC norms, and intermediaries may source consent-based data from the Central KYC Registry. In case of rejection of a KYC application due to accessibility-related challenges, the principal officer will be responsible for review and approval on a case-to-case basis.

This move to mandate accessibility in digital KYC processes for persons with disabilities is a progressive and much-needed step toward inclusive finance. By enabling account opening through assistive technologies, flexible verification methods, and case-by-case reviews, the Circular acknowledges the practical challenges faced by persons with disabilities in engaging with digital systems. This initiative aligns with the Supreme Court order discussed above and, meaningfully, with the spirit of the DPDP Act, 2023, which explicitly recognizes the rights and dignity of persons with disabilities in the context of data processing. Financial access must be inclusive by design—not an afterthought—and SEBI’s framework sets a valuable precedent for other regulators such as the RBI.

3. RBI issues Stricter Digital Lending Norms[4]

On May 8, 2025, the RBI notified the Digital Lending Directions, 2025 (the “Directions”), superseding the earlier Guidelines on Digital Lending dated September 02, 2022, to further strengthen oversight of digital lending platforms and protect borrowers. The Directions expand applicability to All-India Financial Institutions and mandate that multi-lender platforms disclose fair, comparable loan terms without using deceptive design practices. Further, enhanced consent norms now require clear, prior, purpose-specific approval for data use, with borrowers given control over revocation, third-party sharing, and deletion. Additionally, Digital Lending Applications (“DLA”) are barred from accessing contacts, call logs, or media, and all data must reside on Indian servers with a strict repatriation timeline of 24 hours. Disbursals and repayments must be made directly between borrowers and regulated entities—bypassing intermediaries—with cash recovery allowed only under defined conditions for delinquent loans. The regulated entities must maintain publicly accessible disclosures and complaint mechanisms across their digital platforms, remaining fully accountable for grievance redressal.

All DLAs, including those of third-party Lending Service Providers are required to be registered with the RBI’s Centralised Information Management System portal by June 15, 2025, with compliance certified by the Chief Compliance Officer.

4. CERT-In issues Advisory to Strengthen Cyber Defences for Businesses[5]

On May 10, 2025, CERT-In issued an advisory titled Essential Measures for Industry for Safeguarding Business Operations Against Cyber Security Threats (“Advisory”). The Advisory urges businesses to strengthen cybersecurity defences considering increasing incidents of ransomware, data breaches, and disruptions to digital infrastructure. The Advisory recommends urgent implementation of several protective measures, including timely patching of software and firmware, hardening of systems and networks, securing remote access infrastructure, enforcing multi-factor authentication, and encrypting sensitive data. It also advises businesses to segment networks, review firewall configurations, monitor critical logs, and test incident response and recovery plans. CERT-In has further stressed the importance of continuous employee awareness, secure cloud configurations, and regular backups stored offline. Companies must also report cyber incidents promptly as per the CERT-In framework and stay updated with CERT-In threat alerts.

With data breaches and ransomware attacks on the rise across sectors, the Advisory is a crucial reminder to keep the IT infrastructure protected. By reinforcing foundational security practices—like patch management, network segmentation, multi-factor authentication, and offline backups—the Advisory serves as a timely reminder that cyber resilience must be proactive. In an era of heightened digital interdependence, businesses cannot afford security lapses. Strengthening cyber hygiene and readiness is no longer optional—it is central to operational continuity and regulatory compliance.

5. ‘e-Zero FIR’ Pilot Project launched to Fast-Track Cybercrime Investigations[6]

On May 19, 2025, the Ministry of Home Affairs launched the ‘e-Zero FIR’ initiative through the I4C as a pilot project in Delhi. The initiative aims to fast-track registration of FIRs in cyber financial fraud cases by integrating digital complaint systems with the Bharatiya Nagarik Suraksha Sanhita (“BNSS”). Under the threshold requirement, complaints involving losses above INR 10 lakh filed through the National Cybercrime Reporting Portal (“NCRP”) or via the cybercrime helpline (1930) will automatically be converted into Zero FIRs by the e-Crime Police Station, Delhi. These FIRs will then be routed to the appropriate local police station. Complainants must visit the cybercrime police station within three days to convert the Zero FIR into a regular FIR.

The process leverages integration between the NCRP, Delhi Police’s e-FIR system, and the National Crime Records Bureau’s Crime and Criminal Tracking Network & Systems. It is anchored in Section 173(1)(ii) of the BNSS, which allows for electronic registration of FIRs regardless of territorial jurisdiction. The initiative is designed to address delays in FIR registration, facilitate faster recovery of defrauded funds, and improve accountability in cybercrime enforcement. It is also expected to be expanded to other States and Union Territories depending on the outcome of the pilot project.

6. Maharashtra flags Risks related to Ghibli Content[7]

The Maharashtra Police Cyber Crime Cell has issued an advisory (“Advisory”) cautioning against the use of generative AI tools to create or share artwork that mimics the unique visual style of renowned animation house Studio Ghibli Inc. (“Studio”). The Advisory raises red flags over potential copyright infringement and unauthorised data use under Indian law. According to the Advisory, reproducing the Studio’s iconic characters, animation style, or background elements without permission—whether for personal use, public sharing, or commercial gain—could violate the Copyright Act, 1957. It further warns that training AI models using content scraped from films or promotional material may infringe the rights of original creators. Legal risks are heightened if such AI-generated content is monetised, made public, or reused—especially when based on datasets lacking proper licenses. This could also breach the terms of digital platforms and clash with India’s evolving ethical standards for AI and cybersecurity.

The Advisory also spotlights concerns around consent, data protection, and fair use, urging developers and users of generative AI to conduct due diligence and build safeguards into their systems. As India moves toward clearer regulation of AI, this serves as a timely reminder that creativity powered by AI must respect the boundaries of law and original expression.

7. Allahabad High Court: Social Media ‘liking’ is not transmitting Obscene Content[8]

The Court clarified that simply ‘liking’ a post on social media does not amount to publishing or transmitting obscene content under Section 67 of the IT Act. The ruling came in the case of Imran Kazi v. State of U.P. and Anr., where criminal proceedings had been initiated against the applicant for allegedly liking an objectionable post. The Court held that for Section 67 to apply, the person must have either actively published the content (i.e., posted it) or transmitted it (i.e., shared, forwarded, or circulated it in some form). Since the applicant had only liked the post and did not create, post, or share it, the Court found no basis for prosecution. It further reiterated that the said provision applies only to material that is explicitly lascivious or intended to corrupt or deprave viewers—not merely provocative or controversial in nature. As a result, the Court quashed the case, emphasizing that liking a post is not the same as promoting or distributing its content under the law.

8. RBI to introduce Framework for Ethical Use of AI[9]

RBI, in its Annual Report for 2024–25, has announced that it is in the process of formulating a dedicated framework to govern the ethical and responsible use of AI and Machine Learning by banks and other regulated entities. The initiative is aimed at ensuring that AI deployments uphold principles of transparency, fairness, and accountability. This follows the RBI’s earlier move in December 2024 to constitute an expert committee to develop the Framework for Responsible and Ethical Enablement of AI (FREE-AI)[10]. The committee has been tasked with examining the use of AI in financial services and recommending guardrails to address risks such as bias, lack of explainability, and misuse of personal data.

AI is rapidly becoming embedded in core banking functions—from loan underwriting and customer service to risk detection and marketing. As banks increasingly rely on data-driven models, the upcoming framework will be a crucial step in ensuring these technologies are used responsibly. Coupled with the obligations under the DPDP Act, this framework will directly impact how banks design, audit, and govern their AI systems. The forthcoming framework is likely to serve as a key convergence point between technological innovation and regulatory oversight.

INTERNATIONAL 

UNITED STATES OF AMERICA

9. Severe Penalty against Private Healthcare Provider for mishandling Patient Data[11]

BayCare Health System, Inc. (“Company”), a healthcare provider operating hospitals, outpatient centres, and urgent care clinics across West Central Florida, has reached a settlement with the U.S. Department of Health and Human Services’ (“HHS”) Office for Civil Rights after an investigation uncovered key failures in its handling of patient data. The investigation followed a 2018 complaint about unauthorized access to medical records. Federal regulators found that the Company had not put in place proper procedures to limit access to protected health information (PHI), lacked sufficient safeguards to reduce cybersecurity risks, and failed to monitor system activity logs—core requirements under HIPAA. The Company has committed to a two-year corrective plan that includes a full risk analysis, tighter access controls, revised internal policies, and employee training. Additionally, the Company has agreed to pay HHS USD 800,000.

10. LexisNexis Risk Solutions reports Data Breach affecting Personal Data[12]

LexisNexis Risk Solutions (“Company”), a provider of risk management and analytics services, has notified individuals of a data breach resulting from hacking. The incident occurred on December 25, 2024, and was identified on May 14, 2025. The breach affected approximately 364,333 individuals. Following discovery, the Company launched an investigation with external cybersecurity experts, notified law enforcement, and took steps to enhance security. The Company is offering affected individuals 24 months of free credit monitoring and identity protection services through Experian.

This breach highlights the data protection risks associated with third-party service integrations and reinforces the need for vigilant oversight of external platforms handling sensitive information.

11. Class Action Suit Against Crypto Platform for Alleged Violations of Privacy Law[13]

A class action suit has been filed against Coinbase Global Inc. and Coinbase Inc. (together, “Coinbase”) alleging violations of the Illinois Biometric Information Privacy Act, 2008 (BIPA). The case centres on Coinbase’s identity verification process, which involves collecting facial biometric data during user onboarding. Plaintiffs allege that Coinbase failed to obtain informed, written consent before collecting this sensitive data, did not adequately disclose its data retention policies, and unlawfully shared biometric identifiers with third-party vendors. Additionally, the plaintiffs claim that Coinbase is vicariously liable for its vendors’ actions and have raised claims under the Illinois Consumer Fraud and Deceptive Business Practices Act, citing Coinbase’s failure to properly inform users about biometric data processing. The lawsuit invokes Sections 15(a), 15(b), and 15(d) of BIPA, which impose strict obligations on entities collecting biometric information, including obtaining written consent, providing transparency on data use and retention, and limiting disclosure to third parties without authorization. BIPA’s private right of action allows individuals to seek damages without proving actual harm, which elevates the risk for companies processing biometric data without full compliance. This case highlights the expanding regulatory scrutiny on biometric data practices in the fintech and cryptocurrency sectors nationwide.

12. EU – Five Member States brought before EU Court for Non-Compliance with the Digital Services Act[14]

The European Commission (“Commission”) has referred Czechia, Spain, Cyprus, Poland, and Portugal to the Court of Justice of the EU for failing to comply with key obligations under the Digital Services Act (Regulation (EU) 2022/2065) (“DSA”). The DSA, which applies to a wide range of online intermediaries—including social networks, marketplaces, app stores, and content-sharing platforms—aims to ensure a safer and more transparent digital environment across the EU. It introduces clear responsibilities for service providers and creates a co-regulatory system involving both the Commission and national-level authorities. Accordingly, Member States were required to designate a Digital Services Coordinator (“DSC”) by February 17, 2024, empower them to enforce the DSA, and set out penalties for non-compliance. While Poland failed to designate an empowered DSC altogether, the other four countries appointed DSCs without granting them sufficient authority. Additionally, none of the five have implemented national rules on penalties, a core element for effective enforcement. Following earlier warnings and formal steps, the Commission is now seeking judicial intervention to uphold the DSA’s uniform application across the EU’s digital space.

By holding Member States accountable for delays in designating and empowering DSCs and establishing enforcement mechanisms, the Commission reinforces that effective digital governance depends equally on state-level implementation. This move sends a clear message that safeguarding the digital ecosystem requires both robust legislation and committed administrative enforcement at all levels.

13. UK – First AI-Driven Law Firm receives Regulatory Approval[15]

On May 6, 2025, the SRA granted regulatory approval to Garfield.Law Limited (“Firm”), the first purely AI-powered law firm authorized to provide regulated legal services. The SRA highlighted the potential of AI-driven legal services to deliver faster, more affordable, and accessible legal support, while emphasizing the need for robust consumer protections. Before authorization, the SRA rigorously reviewed the Firm’s processes to ensure compliance with professional standards, confidentiality, conflict management, and mitigation of AI risks such as ‘hallucinations’ or ‘errors’ in case law application.

14. Canada – Court clarifies that Proof of Actual Harm Not Required to Award General Damages for Data Privacy Violation[16]

In Insurance Corporation of British Columbia v. Ari, [2025 BCCA 131], the Court of Appeal for British Columbia upheld an award of CAD 15,000 per class member for breach of privacy, even in the absence of proof of individual harm. The case involved the unlawful sale of personal information by an employee of a public insurance corporation. Relying on Section 1 of the Privacy Act, R.S.B.C. 1996, c. 373, the Court affirmed that breach of privacy is actionable per se, and that general damages may be granted solely based on the violation itself. It rejected arguments for nominal damages, emphasizing that serious, intentional, and profit-motivated intrusions warrant meaningful compensation.

The Court underscored that privacy rights are intrinsically tied to dignity and personal autonomy. It noted that privacy damages serve not only to compensate but also to vindicate rights and deter future violations.

15. Kenya – Court rules against WorldCoin and Others over Unlawful Collection of Biometric Data[17]

In a landmark decision, Republic v Tools for Humanity Corporation (US) & 8 Others; Katiba Institute & 4 Others (Ex Parte) [2025] eKLR, the High Court of Kenya at Nairobi ruled that biometric data collection by Worldcoin-linked entities—including Tools for Humanity Corporation (US), Tools for Humanity GmbH (Germany), Worldcoin Foundation (Cayman Islands), World Assets Limited (BVI), and local partner Platinum De Plus Limited—violated the Data Protection Act, 2019 (“DPA”) and constitutional rights to privacy and dignity. The Worldcoin-linked entities collected iris scans from Kenyan residents using a device called the “Orb” in exchange for cryptocurrency. Public interest groups challenged this on grounds of unlawful data processing, lack of valid consent, absence of a Data Protection Impact Assessment, and failure to register as data controllers under the DPA. The Court found these practices violated both statutory and constitutional protections, ordering cessation of data processing and deletion of unlawfully obtained biometric data.

The ruling underscores that even innovative technologies, such as cryptocurrency linked to biometric identity verification, must strictly comply with data protection laws and uphold fundamental privacy rights. It sends a clear message that consent must be freely given and fully informed, and that regulators must actively enforce compliance to protect individuals in emerging digital ecosystems.

ABBREVIATIONS
  • CCPA – Central Consumer Protection Authority
  • CERT-In – Indian Computer Emergency Response Team
  • DMA – Digital Markets Act (Regulation (EU) 2022/1925)
  • DSA – Digital Services Act (Regulation 2022/2065)
  • EDPB – European Data Protection Board
  • HIPAA – The Health Insurance Portability and Accountability Act, 1996
  • I4C – Indian Cyber Crime Coordination Centre
  • IT Act – Information Technology Act, 2000
  • RPwD Act – Rights of Persons with Disabilities Act, 2016
  • SEBI – Securities and Exchange Board of India
  • SRA – Solicitors Regulation Authority  

Authors:

  • Rashmi Deshpande
  • Aarushi Ghai
  • Shriya Haridas

Disclaimer

Current rules of the Bar Council of India impose restrictions on maintaining a web page and do not permit lawyers to provide information concerning their areas of practice. Fountainhead Legal is, therefore, constrained from providing any further information on this web page except as stated below.

The rules of the Bar Council of India prohibit law firms from soliciting work or advertising in any manner. By clicking on ‘I AGREE’, the user acknowledges that:

The user wishes to gain more information about Fountainhead Legal, its practice areas and the firm’s lawyers, for his/her own information and use;

The information is made available/provided to the user only on his/her specific request and any information obtained or material downloaded from this website is completely at the user’s volition and any transmission, receipt or use of this site is not intended to, and will not, create any lawyer-client relationship; and

None of the information contained on the website is in the nature of a legal opinion or otherwise amounts to any legal advice.

Fountainhead Legal is not liable for any consequence of any action taken by the user relying on material/information provided under this website. In cases where the user has any legal issues, he/she must in all cases seek independent legal advice.