Technology Law and Data Privacy Updates

Edition II - March 2025

SUMMARY 

Welcome to the latest edition of Fountainhead Legal’s Data Privacy and Technology Law newsletter. This edition captures a rich tapestry of legal, regulatory, and policy developments that reflect the mounting urgency for responsible innovation, robust compliance, and ethical digital governance.

In India, the National Consumer Disputes Redressal Commission dismissed complaints against crypto platform WazirX, holding that disputes involving cyberattacks, international fraud, and the uncertain legal status of VDAs fall outside the scope of consumer forums, even though it recognised the complainants as 'consumers' under the Consumer Protection Act, 2019. The decision limits the applicability of consumer protection law to such matters and highlights the need for a clear regulatory framework for VDAs.

Further, the growing adoption of GenAI has prompted CERT-In to release a detailed advisory highlighting lurking cybersecurity risks, from prompt injection to model inversion, and urging organizations to adopt robust safeguards. In parallel, the IRDAI has tightened the cyber preparedness regime for insurers, mandating swift incident reporting and pre-empanelled forensic auditors.

In the courtroom, digital policy continues to be hotly debated: the Central Government has clarified the administrative nature of the Sahyog Portal in its ongoing tussle with X Corp, while the Delhi High Court hears a pioneering copyright dispute between ANI and OpenAI that is likely to set critical precedent for AI-generated content. Amid these legal currents, platforms are also taking visible steps toward self-regulation: WhatsApp's ban on over 9.7 million Indian accounts is a clear signal of tightening platform governance in line with the IT Rules. Meanwhile, judicial integrity and transparency have taken centre stage with the Supreme Court's integration of AI into transcription and procedural management, even as the Karnataka High Court flags the citation of fictitious case law, a stark reminder of the risks of misinformation in the age of AI.

Internationally, the regulatory landscape is equally dynamic. Europe continues to lead with clarity and consistency, as seen in ESMA's supervisory guidance for crypto-asset service providers under MiCA and France's €150 million fine against Apple for anti-competitive conduct in data tracking. The U.S. has sharpened its regulatory toolkit, from Virginia's forward-looking AI Bill that seeks to curb algorithmic bias, to the SEC's nuanced stance on crypto mining, and the establishment of a dedicated enforcement division to tackle AI, blockchain, and cybersecurity fraud. These innovations are not without risks, however. The bankruptcy of genetic testing firm 23andMe has raised alarm over the fate of sensitive DNA data, with New York's Attorney General urging immediate deletion by consumers. As data becomes the new currency, the security and ownership of personal information are under unprecedented scrutiny.

In China, regulators have issued targeted rules on synthetic content, biometric data, and corporate accountability. New AI rules mandate the labelling of synthetic content to curb misinformation, while facial recognition measures emphasize consent, purpose limitation, and local storage. Additionally, the CAC's compliance audit measures introduce biennial audits, data protection officer requirements, and independent oversight for large-scale processors, reflecting a robust approach to digital governance.

Other jurisdictions are similarly strengthening oversight. Turkey enacted a national cybersecurity law and established a dedicated authority; Switzerland mandated cyberattack reporting for critical sectors. Saudi Arabia introduced guidelines for assessing cross-border data transfer risks, while Singapore issued security guidelines for data centres.

From AI and cybersecurity to content moderation and genetic privacy, the stories featured this month underline a common theme: digital evolution must be met with legal evolution. The road ahead demands collaboration, foresight, and a steadfast commitment to rights-based governance.

We hope you enjoy our latest updates!

NATIONAL 

1. Consumer Court dismisses Complaints against WazirX[1]

In a significant development, the NCDRC dismissed complaints filed in Gurmeet Singh & Ors. v. Zanmai Labs Pvt. Ltd. [CC/7/2025] and Manveen Kaur & Ors. v. Zanmai Labs Pvt. Ltd. [CC/8/2025]. The complainants alleged that cyber-attacks on the WazirX platform led to the unauthorized transfer of their digital assets, valued at approximately INR 546.55 crores, to foreign wallets, and that WazirX denied liability despite receiving legal notices. The complainants claimed to be consumers under the Consumer Protection Act, 2019 and alleged deficiency in service and unfair trade practices. The NCDRC held, however, that the consumer forum lacks jurisdiction over complex matters involving international fraud, cybercrime, and the unregulated legal status of VDAs in India, observing that such issues require thorough regulatory oversight and are beyond the forum's summary adjudicatory powers. Referring to the arbitration clause in the user agreement, the NCDRC advised the complainants to seek remedies before appropriate civil or criminal courts or through arbitration in Singapore. The judgment highlights the regulatory gaps surrounding VDAs and limits the scope of consumer protection law in addressing crypto-related disputes.

The complainants have the option of filing an appeal before the Supreme Court of India.

2. CERT-In Releases Advisory on Using GenAI Tools[2]

CERT-In has issued an advisory on 'Best Practices against vulnerabilities while using Generative AI solutions' ("Advisory"). GenAI, which enables the creation of human-like text, images, code, and more, has seen rapid adoption across industries. However, its integration also brings a host of security concerns that organizations must proactively address. The Advisory outlines key threats, including adversarial attacks, where AI models are manipulated through deceptive inputs; model inversion and model stealing, which risk exposure of sensitive training data or unauthorized duplication of models; and prompt injection attacks, which can subvert the intended outcomes of GenAI tools. It also warns against the dangers of hallucinations, where AI systems generate factually incorrect or misleading outputs, and backdoor attacks that can allow unauthorized control of AI functionalities. To mitigate these risks, CERT-In recommends robust security practices such as thorough testing of AI models before deployment, implementing strong identity and access management controls, using encryption and secure APIs, and maintaining logs for traceability. Organizations are also advised to conduct continuous monitoring, integrate AI risk assessments into broader cybersecurity frameworks, and educate users about the responsible use of GenAI tools.
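As an illustration of the kind of safeguard the Advisory points toward, the sketch below shows a naive prompt-screening guard that logs suspicious inputs for traceability. It is only a minimal sketch: the pattern list, function name, and logger name are hypothetical and are not taken from the Advisory, and a production deployment would need far more than keyword matching.

```python
# Illustrative sketch only: a naive guardrail in the spirit of the Advisory's
# recommendations (input screening plus logging for traceability).
# The patterns below are hypothetical examples, not an official list.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-guard")

# Hypothetical phrases often seen in prompt-injection attempts.
SUSPECT_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (the )?system prompt",
    r"disregard your guidelines",
]


def screen_prompt(user_prompt: str) -> bool:
    """Return True if the prompt looks safe; log anything suspicious."""
    for pattern in SUSPECT_PATTERNS:
        if re.search(pattern, user_prompt, flags=re.IGNORECASE):
            # Log a truncated copy so audit trails exist without storing
            # arbitrarily long user input.
            log.warning("possible prompt injection: %r", user_prompt[:80])
            return False
    return True
```

A guard like this would sit in front of the model call, with rejected prompts routed to review rather than silently dropped, consistent with the Advisory's emphasis on monitoring and log retention.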

With trends like the ‘Ghibli-style’ AI portraits gaining popularity, GenAI is entering mainstream creativity and user interaction. However, without strong safeguards, such tools can become avenues for privacy breaches and data misuse. The CERT-In advisory serves as a timely reminder that GenAI security must be a continuous effort—not a one-time fix—especially as India’s digital landscape evolves.

3. IRDAI releases Circular on Strengthening Cyber Incident Preparedness

IRDAI has released a circular on the reporting of cyber incidents and crises, reinforcing the cyber resilience framework for insurers, intermediaries, and training institutes ("Circular")[3]. The Circular directs regulated entities ("REs") to report any cyber incident within 6 hours in the prescribed format, maintain and monitor ICT infrastructure and application logs for 180 days, and synchronize the clocks of all relevant information processing systems with NTP as provided in the CERT-In framework.

The Circular calls for a well-defined 'Cyber Crisis Management Plan' and mandates the pre-empanelment of certified forensic auditors to enable swift investigation and root cause analysis. To avoid conflicts of interest, it prohibits an entity that manages security functions for an RE from also conducting forensic audits for that RE. Compliance with these directives must be placed before the board of the RE at its ensuing meeting, and the minutes submitted to IRDAI.
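To make two of the directives concrete, the fragment below sketches how NTP synchronization and 180-day log retention might map onto a Linux host. The Circular does not prescribe any tooling, so systemd-timesyncd, logrotate, and the file and path names here are illustrative assumptions only.

```shell
# Hypothetical mapping of two directives onto a Linux host; tooling and
# paths are illustrative, not mandated by the Circular.

# Directive: synchronize clocks of information processing systems with NTP.
# Check whether the host clock is currently NTP-synchronised (systemd hosts):
timedatectl show --property=NTPSynchronized

# Directive: maintain and monitor application logs for 180 days.
# A logrotate policy (e.g. installed as /etc/logrotate.d/insurer-app)
# keeping 180 daily rotations:
cat > insurer-app.logrotate <<'EOF'
/var/log/insurer-app/*.log {
    daily
    rotate 180
    compress
    missingok
    notifempty
}
EOF
```

The point of the sketch is that the Circular's directives are operational controls: each one can be tied to a concrete, auditable configuration on the RE's systems.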

Recent cyber incidents in the insurance sector have exposed critical vulnerabilities, underscoring the urgent need for stronger digital safeguards. In this context, IRDAI’s Circular marks a timely and essential move. By mandating swift incident reporting, robust log monitoring, system synchronization, and pre-empanelment of forensic auditors, IRDAI is reinforcing sector-wide readiness. This step not only strengthens incident response but also signals a firm regulatory commitment to securing customer data and ensuring business continuity in an increasingly digital landscape.

4. Government clarifies Purpose of Sahyog Portal in Legal Battle with X Corp[4]

In the ongoing matter of X Corp v. Union of India [W.P.(C) 7405 of 2025], the Central Government has strongly countered X Corp's (formerly known as 'Twitter') allegation that the 'Sahyog' portal amounts to a censorship mechanism. Filed before the Karnataka High Court, the petition challenges several information blocking orders issued by various ministries, with X Corp asserting that such directions must strictly adhere to Section 69A of the IT Act and the corresponding
Blocking Rules, 2009. X Corp has also sought protection from coercive action for not participating in the Sahyog portal. The Government, in response, has clarified that the Sahyog portal is not intended to block content but to serve as an administrative interface for regulated entities to receive and respond to lawful directions under the IT Act. Refuting the characterization of the portal as a ‘censorship tool’, the Government emphasized that it operates entirely within the legal framework and does not bypass established procedures for content regulation. The matter has been heard in part, with the Karnataka High Court scheduling it for further hearing and possible final disposal on April 24, 2025.

5. Delhi High Court hears ANI's Claim, OpenAI Challenges Jurisdiction[5]

In the ongoing ANI Media Pvt Ltd v. OpenAI Inc. & Anr. [CS(COMM) 1028/2024], the Delhi High Court is examining a landmark copyright dispute between news agency ANI and OpenAI (developer of ChatGPT). In its defence, OpenAI asserted that ChatGPT does not store or reproduce any content verbatim but generates responses dynamically based on probabilistic modelling. OpenAI further argued that the Indian Copyright Act does not apply to its operations, as its models are trained on publicly available data and it does not intentionally or commercially exploit copyrighted material. OpenAI emphasized that any resemblance to specific content is purely incidental, and that the nature of generative AI differs fundamentally from traditional modes of content reproduction.

The matter is scheduled for further hearings on various dates between April 02, 2025 and April 29, 2025.

6. WhatsApp bans Over 9.7 Million Accounts in India for Policy Violations[6]

In its monthly report, WhatsApp LLC (“WhatsApp” or “Platform”), in compliance with the requirements under the IT Intermediary Rules, 2021, disclosed that it had banned 9,781,000 Indian accounts between February 01, 2025 and February 28, 2025 for violations. These include sharing unlawful content such as misinformation, sexually explicit material, hate speech, or content inciting violence, as well as engaging in spam, impersonation, or fraud. WhatsApp also prohibits unauthorized automated messaging, harassment, and any misuse of the platform. Such actions, identified either through user reports or proactive monitoring, are considered breaches under the IT Intermediary Rules, 2021 and WhatsApp’s Terms of Service, leading to enforcement measures including account bans.

7. High Court flags Use of Non-Existent Supreme Court Judgments by Trial Court[7]

In the recent decision of Sammaan Capital Limited v. Mantri Infrastructure Private Limited [Civil Revision Petition No. 49 of 2025], the High Court of Karnataka took strong exception to the conduct of a trial court judge who cited non-existent judgments to justify an order. Upon scrutiny, the High Court found that two of the decisions relied upon did not correspond to any actual rulings, raising significant procedural and ethical concerns. The High Court directed that a probe be initiated and placed the matter before the Hon'ble Chief Justice for further consideration.

The ruling highlights serious concern over a trial court's reliance on non-existent judgments, emphasizing that such lapses, whether human or tech-assisted, can erode the credibility of the judiciary. It sends a clear message about the importance of diligence, authenticity, and ethical conduct in judicial reasoning, and reaffirms that the bench's trustworthiness rests on a foundation of factual accuracy and legal fidelity, something that must never be compromised.

8. Supreme Court integrates AI for Procedural Operations[8]

The Supreme Court is actively integrating AI and ML into its case management system to enhance efficiency and accessibility. As part of this effort, the Court is now using AI to transcribe oral arguments in Constitution Bench hearings, with transcripts being made publicly available on its website. These transcriptions are now a regular feature on designated hearing days, such as Thursdays, to promote transparency and record-keeping.

To bridge the language gap in legal accessibility, the Court is also leveraging AI tools, developed in collaboration with the National Informatics Centre, to translate English-language judgments into 18 Indian languages. These translations are available through the eSCR (electronic Supreme Court Reports) portal, helping non-English speakers better understand judicial decisions.

In addition, the Court is piloting an AI-powered system—developed jointly with IIT Madras—to detect procedural defects in case filings. This prototype has been shared with 200 Advocates-on-Record for testing and feedback, with the goal of improving filing accuracy and reducing delays. Importantly, the Supreme Court clarified that these technologies are currently limited to administrative and procedural support. Tools like the Supreme Court Portal Assistance in Court Efficiency (SUPACE), which aim to assist with legal research and precedent identification, are still under evaluation and not used for judicial decision-making.

INTERNATIONAL 

EUROPEAN UNION

9. Guidance on Approval of Crypto-Asset Service Providers[9]

The European Securities and Markets Authority ("ESMA") has issued guidance to help national regulators across the EU consistently approve CASPs under the new MiCA regulation. ESMA treats all CASPs as high-risk due to their direct dealings with consumers and the relatively new and volatile nature of crypto markets. Regulators are therefore expected to conduct thorough reviews before granting any authorisations.

Key points include the need for CASPs to have a real presence and decision-making power in the country where they apply, not just a shell setup. Companies must also retain full responsibility for any outsourced activities and show that their management is competent and trustworthy. Complex structures, large user bases, cross-border operations, or issuing their own tokens may increase regulatory scrutiny.

10. French Competition Authority fines Apple €150 Million over Data Tracking Practices[10]

On March 21, 2025, the Autorité de la concurrence ("French Competition Authority") imposed a €150 million fine on Apple Inc. ("Apple") for abusing its dominant position in the distribution of mobile applications on iOS and iPadOS between 2021 and 2023. The sanction followed an investigation into Apple's App Tracking Transparency framework ("ATT"), introduced in April 2021 with iOS 14.5. While the goal of enhancing user privacy was not disputed, the French Competition Authority found that Apple's implementation of ATT was neither necessary nor proportionate. ATT created excessive complexity for third-party app publishers, particularly smaller ones, by requiring multiple consent pop-ups and limiting access to user data needed for targeted advertising. The Authority also noted an asymmetry in how Apple treated its own apps compared to third-party apps, particularly in terms of consent requirements: Apple was not subject to the same double consent obligations until iOS 15, which was found to be unjustified and harmful to competition. The decision also reflected collaboration with France's data protection authority, the CNIL, whose expert opinions supported the conclusion that minor modifications to ATT could have ensured compliance with both privacy and competition laws. Apple is required to publish a summary of the decision on its website for 7 consecutive days.

UNITED STATES OF AMERICA

11. Virginia introduces AI Bill to ensure Accountability[11]

The Virginia General Assembly has passed the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094) (“AI Bill”) to establish accountability for AI systems that influence significant life decisions. The AI Bill aims to reduce algorithmic discrimination by setting compliance standards for entities that develop or use ‘high-risk’ AI — systems that impact areas such as education, employment, healthcare, housing, financial services, and legal determinations. The AI Bill adopts a risk-based regulatory model and designates exclusive enforcement authority to the Virginia Attorney General. Developers of high-risk AI must implement safeguards, test for bias, maintain documentation on system functionality, and provide transparency through summaries and disclosures. Deployers must ensure proper data governance, monitor outcomes, and conduct regular impact assessments.

12. SEC Clarifies Regulatory Position on Certain Proof-of-Work Mining Activities[12]

The SEC clarified that certain proof-of-work ("PoW") crypto mining activities, such as those used in Bitcoin mining, do not constitute the offer or sale of securities under U.S. federal law. The statement specifically addresses situations where crypto assets are earned through participation in a public, permissionless network's consensus mechanism.

The SEC affirmed that entities or individuals who earn tokens by supporting the operation and security of such networks, in accordance with the protocol rules, are not engaging in securities transactions. This clarification provides greater regulatory certainty to miners and other stakeholders participating in PoW-based blockchain ecosystems.

13. SEC establishes Specialized Enforcement Division for AI, Blockchain and Cybersecurity[13]

The SEC has established the Cyber and Emerging Technologies Unit ("CETU"), aimed at addressing fraud and misconduct linked to evolving technologies. Replacing the earlier 'Crypto Assets and Cyber Unit', CETU brings together around 30 attorneys and enforcement experts from across the SEC's offices. CETU will focus on identifying and addressing fraudulent schemes involving emerging tools such as artificial intelligence, social media misuse, blockchain-related misconduct, and cybersecurity violations by public issuers and regulated firms. It is also tasked with investigating breaches involving unauthorized access to confidential information and the compromise of retail investor accounts. The SEC emphasized that CETU's formation supports its broader mission to protect investors, support fair capital markets, and respond effectively to the risks posed by emerging technologies. The initiative also works in tandem with the efforts of the 'Crypto Task Force', reinforcing the Commission's commitment to strengthening investor trust in an increasingly digital market landscape.

14. Genetic Company’s Bankruptcy sparks Personal Data Security Concerns[14]

The New York Attorney General has issued a consumer alert following genetic testing company 23andMe Holding Co.'s ("Company") bankruptcy filing and its plans to sell assets. Citing concerns over the vast amount of sensitive genetic and personal data collected by the Company, the Attorney General is urging New Yorkers to take immediate steps to delete their data and request the destruction of any stored DNA samples. The Company collects and analyses DNA to help users trace ancestry or assess health risks. Given the potential risks involved in the sale or transfer of this sensitive data, consumers are advised to protect their information by deleting their accounts and stored data. The alert also sets out the steps to delete data and withdraw consent.

CHINA

15. Mandatory Labelling of AI Generated Content[15]

Effective September 1, 2025, China will enforce rules requiring the labelling of AI-generated content. Labelling refers to clearly marking material produced by artificial intelligence, such as text, images, audio, video, and virtual content, to show that it was not created by a human. The aim is to promote transparency, help users identify synthetic content, and reduce the risks of misinformation and manipulation. The rules place compliance obligations on AI service providers and online platforms, who must ensure that proper labelling mechanisms are in place. Failure to comply may lead to regulatory consequences.

16. New Measures on Facial Recognition Released[16]

China has announced measures to set out requirements for lawful, ethical, and secure use of facial recognition technology. Data processing must have a specific purpose and necessity, use the least intrusive methods, and ensure strict data protection. Additionally, informed consent must be obtained from individuals, with parental or guardian consent required for minors (aged under 14). Data related to facial recognition must be stored locally on devices and not transmitted online unless legally permitted and with separate consent. Data retention must be limited to what is strictly necessary, and personal information protection impact assessments must be conducted in advance. The measures also prohibit using facial recognition as the sole method of identity verification when alternatives exist and ban coercion or deception in requiring individuals to use such technology. Installation in private areas like hotel rooms or restrooms is prohibited, and clear signs must be posted in public areas where the technology is used. Entities storing facial data of 100,000 or more individuals must file with provincial-level internet authorities within 30 working days. Supervisory departments are tasked with monitoring compliance, and violations will lead to legal liability as provided under the measures.

17. New Rules on Personal Information Protection Audits Released[17]

China has issued the Personal Information Protection Compliance Audit Management Measures ("Measures"), providing stricter guidelines for companies handling user data. The Measures clarify requirements for data audits, strengthen accountability for enterprises, and set enforcement mechanisms to ensure compliance. They apply to all organizations within mainland China, excluding state organs and public authorities, and require organizations processing the data of over 10 million individuals to carry out audits at least once every 2 years. Regulatory authorities may also compel organizations to conduct external audits if significant data protection risks or large-scale data incidents are identified. Organizations processing the data of over 1 million individuals must appoint a dedicated 'Data Protection Officer', while major internet platforms with vast user bases and complex services must set up independent oversight bodies. Audits may be conducted internally or by certified professional institutions. When mandated by authorities, audits must be completed within a set timeframe and the resulting reports submitted for review. Professional bodies involved in such audits must maintain independence, confidentiality, and objectivity, and are prohibited from subcontracting audits or from auditing the same entity more than twice in a row. Violations of the Measures, whether by organizations or audit firms, may attract administrative penalties or even criminal liability, depending on severity. Authorities are also empowered to investigate complaints related to audit misconduct.

OTHERS

18. Cyber Security Law enforced in Turkey[18]

Turkey has brought into force Cybersecurity Law No. 7545 ("Legislation"), marking a major step in strengthening its national cybersecurity regime. The Legislation sets out a structured framework to enhance cyber resilience by defining key obligations for institutions, regulating cybersecurity service providers, and establishing a centralized authority for oversight. A significant feature of the Legislation is the creation of a 'Cybersecurity Authority', tasked with risk monitoring, response coordination, certification procedures, and setting security standards. The Legislation also mandates that companies offering cybersecurity products or services must notify the authority of structural changes such as mergers, share transfers, or sales, with certain transactions requiring prior approval to ensure security oversight. Further, the Legislation imposes obligations on organizations operating in the digital space to implement appropriate safeguards, procure certified cybersecurity solutions, report incidents promptly, and comply with audits. To support these measures, equipment and materials imported for cybersecurity purposes are exempt from customs duties and associated taxes.

19. Switzerland mandates Reporting of Cyberattacks on Critical Infrastructure[19]

Effective April 1, 2025, Switzerland will require operators of critical infrastructure—such as those in energy, healthcare, transport, and communications—to report cyberattacks to the NCSC within 24 hours of detection. This mandate, introduced under the revised FAIS and the new Cybersecurity Ordinance, aims to strengthen national cybersecurity and streamline incident response. Reports must be filed if an attack disrupts essential services, results in data breaches or manipulation, or involves blackmail. Initial reports can be submitted via the NCSC’s Cyber Security Hub or by email using the official form. A detailed follow-up report must be submitted within 14 days. While the reporting obligation begins in April, penalties for non-compliance will only apply from October 1, 2025, giving entities a 6-month window to prepare. The Cybersecurity Ordinance also outlines exemptions for bodies such as the Swiss National Bank, Intelligence Services, and Law Enforcement, and details the NCSC’s responsibilities in managing and sharing cyber incident information.

20. Saudi Arabia issues Guidelines for Cross-Border Data Transfers[20]

The SDAIA has released a Risk Assessment Guideline ("Guidelines") to help entities comply with the Personal Data Protection Law when transferring personal data outside the Kingdom. Although the Guidelines are not legally binding, they outline a clear, four-phase process to assess and manage risks in cross-border transfers. Under the Guidelines, entities must first determine whether a risk assessment is required, especially when sensitive data, emerging technologies, or automated decision-making are involved. Entities must describe the service, map the full data lifecycle (from collection to deletion), and identify risks and impacts on individuals. If personal data is being transferred, especially on a large scale or where sensitive information is involved, entities must evaluate the legal safeguards in the receiving country and ensure they match Saudi standards. Entities also need to assess whether the transfer could affect the vital interests of the Kingdom, such as national security or economic stability. The Guidelines emphasize that if serious risks remain, even after safeguards, organizations should reconsider or modify their data processing methods. They also require special attention to individuals with limited legal capacity and encourage a careful review of all countries, systems, and recipients involved in the data transfer.

21. Singapore issues Advisory Guidelines for Data Centres [21]

The IMDA has issued Advisory Guidelines for Resilience and Security of Data Centres ("Guidelines"). The Guidelines apply to all data centres operating in Singapore, regardless of size or tier classification, and are intended to promote best practices for enhancing operational resilience, business continuity, and cybersecurity. Data centre operators are advised to identify and mitigate risks such as power failures, cooling disruptions, cable damage, fire, water leaks, and unauthorised access, supported by regular maintenance, monitoring systems, and physical safeguards. Further, a Business Continuity Management System based on a Plan-Do-Check-Act framework must be implemented. This includes conducting risk assessments, business impact analyses, and continuity planning, as well as appointing a senior management representative to oversee resilience and security. Additionally, data centre operators are encouraged to establish an Information Security Management System and adopt recognised standards such as ISO/IEC 27001 or the CSA Cyber Trust Mark. Key measures include incident response plans, penetration testing, internal audits, employee training, and vendor risk management. The Guidelines are not legally binding but are intended to complement existing regulatory requirements and industry standards. IMDA encourages data centre operators to adopt these practices to strengthen their security posture and ensure continuous, reliable service.

ABBREVIATIONS
  • API – Application Programming Interface
  • Blocking Rules, 2009 – Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009
  • CAC – Cyberspace Administration of China
  • CASP – Crypto-Asset Service Provider
  • CERT-In – Indian Computer Emergency Response Team
  • Copyright Act – Copyright Act, 1957
  • FAIS – Federal Act on Information Security
  • GenAI – Generative AI
  • ICT – Information and Communication Technology
  • IMDA – Infocomm Media Development Authority
  • IRDAI – Insurance Regulatory and Development Authority of India
  • IT Act – Information Technology Act, 2000
  • IT Intermediary Rules, 2021 – Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
  • MiCA – Markets in Crypto-Assets Regulation (EU) 2023/1114
  • ML – Machine Learning
  • NCDRC – National Consumer Disputes Redressal Commission
  • NCSC – National Cyber Security Centre (Switzerland)
  • NTP – Network Time Protocol
  • RE – Regulated Entity
  • SDAIA – Saudi Data and Artificial Intelligence Authority
  • SEC – Securities and Exchange Commission
  • VDA – Virtual Digital Asset

Authors:

  • Rashmi Deshpande
  • Aarushi Ghai
  • Shriya Haridas


[1] https://ncdrc.nic.in/
[2] https://www.cert-in.org.in/
[3] https://irdai.gov.in/document-detail?documentId=6975996
[4] https://karnatakajudiciary.kar.nic.in/newwebsite/casemenu.php
[5] https://delhihighcourt.nic.in/court/dhc_case_status_list_new?sno=1&ctype=CS%28COMM%29&cno=1028&cyear=2024
[6] https://www.whatsapp.com/legal/india-monthly-reports
[7] https://karnatakajudiciary.kar.nic.in/newwebsite/casemenu.php
[8] https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2113224
[9] https://www.esma.europa.eu/press-news/esma-news/esma-provides-guidance-mica-best-practices
[10] https://www.autoritedelaconcurrence.fr/en/press-release/targeted-advertising-autorite-de-la-concurrence-imposes-fine-eu150000000-apple
[11] https://lis.virginia.gov/bill-details/20251/HB2094
[12] https://www.sec.gov/newsroom/speeches-statements/statement-certain-proof-work-mining-activities-032025
[13] https://www.sec.gov/newsroom/press-releases/2025-42
[14] https://ag.ny.gov/press-release/2025/attorney-general-james-urges-23andme-customers-contact-company-delete-data
[15] https://www.cac.gov.cn/2025-03/14/c_1743654684782215.htm
[16] https://www.cac.gov.cn/2025-03/21/c_1744174262342111.htm
[17] https://www.cac.gov.cn/2025-02/14/c_1741233507681519.htm
[18] https://www.mevzuat.gov.tr/mevzuat?MevzuatNo=7545&MevzuatTur=1&MevzuatTertip=5
[19] https://www.admin.ch/gov/en/start/documentation/media-releases.msg-id-104400.html
[20] https://sdaia.gov.sa/en/SDAIA/about/Pages/RegulationsAndPolicies.aspx
[21] https://www.imda.gov.sg/regulations-and-licences/regulations/codes-of-practice/advisory-guidelines-of-cloud-services-and-data-centres
