Technology Law and Data Privacy Updates

Edition I - December 2024

SUMMARY

Welcome to the latest edition of Fountainhead Legal’s Data Privacy and Technology Law newsletter.

As technological advancements continue to shape the global landscape, the intersection of innovation, security, and privacy remains a crucial focal point for legal and regulatory frameworks. This edition highlights key developments across national and international jurisdictions, reflecting the growing complexity and challenges in data privacy, cybersecurity, and intellectual property rights.

In India, the introduction of the Telecommunications (Procedures and Safeguards for Lawful Interception of Messages) Rules, 2024, underscores the Government’s efforts to enhance national security while balancing public interest and individual privacy. At the same time, landmark legal cases, such as those concerning copyright infringement by AI and intermediary liability for online content, signal the judiciary’s pivotal role in addressing novel challenges arising from rapid technological progress. These developments highlight the increasing scrutiny over digital platforms and their obligations towards transparency, accountability, and compliance.

On the global front, privacy and cybersecurity remain at the forefront of regulatory actions. From Hong Kong’s enforcement measures addressing data breaches and opaque recruitment practices to the United States’ decisive actions against unauthorized data collection, authorities are prioritizing the protection of individual rights. The European Union’s Cyber Resilience Act represents a significant leap in setting cybersecurity standards for digital products, while rulings like Germany’s decision on Facebook’s data breach provide clarity on compensation standards under GDPR.

This evolving landscape of law and regulation reflects the delicate balance between fostering innovation and safeguarding fundamental rights. These developments not only shape current practices but also set critical precedents for addressing the legal and ethical responsibilities of organizations in the digital age.

Fountainhead Legal is dedicated to supporting organizations on this journey. With our deep expertise in data privacy compliance and a strong understanding of regulatory nuances, we provide tailored solutions for each client’s unique needs. From drafting privacy policies and building data protection frameworks to advising on cross-border data transfers and facilitating employee training programs, our team is equipped to guide clients through every step of their compliance strategy.

Hope you enjoy our latest updates!

NATIONAL 

1. DoT introduces Telecommunications (Procedures and Safeguards for Lawful Interception of Messages) Rules, 2024

On December 06, 2024, the Department of Telecommunications (“DoT”) introduced the Telecommunications (Procedures and Safeguards for Lawful Interception of Messages) Rules, 2024 (“Interception Rules”)[1]. Through the Interception Rules, the Government aims to establish a legal framework for the lawful interception of messages, ensuring a balance between national security, public interest, and individual privacy. The Interception Rules apply to competent authorities empowered to intercept messages for purposes such as national security, sovereignty, public order, and crime prevention, while exempting specific cases like testing interception systems with prior approval or certain telecom entities under Section 3(1)(c) of the Telecommunications Act, 2023.

The Interception Rules require that interception orders be issued by a competent authority, such as the Union Home Secretary or State Home Secretary, with provisions for emergency orders by senior officials, which must be confirmed within 7 working days. Interception is permitted only when no alternative means to acquire the information are feasible, and such orders are strictly time-bound to a maximum of 60 days, extendable up to 180 days. Robust safeguards mandate secure handling, confidentiality, and destruction of intercepted data within specified timeframes unless required for functional or legal purposes. Additionally, a Review Committee at the Central and State levels will monitor interception orders every 2 months to ensure compliance with the law. Telecommunication entities are obligated to prevent unauthorized interception, maintain secure records, and ensure adherence to these rules, as violations may lead to legal consequences.

For citizens, while the Interception Rules aim to strengthen national security and curb illegal activities, they raise concerns about privacy and misuse. They authorize interception of communications under specific circumstances, but the detailed procedure and potential lack of transparency in oversight might lead to overreach and infringe on personal privacy. The safeguards, including review committees and time-bound destruction of intercepted data, are positive steps but may not fully address fears of surveillance misuse.

For businesses, particularly telecom entities and internet service providers, these rules impose significant compliance obligations. Companies must establish robust mechanisms to implement lawful interception orders, ensure data confidentiality, and maintain detailed records. While this bolsters national security, it also increases operational costs and risks, including penalties for unauthorized interceptions or non-compliance. The rules necessitate a delicate balance between security needs and the protection of privacy and business interests.

2. Delhi High Court addresses Copyright Infringement Suit Against OpenAI

The Delhi High Court issued directions in the matter of ANI Media Pvt Ltd. v. OpenAI Inc. & Anr [CS(COMM) 1028/2024][2]. The case raises significant legal questions regarding the use of copyrighted content by Artificial Intelligence (“AI”) platforms, marking one of the first cases in India to address the intersection of AI and copyright law.

ANI Media Private Limited (“ANI”) alleged that OpenAI Inc.’s (“OpenAI”) AI model, ‘ChatGPT’, unlawfully stored and used ANI’s copyrighted content for training its software and generating responses, thereby infringing the Copyright Act, 1957 (“Copyright Act”). ANI sought an interim injunction to restrain OpenAI from storing, reproducing, or using its content and requested the disabling of access to its copyrighted works. OpenAI stated that ANI’s domain had already been blocklisted in October 2024, ensuring its exclusion from future AI training, which the court took on record.

Recognizing the novel legal questions raised by advancements in AI, including whether the storage and use of copyrighted content constitute infringement or fall under ‘fair use’ as per Section 52 of the Copyright Act, the court highlighted the absence of authoritative rulings globally on such issues. Given the complexity, the court appointed two Amici Curiae—Mr. Adarsh Ramanujan, an IP law expert, and Dr. Arul George Scaria, Professor of Law at NLSIU—to provide their perspectives. The case also raised jurisdictional concerns, as OpenAI’s servers are located in the United States. The court has scheduled the next hearing for January 28, 2025, with further submissions expected.

This case marks a significant development in Indian intellectual property law as it grapples with the intersection of AI and copyright. It raises critical questions about the ethical and legal responsibilities of AI developers when using copyrighted content for training models. The court’s decision is likely to set a precedent for similar disputes in India and beyond.

3. Delhi High Court’s Ruling in ANI v. Wikimedia Foundation – Disclosure of Subscriber Information

The Delhi High Court issued an important ruling in the appeal filed by Wikimedia Foundation against an order of the Single Judge in the case ANI Media Pvt Ltd. v. Wikimedia Foundation (FAO(OS) 146/2024)[3]. The dispute centred around the disclosure of subscriber information for individuals accused of posting defamatory content about ANI on Wikipedia. Following a series of hearings, the court approved a consent order, resolving the appeal and permitting the disclosure of subscriber details of the respondents, identified as the editors involved in the defamatory posts.

ANI sought the personal details of Wikipedia editors who had allegedly posted defamatory material, arguing that such details were necessary to serve legal summons. The case examined Wikimedia’s role as an intermediary and whether it should be responsible for managing user-generated content. The dispute raised questions regarding the proper procedures for serving summons and ensuring legal compliance under Indian law.

The court ruled that Wikimedia would comply with the order by facilitating the service of summons and providing the personal information of the editors in a sealed cover. The parties agreed that email service of the summons would meet legal requirements. The court clarified that this decision was made based on mutual consent, with all parties’ rights and contentions remaining open for future consideration. The court also made it clear that while the service of summons would be sufficient, Wikimedia would not be held liable for the actions or inactions of the individual respondents. The legal questions raised in the appeal were left open for future adjudication.

This ruling is a notable development in addressing the accountability of online platforms for user-generated content. By permitting the disclosure of subscriber information, the court has set an important precedent for holding individuals responsible for defamatory content posted anonymously on the internet. It also reflects the increasing judicial scrutiny over online platforms and their role in content moderation and user accountability.

INTERNATIONAL 

HONG KONG

4. Privacy Commissioner’s Office Publishes Findings on EMSD Data Breach and ‘Blind’ Recruitment Advertisements[4]

On December 9, 2024, the Hong Kong Privacy Commissioner’s Office (“PCPD”) released findings on two significant issues: a data breach incident involving the Electrical and Mechanical Services Department (“EMSD”) and ‘Blind’ recruitment advertisements on JobsDB.

EMSD Data Breach

The breach involved the personal data of over 17,000 individuals, including sensitive details such as names, HKID numbers, and PCR test results. The incident stemmed from EMSD’s failure to properly manage the retention and deletion of data stored with a contractor after COVID-19 testing in 2022. Despite notifying the contractor not to renew its service contract, EMSD did not explicitly request the deletion of personal data, leaving the data publicly exposed until April 2024. The PCPD found deficiencies in EMSD’s policies, a lack of proactive data deletion, and poor follow-up with the contractor, in breach of personal data protection laws. An Enforcement Notice has been issued, requiring EMSD to implement corrective actions: it must establish clear data retention and deletion policies, strengthen contractual agreements with service providers to ensure data deletion, proactively monitor contractor compliance, and conduct data privacy training for relevant staff. Additionally, EMSD must implement stricter data security practices to prevent unauthorized access and exposure of personal data, ensuring better overall compliance with data protection laws and enhancing trust in its data handling procedures.

‘Blind’ Recruitment Advertisements

The PCPD raised concerns about the practice of posting ‘blind’ recruitment advertisements, in which personal data is collected without complete transparency or consent. Blind advertisements are recruitment advertisements that do not disclose the identity of the employer or the specific position being offered. Instead, these advertisements typically provide a vague description of the job or company, and often request applicants to submit personal information (such as resumes, contact details, or other sensitive data) without informing them about how their data will be used or shared.

The PCPD expressed concern that blind advertisements can potentially violate privacy laws. The key issue is that applicants may unknowingly provide their personal data to a third party, or for purposes beyond what they intended, because they are not fully informed about the employer or how their data will be handled. This lack of transparency can lead to unauthorized data collection, storage, and use, infringing individuals’ rights to privacy and breaching personal data protection laws. The PCPD highlighted that companies must be clear about how personal data will be used and must obtain applicants’ consent before collecting such information.

UNITED STATES OF AMERICA

5. FTC Takes Action Against Mobilewalla and Gravy Analytics for Collecting and Selling Sensitive Location Data without User Consent

The Federal Trade Commission (“FTC”) has taken decisive action against data brokers Mobilewalla Inc. (“Mobilewalla”) and Gravy Analytics Inc. (“Gravy Analytics”), along with its subsidiary Venntel Inc. (“Venntel”), for illegally collecting, using, and selling sensitive geolocation data without obtaining user consent.

Mobilewalla[5] was found to have violated privacy laws by collecting geolocation data from millions of mobile devices without user knowledge or consent. This included highly sensitive information such as visits to hospitals, places of worship, and abortion clinics, which was sold to various companies. Similarly, Gravy Analytics and Venntel[6] were accused of tracking and selling consumer location data, including visits to sensitive locations like healthcare facilities, schools, correctional facilities, military installations, and places of worship. The companies failed to obtain verifiable user consent, compromising individuals’ safety and privacy.

As per the proposed settlement orders, the companies are required to establish robust safeguards to protect sensitive location data, delete all historical location data unless it is de-identified, and notify data recipients to comply with privacy protections. Additionally, they must assess all data suppliers to confirm valid consumer consent for data collection.

CANADA

6. Privacy Commissioner releases Statement on LinkedIn’s AI Model Practices[7]

The Privacy Commissioner of Canada, Philippe Dufresne, has welcomed LinkedIn Corporation’s (“LinkedIn”) commitment to pause its practice of using the personal data of Canadian members to train generative AI models. This decision comes after concerns were raised about LinkedIn utilizing members’ data without their knowledge or consent. The Office of the Privacy Commissioner (“OPC”) reached out to LinkedIn to seek clarity on the company’s data collection and consent processes.

LinkedIn acknowledged the concern and agreed to temporarily halt the AI training process while addressing outstanding privacy questions with the OPC. Although LinkedIn stated that it believed its AI model was implemented with privacy protections, the company is engaging in discussions to ensure compliance with Canada’s privacy laws. The Privacy Commissioner emphasized the importance of applying privacy-by-design principles and standards to balance innovation with privacy protection, particularly during rapid technological advancements. This pause reflects the OPC’s ongoing efforts to advocate for privacy rights in the evolving digital landscape.

EUROPEAN UNION

7. Cyber Resilience Act enforces New Cybersecurity Standards for Digital Products[8]

The EU’s Cyber Resilience Act (“CRA”), which came into effect on December 10, 2024, sets forth new mandatory cybersecurity requirements for products with digital components, including hardware and software. The CRA mandates that manufacturers and retailers ensure their products are secure throughout their lifecycle, addressing gaps in current cybersecurity practices such as the lack of timely updates and inadequate product security. The CRA aims to empower consumers and businesses to make informed decisions by clearly identifying products with robust cybersecurity features.

Under the CRA, manufacturers must meet cybersecurity standards at every stage of the product lifecycle, with certain critical products requiring third-party assessments before being sold in the EU. Compliant products will be marked with a ‘CE’ label. The CRA shifts responsibility toward manufacturers, ensuring that digital products meet EU cybersecurity standards and enhancing consumer confidence in their purchases. It aligns with broader EU cybersecurity strategies, including the NIS2 Directive, and its main obligations will be enforced starting December 11, 2027. An expert group will also be established to assist in the implementation of the CRA.

8. German Court’s ruling on Compensation for Facebook Data Breach[9]

The recent ruling by the German Federal Court of Justice (“BGH”) in the Facebook data breach case has clarified the standards for claiming compensation under the GDPR. In April 2021, personal data from over 533 million Facebook users in 106 countries was leaked online. The data breach stemmed from the misuse of Facebook’s contact import feature, which allowed unauthorized parties to link user phone numbers with their profiles through ‘scraping’. Compromised information included user IDs, full names, genders, workplaces, and phone numbers.

The case was brought forward by an affected user who alleged Facebook had failed to secure its platform adequately. The plaintiff claimed non-material damages for distress and loss of control over personal data. Additionally, demands were made for future compensation for potential damages, as well as requests for information on data handling practices and cessation of improper data use.

The BGH established that compensation for non-material damages under Article 82(1) of the GDPR cannot be claimed solely on the basis of a loss of control over personal data. The court clarified that to claim compensation, the plaintiff must provide evidence of actual harm or psychological distress caused by the data breach; a mere loss of control over personal data, without evidence of concrete harm or distress, does not entitle individuals to compensation. Although the BGH referred the matter back to the Court of Appeals for further examination, the standards for claiming compensation have been clearly laid down.

Authors:

  • Rashmi Deshpande
  • Aarushi Ghai
  • Janmejay Jaiswal
