Technology Law and Data Privacy Updates
Edition I - March 2025
INDEX
A. SUMMARY
India
- Government directed to respond on PIL challenging IT Blocking Rules
- New Income Tax Bill expands Digital Asset Oversight
- ‘AIKosha’ launched to boost AI Innovation Ecosystem
European Union
- Statement on ‘Age Assurance’ Mechanism Requirements Adopted
- Oversight on the Right to Erasure Under GDPR Tightens
- Updated FAQs for Data Act Released
- Ireland prepares for AI Act Implementation
- Guidelines on Direct Marketing released for Public Consultation
- Guidance on Implementing PNR Directive Released
- Dutch DPA seeks Input on Human Oversight in AI Decision-Making
United States of America
- Infosys settles Lawsuit Over Cybersecurity Breach
- Court rejects Google’s Motion to Dismiss in Wiretapping Case
- APA passes Resolutions to Strengthen Privacy Practices for Psychological and Neural Data
- NAI introduces Updated Privacy Framework for Network Advertising
- California: Investigation initiated into the Location Data Industry
United Kingdom
- Government releases Voluntary Code for Cyber Security of AI
- Data Bill moves closer to Enactment
Others
- International DPAs Unite for Trustworthy AI Governance
SUMMARY
Welcome to the latest edition of Fountainhead Legal’s Data Privacy and Technology Law newsletter. This edition covers the most significant legal and policy updates from India, the United States, the United Kingdom, the European Union, and other key jurisdictions, offering a deep dive into emerging compliance requirements and industry shifts.
In India, the Supreme Court’s notice on a PIL challenging the IT Blocking Rules raises crucial questions about content moderation and digital rights, while the Income Tax Bill, 2025 expands tax authorities’ powers over virtual digital assets, sparking privacy concerns. In the meantime, the launch of AIKosha marks a significant push toward AI research and ethical innovation under the IndiaAI Mission.
At the global level, AI governance and data privacy remain at the forefront. The EU’s latest statements on Age Assurance and the Right to Erasure highlight efforts to protect children online and strengthen data deletion rights. Additionally, the EDPB’s guidance on PNR data retention and Ireland’s AI Act enforcement strategy signal increasing regulatory oversight over AI and data processing. The Dutch DPA’s call for public input on meaningful human oversight in AI decision-making further underscores the urgency of balancing automation with accountability. In the United States, cybersecurity and data privacy continue to take center stage.
Infosys McCamish has settled a $17.5 million class-action lawsuit following a major ransomware attack, underscoring the growing financial and reputational risks tied to weak security measures. Meanwhile, Google faces heightened scrutiny, as a U.S. District Court denied its motion to dismiss a lawsuit alleging unlawful call monitoring under CIPA, reinforcing concerns over AI-driven data collection. The APA’s new resolutions on neural and cognitive data privacy emphasize the ethical implications of AI and psychological assessments, advocating for stronger data handling and transparency standards. Additionally, California’s Attorney General has launched an investigation into potential CCPA violations in the location data industry, signaling tougher enforcement actions against data brokers, advertisers, and mobile apps. On the advertising front, the NAI has introduced an updated privacy framework, providing new self-regulatory principles to navigate the evolving ad-tech and consumer privacy landscape.
Across the United Kingdom, AI security and data regulation are advancing at full speed. The government has introduced a voluntary AI Cyber Security Code, establishing best practices for mitigating AI-specific risks and aligning with global security standards. Meanwhile, the Data (Use and Access) Bill is nearing final approval, poised to reshape data access, privacy rights, and AI regulation. With provisions such as ‘Digital Verification Services’, biometric data retention rules, and a National Underground Asset Register, the bill introduces tougher compliance measures for businesses, requiring them to navigate new data governance and cybersecurity obligations.
Beyond these regions, international collaboration on AI governance is gaining momentum. Data protection authorities from Australia, Korea, Ireland, France, and the United Kingdom have issued a joint statement on responsible AI governance, emphasizing privacy-by-design, legal clarity, and public trust. This cross-border initiative reflects a growing consensus on the need for robust AI regulatory frameworks that balance innovation with ethical considerations.
With these rapidly evolving regulations, businesses must stay proactive and informed to avoid legal pitfalls and remain compliant. From data privacy crackdowns and AI oversight to new cybersecurity mandates, these developments signal a transformative shift in how digital ecosystems operate. The coming months will be crucial for organizations seeking to align with these new rules, as enforcement intensifies and global regulatory frameworks take shape.
Fountainhead Legal is committed to supporting organizations on this journey. With our deep expertise in data privacy compliance and a strong understanding of regulatory nuances, we offer tailored solutions for each client’s unique needs. From drafting privacy policies and developing data protection frameworks to advising on cross-border data transfers and facilitating employee training programs, our team is equipped to guide clients through every stage of their compliance strategy.
We hope you enjoy our latest updates!
NATIONAL
1. Government directed to respond on PIL challenging IT Blocking Rules[1]
The Supreme Court in Software Freedom Law Centre and Others v. Union of India and Others [Writ Petition (Civil) No. 161/2025] has issued a notice to the Central Government in response to a PIL challenging the IT Blocking Rules framed under the IT Act. The PIL questions the legality of blocking social media accounts and posts without prior notice to content creators, arguing that this violates the principles of natural justice.
A bench has directed the Central Government to submit its response within 6 weeks. The petitioners emphasized that while the Government has the authority to issue blocking orders, affected individuals must be given an opportunity to be heard.
The PIL challenging the IT Blocking Rules raises a critical debate on the right to be informed before content takedown versus the flexibility granted to the Government under existing provisions. Rule 8 of the IT Blocking Rules allows the designated officer to either identify the person or the intermediary hosting the content, rather than mandating direct identification of the individual. This gives the Government discretion in its approach, potentially limiting direct notice to content creators. Additionally, in emergency situations, the requirement to issue prior notice is waived entirely, further complicating the balance between due process and regulatory enforcement.
This case presents an intriguing legal question—should individuals always be notified before their content is blocked, in line with principles of natural justice, or does the current framework reasonably accommodate operational flexibility for the Government? The outcome could reshape digital rights and regulatory practices, determining whether free speech protections warrant stronger procedural safeguards in content moderation.
2. New Income Tax Bill expands Digital Asset Oversight[2]
The Income Tax Bill, 2025, introduced in Parliament, brings key changes to the attachment of assets, including VDAs. The proposed Section 500 empowers tax authorities to provisionally attach property, including VDAs, securities, and other valuable assets, to protect revenue interests during tax proceedings. It also enables authorities to override security measures, access digital spaces, and seize electronic records under proposed Section 247(1)(b)(iii), raising privacy concerns.
The privacy concerns stem from the broad authority granted to tax officials to seize or attach digital assets without clear safeguards. The ability to virtually attach assets and issue notices asserting ownership of valuables, including electronic records and virtual digital assets, raises questions about due process, data security, and potential overreach. Without strong oversight and transparency, these provisions could lead to unwarranted surveillance and intrusion into individuals’ financial privacy.
3. ‘AIKosha’ launched to boost AI Innovation Ecosystem[3]
Following the recent Budget 2024-25, where funds were allocated to the ‘IndiaAI Mission’, MeitY has launched AIKosha, a secure platform aimed at accelerating AI research, innovation, and responsible AI development.
AIKosha serves as a centralized AI repository, providing AI-ready datasets, pre-trained models, and development tools for researchers, startups, and enterprises. It features sandbox environments, compliance mechanisms, and security frameworks to ensure ethical AI use. This initiative enhances India’s AI infrastructure, fostering collaboration, skill development, and global competitiveness in emerging technologies.
India is gradually laying the groundwork for a strong AI ecosystem. The release of the AI Governance and Development of Guidelines Report for public consultation signaled the Government’s intent to regulate AI responsibly. This was followed by budgetary allocations for the IndiaAI Mission, and now, the launch of AIKosha, a dedicated AI repository. These steps indicate a positive shift toward fostering AI research, innovation, and industry collaboration, paving the way for India to emerge as a key player in the global AI landscape.
INTERNATIONAL
EUROPEAN UNION
4. Statement on ‘Age Assurance’ Mechanism Requirements Adopted
The EDPB has adopted ‘Statement 1/2025 on Age Assurance’ (“Statement”), which outlines key principles to help businesses implement GDPR-compliant age verification mechanisms while ensuring children’s privacy, security, and fundamental rights. Instead of mandating a specific method, the EDPB adopts a risk-based approach, allowing businesses to choose verification methods that align with the level of risk posed by their services. The Statement emphasizes protecting children’s rights, ensuring that age assurance does not infringe on privacy, limit access to information, or lead to excessive data collection. It also promotes privacy-by-design and data minimization, requiring that any data collected for age verification be used only for that purpose and deleted afterward.
To maintain fairness, the EDPB stresses lawfulness, transparency, and accountability, ensuring users understand how their data is processed and providing safeguards against bias in automated decision-making. The Statement also mandates strong security measures to prevent fraud and misuse, with businesses held accountable for compliance and regular audits. Additionally, organizations must continuously evaluate and improve their age assurance methods to adapt to evolving risks and regulatory expectations. By following these principles, businesses can implement effective, privacy-conscious age verification mechanisms that protect children while ensuring accessibility and compliance.
Implementing age assurance in children-centric industries like social media and gaming requires a balance between security, privacy, and accessibility. Social media platforms must prevent underage access to harmful content while avoiding excessive data collection, using methods like AI-driven age estimation or parental consent verification. Gaming platforms, especially those with multiplayer features or in-game purchases, should adopt tiered verification, allowing self-declaration for general gameplay but requiring stronger checks for high-risk activities like online interactions or transactions. By integrating privacy-conscious, risk-based verification, these industries can enhance child protection without creating unnecessary access barriers.
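Purely as an illustration of the tiered, risk-based approach described above, such a policy could be modeled as a simple lookup from activity to minimum assurance method. The tier names and methods below are hypothetical examples, not prescribed by the EDPB Statement:

```python
# Illustrative sketch of a risk-tiered age-assurance policy for a
# gaming platform. Tier names and required methods are hypothetical
# examples, not drawn from the EDPB Statement.

RISK_TIERS = {
    "general_gameplay": "self_declaration",            # low risk
    "social_features": "age_estimation",               # medium risk
    "in_game_purchases": "verified_parental_consent",  # high risk
}

def required_assurance(activity: str) -> str:
    """Return the minimum age-assurance method for an activity.

    Unknown activities default to the strictest check, reflecting a
    fail-safe, privacy-by-design posture.
    """
    return RISK_TIERS.get(activity, "verified_parental_consent")

print(required_assurance("general_gameplay"))   # self_declaration
print(required_assurance("in_game_purchases"))  # verified_parental_consent
```

The design choice worth noting is the default: when an activity is not classified, the policy escalates to the strongest check rather than the weakest, mirroring the Statement’s emphasis on proportionality without under-protection.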
5. Oversight on Right to Erasure under GDPR Tightens[4]
On March 5, 2025, the EDPB initiated its 2025 Coordinated Enforcement Framework (CEF) action, focusing on the ‘right to erasure’, also known as the ‘right to be forgotten’ under Article 17 of the GDPR. This right allows individuals to request the deletion of their personal data under specific circumstances. The EDPB prioritized this issue due to its frequent use and the high volume of related complaints received by DPAs across Europe.
As part of this initiative, 32 DPAs will collaborate throughout 2025, engaging with data controllers across different industries. They will conduct investigations and assessments to evaluate how erasure requests are handled and whether they comply with legal conditions and exceptions. The findings from these national actions will be compiled and analyzed to provide a clearer understanding of compliance trends, leading to potential follow-up measures at both national and EU levels. This initiative reinforces the EDPB’s commitment to strengthening data protection enforcement and ensuring that individuals’ privacy rights are upheld consistently across Europe.
6. Updated FAQs for Data Act Released[5]
The European Commission has released updated FAQs on the Data Act, providing key clarifications on data access, sharing, security, and business obligations ahead of the regulation’s enforcement on September 12, 2025. The Data Act establishes fair rules for accessing and using data generated by connected devices, ensuring a balanced approach between data holders, users, and third parties. It also introduces safeguards for trade secrets, contractual fairness, and security measures.
The updated FAQs confirm that while the Data Act applies to both personal and non-personal data generated by IoT devices, the GDPR takes precedence in the event of a conflict. They also clarify key aspects of the Data Act’s implementation: users have the right to access and port their IoT-generated data, with manufacturers required to provide access where technically feasible, though third parties cannot use this data to develop competing products. The FAQs further clarify that DMA ‘gatekeepers’ cannot demand access under the Data Act, and that data-sharing agreements must follow fair, reasonable, and non-discriminatory (FRAND) terms, prohibiting unfair contractual clauses. Regarding business-to-government data access, the FAQs confirm that public authorities can request private-sector data only in emergencies, under strict conditions, with companies retaining the right to challenge excessive demands.
7. Ireland prepares for AI Act Implementation[6]
On March 4, 2025, the Irish government approved a regulatory model for the AI Act’s implementation, assigning oversight to sector-specific regulators. A total of 8 public bodies, including the Data Protection Commission and the Competition and Consumer Protection Commission, will enforce compliance within their respective industries. This approach leverages existing regulatory expertise to ensure AI governance aligns with sector-specific risks and obligations. Additional regulatory bodies, including a lead authority, will be appointed as the framework develops.
As a refresher, the AI Act is the first comprehensive legal framework for AI regulation, aiming to ensure AI technologies are safe, transparent, and aligned with fundamental rights. It classifies AI systems based on risk, imposing stricter obligations on high-risk applications while allowing innovation in low-risk areas.
As a key player in the EU’s digital economy, Ireland’s proactive AI regulatory framework will influence innovation and compliance strategies across industries. Businesses should take early steps to assess their AI-driven operations and prepare for the evolving regulatory landscape.
8. Guidelines on Direct Marketing released for Public Consultation[7]
The Belgian DPA has released Recommendation 01/2025 (“Guidelines”), updating the legal framework for processing personal data in direct marketing. The Guidelines apply to all organizations engaging in promotional activities, including businesses, non-profits, and political entities. They emphasize the need for a lawful basis for data processing, either through explicit consent or legitimate interest, while ensuring transparency, data minimization, and clear opt-out mechanisms. Additionally, the Guidelines warn against dark patterns that hinder users from withdrawing consent and impose stricter rules on third-party data sharing.
The recommendations are open for public consultation until May 10, 2025, allowing stakeholders to provide feedback.
9. Guidance on Implementing PNR Directive Released[8]
The EDPB has issued Statement 2/2025 on the implementation of the Passenger Name Record (PNR) Directive (EU) 2016/681 (“Directive”), following the CJEU’s ruling in Ligue des droits humains v. Conseil des ministres [Case C-817/19][9]. While upholding the Directive’s validity, the court mandated strict safeguards to ensure compliance with EU fundamental rights. The EDPB clarifies that PNR data processing must be strictly limited to serious crime and terrorism with a clear objective link to air travel. Applying the Directive to intra-EU flights requires specific, justified security assessments, and law enforcement agencies must obtain independent prior approval before accessing PNR data. Additionally, passengers must have access to judicial redress and the ability to challenge automated decision-making.
The 5-year general retention period is deemed excessive, and PNR data must be deleted after six months unless objective evidence justifies further retention. The EDPB has urged member states to amend their laws to align with the ruling, warning of enforcement actions by national data protection authorities for non-compliance. Additionally, the European Commission is expected to monitor implementation to ensure a proportionate and lawful application of the PNR system, balancing security needs with privacy rights.
10. Dutch DPA seeks Input on Human Oversight in AI Decision-Making[10]
The Dutch DPA has opened a public consultation on its guidelines on ‘Meaningful Human Intervention’ (“Guidelines”). The Guidelines explore the requirements for genuine human oversight in high-risk AI systems, aligning with the GDPR and the AI Act, and emphasize that human intervention must be substantive, not a mere symbolic act, to comply with Article 22 GDPR, which regulates automated decision-making.
The Guidelines outline key factors for ensuring effective human oversight, including competence, process design, governance, and technological safeguards. They warn against automation bias—where humans over-rely on algorithmic outputs—and highlight the need for clear accountability structures within organizations. As such, the DPA has sought practical insights on implementing meaningful intervention, particularly in contexts where automated decisions impact individuals’ rights and freedoms, by April 6, 2025.
UNITED STATES OF AMERICA
11. Infosys settles Lawsuit over Cybersecurity Breach[11]
Infosys McCamish Systems LLC (“Infosys”) has agreed to a US $17.5 million settlement to resolve 6 class-action lawsuits in the U.S. stemming from a 2023 cybersecurity breach. The breach, which was disclosed in November 2023, occurred due to a ransomware attack, leading to the unauthorized access and leak of sensitive personal and corporate data. Investigations later revealed that approximately 6.5 million individuals were affected, including corporate customers. The compromised data included names, social security numbers, contact details, financial account information, policy numbers, salary details, and personal medical records[12].
The lawsuits alleged that Infosys failed to implement adequate security measures, violating multiple laws, including HIPAA, the GLBA, and various state data protection and consumer protection laws. Plaintiffs claimed Infosys’s lack of proper encryption mechanisms, outdated cybersecurity infrastructure, and delayed breach notification contributed to the severity of the incident. Following mediation on March 13, 2025, Infosys and the plaintiffs reached a settlement agreement, which is subject to court approval.
For businesses, such incidents serve as a reminder that strengthening cybersecurity frameworks is not just a regulatory requirement but also a financial necessity. Paying high settlement costs not only impacts a company’s bottom line but also affects its goodwill and customer trust. Investing in robust security measures, timely risk assessments, and compliance with data protection laws can help companies avoid costly legal disputes and safeguard their reputation in the long run.
12. Court rejects Google’s Motion to Dismiss in Wiretapping Case[13]
A U.S. District Court in California has denied Google’s motion to dismiss a lawsuit alleging violations of CIPA. The case, Ambriz v. Google, LLC [Case No. 23-cv-05437-RFL], alleges that Google unlawfully monitored, transcribed, and analyzed customer service calls without callers’ consent. The court ruled that the plaintiffs adequately alleged Google’s role as an unauthorized third party in the communications, allowing the case to proceed.
The lawsuit focuses on Google’s AI-powered call center services, which process customer calls for major businesses like Verizon, Hulu, GoDaddy, and Home Depot. Plaintiffs claim they were unaware that their conversations were being transcribed and analyzed in real time by Google, violating CIPA Section 631(a). The court rejected Google’s argument that it was merely a software provider, ruling instead that Google had the capability to use call data independently, making it a third party under the law. The ruling reinforces growing scrutiny over AI-driven data collection and privacy rights, particularly in automated decision-making and consumer interactions.
13. APA passes Resolutions to Strengthen Privacy Practices for Psychological & Neural Data[14]
The APA has adopted two key resolutions addressing privacy and security concerns in psychological and cognitive data. The ‘Resolution on the Protection of Neural and Cognitive Data Privacy’ emphasizes safeguarding individuals’ mental privacy amid technological advancements, while the ‘Resolution on Protecting Psychological Test Security, Test Validity, and Public Safety’ focuses on securing psychological test data to ensure the integrity of assessments used in legal, educational, and occupational contexts. The Neural and Cognitive Data Privacy Resolution highlights ethical concerns surrounding the collection and use of neural and cognitive data by consumer software and wearable devices. It affirms individuals’ rights to mental privacy, calling for ethical standards in data handling, including responsible collection, storage, and use of sensitive information with a strong emphasis on transparency and informed consent.
Meanwhile, the Psychological Test Security Resolution seeks to protect the confidentiality and validity of psychological assessments used in child custody disputes, competency evaluations, educational disability determinations, and fitness-for-duty screenings, preventing misuse that could compromise public safety. Through these resolutions, the APA aims to influence policy and professional practices, ensuring responsible management of both neural data and psychological assessments in an era of rapid technological and scientific advancement.
The resolutions underscore the critical need for ethical data governance in psychology. As technology enables greater collection of sensitive mental and cognitive data, protecting privacy, ensuring informed consent, and preventing misuse are essential to maintaining trust and ethical integrity in the field. Safeguarding psychological assessments used in legal, educational, and clinical contexts preserves their validity and prevents potential harm. These resolutions reinforce responsible data management, ensuring advancements in psychology uphold confidentiality, fairness, and public safety while adapting to the evolving digital landscape.
14. NAI introduces Updated Privacy Framework for Network Advertising[15]
The NAI, a leading self-regulatory body for digital advertising and data privacy, has introduced a new privacy framework, the ‘Network Advertising Initiative Principles & Self-Regulatory Framework’ (“Framework”), to replace its 2020 Code of Conduct. The NAI, which represents companies engaged in interest-based and cross-site advertising, plays a key role in establishing best practices for responsible data use in the ad-tech industry. The Framework adopts a principles-based approach, ensuring compliance with U.S. state privacy laws while allowing flexibility in implementation.
Key areas covered include transparency, consumer control, and data governance. Members must disclose data collection practices, provide opt-out mechanisms, and apply stricter safeguards for sensitive data such as health and location information. The Framework also mandates annual privacy reviews to assess compliance. By shifting to broad privacy principles, the NAI aims to help companies navigate complex regulatory landscapes while maintaining consumer trust in digital advertising.
15. California: Investigation initiated into Potential Privacy Violations in the Location Data Industry[16]
On March 10, 2025, California Attorney General Rob Bonta announced an investigative sweep into the location data industry to assess compliance with the CCPA. The focus is on advertising networks, mobile apps, and data brokers that may be collecting and selling precise location data without proper consumer consent. Such practices enable the tracking of individuals’ movements without their awareness, raising significant privacy concerns. The investigation emphasizes the necessity for businesses to respect consumers’ rights to opt-out of the sale or sharing of their personal information and to limit the use of sensitive data, including location details. By enforcing these measures, the Attorney General’s office aims to uphold consumer privacy and ensure compliance within the data industry.
This highlights the growing scrutiny on location data privacy, reinforcing the principles of the CCPA. Cases like Mobilewalla, as reported in our previous editions, where the FTC intervened and reached settlements over the unlawful sale of location data, demonstrate that regulatory bodies are actively addressing these concerns. However, enforcement actions and settlements should not be the only deterrent—companies must adopt proactive compliance measures to protect consumer privacy from the outset. After all, prevention is better than compensation!
Businesses that prioritize transparent data practices will not only avoid legal repercussions but also build trust with their consumers in an era of increasing digital oversight.
UNITED KINGDOM
16. Government releases Voluntary Code for Cyber Security of AI[17]
The NCSC and the DSIT have introduced a voluntary ‘Code of Practice for the Cyber Security of AI’ (“Code”) to help organizations develop and deploy AI systems securely. The Code outlines best practices to mitigate AI-specific cyber risks and aligns with international security standards.
The Code establishes a comprehensive AI cybersecurity framework based on four principles: secure design, secure development, secure deployment, and secure operation & maintenance. It emphasizes threat identification, access controls, safe coding practices, data protection, continuous monitoring, and incident response to enhance AI security and resilience.
The Code’s development follows a public consultation that concluded in August 2024, with 80% of respondents endorsing the proposed measures. Notably, this Code will serve as the foundation for a new global standard through collaboration with the European Telecommunications Standards Institute (ETSI), reinforcing the UK’s leadership in promoting secure AI innovation.
17. Data Bill moves closer to Enactment[18]
The Data (Use and Access) Bill (“Data Bill”),[19] last amended on March 13, 2025, is in its final stages in Parliament, introducing significant reforms in data access, privacy, cybersecurity, and AI regulation. The Data Bill strengthens UK GDPR compliance, enhances transparency, and imposes strict penalties for violations. Key provisions include new rules on customer and business data access, granting broad enforcement powers to the Secretary of State and the Treasury, with unlimited fines for serious breaches. It also establishes ‘Digital Verification Services’, requiring identity verification providers to meet rigorous security standards or face removal from a national register, with full enforcement expected by 2026.
Additionally, the Data Bill enhances data subject rights, imposing 30-day and 60-day time limits for responding to standard and complex data access requests, respectively. The newly created Information Commission, set to replace the ICO by 2026, will oversee enforcement and issue penalties for serious data breaches, aligning with UK GDPR. It also focuses on AI regulation and privacy, setting transparency and accountability requirements for automated decision-making and criminalizing the creation of AI-generated deepfake images, punishable by up to 5 years in prison. It introduces biometric data retention rules, mandates the establishment of a ‘National Underground Asset Register’ by 2027, and expands privacy and electronic communication regulations, including stricter digital marketing and cookie tracking rules. Businesses across sectors must prepare for significant compliance obligations, as the Bill is expected to reshape the UK’s data protection and cybersecurity landscape.
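Purely as an illustration of the 30-day and 60-day response windows mentioned above, a compliance team might track due dates along the following lines. Calendar-day counting is an assumption here; the Bill’s exact counting rules and any extension provisions should be verified against the final text:

```python
# Illustrative due-date calculation for data subject access requests,
# using the Data Bill's proposed 30-day (standard) and 60-day (complex)
# response limits. Calendar-day counting is a simplifying assumption.

from datetime import date, timedelta

STANDARD_DAYS = 30  # standard access requests
COMPLEX_DAYS = 60   # complex access requests

def response_due(received: date, is_complex: bool = False) -> date:
    """Return the latest permissible response date for a request."""
    days = COMPLEX_DAYS if is_complex else STANDARD_DAYS
    return received + timedelta(days=days)

print(response_due(date(2025, 4, 1)))                   # 2025-05-01
print(response_due(date(2025, 4, 1), is_complex=True))  # 2025-05-31
```

In practice, a request-handling workflow would also need to record when a request was reclassified as complex and notify the data subject, which this sketch omits.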
OTHERS
18. International DPAs Unite for Trustworthy AI Governance[20]
On February 12, 2025, data protection authorities from Australia, Korea, Ireland, France, and the United Kingdom issued a joint statement emphasizing the need for trustworthy data governance frameworks to foster privacy-protective and innovative AI development. The discussions focused on balancing AI’s opportunities with its risks, ensuring privacy by design, and clarifying legal grounds for AI data processing, including consent, contractual necessity, and legitimate interest. The authorities committed to enhancing public trust through proportionate safety measures, continuous monitoring of AI’s impact, and stronger collaboration with regulatory bodies overseeing competition, consumer protection, and intellectual property. By reducing legal uncertainties and encouraging responsible AI innovation, the participating nations aim to establish global best practices that align AI advancements with privacy rights and ethical standards.
- AI Act – Artificial Intelligence Act, 2024
- APA – American Psychological Association
- CCPA – California Consumer Privacy Act, 2018
- CJEU – Court of Justice of the European Union
- Data Act – Data Act (Regulation (EU) 2023/2854)
- DPA – Data Protection Authority
- DPIA – Data Protection Impact Assessment
- DSIT – Department for Science, Innovation and Technology
- EDPB – European Data Protection Board
- GLBA – Gramm-Leach-Bliley Act, 1999
- HIPAA – Health Insurance Portability and Accountability Act, 1996
- IT Act – Information Technology Act, 2000
- IT Blocking Rules – Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009
- NAI – Network Advertising Initiative
- NCSC – National Cyber Security Centre
- PIL – Public Interest Litigation
- VDA – Virtual Digital Asset
Authors:
- Rashmi Deshpande
- Aarushi Ghai
- Shriya Haridas
[1] https://api.sci.gov.in/supremecourt/2025/7547/7547_2025_2_18_59913_Order_03-Mar-2025.pdf
[2] https://incometaxindia.gov.in/Documents/income-tax-bill-2025/income-tax-bill-2025.pdf
[3] https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2108961#:~:text=of%20Electronics%20%26%20IT-,MeitY%20launches%20AIKosha%2C%20a%20secured%20platform%20that%20provides%20a%20repository,along%20with%20tools%20and%20tutorials
[4] CEF 2025: Launch of coordinated enforcement on the right to erasure | European Data Protection Board
[5] https://ec.europa.eu/newsroom/dae/redirection/document/108144
[6] https://enterprise.gov.ie/en/news-and-events/department-news/2025/march/20250305.html#:~:text=On%20Tuesday%2C%204%20March%202025,Artificial%20Intelligence%20(AI)%20Act.
[7] https://www.autoriteprotectiondonnees.be/citoyen/actualites/2025/03/10/consultation-publique-relative-au-marketing-direct?mkt_tok=MTM4LUVaTS0wNDIAAAGZJ-_WycuPrTOTrxSb1u17AhnuA_l-B6v3C7863_6kp6aBoJKJs–aHa6UoEGLQ6xT7fAaH-3xkvffn0q-wiAmLNeUQC9q-miNyh2GTIYGAmZCYQ
[8] edpb_statement_20250313_implementation-of-the-pnr-directive-in-light-of-the-cjeu-judgment_en.pdf
[9] https://curia.europa.eu/juris/document/document.jsf;jsessionid=054CF02975B92146A5B521117B3DE1C0?text=&docid=252841&pageIndex=0&doclang=en&mode=lst&dir=&occ=first&part=1&cid=13011399
[10] https://autoriteitpersoonsgegevens.nl/actueel/consultatie-betekenisvolle-menselijke-tussenkomst-bij-algoritmische-besluitvorming?mkt_tok=MTM4LUVaTS0wNDIAAAGZDjFh1G6bU5vO-LCnalm_iiP2yJzSuUiRVt0ap0PSS955ReYgEsx5Q-l8gXgkCO-f5R-0R6Eo7TXR5ukarjJ5etdHR6cJqj6RHAuc41WOIeaZgw
[11] https://www.infosys.com/investors/documents/exchange-filings/2025/cyber-incident-proposed-settlement-14mar2025.pdf
[12] https://dd80b675424c132b90b3-e48385e382d2e5d17821a5e1d8e4c86b.ssl.cf1.rackcdn.com/external/mcnally-v-infosys-mccamish-complaint-3-6-24.pdf
[13] https://www.courthousenews.com/wp-content/uploads/2025/02/ambriz-v-google-order-denying-motion-dismiss.pdf
[14] APA adopts policies to strengthen privacy protections for neural, psychological data
Resolution on the Protection of Neural and Cognitive Data
[15] NAI Framework_Dec 2024_March 2025
[16] Attorney General Bonta Announces Investigative Sweep of Location Data Industry, Compliance with California Consumer Privacy Act | State of California – Department of Justice – Office of the Attorney General
[17] Code of Practice for the Cyber Security of AI – GOV.UK
[18] Data (Use and Access) Bill [HL] – Parliamentary Bills – UK Parliament
[19] https://publications.parliament.uk/pa/bills/cbill/59-01/0199/240199.pdf
[20] Joint statement on building trustworthy data governance frameworks to encourage development of innovative and privacy-protective AI | OAIC