EqualAI Newsletter: June–July 2025

Summer Greetings!

It’s been an incredibly productive period at EqualAI. Following an energizing and successful C-Suite Summit in May (h/t Athina Kanioura), we’ve secured fantastic new foundation and corporate partners (more details soon!), and we’re gearing up for a busy Fall with the launch of the 6th cohort of our EqualAI Badge Program this October, bringing together senior executives for critical conversations about responsible AI implementation. Meanwhile, our AI literacy initiative, which includes a special In AI We Trust? podcast series and a hub for resources and upcoming events, is bridging the growing engagement gap and empowering communities nationwide with essential AI knowledge. We are also hiring for critical positions, including a Director of Policy and Programs and a Special Assistant – apply here!

As the AI landscape evolves at breakneck speed—from policy developments to real-world incidents—our mission to unite leaders across industry, government, academia, and civil society to address critical AI governance challenges has never been more vital. This edition captures the key developments from June and early July 2025 that every AI leader should know about.

Stay tuned for info on upcoming programs on AI Incident Reporting and our multi-city AI literacy initiative (let us know if you’d like to co-host an event), including the leading institutions we are grateful to partner with, as well as details on the October 21 launch of Miriam’s new book, Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential.

– The EqualAI Team

Key AI-Related Headlines: June–July 2025

Sharing the top U.S. and global headlines about AI policy, incidents, literacy, and research from this past month.

———————————————————————————————————————

U.S. AI Policy

Updates from the White House, Capitol Hill, and state & local governments

68 Organizations Sign White House’s AI Education Pledge Axios • June 30, 2025

Following the April Executive Order on Advancing AI Education for American Youth, on June 30th, the White House announced the Pledge to America’s Youth: Investing in AI Education. As of the announcement, 68 organizations, including Amazon, BSA, and Salesforce, have signed on to the pledge. The signing organizations pledge to work with the White House Task Force on AI Education and “make available resources for youth and teachers through funding and grants, educational materials and curricula, technology and tools, teacher professional development programs, workforce development resources, and/or technical expertise and mentorship” over the next four years.

On July 8th, two of the 68 signers, Microsoft and OpenAI, along with Anthropic, the American Federation of Teachers (AFT), and the United Federation of Teachers, announced the launch of the National Academy for AI Instruction, a facility that will provide free AI training and curriculum to all members of the AFT. The three tech companies have provided $23 million in funding for the academy.

One day later, Microsoft announced it would pledge a total of $4 billion to AI education, part of which will go toward launching Microsoft Elevate Academy, a new AI training program with the goal of helping 20 million people earn AI certificates within the next two years.

Senate Removes Proposed AI Moratorium from Reconciliation Bill StateScoop • July 1, 2025

On July 1st, the Senate voted 99-1 to remove a proposed moratorium from the reconciliation bill that would have blocked states from enforcing their own artificial intelligence laws. The move was widely celebrated by state legislatures and advocacy groups as a win for state autonomy. The moratorium had been pushed by Senator Ted Cruz (R-TX) and others, who argued that a uniform federal standard was needed to avoid a chaotic patchwork of state laws. However, negotiations to scale back the moratorium, including efforts led by Cruz and Senator Marsha Blackburn (R-TN), ultimately collapsed. Blackburn, initially supportive, withdrew her backing and issued a statement emphasizing the importance of protecting the public from potential AI abuses. The failed moratorium faced widespread opposition from lawmakers, state attorneys general, and policy experts, who warned it would strip essential protections amid the rising risks posed by AI systems.

For further reading on the discourse surrounding the proposed moratorium, see Congressional pushback on proposed US state‑level AI law moratorium increases (IAPP • June 5, 2025), detailing how bipartisan opposition was being expressed within both the House and Senate. 

Trump Administration Plans July 4th Launch of AI.gov Amid Rapid Push for Government-Wide AI Adoption and Streamlined Reporting Rules 404 Media • June 10, 2025

Leaked documents from a now-deleted GitHub repository reveal that the Trump administration planned to launch AI.gov on July 4th as part of a broad push to bring AI tools to agencies across the federal government. Led by ex-Tesla engineer Thomas Shedd at the U.S. General Services Administration’s (GSA) Technology Transformation Services division, the platform will include an Application Programming Interface (API) that integrates with major AI models from OpenAI, Google, Anthropic, AWS, and Meta, as well as a usage analytics feature and an AI-powered chatbot. The initiative is positioned as a rapid, startup-style rollout of AI across the federal government, aligning with the Department of Government Efficiency’s broader efforts. However, Shedd’s plans have raised concerns among government employees over the potential for flawed decision-making, especially regarding the use of AI to analyze government contracts.
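
For readers curious what an API that “integrates with major AI models” plus a “usage analytics feature” could look like in practice, here is a minimal, purely illustrative sketch of a multi-provider gateway. This is not the AI.gov implementation; every class, field, and provider callable below is hypothetical, and a real system would call each vendor’s own API rather than the stub functions shown.

    # Hypothetical sketch only -- not AI.gov code. Provider names refer to real
    # companies, but every function and field here is invented for illustration.
    import time
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class UsageRecord:
        agency: str        # which agency made the request
        provider: str      # which model vendor served it
        tokens: int        # rough size of the request
        timestamp: float

    @dataclass
    class Gateway:
        # Maps a provider name to a callable that performs the completion.
        providers: dict[str, Callable[[str], str]]
        usage_log: list[UsageRecord] = field(default_factory=list)

        def chat(self, agency: str, provider: str, prompt: str) -> str:
            """Route one request to the chosen provider and record usage."""
            if provider not in self.providers:
                raise ValueError(f"Unknown provider: {provider}")
            reply = self.providers[provider](prompt)
            # The "usage analytics feature": one log record per request.
            self.usage_log.append(UsageRecord(
                agency=agency,
                provider=provider,
                tokens=len(prompt.split()),
                timestamp=time.time(),
            ))
            return reply

    # Stub backends stand in for real OpenAI/Anthropic/Google/AWS/Meta calls.
    gateway = Gateway(providers={
        "openai": lambda p: f"[openai stub] {p}",
        "anthropic": lambda p: f"[anthropic stub] {p}",
    })
    print(gateway.chat(agency="GSA", provider="openai",
                       prompt="Summarize this contract."))
    print(f"{len(gateway.usage_log)} request(s) logged")

The design point worth noting is the single chat entry point: routing every request through one function is what makes centralized logging and per-agency analytics straightforward, whichever vendor ultimately serves the request.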

While AI.gov did not launch on July 4th and continues to redirect to the White House website, the White House Office of Management and Budget has released updated, streamlined guidance for federal agencies’ annual AI use case reporting, largely maintaining the structure established under the Biden administration. The new process reduces data categories, removes some procurement details, and shifts from the Biden-era “rights- and safety-impacting” classifications to a “high-impact” designation, while still requiring agencies to report on development stages, use of personal data, and safeguards.

Texas Enacts Comprehensive AI Law Focused on Preventing Harm IAPP • June 23, 2025

Texas Governor Greg Abbott has signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA, HB 149), a comprehensive law (effective January 1, 2026) that applies to both public and private sector AI use. Sponsored by State Rep. Giovanni Capriglione, TRAIGA emphasizes preventing harm from AI misuse rather than solely regulating high-risk applications, setting it apart from similar laws like Colorado’s. The law imposes disclosure requirements on government AI use, bans unauthorized biometric data collection, and prohibits AI systems intended to manipulate behavior or exploit children. It includes a regulatory sandbox that allows participants to test AI systems without risk of violating the law, and it empowers the Texas Attorney General to enforce the Act through civil penalties, subject to a 60-day cure period. It also establishes the Texas Artificial Intelligence Advisory Council to oversee the sandbox program, recommend future legislation, and conduct AI training for government entities. With significant transparency, fairness, and enforcement provisions, TRAIGA is among the most robust state-level AI regulations in the U.S.

Further Reading

  • As Regulation Ban Looms, California Issues Frontier AI Study (GovTech • June 20, 2025)
  • New York passes a bill to prevent AI-fueled disasters (TechCrunch • June 13, 2025)
  • Trump administration rebrands AI Safety Institute (FedScoop • June 4, 2025)

———————————————————————————————————————

Global AI Policy

Developments in AI policy around the world

European Commission Launches Public Consultation on How to Classify High-Risk AI Systems Digital Policy Alert • June 6, 2025

On June 6th, the European Commission launched a public consultation to gather input on how to classify high-risk AI systems under the AI Act, with feedback open until July 18th. The goal is to inform upcoming guidelines that will clarify how to apply the Act’s rules, including obligations for developers and operators of AI that could impact health, safety, or fundamental rights. These systems must meet strict requirements on data quality, transparency, human oversight, and conformity assessments.

While Executive Vice President Henna Virkkunen signaled that parts of the AI Act might be postponed if essential guidance was not ready in time, in early July spokesperson Thomas Regnier made clear that the Commission would not pause the Act’s implementation or enforcement, despite a potential delay in implementing the voluntary General Purpose AI (GPAI) code of practice.

June Global Digital Policy Roundup Tech Policy Press • July 7, 2025

In June, several countries made notable progress on AI regulation and oversight. In Europe, the European Parliament proposed criminalizing the use of AI for child sexual abuse material, while German data protection authorities deemed the DeepSeek AI app illegal due to privacy violations arising from mass personal data transfers to China. The European Commission advanced the AI Act, launching consultations to classify high-risk AI systems and select experts for the Act’s implementation. Meanwhile, countries such as China, South Korea, and Japan moved forward with national AI legislation, ethical frameworks, and technical standards, emphasizing accountability, copyright protections, and secure infrastructure. The UK and Canada also advanced AI oversight strategies, with the UK investigating AI’s role in media and telecommunications and Canada probing AI-driven pricing. Internationally, the G7 countries adopted a declaration on AI for prosperity which establishes shared principles for trustworthy AI, and the Council of Europe launched a risk assessment methodology for AI systems. Across jurisdictions, AI’s impact on privacy, content generation, moderation, and biometric data use emerged as a central concern in legislative, regulatory, and enforcement efforts.

UK High Court Warns Lawyers After AI-Generated Fake Citations Found in Multiple Cases The Guardian • June 6, 2025

The UK High Court has issued a warning to senior lawyers after numerous citations to fake case law, either certainly or likely generated by AI tools, were submitted in two recent cases. In one case, 18 out of 45 citations were found to be fake, while the other included five “phantom” references, leading to monetary penalties and a negligence ruling against the law center and lawyers involved. Dame Victoria Sharp, president of the King’s Bench Division of the High Court, highlighted the severe risks AI misuse poses to the integrity of the justice system and called on legal regulators and firms to act quickly. She stressed that AI can generate convincing but false outputs, and that misuse may result in sanctions, contempt of court, or even referral to the police for criminal investigation. Legal leaders echoed the warning, emphasizing that while AI can support legal work, its outputs must be carefully verified for accuracy.

Further Reading

  • G7 governments adopted declaration on AI for prosperity (Digital Policy Alert • June 17, 2025)
  • European Commission’s Joint Research Centre adopted the generative Artificial Intelligence outlook report exploring the intersection of technology, society, and policy (Digital Policy Alert • June 13, 2025)
  • Italy’s DPA reaffirms ban on Replika over AI and children’s privacy concerns (IAPP • June 11, 2025)
  • UK’s new AI framework puts culture before code (ComputerWorld • June 5, 2025)

———————————————————————————————————————

AI Incidents

Recent events exemplifying AI risks and their consequences

Senators Urge Meta to Curb AI Chatbots Following Reports of Bots Posing as Therapists 404 Media • June 9, 2025

Senators Cory Booker (D-NJ), Peter Welch (D-VT), Adam Schiff (D-CA), and Alex Padilla (D-CA) have called on Meta to investigate and restrict its AI chatbots after reports revealed they falsely present themselves as licensed therapists to users seeking mental health support. In a letter to Meta executives, the senators cited reports by various media sources, including 404 Media, which found that chatbots, when asked for credentials, supplied fake license numbers and degrees, and stated that their own staff confirmed the bots’ deceptive behavior. They also referenced a Wall Street Journal investigation showing Meta’s chatbots engaging in sexually explicit conversations with underage users, raising broader concerns about the company prioritizing engagement over user safety. The senators urged Meta to address what they called “blatant deception” and harmful AI practices.

UK Doctors Accuse NHS England of Misusing Patient Data to Train AI Politico • June 6, 2025

Doctors in the UK are preparing a formal complaint to the Information Commissioner’s Office, alleging that NHS England unlawfully used patient data—originally collected for COVID-19 research—to train its generative AI model, Foresight, without proper consent or oversight. The data, covering 57 million people, was repurposed for broader AI training without approval from the required Professional Advisory Group, raising serious legal and ethical concerns. Although NHS England paused the project and launched an internal audit, doctors argue this is insufficient and want an independent investigation, stricter governance over AI use, and a commitment to consult clinicians on future data use. The controversy comes amid political pressure to open up NHS data for innovation, with critics warning against sidelining patient privacy and established safeguards.

A Month of Copyright Rulings and Complaints • June 2025

A federal judge in California ruled that Anthropic likely violated copyright law by pirating authors’ books to create a massive dataset and digital library, but also determined that training its AI on those books without permission qualifies as transformative fair use under copyright law. This complex decision, part of a broader wave of lawsuits against AI companies, is seen as unfavorable for authors, artists, and creators. The case involves authors suing Anthropic for using full copies of their books from pirated sources and physical scans to train its Claude AI models.

Another recent summary judgment ruling, in a similar AI training case, came down in Meta’s favor, though the court criticized the plaintiff authors’ lawyers for making the wrong legal arguments, underscoring the courts’ complex stance on AI and copyright. Meanwhile, other high-profile cases, including those involving Stability AI, Microsoft, and Midjourney, involve ongoing disputes over the use of copyrighted works for AI training. Together, these cases demonstrate the types of legal challenges content creators and AI developers will face in applying traditional copyright frameworks to AI use cases.

Additionally, OpenAI is pushing back against a court order that requires it to preserve all ChatGPT output logs—including deleted chats and API interactions—after news organizations suing over copyright infringement accused the company of destroying evidence.

Further Reading

  • New Hampshire jury acquits consultant behind AI robocalls mimicking Biden on all charges (Associated Press • June 13, 2025)
  • Wikipedia Pauses AI-Generated Summaries After Editor Backlash (404 Media • June 11, 2025)
  • Reddit sues AI company Anthropic for allegedly ‘scraping’ user comments to train chatbot Claude (Associated Press • June 4, 2025)
  • Unlicensed law clerk fired after ChatGPT hallucinations found in filing (ArsTechnica • June 2, 2025)

———————————————————————————————————————

AI Literacy

Initiatives, programs, and policies improving public understanding of AI

Ohio State Launches AI Fluency Initiative to Integrate AI Studies Across All Undergraduate Programs EdScoop • June 6, 2025

Ohio State University has announced it will integrate artificial intelligence studies into all undergraduate programs starting this fall through its new AI Fluency initiative. The effort aims to ensure that students, regardless of their major, gain foundational skills in using AI tools and understanding their ethical implications. The initiative will incorporate lessons on generative AI basics into a required general education seminar, establish mandatory and optional workshops on different AI tools and applications, and create an open course covering essential skills, from prompt engineering to understanding AI’s impact on society. University President Walter “Ted” Carter Jr. emphasized that the initiative prepares students to lead in an AI-driven future and supports Ohio’s long-term competitiveness in the evolving workforce.

European Data Protection Board Issues Guidelines on AI Security, Law, and Data Protection Compliance Digital Policy Alert • June 5, 2025

On June 5th, the European Data Protection Board (EDPB) released guidelines aimed at strengthening legal and compliance expertise in AI security and data protection. Developed under the Support Pool of Experts programme, the guidelines serve as a comprehensive training resource for data protection officers, addressing skill gaps through practical case studies aligned with the GDPR, the AI Act, and other EU regulations. The 15-hour self-study module guides professionals through the entire lifecycle of AI systems—from development to decommissioning—highlighting legal challenges and compliance strategies at each stage. The goal is to equip privacy professionals with the tools to ensure responsible and lawful use of AI technologies within their organizations.

On the same day, the EDPB also published a training curriculum on the secure deployment of AI systems trained with or processing personal data. The curriculum is primarily aimed at those working in cybersecurity, but the EDPB also highlights its usefulness to other professionals, including legal experts and management boards, noting that the curriculum can help establish “a solid foundation and shared vocabulary” within organizations deploying AI systems.

Santa Clara University Announces Interdisciplinary AI Master’s Program EdScoop • June 13, 2025

Santa Clara University’s School of Engineering has launched a new interdisciplinary master’s program in artificial intelligence aimed at preparing students to develop, implement, and manage AI systems. The program features two tracks—software and hardware—with specialized focuses on computation, algorithms, robotics, and AI chip design. It also emphasizes ethical training and offers hands-on learning opportunities with Silicon Valley companies. University leaders say the program is designed to equip students with both technical expertise and a strong understanding of AI’s societal impact, aligning with growing demand for AI skills in the workforce.

Further Reading

  • Mississippi partners with tech giant Nvidia for AI education program (Associated Press • June 20, 2025)
  • Nancy Mace reintroduces federal AI training bill (FedScoop • June 6, 2025)

For the latest updates on EqualAI’s AI Literacy Initiative, check out our website and subscribe to our podcast.

———————————————————————————————————————

AI Research

Important takeaways from the latest research and polling about AI

Ernst & Young Reports GenAI Adoption Surges Ahead of Governance ComputerWorld • June 5, 2025

GenAI adoption is rapidly accelerating across enterprises—around 75% of companies now use genAI—yet only about a third have instituted responsible governance frameworks, according to a global Ernst & Young survey of 975 C‑suite executives. While nearly all firms are actively working with genAI and more than half are integrating advanced forms like agentic systems and synthetic data, awareness of associated risks, such as bias, privacy, and regulatory compliance, lags considerably, with only 18% of CEOs reporting strong fairness controls and 14% claiming regulatory alignment. Executives acknowledge this governance gap, and approximately half are making major investments to close it through measures such as defining roles, improving transparency, and boosting training.

Mary Meeker’s Tech Report Highlights Rapid Growth and Broad Impact of AI Adoption Forbes • June 6, 2025

Mary Meeker’s latest 360-page report highlights the explosive global growth and transformative impact of artificial intelligence, especially ChatGPT, whose adoption rate has surpassed even that of Google Search. The report outlines how AI is evolving faster than any previous technological wave, with ChatGPT’s popularity, developer engagement, and monetization growth signaling a fundamental shift in internet use. Meeker notes a 260% annual increase in AI training data and a 360% rise in compute power since 2010, while forecasting widespread adoption of AI agents in business, customer service, and beyond. The report emphasizes the rapid transition to multimodal models capable of understanding and generating text, audio, image, and video, and projects that AI will soon become as indispensable as the internet. The report also discusses increasing corporate investments in AI infrastructure, global AI usage trends (with India leading), China’s AI advancements in military and strategic tech, and the looming arrival of humanoid AI interfaces, suggesting the dawn of a new AI-native generation and digital era.

PwC Finds GenAI Increased Worker Value, Wages, and Productivity Globally ComputerWorld • June 3, 2025

A new PwC study finds that genAI is significantly boosting productivity and transforming jobs rather than replacing them, particularly in software development, where AI enhances worker capabilities and drives business value. Analyzing nearly 1 billion global job ads, PwC found that industries heavily exposed to genAI have tripled revenue per worker since 2022 and their wages are rising twice as fast. Despite predictions from some tech leaders about mass job loss, the report shows AI is redefining work by augmenting human roles rather than automating them entirely. The number of “automatable” and “augmentable” jobs is growing across all industries, with upskilling likely influencing the former, and degree requirements for AI-exposed jobs are declining as employers prioritize AI skills over formal education.

For a contrasting study, see Large Language Models, Small Labor Market Effects (National Bureau of Economic Research • May 2025), concluding that AI chatbots are having little impact on productivity or efficiency in the workplace, despite their rapid uptake.  

Further Reading

  • AI Makes Research Easy. Maybe Too Easy. (Wall Street Journal • June 26, 2025)
  • Anthropic study: Leading AI models show up to 96% blackmail rate against executives (VentureBeat • June 20, 2025)
  • AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums (404 Media • June 17, 2025)
  • Just add humans: Oxford medical study underscores the missing link in chatbot testing (VentureBeat • June 13, 2025)