Rising threat of deepfake pornography for women

As deepfake pornography becomes an increasing threat to women online, both international and domestic lawmakers face difficulties in creating effective protections for victims. The issue has gained prominence through cases like that of Amy Smith, a student in Paris who was targeted with manipulated nude images and harassed by an anonymous perpetrator. Despite reporting the crime to multiple authorities, Smith found little support due to the complexities of tracking faceless offenders across borders.

Recent data shows that deepfake technology is used predominantly for malicious ends, with pornographic content accounting for 98% of deepfake videos online. The FBI has identified a rise in “sextortion” schemes, in which altered images are used for blackmail. Public awareness of these crimes is often heightened by high-profile cases, but most victims are not celebrities and face immense challenges in seeking justice.

Efforts are underway to address these issues through new legislation. In the US, proposed bills aim to hold perpetrators accountable and require the prompt removal of deepfake content from the internet. Additionally, President Biden’s recent executive order calls for the development of technology to detect and track deepfake images. In Europe, the AI Act introduces regulations for AI systems but faces criticism for its limited scope. While these measures represent progress, experts caution that they may not fully prevent future misuse of deepfake technology.

Intuit to cut 1,800 jobs, focus on AI investments

Intuit, the parent company of TurboTax, has announced plans to reduce its workforce by 10%, affecting approximately 1,800 jobs. This move comes as Intuit shifts its focus towards enhancing its AI-powered tax preparation software and other financial tools.

The company intends to close two sites, in Edmonton, Canada, and Boise, Idaho, while aiming to rehire for new positions primarily in engineering, product development, and customer-facing roles.

CEO Sasan Goodarzi explained that while 300 roles will be eliminated to streamline operations, another 80 technology positions will be consolidated across locations such as Atlanta, Bengaluru, and Tel Aviv.

This restructuring effort is expected to incur costs between $250 million and $260 million, with significant charges anticipated in the fourth quarter of this year.

Despite the layoffs, Intuit plans to ramp up its investments in generative AI and expand its market presence, targeting regions including Canada, the United Kingdom, and Australia. Goodarzi expressed confidence in growing the company’s headcount beyond fiscal 2025, following recent positive financial performance and increased demand for its AI-integrated products.

Healthcare experts demand transparency in AI use

Healthcare professionals, including researchers and clinicians, are keen to incorporate AI into their daily work but demand greater transparency regarding its application. A survey by Elsevier reveals that 94% of researchers and 96% of clinicians believe AI will accelerate knowledge discovery, while a similar proportion sees it boosting research output and reducing costs. Both groups, however, stress the need for quality content, trust, and transparency before they fully embrace AI tools.

The survey, involving 3,000 participants across 123 countries, indicates that 87% of respondents think AI will enhance overall work quality, and 85% believe it will free up time for higher-value projects. Despite these positive outlooks, there are significant concerns about AI’s potential misuse. Specifically, 95% of researchers and 93% of clinicians fear that AI could be used to spread misinformation. In India, 82% of doctors worry about overreliance on AI in clinical decisions, and 79% are concerned about societal disruptions like unemployment.

To address these issues, 81% of researchers and clinicians expect to be informed if the tools they use depend on generative AI. Moreover, 71% want assurance that AI-dependent tools are based on high-quality, trusted data sources. Transparency in peer-review processes is also crucial, with 78% of researchers and 80% of clinicians expecting to know if AI influences manuscript recommendations. These insights underscore the importance of transparency and trust in the adoption of AI in healthcare.

Biden administration assembles AI expert team

President Joe Biden has assembled a team of lawyers, engineers, and national security specialists to develop standards for the training and deployment of AI across various industries. The team, tasked with creating AI guardrails, aims to identify AI-generated images, determine suitable training data, and prevent China from accessing essential AI technology. They are also collaborating with foreign governments and Congress to align AI approaches globally.

Laurie E. Locascio, Director of the National Institute of Standards and Technology (NIST), oversees the institute’s AI research, biotechnology, quantum science, and cybersecurity. She has requested an additional $50 million to fulfil the institute’s responsibilities under Biden’s AI executive order.

Alan Estevez, Under Secretary of Commerce for Industry and Security, is responsible for preventing adversaries like China, Russia, Iran, and North Korea from obtaining semiconductor technology crucial for AI systems. Estevez has imposed restrictions on the sale of advanced computer chips to China and is encouraging US allies to do the same.

Elizabeth Kelly, Director of the AI Safety Institute, leads the team at NIST in developing AI tests, definitions, and voluntary standards as per Biden’s directive. She facilitated data and technology sharing agreements with over 200 companies, civil society groups, and researchers, and represented the US at an AI summit in South Korea.

Elham Tabassi, Chief Technology Officer of the AI Safety Institute, focuses on identifying AI risks and testing the most powerful AI models. She has been with NIST since 1999, working on machine learning and developing voluntary guidelines to mitigate AI risks. Saif M. Khan, Senior Adviser to the Secretary for Critical and Emerging Technologies, coordinates AI policy activities across the Commerce Department, including export controls and copyright guidance for AI-assisted inventions. Khan acts as a key liaison between the department and Congress on AI issues.

Chinese businesses are the biggest adopters of generative AI, report says

A new survey reveals that China is at the forefront of adopting generative AI (GenAI), the technology that can generate images, text and video in response to prompts. Conducted by AI and analytics software company SAS and Coleman Parkes Research, it found that 83% of Chinese respondents are using generative AI. 

When it comes to full implementation of GenAI technologies, the United States leads with 24%, compared to China’s 19% and the United Kingdom’s 11%. The industries surveyed included banking, telecommunications, insurance, healthcare, manufacturing, retail, and energy, with banking and telecommunications showing the highest levels of GenAI integration and use.

OpenAI’s recent decision to block users in China from accessing ChatGPT is not expected to have a drastic effect on adoption, as offerings from Chinese companies such as SenseTime and Baidu are expected to replace ChatGPT. SAS in fact expects Chinese adoption to accelerate as competition lowers the cost of GenAI for businesses.

The SAS report also highlighted that China leads the world in continuous automated monitoring (CAM), which involves collecting and analysing user data, behaviour, and communications. Udo Sglavo, vice president of applied AI and modelling at SAS, noted that this raises concerns about privacy infringements. Although regulation still lags behind the implementation of AI, companies are increasingly emphasising their own privacy policies to accompany the rollout of their AI tools. Apple’s recent partnership with OpenAI, for instance, includes a focus on AI privacy in the integration of ChatGPT into Siri.

US voters prefer cautious AI regulation over China race

A recent poll by the AI Policy Institute has shed light on strong public opinion in the United States regarding the regulation of AI.

Contrary to claims from the tech industry that strict regulations could hinder competition with China, a majority of American voters prioritise safety and control over the rapid development of AI. The poll reveals that 75% of both Democrats and Republicans prefer a cautious approach to AI development to prevent its misuse by adversaries.

The debate underscores growing concerns about national security and technological competitiveness. While China leads in AI patents, with over 38,000 registered compared to the US’s 6,300, Americans seem wary of sacrificing regulatory oversight in favour of expedited innovation.

Most respondents advocate for stringent safety measures and testing requirements to mitigate potential risks associated with powerful AI technologies.

Moreover, the poll highlights widespread support for restrictions on exporting advanced AI models to countries like China, reflecting broader apprehensions about technology transfer and national security. Despite the absence of comprehensive federal AI regulation in the US, states like California have begun to implement their own measures, prompting varied responses from tech industry leaders and policymakers alike.

EU’s AI Act influences New Zealand’s digital strategy

As governments worldwide grapple with AI regulation and digital identity strategies, many are looking to the EU for guidance. In New Zealand, the EU’s AI Act and EUDI wallet program serve as valuable models. Dr Nessa Lynch, an expert on emerging technology regulation, highlights the need for legal and policy safeguards to ensure AI development prioritises public interests over commercial ones. She argues that the EU’s AI Act, framed as product safety legislation, protects people from high-risk AI uses and promotes trustworthy AI. However, she notes the controversial exceptions for law enforcement and national security.

Lynch emphasises that regulation must balance innovation and trust. For New Zealand, adopting a robust regulatory framework is crucial for fostering public trust in AI. The current gaps in its privacy and data protection laws, along with unclear AI usage guidelines, could hinder innovation and public confidence. Lynch stresses the importance of a people-centred approach to regulation, ensuring AI is used responsibly and ethically.

Similarly, New Zealand’s digital identity strategy is evolving alongside its AI regulation. The recent launch of the New Zealand Trust Framework Authority aims to verify digital identity service providers. Professor Markus Luczak-Roesch from Victoria University of Wellington highlights the transformative potential of digital ID, which must be managed in line with national values. He points to Estonia and Norway as models for integrating digital ID with robust data infrastructure and ethical AI development, stressing the importance of avoiding technologies that may carry unethical components or incompatible values.

The National Education Association approves AI policy to guide educators

The US National Education Association (NEA) Representative Assembly (RA) delegates have approved the NEA’s first policy statement on the use of AI in education, providing educators with a roadmap for the safe, effective, and accessible use of AI in classrooms.

Since the fall of 2023, a task force of teachers, education support professionals, higher-ed faculty, and other stakeholders has been diligently working on this policy. Their efforts resulted in a 6-page policy statement, which RA delegates reviewed during an open hearing on 24 June and overwhelmingly approved on Thursday.

A central tenet of the new policy is that students and educators must remain at the heart of the educational process, with AI supporting rather than supplanting the human connection essential for inspiring and guiding students. The policy highlights that while AI can enhance education, it must be used responsibly, with a focus on protecting data, ensuring equitable access, and providing opportunities for learning about AI.

The task force identified several opportunities AI presents, such as customising instructional methods for students with disabilities and making classrooms more inclusive. However, they also acknowledged risks, including potential biases due to the lack of diversity among AI developers and the environmental impact of AI technology. It’s crucial to involve traditionally marginalised groups in AI development and policy-making to ensure inclusivity. The policy clarifies that AI shouldn’t be used to make high-stakes decisions like class placements or graduation eligibility.

Why does this matter?

The policy underscores the importance of comprehensive professional learning for educators on AI to ensure its ethical and effective use in teaching. More than 7 in 10 K-12 teachers have never received professional learning on AI. It also raises concerns about exacerbating the digital divide, emphasising that all students should have access to cutting-edge technology and educators skilled in its use across all subjects, not just in computer science.

Washington Post launches AI chatbot for climate queries

The Washington Post has introduced a new AI-driven chatbot named Climate Answers, designed to respond to user inquiries about climate issues using information from its articles. The undertaking underscores the Post’s broader strategy to leverage AI to enhance user engagement and accessibility to its journalistic content.

Chief Technology Officer Vineet Khosla highlighted that while the chatbot focuses solely on climate queries, there are plans to expand its capabilities to cover other topics. Climate Answers was developed collaboratively by the Post’s product, engineering, and editorial teams, building on external AI technology, including OpenAI’s models and Meta’s Llama.

The chatbot operates by sourcing responses from a custom large language model that synthesises information from multiple Washington Post articles on climate. Crucially, the Post ensures that all answers provided by Climate Answers are grounded in its verified journalism, prioritising accuracy and reliability.
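
The approach described here is commonly known as retrieval-augmented generation: relevant article excerpts are retrieved first, and the model is instructed to answer only from them. Below is a minimal sketch of that general pattern in Python; it is not the Post’s actual code, and the snippet corpus, the naive keyword retriever, and the model choice are all illustrative assumptions.

```python
# Minimal retrieval-grounded Q&A sketch (illustrative; not the Post's system).
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

# Hypothetical stand-in for a corpus of vetted climate-article excerpts.
ARTICLE_SNIPPETS = [
    "Global average temperatures in 2023 were the highest ever recorded.",
    "Heat pumps can sharply cut household heating emissions.",
]

def retrieve(question: str, snippets: list[str], k: int = 2) -> list[str]:
    """Rank excerpts by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(snippets, key=lambda s: -len(q_words & set(s.lower().split())))
    return ranked[:k]

def answer(question: str) -> str:
    client = OpenAI()
    context = "\n\n".join(retrieve(question, ARTICLE_SNIPPETS))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            # Constrain the model to the retrieved excerpts so answers stay
            # grounded in the source material rather than model recall.
            {"role": "system",
             "content": "Answer ONLY from the excerpts below. If they do not "
                        "contain the answer, say so.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How unusual were 2023 global temperatures?"))
```

Constraining the prompt to retrieved excerpts, and telling the model to decline when they are insufficient, is what keeps answers tied to published reporting rather than to the model’s general training data.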

Why does it matter?

Climate Answers is part of the Post’s broader experimentation with integrating AI into its platform, which includes recent developments like AI-generated article summaries. The goal is to enhance user experience and engagement, particularly among younger readers who may prefer summarised content as a gateway to deeper exploration of news stories.

Looking ahead, the Washington Post remains open to partnerships that expand the reach of its journalism while maintaining fairness and integrity in content distribution. As the media landscape evolves, the Post monitors user interaction metrics closely to gauge the impact of AI-driven tools on audience engagement and content consumption habits.

OpenAI and Arianna Huffington fund AI health coach development

OpenAI and Arianna Huffington are teaming up to fund the development of an AI health coach through Thrive AI Health, aiming to personalise health guidance using scientific data and personal health metrics shared by users. The initiative, detailed in a Time magazine op-ed by OpenAI CEO Sam Altman and Huffington, seeks to leverage AI advancements to provide insights and advice across sleep, nutrition, fitness, stress management, and social connection.

DeCarlos Love, a former Google executive with experience in wearables, has been appointed CEO of Thrive AI Health. The company has also formed research partnerships with institutions like Stanford Medicine and the Rockefeller Neuroscience Institute to bolster its AI-driven health coaching capabilities.

While AI-powered health coaches are gaining popularity, concerns over data privacy and the potential for misinformation persist. Thrive AI Health aims to support users with personalised health tips, targeting individuals lacking access to immediate medical advice or specialised dietary guidance.

Why does this matter?

The development of AI in healthcare promises significant advancements, including accelerating drug development and enhancing diagnostic accuracy. However, challenges remain in ensuring the reliability and safety of AI-driven health advice, particularly in maintaining trust and navigating the limitations of AI’s capabilities in medical decision-making.