Rising threat of deepfake pornography for women

As deepfake pornography becomes an increasing threat to women online, both international and domestic lawmakers face difficulties in creating effective protections for victims. The issue has gained prominence through cases like that of Amy Smith, a student in Paris who was targeted with manipulated nude images and harassed by an anonymous perpetrator. Despite reporting the crime to multiple authorities, Smith found little support due to the complexities of tracking faceless offenders across borders.

Recent data shows that deepfake technology is overwhelmingly used for abuse, with an estimated 98% of deepfake videos online being pornographic. The FBI has identified a rise in “sextortion schemes”, in which altered images are used for blackmail. Public awareness of these crimes is often heightened by high-profile cases, but most victims are not celebrities and face immense challenges in seeking justice.

Efforts are underway to address these issues through new legislation. In the US, proposed bills aim to hold perpetrators accountable and require prompt removal of deepfake content from the internet. Additionally, President Biden’s recent executive order seeks to develop technology for detecting and tracking deepfake images. In Europe, the AI Act introduces regulations for AI systems but faces criticism for its limited scope. While these measures represent progress, experts caution that they may not fully prevent future misuse of deepfake technology.

Intuit to cut 1,800 jobs, focus on AI investments

Intuit, the parent company of TurboTax, has announced plans to reduce its workforce by 10%, affecting approximately 1,800 jobs. This move comes as Intuit shifts its focus towards enhancing its AI-powered tax preparation software and other financial tools.

The company intends to close its sites in Edmonton, Canada, and Boise, Idaho, while aiming to rehire for new positions primarily in engineering, product development, and customer-facing roles.

CEO Sasan Goodarzi explained that while 300 roles will be eliminated to streamline operations, another 80 technology positions will be consolidated into sites such as Atlanta, Bengaluru, and Tel Aviv.

This restructuring effort is expected to incur costs between $250 million and $260 million, with significant charges anticipated in the fourth quarter of this year.

Despite the layoffs, Intuit plans to ramp up its investments in generative AI and expand its market presence, targeting regions including Canada, the United Kingdom, and Australia. Goodarzi expressed confidence in growing the company’s headcount beyond fiscal 2025, following recent positive financial performance and increased demand for its AI-integrated products.

OpenAI and Los Alamos collaborate on AI research

OpenAI is partnering with Los Alamos National Laboratory, most famous for creating the first atomic bomb, to explore how AI can assist scientific research. The collaboration will evaluate how OpenAI’s latest model, GPT-4o, can support laboratory tasks, including using its voice assistant technology to aid scientists. This new initiative is part of OpenAI’s broader efforts to showcase AI’s potential in healthcare and biotech, alongside recent partnerships with companies like Moderna and Color Health.

However, the rapid advancement of AI has sparked concerns about its potential misuse. Lawmakers and tech executives have expressed fears that AI could be used to develop bioweapons. Earlier tests by OpenAI indicated that GPT-4 posed only a slight risk of aiding in creating biological threats.

Erick LeBrun, a research scientist at Los Alamos, emphasised the importance of this partnership in understanding both the benefits and potential dangers of advanced AI. He highlighted the need for a framework to evaluate current and future AI models, particularly concerning biological threats.

Healthcare experts demand transparency in AI use

Healthcare professionals, including researchers and clinicians, are keen to incorporate AI into their daily work but demand greater transparency regarding its application. A survey by Elsevier reveals that 94% of researchers and 96% of clinicians believe AI will accelerate knowledge discovery, while a similar proportion sees it boosting research output and reducing costs. Both groups, however, stress the need for quality content, trust, and transparency before they fully embrace AI tools.

The survey, involving 3,000 participants across 123 countries, indicates that 87% of respondents think AI will enhance overall work quality, and 85% believe it will free up time for higher-value projects. Despite these positive outlooks, there are significant concerns about AI’s potential misuse. Specifically, 95% of researchers and 93% of clinicians fear that AI could be used to spread misinformation. In India, 82% of doctors worry about overreliance on AI in clinical decisions, and 79% are concerned about societal disruptions like unemployment.

To address these issues, 81% of researchers and clinicians expect to be informed if the tools they use depend on generative AI. Moreover, 71% want assurance that AI-dependent tools are based on high-quality, trusted data sources. Transparency in peer-review processes is also crucial, with 78% of researchers and 80% of clinicians expecting to know if AI influences manuscript recommendations. These insights underscore the importance of transparency and trust in the adoption of AI in healthcare.

Biden administration assembles AI expert team

President Joe Biden has assembled a team of lawyers, engineers, and national security specialists to develop standards for the training and deployment of AI across various industries. The team, tasked with creating AI guardrails, aims to identify AI-generated images, determine suitable training data, and prevent China from accessing essential AI technology. They are also collaborating with foreign governments and Congress to align AI approaches globally.

Laurie E. Locascio, Director of the National Institute of Standards and Technology (NIST), oversees the institute’s AI research, biotechnology, quantum science, and cybersecurity. She has requested an additional $50 million to fulfil the institute’s responsibilities under Biden’s AI executive order.

Alan Estevez, Under Secretary of Commerce for Industry and Security, is responsible for preventing adversaries like China, Russia, Iran, and North Korea from obtaining semiconductor technology crucial for AI systems. Estevez has imposed restrictions on the sale of advanced computer chips to China and is encouraging US allies to do the same.

Elizabeth Kelly, Director of the AI Safety Institute, leads the team at NIST in developing AI tests, definitions, and voluntary standards as per Biden’s directive. She facilitated data and technology sharing agreements with over 200 companies, civil society groups, and researchers, and represented the US at an AI summit in South Korea.

Elham Tabassi, Chief Technology Officer of the AI Safety Institute, focuses on identifying AI risks and testing the most powerful AI models. She has been with NIST since 1999, working on machine learning and developing voluntary guidelines to mitigate AI risks.

Saif M. Khan, Senior Adviser to the Secretary for Critical and Emerging Technologies, coordinates AI policy activities across the Commerce Department, including export controls and copyright guidance for AI-assisted inventions. Khan acts as a key liaison between the department and Congress on AI issues.

US voters prefer cautious AI regulation over China race

A recent poll by the AI Policy Institute has shed light on strong public opinion in the United States regarding the regulation of AI.

Contrary to claims from the tech industry that strict regulations could hinder competition with China, a majority of American voters prioritise safety and oversight over rapid AI development. The poll reveals that 75% of both Democrats and Republicans prefer a cautious approach to AI development to prevent its misuse by adversaries.

The debate underscores growing concerns about national security and technological competitiveness. While China leads in AI patents, with over 38,000 registered compared to the US’s 6,300, Americans seem wary of sacrificing regulatory oversight in favour of expedited innovation.

Most respondents advocate for stringent safety measures and testing requirements to mitigate potential risks associated with powerful AI technologies.

Moreover, the poll highlights widespread support for restrictions on exporting advanced AI models to countries like China, reflecting broader apprehensions about technology transfer and national security. Despite the absence of comprehensive federal AI regulation in the US, states like California have begun to implement their own measures, prompting varied responses from tech industry leaders and policymakers alike.

Bumble fights AI scammers with new reporting tool

With scammers increasingly using AI-generated photos and videos on dating apps, Bumble has added a new feature that lets users report suspected AI-generated profiles. Users can now select ‘Fake profile’ and then choose ‘Using AI-generated photos or videos’, alongside other reporting options such as inappropriate content, underage users, and scams. By allowing users to report such profiles, Bumble aims to reduce the misuse of AI in creating misleading profiles.

In February this year, Bumble introduced the ‘Deception Detector’, which combines AI and human moderators to detect and eliminate fake profiles and scammers. Following this measure, Bumble has seen a 45% overall reduction in reported spam and scams. Another notable feature is Bumble’s ‘Private Detector’, an AI tool that blurs unsolicited nude photos.

Risa Stein, Bumble’s VP of Product, emphasised the importance of creating a safe space and stated, ‘We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously so our community feels confident in making connections.’

National Education Association approves AI policy to guide educators

The US National Education Association (NEA) Representative Assembly (RA) delegates have approved the NEA’s first policy statement on the use of AI in education, providing educators with a roadmap for the safe, effective, and accessible use of AI in classrooms.

A task force of teachers, education support professionals, higher-ed faculty, and other stakeholders had been working diligently on the policy since the fall of 2023. Their efforts resulted in a six-page policy statement, which RA delegates reviewed during an open hearing on 24 June and overwhelmingly approved on Thursday.

A central tenet of the new policy is that students and educators must remain at the heart of the educational process. AI should sustain the human connection essential for inspiring and guiding students. The policy highlights that while AI can enhance education, it must be used responsibly, with a focus on protecting data, ensuring equitable access, and providing opportunities for learning about AI.

The task force identified several opportunities AI presents, such as customising instructional methods for students with disabilities and making classrooms more inclusive. However, they also acknowledged risks, including potential biases due to the lack of diversity among AI developers and the environmental impact of AI technology. It’s crucial to involve traditionally marginalised groups in AI development and policy-making to ensure inclusivity. The policy clarifies that AI shouldn’t be used to make high-stakes decisions like class placements or graduation eligibility.

Why does this matter?

The policy underscores the importance of comprehensive professional learning for educators on AI to ensure its ethical and effective use in teaching. More than 7 in 10 K-12 teachers have never received professional learning on AI. It also raises concerns about exacerbating the digital divide, emphasising that all students should have access to cutting-edge technology and educators skilled in its use across all subjects, not just in computer science.

OpenAI and Arianna Huffington fund AI health coach development

OpenAI and Arianna Huffington are teaming up to fund the development of an AI health coach through Thrive AI Health, aiming to personalise health guidance using scientific data and personal health metrics shared by users. The initiative, detailed in a Time magazine op-ed by OpenAI CEO Sam Altman and Huffington, seeks to leverage AI advancements to provide insights and advice across sleep, nutrition, fitness, stress management, and social connection.

DeCarlos Love, a former Google executive with experience in wearables, has been appointed CEO of Thrive AI Health. The company has also formed research partnerships with institutions like Stanford Medicine and the Rockefeller Neuroscience Institute to bolster its AI-driven health coaching capabilities.

While AI-powered health coaches are gaining popularity, concerns over data privacy and the potential for misinformation persist. Thrive AI Health aims to support users with personalised health tips, targeting individuals lacking access to immediate medical advice or specialised dietary guidance.

Why does this matter?

The development of AI in healthcare promises significant advancements, including accelerating drug development and enhancing diagnostic accuracy. However, challenges remain in ensuring the reliability and safety of AI-driven health advice, particularly in maintaining trust and navigating the limitations of AI’s capabilities in medical decision-making.

Matlock denies AI bot rumours amid concerns over campaign image

Mark Matlock, a political candidate for the right-wing Reform UK party, has affirmed that he is indeed a real person, dispelling rumours that he might be an AI bot. The suspicions arose from a highly edited campaign image and his absence from critical events, prompting a thread on social media platform X that questioned his existence.

The speculation about AI involvement was not entirely implausible, especially considering that an AI company executive recently used an AI persona to run for Parliament in the UK, though it garnered only 179 votes. However, Matlock clarified that he was severely ill with pneumonia during the election period, rendering him unable to attend events. He provided the original campaign photo, explaining that only minor edits were made.

Why does this matter?

The incident highlights the broader implications of AI in politics. The 2024 elections in the US and elsewhere are already witnessing the impact of AI tools, from deepfake videos to AI-generated political ads. As the use of such technology grows, candidates must maintain transparency and authenticity to avoid similar controversies.