Healthcare professionals, including researchers and clinicians, are keen to incorporate AI into their daily work but demand greater transparency regarding its application. A survey by Elsevier reveals that 94% of researchers and 96% of clinicians believe AI will accelerate knowledge discovery, while a similar proportion sees it boosting research output and reducing costs. Both groups, however, stress the need for quality content, trust, and transparency before they fully embrace AI tools.
The survey, involving 3,000 participants across 123 countries, indicates that 87% of respondents think AI will enhance overall work quality, and 85% believe it will free up time for higher-value projects. Despite these positive outlooks, there are significant concerns about AI’s potential misuse. Specifically, 95% of researchers and 93% of clinicians fear that AI could be used to spread misinformation. In India, 82% of doctors worry about overreliance on AI in clinical decisions, and 79% are concerned about societal disruptions like unemployment.
To address these issues, 81% of researchers and clinicians expect to be informed if the tools they use depend on generative AI. Moreover, 71% want assurance that AI-dependent tools are based on high-quality, trusted data sources. Transparency in peer-review processes is also crucial, with 78% of researchers and 80% of clinicians expecting to know if AI influences manuscript recommendations. These insights underscore the importance of transparency and trust in the adoption of AI in healthcare.
A recent poll by the AI Policy Institute has shed light on strong public opinion in the United States regarding the regulation of AI.
Contrary to claims from the tech industry that strict regulations could hinder competition with China, a majority of American voters prioritise safety and control over the rapid development of AI. The poll reveals that 75% of both Democrats and Republicans prefer a cautious approach to AI development to prevent its misuse by adversaries.
The debate underscores growing concerns about national security and technological competitiveness. While China leads in AI patents, with over 38,000 registered compared to the US’s 6,300, Americans seem wary of sacrificing regulatory oversight in favour of expedited innovation.
Most respondents advocate for stringent safety measures and testing requirements to mitigate potential risks associated with powerful AI technologies.
Moreover, the poll highlights widespread support for restrictions on exporting advanced AI models to countries like China, reflecting broader apprehensions about technology transfer and national security. Despite the absence of comprehensive federal AI regulation in the US, states like California have begun to implement their own measures, prompting varied responses from tech industry leaders and policymakers alike.
In response to scammers using AI-generated photos and videos on dating apps, Bumble has added a new feature that lets users report suspected AI-generated profiles. Users can now select ‘Fake profile’ and then choose ‘Using AI-generated photos or videos’ among other reporting options, such as inappropriate content, underage users, and scams. By allowing users to report such profiles, Bumble aims to curb the misuse of AI in creating misleading profiles.
In February this year, Bumble introduced the ‘Deception Detector’, which combines AI and human moderators to detect and eliminate fake profiles and scammers. Since that measure was introduced, Bumble has seen a 45% overall reduction in reported spam and scams. Another notable safety feature is Bumble’s ‘Private Detector’, an AI tool that blurs unsolicited nude photos.
Risa Stein, Bumble’s VP of Product, emphasised the importance of creating a safe space and stated, ‘We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously so our community feels confident in making connections.’
The EU has designated the adult content platform XNXX as a Very Large Online Platform (VLOP) under its Digital Services Act (DSA), citing its average of 45 million monthly users in the EU. The designation comes with stringent requirements for the platform, including data sharing with authorities and researchers, risk management, and external independent audits.
Under the DSA rules, XNXX has four months to implement measures to protect users, especially minors, and address systemic risks associated with its services. Failure to provide accurate information can result in significant fines imposed by the European Commission.
India’s data protection law, the Digital Personal Data Protection Act (DPDPA), must hold platforms accountable for child safety, according to a panel discussion hosted by the Citizen Digital Foundation (CDF). The webinar, ‘With Alice, Down the Rabbit Hole’, explored the challenges of online child safety and age assurance in India, highlighting the significant risks that subversive content and other online threats pose to children.
Nidhi Sudhan, the panel moderator, criticised tech companies for paying lip service to child safety while employing engagement-driven algorithms that can be harmful to children. YouTube was highlighted as a major concern, with CDF researcher Aditi Pillai noting the issues with its algorithms. Dhanya Krishnakumar, a journalist and parent, emphasised the difficulty of imposing age verification without causing additional harm, such as peer pressure and cyberbullying, and stressed the need for open discussions to improve digital literacy.
Aparajita Bharti, co-founder of The Quantum Hub and Young Leaders for Active Citizenship (YLAC), argued that India requires a different approach from the West, as many parents lack the resources to ensure online child safety. Arnika Singh, co-founder of Social & Media Matters, pointed out that India’s diversity necessitates context-specific solutions rather than one-size-fits-all policies.
The panel called for better accountability from tech platforms and more robust measures within the DPDPA. Nivedita Krishnan, director of law firm Pacta, warned that the DPDPA’s requirement for parental consent could unfairly burden parents with accountability for their children’s online activities. Chitra Iyer, co-founder and CEO of consultancy Space2Grow, highlighted the need for platforms to prioritise user safety over profit. Arnika Singh concluded that the DPDPA requires stronger enforcement mechanisms and should consider international models for better regulation.
The US Federal Trade Commission (FTC) and the Los Angeles District Attorney’s Office have banned the anonymous messaging app NGL from serving children under 18 due to rampant cyberbullying and threats.
The FTC’s latest action, part of a broader crackdown on companies mishandling consumer data or making exaggerated AI claims, also requires NGL to pay $5 million and implement age restrictions to prevent minors from using the app. NGL, which marketed itself as a safe space for teens, was found to have exploited its young users by sending them fake, anonymous messages designed to prey on their social anxieties.
The app then charged users for information about the senders, often providing only vague hints. The FTC lawsuit, which names NGL’s co-founders, highlights the app’s deceptive practices and its failure to protect users. The case against NGL is a notable example of FTC Chair Lina Khan’s focus on regulating digital data and holding companies accountable for AI-related misconduct.
The FTC’s action is part of a larger effort to protect children online, with states like New York and Florida also passing laws to limit minors’ access to social media. Regulatory pushes like this one aim to address growing concerns about the impact of social media on children’s mental health.
OpenAI’s ChatGPT, launched in 2022, has revolutionised the way people seek answers, shifting from traditional methods to AI-driven interactions. This AI chatbot, along with competitors like Anthropic’s Claude, Google’s Gemini, and Microsoft’s Copilot, has made AI a focal point in information retrieval. Despite these advancements, traditional search engines like Google remain dominant.
Google’s profits surged by nearly 60% due to increased advertising revenue from Google Search, and its global market share reached 91.1% in June, even as ChatGPT’s web visits declined by 12%.
Google is not only holding its ground but also leveraging AI technology to enhance its services. Analysts at Bank of America credit Gemini, Google’s AI, with contributing to the growth in search queries. By integrating Gemini into products such as Google Cloud and Search, Google aims to improve their performance, blending traditional search capabilities with cutting-edge AI innovations.
However, Google’s dominance faces significant legal challenges. A major antitrust case brought by the US Department of Justice, accusing the company of monopolising the digital search market, has concluded, with a verdict expected by late 2024.
Additionally, Google is contending with another antitrust lawsuit filed by the US government over alleged anticompetitive behaviour in the digital advertising space. These legal challenges could reshape the digital search landscape, potentially providing opportunities for AI chatbots and other emerging technologies to gain a stronger foothold in the market.
Singapore’s digital development minister, Josephine Teo, has expressed concerns about the future of AI governance, emphasising the need for an internationally agreed-upon framework. Speaking at the Reuters NEXT conference in Singapore, Teo highlighted that while Singapore is more excited than worried about AI, the absence of global standards could lead to a ‘messy’ future.
Teo pointed out the necessity for specific legislation to address challenges posed by AI, particularly the use of deepfakes during elections. She stressed that implementing clear and effective laws will be crucial as AI technology advances, in order to manage its impact on society and ensure responsible use.
Singapore’s proactive stance on AI reflects its commitment to balancing technological innovation with necessary regulatory measures. The country aims to harness the benefits of AI while mitigating potential risks, especially in critical areas like electoral integrity.
Microsoft has announced plans to provide Apple iOS devices to its employees in China so they can access authentication apps, since Google’s Android services are unavailable in the country. This move, part of Microsoft’s global Secure Future Initiative, aims to mitigate security risks highlighted by recent breaches, including a high-profile hack by Russian hackers earlier this year.
Bloomberg News first reported that Microsoft, starting in September, will instruct its employees in China to use Apple devices at the workplace. The decision is driven by the absence of the Google Play Store in China, which limits employees’ access to essential security apps like Microsoft Authenticator and Identity Pass.
A Microsoft spokesperson confirmed the shift, emphasising the need for reliable access to required security apps. The company, which has operated in China since 1992 and maintains a significant research and development centre there, will provide iPhone 15 models to employees currently using Android handsets across China, including Hong Kong.
Actor Morgan Freeman, renowned for his distinctive voice, recently addressed concerns over a video circulating on TikTok featuring a voice purportedly his own but created using AI. The video, depicting a day in his niece’s life, prompted Freeman to emphasise the importance of reporting unauthorised AI usage. He thanked his fans on social media for their vigilance in maintaining authenticity and integrity, underscoring the need to protect against such deceptive practices.
This isn’t the first time Freeman has encountered unauthorised use of his likeness. Previously, his production company’s EVP, Lori McCreary, encountered deepfake videos attempting to mimic Freeman, including one falsely depicting him firing her. Such incidents highlight the growing prevalence of AI-generated content, prompting discussions about its ethical implications and the need for heightened awareness.
Freeman’s post read: ‘Thank you to my incredible fans for your vigilance and support in calling out the unauthorized use of an A.I. voice imitating me. Your dedication helps authenticity and integrity remain paramount. Grateful. #AI #scam #imitation #IdentityProtection’
Freeman’s case joins a broader trend of celebrities, from Taylor Swift to Tom Cruise, facing similar challenges with AI-generated deepfakes. These instances underscore ongoing concerns about digital identity theft and the blurred lines between real and fabricated content in the digital age.