The death of 16-year-old Adam Raine has drawn renewed attention to the risks of teenagers using conversational AI without safeguards. His parents allege ChatGPT encouraged his suicidal thoughts, prompting a lawsuit against OpenAI and CEO Sam Altman in San Francisco.
The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring connections with therapists.
Executives said AI should support users rather than harm them. OpenAI has worked with doctors to train ChatGPT to avoid giving self-harm instructions and to redirect users to crisis hotlines. The company acknowledges that these safeguards can become less reliable in longer conversations, underscoring the need for stronger protections.
The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
South Korean company Samsung Electronics has integrated Microsoft’s Copilot AI assistant into its newest TVs and monitors, aiming to provide more personalised interactivity for users.
The technology will be available across its latest models, including the premium Micro RGB TV. With Copilot built directly into the displays, Samsung explained that viewers can use voice commands or a remote control to search, learn and engage with content more easily.
The company added that users can experience natural voice interaction for tailored responses, such as music suggestions or weather updates. Kevin Lee, executive vice president of Samsung’s display business, said the move sets ‘a new standard for AI-powered screens’ through open partnerships.
Samsung has confirmed its intention to expand collaborations with global AI firms to enhance services for future products.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Perplexity has announced Comet Plus, a new service that will pay premium publishers to provide high-quality news content as an alternative to clickbait. The company has not disclosed its roster of partners or payment structure, though reports suggest a pool of $42.5 million.
Publishers have long criticised AI services for exploiting their work without compensation. Perplexity, backed by Amazon founder Jeff Bezos, said Comet Plus will create a fairer system and reward journalists for producing trusted content in the era of AI.
The platform introduces a revenue model based on three streams: human visits, search citations, and agent actions. Perplexity argues this approach better reflects how people consume information today, whether by browsing manually, seeking AI-generated answers, or using AI agents.
The company stated that the initiative aims to rebuild trust between readers and publishers, while ensuring that journalism thrives in a changing digital economy. The initial group of publishing partners will be revealed later.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
ChatGPT is increasingly used as a travel assistant, with some travellers claiming it can save hundreds of pounds on flights. Finance influencer Casper Opala, who shares cost-saving tips online, said the AI tool helped him book a flight for £70 that had initially cost more than £700.
Opala shared a series of prompts that allow ChatGPT to identify hidden routes, budget airlines not listed on major platforms, and potential savings through alternative airports or separate bookings. He also suggested using the tool to monitor prices for several days or compare one-way fares with return tickets.
While many of these money-saving tricks have existed for years, ChatGPT condenses the process, gathering results in seconds. Opala says this efficiency makes it a strong starting point for finding cheaper travel deals.
Experts, however, warn that ChatGPT is not connected to live flight booking systems. TravelBook’s Laura Pomer noted that the AI can sometimes present inaccurate or outdated fares, meaning users should always verify results before booking.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The NFL has begun deploying Microsoft Copilot across all 32 clubs to support faster and more intelligent decision-making during games. Over 2,500 Surface Copilot+ devices have been distributed to coaches, analysts and staff for use on the sidelines and in the booth.
Teams now have access to AI-powered tools such as a Copilot-powered filter that quickly pulls up key moments, like penalties or fumbles, reducing the need to scrub through footage manually. Microsoft 365 Copilot also supports analysts with real-time trend spotting in Excel dashboards during games.
To ensure reliability, Microsoft has provided hard-wired carts for connectivity even when Wi-Fi drops. These systems are linked to secure Windows servers managed by the NFL, safeguarding critical game data under various stadium conditions.
Los Angeles Rams head coach Sean McVay said the team has embraced the changes, calling Copilot ‘a valuable tool’ for navigating the pressure of real-time decisions. NFL leadership echoed his optimism, framing AI as essential to the future of the sport.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Silicon Valley insiders are preparing to pour over $100 million into next year’s US midterm elections to influence AI policy. Leading the Future, a super-PAC backed by Andreessen Horowitz and Greg Brockman, aims to shape the rules governing AI and head off strict regulation.
Leading the Future is targeting states central to AI policy battles, such as California, New York, Illinois, and Ohio. The PAC intends to fund campaigns, run extensive social media ads, and back politicians who support innovation-friendly ‘guardrails’ rather than heavy-handed regulation.
The initiative draws inspiration from the crypto industry’s political playbook, which successfully backed candidates aligned with its interests.
The group’s structure combines federal and state PACs with a 501(c)(4) organisation, offering flexibility and influence over both major parties. High-profile backers include Marc Andreessen, Greg Brockman, Joe Lonsdale, and Ron Conway.
Their collective goal is to ensure AI development continues without regulatory barriers that could slow American innovation and job creation.
Silicon Valley’s strategy highlights the increasing role of tech money in politics, reflecting a shift in donor priorities. The PAC’s influence may become a decisive factor in shaping AI legislation, with potential implications for the industry and broader US policy debates.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Binance founder Changpeng ‘CZ’ Zhao shared his vision for crypto’s future, connecting digital assets with AI and recent policy changes. At WebX in Tokyo, CZ praised US crypto policy under Trump, highlighting stablecoin legislation such as the GENIUS Act while opposing central bank digital currencies.
He argued that embracing innovation is crucial to remaining competitive globally.
CZ predicted that crypto will become the natural medium of exchange for AI, bypassing traditional fiat, banks, and credit cards. He envisaged hundreds or thousands of AI agents per person, generating a surge of microtransactions via programmable blockchain networks.
According to CZ, blockchain APIs are better suited than banks’ systems for interfacing with AI-driven economic activity.
Since stepping down from Binance, CZ has focused on education and advisory work. His Giggle Academy already serves 50,000 children, aiming to digitise 18 years of schooling at a fraction of government costs.
He advises at least 12 governments on crypto regulation and adoption. He also plans to mentor founders and back early-stage projects through his investment firm EZ Labs, emphasising ethical practices and long-term value creation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Footage from Will Smith’s comeback tour has sparked claims that AI was used to alter shots of the crowd. Viewers noticed faces appearing blurred or distorted, along with extra fingers and oddly shaped hands in several clips.
Some accused Smith of boosting audience shots with AI, while others pointed to YouTube, which has been reported to apply AI upscaling without creators’ knowledge.
Guitarist and YouTuber Rhett Shull recently suggested the platform had altered his videos, raising concerns that artists might be wrongly accused of using deepfakes.
The controversy comes as the boundary between reality and fabrication grows increasingly blurred. AI has been reshaping how audiences perceive authenticity, from fake bands to fabricated images of music legends.
Singer SZA is among the artists criticising the technology, highlighting its heavy energy use and potential to undermine creativity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk’s xAI has filed a lawsuit in Texas accusing Apple and OpenAI of colluding to stifle competition in the AI sector.
The case alleges that both companies locked up markets to maintain monopolies, making it harder for rivals like X and xAI to compete.
The dispute follows Apple’s 2024 deal with OpenAI to integrate ChatGPT into Siri and other apps on its devices. According to the lawsuit, Apple’s exclusive partnership with OpenAI has prevented fair treatment of Musk’s products within the App Store, including the X app and xAI’s Grok app.
Musk previously threatened legal action against Apple over antitrust concerns, citing the company’s alleged preference for ChatGPT.
Musk, whose xAI acquired the social media platform X in a $45 billion all-stock deal earlier this year, is seeking billions of dollars in damages and a jury trial. The legal action highlights Musk’s ongoing feud with OpenAI’s CEO, Sam Altman.
Musk, a co-founder of OpenAI who left in 2018 after disagreements with Altman, has repeatedly criticised the company’s shift to a profit-driven model. He is also pursuing separate litigation in California against OpenAI and Altman over that transition.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Folk singer Emily Portman has become the latest artist targeted by fraudsters releasing AI-generated music in her name. Fans alerted her to a fake album called Orca appearing on Spotify and iTunes, which she said sounded uncannily like her style but was created without her consent.
Portman has filed copyright complaints, but says the platforms were slow to act, and she has yet to regain control of her Spotify profile. Other artists, including Josh Kaufman, Jeff Tweedy, Father John Misty, Sam Beam, Teddy Thompson, and Jakob Dylan, have faced similar cases in recent weeks.
Many of the fake releases appear to originate from the same source, using similar AI artwork and citing record labels with Indonesian names. The tracks are often credited to the same songwriter, Zyan Maliq Mahardika, whose name also appears on imitations of artists in other genres.
Industry analysts say streaming platforms and distributors are struggling to keep pace with AI-driven fraud. Tatiana Cirisano of Midia Research noted that fraudsters exploit passive listeners to generate streaming revenue, while services themselves are turning to AI and machine learning to detect impostors.
Observers warn the issue is likely to worsen before it improves, drawing comparisons to the early days of online piracy. Artists and rights holders may face further challenges as law enforcement attempts to catch up with the evolving abuse of AI.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!