Does Section 230 of the US Communications Decency Act protect users or tech platforms?

Section 230 of the US Communications Decency Act is typically seen as shielding tech platforms from liability for content posted by their users. In a recent article, the Electronic Frontier Foundation argues that Section 230 also protects users' ability to participate in digital life.

The piece argues that repealing or altering Section 230 could inadvertently strengthen big tech firms by exposing smaller companies and startups to litigation costs they cannot bear. Without these protections, smaller services might crumble under expensive legal challenges, stifling innovation and reducing competition in the digital landscape.

Such a scenario would leave big tech with even greater market dominance, which opponents of Section 230 seem to overlook. Additionally, the article addresses the misconception that eliminating Section 230 would enhance content moderation.

It clarifies that the law enables platforms to implement and enforce their standards without fear of increased liability, encouraging responsible moderation. EFF’s article argues that by allowing users and platforms to self-regulate, Section 230 prevents the US government from overreaching into defining acceptable speech, upholding a cornerstone of democratic values.

For more information on these topics, visit diplomacy.edu.

Anduril confident in Trump-era defence priorities

Anduril, the AI-powered defence start-up founded by Palmer Luckey, is optimistic about the Trump administration’s approach to defence reform.

Company president Christian Brose said the administration’s focus on innovation aligns with Anduril’s work in low-cost autonomous military systems. The firm recently partnered with OpenAI to integrate advanced artificial intelligence into national security missions.

Brose, a former adviser to Senator John McCain, has long criticised traditional defence procurement processes and believes the administration’s willingness to do things differently presents a major opportunity.

The company is expanding its global footprint, with plans to build manufacturing facilities outside the United States. Australia has emerged as a key market, with Anduril’s AI intrusion detection software being trialled at RAAF Base Darwin, where US Marines rotate annually.

The firm is also bidding to produce solid rocket motors for Australia’s Guided Weapons and Explosive Ordnance Enterprise.

Its Ghost Shark autonomous underwater system, developed in collaboration with the Australian Defence Force, is moving towards large-scale production, with a dedicated facility planned in New South Wales.

Autonomous military technology is a growing focus under the AUKUS pact, which will see Australia invest heavily in nuclear-powered submarines with support from the United States and the United Kingdom.

Brose emphasised that both crewed and autonomous systems will play a role in modern defence strategies, with the advantage of autonomous platforms being their faster production, larger deployment scale, and lower cost.

Anduril’s continued expansion highlights the increasing demand for AI-driven defence solutions in a rapidly evolving global security landscape.


OpenAI unveils new image generator in ChatGPT

OpenAI has rolled out an image generator feature within ChatGPT, enabling users to create realistic images with improved accuracy. The new feature, available for all Plus, Pro, Team, and Free users, is powered by GPT-4o, which now offers distortion-free images and more accurate text generation.

OpenAI shared a sample image of a boarding pass, showcasing the advanced capabilities of the new tool.

Previously, image generation was available through DALL-E, but its results often contained errors and were easily identifiable as AI-generated. Now integrated into ChatGPT, the new tool allows users to describe images with specific details such as colours, aspect ratios, and transparent backgrounds.

The update aims to enhance creative freedom while maintaining a higher standard of image quality.

CEO Sam Altman praised the feature as a ‘new high-water mark’ for creative control, although he acknowledged the potential for some users to create offensive content.

OpenAI plans to monitor how users interact with this tool and adjust as needed, especially as the technology moves closer to artificial general intelligence (AGI).


Lawmakers demand probe into Trump team’s Signal breach

Top officials from the Trump administration inadvertently included a journalist in an encrypted Signal chat while discussing military plans, leading to concerns over a potential security breach.

The incident has prompted Democratic lawmakers to call for a congressional investigation into the mishandling of classified information. Although US law criminalises the misuse of such data, it remains uncertain whether legal provisions were violated in this case.

Signal is a widely trusted encrypted messaging app known for strong privacy protections. Instead of storing user messages on its servers, the service keeps data solely on users' devices, with an option to automatically delete conversations.

Unlike other platforms, Signal does not track user data, use ads, or affiliate with marketers. Its encryption is independent of any government, and cybersecurity experts consider it highly secure. However, if a device itself is compromised, messages within the app can still be accessed by hackers.

The app was co-founded by Moxie Marlinspike in 2012 and later supported by WhatsApp co-founder Brian Acton, who left WhatsApp over concerns regarding data privacy.

Signal is run by the non-profit Signal Foundation and has grown in popularity, especially among privacy advocates, journalists, and government agencies.

The European Commission and the US Senate have also endorsed its use. However, experts question whether it is appropriate for discussions involving national security matters, given the risk of mobile device vulnerabilities.

Signal saw a significant surge in users in 2021 after WhatsApp introduced a controversial privacy policy update.

Despite its reputation for security, the recent incident with Trump administration officials highlights concerns about the suitability of even the most encrypted platforms for handling sensitive government information.


AI powers Microsoft’s latest security upgrade

Microsoft has launched a new set of AI agents as part of its Security Copilot platform, aiming to automate key cybersecurity tasks like phishing detection, data protection, and identity management. The release includes six in-house agents and five developed with partners.

Among the tools is a phishing triage agent that can autonomously process routine alerts, freeing analysts to focus on advanced incidents.

Microsoft said its new AI-driven approach goes beyond traditional security platforms, using generative AI to prioritise threats, correlate data, and even recommend or execute responses.

The rollout also brings new capabilities to Microsoft Defender, Entra, and Purview, enhancing organisations’ ability to manage and secure AI systems.

While analysts welcome the move as a step forward in proactive cybersecurity, some warn that full reliance on one platform carries strategic risks like vendor lock-in and reduced flexibility.

Experts suggest a balanced approach that combines Microsoft’s core capabilities with specialised solutions for areas such as threat intelligence and cloud protection, helping organisations stay agile in a fast-evolving threat landscape.


AI physiotherapy service helps UK patients manage back pain

Lower back pain, one of the world’s leading causes of disability, has left hundreds of thousands of people in the UK stuck on long waiting lists for treatment. To address the crisis, the NHS is trialling a new solution: Flok Health, the first AI-powered physiotherapy clinic approved by the Care Quality Commission.

The app offers patients immediate access to personalised treatment plans through pre-recorded videos driven by artificial intelligence.

Created by former Olympic rower Finn Stevenson and tech expert Ric da Silva, Flok aims to treat straightforward cases that don’t require scans or hands-on intervention.

Patients interact with an AI-powered virtual physio, responding to questions that tailor the treatment pathway, with over a billion potential combinations. Unlike generative AI, Flok uses a more controlled system, eliminating the risk of fabricated medical advice.

The service has already launched in Scotland and is expanding across England, with ambitions to cover half the UK within a year. Flok is also adding treatment for conditions like hip and knee osteoarthritis, and women’s pelvic health.

While promising, the system depends on patients correctly following instructions, as the AI cannot monitor physical movements. Real physiotherapists are available to answer questions, but they do not provide live feedback during exercises.

Though effective for some, not all users find AI a perfect fit. Some, like the article’s author, prefer the hands-on guidance and posture corrections of human therapists.

Experts agree AI has potential to make healthcare more accessible and efficient, but caution that these tools must be rigorously evaluated, continuously monitored, and designed to support – not replace – clinical care.


DNA-testing firm 23andMe faces financial collapse

23andMe has filed for bankruptcy in the US after struggling with declining demand for its ancestry kits and a major data breach in 2023.

The firm, once valued at nearly $6 billion, has seen its market value plummet, with shares dropping 50% to just 88 cents after co-founder Anne Wojcicki resigned as CEO. The company will continue operating during the sale process, having secured $35 million in financing over the weekend.

Concerns have been raised about the fate of genetic data collected from customers, particularly as 23andMe has made multiple deals with pharmaceutical and biotech firms.

While the company insists the bankruptcy will not affect how data is managed, California’s attorney general has urged users to delete their information amid privacy concerns. Experts warn that while accounts can be deleted, some data may still exist in anonymised form.

The firm’s decline has been worsened by its inability to retain customers, as most users only purchase a kit once. The 2023 data breach, exposing the personal details of nearly 7 million users, further damaged its reputation, leading to a $30 million legal settlement.

Wojcicki, who had made several failed buyout attempts, has signalled her intention to bid again, but 23andMe has not disclosed any other potential buyers.


Canada warns of foreign election interference

Canada’s intelligence agency has warned that China and India are highly likely to interfere in the country’s general election on 28 April, with Russia and Pakistan also having the potential to do so.

The Canadian Security Intelligence Service (CSIS) stated that while previous interference attempts in the 2019 and 2021 elections did not alter the results, the country had been slow to respond at the time. Both China and India have denied previous allegations of meddling in Canada’s internal affairs.

Vanessa Lloyd, CSIS’s deputy director of operations, said hostile states are increasingly using AI to influence elections, with China being particularly likely to exploit such tools.

The warning comes amid tense diplomatic relations between Canada and Beijing, following China’s recent tariffs on $2.6 billion worth of Canadian agricultural products and Ottawa’s strong condemnation of China’s execution of four Canadian citizens on drug charges.

India has also been under scrutiny, with Canada expelling six Indian diplomats last year over allegations of involvement in a plot against Sikh separatists.

Lloyd stated that India has both the intent and capability to interfere in Canadian politics and communities, though the Indian diplomatic mission in Ottawa has yet to comment.

She added that while it is difficult to directly link foreign interference with election outcomes, such activities undermine public trust in Canada’s democratic institutions.


Gmail uses AI to find emails faster

Google has introduced a new AI feature in Gmail aimed at making email searches faster and more accurate.

Instead of simply listing messages by date or keyword, the updated system now considers user habits, including frequently opened emails and commonly contacted senders, to provide more relevant results.
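Google has not published how the new ranking works, but the general idea of blending recency with habit signals can be sketched as a simple weighted score. The weights, signal names, and sample data below are illustrative assumptions, not Google's actual formula:

```python
from datetime import datetime, timezone

def relevance_score(email, now, open_rate_by_sender, contact_freq):
    """Blend recency with user-habit signals into one ranking score.

    Weights and signals are illustrative assumptions only.
    """
    age_days = (now - email["received"]).days
    recency = 1.0 / (1.0 + age_days)              # newer mail scores higher
    sender = email["sender"]
    habit = open_rate_by_sender.get(sender, 0.0)  # how often the user opens this sender's mail
    contact = contact_freq.get(sender, 0.0)       # how often the user emails this sender
    return 0.5 * recency + 0.3 * habit + 0.2 * contact

now = datetime(2025, 3, 27, tzinfo=timezone.utc)
emails = [
    {"sender": "newsletter@example.com", "received": datetime(2025, 3, 25, tzinfo=timezone.utc)},
    {"sender": "colleague@example.com", "received": datetime(2025, 3, 1, tzinfo=timezone.utc)},
]
opens = {"colleague@example.com": 0.9}     # hypothetical per-sender open rates
contacts = {"colleague@example.com": 0.8}  # hypothetical contact frequency

ranked = sorted(emails, key=lambda e: relevance_score(e, now, opens, contacts), reverse=True)
# An older message from a frequent contact can outrank a fresh newsletter.
```

The point of the sketch is that a pure date sort would put the newsletter first, while a habit-weighted score surfaces the message the user is more likely to be looking for.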

The enhanced search feature is being rolled out globally for personal Gmail accounts and is accessible via the web, Android, and iOS apps.

Users can now toggle between the new ‘most relevant’ results and the traditional ‘most recent’ option. Google has also stated that it plans to extend this functionality to business users in the near future.

By using AI to refine email searches, Gmail aims to reduce the time users spend digging through their inboxes.

The update is also part of Google's broader strategy to integrate more intelligent tools across its suite of productivity apps, offering a smoother, more efficient experience for everyday users.


23andMe enters bankruptcy after failed takeover bids

Genetic testing company 23andMe has filed for Chapter 11 bankruptcy protection in the United States as part of efforts to sell the struggling business.

Co-founder and CEO Anne Wojcicki has resigned after multiple failed takeover attempts, with CFO Joe Selsavage stepping in as interim chief executive.

The company had previously cut 40% of its workforce and halted therapy development in a restructuring effort announced in November.

Wojcicki had been pushing for a buyout since April 2023 but faced repeated rejections from the board. Her most recent bid, valuing 23andMe at $11 million, was significantly lower than the company’s current $50 million market value.

Despite financial difficulties, the firm has secured a $35 million financing commitment and expects to continue operations during the sale process.

The company has faced mounting challenges, including a $30 million settlement for a 2023 data breach that exposed the personal information of 6.9 million customers.

With estimated liabilities between $100 million and $500 million, 23andMe’s future now depends on securing a buyer willing to salvage the once high-profile genetic testing firm.
