FTC warns of risks in big tech AI partnerships

The Federal Trade Commission (FTC) has raised concerns about the competitive risks posed by collaborations between major technology companies and developers of generative AI tools. In a staff report issued Friday, the agency pointed to partnerships such as Microsoft’s investment in OpenAI and similar alliances involving Amazon, Google, and Anthropic as potentially harmful to market competition, according to TechCrunch.

FTC Chair Lina Khan warned that these collaborations could create barriers for smaller startups, limit access to crucial AI tools, and expose sensitive information. ‘These partnerships by big tech firms can create lock-in, deprive start-ups of key AI inputs, and reveal sensitive information that undermines fair competition,’ Khan stated.

The report specifically highlights the role of cloud service providers like Microsoft, Amazon, and Google, which provide essential resources such as computing power and technical expertise to AI developers. These arrangements could restrict smaller firms’ access to these critical resources, raise business switching costs, and allow cloud providers to gain unique insights into sensitive data, potentially stifling competition.

Microsoft defended its partnership with OpenAI, emphasising its benefits to the industry. ‘This collaboration has enabled one of the most successful AI startups in the world and spurred unprecedented technology investment and innovation,’ said Rima Alaily, Microsoft’s deputy general counsel. The FTC report underscores the need to address the broader implications of big tech’s growing dominance in generative AI.

ChatGPT usage in schools doubles among US teens

Younger members of Generation Z are turning to ChatGPT for schoolwork, with a new Pew Research Center survey revealing that 26% of US teens aged 13 to 17 have used the AI-powered chatbot for homework. The figure has doubled since 2023, highlighting the growing reliance on AI tools in education. The survey also showed mixed views among teens about the chatbot’s use: 54% found it acceptable for research, while smaller proportions endorsed it for solving maths problems (29%) or writing essays (18%).

Experts have raised concerns about the limitations of ChatGPT in academic contexts. Studies indicate the chatbot struggles with accuracy in maths and certain subject areas, such as social mobility and African geopolitics. Research also shows varying impacts on learning outcomes, with Turkish students who used ChatGPT performing worse on a maths test than peers who didn’t. German students, while finding research materials more easily, synthesised information less effectively when using the tool.

Educators remain cautious about integrating AI into classrooms. A quarter of public K-12 teachers surveyed by Pew believed AI tools like ChatGPT caused more harm than good in education, and a separate study by the RAND Corporation found that only 18% of K-12 teachers actively use AI in their teaching. The disparities in effectiveness and the tool’s limitations underscore the need for careful consideration of its role in learning environments.

Zuckerberg defends AI training as copyright dispute deepens

Mark Zuckerberg has defended Meta’s use of a dataset containing copyrighted e-books to train its Llama family of AI models. The statement emerged from a deposition in the ongoing Kadrey v. Meta Platforms lawsuit, one of many cases challenging the use of copyrighted content in AI training. Meta reportedly relied on the controversial dataset LibGen, despite internal concerns over potential legal risks.

LibGen, a platform known for providing unauthorised access to copyrighted works, has faced numerous lawsuits and shutdown orders. Newly unsealed court documents suggest that Zuckerberg approved using the dataset to develop Meta’s Llama models. Employees allegedly flagged the dataset as problematic, warning it might undermine the company’s standing with regulators. During questioning, Zuckerberg compared the situation to YouTube’s efforts to remove pirated content, arguing against blanket bans on datasets with copyrighted material.

Meta’s practices are under heightened scrutiny as legal battles pit AI companies against copyright holders. The deposition indicates that Meta considered balancing copyright concerns with practical AI development needs. However, the company faces mounting allegations that it disregarded ethical boundaries, sparking broader debates about fair use and intellectual property in AI training.

UK AI plan calls for AI sovereignty and bottom-up development

The UK government has launched an ambitious AI Opportunities Action Plan to accelerate the adoption of AI to drive economic growth, create future job opportunities, and enhance the quality of life for its citizens. The plan seeks to position the UK at the forefront of the AI revolution by leveraging its technological strengths.

The Action Plan outlines a strategic framework centred on three key goals: enhancing the nation’s foundational infrastructure and regulatory environment to attract global talent, promoting widespread AI adoption across sectors to improve public services and productivity, and establishing the UK as a leader in AI innovation with domestic champions in critical technology areas. The initiative acknowledges the challenges and complexities involved, emphasising the need for strong public-private partnerships, significant investment, and a commitment to support innovators and new market leaders.

The UK’s plan introduces terms such as ‘AI sovereignty’ and ‘bottom-up AI development’, grounded in the country’s world-class AI research, dynamic startup scene, and leading corporate AI players such as Google DeepMind and OpenAI. Coupled with the UK’s leadership in AI safety and governance, this foundation is intended to support its long-term AI ambitions, reinforcing its status as a global leader in AI innovation and fostering societal advancement through technology.

US regulator escalates complaint against Snap

The United States Federal Trade Commission (FTC) has referred a complaint about Snap Inc’s AI-powered chatbot, My AI, to the Department of Justice (DOJ) for further investigation. The FTC alleges the chatbot caused harm to young users, though specific details about the alleged harm remain undisclosed.

Snap Inc defended its chatbot, asserting that My AI operates under rigorous safety and privacy measures, and criticised the FTC for lacking concrete evidence to support its claims. Despite the company’s reassurances, the FTC stated it had uncovered indications of potential legal violations.

The announcement hit Snap’s stock, with shares dropping 5.2% to close at $11.22 on Thursday. The FTC noted that publicising the complaint’s referral to the DOJ was in the public interest, underscoring the gravity of the allegations.

AFP partnership strengthens Mistral’s global reach

Mistral, a Paris-based AI company, has entered a groundbreaking partnership with Agence France-Presse (AFP) to enhance the accuracy of its chatbot, Le Chat. The deal signals Mistral’s determination to broaden its scope beyond foundational model development.

Through the agreement, Le Chat will gain access to AFP’s extensive text archive, comprising more than 2,300 stories a day in six languages, with records dating back to 1983. Photos and videos are not part of the multi-year arrangement. By incorporating AFP’s multilingual and multicultural resources, Mistral aims to deliver more accurate and reliable responses tailored to business needs.

The partnership bolsters Mistral’s standing against AI leaders like OpenAI and Anthropic, which have secured similar content agreements. Le Chat’s enhanced features align with Mistral’s broader strategy to develop user-friendly applications that rival popular tools such as ChatGPT and Claude.

Mistral’s co-founder and CEO, Arthur Mensch, emphasised the importance of the partnership, describing it as a step toward offering clients a unique and culturally diverse AI solution. The agreement reinforces Mistral’s commitment to innovation and its global relevance in the rapidly evolving AI landscape.

New Nvidia microservices address key security concerns in AI agents

Nvidia has launched three new NIM microservices designed to help enterprises control and secure their AI agents. These services are part of Nvidia NeMo Guardrails, a collection of software tools aimed at improving AI applications. The new microservices focus on content safety, restricting conversations to approved topics, and preventing jailbreak attempts on AI agents.

The content safety service helps prevent AI agents from generating harmful or biased outputs, while the conversation filter ensures discussions remain on track. The third service works to block attempts to bypass AI software restrictions. Nvidia’s goal is to provide developers with more granular control over AI agent interactions, addressing gaps that could arise from broad, one-size-fits-all policies.
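In NeMo Guardrails, checks like these are typically wired into an application as ‘rails’ declared in a configuration file. As a rough illustration only (the flow and model names below are assumptions based on the library’s published config format, not details from this article), enabling the three kinds of check might look something like this:

```yaml
# config.yml — hypothetical NeMo Guardrails setup; all names are illustrative
models:
  - type: main
    engine: nim
    model: meta/llama-3.1-8b-instruct   # the agent's base LLM (example)

rails:
  input:
    flows:
      - content safety check input      # block harmful or biased prompts
      - topic safety check input        # keep conversation to approved topics
      - jailbreak detection             # reject attempts to bypass restrictions
  output:
    flows:
      - content safety check output     # screen the agent's responses as well
```

The point of this layout is the granularity Nvidia describes: each concern is a separate, independently configurable flow rather than a single one-size-fits-all policy.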

Enterprises are showing growing interest in AI agents, though adoption has been slower than anticipated. A recent Deloitte report predicts that half of enterprises will be using AI agents by 2027, with 25% already implementing them or planning to do so by 2025. Even so, adoption continues to lag the rapid pace at which the underlying AI technology is developing.

Nvidia’s new tools are designed to make AI adoption more secure and reliable. The company hopes these innovations will encourage enterprises to integrate AI agents into their operations with greater confidence, but only time will tell whether this will be enough to accelerate widespread usage.

AI-generated news alerts paused by Apple amid accuracy concerns

Apple has halted AI-powered notification summaries for news and entertainment apps after backlash over misleading news alerts. A BBC complaint followed a summary that misrepresented an article about a murder case involving UnitedHealthcare’s CEO.

The latest developer previews for iOS 18.3, iPadOS 18.3, and macOS Sequoia 15.3 disable notification summaries for such apps, with Apple planning to reintroduce them after improvements. Notification summaries will now appear in italics to help users distinguish them from standard alerts.

Users will also gain the ability to turn off notification summaries for individual apps directly from the Lock Screen. Apple will notify users in the Settings app that the feature remains in beta and may contain errors.

A public beta is expected next week, but the general release date for iOS 18.3 remains unclear. Apple had already announced plans to clarify that summary texts are generated by Apple Intelligence.

AI helps Hull students overcome language barriers

Hull College has embraced AI to enhance learning, from lesson planning to real-time language translation. The institution is hosting a conference at its Queens Gardens campus to discuss how AI is influencing teaching, learning, and career preparation.

Mature student Sharron Knight, retraining to become a police call handler, attended an AI seminar and described the technology as ‘not as scary’ as she initially thought. She expressed surprise at the vast possibilities it offers. Student Albara Tahir, whose first language is Sudanese Arabic, has also benefited from AI tools, using them to improve his English skills.

Hull College principal Debra Gray highlighted AI’s potential to empower educators. She compared the tool to a bicycle, helping both teachers and students reach their goals faster without altering the core learning process.

The UK government recently announced plans to expand AI’s role in public services and economic growth, including creating ‘AI Growth Zones’ to support job creation and infrastructure projects. AI is already being used in UK hospitals for cancer diagnostics and other critical tasks.

Chinese tech company Zhipu questions US trade ban

Beijing-based AI company Zhipu Huazhang Technology has opposed the US government’s plan to add it to the export control entity list. The company argues the decision lacks a factual basis.

Zhipu issued a statement on its official WeChat account expressing strong opposition to the move. The firm criticised the US commerce department’s intentions, insisting the decision was unjustified.

Zhipu and its subsidiaries face restrictions on accessing US technologies if added to the list. The company maintains it operates lawfully and transparently in its business practices.

The US has been increasing scrutiny on Chinese technology firms, citing national security concerns. Zhipu emphasised its commitment to responsible technology development and cooperation with global partners.