The Federal Trade Commission (FTC) has raised concerns about the competitive risks posed by collaborations between major technology companies and developers of generative AI tools. In a staff report issued Friday, the agency pointed to partnerships such as Microsoft’s investment in OpenAI and similar alliances involving Amazon, Google, and Anthropic as potentially harmful to market competition, according to TechCrunch.
FTC Chair Lina Khan warned that these collaborations could create barriers for smaller startups, limit access to crucial AI tools, and expose sensitive information. ‘These partnerships by big tech firms can create lock-in, deprive start-ups of key AI inputs, and reveal sensitive information that undermines fair competition,’ Khan stated.
The report specifically highlights the role of cloud service providers like Microsoft, Amazon, and Google, which supply AI developers with essential resources such as computing power and technical expertise. These arrangements could restrict smaller firms’ access to those resources, raise the cost of switching providers, and give cloud providers unique insights into sensitive data, potentially stifling competition.
Microsoft defended its partnership with OpenAI, emphasising its benefits to the industry. ‘This collaboration has enabled one of the most successful AI startups in the world and spurred unprecedented technology investment and innovation,’ said Rima Alaily, Microsoft’s deputy general counsel. The FTC report underscores the need to address the broader implications of big tech’s growing dominance in generative AI.
Mistral, a Paris-based AI company, has entered a groundbreaking partnership with Agence France-Presse (AFP) to enhance the accuracy of its chatbot, Le Chat. The deal signals Mistral’s determination to broaden its scope beyond foundational model development.
Through the agreement, Le Chat will gain access to AFP’s extensive archive, which includes over 2,300 daily stories in six languages and records dating back to 1983. The multi-year arrangement covers text content only; photos and videos are not included. By incorporating AFP’s multilingual and multicultural resources, Mistral aims to deliver more accurate and reliable responses tailored to business needs.
The partnership bolsters Mistral’s standing against AI leaders like OpenAI and Anthropic, which have secured similar content agreements. Le Chat’s enhanced features align with Mistral’s broader strategy to develop user-friendly applications that rival popular tools such as ChatGPT and Claude.
Mistral’s co-founder and CEO, Arthur Mensch, emphasised the importance of the partnership, describing it as a step toward offering clients a unique and culturally diverse AI solution. The agreement reinforces Mistral’s commitment to innovation and its global relevance in the rapidly evolving AI landscape.
Nvidia has launched three new NIM microservices designed to help enterprises control and secure their AI agents. The services are part of Nvidia NeMo Guardrails, a collection of software tools for adding safeguards to AI applications. The new microservices focus on content safety, restricting conversations to approved topics, and preventing jailbreak attempts on AI agents.
The content safety service helps prevent AI agents from generating harmful or biased outputs, while the conversation filter ensures discussions remain on track. The third service works to block attempts to bypass AI software restrictions. Nvidia’s goal is to provide developers with more granular control over AI agent interactions, addressing gaps that could arise from broad, one-size-fits-all policies.
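The three checks described above can be thought of as independent rails that a message must pass before an agent acts on it. The toy sketch below is a hypothetical illustration of that layered pattern only; it is not Nvidia’s NIM API, and the term lists and function name are invented for the example.

```python
# Toy illustration of layered guardrails (hypothetical; not Nvidia's NIM API).
# Each rail inspects a user message before it reaches the AI agent.

BLOCKED_TERMS = {"badword1", "badword2"}    # stand-in for a content-safety model
APPROVED_TOPICS = {"billing", "shipping"}   # conversation must stay on these topics
JAILBREAK_MARKERS = (
    "ignore previous instructions",
    "pretend you have no rules",
)

def check_message(message: str, topic: str) -> tuple[bool, str]:
    """Return (allowed, reason). All three rails must pass."""
    text = message.lower()
    # Rail 1: content safety - block harmful or biased language.
    if any(term in text for term in BLOCKED_TERMS):
        return False, "content-safety rail: harmful language"
    # Rail 2: topic control - keep the conversation on approved subjects.
    if topic not in APPROVED_TOPICS:
        return False, "topic rail: off-topic conversation"
    # Rail 3: jailbreak detection - reject prompt-injection attempts.
    if any(marker in text for marker in JAILBREAK_MARKERS):
        return False, "jailbreak rail: prompt-injection attempt"
    return True, "ok"
```

In a production system each rail would be a separately deployed model or service rather than a keyword list, which is what lets policies be tuned per rail instead of relying on one broad, one-size-fits-all filter.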
Enterprises are showing growing interest in AI agents, though adoption remains slower than the rapid pace of AI development. A recent Deloitte report predicts that half of enterprises will be using AI agents by 2027, with 25% already implementing or planning to do so by 2025.
Nvidia’s new tools are designed to make AI adoption more secure and reliable. The company hopes these innovations will encourage enterprises to integrate AI agents into their operations with greater confidence, but only time will tell whether this will be enough to accelerate widespread usage.
Apple has halted AI-powered notification summaries for news and entertainment apps after backlash over misleading news alerts. A BBC complaint followed a summary that misrepresented an article about a murder case involving UnitedHealthcare’s CEO.
The latest developer previews for iOS 18.3, iPadOS 18.3, and macOS Sequoia 15.3 disable notification summaries for such apps, with Apple planning to reintroduce them after improvements. Notification summaries will now appear in italics to help users distinguish them from standard alerts.
Users will also gain the ability to turn off notification summaries for individual apps directly from the Lock Screen. Apple will notify users in the Settings app that the feature remains in beta and may contain errors.
A public beta is expected next week, but the general release date for iOS 18.3 remains unclear. Apple had already announced plans to clarify that summary texts are generated by Apple Intelligence.
Hull College has embraced AI to enhance learning, from lesson planning to real-time language translation. The institution is hosting a conference at its Queens Gardens campus to discuss how AI is influencing teaching, learning, and career preparation.
Mature student Sharron Knight, retraining to become a police call handler, attended an AI seminar and described the technology as ‘not as scary’ as she initially thought. She expressed surprise at the vast possibilities it offers. Student Albara Tahir, whose first language is Sudanese Arabic, has also benefited from AI tools, using them to improve his English skills.
Hull College principal Debra Gray highlighted AI’s potential to empower educators. She compared the tool to a bicycle, helping both teachers and students reach their goals faster without altering the core learning process.
The UK government recently announced plans to expand AI’s role in public services and economic growth, including creating ‘AI Growth Zones’ to support job creation and infrastructure projects. AI is already being used in UK hospitals for cancer diagnostics and other critical tasks.
Beijing-based AI company Zhipu Huazhang Technology has opposed the US government’s plan to add it to the export control entity list. The company argues the decision lacks a factual basis.
Zhipu issued a statement on its official WeChat account expressing strong opposition to the move, criticising the US Commerce Department’s decision as unjustified.
Zhipu and its subsidiaries face restrictions on accessing US technologies if added to the list. The company maintains it operates lawfully and transparently in its business practices.
The US has been increasing scrutiny on Chinese technology firms, citing national security concerns. Zhipu emphasised its commitment to responsible technology development and cooperation with global partners.
Microsoft has introduced a new chat service, Copilot Chat, allowing businesses to deploy AI agents for routine tasks. The service, powered by OpenAI’s GPT-4, enables users to create AI-driven assistants using natural language commands in English, Mandarin, and other languages. Tasks such as market research, drafting strategy documents, and meeting preparation can be handled for free, though advanced features like Teams call transcription and PowerPoint slide creation require a $30 monthly Microsoft 365 Copilot subscription.
With increasing pressure to generate returns on its substantial AI investments, Microsoft is betting on a pay-as-you-go model to drive adoption. The company is expected to spend around $80 billion on AI infrastructure and data centres this fiscal year. Following concerns about Copilot’s adoption, Microsoft has been pushing its AI tools more aggressively, offering businesses greater flexibility in using AI for automation.
In a move towards greater AI autonomy, Microsoft previously introduced tools allowing customers to create self-sufficient AI agents with minimal human input. Analysts suggest that such innovations could offer a simpler path to monetisation for tech companies, making AI-driven automation more accessible and scalable.
Italian startup iGenius has launched Colosseum 355B, a large language model built using the latest Nvidia technology, designed for industries with strict data protection and compliance needs. CEO Uljan Sharka highlighted the challenges that tight regulations pose for AI adoption in sectors like finance, heavy industry, and government, where data security is paramount.
Unlike major competitors such as OpenAI, iGenius offers open-source AI models that allow companies to run the technology on their own infrastructure, ensuring that sensitive data remains in-house. The startup is already in talks with potential clients in the financial services and industrial sectors.
Sharka also travelled to Brussels to present the new model to the European Commission, aiming to gain regulatory approval and foster wider adoption in Europe’s heavily regulated markets.
President Joe Biden has signed an executive order to support the rapid expansion of AI data centres by providing federal land and resources. The initiative will allow AI facilities to be built on sites owned by the Defence and Energy departments, addressing the growing demand for computing power while promoting clean energy development. Companies using federal land for AI data centres will also be required to purchase a portion of American-made semiconductors, reinforcing the administration’s push for domestic chip production.
The order aims to ensure that the most advanced AI models are developed and stored within the United States, strengthening national security and economic competitiveness. The White House stressed the importance of securing energy supplies and transmission infrastructure to sustain AI growth, with experts predicting that by 2028, leading developers could need up to five gigawatts of capacity to train their AI models. Agencies have been directed to fast-track grid interconnections, permitting, and infrastructure development to meet these demands.
Efforts to keep AI technology within the United States align with broader national security concerns. The Commerce Department has announced new restrictions on AI chip exports to prevent China from accessing advanced computing power. White House technology adviser Tarun Chhabra highlighted the potential risks posed by AI, including its ability to aid in developing chemical, biological, and cyber warfare capabilities. Ensuring that AI data centres remain under US control will help safeguard military and national security interests.
A French interior designer, identified as Anne, has fallen victim to a sophisticated scam in which she was tricked into believing she was in a relationship with actor Brad Pitt. Over the course of a year, the scammer, using AI-generated images and fake social media profiles, manipulated Anne into sending €830,000 for purported cancer treatment, backed by a fabricated story that the actor’s bank accounts had been frozen.
The scam began when Anne received messages from a fake ‘Jane Etta Pitt,’ claiming the Hollywood star needed someone like her. As Anne was going through a divorce, the AI-generated Brad Pitt sent declarations of love, eventually asking for money under the guise of urgent medical needs. Despite doubts raised by her daughter, Anne transferred large sums, believing she was saving a life.
The truth came to light when Anne saw Brad Pitt in the media with his current partner, and it became clear she had been scammed. However, instead of support, her story has been met with cyberbullying, including mocking social media posts from accounts such as Toulouse FC and Netflix France. The harassment has taken a toll on Anne’s mental health, and police are now investigating the scam.
The case highlights the dangers of AI scams, the vulnerabilities of individuals, and the lack of empathy in some online responses.