OpenAI, previously a close partner of Microsoft, is now officially recognised as a competitor. Microsoft’s recent SEC filing marks the first time the company has publicly acknowledged this shift. OpenAI is now listed alongside tech giants like Google and Amazon as a competitor in both AI and search technologies.
The relationship between the two companies has been under scrutiny, with the FTC raising antitrust concerns. Microsoft’s decision to relinquish its board observer seat at OpenAI follows a series of significant events, including the brief dismissal of OpenAI’s CEO Sam Altman. The filing may reflect a strategic move to alter public perception amid these investigations.
Silicon Valley has a history of companies navigating complex relationships, balancing roles as both partners and competitors. The dynamic between Yahoo and Google in the early 2000s serves as a notable example. Microsoft and OpenAI might be experiencing a similar evolution, with both entities maintaining competitive and cooperative elements.
Meanwhile, Microsoft continues to expand its own AI initiatives. The hiring of Inflection AI co-founders to lead a new AI division and the development of Microsoft Copilot highlight the company’s broader strategy. The diversification suggests a strategic approach to AI that goes beyond its ties with OpenAI.
On Thursday, Microsoft confirmed that users in New Zealand are experiencing difficulties accessing its services, including Exchange Online. Although the extent of the disruption remains unclear, Microsoft has taken steps to mitigate the issue by rerouting traffic to alternate infrastructure, which has led to some improvement in service availability.
The company is actively investigating to determine the underlying cause of the network problem. The incident follows closely on the heels of a significant tech outage caused by faulty code in CrowdStrike’s cybersecurity software, which affected numerous companies using the Microsoft Windows operating system less than two weeks ago.
As Microsoft works to resolve the current issues, users are advised to stay updated on the situation. The company’s efforts highlight the ongoing challenges of maintaining reliable service amidst increasing technological complexities and interdependencies.
Google is taking significant steps to address the problem of non-consensual sexually explicit fake content, often referred to as ‘deepfakes,’ that has been increasingly distributed online. Recognising the distress this can cause, Google has updated its policies and systems to help affected individuals more effectively. These updates include easier removal processes for such content from Search and improvements to Google’s ranking systems to prevent this harmful material from appearing prominently in search results.
People have long been able to request the removal of non-consensual explicit imagery from Google Search, but the new changes make this process more accessible. Once a request is successfully made, Google’s systems will also aim to filter out all explicit results related to similar searches. Additionally, if an image is removed under these policies, Google will scan for and remove any duplicates, providing greater peace of mind for those worried about future appearances of the same content.
In tandem with these removal process enhancements, Google is also refining its ranking systems to demote explicit fake content. That includes lowering the ranking of such content for searches that may inadvertently lead to it and promoting high-quality, non-explicit content instead. These changes have already shown promising results, with exposure to explicit image results on certain queries reduced by over 70%. By distinguishing between real and fake explicit content, Google aims to better surface legitimate information while minimising harmful content.
Google acknowledges that more work is needed to tackle this issue comprehensively. The company is committed to ongoing improvements and industry-wide partnerships to address the broader societal challenges of non-consensual explicit fake content. These efforts reflect Google’s dedication to protecting individuals and maintaining the integrity of its search results.
The world’s first comprehensive AI law, known as the EU AI Act, officially came into force on 1 August 2024, marking a significant step in regulating AI. This landmark legislation aims to ensure AI’s safe and trustworthy deployment across Europe by setting clear rules and guidelines. While the AI Act is now in effect, it will be fully applicable in two years, with specific provisions, such as bans on prohibited practices, taking effect sooner.
The AI Act establishes a legal framework to address the risks associated with AI while promoting innovation and investment in the technology. It gives AI developers precise requirements, especially for high-risk applications like critical infrastructure, education, and law enforcement. The regulation also includes measures to reduce administrative burdens for small and medium-sized enterprises, encouraging their participation in the AI sector.
Today, the Artificial Intelligence Act comes into force.
Europe's pioneering framework for innovative and safe AI.
It will drive AI development that Europeans can trust.
And provide support to European SMEs and startups to bring cutting-edge AI solutions to market.
A central aspect of the AI Act is its risk-based approach, categorising AI systems into different risk levels, from minimal to unacceptable. High-risk systems, such as those used in healthcare and law enforcement, face stringent obligations to ensure safety and compliance. Additionally, the Act mandates transparency for general-purpose AI models and requires robust risk management and oversight.
The European AI Office has been established to oversee the enforcement and implementation of the AI Act. This office will work with member states to create an environment that respects human rights and fosters AI innovation. As AI evolves, the regulation is designed to adapt to technological changes, ensuring that AI applications remain trustworthy and beneficial for society.
OnlyFans, a platform known for offering subscribers ‘authentic relationships’ with content creators, faces scrutiny over the use of AI chatbots impersonating performers. Some management agencies employ AI software to sext with subscribers, bypassing the need for human interaction. NEO Agency, for example, uses a chatbot called FlirtFlow to create what it claims are ‘genuine and meaningful’ connections, although OnlyFans’ terms of service prohibit such use of AI.
Despite these rules, chatbots are prevalent. NEO Agency manages about 70 creators, with half using FlirtFlow. The AI engages subscribers in small talk to gather personal information, aiming to extract more money. While effective for high-traffic accounts, human chatters are still preferred for more personalised interactions, especially in niche erotic categories.
Similarly, Australian company Botly offers software that generates responses for OnlyFans messages, which a human can then send. Botly claims its technology is used in over 100,000 chats per month. Such practices raise concerns about transparency and authenticity on platforms that promise direct interactions with creators.
The issue coincides with broader discussions on online safety. The UK recently amended its Online Safety Bill to combat deepfakes and revenge porn, highlighting the rising threat of deceptive digital practices. Meanwhile, other platforms like X (formerly Twitter) have officially allowed adult content, increasing the complexity of managing online safety and authenticity.
A formal complaint has been filed with the Agency for Access to Public Information (AAIP) of Argentina against Meta, the parent company of Facebook, WhatsApp and Instagram. The case is in line with the international context of increasing scrutiny on the data protection practices of large technology companies.
The filing was made by lawyers specialising in personal data protection, Facundo Malaureille and Daniel Monastersky, directors of the Diploma in Data Governance at the CEMA University. The complaint challenges the company’s use of personal data for AI training.
The filing consists of 22 points and requests that Meta Argentina explain its practices for collecting and using personal data for AI training. The AAIP will evaluate and respond to it as the enforcement authority of the Personal Data Protection Law of Argentina (Law 25,326).
The country’s technological and legal community is closely watching the development of this case, given that the outcome of this complaint could impact innovation in AI and the protection of personal data in Argentina in the coming years.
Spain’s competition regulator, the CNMC, has imposed a hefty fine of €413.2 million (US$448 million) on online reservation platform Booking.com. The fine, the largest ever levied by the CNMC, targets Booking.com’s dominant market position in Spain, where it holds a 70% to 90% share. The penalties stem from practices dating back to 2019.
The CNMC found Booking.com to be imposing unfair terms on hotels and stifling competition from other providers. This included a ban on hotels offering lower prices on their own websites compared to Booking.com’s listings, as well as the ability of Booking.com to unilaterally impose price discounts on hotels. Additionally, the platform mandated that hotels resolve disputes in Dutch courts.
Booking Holdings, Booking.com’s parent company, intends to appeal the fine. They argue that the issue falls under the remit of the European Union’s Digital Markets Act and express strong disagreement with the CNMC’s findings. Booking Holdings plans to challenge the decision in Spain’s high court.
The investigation was triggered by complaints lodged in 2021 by the Spanish Association of Hotel Managers and the Madrid Hotel Business Association. Another point of contention is Booking.com’s practice of offering benefits to hotels that generate higher fees, which critics argue unfairly restricts competition from alternative booking services.
Meta Platforms has agreed to a $1.4 billion settlement with Texas over allegations of illegally using facial-recognition technology to collect biometric data without consent. The case marks the largest settlement of its kind by any state. The lawsuit, initiated in 2022, accused Facebook of capturing biometric data from photos and videos uploaded by users through a ‘Tag Suggestions’ feature, which has since been discontinued.
Meta expressed satisfaction with the resolution and hinted at future business investments in Texas, including developing data centres. Despite the settlement, the company continues to deny any wrongdoing. Texas Attorney General Ken Paxton emphasised the state’s dedication to holding the big tech companies accountable for privacy violations.
Why does it matter?
The settlement was reached in May, just before a state court trial began. Previously, Meta paid $650 million to settle a similar biometric privacy class action under Illinois law. Meanwhile, Google also faces a lawsuit in Texas for allegedly violating the state’s biometric privacy law.
The US Senate has passed significant online child safety reforms in a near-unanimous vote, but the fate of these bills remains uncertain in the House of Representatives. The two pieces of legislation, known as the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act (KOSA), aim to protect minors from targeted advertising and unauthorised data collection while also enabling parents and children to delete their information from social media platforms. The Senate’s bipartisan approval, with a vote of 91-3, marks a critical step towards enhancing online safety for minors.
COPPA 2.0 and KOSA have sparked mixed reactions within the tech industry. While platforms like Snap and X have shown support for KOSA, Meta Platforms and TikTok executives have expressed reservations. Critics, including the American Civil Liberties Union and certain tech industry groups, argue that the bills could limit minors’ access to essential information on topics such as vaccines, abortion, and LGBTQ issues. Despite amendments to address these concerns, some, like Senator Ron Wyden, remain unconvinced of the bills’ efficacy and worry about their potential impact on vulnerable groups.
The high economic stakes are highlighted by a Harvard study indicating that top US social media platforms generated approximately $11 billion in advertising revenue from users under 18 in 2022. Advocates for the bills, such as Maurine Molak of ParentsSOS, view the Senate vote as a historic milestone in protecting children online. However, the legislation’s future hinges on its passage in the Republican-controlled House, which is currently in recess until September.
The Paris Olympics will highlight the use of generative AI for American viewers, while European audiences will experience a more traditional approach. Comcast’s NBCUniversal plans to integrate AI into its US broadcast, including recreating the voice of a legendary sportscaster. Meanwhile, Warner Bros. Discovery’s sports division in Europe considers the technology too immature for roles like sports commentating.
Warner Bros. Discovery, which will stream the Games on its Max and discovery+ platforms across Europe, has tested AI for translating speech but found it lacks the emotion needed for thrilling sports moments. Scott Young, senior vice president at Warner Bros. Discovery Sports Europe, emphasised that AI struggles to capture the genuine excitement of live commentary. The difference in approaches reflects global media companies’ varied stances on AI technology, as France also plans to allow AI-powered surveillance during the Olympics, highlighting its broad application.
In the US, NBCUniversal will collaborate with Google and Team USA to enhance the viewing experience with AI, including AI-enhanced Google Map images of Olympic venues and AI-generated personalised daily briefings narrated by an AI recreation of Al Michaels’ voice. The Olympic Broadcasting Services is also using AI to produce quick highlights but remains cautious about deepfake risks. Additionally, extensive cybersecurity measures are being implemented to protect the Games from cyber threats, showcasing the crucial role of AI in ensuring safety and security.
As AI capabilities advance, European sports fans may soon experience similar technology. Warner Bros. Discovery anticipates significant AI integration by the 2028 Los Angeles Olympics. The International Olympic Committee (IOC) is already implementing AI for athlete safety and deploying AI tools to counter cyber threats at the 2024 Olympics, illustrating the growing influence of AI in sports.