An Austrian advocacy group, NOYB, has filed a complaint against the social media platform X, owned by Elon Musk, accusing the company of using users’ data to train its AI systems without their consent. The complaint, led by privacy activist Max Schrems, was lodged with authorities in nine European Union countries, pressuring Ireland’s Data Protection Commission (DPC), the primary EU regulator for major US tech firms because their EU operations are based in Ireland.
Despite this fact, NOYB’s complaint primarily focuses on X’s lack of cooperation and the inadequacy of its mitigation measures rather than questioning the legality of the data processing itself. Schrems emphasised the need for X to fully comply with the EU law by obtaining user consent before using their data. X has yet to respond to the latest complaint but intends to work with the DPC on AI-related issues.
In a related case, Meta, Facebook’s parent company, delayed the launch of its AI assistant in Europe after the Irish DPC advised against it, following similar complaints from NOYB regarding using personal data for AI training.
ASOS has deepened its collaboration with Microsoft by signing a new three-year deal to extend its use of AI technologies. The partnership, aimed at enhancing both customer experiences and internal operations, will see the introduction of AI tools designed to save time and allow employees to focus on more creative and strategic tasks. The online retailer’s director of technology operations, Victoria Arden, emphasised the importance of this move in driving operational excellence.
Since early 2023, ASOS has been utilising Microsoft’s Copilot tools, including those for Microsoft 365 and GitHub, to improve the efficiency of its engineering and HR teams. The HR team, for instance, has used Copilot to analyse employee engagement surveys, while other departments have explored AI-powered insights through tools like Power BI. The partnership highlights ASOS’s commitment to adopting cutting-edge technologies to enhance its data-driven decision-making processes.
ASOS has been actively piloting AI solutions to improve various aspects of its business. A recent example is the use of Copilot in Power BI to summarise performance data, aiding the company in making informed decisions. The retailer’s AI Stylist, powered by Microsoft’s Azure OpenAI, also represents a key innovation, helping customers discover new fashion trends through a conversational interface.
The collaboration between ASOS and Microsoft is built on a strong foundation established in 2022 when ASOS chose Microsoft Azure as its preferred cloud platform. However, this extended partnership reflects ASOS’s dedication to innovation through safe and responsible experimentation, aiming to continue delivering personalised, data-driven services to its global customer base.
IBM has teamed up with WWF-Germany to develop an AI-driven solution aimed at safeguarding African forest elephants, a species facing severe threats from poaching and habitat loss. This new technology will use AI to accurately identify individual elephants from camera trap photos, enhancing conservation efforts and allowing for more precise tracking of these endangered animals.
The partnership will combine IBM’s technological expertise with WWF’s conservation knowledge to create an AI-powered tool that could revolutionise how elephants are monitored. By focusing on image recognition, the technology aims to identify elephants by their unique physical features, such as heads and tusks, much like human fingerprints.
Additionally, the collaboration will employ IBM Environmental Intelligence to monitor and analyse biomass and vegetation in elephant habitats. The data will be crucial in predicting elephant movements and assessing the ecosystem services provided by these animals, such as carbon sequestration. Such insights could also pave the way for sustainable finance investments by quantifying the carbon services offered by elephants.
IBM emphasised the broader potential of this initiative, highlighting its role in supporting nature restoration and contributing to global climate change efforts. By integrating advanced technology with conservation strategies, the partnership seeks to make a lasting positive impact on both the environment and sustainable development.
AI is rapidly transforming the landscape of scientific research, but not always for the better. A growing concern is the proliferation of AI-generated errors and misinformation within academic journals. From bizarrely inaccurate images to nonsensical text, the quality of published research is being compromised. Trend like this one is exacerbated by the pressure on researchers to publish prolifically, leading many to turn to it as a shortcut.
Paper mills, which generate fraudulent academic papers for profit, are exploiting AI to produce vast quantities of low-quality content. These fabricated studies, often filled with nonsensical data and plagiarised text, are infiltrating reputable journals. The academic publishing industry is struggling to keep pace with this influx of junk science, as traditional quality control measures prove inadequate.
Beyond the issue of outright fraud, the misuse of AI by well-intentioned researchers is also a problem. While AI tools can be valuable for tasks like data analysis and language translation, their limitations are often overlooked. Overreliance on AI can lead to errors, biases, and a decline in critical thinking. As a result, the credibility of scientific research is at stake.
To address this crisis, a multifaceted approach is necessary. Increased investment in detection tools, stricter peer review standards, and greater transparency in the research process are essential steps. Additionally, academic institutions must foster a culture that prioritises quality over quantity, encouraging researchers to focus on depth rather than speed. Ultimately, safeguarding the integrity of scientific research requires a collaborative effort from researchers, publishers, and the public.
Humanoid robots are poised to revolutionise industries, with tech giants leading the charge. Companies such as Nvidia and Tesla are at the forefront of developing these human-like machines, equipped with advanced AI. These robots are designed to perform complex tasks, from manufacturing to customer service.
The potential applications for humanoid robots are vast. Tesla aims to deploy them in its factories, while other companies are exploring their use in logistics and healthcare. As AI technology continues to evolve, these machines are becoming increasingly sophisticated, capable of learning and adapting to new tasks.
Why does this matter?
The development of humanoid robots represents a significant investment in the future. Companies like Nvidia are building entire ecosystems to support robotics innovation. While challenges remain, the potential benefits are enormous. As these machines become more prevalent, they could reshape the workforce and drive economic growth.
The race to develop the most advanced humanoid robot is heating up. With major players investing heavily in this technology, the future of work is changing rapidly.
OpenAI’s chief strategy officer, Jason Kwon, has expressed confidence that humans will continue to control AI, downplaying concerns about the technology developing unchecked. Speaking at an forum in Seoul, Kwon emphasised that the core of safety lies in ensuring human oversight. As those systems grow more advanced, Kwon believes they will become easier to manage, countering fears of them becoming uncontrollable.
The company is actively working on creating a framework that allows AI systems to reflect the cultural values of different countries. Kwon highlighted the importance of making certain models adaptable to local contexts, ensuring that users in various regions feel the technology is designed with them in mind. However, approach like this one aims to foster a sense of ownership and relevance across diverse cultures.
Despite some scepticism surrounding the future of AI, Kwon remains optimistic about its trajectory. He compared it’s potential growth to that of the internet, which has become an indispensable tool globally. While acknowledging that AI is still in its early stages, he pointed out that adoption rates are gradually increasing, with significant room for growth.
Kwon noted that in South Korea, a country with over 50 million people, only 1 million are daily active users of ChatGPT. Even in the US, fewer than 20 per cent of the population has tried the tool. Kwon’s remarks suggest that AI’s journey is just beginning, with significant expansion expected in the coming years.
One of the largest AI research organizations has appointed Zico Kolter, a distinguished professor and director of the machine learning department at Carnegie Mellon University, to its board of directors. Renowned for his focus on AI safety, Kolter will also join the company’s safety and security committee, which is tasked with overseeing the safe deployment of OpenAI’s projects. The appointment comes as OpenAI’s board undergoes changes in response to growing concerns about the safety of generative AI, which has seen rapid adoption across various sectors.
Following the departure of co-founder John Schulman, Kolter’s addition to the OpenAI board underscores a commitment to addressing these safety concerns. He brings a wealth of experience from his roles as the chief expert at Bosch and chief technical adviser at Gray Swan, a startup dedicated to AI safety. Notably, Kolter has contributed to developing methods that automatically assess the safety of large language models, a crucial area as AI systems become increasingly sophisticated. His expertise will be invaluable in guiding OpenAI as it navigates the challenges posed by the widespread use of generative AI technologies such as ChatGPT.
The formation of the safety and security committee in May, preceded by Ilya Sutskever‘s leaving, which includes Kolter alongside CEO Sam Altman and other directors, underlines OpenAI’s proactive approach to ensuring AI is developed and deployed responsibly. The committee is responsible for making recommendations on safety decisions across all of OpenAI’s projects, reflecting the company’s recognition of the potential risks associated with AI advancements.
In a related move, Microsoft relinquished its board observer seat at OpenAI in July, aiming to address antitrust concerns from regulators in the United States and the United Kingdom. This decision was seen as a step towards maintaining a balance of power within OpenAI, as the company continues to play a leading role in the rapidly evolving AI landscape.
Elon Musk’s social media platform, X, has agreed to pause using data from European Union users to train its AI systems until further court decisions are made. The agreement comes after Ireland’s Data Protection Commission (DPC) sought to suspend X’s processing of user data for AI development, arguing that the platform had started using this data without user consent.
X, formerly known as Twitter, introduced an option for users to opt out of data usage for AI training. However, this was only available from 16 July, despite data processing beginning on 7 May. This delay led the DPC to take legal action, with a court hearing revealing that X would refrain from using data collected between 7 May and 1 August until the issue is resolved.
X’s legal team is expected to file opposition papers against the DPC’s suspension order by 4 September. The platform defended its actions, calling the regulator’s order unwarranted and unjustified. This case follows similar scrutiny faced by other tech giants like Meta and Google, which have also faced regulatory challenges in the EU over their AI systems.
Google’s dominance in the search engine market faces growing challenges from AI advancements, particularly from OpenAI, while also dealing with ongoing antitrust scrutiny. A recent US ruling deemed Google’s search monopoly illegal, marking a significant victory for regulators. However, experts argue that the real threat to Google is the rapid adoption of AI tools like OpenAI’s ChatGPT, reshaping how people search the internet.
Despite Google’s long-standing control of around 90% of the global search market, the rise of AI-powered search alternatives is beginning to erode its position. Former Google engineers and industry analysts believe AI’s impact will be felt much sooner than the effects of antitrust rulings, which often take years.
Historically, Apple has partnered with Google for search services, but it is now exploring AI-driven alternatives. The tech giant has announced a non-exclusive partnership with OpenAI to integrate ChatGPT into its devices, signalling a shift from Google’s search dominance.
OpenAI’s move into the search market with its AI-powered SearchGPT further intensifies the competition. Some analysts predict that AI’s influence on search could outpace regulatory actions, potentially dismantling Google’s monopoly.
Why does it matter?
Although Google has the resources to lead in AI development, its response could have been faster than that of competitors like OpenAI’s swift rise. Google’s initial missteps with AI-powered search features, which were criticised for inaccuracies and errors, have raised concerns about the company’s ability to maintain trust with users.
Analysts suggest that while antitrust actions may not immediately weaken Google’s position, they could pave the way for increased competition in the search market. However, breaking Google’s dominance will be challenging, and whether these developments will lead to significant changes in consumer choice remains to be seen.
The UK’s Competition and Markets Authority (CMA) has launched a formal antitrust investigation nto Amazon’s $4 billion investment in AI startup Anthropic. This follows recent scrutiny of Google’s ties with the same company, as concerns grow over Big Tech’s strategic investments in AI firms. The CMA’s investigation will determine whether Amazon’s stake in Anthropic could harm competition within the United Kingdom, despite the e-commerce giant not holding a majority stake or board seat in the startup.
Anthropic, established in 2021 and known for developing large language models like its chatbot Claude, has raised $10 billion so far. Its public benefit corporation status is intended to distinguish it from rivals in the AI space. Despite Amazon’s significant investment, Anthropic maintains that its strategic partnerships do not compromise its independence or ability to collaborate with other companies. The CMA has 40 working days to decide whether to advance the investigation to a more in-depth phase.
The CMA’s move comes amid increasing concerns about Big Tech companies adopting a ‘quasi-merger’ approach to avoid full acquisitions, which would likely face greater regulatory scrutiny. The regulator has also been examining similar deals, including Microsoft’s investments in AI startups like OpenAI and Mistral AI. The outcome of the CMA’s probe into Amazon’s investment in Anthropic could have broader implications for how tech giants are regulated in their acquisition strategies.
Amazon’s investment is part of a wider trend in which leading tech companies are securing stakes in promising AI startups to ensure they stay ahead in the rapidly evolving AI sector. With the CMA’s investigation underway, the regulatory landscape for these types of deals is expected to become more stringent, potentially reshaping future investment strategies in the AI industry.