California has become the focal point in the debate over regulating AI as a new bill, SB 1047, advances through the state legislature. The bill, which has drawn significant opposition from AI startups and tech giants, proposes safety measures for large AI models to prevent ‘catastrophic harm,’ such as cyberattacks or mass casualties. The legislation would require developers to conduct safety tests and ensure that humans can shut down AI systems if they pose a danger.
Critics argue that the bill is too vague and could stifle innovation in California. Opponents, including major companies like Meta, OpenAI, and Google, have voiced concerns about its broad and undefined requirements. They fear it could lead to legal uncertainty and discourage the public release of AI models, harming the state’s vibrant tech ecosystem.
The bill has already cleared several legislative hurdles but faces strong resistance as it moves toward a final vote. While its author, Democratic state senator Scott Wiener, is open to amendments, he maintains that the bill aligns with safety standards the industry has already adopted. Many in the tech community, however, remain unconvinced, citing potential legal and operational challenges if the bill becomes law.
Why does this matter?
The outcome of this legislative battle could have far-reaching implications for AI regulation across the United States, as California often sets the precedent for other states. As the debate continues, the tech industry is closely watching how the state will balance innovation with the need for safety and regulation in the rapidly evolving field of AI.
A new report from the UN Secretary-General’s Envoy on Technology and the International Labour Organization examines the impact of AI on the global workforce. Titled ‘Mind the AI Divide: Shaping a Global Perspective on the Future of Work,’ the report delves into how AI is reshaping labour markets, altering the AI value chain, and changing the demand for skills.
The report highlights the uneven adoption of AI across regions, which could exacerbate global inequalities if left unaddressed. To promote inclusive growth, it calls for strategies that support AI development in regions lagging behind in technology adoption.
Strengthening international cooperation and building national capacities are identified as key steps toward creating a more equitable and resilient AI ecosystem. The report advocates for global collaboration to ensure that AI benefits are widely shared, fostering global opportunities for prosperity and human advancement.
Siemens is set to expand its management board from five to seven members in a strategic move to accelerate its transition towards a technology-focused enterprise. The company announced that Peter Koerte, head of strategy and technology, and Veronika Bienert, head of Siemens Financial Services, will join the board on 1 October. This expansion is seen as a response to Siemens’ significant scale, with 324,000 employees and €80 billion in revenue, necessitating a larger leadership team to drive growth.
Supervisory Board Chairman Jim Hagemann Snabe, who has held the position since 2018, will seek re-election for another two-year term in February. Snabe emphasised that AI is a key priority for Siemens, which aims to leverage the technology for industrial applications to stay ahead of competitors. Peter Koerte, in his new role on the management board, is expected to be instrumental in this AI-driven strategy.
In addition to the new appointments, Siemens confirmed that Cedrik Neike, head of the Digital Industries division, will have his contract extended by five years. The company also hinted at future leadership changes, noting that Veronika Bienert could be a potential successor to the current CFO, Ralf Thomas, who plans to retire in 2026. However, Snabe stated it was too early for a formal discussion on this succession.
X users recently discovered that their data was being used, without explicit consent, to train Grok, an AI chatbot developed by Musk’s company xAI. A complaint filed with Ireland’s Data Protection Commission (DPC) accuses X of failing to clearly explain its data usage practices, collecting excessive data, and possibly mishandling sensitive information. Scialdone has called on the DPC to order X to stop using personal data for AI training and to ensure compliance with the GDPR. Violations of these regulations can lead to fines as high as 4% of a company’s worldwide annual revenue, making non-compliance potentially very expensive for X.
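For a sense of scale, the cap works out simply; the turnover figure in the sketch below is purely hypothetical, not X’s actual revenue.

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound under GDPR Article 83(5): the greater of
    EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Hypothetical turnover for illustration only -- not X's actual revenue.
print(f"Maximum fine: EUR {max_gdpr_fine(3_000_000_000):,.0f}")
# Maximum fine: EUR 120,000,000
```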
The complaint also highlights issues with X’s communication regarding its data processing practices. According to Scialdone, X’s privacy policy does not transparently outline the legal basis for using personal data for AI training. The policy mentions using data on a ‘legitimate interest’ basis, which allows data processing if it serves a valid purpose without infringing on users’ rights. However, Scialdone argued that this information is not easily accessible to users. He also stressed that such legal actions would lead to a consistent regulatory approach across different platforms, preventing disparities in user treatment and market inequalities.
Why does this matter?
Musk’s approach to compliance with EU privacy laws has been controversial, raising concerns about X’s adherence to regulatory standards. The DPC’s actions signal a potential end to Musk’s relatively unchecked run under GDPR oversight, with X becoming the third major tech company to face such allegations, following similar complaints against Meta and LinkedIn. X has also recently faced regulatory challenges in the Netherlands and scrutiny under the EU’s Digital Services Act, which could lead to even steeper penalties for non-compliance.
Apple has introduced its new AI-powered Writing Tools in the iOS 18.1 developer beta, providing users with the ability to reformat or rewrite text using Apple’s AI models. However, the tool warns that AI-generated suggestions might not be of the highest quality when dealing with certain sensitive topics. Users will see a message alerting them when attempting to rewrite text containing swear words, references to drugs, or mentions of violence, indicating the tool wasn’t designed for such content.
Despite the warnings, the AI tool still offers suggestions even when encountering restricted words or phrases. During testing, replacing a swear word with a milder term resulted in the same AI-generated suggestion. Apple has been asked to clarify which specific topics the writing tools are not trained to handle, but no further details have been provided yet.
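Apple has not disclosed how the screening works; as a rough illustration, the behaviour reporters describe, a warning that does not block the suggestion, could be sketched like this (every function name and term list here is hypothetical, not Apple’s):

```python
# Illustrative sketch only: Apple has not disclosed how Writing Tools
# screens text; these names and the word list are placeholders.
SENSITIVE_TERMS = {"drugs", "violence"}  # placeholder list, not Apple's

def rewrite(text: str) -> str:
    """Stand-in for the on-device model call; just tidies whitespace here."""
    return " ".join(text.split())

def rewrite_with_warning(text: str) -> tuple[str, str | None]:
    """Warn on flagged terms but still return a suggestion, mirroring
    the behaviour observed in the iOS 18.1 beta."""
    warning = None
    if any(term in text.lower() for term in SENSITIVE_TERMS):
        warning = "Writing Tools isn't designed to work with this type of content."
    return rewrite(text), warning

suggestion, warning = rewrite_with_warning("a  draft   about violence")
print(suggestion, "|", warning)
```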
Apple appears to be exercising caution to avoid controversy by limiting the AI’s handling of certain terms and topics. The Writing Tools feature is not intended to generate new content from scratch but rather to assist in rewriting existing text. Apple’s cautious approach aligns with its history, as seen when it finally allowed autocorrect to learn swear words in iOS 17 after years of restrictions.
The release of these AI features also coincides with Apple’s partnership with OpenAI on future AI innovations and its support for the Biden administration’s AI safety initiatives. These steps underscore Apple’s commitment to responsible AI development while providing advanced tools to its users.
Former President Donald Trump revealed that Meta CEO Mark Zuckerberg apologised to him after Facebook mistakenly labelled a photo of Trump as misinformation. The photo, which showed Trump raising a fist after surviving an assassination attempt at a rally in Butler, Pennsylvania, was initially flagged by Meta’s AI system. Trump disclosed the apology during an interview with FOX Business’ Maria Bartiromo, stating that Zuckerberg called him twice to express regret and praise his response to the event.
Meta Vice President of Global Policy Joel Kaplan clarified that the error occurred due to similarities between a doctored image and the real photo, leading to an incorrect fact-check label. Kaplan explained that the AI system misapplied the label due to subtle differences between the two images. Meta’s spokesperson Andy Stone reiterated that Zuckerberg has not endorsed any candidate for the 2024 presidential election and that the labelling error was not due to bias.
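Meta has not detailed its matcher, but fact-checking systems often compare images via perceptual hashes, which reduce a photo to a small fingerprint that near-duplicates share. The sketch below is a generic average-hash using the Pillow library, not Meta’s actual pipeline, and the file names are hypothetical.

```python
# Generic average-hash sketch (not Meta's matcher) showing why a doctored
# image and the original can collide: both reduce to nearly the same
# 64-bit fingerprint.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to 8x8 greyscale, then set a bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (pixel > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests near-duplicates."""
    return bin(a ^ b).count("1")

# Hypothetical file names for illustration:
# dist = hamming(average_hash("original.jpg"), average_hash("doctored.jpg"))
# A distance of, say, <= 5 would typically be treated as the same image.
```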
The incident highlights ongoing challenges for Meta as it navigates content moderation and political neutrality, especially ahead of the 2024 United States election. Additionally, the assassination attempt on Trump has sparked various online conspiracy theories. Meta’s AI chatbot faced criticism for initially refusing to answer questions about the shooting, a decision attributed to the overwhelming influx of information during breaking news events. Google’s AI chatbot Gemini similarly refused to address the incident, sticking to its policy of avoiding responses on political figures and elections.
Both Meta and Google have faced scrutiny over their handling of politically sensitive content. Meta’s recent efforts to shift away from politics and focus on other areas, combined with Google’s cautious approach to AI responses, reflect the tech giants’ strategies to manage the complex dynamics of information dissemination and political neutrality in an increasingly charged environment.
Once a leader in the computer chip industry, Intel has faced significant challenges adapting to the AI era. Seven years ago, Intel had an opportunity to invest in OpenAI, then an emerging non-profit focused on generative AI. Discussions between the two companies explored various investment options, including a $1 billion stake and hardware manufacturing deals, but Intel ultimately decided against it.
Then-CEO Bob Swan doubted the near-term market viability of generative AI models, leading to the decision not to invest. OpenAI sought the investment to reduce its reliance on Nvidia chips and develop its own infrastructure, but Intel’s data centre unit was unwilling to produce hardware at cost. Since then, OpenAI has launched ChatGPT and achieved a valuation of around $80 billion, marking a significant missed opportunity for Intel.
The decision was part of a series of strategic missteps that saw Intel fall behind in the AI chip market. The company’s stock recently plummeted, marking its worst trading day since 1974 and valuing it at under $100 billion for the first time in three decades. In contrast, rivals like Nvidia and AMD have surged ahead, capturing significant market share with AI-optimised GPU technology.
Despite recent efforts to catch up, such as developing the Gaudi AI chip and acquiring startups like Nervana Systems and Habana Labs, Intel still lags behind competitors. The company’s previous focus on CPUs over GPUs, which are better suited for AI tasks, has left it struggling to compete in the rapidly growing AI market.
Automattic, the owner of WordPress.com, has unveiled a new AI tool designed to assist bloggers in writing more clearly and concisely. Named Write Brief with AI, this tool is part of Jetpack, a suite of features to enhance WordPress.com-hosted websites, and is free during its initial beta phase. Users can access it through the Jetpack icon in the top-right corner of the editor.
The tool complements Automattic’s previously launched generative AI writing assistant for WordPress. Write Brief with AI analyses a draft and offers suggestions to simplify sentences, adjust the tone, and improve clarity. It can flag overly complex words, wordiness, and unconfident language, providing automatic simplifications along with a readability score based on word complexity, sentence length, and confidence.
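Automattic has not published Write Brief’s scoring formula; a toy score built on the same three signals, with entirely assumed weights, might look like this:

```python
import re

# Toy readability score: Write Brief's actual formula is unpublished,
# so the signals and weights below are assumptions for illustration.
HEDGES = {"perhaps", "maybe", "i think", "sort of", "kind of"}

def brief_score(text: str) -> float:
    """Score 0-100; higher means shorter sentences, simpler words,
    and more confident (less hedged) language."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not sentences or not words:
        return 0.0
    avg_len = len(words) / len(sentences)                         # sentence length
    complex_share = sum(len(w) > 8 for w in words) / len(words)  # word complexity
    hedges = sum(text.lower().count(h) for h in HEDGES)          # confidence
    penalty = 2 * max(0, avg_len - 15) + 100 * complex_share + 10 * hedges
    return max(0.0, 100.0 - penalty)

print(brief_score("We might perhaps consider leveraging comprehensive functionality."))
print(brief_score("Use the tool. It works."))  # shorter, plainer, scores higher
```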
Automattic, a key player in the web’s infrastructure due to its role in both the open-source WordPress project and the commercial WordPress.com platform, sees this tool as a natural extension of its services. Write Brief with AI was born out of an internal hack week project and, after proving its usefulness within the company, was made available to the public. Although currently limited to English, it holds the potential for significant uptake among WordPress users worldwide.
Amazon has unveiled Mithra, an internal security platform designed to handle the immense scale of the company’s network. Built on a vast graph database, Mithra helps Amazon manage and protect its systems by filtering vast amounts of data to identify and neutralise malicious domains. Chief Information Security Officer C.J. Moses likens Mithra to a funnel, narrowing down data until human intervention is minimal.
Mithra’s integration with Sonaris, Amazon’s network observation platform, creates a robust defensive net around Amazon’s environments. AI and machine learning are essential for managing the large-scale data, with AI models trained to detect anomalies and potential threats. Generative AI further assists threat analysts by allowing them to interact with data in plain language, enhancing decision-making efficiency.
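Mithra’s internals are not public, but the funnel metaphor maps naturally onto a chain of filters in which cheap checks discard most candidate domains before anything reaches an analyst. Every stage, name, and threshold in this sketch is invented for illustration.

```python
# Sketch of the "funnel" idea (Mithra's internals are not public):
# cheap filters run first, so only a small residue of candidate domains
# ever reaches a human analyst.
from typing import Callable, Iterable

Stage = Callable[[str], bool]  # True = still suspicious, keep in funnel

def not_on_allowlist(domain: str) -> bool:
    return domain not in {"amazon.com", "aws.amazon.com"}  # toy allowlist

def looks_machine_generated(domain: str) -> bool:
    """Crude heuristic for algorithmically generated domains."""
    label = domain.split(".")[0]
    vowels = sum(c in "aeiou" for c in label)
    return len(label) > 12 and vowels / len(label) < 0.3

def funnel(domains: Iterable[str], stages: list[Stage]) -> list[str]:
    """Apply each stage in order, keeping only domains every stage flags."""
    survivors = list(domains)
    for stage in stages:
        survivors = [d for d in survivors if stage(d)]
    return survivors  # what's left goes to a human analyst

candidates = ["amazon.com", "xkqzjvtplmnrw.net", "example.org"]
print(funnel(candidates, [not_on_allowlist, looks_machine_generated]))
# ['xkqzjvtplmnrw.net']
```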
Amazon’s proactive approach extends beyond technology. The company maintains a strong network of Chief Information Security Officers (CISOs) to facilitate rapid communication and collaboration in times of crisis. The unveiling of Mithra comes as Amazon faces scrutiny over its AI deal with startup Adept and accountability issues for hazardous products in the United States.
Telecommunications firm Lumen Technologies has secured $5 billion in new deals from cloud and tech companies for its networking and cybersecurity solutions. These agreements come as more businesses rush to adopt AI-driven technologies. One notable deal involves Microsoft, which will use Lumen’s network equipment to expand its capacity for AI workloads.
Lumen, which provides secure digital connections for data centres, announced ongoing discussions with customers to secure an additional $7 billion in sales opportunities. The surge in AI adoption has led enterprises across multiple sectors to invest heavily in building infrastructure capable of supporting AI-powered applications.
Major corporations are increasingly seeking high-capacity fibre, a resource that is becoming more valuable, and potentially scarce, as AI requirements grow. Lumen’s AI-ready infrastructure and expansive network are key factors driving this demand. According to CEO Kate Johnson, this marks the beginning of a significant opportunity that could lead to one of the largest expansions of the internet ever.
In response to rising demand, Lumen has established a new division, Custom Networks, to oversee its Private Connectivity Fabric solutions portfolio. The division aims to meet the increasing needs of various organisations for secure and reliable connectivity solutions.