California has become the focal point in the debate over regulating AI as a new bill, SB 1047, advances through the state legislature. The bill, which has drawn significant opposition from AI startups and tech giants, proposes safety measures for large AI models to prevent ‘catastrophic harm,’ such as cyberattacks or mass casualties. The legislation would require developers to conduct safety tests and ensure that humans can shut down AI systems if they pose a danger.
Critics argue that the bill is too vague and could stifle innovation in California. Opponents, including major companies like Meta, OpenAI, and Google, have voiced concerns about its broad, ill-defined requirements. They fear it could create legal uncertainty and discourage the public release of AI models, harming the state’s vibrant tech ecosystem.
The bill has already passed several legislative hurdles but faces strong resistance as it moves toward a final vote. While its author, Democratic state senator Scott Wiener, is open to amendments, he maintains that the bill aligns with safety standards already adopted by the industry. However, many in the tech community remain unconvinced, citing potential legal and operational challenges if the bill becomes law.
Why does this matter?
The outcome of this legislative battle could have far-reaching implications for AI regulation across the United States, as California often sets the precedent for other states. As the debate continues, the tech industry is closely watching how the state will balance innovation with the need for safety and regulation in the rapidly evolving field of AI.
A new report from the UN Secretary-General’s Envoy on Technology and the International Labour Organization examines the impact of AI on the global workforce. Titled ‘Mind the AI Divide: Shaping a Global Perspective on the Future of Work,’ the report delves into how AI is reshaping labour markets, altering the AI value chain, and changing the demand for skills.
The report highlights the uneven adoption of AI across different regions, which could exacerbate global inequalities if left unaddressed. To promote inclusive growth, it emphasises the need for strategies that support AI development in regions that lag behind in technology adoption.
Strengthening international cooperation and building national capacities are identified as key steps toward creating a more equitable and resilient AI ecosystem. The report advocates for global collaboration to ensure that AI benefits are widely shared, fostering global opportunities for prosperity and human advancement.
Siemens is set to expand its management board from five to seven members in a strategic move to accelerate its transition towards a technology-focused enterprise. The company announced that Peter Koerte, head of strategy and technology, and Veronika Bienert, head of Siemens Financial Services, will join the board on 1 October. This expansion is seen as a response to Siemens’ significant scale, with 324,000 employees and €80 billion in revenue, necessitating a larger leadership team to drive growth.
Supervisory Board Chairman Jim Hagemann Snabe, who has held his position since 2018, will seek re-election for another two-year term in February. Snabe emphasised that AI is a key focus for Siemens, which aims to leverage it for industrial applications to stay ahead of competitors. Peter Koerte, in his new role on the management board, is expected to be instrumental in this AI-driven strategy.
In addition to the new appointments, Siemens confirmed that Cedrik Neike, head of the Digital Industries division, will have his contract extended by five years. The company also hinted at future leadership changes, noting that Veronika Bienert could be a potential successor to the current CFO, Ralf Thomas, who plans to retire in 2026. However, Snabe stated it was too early for a formal discussion on this succession.
Belgium’s imec, a leading semiconductor R&D firm, announced significant breakthroughs in chip-making technology at its joint laboratory with ASML. The advancements were made using ASML’s latest 350-million-euro ($382 million) chip-printing machine. In a single pass under ASML’s new ‘High NA’ tool, imec printed circuitry as small as, or smaller than, the finest currently in commercial production, suggesting that leading chipmakers can use the tool to create smaller, faster chips in the coming years.
The High NA tool’s ability to print smaller features in fewer steps is expected to save chipmakers money and justify its high price tag. ASML is the largest supplier of lithography systems, crucial for creating chip circuitry. The development indicates that the necessary chemicals and tools for the rest of the chipmaking process are also falling into place for commercial manufacturing. Imec CEO Luc Van den Hove stated that High NA will be instrumental in continuing the scaling of logic and memory technologies.
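The scaling benefit can be read straight off the Rayleigh resolution criterion. As a rough sketch, assuming EUV’s 13.5 nm wavelength and a typical process factor k₁ ≈ 0.3 (standard published figures, not numbers from imec’s announcement), moving from the 0.33 NA of current EUV tools to High NA’s 0.55 shrinks the minimum printable feature by roughly 40%:

```latex
% Rayleigh criterion: the minimum printable feature (critical dimension)
% scales inversely with numerical aperture (NA).
\mathrm{CD} = k_1 \,\frac{\lambda}{\mathrm{NA}}

% With \lambda = 13.5\,\mathrm{nm} (EUV) and an assumed typical k_1 \approx 0.3:
\mathrm{CD}_{\mathrm{NA}=0.33} \approx 0.3 \times \frac{13.5\,\mathrm{nm}}{0.33} \approx 12.3\,\mathrm{nm},
\qquad
\mathrm{CD}_{\mathrm{NA}=0.55} \approx 0.3 \times \frac{13.5\,\mathrm{nm}}{0.55} \approx 7.4\,\mathrm{nm}
```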
Intel has purchased the first two High NA tools, with a third expected to go to TSMC later this year. Intel’s director of lithography, Mark Philips, mentioned that a second tool is required for the volume of wafers and experiments needed to support a development line. Other chipmakers, including Samsung Electronics, SK Hynix, and Micron, have also ordered the High NA tool, highlighting its importance in the industry.
These developments come as Micron surpasses revenue expectations in Q3 despite a mixed outlook for Q4, and the US Commerce Department backs SK Hynix with $450 million for an AI plant. These advancements and investments underline the ongoing innovations and growth in the semiconductor sector.
X users recently discovered that their data was being used, without explicit consent, to train Grok, the AI chatbot developed by Musk’s company xAI. The complaint accuses X of failing to clearly explain its data usage practices, collecting excessive data, and possibly mishandling sensitive information. Scialdone, who filed the complaint, has called on Ireland’s Data Protection Commission (DPC) to order X to stop using personal data for AI training and to ensure compliance with the GDPR. Violations of the regulation can draw fines of up to €20 million or 4% of a company’s worldwide annual turnover, whichever is higher, making non-compliance potentially very expensive for X.
The complaint also highlights issues with X’s communication about its data processing practices. According to Scialdone, X’s privacy policy does not transparently set out the legal basis for using personal data for AI training. The policy cites ‘legitimate interest’ as a basis, which permits processing that serves a valid purpose without overriding users’ rights, but Scialdone argued that this information is not easily accessible to users. He also stressed that such legal action would encourage a consistent regulatory approach across platforms, preventing disparities in user treatment and market inequalities.
Why does this matter?
Musk’s approach to compliance with EU privacy law has been controversial, raising concerns about X’s adherence to regulatory standards. The DPC’s actions signal a potential end to Musk’s relatively unchecked run under GDPR oversight; X is now the third major tech company to face such allegations, following similar complaints against Meta and LinkedIn. X has also recently faced regulatory challenges in the Netherlands and scrutiny under the EU’s Digital Services Act, which could bring even steeper penalties for non-compliance.
Apple has introduced its new AI-powered Writing Tools in the iOS 18.1 developer beta, providing users with the ability to reformat or rewrite text using Apple’s AI models. However, the tool warns that AI-generated suggestions might not be of the highest quality when dealing with certain sensitive topics. Users will see a message alerting them when attempting to rewrite text containing swear words, references to drugs, or mentions of violence, indicating the tool wasn’t designed for such content.
Despite the warnings, the AI tool still offers suggestions even when encountering restricted words or phrases. During testing, replacing a swear word with a milder term resulted in the same AI-generated suggestion. Apple has been asked to clarify which specific topics the writing tools are not trained to handle, but no further details have been provided yet.
Apple appears to be exercising caution to avoid controversy by limiting the AI’s handling of certain terms and topics. The Writing Tools feature is not intended to generate new content from scratch but rather to assist in rewriting existing text. Apple’s cautious approach aligns with its history, as seen when it finally allowed autocorrect to learn swear words in iOS 17 after years of restrictions.
The release of these AI features also coincides with Apple’s alignment with OpenAI for future AI innovations and its support for the Biden administration’s AI safety initiatives. These steps underscore Apple’s commitment to responsible AI development while providing advanced tools to its users.
Former President Donald Trump revealed that Meta CEO Mark Zuckerberg apologised to him after Facebook mistakenly labelled a photo of Trump as misinformation. The photo, which showed Trump raising a fist after surviving an assassination attempt at a rally in Butler, Pennsylvania, was initially flagged by Meta’s AI system. Trump disclosed the apology during an interview with FOX Business’ Maria Bartiromo, stating that Zuckerberg called him twice to express regret and praise his response to the event.
Meta Vice President of Global Policy Joel Kaplan clarified that the error occurred due to similarities between a doctored image and the real photo, leading to an incorrect fact-check label. Kaplan explained that the AI system misapplied the label due to subtle differences between the two images. Meta’s spokesperson Andy Stone reiterated that Zuckerberg has not endorsed any candidate for the 2024 presidential election and that the labelling error was not due to bias.
The incident highlights ongoing challenges for Meta as it navigates content moderation and political neutrality, especially ahead of the 2024 United States election. Additionally, the assassination attempt on Trump has sparked various online conspiracy theories. Meta’s AI chatbot faced criticism for initially refusing to answer questions about the shooting, a decision attributed to the overwhelming influx of information during breaking news events. Google’s AI chatbot Gemini similarly refused to address the incident, sticking to its policy of avoiding responses on political figures and elections.
Both Meta and Google have faced scrutiny over their handling of politically sensitive content. Meta’s recent efforts to shift away from politics and focus on other areas, combined with Google’s cautious approach to AI responses, reflect the tech giants’ strategies to manage the complex dynamics of information dissemination and political neutrality in an increasingly charged environment.
The United States Commerce Department announced on Tuesday that it plans to award SK Hynix up to $450 million in grants to support the construction of an advanced packaging plant and research facility for AI products in Indiana. SK Hynix, the world’s second-largest memory chip maker, previously announced an investment of approximately $3.87 billion to build the facility, which will include a cutting-edge production line for next-generation high-bandwidth memory chips, crucial for AI systems.
In addition to the grants, the Commerce Department plans to provide $500 million in government loans for the SK Hynix project, which is expected to qualify for a 25% investment tax credit. The facility is projected to create 1,000 jobs and address a critical gap in the US semiconductor supply chain. The project is part of a broader effort to enhance US semiconductor manufacturing, supported by a $39 billion subsidy program and $75 billion in government lending authority approved by Congress in August 2022.
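For scale, a back-of-the-envelope tally of the package, assuming the full 25% credit is claimed against the announced $3.87 billion investment (an assumption for illustration, not a figure from the announcement):

```latex
\underbrace{\$0.45\,\mathrm{B}}_{\text{grants}}
+ \underbrace{\$0.50\,\mathrm{B}}_{\text{loans}}
+ \underbrace{0.25 \times \$3.87\,\mathrm{B} \approx \$0.97\,\mathrm{B}}_{\text{tax credit}}
\approx \$1.92\,\mathrm{B}
```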
Commerce Secretary Gina Raimondo highlighted the significance of securing commitments from all five major semiconductor manufacturers: TSMC, Intel, Samsung Electronics, Micron, and SK Hynix. Raimondo stated that these commitments would ensure the US has the most secure and diverse supply chain for advanced semiconductors that power AI technologies. The SK Hynix facility in West Lafayette, Indiana, will play a pivotal role in producing high-bandwidth memory chips essential for training AI systems.
The announcement comes amid rising global tensions over semiconductor supply chains, with the US expanding chip export controls and Chinese firms stockpiling high-bandwidth memory chips in response to those restrictions. SK Hynix’s CEO, Kwak Noh-Jung, expressed gratitude for the US Commerce Department’s support, emphasising the company’s excitement about bringing this transformational project to fruition. The initiative follows a previous $75 million award to Absolics, an affiliate of SK Group, for a facility in Georgia to supply advanced materials to the US semiconductor industry.
Intel, once a leader in the computer chip industry, has faced significant challenges adapting to the AI era. Seven years ago, it had an opportunity to invest in OpenAI, then an emerging non-profit focused on generative AI. Discussions between the two companies explored various investment options, including a $1 billion stake and hardware manufacturing deals, but Intel ultimately decided against it.
Then-CEO Bob Swan doubted the near-term market viability of generative AI models, leading to the decision not to invest. OpenAI sought the investment to reduce its reliance on Nvidia chips and build its own infrastructure, but Intel’s data centre unit was unwilling to produce hardware at cost. Since then, OpenAI has launched ChatGPT and reached a valuation of around $80 billion, marking a significant missed opportunity for Intel.
The decision was part of a series of strategic missteps that saw Intel fall behind in the AI chip market. The company’s stock recently plummeted, marking its worst trading day since 1974 and valuing it at under $100 billion for the first time in three decades. In contrast, rivals like Nvidia and AMD have surged ahead, capturing significant market share with AI-optimised GPU technology.
Despite recent efforts to catch up, such as developing the Gaudi AI chip and acquiring startups like Nervana Systems and Habana Labs, Intel still lags behind competitors. The company’s previous focus on CPUs over GPUs, which are better suited for AI tasks, has left it struggling to compete in the rapidly growing AI market.
Amazon has introduced an upgraded version of its Titan Image Generator, now available to AWS customers through the Bedrock generative AI platform. Titan Image Generator v2 offers enhanced capabilities, allowing users to guide image creation with reference images, edit existing visuals, remove backgrounds, and generate variations.
The new model can intelligently detect and segment multiple foreground objects. Users can now generate images based on a colour palette and shape their creations using the image conditioning feature. This model supports image conditioning by focusing on specific visual characteristics such as edges, object outlines, and structural elements. Fine-tuning with reference images, like a product or company logo, ensures consistency in the generated images.
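As a concrete illustration, here is a minimal sketch of calling the v2 model through the Bedrock runtime API with boto3, using the colour-palette guidance described above. The model ID and request schema follow AWS’s published Titan format, but treat both as assumptions to verify against the current Bedrock documentation; the prompt and colours are invented for the example.

```python
import base64
import json

import boto3

# Bedrock runtime client; the chosen region must have access to
# Titan Image Generator v2 enabled.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Text-to-image generation constrained to a colour palette -- one of the
# new v2 capabilities. Task type and parameter names follow AWS's
# documented Titan request schema (verify against current Bedrock docs).
request = {
    "taskType": "COLOR_GUIDED_GENERATION",
    "colorGuidedGenerationParams": {
        "text": "a minimalist product shot of a ceramic mug",  # example prompt
        "colors": ["#1E3A8A", "#F59E0B"],  # guiding palette as hex values
    },
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "width": 1024,
        "height": 1024,
        "cfgScale": 8.0,
    },
}

response = client.invoke_model(
    modelId="amazon.titan-image-generator-v2:0",
    body=json.dumps(request),
)

# The response body is JSON carrying base64-encoded image data.
payload = json.loads(response["body"].read())
with open("mug.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))
```

Other task types in the same schema cover the remaining v2 features, such as background removal and variation generation, each with its own parameter block.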
AWS remains vague about the data used to train Titan Image Generator models, citing a mix of proprietary and licensed data. Many vendors keep training data details secret due to competitive and legal concerns. AWS offers an indemnification policy to cover customers in case of any copyrighted content being unintentionally reproduced by the model.
Amazon CEO Andy Jassy expressed strong confidence in generative AI, despite rising costs and enterprise hesitation. He highlighted the technology’s rapid growth potential, emphasising that its future development will take place primarily in the cloud.