Huawei Technologies announced significant advancements in operating systems and AI, achieving in 10 years what took the US and Europe 30 years. Richard Yu, chairman of Huawei’s Consumer Business Group, highlighted these achievements at a developer conference in Dongguan.
Huawei’s Harmony operating system now runs on over 900 million devices, marking substantial progress since its 2019 launch, when US restrictions cut Huawei off from Google’s Android support. Yu noted that Huawei’s Ascend AI infrastructure is now the second most popular, behind only Nvidia’s.
Why does it matter?
The rise of the Internet of Things has provided Huawei an opportunity to surpass long-time Western dominance in software. Additionally, Huawei’s smartphone business has rebounded with the Mate 60, featuring a new China-made chip. Sales of Harmony-equipped smartphones increased by 68% in the first five months of the year. In Q1 2024, HarmonyOS became the second best-selling mobile OS in China, overtaking Apple’s iOS with a 17% market share.
The annual State of Broadband report serves as a comprehensive global assessment of broadband access, affordability, and usage trends. This year’s edition, titled ‘Leveraging AI for Universal Connectivity,’ is being released in two parts. The first part, unveiled on June 20, 2024, outlines how AI applications are transforming sectors like e-government, education, healthcare, finance, and environmental management. It also examines the implications of AI for bridging or exacerbating the digital divide.
Authored by over 50 high-level Commissioners, including UN leaders, industry CEOs, and government officials, the report highlights AI’s potential to drive development while cautioning against its risks. The second part of the report, yet to be released, will provide updated data and deeper insights from the Broadband Commissioners, offering a more detailed analysis of AI’s evolving role in the digital realm.
As the Broadband Commission tracks progress towards its 2025 Advocacy Targets and prepares for future global summits, the report underscores the critical role of policymakers in maximizing the benefits of AI while ensuring equitable access to digital opportunities. It aims to inform strategic decisions that align with sustainable development goals, emphasising the need for proactive measures to harness AI responsibly and inclusively.
The EU is facing significant controversy over a proposed law that would require AI scanning of users’ photos and videos on messaging apps to detect child sexual abuse material (CSAM). Critics, including major tech companies like WhatsApp and Signal, argue that this law threatens privacy and encryption, undermining fundamental rights. They also warn that the AI detection systems could produce numerous false positives, overwhelming law enforcement.
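The false-positive warning rests on base-rate arithmetic: when genuinely abusive content is rare among billions of scanned items, even an accurate classifier flags mostly innocent material. A minimal sketch with assumed, illustrative numbers (none are from the proposal itself):

```python
def flagged_counts(total_items, prevalence, true_positive_rate, false_positive_rate):
    """Return (correct flags, false flags) produced by a scanning classifier.

    prevalence: assumed fraction of items that actually contain abusive material.
    """
    abusive = total_items * prevalence
    benign = total_items - abusive
    return abusive * true_positive_rate, benign * false_positive_rate

# Assumptions for illustration only: 1 billion messages scanned per day,
# 1-in-a-million prevalence, 99% detection rate, 0.1% false-positive rate.
correct, false = flagged_counts(1_000_000_000, 1e-6, 0.99, 0.001)
print(f"{correct:.0f} correct detections vs {false:.0f} false alarms")
```

Under these hypothetical rates, roughly 990 genuine detections would be buried under nearly a million false alarms each day, which is the scale of review burden critics say would overwhelm law enforcement.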
A recent meeting among the EU member states’ representatives failed to reach a consensus on the proposal, leading to further delays. The Belgian presidency had hoped to finalise a negotiating mandate, but disagreements among member states prevented progress. The ongoing division means that discussions on the proposal will likely continue under Hungary’s upcoming EU Council presidency.
Opponents of the proposal, including Signal President Meredith Whittaker and Proton founder Andy Yen, emphasise the dangers of mass surveillance and the need for more targeted approaches to child protection. Despite the current setback, there’s concern that efforts to push the law forward will persist, necessitating continued vigilance from privacy advocates.
Amazon plans to overhaul its Alexa service with a new project known internally as ‘Banyan,’ aiming to integrate generative AI and introduce two service tiers. The initiative, called ‘Remarkable Alexa,’ could include a monthly fee of around $5 for the premium version, which would offer advanced capabilities like composing emails and placing orders from a single prompt. That would mark Alexa’s first major update since its 2014 launch.
The project is driven by Amazon’s need to revitalise Alexa, which has struggled to turn a profit and compete with AI advancements from companies like Google, Microsoft, and Apple. CEO Andy Jassy has prioritised this update, setting an internal deadline for August. The new Alexa aims to provide more intelligent, personalised assistance across the more than half a billion Alexa-enabled devices already in use worldwide.
Despite the ambitious plans, some Amazon employees view the project as a ‘desperate attempt’ to save Alexa, citing challenges such as software inaccuracies and poor morale within the team. While Amazon hopes the AI-powered Alexa will drive more significant sales and enhance home automation, the project’s success depends on customer willingness to pay for features currently offered for free and the effectiveness of the new technology.
Google LLC and the University of Tokyo are teaming up to leverage generative AI to tackle local challenges in Japan, such as the nation’s shrinking workforce. The initiative, featuring prominent AI researcher Professor Yutaka Matsuo, will be piloted in Osaka and Hiroshima prefectures, with plans to expand successful models nationwide by 2027.
In Osaka, the project aims to address employment mismatches by using AI to suggest job opportunities and career paths that job seekers might not have considered. That approach differs from traditional job placement agencies and will draw from extensive online data to offer more tailored job suggestions.
The specific focus for Hiroshima has yet to be determined. However, Hiroshima Governor Hidehiko Yuzaki expressed a vision for AI to provide detailed responses to inquiries from people considering relocating to the prefecture, signalling AI’s potential to shape its future.
Beyond these initial projects, Google suggests that generative AI could enhance medical care on remote islands and automate agriculture, forestry, and fisheries tasks. Professor Matsuo emphasised that effectively utilising generative AI presents a significant opportunity for Japan.
A new UNESCO report highlights the growing risk of Holocaust distortion through AI-generated content as young people increasingly rely on Generative AI for information. The report, published with the World Jewish Congress, warns that AI can amplify biases and spread misinformation because many AI systems are trained on internet data that includes harmful content. Such content has already led to fabricated testimonies and distorted historical records, including deepfake images and false quotes.
The report notes that Generative AI models can ‘hallucinate’ or invent events due to insufficient or incorrect data. Examples include ChatGPT fabricating Holocaust events that never happened and Google’s Bard generating fake quotes. These kinds of ‘hallucinations’ not only distort historical facts but also undermine trust in experts and simplify complex histories by focusing on a narrow range of sources.
UNESCO calls for urgent action to implement its Recommendation on the Ethics of Artificial Intelligence, emphasising fairness, transparency, and human rights. It urges governments to adopt these guidelines and tech companies to integrate them into AI development. UNESCO also stresses the importance of working with Holocaust survivors and historians to ensure accurate representation and educating young people to develop critical thinking and digital literacy skills.
Olga Loiek, a 21-year-old University of Pennsylvania student from Ukraine, experienced a disturbing twist after launching her YouTube channel last November. Her image was hijacked and manipulated through AI to create digital alter egos on Chinese social media platforms. These AI-generated avatars, such as ‘Natasha,’ posed as Russian women fluent in Chinese, promoting pro-Russian sentiments and selling products like Russian candies. These fake accounts amassed hundreds of thousands of followers in China, far surpassing Loiek’s own online presence.
Loiek’s experience highlights a broader trend of AI-generated personas on Chinese social media, presenting themselves as supportive of Russia and fluent in Chinese while selling various products. Experts reveal that these avatars often use clips of real women without their knowledge, aiming to appeal to single Chinese men. Some posts include disclaimers about AI involvement, but the followers and sales figures remain significant.
Why does it matter?
These events underscore the ethical and legal concerns surrounding AI’s misuse. As generative AI systems like ChatGPT become more widespread, issues related to misinformation, fake news, and copyright violations are growing.
In response, governments are starting to regulate the industry. China proposed guidelines to standardise AI by 2026, while the EU’s new AI Act imposes strict transparency requirements. However, experts like Xin Dai from Peking University warn that regulations struggle to keep pace with rapid AI advancements, raising concerns about the unchecked proliferation of AI-generated content worldwide.
Anthropic, a startup backed by Google and Amazon, has introduced a new AI model named Claude 3.5 Sonnet alongside a revamped user interface to enhance productivity. The release comes just three months after the launch of its Claude 3 family of AI models. Claude 3.5 Sonnet surpasses its predecessor, Claude 3 Opus, in benchmark performance, speed, and cost efficiency, and is five times cheaper for developers.
CEO Dario Amodei emphasised AI’s flexibility and rapid advancement, noting that, unlike physical products, AI models can be quickly updated and improved. Anthropic’s latest technology is now available for free on Claude.ai and through an iOS app. Additionally, users can opt into a new feature called ‘Artifacts,’ which organises generated content in a side window, facilitating collaborative work and the production of finished products.
Anthropic’s rapid development cycle reflects the competitive nature of the AI industry, with companies like OpenAI and Google also pushing forward with significant AI advancements. The startup plans to release more models, including Claude 3.5 Opus, within the year while focusing on safety and usability.
The US House of Representatives is unlikely to pass broad AI regulation this year. House Majority Leader Steve Scalise said that he opposes extensive regulations, fearing they might hinder the US in AI development compared to China. Instead, he suggests focusing on existing laws and targeted fixes rather than creating new regulatory structures.
This stance contrasts with that of Senate Majority Leader Chuck Schumer, whose bipartisan AI working group report recommended a $32 billion annual investment in non-defense AI innovation and a comprehensive regulatory framework. The House’s bipartisan AI task force is also cautious about large-scale regulations.
Chair Rep. Jay Obernolte suggests that some targeted AI legislation might be feasible, while Rep. French Hill advocates for a sector-specific review under existing laws rather than a broad regulatory framework. This division between the House and Senate reduces the likelihood of significant AI legislation this year, but the House may consider smaller, urgent AI-related bills to address immediate issues.
Why does it matter?
The US Congress has seen a surge in AI legislation from both the Senate and House, driven by the rise of advanced AI models like ChatGPT and DeepAI, and by growing issues with ‘deepfake’ content, particularly around elections and scams. However, the division between the chambers reduces the likelihood of significant AI legislation this year, though smaller, urgent AI-related bills may still be approved.
Amazon has expanded its generative AI tools for product listings to sellers in France, Germany, Italy, Spain, and the UK. These tools, designed to streamline the process of creating and enhancing product listings, can generate product descriptions, titles, and details and fill in any missing information. The rollout follows an initial introduction in the US and a quieter launch in the UK earlier this month.
The new AI tools aim to help sellers list products more quickly by allowing them to enter relevant keywords or upload product photos, after which the AI generates a product title, bullet points, and descriptions. While the AI-generated content can be edited, Amazon advises sellers to review the generated listings thoroughly to avoid inaccuracies. The company continuously improves these tools to make them more effective and user-friendly.
Earlier this year, Amazon also introduced a tool enabling sellers to create product listings by posting a URL to their existing website, though it remains uncertain when this feature will be available outside the US. The expansion of AI tools to European markets raises regulatory concerns, particularly regarding GDPR and the Digital Services Act, which require transparency in AI applications.
Why does it matter?
Despite these regulatory challenges, Amazon’s use of generative AI marks a significant advancement in e-commerce. By leveraging diverse sources of information, Amazon’s AI models can infer product details with high accuracy, improving the quality and efficiency of product listings at scale. However, the precise data used to train these models remains unclear, highlighting ongoing concerns about data privacy and usage.