Cybersecurity measures ramp up for 2024 Olympics

Next month, athletes worldwide will converge on Paris for the eagerly awaited 2024 Summer Olympics. While competitors prepare for their chance to win coveted medals, organisers are focused on defending against cybersecurity threats. Over the past decade, cyberattacks have become more sophisticated due to the misuse of AI. However, the responsible application of AI offers a promising countermeasure.

Sports organisations are increasingly partnering with AI-driven companies like Visual Edge IT, which specialises in risk reduction. Although Visual Edge IT does not work directly with the Olympics, cybersecurity expert Peter Avery shared insights on how Olympic organisers can mitigate risks. Avery emphasised the importance of robust technical, physical, and administrative controls to protect against cyber threats. He highlighted the need for a comprehensive incident response plan and the necessity of preparing for potential disruptions, such as internet overload and infrastructure attacks.

The advent of AI has revolutionised both productivity and cybercrime. Avery noted that AI allows cybercriminals to automate attacks, making them more efficient and widespread. He stressed that a solid incident response plan and regular simulation exercises are crucial for managing cyber threats. As Avery pointed out, the question is not if a cyberattack will happen but when.

The International Olympic Committee (IOC) is also embracing AI responsibly within sports. IOC President Thomas Bach announced a plan to use AI to identify talent, personalise training, and improve judging fairness. The Paris Summer Olympics, running from 26 July to 11 August, will be a significant test of these cybersecurity and AI initiatives.

PayPal appoints new CTO amid recent AI services launch

PayPal has appointed Srini Venkatesan as its new Chief Technology Officer (CTO) to lead its artificial intelligence initiatives. Venkatesan will be in charge of areas such as AI and machine learning, information security, and product engineering. In his previous position at Walmart, Venkatesan developed platforms to support the retail giant, including aspects of the Walmart+ subscription service. He has also worked at Yahoo and eBay, among others.

Why is this important?

Like others in the finance field, PayPal has sought to embrace AI to improve its services. In January, the company announced new AI-driven tools, including some designed to make payment checkouts smoother. Other tools target customers using their buying history rather than their browsing history.

‘Smart Receipts’ uses buying history to recommend products, cashback and other deals on receipts. Similarly, ‘Advanced Offers Platform’ uses AI to deliver targeted promotions based on the customer’s purchase history with any previous merchant. PayPal says it is shifting from general ads to personalised ‘offers’, improving the customer experience.

In an article in PayPal’s newsroom, the company said it is adding simple privacy controls to let customers choose whether to share their data with merchants for personalised offers. However, given that advertising targeted by browsing history has already raised privacy concerns, targeting based on buying history is likely to do the same. In his new role, Venkatesan will be expected both to implement the technology and to respond to these concerns.

Oracle commits $1 billion to bolster AI and cloud services in Spain

Oracle has announced that it will invest $1 billion over the next decade in AI and cloud computing in Spain to meet growing demand for its services in the country. The investment will fund a new cloud region, enabling customers to move workloads from their own data centres to Oracle Cloud Infrastructure.

It will also help customers comply with regulations such as the EU’s Digital Operational Resilience Act (DORA) and the European Outsourcing Guidelines. The initiative marks Oracle’s third cloud region in Madrid, with Telefonica España as project partner. Albert Triola, head of Oracle Spain, emphasised the company’s dedication to supporting Spanish organisations across various sectors and sizes, aiming to accelerate the adoption of cloud technologies to enhance business performance.

José Luis Escrivá, Spain’s minister for digital transformation and public administration, noted that the investment will empower Spanish enterprises and public sector entities to innovate with AI and accelerate their digital transformation.

Why does it matter?

Oracle’s optimistic forecast for fiscal 2025 revenue growth surpassed analysts’ expectations, reflecting the growing demand for its AI-driven cloud services.

Furthermore, Oracle announced collaborations with OpenAI and Google Cloud to expand its cloud infrastructure offerings, underscoring its commitment to enhancing its cloud services portfolio.

Huawei reports significant advances in AI and operating systems development

Huawei Technologies announced significant advances in operating systems and AI, claiming to have achieved in 10 years what took the US and Europe 30. Richard Yu, chairman of Huawei’s Consumer Business Group, highlighted these achievements at a developer conference in Dongguan.

Huawei’s Harmony operating system now runs on over 900 million devices, marking substantial progress since its 2019 launch, when US restrictions cut Huawei off from Google’s Android support. Yu noted that Huawei’s Ascend AI infrastructure is now the second most popular after Nvidia’s.

Why does it matter?

The rise of the Internet of Things has given Huawei an opportunity to challenge long-standing Western dominance in software. Additionally, Huawei’s smartphone business has rebounded with the Mate 60, which features a new China-made chip. Sales of Harmony-equipped smartphones rose 68% in the first five months of the year, and in Q1 2024 HarmonyOS became the second best-selling mobile OS in China, overtaking Apple’s iOS with a 17% market share.

Annual State of Broadband report highlights AI impact

The annual State of Broadband report serves as a comprehensive global assessment of broadband access, affordability, and usage trends. This year’s edition, titled ‘Leveraging AI for Universal Connectivity,’ is being released in two parts. The first part, unveiled on 20 June 2024, outlines how AI applications are transforming sectors like e-government, education, healthcare, finance, and environmental management. It also examines the implications of AI for bridging or exacerbating the digital divide.

Authored by over 50 high-level Commissioners, including UN leaders, industry CEOs, and government officials, the report highlights AI’s potential to drive development while cautioning against its risks. The second part of the report, yet to be released, will provide updated data and deeper insights from the Broadband Commissioners, offering a more detailed analysis of AI’s evolving role in the digital realm.

As the Broadband Commission tracks progress towards its 2025 Advocacy Targets and prepares for future global summits, the report underscores the critical role of policymakers in maximising the benefits of AI while ensuring equitable access to digital opportunities. It aims to inform strategic decisions that align with sustainable development goals, emphasising the need for proactive measures to harness AI responsibly and inclusively.

Amazon plans to revamp Alexa with generative AI

Amazon plans to overhaul its Alexa service with a new project known internally as ‘Banyan,’ aiming to integrate generative AI and introduce two service tiers. The initiative, called ‘Remarkable Alexa,’ could include a monthly fee of around $5 for the premium version, which would offer advanced capabilities like composing emails and placing orders from a single prompt. That would mark Alexa’s first major update since its 2014 launch.

The project is driven by Amazon’s need to revitalise Alexa, which has struggled to turn a profit and to keep pace with AI advances from companies like Google, Microsoft, and Apple. CEO Andy Jassy has prioritised the update, setting an internal deadline of August. The new Alexa aims to provide more intelligent, personalised assistance by bringing generative AI to the more than half a billion Alexa-enabled devices worldwide.

Despite the ambitious plans, some Amazon employees view the project as a ‘desperate attempt’ to save Alexa, citing challenges such as software inaccuracies and poor morale within the team. While Amazon hopes the AI-powered Alexa will drive more significant sales and enhance home automation, the project’s success depends on customer willingness to pay for features currently offered for free and the effectiveness of the new technology.

Google and University of Tokyo to launch AI initiative for regional solutions

Google LLC and the University of Tokyo are teaming up to leverage generative AI to tackle local challenges in Japan, such as the nation’s shrinking workforce. The initiative, featuring prominent AI researcher Professor Yutaka Matsuo, will be piloted in Osaka and Hiroshima prefectures, with plans to expand successful models nationwide by 2027.

In Osaka, the project aims to address employment mismatches by using AI to suggest job opportunities and career paths that job seekers might not have considered. That approach differs from traditional job placement agencies and will draw from extensive online data to offer more tailored job suggestions.

The specific focus for Hiroshima has yet to be determined. However, Hiroshima Governor Hidehiko Yuzaki expressed a vision of AI providing detailed responses to inquiries about relocating to the prefecture, signalling AI’s potential to shape its future.

Beyond these initial projects, Google suggests that generative AI could enhance medical care on remote islands and automate agriculture, forestry, and fisheries tasks. Professor Matsuo emphasised that effectively utilising generative AI presents a significant opportunity for Japan.

Ukrainian student’s identity misused by AI on Chinese social media platforms

Olga Loiek, a 21-year-old University of Pennsylvania student from Ukraine, experienced a disturbing twist after launching her YouTube channel last November. Her image was hijacked and manipulated through AI to create digital alter egos on Chinese social media platforms. These AI-generated avatars, such as ‘Natasha,’ posed as Russian women fluent in Chinese, promoting pro-Russian sentiments and selling products like Russian candies. These fake accounts amassed hundreds of thousands of followers in China, far surpassing Loiek’s own online presence.

Loiek’s experience highlights a broader trend of AI-generated personas on Chinese social media, presenting themselves as supportive of Russia and fluent in Chinese while selling various products. Experts reveal that these avatars often use clips of real women without their knowledge, aiming to appeal to single Chinese men. Some posts include disclaimers about AI involvement, but the followers and sales figures remain significant.

Why does it matter?

These events underscore the ethical and legal concerns surrounding AI’s misuse. As generative AI systems like ChatGPT become more widespread, issues related to misinformation, fake news, and copyright violations are growing.

In response, governments are starting to regulate the industry. China proposed guidelines to standardise AI by 2026, while the EU’s new AI Act imposes strict transparency requirements. However, experts like Xin Dai from Peking University warn that regulations struggle to keep pace with rapid AI advancements, raising concerns about the unchecked proliferation of AI-generated content worldwide.

Anthropic unveils Claude 3.5 Sonnet AI model

Anthropic, a startup backed by Google and Amazon, has introduced a new AI model named Claude 3.5 Sonnet alongside a revamped user interface to enhance productivity. The release comes just three months after the launch of its Claude 3 family of AI models. Claude 3.5 Sonnet surpasses its predecessor, Claude 3 Opus, in benchmark exam performance, speed, and cost efficiency, being five times cheaper for developers.

CEO Dario Amodei emphasised AI’s flexibility and rapid advancement, noting that, unlike physical products, AI models can be quickly updated and improved. Anthropic’s latest technology is now available for free on Claude.ai and through an iOS app. Additionally, users can opt into a new feature called ‘Artifacts,’ which organises generated content in a side window, facilitating collaborative work and the production of finished products.

Anthropic’s rapid development cycle reflects the competitive nature of the AI industry, with companies like OpenAI and Google also pushing forward with significant AI advancements. The startup plans to release more models, including Claude 3.5 Opus, within the year while focusing on safety and usability.

US House of Representatives unlikely to pass broad AI regulations this year

The US House of Representatives is unlikely to pass broad AI regulation this year. House Majority Leader Steve Scalise said he opposes extensive regulations, fearing they might put the US at a disadvantage to China in AI development. Instead, he suggests relying on existing laws and targeted fixes rather than creating new regulatory structures.

This stance contrasts with that of Senate Majority Leader Chuck Schumer, whose bipartisan AI working group recommended a $32 billion annual investment in non-defense AI innovation and a comprehensive regulatory framework. The House’s bipartisan AI task force is also cautious about large-scale regulations.

Chair Rep. Jay Obernolte suggests that some targeted AI legislation might be feasible, while Rep. French Hill advocates for a sector-specific review under existing laws rather than a broad regulatory framework. This division between the House and Senate reduces the likelihood of significant AI legislation this year, but the House may consider smaller, urgent AI-related bills to address immediate issues.

Why does it matter?

The surge in AI legislation in both the Senate and the House has been spurred by the rise of advanced AI models like ChatGPT and DeepAI, and by growing problems with ‘deepfake’ content, particularly around elections and scams. Even so, the split between the chambers makes significant AI legislation unlikely this year, though smaller, urgent AI-related bills may still be approved.