A recent survey by Teikoku Databank Ltd reveals that less than 20% of Japanese companies are utilising generative AI in their operations, primarily due to concerns about inadequate internal expertise. Despite the growing recognition of AI as a tool for enhancing business efficiency, many firms remain hesitant to adopt the technology.
Of the 4,705 primarily small and medium-sized enterprises surveyed, only 17.3% reported using generative AI. While 26.8% are considering its adoption, nearly half have yet to make plans to integrate the technology. A lack of skilled staff and operational know-how was cited by 54.1% of respondents as the biggest barrier, alongside concerns about the accuracy of AI-generated content and uncertainty about which tasks would benefit from AI.
Additionally, companies expressed worries about the need for internal rules to address potential issues such as legal responsibilities, copyright concerns, and the risk of information leaks. Among those already using AI, only 19.5% have established clear guidelines for its application, indicating a general lack of preparedness.
The survey found that information gathering is the most common use of generative AI, with nearly 60% of companies employing it. Other frequent uses include text summarisation and brainstorming during project planning.
Despite the challenges, 86.7% of businesses that have adopted AI reported positive outcomes. Yohei Sadaka of Teikoku Databank expects more companies to embrace AI as they become better equipped to manage the associated risks and establish more precise internal guidelines. The survey was conducted between 14 June and 5 July.
According to a Meta security report, Russia’s use of generative AI in online deception campaigns has so far proved largely ineffective. Meta, the parent company of Facebook and Instagram, reported that while AI-powered tactics offer malicious actors some gains in productivity and content generation, they have not significantly advanced these deception efforts. Despite growing concerns about generative AI being used to manipulate elections, Meta has successfully disrupted such influence operations.
The report highlights that Russia remains a leading source of ‘coordinated inauthentic behaviour’ on social media, particularly since its invasion of Ukraine in 2022. These operations have primarily targeted Ukraine and its allies, with expectations that as the US election nears, Russia-backed campaigns will increasingly attack candidates who support Ukraine. Meta’s approach to detecting these campaigns focuses on account behaviour rather than content alone, as influence operations often span multiple online platforms.
Meta has observed that posts on X are sometimes used to bolster fabricated content. While Meta shares its findings with other internet companies, it notes that X has significantly reduced its content moderation efforts, making it a haven for disinformation. Researchers have also raised concerns about X, now owned by Elon Musk, being a platform for political misinformation. Musk, who supports Donald Trump, has been criticised for using his influence on the platform to spread falsehoods, including sharing an AI-generated deepfake video of Vice President Kamala Harris.
SK Telecom and Nokia have announced a strategic partnership to implement AI-driven fibre sensing technology to enhance network reliability in South Korea. The collaboration, formalised through a memorandum of understanding, plans to roll out the innovative technology across SK Telecom’s national fixed network by the end of 2024.
The primary goal is to proactively monitor and detect environmental changes that could impact optical cables, addressing issues before they escalate into significant disruptions. The fibre sensing technology will utilise advanced AI and machine learning techniques to monitor various environmental factors, including earthquakes, climate fluctuations, and disruptions from nearby construction activities. By continuously analysing data from SK Telecom’s commercial networks, the system aims to identify potential threats to network stability early on.
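The continuous-monitoring idea described above can be illustrated with a simple anomaly-detection sketch. The rolling window, threshold, and simulated strain readings below are illustrative assumptions, not details of SK Telecom’s or Nokia’s actual system, which is not public.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline.

    A reading is anomalous if it lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalies.append(i)  # e.g. a strain spike from nearby digging
        history.append(value)
    return anomalies

# Simulated fibre-strain readings with one sharp spike
strain = [1.0, 1.1, 0.9, 1.0, 1.05, 1.1, 9.5, 1.0]
print(detect_anomalies(strain))  # → [6]
```

A production system would of course use far richer signals and learned models rather than a fixed z-score, but the principle is the same: establish a baseline from live network data and flag deviations before they become outages.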
The proactive approach is designed to minimise damage from line breaks and prevent service interruptions, ensuring uninterrupted connectivity for customers. The integration of these advanced technologies allows for real-time monitoring and analysis, which is crucial for maintaining the resilience of network infrastructure. Ryu Jeong-hwan, Head of Infrastructure Strategy Technology at SK Telecom, emphasised the importance of this collaboration in accelerating the adoption of AI technologies.
He noted that this partnership prepares SK Telecom for the evolving AI landscape, positioning it as a leader in innovative network solutions. Similarly, John Harrington, President of Nokia Asia Pacific, expressed enthusiasm about integrating Nokia’s sensing technology into automated networks, highlighting their commitment to providing stable services by proactively addressing potential issues.
California is set to vote on SB 1047, a bill designed to prevent catastrophic harm from AI systems. The bill targets large AI models—those costing over $100 million to train and using immense computing power—requiring their developers to implement strict safety protocols. These include emergency shut-off mechanisms and third-party audits. A new Frontier Model Division (FMD) would oversee compliance and enforce penalties for violations.
Supporters of the bill, including State Senator Scott Wiener and prominent AI researchers, contend that preemptive regulation is essential to safeguard against potential AI disasters. They believe it’s crucial to establish regulations before serious incidents occur. If approved by the Senate as expected, the bill will then go to Governor Gavin Newsom for a decision.
If passed, SB 1047 would not take effect immediately, with the FMD scheduled to be established by 2026. The bill is anticipated to face legal challenges from various stakeholders who are concerned about its implications for the tech industry.
Norway’s $1.7 trillion sovereign wealth fund, one of the world’s largest investors, is calling for improved AI governance at the board level across its portfolio companies. Carine Smith Ihenacho, the fund’s Chief Governance and Compliance Officer, highlighted the need for boards to not only understand how AI is being used but to also establish robust policies to ensure its responsible application. The fund, which holds stakes in nearly 9,000 companies, has already shared its views on AI with the boards of 60 major firms.
The fund’s call for enhanced AI competency comes as it has increased its focus on the technology sector, where it holds significant investments in major tech companies like Microsoft and Apple. The fund’s emphasis is on ensuring that AI is used responsibly, particularly in high-impact sectors such as healthcare. Smith Ihenacho stressed that boards must be able to address key questions about their AI policies and risks, even if they don’t have a dedicated AI expert.
Despite its concerns, the fund supports the responsible use of AI, recognising its potential to drive innovation and productivity. The push for better AI governance is part of the fund’s broader strategy to maintain high standards in environmental, social, and corporate governance (ESG) across its investments.
As the AI sector continues to grow, the fund’s recommendations reflect a broader trend towards increasing accountability and transparency in the use of emerging technologies.
Google is expanding its AI-generated search summaries, known as AI Overviews, to six new countries: Brazil, India, Indonesia, Japan, Mexico, and Britain. This follows a previous rollout in the US, which faced criticism for inaccuracies such as incorrect information and misleading content. The company has since refined the feature, adding restrictions to improve accuracy and reducing reliance on user-generated content from sites like Reddit.
The updated AI Overviews now include more hyperlinks to relevant websites, displayed alongside the AI-generated answers, with plans to integrate clickable links directly within the text. Google aims to balance user experience with publisher traffic, responding to concerns from the media industry about potential impacts on referral traffic.
Hema Budaraju, a senior director at Google, reported improved user satisfaction based on internal data, noting that users of the feature tend to engage more deeply with search queries. These updates come at a time when Google faces legal challenges and competition from AI advancements by rivals like Microsoft-backed OpenAI.
Ridley Scott, the acclaimed director behind the original Gladiator, is raising the stakes with Gladiator II, promising some of the biggest action sequences of his career. In a recent interview with Empire Magazine, Scott revealed that the film begins with an enormous action scene, surpassing even his work on Napoleon. Paul Mescal stars in the sequel, alongside Pedro Pascal and Denzel Washington, taking audiences on a thrilling new adventure two decades after the Oscar-winning original.
Scott embraces advanced technology, including AI, to bring his vision to life. One of the standout sequences features Paul Mescal’s character, Lucius, facing off against a massive rhino. Scott shared that he used a combination of computerisation and AI to create a lifelike model of the rhino, which was mounted on a robotic platform capable of impressive movements, adding a new layer of realism to the film’s action.
The director’s shift in attitude towards AI is notable, given his earlier concerns about the technology. Last year, Scott expressed fears about AI’s potential to disrupt society, but now he acknowledges its role in filmmaking. Despite his previous reservations, Scott seems to have found a balance between caution and innovation, using AI to push the boundaries of what’s possible on screen.
A decentralised blockchain and AI startup, Sahara AI, has successfully raised $43 million in a Series A funding round. The round saw significant backing from prominent investors including Pantera Capital, Binance Labs, and Polychain Capital. Samsung NEXT also joined the funding alongside Matrix Partners, dao5, and Geekcartel.
The funds will be utilised to expand Sahara AI’s global team, improve the platform’s performance, and grow its developer ecosystem. By leveraging its decentralised platform, Sahara AI aims to reward users, data sources, and AI trainers, rather than just the companies that create AI models. The company’s approach is seen as a shift from the traditional model, promoting transparency and fair compensation.
Founded in April 2023, Sahara AI has already partnered with leading tech firms such as Microsoft, Amazon, and Snap. These collaborations highlight the startup’s rapid growth and the increasing interest in its unique decentralised approach to AI.
As the use of AI continues to rise, concerns around data privacy, copyright, and ethical issues have become more pronounced. Sahara AI’s approach seeks to address these challenges by ensuring transparency and fairness in how AI models are developed and utilised.
Singapore’s National University Health System (NUHS) is leveraging advanced AI technologies to enhance efficiency and reduce administrative workloads in healthcare. Through the RUSSELL-GPT platform, which integrates large language models (LLMs) via Amazon Web Services (AWS) Bedrock, over a thousand clinicians now benefit from automated tasks such as drafting referrals and summarising patient data, reducing administrative time by 40%.
The NUHS team is working on event-driven Generative AI models that can perform tasks automatically when triggered by specific events, such as drafting discharge letters without needing any prompts. This approach aims to streamline processes further and reduce the administrative burden on healthcare staff.
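The event-driven pattern described above can be sketched as a small dispatcher that runs a drafting handler whenever a clinical event fires. The event name, payload fields, and prompt wording below are hypothetical; in the real RUSSELL-GPT platform the prompt would be sent to an LLM (e.g. via Amazon Bedrock), whereas this sketch simply returns the assembled prompt.

```python
from typing import Callable, Dict, List

# Registry mapping event types to handler functions.
handlers: Dict[str, List[Callable[[dict], str]]] = {}

def on_event(event_type: str):
    """Register a handler to run automatically when an event fires."""
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

def emit(event_type: str, payload: dict) -> List[str]:
    """Fire an event and collect each handler's output."""
    return [fn(payload) for fn in handlers.get(event_type, [])]

@on_event("patient_discharged")
def draft_discharge_letter(payload: dict) -> str:
    # Placeholder for the LLM call: in production this prompt would be
    # submitted to a model; here we just return the assembled prompt.
    return (f"Draft a discharge letter for {payload['patient']} "
            f"summarising: {payload['summary']}")

print(emit("patient_discharged",
           {"patient": "A. Tan", "summary": "recovered and fit for discharge"}))
```

The key property is that no clinician prompt is needed: the discharge event itself triggers the draft, which a clinician can then review and edit.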
Ensuring patient data security is a top priority for NUHS, with robust measures in place to keep data within Singapore and comply with local privacy laws. RUSSELL-GPT also includes features to mitigate the risks of AI hallucinations, with mandatory training for users on recognising and managing such occurrences.
Despite the promise of LLMs, NUHS acknowledges that these models are not a cure-all. Classical AI still plays a critical role in tasks like clustering information and providing predictive insights, underlining the need for a balanced use of both approaches in healthcare.
The English Premier League is set to enhance offside decision-making with new technology from Genius Sports. Multiple iPhones, paired with advanced machine-learning models, will assist referees in making more accurate offside calls. Traditional Video Assistant Referee (VAR) systems have faced criticism for slow reviews and inconsistent decisions, leading to this shift.
Genius Sports developed ‘Semi-Automated Offside Technology’ (SAOT) as part of its GeniusIQ system. Up to 28 iPhones will be placed around the pitch to generate 3D models of players, offering precise offside line determinations. Expensive 4K cameras will be replaced by iPhones, which capture between 7,000 and 10,000 data points per player.
Strategically positioned on custom rigs, iPhones will cover optimal areas of the pitch. Data collected will be processed by the GeniusIQ system, using predictive algorithms to assess player positions even when obscured. High framerate recording and local processing capabilities further enhance the system’s accuracy.
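Once player positions are reconstructed, the offside line itself is simple geometry. The sketch below is a deliberately simplified one-dimensional illustration of the offside law (attacker beyond both the second-rearmost defender and the ball), not Genius Sports’ actual GeniusIQ algorithm; the coordinate convention and sample positions are assumptions, and real systems track individual body parts rather than a single point per player.

```python
def offside_line(defenders):
    """Return the x-coordinate of the second-rearmost defender, measured
    as distance from the defending goal line (the rearmost is usually
    the goalkeeper)."""
    return sorted(d["x"] for d in defenders)[1]

def is_offside(attacker_x, defenders, ball_x):
    """An attacker is offside if nearer the goal line than both the
    second-rearmost defender and the ball when the ball is played."""
    line = offside_line(defenders)
    return attacker_x < line and attacker_x < ball_x

# Goalkeeper at x=1.0, two outfield defenders at 12.5 and 14.0
defenders = [{"x": 1.0}, {"x": 12.5}, {"x": 14.0}]
print(offside_line(defenders))            # → 12.5
print(is_offside(11.8, defenders, 25.0))  # → True: beyond the line
```

The hard part, which the iPhone array and machine-learning models address, is producing accurate 3D positions in the first place; the ruling itself then reduces to comparisons like these, applied per limb at the exact moment the ball is played.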
Genius Sports plans to fully implement the system in the Premier League by the end of the year. While the exact date remains unconfirmed, this marks a significant advancement in football technology, promising a more precise and consistent approach to offside rulings.