Tech giants promote AI-powered PCs

Tech giants like Microsoft and Qualcomm are aggressively promoting a new category of computers dubbed ‘AI PCs,’ which boast integrated AI capabilities. These machines feature dedicated processors designed to enhance AI functions such as personal assistants and task automation, distinguishing them from standard laptops and desktops.

Despite the hype, only a tiny fraction—just 3%—of PCs shipped this year meet Microsoft’s stringent processing power criteria to qualify as AI PCs, according to IDC. Analysts remain sceptical about the practical utility of these AI features, noting limited software support beyond Microsoft’s ecosystem. Major developers like Adobe, Salesforce, and SentinelOne have hesitated to optimise their applications for AI PCs, preferring to deliver AI capabilities via cloud services.

While some smaller software firms have tailored their apps for on-device AI, broader adoption hurdles persist. Initial reviews highlight that current AI functionalities on these PCs, such as eye-tracking during video calls and generative AI content creation, are often seen as gimmicks rather than transformative tools. Furthermore, privacy concerns delayed the rollout of flagship AI features like Microsoft’s Recall.

Why does this matter?

Despite challenges, industry players are optimistic about the potential of AI PCs to rejuvenate the stagnant PC market. With superior battery life and promises of enhanced performance, these devices aim to entice consumers who last upgraded at the pandemic’s onset. Market data from Circana indicates early traction, particularly among tech-savvy users and content creators.

Looking ahead, Qualcomm, vying to challenge Intel’s dominance in PCs, plans to aggressively market its Snapdragon processors for AI PCs. Intel and AMD are expected to release competing models later this year, addressing compatibility issues that currently limit adoption. Industry analysts project that AI PCs will comprise about 20% of new PC shipments by 2026, signalling a slow but steady shift towards AI-enhanced computing.

OpenAI blocks Chinese users amid growing tech rivalry

At the recent World AI Conference in Shanghai, China’s leading AI company, SenseTime, unveiled its latest model, SenseNova 5.5, which can identify objects, provide feedback on drawings, and summarise text. Comparable to OpenAI’s GPT-4, SenseNova 5.5 aims to attract users with 50 million free tokens and free migration support from OpenAI services. The launch of SenseNova 5.5 comes at a crucial time, as OpenAI will block Chinese users from accessing its tools starting 9 July, intensifying the rivalry between US and Chinese AI firms.

OpenAI’s decision to block Chinese users has sparked concern in China’s AI community, raising questions about equitable access to AI technologies. However, it has also created an opportunity for Chinese companies like SenseTime, Baidu, Zhipu AI, and Tencent Cloud to attract new users with free tokens and migration services, accelerating the development of Chinese AI companies that are already engaged in fierce competition.

Why does this matter?

The US-China tech rivalry has led to US restrictions on exporting advanced semiconductors to China, impacting the AI industry’s growth. While Chinese companies are advancing quickly, the US sanctions are causing shortages in computing capacity, as seen in Kuaishou’s restrictions on access to its AI model. Despite these challenges, Chinese commentators view OpenAI’s departure as a chance for China to achieve greater technological self-reliance and independence.

AI cybersecurity in devices deemed high-risk by European Commission

AI-based cybersecurity and emergency services components in internet-connected devices are expected to be classified as high-risk under the AI Act, according to a European Commission document seen by Euractiv. The document, which interprets the relationship between the 2014 Radio Equipment Directive (RED) and the AI Act, marks the first known instance of how AI-based safety components will be treated under the new regulations. The RED pertains to wireless devices, including those using Wi-Fi and Bluetooth, beyond traditional radios.

Under the AI Act, high-risk AI systems will be subject to extensive testing, risk management, security measures, and documentation. The Act includes a list of use cases where AI deployment is automatically considered high-risk, such as in critical infrastructure and law enforcement. It also sets criteria for categorising other high-risk products, requiring third-party conformity assessments in line with sector-specific regulations. AI cybersecurity and emergency services components meet these criteria under the RED, thus being classified as high-risk.

Even in cases where the RED allows for self-assessment compliance with harmonised standards, these AI-based components are still deemed high-risk. The AI Act references numerous sectoral regulations that could classify AI products as high-risk, extending beyond electronics to medical devices, aviation, heavy machinery, and personal watercraft. The preliminary interpretation suggests that self-assessment standards are insufficient to remove the high-risk classification from AI products in these industries.

The AI Act imposes significant requirements on high-risk AI systems, while those not in this category face only minor transparency obligations. The Commission’s document is a preliminary interpretation, and the full application of the AI Act, which spans over 500 pages, remains to be seen. Despite initial estimates that 5-15% of AI systems would be classified as high-risk, a 2022 survey of EU-based startups indicated that 33-50% of these startups consider their products high-risk. Further interpretive work is needed to understand how the AI Act will impact various sectors.

Why does this matter?

This preliminary interpretation highlights the European Commission’s stringent approach to regulating AI-based cybersecurity and emergency services components in internet-connected devices. By classifying these components as high-risk, the AI Act mandates rigorous testing, security measures, and documentation, ensuring robust safety standards. The move underscores the EU’s commitment to protecting critical infrastructure and sensitive data, and signals significant regulatory implications for various industries, potentially influencing global standards and practices in AI technology.

AI app aids pastors with sermons

A new AI platform called Pulpit AI, designed to assist pastors in delivering their sermons more effectively, is set to launch on 22 July. Created by Michael Whittle and Jake Sweetman, the app allows pastors to upload their sermons in various formats such as audio, video, manuscript, or outline. The app generates content like devotionals, discussion questions, newsletters, and social media posts. The aim is to ease the workload of church staff while enhancing communication with the congregation.

Whittle and Sweetman, who have been friends for over a decade, developed the idea from their desire to extend the impact of a sermon beyond Sunday services. They believe Pulpit AI can significantly benefit pastors who invest substantial time preparing sermons by repurposing their content for broader use without additional effort. This AI tool does not create sermons but generates supplementary materials based on the original sermon, ensuring the content remains faithful to the pastor’s message.

Despite the enthusiasm, some, like Dr Charlie Camosy from Creighton University, urge caution in adopting AI within the church. He suggests that while AI can be a valuable tool, it is crucial to consider its long-term implications on human interactions and the traditional processes within the church. Nonetheless, pastors who have tested Pulpit AI, such as Pastor Adam Mesa of Patria Church, report significant benefits in managing their communication and expanding their outreach efforts.

Researchers develop a method to improve reward models using LLMs for synthetic critiques

Researchers from Cohere and the University of Oxford have introduced an innovative method to enhance reward models (RMs) in reinforcement learning from human feedback (RLHF) by leveraging large language models (LLMs) for synthetic critiques. The novel approach aims to reduce the extensive time and cost associated with human annotation, which is traditionally required for training RMs to predict scores based on human preferences.

In their paper, ‘Improving Reward Models with Synthetic Critiques’, the researchers detailed how LLMs could generate critiques that evaluate the relationship between prompts and generated outputs, predicting scalar rewards. These synthetic critiques improved the performance of reward models on various benchmarks by providing additional feedback on aspects like instruction following, correctness, and style, leading to better assessment and scoring of language models.
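To make the idea concrete, below is a minimal, self-contained Python sketch of how a preference pair might be enriched with a synthetic critique before being scored by a reward model. The helper names, prompt wording, and canned critique text are illustrative assumptions, not the authors’ implementation; in the real pipeline the critique would come from a strong LLM, and the concatenated sequence would be fed to a reward model trained with a standard preference loss.

```python
"""Sketch: critique-conditioned reward-model inputs, in the spirit of
'Improving Reward Models with Synthetic Critiques'. All helpers here
are illustrative stubs, not the authors' code."""

from dataclasses import dataclass


@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # response preferred by the annotator
    rejected: str  # dispreferred response


def synthetic_critique(prompt: str, response: str) -> str:
    """Stand-in for a call to a critique-generating LLM.

    In practice this would ask a strong model to assess instruction
    following, correctness, and style; here it returns a canned string
    so the sketch runs end to end."""
    return (f"Assessment of the response to '{prompt}': "
            "check instruction following, correctness, and style.")


def build_rm_input(prompt: str, response: str) -> str:
    """Concatenate prompt, response, and critique into one sequence.

    The reward model then predicts a scalar reward from this enriched
    context rather than from the prompt/response pair alone."""
    critique = synthetic_critique(prompt, response)
    return f"Prompt: {prompt}\nResponse: {response}\nCritique: {critique}"


pair = PreferencePair(
    prompt="Summarise the article in one sentence.",
    chosen="The article argues that synthetic critiques improve reward models.",
    rejected="Reward models are language models.",
)

# Each side of the pair gets its own critique-enhanced input; the study
# reports one high-quality enhanced pair can be worth roughly forty plain ones.
for response in (pair.chosen, pair.rejected):
    print(build_rm_input(pair.prompt, response))
```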

The study highlighted that high-quality synthetic critiques significantly increased data efficiency, with one enhanced preference pair proving as valuable as forty non-enhanced pairs. The approach makes the training process more cost-effective and has the potential to match or surpass traditional reward models, as demonstrated by GPT-4’s performance in certain benchmarks.

As the field continues to explore alternatives to RLHF, including reinforcement learning from AI feedback (RLAIF), this research indicates a promising shift towards AI-based critiquing, potentially transforming how major AI players such as Google, OpenAI, and Meta align their large language models.

AI’s digital twin technology revolution

The AI industry is investing heavily in digital twin technology, creating virtual replicas of humans and objects for research. Tech companies believe these digital twins can unlock AI’s full potential by mirroring our physiologies, our personalities, and the objects around us. Digital twins range from models of complex phenomena, such as organisms or weather systems, to video avatars of individuals. The technology promises to revolutionise healthcare by providing personalised treatment, accelerating drug development, and deepening our understanding of environments and objects.

Gartner predicts the global market for digital twins will surge to $379 billion by 2034, driven mainly by healthcare, a segment expected to reach $110.1 billion by 2028. The concept of digital twins began in engineering and manufacturing but has expanded thanks to improved data storage and connectivity, making the technology more accessible and versatile.

One notable example is LinkedIn co-founder Reid Hoffman, who created his own digital twin, REID.AI, using two decades of his content. Hoffman demonstrated the technology’s potential by releasing videos of himself conversing with the twin and even sending it to an on-stage interview. While most digital twins focus on statistical applications, their everyday utility is evident in projects like Twin Health, which uses sensors to monitor patients’ health and provide personalised advice. The technology has shown promise in helping diabetic patients reverse their condition and reduce their reliance on medication.

Like the broader AI boom, the digital twin market starts with impressive demonstrations but aims to deliver significant practical benefits, especially in healthcare and personalised services.

Samsung wins AI chip order from Japan

Samsung Electronics has announced that it secured an order from Japanese AI company Preferred Networks to manufacture chips using its advanced 2-nanometre foundry process and advanced chip packaging service. The deal marks Samsung’s first disclosed order for its cutting-edge 2-nanometre manufacturing process, although the order size remains undisclosed.

The chips will employ gate-all-around (GAA) transistor architecture and integrate multiple chips into a single package to enhance connection speed and reduce size. According to Preferred Networks’ VP Junichiro Makino, the chips, designed by South Korea’s Gaonchips, will support high-performance computing hardware for generative AI technologies, including large language models.

The development highlights Samsung’s advancements in semiconductor technology and its role in supporting innovative AI applications.

IBM’s GenAI center to advance AI technology in India

IBM has launched its GenAI Innovation Center in Kochi, designed to help enterprises, startups, and partners explore and develop generative AI technology. The centre aims to accelerate AI innovation, increase productivity, and enhance generative AI expertise in India, addressing challenges organisations face when transitioning from AI experimentation to deployment.

The centre will provide access to IBM experts and technologies, assisting in building, scaling, and adopting enterprise-grade AI. It will utilise InstructLab, a technology developed by IBM and Red Hat for enhancing Large Language Models (LLMs) with client data, along with IBM’s ‘watsonx’ AI and data platform and AI assistant technologies. The centre will be part of the IBM India Software Lab in Kochi and managed by IBM’s technical experts.

IBM highlights that the centre will nurture a community that uses generative AI to tackle societal and business challenges, including sustainability, public infrastructure, healthcare, education, and inclusion. The initiative underscores IBM’s commitment to fostering AI innovation and addressing complex integration issues in the business landscape.

Why does this matter?

IBM’s new GenAI hub stems from a significant investment in advancing AI technology in India. This centre is set to play a crucial role in accelerating AI innovation, boosting productivity, and enhancing generative AI expertise, which is critical for the growth of enterprises, startups, and partners. By providing access to advanced AI technologies and expert knowledge, the centre aims to overcome the challenges of AI integration and deployment, thereby fostering a robust AI ecosystem. Furthermore, the initiative underscores the potential of generative AI to address pressing societal and business challenges, contributing to advancements in sustainability, public infrastructure, healthcare, education, and inclusion.

Microsoft committed to expanding AI in education in Hong Kong

US tech giant Microsoft is committed to offering generative AI services in Hong Kong through educational initiatives, despite OpenAI’s access restrictions in the city and mainland China. Microsoft has been collaborating with the Education University of Hong Kong Jockey Club Primary School since last year to offer AI services.

About 220 students in grades 5 and 6 used Microsoft’s chatbot and text-to-image tools in science classes. Principal Elsa Cheung Kam Yan noted that AI enhances learning by broadening students’ access to information and allowing exploration beyond textbooks. Vice-Principal Philip Law Kam Yuen added that the school, which has collaborated with Microsoft Hong Kong for 12 years, plans to extend AI usage to more classes.

Microsoft also has agreements with eight Hong Kong universities to promote AI services. Fred Sheu, Microsoft’s national technology officer in Hong Kong, reaffirmed the company’s commitment to maintaining its Azure AI services, which use OpenAI’s models, emphasising that OpenAI’s API restrictions will not affect the company. Microsoft’s investment in OpenAI reportedly entitles it to up to 49% of the profits from OpenAI’s for-profit arm. As all government-funded universities in Hong Kong have already acquired the Azure OpenAI service, they count as qualified users. Sheu also said that Microsoft intends to extend the service to all schools in Hong Kong over the next few years.

AI impact in music production: Nearly 25% of producers embrace innovation

A recent survey by Tracklib reveals that 25% of music producers now integrate AI into their creative processes, marking significant adoption of the technology within the industry. Most producers, however, remain resistant to AI, citing the fear of losing creative control as the primary barrier.

Among those using AI, the survey found that most employ it for stem separation (73.9%) rather than full song creation, which is used by only a small fraction (3%). Concerns among non-users primarily revolve around artistic integrity (82.2%) and doubts about AI’s ability to maintain quality (34.5%), with additional concerns including cost and copyright issues.

Interestingly, the survey highlights a stark divide between perceptions of assistive AI, which aids in music creation, and generative AI, which directly generates elements or entire songs. While some producers hold a positive view of assistive AI, generative AI faces stronger opposition, especially among younger respondents.

Overall, the survey underscores a cautious optimism about AI’s future impact on music production, with 70% of respondents expecting it to have a significant influence going forward. Despite current reservations, Tracklib predicts continued adoption of music AI, noting it is entering the “early majority” phase of adoption according to technology adoption models.