AI stock surge prompts profit-taking advice

Strategists at Citigroup Inc. are advising investors to consider cashing in on the recent surge in AI stocks. Their analysis highlights strong investor sentiment towards AI-exposed equities, reminiscent of levels last seen in 2019. Drew Pettit’s team at Citi notes that while there is no clear bubble in AI stocks overall, the rapid rise in specific names raises concerns about increased volatility ahead.

This year, the AI frenzy has driven Nvidia Corp. to briefly claim the title of the world’s most valuable company, while Taiwan Semiconductor Manufacturing Co. surpassed $1 trillion in market value. Citi suggests focusing on profit-taking, particularly among chip-makers, and diversifying investments across the broader AI sector.

Despite cautious signals from Citi, many market observers believe the AI momentum will persist through the year’s second half. Bloomberg News reports a split among investors, some favouring established giants like Nvidia, while others look to secondary beneficiaries such as utilities and infrastructure providers.

Acknowledging the optimism surrounding AI stocks, Citi’s strategists emphasise that current prices already imply high expectations.

Singapore advocates for international AI standards

Singapore’s digital development minister, Josephine Teo, has expressed concerns about the future of AI governance, emphasising the need for an internationally agreed-upon framework. Speaking at the Reuters NEXT conference in Singapore, Teo highlighted that while Singapore is more excited than worried about AI, the absence of global standards could lead to a ‘messy’ future.

Teo pointed out the need for specific legislation to address challenges posed by AI, particularly the use of deepfakes during elections. She stressed that implementing clear and effective laws will be crucial as AI technology advances, in order to manage its impact on society and ensure responsible use.

Singapore’s proactive stance on AI reflects its commitment to balancing technological innovation with necessary regulatory measures. The country aims to harness the benefits of AI while mitigating potential risks, especially in critical areas like electoral integrity.

AI cybersecurity in devices deemed high-risk by European Commission

AI-based cybersecurity and emergency services components in internet-connected devices are expected to be classified as high-risk under the AI Act, according to a European Commission document seen by Euractiv. The document, which interprets the relationship between the 2014 Radio Equipment Directive (RED) and the AI Act, offers the first known indication of how AI-based safety components will be treated under the new regulations. The RED pertains to wireless devices, including those using Wi-Fi and Bluetooth, beyond traditional radios.

Under the AI Act, high-risk AI systems will be subject to extensive testing, risk management, security measures, and documentation. The Act includes a list of use cases where AI deployment is automatically considered high-risk, such as in critical infrastructure and law enforcement. It also sets criteria for categorising other high-risk products, requiring third-party conformity assessments in line with sector-specific regulations. AI cybersecurity and emergency services components meet these criteria under the RED, thus being classified as high-risk.

Even in cases where the RED allows for self-assessment compliance with harmonised standards, these AI-based components are still deemed high-risk. The AI Act references numerous sectoral regulations that could classify AI products as high-risk, extending beyond electronics to medical devices, aviation, heavy machinery, and personal watercraft. The preliminary interpretation suggests that self-assessment standards are insufficient to remove the high-risk classification from AI products in these industries.

The AI Act imposes significant requirements on high-risk AI systems, while those not in this category face only minor transparency obligations. The Commission’s document is a preliminary interpretation, and the full application of the AI Act, which spans over 500 pages, remains to be seen. Despite initial estimates that 5-15% of AI systems would be classified as high-risk, a 2022 survey of EU-based startups indicated that 33-50% of these startups consider their products high-risk. Further interpretive work is needed to understand how the AI Act will impact various sectors.

Why does it matter?

The Commission’s interpretation highlights its stringent approach to regulating AI-based cybersecurity and emergency services in internet-connected devices. By classifying these components as high-risk, the AI Act mandates rigorous testing, security measures, and documentation, ensuring robust safety standards. This move underscores the EU’s commitment to protecting critical infrastructure and sensitive data and signals significant regulatory implications for various industries, potentially influencing global standards and practices in AI technology.

AI app aids pastors with sermons

A new AI platform called Pulpit AI, designed to assist pastors in delivering their sermons more effectively, is set to launch on 22 July. Created by Michael Whittle and Jake Sweetman, the app allows pastors to upload their sermons in various formats such as audio, video, manuscript, or outline. The app generates content like devotionals, discussion questions, newsletters, and social media posts. The aim is to ease the workload of church staff while enhancing communication with the congregation.

Whittle and Sweetman, who have been friends for over a decade, developed the idea from their desire to extend the impact of a sermon beyond Sunday services. They believe Pulpit AI can significantly benefit pastors who invest substantial time preparing sermons by repurposing their content for broader use without additional effort. This AI tool does not create sermons but generates supplementary materials based on the original sermon, ensuring the content remains faithful to the pastor’s message.

Despite the enthusiasm, some, like Dr Charlie Camosy from Creighton University, urge caution in adopting AI within the church. He suggests that while AI can be a valuable tool, it is crucial to consider its long-term implications on human interactions and the traditional processes within the church. Nonetheless, pastors who have tested Pulpit AI, such as Pastor Adam Mesa of Patria Church, report significant benefits in managing their communication and expanding their outreach efforts.

Researchers develop a method to improve reward models using LLMs for synthetic critiques

Researchers from Cohere and the University of Oxford have introduced a method to enhance reward models (RMs) in reinforcement learning from human feedback (RLHF) by leveraging large language models (LLMs) for synthetic critiques. The approach aims to reduce the extensive time and cost of human annotation, which is traditionally required for training RMs to predict scores based on human preferences.

In their paper, ‘Improving Reward Models with Synthetic Critiques’, the researchers detailed how LLMs could generate critiques that evaluate the relationship between prompts and generated outputs, predicting scalar rewards. These synthetic critiques improved the performance of reward models on various benchmarks by providing additional feedback on aspects like instruction following, correctness, and style, leading to better assessment and scoring of language models.
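The idea of enriching a reward model’s input with an LLM-generated critique can be sketched in a few lines of Python. This is a minimal illustration rather than the paper’s implementation: the function names (`generate_critique`, `reward_input`) are invented for this example, and a simple heuristic stands in for the actual LLM critique call.

```python
def generate_critique(prompt: str, response: str) -> str:
    """Stand-in for an LLM call that critiques a response along
    dimensions such as instruction following, correctness, and style."""
    issues = []
    # Crude proxy for instruction following: does the response echo the task?
    if prompt.lower().split()[0] not in response.lower():
        issues.append("may not address the instruction directly")
    if len(response.split()) < 5:
        issues.append("response is very short")
    return "; ".join(issues) or "follows the instruction and is well formed"


def reward_input(prompt: str, response: str) -> str:
    """Build the text the reward model scores: instead of the bare
    prompt-response pair, it sees the pair plus a synthetic critique,
    giving the scalar-reward head extra signal to condition on."""
    critique = generate_critique(prompt, response)
    return f"Prompt: {prompt}\nResponse: {response}\nCritique: {critique}"


# A weak response picks up critical feedback before scoring.
enriched = reward_input("Summarise the article.", "OK.")
```

In the paper’s setup the critique would come from a strong LLM and the enriched text would be fed to a trained reward model; the sketch only shows where the critique slots into the scoring input.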

The study highlighted that high-quality synthetic critiques significantly increased data efficiency, with one enhanced preference pair proving as valuable as forty non-enhanced pairs. The approach makes the training process more cost-effective and has the potential to match or surpass traditional reward models, as demonstrated by GPT-4’s performance in certain benchmarks.

As the field continues to explore alternatives to RLHF, including reinforcement learning from AI feedback (RLAIF), this research indicates a promising shift towards AI-based critiquing, potentially transforming how major AI players such as Google, OpenAI, and Meta align their large language models.

AI’s digital twin technology revolution

The AI industry is investing heavily in digital twin technology, creating virtual replicas of humans and objects for research. Tech companies believe these digital twins can unlock AI’s full potential by mirroring our physiologies, our personalities, and the objects around us. Digital twins range from models of complex phenomena, like organisms or weather systems, to video avatars of individuals. The technology promises to revolutionise healthcare by providing personalised treatment, accelerating drug development, and enhancing our understanding of environments and objects.

Gartner predicts the global market for digital twins will surge to $379 billion by 2034, mainly driven by the healthcare industry, which is expected to reach a market size of $110.1 billion by 2028. The concept of digital twins began in engineering and manufacturing but has expanded thanks to improved data storage and connectivity, making it more accessible and versatile.

One notable example is LinkedIn co-founder Reid Hoffman, who created his digital twin, REID.AI, using two decades of his content. Hoffman demonstrated the technology’s potential by releasing videos of himself conversing with the twin and even sending it for an on-stage interview. While many digital twins focus on statistical applications, their everyday utility is evident in projects like Twin Health, which uses sensors to monitor patients’ health and provide personalised advice. The technology has shown promise in helping diabetic patients reverse their condition and reduce reliance on medication.

Like the broader AI boom, the digital twin market starts with impressive demonstrations but aims to deliver significant practical benefits, especially in healthcare and personalised services.

Samsung wins AI chip order from Japan

Samsung Electronics announced it has secured an order from Japanese AI company Preferred Networks to manufacture chips using its advanced 2-nanometre foundry process and advanced chip packaging service. The deal marks Samsung’s first disclosed order for its cutting-edge 2-nanometre chip manufacturing process, although the order size remains undisclosed.

The chips will employ gate-all-around (GAA) transistor architecture and integrate multiple chips into a single package to enhance connection speed and reduce size. According to Preferred Networks’ VP Junichiro Makino, the chips, designed by South Korea’s Gaonchips, will support high-performance computing hardware for generative AI technologies, including large language models.

The development highlights Samsung’s advancements in semiconductor technology and its role in supporting innovative AI applications.

IBM’s GenAI center to advance AI technology in India

IBM has launched its GenAI Innovation Center in Kochi, designed to help enterprises, startups, and partners explore and develop generative AI technology. The centre aims to accelerate AI innovation, increase productivity, and enhance generative AI expertise in India, addressing challenges organisations face when transitioning from AI experimentation to deployment.

The centre will provide access to IBM experts and technologies, assisting in building, scaling, and adopting enterprise-grade AI. It will utilise InstructLab, a technology developed by IBM and Red Hat for enhancing Large Language Models (LLMs) with client data, along with IBM’s ‘watsonx’ AI and data platform and AI assistant technologies. The centre will be part of the IBM India Software Lab in Kochi and managed by IBM’s technical experts.

IBM highlights that the centre will nurture a community that uses generative AI to tackle societal and business challenges, including sustainability, public infrastructure, healthcare, education, and inclusion. The initiative underscores IBM’s commitment to fostering AI innovation and addressing complex integration issues in the business landscape.

Why does it matter?

IBM’s new GenAI hub stems from a significant investment in advancing AI technology in India. This centre is set to play a crucial role in accelerating AI innovation, boosting productivity, and enhancing generative AI expertise, which is critical for the growth of enterprises, startups, and partners. By providing access to advanced AI technologies and expert knowledge, the centre aims to overcome the challenges of AI integration and deployment, thereby fostering a robust AI ecosystem. Furthermore, the initiative underscores the potential of generative AI to address pressing societal and business challenges, contributing to advancements in sustainability, public infrastructure, healthcare, education, and inclusion.

Microsoft committed to expanding AI in education in Hong Kong

US tech giant Microsoft is committed to offering generative AI services in Hong Kong through educational initiatives, despite OpenAI’s access restrictions in the city and mainland China. Microsoft collaborated with the Education University of Hong Kong Jockey Club Primary School to offer AI services starting last year.

About 220 students in grades 5 and 6 used Microsoft’s chatbot and text-to-image tools in science classes. Principal Elsa Cheung Kam Yan noted that AI enhances learning by broadening students’ access to information and allowing exploration beyond textbooks. Vice-Principal Philip Law Kam Yuen added that the school, which has collaborated with Microsoft Hong Kong for 12 years, plans to extend AI use to more classes.

Microsoft also has agreements with eight Hong Kong universities to promote AI services. Fred Sheu, national technology officer of Microsoft in Hong Kong, reaffirmed Microsoft’s commitment to maintaining its Azure AI services, which use OpenAI’s models, emphasising that API restrictions by OpenAI will not affect the company. Microsoft’s investment in OpenAI reportedly entitles it to up to 49% of the profits from OpenAI’s for-profit arm. As all government-funded universities in Hong Kong have already acquired the Azure OpenAI service, they qualify as users. Sheu also said Microsoft intends to extend the service to all schools in Hong Kong over the next few years.

AI impact in music production: Nearly 25% of producers embrace innovation

A recent survey by Tracklib reveals that 25% of music producers are now integrating AI into their creative processes, marking a significant adoption of technology within the industry. However, most producers exhibit resistance towards AI, citing concerns over losing creative control as a primary barrier.

Among those using AI, the survey found that most employ it for stem separation (73.9%) rather than full song creation, which is used by only a small fraction (3%). Concerns among non-users primarily revolve around artistic integrity (82.2%) and doubts about AI’s ability to maintain quality (34.5%), with additional concerns including cost and copyright issues.

Interestingly, the survey highlights a stark divide between perceptions of assistive AI, which aids in music creation, and generative AI, which directly generates elements or entire songs. While some producers hold a positive view of assistive AI, generative AI faces stronger opposition, especially among younger respondents.

Overall, the survey underscores a cautious optimism about AI’s future impact on music production, with 70% of respondents expecting it to have a significant influence going forward. Despite current reservations, Tracklib predicts continued adoption of music AI, noting it is entering the “early majority” phase of adoption according to technology adoption models.