AI conference spotlights Chinese GPU advances

At the recent World Artificial Intelligence Conference in Shanghai, Chinese GPU developers seized the opportunity to showcase their products in Nvidia’s absence. Prominent companies such as Iluvatar Corex, Moore Threads, Enflame Technology, Sophgo, and Huawei’s Ascend were at the forefront, highlighting their advancements despite significant challenges in manufacturing and software ecosystems.

Enflame Technology emphasised the shift from foreign-dominated computing clusters to a mix of Chinese and foreign GPUs. The company, along with AI solutions firm Infinigence, is promoting compute resources that utilise a variety of chips from both Nvidia and Chinese manufacturers. However, US export restrictions have prevented Nvidia from selling its most advanced chips in China, and several Chinese firms, including Huawei, are struggling with manufacturing hurdles due to being blacklisted by the US.

Huawei’s booth was a major attraction, showcasing its Ascend 910B chips, which are used to train many large language models in China. Meanwhile, Enflame presented its CloudBlazer T20 and T21 AI-training chips; because the company is not on the US trade blacklist, it retains access to global foundries such as TSMC.

Despite these efforts, Chinese GPUs still lag behind their global counterparts in performance. Nvidia remains a dominant player, and its chips tailored for the Chinese market continue to sell well: the company is expected to deliver over 1 million H20 GPUs in China this year, generating $12 billion in sales. Experts caution, however, that domestic chips cannot yet meet China’s substantial AI compute demand.

AI driving transformation in financial services

At YourStory’s Tech Leaders’ Conclave, Ankur Pal, Chief Data Scientist at Aplazo, discussed how AI is transforming the financial services industry. Aplazo aims to address financial inclusion, especially in developing countries with low credit card penetration, by providing fair and transparent solutions like its Buy Now Pay Later (BNPL) platform. Pal highlighted AI’s potential to revolutionise fintech by creating personalised financial products and improving operational efficiency, ultimately reducing friction for consumers and institutions.

Pal emphasised AI’s role in enhancing decision-making processes, reducing fraud, and improving customer service. AI-driven solutions enable real-time data processing, which helps financial institutions detect and prevent fraud more effectively.

Additionally, AI can automate routine tasks, allowing financial professionals to focus on strategic initiatives. Real-time decision-making is becoming increasingly important as financial institutions invest in event streaming infrastructure and machine learning operations (MLOps) stacks to manage high transaction volumes with low latency.
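
To make the low-latency pattern concrete, here is a minimal sketch of a streaming fraud-scoring loop: transaction events are pulled from a queue, scored, and flagged above a threshold. The Transaction fields, the score_transaction stub, and the 0.8 threshold are illustrative assumptions rather than details of Aplazo’s actual stack; a production system would serve a trained model behind the kind of event streaming and MLOps tooling described above.

```python
# Minimal sketch of real-time transaction scoring (illustrative only).
# A hypothetical stub stands in for a trained fraud model so the example
# runs without external services such as a message broker or model server.
import queue
from dataclasses import dataclass


@dataclass
class Transaction:
    tx_id: str
    amount: float
    merchant_risk: float  # hypothetical pre-computed feature in [0, 1]


def score_transaction(tx: Transaction) -> float:
    """Stand-in for a trained model: returns a fraud-risk score in [0, 1]."""
    # A real deployment would call a served model (e.g. predict_proba on
    # engineered features); this toy rule simply blends two features.
    return min(1.0, 0.5 * tx.merchant_risk + 0.5 * min(tx.amount / 10_000, 1.0))


def run_scoring_loop(events: "queue.Queue[Transaction]", threshold: float = 0.8) -> None:
    """Consume transaction events and flag high-risk ones in near real time."""
    while True:
        try:
            tx = events.get(timeout=1.0)
        except queue.Empty:
            break  # demo ends when the queue is drained
        risk = score_transaction(tx)
        decision = "FLAG" if risk >= threshold else "APPROVE"
        print(f"{tx.tx_id}: risk={risk:.2f} -> {decision}")


if __name__ == "__main__":
    q: "queue.Queue[Transaction]" = queue.Queue()
    q.put(Transaction("tx-001", amount=120.0, merchant_risk=0.1))
    q.put(Transaction("tx-002", amount=9500.0, merchant_risk=0.9))
    run_scoring_loop(q)
```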

Overcoming financial inclusion barriers was a key topic, with Pal noting that, despite rising bank account ownership, many developing countries still have large unbanked or underbanked populations. AI can help bridge this gap by offering tailored financial solutions for underserved communities.

Pal also discussed the importance of leadership and the skill sets required for building successful AI teams. He stressed the need for adaptability, continuous learning, and a deep understanding of both technology and business to create valuable AI solutions. While AI will transform job roles, it will also create new opportunities, making it crucial for leaders to foster a culture of innovation.

AI-powered workplace innovation: Tech Mahindra partners with Microsoft

Tech Mahindra has partnered with Microsoft to enhance workplace experiences for over 1,200 customers and more than 10,000 employees across 15 locations by adopting Copilot for Microsoft 365. The collaboration aims to boost workforce efficiency and streamline processes through Microsoft’s trusted cloud platform and generative AI capabilities. Additionally, Tech Mahindra will deploy GitHub Copilot for 5,000 developers, anticipating a productivity increase of 35% to 40%.

Mohit Joshi, CEO and Managing Director of Tech Mahindra, highlighted the transformative potential of the partnership, emphasising the company’s commitment to shaping the future of work with cutting-edge AI technology. Tech Mahindra plans to extend Copilot’s capabilities with plugins to leverage multiple data sources, enhancing creativity and productivity. The focus is on increasing efficiency, reducing effort, and improving quality and compliance across the board.

As part of the initiative, Tech Mahindra has launched a dedicated Copilot practice to help customers unlock the full potential of AI tools, including workforce training for assessment and preparation. The company will offer comprehensive services to help customers assess, prepare for, pilot, and adopt Copilot for Microsoft 365, providing a scalable and personalised user experience.

Judson Althoff, Executive Vice President and Chief Commercial Officer at Microsoft, remarked that the collaboration would empower Tech Mahindra’s employees with new generative AI capabilities, enhancing workplace experiences and increasing developer productivity. The partnership aligns with Tech Mahindra’s ongoing efforts to enhance workforce productivity using GenAI tools, demonstrated by the recent launch of a unified workbench on Microsoft Fabric to accelerate the adoption of complex data workflows.

ChatGPT vs Google: The battle for search dominance

OpenAI’s ChatGPT, launched in 2022, has revolutionised the way people seek answers, shifting them from traditional search to AI-driven interactions. The chatbot, along with competitors such as Anthropic’s Claude, Google’s Gemini, and Microsoft’s Copilot, has made AI a focal point in information retrieval. Despite these advances, traditional search engines like Google remain dominant.

Google’s profits surged by nearly 60% due to increased advertising revenue from Google Search, and its global market share reached 91.1% in June, even as ChatGPT’s web visits declined by 12%.

Google is not only holding its ground but also leveraging AI technology to enhance its services. Analysts at Bank of America credit Gemini, Google’s AI, with contributing to the growth in search queries. By integrating Gemini into products such as Google Cloud and Search, Google aims to improve their performance, blending traditional search capabilities with cutting-edge AI innovations.

However, Google’s dominance faces significant legal challenges. The trial in the US Department of Justice’s major antitrust case against Google, which accuses the company of monopolising the digital search market, has concluded, with a verdict expected by late 2024.

Additionally, Google is contending with another antitrust lawsuit filed by the US government over alleged anticompetitive behaviour in the digital advertising space. These legal challenges could reshape the digital search landscape, potentially providing opportunities for AI chatbots and other emerging technologies to gain a stronger foothold in the market.

User concerns grow as AI reshapes online interactions

As AI continues to evolve, it is reshaping online platforms and stirring concerns among longtime users. At a recent tech conference, speakers raised concerns about AI-generated content that mimics human interaction flooding forums like Reddit and Stack Overflow. Reddit moderator Sarah Gilbert highlighted the frustration of contributors who see their genuine posts overshadowed by AI-generated ones.

Stack Overflow, a hub for programming solutions, faced backlash when it initially banned AI-generated responses due to inaccuracies. However, it’s now embracing AI through partnerships to enhance user experience, sparking debates about the balance between human input and AI automation. CEO Prashanth Chandrasekar acknowledged the challenges, noting their efforts to maintain a community-driven knowledge base amidst technological shifts.

Meanwhile, social media companies like Meta (formerly Facebook) are under scrutiny for using AI to train models on user-generated content without explicit consent. That has prompted regulatory action in countries like Brazil, where fines were imposed for non-compliance with data protection laws. Similar concerns over privacy and transparency persist in Europe and the US as AI integration grows.

The debate underscores broader issues of digital ethics and the future of online interaction, where authenticity and user privacy collide with technological advancements. Platforms must navigate these complexities to retain user trust while embracing AI’s potential to innovate and automate online experiences.

Chinese AI companies react to OpenAI block with SenseNova 5.5

At the recent World AI Conference in Shanghai, SenseTime introduced its latest model, SenseNova 5.5, showcasing capabilities comparable to OpenAI’s GPT-4o. This unveiling coincided with OpenAI’s decision to block its services in China, leaving developers scrambling for alternatives.

OpenAI’s move, effective from 9 July, blocks API access from regions where it does not officially support its service, affecting Chinese developers who had relied on its tools via virtual private networks. The decision, taken amid US-China technology tensions, underscores broader concerns about global access to AI technologies.

The ban has prompted Chinese AI companies like SenseTime, Baidu, Zhipu AI, and Tencent Cloud to offer incentives, including free tokens and migration services, to lure former OpenAI users. Analysts suggest this could accelerate China’s AI development, challenging US dominance in generative AI technologies.

The development has sparked mixed reactions in China, with some viewing it as a push towards domestic AI independence amid geopolitical pressures. However, it also highlights challenges in China’s AI industry, such as reliance on US semiconductors, which constrains the capabilities of models like Kuaishou’s.

AI stocks surge prompts profit-taking advice

Strategists at Citigroup Inc. are advising investors to consider cashing in on the recent surge in AI stocks. The analysis highlights strong investor sentiment towards AI-exposed equities, reminiscent of levels seen in 2019. Drew Pettit’s team at Citi notes that while there’s no clear bubble in AI stocks overall, the rapid rise in specific names raises concerns about increased volatility ahead.

This year, the AI frenzy has driven Nvidia Corp. to briefly claim the title of the world’s most valuable company, while Taiwan Semiconductor Manufacturing Co. surpassed $1 trillion in market value. Citi suggests focusing on profit-taking, particularly among chip-makers, and diversifying investments across the broader AI sector.

Despite cautious signals from Citi, many market observers believe the AI momentum will persist through the second half of the year. Bloomberg News reports a split among investors, with some favouring established giants like Nvidia while others look to secondary beneficiaries such as utilities and infrastructure providers.

Acknowledging the optimism around AI stocks, Citi’s strategists emphasise that current prices already imply high expectations.

Singapore advocates for international AI standards

Singapore’s digital development minister, Josephine Teo, has expressed concerns about the future of AI governance, emphasising the need for an internationally agreed-upon framework. Speaking at the Reuters NEXT conference in Singapore, Teo highlighted that while Singapore is more excited than worried about AI, the absence of global standards could lead to a ‘messy’ future.

Teo pointed out the need for specific legislation to address challenges posed by AI, particularly the use of deepfakes during elections. She stressed that implementing clear and effective laws will be crucial as AI technology advances, in order to manage its impact on society and ensure responsible use.

Singapore’s proactive stance on AI reflects its commitment to balancing technological innovation with necessary regulatory measures. The country aims to harness the benefits of AI while mitigating potential risks, especially in critical areas like electoral integrity.

AI cybersecurity in devices deemed high-risk by European Commission

AI-based cybersecurity and emergency services components in internet-connected devices are expected to be classified as high-risk under the AI Act, according to a European Commission document seen by Euractiv. The document, which interprets the relationship between the 2014 Radio Equipment Directive (RED) and the AI Act, marks the first known instance of how AI-based safety components will be treated under the new regulations. The RED pertains to wireless devices, including those using Wi-Fi and Bluetooth, beyond traditional radios.

Under the AI Act, high-risk AI systems will be subject to extensive testing, risk management, security measures, and documentation. The Act includes a list of use cases where AI deployment is automatically considered high-risk, such as in critical infrastructure and law enforcement. It also sets criteria for categorising other high-risk products, requiring third-party conformity assessments in line with sector-specific regulations. AI cybersecurity and emergency services components meet these criteria under the RED, thus being classified as high-risk.

Even in cases where the RED allows for self-assessment compliance with harmonised standards, these AI-based components are still deemed high-risk. The AI Act references numerous sectoral regulations that could classify AI products as high-risk, extending beyond electronics to medical devices, aviation, heavy machinery, and personal watercraft. The preliminary interpretation suggests that self-assessment standards are insufficient to remove the high-risk classification from AI products in these industries.

The AI Act imposes significant requirements on high-risk AI systems, while those not in this category face only minor transparency obligations. The Commission’s document is a preliminary interpretation, and the full application of the AI Act, which spans over 500 pages, remains to be seen. Despite initial estimates that 5-15% of AI systems would be classified as high-risk, a 2022 survey of EU-based startups indicated that 33-50% of these startups consider their products high-risk. Further interpretive work is needed to understand how the AI Act will impact various sectors.

Why does it matter?

This interpretation highlights the European Commission’s stringent approach to regulating AI-based cybersecurity and emergency services components in internet-connected devices. By classifying these components as high-risk, the AI Act mandates rigorous testing, security measures, and documentation, ensuring robust safety standards. The move underscores the EU’s commitment to protecting critical infrastructure and sensitive data, and it signals significant regulatory implications for various industries, potentially influencing global standards and practices in AI technology.

AI app aids pastors with sermons

A new AI platform called Pulpit AI, designed to assist pastors in delivering their sermons more effectively, is set to launch on 22 July. Created by Michael Whittle and Jake Sweetman, the app allows pastors to upload their sermons in various formats such as audio, video, manuscript, or outline. The app generates content like devotionals, discussion questions, newsletters, and social media posts. The aim is to ease the workload of church staff while enhancing communication with the congregation.

Whittle and Sweetman, who have been friends for over a decade, developed the idea from their desire to extend the impact of a sermon beyond Sunday services. They believe Pulpit AI can significantly benefit pastors who invest substantial time preparing sermons by repurposing their content for broader use without additional effort. This AI tool does not create sermons but generates supplementary materials based on the original sermon, ensuring the content remains faithful to the pastor’s message.

Despite the enthusiasm, some, like Dr Charlie Camosy from Creighton University, urge caution in adopting AI within the church. He suggests that while AI can be a valuable tool, it is crucial to consider its long-term implications on human interactions and the traditional processes within the church. Nonetheless, pastors who have tested Pulpit AI, such as Pastor Adam Mesa of Patria Church, report significant benefits in managing their communication and expanding their outreach efforts.