MediaTek develops ARM-based chip for Microsoft AI laptops

MediaTek, a leading semiconductor company in Taiwan, is collaborating with Microsoft to design an ARM-based chip for AI-powered laptops running Microsoft’s Windows operating system. The strategic partnership marks a significant move in the tech industry. Last month, Microsoft unveiled a new generation of laptops featuring chips designed with ARM Holdings technology, providing the power needed to run advanced AI applications, which Microsoft executives consider the future of consumer computing.

MediaTek’s new chip is set to be integral to this effort and is expected to bring substantial advances in processing power and efficiency. The collaboration underscores the growing demand for high-performance, energy-efficient chips tailored for AI applications. MediaTek’s ARM-based chip is designed to optimise AI tasks, leveraging ARM’s architecture, known for its power efficiency and performance scalability. The development aligns with the industry trend of integrating specialised hardware to handle AI workloads more effectively, reducing reliance on general-purpose CPUs.

For Microsoft, the partnership with MediaTek represents a strategic move to strengthen its position in the competitive AI hardware market. By incorporating MediaTek’s advanced chip technology, Microsoft aims to offer more capable AI laptops that appeal to both consumer and enterprise markets. The collaboration also takes direct aim at Apple, which has been using its own ARM-based chips in Mac computers for roughly four years. Moreover, Microsoft’s decision to optimise Windows for ARM could pose a significant challenge to Intel’s long-standing dominance in the PC market; for decades, Windows machines have relied on chip architectures developed by Intel and AMD. MediaTek’s PC chip is expected to be released late next year, coinciding with the end of Qualcomm’s exclusive deal to supply ARM-based chips for Windows laptops.

Why does it matter?

  • Enhanced AI capabilities: The integration of MediaTek’s specialised chip will enable Microsoft’s AI laptops to perform more complex AI tasks with greater efficiency and speed. That advancement is crucial as AI applications become more sophisticated and demand higher computational power.
  • Energy efficiency: ARM-based chips are known for their power efficiency, which means that AI laptops equipped with these chips will likely have longer battery life and reduced energy consumption. This is particularly important for users who require high performance without sacrificing mobility.
  • Market innovation: The partnership could set a new standard in the AI hardware market, encouraging other tech companies to develop and integrate specialised chips for AI applications. That could lead to a surge in innovation, driving the development of more advanced and capable AI devices.
  • Competitive edge: For both MediaTek and Microsoft, this collaboration provides a competitive edge. MediaTek can showcase its capabilities in developing cutting-edge chips, while Microsoft can differentiate its AI laptops in a crowded market by offering superior performance and efficiency.

Google tests AI anti-theft feature for phones in Brazil

Alphabet’s Google announced that Brazil will be the first country to test a new anti-theft feature for Android phones, utilising AI to detect and lock stolen devices. The initial test phase will offer three locking mechanisms. One uses AI to identify movement patterns typical of theft and lock the screen. Another allows users to remotely lock their screens by entering their phone number and completing a security challenge from another device. The third feature locks the screen automatically if the device remains offline for an extended period.
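
At a high level, the three triggers amount to a simple decision rule on the device. The sketch below illustrates that logic in Python; the thresholds, names, and classifier are hypothetical, since Google has not published implementation details.

```python
import time
from dataclasses import dataclass
from typing import Optional

# Hypothetical thresholds; Google has not published its actual values.
THEFT_MOTION_CONFIDENCE = 0.9    # classifier score above which motion looks like a snatch-and-run
MAX_OFFLINE_SECONDS = 8 * 3600   # lock if the device has stayed offline this long

@dataclass
class DeviceState:
    motion_theft_score: float     # output of an on-device motion classifier, 0.0 to 1.0
    remote_lock_requested: bool   # owner completed the phone-number + security-challenge flow
    last_online_timestamp: float  # epoch seconds of the last successful network check-in

def lock_reason(state: DeviceState, now: Optional[float] = None) -> Optional[str]:
    """Return the reason the screen should lock, or None if no trigger has fired."""
    now = time.time() if now is None else now
    if state.motion_theft_score >= THEFT_MOTION_CONFIDENCE:
        return "theft-like motion detected"
    if state.remote_lock_requested:
        return "remote lock requested by the owner"
    if now - state.last_online_timestamp >= MAX_OFFLINE_SECONDS:
        return "device offline for an extended period"
    return None

if __name__ == "__main__":
    state = DeviceState(motion_theft_score=0.95,
                        remote_lock_requested=False,
                        last_online_timestamp=time.time())
    print(lock_reason(state))  # -> "theft-like motion detected"
```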

These features will be available to Brazilian users with phones running Android version 10 or higher starting in July, with a gradual rollout to other countries planned for later this year. Phone theft is a significant issue in Brazil, with nearly 1 million cell phones reported stolen in 2022, a 16.6% increase from the previous year.

In response to the rising theft rates, the Brazilian government launched an app called Celular Seguro in December, allowing users to report stolen phones and block access via a trusted person’s device. As of last month, approximately 2 million people had registered with the app, leading to the blocking of 50,000 phones, according to the Justice Ministry.

Turkish student jailed for using AI to cheat on exam

Turkish authorities have arrested a student for using a makeshift device linked to AI software to cheat during a university entrance exam. The student, who was acting suspiciously, was detained by police during the exam and later formally arrested and sent to jail pending trial. Another individual involved in helping the student was also detained.

A police video from Isparta province showed the student’s setup: a camera disguised as a shirt button, connected to AI software through a router hidden in the sole of the student’s shoe. The AI software generated the correct answers, which were relayed to the student through an earpiece.

This incident highlights the increasing use of advanced technology in cheating, prompting concerns about exam security and integrity. The authorities are now investigating the extent of this cheating method and considering measures to prevent similar occurrences in the future.

Apple announces partnership with OpenAI to integrate ChatGPT into Siri

Apple is integrating OpenAI’s ChatGPT into Siri, as announced at its WWDC 2024 keynote. The partnership will allow iOS 18 and macOS Sequoia users to access ChatGPT for free, with privacy measures ensuring that queries aren’t logged. Additionally, paid ChatGPT subscribers can link their accounts to access premium features on Apple devices.

Apple had been negotiating with Google and OpenAI to enhance its AI capabilities, ultimately partnering with OpenAI. The enhanced feature will utilise OpenAI’s GPT-4o model, which will power ChatGPT in Apple’s upcoming operating systems.

OpenAI CEO Sam Altman expressed enthusiasm for the partnership, highlighting shared commitments to safety and innovation. However, Elon Musk, the billionaire CEO of Tesla, SpaceX, and the social media company X, threatened to ban Apple devices from his companies if Apple integrates OpenAI technology at the operating system level, labelling the move an ‘unacceptable security violation’ and stating that visitors would be required to leave their Apple devices in a Faraday cage at the entrance to his facilities (see the separate story below).

Why does it matter?

The partnership aims to significantly enhance Siri’s capabilities with advanced AI features. ChatGPT will also be integrated into Apple’s system-wide writing tools, enriching the user experience across Apple devices.

Central to this integration is a robust consent mechanism that requires users’ permission before sending any questions, documents, or photos to ChatGPT. Siri will present the responses directly, emphasising Apple’s commitment to user privacy and transparent data handling practices.
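
A minimal sketch of such a consent-gated hand-off is shown below; the function names and console prompt are illustrative stand-ins, not Apple’s actual APIs.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SiriRequest:
    prompt: str
    attachments: List[str] = field(default_factory=list)  # documents or photos the user referenced

def ask_user_consent(request: SiriRequest) -> bool:
    """Stand-in for the system consent dialog; here it is just a console prompt."""
    answer = input(f"Send this request to ChatGPT? '{request.prompt}' [y/N] ")
    return answer.strip().lower() == "y"

def query_chatgpt(request: SiriRequest) -> str:
    """Placeholder for the external call; in the described design, queries are not logged."""
    return f"(ChatGPT response to: {request.prompt})"

def handle_with_chatgpt(request: SiriRequest) -> str:
    # Nothing leaves the device unless the user explicitly agrees.
    if not ask_user_consent(request):
        return "Request was not sent to ChatGPT; handled on-device instead."
    # Siri presents the response directly rather than handing the user off to another app.
    return query_chatgpt(request)

if __name__ == "__main__":
    print(handle_with_chatgpt(SiriRequest(prompt="Summarise my meeting notes")))
```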

US Treasury Secretary shares remarks about use of AI in finance

At a conference hosted by the Financial Stability Oversight Council (FSOC) and the Brookings Institution, US Treasury Secretary Janet L. Yellen delivered the department’s latest remarks on the use of AI in financial institutions. The remarks update similar comments made during an FSOC meeting in December 2023 and in the council’s 2023 Annual Report.

In the first part of her address, the Secretary noted the opportunities and risks of AI in general, pointing out that automation has been used in the financial sector for many years and that the department already regulates it, citing guidance on model risk management and third-party risk management. However, she warned of additional risks arising from the ‘complexity and opacity’ of new AI systems, and noted the difficulty of overseeing a growing number of AI systems while also avoiding dependence on just a few of them.

Why is this important?

To address this ‘rapidly evolving field’, the Treasury released a report on current use cases and practices of AI for cybersecurity and fraud prevention in the financial sector. Work is also being done with international partners in this field, Yellen said. Towards the end of her address, the Treasury Secretary mentioned a request for comments from private and public actors, to be followed by a roundtable discussion. She also said that the council’s monitoring would continue and be adapted in line with the rapid adoption of AI in the financial sector.

Yellen’s remarks fall within a broader discussion over the regulation of new technologies. In 2022, the Treasury issued a framework for international engagement on digital assets. However, elections, such as those in the US this year, may discourage lawmakers from legislating on AI in time, raising fears of further AI-generated deepfakes and robocalls.

Musk vows to ban Apple devices if they use OpenAI tech

Elon Musk, the billionaire CEO of Tesla, SpaceX, and the social media company X, announced on Monday that he would ban Apple devices from his companies if Apple integrates OpenAI technology at the operating system level. Musk called the move an ‘unacceptable security violation’ and declared that visitors would have to leave their Apple devices in a Faraday cage at the entrance to his facilities.

The statement followed Apple’s announcement of new AI features across its apps and operating platforms, including a partnership with OpenAI to incorporate ChatGPT technology into its devices. Apple emphasised that these AI features are designed with privacy at their core, using both on-device processing and cloud computing to ensure data security. Musk, however, expressed scepticism, arguing that Apple’s reliance on OpenAI undermines its ability to protect user privacy and security effectively.

Industry experts, such as Ben Bajarin, CEO of Creative Strategies, believe that Musk’s stance is unlikely to gain widespread support. Bajarin noted that Apple aims to reassure users that its private cloud services are as secure as on-device data storage. He explained that Apple anonymises and firewalls user data, ensuring that Apple itself does not access it.

Musk’s criticism of OpenAI is not new; he co-founded the organisation in 2015 but sued it earlier this year, alleging it strayed from its mission to develop AI for the benefit of humanity. Musk has since launched his own AI startup, xAI, valued at $24 billion after a recent funding round, to compete directly with OpenAI and develop alternatives to its popular ChatGPT.

Meta develops AI technology tailored specifically for Europe

Meta Platforms, the owner of Facebook, announced it is developing AI technology tailored specifically for Europe, taking into account the region’s linguistic, geographic, and cultural nuances. The company will train its large language models using publicly shared content from its platforms, including Instagram and Facebook, ensuring that private posts are excluded to maintain user privacy.

Last month, Meta revealed plans to inform Facebook and Instagram users in Europe and the UK about how their public information is utilised to enhance and develop AI technologies. The move aims to increase transparency and reassure users about data privacy.

By focusing on localised AI development, Meta hopes to serve the European market better, reflecting the region’s diverse characteristics in its technology offerings. That effort underscores Meta’s commitment to respecting user privacy while advancing its AI capabilities.

Apple to showcase AI innovations at developer conference

At Apple’s annual developer conference on Monday, the tech giant is anticipated to unveil how it’s integrating AI across its software suite. The integration includes updates to its Siri voice assistant and a potential collaboration with OpenAI, the owner of ChatGPT. With its reputation on the line, Apple aims to reassure investors that it remains competitive in the AI landscape, especially against rivals like Microsoft.

Apple faces the challenge of demonstrating the value of AI to its vast user base, many of whom are not tech enthusiasts. Analysts suggest that Apple needs to showcase how AI can enhance user experiences, a shift from its previous emphasis on enterprise applications. Despite using AI behind the scenes for years, Apple has been reserved in highlighting its role in device functionality, unlike Microsoft’s more vocal approach with OpenAI.

The spotlight is on Siri’s makeover, which is expected to enable more seamless control over various apps. Apple aims to make Siri smarter by integrating generative AI, potentially through a partnership with OpenAI. The move is anticipated to improve user interactions with Siri across different apps, enhancing its usability and effectiveness. Also, Apple recently introduced an AI-focused chip in its latest iPad Pro models, signalling its commitment to AI development. Analysts predict that Apple will provide developers with insights into leveraging these capabilities to support AI computing. Additionally, reports suggest Apple may discuss its plans for using its chips in data centres, which could enhance cloud computing capabilities while maintaining privacy and security features.

The Apple Worldwide Developers Conference (WWDC 2024) will run until Friday, offering developers insights into app updates and new tools. Investors are hopeful that Apple’s AI advancements will drive sales of new iPhones and boost the company’s competitive edge amid fierce global competition.

US lawmakers question NewsBreak over Chinese origins and AI-generated stories

Three US lawmakers have raised concerns about NewsBreak, a popular news aggregation app, over its Chinese origins and its use of AI tools that have produced erroneous stories. Senator Mark Warner, chair of the Senate Intelligence Committee, emphasised the threat posed by technologies originating in adversarial countries, while Representative Raja Krishnamoorthi highlighted the need for transparency regarding any ties to the Chinese Communist Party (CCP). Representative Elise Stefanik pointed to the app’s backing by IDG Capital, a Beijing-based private equity firm, as a reason for increased scrutiny.

NewsBreak, launched in the US in 2015, was originally a subsidiary of the Chinese news app Yidian, founded by Jeff Zheng. Despite being labelled an American company by its spokesperson, court documents and other evidence reveal historical links to Chinese investors and engineers based in China. Notably, Yidian has received praise from Chinese Communist Party officials for disseminating government propaganda, although there is no evidence that NewsBreak has censored or produced pro-China news.

The primary investors in NewsBreak include San Francisco-based Francisco Partners and Beijing-based IDG Capital. IDG Capital, which the Pentagon has listed as allegedly working with Beijing’s military, denies any such association. Francisco Partners has described the scrutiny as ‘false and misleading,’ but the lawmakers maintain their stance on carefully examining the app’s potential risks to US interests.

Google Play cracks down on AI apps amid deepfake concerns

Google has issued new guidance for developers building AI apps distributed through Google Play in response to growing concerns over the proliferation of AI-powered apps designed to create deepfake nude images. The platform recently announced a crackdown on such applications, signalling a firm stance against the misuse of AI for generating non-consensual and potentially harmful content.

The move comes in the wake of alarming reports highlighting the ease with which these apps can manipulate photos to create realistic yet fabricated nude images of individuals. Reports have surfaced about apps like ‘DeepNude’ and its clones, which can strip clothes from images of women to produce highly realistic nude photos. Another report detailed the widespread availability of apps that could generate deepfake videos, leading to significant privacy invasions and the potential for harassment and blackmail.

Apps offering AI features must be ‘rigorously tested’ to safeguard against prompts that generate restricted content, and they must give users a way to flag or report offending output. Google strongly suggests that developers document these tests before launching their apps, as Google may ask to review them in the future. Additionally, developers cannot advertise that their app breaks any of Google Play’s rules, at the risk of being banned from the app store. The company is also publishing other resources and best practices, such as its People + AI Guidebook, to support developers building AI apps.
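
In practice, a developer might structure those two safeguards, the pre-generation policy check and the user reporting path, roughly as in the sketch below; the blocklist, function names, and report format are illustrative and not part of Google’s tooling.

```python
from datetime import datetime, timezone
from typing import Dict, List

# Illustrative keyword blocklist; a real app would rely on a trained safety classifier
# and a maintained policy service rather than simple string matching.
RESTRICTED_PHRASES = ("deepfake nude", "undress", "remove clothes")

def violates_policy(prompt: str) -> bool:
    """Rough stand-in for the 'rigorous testing' safeguard against restricted prompts."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in RESTRICTED_PHRASES)

def generate(prompt: str) -> str:
    if violates_policy(prompt):
        return "This request is not allowed by the app's content policy."
    return f"(generated content for: {prompt})"

user_reports: List[Dict[str, str]] = []

def report_content(content_id: str, reason: str) -> None:
    """The in-app mechanism that lets users flag offensive output for review."""
    user_reports.append({
        "content_id": content_id,
        "reason": reason,
        "reported_at": datetime.now(timezone.utc).isoformat(),
    })

if __name__ == "__main__":
    print(generate("undress the person in this photo"))  # blocked by the policy check
    report_content("img_0421", "depicts a real person without consent")
    print(len(user_reports), "report(s) queued for review")
```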

Why does it matter?

The proliferation of AI-driven deepfake apps on platforms like Google Play undermines personal privacy and consent by allowing anyone to generate highly realistic and often explicit content depicting individuals without their knowledge or approval. Such misuse can lead to severe reputational damage, harassment, and even extortion, affecting private individuals and public figures alike.