AI transforms early disease detection

AI is revolutionising diagnostic testing by identifying diseases much earlier than traditional methods. AI’s ability to analyse vast amounts of data is uncovering new ways to detect previously undetectable diseases. For instance, researchers at Peking University have discovered that facial temperature patterns, detected with thermal cameras and AI, can indicate chronic illnesses like diabetes and high blood pressure.

Recent advancements highlight AI’s potential in diagnostics. University of British Columbia researchers found a new subtype of endometrial cancer, and another study revealed that AI could identify Parkinson’s disease up to seven years before symptoms appear. These breakthroughs demonstrate how AI can sift through large datasets to identify patterns and markers that traditional methods might miss.

Why does it matter?

The integration of AI in diagnostics is making testing more personalised and predictive. AI analyses data from individual patient records and real-time wearables to tailor diagnoses and treatment plans. Despite concerns about AI infringing on doctors’ roles, experts like John Halamka from the Mayo Clinic emphasise that AI enhances doctors’ capabilities rather than replacing them. However, ensuring data transparency and addressing biases in AI algorithms remain critical challenges.

As AI continues to evolve, patients can expect more personalised and early detection of diseases during routine tests. This technology promises to provide new insights and recommendations that can significantly impact healthcare outcomes.

Apple aligns with OpenAI for future AI innovations

Apple Inc. has secured an observer role on OpenAI’s board, further solidifying their growing partnership. Phil Schiller, head of Apple’s App Store and former marketing chief, will take on this position. As an observer, Schiller will attend board meetings without voting rights or other director powers. The development follows Apple’s announcement of integrating ChatGPT into its devices, such as the iPhone, iPad, and Mac, as part of its AI suite.

The observer role puts Apple on a similar footing to Microsoft Corp., OpenAI’s principal backer, and offers Apple valuable insight into OpenAI’s decision-making processes. However, the rivalry between Microsoft and Apple might lead to Schiller’s exclusion from certain discussions, particularly those concerning future AI initiatives between OpenAI and Microsoft. Schiller’s extensive experience with Apple’s brand makes him a suitable candidate for the role, despite his lack of direct involvement in Apple’s AI projects.

The partnership with OpenAI is a key part of Apple’s broader AI strategy, which includes a variety of in-house features under Apple Intelligence. These features range from summarising articles and notifications to creating custom emojis and transcribing voice memos. Integrating OpenAI’s chatbot meets current consumer demand, and a paid version of ChatGPT could generate App Store fees. The deal itself involves no direct payments: OpenAI gains access to Apple’s vast user base, while Apple benefits from the chatbot’s capabilities.

Apple is also in discussions with Alphabet Inc.’s Google, startup Anthropic, and Chinese companies Baidu Inc. and Alibaba Group Holding Ltd. to offer more chatbot options to its customers. Initially, Apple Intelligence will be available in American English, with plans for an international rollout. Furthermore, a collaboration like this marks a rare instance of an Apple executive joining the board of a major partner, highlighting the significance of the partnership in Apple’s AI strategy.

Brazil halts Meta’s new privacy policy for AI training, citing serious privacy risks

Brazil’s National Data Protection Authority (ANPD) has taken immediate action to halt the implementation of Meta’s new privacy policy concerning the use of personal data to train generative AI systems within the country.

The ANPD’s precautionary measure, announced in Brazil’s official gazette, suspends the processing of personal data across all Meta products and extends to individuals who are not users of the tech company’s platforms. The regulatory body, which operates under Brazil’s Justice Ministry, has imposed a daily fine of 50,000 reais ($8,836.58) for any violations of the directive.

The decision by the ANPD was motivated by the perceived ‘imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of affected individuals.’ As a result, Meta is mandated to revise its privacy policy to eliminate the segment related to the processing of personal data for generative AI training. Additionally, Meta must issue an official statement confirming the suspension of personal data processing for this purpose.

In response to the ANPD’s ruling, Meta expressed disappointment, characterising the move as a setback for innovation and predicting a delay in delivering AI benefits to the Brazilian population. Meta defended its practices by pointing to its transparency policy compared to other industry players who have used public content for training models and products. The company asserted that its approach aligns with Brazil’s privacy laws and regulations.

Vodafone calls for EU Connectivity Union

Vodafone has called for the establishment of a ‘Connectivity Union’ to accelerate Europe’s digital ambitions and bolster its global competitiveness. Emphasising the crucial role of next-generation connectivity, particularly 5G standalone technology, Vodafone argues that this is essential for European businesses to fully harness the industrial value of the internet and emerging technologies such as AI. They warn that Europe risks falling behind in the global digital race without addressing the current connectivity issues.

The European Commission has identified several challenges in the connectivity sector, including fragmentation, excessive costs, and inconsistent regulations that vary across companies despite offering similar services. These issues threaten the achievement of Europe’s digital decade targets and put the region at a significant competitive disadvantage.

Vodafone stresses that Europe needs critical action from policymakers to close the 5G investment gap and turn its digital future around. Joakim Reiter, Chief External & Corporate Affairs Officer at Vodafone, highlighted the urgency of resetting Europe’s telecoms policy regime. He proposed a new Connectivity Union that would bring together the European Commission, national governments, and industry stakeholders to tackle the shortcomings in Europe’s connectivity sector more aggressively.

In response to the European Commission’s consultation paper, Vodafone outlined five key policy pillars for a new Digital Communications Framework for Europe. These include enhancing investment competition in mobile and fixed markets, advocating for pro-investment spectrum policies, ensuring fair regulation based on services offered, implementing a harmonised security framework, and creating a stable policy environment that incorporates sustainability requirements. These pillars aim to end the piecemeal policy approach to telecoms and lay the foundation for a robust Connectivity Union.

How AI is transforming construction safety and efficiency

Florida International University’s Moss Department of Construction Management is at the forefront of a revolution in the industry. They’re equipping students with the tools to leverage AI for increased efficiency and safety on construction sites.

Imagine generating blueprints with just a few specifications or having a watchful eye constantly monitoring a site for safety hazards. These are just a few ways AI is transforming construction. Students like Kaelan Dodd are already putting this knowledge to work. ‘An AI tool I tried at my job based on what I learned at FIU lets us create blueprints in seconds,’ Dodd said, impressed by the technology’s potential.

But FIU’s course goes beyond simply using AI. Professor Lufan Wang understands the importance of students not just using the technology but understanding it. By teaching them to code, she gives them a ‘translator’ to communicate with AI and provides valuable feedback to improve its capabilities. An approach like this one prepares students to not only navigate the constantly evolving world of AI but also shape its future applications in construction.

The benefits of AI extend far beyond efficiency. Construction is a field where safety is paramount, and AI can be a valuable ally. Imagine having a tireless AI assistant analyse thousands of construction site photos to identify potential hazards or sending an AI-powered robot into a dangerous situation to gather information. These are a few ways AI can minimise risk and potentially save lives. While AI won’t replace human construction managers entirely, it can take on the most dangerous tasks, allowing human expertise to focus on what it does best – guiding and overseeing complex projects.

Meta responds to photo tagging issues with new AI labels

Meta has announced a significant update to its use of AI labels across its platforms, replacing the ‘Made with AI’ tag with ‘AI info’. The change comes after widespread complaints about photos being tagged incorrectly. For instance, a historical photograph captured on film four decades ago was mistakenly labelled as AI-generated when it was uploaded after minor edits with tools such as Adobe’s cropping feature.

Kate McLaughlin, a spokesperson for Meta, emphasised that the company is continuously refining its AI products and collaborating closely with industry partners on AI labelling standards. The new ‘AI info’ label aims to clarify that content may have been modified with AI tools rather than solely created by AI.

The issue primarily stems from how metadata tools like Adobe Photoshop write information into images, which platforms then interpret. Following the expansion of Meta’s AI content labelling policies, photos shared daily on its platforms, such as Instagram and Facebook, were erroneously tagged as ‘Made with AI’.

Initially, the updated labelling will roll out on mobile apps before extending to web platforms. Clicking on the ‘AI info’ tag will display a message similar to the previous label, explaining why it was applied and acknowledging the use of AI-powered editing tools like Generative Fill. Despite advancements in metadata tagging technology like C2PA, distinguishing between AI-generated and authentic images remains a work in progress.
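To see why metadata-based labelling is brittle, consider a crude, hypothetical check in the spirit of what a platform might run at upload time (the function name and heuristic are illustrative assumptions, not Meta’s actual pipeline): any file that merely contains provenance label bytes, however they got there, would be flagged, regardless of how the image was actually made.

```python
def has_provenance_marker(data: bytes) -> bool:
    # Crude heuristic only (illustrative, not a real verifier): look for
    # the ASCII labels that provenance-aware editors may embed alongside
    # C2PA manifests in JUMBF containers. Proper handling means parsing
    # and cryptographically validating the manifest itself, not
    # substring matching on raw bytes.
    return b"c2pa" in data or b"jumb" in data

# Synthetic byte strings standing in for image files:
tagged = b"\xff\xd8\xff\xe0 ... jumb ... c2pa ..."  # edited with a tagging tool
plain = b"\xff\xd8\xff\xe0 ordinary jpeg bytes"     # untouched scan of old film

print(has_provenance_marker(tagged))  # True
print(has_provenance_marker(plain))   # False
```

A check this coarse cannot distinguish an image generated by AI from a film photograph that merely passed through an editor that writes provenance metadata, which is the kind of false positive the ‘AI info’ relabelling is meant to soften.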

Internet bodies warn against perceived UN centralised internet governance plan

Key internet technical bodies, including the Internet Engineering Task Force, World Wide Web Consortium, Internet Research Task Force, and the Internet Society’s Board of Trustees, have signed an open letter to the UN opposing centralised governance of the internet, which they argue is being proposed in the UN’s Global Digital Compact (GDC). The letter states that some of the proposals in the latest version of the GDC, released on 26 June 2024, can be interpreted as mandating more centralised internet governance, which the technical bodies believe would be detrimental to the internet and to global economies and societies.

The GDC aims to create international consensus on principles for an ‘inclusive, open, sustainable, fair, safe and secure digital future’. However, the technical bodies argue that the GDC is being developed through a multilateral process between states, with very limited engagement of the open, inclusive, and consensus-driven methods used to develop the internet and web to date.

Specifically, the GDC proposes the establishment of an international scientific panel on AI to conduct risk assessments, an office to facilitate follow-ups on the compact, and calls on the UN to play a key role in promoting cooperation and harmonisation of data governance initiatives. The technical bodies view these proposals as steps towards more centralised internet governance, which they believe would be detrimental.

Mary Meeker examines AI and higher education

Mary Meeker, renowned for her annual ‘Internet Trends’ reports, has released her first study in over four years, focusing on the intersection of AI and US higher education. Meeker’s previous reports were pivotal in analysing the tech economy, often spanning hundreds of pages. Her new report, significantly shorter at 16 pages, explores how the collaboration between technology and higher education can bolster America’s economic vitality.

In her latest report, Meeker asserts that the US has surpassed China in AI leadership. She emphasises that for the US to maintain this edge, technology companies and universities must work together as partners rather than see each other as obstacles. The partnership involves tech companies providing GPUs to research universities and being transparent about future work trends. Simultaneously, higher education institutions must adopt a ‘mindset change,’ treating students as customers and teachers as coaches.

Meeker highlights the historical role of universities like Stanford and MIT in driving tech innovation, initially through government funding, now increasingly through industry support. She underscores the critical nature of the coming years for higher education to remain a driving force in technological advancement. Echoing venture capitalist Alan Patricof, Meeker describes AI as a revolution more profound than transistors, PCs, biotech, the internet, or cloud computing, suggesting that AI is now ready to optimise the vast data accumulated over the past decades.

Meeker’s new report was shared with investors at her growth equity firm, BOND, and published on the firm’s website, aiming to inform and guide the next steps in integrating AI with higher education to sustain America’s technological and economic leadership.

Tech giants clash over California AI legislation

California lawmakers are poised to vote on groundbreaking legislation aimed at regulating AI to prevent potential catastrophic risks, such as manipulating the state’s electric grid or aiding in the creation of chemical weapons. Spearheaded by Democratic state Sen. Scott Wiener, the bill targets AI systems with immense computing power, setting safety standards that apply only to models costing over $100 million to train.

Tech giants like Meta (Facebook) and Google strongly oppose the bill, arguing that it unfairly targets developers rather than those who misuse AI for harmful purposes. They contend that such regulations could stifle innovation and drive tech companies away from California, potentially fracturing the regulatory landscape.

While highlighting California’s role as a leader in AI adoption, Governor Gavin Newsom has not publicly endorsed the bill. His administration is concurrently exploring rules to combat AI discrimination in employment and housing, underscoring the dual challenges of promoting AI innovation while safeguarding against its misuse.

The proposed legislation has garnered support from prominent AI researchers and would establish a new state agency to oversee AI development practices and enforce compliance. Proponents argue that California must act swiftly to avoid repeating past regulatory oversights in the social media sector, despite concerns over regulatory overreach and its potential economic impact.

Japan unveils AI defence strategy

The Japanese Defence Ministry has unveiled its inaugural policy on promoting AI use, aiming to adapt to technological advancements in defence operations. Focusing on seven key areas, including the detection and identification of military targets, command and control, and logistics support, the policy seeks to streamline the ministry’s work and keep pace with technology-driven changes in how defence is conducted.

The new policy highlights that AI can enhance combat operation speed, reduce human error, and improve efficiency through automation. AI is also expected to aid in information gathering and analysis, unmanned defence assets, cybersecurity, and work efficiency. However, the policy acknowledges the limitations of AI, particularly in unprecedented situations, and concerns regarding its credibility and potential misuse.

The Defence Ministry plans to secure human resources with cyber expertise to address these issues, starting a specialised recruitment category in fiscal 2025. Defence Minister Minoru Kihara emphasised the importance of adapting to new forms of battle using AI and cyber technologies and stressed the need for cooperation with the private sector and international agencies.

Recognising the risks associated with AI use, Kihara highlighted the importance of accurately identifying and addressing these shortcomings. He stated that Japan’s ability to adapt to new forms of battle with AI and cyber technologies is a significant challenge in building up its defence capabilities. The ministry aims to deepen cooperation with the private sector and relevant foreign agencies by proactively sharing its views and strategies.