Japan unveils AI defence strategy

The Japanese Defence Ministry has unveiled its inaugural policy to promote AI use, aiming to adapt to technological advancements in defence operations. Focusing on seven key areas, including detection and identification of military targets, command and control, and logistic support, the policy aims to streamline the ministry’s work and respond to changes in technology-driven defence operations.

The new policy highlights that AI can enhance combat operation speed, reduce human error, and improve efficiency through automation. AI is also expected to aid in information gathering and analysis, unmanned defence assets, cybersecurity, and work efficiency. However, the policy acknowledges the limitations of AI, particularly in unprecedented situations, and concerns regarding its credibility and potential misuse.

The Defence Ministry plans to secure human resources with cyber expertise to address these issues, starting a specialised recruitment category in fiscal 2025. Defence Minister Minoru Kihara emphasised the importance of adapting to new forms of battle using AI and cyber technologies and stressed the need for cooperation with the private sector and international agencies.

Recognising the risks associated with AI use, Kihara highlighted the importance of accurately identifying and addressing them. He stated that Japan’s ability to adapt to new forms of battle with AI and cyber technologies is a significant challenge in building up its defence capabilities. The ministry aims to deepen cooperation with the private sector and relevant foreign agencies by proactively sharing its views and strategies.

Anthropic launches grants for developing new AI benchmark

Anthropic is launching a program to fund the creation of new benchmarks that better assess the performance and impact of AI models. In its blog post, Anthropic stated that it will offer grants to third-party organisations developing improved methods for evaluating advanced AI model capabilities.

Urging the AI research community to develop more rigorous benchmarks that address societal and security implications, Anthropic advocated for revising existing methodologies through new tools, infrastructure, and methods. Highlighting how they aim to develop an early warning system to identify and assess risks, it specifically called for tests to evaluate a model’s ability to conduct cyberattacks, enhance weapons of mass destruction, and manipulate or deceive individuals.

Anthropic also aims for its new program to support research into benchmarks and tasks that explore AI’s potential in scientific study, multilingual communication, bias mitigation, and self-censorship of toxicity. In addition to grants, researchers will have the chance to consult with the company’s domain experts. The company also expressed interest in potentially investing in or acquiring the most promising projects, offering various ‘funding options tailored to the needs and stage of each project’.

Why does this matter?

A benchmark is a fixed procedure for evaluating the capability of an AI model, usually in a single area, whereas models like Anthropic’s Claude and OpenAI’s ChatGPT are designed to perform a host of tasks. Developing robust and reliable model evaluations is therefore complex and riddled with challenges. Anthropic’s initiative to support new AI benchmarks is commendable, and its stated objective is for the program to serve as a catalyst towards a future where comprehensive AI evaluation is an industry standard. However, given the company’s own commercial interests, the initiative may raise trust concerns.
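To make the idea concrete, a fixed benchmark is essentially a frozen set of test items and a scoring rule applied uniformly to a model’s outputs. The sketch below is a minimal illustration of that pattern; the items and the `model_answer` function are hypothetical placeholders, not any real benchmark or API.

```python
# Minimal sketch of a fixed-benchmark evaluation: a model is scored
# against a frozen set of question-answer pairs with a simple
# exact-match rule. All data and the model stub are illustrative.

BENCHMARK = [
    {"question": "What is 2 + 2?", "answer": "4"},
    {"question": "What is the capital of France?", "answer": "Paris"},
]

def model_answer(question: str) -> str:
    # Stand-in for a real model call (e.g. an API request).
    canned = {
        "What is 2 + 2?": "4",
        "What is the capital of France?": "Paris",
    }
    return canned.get(question, "")

def evaluate(benchmark: list[dict]) -> float:
    """Return the fraction of items the model answers exactly right."""
    correct = sum(
        model_answer(item["question"]) == item["answer"] for item in benchmark
    )
    return correct / len(benchmark)

print(f"accuracy = {evaluate(BENCHMARK):.2f}")
```

The single aggregate score is precisely what makes benchmarks convenient to compare and, as the article notes, hard to design well: one number in one narrow area says little about a general-purpose model’s broader capabilities or risks.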

UN adopts China-led AI resolution

The UN General Assembly has adopted a resolution on AI capacity building, led by China. This non-binding resolution seeks to enhance developing countries’ AI capabilities through international cooperation and capacity-building initiatives. It also urges international organisations and financial institutions to support these efforts.

The resolution comes in the context of the ongoing technology rivalry between Beijing and Washington, as both nations strive to influence AI governance and portray each other as destabilising forces. Earlier this year, the US promoted a UN resolution advocating for ‘safe, secure, and trustworthy’ AI systems, gaining the support of over 110 countries, including China.

China’s resolution acknowledges the UN’s role in AI capacity-building and calls on Secretary-General Antonio Guterres to report on the unique challenges developing countries face and provide recommendations to address them.

Connecticut launches AI Academy to boost tech skills

Connecticut is spearheading efforts by developing what could be the nation’s first Citizens AI Academy. The free online resource aims to offer classes for learning basic AI skills and obtaining employment certificates.

Democratic Senator James Maroney of Connecticut emphasised the need for continuous learning in this rapidly evolving field. Determining the essential skills for an AI-driven world is challenging, given the technology’s swift progression and varied expert opinions. Gregory LaBlanc from Berkeley Law School suggested that, to complement AI’s capabilities, workers should focus on managing and applying AI rather than understanding its technical intricacies.

Several states, including Connecticut, California, Mississippi, and Maryland, have proposed legislation addressing AI in education. For instance, California is considering incorporating AI literacy into school curricula to ensure students understand AI principles, recognise its use, and appreciate its ethical implications. Connecticut’s AI Academy plans to offer certificates for career-related skills and provide foundational knowledge, from digital literacy to interacting with chatbots.

Despite the push for AI education, concerns about the digital divide persist. Senator Maroney highlighted the potential disadvantage for those lacking basic digital skills or access to technology. Marvin Venay of Bring Tech Home and Tesha Tramontano-Kelly of CfAL for Digital Inclusion stress the importance of affordable internet and devices as prerequisites for effective AI education. Ensuring these fundamentals is crucial for equipping individuals with the necessary tools to thrive in an AI-driven future.

AI-driven stock surge sparks dotcom bubble fears

The surge in US stock prices, driven by enthusiasm for AI, draws comparisons to the dotcom bubble two decades ago, sparking concerns over inflated valuations. The S&P 500 has reached new records, climbing more than 50% from its October 2022 low, while the Nasdaq Composite has surged over 70% since the end of 2022. A few massive tech stocks, including Nvidia, are leading this rally, reminiscent of the ‘Four Horsemen’ tech stocks of the late 1990s.

Despite the impressive gains, some analysts note that today’s tech stocks are more financially robust than their dotcom-era counterparts. Even so, fears persist that the AI-driven surge might end in a crash like the dotcom bust, which saw the Nasdaq Composite plummet nearly 80% from its March 2000 peak; while some companies such as Amazon thrived post-bubble, many never recovered.

Current tech stock valuations, while high, are grounded more in solid earnings prospects than in speculative growth, a key difference from the dotcom era. For instance, Nvidia trades at 40 times forward earnings estimates, far below Cisco’s 131 times in 2000. Although the S&P 500’s price-to-earnings ratio of 21 is above its historical average, it remains below the peak levels of the late 1990s. Nonetheless, investors remain wary that these metrics could become overly stretched if economic growth continues and tech stocks keep climbing.

GenAI revolution: Challenges and opportunities for marketing agencies

In the evolving landscape of marketing and advertising, the integration of generative AI presents both promise and challenges, as highlighted in a recent Forrester report. Key obstacles include a lack of AI expertise among agency employees and concerns over job obsolescence. Also, the human factor poses a significant hurdle that the industry must address urgently to fully harness the potential of genAI.

The potential economic impact of genAI on agencies is profound. Seen as a transformative force akin to the advent of smartphones, genAI promises to redefine creativity in marketing by combining data intelligence with human intuition. Agency leaders overwhelmingly recognise it as a disruptive technology, with 77% acknowledging its potential to fundamentally alter business operations. However, the fear of job displacement among employees remains palpable, exacerbated by recent industry disruptions and the rapid automation of white-collar roles.

To mitigate these concerns and fully embrace genAI, there is a pressing need for comprehensive AI literacy and training within agencies. While existing educational programmes and certifications provide a foundation, they are insufficient to meet the demands of integrating AI into everyday creative processes. Investment in reskilling and upskilling initiatives is crucial to empower agency employees to confidently navigate the AI-driven future of marketing and advertising.

Industry stakeholders, including agencies, technology partners, universities, and trade groups, must collaborate to establish robust training frameworks. Such a concerted effort will not only bolster agency capabilities in AI adoption but also ensure that the creative workforce remains agile and competitive in an increasingly AI-centric landscape. By prioritising AI literacy and supporting continuous learning initiatives, agencies can position themselves at the forefront of innovation, delivering enhanced value to clients through AI-powered creativity.

Amazon boosts AI strategy by acquiring Adept co-founders and key team members

Amazon has recently hired the co-founders and several team members from AI startup Adept in a strategic move to bolster its AI capabilities. Adept’s CEO David Luan and other key employees have joined Amazon, while the startup continues to operate independently, with Amazon paying a licensing fee to use some of its technology to automate business functions.

The recruitment mirrors Microsoft’s earlier hiring of Inflection AI’s team, which drew regulatory scrutiny. Adept, valued at over $1 billion, has already named a new CEO. Amazon’s recruitment of Adept’s team signals its ambition to advance AI agent tools, an area of focus for major tech labs. The company is also working to update its Alexa voice assistant with generative AI for more complex and responsive interactions.

At Amazon, Luan and others will report to Rohit Prasad, who leads the company’s artificial general intelligence efforts. Previously head of Alexa, Prasad has integrated researchers across Amazon to enhance AI model training. He stated that these new hires will significantly contribute to Amazon’s pursuit of achieving AGI.

Big Tech faces antitrust scrutiny amid surge in generative AI sector

Nvidia and Microsoft, the two companies that have benefited most from the AI boom, are also the most exposed to antitrust investigations into AI monopolies. Regulatory authorities have shifted their approach, acting quickly against potential monopolistic practices instead of taking years to intervene.

Notable investigations include the US Department of Justice examining Nvidia’s alleged anticompetitive behaviour in the GPU market and the Federal Trade Commission (FTC) probing Microsoft’s $13 billion investment in OpenAI and strategic staff acquisitions from Inflection. The UK’s Competition and Markets Authority (CMA) is also investigating, particularly concerned about the over 90 partnerships tech giants have formed with large language model developers since 2019, potentially stifling competition.

Politically, there’s a risk that excessive intervention could be seen as stifling innovation, particularly in the face of global competitors like China. Regulators must balance fostering competition with enabling innovation, ensuring that the rise of generative AI, which promises significant technological upheaval, does not result in a market dominated by a few powerful players.

Türkiye uses AI to combat tax evasion

Türkiye plans to use AI to combat tax evasion, following the example of countries like Italy and the US. Treasury and Finance Minister Mehmet Simsek announced the initiative, emphasising the role of AI in auditing companies. The technology is expected to help identify tax evasion, as many Turkish companies report minimal or no profits.

Simsek highlighted that Türkiye lags behind other OECD countries in tax collection relative to its economic output. The minister is advocating for a new bill to introduce additional taxes, which he argues are necessary to stabilise the nation’s finances, especially after the significant impact of last year’s earthquakes.

Adopting AI in tax audits is seen as a crucial step in improving compliance and increasing tax revenues, which are essential for Türkiye’s financial health and recovery efforts.

AI revolutionises academic writing, prompting debate over quality and bias

In a groundbreaking shift for the academic world, AI now contributes to at least 10% of research papers, soaring to 20% in computer science, according to The Economist. This transformation is driven by advancements in large language models (LLMs), as highlighted in a University of Tübingen study comparing recent papers with those from the pre-ChatGPT era. The research shows a notable change in word usage, with terms like ‘delivers,’ ‘potential,’ ‘intricate,’ and ‘crucial’ becoming more common, while ‘important’ declines in use.

[Chart: frequency statistics of words used in AI-generated research papers. Source: The Economist]
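The Tübingen-style comparison described above boils down to measuring how a term’s relative frequency shifts between a pre-ChatGPT corpus and a recent one. The sketch below illustrates that idea on toy word lists; the two corpora are invented stand-ins, not the study’s actual data.

```python
# Toy illustration of word-frequency drift between two corpora,
# in the spirit of the comparison described above. The corpora
# are hypothetical stand-ins for pre- and post-ChatGPT abstracts.

def relative_freq(corpus: list[str], term: str) -> float:
    """Occurrences of a term per word of corpus."""
    return corpus.count(term) / len(corpus)

pre  = "we present an important method with important results".split()
post = "we present a crucial method that delivers intricate results".split()

for term in ("important", "crucial", "delivers"):
    change = relative_freq(post, term) - relative_freq(pre, term)
    print(f"{term}: {change:+.3f}")
```

A positive change for LLM-favoured terms like ‘crucial’ or ‘delivers’, alongside a drop in ‘important’, is the kind of signal the study used to estimate the share of AI-assisted papers.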

Researchers are leveraging LLMs for editing, translating, simplifying coding, streamlining administrative tasks, and accelerating manuscript drafting. However, this integration raises concerns. LLMs may reinforce existing viewpoints and frequently cite prominent articles, potentially leading to an inflation of publications and a dilution of research quality. This risks perpetuating bias and narrowing academic diversity.

As the academic community grapples with these changes, scientific journals are seeking ways to respond as AI grows more sophisticated. Trying to detect and prevent the use of AI is increasingly futile. Other approaches to upholding the quality of research are under discussion, including investing in a more solid peer-review process, insisting on the replication of experiments, and hiring academics based on the quality of their work rather than the quantity encouraged by publication pressure.

Recognising the inevitability of AI’s role in academic writing, Diplo introduced the KaiZen publishing approach, which combines just-in-time updates facilitated by AI with reflective writing crafted by humans, aiming to harmonise the strengths of AI and human intellect in producing scholarly work.

As AI continues to revolutionise academic writing, the landscape of research and publication is poised for further evolution, prompting ongoing debates and the search for balanced solutions.