Queensland premier criticises AI use in political advertising

The premier of the Australian state of Queensland, Steven Miles, has condemned an AI-generated video created by the LNP opposition, calling it a ‘turning point for our democracy.’ The TikTok video depicts Miles dancing under text about rising living costs and is clearly marked as AI-generated. Miles has stated that the state Labor party will not use AI-generated advertisements in the upcoming election campaign.

Miles expressed concern about the potential dangers of AI in political communication, noting that fabricated videos are more likely to be believed than doctored photos. Despite rejecting AI for Labor’s own content, he dismissed the need for truth-in-advertising laws, asserting that the party has no intention of creating deepfake videos.

The LNP defended their use of AI, emphasising that the video was clearly labelled and aimed at highlighting issues like higher rents and increased power prices under Labor. The Electoral Commission of Queensland noted that while the state’s electoral act does not specifically address AI, any false statements about a candidate’s character can be prosecuted.

Experts, including communications lecturer Susan Grantham and QUT’s Patrik Wikstrom, have warned about the broader implications of AI in politics. Grantham pointed out that politicians already using AI for lighter content are at greater risk of being targeted. Wikstrom stressed that the real issue is political communication designed to deceive, echoing concerns raised by a UK elections watchdog about AI deepfakes undermining elections. Australia is also planning to implement tougher laws focusing on deepfakes.

Musk’s Grok AI struggles with news accuracy

Grok, Elon Musk’s AI model available on the X platform, encountered significant accuracy issues following the attempted assassination of former President Donald Trump. The AI model posted incorrect headlines, including one falsely claiming Vice President Kamala Harris had been shot and another wrongly identifying the shooter as an antifa member. These errors stemmed from Grok’s inability to detect sarcasm and its reliance on unverified claims circulating on X.

Musk, who had earlier announced plans to develop TruthGPT, has promoted Grok as a revolutionary tool for news aggregation, one that leverages real-time posts from millions of users. Despite this potential, the incident underscores Grok’s limitations, particularly in handling breaking news. The model’s humorous design can also be a drawback, spreading misinformation and confusion.

The reliance on AI for news summaries raises concerns about accuracy and context, especially during critical events. Former Facebook public-policy director Katie Harbath emphasized the need for human oversight in providing context and verifying facts. The incident with Grok mirrors challenges faced by other AI models, such as OpenAI’s ChatGPT, which includes disclaimers to manage user expectations.

Musk’s vision: Establishing life on Mars

Elon Musk’s grand vision of establishing a human colony on Mars is rapidly taking shape at SpaceX, where intensive planning efforts are underway alongside rocket development. Musk, driven by a lifelong fascination with Mars, has directed SpaceX teams to design everything from dome habitats to spacesuits capable of withstanding Mars’s harsh conditions. His ambitious timeline now targets having a million people living on Mars within the next 20 years, a drastic acceleration from earlier projections.

Musk remains undeterred despite the immense challenges—such as freezing temperatures, dust storms, and the need for artificial atmospheres. SpaceX’s Starship rocket, designed to transport humans to Mars, is central to these plans, with recent successful test flights marking crucial milestones. The company envisions Starship not only as a transport vessel but also as a potential habitat, equipped with living quarters and recreational facilities crucial for long-term habitation on the red planet.

While Musk’s vision has sparked both admiration and scepticism, particularly given the complexities of Martian colonisation, SpaceX is forging ahead with concrete steps. Internal discussions include considerations on bioengineering, sustainable living through greenhouses, and even potential food sources like plant-based alternatives from Impossible Foods. Despite operational challenges and controversies, SpaceX continues to attract dedicated employees who share Musk’s belief in creating a multi-planetary civilisation.

Musk’s determination to secure humanity’s future on Mars remains resolute, setting SpaceX on a course that challenges conventional timelines and expectations in space exploration. While NASA projects a much later timeframe for human missions to Mars, Musk’s aggressive pursuit of his Martian dream underscores his relentless drive to push the boundaries of what’s possible in space travel and colonisation.

OpenAI blocks Chinese users amid growing tech rivalry

At the recent World AI Conference in Shanghai, China’s leading AI company, SenseTime, unveiled its latest model, SenseNova 5.5, which can identify objects, provide feedback on drawings, and summarise text. Comparable to OpenAI’s GPT-4, SenseNova 5.5 aims to attract users with 50 million free tokens and free migration support from OpenAI services. The launch of SenseNova 5.5 comes at a crucial time, as OpenAI will block Chinese users from accessing its tools starting 9 July, intensifying the rivalry between US and Chinese AI firms.

OpenAI’s decision to block Chinese users has sparked concern in China’s AI community, raising questions about equitable access to AI technologies. However, it has also created an opportunity for Chinese companies like SenseTime, Baidu, Zhipu AI, and Tencent Cloud to attract new users with free tokens and migration services, accelerating the development of Chinese AI companies that are already engaged in fierce competition.

Why does this matter?

The US-China tech rivalry has led to US restrictions on exporting advanced semiconductors to China, impacting the AI industry’s growth. While Chinese companies are quickly advancing, the US sanctions are causing shortages in computing capacity, as seen with Kuaishou’s AI model restrictions. Despite these challenges, Chinese commentators view OpenAI’s departure as a chance for China to achieve greater technological self-reliance and independence.

AI-powered rat model mimics real behaviour

Google DeepMind and Harvard University researchers have developed a realistic virtual rat to study the neural circuits that control movement. The virtual rat’s brain, made up of artificial neural networks, was trained using hours of neural recordings from real rats.

This digital brain could predict and replicate the behaviour of actual rats, such as running or rearing up. The study identified key brain regions involved in movement and demonstrated that AI can simulate neural signals more accurately than older models.

Bridging the gap between AI and neuroscience, the project offers new ways to study brain functions and movements. The method allows researchers to tweak neural connections in the virtual rat and observe how the changes affect behaviour, providing insights that are difficult to obtain through traditional lab experiments. By revealing how the brain commands muscle movements, the research could lead to advances in both robotics and neuroscience.

Offering a platform to test hypotheses about brain function and behaviour quickly and efficiently, the virtual rat enables researchers to explore more complex tasks. The team plans to use these virtual rats to understand further how real brains generate intricate behaviours. Combining AI with biological data, the collaboration highlights the potential to uncover the mechanisms of brain function and movement.

How AI is transforming construction safety and efficiency

Florida International University’s Moss Department of Construction Management is at the forefront of a revolution in the industry. They’re equipping students with the tools to leverage AI for increased efficiency and safety on construction sites.

Imagine generating blueprints with just a few specifications or having a watchful eye constantly monitoring a site for safety hazards. These are just a few ways AI is transforming construction. Students like Kaelan Dodd are already putting this knowledge to work. ‘An AI tool I tried at my job based on what I learned at FIU lets us create blueprints in seconds,’ Dodd said, impressed by the technology’s potential.

But FIU’s course goes beyond simply using AI. Professor Lufan Wang understands the importance of students not just using the technology but understanding it. By teaching them to code, she gives them a ‘translator’ to communicate with AI and provides valuable feedback to improve its capabilities. An approach like this one prepares students to not only navigate the constantly evolving world of AI but also shape its future applications in construction.

The benefits of AI extend far beyond efficiency. Construction is a field where safety is paramount, and AI can be a valuable ally. Imagine having a tireless AI assistant analyse thousands of construction site photos to identify potential hazards or sending an AI-powered robot into a dangerous situation to gather information. These are a few ways AI can minimise risk and potentially save lives. While AI won’t replace human construction managers entirely, it can take on the most dangerous tasks, allowing human expertise to focus on what it does best – guiding and overseeing complex projects.

Anthropic launches grants for developing new AI benchmark

Anthropic is launching a new program to fund the creation of new benchmarks for better assessing AI model performance and its impact. In its blog post, Anthropic stated that it will offer grants to third-party organisations developing improved methods for evaluating advanced AI model capabilities.

Urging the AI research community to develop more rigorous benchmarks that address societal and security implications, Anthropic advocated revising existing methodologies with new tools, infrastructure, and methods. Aiming to build an early warning system for identifying and assessing risks, the company specifically called for tests that evaluate a model’s ability to conduct cyberattacks, enhance weapons of mass destruction, and manipulate or deceive individuals.

Moreover, Anthropic also aims for its new program to support research into benchmarks and tasks that explore AI’s potential in scientific study, multilingual communication, bias mitigation, and self-censorship of toxicity. In addition to grants, researchers will have the chance to consult with the company’s domain experts. The company also expressed interest in potentially investing in or acquiring the most promising projects, offering various ‘funding options tailored to the needs and stage of each project’.

Why does this matter?

Benchmarking is the process of evaluating the quality of an AI system. An evaluation is typically a fixed procedure for assessing a model’s capability in one area, whereas models like Anthropic’s Claude and OpenAI’s ChatGPT are designed to perform a host of tasks. Developing robust and reliable model evaluations is therefore complex and riddled with challenges. Anthropic’s initiative to support new AI benchmarks is commendable, with the program’s stated objective serving as a catalyst for progress towards a future where comprehensive AI evaluation is an industry standard. However, given the company’s own commercial interests, the initiative may raise trust concerns.

UN adopts China-led AI resolution

The UN General Assembly has adopted a resolution on AI capacity building, led by China. This non-binding resolution seeks to enhance developing countries’ AI capabilities through international cooperation and capacity-building initiatives. It also urges international organisations and financial institutions to support these efforts.

The resolution comes in the context of the ongoing technology rivalry between Beijing and Washington, as both nations strive to influence AI governance and portray each other as destabilising forces. Earlier this year, the US promoted a UN resolution advocating for ‘safe, secure, and trustworthy’ AI systems, gaining the support of over 110 countries, including China.

China’s resolution acknowledges the UN’s role in AI capacity-building and calls on Secretary-General Antonio Guterres to report on the unique challenges developing countries face and provide recommendations to address them.

Connecticut launches AI Academy to boost tech skills

Connecticut is spearheading AI-education efforts by developing what could be the nation’s first Citizens AI Academy. The free online resource aims to offer classes teaching basic AI skills and certificates for employment.

Democratic Senator James Maroney of Connecticut emphasised the need for continuous learning in this rapidly evolving field. Determining the essential skills for an AI-driven world is challenging, given the technology’s swift progression and varied expert opinions. Gregory LaBlanc from Berkeley Law School suggested that workers should focus on managing and utilising AI to complement its capabilities, rather than on understanding its technical intricacies.

Several states, including Connecticut, California, Mississippi, and Maryland, have proposed legislation addressing AI in education. For instance, California is considering incorporating AI literacy into school curricula to ensure students understand AI principles, recognise its use, and appreciate its ethical implications. Connecticut’s AI Academy plans to offer certificates for career-related skills and provide foundational knowledge, from digital literacy to interacting with chatbots.

Despite the push for AI education, concerns about the digital divide persist. Senator Maroney highlighted the potential disadvantage for those who lack basic digital skills or access to technology. Marvin Venay of Bring Tech Home and Tesha Tramontano-Kelly of CfAL for Digital Inclusion stress the importance of affordable internet and devices as prerequisites for effective AI education. Ensuring these fundamentals is crucial for equipping individuals with the tools they need to thrive in an AI-driven future.

Chance the Rapper teams up with Meta for AI-driven creativity

Grammy Award-winning musician and producer Chance the Rapper is known for his innovative approach to music and fashion. Recently, he has teamed up with Meta for their Super Fan event, showcasing his interest in cutting-edge technology, particularly Meta AI. The collaboration highlights how AI transforms various aspects of his work, from engaging fans to creating music and fashion.

Chance has long been a pioneer in using digital platforms to connect with fans and distribute his music. With the advent of AI, he is now pushing the boundaries of creativity even further. He likens AI to the patchwork denim look he sported at the Meta event, describing it as an amalgamation of different design patterns. The comparison underscores his view of AI as a tool for combining diverse elements to create something unique.

The Meta AI suite, integrated into platforms like Instagram and Facebook, allows Chance to explore new artistic directions. He uses these tools to experiment with music production, generate unique soundscapes, and refine his musical style. Chance also finds inspiration on Instagram, drawing from various topics and incorporating these influences into his work.

Additionally, Chance sees potential in Meta’s new Ray-Ban Meta smart glasses, which offer responsive tech for human engagement and photography. By leveraging AI tools, he enhances his artistic process, engages more effectively with fans, and supports initiatives like the growth of women’s sports. As he prepares to release his new project, ‘Star Line Gallery,’ Chance the Rapper continues to inspire and innovate in the realms of music and fashion.