Google and Microsoft join inauguration donor list

Google and Microsoft have each pledged $1 million to support Donald Trump’s upcoming presidential inauguration, joining Meta, Amazon, and Apple chief executive Tim Cook in contributing significant sums. The donations appear to be part of broader strategies by these companies to maintain access to political leadership in a rapidly changing regulatory environment.

Google, which has faced threats from Trump regarding potential break-ups, aims to secure goodwill through financial contributions and online visibility, including a YouTube livestream of the inauguration. Microsoft has also maintained steady political donations, previously giving $500,000 to Trump’s first inauguration and the same amount to President Joe Biden’s ceremony.

This alignment with Trump marks a notable trend of tech companies seeking to protect their interests, particularly as issues like antitrust regulations and data privacy laws remain in political crosshairs. With both tech giants navigating a landscape of increased government scrutiny, their contributions indicate a cautious approach to preserving influence at the highest levels of power.

These donations reflect a pragmatic move by Silicon Valley, where cultivating political ties is seen as a way to safeguard business operations amid shifting political dynamics.

Musk plans edgier version of Grok

Elon Musk’s AI company, xAI, is preparing to launch a controversial feature for its chatbot, Grok, called ‘Unhinged Mode.’ According to a recently updated FAQ on the Grok website, this mode will deliver responses that are intentionally provocative, offensive, and irreverent, mimicking an amateur stand-up comedian pushing boundaries.

Musk first teased the idea of an unfiltered chatbot nearly a year ago, describing Grok as a tool that would answer controversial questions without self-censorship. While Grok has already been known for its edgy responses, it currently avoids politically sensitive topics. The new mode appears to be an effort to deliver on Musk’s vision of an anti-‘woke’ AI assistant, standing apart from more conservative competitors like OpenAI’s ChatGPT.

The move comes amid ongoing debates about political bias in AI systems. Musk has previously claimed that most AI tools lean left due to their reliance on web-based training data. He has vowed to make Grok politically neutral, blaming the internet’s content for any perceived bias in the chatbot’s current outputs. Critics, however, worry that unleashing an unfiltered mode could lead to harmful or offensive outputs, raising questions about the responsibility of AI developers.

As Grok continues to evolve, the AI industry is closely watching how users respond to Musk’s push for a less restrained chatbot. Whether this will prove a success or ignite further controversy remains to be seen.

Synthetic data seen as AI’s future

Elon Musk has echoed concerns from AI researchers that the industry is running out of new, real-world data to train advanced models. Speaking during a livestream with Stagwell’s Mark Penn, Musk noted that AI systems have already processed most of the available human knowledge, and said that plateau was effectively reached last year.

To address the issue, AI developers are increasingly turning to synthetic data (information generated by AI models themselves) to continue training models. Musk argued that self-generated data will allow AI systems to improve through self-learning, with major players like Microsoft, Google, and Meta already incorporating this approach in their AI models.

While synthetic data offers cost-saving advantages, it also poses risks. Some experts warn it could cause ‘model collapse’, reducing creativity and reinforcing biases if the AI reproduces flawed patterns from earlier training data. As the AI sector pivots towards self-generated training material, the challenge lies in balancing innovation with reliability.

Grok chatbot now available on iOS

Elon Musk’s AI company, xAI, has launched a standalone iOS app for its chatbot, Grok, marking a major expansion beyond its initial availability to X users. The app is now live in several countries, including the US, Australia, and India, allowing users to access the chatbot directly on their iPhones.

The Grok app offers features such as real-time data retrieval from the web and X, text rewriting, summarising long content, and even generating images from text prompts. xAI highlights Grok’s ability to create photorealistic images with minimal restrictions, including the use of public figures and copyrighted material.

In addition to the app, xAI is working on a dedicated website, Grok.com, which will soon make the chatbot available on browsers. Initially limited to X’s paying subscribers, Grok rolled out a free version in November and made it accessible to all users earlier this month. The launch marks a notable push by xAI to establish Grok as a versatile, widely available AI assistant.

Tesla’s driverless tech under investigation

US safety regulators are investigating Tesla’s ‘Actually Smart Summon’ feature, which allows drivers to move their cars remotely without being inside the vehicle. The probe follows reports of crashes involving the technology, including at least four confirmed incidents.

The US National Highway Traffic Safety Administration (NHTSA) is examining nearly 2.6 million Tesla cars equipped with the feature since 2016. The agency noted issues with the cars failing to detect obstacles, such as posts and parked vehicles, while using the technology.

Tesla has not commented on the investigation. Chief executive Elon Musk has been a vocal supporter of self-driving innovations, insisting they are safer than human drivers. However, this probe, along with other ongoing investigations into Tesla’s autopilot features, could result in recalls and increased scrutiny of the firm’s driverless systems.

The NHTSA will assess how fast cars can move in Smart Summon mode and the safeguards in place to prevent use on public roads. Tesla’s manual advises drivers to operate the feature only in private areas with a clear line of sight, but concerns remain over its real-world safety.

US tech leaders oppose proposed export limits

A prominent technology trade group has urged the Biden administration to reconsider a proposed rule that would restrict global access to US-made AI chips, warning that the measure could undermine America’s leadership in the AI sector. The Information Technology Industry Council (ITI), representing major companies like Amazon, Microsoft, and Meta, expressed concerns that the restrictions could unfairly limit US companies’ ability to compete globally while allowing foreign rivals to dominate the market.

The proposed rule, expected to be released as soon as Friday, is part of the Commerce Department’s broader strategy to regulate AI chip exports and prevent misuse, particularly by adversaries like China. The restrictions aim to curb the potential for AI to enhance China’s military capabilities. However, in a letter to Commerce Secretary Gina Raimondo, ITI CEO Jason Oxman criticised the administration’s urgency in finalising the rule, warning of ‘significant adverse consequences’ if implemented hastily. Oxman called for a more measured approach, such as issuing a proposed rule for public feedback rather than enacting an immediate policy.

Industry leaders have been vocal in their opposition, describing the draft rule as overly broad and damaging. The Semiconductor Industry Association raised similar concerns earlier this week, and Oracle’s Executive Vice President Ken Glueck slammed the measure as one of the most disruptive ever proposed for the US tech sector. Glueck argued the rule would impose sweeping regulations on the global commercial cloud industry, stifling innovation and growth.

While the administration has yet to comment on the matter, the growing pushback highlights the tension between safeguarding national security and maintaining US dominance in the rapidly evolving field of AI.

Grok introduces AI-powered features to wider audience

Elon Musk’s AI venture, xAI, has unveiled a standalone iOS app for its chatbot, Grok, marking its first major expansion beyond the X platform. The app, currently in beta testing across Australia and a few other regions, offers users an array of generative AI features, including real-time web access, text rewriting, summarisation, and even image generation from text prompts.

Grok, described as a ‘maximally truthful and curious’ assistant, is designed to provide accurate answers, create photorealistic images, and analyse uploaded pictures. While previously restricted to paying X subscribers, a free version of the chatbot was launched in November and has recently been made accessible to all users.

The app also serves as a precursor to a dedicated web platform, Grok.com, which is in the works. xAI has touted the chatbot’s ability to produce detailed and unrestricted image content, even allowing creations involving public figures and copyrighted material. This open approach sets Grok apart from other AI tools with stricter content policies.

As the beta rollout progresses, Grok is poised to become a versatile tool for users seeking generative AI capabilities in a dynamic and user-friendly interface.

NeurIPS conference showcases AI’s rapid growth

The NeurIPS conference, AI’s premier annual gathering, drew over 16,000 computer scientists to British Columbia last week, highlighting the field’s rapid growth and transformation. Once an intimate meeting of academic outliers, the event has evolved into a showcase for technological breakthroughs and corporate ambitions, featuring major players like Alphabet, Meta, and Microsoft.

Industry luminaries like Ilya Sutskever and Fei-Fei Li discussed AI’s evolving challenges. Sutskever emphasised AI’s unpredictability as it learns to reason, while Li called for expanding beyond 2D internet data to develop ‘spatial intelligence’. The conference, delayed a day to avoid clashing with a Taylor Swift concert, underscored AI’s growing mainstream prominence.

Venture capitalists, sponsors, and tech giants flooded the event, reflecting AI’s lucrative appeal. The number of research papers accepted has surged tenfold in a decade, and discussions focused on tackling the costs and limitations of scaling AI models. Notable attendees included Meta’s Yann LeCun and Google DeepMind’s Jeff Dean, who advocated for ‘modular’ and ‘tangly’ AI architectures.

In a symbolic moment of AI’s widening reach, 10-year-old Harini Shravan became the youngest person ever to have a paper accepted at the conference, illustrating how the field now embraces new generations and diverse ideas.

Blade Runner producer takes legal action over AI image use

Alcon Entertainment, the producer behind Blade Runner 2049, has filed a lawsuit against Tesla and Warner Bros, accusing them of misusing AI-generated images that resemble scenes from the movie to promote Tesla’s new autonomous Cybercab. Filed in California, the lawsuit alleges violations of US copyright law and claims Tesla falsely implied a partnership with Alcon through the use of the imagery.

Alcon stated that it had rejected Warner Bros’ request to use official Blade Runner images for Tesla’s Cybercab event on October 10. Despite this, Tesla allegedly proceeded with AI-created visuals that mirrored the film’s style. Alcon is concerned this could confuse its brand partners, especially ahead of its upcoming Blade Runner 2099 series for Amazon Prime.

Though no specific damages were mentioned, Alcon emphasised that it has invested hundreds of millions in the Blade Runner brand and argued that Tesla’s actions had caused substantial financial harm.

London-based company faces scrutiny for AI models misused in propaganda campaigns

A London-based company, Synthesia, known for its lifelike AI video technology, is under scrutiny after its avatars were used in deepfake videos promoting authoritarian regimes. These AI-generated videos, featuring models such as Mark Torres and Connor Yeates, falsely showed their likenesses endorsing the military leader of Burkina Faso, causing distress to those involved. Despite the company’s claims of strengthened content moderation, many affected models were unaware that their images had been misused until journalists informed them.

In 2022, actors like Torres and Yeates were hired to take part in Synthesia’s AI model shoots for corporate projects. They later discovered their avatars had been used in political propaganda to which they had not consented, causing emotional distress as they feared personal and professional damage from the fake videos. Despite Synthesia’s efforts to ban accounts using its technology for such purposes, the harmful content spread online, including on platforms like Facebook.

Synthesia has expressed regret, stating it will continue to improve its processes. However, the long-term impact on the actors remains, with some questioning the lack of safeguards in the AI industry and warning of the dangers involved when likenesses are handed over to companies without adequate protections.