Google and Microsoft have each pledged $1 million to support Donald Trump’s upcoming presidential inauguration, joining other tech giants such as Meta and Amazon, as well as Apple CEO Tim Cook, in contributing significant sums. The donations appear to be part of broader strategies by these companies to maintain access to political leadership in a rapidly changing regulatory environment.
Google, which has faced threats from Trump regarding potential break-ups, aims to secure goodwill through financial contributions and online visibility, including a YouTube livestream of the inauguration. Microsoft has also maintained steady political donations, previously giving $500,000 to Trump’s first inauguration as well as to President Joe Biden’s ceremony.
This alignment with Trump marks a notable trend of tech companies seeking to protect their interests, particularly as issues like antitrust regulations and data privacy laws remain in political crosshairs. With both tech giants navigating a landscape of increased government scrutiny, their contributions indicate a cautious approach to preserving influence at the highest levels of power.
These donations reflect a pragmatic move by Silicon Valley, where cultivating political ties is seen as a way to safeguard business operations amid shifting political dynamics.
Elon Musk’s AI venture, xAI, has unveiled a standalone iOS app for its chatbot, Grok, marking its first major expansion beyond the X platform. The app, currently in beta testing across Australia and a few other regions, offers users an array of generative AI features, including real-time web access, text rewriting, summarisation, and even image generation from text prompts.
Grok, described as a ‘maximally truthful and curious’ assistant, is designed to provide accurate answers, create photorealistic images, and analyse uploaded pictures. The chatbot was previously restricted to paying X subscribers, but a free version launched in November has recently been made accessible to all users.
The app also serves as a precursor to a dedicated web platform, Grok.com, which is in the works. xAI has touted the chatbot’s ability to produce detailed and unrestricted image content, even allowing creations involving public figures and copyrighted material. This open approach sets Grok apart from other AI tools with stricter content policies.
As the beta rollout progresses, Grok is poised to become a versatile tool for users seeking generative AI capabilities in a dynamic and user-friendly interface.
Craig Wright, an Australian computer scientist, has been found in contempt of court for falsely asserting he is Bitcoin’s creator, Satoshi Nakamoto. Despite a High Court ruling in March debunking his claim, Wright continued launching lawsuits seeking intellectual property rights over Bitcoin, including a $1.2 trillion demand.
The court described Wright’s actions as ‘legal terrorism’ and sentenced him to a suspended 12-month prison term. If he persists, he risks jail time. Wright’s claim lacked concrete evidence, prompting the cryptocurrency industry to unite against him.
The court found Wright ‘lied extensively’ in his pursuit of recognition, creating a ‘chilling effect’ on the industry. The identity of Bitcoin’s inventor, Satoshi Nakamoto, remains unknown, as all claims, including Wright’s, have been discredited.
Elon Musk confirmed that Starlink satellite internet is inactive in India, following recent seizures of Starlink devices by Indian authorities. Musk stated on X that Starlink beams were ‘never on’ in the country, addressing concerns raised after a device was confiscated during an armed conflict operation in Manipur and another during a major drug bust at sea.
In Manipur, where ethnic conflict has continued since last year, the Indian Army seized a Starlink dish believed to be used by militants. Officials suspect it was smuggled from Myanmar, where rebel groups reportedly use Starlink despite the company’s lack of operations there.
Earlier this month, Indian police intercepted a Starlink device linked to smugglers transporting $4.2 billion worth of methamphetamine. Authorities believe the internet device was used for navigation, prompting a legal request to Starlink for purchase details.
Starlink is currently seeking approval to operate in India and is working to resolve security concerns as part of the licensing process.
At the IGF 2024 preparatory session, stakeholders discussed the critical challenges surrounding digital sovereignty in developing countries, particularly in Africa. The dialogue, led by AFICTA and global experts, explored balancing data localisation with economic growth, infrastructure constraints, and regulatory policies.
Jimson Olufuye and Ulandi Exner highlighted the financial and technical hurdles of establishing local data centres, including unreliable electricity supplies and limited expertise. Nigeria’s Kashifu Inuwa Abdullahi stressed the importance of data classification, advocating for clear regulations that differentiate sensitive government data from less critical commercial information.
The conversation extended to renewable energy’s role in powering local infrastructure. Jimson Olufuye pointed to successful solar-powered centres in Nigeria, while Kossi Amessinou noted the need for governments to utilise data effectively for economic development. Participants, including Martin Koyabe and Mary Uduma, underscored the importance of harmonised regional policies to streamline cross-border data flows without compromising security.
Speakers like Melissa Sassi and Dr Toshikazu Sakano argued for public-private partnerships to foster skills development and job creation. The call for capacity building remained a recurring theme, with Rachael Shitanda and Melissa Sassi urging governments to prioritise technical training while retaining talent in their countries.
The discussion concluded on an optimistic note, acknowledging that solutions, such as renewable energy integration and smart regulations, can help achieve digital sovereignty. Speakers emphasised the need for continued collaboration to overcome economic, technical, and policy challenges while fostering innovation and growth.
Elon Musk’s AI startup, xAI, revealed on Saturday that the latest version of its Grok-2 chatbot will be available for free to all users of the social media platform X. The new version of Grok-2 is part of xAI’s continued efforts to integrate AI technology into the platform, providing users with more advanced and efficient tools for interaction.
While the chatbot will be free for everyone, Premium and Premium+ users will benefit from higher usage limits and will be the first to experience new features as they are rolled out. This tiered approach ensures that paying users receive an enhanced experience, with priority access to future updates and capabilities.
xAI has been quietly testing the new Grok-2 model for several weeks, fine-tuning its performance and features in preparation for the public release. The improved version is expected to offer better capabilities and user interactions, marking a significant step forward in AI development for social media platforms.
The Swedish government is exploring age restrictions on social media platforms to combat the rising problem of gangs recruiting children online for violent crimes. Officials warn that platforms like TikTok and Snapchat are being used to lure minors, some as young as 11, into carrying out bombings and shootings, contributing to Sweden’s status as the European country with the highest per capita rate of deadly shootings. Justice Minister Gunnar Strommer emphasised the seriousness of the issue and urged social media companies to take concrete action.
Swedish police report that the number of children under 15 involved in planning murders has tripled compared to last year, highlighting the urgency of the situation. Education Minister Johan Pehrson noted the government’s interest in measures such as Australia’s recent ban on social media for children under 16, stating that no option is off the table. Officials also expressed frustration at the slow progress by tech companies in curbing harmful content.
Representatives from platforms like TikTok, Meta, and Google attended a recent Nordic meeting to address the issue, pledging to help combat online recruitment. However, Telegram and Signal were notably absent. The government has warned that stronger regulations could follow if the tech industry fails to deliver meaningful results.
Google’s DeepMind has introduced GenCast, a cutting-edge AI weather prediction model that outperforms the European Centre for Medium-Range Weather Forecasts’ (ECMWF) ENS, widely regarded as the global leader in operational forecasting. A study in Nature highlighted GenCast’s superior accuracy, with the model outperforming ENS on 97.2% of the evaluated forecast targets in a comparative analysis of 2019 data.
Unlike earlier deterministic models, GenCast creates a complex probability distribution of potential weather scenarios by generating 50 or more forecasts per instance. This ensemble approach provides a nuanced understanding of weather trajectories, elevating predictive reliability.
Google is integrating GenCast into platforms such as Search and Maps, while also planning to make real-time and historical AI-powered forecasts accessible for public and research use. With this advancement, the tech giant aims to revolutionise weather forecasting and its applications worldwide.
Google’s newest AI, the PaliGemma 2 model, has drawn attention for its ability to interpret emotions in images, a feature unveiled in a recent blog post. Unlike basic image recognition, PaliGemma 2 offers detailed captions and insights about people and scenes. However, its emotion detection capability has sparked heated debates about ethical implications and scientific validity.
Critics argue that emotion recognition is fundamentally flawed, relying on outdated psychological theories and subjective visual cues that fail to account for cultural and individual differences. Studies have shown that such systems often exhibit biases, with one report highlighting how similar models assign negative emotions more frequently to certain racial groups. Google says it performed extensive testing on PaliGemma 2 for demographic biases, but details of these evaluations remain sparse.
Experts also worry about the risks of releasing this AI technology to the public, citing potential misuse in areas like law enforcement, hiring, and border control. While Google emphasises its commitment to responsible innovation, critics like Oxford’s Sandra Wachter caution that without robust safeguards, tools like PaliGemma 2 could reinforce harmful stereotypes and discriminatory practices. The debate underscores the need for a careful balance between technological advancement and ethical responsibility.
Meta Platforms has reported that generative AI had limited influence on misinformation campaigns across its platforms in 2023. According to Nick Clegg, Meta’s president of global affairs, coordinated networks spreading propaganda struggled to gain traction on Facebook and Instagram, and AI-generated misinformation was promptly flagged or removed.
Clegg noted, however, that some of these operations have migrated to other platforms or standalone websites with fewer moderation systems. Meta dismantled around 20 covert influence campaigns this year. The company aims to refine content moderation while maintaining free expression.
Meta also reflected on its overly strict moderation during the COVID-19 pandemic, with CEO Mark Zuckerberg expressing regret over certain decisions influenced by external pressure. Looking forward, Zuckerberg intends to engage actively in policy debates on AI under President-elect Donald Trump’s administration, underscoring AI’s critical role in US technological leadership.