Elon Musk has launched a poll on social media platform X, asking if Tesla should invest $5 billion in his AI startup, xAI. Early votes show strong support, with 70% in favour. Musk emphasised that board approval and a shareholder vote are necessary before proceeding. This comes after Tesla reported its lowest profit margin in five years due to price cuts and increased AI project spending.
During Tesla’s earnings call, Musk highlighted xAI’s potential to enhance Tesla’s full self-driving capabilities and data centre infrastructure. He also mentioned opportunities for integrating xAI’s chatbot, Grok, with Tesla’s software. Within roughly three hours of being posted, the poll had drawn votes from approximately 386,000 users. Tesla and xAI have not yet commented on the poll results.
Should Tesla invest $5B into @xAI, assuming the valuation is set by several credible outside investors?
(Board approval & shareholder vote are needed, so this is just to test the waters)
Musk launched xAI last year as a challenger to OpenAI’s ChatGPT, securing $6 billion in Series B funding and reaching a post-money valuation of $24 billion. Major backers include Andreessen Horowitz and Sequoia Capital. Musk plans for a quarter of xAI to be owned by investors in X, which he purchased for $44 billion.
On the earnings call, Musk dismissed concerns about diverting resources from Tesla to his other ventures. He previously directed Nvidia to ship thousands of AI chips to xAI and X because Tesla’s data centre was at full capacity. Musk has a history of using polls to gauge public opinion, including a 2021 poll on selling 10% of his Tesla stake.
The US Federal Trade Commission (FTC) announced a probe into eight companies offering AI-powered ‘surveillance pricing’ services, to evaluate their impact on privacy, competition, and consumer protection. The companies under scrutiny include Mastercard, JPMorgan Chase, Revionics, Bloomreach, Task Software, PROS, Accenture, and McKinsey & Co. These firms use AI to adjust pricing based on consumer behaviour, location, and personal data, potentially leading to different prices for different customers.
The FTC’s investigation aims to uncover the types of surveillance pricing services developed by these companies and their current applications. The agency seeks to understand how these AI-driven pricing models affect consumer pricing and whether they exploit personal data to charge higher prices. FTC Chair Lina M. Khan emphasised the risks to privacy and the potential exploitation of personal data in her statement, highlighting the need for transparency in how businesses use consumer information.
This inquiry reflects growing concerns about using AI and other technologies to set personalised prices based on detailed consumer data. The FTC’s actions aim to shed light on these practices and ensure consumer protection in an increasingly data-driven market.
Meta Platforms has unveiled its largest version of the Llama 3 AI model, boasting multilingual capabilities and performance metrics that challenge paid models from competitors like OpenAI. With 405 billion parameters, the new model can converse in eight languages, write better computer code, and solve more complex math problems, making it significantly more powerful than its predecessor, though it still trails OpenAI’s GPT-4, reported to have around one trillion parameters, and Amazon’s upcoming two-trillion-parameter model.
CEO Mark Zuckerberg has high expectations for Llama 3, predicting it will surpass proprietary competitors by next year. He also said Meta’s AI chatbot, powered by these models, is on track to become the most popular AI assistant by the end of this year, with hundreds of millions of users already. The release comes amidst a competitive push among tech companies to demonstrate that their advanced AI models can solve complex reasoning tasks and justify the significant investments made in them.
Meta is also releasing updated versions of its lighter-weight 8 billion and 70 billion parameter Llama 3 models. All versions are multilingual and can handle longer user requests via a larger context window, which improves their ability to generate computer code. Meta’s head of generative AI, Ahmad Al-Dahle, highlighted improvements in solving math problems achieved by using AI to generate training data. By offering Llama models largely free of charge, Meta aims to foster innovation, reduce dependence on competitors, and increase engagement on its social networks, despite some investor concerns over the costs involved.
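For developers who want to experiment with the openly released weights, the smaller Llama 3 models can be run with standard open-source tooling. The sketch below is illustrative only: it assumes a recent version of the Hugging Face transformers library, a GPU with enough memory, and that you have accepted Meta’s licence for the gated model repository (the model ID shown is the 8 billion parameter instruct variant).

```python
# Minimal, illustrative sketch of running an open-weight Llama 3 model locally.
# Assumes a recent transformers version and accepted access to the gated repo.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # 8B instruct variant on Hugging Face
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style messages; recent pipeline versions accept this format directly.
messages = [
    {"role": "system", "content": "You are a helpful multilingual assistant."},
    {"role": "user", "content": "Écris une fonction Python qui inverse une chaîne."},
]

output = generator(messages, max_new_tokens=200)
# The returned conversation includes the newly generated assistant turn last.
print(output[0]["generated_text"][-1]["content"])
```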
The premier of the Australian state of Queensland, Steven Miles, has condemned an AI-generated video created by the LNP opposition, calling it a ‘turning point for our democracy’. The TikTok video depicts the premier dancing under text about rising living costs and is clearly marked as AI-generated. Miles has stated that the state Labor party will not use AI-generated advertisements in the upcoming election campaign.
Miles expressed concerns about the potential dangers of AI in political communication, highlighting the need for caution because fabricated videos are more likely to be believed than doctored photos. Despite ruling out AI for Labor’s own content, Miles dismissed the need for truth-in-advertising laws, asserting that Labor has no intention of creating deepfake videos.
The LNP defended their use of AI, emphasising that the video was clearly labelled and aimed at highlighting issues like higher rents and increased power prices under Labor. The Electoral Commission of Queensland noted that while the state’s electoral act does not specifically address AI, any false statements about a candidate’s character can be prosecuted.
Experts, including communications lecturer Susan Grantham and QUT’s Patrik Wikstrom, have warned about the broader implications of AI in politics. Grantham pointed out that politicians already using AI for lighter content are at greater risk of being targeted. Wikstrom stressed that the real issue is political communication designed to deceive, echoing concerns raised by a UK elections watchdog about AI deepfakes undermining elections. Australia is also planning to implement tougher laws focusing on deepfakes.
Top competition authorities from the EU, UK, and US have issued a joint statement emphasising the importance of fair, open, and competitive markets in developing and deploying generative AI. Leaders from these regions, including Margrethe Vestager of the European Commission, Sarah Cardell of the UK Competition and Markets Authority, Jonathan Kanter of the US Department of Justice, and Lina M. Khan of the US Federal Trade Commission, highlighted their commitment to ensuring effective competition and protecting consumers and businesses from potential market abuses.
The officials recognise the transformational potential of AI technologies but stress the need to safeguard against risks that could undermine fair competition. These risks include the concentration of control over essential AI development inputs, such as specialised chips and vast amounts of data, and the possibility of large firms using their existing market power to entrench or extend their dominance in AI-related markets. The statement also warns against partnerships and investments that could stifle competition by allowing major firms to co-opt competitive threats.
The joint statement outlines several principles for protecting competition within the AI ecosystem, including fair dealing, interoperability, and maintaining choices for consumers and businesses. The authorities are particularly vigilant about the potential for AI to facilitate anti-competitive behaviours, such as price fixing or unfair exclusion. Additionally, they underscore the importance of consumer protection, ensuring that AI applications do not compromise privacy, security, or autonomy through deceptive or unfair practices.
Seomjae, a Seoul-based education solutions developer, is set to launch its AI-powered mathematics learning program at the Consumer Electronics Show in Las Vegas next January. The program uses an AI retrieval-augmented generation (RAG) model, developed over two years by a team of 40 mathematicians and AI developers. It features over 120,000 math problems and 30,000 lectures, offering personalised education tracks for each student.
Beta testing will begin on July 29, involving 50 students from Seoul, Ulsan, and Boston. Their feedback will be used to refine the technology and assess its feasibility. The system, called Transforming Educational Content to AI, extracts and analyses information from lectures and problem solutions to provide core content.
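Seomjae has not published implementation details, but a retrieval-augmented generation pipeline of this kind typically embeds the content library (problems and lecture notes), retrieves the items most relevant to a student’s question, and passes them to a language model as context. The sketch below is a generic, hypothetical illustration of that pattern using the sentence-transformers library; none of the names or content reflect Seomjae’s actual system.

```python
# Hypothetical sketch of a retrieval-augmented generation (RAG) loop for a
# maths-tutoring corpus: embed the library once, retrieve the closest items
# for a student query, then hand them to a language model as context.
from sentence_transformers import SentenceTransformer, util

library = [
    "Lecture 12: solving quadratic equations by completing the square.",
    "Problem 4051: factorise x^2 - 5x + 6 and state its roots.",
    "Lecture 3: introduction to fractions and common denominators.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
library_embeddings = encoder.encode(library, convert_to_tensor=True)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k library items most similar to the student's query."""
    query_embedding = encoder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, library_embeddings, top_k=top_k)[0]
    return [library[hit["corpus_id"]] for hit in hits]

context = retrieve("How do I find the roots of x^2 - 5x + 6?")
prompt = "Use the following material to explain step by step:\n" + "\n".join(context)
# `prompt` would then be sent to a language model to generate the lesson.
print(prompt)
```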
Seomjae is also expanding its business portfolio to include an essay-writing educational program through partnerships in the US and Vietnam. The company will participate in Dubai’s Gulf Information Technology Exhibition this October, showcasing its new educational technologies.
A company official expressed excitement about starting beta testing and integrating diverse feedback to improve the program. The goal is to refine the AI system and ensure its effectiveness for students worldwide.
Gcore has raised $60 million in Series A funding from investors including Wargaming, Constructor Capital, and Han River Partners, marking Gcore’s first external capital raise in over a decade. The funds will be invested in Gcore’s AI technology and platform, which uses NVIDIA GPUs to drive AI innovation. The move highlights Gcore’s commitment to enhancing cloud resource efficiency and data sovereignty.
The company’s extensive network and cloud capabilities have made it a trusted partner for public organisations, telcos, and global corporations. Gcore’s infrastructure supports a wide range of industries, including media, gaming, technology, financial services, and retail. Its global network of over 180 edge nodes spans six continents, powering the training and inference of large language models.
Investors have expressed strong support for Gcore’s mission. Wargaming’s Sean Lee praised Gcore’s decade-long partnership, while Constructor Capital’s Matthias Winter highlighted the company’s comprehensive edge solutions and low latency. Han River Partners’ Christopher Koh noted Gcore’s strategic position in emerging AI markets, particularly in the APAC region.
CEO Andre Reitenbach emphasised the transformative potential of AI for businesses. Gcore aims to connect the world to AI with innovative cloud and edge solutions. The investment underscores the growing demand for AI infrastructure and Gcore’s role in meeting this need, supported by its robust network and advanced AI servers.
Europol’s latest report predicts a surge in AI-assisted cybercrime across the EU. The ‘Internet Organised Crime Threat Assessment 2024’ highlights how AI tools are enabling non-technical individuals to execute complex online crimes. These tools, which enable deepfakes and false advertisements, are making it easier for bad actors to engage in cybercrime.
The agency stresses the need for law enforcement to enhance their capabilities to counter these threats. Europol’s Executive Director, Catherine De Bolle, emphasises the importance of building robust human and technical resources. Future advancements in deepfake technology could lead to severe cases of sexual extortion, requiring sophisticated detection tools.
🚨In 2023, millions of victims across the EU were attacked and exploited online on a daily basis.
📍Today, Europol published the Internet Organised Crime Threat Assessment (IOCTA) on:
🔴Key developments, changes & emerging threats in cybercrime.
Concerns also extend to the cryptocurrency ecosystem. Europol’s report flags the potential for increased fraud involving non-fungible tokens (NFTs) and Bitcoin exchange-traded funds (ETFs). As more people adopt these financial instruments, those without extensive cryptocurrency knowledge may become prime targets for scammers.
Recently, Europol seized €44.2 million in cryptocurrency assets from ChipMixer, linked to money laundering. This operation underscores the growing challenges law enforcement faces in combating sophisticated financial crimes facilitated by emerging technologies.
Deloitte has formed a strategic collaboration with Amazon Web Services (AWS) to assist companies globally in enhancing their capabilities in generative artificial intelligence, data analytics, and quantum computing. The partnership includes the establishment of an Innovation Lab, with a focus on cutting-edge technologies like AI, quantum machine learning, and autonomous robotics. This lab aims to address industry-specific challenges and support companies in successfully transitioning proofs of concept into full production.
The Innovation Lab will facilitate collaboration between Deloitte and AWS engineers to develop solutions for diverse industries, encompassing financial services, healthcare, media, and energy. One of the initial projects, Deloitte’s C-Suite AI™ for CFOs, is designed to streamline financial functions using large language models that simplify workflows, generate investor documentation, and automate customer service. The tool is powered by NVIDIA and Amazon Bedrock and is aimed specifically at the financial services sector.
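Amazon Bedrock exposes hosted foundation models through the AWS SDK, which is the layer a tool like C-Suite AI would sit on top of. The snippet below is only a minimal sketch of a call to Bedrock’s Converse API via boto3, not Deloitte’s implementation; the model ID, region, and prompt are placeholders, and it assumes valid AWS credentials with access to the chosen model.

```python
# Illustrative sketch of calling a hosted model through Amazon Bedrock's
# Converse API with boto3. Model ID, region, and prompt are placeholders;
# this is not the C-Suite AI implementation.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example Bedrock model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Draft a one-paragraph investor summary of Q2 operating cash flow."}],
        }
    ],
    inferenceConfig={"maxTokens": 300, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```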
Toyota Motor North America exemplifies a company benefiting from AWS machine learning and decision intelligence services to enhance its data ecosystem. Innovative solutions, such as dynamic pricing and parts forecasting, have been developed through its collaboration with Deloitte. The partnership’s objective is to help companies move from exploration to production of new technologies, addressing the inherent complexities and challenges involved.
Why does this matter?
Deloitte remains committed to supporting companies throughout their AI transformation journey. They offer tailored AI services and leverage their deep industry knowledge. The firm is currently training over 120,000 professionals worldwide in AI and investing more than £2 billion in technology learning and development initiatives. This extensive programme aims to boost skills in AI and other advanced technologies, ensuring greater client impact and improved productivity.
Grundon Waste Management is investing £750,000 in AI technology over three years to enhance driver safety. The company will implement Samsara’s Connected Operations Platform across its fleet of over 300 vehicles, following successful trials at two depots. The trials showed a 71% reduction in detected events and improved fuel efficiency by encouraging better driving habits.
Grundon expects to save around £220,000 annually in fuel costs once the technology is fully deployed. Chris Double, Regional Operations Manager, noted positive feedback from drivers during the trials. The AI Dash Cams, which provide instant feedback and acknowledge good performance, have been well-received.
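Taken at face value, the quoted figures imply a payback period of a little over three years on fuel savings alone. The quick calculation below uses only the numbers stated above and ignores any safety-related or insurance savings.

```python
# Back-of-envelope payback estimate using only the figures quoted above.
investment = 750_000          # £, total spend over three years
annual_fuel_saving = 220_000  # £, expected yearly fuel saving once fully deployed

payback_years = investment / annual_fuel_saving
print(f"Payback on fuel savings alone: about {payback_years:.1f} years")  # ~3.4 years
```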
The technology includes Dual-Facing AI Dash Cams and other cameras that monitor driver activity and connect to the existing 360-degree cameras. Drivers can also use the Samsara App to track their behaviour through a points-based system designed to reinforce safe driving habits and encourage good behaviour.
Philip van der Wilt, SVP and General Manager EMEA at Samsara, highlighted the measurable impact of the technology during the trials. He looks forward to a long-term partnership with Grundon to continue driving innovation and safety in their operations.