AI startup Etched raises $120M to produce specialised chip

Etched, an AI startup based in San Francisco, announced that it has secured $120 million to create a specialised chip tailored to running transformer models, the type of AI model that underpins OpenAI’s ChatGPT and Google’s Gemini.

Unlike Nvidia, which dominates the market for server AI chips with a roughly 80% share, Etched aims to create a specialised processor optimised for inference, that is, generating content and responses, and tailored specifically to transformer-based AI models. The company’s CEO, Gavin Uberti, sees this as a strategic bet on the longevity of transformer models in the AI landscape.

In Etched’s funding round, key investors include former PayPal CEO Peter Thiel and Replit CEO Amjad Masad. The startup has also partnered with Taiwan Semiconductor Manufacturing Co. (TSMC) to fabricate its chips. Uberti highlighted the importance of the funding to cover the costs associated with sending chip designs to TSMC and manufacturing the chips, a process known as ‘taping out.’

While Etched did not disclose its current valuation, its $5.4-million seed-funding round in March 2023 valued the company at $34 million. The success of its specialised chip could position Etched as an important player in the AI chip market, provided transformer-based AI models continue to be prevalent in the industry.

Privacy concerns behind Apple abandoning Meta partnership, report says

In recent days, the landscape of AI integration on Apple’s devices has become a topic of discussion. Initially, it was reported that a potential partnership could involve Apple’s cooperation with Meta’s AI services. However, ‘people with knowledge of the matter’ told Bloomberg this is not the case, explaining that Apple had explored a potential partnership in March of this year, before settling on OpenAI for part of the recently announced Apple Intelligence services. Reportedly, this partnership was abandoned due to Apple’s privacy concerns. Apple has repeatedly criticised Meta’s privacy practices, making a collaboration between the two tech giants potentially damaging to Apple’s image as a privacy-focused company.

The timing of these discussions coincides with Meta facing privacy concerns over its new AI tools in the European Union. Despite this, Meta recently rolled out these same tools in India.

Earlier this month, Apple launched its own suite of AI features under the Apple Intelligence brand, including integration in Siri. Apple partnered with OpenAI to allow iPhone users to utilise ChatGPT for specific queries. The company says Siri will always ask for your permission before connecting to ChatGPT, and give you the choice to provide it with data, like a photo, if needed for your query. “From a privacy point of view, you’re always in control and have total transparency,” said Apple senior vice president Craig Federighi. That stance underpins Apple’s strategy as it demarcates itself in the world of AI integration, balancing innovation with its core principle of user privacy.

Apple is not depending exclusively on one AI provider though. At the Worldwide Developers Conference (WWDC), it announced its willingness to work with Google to integrate the Gemini AI model into its ecosystem. They have already partnered to train Apple’s AI. The extent of this integration remains to be seen, but it indicates Apple’s strategy of diversifying its AI partnerships.

Chinese AI companies respond to OpenAI restrictions

Chinese AI companies are swiftly responding to reports that OpenAI intends to restrict access to its technology in certain regions, including China. OpenAI, the creator of ChatGPT, is reportedly planning to block access to its API for entities in China and other countries. While ChatGPT is not directly available in mainland China, many Chinese startups have used OpenAI’s API platform to develop their applications. Users in China have received emails warning about restrictions, with measures set to take effect from 9 July.

In light of these developments, Chinese tech giants like Baidu and Alibaba Cloud are stepping in to attract users affected by OpenAI’s restrictions. Baidu announced an ‘Inclusive Program,’ offering free migration to its Ernie platform for new users and additional Ernie 3.5 flagship model tokens to match their OpenAI usage. Similarly, Alibaba Cloud provides free tokens and migration services for OpenAI API users through its AI platform, offering competitive pricing compared to GPT-4.

Zhipu AI, another prominent player in China’s AI sector, has also announced a ‘Special Migration Program’ for OpenAI API users. The company emphasises its GLM model as a benchmark against OpenAI’s ecosystem, highlighting its self-developed technology for security and controllability. Over the past year, numerous Chinese companies have launched chatbots powered by their proprietary AI models, indicating a growing trend towards domestic AI development and innovation.

Italian watchdog tests AI for market oversight

Italy’s financial watchdog, Consob, has begun experimenting with AI to enhance its oversight capabilities, particularly in the initial review of listing prospectuses and the detection of insider trading. According to Consob, these AI algorithms aim to swiftly identify potential instances of insider trading, which traditionally requires significantly more time when conducted manually.

The agency reported that its AI algorithms can detect errors in just three seconds, a task typically taking a human analyst at least 20 minutes. These efforts were part of testing conducted last year using prototypes developed in collaboration with Scuola Normale Superiore University in Pisa, alongside an additional model developed independently.

Consob views the integration of AI as pivotal in enhancing the effectiveness of regulatory controls to detect financial misconduct. The next phase involves transitioning from prototype testing to fully incorporating AI into Consob’s regular operational procedures. That initiative mirrors similar efforts by financial regulators globally who are increasingly leveraging AI to bolster consumer protection and regulatory oversight.

For instance, in the United Kingdom, the Financial Conduct Authority (FCA) has utilised AI technologies to combat online scams and protect consumers. That trend underscores a broader international movement within regulatory bodies to harness AI’s potential in safeguarding market integrity and enhancing regulatory efficiency.

EvolutionaryScale secures $142 million to enhance AI applications in biology

AI startup EvolutionaryScale has secured $142 million in seed funding, led by investors including Nat Friedman, Daniel Gross, and Lux Capital. Both Amazon Web Services (AWS) and NVIDIA’s venture capital arm participated in this substantial funding round. Lux Capital’s co-founder Josh Wolfe likened EvolutionaryScale’s achievements to a ‘ChatGPT moment for biology,’ highlighting their development of a groundbreaking large language model capable of designing new proteins and biological systems.

EvolutionaryScale aims to deploy its AI across diverse applications, from accelerating drug discovery processes to engineering microbes that can degrade plastic pollution. The company’s chief scientist, Alex Rives, emphasised the growing significance of AI in creating innovative biological solutions. That aligns with broader industry trends where AI is increasingly pivotal in advancing biotech and pharmaceutical research.

However, concerns have been raised regarding the potential misuse of generative AI in bioweapons development. Despite these ethical considerations, EvolutionaryScale plans to use its newly secured funding to train its AI models further and expand its team for collaborations within the biotech sector. They have also released the ESM3 models, with the smaller variant open-sourced for non-commercial research, while AWS and NVIDIA will offer the larger ESM3 commercially.

Why does it matter?

One notable achievement highlighted by EvolutionaryScale involves engineering a novel fluorescent protein using their ESM3 model. That protein represents a significant departure from naturally occurring variants, a change that would typically take nature millions of years to evolve. The company’s advancements underscore the transformative potential of AI in pushing the boundaries of biological innovation.

US record labels sue Suno and Uncharted Labs for copyright infringement

Major US record labels are suing AI music startups Suno and Uncharted Labs, accusing them of mass copyright infringement. The lawsuits, filed in federal courts in Massachusetts and New York, represent content creators’ efforts to challenge the use of copyrighted works in the training and operation of generative AI systems, arguing that it does not constitute ‘fair use.’

The plaintiffs, including Sony Music Entertainment, UMG, and Warner Records, seek a declaration of copyright infringement, an injunction to prevent further violations, and monetary damages. Mitch Glazier, CEO of the Recording Industry Association of America, emphasised the industry’s willingness to collaborate with responsible AI developers but stressed the need for cooperation from both sides to succeed.

Suno’s CEO, Mikey Shulman, defended the technology, claiming it generates new outputs without memorising or replicating existing content and stating that prompts referencing specific artists are not allowed.

The lawsuit adds to the growing number of legal challenges from various content creators against generative AI systems, which argue that both the training and output of these systems violate copyright laws. The outcome of these cases could set significant legal precedents for AI and copyright.

Meta faces backlash from photographers over mislabeling real photos

Meta faced criticism from photographers after its ‘Made with AI’ label was incorrectly applied to genuine photos. Notably, a photo taken by former White House photographer Pete Souza and an Instagram photo of the Kolkata Knight Riders’ IPL victory were wrongly marked as AI-generated. Photographers have reported that even minor edits using tools like Adobe’s Generative Fill can trigger Meta’s algorithm to label images as AI-generated.

Pete Souza and others have expressed frustration at being unable to remove these labels, suspecting that specific editing processes may be causing the issue. Meta’s labelling approach is also affecting photos with minimal AI modifications, leading to concerns about the accuracy and fairness of such labels. Photographer Noah Kalina argued that if minor retouching counts as AI-generated, the term loses its meaning and authenticity.

In response, Meta stated it is reviewing feedback to ensure its labels accurately reflect the amount of AI used in images. The company relies on industry-standard indicators and collaborates with other companies to refine its process. Meta’s labelling initiative, introduced to combat misinformation ahead of election season, involves tagging AI-generated content from major tech firms. However, the exact triggers for the “Made with AI” label remain undisclosed.

UAE government partners with Rittal for AI development

During the 2024 AI Retreat, the Artificial Intelligence, Digital Economy, and Remote Work Applications Office of the United Arab Emirates entered a strategic partnership with Rittal FZE, a division of Rittal GmbH & Co. KG, based in Herborn, Germany. Rittal, a renowned provider of IT infrastructure solutions, power distribution, climate management, and industrial enclosures, is set to collaborate with the UAE to enhance the implementation of AI technologies and related training. The partnership is intended to help shape the future of AI in the UAE.

The Executive Director of the AI, Digital Economy, and Remote Work Applications Office, Saqr Binghalib, underscored that the UAE government prioritises enhancing skills in digital advancement. He stressed that such efforts are vital for enhancing the country’s standing as a leading global AI hub and fostering stronger collaborations with the private sector. Rittal is committed to advancing technology and training through AI and other smart applications, especially in support of robotics and Industry 4.0 programming. The AI Retreat in 2024 saw the participation of over 2,000 decision-makers, experts, and representatives from the public and private sectors.

Google enhances Gmail with new AI features

Google is enhancing Gmail with new AI features designed to streamline email management. A new Gemini side panel is being introduced for the web, which is capable of summarising email threads and drafting new emails. Users will receive proactive prompts and can ask freeform questions, utilising Google’s advanced models like Gemini 1.5 Pro. The mobile Gmail app will also feature Gemini’s ability to summarise threads.

However, these upgrades will only be accessible to paid Gemini users. To benefit from these features, one must be a Google Workspace customer with a Gemini Business or Enterprise add-on, a Gemini Education or Education Premium subscriber, or a Google One AI Premium member. Despite their potential usefulness, it’s advised not to depend entirely on these AI tools for critical work, as AI can sometimes produce inaccurate information.

In addition to Gmail, Google is incorporating Gemini features into the side panels of Docs, Sheets, Slides, and Drive. The rollout follows Google’s earlier promises at the I/O conference. Further AI enhancements, including ‘Contextual Smart Reply,’ are expected to arrive for Gmail soon.

Geologists voice concerns about potential censorship and bias in Chinese AI chatbot

Geologists are expressing concerns about potential Chinese censorship and bias in GeoGPT, a new AI chatbot backed by the International Union of Geological Sciences (IUGS). Developed under the Deep-time Digital Earth (DDE) program, which is heavily funded by China, GeoGPT aims to assist geoscientists, particularly in developing countries, by providing access to extensive geological data. However, issues around transparency and censorship have been highlighted by experts, raising questions about the chatbot’s reliability.

Critics like Prof. Paul Cleverley have pointed out potential censorship and lack of transparency in GeoGPT’s responses. Although DDE representatives claim that the chatbot’s information is purely geoscientific and free from state influence, tests with its underlying AI, Qwen, developed by Alibaba, suggest that certain sensitive questions may be avoided or answered inadequately. That contrasts with responses from other AI models like ChatGPT, which provide more direct information on similar queries.

Further concerns are raised about the involvement of Chinese funding and the potential for biased data usage. Geoscientific research, which includes valuable information about natural resources, could be strategically filtered. Additionally, the terms of use for GeoGPT prohibit generating content that undermines national security or incites subversion, aligning with Chinese laws, which may influence the chatbot’s outputs.

The IUGS president, John Ludden, has stated that GeoGPT’s database will be made public once appropriate governance is ensured. However, with the project being predominantly funded by Chinese sources, geoscientists remain sceptical about the impartiality and transparency of GeoGPT’s data and responses.