Microsoft sues hackers over AI security breach

Microsoft has taken legal action against a group accused of bypassing security measures in its Azure OpenAI Service. A lawsuit filed in December alleges that the unnamed defendants stole customer API keys to gain unauthorised access and generate content that violated Microsoft’s policies. The company claims the group used stolen credentials to develop hacking tools, including software named de3u, which allowed users to exploit OpenAI’s DALL-E image generator while evading content moderation filters.

An investigation found that the stolen API keys were used to operate an illicit hacking service. Microsoft alleges the group engaged in systematic credential theft, using custom-built software to process and route unauthorised requests through its cloud AI platform. The company has also taken steps to dismantle the group’s technical infrastructure, including seizing a website linked to the operation.

Court-authorised actions have enabled Microsoft to gather further evidence and disrupt the scheme. The company says additional security measures have been implemented to prevent similar breaches, though specific details were not disclosed. While the case unfolds, Microsoft remains focused on strengthening its AI security protocols.

Digital art website crippled by OpenAI bot scraping

Triplegangers was forced offline after a bot from OpenAI relentlessly scraped its website, hammering it like a distributed denial-of-service (DDoS) attack. The bot sent tens of thousands of server requests, attempting to download hundreds of thousands of detailed 3D images and descriptions from the company’s extensive database of digital human models.

The sudden spike in traffic crippled the Ukrainian company’s servers and left CEO Oleksandr Tomchuk grappling with an unexpected problem. Triplegangers, which sells digital assets to video game developers and 3D artists, discovered that OpenAI’s bot operated across hundreds of IP addresses to gather its data. Despite having terms of service that forbid such scraping, the company had not configured the robots.txt file needed to block the bot.

After days of disruption, Tomchuk implemented protective measures, updating the robots.txt file and using Cloudflare to block specific bots. However, he remains frustrated by the lack of transparency from OpenAI and the difficulty in determining exactly what data was taken. With rising costs and increased monitoring now necessary, he warns that other businesses remain vulnerable.
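Tomchuk’s first fix can be sketched in a few lines. This example assumes the crawler honours the Robots Exclusion Protocol and identifies itself as GPTBot, the user agent OpenAI publishes for its crawler:

```txt
# robots.txt, served from the site root (e.g. https://example.com/robots.txt)
# Block OpenAI's crawler site-wide.
User-agent: GPTBot
Disallow: /

# Leave all other crawlers unrestricted.
User-agent: *
Disallow:
```

Because compliance with robots.txt is voluntary, a directive like this only deters well-behaved crawlers; server-side blocking, such as the Cloudflare bot rules Tomchuk added, is what actually enforces it.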

Tomchuk criticised AI companies for placing the responsibility on small businesses to block unwanted scraping, comparing it to a digital shakedown. “They should be asking permission, not just scraping data,” he said, urging companies to take greater precautions against AI crawlers that can compromise their sites.

Regulators weigh in on Musk’s lawsuit against OpenAI and Microsoft

US antitrust regulators provided legal insights on Elon Musk’s lawsuit against OpenAI and Microsoft, alleging anticompetitive practices. While not taking a formal stance, the Federal Trade Commission (FTC) and Department of Justice (DOJ) highlighted key legal doctrines supporting Musk’s claims ahead of a court hearing in Oakland, California. Musk, a co-founder of OpenAI and now leading AI startup xAI, accuses OpenAI of enforcing restrictive agreements and sharing board members with Microsoft to stifle competition.

The lawsuit also claims OpenAI orchestrated an investor boycott against rivals. Regulators noted such boycotts are legally actionable, even if the alleged organiser isn’t directly involved. OpenAI has denied these allegations, labelling them baseless harassment. Meanwhile, the FTC is conducting a broader probe into AI partnerships, including those between Microsoft and OpenAI, to assess potential antitrust violations.

Microsoft declined to comment on the case, while OpenAI pointed to prior court filings refuting Musk’s claims. However, the FTC and DOJ stressed that even former board members, like Reid Hoffman, could retain sensitive competitive information, reinforcing Musk’s concerns about anticompetitive practices.

Musk’s legal team sees the regulators’ involvement as validation of the seriousness of the case, underscoring the heightened scrutiny around AI collaborations and their impact on competition.

DW Newsletter #194 – The rise of OpenAI and Sam Altman’s role in the AI and AGI revolution

Dear readers,

In November 2022, OpenAI launched ChatGPT, a product redefining AI and catapulting its CEO, Sam Altman, into global prominence. The once-quiet startup suddenly became a sensation, drawing over 100 million visitors within two months. Altman, a long-time advocate of artificial general intelligence (AGI), saw his vision materialise despite early scepticism and the challenges in establishing OpenAI. Today, OpenAI stands at the forefront of the AI industry, shaping the future of technology and society.

Altman’s journey with OpenAI began with bold ambitions to build AGI—a concept dismissed as fringe in 2014. By assembling a team of young, unconventional thinkers, OpenAI distinguished itself from other Silicon Valley ventures. Over the years, the company evolved from a nonprofit to a for-profit hybrid, adapting to secure resources for its ambitious goals. The launch of ChatGPT marked a turning point, rapidly scaling OpenAI’s user base and solidifying its status as a leader in AI innovation. Altman’s decisive leadership and relentless focus on scaling and improving its technology have positioned OpenAI as a trailblazer in the global AI race.

However, in late 2023, OpenAI’s board abruptly dismissed Altman as CEO, only to reinstate him days after internal pushback and public outcry. The episode underscored the challenges of managing a mission-driven company operating at the cutting edge of technology. Despite the turmoil, Altman emerged stronger, steering OpenAI through regulatory challenges and rapid growth while grappling with the societal implications of AGI.

The intersections of technology and politics became increasingly evident, with Altman playing a strategic role in fostering AI’s development under the Trump administration. Despite ideological differences, Altman donated to Trump’s inaugural fund, emphasising the importance of bipartisan cooperation in navigating the profound societal shifts AI will bring. He also expressed optimism that Elon Musk, despite his often unpredictable behaviour, would not misuse his growing political influence to undermine competitors like OpenAI.

Altman’s focus remains on ensuring the US leads in AI development, advocating for a streamlined regulatory framework to enable the construction of critical infrastructure such as data centres and power plants. OpenAI’s success, Altman argues, hinges not only on technological breakthroughs but also on policy and leadership that enable the country to maintain its edge in the AI race. As the Trump administration takes the reins, the stakes for balancing innovation, ethics, and governance have never been higher.

Related news:

Despite OpenAI’s ambitions, concerns remain over AI safety, with the company acknowledging it lacks solutions for controlling superintelligent systems.

In other news…

Oklahoma senator proposes Bitcoin Freedom Act

Oklahoma State Senator Dusty Deevers has introduced the Bitcoin Freedom Act, paving the way for residents and businesses to opt for Bitcoin as a means of payment.

Diplo Academy redefines diplomatic training with AI

Diplo Academy introduced a new era of diplomatic training in 2024, leveraging artificial intelligence to reshape teaching methodologies and expand its online course offerings.

Visit dig.watch now for other updates and topics!

Marko and the Digital Watch team


Highlights from the week of 3–10 January 2025

The latest update in smart glasses comes from Halliday, whose glasses project a miniature screen directly into your eye, offering real-time translations and notifications without disrupting conversations.

A Reuters survey found that several universities have reduced or ended their presence on X, following a decline in engagement.

Internal documents suggest TikTok was aware of dangers on its platform, revealing instances of minors being groomed for explicit acts and of criminal activities like money laundering occurring on the platform.

A surge in data-centre investment follows OpenAI’s 2022 launch of ChatGPT, which drove demand for specialised data centres due to the intense computing power required for AI technologies.

CEO Sam Altman admitted ChatGPT Pro’s pricing was not based on extensive research and was a personal decision.

Creators reliant on TikTok are bracing for potential disruptions, diversifying to platforms like Instagram and YouTube, yet many are taking a cautious approach until a decision is reached.

A hacker alleges they have stolen sensitive ICAO data, including personal information of individuals linked to the agency.

A new White House cybersecurity label will help consumers evaluate device security.

Crypto market momentum coincides with the transition to President-elect Donald Trump’s administration, which is anticipated to create a more crypto-friendly regulatory environment.

A report from 404 Media revealed that while only 14 requests were met from January to September, the number surged after October, affecting over 2,000 users.

Over $8 billion in market value was lost as quantum stocks dropped following a warning from Nvidia’s CEO.


Reading corner

www.diplomacy.edu

As Trump takes office, the tech world anticipates a blend of continuity and change in policy. While historically, the US has favoured a private-sector-driven tech landscape, Trump is expected to maintain this approach, resisting international regulations that could hinder US companies.

www.diplomacy.edu

In 2024, Diplo Academy advanced its online courses with AI integration, introducing innovative teaching methodologies and practical tools to equip diplomats with skills for navigating the evolving challenges of the AI era.

ChatGPT Pro costs more to run than expected

OpenAI CEO Sam Altman has revealed that the company is losing money on its $200-per-month ChatGPT Pro plan due to unexpectedly high usage. The plan, introduced last year, provides access to an advanced AI model and fewer restrictions on OpenAI’s tools. Altman admitted that the pricing was not based on a rigorous study but was instead a personal decision.

Despite raising around $20 billion, OpenAI remains unprofitable, with estimated losses of $5 billion last year. The company is considering price increases or usage-based fees to improve financial stability. Altman also acknowledged that OpenAI requires more investment than initially expected.

The company remains optimistic about its future revenue, projecting $11.6 billion in 2025 and aiming for $100 billion by 2029. As OpenAI undergoes corporate restructuring, attracting new investors and refining its pricing strategy will be key to long-term profitability.

OpenAI confident in AGI but faces safety concerns

OpenAI CEO Sam Altman has stated that the company believes it knows how to build AGI and is now turning its focus towards developing superintelligence. He argues that advanced AI could significantly boost scientific discovery and economic growth. While AGI is often defined as AI that outperforms humans in most tasks, OpenAI and Microsoft also use a financial benchmark—$100 billion in profits—as a key measure.

Despite Altman’s optimism, today’s AI systems still struggle with accuracy and reliability. OpenAI has previously acknowledged that transitioning to a world with superintelligence is far from certain, and controlling such systems remains an unsolved challenge. The company has, however, recently disbanded key safety teams, leading to concerns about its priorities as it seeks further investment.

Altman remains confident that AI will soon make a significant impact on businesses, suggesting that AI agents could enter the workforce and reshape industries in the near future. He insists that OpenAI continues to balance innovation with safety, despite growing scepticism from former staff and industry critics.

OpenAI delays Media Manager amid creator backlash

In May, OpenAI announced plans for ‘Media Manager,’ a tool to allow creators to control how their content is used in AI training, aiming to address intellectual property (IP) concerns. The project remains unfinished seven months later, with critics claiming it was never prioritised internally. The tool was intended to identify copyrighted text, images, audio, and video, allowing creators to include or exclude their work from OpenAI’s training datasets. However, its future remains uncertain, with no updates since August and missed deadlines.

The delay comes amidst growing backlash from creators and a wave of lawsuits against OpenAI. Plaintiffs, including prominent authors and artists, allege that the company trained its AI models on their works without authorisation. While OpenAI provides ad hoc opt-out mechanisms, critics argue these measures are cumbersome and inadequate.

Media Manager was seen as a potential solution, but experts doubt its effectiveness in addressing complex legal and ethical challenges, including global variations in copyright law and the burden placed on creators to protect their works. OpenAI continues to assert that its AI models transform, rather than replicate, copyrighted material, defending itself under ‘fair use’ protections.

While the company has implemented filters to minimise IP conflicts, the absence of a comprehensive tool like Media Manager leaves unresolved questions about compliance and compensation. As OpenAI battles legal challenges, the effectiveness and impact of Media Manager, if it ever launches, remain uncertain in the face of an evolving IP landscape.

Tech leaders embrace nuclear energy

Prominent figures in technology are heavily investing in nuclear energy, viewing it as crucial for future innovation. OpenAI’s Sam Altman and Microsoft co-founder Bill Gates are spearheading initiatives in advanced nuclear technology, with Altman chairing Oklo, a company developing sustainable nuclear reactors.

Data centres, essential for AI and cloud technologies, have seen electricity demands surge by 50% since 2020, now accounting for 4% of US energy use. Projections indicate this figure could rise to 9% by 2030, emphasising the need for scalable, carbon-free energy solutions. Nuclear power offers a consistent energy supply, unlike solar or wind, making it an attractive choice.

Microsoft has committed to reviving the Three Mile Island reactor by 2028, aiming to meet the energy needs of its growing AI operations. Experts, however, caution that tech-driven nuclear investments may prioritise corporate demands over broader public benefits.

Oklo and similar ventures highlight the increasing convergence of technology and energy, as industry leaders strive to support AI advancements sustainably. The debate continues on whether these moves truly serve societal needs or primarily benefit the tech sector.

Plans for major structural change announced by OpenAI

OpenAI has unveiled plans to transition its for-profit arm into a Delaware-based public benefit corporation (PBC). The move aims to attract substantial investment as competition to develop advanced AI intensifies. The proposed structure intends to prioritise societal interests alongside shareholder value, setting the company apart from traditional corporate models.

The shift marks a significant step for OpenAI, which started as a nonprofit in 2015 before establishing a for-profit division to fund high-cost AI development. Its latest funding round, valued at $157 billion, necessitated the structural change to eliminate a profit cap for investors, enabling greater financial backing. The nonprofit will retain a substantial stake in the restructured company, ensuring alignment with its original mission.

OpenAI faces criticism and legal challenges over the move. Elon Musk, a co-founder and vocal critic, has filed a lawsuit claiming the changes prioritise profit over public interest. Meta Platforms has also urged regulatory intervention. Legal experts suggest the PBC status offers limited enforcement of its mission-focused commitments, relying on shareholder influence to maintain the balance between profit and purpose.

By adopting this structure, OpenAI aims to align with competitors like Anthropic and xAI, which have similarly raised billions in funding. Analysts view the move as essential for securing the resources needed to remain a leader in the AI sector, though significant hurdles remain.

AGI linked to profits in Microsoft and OpenAI agreement

OpenAI and Microsoft have reportedly agreed on a financial benchmark to define AGI. According to ‘The Information’, AGI will be achieved only when OpenAI’s AI systems generate profits exceeding $100 billion. This definition departs from traditional technical interpretations of AGI and suggests the milestone is many years away.

Despite growing speculation about the progress of models like OpenAI’s o3, the company is currently unprofitable. It expects significant losses this year and predicts profitability only by 2029. The high computational costs associated with advanced AI models pose additional challenges to meeting the ambitious profit target.

Microsoft’s access to OpenAI’s technology hinges on this definition. Under their agreement, Microsoft retains access to OpenAI’s models until AGI is achieved. This provision has sparked discussions, as some believe OpenAI could prematurely declare AGI to gain strategic advantage, though the profit-centric definition may delay such claims.

Experts remain divided on whether the o3 model represents meaningful progress toward AGI. Its performance gains are tempered by substantial expenses, underscoring the tension between innovation and commercial viability in AI development.