The founder and former CEO of GameOn, an AI startup in San Francisco, has been indicted for orchestrating a six-year-long fraud scheme that allegedly defrauded investors and the company out of over $60 million. Alexander Beckman, 41, faces 23 criminal charges, while his wife, Valerie Lau Beckman, 38, who worked as a lawyer for the company, is charged with 16 counts, including obstruction. Both have pleaded not guilty. The US Securities and Exchange Commission has also filed civil charges against the couple.
Beckman is accused of deceiving investors by inflating the company's financial status, including fabricating fake customer relationships, overstating revenue, and creating fraudulent bank statements and audit reports. He allegedly went as far as impersonating individuals to share false information. Lau Beckman, meanwhile, allegedly assisted her husband by providing authentic audit reports that were used to fabricate false documents and by deleting critical files after an investigation began.
The Beckmans are also accused of misusing investor funds for personal expenses, including a luxury home, vehicles, and the costs of their wedding. The fraudulent activity reportedly continued until Beckman's resignation as CEO in July 2024. GameOn, which has since been rebranded as On Platform, eventually admitted to the financial discrepancies and laid off most of its employees.
The case underscores the need for integrity in the tech industry, particularly within startups, as federal prosecutors emphasise that fraud cannot fuel innovation.
The UK's Competition and Markets Authority has appointed former Amazon executive Doug Gurr as its interim chairman, signalling the government's push to boost economic growth and support the tech sector. Gurr, who brings extensive experience from Amazon, where he led the company's UK and China operations, will guide the CMA as it fosters competition in industries such as cloud services and AI. The move aligns with the UK's broader strategy to streamline regulations and position itself as a pro-business nation.
Gurr’s appointment comes amid a critical phase in the CMA’s investigation into the domestic cloud services market, which has been scrutinising Amazon’s dominant position. While Gurr will serve in an interim role, the government hopes his commercial background will help drive pro-business decisions that stimulate growth. This marks a shift from the previous chair, Marcus Bokkerink, whose tenure was shorter than expected, possibly due to dissatisfaction among government officials.
Industry experts note that Gurr's appointment is timely, as the CMA is stepping up its oversight of Big Tech, particularly with its expanded powers under the Digital Markets, Competition and Consumers Act. Critics and lobby groups such as the Open Cloud Coalition are watching closely to see how the CMA handles its regulatory responsibilities, especially in the cloud services sector, where Amazon holds a significant market share, and are urging it to maintain a strong stance on promoting fairness and competition.
As the CMA navigates its investigations and enforces new rules, stakeholders are keen to see how Gurr’s leadership will shape the future of competition regulation in the UK. The outcome could have far-reaching implications for businesses and consumers, particularly in the rapidly evolving tech landscape.
Marcus Bokkerink has been removed from his position as chair of the Competition and Markets Authority (CMA) by the UK government, marking a shift in regulatory practices aimed at boosting economic growth. The CMA, a key agency overseeing mergers and competition, notably demonstrated its regulatory power by initially blocking the high-profile Microsoft-Activision Blizzard merger. Bokkerink, appointed in 2022, was expected to serve a five-year term but will now step down as part of the government's effort to realign regulatory bodies with its economic priorities.
This decision reflects a broader governmental push to reduce barriers to economic expansion. Prime Minister Keir Starmer, Chancellor Rachel Reeves, and Business Secretary Jonathan Reynolds recently sent a letter to several regulators, including the CMA, urging them to prioritise growth. Government insiders have suggested that the move signals a serious commitment to reshaping the regulatory environment to encourage investment and economic development.
The removal of Bokkerink, a former senior partner at Boston Consulting Group, comes as the government continues to focus on attracting international investment, with key figures like Reeves and Reynolds attending the World Economic Forum in Davos to further this goal. The government’s efforts to reshape regulatory culture align with its broader strategy to make economic growth the country’s top priority.
India’s National Human Rights Commission (NHRC) has rebuked labour officials for inadequately investigating claims of employment discrimination at Foxconn’s iPhone manufacturing plant in Tamil Nadu. The commission called for a thorough re-examination after a Reuters investigation revealed that Foxconn systematically excluded married women from assembly line jobs, relaxing the rule only during high-production periods.
Labour officials, who visited the Foxconn plant in July, reported that 6.7% of its 33,360 female workers were married, but failed to confirm whether any of them worked on the assembly line. Federal investigators also relied on employee testimonies, finding no wage or promotion bias, but neglected to scrutinise recruitment records. The NHRC criticised these findings as superficial, stating that they failed to address the alleged discriminatory hiring practices effectively.
Foxconn and Apple, both key players in India's electronics manufacturing push, did not respond to inquiries about the NHRC's concerns. While Foxconn previously instructed recruiters to remove discriminatory job criteria, the NHRC has ordered a fresh investigation into the matter. The statutory body, which holds civil court-like authority, continues to push for accountability in safeguarding workers' rights.
ByteDance, the Chinese tech giant behind TikTok, has allocated over 150 billion yuan ($20.64 billion) for capital expenditure this year, with a significant focus on AI, according to sources familiar with the matter. About half of the investment will support overseas AI infrastructure, including data centres and networking equipment. Beneficiaries of this spending are expected to include chipmakers Huawei, Cambricon, and US supplier Nvidia, although ByteDance has denied the accuracy of the claims.
The investment aims to solidify ByteDance’s AI leadership in China, where it has launched over 15 standalone AI applications, such as the popular chatbot Doubao, which boasts 75 million monthly active users. Its international counterparts include apps like Cici and Dreamina, reflecting ByteDance’s strategy to adapt its AI offerings globally. The company also recently updated its flagship AI model, Doubao, to rival reasoning models like those developed by Microsoft-backed OpenAI.
ByteDance's international spending aligns with its efforts to expand AI capabilities abroad amid challenges such as the uncertain future of TikTok in the United States. While ByteDance's $20 billion plan is substantial, it remains modest compared with the AI investments of US tech giants like Google and Microsoft, which spent $50 billion and $55.7 billion respectively on AI infrastructure in the past year. The spending will also bolster ByteDance's partnerships with suppliers such as Nvidia, from which it has procured AI chips customised for the Chinese market to comply with US export restrictions.
OpenAI has told an Indian court that removing training data used for its ChatGPT service would conflict with its legal obligations in the United States. The company, backed by Microsoft, is defending a copyright lawsuit filed by Indian news agency ANI, which accuses OpenAI of using its content without permission and demands the deletion of ANI’s data from ChatGPT’s memory.
In a January 10 filing, OpenAI argued that Indian courts lack jurisdiction as the company has no physical presence or data servers in India. It also emphasised its legal obligation in the US to preserve training data while litigation is ongoing. OpenAI denied wrongdoing, asserting its systems make fair use of publicly available data, a stance it has maintained in similar copyright disputes globally.
ANI insists the Delhi court has the authority to rule on the case, citing concerns over unfair competition and alleging that ChatGPT reproduces its content verbatim. OpenAI, however, countered that ANI manipulated prompts to elicit such responses. The court is set to hear the case on January 28, marking a key moment in India’s scrutiny of AI and copyright law.
Meta Platforms, the parent company of Facebook and Instagram, is once again under fire from the European Consumer Organisation (BEUC) over its ad-free subscription service. Introduced in 2023, the fee-based option lets European users opt out of personalised ads, and Meta subsequently cut its price by 40%. However, BEUC claims these changes are merely superficial and fail to address deeper concerns about fairness and compliance with EU consumer and privacy laws.
BEUC’s Director General, Agustin Reyna, criticised Meta for not providing users with a fair choice, alleging that the company still pressures users into accepting its behavioural advertising system. Reyna called on consumer protection authorities and the European Commission to investigate Meta’s practices urgently, emphasising the need for decisive action to safeguard users’ rights. The consumer group also accused Meta of misleading practices, unclear terms, and failing to minimise data collection while restricting services for users who decline data processing.
In response, a Meta spokesperson defended the company’s approach, arguing that its November 2023 updates go beyond EU regulatory requirements. Despite these assurances, EU antitrust regulators have raised concerns, accusing Meta of breaching the Digital Markets Act. They claim the ad-free service forces users into a binary choice, sparking broader concerns about how the tech giant balances profit with consumer protection.
As pressure mounts, Meta faces growing scrutiny over its compliance with EU laws, with regulators weighing potential measures to address BEUC’s allegations and ensure fair treatment for European users.
President Donald Trump unveiled a $500 billion private-sector initiative on Tuesday aimed at transforming AI infrastructure in the US. The joint venture, called Stargate, brings together OpenAI, SoftBank, and Oracle to build 20 massive data centres and create over 100,000 jobs. Backers have committed $100 billion for immediate deployment, with the remainder spread over the next four years.
The announcement, made at the White House with SoftBank CEO Masayoshi Son, OpenAI CEO Sam Altman, and Oracle Chairman Larry Ellison in attendance, underscores America’s push to lead in AI development. Ellison revealed that the first data centres, each half a million square feet, are already under construction in Texas. These facilities aim to power advanced AI applications, including analysing electronic health records to assist doctors.
Trump attributed the project's launch to his leadership, with executives expressing their support. “We wouldn’t have decided to do this unless you won,” Son said. However, the ambitious project arrives amid concerns over the rising energy demands of AI data centres. Trump promised to ease the way for energy production to power these facilities, even as experts warn of potential power shortfalls across the country in the coming decade.
The announcement comes against a backdrop of surging AI investments since OpenAI’s release of ChatGPT in 2022, which sparked widespread adoption of AI across industries. Oracle and other tech stocks, including Nvidia and Dell, climbed on the news, reflecting market enthusiasm for the Stargate project.
President Donald Trump's executive order delaying the enforcement of a US TikTok ban has created new legal uncertainties for the platform and its service providers, including Google and Apple. Signed on Monday, the order directs a 75-day pause on enforcing the law, which requires TikTok's Chinese parent company, ByteDance, to divest the app over national security concerns.
While the order directs the Justice Department to halt enforcement and assures app distributors of no liability during the review period, legal experts warn that the promise offers little protection. Courts do not consider executive orders binding, and Trump could alter or selectively enforce the policy at any time, potentially exposing companies to massive penalties.
The ban, passed by Congress and upheld by the Supreme Court days before Trump’s order, imposes steep fines of $5,000 per user for violations, making compliance a high-stakes gamble for service providers. Critics argue that the legal ambiguity could also open companies to shareholder lawsuits if they ignore the ban based solely on Trump’s directive.
Trump’s move has reignited tensions between the White House and lawmakers, who overwhelmingly supported the ban over fears of Chinese influence. The coming weeks may bring further legal battles and political manoeuvring as the future of TikTok in the US hangs in the balance.
The International Telecommunication Union (ITU) has launched the AI Skills Coalition, a global initiative backed by 27 organisations, including Amazon Web Services, Microsoft, and Cognizant, to bridge the AI skills gap in developing countries. The coalition will provide accessible education and capacity-building in areas like generative AI, machine learning, and AI for sustainable development through a new online platform set to launch in March 2025.
The platform will offer free resources such as self-paced courses, webinars, in-person workshops, hybrid programmes, and a comprehensive digital library of AI materials. In collaboration with the United Nations Development Programme (UNDP), the coalition will leverage UNDP's global presence to ensure an inclusive, global approach to AI training, extending beyond the efforts of companies like Google, AWS, and Microsoft.
The initiative will also focus on underrepresented groups, including women, youth, and persons with disabilities, aiming to foster diversity in AI development. Specialised training programmes for government officials will address AI governance, ethics, and policymaking, tailored to the needs of developing countries and least developed countries (LDCs).
The AI Skills Coalition’s efforts to deliver AI education and capacity-building aim to ensure that the benefits of AI are shared more equitably, addressing global inequalities in AI knowledge. By equipping the future workforce with critical skills and empowering policymakers to harness AI responsibly, the coalition seeks to support sustainable development and help countries navigate the unique challenges they face in the AI era.