AI is forecast to add a cumulative $19.9 trillion to the global economy by 2030, according to a recent IDC study. This growth includes direct revenue from AI companies and investments in infrastructure. By that year, AI-related activities could contribute 3.5% to global GDP.
IDC reports AI spending will involve direct, indirect, and induced categories. Direct spending includes revenue from AI companies and their investment in hardware, while indirect spending covers the construction of data centres and related hiring. Induced spending, meanwhile, represents the broader economic impact of AI advancements.
Every dollar invested in business-related AI solutions in 2030 is expected to generate $4.60 for the global economy. However, IDC’s analysis does not cover potential changes in jobs or wages, which many believe AI adoption could affect.
A survey from IDC revealed that 48% of workers expect part of their roles to be automated within two years. While partial automation is a widespread concern, full automation is not: only 3% of respondents expect their jobs to be automated completely.
The International Telecommunication Union (ITU) recently hosted the Digital Skills Forum in Manama, Bahrain, addressing the pressing need for digital skills in today’s technology-driven society. With nearly 700 participants from 44 countries, the forum emphasised urgent calls to action aimed at bridging the digital skills gap that affects billions around the globe.
‘Digital skills have the power to change lives,’ asserted Doreen Bogdan-Martin, ITU Secretary-General, highlighting the union’s dedication to fostering an inclusive digital society. In response to this challenge, ITU introduced the ‘Digital Skills Toolkit 2024,’ a comprehensive resource to support policymakers and stakeholders in crafting effective national strategies to close digital skills gaps.
That toolkit seeks to empower diverse sectors, including private enterprises and academic institutions, by providing essential insights and resources within an ever-evolving technological landscape. Furthermore, the forum underscored the importance of lifelong learning and continuous upskilling, particularly in advanced fields such as AI and cybersecurity. ‘Addressing the digital skills gap requires strong partnerships and a commitment to investing in digital education,’ emphasised Cosmas Luckyson Zavazava, Director of ITU’s Telecommunication Development Bureau.
Bahrain’s leadership in promoting digital skills was prominently featured, reflecting its dedication to international cooperation and innovation. Young entrepreneurs showcased their innovative approaches to digital education, demonstrating the transformative potential of technology in shaping the future.
When planning his summer trip to Amsterdam and Ireland, Jason Brown opted for ChatGPT over traditional travel resources. The founder of People Movers used the AI tool to design a detailed itinerary for his family, outlining activities in Dublin and Galway. He described the experience as ‘fantastic,’ noting how quickly ChatGPT generated organised suggestions for each day. While he implemented many of the AI’s recommendations, he also appreciated personal connections for uncovering local treasures.
The growing influence of generative AI in travel planning is clear, with tools like Google’s Gemini and Microsoft’s Copilot becoming increasingly popular. A recent survey found that one in ten Britons has turned to AI for travel arrangements, with many showing interest in using it again. However, challenges persist, as many users reported receiving generic or inaccurate information. Experts stress the need to verify AI-generated content with trusted sources, such as residents or travel agents, to ensure accuracy.
Sardar Bali, co-founder of the AI travel planner Just Ask Layla, stresses the need for accuracy in AI-generated content. His team uses a two-step verification process to enhance reliability, though he admits that errors can still happen. Meanwhile, major companies like Expedia are incorporating AI into their services to simplify complex travel planning by offering personalised suggestions.
However, not all experiences with AI in travel planning have been positive. Freelance writer Rebecca Crowe faced challenges with AI-generated itineraries that were often impractical and outdated, especially when looking for gluten-free dining options. She recommends using AI mainly for inspiration, while also cross-referencing information with trusted blogs and travel guides to ensure accuracy and save time.
Generative AI is significantly more energy-intensive than traditional search engines, according to researcher Sasha Luccioni, who has raised concerns about the environmental impact of the technology. Generating new information requires vast computing power and energy, particularly for models like ChatGPT, which rely on extensive data training.
The AI and cryptocurrency sectors consumed nearly 460 terawatt hours of electricity in 2022, around two percent of global production, according to the International Energy Agency. Luccioni, a leading expert on AI’s climate impact, has developed tools to quantify the carbon footprint of AI technologies, helping developers make informed decisions.
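The kind of footprint accounting described can be illustrated with a back-of-the-envelope calculation: energy use is power draw multiplied by runtime and a data-centre overhead factor (PUE), and emissions follow from the local grid’s carbon intensity. The sketch below is purely illustrative; the function name and all figures are hypothetical, not taken from Luccioni’s tools or IEA data.

```python
# Illustrative estimate of an AI workload's carbon footprint:
#   energy (kWh)   = power draw (kW) x hours x PUE (facility overhead)
#   emissions (kg) = energy (kWh) x grid carbon intensity (kg CO2e per kWh)
# All numbers below are hypothetical placeholders for the sake of example.

def estimate_emissions_kg(power_kw: float, hours: float,
                          pue: float = 1.5,
                          grid_kg_co2_per_kwh: float = 0.4) -> float:
    """Return an estimated CO2-equivalent footprint in kilograms."""
    energy_kwh = power_kw * hours * pue            # facility-level energy use
    return energy_kwh * grid_kg_co2_per_kwh        # convert energy to emissions

# Example: a single 0.3 kW accelerator running for 100 hours.
print(round(estimate_emissions_kg(0.3, 100), 1))   # 18.0
```

Varying the PUE or grid-intensity inputs shows why the same model can have very different footprints depending on where and how it is run.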
Efforts to mitigate the environmental consequences of AI are underway. Luccioni is working on a certification system to rate the energy efficiency of AI models, aiming to encourage more sustainable practices. Transparency from tech giants like Google and OpenAI is essential, as their greenhouse gas emissions have surged due to AI development.
The solution, Luccioni argues, lies in a combination of government legislation, increased transparency, and better public understanding of AI’s limitations and environmental costs. She advocates for ‘energy sobriety’ by using AI tools more judiciously and making environmentally conscious decisions.
Samsung has rolled out its One UI 6.1.1 update for the Galaxy Tab S8 series in South Korea. The update, initially available for Galaxy smartphones, introduces new Galaxy AI features and various improvements to One UI. The software upgrade applies to the Galaxy Tab S8, S8+, and S8 Ultra, with firmware versions X700XXU8CXHB, X800XXU8CXHB, and X900XXU8CXHB, respectively.
The update is significant, with a download size of over 2.8GB. The Galaxy AI features included in the upgrade were previously seen on the Galaxy Z Flip 6 and Galaxy Z Fold 6. Besides the AI enhancements, users will also experience improved Samsung stock apps and refined One UI functionality.
Samsung tablet users in South Korea can now install the update by heading to their device settings and manually downloading the software. For Galaxy Tab S8 series owners outside South Korea, the update is expected to roll out soon across various regions.
Elon Musk’s social media platform, X, is taking steps to comply with Brazil’s Supreme Court in an effort to lift its ban in the country. The platform was banned in Brazil in August for failing to moderate hate speech and meet court orders. The court had ordered the company to appoint a legal representative and block certain accounts deemed harmful to Brazil’s democracy. X’s legal team has now agreed to follow these directives, appointing Rachel de Oliveira Villa Nova Conceicao as its representative and committing to block the required accounts.
Despite previous defiance and criticism of the court’s orders by Musk and his company, X has shifted its stance. The court gave X five days to submit proof of the appointment and two days to confirm that the necessary accounts had been blocked. Once all compliance is verified, the court will decide whether to extend or lift the ban on X in Brazil.
Additionally, X has agreed to pay fines exceeding $3 million and begin blocking specific accounts involved in a hate speech investigation. This represents a shift in the company’s stance, which had previously denounced the court orders as censorship. X briefly became accessible in Brazil last week after a network update bypassed the ban, though the court continues to enforce its block until all conditions are met.
LinkedIn has come under scrutiny for using user data to train AI models without first updating its privacy terms. While LinkedIn has since revised those terms, users in the United States were not notified in advance, a step that would normally give them time to decide what to do with their accounts. LinkedIn offers an opt-out feature for data used in generative AI, but this was not initially reflected in its privacy policy.
LinkedIn clarified that its AI models, including content creation tools, use user data. Some models on its platform may also be trained by external providers like Microsoft. LinkedIn assures users that privacy-enhancing techniques, such as redacting personal information, are employed during the process.
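Redacting personal information typically means scrubbing identifiers such as email addresses and phone numbers from text before it enters a training corpus. The sketch below shows the general idea with simplified regular expressions; it is a minimal illustration, not LinkedIn’s actual pipeline, and the patterns would miss many real-world formats.

```python
import re

# Simplified examples of PII patterns; production systems use far more
# robust detection (names, addresses, IDs) than these two expressions.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Contact jane.doe@example.com or +1 415 555 0100."))
# Contact [EMAIL] or [PHONE].
```

The design choice here, substituting placeholder tokens rather than deleting spans outright, preserves sentence structure so the redacted text remains usable for training.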
The Open Rights Group has criticised LinkedIn for not seeking consent from users before collecting data, calling the opt-out method inadequate for protecting privacy rights. Regulatory bodies, including Ireland’s Data Protection Commission, have been involved in monitoring the situation, especially within regions under GDPR protection, where user data is not used for AI training.
LinkedIn is one of several platforms reusing user-generated content for AI training. Others, like Meta and Stack Overflow, have also begun similar practices, with some users protesting the reuse of their data without explicit consent.
Several tech companies, including Meta and Spotify, have criticised the European Union for what they describe as inconsistent decision-making on data privacy and AI. A collective letter from firms, researchers, and industry bodies warned that Europe risks losing competitiveness due to fragmented regulations. They urged data privacy regulators to deliver clear, harmonised decisions, allowing European data to be utilised in AI training for the benefit of the region.
The companies voiced concerns about the unpredictability of recent decisions made under the General Data Protection Regulation (GDPR). Meta, known for owning Facebook and Instagram, recently paused plans to collect European user data for AI development, following pressure from EU privacy authorities. Uncertainty surrounding which data can be used for AI models has become a major issue for businesses.
Tech firms have delayed product releases in Europe, seeking legal clarity. Meta postponed its Twitter-like app Threads, while Google has also delayed the launch of AI tools in the EU market. The introduction of Europe’s AI Act earlier this year added further regulatory requirements, which firms argue complicates innovation.
The European Commission insists that all companies must comply with data privacy rules, and Meta has already faced significant penalties for breaches. The letter stresses the need for swift regulatory decisions to ensure Europe can remain competitive in the AI sector.
Chinese multinational technology company Alibaba has intensified its push into the generative AI space by releasing new open-source AI models and text-to-video technology. The tech giant’s latest models, part of its Qwen 2.5 family, range from 0.5 billion to 72 billion parameters, cover fields such as mathematics and coding, and support more than 29 languages.
This marks Alibaba’s shift towards a hybrid approach, combining both open-source and proprietary AI developments, as it competes with rivals such as Baidu and OpenAI, which favour closed-source models. The newly introduced text-to-video model, part of the Tongyi Wanxiang family, positions Alibaba as a key player in the rapidly growing AI-driven content creation market.
The company’s new AI offerings aim to serve a wide range of industries, from automotive and gaming to scientific research, solidifying its role in shaping the future of AI across various sectors.
California has introduced three new laws aimed at reducing AI-generated deepfakes ahead of the 2024 election. The legislation, signed by Governor Gavin Newsom, is designed to combat election misinformation and protect the public from deceptive political ads. One law requires online platforms like X to remove false materials and empowers individuals to sue over election-related deepfakes.
However, two of these laws are now facing a legal challenge. A creator of parody videos featuring Kamala Harris claims the legislation violates free speech rights. The lawsuit, filed in Sacramento, accuses California of censoring content, despite assurances from Newsom’s office that the laws do not target satire or parody.
Supporters of the laws argue they are necessary to prevent erosion of trust in US elections, as AI-generated disinformation becomes an increasing threat. Critics, including free speech advocates, believe the legislation overreaches and could be ineffective due to slow court processes, limiting its impact.
Despite the debate, California’s laws could serve as a deterrent to potential violations. Legislators hope the rules will prompt platforms to act quickly in identifying and removing misleading content.