In the evolving landscape of marketing and advertising, the integration of generative AI presents both promise and challenges, as highlighted in a recent Forrester report. Key obstacles include a lack of AI expertise among agency employees and concerns over job obsolescence. The human factor, in particular, poses a significant hurdle that the industry must address urgently to fully harness the potential of genAI.
The potential economic impact of genAI on agencies is profound. Seen as a transformative force akin to the advent of smartphones, genAI promises to redefine creativity in marketing by combining data intelligence with human intuition. Agency leaders overwhelmingly recognise it as a disruptive technology, with 77% acknowledging its potential to fundamentally alter business operations. However, the fear of job displacement among employees remains palpable, exacerbated by recent industry disruptions and the rapid automation of white-collar roles.
To mitigate these concerns and fully embrace genAI, there is a pressing need for comprehensive AI literacy and training within agencies. While existing educational programmes and certifications provide a foundation, they are insufficient to meet the demands of integrating AI into everyday creative processes. Investment in reskilling and upskilling initiatives is crucial to empower agency employees to confidently navigate the AI-driven future of marketing and advertising.
Industry stakeholders, including agencies, technology partners, universities, and trade groups, must collaborate to establish robust training frameworks. Such a concerted effort will not only bolster agency capabilities in AI adoption but also ensure that the creative workforce remains agile and competitive in an increasingly AI-centric landscape. By prioritising AI literacy and supporting continuous learning initiatives, agencies can position themselves at the forefront of innovation, delivering enhanced value to clients through AI-powered creativity.
SoftBank Group has launched a joint venture called ‘SB TEMPUS Corp.’ with Tempus AI, a leader in AI and precision medicine. The joint venture aims to provide precision medicine services in Japan by applying the expertise and technology that Tempus has accumulated in the US. That includes Tempus’ AI-enabled platform, which works to make diagnostics more intelligent and support healthcare providers in making more informed decisions. The goal is to provide personalised, data-driven therapies to patients, with the aim of helping them live longer and healthier lives.
A key focus of the joint venture will be collecting and analysing siloed and unstructured medical data, such as molecular, clinical, pathological, and medical imaging data. By leveraging AI to analyse this data, the joint venture aims to contribute to the advancement of pharmaceutical research, including clinical and drug discovery research and the proposal of treatment plans more suited to individual patients. That approach is expected to reduce side effects and enhance the effectiveness of medications, marking a significant step towards personalised medicine.
To help as many people suffering from cancer as possible, SB TEMPUS plans to establish collaborations with cancer genomic medicine hospitals and Japanese hospitals, medical facilities, pharmaceutical companies, biotech ventures, medical device companies, cancer insurance companies, and testing companies.
Why does it matter?
The collaborative network will support the provision of better diagnosis and treatment for patients, ensuring that they benefit from personalised, data-driven therapies. The joint venture also aligns with SoftBank’s corporate philosophy of ‘Information Revolution—Happiness for everyone.’
Amazon’s AWS, the leading global cloud computing provider, is intensifying efforts to draw the public sector into the realm of AI amidst fierce competition with Microsoft and Google in the generative AI domain. The initiative aims to demonstrate AI’s potential to enhance public services across health, security, and non-profit sectors, leveraging technologies like ChatGPT to streamline operations and improve outcomes.
Over two years, AWS has allocated a substantial $50 million fund to support public sector entities in exploring AI applications, offering cloud computing credits, training, and technical expertise to kickstart innovative projects. Currently serving thousands of government agencies, academic institutions, and nonprofits worldwide, AWS seeks to transition AI concepts into practical solutions that can effectively address public sector challenges.
Dave Levy, AWS’s vice president overseeing global public sector operations, highlighted the importance of moving from conceptualisation to implementation in public sector AI projects, underscoring the need for robust support to navigate complexities and achieve meaningful impacts. The push comes amid heightened competition as Microsoft and Google Cloud aggressively pursue public sector AI adoption, aiming to leverage vast datasets and AI capabilities to revolutionise service delivery and operational efficiency.
Amazon’s AWS remains committed to addressing challenges such as data privacy, security, and ethical considerations surrounding AI adoption in the public sector, emphasising rigorous security protocols and readiness for large-scale deployment.
Why does it matter?
As generative AI continues to evolve, AWS’s strategic focus on public sector adoption underscores its belief in AI’s transformative potential, aiming to lead the charge in integrating advanced technologies into governmental and non-governmental organisations worldwide.
Chinese AI companies are swiftly responding to reports that OpenAI intends to restrict access to its technology in certain regions, including China. OpenAI, the creator of ChatGPT, is reportedly planning to block access to its API for entities in China and other countries. While ChatGPT is not directly available in mainland China, many Chinese startups have used OpenAI’s API platform to develop their applications. Users in China have received emails warning about restrictions, with measures set to take effect from 9 July.
In light of these developments, Chinese tech giants like Baidu and Alibaba Cloud are stepping in to attract users affected by OpenAI’s restrictions. Baidu announced an ‘Inclusive Program,’ offering free migration to its Ernie platform for new users and additional Ernie 3.5 flagship model tokens to match their OpenAI usage. Similarly, Alibaba Cloud provides free tokens and migration services for OpenAI API users through its AI platform, offering competitive pricing compared to GPT-4.
Zhipu AI, another prominent player in China’s AI sector, has also announced a ‘Special Migration Program’ for OpenAI API users. The company emphasises its GLM model as a benchmark against OpenAI’s ecosystem, highlighting its self-developed technology for security and controllability. Over the past year, numerous Chinese companies have launched chatbots powered by their proprietary AI models, indicating a growing trend towards domestic AI development and innovation.
AI startup EvolutionaryScale has secured $142 million in seed funding, led by investors including Nat Friedman, Daniel Gross, and Lux Capital. Both Amazon Web Services (AWS) and NVIDIA’s venture capital arm participated in this substantial funding round. Lux Capital’s co-founder Josh Wolfe likened EvolutionaryScale’s achievements to a ‘ChatGPT moment for biology,’ highlighting their development of a groundbreaking large language model capable of designing new proteins and biological systems.
EvolutionaryScale aims to deploy its AI across diverse applications, from accelerating drug discovery processes to engineering microbes that can degrade plastic pollution. The company’s chief scientist, Alex Rives, emphasised the growing significance of AI in creating innovative biological solutions. That aligns with broader industry trends where AI is increasingly pivotal in advancing biotech and pharmaceutical research.
However, concerns have been raised regarding the potential misuse of generative AI in bioweapons development. Despite these ethical considerations, EvolutionaryScale plans to use its newly secured funding to train its AI models further and expand its team for collaborations within the biotech sector. They have also released the ESM3 models, with the smaller variant open-sourced for non-commercial research, while AWS and NVIDIA will offer the larger ESM3 commercially.
Why does it matter?
One notable achievement highlighted by EvolutionaryScale involves engineering a novel fluorescent protein using their ESM3 model. That protein represents a significant departure from naturally occurring variants, a change that would typically take nature millions of years to evolve. The company’s advancements underscore the transformative potential of AI in pushing the boundaries of biological innovation.
A new UNESCO report highlights the growing risk of Holocaust distortion through AI-generated content as young people increasingly rely on Generative AI for information. The report, published with the World Jewish Congress, warns that AI can amplify biases and spread misinformation, as many AI systems are trained on internet data that includes harmful content. Such content has led to fabricated testimonies and distorted historical records, such as deepfake images and false quotes.
The report notes that Generative AI models can ‘hallucinate’ or invent events due to insufficient or incorrect data. Examples include ChatGPT fabricating Holocaust events that never happened and Google’s Bard generating fake quotes. These kinds of ‘hallucinations’ not only distort historical facts but also undermine trust in experts and simplify complex histories by focusing on a narrow range of sources.
UNESCO calls for urgent action to implement its Recommendation on the Ethics of Artificial Intelligence, emphasising fairness, transparency, and human rights. It urges governments to adopt these guidelines and tech companies to integrate them into AI development. UNESCO also stresses the importance of working with Holocaust survivors and historians to ensure accurate representation and educating young people to develop critical thinking and digital literacy skills.
In a bold move highlighting the intersection of technology and politics, businessman Steve Endacott is running in the 4 July national election in Britain, aiming to become a member of parliament (MP) with the aid of an AI-generated avatar. The campaign leaflet for Endacott features not his own face but that of an AI avatar dubbed ‘AI Steve.’ The initiative, if successful, would result in the world’s first AI-assisted lawmaker.
Endacott, founder of Neural Voice, presented his AI avatar to the public in Brighton, engaging with locals on various issues through real-time interactions. The AI discusses topics like LGBTQ rights, housing, and immigration and then offers policy ideas, seeking feedback from citizens. Endacott aims to demonstrate how AI can enhance voter access to their representatives, advocating for a reformed democratic process where people are more connected to their MPs.
Despite some scepticism, with concerns about the effectiveness and trustworthiness of an AI MP, Endacott insists that the AI will serve as a co-pilot, formulating policies reviewed by a group of validators to ensure security and integrity. The Electoral Commission clarified that the elected candidate would remain the official MP, not the AI. While public opinion is mixed, the campaign underscores the growing role of AI in various sectors and sparks an important conversation about its potential in politics.
As the ‘year of global elections’ reaches its midpoint, AI chatbots and voice assistants are still struggling with basic election questions, risking voter confusion. The Washington Post found that Amazon’s Alexa often failed to correctly identify Joe Biden as the 2020 US presidential election winner, sometimes providing irrelevant or incorrect information. Similarly, Microsoft’s Copilot and Google’s Gemini refused to answer such questions, redirecting users to search engines instead.
Tech companies are increasingly investing in AI to provide definitive answers rather than lists of websites. This feature is particularly important as false claims about the 2020 election being stolen persist, even after multiple investigations found no fraud. Trump faced federal charges for attempting to overturn the victory of Biden, who won decisively with over 51% of the popular vote.
OpenAI’s ChatGPT and Apple’s Siri, however, correctly answered election questions. Seven months ago, Amazon claimed to have fixed Alexa’s inaccuracies, and recent tests showed Alexa correctly stating Biden won the 2020 election. Nonetheless, inconsistencies were spotted last week. Microsoft and Google, in turn, said they avoid answering election-related questions to reduce risks and prevent misinformation, a policy also applied in Europe due to a new law requiring safeguards against misinformation.
Why does it matter?
Tech companies are increasingly tasked with distinguishing fact from fiction as they develop AI-enabled assistants. Recently, Apple announced a partnership with OpenAI to enhance Siri with generative AI capabilities. Concurrently, Amazon is set to launch a new AI version of Alexa as a subscription service in September, although it remains unclear how it will handle election queries. An early prototype struggled with accuracy, and internal doubts about its readiness persist. The new AI assistants from Amazon and Apple aim to merge traditional voice commands with conversational capabilities, but experts warn this integration may pose new challenges.
Italian Prime Minister Giorgia Meloni and Pope Francis are teaming up to warn global leaders that diving into AI without ethical considerations could lead to catastrophic consequences. The collaboration, long in the making, will climax with Pope Francis attending the G7 summit in southern Italy at Meloni’s invitation, where he aims to educate leaders on the potential dangers posed by AI.
Concerned about AI’s societal and economic impacts, Meloni has been vocal about her fears regarding job losses and widening inequalities. She recently highlighted these concerns at the UN, coining the term ‘Algorethics’ to emphasise the need for ethical boundaries in technological advancements. Paolo Benanti, a Franciscan friar and advisor to both Meloni and the Pope, stressed the growing power of multinational corporations in AI development, raising alarms about the concentration of wealth and power.
Pope Francis, known for advocating social justice issues, has previously called for an AI ethics conference at the Vatican, drawing global tech giants and international organisations into the discussion. His upcoming address at the G7 summit is expected to focus on AI’s impact on vulnerable populations and could touch on concerns about autonomous weaponry. Meloni, in turn, is poised to advocate for stronger regulations to ensure AI technologies adhere to ethical standards and serve societal interests.
Despite AI hype, recent studies suggest the promised financial benefits for businesses implementing AI projects have been underwhelming. That challenges the optimistic narratives often associated with AI, indicating a need for more cautious and balanced approaches to its development and deployment.
Young Americans are rapidly embracing generative AI, but few use it daily, according to a recent survey by Common Sense Media, Hopelab, and Harvard’s Center for Digital Thriving. The survey, conducted in October and November 2023 with 1,274 US teens and young adults aged 14-22, found that only 4% use AI tools daily. Additionally, 41% have never used AI, and 8% are unaware of what AI tools are. The main uses for AI among respondents are seeking information (53%) and brainstorming (51%).
Demographic differences show that 40% of white respondents use AI for schoolwork, compared to 62% of Black respondents and 48% of Latinos. Looking ahead, 41% believe AI will have both positive and negative impacts in the next decade. Notably, 28% of LGBTQ+ respondents expect mostly negative impacts, compared to 17% of cisgender/straight respondents. Young people have varied opinions on AI, as some view it as a sign of a changing world and are enthusiastic about its future, while others find it unsettling and concerning.
Why does it matter?
Young people globally share concerns over AI, which the IMF predicts will affect nearly 40% of jobs, with advanced economies seeing up to 60%. In comparison to the results above, a survey of 1,000 young Hungarians (aged 15-29) found that frequent AI app users are more positive about its benefits, while 38% of occasional users remain sceptical. Additionally, 54% believe humans will maintain control over AI, though fears of losing control were more common among women (54%) than men (37%).