WSIS prepares for Geneva as momentum builds for impactful digital governance

As preparations intensify for the World Summit on the Information Society (WSIS+20) high-level event, scheduled for 7–11 July in Geneva, stakeholders from across sectors gathered at the Internet Governance Forum in Norway to reflect on WSIS’s evolution and map a shared path forward.

The session, moderated by Gitanjali Sah of ITU, brought together over a dozen speakers from governments, UN agencies, civil society, and the technical and business communities.

The event marks a milestone: two decades since the WSIS process began. Over that time, WSIS has grown into a multistakeholder framework involving more than 50 UN entities. While the action lines offer a structured and inclusive approach to digital cooperation, participants acknowledged that measurement and implementation remain the weakest links.


Ambassador Thomas Schneider of Switzerland—co-host of the upcoming high-level event—called for a shift from discussion to decision-making. “Dialogue is necessary but not sufficient,” he stated. “We must ensure these voices translate into outcomes.” Echoing this, South Africa’s representative, Cynthia, reaffirmed her country’s leadership as chair-designate of the event and its commitment to inclusive governance via its G20 presidency focus on AI, digital public infrastructure, and small business support.

UNDP’s Yu Ping Chan shared insights from the field: “Capacity building remains the number one request from governments. It’s not a new principle—it has been central since WSIS began.” She cited UNDP’s work on the Hamburg Declaration on responsible AI and AI ecosystem development in Africa as examples of translating global dialogue into national action.

Tatevik Grigoryan from UNESCO emphasised the enduring value of WSIS’s human rights-based foundations. “We continue to facilitate action lines on access to information, e-learning, and media ethics,” she said, encouraging engagement with UNESCO’s ROAM-X framework as a tool for ethical, inclusive digital societies.

Veni from ICANN reinforced the technical community’s role, expressing hope that the WSIS Forum would be formally recognised in the UN’s review documents. “We must not overlook the forum’s contributions. Multistakeholder governance remains essential,” he insisted.

Representing the FAO, Dejan Jakovljević reminded participants that 700 million people remain undernourished. “Digital transformation in agriculture is vital. But farmers without connectivity are left behind,” he said, highlighting the WSIS framework’s role in fostering collaboration across sectors.

Anriette Esterhuysen of APC called on civil society to embrace WSIS as a complementary forum to the IGF. “WSIS gives us a policy and implementation framework. It’s not just about talk—it’s about tools we can use at the national level.”

The Inter-Parliamentary Union’s Andy Richardson underscored parliaments’ dual role: advancing innovation while protecting citizens. Meli from the International Chamber of Commerce pointed to business engagement through AI-related workshops and discussions on strengthening multistakeholder cooperation.

Gitanjali Sah acknowledged past successes but urged continued ambition. “We were very ambitious in 1998—and we must be again,” she said. Still, she noted a persistent challenge: “We lack clear indicators to measure WSIS action line progress. That’s a gap we must close.”

The upcoming Geneva event will feature 67 ministers, 72 WSIS champions, and a youth programme alongside the AI for Good summit. Delegates were encouraged to submit input to the UN review process by 15 July and to participate in shaping a WSIS future that is more measurable, inclusive, and action-oriented.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

AI sandboxes pave path for responsible innovation in developing countries

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts from around the world gathered to examine how AI sandboxes—safe, controlled environments for testing new technologies under regulatory oversight—can help ensure that innovation remains responsible and inclusive, especially in developing countries. Moderated by Sophie Tomlinson of the DataSphere Initiative, the session spotlighted the growing global appeal of sandboxes, initially developed for fintech and now extending into healthcare, transportation, and data governance.

Speakers emphasised that sandboxes provide a much-needed collaborative space for regulators, companies, and civil society to test AI solutions before launching them into the real world. Mariana Rozo-Paz from the DataSphere Initiative likened them to childhood spaces for building and experimentation, underscoring their agility and potential for creative governance.

From the European AI Office, Alex Moltzau described how the EU AI Act integrates sandboxes to support safe innovation and cross-border collaboration. On the African continent, where 25 sandboxes already exist (mainly in finance), countries like Nigeria are using them to implement data protection laws and shape national AI strategies. However, funding and legal authority remain hurdles.

The workshop laid bare several shared challenges: limited resources, lack of clear legal frameworks, and insufficient civil society participation. Natalie Cohen of the OECD pointed out that in just 41% of countries do people trust their governments to regulate new technologies effectively—a gap that sandboxes can help bridge. By enabling evidence-based experimentation and promoting transparency, they serve as trust-building tools among governments, businesses, and communities.

Despite regional differences, there was consensus that AI sandboxes—when well-designed and inclusive—can drive equitable digital innovation. With initiatives like the Global Sandboxes Forum and OECD toolkits in progress, stakeholders signalled a readiness to move from theory to practice, viewing sandboxes as more than just regulatory experiments—they are, increasingly, catalysts for international cooperation and responsible AI deployment.


UNESCO and ICANN lead push for multilingual and inclusive internet governance

At the 2025 Internet Governance Forum in Lillestrøm, Norway, experts gathered to discuss how to better involve diverse communities—especially indigenous and underrepresented groups—in the technical governance of the internet. The session, led by Niger’s Anne Rachel Inne, emphasised that meaningful participation requires more than token inclusion; it demands structural reforms and practical engagement tools.

Central to the dialogue was the role of multilingualism, which UNESCO’s Guilherme Canela de Souza described as both a right and a necessity for true digital inclusion. ICANN’s Theresa Swinehart spotlighted ‘Universal Acceptance’ as a tangible step toward digital equality, ensuring that domain names and email addresses work in all languages and scripts.

Real-world examples, like hackathons with university students in Bahrain, showcased how digital cooperation can bridge technical skills and community needs. Meanwhile, Valts Ernstreits from Latvia shared how international engagement helped elevate the status of the Livonian language at home, proving that global advocacy can yield local policy wins.

The workshop addressed persistent challenges to inclusion: from bureaucratic hurdles that exclude indigenous communities to the lack of connections between technical and policy realms. Panellists agreed that real change hinges on collaboration, mentorship, and tools that meet people where they are, like WhatsApp groups and local capacity-building networks.

Participants also highlighted UNESCO’s roadmap for multilingualism and ICANN’s upcoming domain name support program as critical opportunities for further action. In a solution-oriented close, speakers urged continued efforts to make digital spaces more representative.

They underscored the need for long-term investment in community-driven infrastructure and policies that reflect the internet’s global diversity. The message was clear: equitable internet governance can only be achieved when all voices—across languages, regions, and technical backgrounds—are heard and empowered.


AGI moves closer to reshaping society

There was a time when machines that think like humans existed only in science fiction. But AGI now stands on the edge of becoming a reality — and it could reshape our world as profoundly as electricity or the internet once did.

Unlike today’s narrow AI systems, AGI would be able to learn, reason, and adapt across domains, handling everything from creative writing to scientific research without being limited to a single task.

Recent breakthroughs in neural architecture, multimodal models, and self-improving algorithms bring AGI closer—systems like GPT-4o and DeepMind’s Gemini now process language, images, audio and video together.

Open-source tools such as AutoGPT show early signs of autonomous reasoning. Memory-enabled AIs and brain-computer interfaces are blurring the line between human and machine thought while companies race to develop systems that can not only learn but learn how to learn.

Though true AGI hasn’t yet arrived, early applications show its potential. AI already assists in generating code, designing products, supporting mental health, and uncovering scientific insights.

AGI could transform industries such as healthcare, finance, education, and defence as development accelerates — not just by automating tasks but also by amplifying human capabilities.

Still, the rise of AGI raises difficult questions.

How can societies ensure safety, fairness, and control over systems that are more intelligent than their creators? Issues like bias, job disruption and data privacy demand urgent attention.

Most importantly, global cooperation and ethical design are essential to ensure AGI benefits humanity rather than becoming a threat.

The challenge is no longer whether AGI is coming but whether we are ready to shape it wisely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kurbalija’s book on internet governance turns 20 with new life at IGF

At the Internet Governance Forum 2025 in Lillestrøm, Norway, Jovan Kurbalija launched the eighth edition of his seminal textbook ‘Introduction to Internet Governance’, marking a return to writing after a nine-year pause. Moderated by Sorina Teleanu of Diplo, the session unpacked not just the content of the new edition but also the reasoning behind retaining its original title in an era buzzing with terms like ‘AI governance’ and ‘digital governance’.

Kurbalija defended the choice, arguing that most so-called digital issues—from content regulation to cybersecurity—ultimately operate over internet infrastructure, making ‘Internet governance’ the most precise term available.

The updated edition reflects both continuity and adaptation. He introduced ‘Kaizen publishing,’ a new model that replaces the traditional static book cycle with a continuously updated digital platform. Driven by the fast pace of technological change and aided by AI tools trained on his own writing style, the new format ensures the book evolves in real time with policy and technological developments.


The new edition is structured as a seven-floor pyramid tackling 50 key issues, situating future internet governance trajectories within digital policy’s deep historical roots.

Kurbalija highlighted how key global internet governance frameworks—such as ICANN, the WTO e-commerce moratorium, and UN cyber initiatives—emerged within months of each other in 1998, a pivotal moment he calls foundational to today’s landscape. He contrasted this historical consistency with recent transformations, identifying four key shifts since 2016: mass data migration to the cloud, COVID-19’s digital acceleration, the move from CPUs to GPUs, and the rise of AI.

Finally, the session tackled the evolving discourse around AI governance. Kurbalija emphasised the need to weigh long-term existential risks against more immediate challenges like educational disruption and concentrated knowledge power. He also critiqued the shift in global policy language—from knowledge-centric to data-driven frameworks—and warned that this transformation might obscure AI’s true nature as a knowledge-based phenomenon.

As geopolitics reasserts itself in digital governance debates, Kurbalija’s updated book aims to ground readers in the enduring principles shaping an increasingly complex landscape.


AI governance debated at IGF 2025: Global cooperation meets local needs

At the Internet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of artificial intelligence governance. The discussion, moderated by Kathleen Ziemann from the German development agency GIZ and Guilherme Canela of UNESCO, featured a rich exchange between government officials, private sector leaders, civil society voices, and multilateral organisations.

The session highlighted how AI governance is becoming a crowded yet fragmented space, shaped by overlapping frameworks such as the OECD AI Principles, the EU AI Act, UNESCO’s recommendations on AI ethics, and various national and regional strategies. While these efforts reflect progress, they also pose challenges in terms of coordination, coherence, and inclusivity.


Melinda Claybaugh, Director of Privacy Policy at Meta, noted the abundance of governance initiatives but warned of disagreements over how AI risks should be measured. ‘We’re at an inflection point,’ she said, calling for more balanced conversations that include not just safety concerns but also the benefits and opportunities AI brings. She argued for transparency in risk assessments and suggested that existing regulatory structures could be adapted to new technologies rather than replaced.

In response, Jhalak Kakkar, Executive Director at India’s Centre for Communication Governance, urged caution against what she termed a ‘false dichotomy’ between innovation and regulation. ‘We need to start building governance from the beginning, not after harms appear,’ she stressed, calling for socio-technical impact assessments and meaningful civil society participation. Kakkar advocated for multi-stakeholder governance that moves beyond formality to real influence.

Mlindi Mashologu, Deputy Director-General at South Africa’s Ministry of Communications and Digital Technology, highlighted the importance of context-aware regulation. ‘There is no one-size-fits-all when it comes to AI,’ he said. Mashologu outlined South Africa’s efforts through its G20 presidency to reduce AI-driven inequality via a new policy toolkit, stressing human rights, data justice, and environmental sustainability as core principles. He also called for capacity-building to enable the Global South to shape its own AI future.

Jovan Kurbalija, Executive Director of the Diplo Foundation, brought a philosophical lens to the discussion, questioning the dominance of ‘data’ in governance frameworks. ‘AI is fundamentally about knowledge, not just data,’ he argued. Kurbalija warned against the monopolisation of human knowledge and advocated for stronger safeguards to ensure fair attribution and decentralisation.


The need for transparency, explainability, and inclusive governance remained central themes. Participants explored whether traditional laws—on privacy, competition, and intellectual property—are sufficient or whether new instruments are needed to address AI’s novel challenges.

Audience members added urgency to the discussion. Anna from Mexican digital rights group R3D raised concerns about AI’s environmental toll and extractive infrastructure practices in the Global South. Pilar Rodriguez, youth coordinator for the IGF in Spain, questioned how AI governance could avoid fragmentation while still respecting regional sovereignty.

The session concluded with a call for common-sense, human-centric AI governance. ‘Let’s demystify AI—but still enjoy its magic,’ said Kurbalija, reflecting the spirit of hopeful realism that permeated the discussion. Panelists agreed that while many AI risks remain unclear, global collaboration rooted in human rights, transparency, and local empowerment offers the most promising path forward.


IGF panel urges rethinking internet governance amid rising geopolitical tensions

At the 2025 Internet Governance Forum in Lillestrøm, Norway, a session led by the German Federal Ministry for Digital Transformation spotlighted a bold foresight exercise imagining how global internet governance could evolve by 2040. Co-led by researcher Julia Pohle, the initiative involved a diverse 15-member German task force and interviews with international experts, including Anriette Esterhuysen and Gbenga Sesan.

Their work yielded four starkly different future scenarios, ranging from intensified geopolitical rivalry and internet fragmentation to overregulation and a transformative turn toward treating the internet as a public good. A central takeaway was the resurgence of state power as a dominant force shaping digital futures.

According to Pohle, geopolitical dynamics—especially the actions of the US, China, Russia, and the EU—emerged as the primary drivers across nearly all scenarios. That marked a shift from previous foresight efforts that had emphasised civil society or corporate actors.

The panellists underscored that today’s real-world developments are already outpacing the scenarios’ predictions, with multistakeholder models appearing increasingly hollow or overly institutionalised. While the scenarios themselves might not predict the exact future, the process of creating them was widely praised.

Panellists described the interviews and collaborative exercises as intellectually enriching and essential for thinking beyond conventional governance paradigms. Yet, they also acknowledged practical concerns: the abstract nature of such exercises, the lack of direct implementation, and the need to involve government actors more directly to bridge analysis and policy action.

Looking ahead, participants called for bolder and more inclusive approaches to internet governance. They urged forums like the IGF to embrace participatory methods—such as scenario games—and to address complex issues without requiring full consensus.

The session concluded with a sense of urgency: the internet we want may still be possible, but only if we confront uncomfortable realities and make space for more courageous, creative policymaking.


Global consensus grows on inclusive and cooperative AI governance at IGF 2025

At the Internet Governance Forum 2025 in Lillestrøm, Norway, the ‘Building an International AI Cooperation Ecosystem’ session spotlighted the urgent need for international collaboration to manage AI’s transformative impact. Hosted by China’s Cyberspace Administration, the session featured a global roster of experts who emphasised that AI is no longer a niche or elite technology, but a powerful and widely accessible force reshaping economies, societies, and governance frameworks.

China’s Cyberspace Administration Director-General Qi Xiaoxia opened the session by stressing her country’s leadership in AI innovation, citing that over 60% of global AI patents originate from China. She proposed a cooperative agenda focused on sustainable development, managing AI risks, and building international consensus through multilateral collaboration.

Echoing her call, speakers highlighted that AI’s rapid evolution requires national regulations and coordinated global governance, ideally under the auspices of the UN.

Speakers, such as Jovan Kurbalija, executive director of Diplo, and Wolfgang Kleinwächter, emeritus professor for Internet Policy and Regulation at the University of Aarhus, warned against the pitfalls of siloed regulation and technological protectionism. Instead, they advocated for open-source standards, inclusive policymaking, and leveraging existing internet governance models to shape AI rules.


Regional case studies from Shanghai and Mexico illustrated diverse governance approaches—ranging from rights-based regulation to industrial ecosystem building—while initiatives like China Mobile’s AI+ Global Solutions showcased the role of major industry actors. A recurring theme throughout the forum was that no single stakeholder can monopolise effective AI governance.

Instead, a multistakeholder approach involving governments, civil society, academia, and the private sector is essential. Participants agreed that the goal is not just to manage risks, but to ensure AI is developed and deployed in a way that is ethical, inclusive, and beneficial to all humanity.


Yoga in the age of AI: Digital spirituality or algorithmic escapism?

Since 2015, 21 June has marked the International Day of Yoga, celebrating the ancient Indian practice that blends physical movement, breathing, and meditation. But as the world becomes increasingly digital, yoga itself is evolving.

No longer limited to ashrams or studios, yoga today exists on mobile apps, YouTube channels, and even in virtual reality. On the surface, this democratisation seems like a triumph. But what are the more profound implications of digitising a deeply spiritual and embodied tradition? And how do emerging technologies, particularly AI, reshape how we understand and experience yoga in a hyper-connected world?

Tech and wellness: The rise of AI-driven yoga tools

The wellness tech market has exploded, and yoga is a major beneficiary. Apps like Down Dog, YogaGo, and Glo offer personalised yoga sessions, while wearables such as the Apple Watch or Fitbit track heart rate and breathing.

Meanwhile, AI-powered platforms can generate tailored yoga routines based on user preferences, injury history, or biometric feedback. For example, AI motion-tracking tools can evaluate your poses in real time, offering corrections much like a human instructor.


While these tools increase accessibility, they also raise questions about data privacy, consent, and the commodification of spiritual practices. What happens when biometric data from yoga sessions is monetised? Who owns your breath and posture data? These questions sit at the intersection of AI ethics and digital rights.

Beyond the mat: Virtual reality and immersive yoga

The emergence of virtual reality (VR) and augmented reality (AR) is pushing the boundaries of yoga practice. Platforms like TRIPP or Supernatural offer immersive wellness environments where users can perform guided meditation and yoga in surreal, digitally rendered landscapes.

These tools promise enhanced focus and escapism—but also risk detachment from embodied experience. Does VR yoga deepen the meditative state, or does it dilute the tradition by gamifying it? As these technologies grow in sophistication, we must question how presence, environment, and embodiment translate in virtual spaces.

Can AI be a guru? Empathy, authority, and the limits of automation

One provocative question is whether AI can serve as a spiritual guide. AI instructors—whether through chatbots or embodied in VR—may be able to correct your form or suggest breathing techniques. But can they foster the deep, transformative relationship that many associate with traditional yoga masters?


AI lacks emotional intuition, moral responsibility, and cultural embeddedness. While it can mimic the language and movements of yoga, it struggles to replicate the teacher-student connection that grounds authentic practice. As AI becomes more integrated into wellness platforms, we must ask: where do we draw the line between assistance and appropriation?

Community, loneliness, and digital yoga tribes

Yoga has always been more than individual practice—community is central. Yet, as yoga moves online, questions of connection and belonging arise. Can digital communities built on hashtags and video streams replicate the support and accountability of physical sanghas (spiritual communities)?

Paradoxically, while digital yoga connects millions, it may also contribute to isolation. A solitary practice in front of a screen lacks the energy, feedback, and spontaneity of group practice. For tech developers and wellness advocates, the challenge is to reimagine digital spaces that foster authentic community rather than algorithmic echo chambers.

Digital policy and the politics of platformised spirituality

Beyond the individual experience, there’s a broader question of how yoga operates within global digital ecosystems. Platforms like YouTube, Instagram, and TikTok have turned yoga into shareable content, often stripped of its philosophical and spiritual roots.

Meanwhile, Big Tech companies capitalise on wellness trends while contributing to stress-inducing algorithmic environments. There are also geopolitical and cultural considerations.


The export of yoga through Western tech platforms often sidesteps its South Asian origins, raising issues of cultural appropriation. From a policy perspective, regulators must grapple with how spiritual practices are commodified, surveilled, and reshaped by AI-driven infrastructures.

Toward inclusive and ethical design in wellness tech

As AI and digital tools become more deeply embedded in yoga practice, there is a pressing need for ethical design. Developers should consider how their platforms accommodate different bodies, abilities, cultures, and languages. For example, how can AI be trained to recognise non-normative movement patterns? Are apps accessible to users with disabilities?

Inclusive design is not only a matter of social justice—it also aligns with yogic principles of compassion, awareness, and non-harm. Embedding these values into AI development can help ensure that the future of yoga tech is as mindful as the practice it seeks to support.

Toward a mindful tech future

As we celebrate International Day of Yoga, we are called to reflect not only on the practice itself but also on its evolving digital context. Emerging technologies offer powerful tools for access and personalisation, but they also risk diluting the depth and ethics of yoga.


For policymakers, technologists, and practitioners alike, the challenge is to ensure that yoga in the digital age remains a practice of liberation rather than a product of algorithmic control. Yoga teaches awareness, balance, and presence. These are the very qualities we need to shape responsible digital policies in an AI-driven world.


EuroDIG outcomes shared at IGF 2025 session in Norway

At the Internet Governance Forum (IGF) 2025 in Norway, a high-level networking session was held to share key outcomes from the 18th edition of the European Dialogue on Internet Governance (EuroDIG), which took place earlier this year from 12–14 May in Strasbourg, France. Hosted by the Council of Europe and supported by the Luxembourg Presidency of the Committee of Ministers, the Strasbourg conference centred on balancing innovation and regulation, strongly focusing on safeguarding human rights in digital policy.

Sandra Hoferichter, who moderated the session in Norway, opened by noting the symbolic significance of EuroDIG’s return to Strasbourg—the city where the forum began in 2008. She emphasised EuroDIG’s unique tradition of issuing “messages” as policy input, which IGF and other regional dialogues later adopted.

Swiss Ambassador Thomas Schneider, President of the EuroDIG Support Association, presented the community’s consolidated contributions to the WSIS+20 review process. “The multistakeholder model isn’t optional—it’s essential,” he said, adding that Europe strongly supports making the Internet Governance Forum a permanent institution rather than one renewed every decade. He called for a transparent and inclusive WSIS+20 process, warning against decisions being shaped behind closed diplomatic doors.

YouthDIG representative Frances Douglas Thomson shared insights from the youth-led sessions at EuroDIG. She described strong debates on digital literacy, particularly around the role of generative AI in schools. ‘Some see AI as a helpful assistant; others fear it diminishes critical thinking,’ she said. Content moderation also sparked division, with some young participants calling for vigorous enforcement against harmful content and others raising concerns about censorship. Common ground emerged around the need for greater algorithmic transparency so users understand how content is curated.

Hans Seeuws, business operations manager at EURid, emphasised the need for infrastructure providers to be heard in policy spaces. He supported calls for concrete action on AI governance and digital rights, stressing the importance of translating dialogue into implementation.

Chetan Sharma from the Data Mission Foundation Trust India questioned the practical impact of governance forums in humanitarian crises. Frances highlighted several EuroDIG sessions that tackled the use of autonomous weapons, internet shutdowns, and misinformation during conflicts. ‘Dialogue across stakeholders can shift how we understand digital conflict. That’s meaningful change,’ she noted.

A representative from Geneva Macro Labs challenged the panel to explain how internet policy can be effective when many governments lack technical literacy. Schneider replied that civil society, business, and academia must step in when public institutions fall short. ‘Democracy is not self-sustaining—it requires daily effort. The price of neglect is high,’ he cautioned.

Janice Richardson, an expert at the Council of Europe, asked how to widen youth participation. Frances praised YouthDIG’s accessible, bottom-up format and called for increased funding to help young people from underrepresented regions join discussions. ‘The more youth feel heard, the more they stay engaged,’ she said.

As the session closed, Hoferichter reminded attendees of the over 400 applications received for YouthDIG this year. She urged donors to help cover the high travel costs for participants, mainly those from Eastern Europe and the Caucasus. ‘Supporting youth in internet governance isn’t charity—it’s a long-term investment in inclusive, global policy,’ she concluded.
