Adopting a Human-centred Approach to the Development and Use of AI
Speech of Balthasar Staehelin, Personal Envoy of the President and Head of Regional Delegation for East Asia of the International Committee of the Red Cross, at Innovation and Governance of AI Technology Forum, 2025 World Internet Conference Wuzhen Summit, 8 November 2025.
Distinguished Guests,
Ladies and Gentlemen,
First, I would like to thank the organizer for inviting the International Committee of the Red Cross (the ICRC) to this forum. It is my great honor to have this opportunity to share with you a humanitarian perspective on the development and use of AI.
The ICRC is an impartial, neutral and independent humanitarian organization. Since 1863, the ICRC’s sole objective has been to ensure humanitarian protection and assistance for people affected by armed conflict and other situations of violence.
But why is the ICRC, a humanitarian organization, talking about AI?
Simply put, the rapid development of AI is creating significant opportunities, as well as new risks, for the humanitarian sector. The ICRC is committed to exploring if, how, when and where advances in AI can help us achieve our mission. At the same time, we feel compelled to understand and mitigate the risks that AI poses to the lives and dignity of people living amid conflict or other situations of violence, and to promote the responsible use of AI by relevant parties.
AI also offers many potential benefits for humanitarian organizations. For example, the ICRC has developed its own chatbot to facilitate staff access to information and report drafting. AI-based visual recognition technologies are now used to identify missing civilians and combatants by analysing photographs of documents, military identification tags, and written reports recovered from war zones. AI is also being tested to optimise aid delivery routes, improve resource allocation across distributed networks, and support scenario planning and simulations.
At the same time, AI systems trained on incomplete, outdated, erroneous, or biased data can produce faulty predictions and poor decisions. This may result in bias against certain races, nationalities, genders, or age groups, hindering their access to aid or even exposing them to greater risks. For communities already living through conflict and crisis, such risks are unacceptable.
The ICRC is also evaluating the risks posed by belligerents' development and use of AI, which shapes the environment we work in. For example, generative AI and other digital technologies have greatly accelerated the spread of harmful information, fuelling social polarisation, undermining the trust in, acceptance of, and safety of humanitarian workers, and increasing the risk of civilian harm.
In 2024, the ICRC established its institutional AI Policy to guide the exploration of AI in supporting its humanitarian mission. The policy is designed to ensure that all use of AI across the organization remains responsible, safe, coherent, and most importantly, human-centred.
By adopting this human-centred approach, the ICRC upholds a simple but powerful principle: technology must serve humanity. No matter how advanced AI systems become, humans must always remain in control of decisions that affect people’s lives, rights, and dignity. While AI can assist in making humanitarian work more effective, it cannot replace human judgement, empathy, and responsibility.
As a humanitarian organization, maintaining and increasing our physical and emotional proximity to affected people is crucial to building relationships of trust that enable us to respond to an evolving palette of needs. AI cannot replace human-to-human interaction. The ICRC therefore pays special attention to ensuring that the development and use of AI solutions does not jeopardize its ability to demonstrate humanity and empathy through direct, in-person human engagement.
At the same time, a misused or poorly designed AI system can reinforce discrimination, introduce harmful biases, or even create new forms of exclusion, causing further harm to communities that are already suffering. This is why the ICRC is committed to using AI carefully and only where it truly adds value, ensuring that solutions are fair, transparent, and consistent with the humanitarian sector's longstanding "do no harm" principle.
Ladies and gentlemen,
We are confronted with a world fraught with intertwined turbulence, where security disorder, development imbalance, and governance failure are becoming increasingly prominent. The ICRC currently classifies more than 130 armed conflicts around the world, more than a threefold increase over the past three decades, with many marked by intense violence, widespread destruction and restrictions on humanitarian aid.
As a humanitarian organisation, the ICRC cannot stop conflict. But to effectively help people in armed conflict, the ICRC must first assess the implications of contemporary and near-future developments in armed conflict.
In fact, the integration of AI into military operations has raised many legal, ethical and humanitarian concerns. For example, AI-assisted decision-support systems are influencing and accelerating military decisions about who or what is targeted in armed conflict, in ways that surpass human cognitive capacity and can therefore undermine the quality of decision-making. Meanwhile, autonomous AI agents can orchestrate complex cyberattacks against civilian services in seconds, drastically reducing the time available to detect and prevent such attacks. Furthermore, the future use of autonomous weapons systems (AWS) will involve a wider range of targets, longer periods of use, and fewer opportunities for human intervention.
To address these issues, the ICRC this year submitted a position paper to the UN Secretary-General stating our views on AWS and calling for clear rules restricting or prohibiting them. The organization maintains an unwavering principle: regardless of the technological sophistication of AWS, humans must ultimately remain in control.
Ladies and Gentlemen,
When meeting with ICRC President Mirjana Spoljaric in 2023, President Xi Jinping emphasised that "China is an active supporter of, participant in and contributor to the international humanitarian cause".
In fact, this year marks the 20th anniversary of the establishment of the ICRC Regional Delegation for East Asia in Beijing.
As China has become a major power with significant global influence and has emerged as one of the ICRC’s key procurement hubs worldwide, the ICRC is deepening its understanding of China’s perspectives on international cooperation, development, armed conflict, and peace. At the same time, the ICRC is broadening its collaboration with China across a wide range of areas, especially in the technology sector. The ICRC proactively engages with the Chinese tech sector via valuable platforms such as the World Internet Conference to explore potential Chinese AI and other technological solutions in support of humanitarian action and to foster dialogue on their responsible development and use.
From 4 to 5 December, the ICRC, jointly with Tsinghua University, will host a Symposium on the Responsible Use of Technology in Humanitarian Action in Beijing, to explore different perspectives on the opportunities and challenges of digital transformation, including the responsible use of technologies such as AI. We sincerely invite you to join our discussion.
Ladies and Gentlemen,
AI is reshaping the world at a pace few could have imagined. For the ICRC, the choice is clear: AI must remain in service of humanity — protecting dignity, preserving life, and never replacing the human compassion at the heart of humanitarian action. This means embracing innovation where it can strengthen humanitarian action, while firmly rejecting uses that undermine human control, accountability, or compassion. It also means working with global partners, including China, to build common rules and safeguards so that AI becomes a force for protection, not harm.
Thank you!