Mr Nicolas Seidler (Geneva Science Policy Interface, GSPI) noted that this session is a spin-off of a meeting organised last year on using geospatial satellite data to support programmes and decision-making for area development. This session covered images from drones and satellites used to support development and aid workers in refugee settlements, natural disaster response, and recovery. The focus of the session was on technologies such as AI and machine learning that help analyse the vast amount of available data and speed up these processes.
Mr Patrick Meier (WeRobotics) said they invest in local organisations and specialists because they can do the work better and faster. WeRobotics use the Flying Labs network – local knowledge hubs run entirely by local experts and supported with technology solutions and equipment provided by WeRobotics. The time factor is crucial for development specialists – they cannot wait for months until aerial data is interpreted. Highly detailed photos of the land are no longer enough: organisations need actionable information extracted from the datasets to define an action plan for the area. Meier shared several projects in which AI was applied to speed up the processing of drone images in order to identify and count small objects such as trees and buildings. In addition, local knowledge is crucial for interpreting the results and testing the AI applications.
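The object-counting task Meier described can be illustrated with a toy example: once a model has produced a binary detection mask from a drone image (1 = pixel belongs to a tree or building), counting distinct objects reduces to counting connected components. This is a minimal sketch of that idea, not WeRobotics' actual pipeline:

```python
# Count distinct objects in a binary detection mask (1 = object pixel),
# a toy stand-in for counting trees or buildings in drone imagery.
def count_objects(mask):
    """Count connected components (4-connectivity) via flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                count += 1  # found a new, unvisited object
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if ((y, x) in seen
                            or not (0 <= y < rows and 0 <= x < cols)
                            or not mask[y][x]):
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 1, 0, 1]]
print(count_objects(mask))  # 3
```

In practice the mask would come from a segmentation model and the counting would run over tiled aerial imagery, but the aggregation step is essentially this.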
Mr William Anderson (Medair) spoke about the challenges of data collection, since some data is still collected on paper. The biggest problem is aligning the collected data and turning it into usable .csv files for processing in the cloud. Medair partners with the Click Program to analyse the collected data. The first challenge is the accuracy of collecting the right indicators, because local contexts may vary significantly, making data incompatible. Another challenge is mistakes in .csv files that affect the final results for the whole dataset.
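The kind of .csv mistake Anderson described can be caught with a simple validation pass before any aggregation. A minimal sketch using Python's standard library; the field names are hypothetical, not Medair's actual schema:

```python
import csv
import io

# Hypothetical survey export; field names are illustrative only.
RAW = """site,household_count,water_access
A,12,yes
B,abc,yes
C,7,no
"""

def clean_rows(text):
    """Keep rows whose numeric fields parse; set the rest aside for review,
    so one bad cell cannot distort results for the whole dataset."""
    good, bad = [], []
    for row in csv.DictReader(io.StringIO(text)):
        try:
            row["household_count"] = int(row["household_count"])
            good.append(row)
        except ValueError:
            bad.append(row)
    return good, bad

good, bad = clean_rows(RAW)
print(len(good), len(bad))  # 2 valid rows; 1 malformed row ("abc") set aside
```

Quarantining malformed rows instead of silently dropping or coercing them keeps the error visible for manual follow-up, which matters when paper forms are the original source.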
Mr Timothy Chapuis (Pix4D) said they work mostly with emergency response and the agriculture sector, developing applications that map an area to coordinate improvement work. Pix4D use photogrammetry to create three-dimensional models of the area. The workflow is straightforward: plan the flight, capture the images, construct the model, process it in the software, annotate the model, and adjust it so that it can be easily distributed and made available to a larger audience, not only to specialists.
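The flight-planning step determines how much ground detail the final model can have. A common way to reason about it is the standard ground sampling distance (GSD) formula, which relates flight altitude and camera parameters to centimetres of ground covered per image pixel. The camera values below are illustrative examples, not Pix4D specifications:

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             altitude_m, image_width_px):
    """Standard photogrammetry GSD formula, returned in cm per pixel:
    GSD = (sensor width * altitude) / (focal length * image width)."""
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

# Example values for a small mapping drone (illustrative only):
# 13.2 mm sensor, 8.8 mm lens, 100 m altitude, 5472 px image width.
gsd = ground_sampling_distance(13.2, 8.8, 100, 5472)
print(round(gsd, 2))  # 2.74 cm/px
```

Doubling the altitude doubles the GSD (half the detail), which is why the flight plan is fixed before any images are captured.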
Mr Frank de Morsier (Picterra) explained how AI can scale up image processing and replace manual analysis. The volume of available data has grown beyond what humans can process, and computer vision has begun to exceed human performance in the accuracy and precision of object recognition. Morsier then explained the process of training an AI model to recognise objects in large-scale images. This technology helps address the sustainable development goals (SDGs) connected to area development and health. Mr Sebastian Ancavil (International Organization for Migration) continued with a specific example of how this AI object-recognition technology works in a refugee camp project in Myanmar. Mr David Coluccia (UNOSAT) elaborated on a similar case of refugee camps in Bangladesh. The main advantage of applying AI is that instead of building area models manually from scratch, AI does most of the work automatically and requires only manual correction of the recognised objects. However, the number of false-positive results is significant, and applying the same model to other areas may not produce good output.
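The false-positive problem raised above is usually quantified with precision (how many detections are real) and recall (how many real objects are found). A minimal sketch with hypothetical counts from a shelter-detection run; the numbers are illustrative, not from the Myanmar or Bangladesh projects:

```python
def detection_metrics(true_positives, false_positives, false_negatives):
    """Precision and recall for an object-detection run. Low precision
    means many false positives, i.e. more manual correction work."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts: 850 shelters correctly detected, 150 spurious
# detections, 100 shelters missed.
p, r = detection_metrics(true_positives=850, false_positives=150,
                         false_negatives=100)
print(round(p, 2), round(r, 2))  # 0.85 0.89
```

Both numbers typically drop when a model trained on one camp is applied to a different region, which is the transfer problem the speakers noted.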
Mr Nico Lang (ETH Ecovision) introduced some of the machine learning projects for environmental science currently carried out by Ecovision. The first is a scalable tool that merges deep learning on high-resolution aerial images with satellite photos. This can help extend area models to larger territories without big investment, since satellite data from the European Space Agency is free; the Sentinel-2 satellite has a revisit cycle of five days, which makes it possible to track developments over time. The second is a biodiversity application of machine learning that combines different image sources – satellite, LiDAR, and drone – to produce a high-resolution image of an area for further study. The last project estimates flood levels by comparing the heights of known objects (bikes, cars, people) in photos collected from social media, with the aim of building the tool with the lowest identification error.
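The flood-level idea can be sketched simply: if an object of known real-world height is partly submerged in a photo, the fraction still visible yields a depth estimate. The reference heights below are rough illustrative values, not those used by Ecovision:

```python
# Rough reference heights in metres for common objects seen in
# social-media flood photos (illustrative assumptions).
REFERENCE_HEIGHTS_M = {"bike": 1.0, "car": 1.5, "person": 1.7}

def flood_depth_estimates(observations):
    """observations: list of (object_type, visible_fraction) pairs,
    where visible_fraction is the share of the object above water.
    Returns one water-depth estimate per observation, in metres."""
    return [REFERENCE_HEIGHTS_M[obj] * (1 - visible)
            for obj, visible in observations]

# A bike half submerged, a car 60% visible, a person 70% visible.
obs = [("bike", 0.5), ("car", 0.6), ("person", 0.7)]
depths = flood_depth_estimates(obs)
print([round(d, 2) for d in depths])  # [0.5, 0.6, 0.51]
```

Averaging estimates from many photos of the same street reduces the error from any single misjudged object, which is the point of collecting observations at social-media scale.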
The last speaker, Ms Beatrice Scarioni (EPFL Tech4Impact), provided an overview of the research work of EPFL, a technology and engineering school, towards the SDGs, focusing especially on goals 3, 7, and 9. She said that we are still at the stage of generating ideas and building dialogue among communities and policymakers on strategies: where to direct efforts on specific topics and how to apply technical solutions most expediently.
By Ilona Stadnik