RUDN mathematician improves machine learning algorithm for recognizing satellite images
Modern satellite and radar systems can be used to automatically monitor earthquakes, volcanic eruptions, fires, and other disasters, as well as the condition of soil, vegetation, and rivers. To automate this process, complex algorithms must recognize and classify objects, allowing the computer to understand from a set of pixels what an image depicts. Machine learning is used for this purpose: a computer “looks through” thousands of examples and thus learns to recognize images on its own. To improve machine learning results, several training algorithms are often combined, which yields more accurate solutions than any of them separately. A RUDN mathematician developed such an ensemble method, using three algorithms to process data from multiple sources.
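The article does not name the three base algorithms, so the following is only a minimal sketch of the general ensemble idea it describes: several simple classifiers, each looking at a different data source, are combined by majority vote. The feature values, thresholds, and classifier rules below are all hypothetical.

```python
from collections import Counter

# Hypothetical toy data: each sample is (spectral_value, radar_value, texture_value);
# label 1 = vegetation, 0 = bare soil. All numbers are illustrative only.
samples = [(0.8, 0.7, 0.9), (0.2, 0.3, 0.1), (0.9, 0.6, 0.8), (0.1, 0.4, 0.2)]
labels = [1, 0, 1, 0]

# Three trivial base classifiers, one per data source.
def clf_spectral(x): return 1 if x[0] > 0.5 else 0
def clf_radar(x):    return 1 if x[1] > 0.5 else 0
def clf_texture(x):  return 1 if x[2] > 0.5 else 0

def ensemble_predict(x):
    """Combine the three base classifiers by majority vote."""
    votes = [clf_spectral(x), clf_radar(x), clf_texture(x)]
    return Counter(votes).most_common(1)[0][0]

predictions = [ensemble_predict(x) for x in samples]
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(predictions, accuracy)
```

The voting scheme is the simplest way to combine classifiers; an ensemble can disagree with any single member yet still be right more often than each of them alone, which is the effect the article reports.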
The mathematicians used data from the five RapidEye mini-satellites and the UAVSAR airborne radar, which captured the same area in Canada on July 5 and 7, 2012. The RapidEye imagery was acquired in five bands of the light spectrum: blue (B), green (G), red (R), near-infrared (NIR), and a region called the “red edge” (RE), where the reflection of green vegetation is dramatically enhanced. The data contained 38 features: spectral channels, vegetation indices, texture parameters, etc. Their spatial resolution, that is, the minimum object size distinguishable in the images, was about five meters. The UAVSAR radar images included 49 different features, with a spatial resolution of about 15 meters. The mathematicians compared the images with reference data on the area collected in the summer of 2012. They identified seven types of plants: broadleaf plants, rapeseed, corn, oats, peas, soybeans, and wheat. The new algorithm was “trained” on examples of images and planting-type data, and its predictions were then compared with the results of other programs based on a similar principle.
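The article mentions vegetation indices among the 38 features without listing them. One standard index that can be derived from the red and near-infrared bands described above is NDVI (Normalized Difference Vegetation Index); the reflectance values below are illustrative, not from the study's data.

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

# Illustrative reflectance values: healthy vegetation absorbs red light
# but reflects strongly in the near-infrared, so its NDVI is high.
vegetation = ndvi(red=0.05, nir=0.45)
bare_soil = ndvi(red=0.30, nir=0.35)
print(round(vegetation, 2), round(bare_soil, 2))
```

NDVI ranges from -1 to 1; dense green vegetation typically scores well above 0.5, while bare soil stays near zero, which is why such indices are useful features for crop classification.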
The new method showed higher accuracy in interpreting images, both with large and with limited numbers of training examples. When training was performed on 5% of all data, the new algorithm recognized images correctly at least 65% of the time, while the other algorithms were correct in 52-60% of cases. When the share of training data rose to 50% of the total volume, the accuracy of the new algorithm increased to almost 90%, while that of the other algorithms rose to 75-86%. Thus, the new algorithm proved more effective.
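The evaluation protocol described above, training on a small or large fraction of the labeled data and testing on the remainder, can be sketched as follows. The data, the nearest-centroid classifier, and the resulting accuracies are all hypothetical stand-ins, not the study's algorithms or numbers.

```python
import random

random.seed(0)

# Hypothetical 1-D synthetic data: class 0 clustered near 0.3, class 1 near 0.7.
data = [(random.gauss(0.3, 0.1), 0) for _ in range(500)] + \
       [(random.gauss(0.7, 0.1), 1) for _ in range(500)]
random.shuffle(data)

def evaluate(train_fraction):
    """Train a nearest-centroid classifier on a fraction of the data, test on the rest."""
    n_train = int(len(data) * train_fraction)
    train, test = data[:n_train], data[n_train:]
    # Per-class centroid of the training samples.
    centroids = {}
    for label in (0, 1):
        values = [x for x, y in train if y == label]
        centroids[label] = sum(values) / len(values)
    # Classify each test sample by its nearest centroid.
    correct = sum(min(centroids, key=lambda c: abs(x - centroids[c])) == y
                  for x, y in test)
    return correct / len(test)

for frac in (0.05, 0.5):
    print(f"train fraction {frac:.0%}: accuracy {evaluate(frac):.2f}")
```

Accuracy generally improves with the training fraction because the model's parameters (here, the two centroids) are estimated from more examples, which mirrors the 5%-versus-50% comparison reported in the article.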
“Our method can be proposed for a land use and land cover classification system using data from different sources, for example, Landsat or Sentinel constellation satellites,” said Vladimir Razumny, Ph.D., Associate Professor at the RUDN Mechanical and Mechatronics Engineering department.
The article was published in the International Journal of Image and Data Fusion.
Imagine a world where everyone has enough food, clean water, access to education, and decent work. A world where nature is protected and the future of our planet is cared for. These are the Sustainable Development Goals—to achieve a sustainable future for all! To this end, in 2015, the United Nations (UN) defined 17 Sustainable Development Goals (SDGs). The SDGs are a global plan that helps countries and people work together towards a better future. All 193 UN member states have joined the plan.
Researchers from the Faculty of Artificial Intelligence at RUDN University conducted a large-scale study that revealed systemic errors in large language models (LLMs) when diagnosing depression based on text. This work, carried out in collaboration with colleagues from AIRI, Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Ivannikov Institute for System Programming of the Russian Academy of Sciences, Moscow Institute of Physics and Technology, and MBZUAI, not only identifies the problem but also lays the foundation for the creation of more reliable and secure tools for detecting depression and anxiety.
Alexandra Sentyabreva, a junior researcher at the Laboratory of Cell Technologies and Tissue Engineering at RUDN Research Institute of Molecular and Cellular Medicine at the Russian University of People's Friendship, won the competition for young scientists at the All-Russian Scientific Conference “Topical Issues of Morphogenesis in Norm and Pathology.” She was awarded the Academician A.P. Avtsyn Prize.