Dear GRASS GIS Community and Potential Mentors,
I am a first-year graduate student at Tsinghua University (Department of Electronic Engineering), and I am very interested in contributing to GRASS GIS for GSoC 2026.
My research focuses on urban computing and multimodal learning. I recently developed AESPA (Physics-Aware Multimodal Urban Heat Mapping), a framework accepted to Web4Good/WWW 2026 that combines satellite imagery, street-view panoramas, and human mobility data to estimate fine-grained Land Surface Temperature (LST).
My GSoC Vision:
I noticed that the r.learn.ml2 module in GRASS is evolving towards deep learning, and I would like to propose an Adaptive Multimodal Reinforcement Learning (RL) approach to extend its urban mapping capabilities:
- Adaptive fusion: use an RL agent to dynamically adjust the weights of the different data modalities (satellite, OSM, street-view) based on data quality and availability.
- Physics-aware regularization: incorporate physical proxies (albedo, canopy cover, etc.) as constraints in the RL reward function to keep geospatial outputs scientifically consistent.
- Open-source integration: port the AESPA logic into a GRASS-native module (e.g., r.urban.heat.rl) using pygrass and PyTorch.
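To make the first two points concrete, here is a minimal NumPy sketch of quality-driven modality weighting and a physics-regularized reward. The function names and the albedo penalty form are my own illustration, not existing GRASS or AESPA code:

```python
import numpy as np

def fuse_modalities(preds, quality_scores):
    """Softmax-weight per-modality LST predictions by data-quality scores.

    Hypothetical sketch: in the full proposal these weights would be the
    RL agent's action, not a fixed function of quality scores.
    """
    q = np.asarray(quality_scores, dtype=float)
    w = np.exp(q - q.max())        # numerically stable softmax
    w /= w.sum()
    fused = sum(wi * p for wi, p in zip(w, preds))
    return w, fused

def physics_reward(pred_lst, obs_lst, albedo, lam=0.1):
    """Accuracy term minus a physics-consistency penalty.

    Higher albedo should correlate with lower surface temperature, so we
    penalize any positive correlation between albedo and predicted LST.
    """
    mse = float(np.mean((pred_lst - obs_lst) ** 2))
    corr = float(np.corrcoef(albedo.ravel(), pred_lst.ravel())[0, 1])
    return -mse - lam * max(corr, 0.0)
```

In the actual design, the softmax weights would come from the agent's policy network, and the reward would be evaluated per training episode over a whole raster tile rather than per pixel vector.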
Why GRASS GIS?
I believe my work on physics-aware AI aligns well with GRASS GIS's tradition of rigorous spatial analysis. I also have experience optimizing complex Python/Gurobi workloads (reducing one task's runtime from 47 s to 0.6 s) and would love to bring this performance-oriented approach to the community.
I would appreciate your feedback on:
- whether this adaptive multimodal direction fits the current roadmap of r.learn.ml2;
- any suggestions on handling multi-source data I/O efficiently within the GRASS environment.
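For context on the second question: my standalone pipeline currently aligns coarse and fine sources with plain NumPy, roughly as below. This nearest-neighbour stand-in is something I would hope to replace with GRASS's native region handling (e.g. g.region plus the r.resamp.* modules):

```python
import numpy as np

def resample_nearest(src, src_res, dst_res, dst_shape):
    """Nearest-neighbour resample a coarse grid onto a finer target grid
    that shares the same origin.

    Illustrative only: GRASS's region/resampling machinery would handle
    projection, alignment, and null cells far more robustly than this.
    """
    # Map each target row/col to its nearest source row/col by resolution ratio.
    rows = np.minimum((np.arange(dst_shape[0]) * dst_res / src_res).astype(int),
                      src.shape[0] - 1)
    cols = np.minimum((np.arange(dst_shape[1]) * dst_res / src_res).astype(int),
                      src.shape[1] - 1)
    return src[np.ix_(rows, cols)]
```

My main uncertainty is whether to stream rasters row-by-row through pygrass or materialize whole tiles as arrays for the PyTorch side, which is exactly where community advice would help.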
I’ve attached a brief summary of my AESPA framework and would be happy to share more technical details.
Looking forward to hearing from you!
Best regards,
Yuanyi You
GitHub: https://github.com/tsinghua-fib-lab/AESPA
Tsinghua University