Mapping trees along urban street networks with deep learning and street-level imagery

2021 
Abstract

Planning and managing urban forests for livable cities remains a challenge worldwide owing to sparse information on the spatial distribution, structure and composition of urban trees and forests. National and municipal sources of tree inventory data remain limited due to a lack of detailed, consistent and frequent inventory assessments. Despite advancements in research on the automation of urban tree mapping using Light Detection and Ranging (LiDAR) or high-resolution satellite imagery, in practice most municipalities still perform labor-intensive field surveys to collect and update tree inventories. We present a robust, affordable and rapid method for creating tree inventories in any urban region where sufficient street-level imagery is readily available. Our approach is novel in that we use a Mask Regional Convolutional Neural Network (Mask R-CNN) to detect and locate individual tree instances in street-level imagery, successfully creating shape masks around fuzzy urban objects such as trees. The novelty of this method is enhanced by combining monocular depth estimation and triangulation to estimate precise tree locations, relying only on images taken from the street. Experiments across four cities show that our method is transferable to different image sources (Google Street View, Mapillary) and urban ecosystems. We successfully detect more than 70% of all public and private trees recorded in a ground-truth campaign across Metro Vancouver. The accuracy of geolocation is also promising: we automatically locate public and private trees with a mean absolute position error of 4 to 6 m, which is comparable to ground-truth measurements in conventional manual urban tree inventory campaigns.
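The abstract outlines a two-stage pipeline: instance segmentation of trees in street-level images, followed by geolocation from camera metadata. As a rough illustration of the first stage, the sketch below runs a torchvision Mask R-CNN in inference mode; this is a minimal sketch, not the authors' code, and the COCO-pretrained weights have no "tree" class, so in practice the model would be fine-tuned on street-level tree annotations. The file name street_view.jpg is hypothetical.

```python
import torch
import torchvision
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Instance-segmentation backbone; assumed fine-tuned for a "tree" class in practice.
model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torchvision.io.read_image("street_view.jpg") / 255.0  # hypothetical image, CxHxW in [0, 1]
with torch.no_grad():
    pred = model([image])[0]

# Keep confident detections; each mask is a per-pixel probability map for one instance.
keep = pred["scores"] > 0.5
masks, boxes = pred["masks"][keep], pred["boxes"][keep]
```

For the second stage, the abstract names monocular depth estimation and triangulation but gives no formulation, so the following is a generic two-ray intersection under assumed inputs: each detection contributes a bearing derived from its pixel column, the camera heading and a pinhole horizontal field of view, and rays from two panoramas of the same tree are intersected in local metric coordinates (e.g. UTM). All positions, headings and pixel values below are hypothetical.

```python
import numpy as np

def bearing_to_tree(camera_heading_deg, pixel_x, image_width, hfov_deg):
    """Convert a detected tree's pixel column to a world bearing (degrees).

    Assumes a simple pinhole model in which the image spans hfov_deg
    horizontally and the camera heading points at the image centre.
    """
    offset = (pixel_x / image_width - 0.5) * hfov_deg
    return (camera_heading_deg + offset) % 360.0


def intersect_rays(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two 2-D rays given in local metric coordinates.

    Each ray starts at a camera position p = (east, north) and points along
    its bearing (clockwise from north). Returns the intersection point, or
    None if the rays are nearly parallel.
    """
    d1 = np.array([np.sin(np.radians(bearing1_deg)), np.cos(np.radians(bearing1_deg))])
    d2 = np.array([np.sin(np.radians(bearing2_deg)), np.cos(np.radians(bearing2_deg))])
    A = np.column_stack([d1, -d2])
    b = np.asarray(p2, float) - np.asarray(p1, float)
    if abs(np.linalg.det(A)) < 1e-9:
        return None  # parallel rays: a monocular depth estimate could be used instead
    t, _ = np.linalg.solve(A, b)
    return np.asarray(p1, float) + t * d1


# Hypothetical example: the same tree detected in two panoramas 10 m apart.
cam1, heading1 = (491000.0, 5458000.0), 90.0  # UTM east/north (m), heading due east
cam2, heading2 = (491010.0, 5458000.0), 90.0
b1 = bearing_to_tree(heading1, pixel_x=680, image_width=2048, hfov_deg=90.0)
b2 = bearing_to_tree(heading2, pixel_x=530, image_width=2048, hfov_deg=90.0)
print(intersect_rays(cam1, b1, cam2, b2))  # ~(491031, 5458008): the estimated tree position
```

When only a single view of a tree is available, the monocular depth estimate mentioned in the abstract can substitute for the second ray by projecting the detection along its bearing by the estimated distance.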