Street View Imagery Working Paper Series

Working Paper 2: Street Level Imagery

2.1 Introduction

In early recovery, local responders operate under pressure from residential communities facing damage and destruction, as well as from federal organizations and aid programs demanding damage reports. While one party asks for help, the other demands information about damage through reporting. The two stand at odds with one another until damage is captured, documented, and processed. Typically, damage assessments are lengthy processes that require immense coordination and support. Assessors and emergency responders who conduct damage assessments travel to each household in a community to assess damage in person. The amount of time it takes to conduct door-to-door assessment is exhausting, and the practice of locally administered damage assessments is unclear, nonuniform, and frequently biased.

Prolonged damage assessments not only prevent aid or support from reaching residents in need in a timely manner; they also fail to capture crucial data on damage. This time-sensitivity of damage data is known as perishability: the loss of information, and of its value, over time. For damage assessment practices, damage information is invaluable because it expresses the severity of harm and destruction. When people work quickly to repair their homes, information about a broken window or a concave roof is lost. Altogether, manual damage assessment practices create data collection and resource distribution problems. These problems stall aid and resource allocation but inevitably provide insight about structural damage. In this light, damage assessments can both assist and hinder recovery efforts.

New techniques are being developed to streamline damage assessment processes through data-driven tools. Researchers at the National Disaster Preparedness Training Center (NDPTC) are developing a new non-invasive tool known as the Rapid Integrated Damage Assessment (RIDA). The goal of this tool is to alleviate the tension between time, damage data, and local recovery needs through innovative machine learning applications. To do so, the RIDA model integrates whole image classification through machine learning algorithms to efficiently analyze household- or building-level damage. In this application, the use of machine learning helps (1) promote the rapid capture of perishable street-level data, (2) analyze damage severity quickly, and (3) reduce local burdens for assessment.

2.2 The NDPTC Model

As it stands, the NDPTC RIDA model uses the latest version in a series of object detection models known as YOLOv5. YOLOv5 is a machine learning algorithm that reviews data such as street level imagery to detect objects within the data. The purpose of integrating any machine learning algorithm into damage assessments is to preserve and capture on-the-ground data of damage and to assess severity levels. Further, the specific purpose of YOLOv5 here is to capture and store street level imagery while detecting damaged structures.

YOLOv5 balances fast processing speeds with high accuracy. The YOLOv5 model analyzes the data to detect damage on a scale of no damage, moderate damage, and severe damage. Researchers at the NDPTC use these three categories to train the machine learning algorithm without defining or describing classification categories. Each image is labeled according to perceived damage level based on information from the entire photo, and the YOLOv5 model learns to detect damage from this whole image classification, inconsistencies included. After the model assesses damage severity levels based on annotated whole images, the results can be communicated to local communities, disaster response professionals, and federal organizations more quickly than manual assessment allows.

2.3 Alternative Models for Machine Learning

Although YOLOv5 is the machine learning model currently employed in RIDA, it is not the only algorithm that can be leveraged. Other popular machine learning options include Grad-CAM, Mask R-CNN, and Lobe.ai. Each relies on learning techniques different from whole image classification.

Grad-CAM (Image 2.1)
Grad-CAM, also known as Gradient-weighted Class Activation Mapping, uses pixelated color gradients of objects or regions to detect their location and/or classification. While Grad-CAM has been deployed in many instances for localization and classification, there are no accessible instances of its application to street-level detection. Grad-CAM appears to be used most frequently with small objects such as medical x-rays and animals.

Image 2.1: Example of Grad-CAM: Application of pixel gradient maps on houses. Credits to Georgia Institute of Technology and Facebook AI Research.

Mask R-CNN (Image 2.2)
The Mask R-CNN model is one of the most robust machine learning algorithms for instance segmentation, as well as classification and localization. Mask R-CNN has extended the usability of other popular networks by "predicting an object mask in parallel with the existing branch for bounding box recognition." The Mask R-CNN model can be trained to detect damage by retailoring introductory tutorials and sample algorithms. For example, one Mask R-CNN algorithm detects and masks street-level data such as cars and houses in videos. While many Mask R-CNN applications use binary classification and multi-classification to detect distinctly different objects in an image, there are no current models that replicate the nuances of disaster damage detection. Building a scalable model that incorporates the intricacies of damage detection is resource dependent and time consuming.

There are also developmental setbacks, including annotation and training speeds, that hinder the application of Mask R-CNN. Mask R-CNN online annotation platforms are moderately time consuming, taking roughly two hours to properly annotate, label, and download a dataset of only 100 images. Training speeds for Mask R-CNN also tend to be much longer, averaging 5 frames per second (fps). For reference, YOLOv5 trains at a rate of 140 fps, which means it processes nearly 30 times more data per second than Mask R-CNN. Despite low fps rates, Mask R-CNN can be pre-trained for future application: researchers at the NDPTC can prepare a model's algorithm beforehand to share later. Therefore, more exploration of testing speeds, rather than training speeds, on a pretrained Mask R-CNN model is necessary. Instance segmentation overall can increase data capture and generate faster insights on damage severity given a robust pre-trained model. For rapid training, deployment, or development, however, there are notable barriers compared to more basic, simplified models.

Image 2.2: Example of Mask R-CNN: Segmentation of historic buildings of the City of Merced. Credits to Alberto Valle, Anais Guillem, and David Torres-Rouff, PhD.
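To illustrate the whole image classification scheme described in Section 2.2, a detection model can be given a single severity label per photo by writing that label as one YOLO-format box spanning the entire frame. The helper below is a hypothetical sketch, not the NDPTC's actual annotation code; the class names and file naming are assumptions made for the example.

```python
# Hypothetical sketch: turning whole-image severity labels into
# YOLO-format annotations ("class x_center y_center width height",
# all normalized to [0, 1]). A box covering the full frame makes a
# detector behave like a whole image classifier.

SEVERITY_CLASSES = {"no_damage": 0, "moderate_damage": 1, "severe_damage": 2}

def whole_image_annotation(severity: str) -> str:
    """Return a one-line YOLO label whose box spans the entire image."""
    class_id = SEVERITY_CLASSES[severity]
    # Box centered at the image midpoint, with width and height of 1.0.
    return f"{class_id} 0.5 0.5 1.0 1.0"

def write_labels(images: dict) -> dict:
    """Map image filenames to the label-file contents a YOLOv5-style
    dataset expects (one .txt file per image)."""
    return {name.rsplit(".", 1)[0] + ".txt": whole_image_annotation(sev)
            for name, sev in images.items()}

# write_labels({"house_001.jpg": "severe_damage"})
# -> {"house_001.txt": "2 0.5 0.5 1.0 1.0"}
```

Because every pixel sits inside the full-frame box, the model trained this way learns from the whole scene, which is exactly the strength and the weakness of whole image classification discussed below.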
Lobe.ai
One last platform worth noting is Lobe.ai, an application that runs two machine learning algorithms simultaneously to improve the model's speed and accuracy (MobileNetV2 and ResNet-50V2, respectively). Developing a model on Lobe.ai begins with uploading a training dataset and labeling images via image classification. Lobe.ai continuously runs and updates the model throughout the annotation process. Its image augmentation includes adjustments to the brightness, contrast, saturation, hue, rotation, zoom, and noise of images. Since training datasets can contain hundreds or thousands of images, human mistakes during the classification of images may occur. Lobe.ai's user interface allows for easy review and analysis of those mistakes, and users can reassess misclassified images even during model training. Machine learning models can be exported from Lobe.ai to no-code apps, such as Microsoft's Power Platform, or as Python-based notebooks. Lobe.ai is currently in beta development and only includes image classification, but it will release object detection models in the future.

Whole Image Classification
The YOLOv5 model analyzes entire images and labels data using whole image classification. However, images contain much more data than is represented by a single label. If an algorithm is trained on whole images, then each pixel in the photo contributes to the algorithm's learning. This means that a YOLOv5 algorithm trained on whole images will detect damage severity levels based on all of the contents of an image. Street level images in particular capture data beyond the building, including other objects such as nearby forestry, shrubs, front yards, the sky, and vehicles. Therefore, machine learning decision making via whole image classification may inflate or deflate key data points outside the scope of structural damage. In the case of Hurricane Ida, the YOLOv5 model was unable to accurately detect damage levels of stilted houses, likely due to such influences in the training data.

Bounding Box (Image 2.3)
The YOLOv5 algorithm is adaptable and can also learn to detect objects within a photo using the bounding box method. This method localizes the data inputs through user-drawn and labeled boxes. The algorithm's training inputs are no longer an entire image, because the bounding box method extracts only specified portions of the image for input. In this instance, structural damage to buildings and homes can be exclusively extracted and fed into a model given a bounding box around the object. In Image 2.3, YOLOv5 detected a moderate level of damage to the structure. This determination was made because the machine learning algorithm only uses the data inside the bounding box for detection and classification.

Image 2.3: Example of Bounding Box: Implementation of the bounding box method compared to whole image classification.

Platforms
In addition, free online platforms aided the process of developing and deploying YOLOv5 by enhancing the reliability of our methods and our ability to test hypotheses. Of the few platforms available, the highly accessible ones aided the machine learning process through collaborative annotation methods, succinct storage of data and images, and/or browser-based coding. Platforms like Roboflow allow cohesive annotation and dataset creation with potential extrapolation to different algorithms. Roboflow is a browser-based platform that encourages collaboration to rapidly assemble datasets for machine learning. Google Colaboratory is a platform that hosts many coding languages and can be run in a web browser rather than as a downloadable application. Its user interface provides seamless access to strong computational power (GPUs) without any downloads. Altogether, accessible platforms with strong user interfaces increased overall operating and testing speeds, all while increasing the replicability of our methods. There are drawbacks to relying strictly on browser-based platforms, such as the interconnected nature of the coding: each platform must grow and develop in tandem with the others, as each piece is essential to the overall machine learning pipeline. When one node changes, the process stops working. Therefore, we also caution that open, public platforms may change much faster than implementations of these tools.

2.4 Further Considerations

For other researchers interested in developing a machine learning damage assessment model, there are a few overarching considerations that contributed to the utilization of YOLOv5 over the other described platforms. The key takeaways for a scalable model include the ability to adapt an algorithm, platform accessibility and collaboration, and customization/replication. Video and written tutorials bridged the gap on machine learning coding as well.

2.5 In Application and Practice

FEMA's PDA as Annotation Framework (Image 2.4)
The levels of damage assessment outlined by FEMA's Preliminary Damage Assessment (PDA) Guide (see Image 2.5) provide the foundation for street-level machine learning annotations in two primary ways. Firstly, each severity level of damage (i.e., affected, minor, major, destroyed) includes clear instructions on categorizing assessments of both manufactured homes and conventionally built homes. Secondly, the terminology of damage assessment classification in the PDA is the primary source for communicating incident impacts, which contributes to Presidential disaster declaration decisions. One example of classifying a house as having "minor" damage in a non-flood event is "nonstructural damage to roof components over essential living spaces (e.g., shingles, roof covering, fascia board, soffit, flashing, and skylight)." Incorporating strict guidelines for classifying and annotating images helps reduce the number of cognitive biases introduced to a dataset. In the event of a Presidential disaster declaration, more resources are made available through Federal funding to assist in recovery efforts.

Image 2.4: FEMA PDA Categories: There are four categories (Affected, Minor, Major, Destroyed) that classify severity of damage. Credit to Federal Emergency Management Agency.
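To make the bounding box method described above concrete, the helper below converts a YOLO-format annotation (normalized center coordinates plus width and height) into pixel corners that could be used to crop the annotated structure out of a street level photo. This is an illustrative sketch, not code from the RIDA model.

```python
def yolo_box_to_pixels(box, img_w, img_h):
    """Convert a YOLO-format box (x_center, y_center, width, height,
    each normalized to [0, 1]) into integer pixel corners
    (left, top, right, bottom) suitable for cropping."""
    xc, yc, w, h = box
    left = int((xc - w / 2) * img_w)
    top = int((yc - h / 2) * img_h)
    right = int((xc + w / 2) * img_w)
    bottom = int((yc + h / 2) * img_h)
    return left, top, right, bottom

# A box around a house filling the right half of a 1000 x 800 image:
# yolo_box_to_pixels((0.75, 0.5, 0.5, 1.0), 1000, 800) -> (500, 0, 1000, 800)
```

Restricting training inputs to the pixels inside these corners is what keeps objects outside the box, such as trees and vehicles, from influencing the damage classification.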
Model Testing on Hurricane Ida
The adaptation of a pre-trained YOLOv5 model designed for damage detection in early recovery was tested on a recent natural disaster. If machine learning for damage detection is to be deployed post-disaster in early recovery, then incorporating and testing a model on recent natural disaster imagery is one way to observe its utility.

Hurricane Ida, a Category 4 hurricane, made landfall in Louisiana on August 29th, 2021. Data collection and capacity research was conducted to observe the effects of early recovery on communities and to identify improvements to the RIDA model's deployment. Just five months after the disaster, organizations and residents in the area were focused on recovery: rebuilding and repairing homes, finding more permanent solutions, and restarting local economies. In this recovery phase, the RIDA model could have enabled people through the FEMA aid process or insurance claims. Visiting the region at this point allowed researchers to take advantage of hindsight, asking the question, "how can we improve recovery?", while the event was still fresh in the minds of community members.

Image 2.6: Data from St. Charles Parish, Louisiana. Credits to NDPTC.

Collecting Imagery from Multiple Sources and Events
Collecting natural disaster damage assessment imagery from multiple sources equalizes the frequency of damage assessment classifications in machine learning training datasets. Images obtained from a single source, such as conventional news media outlets, are designed to tell a compelling story of a natural disaster event. In this case, the likelihood of overrepresenting the more severe damage assessment categories is higher because the most compelling story is where the most damage occurs. In the article Damage Assessment from Social Media Imagery Data During Disasters, the authors provide evidence of increasing machine learning accuracy, precision, and recall by combining images from Google searches and multiple events of the same type (the Nepal 2015 and Ecuador 2016 earthquakes). To increase machine learning model metrics and create a dataset with equal representation of damage assessment categories, the collection of images for this exploratory research model included:

- Social media platforms (Twitter)
- Open-source databases (Crisis NLP)
- Google Images
- Stock photography websites
- NDPTC field visits (see Image 2.6)
- A University of Michigan field visit
- Conventional local and national media sources for natural disaster reporting

Inspired by the research paper Damage Assessment from Social Media Imagery Data During Disasters and a research inquiry by the National Disaster Preparedness Training Center, this research dataset also includes imagery from different disaster events. The training dataset includes images from multiple hurricanes, earthquakes, and tornados, both internationally and within the United States. Damage assessment photos from wildfires are excluded from the dataset due to the overrepresentation, in internet-based sources, of images with live fires present. Including more types and quantities of events in a dataset increases the chances that the training images will have an equal representation of classifications. A machine learning model trained only on a Category 5 hurricane will have high precision when categorizing homes with severe damage but will not perform well at identifying the lower levels of damage seen in weaker storms. Additionally, a model trained on images from a natural disaster in Louisiana will likely underperform if tested on images from another country, because the difference in building architecture is not represented in the training dataset. Multiple events can be combined in different ways to test whether certain elements of disaster damage from some events mimic other natural disasters. More importantly, testing multiple natural disaster imagery configurations can help identify the most precise model for street level damage detection.

Image 2.5: The federal emergency response agency's damage assessment guidelines. Credit to Federal Emergency Management Agency.

Programs, Processing, and Platforms

To make the tool accessible, the NDPTC should leverage pre-existing data platforms. These platforms should be highly accessible to any local planning or emergency management office. Platforms that are free of cost, user friendly, and browser/internet accessible are notable ways to ensure receptiveness. Listed below are some entry-level platforms that could easily be used during training of machine learning processes.

ROBOFLOW

Collaboration and Configurations
Researchers or planners must label images according to what is being detected or classified for machine learning algorithms to understand the data. Roboflow is an online annotation platform that is free and browser based (see Image 2.7). The platform allows multiple collaborators to upload individually collected photos regardless of format (.jpg, .png, etc.). The Roboflow processes take in a variety of data and produce a downloadable or accessible dataset in a variety of formats. The ability to produce different formats allows the dataset to be integrated into different algorithms, especially YOLOv5. Additionally, the pooled imagery can be divided among collaborators and researchers for annotation purposes, allowing cross collaboration on dataset creation. With the ability to upload multiple imagery sources, different natural disaster dataset configurations of considerable size can also be created and hosted on Roboflow, and new imagery can be integrated into pre-existing datasets when a natural disaster occurs.
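The equal representation of classifications described above can be spot-checked with a few lines of code before training. The sketch below is illustrative (the counts are invented for the example) and mirrors the kind of summary Roboflow's dataset health check provides.

```python
from collections import Counter

def class_balance(labels):
    """Summarize how evenly damage classes are represented. Returns
    each class's share of the dataset and the imbalance ratio
    (largest class count / smallest); near 1.0 means roughly balanced."""
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {cls: round(n / total, 3) for cls, n in counts.items()}
    ratio = max(counts.values()) / min(counts.values())
    return shares, ratio

# A media-sourced dataset skewed toward dramatic, severe damage:
labels = ["destroyed"] * 60 + ["major"] * 25 + ["minor"] * 10 + ["affected"] * 5
shares, ratio = class_balance(labels)
# ratio == 12.0: "destroyed" appears twelve times as often as "affected"
```

A check like this makes the single-source bias concrete: pooling field visits, social media, and open databases is what pushes the ratio back toward 1.0.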
The combination of imagery from multiple disaster events enhances precision and recall for street level machine learning algorithms. For implementation purposes, we recommend that entry-level machine learners, such as local emergency responders, planners, and others, leverage Roboflow to enable various configurations of model scaling for increased precision. Roboflow also gives a multitude of local recovery professionals the ability to create and contribute to datasets, access or utilize dataset configurations, and train and test algorithms with ease.

Image 2.7: Overarching view of the Roboflow API for dataset creation. Credit from Roboflow.com.

PREPROCESSING AND AUGMENTATIONS

Roboflow enables image pre-processing and augmentation, in other words imagery alterations. Annotation platforms generally offer limited levels of alteration, whereas Roboflow offers a wide variety. Pre-processing and augmentation shape what machine learning algorithms understand from data, and how. The steps Roboflow offers include resizing, which either shrinks or expands an image. This step is helpful both for training speeds and for datasets with a variety of image sizes, though resizing can skew images and data, potentially to such extremes that it detrimentally impacts the outcomes. While there are a myriad of image alterations, the field asserts that these steps increase algorithmic precision; however, there are unclear standards for best practices. This is because how the model learns, and which processing enhances detection, change with the application of machine learning. For the purposes of damage assessment from street level machine learning models, there are a few imagery alterations that may align with damage detection goals.

Horizontal flip: This step flips or inverts the image. A machine learning algorithm can be trained on images of houses and structures with different orientations for better detection and classification.

Auto-contrast: This step enhances pixel contrast. A damage detection algorithm, like other imagery algorithms, uses contrast to improve its ability to recognize boundaries and lines.

Image resize: This step alters image size. Damage detection images can vary in size, from phone cameras to social media to on-the-ground cameras, so resizing allows the machine learning algorithm to understand all of the data in a consistent manner while also making training and testing faster.

Shearing (+/- 15°): This step distorts the image horizontally to mimic real-world data capture. Street level cameras used by organizations such as Mapillary or Google tend to have a warped or distorted view that the shearing feature mimics. This step strengthens the model by exposing it to different types of imagery.

These augmentation and preprocessing techniques can also reduce model accuracy and precision, so these steps should be constantly evaluated for best model performance. Since Roboflow allows multiple enhancements, researchers can generate multiple datasets and train or test based on the alterations.

Integration with YOLOv5
In further support of Roboflow utilization for damage detection, the designers of the YOLOv5 model have strategically aligned their coding processes with Roboflow's Application Programming Interface (API) for a seamless transition from dataset annotation to training. Other annotation programs, such as VGG Image Annotator (VIA) or Labelme, are applicable to object detection with bounding box methods. However, Roboflow directly integrates into YOLOv5 notebooks and prevents minor coding errors such as file misstructuring (.json, .csv, .txt). Lastly, cloud-based notebooks allow for use without the need to download datasets to local drives, which reduces the risk of error.

GOOGLE COLABORATORY

A recommended platform for YOLOv5 implementation is Google Colaboratory notebooks. Google Colaboratory is a browser-based, free platform that allows users to execute code alongside rich text in a single space. The integration of code and text allows template YOLOv5 notebooks to be organized and pre-coded. Organizing a notebook gives multiple users access to the code while also conveying what each execution entails. For example, the customizable notebook for YOLOv5 contains rich text, images, and gifs that explain everything from the annotation processes all the way to training deployment.

Not only do Google Colaboratory notebooks allow collaboration and customization, the platform also runs code on Google's cloud servers. This means that the speed at which code runs is remotely managed, allowing users to utilize faster graphics processing units (GPUs). With customization and processing speed, Google Colaboratory takes coding machine learning algorithms a step further. Machine learning developers create and share Google Colaboratory notebooks for replication of methods. This means that users can essentially copy and paste entire pre-built guidelines, minimally changing or altering just a few lines of code. This is the case for YOLOv5, which has a series of extremely digestible pre-built notebooks that run without error and consistently perform at fast rates.

The burden of organizing machine learning algorithms and their corresponding code is significantly reduced by formatted and organized Google Colaboratory notebooks, cloud-based processing, and pre-built guidelines. The integration of YOLOv5 and Roboflow into Google Colaboratory notebooks streamlines machine learning processes for faster, more robust experiments and applications.

The decision to operate and experiment with the YOLOv5 framework is not an easy determination. As discussed previously, there are opportunities for other algorithms not only to detect damage but to detect, classify, and mask other pertinent indications of damage. With street level damage imagery being captured, other algorithms could produce insight into not only the severity of damage but also its variety. YOLOv5 is fast and accessible, while also producing strong results on accuracy metrics such as recall. Instance segmentation annotations are a new feature in Roboflow and, if paired with a tutorial or custom notebook, could allow the NDPTC to develop targeted damage detection. This includes the ability to distinguish roof damage from structural damage, or even to detect debris, property, and landscaping damage. Segmentation could identify damage through the capture of other data points such as the materiality of a structure, its height or number of levels, and its elevation above sea level. Perhaps machine learning can supplement that data collection and enhance damage detection simultaneously.

Altogether, YOLOv5 is a standout machine learning algorithm that can adequately adapt to local capacities post-disaster to produce accurate, fast results. YOLOv5 is accessible and integrates annotation into its notebooks for customization and accessibility.
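The shearing augmentation described earlier in this section can be pictured as a simple coordinate transform: every pixel shifts sideways in proportion to its height, which is what produces the warped look of vehicle-mounted street level imagery. The sketch below shows the underlying mapping for a single point; it is an illustration, not Roboflow's implementation.

```python
import math

def shear_point(x, y, degrees):
    """Apply a horizontal shear to one point: x' = x + tan(angle) * y.
    Limited here to the +/- 15 degree range used for augmentation."""
    if not -15 <= degrees <= 15:
        raise ValueError("shear outside the +/- 15 degree range")
    return x + math.tan(math.radians(degrees)) * y, y

# At 15 degrees, a point 400 pixels up shifts sideways by about 107
# pixels, while points on the bottom row (y = 0) do not move at all.
```

Applying this mapping to every pixel of a training image yields the distorted-but-recognizable views the model must learn to handle.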
As it The creators of the YOLOv5 algorithm are “shearing” images or rotating them +/- 15°, which stands, RIDA has the potential to bridge the gap Image 2.8: Roboflow Health Check: Totoal raining images continually transparent with their improvements, per classification catergory. Credit from Roboflow.com. is selected to mimic real-world conditions. between the data science and planning fields. To modifications, and methods. The algorithm can be bring that potential to light, the following steps run using cloud based processing speeds which Lowering the Barriers to Machine Learning should be taken to ensure proper use of the tool eradicates the requirement for individual users data representation at each step is balanced. In The growth of programming-less machine during the machine learning steps of the process. to operate or download multiple softwares and Image 2.8, our dataset has notable variations of learning programs, such as Lobe.ai, can also platforms. Google Colaboratory also has tutorials classification sizes which is less than preferable as lower the barriers to entry into machine learning Pool and Share Data that are easily navigable for an entry-level described above in Section 2.5. With the curation to the point that rapid adoption and progression The ability to share and leverage pre-existing practitioner. However, as mentioned previously, of a widespread disaster damage dataset, these of application techniques for artificial intelligence resources makes the production and training of while the YOLOv5 algorithm is reliable for the categories can be evened out over time. in disaster relief can become commonplace. In machine learning processes faster, and more NDPTC project, we strongly recommend continued a brief experiment, annotation and training of a 2.7 Conclusions importantly, more accurate. 
From research exploration of more precise machine learning machine learning model capable of categorizing articles or actual applications of machine algorithms for street level damage assessment damage assessment following FEMA’s PDA took learning, the integration of data that represents such as Mask R-CNN and Lobe.ai. The Case for Iterative Model Design a fraction of the time to develop compared to and documents various disaster related The ability to iterate on model design allows YOLOv5. Lobe.ai’s damage assessment model’s damage, housing typologies, level of damage, In the field, there are a few available solutions to machine learning researchers to better adapt observed accuracy is 93%, while the highest and in general a variety of imagery, enhances continuous training. The first potential solution to changing conditions of natural disasters and accuracy of a YOLOv5 model observed through the models accuracy. To achieve data sharing, is utilizing a machine learning designed end-to- reflect upon model performance and community this research is 70%. organizations can provide a host of open-sourced end platform. An end-to-end platform starts with representation. As this is an academic project in and public materials. These materials can include: processing multiple datasets using pre-existing understanding how artificial intelligence can aid Based on research and experimentation, damage or custom data parsers. Then within the same disaster recovery, the initial exploratory analysis assessment as conducted through machine (1) a continuously growing dataset on general process, the code can run various algorithms of data collection, annotation methods, and learning practices must continue to iterate on infrastructural damage including YOLOv5 or Mask R-CNN, to train images machine learning model selection mimicked (2) annotation protocols designed for federal in one succinct process. 
The inspiration for an end-to-end machine learning platform was driven by the need to instill continuous learning and experimentation protocols, but also by the need to organize each machine learning step into one coding narrative.

Audit and Monitor

To train and deploy models for testing, there are several qualitative metrics that determine model performance to review. It is important to include a coherent method for review to audit the model's performance both in training and in application, and there are important metrics that should be evaluated at both steps. The metrics related to training a model are mean average precision (mAP) and recall. Even the training data itself can be monitored using Roboflow's health check. Checking these metrics ensures the model performs consistently before it is deployed.

Selecting preprocessing edits and augmentations to allow for more robust training datasets should be based on the model's deployment phase. As discussed, preprocessing alters images by rotating, flipping, adding contrast, or cropping to provide more training data inputs for a model. A model with three preprocessing modifications, for example, can train on several variants of every captured image.

Image 2.9: Shearing Example. This picture is from the 360 imaging company NCTech's vehicle-mounted iSTAR Pulsar camera. The photo demonstrates potentially distorted images for machine learning unless the model is trained on these distortions, known as shearing. Credit to GIS Lounge: https://www.gislounge.com/next-generation-asset-management-with-istar-pulsar/

Those interested in damage assessment assistance through machine learning, such as the NDPTC, should recognize their position as a liaison between local and federal actors. Large scale organizations or locally embedded recovery professionals can act as the host for materials, tools, processes, and data. Disaster recovery and preparedness organizations benefit from data pooling and storage because it increases damage assessment model accuracy and transferability. Altogether, sharing and pooling data helps eradicate the noted barriers of perishable data by directly sourcing disaster data and its methods, while decreasing the barriers to the data-driven tool.

For now, general findings identify the YOLOv5 model as the most accessible in terms of the availability of tutorials and supporting software, such as Roboflow. Though pre-coded notebooks are freely available for running YOLOv5, some programming experience is required to understand the complexities of operating machine learning algorithms. To access more information on how to build your own model, there is a customized YOLOv5 template notebook in Google Colaboratory with Roboflow integrations. To assist with knowledge transfer, data sharing, and tools going forward, there are supplementary videos and the "Basics of Machine Learning" paper that assist in development. All of the work is hosted on the University of Michigan Capstone website and corresponding GitHub repository.
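The audit metrics named above, recall and the precision values that mean average precision (mAP) summarizes, can be made concrete with simple detection counts. A toy sketch with invented numbers, not results from any RIDA or YOLOv5 run:

```python
def precision(tp, fp):
    """Fraction of predicted damage detections that were correct."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    """Fraction of actual damaged structures the model found."""
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical validation run: 70 correct detections (true positives),
# 30 false alarms (false positives), 30 damaged structures missed
# (false negatives).
tp, fp, fn = 70, 30, 30
p = precision(tp, fp)  # 70 / 100 = 0.7
r = recall(tp, fn)     # 70 / 100 = 0.7

# mAP extends this idea: precision is averaged across confidence
# thresholds, then across classes (e.g. damage severity levels).
```

For damage assessment, recall is the metric to watch closely: a missed damaged structure is a household that may never be reported for aid.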
For emergency managers, community organization directors, and other recovery personnel, the barriers to entry into machine learning models for damage assessment are much too high for practical adoption. At this stage in the development of artificial intelligence for disaster recovery, the benefits of integrating machine learning into preliminary damage assessments for rapid deployment are not yet visible. When a disaster strikes a community, it is often too late to learn and implement new tools into an overly complex recovery process. Machine learning tools for disaster recovery must be developed in anticipation of deployment.
Going Forward

Altogether, street level machine learning stands as a growing data-driven tool that reduces assessment delays through improved data collection and imagery analysis. This paper is far from comprehensive in regard to machine learning development; however, it comments on general industry trends, from annotation platforms and protocols to useful machine learning algorithms. These considerations directly contribute to the design and development of damage assessment models in early recovery, including potential environments for bias. To learn more about machine learning bias, see "Social Bias in Machine Learning and Early Recovery." Nevertheless, for local disaster response professionals who are interested in the reduction of assessment bias or local capacity burdens, machine learning using accessible interfaces can streamline those processes and offer enhanced insight on damage.

ENDNOTES

1. Selvaraju, R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2019). Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. International Journal of Computer Vision, 128(2), 336-359. https://doi.org/10.1007%2Fs11263-019-01228-7

2. Pasa, F., Golkov, V., Pfeiffer, F., Cremers, D., and Pfeiffer, D. (2019). Efficient Deep Network Architectures for Fast Chest X-Ray Tuberculosis Screening and Visualization. Scientific Reports, 9(1), 6268. doi:10.1038/s41598-019-42557-4

3. He, K., Gkioxari, G., Dollár, P., and Girshick, R. (2017). Mask R-CNN. IEEE International Conference on Computer Vision (ICCV), 2980-2988. doi:10.1109/ICCV.2017.322

4. Metallo, N. (2017). Using Mask R-CNN in the streets of Buenos Aires. Medium. Accessed on April 24, 2022. https://medium.com/@nicolas.metallo/using-mask-r-cnn-in-the-streets-of-buenos-aires-a6cb6509ca75

5. LabelMe. MIT Computer Science and Artificial Intelligence Laboratory. labelme.csail.mit.edu/Release3.0/

6. Zhang, Q., Chang, X., and Bian, S. (2020). Vehicle-Damage-Detection Segmentation Algorithm Based on Improved Mask RCNN. IEEE Access. doi:10.1109/ACCESS.2020.2964055

7. Nelson, J. (2020). YOLOv5 is Here: State-of-the-Art Object Detection at 140 FPS. Roboflow. Accessed on May 1, 2022.

8. FEMA. (2020). Preliminary Damage Assessment Guide (p. 127).

9. FEMA. (2020). Preliminary Damage Assessment Guide (p. 127).

10. FEMA. (2020). Preliminary Damage Assessment Guide (p. 127).

11. Cherry, K. (2020, July 19). What Is Cognitive Bias? Verywell Mind. https://www.verywellmind.com/what-is-a-cognitive-bias-2794963

12. Nguyen, D. T., Ofli, F., Imran, M., and Mitra, P. (2017). Damage Assessment from Social Media Imagery Data During Disasters. Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 569-576. https://doi.org/10.1145/3110025.3110109

13. Nguyen, D. T., Ofli, F., Imran, M., and Mitra, P. (2017). Damage Assessment from Social Media Imagery Data During Disasters. Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 569-576. https://doi.org/10.1145/3110025.3110109

14. Dutta, A., Gupta, A., and Zisserman, A. (2019). The VIA Annotation Software for Images, Audio and Video. In Proceedings of the 27th ACM International Conference on Multimedia. https://doi.org/10.1145/3343032.3350535. https://www.robots.ox.ac.uk/~vgg/software/via/via_demo.html

15. LabelMe. MIT Computer Science and Artificial Intelligence Laboratory. labelme.csail.mit.edu/Release3.0/
about this project

This project is a joint effort by students and faculty within the Master of Urban and Regional Planning program at the University of Michigan and the National Disaster Preparedness Training Center (NDPTC), conducted as a Capstone project for the Winter 2022 semester. A key focus of the University of Michigan team is to work in a manner that promotes the values of equity, valuing local voices, transparency, and honesty. As a result, the outcomes of this capstone aim to speak to both our collaborators at the NDPTC and the local communities impacted by disasters across the United States. Our responsibilities as researchers also include the implementation and/or recommendation of innovative solutions to issues surrounding machine learning, damage assessments, prioritization determinations, and social infrastructure networks.