
IEL Stormwater Drone Lab: Introduction

☆ For this lab, a lab user name was created for you and set to YourLabName. This name will be referenced later in the lab.

Welcome back! This lab is the third in a series of labs designed to demonstrate how several IBM technologies, such as OpenShift (Kubernetes), Hybrid Multicloud, and some of the key capabilities available within Maximo Application Suite (Monitor, IoT Platform, Visual Inspection) can be utilized along with modern industry devices, such as drones and sensors, in various industry-specific applications.

In previous labs, you deployed a customized, containerized application onto OpenShift and then connected and utilized various capabilities made possible by IBM Cloud Satellite. If you have not completed the previous labs, please return to the Technology Zone (https://techzone.ibm.com/collection/dronelab1) and request access to Remote Drone Console Application Hands-on Lab #1, and then Remote Drone Console Application Hands-on Lab #2 Satellite (https://techzone.ibm.com/collection/dronesatlab2).

If it has been a significant amount of time since you have completed the labs, you can review the guide for Drone Lab 1, located here: https://ielgov.github.io/dronelab1 for your reference. You may also refer to the personalized guide for Drone Lab 2 that was previously emailed to you.

Please confirm below that you have completed the previous labs and are ready to start this lab.

I have (at least) completed Drone Lab 1 and 2 and am ready to begin this lab, IEL Stormwater Drone Lab.

The diagram on the right depicts a high-level flow and services used in this lab.

Table of Contents

1. Introduction and Overview
2. Understand how Maximo Visual Inspection can be trained to detect objects within a video/image
3. Walk through how Maximo Monitor and IoT Platform can bring insights from sensors
4. Explore Stormwater Management Operations in the Remote Drone Console Application
5. Summary & Contacts


Lab Overview

The purpose of this lab is to consider how extending the capabilities of the Remote Drone Operations Console that you worked with in Lab 1 and Lab 2 can help deliver functionality within an industry context of Stormwater Operations. You may remember that the previous labs resulted in an application running in OpenShift within IBM Cloud that was connected to drones, sensors, and other cloud resources running in the Industry Engineering Lab private cloud. The purpose of that application was to allow a user to see video output and telemetry data from a remote drone, as well as to visualize and filter sensors and other resources within the same geospatial area. In this lab, you will take that application a step further and bring additional insights to a user concerned with public safety during and after heavy rainfall, when water can accumulate in areas where pedestrian and vehicular traffic may become dangerous.

Consider for a moment the challenges that cities today face to effectively mitigate and respond to excess stormwater on city roadways during and after heavy or prolonged rainfall. The stakes are high, as flooded roadways can impact traffic as well as citizen safety.

Unfortunately, it is not uncommon for a city's command center operators to resort to making decisions without easy access to information and tools that can provide full situational awareness needed to assess, prioritize and respond to these stormwater flooding issues. In fact, many cities use manual processes to dispatch, track, and support their field responders who need to perform actions at the location of each given issue. Client-facing efforts around emergency management, public safety, traffic management, and stormwater operations management have clearly shown that operators need decision-support analytics and situational awareness that can simplify access to information and insights.

In the context of this Remote Drone application, you will learn how portfolio capabilities within the IBM Maximo Application Suite such as Visual Inspection and Monitor provide analytic uplift on raw drone video footage and weather/stream sensor readings that can be leveraged to deliver insight directly to the fingertips of the Stormwater Operations Management command center operator. Specifically, you’ll get exposure to how visual analytic models can be trained and deployed within a Maximo Visual Inspection (MVI) environment based on a preliminary dataset of images or video and made available to external applications via an API endpoint. You'll even be able to test the deployed model via a custom user interface running in the IBM Cloud. Then, you will walk through how weather and water sensors can be managed and monitored via Maximo Monitor and the associated IoT Tool (Watson IoT Platform) to trigger alerts based on abnormal measurements. Finally, you’ll revisit the Remote Drone Console application to experience the new Stormwater Operations functionality where you’ll interact as a command center operator who is using the interface to remotely monitor the impacts of a severe thunderstorm in Coppell, TX. As a takeaway, you will be able to consider how visual and sensor analytics, when combined with insights from weather and other city data in a geospatial context, can help operators gain the kind of situational awareness that allows them to prioritize and coordinate their responsive actions.

More about IBM Maximo Application Suite

The IBM Maximo Application Suite (MAS) is a single, integrated set of application offerings focused on asset management. The suite is built on Red Hat OpenShift to provide multicloud portability, including support for hybrid cloud scenarios. View full offering information here: https://www.ibm.com/products/maximo.



Given the maturity of the product, MAS also supports industry-specific add-on products, including the Maximo Asset Management industry solutions:
- IBM Maximo for Utilities
- IBM Maximo for Oil and Gas
- IBM Maximo for Nuclear Power
- IBM Maximo for Transportation
- IBM Maximo for Aviation
- IBM Maximo for Civil Infrastructure

You can also visit the 2021 FastStart technical enablement for the latest version (8.3) of Maximo Application Suite here.

In this lab, we will focus on the following Maximo Application Suite v8.2 capabilities:

- Maximo Monitor: Improve asset and operational availability with advanced AI-powered remote asset monitoring at scale. Collect data from your existing IoT systems, converge your IT and operational systems in a single data lake, and detect anomalies. Maximo Monitor depends on the underlying IoT Tool (Watson IoT Platform) to provide enterprise-wide visibility into performance.

- Maximo Visual Inspection: Use the power of AI computer vision to build highly accurate customized AI models in a way that’s fast and remarkably easy. Perform a visual inspection of the line or asset using existing cameras, or commercial, off-the-shelf iOS devices to get immediate, actionable notifications of any emerging issue. Scale easily to view multiple points 24/7 including global views of all plants and geographies. Integrate with maintenance and quality workflows for a fast and prescriptive response.

For reference, the remaining Maximo Application Suite v8.2 capabilities are explained below:

- Maximo Manage: Reduce downtime and costs by optimizing asset management and maintenance processes to improve operational performance. Leverage embedded industry expertise with best-practice data models and workflows to accelerate your industry transformation. Unify asset management processes using role-based workspaces to help teams across your enterprise. Maximo Manage unifies robust asset life cycle and maintenance management activities, providing insight into all enterprise assets, their conditions and work processes to achieve better planning and control.

- Maximo Health: Manage the health of your assets using IoT data from asset sensors, asset records and work history to increase asset availability and improve replacement planning. Get a true view of asset health via dashboard displays to provide evidence to base operational decisions.

- Maximo Predict: Go beyond time-scheduled maintenance to condition-based action to predict the likelihood of future failures by applying machine learning and data analytics to reduce cost and asset failures. Build on the power of other Maximo capabilities and Watson Studio to make data-driven decisions and build predictive models.

Lab Environment Setup

When you made a request to take this hands-on lab, there were some background tasks that were performed on your behalf in order to prepare the lab environment, which are described below. Don't worry if you have some questions about these concepts... you will get a lot more explanation and hands-on experience as you move through lab steps.

Created a Maximo Visual Inspection (MVI) environment within IEL Private Cloud
Provisioned and installed Maximo Visual Inspection v8.0 (as part of Maximo Application Suite 8.2) within an OpenShift Container Platform 4.6 environment, with one worker node allocated on a bare metal machine containing a single Nvidia Tesla P100 GPU (graphics processing unit). We then created, trained, and deployed a custom object detection model relevant to stormwater management operations. To do this, we also took drone footage near the IEL location in Coppell, TX to be used throughout the lab.

Created a Maximo Monitor environment within IEL Private Cloud
Provisioned and installed Maximo Monitor version 8.0 and the IoT Tool (Watson IoT Platform), as part of Maximo Application Suite 8.0, within an OpenShift Container Platform 4.4 environment. We then created a series of sample sensors, based on project experience with actual weather and river/stream sensors from the DFW area, and configured them to support a stormwater use case.

Created a custom MVI test application in the IBM Cloud
Created a small application to allow the user to interact with the IEL Private Cloud instance of the MVI Stormwater model API from IBM Cloud over a secure tunnel enabled via Cloud Satellite Link.

Established IEL infrastructure as IBM Cloud Satellite Location
IBM Cloud Satellite enables local IEL resources to be utilized as a Satellite location by using the infrastructure already present in that location. To achieve the necessary IEL Satellite location configuration, a set of host machines was dedicated to deploying the Satellite control plane that manages the IEL cluster. This established a secure Satellite Link between IBM Cloud and the IEL location. Satellite Link uses Link Endpoints configured in this private IEL environment to enable the IBM Cloud Remote Drone Console application functionality that will be leveraged in the lab's Visual Inspection test page and Remote Drone Console Application.

Deployed a project and artifacts for each lab user in the IBM Cloud
A project containing an updated Remote Drone Console Application with custom Stormwater Operations functionality was created and deployed for each lab user in an IBM Cloud OpenShift Container Platform 4.5 cluster.

Adding each user to the IBM Cloud environment
The new user was first given user permissions to an existing IBM Cloud organization for the environment that was provisioned for the lab. A new project was then created specifically for that user in the OpenShift Container Platform 4.5 cluster within that organization, and user policies were applied to enable the permissions necessary to complete the lab steps.

Move on to the Preliminary Steps →

Preliminary Steps

You will open several tabs throughout the course of this lab. To avoid confusion, it is recommended that this instruction site be viewed in a browser window with no other tabs open before you begin.

It is strongly recommended you use Google Chrome or Mozilla Firefox as your browser for this lab. Browsers such as Safari have been known to encounter errors with some of the IBM Cloud services.

The screenshots shown in this lab were captured in a full-width, full-height browser window. Some IBM Cloud services perform best at full-screen resolution.

To avoid issues as you navigate through the lab steps, it is ideal to use the buttons and tabs within the instructions (as opposed to using your browser's "back" and "forward" options).

After you requested this lab, you should have received an email containing the link to this instructions page.

The instructions here will guide you through the lab step by step.

This instructions page that you have been provided should contain your lab name.

Your Lab Name: YourLabName


Referencing the above, if your Lab Name is "YourLabName", you may encounter errors in the lab steps that follow. Please use the email link located in the top right corner of this page and send an email to gscgov@us.ibm.com with the message, "I was not assigned a lab name for IEL Stormwater Drone Lab."

Note: If you continue with this lab without having completed at least Drone Lab 1 & Drone Lab 2 (Satellite), you may not have the appropriate context for some of the activities and technical flows that follow. If this applies to you, it is recommended that you request the appropriate labs via TechZone here:  Drone Lab 1 | Drone Lab 2 (Satellite).

The diagram above explains the overall flow of this lab.

As you can see, the short walkthrough videos and activities in Step 1 and Step 2 are important and will allow you to start the activity in Step 3 with a deep understanding of the capabilities and Government Industry context showcased within the Stormwater Operations functions of the Remote Drone Console Application.

Step 1 Next

☑ IBM Maximo Visual Inspection: Getting started
☑ IBM Maximo Visual Inspection: Visual Annotation
☑ IBM Maximo Visual Inspection: Model training
☑ IBM Maximo Visual Inspection: Deploy the model
☑ IBM Maximo Visual Inspection: Using the model
Step 2

☑ Maximo Monitor: IoT Tool Device creation
☑ Maximo Monitor: IoT Tool Interface definition
☑ Maximo Monitor: IoT Tool Device connection & Device simulation
☑ Maximo Monitor: Dashboards and alert triggers for device monitoring
☑ Review Maximo Monitor & IoT Tool
Step 3

☑ Explore Remote Drone Console Application Stormwater Operations
☑ Final Notes

Ready? Move on to Step 1 and start the lab →

1. Understand how Maximo Visual Inspection can be trained to detect objects within a video/image

You may recall that in Drone Lab 2, you were able to view the data from a drone within the IEL Private Cloud environment via the Remote Drone Console application running in IBM Cloud (image on the right).

This drone data included images captured by the drone camera, along with various telemetry and measurements. If you would like to review the specific data the drone sends back, you can review the details here, which are excerpted from Drone Lab 2.


Our Remote Drone Console operators can gain further insights from drone imagery by using IBM Maximo Visual Inspection to analyze the drone images with classification or object detection. Based on trained models, multitudes of images sent by many camera/drone sources can have analytics applied in order to quickly and effectively identify patterns or problems. This allows operators to focus on insights across sources rather than trying to monitor raw imagery.

In our stormwater management use case, you can imagine how critical it is for operators to maintain some level of situational awareness. They must be able to assess risks during disruptive weather events when dangerous conditions can impact traffic and directly threaten the safety of citizens and first responders. In this step, you will explore IBM Maximo Visual Inspection, and ultimately consider how its powerful analytics can be leveraged to raise insights about stormwater in close proximity to vehicular and pedestrian traffic that can help assess and prioritize the public safety response.

More about IBM Maximo Visual Inspection (MVI)

You may recall that MVI is just one of the powerful capabilities within the overall Maximo Application Suite portfolio; it performs visual inspection of assets to deliver immediate, actionable insights about emerging issues. MVI drives automation of existing manual processes and enables subject matter experts to quickly and easily apply AI computer vision to build highly accurate, customized AI models. It is also compatible with edge and mobile deployments.

You can imagine how important it is, in life-threatening situations that involve public safety, to identify issues that may have been missed during normal inspection and preventive maintenance cycles. Challenging operating environments in the field are also common. As a result, adding a level of intelligent oversight that can look for errors 24/7 in the operations or output of an asset can be very beneficial. MVI uses the power of AI to detect errors and defects, and it alerts operators to take action, avoiding disruptions that could take down operations.




Maximo Visual Inspection includes the following key features:
Data ingest
- Drag-and-drop capability for moving images into data sets
- Common file format support for jpg, png, mp4, zip, etc.

Data labeling
- Point-and-click labeling with bounding box or polygon control
- Data augmentation for creating larger data sets
- Auto labeling, to greatly reduce the manual effort and time

Model training -- supported model types
- Image classification
- Object detection
- Image segmentation
- Video action detection

Inference/Deployment options
- Batch mode
- Near real time at the edge (Maximo Visual Inspection Edge)
- Near real time with iOS device via smart camera capability (Maximo Visual Inspection Mobile)




Architecture overview
The architecture of IBM Maximo Visual Inspection consists of hardware, resource management, deep learning computation, service management, and application service layers. Each layer is built around industry-standard technologies.

A Note about GPUs:
The OpenShift® cluster where Maximo Visual Inspection is deployed must have 1 or more worker nodes with 1 or more Nvidia GPUs available for training models. For more information about hardware requirements, see "Application-specific requirements" in Maximo Application Suite system requirements. For this lab, there is an MVI instance with a GPU-enabled worker node running in the IEL Private Cloud.


As you progress through this step of the lab, you will be guided through the creation, preparation, training, and deployment of a model in Maximo Visual Inspection. Once you’ve walked through the process, you will interact with a Maximo Visual Inspection (MVI) test page that was created to allow you to experience and consider how an external application can interact with an MVI model that has been trained to detect objects from drone images, traffic cameras, and other video sources that would be relevant to stormwater operators.

Reference:

Instruction/Step: indicated as text with a light blue background.
Example: "Scroll down after you make your changes"

Reminder: It's important to read each step carefully and in order. Skipping sections, reading too fast, or not clicking to read the 💡 More Information lightbulb blocks may result in incomplete learning.


Your Tasks for Step 1

✅ IBM Maximo Visual Inspection: Getting started
✅ IBM Maximo Visual Inspection: Visual Annotation
✅ IBM Maximo Visual Inspection: Model training
✅ IBM Maximo Visual Inspection: Deploy the model
✅ IBM Maximo Visual Inspection: Using the model




1.1 IBM Maximo Visual Inspection: Getting started

1.1.1

Given a corpus of existing video footage and imagery taken from drones, traffic cameras, and even citizen uploads, IBM Maximo Visual Inspection (MVI) was designed to allow technical or non-technical subject matter experts (like Stormwater Management operators) to produce an object detection model that can help find insights when processing future images. This is possible because MVI has a very guided and intuitive user interface, starting with the concept of creating a new data set from a corpus of images or video.

Below is a video walkthrough of the data set creation process that explains step-by-step how the user can navigate through the MVI interface, starting immediately after logging in. The video is being served via Watson Media’s Video Streaming service, and you should notice that Closed captioning has been provided. (You can click "CC" on the video to enable/disable).

HINT: If text in the video is difficult to read, you can make the video fullscreen and/or click "HD" and select the highest quality.



👉 Recap Activity

As you watched this video, hopefully you were able to identify these key points:
Check all of the boxes before moving on.


For full details of this step, you can also reference the corresponding section in the MVI knowledge center.

1.2 IBM Maximo Visual Inspection: Visual Annotation

1.2.1

Once the user has captured frames (either manually or using MVI’s auto-capture capability) in the new data set based on the imported images or video, the user can now start to annotate those frames by labeling the objects that should be detected in future images. An object is used to identify specific items in an image or specific frames in a video. You can label multiple objects in an image or a frame in a video. An example of objects in an image of cars might be a wheel, headlights, and windshield. In our stormwater use case, we will be labeling vehicles, water, flooding, people, buildings, etc.

Image Classification can be used to build a solution that examines an image and properly classifies it according to your model. For example, the model can examine images of a flooding situation and classify whether the flooding is happening on a road or on a low-level water crossing bridge, which helps determine the flooding severity. In that use case, you would train a model to recognize the 'flooding on a road' versus 'flooding on a low-level water crossing bridge' categories.

In this lab, we use an Object Detection model rather than an Image Classification model to determine if a vehicle is passing through a flooded region, by examining the image for a vehicle object, water, and splashes around the wheels of the car. If all of these objects are detected in an image, the system can determine that a vehicle is passing through a flooded region, which is one of the main public safety issues during stormwater monitoring.
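The decision rule just described can be sketched in a few lines of Python. This is an illustrative sketch only: the label names ('vehicle', 'water', 'flooding', 'splash'), the detection shape, and the confidence threshold are assumptions for the example, not the lab's actual model configuration.

```python
def vehicle_in_floodwater(detections, min_confidence=0.7):
    """Decide whether a frame shows a vehicle passing through floodwater.

    `detections` is a list of dicts shaped like MVI object-detection
    results, e.g. {"label": "vehicle", "confidence": 0.92}. The label
    names and 0.7 threshold are illustrative assumptions.
    """
    labels = {d["label"] for d in detections if d["confidence"] >= min_confidence}
    has_water = "water" in labels or "flooding" in labels
    # All three cues must be present before flagging a public-safety concern.
    return "vehicle" in labels and has_water and "splash" in labels
```

In practice, an operator console would run this kind of check over each inference result and raise an alert only when all of the cues co-occur, which keeps isolated detections (standing water with no traffic, say) from generating noise.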

In the video below, you will learn how to create an object detection model by labeling objects. The video initially shows how to label a vehicle and a flooding object, and later uses a trained, deployed model to auto-label the entire data set. This is a unique feature of MVI. In the real world, based on your scenario, you will train the model with objects that you think are appropriate for detection, and this might result in some false positives (a false positive occurs when IBM Maximo Visual Inspection labels an object when it should not have, such as detecting an A/C unit on top of a building as a 'vehicle' in the drone video). The user then has to create new labels (e.g., 'A/C unit') and re-train the model to avoid those false positives. Determining what objects to label can be an iterative effort, and is often based on an understanding of what types of objects are present in the images that you are using to train the model.

Below is a video walkthrough of the data preparation process that explains step-by-step how the user can navigate through the MVI interface, starting immediately after data set creation. The video is being served via Watson Media’s Video Streaming service, and you should notice that closed captioning has been provided. (You can click "CC" on the video to enable/disable).

HINT: If text in the video is difficult to read, you can make the video fullscreen and/or click "HD" and select the highest quality.



👉 Recap Activity

As you watched this video, hopefully you were able to identify these key points:
Check all of the boxes before moving on.


For full details of this step, you can also reference the corresponding section in the MVI knowledge center.

1.3 IBM Maximo Visual Inspection: Model training

1.3.1

Once the user has annotated the desired objects via labels within the frames in the data set, the user is ready to use those labeled frames to train the MVI model. A model is simply a set of tuned algorithms that produces a predicted output. Models are trained, based on the input provided by a data set, to classify images or video frames or to find objects in them. There are additional training concepts and terminology involved (iteration, batch, batch size, epoch, etc.), but it is not necessary for a user to become an expert in these concepts to produce a model in MVI.

Below is a video walkthrough of the Train Model process that explains step-by-step how the user can navigate through the MVI interface, starting immediately after the Prepare Data has been performed to define the objects to be detected. The video is being served via Watson Media’s Video Streaming service, and you should notice that Closed captioning has been provided. (You can click "CC" on the video to enable/disable).

HINT: If text in the video is difficult to read, you can make the video fullscreen and/or click "HD" and select the highest quality.



👉 Recap Activity

As you watched this video, hopefully you were able to identify these key points:
Check all of the boxes before moving on.


For full details of this step, you can also reference the corresponding section in the MVI knowledge center.

1.4 IBM Maximo Visual Inspection: Deploy the model

1.4.1

Once the user has trained the model on the desired objects labeled within the frames of the data set, the user is ready to deploy that model, which creates a unique API endpoint for inference operations. The deploy process was designed so that it can be executed with a single click. Once deployed, external applications (such as the Stormwater Management function within our Remote Drone Console) can programmatically interact with the MVI model API via the associated API key and API endpoint.
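As a rough sketch of what such a programmatic call looks like, the helper below assembles a multipart POST to a deployed model endpoint using only the Python standard library. The endpoint path shape (/api/dlapis/<model-id>), the X-Auth-Token header, and the 'files' field name are assumptions modeled on the MVI REST API; verify them against your deployment's API documentation before use.

```python
import urllib.request

def build_inference_request(endpoint, api_key, image_bytes, boundary="mvi-lab"):
    """Assemble a multipart/form-data POST for a deployed MVI model endpoint.

    `endpoint` is the full inference URL (assumed shape:
    https://<host>/api/dlapis/<model-id>); the header and field names
    here are assumptions based on the MVI REST API.
    """
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="files"; filename="frame.jpg"\r\n'
        "Content-Type: image/jpeg\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "X-Auth-Token": api_key,
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
        method="POST",
    )

# Usage against a live deployment (not executed here):
# req = build_inference_request(url, key, open("frame.jpg", "rb").read())
# with urllib.request.urlopen(req) as resp:
#     result = resp.read()  # JSON describing the detected objects
```

Building the request separately from sending it keeps the sketch testable offline; in a real client you would likely use a higher-level HTTP library instead of hand-rolling the multipart body.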

Below is a video walkthrough of the Deploy Model process that explains step-by-step how the user can navigate through the MVI interface to deploy and test the model, starting immediately after the model was trained. The video is being served via Watson Media’s Video Streaming service, and you should notice that Closed captioning has been provided. (You can click "CC" on the video to enable/disable).

HINT: If text in the video is difficult to read, you can make the video fullscreen and/or click "HD" and select the highest quality.



👉 Recap Activity

As you watched this video, hopefully you were able to identify these key points:
Check all of the boxes before moving on.


For full details of this step, you can also reference the MVI knowledge center about deployment and testing.

1.5 IBM Maximo Visual Inspection: Using the model

1.5.1

As you saw in previous walkthroughs, IBM Maximo Visual Inspection (MVI) is a powerful tool that enables even non-technical users to create and deploy custom AI models that can then be used to perform visual analytics on images or video. After deploying the model, you saw that it can be tested within the MVI interface, but it is important to consider how that model can be leveraged by external applications. For example, our Remote Drone Console Application will use the MVI API to detect objects from the drone images during a weather incident and then use those insights to facilitate situational awareness and raise alerts about potential threats from a public safety context.
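To give a concrete feel for consuming those results, the small helper below tallies the labels returned in an inference response. The 'classified' field name and per-object shape are assumptions modeled on MVI's object-detection responses, so treat this as a sketch and verify the actual response schema against your instance.

```python
def summarize_detections(response):
    """Tally detected labels from an MVI-style inference response.

    Assumes the response carries a 'classified' list of objects, each
    with a 'label' field (an assumption about the response schema).
    Returns a dict mapping label -> occurrence count.
    """
    counts = {}
    for obj in response.get("classified", []):
        counts[obj["label"]] = counts.get(obj["label"], 0) + 1
    return counts
```

A console application could use a summary like this to decide which frames merit an operator's attention instead of rendering every raw bounding box.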

In this next step, a test page has been created in order for you to explore how an external application can interact with IBM Maximo Visual Inspection to identify objects from an image. The test page will contain various activities that will demonstrate IBM MVI capabilities, in addition to further information on the service and how the test page was created.





Click the blue button below to navigate to the test page in a new tab. Once you have completed all the activities on the test page, it will prompt you to return to these instructions. Click the checkbox underneath the button to move on when you are ready.



Click here to go to the IBM Maximo Visual Inspection test page and complete the activities
(Opens in a new tab)


I have completed the above activities on the test page.

Finish Step 1

You completed Step 1! 🎉
Step 1 Completed

✅ IBM Maximo Visual Inspection: Getting started
✅ IBM Maximo Visual Inspection: Visual Annotation
✅ IBM Maximo Visual Inspection: Model training
✅ IBM Maximo Visual Inspection: Deploy the model
✅ IBM Maximo Visual Inspection: Using the model

In the next step, you will explore IBM Maximo Monitor and its IoT Tool (Watson IoT Platform). You will consider how Stormwater Management Operators can also leverage sensor data (weather, water/stream, etc.) with the help of IBM’s capabilities available in Maximo Monitor to deliver insights from raw measurements.

Ready? Move on to Step 2 →

Step 2: Walk through how Maximo Monitor and IoT Platform can bring insights from sensors

Once again, recall that in previous drone labs, you were able to view the data from a drone operated within the IEL Private Cloud environment via the Remote Drone Console application running in IBM Cloud (image on the right).

That drone data included images captured by the drone camera, along with various telemetry and measurements. Remember that part of that drone data included the GPS coordinates, which were used to query a Cloudant database for sensor data (raw measurements) filtered based on a geo-fenced area around the drone location. If you would like to reread the previous lab sections on how sensors were used based on drone location, you can go to https://ielgov.github.io/dronelab12/ and review Step 3.1 and Step 4.3.2. Note that in the step after this one, you will revisit the Remote Drone Console application as well.

Our Remote Drone Console operators can gain further insights from these types of sensors by using IBM Maximo Monitor to monitor them remotely, maintain their health, evaluate the measurements, and raise alerts as required. IBM Maximo Monitor, with its associated IoT Tool (Watson IoT Platform), includes capabilities that allow users to create assets, define interfaces, make the connection, and monitor output. There are many benefits to using this platform: with a lot of data coming in quickly, there is a need for actionable insights to detect anomalies and visualize patterns to make the right decisions.
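For context on what "making the connection" involves, devices connect to the IoT Tool over MQTT using the Watson IoT Platform's identifier conventions, sketched below in Python. The organization, device type, device ID, and event name shown in the usage comments are placeholders, not values from this lab's environment.

```python
def wiotp_client_id(org, device_type, device_id):
    """MQTT client ID a device uses when connecting to the Watson IoT
    Platform messaging broker (d:<org>:<deviceType>:<deviceId>)."""
    return f"d:{org}:{device_type}:{device_id}"

def wiotp_event_topic(event_id, fmt="json"):
    """Topic a device publishes events on; Maximo Monitor's IoT Tool
    receives readings published to this topic."""
    return f"iot-2/evt/{event_id}/fmt/{fmt}"

# Usage with an MQTT client library (placeholder identifiers):
# client_id = wiotp_client_id("myorg", "streamSensor", "sensor01")
# topic = wiotp_event_topic("levelReading")
# ...connect to myorg.messaging.internetofthings.ibmcloud.com and
# publish {"level_ft": 6.2} to `topic`.
```

Defining the device type and registering the device in the IoT Tool (as you will do in Step 2) is what makes these identifiers valid for a real connection.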

In our stormwater management use case, you can imagine how critical it is for operators to be able to maintain some level of situational awareness. They must be able to assess risks during disruptive weather events when dangerous conditions can impact traffic and directly threaten the safety of citizens and first responders. In this step, you will explore IBM Maximo Monitor and its IoT Tool (Watson IoT Platform) in the context of weather and water/stream sensors and ultimately consider how its powerful analytics can be leveraged to raise insights about changing conditions (raising water, increasing wind speed, etc.) to help assess and prioritize the public safety response.





More about IBM Maximo Monitor

Recall that Maximo Monitor is just one of the powerful capabilities within the overall Maximo Application Suite. Maximo Monitor is a solution that enables connectivity to devices and operational technology systems and allows for no-code application of analytics and AI-based anomaly detection, as well as easy building and configuration of custom dashboards. This solution empowers operations and maintenance teams to remotely monitor assets and optimize operational runtime. With these capabilities, teams can learn the root cause of alerts and perform remedial action.

Maximo Monitor provides personnel with visualization and anomaly detection for current and historical trending data from historians, supervisory control and data acquisition (SCADA) systems, and Internet of Things (IoT) sensors. Its drill-down capabilities and hierarchical navigation, coupled with alerts and a prebuilt, customizable dashboard, can increase operational visibility by aggregating and analyzing data from a broad range of sources.


Maximo Monitor enables scaling of operational insights across departments, facilities, and geographies by using a low-code/no-code interface with a modern and customizable dashboard. Through this set of capabilities, Maximo Monitor enables operations and maintenance leaders to obtain visibility of critical equipment and respond quickly to problems to reduce downtime. The configurable rules-based alerts, anomaly detection, and the capability to drill down into root cause provide teams with tools for intelligent intervention. This feature set also enables reliability engineers to better understand historical trends and supplies them with the capability to view historical data for forensic analysis of failure trends.

Read about some Maximo Monitor use cases and demos here.



Maximo Monitor includes the following key features:
- A platform component, known as IoT Tool, for connecting and registering devices.
- A data store component for storage of data.
- An analytics component for performing calculations, anomaly detection, creating service requests, and alerting.
- Configurable dashboards for monitoring KPIs, anomalies, and alerts.




Architecture overview
Maximo Monitor is built on the platform and analytics components of Watson IoT Platform.

As an application in Maximo Application Suite, Maximo Monitor provides an end-to-end cloud-like experience on a self-managing OpenShift platform.

The services that make up Maximo Monitor support connecting, storing, analyzing, monitoring, and managing data. The Maximo Application Suite framework, which is deployed on a Red Hat OpenShift cluster, handles back-end operations, such as user management, workspace management, application deployment, single-sign on and session management, and application entitlement.



A Quick Note on Flow Volume Sensors:

Flow volume sensors help to monitor the behavior of water in streams and rivers by measuring the volume of liquid moving through an area. In the U.S., this measurement is reported in cubic feet per second (cfs). One cubic foot of water is 7.48 gallons, and a gallon of water weighs 8.35 pounds, so one cubic foot of water weighs over 62 pounds. That means a flow of 1,000 cfs moves over 62,000 pounds of water downstream every second! Information on flows in cubic feet per second can be found online at the U.S. Geological Survey.
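The unit conversion above is easy to verify with a quick calculation, using the constants stated in the text:

```python
# Unit math for stream flow: how much water (by weight) passes a point each second.
GALLONS_PER_CUBIC_FOOT = 7.48   # 1 cubic foot of water is about 7.48 gallons
POUNDS_PER_GALLON = 8.35        # 1 gallon of water weighs about 8.35 pounds

def pounds_per_second(flow_cfs: float) -> float:
    """Convert a flow reading in cubic feet per second (cfs) to pounds of water per second."""
    return flow_cfs * GALLONS_PER_CUBIC_FOOT * POUNDS_PER_GALLON

# One cubic foot per second moves just over 62 pounds of water per second...
print(round(pounds_per_second(1), 2))    # → 62.46
# ...so a 1,000 cfs flow moves over 62,000 pounds of water downstream every second.
print(round(pounds_per_second(1000)))    # → 62458
```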

You can even see an example of a flow volume sensor reading from the DFW area here; it is leveraged by actual stormwater management operators when trying to assess rapid changes in river/stream flow that can be an indicator of potential flood risk. In fact, this flow volume sensor's attributes and example sensor readings are used in the sections that follow during the setup of the device connections and monitoring within Maximo Monitor and its IoT Tool.



Technologies Used
Your Tasks for Step 2

✅ Maximo Monitor: IoT Tool Device creation
✅ Maximo Monitor: IoT Tool Interface definition
✅ Maximo Monitor: IoT Tool Device connection & Device simulation
✅ Maximo Monitor: Dashboards and alert triggers for device monitoring
✅ Review Maximo Monitor & IoT Tool



2.1 Maximo Monitor: IoT Tool Device creation

2.1.1

Given the frequency and volume of sensor data that is available, it is crucial to manage and monitor the sensors and their associated measurements. IBM Maximo Monitor and its IoT Tool, with their proven approach and maturity, were designed to help operators maintain a healthy ecosystem of sensors and evaluate the resulting measurement data in a format that is intuitive for their needs. In the Stormwater Operations context, we will start by looking at a flow volume sensor of the kind commonly used to monitor the behavior of water in streams and rivers. For example, flow volume sensors can help detect when water is moving through an area at an increased rate, making a river unsafe to travel on or across.

Below is a video walkthrough of the Device Creation process that explains step-by-step how the user can navigate through the Maximo Monitor’s IoT Tool interface, starting immediately after logging in. The video is being served via Watson Media’s Video Streaming service, and you should notice that Closed captioning has been provided. (You can click "CC" on the video to enable/disable).

HINT: If text in the video is difficult to read, you can make the video fullscreen and/or click "HD" and select the highest quality.



👉 Recap Activity

As you watched this video, hopefully you were able to identify these key points:
Check all of the boxes before moving on.


For full details of this step, you can also reference the knowledge center about creating/registering the device here.

2.2 Maximo Monitor: IoT Tool Interface definition

2.2.1

Once you have a device type and a registered device with a token, you can create a physical and a logical interface for the device. The physical interface represents the actual structure of the data coming from the device. The logical interface defines the application-specific view of that data for an end application. Again, we will continue to use the Flow Volume sensor and consider it within the stormwater operations context.
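To make the physical/logical distinction concrete, here is a small hypothetical sketch in Python. The field names, site ID, and mapping below are illustrative only, not the actual schema used in the lab video:

```python
# Hypothetical flow-volume-sensor event, illustrating physical vs. logical interfaces.
# The physical payload mirrors what the device actually sends; the logical interface
# exposes only the normalized, application-facing properties.

physical_event = {
    "d": {                           # Watson IoT event payloads commonly nest data under "d"
        "site": "EXAMPLE-SITE-01",   # placeholder site identifier
        "flow_cfs": 1040.0,
        "gage_height_ft": 6.2,
        "ts": "2021-03-15T14:05:00Z",
    }
}

# Logical-interface-style mapping: application property -> path into the physical payload
logical_mapping = {
    "flowVolume": ("d", "flow_cfs"),
    "gageHeight": ("d", "gage_height_ft"),
    "eventTimestamp": ("d", "ts"),
}

def to_logical_state(event: dict, mapping: dict) -> dict:
    """Project a raw physical event onto the logical properties an application consumes."""
    state = {}
    for prop, path in mapping.items():
        value = event
        for key in path:
            value = value[key]
        state[prop] = value
    return state

print(to_logical_state(physical_event, logical_mapping))
# → {'flowVolume': 1040.0, 'gageHeight': 6.2, 'eventTimestamp': '2021-03-15T14:05:00Z'}
```

In the IoT Tool, this same projection is configured in the UI rather than in code, but the idea is identical: the physical interface describes the raw shape, and the logical interface picks out and renames the properties the application cares about.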

Below is a video walkthrough of the Interface Definition process that explains step-by-step how the user can navigate through the Maximo Monitor’s IoT Tool interface to collect and store data. The video is being served via Watson Media’s Video Streaming service, and you should notice that Closed captioning has been provided. (You can click "CC" on the video to enable/disable).

HINT: If text in the video is difficult to read, you can make the video fullscreen and/or click "HD" and select the highest quality.



👉 Recap Activity

As you watched this video, hopefully you were able to identify these key points:
Check all of the boxes before moving on.


For full details of this step, you can also reference the knowledge center about the device interface here.

2.3 Maximo Monitor: IoT Tool Device connection & Device simulation

2.3.1

Once you have created and activated the device interface, you are ready to connect the device and test the incoming data through the interface. Mapping to our stormwater management context, we will use actual measurement output from a flow sensor device in the area as the basis for the steps that follow.
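The lab uses the IoT Tool's built-in Device Simulator, so no code is required. Still, it may help to see how a real flow sensor device would identify itself when connecting over MQTT. The sketch below builds the standard Watson IoT Platform client ID and event topic strings; the org ID, device type, and device ID values are placeholders:

```python
# Sketch of Watson IoT Platform MQTT identifiers for a device connection.
# (The lab itself uses the Device Simulator; this is only for illustration.)

def device_client_id(org_id: str, device_type: str, device_id: str) -> str:
    # Device clients connect with an ID of the form d:<org>:<deviceType>:<deviceId>
    return f"d:{org_id}:{device_type}:{device_id}"

def event_topic(event_id: str, fmt: str = "json") -> str:
    # Devices publish events to topics of the form iot-2/evt/<eventId>/fmt/<format>
    return f"iot-2/evt/{event_id}/fmt/{fmt}"

print(device_client_id("abc123", "FlowSensor", "flow-coppell-01"))
# → d:abc123:FlowSensor:flow-coppell-01
print(event_topic("flowReading"))
# → iot-2/evt/flowReading/fmt/json
```

A real device would then connect with username `use-token-auth` and the device token created during registration, and publish its readings to that topic; the Device Simulator performs the equivalent steps for you.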

Below is a video walkthrough of the Device Connection process that explains step-by-step how the user can navigate through the Maximo Monitor’s IoT Tool interface to connect a device via the Device Simulator feature to test. The video is being served via Watson Media’s Video Streaming service, and you should notice that Closed captioning has been provided. (You can click "CC" on the video to enable/disable).

HINT: If text in the video is difficult to read, you can make the video fullscreen and/or click "HD" and select the highest quality.



👉 Recap Activity

As you watched this video, hopefully you were able to identify these key points:
Check all of the boxes before moving on.


2.4 Maximo Monitor: Dashboards and alert triggers for device monitoring

2.4.1

Now that we have data flowing into our Maximo Monitor environment via the IoT Tool for a given sensor on a defined interface, we want to be able to do something useful with the data. Maximo Monitor makes it very easy to create alerts based on the data values, as well as to customize visualizations via dashboards. In our stormwater use case, imagine being able to use a dashboard to monitor all relevant sensors from a common interface, showing trends and anomalies that quickly get the operator's attention when needed. Furthermore, imagine using this same platform to trigger alerts and send them to external systems (like the Remote Drone Console application's Stormwater Management function) so that these sensor alerts can be seen alongside other insights, giving the operator a more complete sense of situational awareness.
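Maximo Monitor lets you configure such rules entirely in the UI, with no code. Conceptually, though, a simple rules-based threshold alert behaves like the sketch below; the threshold values and severities are illustrative, not the lab's actual rule:

```python
# Conceptual sketch of a rules-based alert: raise a severity-tagged alert
# when a flow volume reading crosses a configured threshold.
# Thresholds below are example values only.

ALERT_THRESHOLDS_CFS = [          # (minimum flow in cfs, severity), highest first
    (2000.0, "critical"),
    (1000.0, "warning"),
]

def evaluate_flow_alert(flow_cfs: float):
    """Return the severity for a reading, or None if no alert should fire."""
    for threshold, severity in ALERT_THRESHOLDS_CFS:
        if flow_cfs >= threshold:
            return severity
    return None

print(evaluate_flow_alert(2450.0))  # → critical
print(evaluate_flow_alert(1200.0))  # → warning
print(evaluate_flow_alert(300.0))   # → None
```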

Below is a video walkthrough of the Device Monitoring process that explains step-by-step how the user can navigate through the Maximo Monitor interface to monitor a device via dashboard and to create rules that can trigger alerts. The video is being served via Watson Media’s Video Streaming service, and you should notice that Closed captioning has been provided. (You can click "CC" on the video to enable/disable).

HINT: If text in the video is difficult to read, you can make the video fullscreen and/or click "HD" and select the highest quality.



👉 Recap Activity

As you watched this video, hopefully you were able to identify these key points:
Check all of the boxes before moving on.


2.5 Review Maximo Monitor & IoT Tool

2.5.1

Before we move on, let’s spend a moment to recap:

In this step of the lab, hopefully you learned the following:
✅ Consideration of how Maximo Monitor can help Stormwater Management operators work with insights from sensor devices
✅ Introduction to Maximo Monitor and its IoT Tool (Watson IoT Platform), its architecture, and its alignment within the Maximo Application Suite.
✅ Capabilities of Maximo Monitor, and overall steps required when working with a new type of sensor device:
    ✅ Use IoT Tool to create and register a device type, define a physical and logical interface, and connect and test the device (via Device Simulator)
    ✅ Use Maximo Monitor to create alert criteria and summary data for the device data, and visualize via Dashboard

Finish Step 2

You completed Step 2! 🎉
Step 2 Completed

✅ Maximo Monitor: IoT Tool Device creation
✅ Maximo Monitor: IoT Tool Interface definition
✅ Maximo Monitor: IoT Tool Device connection & Device simulation
✅ Maximo Monitor: Dashboards and alert triggers for device monitoring
✅ Review Maximo Monitor & IoT Tool

In the next step of this lab, you'll experience how the alerts and insights from Maximo Monitor can be consumed as part of a larger application as you interact with it from an industry context via the new Stormwater Operations Monitor function within the Remote Drone Console application.

Ready? Move on to Step 3 →

Step 3: Explore Stormwater Management Operations in the Remote Drone Console Application

In the previous Drone Labs, you worked extensively with the containerized Remote Drone Console Application in the IBM Cloud environment, and along the way learned about Red Hat OpenShift and Cloud Integration in a Hybrid Multicloud context.

In Drone Lab 1, the goal was to make you more comfortable with the relationship between GitHub and Red Hat OpenShift on IBM Cloud and to provide a better understanding of the overall process of taking an application from code/configuration all the way through to its containerized cloud deployment. Along the way, you hopefully gained preliminary hands-on experience with the concept of a cloud "operator" who can benefit from the Remote Drone Console Application (in the IBM Cloud) to visualize and interact with drones in a remote setting.

In Drone Lab 2, you focused on learning how the various IBM Cloud Integration services can be leveraged to enable an application in a hybrid cloud context, and thus augment its security, functionality, and connectivity. You also explored how these services could be put to use within the Remote Drone Console Application to “connect” to remote resources through the use of the above services. After connection to a remote drone environment, you were able to view the images, telemetry, measurements, and even GPS from a drone, and then view readings from sensors that were located within a drone’s geospatial proximity. With the Remote Drone Console Application, you interacted as an operator to see how they could benefit from having access to the (raw) information collected via the drone to help them monitor a situation.

As you prepare to revisit the Remote Drone Console Application in this lab, you now know that IBM's capabilities can further benefit the "operator" of this cloud application by infusing AI and analytics into the application, bringing forth insights from the raw drone and sensor data that can be applied within a government industry context around managing stormwater during a weather event. Using the Maximo Application Suite Visual Inspection and Monitor technologies and approaches that you learned in this lab's previous steps, you will now get hands-on experience as an operator who can leverage the new Stormwater Operations Monitor interface to gain situational awareness, assess risk, and coordinate a mitigation response to dangerous flooding conditions. By tailoring the interface with industry context, you will see the Remote Drone Console Application become a persona-based solution for the operator, with access to video and analytics that can be compared with alerts from sensors and other sources and then assessed within a geospatial context. From there, the operator has the tools and insights necessary to devise a mitigation/response plan using additional support from guided workflows, predictive models, and virtual support.

As you progress through this step of the lab, you will empathize with the operator persona during a weather event, and experience how the Remote Drone Console Application’s new Stormwater Operations Monitor capabilities can help to make use of the insights that surfaced from the analysis and processing of the drone and sensor data. Within the solution interface, you will see how an operator can leverage those insights to understand the risks that emerge during the unfolding situation and utilize AI-driven guidance to help plan and coordinate the response.

Technologies in Focus



Your Tasks for Step 3

✅ Explore Remote Drone Console Application Stormwater Operations
✅ Final Notes



3.1 Explore Remote Drone Console Application Stormwater Operations

3.1.1

In this step, you will navigate to the Remote Drone Console Application using the personalized button link provided underneath this brief introduction. The Remote Drone Console Application interface should look familiar from your experience in previous labs. At the bottom, find the "Stormwater Operations Monitor" tile under the Additional Operations section. This section, as mentioned in Drone Lab 1, contains various relevant operations that a public safety operator would need to interact with. In this case, you will explore a stormwater use case and see how the previous steps led to the analytics demonstrated in this view. In previous versions of the lab, this function was not yet available.

Once you’ve selected this tile, follow the instructions on the page. This view contains instructions on how to complete the activity regarding Stormwater Operations.




You will:

1. Play the video, which contains the raw video plus the video analytics applied to it (using IBM Maximo Visual Inspection).
2. Pause the video at any point, or move the time slider to move the video backward and forward. Notice that the video time is shown directly to the right of the Play/Pause button.
3. As you see incoming alerts and actions, click an item to see more details. (You will notice alerts from a variety of sources, including sensors that have been analyzed by IBM Maximo Monitor.) Note: You can click the image within an alert's description to view its full-size version.
4. Review the location of the drone on the map, and see the other layers available.
5. Click each of the lightbulbs on the four widgets in this view to read more information.


Ready? Click the gold button below to navigate to the Remote Drone Console Application in a new tab. Once you have completed the activities and are done exploring, return to these instructions. Click the checkbox underneath the button to move on when you are ready.



Click here to go to http://lab4-dronelab-ui-route-YOURLABNAME-2021-lab-4.mycluster-dal12-b-632406-ed9112eb83c761afd4c566b0882eaa3e-0000.us-south.containers.appdomain.cloud/ and complete the activities (Opens in a new tab)

I have completed the above activities on the Stormwater Operations of the Remote Drone Console Application.

3.2 Final Notes

3.2.1

Make sure you have completed all the activities (via the gold button above) before moving on to this step.

When you accessed the application, you may have noticed that the URL reflects an IBM Cloud endpoint. In fact, this application was deployed for you in a project within an OpenShift cluster in the IBM Cloud. We provided the direct link to the application, but you could also navigate to IBM Cloud and find the project and application within the mycluster-dal12-b3c.4x16 cluster in the 1500867 - IBM organization. Here is a direct link to your project in IBM Cloud if you want to see the deployment environment.

You can review Drone Lab 1 for reference on the relationship between GitHub and OpenShift within IBM Cloud, as well as the overall process of taking an application from code/configuration through to its containerized cloud deployment.

This completes all the steps for Step 3 – congratulations! Please take a screenshot of the application interface and send it in an email to gscgov@us.ibm.com. You may then finish this step and move on to the Summary.

Email a screenshot
of the application interface
to gscgov@us.ibm.com.
(Link also in the top right of this page)

Finish Step 3

You completed Step 3! 🎉

You successfully navigated to your Remote Drone Console Application running in the IBM Cloud, and found the new Stormwater Operations Monitor interface. There, you empathized with the role of an operator who is trying to mitigate the risks of an ongoing weather event in Coppell, TX. You experienced how the insights from IBM tools like Maximo Monitor and Maximo Visual Inspection can drive more informed situational awareness to make assessment of emerging risks/alerts and facilitate a more coordinated mitigation response plan.

Step 3 Completed

✅ Explore Remote Drone Console Application Stormwater Operations
✅ Final Notes
Ready? Move on to Summary →

IEL Stormwater Drone Lab: Summary



Let’s review what we accomplished in this lab overall:

First, we learned about the challenges city command center operators face as they manage weather situations that endanger the public and put first responders at risk and considered how drone imagery and sensor measurements could be helpful inputs to help them gain situational awareness and assess risks during a storm event.

Next, we got some exposure to the Maximo Application Suite, specifically its Visual Inspection (MVI) and Monitor capabilities.

In Step 1, we walked through how to create, train, and deploy a Maximo Visual Inspection object detection model, and then tested that model from a custom interface running in a hybrid cloud context. Throughout, we considered how operators could leverage insights from drone images that were processed via MVI in a stormwater scenario.

In Step 2, we experienced Maximo Monitor and its IoT Tool (Watson IoT Platform) and walked through how to set up a flow volume sensor (create a new sensor type, define interfaces, connect/simulate the device) and then configure alert thresholds, summary reports, and dashboards. Again, we considered how operators could monitor alerts or see trends from these sensors during a stormwater scenario.

Finally, we got some hands-on interaction with the extended Stormwater Management functionality of our previous Remote Drone Console application. There, we considered how MVI and Maximo Monitor can bring important insights into the operator's overall situational awareness during weather events by raising alerts based on objects detected in the drone video and on anomalies in the relevant sensor measurements.

The takeaway of this lab is an understanding of the very common government industry scenario of managing public safety during a localized weather event that can potentially endanger citizens, property, and responders. With the help of Maximo Application Suite's Visual Inspection and Monitor capabilities, IBM can turn raw data into insights that help stormwater management or command center operators assess risk and then prioritize and coordinate a response. Hopefully, you also got more comfortable with MVI, Maximo Monitor, and the associated IoT Tool, and learned that they take a very user-friendly approach, with intuitive interfaces and thorough documentation.


Thank you for taking time to complete IEL Stormwater Drone Lab.

Lab Title | Lab Description
Lab 1 (Completed) | Build the drone application on a hybrid multicloud platform using the associated coding tools.
Lab 2 (Completed) | Validate connectivity to resources in the back-end private cloud, and test interaction with a drone in a remote private cloud location via the app running in the hybrid multicloud.
Lab 3: Stormwater Lab (Completed) | Apply visual recognition to detect/count vehicles and a sensor API to monitor water/flooding in streams in order to reroute traffic and barricade unsafe conditions in a specific geographic location.
The IEL Stormwater Drone Lab was created by the IBM Industry Engineering Lab PubFed team in Coppell, TX.


Related labs and collateral about this use case: Stormwater Commander Experience Lab | Stormwater Responder Experience Lab | Drone Lab 1 | Drone Lab 2 (Satellite)
Related labs and resources about this technology: Maximo Labs (separate sign up required)