Let’s suppose you want to use our Patch Search feature to find images that contain a traffic light.
You will see that the search results faithfully capture the neighbouring frames in the video sequence that contain the traffic light.
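Under the hood, a patch search like this is typically implemented as a nearest-neighbour lookup over patch embeddings. The snippet below is a minimal sketch of that idea with made-up toy embeddings; it is illustrative only and not the Data Explorer’s actual implementation.

```python
import numpy as np

def cosine_search(query, embeddings, top_k=3):
    """Return indices of the top_k embeddings most similar to the query."""
    # Normalise so that dot products equal cosine similarity.
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = e @ q
    # Highest-scoring (most similar) indices first.
    return np.argsort(scores)[::-1][:top_k]

# Toy example: 5 hypothetical patch embeddings; the query is a slightly
# perturbed copy of embedding 2, so index 2 should rank first.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5, 8))
query = embeddings[2] + 0.01 * rng.normal(size=8)
print(cosine_search(query, embeddings))
```

In a real system the embeddings would come from a vision model and the search would use an approximate-nearest-neighbour index rather than a brute-force scan.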
You can perform a variety of actions to understand what your dataset contains: get a summary view, inspect the images in each cluster, try different sampling strategies, change the number of clusters, and so on.
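Clustering of this kind is commonly done by running something like k-means over image embeddings. The following is a small self-contained sketch with two artificial, well-separated groups of points standing in for image embeddings; the initialisation and data are invented for illustration.

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Tiny k-means: return a cluster label for each point."""
    # Seed centers with evenly spaced points (k-means++ is better; this keeps it simple).
    idx = np.linspace(0, len(points) - 1, k).astype(int)
    centers = points[idx].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# Two well-separated toy blobs standing in for embedding clusters.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                 rng.normal(5.0, 0.1, (10, 2))])
labels = kmeans(pts, k=2)
print(labels)  # points 0-9 share one label, points 10-19 the other
```

Changing `k` here corresponds to the “change number of clusters” control in the UI: the same points get re-grouped at a coarser or finer granularity.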
Next, select the images that match the criteria you want to train your model on.
In this scenario, since we are looking to train or improve an autonomous car’s ability to detect construction zones, we need to identify images containing construction, such as a barricade on the side of the road.
Once you identify an image matching your desired criteria, select “thumbs up” to give the model positive reinforcement, prompting it to return images closely related to that frame.
In tandem, you can select “thumbs down” on irrelevant images to tell the model that you do not want images like these (for example, an image with no visible construction zone, or images taken at night).
Not all of the images in the initial search results will match the “Construction Zone” criterion, since we did not explicitly state that we are looking for such features.
By continuing this cycle of positive and negative reinforcement, the Data Explorer returns increasingly accurate results.
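One common way to implement this kind of thumbs-up/thumbs-down refinement is Rocchio-style relevance feedback, where the query embedding is nudged toward liked examples and away from disliked ones. The sketch below uses invented weights and toy two-dimensional embeddings; it illustrates the general technique, not the Data Explorer’s actual algorithm.

```python
import numpy as np

def refine_query(query, liked, disliked, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio update: move the query toward liked embeddings, away from disliked."""
    q = alpha * query
    if len(liked):
        q = q + beta * np.mean(liked, axis=0)
    if len(disliked):
        q = q - gamma * np.mean(disliked, axis=0)
    return q

# Toy embeddings: construction-zone images cluster near (+1, +1),
# night images cluster near (-1, -1).
construction = np.array([[1.0, 0.9], [0.9, 1.1]])   # thumbs up
night = np.array([[-1.0, -1.0], [-0.9, -1.1]])      # thumbs down
query = np.array([0.1, 0.0])
new_query = refine_query(query, liked=construction, disliked=night)
print(new_query)  # shifted toward the construction-zone cluster
```

Re-running the nearest-neighbour search with the refined query then surfaces more images like the ones you approved, which is why repeated feedback rounds improve the results.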
Once you create a resultset, you can add more images from new searches and remove unwanted images.
Publishing the resultset lets you share it with team members in your organization and makes it accessible for other downstream activities.