We are all jazzed up!
We attended CVPR in New Orleans at the end of June and had a great time seeing everyone there, talking computer vision and visual data, and demoing Akridata Data Explorer.
Couldn’t make the event? No problem! Below we’ll dive into our top takeaways, trends, and discussions from CVPR.
Looking at the attendees and companies present at this event, it is clear that both the need and the demand for point-solution providers are growing.
Data scientists from autonomous vehicles, robotics, manufacturing, medical, and numerous other fields incorporate visual data to advance their industries. And even though the industries vary, the basic needs and challenges remain the same across the board. From improving data quality and diagnosing model drift to offering curation and selection tools, solution providers are working in numerous ways to enhance how we currently use visual data. But far too many companies still rely on archaic systems, or are unaware that these tools even exist.
Attendees came from a wide variety of industries and verticals that benefit from being able to digest and use visual data effectively and efficiently. Each has its own unique ways to advance society through computer vision, but each still needs a solution.
Homegrown tools and piecemeal models can no longer keep up with the volume of data available.
Training models and improving accuracy requires better data. The quality of the data used in a training set can dramatically impact the results. The better the quality of information we can supply our models from the outset, the greater the path to advancement will be.
Many companies are focused on model performance and are therefore under increasing pressure to accelerate their capabilities. But this traditionally takes a lot of time and incurs high costs. Instead of adding more bodies and increasing labor costs, companies can speed up their improvement process and reduce or refocus human workloads by selecting more accurate novelty sets earlier and more often during the training lifecycle.
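As a rough illustration (not Akridata's actual method), "novelty set" selection can be as simple as ranking unlabeled samples by how far their embeddings sit from everything the model has already trained on, then sending only the most novel ones for labeling. The embeddings and the `select_novel_samples` helper below are hypothetical stand-ins for whatever feature extractor and selection logic a real pipeline would use:

```python
import numpy as np

def select_novel_samples(train_emb, pool_emb, k):
    """Pick the k pool samples farthest (in embedding space) from the
    current training set -- a simple stand-in for novelty-set selection."""
    # Distance from every pool sample to every training sample.
    dists = np.linalg.norm(pool_emb[:, None, :] - train_emb[None, :, :], axis=-1)
    # Each pool sample's distance to its *nearest* training sample.
    nearest = dists.min(axis=1)
    # The most novel samples are those farthest from anything already seen.
    return np.argsort(nearest)[::-1][:k]

# Toy embeddings: training data clustered at the origin; the pool mixes
# familiar points with genuinely novel ones.
train = np.zeros((5, 2))
pool = np.array([[0.1, 0.0], [5.0, 5.0], [0.2, 0.1], [4.0, 4.0]])
print(select_novel_samples(train, pool, 2))  # → [1 3], the two novel points
```

Labeling only the selected indices each cycle, rather than the whole pool, is what shrinks the human workload the paragraph above describes.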
This is the driving force behind the concept of Data-Centric AI, a newer approach rapidly gaining traction with support from industry leaders like Andrew Ng. The models we have are sufficient and nearly a solved problem. But if we continue to supply subpar datasets, these models cannot perform at the levels intended.
The potential uses for visual data are exploding. From using GIS data and satellite imaging for quality control during construction to diving into a more granular view to inspect building materials for defects, the scale at which computer vision can be used is nearly limitless. The biggest issue is that the systems in place cannot scale, and are not scaling, with the uses or the volume of data.
For example, using Data Explorer we were able to demo our patch search tool, which identified roads across very high resolution images of agricultural areas (roughly 1,000,000 sq miles) within about 20 seconds, with minimal hints. And this is just one example of how much time can be saved by implementing the right tools.
As data volumes continue to increase exponentially, data scientists will have to adopt tools that allow them to move through the data quickly and efficiently to find the relevant data points, such as when they need to find similar images to help train AI models. Historically, if images weren't labeled, finding enough similar examples to feed the model meant manually wading through countless images. For organizations to act nimbly and advance AI at a rapid pace, tools that make patch searching and data curation easy will be essential.
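The core idea behind that kind of similarity search can be sketched in a few lines: embed every image once, then rank the collection by cosine similarity to a query embedding instead of scanning images by hand. The toy 2-D vectors below stand in for real image embeddings, and `find_similar` is an illustrative helper, not any particular product's API:

```python
import numpy as np

def find_similar(query, embeddings, top_k=3):
    """Return indices of the top_k embeddings most similar to the query,
    ranked by cosine similarity."""
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ q                      # cosine similarity to every image
    return np.argsort(sims)[::-1][:top_k]

# Toy "image" embeddings: indices 0 and 2 point roughly the same way as
# the query; 1 is orthogonal and 3 points the opposite way.
emb = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]])
query = np.array([1.0, 0.05])
print(find_similar(query, emb, top_k=2))  # → [0 2]
```

In practice the same ranking is served by an approximate nearest-neighbor index so it stays fast at the data volumes discussed above, but the labeled-or-not status of the images never matters: the search runs on embeddings alone.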
Did you attend CVPR? Share your thoughts with us by emailing firstname.lastname@example.org with your top takeaways from the event.