On February 3, 2023, 38 cars of a freight train derailed in East Palestine, Ohio, spilling hazardous materials and forcing emergency crews to conduct a controlled burn of multiple railcars. Not only did the accident require the evacuation of residents within a one-mile radius of the crash, but it also necessitated responses from agencies in […]
Akridata’s Data Explorer offers an interactive platform where accuracy metrics are connected directly to the data, saving valuable time in analyzing a model’s accuracy, understanding what caused inaccuracies, and allowing DS teams to target the next training cycle exactly where the model misfires.
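As a rough illustration of the general idea (not Akridata’s Data Explorer API), the sketch below ties accuracy metrics back to the underlying samples: it collects misclassified images from an evaluation run and groups them by true class, so the next training cycle can focus on the classes where the model misfires. The column names and file paths are assumptions.

```python
# Generic error-slicing sketch: connect accuracy back to the data.
# Not a product API; column names and paths are hypothetical.
import pandas as pd

# Hypothetical per-sample results exported from an evaluation run.
results = pd.DataFrame({
    "image_path": ["img_001.jpg", "img_002.jpg", "img_003.jpg", "img_004.jpg"],
    "true_label": ["cat", "dog", "dog", "cat"],
    "pred_label": ["cat", "cat", "dog", "dog"],
})

# Keep only the misclassified samples.
errors = results[results["true_label"] != results["pred_label"]]

# Group the failing images by their true class to see where to target
# the next labeling/training cycle.
for label, paths in errors.groupby("true_label")["image_path"].agg(list).items():
    print(f"Class '{label}' misfires on: {paths}")
```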
Philips recently settled over 700 lawsuits relating to its line of sleep apnea machines and ventilators. Not only did the payout amount to over one billion dollars, but the company was also forced to recall a million of its devices. Why? Poor inspection. Perhaps that’s an oversimplification of the issue, but the fact that […]
In this blog post, we’ll see how Akridata leverages existing models like CLIP and DINOv2 for an improved labeling flow.
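To give a flavor of what "leveraging existing models" can mean for labeling, here is a minimal sketch (not Akridata’s implementation) that uses CLIP image embeddings to propagate a label from one annotated image to its nearest unlabeled neighbors, as a pre-labeling aid. The file paths and similarity threshold are hypothetical placeholders.

```python
# Sketch: label propagation via CLIP image embeddings.
# Assumes the Hugging Face "transformers" CLIP checkpoint; paths are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(paths):
    """Return L2-normalized CLIP embeddings for a list of image paths."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

labeled_paths = ["defect_example.jpg"]            # hypothetical labeled seed
unlabeled_paths = ["img_001.jpg", "img_002.jpg"]  # hypothetical unlabeled pool

seed = embed(labeled_paths)
pool = embed(unlabeled_paths)

# Cosine similarity between the labeled seed and each unlabeled image;
# images above the threshold are proposed with the seed's label for human review.
similarity = (pool @ seed.T).squeeze(-1)
candidates = [p for p, s in zip(unlabeled_paths, similarity) if s > 0.85]
print("Suggested for label 'defect':", candidates)
```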
Explore AI's transformative effects on the industry in 2024. Uncover key predictions and trends, from foundation models to investments in data quality and more.
Until very recently, performing data science with image and video data was an incredibly difficult task. Now, Sanjay Pichaiah explains how advancements in AI-powered tools have allowed more organizations to extract value from visual data, an increasingly important need for businesses.
Given how large visual datasets typically are, it is impossible to manually inspect each image, but what if there were an automatic way to validate image quality?
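One simple example of such an automatic check (offered here as a generic illustration, not a description of any specific product feature) is flagging likely-blurry images with the variance of the Laplacian. The directory path and threshold below are assumptions.

```python
# Sketch: flag likely-blurry images using variance of the Laplacian.
# The threshold and directory are placeholders to tune per dataset.
from pathlib import Path
import cv2

BLUR_THRESHOLD = 100.0  # hypothetical cutoff

def is_blurry(image_path: str, threshold: float = BLUR_THRESHOLD) -> bool:
    """Return True if the image's Laplacian variance falls below the threshold."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return True  # unreadable files also fail the quality check
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

flagged = [str(p) for p in Path("dataset/images").glob("*.jpg") if is_blurry(str(p))]
print(f"{len(flagged)} images flagged for review")
```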
Information can be presented in many different forms, from its raw format with no processing or filtering, through graphs and statistics, to a short summary or even a single value. It all depends on the use case, the available resources, and the next step in the workflow. Choosing the correct form of data presentation is tricky, but […]
Development cycles in modern computer vision rely heavily on large visual datasets – images of various types, videos from different sources, or a mix of the two. The raw content is somehow curated into the training and test sets used to develop, train, and test a DL model. But how […]
We now live in an era where AI is all around us, where applications and services rely heavily on automated systems with AI-based components embedded in them. To perform well, these systems depend on accurate outputs from various ML models, and while research and DS teams work tirelessly to improve […]