October 5, 2025

I'm the CEO of an AI startup that finds blind spots in visual data. Miss them, and they can cripple your AI models




Every company wants to make breakthroughs with AI. But if your data is bad, your AI initiatives are doomed from the start. This is part of the reason why 95% of generative AI pilots fail.

I have seen firsthand how well-constructed AI models that performed reliably in testing can miss the crucial details that make them work in the real world. And in the world of physical AI, the implications can be serious. Consider Tesla's autonomous cars struggling to detect pedestrians in low visibility, or Walmart's anti-theft systems flagging normal customer behavior as suspicious.

As the CEO of a visual AI startup, I think about these worst-case scenarios often, and I am acutely aware of their underlying cause: bad data.

Solving the wrong data problem

Despite the emergence of large-scale vision models, diverse datasets, and advances in data infrastructure, visual AI remains extremely difficult.

Take the example of Amazon's cashierless checkout technology for its U.S. grocery stores. At the time, it seemed like a crazy idea: shoppers could walk into an Amazon Fresh store, grab their items, and leave without waiting in line to pay. The underlying technology was supposed to be a sophisticated symphony of AI, sensors, visual data, and RFID that delivered this experience. Amazon saw it as the future of shopping, something that would disrupt incumbents like Walmart, Kroger, and Albertsons.

Amazon's visual AI could accurately identify a shopper picking up a Coke under ideal conditions: well-lit aisles, solo shoppers, and products in their designated places.

Unfortunately, the system struggled to track items in crowded aisles and displays. Problems also emerged when customers returned items to different shelves or shopped in groups. The visual AI model lacked sufficient training on these uncommon behaviors to perform well in such scenarios.

The central problem was not technological sophistication; it was data strategy. Amazon had trained its models on millions of hours of video, but the wrong millions of hours. They optimized for common scenarios while underrepresenting the chaos that animates real-world retail.

Amazon continues to refine the technology, a saga that highlights the core challenge of deploying visual AI. The problem was never computing power or insufficient algorithmic sophistication. The models needed more comprehensive training data that captured the full spectrum of customer behavior, not just the most common scenarios.

This is the billion-dollar blind spot: most companies are solving the wrong data problem.

Quality over quantity

Companies often assume that scaling data, by collecting millions more images or hours of video, will close the performance gap. But visual AI does not fail because of too little data; it fails because of the wrong data.

The companies that consistently succeed have learned to curate their datasets with the same rigor they apply to their models.

They deliberately seek out and label the difficult cases: the scratch that barely registers on a part, the rare disease presentation in a medical image, the one-in-a-thousand lighting condition on a production line, or the pedestrian slipping between parked cars at dusk. These are the cases that break models in deployment, and the cases that separate an adequate system from a production-ready one.

This is why data quality quickly becomes the real competitive advantage in visual AI. Smart companies don't chase volume; they invest in tools to measure, curate, and continuously improve their datasets.
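To make "measuring a dataset" concrete, here is a minimal sketch of what such a tool might do: tally how often each scenario appears in a labeled dataset and flag slices that fall below a minimum share. The scenario tags, the 1% threshold, and the function name are all illustrative assumptions, not taken from any specific product.

```python
from collections import Counter

def coverage_report(samples, min_share=0.01):
    """Summarize how well each scenario tag is represented in a dataset.

    `samples` is a list of dicts carrying a "scenario" tag. Slices whose
    share of the data falls below `min_share` are flagged as
    underrepresented, i.e. candidates for targeted collection/labeling.
    """
    counts = Counter(s["scenario"] for s in samples)
    total = sum(counts.values())
    return {
        tag: {
            "count": n,
            "share": n / total,
            "underrepresented": n / total < min_share,
        }
        for tag, n in counts.items()
    }

# Toy dataset: common daytime frames dominate; dusk pedestrians are rare.
data = (
    [{"scenario": "daytime"}] * 990
    + [{"scenario": "dusk_pedestrian"}] * 5
)
report = coverage_report(data)
```

On this toy data the report flags `dusk_pedestrian` (5 of 995 frames, about 0.5%) as underrepresented, which is exactly the signal a team would use to go collect more dusk footage rather than more daytime footage.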

How companies can use visual AI successfully

Having worked on hundreds of major visual AI deployments, I have seen certain best practices stand out.

Successful organizations invest in standardized benchmark datasets to evaluate their models. This involves a thorough human review to catalog the types of scenarios a model must handle well in the real world. When building benchmarks, it is essential to evaluate the edge cases, not just the typical cases. This enables a complete evaluation of a model and informed decisions about whether it is ready for production.
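One simple way to put this into practice is to score a benchmark per scenario slice and gate the production decision on the worst slice rather than the average. The sketch below assumes a toy checkout-style task; the slice names, metric (plain accuracy), and 90% floor are illustrative assumptions, and a real benchmark would use richer metrics and far more slices.

```python
def evaluate_by_slice(predictions, labels, slices):
    """Compute accuracy per scenario slice instead of one aggregate number."""
    results = {}
    for name in set(slices):
        idx = [i for i, s in enumerate(slices) if s == name]
        correct = sum(predictions[i] == labels[i] for i in idx)
        results[name] = correct / len(idx)
    return results

def ready_for_production(slice_scores, floor=0.90):
    # Gate on the worst slice, not the mean: one failing edge case
    # (e.g. crowded aisles) is enough to sink a deployment.
    return min(slice_scores.values()) >= floor

# Toy benchmark: the model is flawless on typical aisles but
# confuses items when the aisle is crowded.
preds  = ["coke", "coke", "chips", "coke", "chips", "coke"]
labels = ["coke", "coke", "chips", "chips", "coke", "coke"]
slices = ["typical", "typical", "typical", "crowded", "crowded", "crowded"]

scores = evaluate_by_slice(preds, labels, slices)
```

Here the aggregate accuracy looks decent, but the `crowded` slice scores only one in three, so the worst-slice gate correctly refuses to call the model production-ready.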

Second, leading multimodal AI teams invest in data-centric infrastructure that promotes collaboration and lets teams visualize model performance, not just measure it. This helps improve both safety and accuracy.

Ultimately, success with visual AI does not come from bigger models or more compute; it comes from treating data as the foundation. When organizations put data at the center of their process, they unlock not just better models, but systems that are safer, smarter, and more impactful in the real world.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.




