To fine-tune their machinery for optimal performance, manufacturers often create data simulations and digital twins to test out different scenarios under realistic conditions. However, the predictive tools used are often resource-intensive and demand massive volumes of precise data to generate usable insights.
For small and medium enterprises (SMEs) in particular, the only practical option is often to manage data quality issues after the fact, applying statistical techniques and repeated computation cycles to correct inconsistencies. Not only does this brute-force approach often yield inaccurate results, it's also financially unsustainable because of how process-intensive it is.
The costs of poor data quality in digital twin development
The quality of simulated data can’t be continuously monitored during generation. Issues can only be uncovered once the data is consumed for analysis or reporting. Normally, this is when manufacturers implement the traditional “cleanup later” strategy to get usable data out of their simulations.
“This delayed discovery creates significant downstream costs that quickly become unsustainable…,” said Saurabh Gupta, chief strategy officer of The Modern Data Company. “I’m aware of one example where implementing upfront data quality frameworks reduced these remediation efforts by over 20% in data management costs.” Gartner estimates that this perpetual data cleanup costs organizations $12.9 million annually.
The financial implications of this approach typically manifest in multiple ways:
- Direct remediation costs: SMEs typically spend multiple weeks per quarter working with cross-functional teams to track and fix data quality issues before the data is finally usable. The financial losses stem from the added workload and the resulting delays to manufacturing schedules.
- Decision-making delays: Without a trusted data foundation, business decisions that require data validation can be delayed by weeks at a time. In time-sensitive markets, such delays translate into missed revenue opportunities and shareholder dissatisfaction.
- Resource allocation inefficiency: In organizations with a reactive approach to data, engineers often spend a significant portion of their time troubleshooting quality issues rather than building new capabilities. Uncertainty about when usable data will arrive leads to resource buffering, where work stalls while teams wait.
While bulk computing costs are becoming more affordable, the goal is to shift away from this processing-heavy strategy towards a more sustainable alternative, particularly one that's accessible to smaller manufacturers with limited budgets and resources.
What strong data foundations look like
The alternative to hunting for errors in the final version of the data is to build simulations on strong data foundations from the start. Not only is this approach less resource-intensive, but it also allows for rapid experimentation and more efficient manufacturing innovation cycles.
There are two critical elements to creating a strong data foundation:
- Well-designed data products: By implementing business-driven data products that consolidate historical design performance data, manufacturers can substantially reduce the time spent on data revisions. The result is trustworthy, high-quality data products governed by comprehensive metadata and established processes.
- Consistent accessibility: By avoiding a "black box" approach to data simulation, teams can access the same reliable data across various tools and interfaces without manual intervention. When the same data is available through every consumption mode, engineers and analysts can focus on innovation instead of data wrangling, as the sketch below illustrates.
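To make this concrete, here is a minimal, illustrative Python sketch of a data product that enforces quality rules and attaches governance metadata at write time rather than during later cleanup. The class names, field names, and thresholds (MachineTelemetryProduct, SensorReading, the temperature range) are assumptions made for the example, not a specific vendor's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class SensorReading:
    """One simulated record; the schema every producer must satisfy."""
    machine_id: str
    temperature_c: float
    vibration_mm_s: float


@dataclass
class MachineTelemetryProduct:
    """Hypothetical data product: validates records on write, carries
    governance metadata, and exposes one read path for every consumer."""
    name: str
    owner: str
    records: list[SensorReading] = field(default_factory=list)

    def write(self, reading: SensorReading) -> None:
        # Quality rules enforced at generation time, not during later cleanup.
        if not reading.machine_id:
            raise ValueError("machine_id is required")
        if not (-40.0 <= reading.temperature_c <= 200.0):
            raise ValueError(f"temperature out of range: {reading.temperature_c}")
        if reading.vibration_mm_s < 0:
            raise ValueError("vibration cannot be negative")
        self.records.append(reading)

    def read(self) -> list[SensorReading]:
        # The same validated data, regardless of which tool or team consumes it.
        return list(self.records)

    def metadata(self) -> dict:
        return {
            "product": self.name,
            "owner": self.owner,
            "record_count": len(self.records),
            "as_of": datetime.now(timezone.utc).isoformat(),
        }


product = MachineTelemetryProduct(name="machine_telemetry", owner="manufacturing-eng")
product.write(SensorReading(machine_id="cnc-07", temperature_c=68.4, vibration_mm_s=2.1))
print(product.metadata())
```

Because invalid records are rejected at the point of generation, every downstream consumer reads from the same validated set and the "cleanup later" cycle never starts.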
With strong data foundations in place, subject matter experts no longer have to worry about data quality or availability. They can freely experiment and innovate using different models, instead of spending the majority of their time cleaning and preparing data.
In Design for Manufacturing (DFM), data silos from previous builds often create significant bottlenecks. Engineers can instead implement business-driven data products that consolidate historical design performance data, shortening DFM review cycles and letting them test more design variants faster. That way, the data foundation drives the shift from a linear design process to a more agile one, allowing SMEs to experiment efficiently with design and process modifications based on comprehensive historical performance data rather than starting each review cycle from scratch.
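As a rough illustration of that consolidation step, the sketch below merges two hypothetical per-build silos that logged the same design-performance metrics under different column names and units into a single table. The column names and figures are invented for the example.

```python
import pandas as pd

# Hypothetical silos: two builds logged the same design-performance
# metrics under different column names and units.
build_a = pd.DataFrame({
    "design_id": ["D-101", "D-102"],
    "yield_pct": [92.5, 88.1],
    "cycle_time_s": [41.0, 47.5],
})
build_b = pd.DataFrame({
    "DesignID": ["D-103", "D-104"],
    "Yield(%)": [90.2, 93.7],
    "CycleTimeMin": [0.72, 0.69],
})

# Normalize each silo to one shared schema before combining.
build_b = build_b.rename(columns={"DesignID": "design_id", "Yield(%)": "yield_pct"})
build_b["cycle_time_s"] = build_b.pop("CycleTimeMin") * 60.0

history = pd.concat([build_a, build_b], ignore_index=True)

# DFM reviews can now query one consolidated table instead of chasing per-build files.
print(history.sort_values("yield_pct", ascending=False))
```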
Aiming for a ‘minimum viable data architecture’ for smaller manufacturers
SMEs might not have access to sufficient quantities of historical manufacturing data. So instead of complex, custom-built systems, smaller manufacturers should use business-driven, templated data products to reach a minimum viable data architecture. Readily available templates like Device360, Vehicle360, and Customer360 provide pre-configured data models that address specific manufacturing needs without requiring teams to design every detail themselves.
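The sketch below shows roughly what a pre-configured template in the spirit of Device360 might provide: a ready-made record schema with a baked-in business rule that an SME can adopt as-is and tune later. The field names and threshold are illustrative assumptions, not the actual template's schema.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DeviceRecord:
    """Illustrative pre-configured model in the spirit of a Device360 template
    (field names are assumptions, not the actual product schema)."""
    device_id: str
    model: str
    install_date: str
    runtime_hours: float = 0.0
    last_maintenance: Optional[str] = None
    open_alerts: list[str] = field(default_factory=list)

    def needs_maintenance(self, runtime_threshold: float = 5000.0) -> bool:
        # A baked-in business rule SMEs can use as-is or tune later.
        return self.runtime_hours >= runtime_threshold or bool(self.open_alerts)


press = DeviceRecord(device_id="press-12", model="HP-400",
                     install_date="2023-03-01", runtime_hours=5120.0)
print(press.needs_maintenance())  # True: past the default runtime threshold
```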
Using templates reduces the upfront investment in data simulations, providing the benefits of enterprise-grade data capabilities without the burden of starting from scratch. They're also ideal for less experienced manufacturers, as templates typically encapsulate industry best practices, taking the guesswork out of the equation.
“By leveraging these data products, manufacturers can focus on deriving value from data rather than infrastructure development,” Gupta said. “This approach significantly reduces time-to-value while establishing the foundation for more sophisticated data capabilities and their ability to experiment as the organization matures.”
Resolving the data challenges holding back design and engineering
The solution to data-related design and manufacturing challenges is to start small with the data that's available. Forward-thinking manufacturers are investing in solid data foundations, prioritizing innovation and resource efficiency.
As Gupta put it, “As more businesses rely on digital twins to test models, simulate behaviors, and develop products, those with solid data foundations will innovate faster and more cost-effectively than competitors still struggling with data quality.”
Anina Ot has been a technology and SaaS writer for the past 5 years, focusing on explainers, how-to guides, industry and trends, and tech reviews. She’s worked with clients such as Dashlane, Remote.It, and Logit.io and contributed hundreds of pieces to prominent online publications, including AllTopStartups, MakeUseOf, and multiple TechnologyAdvice websites. Her goal is to make technology more accessible through clear and structured writing. Off the clock, she’s a huge physics nerd, an enjoyer of the great outdoors, and an avid jigsaw puzzler.