What are the risks of using AI in engineering?

New technologies mean new pitfalls. Here are the three biggest hazards of using AI in engineering.

Artificial intelligence (AI) has been proliferating rapidly in recent years, driven by advancements in computing and statistical algorithms that have enabled some truly stunning leaps forward. There’s a growing sentiment that AI will soon be everywhere, and engineering is no exception. The possibilities of what engineers could do with AI are tantalizing but, as with any new technology, the adoption of artificial intelligence carries with it inherent risks.

The substance of the risks naturally depends on how organizations choose to deploy AI, but in the specific context of engineering, there are three particular hazards of which all stakeholders should be aware.

#1 – Misinformation

Although the AI industry would prefer everyone to use the vaguer, more evocative term ‘hallucinations,’ the simple fact is that AI systems, particularly large language models (LLMs), routinely generate falsehoods. There’s been no shortage of suggestions for how to overcome this tendency, whether it’s better training data, expert input or even just making the models larger and more complex.

Unfortunately, the inherent structure of LLMs, and of machine learning more generally, means that the outputs of these systems are statistical inferences, which can never be guaranteed to be truthful. Add to that the fact that LLMs are fantastic confabulators (to use the polite term), and the potential for AI systems to misinform the engineers using them becomes a serious risk indeed.
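To see why statistical inference can never guarantee truth, consider a toy sketch of how an LLM produces output: it samples each next token from a probability distribution rather than consulting any source of truth. The vocabulary and probabilities below are invented purely for illustration, not taken from any real model.

```python
import random

# Invented next-token distribution for some hypothetical factual prompt.
# Even when the correct answer is the single most likely token, the
# incorrect alternatives can carry more probability mass in total.
next_token_probs = {
    "1869": 0.40,  # correct completion (hypothetical)
    "1859": 0.35,  # plausible-looking falsehood
    "1896": 0.25,  # another plausible-looking falsehood
}

def sample_token(probs: dict[str, float]) -> str:
    """Sample one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Sampling many times shows the model emitting a falsehood more often
# than not (0.35 + 0.25 = 0.60), despite "knowing" the right answer best.
samples = [sample_token(next_token_probs) for _ in range(10_000)]
wrong_rate = sum(t != "1869" for t in samples) / len(samples)
print(f"falsehood rate: {wrong_rate:.0%}")  # roughly 60%
```

Real models condition these distributions on vast training data, which makes them far more accurate than this caricature, but the underlying mechanism is the same: probability, not verification.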

#2 – Gray Work

The term ‘gray work’ refers to the ad-hoc solutions and workarounds users are forced to adopt when new technologies make their jobs more difficult, rather than easier. For example, imagine a manufacturer decides to add a new machine vision system to an existing production line in order to reduce the number of defects the line produces. If the system requires workers to position each product just so in order to function, the efficiency gained from detecting more errors will be far outweighed by the extra effort required from employees.

In the context of artificial intelligence, gray work typically involves reconciling data from disparate sources, including sensors, databases and various tools or applications. For example, an engineer working on a predictive maintenance program might need to collect data from designers, customers and the production line. If that predictive maintenance program’s output then becomes part of a larger quality report, the engineering team behind the report may find themselves spending more time sorting through data than doing any actual engineering.
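This kind of reconciliation work is easy to underestimate until you see it spelled out. The sketch below imagines two of the disparate sources such an engineer might face: a sensor feed and a maintenance database that disagree on key names and units. Every field name, record and limit here is invented for illustration.

```python
# Hypothetical "gray work": joining two data sources that describe the
# same machines but use different keys (machine vs. asset ID) and
# different units (Fahrenheit vs. Celsius).

# Source 1: sensor feed, temperatures in Fahrenheit.
sensor_feed = [
    {"machine": "CNC-01", "temp_f": 181.4, "ts": "2024-05-01T08:00:00"},
    {"machine": "CNC-02", "temp_f": 175.1, "ts": "2024-05-01T08:00:00"},
]

# Source 2: maintenance database, limits in Celsius.
maintenance_db = {
    "CNC-01": {"last_service": "2024-03-12", "temp_limit_c": 80.0},
    "CNC-02": {"last_service": "2024-04-02", "temp_limit_c": 80.0},
}

def reconcile(feed, db):
    """Join the two sources on machine ID, converting units along the way."""
    rows = []
    for reading in feed:
        record = db.get(reading["machine"])
        if record is None:
            continue  # unmatched IDs are another classic source of gray work
        temp_c = (reading["temp_f"] - 32) * 5 / 9
        rows.append({
            "machine": reading["machine"],
            "temp_c": round(temp_c, 1),
            "over_limit": temp_c > record["temp_limit_c"],
            "last_service": record["last_service"],
        })
    return rows

for row in reconcile(sensor_feed, maintenance_db):
    print(row)
```

Two sources and two machines already demand unit conversion, key matching and a policy for unmatched records; multiply that by designers, customers and the production line, and the time sink becomes obvious.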

#3 – Waste

Artificial intelligence is often celebrated for its efficiency, or more specifically for the potential efficiency gains it offers. However, the not-so-secret shame of AI is that it’s incredibly resource-intensive to operate. Data centers, the required infrastructure for any cloud-based AI (which is to say, most of them), have a high carbon footprint due to their significant electricity needs, in addition to requiring copious volumes of water for cooling. Of course, this pertains to cloud-based design tools as well, not just AI, but what makes the latter more of a concern is the way it’s being used today.

Typically, machine learning models need to go through several generations of training before they become practically useful, which means that every new ML model represents the consumption of more water and electricity. Add to that the fact that many models are redundant copies created solely for commercial purposes, and the risk of creating excess waste by using AI becomes fully apparent. Moreover, even if you’re not inclined to worry about resource consumption from the perspective of climate change, you should be aware that AI companies are taking it seriously by charging their customers more for each query they make.

Written by

Ian Wright

Ian is a senior editor at engineering.com, covering additive manufacturing and 3D printing, artificial intelligence, and advanced manufacturing. Ian holds bachelor’s and master’s degrees in philosophy from McMaster University and spent six years pursuing a doctoral degree at York University before withdrawing in good standing.