Vitez Engineering Blog
Large Language Models in Serious Engineering Applications
The use of ChatGPT in serious engineering applications concerns me. While I regularly use ChatGPT and find it an extremely useful tool, I have also seen it regularly abused. ChatGPT fundamentally changes access to information by dramatically lowering the barrier to finding relevant material. However, much like reading a news article on something you are intimately familiar with, using ChatGPT on a topic you are an expert in shows how little of the nuance a single paragraph can convey correctly.

This past week I saw an engineer ask ChatGPT to calculate the sun vector from Earth. Unsurprisingly, ChatGPT gave a reasonable response. However, the response had a bug which the engineer, unfamiliar with astrodynamics, didn’t catch. Most people, including the engineer in question, know that ChatGPT can be wrong, yet it is still correct enough to earn our trust. This is the “danger zone”: answers that look reasonable but, to a keen eye, are clearly not correct. ChatGPT is right often enough for us to let our guard down and wrong often enough to bite us.
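For reference, here is a minimal sketch of what such a calculation can look like, using the standard low-precision Astronomical Almanac approximation; the structure and names are mine, not the code from the incident. Note how many places a plausible-looking bug can hide: every constant starts in degrees, and the final rotation by the obliquity is easy to drop or get backwards.

```python
import numpy as np

def sun_vector_eci(jd: float) -> np.ndarray:
    """Approximate unit vector from Earth to the Sun in the ECI (equatorial) frame.

    Low-precision Astronomical Almanac formula. All angles begin in DEGREES;
    a degrees-vs-radians slip here still yields a valid-looking unit vector.
    """
    n = jd - 2451545.0                                  # days since the J2000 epoch
    L = (280.460 + 0.9856474 * n) % 360.0               # mean longitude of the Sun [deg]
    g = np.radians((357.528 + 0.9856003 * n) % 360.0)   # mean anomaly [rad]

    lam = np.radians(L + 1.915 * np.sin(g) + 0.020 * np.sin(2.0 * g))  # ecliptic longitude [rad]
    eps = np.radians(23.439 - 0.0000004 * n)                           # obliquity of the ecliptic [rad]

    # Rotate from ecliptic to equatorial coordinates. Forgetting this rotation
    # still points "roughly at the Sun", just off by up to about 23 degrees.
    return np.array([np.cos(lam),
                     np.cos(eps) * np.sin(lam),
                     np.sin(eps) * np.sin(lam)])

# Near the March 2024 equinox (JD 2460389.5) the Sun sits at the vernal equinox
# direction, so the result should be approximately [1, 0, 0].
print(sun_vector_eci(2460389.5))
```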

Any engineering project rests on a series of assumptions. One might assume a satellite is a rigid body, that water is a continuum, or that 𝛑 is 3. However, there are additional assumptions, often at the interface level, that are of utmost importance. These include conventions for coordinate systems, the assumed ordering of engines installed on a rocket, the sign of a correction term, and the units of a number. Famously, the Mars Climate Orbiter was lost because Lockheed's software reported thruster data in US customary units while NASA's software expected metric. Getting these interfaces right is often the most assumption-prone, and thus error-prone, part of serious engineering.
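As an illustration of how quietly such an interface mismatch slips through, here is a hypothetical sketch (my own toy example, not the actual MCO software): one side reports impulse in pound-force seconds, the other consumes newton-seconds, nothing complains, and the answer is simply wrong by a factor of about 4.45.

```python
LBF_S_TO_N_S = 4.448222  # conversion: pound-force seconds to newton-seconds

def thruster_impulse_lbf_s(thrust_lbf: float, burn_time_s: float) -> float:
    """One team's model: reports impulse in pound-force seconds."""
    return thrust_lbf * burn_time_s

def delta_v_m_s(impulse_N_s: float, mass_kg: float) -> float:
    """The other team's navigation code: expects impulse in newton-seconds."""
    return impulse_N_s / mass_kg

impulse = thruster_impulse_lbf_s(thrust_lbf=5.0, burn_time_s=10.0)

# Passing lbf*s where N*s is expected raises no error; the delta-v is just
# understated by a factor of ~4.45, which compounds over many maneuvers.
wrong_dv = delta_v_m_s(impulse, mass_kg=600.0)
right_dv = delta_v_m_s(impulse * LBF_S_TO_N_S, mass_kg=600.0)
print(wrong_dv, right_dv)  # ~0.083 m/s vs ~0.371 m/s
```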

The use of ChatGPT in engineering only exacerbates these interface-level issues, given the assumptions ChatGPT might implicitly make. ChatGPT may assume a coordinate frame or quaternion convention inconsistent with the rest of a project. To properly apply code or formulas from ChatGPT, one must understand the topic well enough to derive that code or those formulas independently. Instead, I worry that the low barriers ChatGPT presents effectively empower engineers to make endless egregious errors.
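To close with a concrete example of such a convention mismatch, here is a minimal sketch of my own using SciPy, whose Rotation.from_quat expects scalar-last quaternions by default: feed it a scalar-first quaternion and it happily returns a rotation, just not the one you meant.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# A 90-degree rotation about +z, written scalar-FIRST: [w, x, y, z]
q_scalar_first = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])

v = np.array([0.0, 1.0, 0.0])

# SciPy's Rotation.from_quat assumes scalar-LAST ([x, y, z, w]) by default,
# so the scalar-first quaternion is silently read as a rotation about +x.
wrong = Rotation.from_quat(q_scalar_first).apply(v)   # ~[0, 0, 1]

# Reordering to scalar-last gives the rotation that was actually intended.
q_scalar_last = np.roll(q_scalar_first, -1)
right = Rotation.from_quat(q_scalar_last).apply(v)    # ~[-1, 0, 0]

print(wrong, right)  # both are perfectly plausible unit vectors; only one is correct
```

Nothing in that script errors out. The only defense is knowing which convention each piece of code assumes, and that is exactly the knowledge ChatGPT lets us skip.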