Human–AI collaboration reshapes design and engineering workflows
New research shows general AI must be adapted to domain-specific contexts before it can deliver reliable industrial value

Artificial intelligence (AI) is changing how designers and engineers work, but not in the way many fear. Rather than replacing human expertise, AI is increasingly reshaping how problems are framed, ideas are evaluated, and decisions are made across complex product and manufacturing environments.
As general-purpose AI tools become easier to access, their limitations are also becoming more visible, particularly in high-risk engineering domains. Trust, safety, and domain relevance remain significant barriers, pushing organizations toward more localized, human-centered approaches to AI adoption.
“I see AI as human and AI collaboration, where we are augmenting human ability. At the same time, it has to be done with an awareness of responsible use,” Saeema Ahmed-Kristensen, associate pro-vice-chancellor for research & impact at the University of Exeter and director of DIGIT Lab, told TechJournal.uk in an interview.
She said the dominant narrative that AI will replace designers and engineers misunderstands how innovation actually happens in complex systems.
“AI is not replacing designers or engineers. I see it as enhancing what humans can do, particularly when experts are able to judge, evaluate, and refine what the tools produce,” Ahmed-Kristensen said.
She added that expertise plays a decisive role in whether AI becomes a productive tool or a source of risk.
“If you put general AI in the hands of an expert, they can evaluate what is useful and what is not,” she said. “However, when large language models (LLMs) don’t know the answer, they still produce an answer. They don’t tell you that they can’t produce an answer. That’s a problem if it’s in the hands of a novice.”
Ahmed-Kristensen is a design engineer by training and has spent more than two decades working at the intersection of engineering, data, and human behavior. Before joining Exeter, she held senior academic roles at Imperial College London and the Royal College of Art, and previously spent more than a decade at the Technical University of Denmark, where she led design engineering and innovation programs.
Her research portfolio spans aerospace, medical devices, consumer electronics, and manufacturing systems, including long-running collaborations with companies such as Rolls-Royce and GN Netcom. Much of her work focuses on integrating digital tools into early-stage design, from predicting user comfort in wearable devices to improving decision-making in complex engineering environments.
At DIGIT Lab, she leads research on data-driven innovation and the use of AI in design and manufacturing, with a particular focus on trust, governance, and the role of human judgment in safety-critical contexts.
DIGIT Lab, the UK’s national research centre for digital innovation, is led by the University of Exeter and funded by the Engineering and Physical Sciences Research Council, with additional academic and industry partners.
Trust before automation
One of the central challenges in industrial AI adoption is trust, particularly in environments where failure can have serious consequences.
“In manufacturing and high-risk products, trust is one of the biggest barriers,” Ahmed-Kristensen said.
“If you go back 20 years to knowledge management systems, people would write a report, and you still needed to trust the outcome. Large aerospace companies, for example, would use an expert to verify the lessons learnt,” she said. “We’re still in that same situation when the report is AI-generated.”
She said general AI tools are rarely suitable for such environments without significant adaptation.
“General AI tools are not built with the domain-specific knowledge required for complex products, which is why companies are moving toward local models trained on their own data,” she said.
This shift reflects growing concern about reliability and accountability rather than a rejection of AI itself.
“The challenge is not collecting data, but having confidence that the output is reliable and relevant to that specific domain,” Ahmed-Kristensen said.
A central goal of Ahmed-Kristensen’s work is ensuring that research on AI and design translates into methods organizations can actually use. At DIGIT Lab, this has involved developing structured frameworks that help companies understand where AI can add value across different stages of design and innovation.
One strand of the research examines how data is currently used in product and service development and categorizes it into distinct modes, ranging from incremental optimization to the creation of entirely new offerings. This allows organizations to assess whether AI is being applied tactically or more strategically.
Another focus is evaluation rather than generation. Instead of positioning AI as a tool that produces finished designs, the research emphasizes using AI to support early-stage assessment, helping teams filter, compare, and challenge ideas before committing significant resources. The aim is to clarify where automation makes sense, where human oversight is essential, and how responsibility should be distributed across teams.
Creativity versus fluency
Concerns about creativity extend well beyond engineering.
A recent survey commissioned by DIGIT Lab, based on responses from 500 UK creative professionals, found that four in five designers (81%) believe AI dulls creativity, while 78% say AI is making creative work feel soulless. Around 70% fear it will eventually replace creative roles.
Despite those concerns, adoption is widespread. The survey found that 94% of creatives now use AI in some part of their workflow, with 42% relying on it daily, even as only 11% believe machine-generated work is creatively valid. Design professionals were twice as likely as writers or journalists to describe AI-generated work as soulless, suggesting design may be particularly vulnerable to homogenization.
Ahmed-Kristensen said the findings reflect a growing tension between efficiency and creative confidence. Lived experience and “feeding the brain,” she said, remain essential to creative work, and while LLMs are effective at speed and recombination, they are not capable of originality.
“LLMs cannot experience imagination, inspiration or intention – only humans can be creative beyond what is known,” she said.
DIGIT Lab concluded that while AI can accelerate production and expand creative capacity, creativity itself remains rooted in uniquely human capabilities.
Rather than resisting AI, Ahmed-Kristensen said designers and engineers need to adapt their skill sets to work effectively alongside it.
“Designers and engineers need to become more comfortable working with data and combining quantitative and qualitative insights,” she said.
She urged industries to position AI as an enabling tool rather than a substitute for creativity itself, warning that over-automation risks eroding satisfaction and self-belief across creative professions.
As AI tools continue to mature, the most successful organizations are likely to be those that treat AI not as a replacement for human judgment, but as a means of extending it within clearly defined, responsibly governed boundaries.


