Artificial Intelligence and Human Factors Unite to Strengthen Aerospace Safety
Aviation experts emphasize that automation should complement, not replace, human judgment in the drive to reduce error and enhance operational resilience

Artificial intelligence is poised to transform aerospace safety—not by replacing humans, but by amplifying their capabilities. From predictive maintenance tools that flag fatigue risks to machine-learning systems that analyse cockpit interactions, AI now offers a powerful ally in understanding and preventing human error.
Yet, as experts warn, the challenge lies in ensuring that technology reinforces human strengths rather than obscuring them. The focus, they argue, must remain on designing systems that make it easier for people to succeed and harder for them to fail.
This human-centric approach to AI-driven safety was at the heart of discussions held in London on October 7, 2025, during the Royal Aeronautical Society (RAeS) President’s Conference: People in Aerospace. The panel, titled The Human Contribution: Just Another Brick in the Wall, brought together engineers, psychologists, and pilots to examine how automation, training, and culture must evolve together to reduce risk and strengthen decision-making across the industry.
The Role of AI in the Cockpit
The experts shared their views on how artificial intelligence could be used directly inside the cockpit to reduce pilot workload and human error. While their perspectives differed in emphasis, all agreed that AI should enhance—not replace—human capability.
Dr. Michael Bromfield, Associate Professor at the University of Birmingham and Aerospace Systems Engineer at Myriad Business Ltd, dismissed the idea of a fully autonomous cockpit.
“Will AI replace the human in the cockpit? Probably not,” he said. “Imagine working in that environment—we’re not just providing written information; we’re reading our co-pilot’s tone, actions, and stress. That shared awareness can’t be coded.”
Ben Peachey, Chief Executive of the Chartered Institute of Ergonomics and Human Factors, described AI as a transformative force—“more like electricity than the internet”—whose value lies in optimising the human–machine partnership rather than eliminating human input.
Harry Nelson, Chair of the Royal Aeronautical Society’s Human Factors Specialist Group, echoed this caution.
“We mustn’t let the press or convenience drive us into thinking AI can replace judgment,” he said. “Automation should support skill, not supplant it.”
Bromfield noted that AI-assisted monitoring could help detect early signs of fatigue or cognitive overload, giving flight crews real-time alerts before a lapse occurs. Peachey expressed cautious optimism, saying such systems could act as a second layer of protection but must be designed to support, not overrule, pilot judgment.
Nelson warned that over-reliance on automation risks eroding essential piloting skills and teamwork. “The more we delegate decision-making to algorithms, the greater the danger of losing situational awareness,” he said.
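None of the panellists described a specific product, but the kind of "second layer of protection" Bromfield and Peachey sketched can be pictured as an advisory monitor: it compares simple workload indicators against a per-pilot baseline and, in keeping with Nelson's caution, only suggests, never overrides. The indicators, thresholds, and the CrewStateMonitor class below are hypothetical placeholders, a minimal sketch rather than any certified system.

```python
# Illustrative sketch only: a hypothetical crew-state monitor that flags possible
# fatigue or cognitive overload and leaves every decision with the pilots.
from dataclasses import dataclass

@dataclass
class CrewStateSample:
    """One window of (hypothetical) workload indicators."""
    blink_rate_per_min: float          # e.g. from an eye-tracking camera
    response_time_s: float             # time to acknowledge routine alerts
    control_input_variability: float   # smoothness of manual inputs

class CrewStateMonitor:
    """Compares each sample against a per-pilot baseline and raises an advisory,
    never an automatic intervention, when several indicators drift together."""

    def __init__(self, baseline: CrewStateSample, drift_threshold: float = 1.5):
        self.baseline = baseline
        self.drift_threshold = drift_threshold  # hypothetical multiplier

    def advisory(self, sample: CrewStateSample) -> str | None:
        flags = []
        if sample.blink_rate_per_min > self.baseline.blink_rate_per_min * self.drift_threshold:
            flags.append("elevated blink rate")
        if sample.response_time_s > self.baseline.response_time_s * self.drift_threshold:
            flags.append("slowed responses")
        if sample.control_input_variability > self.baseline.control_input_variability * self.drift_threshold:
            flags.append("erratic control inputs")
        # Only advise when more than one indicator drifts, to limit nuisance alerts.
        if len(flags) >= 2:
            return "Possible fatigue or overload: " + ", ".join(flags) + ". Consider cross-checking."
        return None

# Example: the system suggests, the crew decides what, if anything, to do.
baseline = CrewStateSample(12.0, 1.0, 0.20)
sample = CrewStateSample(22.0, 2.4, 0.25)
print(CrewStateMonitor(baseline).advisory(sample))
```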
Lessons from the A340-600 Accident
Nelson illustrated the enduring importance of human oversight with a striking example from his career.

As Accountable Manager for a test programme involving a brand-new Airbus A340-600 due to be delivered to Etihad Airways, he recalled the accident that occurred on November 15, 2007, at Toulouse-Blagnac Airport (TLS). The quadjet, valued at around $275 million at the time, was set to join the Abu Dhabi-based carrier’s fleet within days, but the acquisition was cancelled after the aircraft was written off.
It is exceptionally rare for a new aircraft of that scale to be declared beyond economical repair. Nelson described how, during a high-power ground engine run, the aircraft smashed into a containment wall.
The cause, he explained, was a combination of small procedural oversights, misjudged assumptions, and commercial pressure to meet delivery deadlines.
“It became a lesson in how humans adapt, drift, and normalise deviance over time,” he said.
Engineers attempted the run without chocks in place, relying instead on the parking brake—a system certified to hold only 50% of total thrust. When thrust exceeded that limit, the aircraft broke free, injuring several personnel and destroying the fuselage. Nelson used the case to highlight the perils of confirmation bias and the slow erosion of safety culture.
“They learned the brakes could hold thrust—until something changed. When the aircraft got lighter and the pressure increased, their assumption no longer held true,” he said.
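The article does not give the investigation's figures, but the logic Nelson describes can be sketched with placeholder numbers: whether a parked aircraft stays put depends both on the 50%-of-total-thrust certification limit quoted above and on how hard the braked tyres are pressed onto the tarmac, which falls as the aircraft gets lighter. The masses, thrust values, and friction coefficient below are illustrative assumptions, not data from the accident report.

```python
# Illustrative only: placeholder numbers, not figures from the accident investigation.
G = 9.81                       # m/s^2
MU_BRAKED_TYRE = 0.5           # assumed effective friction coefficient, braked tyres on dry tarmac
MAX_TOTAL_THRUST_N = 1.0e6     # assumed combined maximum thrust of all four engines
CERTIFIED_HOLD_FRACTION = 0.5  # parking brake certified to hold 50% of total thrust (per the article)

def parking_brake_holds(aircraft_mass_kg: float, applied_thrust_n: float) -> bool:
    """True only if the applied thrust is within both the certification limit and
    the friction the braked tyres can actually generate at this weight."""
    within_certification = applied_thrust_n <= CERTIFIED_HOLD_FRACTION * MAX_TOTAL_THRUST_N
    friction_limit_n = MU_BRAKED_TYRE * aircraft_mass_kg * G
    return within_certification and applied_thrust_n <= friction_limit_n

# The run that always held before can fail once either condition changes.
print(parking_brake_holds(aircraft_mass_kg=250_000, applied_thrust_n=4.5e5))  # True: held on earlier, heavier runs
print(parking_brake_holds(aircraft_mass_kg=250_000, applied_thrust_n=6.0e5))  # False: exceeds the 50% certification limit
print(parking_brake_holds(aircraft_mass_kg=90_000,  applied_thrust_n=4.5e5))  # False: lighter aircraft, lower friction limit
```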
The investigation that followed prompted Nelson to implement major procedural reforms. He oversaw the creation of the world’s first dedicated ground operations manual, mandated external monitoring of engine runs, and invited independent experts from Britain, France, and Germany to audit testing protocols.
“We needed fresh eyes,” he explained. “External review revealed weaknesses that we would never have seen internally.”
He summarised the lesson in one line: “The object of design is to make something easy to do right, difficult to do wrong, and impossible to do catastrophically.”
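Translated into software terms, that design principle looks like an interlock: the safe path is the default, and the catastrophic path is refused rather than merely discouraged. The GroundRunController below is a hypothetical sketch under that reading, not a description of any Airbus or airline system, and its preconditions are invented for illustration.

```python
# Hypothetical sketch of "easy to do right, difficult to do wrong, impossible to do catastrophically".
from enum import Enum, auto

class ThrustLevel(Enum):
    IDLE = auto()
    PART_POWER = auto()
    HIGH_POWER = auto()

class GroundRunController:
    """High-power runs are unreachable until every precondition is positively confirmed;
    there is no flag to skip the checks, only a way to satisfy them."""

    def __init__(self):
        self._chocks_confirmed = False
        self._external_observer_confirmed = False

    def confirm_chocks(self) -> None:            # easy to do right: one explicit call per precondition
        self._chocks_confirmed = True

    def confirm_external_observer(self) -> None:
        self._external_observer_confirmed = True

    def request_thrust(self, level: ThrustLevel) -> ThrustLevel:
        if level is ThrustLevel.HIGH_POWER and not (
            self._chocks_confirmed and self._external_observer_confirmed
        ):
            # difficult to do wrong: the request is refused outright, not silently granted
            raise PermissionError("High-power run refused: chocks and external observer not confirmed")
        return level

runner = GroundRunController()
runner.confirm_chocks()
runner.confirm_external_observer()
print(runner.request_thrust(ThrustLevel.HIGH_POWER))  # granted only after both confirmations
```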
Peachey expanded on this point by highlighting the synergy between human insight and data-driven design. He noted that human factors, a field originally developed in military aviation, has matured into a global discipline influencing industries from healthcare to energy.
“We’ve matured from seeing failure as the fault of individuals to understanding it as the result of flawed systems,” he said. “Humans are fallible and humans are brilliant. Our job is to minimise the former and maximise the latter.”
Dr. Anne Isaac, Chair of the RAeS Human Factors Air Traffic Control Group, also emphasised that improving investigations depends on understanding human perception and information processing. She noted that investigators often understand what happened in an incident, but the more important challenge is discovering why it happened. She and colleagues are expanding the Society’s partnerships with neuroscience programmes to uncover how stress and cognitive load affect decisions in real time.
Embedding Human Factors in Everyday Operations
Charlie Brown, Safety Compliance and Quality Control Manager at Virgin Atlantic Airways, described how his airline integrates human factors principles into daily operations.
Licensed engineers, he said, undertake initial five-day modules and biennial refreshers to maintain awareness of fatigue, workload, and communication barriers.
Yet Brown stressed that compliance alone is insufficient: “If you want to make a difference, talk to your people—and more importantly, listen to them.”
He highlighted Virgin Atlantic’s apprenticeship programme, which embeds human factors training at its foundation. Apprentices begin with psychological and communication modules before advancing to technical disciplines, cultivating a mindset of safety and collaboration from the outset.
“When you look after your people properly, that’s what you get,” Brown said. “It becomes a family culture where mentorship and safety go hand in hand.”
Bromfield discussed how human-systems integration research bridges technology and psychology. His work explores how pilots interact with flight decks, how sensory cues shape situational awareness, and how automation can augment—rather than diminish—human judgment.
“Understanding how humans perceive cues and respond under stress is vital to improving performance,” he explained.
The panel closed on a forward-looking note, urging that human factors education begin long before engineers and pilots reach industry. Bromfield confirmed that the Specialist Group is launching a member survey to prioritise emerging areas such as AI, neurodiversity, and mental well-being.
“Every one of us has responsibility for safety,” Nelson reminded the audience. “And every one of us has responsibility for each other.”


