
There’s an attention economy, and it’s doing about as well as the real one. With social media and streaming platforms providing infinite content to consume, the allegation is that people’s attention spans are dwindling—and not just young people. The result may be a world where longer content is a niche prospect, and where people’s ability to focus on it (and absorb information from it) is impaired.

Where, then, does this leave training and education? While education itself seems unlikely to be under threat, dwindling attention spans are a genuine challenge for teachers and lecturers. Facing that challenge means acknowledging it, and focussing on what might be the best way to combat it: engaging with people in real life, and combining the practical with the personal.

Fighting for attention

There is a widespread feeling that younger people have shorter attention spans. While little clinical evidence is available for how attention spans have actually changed, the anecdotal evidence of teachers and parents lines up with some modern habits. The TikTok format of an endless stream of short videos has taken over almost every social media platform, while many people’s exposure to news is now limited to headlines, still images, or dubiously sourced videos, without ever clicking through to read an article.

The dizzying amount of content means it’s become increasingly tempting to rattle through as much of it as possible, as quickly as possible. One method that’s become normalised among Gen Z and Gen Alpha is watching videos at higher playback speeds. A poll by YouGov and The Economist found that almost a third of Americans aged 18-29 watch videos at more than 1x speed, while some go as far as watching entire movies sped up. Apps such as Headway and Blinkist, meanwhile, promise to compress entire books into digestible summaries, allowing you to ‘read’ a book in just 10 or 15 minutes.

These aren’t products in search of an audience, either. As much as phenomena like ‘BookTok’ show that reading is far from dead, statistics show that younger people are reading much less than older people. A recent study indicated that the number of Americans reading for pleasure has dropped by 40% since the turn of the century—just as AI has arrived to help us quickly summarise information, and influencers boast about ‘extracting value’ by pulling the key points out of long books.

Relying on robots

All of this reflects a general move towards using AI to summarise information, rather than reading it ourselves and drawing our own conclusions. AI assistants are now heavily relied upon for daily tasks, without full consideration or awareness of how often AI can get things wrong. This effect could easily snowball as AI becomes capable of more complex and convincing-sounding answers, while being trained on low-quality content that it has itself produced and which has since been regurgitated onto the internet.

Whether or not this constitutes an existential issue is open to debate. It isn’t impossible to teach critical thinking skills, and there is growing opposition to AI from some quarters. The problem comes if and when AI becomes near-infallible in its output. As long as it continues to regularly make mistakes, it’s possible to argue against it, and for the merits of learning to do things ourselves. But the point at which it improves—or at which people stop being able to identify those mistakes—is the point where people could lose the capacity to perform many tasks themselves. If we surrender every odd job to AI, our very ability to learn and process new information could be affected.

A harbinger of the effects of automation on human skills and decision-making is the aviation industry. While there is a unified front on issues of safety and standards, one thing that still splits opinion is the difference in philosophy between Airbus and Boeing. Boeing has traditionally kept pilots more directly in control, while Airbus has been driven by a ‘fly-by-wire’ approach for decades, where computers interpret many of the inputs made by pilots. This allows the computers to catch mistakes, and prevent pilots from putting the flight at risk. However, it can also be a crutch that leaves them unprepared when those protections disappear.

This was exemplified by the crash of Air France Flight 447 on a trip from Rio de Janeiro to Paris. Ice in a storm blocked the sensors that supply airspeed data to the autopilot. Without this data, the autopilot couldn’t automate everything, and alerted the pilots that it had switched to a reduced mode without certain protections. The sudden array of warnings startled the first officer, who was flying the plane at the time. Instead of identifying the issue before acting, he made a series of erratic inputs that caused the plane to stall, then failed to perform a basic recovery procedure, and the plane crashed into the ocean. Had the pilots simply left the controls alone, the plane would have continued flying, and the sensor error would likely have corrected itself as the weather improved.

The Swiss cheese model

While no air accident is the result of a single point of failure, one of the main factors blamed for the disaster was a lack of recent experience with manual flight. One of the conditions the plane’s systems protected against in normal flight was an ‘upset condition’, where the nose could be raised to the point that the plane would stall. Without this protection in place, there was nothing stopping this from occurring—and having become reliant on this protection, the pilot appears not to have realised that his actions were putting the plane in danger. It did not even occur to him in the moment to take an action that should be drilled into every pilot: to push the nose of the plane down in a stall in order to regain lift, and get it back under control.

This is an example of what’s known as the ‘Swiss cheese model’ in action. The concept is that each layer of redundancy in a system is like a piece of Swiss cheese, with the holes in different places. The more layers of redundancy there are—and the more slices you line up—the lower the chances are of all the holes lining up, and a mistake getting through all of them. When people lose practice or knowledge in a skill due to overreliance on automation, multiple layers of redundancy go out of the window. There’s no redundancy with AI when you lack the skills to identify whether AI has made a mistake.
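The arithmetic behind the model is simple: if the layers are independent, the chance of an error getting through all of them is the product of each layer’s individual ‘miss’ rate. A minimal sketch in Python, with failure rates invented purely for illustration:

```python
# Toy illustration of the Swiss cheese model. The miss rates below are
# invented for the example, and the layers are assumed to be independent.

def breach_probability(miss_rates):
    """Chance that an error slips through every layer of redundancy."""
    p = 1.0
    for rate in miss_rates:
        p *= rate  # an error only gets through if every 'hole' lines up
    return p

# Four layers, each letting an error past 5% of the time:
four_layers = breach_probability([0.05, 0.05, 0.05, 0.05])  # about 6 in a million

# The same system after two layers have eroded through overreliance:
two_layers = breach_probability([0.05, 0.05])  # 1 in 400
```

The numbers are toy values, but the shape of the result is the point: each extra slice multiplies the odds of a breach down, so losing even one layer of redundancy makes a mistake getting through dramatically more likely.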

It’s an extreme example, but it’s illustrative of the dangers of ‘over-automation’. Pilots are among the most rigorously trained professionals on the planet, expected to recognise and manage a variety of conditions with only seconds to make the right decisions. Yet the philosophy of this particular plane (and to some extent airline) had led this flight crew to become complacent, and lose the sharpness they needed to react appropriately when that automation was taken away. The automation in this case (the autopilot) is an extremely useful tool, but it can’t be the only thing we use. In situations where individuals are unable to rely on AI to parse information for them, can they recall that information—and if they let AI do their thinking for them, do they even understand it themselves?

Lessons for trainers

All of this poses interesting questions for training. Training, after all, is about imparting large amounts of information, and helping people to absorb and understand it. If the current trend of truncation continues, and people get worse at absorbing information in their day-to-day lives, training people might get harder. Not only might people’s ability to learn be impaired, their willingness to learn might too. If they can simply get out their phone and ask it a question when they need to, they might wonder why they need to internalise that information at all.

This is a problem that schools are already confronting. Teachers are grappling with the improvements in large language models (LLMs) like ChatGPT, which students are using to generate essays and solve questions. While it may not be difficult to spot the lowest-effort attempts, with good enough prompts it is possible to generate something that could deceive a teacher. In this instance, students are only learning how to get better at using AI: not without its benefits, but a net negative for the rest of their learning.

On the other hand, there are some things that just need to be learned, and some things that AI cannot teach you. Many manual processes might be described by AI, but wouldn’t be practical to learn simply by reading a list of instructions. Other skills simply make more sense to internalise and recall on the spot than to look up every time you need them. Businesses rely on a perception of competence, and staff needing to look things up isn’t likely to be well received by the public!

More than this, though, it’s about helping to explain the shortfalls of AI, and the benefits of education and training. A skilled individual can help to illustrate something in terms that AI is unlikely to manage, finding pertinent examples and using personal anecdotes to enliven a subject. AI also lacks the ability to provide meaningful feedback without being prompted directly. In conversation, a trainer can help to hone your skills, identifying and correcting issues that you might not realise you have.

The very act of training also builds skills that can have a huge amount of utility. The process of learning is one, as is the process of self-improvement, and looking critically at what we’re doing and how to get better. We can improve our conversational skills, feel less overwhelmed in new or stressful situations, and learn routines that become ‘automatisms’, where we perform tasks by muscle memory. None of this is possible with AI, and all of these things are muscles that need to be trained.

None of this is to say that AI has no role in modern businesses, or in modern life. But it needs to be seen as a tool that can augment our work, rather than replace it. We still need to know how to do the things AI does in order to scrutinise how it has done them.

This should mean that training has as much of a future as ever—it’s just incumbent on us to make sure that businesses continue to see the benefits of practical training, and don’t get lost in the cost savings AI can provide.

Develop your employee training programme with Kent Trainers

Working in partnership with you, we provide insight and assistance to help you achieve your development goals. Whether you are looking to gain a better understanding of your training and development gaps, build training plans across multiple teams, or need bespoke training solutions for a particular challenge, we can help identify your options and the solutions available.

Contact us

Mark Fryer

1st November 2025
