By Mark Cooper | FE Practitioner | Dyslexia Advocate | EdTech Pragmatist
I am currently existing in that weird, foggy limbo between Christmas Day and New Year's Day. You know the one—where nobody knows what day of the week it is, the Quality Street tin is just empty wrappers and regret, and everyone is universally tired. 😴
Naturally, as the sugar crash settles, my brain has drifted to the upcoming academic term. And with it, the familiar creeping dread of "The Admin."
This academic year, I have lived a professional double life. In the printshop and the classroom, I am confident, observant, and articulate. I can spot a micro-achievement in a learner with complex needs from across the room. But the moment I sit down in front of a keyboard to document that achievement, I hit an invisible wall.
I have been formally diagnosed with dyslexia. In a sector that lives and dies by evidence tracking—specifically in UK Further Education (FE) and Special Educational Needs (SEN)—this is a friction point that often feels like a physical weight. I knew the value was in my observations, but getting them onto the screen was a battle I was losing.
According to the British Dyslexia Association, around 10% of the UK population is dyslexic. Yet, startlingly, 80% of dyslexic individuals do not disclose this to their employers. Why? Because in a world of compliance, "struggling to write it down" is often mistaken for "struggling to do the job."
But this term, something changed. I didn't just survive the admin; I automated the friction out of it.
Here is the story of how a set of wireless microphones and a custom AI workflow became my biggest productivity game-changer—and why I believe Contextualised Speech-to-Structured-Text (CSST) is the future of inclusive education.
The "Why": A System at Breaking Point
Before we talk about the tech, we have to talk about the context. If you work in SEN or FE, you don't need me to tell you the system is bursting at the seams.
The latest government data is staggering. As of January 2024, there are 576,474 Education, Health and Care (EHC) plans in England—an increase of 11.5% in just one year. That is over half a million young people who need specific, evidenced, statutory support.
Simultaneously, the administrative burden on teachers is reaching critical mass. The 2024 DfE Working Lives of Teachers survey found that full-time leaders are working nearly 57 hours a week, with a massive chunk of that time dedicated not to teaching, but to "general administrative tasks." It is no wonder that Education Support reports 36% of education staff are at risk of clinical depression.
We are drowning in data requirements. We need to track Core 4 targets, EHCP outcomes, safeguarding notes, and soft skill progression. For a neurodiverse educator like me, typing these updates for a roster of learners wasn't just boring—it was exhausting.
I needed a way to let technology handle the sorting, so I could handle the teaching.
The Solution: The "Walk and Talk" Workflow
I purchased a set of wireless lavalier microphones (the kind you clip to your collar) and connected them to a dictation workflow. The hardware cost less than a round of drinks, but the impact was immediate.
The physical act of sitting at a desk was part of my block. By clipping on a mic, I could walk around the empty classroom at the end of a session, tidy up, and simply speak my feedback.
But raw transcription isn't enough. If you’ve ever used standard speech-to-text, you know it results in a "wall of text" filled with "ums," "ahs," and unstructured rambling. You can't upload that to Evidence for Learning (EFL) or send it to a parent.
This is where the AI comes in.
How It Works: Contextualised Speech-to-Structured-Text (CSST)
I developed a workflow built around a custom AI prompt, designed specifically for UK FE and Skills compliance.
1. The Input: I speak my raw observations. "Okay, let's look at Student X. Today was a win. He managed to stay on task for 15 minutes during the woodwork assembly, which beats his target of 10. He used the pillar drill with supervision but set up the clamp independently."
2. The Processing: I run that messy transcript through my custom AI prompt.
3. The Output: The AI parses the speech against an attached list of EHCP outcomes and Core 4 targets.
Here is what the system does for me:
* ✅ Personalisation at Scale: It converts a single block of raw speech into structured, professional written updates.
* ✅ EHCP Alignment: It automatically scans my spoken feedback against the learner's specific targets. It identifies that "clamping independently" maps to Outcome 4: Developing vocational independence.
* ✅ Zero "Hallucinations": I built in strict, non-negotiable rules. The AI is forbidden from making assumptions. If a learner wasn't mentioned in my speech, it clearly flags "No feedback given" rather than inventing progress to fill a box.
* ✅ Accessible Output: The output is pre-formatted in clear, British English at Level 1 readability (short sentences, simple vocabulary), making it perfect for sharing directly with learners and their families.
The result? I can generate termly progress updates that are evidence-backed and grammatically perfect with minimal typing. It keeps the "human" in the loop but removes the admin barrier.
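For readers who want to picture the plumbing, here is a minimal sketch of how a CSST-style step could be wired up. To be clear, this is a simplified illustration rather than my actual prompt or tooling: the model name, the field layout, and the OpenAI-style client are stand-ins, and the system rules are a condensed paraphrase of the constraints described above.

```python
# A simplified CSST-style sketch, not my production prompt or tooling.
# Assumptions: an OpenAI-compatible chat API, a plain-text transcript from the
# dictation app, and each learner's EHCP outcomes / Core 4 targets held as text.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_RULES = """You structure spoken classroom observations for UK FE/SEN evidence tracking.
Non-negotiable rules:
- Use only information stated in the transcript. Never infer or invent progress.
- If a listed learner is not mentioned, output "No feedback given" for them.
- Where the transcript supports it, map each observation to the matching EHCP outcome or Core 4 target.
- Write in clear British English with short sentences, suitable for learners and families."""

def structure_feedback(transcript: str, learners: dict[str, list[str]]) -> str:
    """Turn a raw spoken transcript into structured, outcome-aligned updates."""
    # Flatten each learner's outcomes into a plain-text sheet the model can read.
    outcome_sheet = "\n".join(
        f"{name}:\n" + "\n".join(f"  - {target}" for target in targets)
        for name, targets in learners.items()
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,        # keep the output literal and repeatable
        messages=[
            {"role": "system", "content": SYSTEM_RULES},
            {
                "role": "user",
                "content": f"EHCP outcomes and targets:\n{outcome_sheet}\n\nRaw transcript:\n{transcript}",
            },
        ],
    )
    return response.choices[0].message.content
```

In practice, the attached outcome list does most of the heavy lifting: the more specific the targets you feed in, the less room the model has to improvise.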
The "Twixtmas" Reality Check: It’s Not Magic
Following a recent post on LinkedIn about this breakthrough, I’ve been overwhelmed by the positive feedback. However, in the spirit of transparency (and because I'm currently in that reflective post-Christmas headspace), I need to share the caveats.
We must talk about the constraints. It is not magic; it is a tool. And like any tool—be it a chisel or a chatbot—it has quirks.
Here is what I’m navigating right now:
1. The "Accent" & "Waffle" Factor
Speech technologies are great, but we all have different dialects. The AI sometimes struggles with regional accents or industry-specific jargon. Furthermore, if I go "off on one" and start rambling about the weather in the middle of an observation, the AI has to work hard to sift that out. I have had to train myself to speak more deliberately.
2. The Identity Issue
What happens if I mispronounce a learner's name? Or if I have two learners named "Connor"? The data gets messy.
The Fix: I’ve learned to be hyper-deliberate—using full names and clear diction to ensure the data goes to the right place.
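Speaking deliberately is the real fix, but if you wanted a belt-and-braces check, you could also validate every transcribed name against your class roster before anything gets filed. The sketch below is just an illustration of that idea (the roster is made up); it is not part of my current workflow.

```python
# Illustration only: flag misheard or ambiguous names instead of guessing.
from difflib import get_close_matches

ROSTER = ["Connor Smith", "Connor Jones", "Priya Patel"]  # hypothetical class list

def resolve_learner(spoken_name: str) -> str:
    """Match a transcribed name to exactly one learner on the roster."""
    matches = get_close_matches(spoken_name, ROSTER, n=2, cutoff=0.6)
    if len(matches) != 1:
        # "Connor" matches two learners; an unknown name matches none. Either way,
        # stop and ask the human rather than filing evidence against the wrong learner.
        raise ValueError(f"Cannot uniquely match '{spoken_name}': {matches or 'no candidates'}")
    return matches[0]
```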
3. "Garbage In, Garbage Out" (Crucial)
This is the most important lesson. The AI is only as smart as the person speaking to it.
If I speak into the app and just say: "Well done, good lesson today," the AI will structure that perfectly... but it means absolutely nothing. "Did well" is not data.
For this approach to work, the verbal feedback has to be properly personalised:
* ❌ Generic: "Well done Student X, you did good."
* ✅ Specific: "Well done Student X. You managed to stay on task for 15 minutes, beating your target of 10 minutes. I noticed you completed the task with less staff support than last week."
The AI captures the "15 minutes," the "target met," and the "reduced support" and files it against the correct outcome. You have to give the context to get the tracking.
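To make that concrete, this is roughly the kind of record the specific version can be turned into. The field names here are invented for illustration; they are not an Evidence for Learning export format.

```python
# Illustrative only: field names are invented, not an EFL schema.
observation = {
    "learner": "Student X",
    "evidence": "Stayed on task for 15 minutes during the woodwork assembly.",
    "target": "Stay on task for 10 minutes",
    "target_met": True,   # 15 minutes against a 10-minute target
    "support": "Completed the task with less staff support than last week",
    "mapped_outcome": "Outcome 4: Developing vocational independence",
}
```

The generic "you did good" version gives the model nothing to put in the evidence, target or support fields, which is exactly why it fails.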
Flipping the Script: When Students Use the Mic 🎙️
Perhaps the most profound moment of this journey wasn't about my admin at all. It was about my students.
Last term, my ICT group took charge of our termly newsletter. Many of these students face significant literacy barriers. They had brilliant ideas, incredible humour, and insightful stories to tell, but the mechanical act of writing them down was a wall they couldn't climb. They would stare at a blank Word document, defeated before they began.
So, I handed them the mic.
The Workflow:
* Speech: Students spoke their raw thoughts on a topic (e.g., "The Christmas Trip to the Garden Centre").
* Structure: We used the CSST workflow to restructure that raw speech into a polished article, removing the "ums" and "ahs" but keeping their specific content and tone.
The result was a "win-win-win." The students saw their exact thoughts in print. The barriers were gone. When "I can't write it" becomes "I just published an article," the shift in self-esteem is palpable.
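Technically, this was the same pipeline as the progress updates; only the instructions changed. Something along these lines (again, a paraphrase rather than the exact prompt we used) is enough to repoint the earlier `structure_feedback()` sketch at articles instead of evidence:

```python
# Swap this in as the system prompt for the newsletter use case (illustrative wording).
NEWSLETTER_RULES = """You restructure a student's spoken draft into a short newsletter article.
Rules:
- Keep the student's own content, opinions, and tone of voice.
- Remove fillers ("um", "ah"), repetition, and false starts.
- Do not add facts, jokes, or opinions the student did not say.
- Write in clear British English."""
```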
The Future: Keeping the Human in the Loop
As we move into 2025, the debate around AI in education will intensify. There is a fear that AI will replace the teacher's voice. My experience suggests the opposite: AI allows the teacher's voice to finally be heard clearly, without the interference of administrative fatigue.
But we must remain vigilant.
* No Hallucinations: My prompts are set up with strict constraints: Do not add anything I haven't spoken. It captures my evidence and my assessment. It doesn't generate observations from scratch.
* Oversight: We review everything. Speech-to-text can mishear pronunciations. The human must remain in the loop.
I am really looking forward to contributing to discussions on this topic at NatSpec Peer Exchange Week next month. The potential to reduce teacher burnout while simultaneously improving the quality of evidence for EHCPs is too significant to ignore.
For anyone working in SEN or FE, finding ways to let technology handle the sorting while we handle the teaching is vital. We are just scratching the surface of how this can revolutionise our sector.
Have you experimented with speech-to-text workflows for classroom evidence? Or are you a dyslexic professional who has found a different "hack"?
Let me know in the comments. 👇
#EdTech #Dyslexia #AIinEducation #UKFE #EHCP #Accessibility #Productivity #TeachingTips #HumanInTheLoop