AI Insight: AI tools to improve elearning content
This series of articles on Generative Artificial Intelligence covers best practices, projects, and tools in use at Vancouver Community College, and how the Centre for Teaching, Learning, and Research (CTLR) is supporting and guiding the use of GenAI in elearning development.
Even with my cynic’s hat firmly affixed to my skull (see my earlier AI post), I haven’t been able to avoid using GenAI tools to enhance content for our online courses.
Here are some examples where I’ve used AI to generate or enhance content for elearning projects at VCC.
This Persona Does Not Exist
In the last year alone, the quality of synthetically generated images of people has continued to improve markedly. Month after month, it has become harder for me to tell false faces from real ones in online images. It’s a little maddening that this artificial realism is now climbing out of the “uncanny valley.”

In the past, I have used https://thispersondoesnotexist.com to grab realistic images of people’s faces when designing Agile Personas for software design. A Persona is a one-page description of a typical user who would be the beneficiary of a design project. I wanted faces that were realistic enough to induce empathy in the designers and stakeholders, but I didn’t want to use real people’s faces found on the Internet.
Reviving old Videos through Upscaling
I’m not such a big fan of using GenAI for content creation. I much prefer tools where AI’s fumbling fingerprints are harder to notice. For example, I’ve used an AI-driven video upscaling tool at CapCut.com to scale up old educational demonstration videos. To bring a twenty-year-old SD-resolution video (e.g. 480x360 pixels) closer to modern HD resolutions, AI tools essentially introduce new pixels to double the video’s width and height. In this case, the end product becomes 960x720 pixels, which is close to the 720p HD standard, much sharper in detail, and less “noisy” or grainy in appearance.

The results aren’t perfect, but they can be pretty usable. In the upscaled end product, objects with straight edges end up looking fantastically crisp, but human features do not always resemble their original owners (look at mouths and eyes, especially). Lettering in signage and graphical titles also tends to lose its styling.
The AI behind the image upscaling does its best to interpolate patterns of pixels to increase the spatial resolution of the video, but nothing in it actually “knows” if its new pixels are accurately rendering the contour of a person’s mouth or eyes. To the image upscaling AI, an edge is just an edge and it’s all just a bunch of pixels that need transforming.
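To make the geometry concrete, here is a toy Python sketch of the simplest possible 2x upscale. This is emphatically not what CapCut’s AI does (a real AI upscaler learns to interpolate plausible new pixel values rather than copying), but it shows why doubling the width and height turns a 480x360 frame into a 960x720 one:

```python
def upscale_2x(frame):
    """Double the width and height of a frame (a 2D list of pixel
    values) by nearest-neighbour duplication. AI upscalers infer new
    in-between pixel values instead of copying, but the geometry is
    the same: each source pixel becomes a 2x2 block."""
    out = []
    for row in frame:
        wide = [p for p in row for _ in (0, 1)]  # duplicate each pixel horizontally
        out.append(wide)
        out.append(list(wide))                   # duplicate the whole row vertically
    return out

# A hypothetical 2x2 greyscale frame becomes 4x4;
# by the same arithmetic, 480x360 becomes 960x720.
frame = [[10, 20],
         [30, 40]]
big = upscale_2x(frame)
```

Nearest-neighbour copying is exactly why naive upscales look blocky; the “AI” part is in guessing smarter values for those duplicated pixels, which is also where the mangled mouths and eyes come from.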
I’ve now done this for over one hundred Auto Collision and Refinishing (ACR) demo videos at VCC, and I will likely do similar upscaling for at least sixty more. The ACR instructor’s locally made video content is extremely useful to their program, but they determined that shooting new HD versions would take too long, and equivalent videos were not available to license from other vendors or publishers.
Upscaling old videos this way isn’t an ideal solution, but it can be a useful compromise, giving you better-quality video relatively quickly, and buying your instructors and content developers extra time to prepare for the inescapable day when they will need to record new demonstration videos.
Cleaning and Sharpening Audio Narration
AI-driven audio enhancement works along the same lines: it recognizes the patterns of sound that belong to a human narrator so the voice can be made louder and sharper, while background noise is de-emphasized and squelched down. (Improving the clarity of voice narration, whether by recording with better microphones or by using AI-driven audio processing, also tends to make a video’s closed captions more accurate.)
Adobe’s Podcast audio enhancement tool, Enhance Speech, is a free AI filter for cleaning up spoken audio, and its clean-up quality is very good. I’ve used it to create easier-to-hear audio from video recordings taken in loud conference rooms where everyone is talking at the same time, but we really want to hear what the presenter is saying.
Text to Speech: Using Synthetic Narrators
Videos often benefit from spoken narration, but circumstances sometimes make it too difficult or time-consuming to record a live voice.
To educate users about VCC’s WeBWorK math homework platform, I had previously made ten tutorial videos. I originally created them as “silent movies” without any spoken narration, in order to reduce my production time. Months later, however, I decided that those videos really needed spoken narration to fill in the gaps in the narrative and to “humanize” the presentation. The problem was that I was swamped, my office was a noisy place, and booking a quiet room to record audio for all ten videos would have been inconvenient.
I ended up trying out a synthetic text-to-speech tool inside the video editor ClipChamp.com. (This tool may be bundled with Windows 10 or 11.) I used a free account in the web edition, and signed in using my email address and a custom password.
I picked a male voice named “Stefan”, which had good inflection and sounded natural enough for the video content. I’ll let Stefan speak for himself:
I admit that to my ear, Stefan sounds a little bit like a disaffected nephew of Werner Herzog, but the enunciation and inflection of the voice are very good, and the overall audio quality is absolutely pristine. Because the audio is synthetically generated, there is no background noise of any kind between his words. Silence is silence. I think it would be almost impossible to get a live microphone recording with zero sound between the speaker’s words, unless you had a sound-proof booth and a really nice microphone (neither of which we have here).
In another use of this feature, I created spoken narration for a fellow staff member who was nervous about using her own voice for her video narration.
I asked her to time the duration of each slide of information in her video, based on the speed at which she could say it out loud. Then I asked her to send me her written text. Using her text and rough timings, I picked a female voice to recite her words and added the resulting audio track to her video. She got a professional-sounding video narration with no stressful or embarrassing recording session, and she loved the final results.
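The timing step above can also be roughed out arithmetically. This small sketch estimates per-slide narration time from word count, assuming a conversational pace of about 150 words per minute; that figure is a general rule of thumb, not a measured property of any particular synthetic voice, so an out-loud timing pass like my colleague’s is still the better check:

```python
def narration_seconds(text, words_per_minute=150):
    """Estimate how long a narrator (human or synthetic) will take to
    read a passage aloud. 150 wpm is an assumed conversational pace;
    adjust it to taste, or to a sample timing of the actual voice."""
    words = len(text.split())
    return round(words / words_per_minute * 60, 1)

# Hypothetical slide script (16 words): about 6.4 seconds of narration.
slide = ("Click the Assignments tab to see the list of "
         "WeBWorK problem sets currently open to you.")
secs = narration_seconds(slide)
```

An estimate like this gives you a first-pass slide duration, which you can then nudge once you hear the generated track against the video.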