AI Insight: Creation or Mechanization?
In this post, I’ll cover the main ideas and issues that have both driven me away from and pulled me towards using AI-based tools for content creation. (If you’re more interested in practical uses than my philosophical ramblings, check out my other post where I document practical applications of AI-based tools to improve the quality of digital content in elearning projects.)
Ever since I started reading about Artificial Intelligence in my second year of art college, I’ve been both fascinated by and skeptical of it. Back in the mid-1980s, the idea of AI existing in the real world felt like science fiction, like HAL-9000 in that big famous space movie by Stanley Kubrick. It all seemed rather magical, and potentially scary.
A Keyhole View on the Evolution of AI
In the real world, we’ve come a long way from the determinism of the Expert Systems of the 1980s and from the first neural networks I read about back in the 1990s. Even with the insane amount of technological advancement that’s occurred in the past 30–40 years, I maintain a healthy skepticism about the whole AI endeavour. The term “Artificial Intelligence” itself still bothers me.
This video from IBM does a really good job of describing the evolution of AI, and its related sub-disciplines and applications:
AI can mean a lot of things
Recently, with all the hype and excitement in the consumer marketplace over AI-powered services and devices, I have wondered if many people parse the term as if the computer medium is the “artificial” part while the “intelligence” part is genuine. It’s worth remembering that the word “artificial” is the main modifier. It’s not artificial just because it’s being done by a machine. The intelligence itself is still an artifice. In my opinion, the so-called “intelligence” in AI is just a new form of mechanical fakery, like those 18th-century clockwork automatons that could simulate writing in script with a quill pen.
Back in my college days, one of my tech-savvy media instructors refused to even use the term “artificial intelligence”, preferring instead to call it “artificial rationale”. He was being precise with his language, and I still appreciate the careful distinction he made. If I were allowed to rename AI, I might consider using the term “simulated intelligence”.
AI research and development has many competing philosophies and areas of specialty. One philosophical axis is “Hard AI” versus “Soft AI”. Hard AI seeks to synthesize a model of intelligence based as closely as possible on the function and physiology of the human brain (i.e. an artificial model of human cognition). By comparison, Soft AI is less rigid in its goals, aiming for acceptably human-like end results, regardless of the methodology or design.
None of what we call AI is truly intelligent: it has simply become much better and faster at simulating intelligence through rapid calculations performed on masses of data. The increased horsepower of today’s software and hardware platforms makes it possible for AI to perform trillions of calculations per second. But instead of enabling a more intelligent model, isn’t it really just enabling a more rapid version of stupidity? Aren’t we just talking about the quantity and speed of information being processed, versus the quality of something resembling cognition?
To use a simplistic comparison, I think that the recent technological improvements in GenAI performance can be equated with the kinds of technical improvements we’ve seen in digital graphics and video. As the spatial resolution of digital imagery has improved from the 1980s up to the present day, the quality of the end product has increasingly been judged against visual realism, or simply taken as being “realistic”. In graphics, convenient photo-realistic representation has often been used as a benchmark of success, much as photography has largely replaced illustration for documenting news events in mass media. It’s fast, and it looks as good (or even a bit better).
But the outward appearance of realism really just helps us to accept and understand the role that a new medium can play. We tend to put our trust in what we think the truth has historically looked like. Back in the 1960s, Marshall McLuhan wrote about how we’ve tended to accept new media as extensions of ourselves, and in relation to how they extend the role and usefulness of older, familiar media. But maybe that’s just a convenient shortcut for user acceptance. Surface similarities can mask a false equivalence between the new and old tools. The new tool may be very capable of creating similar-looking products, but behind the familiar surface effects there are deeper differences and emergent by-products that the new medium may be bringing to the table. We’re back to “the medium is the message”, once again, per Dr. McLuhan.
Even so-called “AI learning models” that incorporate data from massive sets (like the Internet) are still based on rules, parameters, and training data mediated by human designers, and we know how foolproof our human designs can be (yeah, I’m looking at you, HAL-9000). AI developers already admit that their models come to surprising conclusions and exhibit emergent behaviours that they did not originally predict, and for which the underlying causes remain unknown. While unforeseen results can be a healthy part of scientific experimentation (real, exciting evidence), they can also be worrisome, or even dangerous, if they negatively affect real users in the public marketplace of open-source or commercial software. How much beta-testing should the general public be exposed to?
What constitutes experience or deep knowledge?
My cynical expectation is that our robots and AI will eventually run amok and turn on their creators. That’s based on some well-worn expectations about the failures and limitations of human creators, like we saw in the sequel to that big space movie, which came out many years later (the one with Roy Scheider, not directed by Kubrick).
What we seem to have today are neural networks that develop “confidence” from performing millions of comparisons between concepts and usage, developing statistical models of likelihood, similar in concept to the models used for economic forecasting. Those systems don’t develop experience so much as they develop ever-more accurate statistical models to predict acceptable answers to questions, based on evidence of popularity from others’ experiences (real or fictional). GenAI doesn’t know what is true or objectively believable. It only seems to determine what is statistically likely. Looking at it that way, maybe GenAI and Large Language Models are just showing us a distorted, “funhouse mirror” reflection of our own experience.
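To make that point a bit more concrete, here’s a minimal sketch (my own toy illustration in Python, not anything taken from a real LLM codebase) of a bigram model that “predicts” the next word purely from frequency counts. It has no notion of truth or meaning; it only knows which word has most often followed the previous one in its tiny training text.

```python
from collections import Counter, defaultdict

# Toy "training data": the model can only ever echo patterns found here.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Count how often each word follows each other word (a bigram table).
bigram_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[prev_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word -- no notion of truth involved."""
    followers = bigram_counts.get(word)
    if not followers:
        return None  # the model has literally nothing to say
    return followers.most_common(1)[0][0]

# Generate a short, plausible-looking sentence by always picking the likeliest word.
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # fluent-looking output such as "the cat sat on the cat"
```

Real LLMs work with vastly larger vocabularies, longer contexts, and learned weights rather than raw counts, but the underlying move is conceptually the same: pick what is statistically likely, not what is true.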
For me, this all brings to mind the question of what constitutes experience or deep knowledge. I don’t think we understand the mechanics of human cognition well enough to say if today’s neural networks are even a close equivalent to how we learn or understand things. However, from a surface glance at today’s commonly-seen results, the gap between man and machine rationale does appear to be narrowing (at least if you believe all the breathless hyperbole out there in the marketplace). The Turing Test, Alan Turing’s famous post-WWII acid test for determining whether an AI’s performance was good enough to fool a human observer, has likely become obsolete. Today’s GenAI systems are capable of fooling many of us much of the time, unless we know what to look for. Six fingers on one hand may just be a passing anomaly rather than the new normal, but it’s still dangerous to trust what you read and watch online. Media literacy and critical thinking skills are now more important than ever.
The Downside of AI Judgement for Content Creation
So, by now it must be pretty obvious that I have reservations about using GenAI for some aspects of content creation. Even though the accuracy of ChatGPT has improved version after version, AI hallucinations can still pose a big risk to the integrity of your AI-generated content (unless you’re creating surrealist art or you don’t care about your audience that much).
In AI-generated text, hallucinations can cause unrelated statements to be conflated into convincing alternate “facts” about a history that never happened, or they may encourage you to use software features that simply don’t exist. In AI-generated imagery and video, realistic-looking body parts have been harvested from a wealth of unknown sources and assembled into surreal monstrosities with seven fingers and limbs that blend into the surrounding furniture.
Personal Conclusions
Having laid out all my reservations about using AI, I am still becoming a consumer of AI services. The AI-driven content generation tools I’ve seen are helpful and pretty amazing a lot of the time, and the more I use them, the more of my precious time they may free up. But in terms of raw decision-making, idea generation, or judgement, I’m determined to keep my own hand on the tiller of rationale.
Maybe at some point, if the investor and market-driven AI hype and psychedelic hallucinations die down, we’ll be left looking at ourselves in the mirror, wondering how we can do things better for our real future.