AI Insight: Creation or Mechanization?

(Time to Read: 6 mins.)

In this post, I’ll cover the main ideas and issues that have both driven me away from and pulled me towards using AI-based tools for content creation. (If you’re more interested in practical uses than my philosophical ramblings, check out my other post, where I document practical applications of AI-based tools to improve the quality of digital content in eLearning projects.)

Ever since I started reading about Artificial Intelligence in my second year of art college, I’ve been both fascinated by and skeptical of it. Back in the mid-1980s, the idea of AI existing in the real world felt like science fiction, like HAL-9000 in that big famous space movie by Stanley Kubrick. It all seemed rather magical, and potentially scary.

A Keyhole View of the Evolution of AI

In the real world, we’ve come a long way from the determinism of the Expert Systems of the 1980s and from the first neural networks I read about back in the 1990s. Even with the insane amount of technological advancement that’s occurred in the past 30–40 years, I maintain a healthy skepticism about the whole AI endeavour. The term “Artificial Intelligence” itself still bothers me.

This video from IBM does a really good job of describing the evolution of AI and its related sub-disciplines and applications:

“AI, Machine Learning, Deep Learning and Generative AI Explained” (IBM)

AI Can Mean a Lot of Things

Recently, with all the hype and excitement in the consumer marketplace over AI-powered services and devices, I have wondered if many people parse the term as if the computer medium is the “artificial” part but the “intelligence” part is actually kind of genuine. It’s worth remembering that the word “artificial” is the main modifier. The intelligence isn’t artificial merely because it’s being done by a machine; the intelligence itself is the artifice. In my opinion, the so-called “intelligence” in AI is just a new form of mechanical fakery, like those 18th-century clockwork automatons that could simulate writing in script with a quill pen.

Back in my college days, one of my tech-savvy media instructors refused to even use the term “artificial intelligence”, preferring instead to call it “artificial rationale”. He was being precise with his language, and I still appreciate the careful distinction he made. If I were allowed to rename AI, I might consider using the term “simulated intelligence”.

AI research and development has many competing philosophies and areas of specialty. One philosophical axis is “Hard AI” versus “Soft AI” (more commonly known as “strong AI” and “weak AI”). Hard AI seeks to synthesize a model of intelligence based as closely as possible on the function and physiology of the human brain (i.e. an artificial model of human cognition). By comparison, Soft AI is less rigid in its goals, aiming for acceptably human-like end results, regardless of the methodology or design.

None of what we call AI is truly intelligent: it’s just much better and faster at simulating intelligence using rapid calculations performed on masses of data. The increased horsepower of today’s software and hardware platforms makes it possible for AI to perform trillions of calculations per second. But instead of enabling a more intelligent model, isn’t it really just enabling a more rapid version of stupidity? Aren’t we just talking about the quantity and speed of information being processed, versus the quality of something resembling cognition?

To use a simplistic comparison, I think the recent technological improvements in GenAI performance can be equated with the kinds of technical improvements we’ve seen in digital graphics and video. As the spatial resolution of digital imagery has improved from the 1980s up to the present day, the quality of the end product has increasingly been judged against visual realism, or simply taken as being “realistic”. In graphics, convenient photo-realistic representation has often been used as a benchmark of success, similar to how photography has largely replaced illustration for documenting news events in mass media. It’s fast, and looks as good (or even a bit better).

But the outward appearance of realism really just helps us to accept and understand the role that a new medium can play. We tend to put our trust in what we think the truth has historically looked like. Back in the 1960s, Marshall McLuhan wrote about how we’ve tended to accept new media as extensions of ourselves, and in relation to how they extend the role and usefulness of older, familiar media. But maybe that’s just a convenient shortcut for user acceptance. Surface similarities can mask a false equivalence between the new and old tools. The new tool may be very capable of creating similar-looking products, but behind the familiar surface effects there are deeper differences and emergent by-products that the new medium may be bringing to the table. We’re back to “the medium is the message”, once again, per Dr. McLuhan.

Even so-called “AI learning models” that incorporate data from massive sets (like the Internet) are still based on rules, parameters, and training data mediated by human designers, and we know how foolproof our human designs can be (yeah, I’m looking at you, HAL-9000). AI developers already admit that their models come to surprising conclusions and exhibit emergent behaviours that they did not originally predict, and for which the underlying causes remain unknown. While unforeseen results can be a healthy part of scientific experimentation (real, exciting evidence), they can also be worrisome or even dangerous if they negatively affect real users in the public marketplace of open-source or commercial software. How much beta-testing should the general public be exposed to?

What Constitutes Experience or Deep Knowledge?

My cynical expectation is that our robots and AI will eventually run amok and turn on their creators. That’s based on some well-worn expectations about the failures and limitations of human creators, like we saw in the sequel to that big space movie that came out many years later (the one with Roy Scheider, which wasn’t directed by Kubrick).

What we seem to have today are neural networks that develop “confidence” by performing millions of comparisons between concepts and usage, building statistical models of likelihood, similar in concept to the models used for economic forecasting. Those systems don’t develop experience so much as they develop ever-more accurate statistical models to predict acceptable answers to questions, based on evidence of popularity from others’ experiences (real or fictional). GenAI doesn’t know what is true or objectively believable. It only seems to determine what is statistically likely. Looking at it that way, maybe GenAI and Large Language Models are just showing us a distorted, “funhouse mirror” reflection of our own experience.
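To make that distinction concrete, here’s a minimal, hypothetical sketch in Python. Everything in it (the five-sentence “corpus”, the counts, and the next_word_distribution function) is invented purely for illustration; real language models train on billions of documents with far subtler statistics, but the underlying principle of preferring the popular continuation over the true one is the same.

    from collections import Counter

    # A toy "training corpus": the only sentences this model has ever seen.
    # (Entirely made-up data; note that fiction is just as countable as fact.)
    corpus = [
        "the moon is made of rock",
        "the moon is made of rock",
        "the moon is made of cheese",
        "the moon is made of cheese",
        "the moon is made of cheese",
    ]

    def next_word_distribution(prompt):
        """Count which word follows the prompt across the corpus."""
        counts = Counter()
        for sentence in corpus:
            if sentence.startswith(prompt + " "):
                # Tally the first word that follows the prompt.
                counts[sentence[len(prompt) + 1:].split()[0]] += 1
        return counts

    dist = next_word_distribution("the moon is made of")
    print(dist)                       # Counter({'cheese': 3, 'rock': 2})
    print(dist.most_common(1)[0][0])  # 'cheese' -- the popular answer, not the true one

Because “cheese” outnumbers “rock” in this toy corpus, the statistically likely completion is also the factually wrong one: a tidy, small-scale version of that funhouse mirror.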

For me, this all brings to mind the question of what constitutes experience or deep knowledge. I don’t think we understand the mechanics of human cognition well enough to say whether today’s neural networks are even a close equivalent to how we learn or understand things. However, from a surface glance at today’s commonly-seen results, the gap between man and machine rationale does appear to be narrowing (at least if you believe all the breathless hyperbole out there in the marketplace). The Turing Test, Alan Turing’s famous post-WWII acid test to determine whether a machine’s performance was good enough to fool a human observer, has likely now become obsolete. In many cases, today’s GenAI systems are capable of fooling many of us too much of the time, unless we know what to look for. Six fingers on one hand may just be a new anomaly, not the new normal, but it’s still dangerous to trust what you read and watch online. Media literacy and critical-thinking skills are now more important than ever.

The Downside of AI Judgement for Content Creation

So, by now it must be pretty obvious that I have reservations about using GenAI for some aspects of content creation. Even though the accuracy of ChatGPT has improved version after version, AI hallucinations can still pose a big risk to the integrity of your AI-generated content (unless you’re creating surrealist art, or you don’t care about your audience that much).

In AI-generated text, hallucinations can cause unrelated statements to be conflated into convincing alternate “facts” about a history that never happened, or they may encourage you to use software features that simply don’t exist. In AI-generated imagery and video, realistic-looking body parts have been harvested from a wealth of unknown sources and assembled into surreal monstrosities with seven fingers and limbs that blend into the surrounding furniture.

Personal Conclusions

Having aired all my reservations about using AI, I am still becoming a consumer of AI services. The AI-driven content generation tools I’ve seen are helpful and pretty amazing a lot of the time, and the more I use them, the more of my precious time they may free up. But in terms of raw decision-making, idea generation, or judgement, I’m determined to keep my own hand on the tiller of rationale.

Maybe at some point, if the investor- and market-driven AI hype and psychedelic hallucinations die down, we’ll be left looking at ourselves in the mirror, wondering how we can do things better for our real future.

John Love

E. John Love has been CTLR's eLearning Media Developer since 2011. Before working at VCC, John spent over 20 years in the high-tech sector as an art director, graphic designer, web designer, and technical writer. Early in his career, he taught computer graphics courses for the VSB evening program and contributed in front of and behind the camera on two award-winning educational TV series for BC's Knowledge Network. John has a Fine Arts diploma from Emily Carr College of Art + Design (1989). As a student and staff member at ECCAD, he contributed to published research in computer-based visual literacy projects under Dr. Tom Hudson. John continues his active interest in art, technology, and new media. For over 25 years, he's also developed his love of storytelling, blogging about his family and personal history, and competing in local and international fiction contests. He published his first (and so far only) novel in 2009.
