
Davis Journal

Rage Against the Machine

Apr 13, 2023 11:58AM ● By Peri Kinder

First, robots came for assembly line workers. Then they came for agricultural and warehouse jobs. Then a cyborg assassin time-traveled from 2029 to 1984 to kill Sarah Connor. What will artificial intelligence target next? 

If you ask our publisher, journalism is on the robotic chopping block. He recently sat down with our editorial team and told us we’d better get our act together or AI will certainly replace us within five years. 

It’s the plot of every sci-fi movie. “Let’s merge robotic efficiency and human connection to create a utopian workplace.” But then, the robots download a virus and turn into killing machines. It doesn’t end well for humanity. 

But let’s back up a bit. When encyclopedias were created in the 1700s, people were astonished to have so much information at their fingertips. What’s an encyclopedia, you ask? Thanks for asking, young whippersnapper. 

Encyclopedias are books bound in fake leather that weigh the equivalent of a baby hippo. They were like printed versions of Wikipedia that became outdated as soon as they were purchased. They were used as footstools and occasionally as murder weapons. 

They were also used for rampant plagiarism. Teachers often received essays copied straight from Encyclopedia Britannica.

As technology advanced, plagiarism got easier: copying and pasting from any website made it more efficient and much harder to detect. Then along came chatbots, or virtual assistants, like Siri, who learned to answer our stupid questions with a bit of sass. 

Now, journalists are encouraged to use AI to produce copy. ChatGPT launched in November 2022, and millions of people have tried it out, creating everything from poetry to fake news. It’s like a Google search on steroids.

In fact, it’s so good at creating fake news that the media website CNET published AI-generated stories for months before the articles were discovered to be riddled with errors, misinformation and plagiarized material. Oops.

Following my publisher’s orders, I typed a few questions into ChatGPT and immediately ran into a virtual brick wall. 

“How many people are living on Earth?” I asked. ChatGPT replied 7.9 billion but added that its data ended in September 2021. I guess anyone born after that date doesn’t count. 

I asked it to tell me a joke. ChatGPT explained it didn’t have a sense of humor or emotions and didn’t understand jokes. So it could be a Utah legislator. 

So, will AI adapt to create personality, voice, humor and journalistic ethics or will future generations get used to reading pedantic and pretentious articles written by emotionless robots like Tucker Carlson?

Sometimes, the “journalism” churned out by AI is racist, offensive and inappropriate because, and here’s the issue, humans create code for these bots. Fallible, stupid humans who unintentionally create programming that mimics their own limiting beliefs. 

Hamilton Nolan, a writer for In These Times, said, “Journalism is the product of a human mind. If something did not come from a human mind, it is not journalism.”

He said journalism requires accountability: the writer should be able to explain the origins and sources of any story. 

Can AI do that? Will robots request interviews from other robots? When questioned, will AI fall to pieces like HAL in 2001: A Space Odyssey, given the contradictory orders to lie to the crew yet be completely truthful? Pretty much like anything on Twitter.

I’m mixing movie metaphors, but if Sarah Connor’s interactions with the Terminator taught us anything, it’s that we control our own destiny. Can we unite robotic efficiency and humanity? The fate of journalism could hang in the balance.