Jeb Bladine: AI gives partial thumbs-up to op-ed commentary
This week’s column is built around information from artificial intelligence (AI) — intertwined (hopefully) with splashes of human intelligence (“HI”). In today’s unprecedented technological eruption, it’s becoming more and more difficult to tell the difference.
We’re all in the midst of that eruption. AI gushes from every Google search, every question to Alexa, Siri or Bixby. We’re surrounded by an exponential explosion of information, images, research, analysis, art, scientific knowledge and so much more coming from large language model, generative AI and conversational AI platforms.
That explosion is scary, to say the least. But meanwhile, here’s one small journalistic example of AI in use:
I asked ChatGPT to analyze today’s op-ed commentary in Viewpoints by Mel Gurtov, Portland State University professor emeritus. I fed AI the text of Gurtov’s “wag the dog” hypothesis and asked: “What do you think about the accuracy of the presented facts, legitimacy of the reasoning, and overall fairness of the commentary?”
Chat “thought” for 190 seconds — a fairly long contemplation for AI — before responding with a far-reaching, 1,700-word analysis addressing all three areas of the question. (You might consider reading Gurtov’s article first, then return here to compare your interpretation with Chat’s conclusions.)
Briefly, those Chat conclusions were: Most checkable facts are “either accurate or at least grounded in real reporting;” interpreting the Venezuela situation as a wag the dog distraction “is plausible but ultimately speculative;” and the article leans “pretty hard into one very hostile reading of Trump’s motives without giving much space to alternative explanation.”
Though one-sided, said Chat, it’s “within the normal bounds of an opinion column … not a deception piece.”
Perhaps the greater interest, for me, was reading – and saving for future reference – Chat’s “thinking process” related to the questions. One of those Chat reflections involved acknowledgement of content that was “considered real in November 2025 (but) which is different from my June 2024 knowledge. I need to treat the information from the web environment as valid.”
Topping all of that, however, was Chat’s corresponding list of about 175 online sources for its lengthy review, all linked for quick, easy access. Thus, the growing journalistic conundrum: If writing a book on the subject, take two weeks off to read and analyze all those sources, and many more; if writing a weekly newspaper column on deadline, come up with a clever way to help others find their own research sources.
Lately, I’ve had several deep-dive exchanges, discussions and debates with ChatGPT about journalistic integrity and copyright protection. Here are just a few things “we” agreed on:
Exact quoting of an AI response without attribution is plagiarism, but it isn’t copyright infringement because machine creations have no copyright protection. (But beware the possibility that AI produces content or images that are “too close” to copyrighted material.)
Meanwhile, multiple major lawsuits are testing whether it is a copyright infringement for AI to download copyrighted text/images for use in “training AI” for its exchanges with users. Stay tuned, no doubt, for more on that.
Jeb Bladine can be reached at jbladine@newsregister.com or 503-687-1223.