
AI Soul Searching

Image: http://www.cpnet.io

By Hugh Miller and Casey Bukro

Ethics AdviceLine for Journalists

A lot of soul-searching is going on over the ethical use of artificial intelligence in the media, a mind-bending exercise pointing out that a tool expected to improve journalism might replace human journalists and doom news outlets that feed AI the information that makes it work.

Some pontificate. Others strategize over this existential moment.

As often happens when science brings us some astonishingly brilliant new idea, using the new technology reveals a few equally astonishing flaws. AI software models used widely today, for example, cannot reliably and accurately cite and quote their sources. Instead, we get gibberish that looks credible, like crediting a real author for words AI “hallucinated.” Since AI “feeds” on the work of others, usually uncredited, news organizations using AI could be accused of plagiarism.

Nothing quite that complicated came to AdviceLine’s attention when a journalist working for a newspaper in Alaska asked for help with an AI issue more likely to confront journalists every day:

“This is kind of a dumb question,” the journalist began, although most journalists know there is no such thing as a dumb question. “But I’ve always struggled with headlines and now I’m hoping to get some help from AI to write them,” he continued. “How/where do other outlets disclose that just the headline of an article was written by AI?”

An answer

Answering that question was Joseph Mathewson, AdviceLine advisor and a professor at the Medill School of Journalism, Northwestern University, who happened to be a personal friend of the journalist calling for help.

“Thanks for the question!” replied Mathewson. “I haven’t confronted it before, but it seems to me that anything you publish written by AI should be identified as such, including headlines…maybe by a blanket note somewhere in the paper to that effect if it’s more than one.”

A direct response to a direct question is what AdviceLine has provided since it began operating in 2001, long before artificial intelligence became a burning issue in journalism. But it was the kind of question the AdviceLine staff of ethics experts is qualified to answer.

Artificial Intelligence is a journalism riddle, a kind of technology already in use, but not fully understood. Expected to be a solution, it causes problems of a kind never seen before, like hallucinations, defined as information or responses generated by AI that are fabricated, inaccurate or not grounded in fact. That is hardly a useful tool, but it’s already in widespread use.

Job loss

And conflicts over AI can cost a journalist their job, as illustrated by the Suncoast Searchlight, a Florida publication covering Sarasota, Manatee and DeSoto counties.

The publication had four full-time staff reporters and two editors.

In November, all four reporters sent a letter to the nonprofit board of directors accusing their editor-in-chief of using generative AI tools, including ChatGPT, to edit stories and hiding that use from staff, according to a report by Nieman Journalism Lab of the Nieman Foundation for Journalism.

As a result, said the reporters, hallucinated quotes, a reference to a nonexistent state law and other factual inaccuracies were introduced into their story drafts. When they questioned the editor about the edits, they said she did not immediately disclose her use of AI tools but instead contended she made the errors herself.

Breach of trust

Said the reporters: “We fear that there may be extensive undisclosed AI-generated content on our website and have questions about what retroactive disclosure is needed for our readers.” They added that the editor had created a breach of trust between herself and her reporters.

The reporters asked the board of directors, consisting of media executives, journalists and local business people, to intervene. They made several requests: that the organization adopt an AI policy, institute a fact-checking process and conduct an internal audit to identify AI-generated writing that might have been published on the site. They also asked the offending editor-in-chief to promise not to use AI for editing in the future.

Less than 24 hours after the board received the letter, the editor-in-chief and her deputy editor fired one of the reporters who signed it. Clearly, hazards abound when reporters criticize their editors, who prefer to do the criticizing.

Disruptive

AI is proving to be a disruptive technology, although widely used.

A 2024 Associated Press survey found nearly 70 percent of newsroom staffers use the technology for basic tasks such as producing content, gathering information, drafting stories, writing headlines, translating and transcribing interviews. One-fifth said they used AI for multimedia projects, including graphics and videos. The survey covered 292 media representatives from legacy media, public broadcasters and magazines, mostly based in the U.S. and Europe.

Aimee Rinehart, co-author of the survey and the AP’s senior product manager of AI strategy, observed:

“News people have stayed on top of this conversation, which is good because this technology is already presenting significant disruptions to how journalists and newsrooms approach their work and we need everyone to help us figure this technology out for the industry.”

Ethics uneven

Citing the AP survey, Forbes, the American business magazine, headlined: “Newsrooms are already using AI, but ethical considerations are uneven.”

Forbes pointed out that while the news industry’s use of AI is common today, “the question at the heart of the news industry’s mixed feelings about the technology” is whether it is “capable of producing quality results.”

This is oddly reminiscent of football teams that sign rookie quarterbacks to multi-million-dollar contracts, hoping they become champions of the future. Good luck with that. Such hopefuls soon find themselves contending with someone like Dick “Monster of the Midway” Butkus, Chicago Bears linebacker famous for his crushing tackles.

Server farms

The Dick Butkus analogy also applies to the large language models (LLMs) that drive artificial intelligence tools. They are large programs that run on hugely energy-intensive server farms. They just take a huge volume of training data (usually sourced without recompense to the originators) and, in response to a prompt, spit out text that is associated with the prompt topic and reads as grammatical and reasonably well-informed.

Such output has no necessary connection with reality, since the LLMs have none. They rely wholly on their input data and their algorithm – they are, in fact, nothing but these.
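To make the point concrete, here is a minimal sketch in Python (purely illustrative, and not drawn from any newsroom tool) of a “language model” stripped to its essentials: a pile of training text plus a rule for sampling a likely next word. It produces fluent-looking output with no notion of whether that output is true.

    import random
    from collections import defaultdict

    # A toy "language model": nothing but training text plus a sampling rule.
    # Real LLMs use neural networks with billions of parameters, but the
    # limitation described above is the same: no access to facts, only to
    # statistics about which words tend to follow which.
    training_text = (
        "the reporter filed the story the editor checked the story "
        "the editor published the story the reporter checked the facts"
    )

    follows = defaultdict(list)          # word -> words seen after it
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    def generate(prompt_word, length=8):
        """Emit plausible-looking text by repeatedly sampling a likely next word."""
        out = [prompt_word]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            out.append(random.choice(candidates))
        return " ".join(out)

    print(generate("the"))  # e.g. "the editor checked the facts" -- fluent, but fact-free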

They cannot fact-check, since they have no access to facts, only “input data,” which itself may have only a tenuous connection to reality if it’s coming from, say, Fox News, Newsmax or OANN (One America News Network).

No concepts

They cannot conduct interviews, because they cannot tell when an interview subject needs to be pushed on a point, or if he or she is lying. They cannot construct a narrative of events, since they have no understanding of causality or temporal sequence – they have no concepts at all, in fact. And they are subject to “steering” – they can be programmed to exhibit actual biases, as Elon Musk has said he is doing with his X.com AI bot, Grok.

It may be the case that, in the future, an AGI (artificial general intelligence) will be constructed. AGI is the concept of a machine with human-level cognitive abilities that can learn, understand and apply knowledge. Unlike today’s AI, which excels at doing specific jobs, AGI would have versatility, adaptability and common sense, allowing it to transfer learning across different disciplines like medicine, finance or art without being specifically programmed for each. It’s a major goal in AI research, but remains hypothetical. Some will want to prevent it.

LLMs are far from being such a thing, and a true AGI will not be built out of an LLM.

Reshaping newsrooms

Despite AI’s shortcomings, The Poynter Institute for Media Studies points out that it already is reshaping newsroom roles and workflow. In 2024, Poynter introduced a framework to help newsrooms create clear, responsible AI ethics policies – especially for those just beginning to address the role of artificial intelligence in their journalism.

Updated in 2025, Poynter’s AI Ethics Starter Kit helps media organizations define how they will and will not use AI in ways that serve their mission and uphold core journalistic values. It contains a “template for a robust newsroom generative AI policy.”

Near the top of this template is a heading called “transparency,” calling upon journalists using generative AI in a significant way to “document and describe to our audience the tools with specificity in a way that discloses and educates.”

RTDNA guidance

Another major journalism organization, the Radio Television Digital News Association (RTDNA), also offers guidance on the use of artificial intelligence in journalism, pointing out that it has a role in ethical, responsible and truthful journalism.

“However,” says RTDNA, “it should not be used to replace human judgment and critical thinking — essential elements of trusted reporting.”

Getting down to the nitty-gritty, Julie Gerstein and Margaret Sullivan ask “Can AI tools meet journalistic standards?”

Spotty results

“So far, the results are spotty,” they say in the Columbia Journalism Review. AI can crunch numbers at lightning speed and make sense of vast databases.

“But more than two years after the public release of large language models (LLMs), the promise that the media industry might benefit from AI seems unlikely to bear out, or at least not fully.”

Gerstein and Sullivan point out that generative AI tools rely on media companies to feed them accurate and up-to-date information, while at the same time AI products are developing into something like a newsroom competitor that is well-funded, high-volume and sometimes unscrupulous.

Hallucinate

After checking the most common AI software models, Gerstein and Sullivan found that none of them “are able to reliably and accurately cite and quote their sources. These tools commonly ‘hallucinate’ authors and titles. Or they might quote real authors and books, with the content of the quotes invented. The software also fails to cite completely, at times copying text from published sources without attribution. This leaves news organizations open to accusations of plagiarism.”

Whether artificial intelligence babbling can be legally considered plagiarism or copyright infringement remains to be answered by lawsuits filed by the New York Times, the Center for Investigative Reporting and others.

Especially irked, the New York Times accuses OpenAI of trying “to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.” OpenAI created ChatGPT, which the lawsuit alleges has reproduced text from the New York Times archives verbatim for ChatGPT users.

Worrying outcome

Say Gerstein and Sullivan: “One possible – and worrying – outcome of all this is that generative AI tools will put news outlets out of business, ironically diminishing the supply of content available for AI tools to train on.”

This is our strange new world: Technology needs other technologies to survive. One feeds upon the other. In a new twist, Microsoft struck a $16 billion deal with Constellation Energy to buy 100 percent of the power produced by the Three Mile Island power plant once it restarts.

Three Mile Island became world famous in 1979 for an accident that caused the fuel in one of its reactors to overheat and crumble, triggering a mass evacuation of thousands of residents in the Harrisburg, Pa. area. The stricken reactor was closed permanently, but a second power-producing reactor on the site continued to operate for 40 years until 2019.

Nuclear power

Microsoft wants all the power the nuclear plant can produce for its energy-hungry data centers. Its 20-year agreement with Constellation is supported by a $1 billion government loan to Constellation. The plant is expected to resume producing electricity in 2027.

This signals a resurrection of sorts for nuclear energy in the United States, brought on by new and growing power demands in our highly technological society. A similar nuclear comeback around the world, after two decades of stagnation, was declared by the International Energy Agency.

In another odd twist, both nuclear energy and artificial intelligence have been criticized as potentially disastrous for the human race. The nuclear hazards include atomic bombs and the risks of operating nuclear power plants.

Scientists point out that with risks come benefits.

**********************************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.

Using AI Ethically

Image: ece/emory.edu

By Hugh Miller and Casey Bukro

Ethics AdviceLine for Journalists

Brian, a freelance journalist, called AdviceLine with a timely and hot-button question: How far should journalists go in using artificial intelligence bots like ChatGPT — an ethics and legal quagmire still taking shape?

Transformative technology like artificial intelligence often arrives before its consequences and potential are fully understood or foreseen.

Artificial intelligence did not just arrive in the world, it exploded into use. It became an academic discipline in 1956, just 69 years ago. Yet by January 2023, ChatGPT had become the fastest-growing consumer software application in history, gaining more than 100 million users in two months.

Phenomenon

It’s an outsized technological phenomenon that is challenging human understanding, given recent reports that scientists are not sure exactly how AI works or how it makes decisions. These supercomputers appear to be thinking for themselves in ways scientists do not understand. Some even believe AI could cause human extinction.

However, most AI applications being rolled out today for commercial use, like ChatGPT, are termed “large language model (LLM)” programs, which are trained on vast amounts of data, and which use prediction algorithms to generate text and images that seem the most likely to satisfy the requirements of a user’s query.

(How that training data was acquired – and the astounding amount of computing power and electrical energy needed to process it – are ethical issues in themselves.)

Higher order tasks

They are not what are called “artificial general intelligence” (AGI) – systems that would perform higher-order human cognitive tasks.

What is also significant about such LLMs is that they are not “conscious” in any sense. They are not subjects, though they may employ the first-person “I” in their responses to please their prompters; and they have no access to an objective world, other than the data they have been trained on.

They do not understand, or think, or infer, or reason as intelligent humans do – at least, not yet. In essence, they are extremely sophisticated versions of the autocorrect function we are already familiar with in other applications – with many of the same limitations.

Hallucinations

Since these LLMs have no access to reality, they are prone to “hallucinations,” to making up plausible-seeming outputs that bear no relation to actual facts. Their algorithms are built to generate merely plausible answers.

Against this background, people like Brian are trying to understand how to use this impressive innovation in their everyday work tasks. Artificial intelligence is described as a tool for journalists. Brian asks some down-to-earth questions:

“Would it be ethical to use an AI bot like ChatGPT in writing articles, as long as I confined its use to checking spelling and grammar, making suggestions for alternative phrasing, and ensuring the piece conformed to the AP Stylebook, but not for generating content, and if I checked it afterwards before submitting it? And should I disclose its use?”

Beginning in 2001

Those questions came to Hugh Miller, a veteran AdviceLine ethicist. Since AdviceLine began in 2001, its advisors have not simply dished out answers to complicated questions.

Instead, they engage callers in a conversation intended to encourage journalists to think through the ethical issues in their dilemma and to arrive at a conclusion about what the journalist believes is the most ethical thing to do.

In this 2025 case, Miller does exactly that. Here’s a summary of Miller’s conversation with Brian.

HM: So you are using the bot as, basically, a high-end version of Grammarly?

B: Yes, exactly.

HM: What, exactly, troubles you about such a use, ethically?

B: I’m not sure — it seems controversial, though.

HM: Let me come at that question from another angle. What seems to you to be the harm, to yourself or others, from employing such a tool?

B: Using such tools, undisclosed, might diminish the trust a reader might have in a journalist’s work. And, in some sense, the work the bot does is not “my work,” but work done for me, by the bot.

HM: As to the latter, most word processors have built-in spelling, grammar and composition checkers already. And Microsoft is integrating its own AI bot into its Office software as we speak. All of us who write have used such tools for years, precisely as tools.

B: That’s true.

HM: Problems seem to emerge here if you’re (1) using the bot to do your “legwork” — that is, digging up material you should be using your own efforts, training, experience and judgment to find, and avoiding the bias introduced by the data sets the bots are trained on, and (2) failing to check the output of the bot and passing on “hallucinations” and other howlers without identifying and excising them. But you say you are doing neither of these things, right?

B: Yes, correct.

HM: If then, you are using this bot as a next-gen editing tool, what harm could come of it?

B: None that I can see.

HM: Nor I.

B: But what about disclosure?

HM: AI use in journalism is not settled ethical ground yet; I think here you need to consult your own conscience. I have seen some articles with a disclosure statement saying something along the lines of, “An AI tool, Gemini, was used in the editing and formatting of this story,” and I’m sure I’ve read many others in which one was used but which contained no such disclaimer. If you feel uncomfortable not using a disclaimer, by all means use it. At the very least, it might signal to readers that you are someone who thinks such disclaimers, and transparency more generally, are ethically important enough to mention and to keep in mind in one’s reading.

B: That’s a helpful way to think about it, thanks.
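For readers wondering what Brian’s copyediting-only arrangement might look like in practice, here is a minimal sketch, assuming OpenAI’s Python SDK; the model name, prompt wording and helper function are illustrative assumptions, not anything Brian or Miller described.

    from openai import OpenAI  # assumes the openai package and an OPENAI_API_KEY in the environment

    client = OpenAI()

    # Instructions confine the bot to copyediting, mirroring Brian's self-imposed limits.
    COPYEDIT_ONLY = (
        "You are a copy editor. Correct spelling, grammar and AP style, and suggest "
        "alternative phrasing where wording is awkward. Do not add facts, quotes or "
        "any new content. Return the edited text only."
    )

    def copyedit(draft: str) -> str:
        """Ask the bot for copyediting suggestions; the journalist still reviews every change."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice, not an endorsement
            messages=[
                {"role": "system", "content": COPYEDIT_ONLY},
                {"role": "user", "content": draft},
            ],
        )
        return response.choices[0].message.content

    # The human check Miller emphasizes: compare the suggestion against the original
    # before anything is filed, and decide whether a disclosure line should run with it.

The point of the restrictive instructions, and of keeping the helper narrow, is that the bot never generates content; it only marks up a draft the journalist has already written.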

Just as scientists struggle to understand how AI thinks, journalists are struggling to find ways to use this technological marvel without allowing AI to think for them or put mistakes in their work.

The record-breaking speed with which AI technology grew is not likely to slow down any time soon, according to U.S. Energy Secretary Chris Wright, who recently visited two national laboratories located in Chicago suburbs, Argonne and Fermilab.

Heart of race

Argonne’s new Aurora supercomputer, said Wright, will be at the heart of the race to develop and capitalize on artificial intelligence, according to a report in Crain’s Chicago Business. Likening the race to a second Manhattan Project, which created the atomic bomb, Wright said, “we need to lead in artificial intelligence,” which also has national security implications.

“We’re at that critical moment” with AI, Wright told Argonne scientists on July 16, predicting that the next three to four years will be the greatest years of scientific achievement of our lifetime.

Argonne’s Aurora computer is among the three most powerful machines in the world, said Crain’s, able to perform a billion-billion calculations a second.

As with all technology, it comes with strings attached. Use it at your own peril. Eternal vigilance is the cost of good journalism. Artificial intelligence does not change that. Instead, it adds another reason to be vigilant.

*******************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.