Using AI Ethically

By Hugh Miller and Casey Bukro

Ethics AdviceLine for Journalists

Brian, a freelance journalist, called AdviceLine with a timely and hot-button question: How far should journalists go in using artificial intelligence bots like ChatGPT — an ethical and legal quagmire still taking shape?

Transformative technology like artificial intelligence often arrives before its consequences and potential are fully understood or foreseen.

Artificial intelligence did not just arrive in the world; it exploded into use. It became an academic discipline in 1956, just 69 years ago. Yet by January 2023, ChatGPT had become the fastest-growing consumer software application in history, gaining more than 100 million users in two months.

Phenomenon

It’s an outsized technological phenomenon that is challenging human understanding, given recent reports that scientists are not sure exactly how AI works or how it makes decisions. These systems appear to be thinking for themselves in ways scientists do not understand. Some even believe AI could cause human extinction.

However, most AI applications being rolled out for commercial use today, like ChatGPT, are “large language models” (LLMs): programs trained on vast amounts of data that use prediction algorithms to generate the text and images most likely to satisfy a user’s query.

(How that training data was acquired, and the astounding amount of computing power and electrical energy needed to process it, are ethical issues in themselves.)

Higher order tasks

They are not what is called “artificial general intelligence” (AGI): systems that would perform higher-order human cognitive tasks.

What is also significant about such LLMs is that they are not “conscious” in any sense. They are not subjects, though they may employ the first-person “I” in their responses to please their prompters, and they have no access to an objective world beyond the data they have been trained on.

They do not understand, or think, or infer, or reason as intelligent humans do, at least not yet. In essence, they are extremely sophisticated versions of the autocorrect function we are already familiar with in other applications, with many of the same limitations.

Hallucinations

Since these LLMs have no access to reality, they are prone to “hallucinations”: making up plausible-seeming outputs that bear no relation to actual facts. Their algorithms are built to generate merely plausible answers, not true ones.

Against this background, people like Brian are trying to understand how to use this impressive innovation in their everyday work. Artificial intelligence is described as a tool for journalists. Brian asks some down-to-earth questions:

“Would it be ethical to use an AI bot like ChatGPT in writing articles, as long as I confined its use to checking spelling and grammar, making suggestions for alternative phrasing, and ensuring the piece conformed to the AP Stylebook, but not for generating content, and if I checked it afterwards before submitting it? And should I disclose its use?”

Beginnings in 2001

Those questions came to Hugh Miller, a veteran AdviceLine ethicist. Since its founding in 2001, AdviceLine has never simply dished out answers to complicated questions.

Its advisors engage callers in a conversation intended to encourage journalists to think through the ethical issues involved in their dilemma, and to arrive at a conclusion about what they believe is the most ethical thing to do.

In this 2025 case, Miller did exactly that. Here’s a summary of Miller’s conversation with Brian.

HM: So you are using the bot as, basically, a high-end version of Grammarly?

B: Yes, exactly.

HM: What, exactly, troubles you about such a use, ethically?

B: I’m not sure — it seems controversial, though.

HM: Let me come at that question from another angle. What seems to you to be the harm, to yourself or others, from employing such a tool?

B: Using such tools, undisclosed, might diminish the trust a reader might have in a journalist’s work. And, in some sense, the work the bot does is not “my work,” but work done for me, by the bot.

HM: As to the latter, most word processors have built-in spelling, grammar and composition checkers already. And Microsoft is integrating its own AI bot into its Office software as we speak. All of us who write have used such tools for years, precisely as tools.

B: That’s true.

HM: Problems seem to emerge here if you’re (1) using the bot to do your “legwork” — that is, digging up material you should be finding through your own efforts, training, experience and judgment, which would also expose your work to the bias built into the data sets the bots are trained on — or (2) failing to check the output of the bot and passing on “hallucinations” and other howlers without identifying and excising them. But you say you are doing neither of these things, right?

B: Yes, correct.

HM: If then, you are using this bot as a next-gen editing tool, what harm could come of it?

B: None that I can see.

HM: Nor I.

B: But what about disclosure?

HM: AI use in journalism is not settled ethical ground yet; here, I think, you need to consult your own conscience. I have seen articles with a disclosure statement along the lines of “An AI tool, Gemini, was used in the editing and formatting of this story,” and I’m sure I’ve read many others in which a tool was used but no such disclaimer appeared. If you feel uncomfortable omitting a disclaimer, by all means use one. At the very least, it signals to readers that you consider such disclaimers, and transparency more generally, ethically important enough to mention.

B: That’s a helpful way to think about it, thanks.

Just as scientists struggle to understand how AI thinks, journalists are struggling to find ways to use this technological marvel without letting AI think for them, or letting its mistakes slip into their work.

The record-breaking speed with which AI technology grew is not likely to slow any time soon, according to U.S. Energy Secretary Chris Wright, who recently visited two national laboratories in the Chicago suburbs, Argonne and Fermilab.

Heart of race

Argonne’s new Aurora supercomputer, said Wright, will be at the heart of the race to develop and capitalize on artificial intelligence, according to a report in Crain’s Chicago Business. Likening the race to a second Manhattan Project, which created the atomic bomb, Wright said, “We need to lead in artificial intelligence,” noting the technology’s national security implications.

“We’re at that critical moment” with AI, Wright told Argonne scientists on July 16, predicting that the next three to four years will be the greatest years of scientific achievement of our lifetime.

Argonne’s Aurora computer is among the three most powerful machines in the world, said Crain’s, able to perform a billion-billion calculations a second.

As with all technology, it comes with strings attached. Use it at your peril. Eternal vigilance is the price of good journalism. Artificial intelligence does not change that. Instead, it adds another reason to be vigilant.

*******************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or visit ethicsadvicelineforjournalists.org.

WaPo, A.I. and Ethics

By Casey Bukro

Ethics AdviceLine for Journalists

The news lately has been full of accounts of journalists or media companies accused of acting unethically or taking liberties with the work of others.

Here’s how that shapes up.

The Washington Post publisher, Will Lewis, is accused of offering an NPR media reporter an interview on the condition that the reporter not mention that Lewis was linked to a phone-hacking scandal while working in Britain for Rupert Murdoch’s tabloids.

Lewis also is accused of pressuring the Post’s executive editor to ignore any story that would make the publisher look bad, such as the phone-hacking story. The editor published the story anyway, then resigned, throwing the Post’s newsroom into chaos.

Fuel to the flames

Adding fuel to the flames, another former British journalist linked to questionable reporting practices, Robert Winnett, was hired to be the Post’s next editor. Winnett made a name for himself through undercover investigations and so-called “checkbook journalism,” paying people for information.

Both Lewis and Winnett were engaged in a kind of journalism popular in the United Kingdom but generally shunned in the United States. Now they are leading The Washington Post, most famous for the Watergate exposés that led to President Richard Nixon’s resignation in 1974. The Post’s news staff published a report describing their grievances with Lewis.

American standards

Now that Lewis and Winnett are practicing journalism in the United States, they would be expected to conform to American standards, which are expressed in the Society of Professional Journalists code of ethics.

That code begins with this preamble:

Preamble

Members of the Society of Professional Journalists believe that public enlightenment is the forerunner of justice and the foundation of democracy. Ethical journalism strives to ensure the free exchange of information that is accurate, fair and thorough. An ethical journalist acts with integrity.

You can read the rest of the code here, and decide for yourself if Lewis and Winnett are acting with integrity, which the code says is basic to ethical journalism.

But the New Republic reports that Lewis and Winnett are harbingers of what comes next in American journalism: a British invasion intended to shake things up and get American media out of their economic doldrums.

Uncertainty

“In the midst of the uncertainty,” reports the magazine, “newsroom owners have turned to an unexpected source of expertise on the U.S. media landscape: British journalists.”

The logic is clear: “As the journalism industry bleeds money, a fresh perspective could be just the thing to shake things up and bring some much-needed cash.”

This could also bring a major clash of cultures, considering the history of the British tabloid press. Their journalism ethics differ markedly, the New Republic points out, and “the British tabloid press are notoriously aggressive, unafraid to publish half-truths, purchase scoops, or even toe laws in pursuit of extreme sensationalism.”

In that way, Old Country values are coming to America just as American journalism awakens to new technology.

Artificial intelligence

In a sign of the times, artificial intelligence now is used to generate stories. The phenomenon is so new that the SPJ code of ethics does not yet address it, or the ways it can be unethical.

For example, a German celebrity tabloid published an A.I.-generated exclusive “interview” with Michael Schumacher, the champion German racing driver who was severely injured in a skiing accident in 2013. It contained fabricated quotes presented as real news.

Legal precedent

See how that turned out here. The case now stands as an early legal precedent signaling that such uses of artificial intelligence are unethical and deceptive.

Here’s another artificial intelligence quagmire in the publishing business that is now coming to light as the technology matures.

Creators of ChatGPT and other popular A.I. platforms used published works to “train” the new technologies, like feeding information to a growing child.

A new front

The New York Times sued OpenAI and Microsoft for copyright infringement, which is another way to get into trouble ethically. The suit is seen as a new front in the increasingly intense legal battle over the unauthorized use of published work.

“Defendants seek to free-ride on The Times’s massive investment in its journalism,” the complaint said, accusing OpenAI and Microsoft of “using The Times’s content without payment to create products that substitute for the Times and steal audiences away from it.”

The Times is among a small number of news outlets that have built successful business models from online journalism, while other newspapers and magazines have been crippled by the loss of readers to the internet.

Billions in damages

The defendants, said The Times, should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” The suit also asks the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

A.I. firms depend on journalism, and some publishers have signed lucrative licensing agreements allowing A.I. firms to use their reports. “Accurate, well-written news is one of the most valuable sources” for their chatbots, which “need timely news and facts to get consumers to trust them,” writes Jessica Lessin in The Atlantic. But it may be a deal with the devil, as A.I. firms build products that reduce the need for consumers to click through to the original publishers.

This is one of those moments of technological growing pains, raising concerns about the boundaries of using intellectual property. We’ve seen it before with the advent of broadcast radio, television and digital file-sharing programs.

Time and the courts typically sort it out eventually.

In this ethicscape, a traveler must avoid making blatant blunders, avoid the appearance of making blunders, and avoid blunders that did not exist a short time ago but now must be taken into account.
