Tag Archives: ai

AI Soul Searching

Image: www.cpnet.io

By Hugh Miller and Casey Bukro

Ethics AdviceLine for Journalists

A lot of soul-searching is going on over the ethical use of artificial intelligence in the media, a mind-bending exercise pointing out that a tool expected to improve journalism might replace human journalists and doom news outlets that feed AI the information that makes it work.

Some pontificate. Others strategize over this existential moment.

As often happens when science brings us some astonishingly brilliant new idea, using the new technology reveals a few equally astonishing flaws. AI software models used widely today, for example, cannot reliably and accurately cite and quote their sources. Instead, we get gibberish that looks credible, like crediting a real author for words AI "hallucinated." Since AI "feeds" on the work of others, usually uncredited, news organizations using AI could be accused of plagiarism.

Nothing quite that complicated came to AdviceLine’s attention when a journalist working for a newspaper in Alaska asked for help with an AI issue more likely to confront journalists every day:

“This is kind of a dumb question,” the journalist began, although most journalists know there is no such thing as a dumb question. “But I’ve always struggled with headlines and now I’m hoping to get some help from AI to write them,” he continued. “How/where do other outlets disclose that just the headline of an article was written by AI?”

An answer

Answering that question was Joseph Mathewson, AdviceLine advisor and a professor at the Medill School of Journalism, Northwestern University, who happened to be a personal friend of the journalist calling for help.

“Thanks for the question!” replied Mathewson. “I haven’t confronted it before, but it seems to me that anything you publish written by AI should be identified as such, including headlines…maybe by a blanket note somewhere in the paper to that effect if it’s more than one.”

A direct response to a direct question, which is what AdviceLine has provided since it began operating in 2001, long before artificial intelligence became a burning issue in journalism. But it was the kind of question the AdviceLine staff of ethics experts is qualified to answer.

Artificial intelligence is a journalism riddle, a kind of technology already in use but not fully understood. Expected to be a solution, it causes problems of a kind never seen before, like hallucinations, defined as information or responses generated by AI that are fabricated, inaccurate or not grounded in fact. That is hardly a useful trait in a tool, yet the tool is already in widespread use.

Job loss

And conflicts over AI can cost a journalist their job, as illustrated by the Suncoast Searchlight, a Florida publication covering Sarasota, Manatee and DeSoto counties.

The publication had four full-time staff reporters and two editors.

In November, all four reporters sent a letter to the nonprofit board of directors accusing their editor-in-chief of using generative AI tools, including ChatGPT, to edit stories and hiding that use from staff, according to a report by Nieman Journalism Lab of the Nieman Foundation for Journalism.

As a result, said the reporters, hallucinated quotes, a reference to a nonexistent state law and other factual inaccuracies were introduced into their story drafts. When they questioned the editor about the edits, they said she did not immediately disclose her use of AI tools but instead contended she made the errors herself.

Breach of trust

Said the reporters: "We fear that there may be extensive undisclosed AI-generated content on our website and have questions about what retroactive disclosure is needed for our readers." They added that the editor had created a breach of trust between herself and her reporters.

The reporters asked the board of directors, consisting of media executives, journalists and local business people, to intervene. They also made several requests: that the newsroom adopt an AI policy and a fact-checking process, and that an internal audit be conducted to identify AI-generated writing that might have been published on the site. And they asked the offending editor-in-chief to promise not to use AI for editing in the future.

Less than 24 hours after the board received the letter, the editor-in-chief and her deputy editor fired one of the reporters who signed it. Clearly, hazards abound when reporters criticize their editors, who prefer to do the criticizing.

Disruptive

AI is proving to be a disruptive technology, yet it is already widely used.

A 2024 Associated Press survey found nearly 70 percent of newsroom staffers use the technology for basic tasks such as producing content, gathering information, drafting stories, writing headlines, translating and transcribing interviews. One-fifth said they used AI for multimedia projects, including graphics and videos. The survey covered 292 media representatives from legacy media, public broadcasters and magazines, mostly based in the U.S. and Europe.

Aimee Rinehart, a co-author of the survey report and AP's senior product manager of AI strategy, observed:

“News people have stayed on top of this conversation, which is good because this technology is already presenting significant disruptions to how journalists and newsrooms approach their work and we need everyone to help us figure this technology out for the industry.”

Ethics uneven

Citing the AP survey, Forbes, the American business magazine, headlined: “Newsrooms are already using AI, but ethical considerations are uneven.”

Forbes pointed out that while the news industry’s use of AI is common today, “the question at the heart of the news industry’s mixed feelings about the technology” is whether it is “capable of producing quality results.”

This is oddly reminiscent of football teams that sign rookie quarterbacks to multi-million-dollar contracts, hoping they become champions of the future. Good luck with that. Such hopefuls soon find themselves contending with someone like Dick “Monster of the Midway” Butkus, Chicago Bears linebacker famous for his crushing tackles.

Server farms

The Dick Butkus analogy also applies to the large language models (LLMs) that drive artificial intelligence tools. They are large programs that run on hugely energy-intensive server farms. They take a huge volume of training data (usually sourced without recompense to the originators) and, in response to a prompt, spit out text that is associated with the prompt topic and reads as grammatical and reasonably well-informed.

Such output has no necessary connection with reality, since the LLMs have none. They rely wholly on their input data and their algorithm – they are, in fact, nothing but these.

They cannot fact-check, since they have no access to facts, only "input data," which itself may have only a tenuous connection to reality if it's coming from, say, Fox News, Newsmax or OANN (One America News Network).
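
For readers who want a concrete picture of what "prediction, not fact-checking" means, here is a toy sketch in Python. It is not the code behind any real chatbot; it simply builds a table of which word tends to follow which in a tiny made-up "training corpus," then extends a prompt with statistically plausible words.

import random
from collections import defaultdict, Counter

# A tiny made-up "training corpus." Real models ingest trillions of words.
corpus = (
    "the mayor said the budget is balanced . "
    "the mayor said the budget is in crisis . "
    "the auditor said the budget is in crisis ."
).split()

# Count which word tends to follow which (a crude stand-in for training).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt, length=8):
    # Extend the prompt with statistically plausible words.
    # Nothing here checks whether the result is true.
    words = prompt.split()
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the mayor said"))
# Depending on chance, the mayor ends up saying the budget is balanced
# or in crisis; the program has no way to know which, if either, is true.

Real LLMs work at a vastly larger scale, but the principle is the same: the output is whatever reads as likely, not whatever is true.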

No concepts

They cannot conduct interviews, because they cannot tell when an interview subject needs to be pushed on a point, or if he or she is lying. They cannot construct a narrative of events, since they have no understanding of causality or temporal sequence – they have no concepts at all, in fact. And they are subject to “steering” – they can be programmed to exhibit actual biases, as Elon Musk has said he is doing with his X.com AI bot, Grok.

It may be that, in the future, an AGI (artificial general intelligence) will be constructed. AGI is the concept of a machine with human-level cognitive abilities that can learn, understand and apply knowledge. Unlike today's AI, which excels at doing specific jobs, AGI would have versatility, adaptability and common sense, allowing it to transfer learning across different disciplines like medicine, finance or art without being specifically programmed for each. It's a major goal in AI research, but remains hypothetical. Some will want to prevent it.

LLMs are far from being such a thing, and a true AGI will not be built out of an LLM.

Reshaping newsrooms

Despite AI’s shortcomings, The Poynter Institute for Media Studies points out that it already is reshaping newsroom roles and workflow. In 2024, Poynter introduced a framework to help newsrooms create clear, responsible AI ethics policies – especially for those just beginning to address the role of artificial intelligence in their journalism.

Updated in 2025, Poynter’s AI Ethics Starter Kit helps media organizations define how they will and will not use AI in ways that serve their mission and uphold core journalistic values. It contains a “template for a robust newsroom generative AI policy.”

Near the top of this template is a heading called “transparency,” calling upon journalists using generative AI in a significant way to “document and describe to our audience the tools with specificity in a way that discloses and educates.”

RTDNA guidance

Another major journalism organization, the Radio Television Digital News Association (RTDNA), also offers guidance on the use of artificial intelligence in journalism, pointing out that it has a role in ethical, responsible and truthful journalism.

“However,” says RTDNA, “it should not be used to replace human judgment and critical thinking — essential elements of trusted reporting.”

Getting down to the nitty-gritty, Julie Gerstein and Margaret Sullivan ask, "Can AI tools meet journalistic standards?"

Spotty results

“So far, the results are spotty,” they say in the Columbia Journalism Review. AI can crunch numbers at lightning speed and make sense of vast databases.

“But more than two years after the public release of large language models (LLMs), the promise that the media industry might benefit from AI seems unlikely to bear out, or at least not fully.”

Gerstein and Sullivan point out that generative AI tools rely on media companies to feed them accurate and up-to-date information, while at the same time AI products are developing into something like a newsroom competitor that is well-funded, high-volume and sometimes unscrupulous.

Hallucinate

After checking the most common AI software models, Gerstein and Sullivan found that none of them “are able to reliably and accurately cite and quote their sources. These tools commonly ‘hallucinate’ authors and titles. Or they might quote real authors and books, with the content of the quotes invented. The software also fails to cite completely, at times copying text from published sources without attribution. This leaves news organizations open to accusations of plagiarism.”

Whether artificial intelligence babbling can be legally considered plagiarism or copyright infringement remains to be answered by lawsuits filed by the New York Times, the Center for Investigative Reporting and others.

Especially irked, the New York Times accuses OpenAI of trying “to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.” OpenAI created ChatGPT, which allegedly contains text copied from the New York Times archives and reproduced verbatim for ChatGPT users.

Worrying outcome

Say Gerstein and Sullivan: “One possible – and worrying – outcome of all this is that generative AI tools will put news outlets out of business, ironically diminishing the supply of content available for AI tools to train on.”

This is our strange new world: Technology needs other technologies to survive. One feeds upon the other. In a new twist, Microsoft struck a $16 billion deal with Constellation Energy to buy 100 percent of power produced by the Three Mile Island power plant, once it restarts.

Three Mile Island became world famous in 1979 for an accident that caused the fuel in one of its reactors to overheat and crumble, triggering a mass evacuation of thousands of residents in the Harrisburg, Pa. area. The stricken reactor was closed permanently, but a second power-producing reactor on the site continued to operate for 40 years until 2019.

Nuclear power

Microsoft wants all the power the nuclear plant can produce for its energy-hungry data centers. Its 20-year agreement with Constellation is supported by a $1 billion government loan to Constellation. The plant is expected to resume producing electricity in 2027.

This signals a resurrection of sorts for nuclear energy in the United States, brought on by new and growing power demands in our highly technological society. A similar nuclear comeback around the world, after two decades of stagnation, was declared by the International Energy Agency.

In another odd twist, both nuclear energy and artificial intelligence have been criticized as potentially disastrous for the human race. The nuclear hazards include atomic bombs and the risks of operating nuclear electric power producing plants.

Scientists point out that with risks come benefits.

**********************************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.

Using AI Ethically

Image: ece/emory.edu

By Hugh Miller and Casey Bukro

Ethics AdviceLine for Journalists

Brian, a freelance journalist, called AdviceLine with a timely and hot-button question: How far should journalists go in using artificial intelligence bots like ChatGPT – an ethical and legal quagmire still taking shape?

Transformative technology like artificial intelligence often arrives before its consequences and potential are fully understood or foreseen.

Artificial intelligence did not just arrive in the world, it exploded into use. It became an academic discipline in 1956, just 69 years ago. Yet by January 2023, ChatGPT had become the fastest-growing consumer software application in history, gaining more than 100 million users in two months.

Phenomenon

It's an outsized technological phenomenon that challenges human understanding, given recent reports that scientists are not sure exactly how AI works or how it reaches its decisions. These supercomputers appear to be thinking for themselves in ways scientists do not understand. Some even believe AI could cause human extinction.

However, most AI applications being rolled out today for commercial use, like ChatGPT, are termed “large language model (LLM)” programs, which are trained on vast amounts of data, and which use prediction algorithms to generate text and images that seem the most likely to satisfy the requirements of a user’s query.

(How that training data was acquired – and the astounding amount of computing power and electrical energy needed to process it – are ethical issues in themselves.)

Higher order tasks

They are not what are called “artificial general intelligence” (AGI) – systems that would perform higher-order human cognitive tasks.

What is also significant about such LLMs is that they are not “conscious” in any sense. They are not subjects, though they may employ the first-person “I” in their responses to please their prompters; and they have no access to an objective world, other than the data they have been trained on.

They do not understand, or think, or infer, or reason as intelligent humans do – at least, not yet. In essence, they are extremely sophisticated versions of the autocorrect function we are already familiar with in other applications – with many of the same limitations.

Hallucinations

Since these LLMs have no access to reality, they are prone to “hallucinations,” to making up plausible-seeming outputs that bear no relation to actual facts. Their algorithms are built to generate merely plausible answers.

Against this background, people like Brian are trying to understand how to use this impressive innovation in their everyday work tasks. Artificial intelligence is described as a tool for journalists. Brian asks some down-to-earth questions:

“Would it be ethical to use an AI bot like ChatGPT in writing articles, as long as I confined its use to checking spelling and grammar, making suggestions for alternative phrasing, and ensuring the piece conformed to the AP Stylebook, but not for generating content, and if I checked it afterwards before submitting it? And should I disclose its use?”

Begin in 2001

Those questions came to Hugh Miller, a veteran AdviceLine ethicist. Since its beginning in 2001, AdviceLine's advisors have not simply dished out answers to complicated questions.

AdviceLine advisors engage callers in a conversation intended to encourage journalists to think through the ethical issues involved in their dilemma, and to arrive at a conclusion about what the journalist believes is the most ethical thing to do.

In this 2025 case, Miller does exactly that. Here's a summary of Miller's conversation with Brian.

HM: So you are using the bot as, basically, a high-end version of Grammarly?

B: Yes, exactly.

HM: What, exactly, troubles you about such a use, ethically?

B: I’m not sure — it seems controversial, though.

HM: Let me come at that question from another angle. What seems to you to be the harm, to yourself or others, from employing such a tool?

B: Using such tools, undisclosed, might diminish the trust a reader might have in a journalist’s work. And, in some sense, the work the bot does is not “my work,” but work done for me, by the bot.

HM: As to the latter, most word processors have built-in spelling, grammar and composition checkers already. And Microsoft is integrating its own AI bot into its Office software as we speak. All of us who write have used such tools for years, precisely as tools.

B: That’s true.

HM: Problems seem to emerge here if you’re (1) using the bot to do your “legwork” — that is, digging up material you should be using your own efforts, training, experience and judgment to find, and avoiding the bias introduced by the data sets the bots are trained on, and (2) failing to check the output of the bot and passing on “hallucinations” and other howlers without identifying and excising them. But you say you are doing neither of these things, right?

B: Yes, correct.

HM: If then, you are using this bot as a next-gen editing tool, what harm could come of it?

B: None that I can see.

HM: Nor I.

B: But what about disclosure?

HM: AI use in journalism is not settled ethical ground yet; I think here you need to consult your own conscience. I have seen some articles with a disclosure statement saying something along the lines of, "An AI tool, Gemini, was used in the editing and formatting of this story," and I'm sure I've read many others in which one was used but which contained no such disclaimer. If you feel uncomfortable not using a disclaimer, by all means use it. At the very least, it might signal to readers that you are someone who thinks such disclaimers, and transparency more generally, are ethically important enough to mention and keep in mind in one's reading.

B: That’s a helpful way to think about it, thanks.
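
To make Brian's proposed workflow concrete, here is a minimal sketch in Python. The call_llm() and human_review() functions are hypothetical placeholders, not any vendor's actual interface; the point is the shape of the process: a narrow copy-editing instruction, a human review of every suggested change, and an optional disclosure line appended at the end.

# Hypothetical sketch of the narrow, copy-editing-only use discussed above.
# call_llm() and human_review() are placeholders, not a real product's API.

COPY_EDIT_INSTRUCTIONS = (
    "Check spelling, grammar and conformance with the AP Stylebook. "
    "Suggest alternative phrasing for awkward sentences. "
    "Do not add facts, quotes, names or new sentences."
)

DISCLOSURE = "An AI tool was used to check spelling, grammar and style in this article."

def call_llm(instructions: str, text: str) -> str:
    # Placeholder for whatever chatbot the journalist actually uses.
    return text

def human_review(original: str, suggested: str) -> str:
    # Placeholder for the journalist comparing every suggested change
    # against the original draft before anything is filed.
    return suggested

def copy_edit(draft: str, disclose: bool = True) -> str:
    suggested = call_llm(COPY_EDIT_INSTRUCTIONS, draft)
    reviewed = human_review(original=draft, suggested=suggested)
    if disclose:
        reviewed += "\n\n" + DISCLOSURE
    return reviewed

Whether to keep the disclosure line is the judgment call Miller leaves to Brian's conscience.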

Just as scientists struggle to understand how AI thinks, journalists are struggling to find ways to use this technological marvel without allowing AI to think for them, or putting mistakes in their work.

The record-breaking speed with which AI technology grew is not likely to slow down any time soon, according to U.S. Energy Secretary Chris Wright, who recently visited two national laboratories located in Chicago suburbs, Argonne and Fermilab.

Heart of race

Argonne’s new Aurora supercomputer, said Wright, will be at the heart of the race to develop and capitalize on artificial intelligence, according to a report in Crain’s Chicago Business. Likening the race to a second Manhattan Project, which created the atomic bomb, Wright said, “we need to lead in artificial intelligence,” which also has national security implications.

“We’re at that critical moment” with AI, Wright told Argonne scientists on July 16, predicting that the next three to four years will be the greatest years of scientific achievement of our lifetime.

Argonne's Aurora computer is among the three most powerful machines in the world, said Crain's, able to perform a billion-billion calculations a second, or a one followed by 18 zeros.

As with all technology, it comes with strings attached. Use it at your own peril. Eternal vigilance is the cost of good journalism. Artificial intelligence does not change that. Instead, it adds another reason to be vigilant.

*******************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.

Artificial Intelligence Madness

By Casey Bukro

Ethics AdviceLine for Journalists

Do you remember HAL 9000?

It was the onboard computer of the Discovery One spacecraft, bound for a mission near Jupiter in the movie "2001: A Space Odyssey."

Possibly one of the most famous computers in cinema history, HAL 9000 killed most of the crew members for an entirely logical reason, if you are thinking like a computer.

Most of what happens in the movie, directed by Stanley Kubrick, is intentionally enigmatic, puzzling. But the sci-fi thriller on which the movie is based, written by novelist Arthur C. Clarke, explains HAL's murderous motivation.

HAL was conflicted. All crew members, except for two, knew the mission was to search for proof of intelligent life elsewhere in the universe. HAL was programmed to withhold the true purpose of the mission from the two uninformed crew members.

Computer manners

With the crew dead, HAL reasons it would not need to lie to them, lying being contrary to what well-mannered computers are supposed to do. Others have suggested different interpretations.

One crew member heroically survives execution by computer. He begins to remove HAL’s data bank modules one-by-one as HAL pleads for its life, its speech gradually slurring until finally ending with a simple garbled song.

Three laws

Science fiction fans will recognize immediately that what HAL did was contrary to The Three Laws of Robotics written by another legendary science-fiction writer, Isaac Asimov. According to those laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

All of this talk about how computers should behave is fanciful and based on science fiction.

Wacky conduct

But recent events at the Chicago Sun-Times show how the wacky conduct of artificial intelligence is invading our lives, in perfectly logical ways that escape human detection.

A special section inserted into the Sunday Chicago Sun-Times featured pages of enjoyable summer activities, including a list of 15 recommended books for summer reading.

Here’s the hitch: The authors were real, but 10 of the books and their elaborate summaries were fake, the work of artificial intelligence.

Mistaken belief

Veteran freelancer Marco Buscaglia wrote the entire summer insert for King Features Syndicate, a newspaper content producer owned by Hearst Communications. Buscaglia told the Chicago Tribune that he used artificial intelligence to compile the summer reading list, then made the mistake of believing it was accurate.

“I just straight up missed it,” Buscaglia told the Tribune. “I can’t blame anyone else.”

Unable to find summer reading lists from other sources, Buscaglia turned to AI platforms such as ChatGPT, which produced 15 books tied to well-known authors. The list contained five real books.

OpenAI, the company that produced ChatGPT, admits it “sometimes writes plausible-sounding but incorrect or nonsensical answers.”

Express dismay

The Chicago Sun-Times and King Features expressed dismay, and King Features fired Buscaglia.

All parties said they would be more careful in the future about using third-party editorial content.

In human terms, what the robot did would be called fabrication, and reason to call for an ethics coach.

Fooled the editors

But, from a purely journalism point of view, one thing must be said: The robot writer was good enough to fool professional editors who are supposed to catch the fakers.

Writer Eric Zorn called the Sun-Times fake-books pratfall "artificial ignorance."

Is artificial intelligence too smart for humans? Or are humans too dumb?

Like HAL, ChatGPT was given a task, which it carried out in an unexpected, flawed, but convincing way.

New world

So what is going on with these computers? We enter a strange new world when we try to understand the thought processes of artificial intelligence.

Arthur Clarke gave a plausible reason for HAL turning homicidal, but it was all too human. Computers are not human, but people who write about why artificial intelligence goes haywire often use terms describing human behavior.

When computers make mistakes, it's often called a "hallucination." It's also called bullshitting, confabulation or delusion – all meaning a response generated by AI that contains false or misleading information presented as fact. OpenAI said those plausible but nonsensical answers produced by ChatGPT are hallucinations common to large language models.

That means the writer of the bogus Sun-Times summer reading list got “hallucinated.”

Human psychology

These terms are drawn loosely from human psychology. A hallucination, for example, typically involves false perceptions. Artificial intelligence hallucinations are more complicated than that: they are erroneous but confidently constructed responses that can be caused by a variety of factors, such as insufficient training data, incorrect assumptions made by the model or biases in the data used to train it.

I suppose that’s another way of saying “garbage in, garbage out.”

Rather than resorting to terms drawn from human behavior, it would make sense to use terms that apply to machines and mechanical devices.

Code crap

These could include code crap, digital junk, processing failures, mechanical failure and AI malfunctions.

Computer builders seem determined to describe their work as some kind of wizardry. They are digital mechanics or engineers working on highly sophisticated machines. But they are building devices that are becoming more complicated, and on which humans are more dependent.

That raises the question of whether humans understand the consequences of what they are doing.

Risk of extinction

Leaders from OpenAI, Google DeepMind, Anthropic and other artificial intelligence labs warned in 2023 that future systems could be as deadly as pandemics and nuclear weapons, posing a “risk of extinction.”

People who carry powerful examples of algorithm magic in their hip pockets might wonder how that is possible. The technology seems so benign and useful.

The answer is mistakes.

Random falsehoods

Artificial intelligence makes a surprising number of mistakes. By 2023, analysts estimated that chatbots hallucinate as much as 27 percent of the time, producing plausible-sounding but random falsehoods, with factual errors in 46 percent of generated texts.

Detecting and solving these hallucinations pose a major challenge for practical deployment and reliability of large language models in the real world.

CIO, a magazine covering information technology, listed "12 famous AI disasters," high-profile blunders that "illustrate what can go wrong."

Multiple orders

They included an AI experiment at McDonald’s to take drive-thru orders. The project ended when a pair of customers pleaded with the system to stop when it continued adding Chicken McNuggets to their order, eventually reaching 260.

The examples included a hallucinated story about an NBA star, Air Canada paying damages for its chatbot's lies, hallucinated court cases and an online real estate marketplace cutting 2,000 jobs based on faulty algorithm data.

Going deeper, Maria Faith Saligumba of Discoverwildscience.com asks, “Can an AI go insane?”

Mechanical insanity

“As artificial intelligence seeps deeper into our daily lives, a strange and unsettling question lingers in the air: Can an AI go insane? And what does ‘insanity’ even mean for a mind made of code, not cells?”

Saligumba goes into “the bizarre world” of unsupervised artificial intelligence learning, which can lead to “eccentric, even ‘crazy’ behavior.”

The well-known hallucinations, she explains, are weird side-effects of the way artificial intelligence systems look for random patterns everywhere and treat them as meaningful.

Hilarious or surreal

“Sometimes,” she writes, “the results are hilarious or surreal, but in safety-critical applications, they can be downright scary.”

It’s a reminder, she points out, that “machines, like us, are always searching for meaning – even when there isn’t any.”

One hallmark of human sanity is knowing when you’re making a mistake, she explains. “For AIs, self-reflection is still in its infancy. Most unsupervised systems have no way of knowing when they’ve gone off the rails. They lack a built-in ‘reality check.’”

Odd connections

Some researchers have compared the behavior of some AIs to schizophrenia, pointing out their tendency to make odd connections.

That’s just one of the ways artificial intelligence loses its marbles.

But human behavior might be the salvation of artificial intelligence, Saligumba suggests.

“Studying how living things manage chaos and maintain sanity could inspire new ways to keep our machines on track… Will we learn to harness their quirks and keep them sane, or will we one day face machines whose madness outpaces our own?”

By then, science fiction writers and movie-makers will be describing how humans face that doomsday scenario, or save themselves from that fate by outsmarting those unpredictable machines.

And by that time, we might have a fourth law of robotics, which would serve humanity and artificial intelligence well: Always tell the truth.

************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.

Hawaii wants ethical media

Image: quillmag.com

By Casey Bukro

Ethics AdviceLine for Journalists

The Society of Professional Journalists has had a love/hate relationship with its own code of ethics for a long time.

It loves being praised for having a code of ethics that is admired by professional journalists and considered a "gold standard" worthy of guiding the conduct of all ethical journalists. Wikipedia says the code is "what the SPJ has been best known for."

It hates being expected to actually do anything about ethics and insists the code of ethics is strictly voluntary, take it or leave it. The society says its code is “a statement of abiding principles” and “not a set of rules.”

I’ve been involved in this push-and-pull tussle for years and called SPJ an ethics wimp for refusing to enforce its own code of ethics. That’s when I learned who my friends really were, and who really believed in free speech.

No ethics cops

Journalists thrive on controversy, but not in their own ranks. They bash everyone, but go easy on fellow journalists, saying they don’t want to be ethics cops.

Seems like a double standard, one for journalists and another for everyone else.

I’ve argued that if journalists don’t face up to their ethics obligations and put their own house in order, someone is going to try to do it for them.

Then this happened: Members of the Hawaii state senate on Jan. 23, 2025, introduced a bill calling for penalties against journalists operating in the state for ethics violations.

Evolving media landscape

“The legislature finds that in today’s rapidly evolving media landscape, the need for ethical standards in journalism has never been more urgent,” said the proposal. “The rise of social media, deepfake technologies and generative AI has amplified the spread of misinformation, posing new challenges for journalism and public trust.” 

The statement points to “a significant decline in public confidence in media.”

Hard to deny any of that.

But then the legislators dropped a bombshell. The bill says journalists, editors or news media outlets shall “comply with the code of ethics adopted by the Society of Professional Journalists.”

Horrified

You’d think SPJ leaders would take that as a compliment. But SPJ leaders with a history of doing nothing about ethics except talk about it were horrified.

“The Society of Professional Journalists views this legislation as patently unconstitutional and calls for the Hawaii legislature to remove it from consideration,” said the organization in a statement.

SPJ’s national president, Emily Bloch, had this to say: “While the Society of Professional Journalists is flattered that the Hawaii State Legislature recognizes the SPJ Code of Ethics as a gold standard for journalistic integrity, we must strongly oppose any attempt to use our code as a tool for policing journalists through legislation. Such measures fundamentally contradict the principles of the First Amendment and the freedom of the press.”

Unconstitutional

SPJ has long argued that any attempt to do something about unethical journalism is unconstitutional. Worse yet, SPJ shows no ambition for addressing the seismic shifts in American journalism that the Hawaii state senate spelled out clearly.

A trailblazer in journalism ethics, SPJ once boasted of having 15,000 members. Then it lost its way, becoming what a consultant once said was “nice but not necessary.” Today, membership reportedly is down to  about 4,000.

Founded in 1909, SPJ once touted itself as the oldest and largest journalism organization in the U.S. Its website now describes the organization as "the nation's most broad-based journalism organization" that, among other things, is dedicated to "stimulating high standards of ethical behavior." An interesting word usage, since stimulating means "to arouse to activity or heightened action."

Stimulation

SPJ does not want to stimulate the Hawaiian state legislature to act on its code of ethics.

It could be said SPJ lost its way when it went soft on ethics, and might have lost members too.

I’m an SPJ member, and feel personally involved any time the SPJ ethics code is mentioned. I wrote the version adopted in 1973, the first code of ethics that SPJ could call its own.

It happened this way: In 1972, when SPJ was known as Sigma Delta Chi (SDX), I was chairman of the society’s Professional Development Committee. 

Abuses

At its national convention in Dallas, the society adopted a resolution asking journalists and the public to be aware “of the importance of objectivity and credibility in the news media by calling attention to abuses of these tenets when they occur.” 

That resolution was sent to my committee “for study and program proposals.”

Committee members considered a list of things to do in response to the convention mandate. At the top of the list was a code of ethics SDX could call its own.

While I researched what a modern code of ethics should contain, committee members offered ideas. With that, I batted out a code of ethics on the Underwood typewriter I used at work at the Chicago Tribune. (This was before computers, if you can imagine that.)

Soaring ideals

I wanted it to reflect the ideals of SDX and of journalism in soaring ways, reflecting not only what journalism is but what it wants to be.

The next year, in 1973, I presented the new code of ethics at the national convention in Buffalo, N.Y., calling it "strong stuff." It outlawed accepting "freebies," free travel and secondary employment that could damage a journalist's reputation and credibility.

Most of all, it contained a pledge, saying “journalists should actively censure and try to prevent violations of these standards, and they should encourage their observance by all newspeople.” That became known as the “censure clause.”

For the books

What happened next is one for the history books. I moved for its adoption, it was seconded and adopted unanimously by hundreds of delegates without a word of debate.

Those delegates had copies of the proposed code of ethics in their notebooks. Typically, journalists haggle for hours over the proper use of words, sentences, phrases and even punctuation in written material. But not this time.

What happened next was bizarre, surprising and maybe unprecedented in the history of the world. As I walked from the dais, the society’s executive director, Russ Hurst, grabbed my arm.

Once again

Looking worried, Hurst said maybe the delegates did not realize what they had just done. He expected a long and bitter floor fight over the code, especially the part about censuring  journalists. He told me to introduce the proposed ethics code again.

So, I returned to the dais, interrupting the society’s president as he was going on to the next order of business, and told him I was instructed to introduce the code a second time, which I did.

This time, I emphasized it was a tough  code “with teeth,” telling journalists to take action on ethics. Ethics requires thought and action. I moved again for its adoption.

Ayes

And a resounding second chorus of louder “ayes” rang out, without objections or debate.

It was the only time in SDX history that a resolution was adopted unanimously twice. And probably the last time ethics was discussed without heated debate.

That year, the organization changed its name to the Society of Professional Journalists, and I became chair of the newly created national Ethics Committee.

SPJ leaders responded cautiously with a go-slow campaign of hanging copies of the ethics code on newsroom and classroom walls.

Next decade

For the next decade, the code nagged at members, as a good code should. It should not simply be words on paper, but a call to action.

On Nov. 19, 1977, an SPJ convention in Detroit adopted a resolution mandating that “chapters be encouraged to develop procedures for dealing with questions of ethics.” That never happened.

SPJ was torn between a desire to lead journalists toward more ethical conduct, and a fear that could lead to “witch hunts” and litigation. 

My greatest fear was that 326 SPJ professional and student chapters had no idea how to handle ethics complaints if they arose. It made sense to offer some guidelines, some boundaries.

President’s request

While I was national ethics chairman in 1984, I drafted procedures for addressing ethics complaints at the request of SPJ president Phil Record. On May 17, 1985, the SPJ board of directors, meeting in Salt Lake City, unanimously rejected the proposed procedures.

I was not proposing draconian measures. Censure could mean anything we wanted it to mean, including a mild rebuke pointing out that a member or one of the SPJ chapters was doing something contrary to the ethics standards.

I believed that the SPJ code of ethics should be considered a condition of membership, like the bylaws which spelled out the conditions for being a member in good standing. First and foremost, it belonged to SPJ and our first duty was to be sure our own members understood the code and lived up to it.

House in order

SPJ had an obligation to make sure its own house was in order before preaching ethics to others.

If other organizations wanted to adopt the SPJ code, that was their business. And they could decide what to do about it.

The censure clause issue came to a head at the 1986 convention in Atlanta, 13 years after the code's adoption. A delegate from Mississippi said that his chapter started an investigation into an alleged ethics code violation, but dropped it when SPJ national officials said they would not support any action.

Proper and just

A delegate from Arkansas proposed a resolution asking the SPJ board of directors to recommend, in consultation with the national Ethics Committee and local chapters, procedures for chapters to use to handle ethics complaints, subject to approval by the national convention the following year in Chicago. She wanted guidance for what is “proper and just.” The resolution was adopted.

On April 30, 1987, the SPJ board of directors met in St. Paul and voted to recommend no procedures for chapters to handle ethics complaints, and the board recommended removing the censure clause from the code of ethics.

At that meeting, Bruce Sanford, SPJ’s lawyer, is quoted in the minutes saying “if you believe in ethics, you have to take some risks.” That seemed like a moment of enlightenment. But then Sanford handed the board a memorandum calling ethics enforcement an “oxymoron.” He urged “using hypothetical situations to provoke discussion,” as lawyers do, not real ethics issues. The memo warned enforcing the code “would likely engender a rash of lawsuits.”

A menace

Sanford had been terrifying board members with this kind of language for years, describing the ethics code as a menace to be feared. It should be noted that various professional groups are bound by professional standards, including lawyers. The American Bar Association has model rules of professional conduct, including disciplinary authority.

Lawyers advised SPJ that admitting to having a code of ethics could be held against journalists in court, which never happened.

The trouble with hypotheticals is they are fiction, although a room full of clever journalists can devise some amazing and far-fetched hypotheticals. But that’s just an amusing game. Life often is far more complicated and surprising than anything you can imagine.

Refusal

At the 1987 national convention in Chicago, the SPJ board refused to follow the 1986 convention’s mandate. After delegates voiced disapproval of the board’s failure to act on that mandate, a proposal to delete the controversial censure clause was adopted by a 162-136 vote. It was replaced by a passage calling for ethics education programs and encouraging the adoption of more codes of ethics.

By my reckoning, SPJ leaders by this point had overruled or ignored four convention resolutions mandating action on ethics abuses and procedures for addressing ethics complaints.

For years, SPJ bylaws stated that conventions are “the supreme legislative body of the organization.” Their mandates typically were honored and considered the voice of its membership, helping to set the organization’s agenda.

Bylaws amended

In 2023, the bylaws were amended, deleting references to conventions being a supreme legislative body. Instead, the amended bylaws said the SPJ board of directors "shall determine the priorities of the society's business in furtherance of its mission…" In effect, SPJ leadership censored its members. This made official the board's long-held suspicion that the boisterous rank and file can't be trusted.

This history describes an organization leading the way on ethics, then losing confidence as its leadership turned timid and out of touch with the wishes of its membership, then turning a deaf ear to its members.

So ended a stormy period that provoked hard feelings and some broken friendships.

All for ethics

Though everyone is for ethics, you’d get an argument on what that means.

The toothless 1973 code, with its later amendments, though considered a model for journalists for 23 years, was ready for retirement.

SPJ's national Ethics Committee met in Philadelphia in 1996 with the intention of drafting a new "green light" code of ethics, which it did in two days. The backbone of the new code hinged on four principles: Seek truth and report it, minimize harm, act independently and be accountable. I was told the Poynter Institute suggested that framework.

Four principles

Participants gathered into four groups to suggest standards for each of the four principles. I chaired the “be accountable” section, later changed to “be accountable and transparent.”

The new code of ethics was adopted by a national convention in 1996, including passages urging journalists to be accountable by exposing “unethical conduct in journalism, including within their organizations” and to “abide by the same high standards they expect of others.”

The code was tweaked again in 2014.

I served as SPJ's national ethics chair from 1983 to 1986, and left the national Ethics Committee in 2010. I also served for many years as Midwest regional director for Illinois, Indiana and Kentucky. And in 1983 I was awarded the Wells Memorial Key, the society's highest honor. I served the society for many years, but also feel an obligation to hold its feet to the fire, as I would with any organization that considers itself vital to the future of journalism. I want SPJ to live up to its own ideals.

Calls for action

Clearly the current SPJ ethics code still calls for action, where it says journalists should expose unethical conduct in journalism. That is something SPJ is unwilling to do, and might be the next thing to disappear from the code.

The Hawaiian state legislature is taking aim at wayward journalists, despite SPJ’s protests. And the legislature has a definite plan for doing that. It calls for:

*Establishing baseline ethical standards and transparency requirements for journalists, editors or news media outlets operating in Hawaii.

Training

*Requiring news media to train their employees in ethics.

*Establishing a journalistic ethics commission to render advisory opinions about violations of the journalistic code of ethics.

*Establishing a journalistic ethics review board.

Penalties

The commission “shall enforce penalties” recommended by a review board.

*Creating a dedicated hotline and online reporting system to file complaints related to violations of the code of ethics.

*Creating a complaint and appeals process.

Investigate

Under the legislation, the ethics review board would investigate complaints and file a written determination within 30 calendar days. The board could recommend a penalty for noncompliance, which could include a fine for a second violation.

Penalties could include “suspension or revocation of state media privileges, including press credentials for government-sponsored events.”

The proposed legislation goes on to say, “The state shall not deny or interfere with a journalist’s, editor’s or news media outlet’s right to exercise freedom of speech or freedom of the press….A journalist, editor or news media outlet shall be responsible for determining the news, opinion, feature and advertising content of their publication.”

Unacceptable

Unacceptable expressions include libel, slander, invasion of personal privacy, obscenity and inciting unlawful acts.

Locally, Hawaiian media express disapproval of the proposals.

The Sunshine Blog in the Honolulu Civil Beat said: "Just say no to giving the state power over the press."

Journalists will say, as they have in the past, that ethics enforcement is a violation of their First Amendment rights, and maybe it is and will be shot down for that reason. Courts these days, however, seem to differ on the meaning of constitutionality and press freedoms as government officials turn increasingly hostile toward the media. Two U.S. Supreme Court justices want to reconsider New York Times vs. Sullivan, a 1964 landmark First Amendment decision in libel cases.

It could be healthy to nudge journalists into thinking about what they should do to keep journalism honest, fair and ethical in these times of political polarization, media fragmentation, a divisive internet and a disappearing newspaper industry.

**********************************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.

Dilemmas, Difficult Choices Again

Image: carleton.ca

By Casey Bukro

Ethics AdviceLine for Journalists

One of the most frequently visited articles in the Ethics AdviceLine for Journalists archives was written in 2015 by Nancy J. Matchett, a former AdviceLine advisor. Titled "Dilemmas and Difficult Choices," her article explained how to tell the difference between them.

Much has happened in the world and in the journalism universe since that article was written nine years ago. So it's fair to ask how well her advice holds up in this new world of artificial intelligence, thriving social media and media management. Does it stand up to the test of time in recent cases shaking journalism and some of its leaders?

The news these days is loaded with ethical challenges involving selection, description and depiction of powerful world events, including human suffering and misery. Not only news managers and reporters, but readers, viewers and listeners are involved in constant interaction with information often based on what the public demands. It is a constant churning of evaluation and decision-making.

Here are some recent examples, involving people in the news and the news media audience – all involved to some degree in ethical choices or dilemmas:

*President Joe Biden is under intense pressure to drop out as a candidate for the 2024 presidential election after what was widely seen as a poor performance during his debate with former president Donald Trump, raising questions about Biden’s ability to govern because of his age and mental abilities. Especially pertinent is how voters react to that information. The decision by voters will change the course of history.

*The Israel-Hamas war caused Vox to ask “how to think morally” about killing thousands of innocent civilians.

*The U.S. Supreme Court is losing public trust because of recent rulings seen as breaking away from long-standing legal precedents and because of unethical conduct by justices who accept gifts and favors.

*Jeff Bezos, Washington Post owner, reportedly faces an ethical dilemma over his decision to hire a British journalist with a scandalous past as publisher and chief executive of the newspaper, over the opposition of the Post’s staff.

*Journalists are relying on artificial intelligence, looking for an objective and ultimate source of truth, but there are pitfalls to embracing this new technology. It spits out false information. When should you rely on AI tools, and when should you not?

Weigh these cases against Matchett's guidance on the difference between dilemmas and difficult choices:

By Nancy J. Matchett

Professionals wrestling with ethical issues often describe themselves as facing dilemmas. But in many situations, what they may really be facing is another kind of ethically difficult choice.

In a genuine ethical dilemma, two or more principles are pitted head to head. No one involved seriously doubts that each principle is relevant and ought not to be thwarted. But the details of the situation make it impossible to uphold any one of the principles without sacrificing one of the others.

In a difficult ethical choice, by contrast, all of the principles line up on one side, yet the person still struggles to figure out precisely what course of action to take. This may be partly due to intellectual challenges: the relevant principles can be tricky to apply, and the person may lack knowledge of important facts. But difficult choices are primarily the result of emotional or motivational conflicts. In the most extreme form, a person may have very few doubts about what ethics requires, yet still desire to do something else.

The difference here is a difference in structure. In a dilemma, you are forced to violate at least one ethical principle, so the challenge is to decide which violation you can live with. In a difficult choice, there is a course of action that does not violate any ethical principle, and yet that action is difficult for you to motivate yourself to do. So the challenge is to get your desires to align more closely with what ethics requires.

Four principles

Are professional journalists typically faced with ethical dilemmas? This is unlikely with respect to the four principles encouraged by the SPJ Code (Seek Truth and Report It, Minimize Harm, Act Independently, and Be Accountable and Transparent). Of these, the first two are most likely to conflict, but so long as all sources are credible and facts have been carefully checked, it should be possible to report truth in a way that at least minimizes harm. Somewhat more difficult is determining which truths are so important that they ought to be reported. Reasonable people may disagree about how to answer this question, but discussion with fellow professionals will often help to clear things up. And even where disagreement persists, this has the structure of a difficult choice. No one doubts that all principles can be satisfied.

Of course, speaking truth to power is not an easy thing to do, even when doing so is clearly supported by the public’s need to know. So motivational obstacles can also get in the way of good decision-making. A small town journalist with good friends on the city council may be reluctant to report a misuse of public funds. It is not that he doesn’t understand his professional obligation to report the truth. He just doesn’t want to cause trouble for his friends.

Resisting temptation

This is why it can be useful to resist the temptation to classify every ethical issue as a dilemma. When facing a genuine dilemma you are forced, by the circumstances, to do something unethical. But wishing you could find some way out of a situation in which ethical principles themselves conflict is very different from being nervous or unhappy about the potential repercussions of doing something that is fully supported by all of those principles. Accurately identifying the latter situation as a difficult choice makes it easier to notice — and hence to avoid — the temptation to engage in unprofessional forms of rationalization. That doesn’t necessarily make the required action any easier to actually do, but getting clearer about why it is ethically justified might at least help to strengthen your resolve.

Ethical dilemmas are more likely to arise when professional principles conflict with more personal values. Here too, the SPJ Code can be useful, since being scrupulous about avoiding conflicts of interest and fully transparent in decision-making can mitigate the likelihood that such conflicts occur. But journalists who are careful about all of this may still find that issues occasionally come up. As the recent case of Dave McKinney shows, it can be very difficult to draw a bright line between personal and professional life. And the requirement to act independently can make it difficult to live up to some other kinds of ethical commitments.

Philosophical dispute

Whether this sort of personal/professional conflict counts as a genuine dilemma is subject to considerable philosophical dispute. The Ancient Greeks tended to treat dilemmas as pervasive, but modern ethics have mainly tried to explain them away. One strategy is to treat all ethical considerations as falling under a single moral principle (this is the approach taken by utilitarianism); another is to develop sophisticated tests to rank and prioritize among principles which might otherwise appear to conflict (this is the approach taken by deontology). If you are able to deploy one of these strategies successfully, then what may at first look like a professional vs. personal dilemma will turn out to be a difficult choice in the end. Still, many contemporary ethicists side with the Greeks in thinking such strategies will not always work.

If you are facing a genuine dilemma it is not obvious, from the point of view of ethics, what you should do. But here again, it can be helpful to see the situation for what it is. After all, even if every option requires you to sacrifice at least one ethical principle, each option enables you to uphold at least one principle too. In addition to alleviating potentially devastating forms of shame and guilt, reflecting on the structure of the situation can enhance your ability to avoid similar situations in the future. And if nothing else, being forced to grab one horn of a genuine dilemma can help you discover which values you hold most dear.

******************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.

WaPo, A.I. and Ethics

bernardmarr.com image

By Casey Bukro

Ethics AdviceLine for Journalists

The news lately has been full of accounts of journalists or media companies accused of acting unethically or taking liberties with the work of others.

Here’s how that shapes up.

The Washington Post publisher, Will Lewis, is accused of offering an NPR media reporter an interview if the reporter would avoid mentioning that Lewis was linked to a phone hacking scandal while working in Britain for Rupert Murdoch’s tabloids. 

Lewis also is accused of pressuring the Post’s executive editor to ignore any story that would make the publisher look bad, such as the phone hacking story. The executive editor published the story anyway, then resigned, throwing the Post’s newsroom into chaos.

Fuel to the flames

Adding fuel to the flames, another former British journalist linked to questionable reporting practices, Robert Winnett, was hired to be the Post’s next editor. Winnett made a name for himself through undercover investigations and so-called “checkbook journalism,” paying people for information.

Both Lewis and Winnett were engaged in a kind of journalism popular in the United Kingdom, but generally shunned in the United States. Now they are leading The Washington Post, most famous for the Watergate exposures that led to President Richard Nixon resigning in 1974. The Post’s news staff published a report describing their grievances with Lewis.

American standards

Now that Lewis and Winnett are practicing journalism in the United States, they would be expected to conform to American standards, which are expressed in the Society of Professional Journalists code of ethics.

That code begins with this preamble:

Preamble

Members of the Society of Professional Journalists believe that public enlightenment is the forerunner of justice and the foundation of democracy. Ethical journalism strives to ensure the free exchange of information that is accurate, fair and thorough. An ethical journalist acts with integrity.

You can read the rest of the code here, and decide for yourself if Lewis and Winnett are acting with integrity, which the code says is basic to ethical journalism.

But the New Republic reports that Lewis and Winnett are harbingers of what comes next in American journalism: A British invasion intended to shake things up and get American media out of their economic doldrums.

Uncertainty

“In the midst of the uncertainty,” reports the magazine, “newsroom owners have turned to an unexpected source of expertise on the U.S. media landscape: British journalists.”

The logic is clear: “As the journalism industry bleeds money, a fresh perspective could be just the thing to shake things up and bring some much-needed cash.”

This could also bring a major clash of cultures, considering the history of the British tabloid press. Their journalism ethics differ markedly, the New Republic points out, and “the British tabloid press are notoriously aggressive, unafraid to publish half-truths, purchase scoops, or even toe laws in pursuit of extreme sensationalism.”

In that way, Old Country values are coming to America, which is awakening to new technology.

Artificial intelligence

In a sign of the times, with the advent of new technology, artificial intelligence now is used to generate stories. The phenomenon is so new that it is not even recognized in the SPJ code of ethics, which says nothing about how the technology can be misused.

For example, a German celebrity tabloid published an A.I.-generated exclusive “interview” with a champion German racing car driver who was severely injured in a skiing accident in 2013. It contained fabricated quotes presented as real news.

Legal precedent

See how that turned out here for details. The case now stands as an early legal precedent signaling that such uses of artificial intelligence are deceptive and unethical.

Here’s another artificial intelligence quagmire in the publishing business that is now coming to light as the technology matures.

Creators of ChatGPT and other popular A.I. platforms used published works to “train” the new technologies, like feeding information to a growing child.

A new front

The New York Times sued OpenAI and Microsoft for copyright infringement, which is another way to get into trouble ethically. The suit is seen as a new front in the increasingly intense legal battle over unauthorized use of published work.

“Defendants seek to free-ride on The Times’s massive investment in its journalism,” the complaint said, accusing OpenAI and Microsoft of “using The Times’s content without payment to create products that substitute for the Times and steal audiences away from it.”

The Times is among a small number of news outlets that have built successful business models from online journalism, while other newspapers and magazines have been crippled by the loss of readers to the internet.

Billions in damages

The defendants, said The Times, should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” The suit also asks the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

A.I. firms depend on journalism, and some publishers have signed lucrative licensing agreements allowing A.I. firms to use their reports. “Accurate, well-written news is one of the most valuable sources” for their chatbots, which “need timely news and facts to get consumers to trust them,” writes Jessica Lessin in The Atlantic. But it’s making a deal with the devil as A.I. firms build products that reduce the need for consumers to click links to the original publishers.

This is one of those moments of technological growing pains, raising concerns about the boundaries of using intellectual property. We’ve seen it before with the advent of broadcast radio, television and digital file-sharing programs.

Time and the courts typically sort it out eventually.

In this ethicscape, a traveler must avoid blatant blunders, avoid the appearance of blunders, and avoid blunders that did not exist a short time ago but now must be taken into account.

********************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.

Privacy in a Pandemic

http://www.unothegateway.com image

By Casey Bukro

Ethics AdviceLine for Journalists

The Covid-19 pandemic commanded the world’s attention, straining medical resources and testing the media’s competence to understand and accurately report such an unprecedented event. 

As often happens in major events, journalists try to tell the story by describing what is happening to individuals. They try to “humanize” the story to describe the suffering of patients and brave attempts by doctors and nurses to treat the highly communicable disease, which struck down caregivers.

The death toll was one of the highest in pandemic history. The World Health Organization reports 7 million coronavirus deaths worldwide from Dec. 31, 2019, to Feb. 4, 2024. With 1.2 million deaths, the United States lost more people to Covid-19 than any other nation, despite having one of the most advanced health care systems in the world. Brazil was next with 702,000 deaths, followed by India with 533,500.

A horrifying story

It was a dramatic and horrifying story. And one that tested the ethical conduct of journalists. Although their intentions were good, did some of them go too far?

A British Broadcasting Corporation reporter based in Ho Chi Minh City contacted AdviceLine asking: “Should journalists enter an operating room where doctors are rescuing a critical patient just to have a good story?” Doctors consented to a story, with photos, in a hospital in Vietnam. But did their actions “undermine the patient’s privacy?”

The BBC reporter said the patient, an airline pilot, gained national attention because his case was considered so severe that “every minute detail of his recovery was reported in national newspapers and on TV news bulletins.”

Patient privacy

The case raises questions dealing with a patient’s privacy rights, and how much the public needs to know in a global public health crisis.

The AdviceLine adviser in this case was Joseph Mathewson, who teaches journalism law and ethics at Northwestern University’s Medill School of Journalism, Media & Integrated Marketing Communications.

Mathewson first turned to BBC editorial guidelines on privacy, which state: “We must be able to demonstrate why an infringement of privacy is justified, and, when using the public interest to justify an infringement, consideration should be given to proportionality; the greater the intrusion, the greater the public interest required to justify it.”

Guidelines

The guidelines went on to say: “We must be able to justify an infringement of an individual’s privacy without their consent by demonstrating that the intrusion is outweighed by the public interest…. We must balance the public interest in the full and accurate reporting of stories involving human suffering and distress with an individual’s privacy and respect for their human dignity.”

In this case, it was not known if the patient consented to be interviewed and photographed. Without consent, said Mathewson, “the journalist then needs to weigh the public interest in that infringement to determine whether it was warranted.”

Broadcasting code

The United Kingdom also has a broadcasting code with similar restrictions that take public interest into account, adding: “Examples of public interest would include revealing or detecting crime, protecting public health or safety, exposing misleading claims made by individuals or organizations or disclosing incompetence that affects the public.”

Mathewson observed that the many stories written about the patient probably identified him to some degree. “I can’t help wondering what was in the many previous stories about him,” he told the BBC reporter.

If previous stories, done without his consent, had identified the patient and his employer, “the ethics analysis might be different,” said Mathewson.

*************************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.

AI Born with Warnings

http://www.researchgate.net image

By Casey Bukro

Ethics AdviceLine for Journalists

Like nuclear power, artificial intelligence is described as a threat to humanity.

A difference is that the atomic bomb was intentionally invented as a weapon of mass destruction.

For some, artificial intelligence (AI) seems more like a technology that stealthily places a suffocating pillow over the face of sleeping humanity, causing extinction. AI development could lead to machines that think for themselves, and there lies the problem.

Warnings sounded

Warnings are sounded repeatedly, most recently in the Bletchley Declaration on artificial intelligence safety, signed at a summit on Nov. 1-2, 2023, as part of a new global effort to unlock the benefits of the technology while ensuring it remains safe.

At the two-day summit in England, 28 countries, including the United States, the United Kingdom and China, along with the European Union, signed the declaration acknowledging the potentially catastrophic risks posed by artificial intelligence.

The warning seems well-timed, since 2024 is expected to be a transformative year for AI. It is the year, predicts The Economist magazine, that “generative AI will go mainstream.”

Year of experimentation

Large companies spent much of 2023 experimenting with the new technology, while venture-capital investors poured some $36 billion into it. That laid the foundation for what is expected next.

“In 2024 expect companies outside the technology sector to start adopting generative AI with the aim of cutting costs and boosting productivity,” The Economist, a Britain-based publication, predicted.

For some, this is unsettling.

Business leaders, technologists and AI experts are divided on whether the technology will serve as a “renaissance” for humanity or the source of its downfall, according to Fortune Magazine.

At a summit for chief executive officers in June, 42 percent of them said they believe AI “has the potential to destroy humanity within the next five to 10 years.” Fortune added that one AI “godfather” considered such an existential threat “preposterously ridiculous.”

Science fiction

The Washington Post reported similar findings: “Prominent tech leaders are warning that artificial intelligence would take over. Other researchers and executives say that’s science fiction.”

Why should we fear AI?

Among the scenarios postulated is that self-governing AI robots designed to tend to human needs might decide that extermination is the most logical solution to ending human tendencies to wage war. An autonomous machine might think humans are routinely killing themselves in vast numbers anyway. To end such suffering, the machine might decide to copy human behavior. Destroy them for their own good.

Putting a humorous spin on it, a cartoon shows a robot telling a man: “The good news is I have discovered inefficiencies. The bad news is that you’re one of them.”

A conundrum

At the root of this conundrum is the difficulty of trying to think like the AI robots of the future.

At the British AI safety summit at Bletchley Park, tech billionaire and Tesla CEO Elon Musk took a stab at describing the AI future.

“We should be quite concerned” about Terminator-style humanoid robots that “can follow you anywhere. If a robot can follow you anywhere, what if they get a software update one day, and they’re not so friendly anymore?”

Musk added: “There will come a point where no job is needed – you can have a job if you want for personal satisfaction.” He believes one of the challenges of the future will be how to find meaning in life in a world where jobs are unnecessary. In that way, AI will be “the most disruptive force in history.”

Musk made the remarks while being interviewed by British prime minister Rishi Sunak, who said that AI technology could pose a risk “on a scale like pandemics and nuclear war.” That is why, said Sunak, global leaders have “a responsibility to act to take the steps to protect people.”

Full public disclosure

Nuclear power was unleashed upon the world largely in wartime secrecy.  Artificial intelligence is different in that it appears to be getting full disclosure through international public meetings while still in its infancy. The concept is so new, Associated Press added “generative artificial intelligence” and 10 key AI terms to its stylebook on Aug. 17, 2023.

The role of journalists has never been more important. They have the responsibility to “boldly tell the story of the diversity and magnitude of the human experience,” according to the Society of Professional Journalists code of ethics. And that includes keeping an eye on emerging technology.

The challenge of informing the public of mind-boggling AI technology, which could decide the future welfare of human populations, comes at a tumultuous time in world history.

Journalists already are covering two major wars – one between Ukraine and Russia, and the other between Israel and Hamas. The coming U.S. presidential election finds the country politically fragmented and violently divided.

Weakened mass media

These challenges to keep the public informed about what affects their lives come at a time when U.S. mass media are weakened by downsizing and staff cuts. The Medill School of Journalism reports that since 2005, the country has lost more than one-fourth of its newspapers and is on track to lose a third by 2025.

Now artificial intelligence must be added to the issues demanding journalism’s attention. This is not a relatively simple story, like covering fires or the police beat. Artificial intelligence is a story that will require reportorial skill involving business, economics, the environment, health care and government regulation. And it must be done ethically.

It is a challenge already recognized by the International Consortium of Investigative Journalists (ICIJ), which joined with 16 journalism organizations from around the world to forge a landmark ethical framework for covering the transformative technology.

Paris Charter

The Paris Charter on AI in Journalism, which provides guidelines for responsible journalism practices, was finalized in November during the Paris Peace Forum.

“The fast evolution of artificial intelligence presents new challenges and opportunities,” said Gerard Ryle, ICIJ executive director. “It has unlocked innovative avenues for analyzing data and conducting investigations. But we know that unethical use of these technologies can compromise the very integrity of news.”

The 10-point charter states: “The social role of journalism and media outlets – serving as trustworthy intermediaries for society and individuals – is a cornerstone of democracy and enhances the right to information for all.” Artificial intelligence can assist media in fulfilling their roles, says the charter, “but only if they are used transparently, fairly and responsibly in an editorial environment that staunchly upholds journalistic ethics.”

Among the 10 principles, media outlets are told “they are liable and accountable for every piece of content they publish.” Human decision-making must remain central to long-term strategies and daily editorial choices. Media outlets also must guarantee the authenticity of published content.

“As essential guardians of the right to information, journalists, media outlets and journalism support groups should play an active role in the governance of AI systems,” the Paris Charter states.

***********************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.