“A thorny issue is coming up about outing priests,” said the independent Texas documentarian, who was treading new journalistic territory, where shifts in public sentiment raise new questions about what is ethically correct.
The journalist was investigating cold cases surrounded by swirls of rumors that priests who had sex with other men in motels had been murdered. The priests’ bodies were found nude and bound, possibly victims of sex or hate crimes.
“We can confirm those things were happening,” said Deborah Esquenazi, an investigative filmmaker who was collaborating with Texas Monthly magazine in Austin, Texas. But to do that, “we have to out these individuals. And I just want to discuss the ethics of that,” she told AdviceLine.
Outing dangerous
In the past, she noted, outing a person could be dangerous to the person outed. But in the case involving priests, “all the individuals we’d be outing were deceased. They were also public figures. And nowadays, of course, with the shift in public sentiment, I believe that one of the reasons those individuals didn’t get the justice they deserved was because they were gay.
“And it could be swept under the rug because the church wanted to use sanctuary laws in order to not have these priests outed.” Sanctuary laws refer to the traditional right of a consecrated church building to offer refuge, historically protecting individuals from pursuit and arrest by secular authorities.
This case involved layers of sensitive questions, past and present, and a documentarian with strong feelings about the story. “I am personally a lesbian and do not believe that outing should ever be considered a problem, because I don’t think there is anything wrong with being gay,” Esquenazi said.
Shifting conversation
She added: “I believe we should be shifting the conversation that outing should never be considered disgraceful. In fact, we could out people and be proud of such a thing.”
Esquenazi’s beliefs challenge journalists’ long-standing hesitation, rooted in privacy concerns, to discuss people’s sexual orientation unless the subjects are clearly open to that, and the decision of some gay people not to be out if that is their choice.
This bundle of complexities landed in the lap of AdviceLine advisor David Craig, Presidential Professor and Gaylord Chair at the Gaylord College of Journalism and Mass Communication at the University of Oklahoma.
Ethical conclusion
In discussing the case with the reporter, Craig’s mission was to help the journalist arrive at the most ethical course of action, guided by ethics standards such as the Society of Professional Journalists code of ethics.
Naming principles in the SPJ code, Craig described advice he gave to the reporter and her reaction:
Regarding “seek truth and report it” – Is the piece of truth that the priests were gay important to the story? Why or why not? She said she thinks that’s the fundamental reason they didn’t get justice. She said there is reason to believe sanctuary laws were used not as a “sacrament” but as a “shield.”
Regarding “minimize harm” – Is there anyone who might be harmed by the disclosure that the priests were gay, such as family members embarrassed to have this disclosed? She said there were few surviving next of kin and no one she thinks would be harmed.
Regarding “be accountable and transparent” – Is there any opportunity within the story or in ancillary material to explain the decision to out these priests, if you do that? She said she could possibly narrate something or write an op-ed in a magazine working with her.
“She also brought up privacy concerns and whether these would still be relevant if they were murdered,” Craig wrote in his report on this case. “I agreed that saying the priests were gay is relevant, and I noted it would be almost impossible to tell the story without saying that.”
Critique sessions
The AdviceLine team of advisors meets periodically to discuss and critique case reports in an effort to be sure advice given to journalists is helpful and accurate. Presenting the case to the other advisors, Craig said:
“She is trying to approach this in a measured way to the specific work she does. Should outing be considered in her reporting? What to do seemed clear in this case. There was no way to report the story without saying the priests were gay. She was thinking about harm. She wasn’t concerned about anyone being harmed” by her reporting.
Craig and the filmmaker discussed ways to articulate the sensitive issues in the story and the reporter said they would be mentioned in the film. As for the ethics implications, “she wanted to be comfortable in what she did and how she did it.”
Being gay
David Ozar, an AdviceLine advisor and former professor of social and professional ethics at Loyola University Chicago, asked the question “is it okay to out people?” He knew a gay couple who preferred to keep that part of their relationship quiet.
“That aspect raised strong objections to outing,” he said. “I doubt you would be surprised today that some priests are gay.” Ozar supported what the filmmaker was attempting to do, “but it must be carefully thought out.” A key question, he added, is “should we be identifying these people who are likely to be harmed? The visual arts have their own set of ethical issues.”
Hugh Miller, also an AdviceLine advisor, who formerly taught ethics and business ethics at Loyola University Chicago, argued that “there should neither be absolute bans on publishing facts that might ‘out’ a person as being gay, nor absolute imperatives to publish such facts. Journalists must make a judgment call each time, balancing the public’s right to know against the stricture to minimize harm.”
Miller’s comments highlight a common element of ethical deliberation that AdviceLine advisors bring to their discussions with callers and one another: Careful ethical decisions usually involve considering more than one principle and weighing their importance in the situation — here, for example, the importance of the truth being told versus the harm that it might bring.
The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.
Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.
It was Brian again, a freelancer calling AdviceLine with another question about writing with the aid of artificial intelligence.
His life as a freelancer was getting complicated because rules governing the use of artificial intelligence in journalism were changing fast, and his supervisors were giving him mixed messages.
“I am reaching out seeking a followup on a past case that I spoke (about) to the Ethics AdviceLine,” Brian said in his email. “I found out that my company will soon be incorporating AI tools after editors/leadership gave me a hard time after I unknowingly used a rephrasing/clarity tool which still does not appear to be against our written policy.
“I want to be proud of these stories and continue to worry that because they (AI tools) were now seemingly the policy, I can’t be.”
Admits using Toolbot
The first time he got into trouble, Brian had admitted to his editors that he used Toolbot to check spelling and grammar, suggest alternate phrasings and ensure his pieces conformed to the AP Stylebook, but not to generate content.
Brian’s editors told him they “would not have used such a tool,” and this caused Brian to fret that he had done something unethical, and that the quality of his former work was tainted by the use of Toolbot.
By chance, Hugh Miller, an AdviceLine ethics expert, was on duty the first time Brian contacted AdviceLine, and happened to be on duty the second time as well.
Miller assured Brian that his earlier use of Toolbot was not unethical.
New AI tools
But here’s what worries Brian the second time he contacted AdviceLine: The company he works for will be introducing a new content management system (CMS) to its newsroom which will have AI tools built in. Exactly what those tools are, and what they will be capable of doing, has not been made clear by the editors.
This caused Brian renewed anxiety about his past articles, and about the ethical use of the AI tools being newly introduced in the newsroom where he submits his stories.
Here’s a description of the conversation between Miller and Brian as they tried to noodle their way through this new dilemma:
Hugh Miller: Do you know what the tools are?
Brian: I don’t yet.
HM: I presume they will at least have the minimal editing and formatting capabilities of, say, a Toolbot, yes?
B: I assume so, and possibly others.
HM: As a recent post on the Ethics AdviceLine for Journalists website points out, “A 2024 Associated Press survey found nearly 70 percent of newsroom staffers use the technology for basic skills such as producing content, information gathering, story drafts, headlines, translation and transcribing interviews. One-fifth said they used AI for multimedia projects, including graphics and videos. Surveyed were 292 media representatives from legacy media, public broadcasters and magazines, mostly based in the U.S. and Europe.” So AI is already being extensively used in newsrooms in ways far beyond the bare-bones use you were making of Toolbot. Does your organization have an AI use/ethics policy in place?
B: I don’t believe so.
HM: Perhaps it might be a good idea to help craft one.
B: I belong to the union at work, and we have begun to discuss this. But we only meet every few months.
HM: This is an issue on which management and union interests converge. Credibility is the very lifeblood and stock-in-trade of journalism. Readers should know that their human concerns are being reported by human journalists, and there should be transparency about AI use. Perhaps you could get it on the agenda for the next meeting that you want to discuss a company-wide AI use/ethics policy.
B: I think we’re moving in that direction, yes.
HM: And in the meantime, collect examples of such policies from other newsrooms or places like the Society of Professional Journalists and the Poynter Institute.
B: Yes, I’ve already begun looking into those.
HM: Any other issues?
B: Not right now.
HM: It sounds like you have a plan to move forward. Keep me posted.
AdviceLine has four staff members who help journalists solve ethics dilemmas through a discussion leading to a conclusion. All four advisors have taught or are teaching ethics at the university level. These advisors meet periodically to review the advice that was given to journalists and whether it could have been better.
After giving advice to journalists, advisors write case reports for each query handled by AdviceLine. At the periodic Zoom meetings, those case reports are discussed.
At a recent Zoom meeting, Miller described his exchange with Brian, and the advice he gave.
Good advice
In all cases where journalists ask for guidance on the use of artificial intelligence, suggested David Ozar, they should be asked: “Does your editorial workplace have a policy?” That, he said, is good advice.
Ozar is a co-founder of AdviceLine and emeritus professor of the Department of Philosophy, Loyola University Chicago, and a consulting ethicist for the Institutional Ethics Committee, NorthShore University Health System.
Journalists should “encourage collective action; this is an issue where workforce and management might converge,” suggested David Craig, Presidential Professor and Gaylord Chair, Gaylord College of Journalism and Mass Communication, the University of Oklahoma, Norman, Okla.
Also attending was AdviceLine advisor Joe Mathewson, a professor at the Medill School of Journalism, Northwestern University, who teaches ethics and law of journalism.
Miller has been with AdviceLine since 2002 and was assistant professor of philosophy at Loyola University Chicago and taught courses in ethics and business ethics. His areas of specialization were philosophy of religion, philosophical theology, history of metaphysics and contemporary French philosophy.
AdviceLine ethics cases are archived at the Medill School of Journalism.
A lot of soul-searching is going on over the ethical use of artificial intelligence in the media, a mind-bending exercise pointing out that a tool expected to improve journalism might instead replace human journalists and doom the news outlets that feed AI the information that makes it work.
Some pontificate. Others strategize over this existential moment.
As often happens when science brings us some astonishingly brilliant new idea, using the new technology reveals a few equally astonishing flaws. The AI software models in wide use today, for example, cannot reliably and accurately cite and quote their sources. Instead, we get gibberish that looks credible, like crediting a real author for words the AI “hallucinated.” Since AI “feeds” on the work of others, usually uncredited, news organizations using AI could be accused of plagiarism.
Nothing quite that complicated came to AdviceLine’s attention when a journalist working for a newspaper in Alaska asked for help with an AI issue more likely to confront journalists every day:
“This is kind of a dumb question,” the journalist began, although most journalists know there is no such thing as a dumb question. “But I’ve always struggled with headlines and now I’m hoping to get some help from AI to write them,” he continued. “How/where do other outlets disclose that just the headline of an article was written by AI?”
An answer
Answering that question was Joseph Mathewson, AdviceLine advisor and a professor at the Medill School of Journalism, Northwestern University, who happened to be a personal friend of the journalist calling for help.
“Thanks for the question!” replied Mathewson. “I haven’t confronted it before, but it seems to me that anything you publish written by AI should be identified as such, including headlines…maybe by a blanket note somewhere in the paper to that effect if it’s more than one.”
A direct response to a direct question: that is what AdviceLine has provided since it began operating in 2001, long before artificial intelligence became a burning issue in journalism. And it was the kind of question the AdviceLine staff of ethics experts is qualified to answer.
Artificial Intelligence is a journalism riddle, a kind of technology already in use, but not fully understood. Expected to be a solution, it causes problems of a kind never seen before, like hallucinations, defined as information or responses generated by AI that are fabricated, inaccurate or not grounded in fact. That is hardly a useful tool, but it’s already in widespread use.
Job loss
And conflicts over AI can cost a journalist their job, as illustrated by the Suncoast Searchlight, a Florida publication covering Sarasota, Manatee and DeSoto counties.
The publication had four full-time staff reporters and two editors.
In November, all four reporters sent a letter to the nonprofit board of directors accusing their editor-in-chief of using generative AI tools, including ChatGPT, to edit stories and hiding that use from staff, according to a report by Nieman Journalism Lab of the Nieman Foundation for Journalism.
As a result, said the reporters, hallucinated quotes, a reference to a nonexistent state law and other factual inaccuracies were introduced into their story drafts. When they questioned the editor about the edits, they said she did not immediately disclose her use of AI tools but instead contended she made the errors herself.
Breach of trust
Said the reporters: “We fear that there may be extensive undisclosed AI-generated content on our website and have questions about what retroactive disclosure is needed for our readers.” They added that the editor had created a breach of trust between herself and her reporters.
The reporters asked the board of directors, consisting of media executives, journalists and local business people, to intervene. They also made several requests: adopt an AI policy, establish a fact-checking process and conduct an internal audit to identify AI-generated writing that might have been published on the site. They also asked the editor-in-chief to promise not to use AI for editing in the future.
Less than 24 hours after the board received the letter, the editor-in-chief and her deputy editor fired one of the reporters who signed it. Clearly, hazards abound when reporters criticize their editors, who prefer to do the criticizing.
Disruptive
AI is proving to be a disruptive technology, although widely used.
A 2024 Associated Press survey found nearly 70 percent of newsroom staffers use the technology for basic skills such as producing content, information gathering, story drafts, headlines, translation and transcribing interviews. One-fifth said they used AI for multimedia projects, including graphics and videos. Surveyed were 292 media representatives from legacy media, public broadcasters and magazines, mostly based in the U.S. and Europe.
Aimee Rinehart, a co-author of the survey and the AP’s senior product manager for AI strategy, observed:
“News people have stayed on top of this conversation, which is good because this technology is already presenting significant disruptions to how journalists and newsrooms approach their work and we need everyone to help us figure this technology out for the industry.”
Ethics uneven
Citing the AP survey, Forbes, the American business magazine, headlined: “Newsrooms are already using AI, but ethical considerations are uneven.”
Forbes pointed out that while the news industry’s use of AI is common today, “the question at the heart of the news industry’s mixed feelings about the technology” is whether it is “capable of producing quality results.”
This is oddly reminiscent of football teams that sign rookie quarterbacks to multi-million-dollar contracts, hoping they become champions of the future. Good luck with that. Such hopefuls soon find themselves contending with someone like Dick “Monster of the Midway” Butkus, Chicago Bears linebacker famous for his crushing tackles.
Server farms
The Dick Butkus analogy also applies to the large language models (LLMs) that drive artificial intelligence tools. They are large programs that run on hugely energy-intensive server farms. They simply take a huge volume of training data (usually sourced without recompense to the originators) and, in response to a prompt, spit out text that is associated with the prompt topic and reads as grammatical and reasonably well informed.
Such output has no necessary connection with reality, since the LLMs have none. They rely wholly on their input data and their algorithm – they are, in fact, nothing but these.
They cannot fact-check, since they have no access to facts, only “input data,” which itself may have only a tenuous connection to reality if it’s coming from, say, Fox News, Newsmax or OANN (One America News Network).
No concepts
They cannot conduct interviews, because they cannot tell when an interview subject needs to be pushed on a point, or if he or she is lying. They cannot construct a narrative of events, since they have no understanding of causality or temporal sequence – they have no concepts at all, in fact. And they are subject to “steering” – they can be programmed to exhibit actual biases, as Elon Musk has said he is doing with his X.com AI bot, Grok.
It may be the case that, in the future, an AGI (artificial general intelligence) may be constructed. AGI is the concept of a machine with human-level cognitive abilities that can learn, understand and apply knowledge. Unlike today’s AI, which excels at doing specific jobs, AGI would have versatility, adaptability and common sense, allowing it to transfer learning across different disciplines like medicine, finance or art without being specifically programmed for each. It’s a major goal in AI research, but remains hypothetical. Some will want to prevent it.
LLMs are far from being such a thing, and a true AGI will not be built out of an LLM.
Reshaping newsrooms
Despite AI’s shortcomings, The Poynter Institute for Media Studies points out that it already is reshaping newsroom roles and workflow. In 2024, Poynter introduced a framework to help newsrooms create clear, responsible AI ethics policies – especially for those just beginning to address the role of artificial intelligence in their journalism.
Updated in 2025, Poynter’s AI Ethics Starter Kit helps media organizations define how they will and will not use AI in ways that serve their mission and uphold core journalistic values. It contains a “template for a robust newsroom generative AI policy.”
Near the top of this template is a heading called “transparency,” calling upon journalists using generative AI in a significant way to “document and describe to our audience the tools with specificity in a way that discloses and educates.”
RTDNA guidance
Another major journalism organization, the Radio Television Digital News Association (RTDNA), also offers guidance on the use of artificial intelligence in journalism, pointing out that it has a role in ethical, responsible and truthful journalism.
“However,” says RTDNA, “it should not be used to replace human judgment and critical thinking — essential elements of trusted reporting.”
Getting down to the nitty-gritty, Julie Gerstein and Margaret Sullivan ask: “Can AI tools meet journalistic standards?”
Spotty results
“So far, the results are spotty,” they say in the Columbia Journalism Review. AI can crunch numbers at lightning speed and make sense of vast databases.
“But more than two years after the public release of large language models (LLMs), the promise that the media industry might benefit from AI seems unlikely to bear out, or at least not fully.”
Gerstein and Sullivan point out that generative AI tools rely on media companies to feed them accurate and up-to-date information, while at the same time AI products are developing into something like a newsroom competitor that is well-funded, high-volume and sometimes unscrupulous.
Hallucinate
After checking the most common AI software models, Gerstein and Sullivan found that none of them “are able to reliably and accurately cite and quote their sources. These tools commonly ‘hallucinate’ authors and titles. Or they might quote real authors and books, with the content of the quotes invented. The software also fails to cite completely, at times copying text from published sources without attribution. This leaves news organizations open to accusations of plagiarism.”
Whether artificial intelligence babbling can be legally considered plagiarism or copyright infringement remains to be answered by lawsuits filed by the New York Times, the Center for Investigative Reporting and others.
Especially irked, the New York Times accuses OpenAI of trying “to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.” OpenAI created ChatGPT, which allegedly contains text copied from the New York Times archives and reproduced verbatim for ChatGPT users.
Worrying outcome
Say Gerstein and Sullivan: “One possible – and worrying – outcome of all this is that generative AI tools will put news outlets out of business, ironically diminishing the supply of content available for AI tools to train on.”
This is our strange new world: Technology needs other technologies to survive. One feeds upon the other. In a new twist, Microsoft struck a $16 billion deal with Constellation Energy to buy 100 percent of power produced by the Three Mile Island power plant, once it restarts.
Three Mile Island became world famous in 1979 for an accident that caused the fuel in one of its reactors to overheat and crumble, triggering a mass evacuation of thousands of residents in the Harrisburg, Pa. area. The stricken reactor was closed permanently, but a second power-producing reactor on the site continued to operate for 40 years until 2019.
Nuclear power
Microsoft wants all the power the nuclear plant can produce for its energy-hungry data centers. Its 20-year agreement with Constellation is supported by a $1 billion government loan to Constellation. The plant is expected to resume producing electricity in 2027.
This signals a resurrection of sorts for nuclear energy in the United States, brought on by new and growing power demands in our highly technological society. A similar nuclear comeback around the world, after two decades of stagnation, was declared by the International Energy Agency.
In another odd twist, both nuclear energy and artificial intelligence have been criticized as potentially disastrous for the human race. The nuclear hazards include atomic bombs and the risks of operating nuclear power plants.
Scientists point out that with risks come benefits.
Immigrants in America, a country that was once a haven for such people, are now targets of federal crackdowns ordered by the Trump administration, carried out in sometimes violent sweeps by masked and unidentified men.
The mass detention policy, which began on July 8, indiscriminately locked up immigrants who are contesting government attempts to deport them, a practice dozens of federal judges have declared illegal, according to Politico. Millions of immigrants are targeted.
In Los Angeles, Portland, Washington, D.C., Memphis and Chicago, federal troops and the National Guard were mobilized in the crackdown, which was highly controversial, unpopular, and in some cases challenged by shouting demonstrators.
How they look
“Dozens of federal agents took individuals into custody during a winding patrol Sunday through downtown Chicago,” the Chicago Sun-Times reported, “and a top U.S. Border Patrol official told WBEZ (the Chicago public radio station) the agents were arresting people based on ‘how they look.’”
Passersby shouted at the agents, telling them to go home and “ICE sucks,” referring to U.S. Immigration and Customs Enforcement, one of the agencies in the deportation blitz. One person shouted “thank you!”, while another said sarcastically, “Real patriotic guys. Real patriotic.”
About two dozen protesters followed the agents, chanting “ICE go home!”
Illinois governor protests
On social media, Illinois Gov. JB Pritzker noted the agents were carrying large weapons in downtown Chicago while wearing camouflage and masks.
“This is not making anybody safer – it’s a show of intimidation, instilling fear in our communities and hurting our businesses,” said the Democratic governor.
Newsweek reported that ICE arrested more than 2,200 undocumented migrants in a single day.
Faced with increasing hostility, the U.S. Department of Homeland Security issued a statement saying: “Despite ongoing attacks and villainization of our brave U.S. Immigration and Customs Enforcement (ICE) officers, ICE continues to arrest the worst of the worst criminal illegal aliens across the country. Over the past two days, criminal illegal aliens arrested by ICE have prior convictions for crimes including sexual conduct with a minor under 14, indecency with a child, criminally negligent DUI, homicide, drug charges, aggravated assault with a deadly weapon, theft, burglary and battery.”
Opposing politicians are comparing ICE to the Nazi Gestapo, secret police and slave patrols, said an agency official.
Collateral damage
In the first 50 days of the Trump administration, immigration officials arrested more than 32,000 migrants living in the United States without legal status. But these included 8,718 persons who were considered “collateral damage” and not immigration violators.
One of the most heated clashes with ICE agents came in Broadview, a village of 7,998 residents 12 miles west of downtown Chicago, where a federal deportation center is located. ICE agents used chemical irritants to fend off protesters at the processing center.
“We are experiencing an immediate health safety crisis,” Broadview Police Chief Thomas Mills said at a news conference. “The deployment of tear gas, pepper spray, mace and rubber bullets by ICE near the processing center in the Village of Broadview is creating a dangerous situation for the community and all first responders.”
Broadview Mayor Katrina Thompson said gas clouds released by the agents irritate people within 200 to 700 feet, but “the wind can carry it further.”
Three criminal investigations
Broadview officials asked ICE to stop using chemical sprays on protesters and said three criminal investigations were launched in the suburb against ICE agents.
Several federal officials, including two Illinois U.S. senators, sent a letter to the Department of Homeland Security asking for information about the fatal shooting of an alleged undocumented immigrant, Silverio Villegas-Gonzalez, by ICE officers during a Sept. 12 traffic stop in suburban Franklin Park.
This bulldozer approach to immigration management carries huge consequences for individual lives, torn families, and the nation’s economy, labor force, health care, social services and housing.
It is still too early to fully assess the legal and ethical implications of the federal deportation blitz under way in the United States. But in the end, it all comes down to deportation.
Individual lives
One of the most sensitive aspects is the impact on the lives of individuals, some fearful of what could become of them if they are identified as potential targets, rounded up legally or not, and deported to an uncertain fate in undisclosed places.
Years before the current vigilante-style manhunts for undocumented immigrants, one case came to the Ethics AdviceLine for Journalists that forecast the kind of questions facing journalists and undocumented migrants.
It involved a Rhode Island man who was severely injured on the job, possibly because of faulty equipment.
Undocumented immigrant
“The man is an undocumented immigrant from somewhere in Central America,” wrote David Ozar, the AdviceLine advisor who reported on the query. “The result of the accident is that he lost one leg and part of his rectum, and now has a colostomy. He lives in an assisted living facility and has received some help from workers’ compensation, but nothing from the company.”
Pro bono lawyers offered to bring suit against the company on behalf of the injured man because of alleged safety violations. The unidentified man wants to pursue that, even at the risk that his identity would become a matter of public record and could result in his deportation.
“He believes that the company was at fault and that other workers at the company are still at risk, as well as other undocumented workers whose safety is taken lightly by their employers because they will not sue if they are injured because of the risk of deportation,” wrote Ozar. If he won the case, the injured man might gain funds for medical treatment.
Second reason
But that is not the only reason this case was brought to AdviceLine’s attention.
A journalist called, encouraged by an editor, to ask about the newspaper’s ethical responsibilities in this case.
“My first question to the reporter was whether she had discussed all these risks with the man, risks that are obviously multiplied significantly if the story is published, and was she sure he understood them?” wrote Ozar.
The reporter went back to the injured man to make sure he understood the risks he was taking if the story were published.
Increased risk
“He was firm in his desire to have the story published,” Ozar wrote, “in spite of the increased risk of deportation and loss of needed health care, in order to call the public’s attention to the safety issues and the exploitation of undocumented workers by U.S. businesses.”
After further deliberation, the newspaper decided that the issues raised by the story, and the human interest of a man willing to take risks to help others, a public benefit, “strongly supported a decision to publish if the harm to this man did not clearly outweigh it.” The man approved.
Not discussed, said Ozar, but worth considering in such cases, was whether the newspaper’s editors should ethically decline to publish such a story because they believed the risks to the victim were too great, even if the injured man wanted it published.
Ethics is a balancing act, where the facts in each case have more or less weight that tips a decision one way or another.
The Society of Professional Journalists code of ethics encourages journalists to “minimize harm.”
The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.
Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.
Brian, a freelance journalist, called AdviceLine with a timely and hot-button question: How far should journalists go in using artificial intelligence bots like ChatGPT — an ethical and legal quagmire still taking shape?
Transformative technology like artificial intelligence often arrives before its consequences and potential are fully understood or foreseen.
Artificial intelligence did not just arrive in the world, it exploded into use. It became an academic discipline in 1956, just 69 years ago. Yet by January 2023, ChatGPT had become the fastest-growing consumer software application in history, gaining more than 100 million users in two months.
Phenomenon
It’s an outsized technological phenomenon that challenges human understanding, given recent reports that scientists are not sure exactly how AI works or makes its decisions. These supercomputers appear to be thinking for themselves in ways scientists do not understand. Some even believe AI could cause human extinction.
However, most AI applications being rolled out today for commercial use, like ChatGPT, are termed “large language model (LLM)” programs, which are trained on vast amounts of data, and which use prediction algorithms to generate text and images that seem the most likely to satisfy the requirements of a user’s query.
(How that training data was acquired, and the astounding amount of computing power and electrical energy needed to process it, are ethical issues in themselves.)
Higher order tasks
They are not what are called “artificial general intelligence” (AGI) – systems that would perform higher-order human cognitive tasks.
What is also significant about such LLMs is that they are not “conscious” in any sense. They are not subjects, though they may employ the first-person “I” in their responses to please their prompters; and they have no access to an objective world, other than the data they have been trained on.
They do not understand, or think, or infer, or reason as intelligent humans do – at least, not yet. In essence, they are extremely sophisticated versions of the autocorrect function we are already familiar with in other applications – with many of the same limitations.
Hallucinations
Since these LLMs have no access to reality, they are prone to “hallucinations,” to making up plausible-seeming outputs that bear no relation to actual facts. Their algorithms are built to generate merely plausible answers.
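The plausibility-over-truth behavior of such prediction algorithms can be illustrated with a toy sketch. The tiny corpus and bigram table below are invented for illustration; real LLMs are vastly larger neural networks, but the core move is the same: choose a statistically plausible next word, with no check on whether the resulting sentence is true.

```python
import random

# Toy "language model": a bigram table built from a tiny made-up corpus.
corpus = ("the senator proposed the bill . "
          "the senator opposed the bill . "
          "the court struck the bill .").split()

# For each word, record every word that ever followed it.
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(start, n_words, seed=0):
    """Extend `start` by sampling plausible next words from the table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        choices = table.get(out[-1])
        if not choices:
            break
        # Plausible continuation, not necessarily a factual one.
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 6))
```

Every sentence this sketch emits is grammatical-looking because each word pair occurred in training, yet it may assert things the corpus never said ("the senator proposed the bill" vs. "opposed") — a miniature version of a hallucination.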
Against this background, people like Brian are trying to understand how to use this impressive innovation in their everyday work. Artificial intelligence is described as a tool for journalists. Brian asked some down-to-earth questions:
“Would it be ethical to use an AI bot like ChatGPT in writing articles, as long as I confined its use to checking spelling and grammar, making suggestions for alternative phrasing, and ensuring the piece conformed to the AP Stylebook, but not for generating content, and if I checked it afterwards before submitting it? And should I disclose its use?”
Begin in 2001
Those questions came to Hugh Miller, a veteran AdviceLine ethicist. Since its beginning in 2001, AdviceLine advisors have not simply dished out answers to complicated questions.
AdviceLine advisors engage callers in a conversation intended to encourage journalists to think through the issues involved in their ethics dilemma, and to arrive at a conclusion about what the journalist believes is the most ethical thing to do.
In this 2025 case, Miller did exactly that. Here’s a summary of Miller’s conversation with Brian.
HM: So you are using the bot as, basically, a high-end version of Grammarly?
B: Yes, exactly.
HM: What, exactly, troubles you about such a use, ethically?
B: I’m not sure — it seems controversial, though.
HM: Let me come at that question from another angle. What seems to you to be the harm, to yourself or others, from employing such a tool?
B: Using such tools, undisclosed, might diminish the trust a reader might have in a journalist’s work. And, in some sense, the work the bot does is not “my work,” but work done for me, by the bot.
HM: As to the latter, most word processors have built-in spelling, grammar and composition checkers already. And Microsoft is integrating its own AI bot into its Office software as we speak. All of us who write have used such tools for years, precisely as tools.
B: That’s true.
HM: Problems seem to emerge here if you’re (1) using the bot to do your “legwork” — that is, digging up material you should be using your own efforts, training, experience and judgment to find, and avoiding the bias introduced by the data sets the bots are trained on, and (2) failing to check the output of the bot and passing on “hallucinations” and other howlers without identifying and excising them. But you say you are doing neither of these things, right?
B: Yes, correct.
HM: If then, you are using this bot as a next-gen editing tool, what harm could come of it?
B: None that I can see.
HM: Nor I.
B: But what about disclosure?
HM: AI use in journalism is not settled ethical ground yet; I think here you need to consult your own conscience. I have seen some articles with a disclosure statement saying something along the lines of, “An AI tool, Gemini, was used in the editing and formatting of this story,” and I’m sure I’ve read many others in which one was used but which contained no such disclaimer. If you feel uncomfortable not using a disclaimer, by all means use it. At the very least, it might signal to readers that you are someone who thinks such disclaimers, and transparency more generally, are ethically important enough to mention and keep in mind in one’s reading.
B: That’s a helpful way to think about it, thanks.
Just as scientists struggle to understand how AI thinks, journalists are struggling to find ways to use this technological marvel without allowing AI to think for them or put mistakes into their work.
The record-breaking speed with which AI technology grew is not likely to slow down any time soon, according to U.S. Energy Secretary Chris Wright, who recently visited two national laboratories in the Chicago suburbs, Argonne and Fermilab.
Heart of race
Argonne’s new Aurora supercomputer will be at the heart of the race to develop and capitalize on artificial intelligence, Wright said, according to a report in Crain’s Chicago Business. Likening the race to a second Manhattan Project, which created the atomic bomb, Wright said “we need to lead in artificial intelligence,” which also has national security implications.
“We’re at that critical moment” with AI, Wright told Argonne scientists on July 16, predicting that the next three to four years will be the greatest years of scientific achievement of our lifetime.
Argonne’s Aurora computer is among the three most powerful machines in the world, said Crain’s, able to perform a billion-billion calculations a second.
As with all technology, it comes with strings attached. Use it at your own peril. Eternal vigilance is the cost of good journalism. Artificial intelligence does not change that. Instead, it adds another reason to be vigilant.
It was the onboard computer of Discovery One spacecraft bound for a mission near Jupiter in the movie “2001: A Space Odyssey.”
Possibly one of the most famous computers in cinema history, HAL 9000 killed most of the crew members for an entirely logical reason, if you are thinking like a computer.
Most of what was in the movie directed by Stanley Kubrick is intentionally enigmatic, puzzling. But the sci-fi thriller on which the movie is based, written by novelist Arthur C. Clarke, explains HAL’s murderous motivation.
HAL was conflicted. All crew members, except for two, knew the mission was to search for proof of intelligent life elsewhere in the universe. HAL was programmed to withhold the true purpose of the mission from the two uninformed crew members.
Computer manners
With the crew dead, HAL reasons it would not need to lie to them, lying being contrary to what well-mannered computers are supposed to do. Others have suggested different interpretations.
One crew member heroically survives execution by computer. He begins to remove HAL’s data bank modules one-by-one as HAL pleads for its life, its speech gradually slurring until finally ending with a simple garbled song.
Three laws
Science fiction fans will recognize immediately that what HAL did was contrary to The Three Laws of Robotics written by another legendary science-fiction writer, Isaac Asimov. According to those laws:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
All of this talk about how computers should behave is fanciful and based on science fiction.
Wacky conduct
But recent events at the Chicago Sun-Times show how the wacky conduct of artificial intelligence is invading our lives, in perfectly logical ways that escape human detection.
A special section inserted into the Sunday Chicago Sun-Times featured pages of enjoyable summer activities, including a list of 15 recommended books for summer reading.
Here’s the hitch: The authors were real, but 10 of the books and their elaborate summaries were fake, the work of artificial intelligence.
Mistaken belief
Veteran freelancer Marco Buscaglia wrote the entire summer insert for King Features Syndicate, a newspaper content producer owned by Hearst Communications. Buscaglia told the Chicago Tribune that he used artificial intelligence to compile the summer reading list, then made the mistake of believing it was accurate.
“I just straight up missed it,” Buscaglia told the Tribune. “I can’t blame anyone else.”
Unable to find summer reading lists from other sources, Buscaglia turned to AI platforms such as ChatGPT, which produced 15 books tied to well-known authors. The list contained five real books.
OpenAI, the company that produced ChatGPT, admits it “sometimes writes plausible-sounding but incorrect or nonsensical answers.”
Express dismay
The Chicago Sun-Times and King Features expressed dismay, and King Features fired Buscaglia.
All parties said they would be more careful in the future about using third-party editorial content.
In human terms, what the robot did would be called fabrication, and reason to call for an ethics coach.
Fooled the editors
But, from a purely journalism point of view, one thing must be said: The robot writer was good enough to fool professional editors who are supposed to catch the fakers.
Writer Eric Zorn called the Sun-Times fake books pratfall “artificial ignorance.”
Is artificial intelligence too smart for humans? Or are humans too dumb?
Like HAL, ChatGPT was given a task, which it carried out in an unexpected, flawed, but convincing way.
New world
So what is going on with these computers? We enter a strange new world when we try to understand the thought processes of artificial intelligence.
Arthur Clarke gave a plausible reason for HAL turning homicidal, but it was all too human. Computers are not human, but people who write about why artificial intelligence goes haywire often use terms describing human behavior.
When computers make mistakes, it’s often called a “hallucination.” It’s also called bullshitting, confabulation or delusion — all meaning a response generated by AI that contains false or misleading information presented as fact. OpenAI said those plausible but nonsensical answers produced by ChatGPT are hallucinations common to large language models.
That means the writer of the bogus Sun-Times summer reading list got “hallucinated.”
Human psychology
These terms are drawn loosely from human psychology. A hallucination, for example, typically involves false perceptions. Artificial intelligence hallucinations are more complicated than that: they are constructed, erroneous responses that can be caused by a variety of factors, such as insufficient training data, incorrect assumptions made by the model or biases in the data used to train it.
I suppose that’s another way of saying “garbage in, garbage out.”
Rather than resorting to terms drawn from human behavior, it would make sense to use terms that apply to machines and mechanical devices.
Code crap
These could include code crap, digital junk, processing failures, mechanical failure and AI malfunctions.
Computer builders seem determined to describe their work as some kind of wizardry. They are digital mechanics or engineers working on highly sophisticated machines. But they are building devices that are becoming more complicated, and on which humans are more dependent.
That raises the question of whether humans understand the consequences of what they are doing.
Risk of extinction
Leaders from OpenAI, Google DeepMind, Anthropic and other artificial intelligence labs warned in 2023 that future systems could be as deadly as pandemics and nuclear weapons, posing a “risk of extinction.”
People who carry powerful examples of algorithm magic in their hip pockets might wonder how that is possible. The technology seems so benign and useful.
The answer is mistakes.
Random falsehoods
Artificial intelligence makes a surprising number of mistakes. By 2023, analysts estimated that chatbots hallucinate as much as 27 percent of the time, producing plausible-sounding random falsehoods, with factual errors in 46 percent of generated texts.
Detecting and solving these hallucinations poses a major challenge for the practical deployment and reliability of large language models in the real world.
CIO, a magazine covering information technology, listed “12 famous AI disasters,” high-profile blunders that “illustrate what can go wrong.”
Multiple orders
They included an AI experiment at McDonald’s to take drive-thru orders. The project ended when a pair of customers pleaded with the system to stop when it continued adding Chicken McNuggets to their order, eventually reaching 260.
The examples included a hallucinated story about an NBA star, Air Canada paying damages for its chatbot’s lies, hallucinated court cases and an online real estate marketplace cutting 2,000 jobs based on faulty algorithm data.
Going deeper, Maria Faith Saligumba of Discoverwildscience.com asks, “Can an AI go insane?”
Mechanical insanity
“As artificial intelligence seeps deeper into our daily lives, a strange and unsettling question lingers in the air: Can an AI go insane? And what does ‘insanity’ even mean for a mind made of code, not cells?”
Saligumba goes into “the bizarre world” of unsupervised artificial intelligence learning, which can lead to “eccentric, even ‘crazy’ behavior.”
The well-known hallucinations, she explains, are weird side-effects of the way artificial intelligence systems look for random patterns everywhere and treat them as meaningful.
Hilarious or surreal
“Sometimes,” she writes, “the results are hilarious or surreal, but in safety-critical applications, they can be downright scary.”
It’s a reminder, she points out, that “machines, like us, are always searching for meaning – even when there isn’t any.”
One hallmark of human sanity is knowing when you’re making a mistake, she explains. “For AIs, self-reflection is still in its infancy. Most unsupervised systems have no way of knowing when they’ve gone off the rails. They lack a built-in ‘reality check.’”
Odd connections
Some researchers have compared the behavior of some AIs to schizophrenia, pointing out their tendency to make odd connections.
That’s just one of the ways artificial intelligence loses its marbles.
But human behavior might be the salvation of artificial intelligence, Saligumba suggests.
“Studying how living things manage chaos and maintain sanity could inspire new ways to keep our machines on track… Will we learn to harness their quirks and keep them sane, or will we one day face machines whose madness outpaces our own?”
By then, science fiction writers and movie-makers will be describing how humans face that doomsday scenario, or save themselves from that fate by outsmarting those unpredictable machines.
And by that time, we might have a fourth law of robotics, which would serve humanity and artificial intelligence well: Always tell the truth.
The email query came from a British Broadcasting Corporation reporter based in Ho Chi Minh City, Vietnam.
The question was short and to the point: “Does a story and photo, with the consent of the doctors, of a COVID-19 patient in hospital violate his right to privacy?”
The question came at a time when the entire world was grappling not only with the global pandemic itself, but also with how to report and explain it ethically and accurately. Those are controversial issues even now, including conflicting accounts on how the pandemic started.
Worldwide, more than 7 million COVID-19 deaths have been reported to the World Health Organization, 1.2 million of them in the United States. The pandemic was the worst worldwide calamity of the 21st century. The death toll is the highest since the 1918-20 Spanish flu pandemic and World War II.
Rights protected
The question from the BBC reporter demonstrates that even in the midst of a global health crisis, the rights and safety of every individual should be protected. It also shows that AdviceLine gets questions about ethics from journalists all over the world.
Joe Mathewson, who teaches ethics and law of journalism at Northwestern University’s Medill School of Journalism, was the AdviceLine advisor on duty the day the BBC reporter’s question arrived.
Responding to the reporter, Mathewson pointed out that BBC has editorial guidelines, including a section on privacy that states: “We must be able to demonstrate why an infringement of privacy is justified, and, when using the public interest to justify an infringement, consideration should be given to proportionality; the greater the intrusion, the greater the public interest required to justify it.”
Infringement
Further: “We must be able to justify infringement of an individual’s privacy without their consent by demonstrating that the intrusion is outweighed by the public interest…. We must balance the public interest in the full and accurate reporting of stories involving human suffering and distress with an individual’s privacy and respect for their human dignity.”
In this case, Mathewson asked the BBC reporter whether he had gotten the COVID-19 patient’s consent to be interviewed and photographed by the BBC or the press generally, understanding that the story would identify him.
If he did not, Mathewson told the reporter, “then the next question is, does your story constitute an infringement of his personal privacy? If so, was there a public interest in your story? Finally, was the infringement warranted by the public interest? I believe these are the questions that you should entertain and, as appropriate, answer to your own satisfaction.”
A discussion
The case did not end there. Periodically, the AdviceLine team, which includes four advisors with experience teaching ethics at the university level, holds a Zoom meeting to discuss cases that come to AdviceLine. The sessions include veteran journalists who understand how newsrooms operate.
They are critique discussions intended to check whether the advice given to journalists was as good as it should be, or could have been improved. Occasionally there are disagreements, or praise for answering some particularly tough question.
Hugh Miller, another AdviceLine advisor, said he saw a parallel between the BBC case and a book about early coverage of the HIV/AIDS epidemic, which the media handled poorly; there were few reports of the human suffering seen in AIDS hospital wards. Better coverage of the AIDS epidemic, said Miller, could have informed the subsequent coverage of COVID-19, which likewise was rarely reported from inside COVID hospital wards.
Not a hoax
“If we had been able to see more of that, it would have made people more cautious,” explained Miller. “COVID is not a hoax.”
David Ozar is a founding member of AdviceLine, and continues as an AdviceLine advisor.
The question has to be asked whether it is necessary to pursue a person’s identification. “The answer is no,” Ozar insists. Patients should not be identified. He believes there were many COVID reports from hospital intensive care units.
Privacy needed
“You could not see them, could not see who they were,” said Ozar. Personal identification “needs to be private,” he insists, and he is adamant about that. Ozar serves as a consulting ethicist to medical, hospital, nursing and dental groups.
Journalists argue that stories of human suffering are told best with the help of people willing to be identified, to show that real people are involved and have personal stories to tell. That often creates sympathy and a public willingness to help the stricken.
Otherwise, disasters seem impersonal, too big to comprehend.
Another lesson here is that ethicists do not always agree on what is ethical. Miller believes media need to pay more attention to human suffering in a health crisis, while Ozar says those suffering should not be identified.
The freelancer who contacted the Ethics AdviceLine for Journalists was troubled at learning that the Los Angeles Times and The Washington Post had stopped endorsing presidential candidates prior to the last election.
“I realize a newspaper isn’t necessarily required to issue a presidential endorsement, but both papers have a long history of doing so, so the decision not to do one is clearly a deviation from the norm, and I’d expect that would require a valid and ethical reason. So far, the reasons provided by both publications are far from transparent or satisfactory.”
The anguished journalist admitted the endorsement issue is “weighing heavily on me since I’ve already become incredibly disillusioned with my own industry over coverage of this election…. I fear the news media already has and continues to fail its responsibility to upholding democracy.”
A retreat
Clearly, the journalist is upset at seeing a retreat from an historic media responsibility for leading public opinion at a time when parts of the media industry are redefining themselves. And give her credit for taking journalism and its responsibilities seriously.
The endorsement issue captured national attention during a wild election campaign involving a candidate known to punish those deemed disloyal to him, sowing an undercurrent of fear and caution in the media.
But this was happening at a time when political endorsements are not as common as we might think.
Once ubiquitous
“While such plugs were once ubiquitous, they’ve faded in recent decades,” reported mentalfloss.com. It said a survey by Editor & Publisher “showed that by 1996, almost 70 percent of newspapers weren’t endorsing presidential candidates….”
“Part of this is probably a reluctance to engage in partisan politics, but it also probably speaks to the decline of the newspaper as a central aspect of Americans’ lives.
“With so many avenues available for voters to get to know the candidates, it seems rather quaint to think of anyone voting how an editor tells them to.”
Social media impact
That’s another way social media changed the way journalism and the American public operate.
Two highly circulated newspapers, USA Today and The Wall Street Journal, do not endorse political candidates. The last time WSJ endorsed a candidate was in 1928, plugging for Herbert Hoover, considered “the soundest proposition for those with a financial stake in the country.” A disastrous stock market crash soon followed, souring The Journal on endorsements.
“Big headlines popped up in media circles…when the billionaire owners of The Washington Post and Los Angeles Times blocked editorials that would have endorsed Kamala Harris,” wrote Rick Edmonds of the Poynter Institute in an article explaining “why newspaper presidential endorsements have become an endangered species.”
Resignations
The blocked editorials resulted in resignations at the Times and an angry petition from opinion writers at the Post. The Times admitted losing thousands of readers because of its decision.
“I had already been looking at regional papers, where the steady move away from taking sides in presidential elections has become an epidemic,” wrote Edmonds.
“Independent, locally owned organizations dominate the shrinking list of holdouts,” said Edmonds. “Here, too, disengagement is becoming a trend.”
Murky
That included the Minnesota Star Tribune, which published an explanation, said Edmonds, “that reads, to me, as many such do: murky and excuse-filled.”
The shadow of presidential reprisals hovers over media, along with deep public distrust of media. Among Edmonds’ reasons for ending political handicapping is one that touches on public perceptions.
“No matter how many times the clarification is offered that an editorial board and the newsroom operate separately, many readers don’t see the distinction or don’t believe there is one.”
Ethics issue
This is squarely a media ethics issue.
Other issues Edmonds cited include pinched staffs and space, a belief that readers don’t want editors telling them what to think and the argument that regional papers don’t speak with authority on national matters.
Among national newspapers, The New York Times still endorses political candidates.
Partly because of public blowback from blocking the endorsements, the owners of The Washington Post and the Los Angeles Times issued statements.
Tip scales
Jeff Bezos, owner of The Washington Post, said: “Presidential endorsements do nothing to tip scales of an election. No undecided voters in Pennsylvania are going to say, ‘I’m going with Newspaper A’s endorsement.’ None. What presidential endorsements actually do is create a perception of bias. A perception of non-independence.”
Similarly, Times owner Dr. Patrick Soon-Shiong said in an interview: “The process was (to decide): how do we actually best inform our readers? And there could be nobody better than us who try to sift the facts from fiction” while leaving it to readers to make their own final decisions. He feared picking a candidate would create deeper divisions in a nation already deeply divided over politics.
Some writers, like Jerry Moore of The Hill, reacted to declining political endorsements by saying: “What took them so long?” He thinks they have “outlived their purpose.”
Muddy waters
Political endorsements “muddy the waters of a newspaper’s independence,” he wrote. “A candidate favored by editorial board members becomes ‘their’ candidate moving forward.”
Meanwhile, “some journalists are calling it a betrayal of democratic responsibility,” writes David Artavia in Yahoo News.
That was exactly the point raised by Tara, the freelancer who came to the Ethics AdviceLine for Journalists looking for advice.
Providing facts
The AdviceLine advisor, David Craig, wrote in his report on the case: “We discussed her question but also two broader issues: the more general practice of newspaper endorsements of presidential candidates, beyond the two instances she raised. And her concern about whether (apart from editorial page choices) the normal approach to news reporting of just providing the facts – and the conventional frameworks of journalism ethics – work in what she saw as abnormal times with a threat to democracy if Donald Trump were re-elected.
“I told her I thought the decisions by the Times and Post owners were questionable from the standpoint of the principle of the (Society of Professional Journalists) code of being accountable and transparent, especially since the decision not to endorse was different from the recent past for these publications and came so close to the election.
After backlash
“I think they should have better explained the decisions both internally and externally, though Post owner Jeff Bezos did publish an opinion piece explaining his decision after backlash. I also told her I thought they violated the principle of acting independently by blocking the editorial boards from endorsement.
“She said she felt more comfortable about how she had understood the ethics of the decisions after hearing my perspective, which was essentially in line with hers.”
As for the broader issue of newspaper endorsements, Craig “noted my concern about possible negative impact on audience trust given the widespread distrust of news media today and perceptions of bias.”
Hold to principles
Addressing the broader concerns about the state of journalism, Craig urged the freelancer “to hold to the SPJ code’s principles of seeking truth, minimizing harm, acting independently and being accountable and transparent because they are not just journalism principles but human principles.
“Although there was no specific decision at issue here, it was evident she takes these matters very seriously, and she appreciated getting to talk about them.”
The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.
Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.
The Society of Professional Journalists has had a love/hate relationship with its own code of ethics for a long time.
It loves being praised for having a code of ethics that is admired by professional journalists and considered a “gold standard” worthy of guiding the conduct of all ethical journalists. Wikipedia says the code is “what the SPJ has been best known for.”
It hates being expected to actually do anything about ethics and insists the code of ethics is strictly voluntary, take it or leave it. The society says its code is “a statement of abiding principles” and “not a set of rules.”
I’ve been involved in this push-and-pull tussle for years and called SPJ an ethics wimp for refusing to enforce its own code of ethics. That’s when I learned who my friends really were, and who really believed in free speech.
No ethics cops
Journalists thrive on controversy, but not in their own ranks. They bash everyone, but go easy on fellow journalists, saying they don’t want to be ethics cops.
Seems like a double standard, one for journalists and another for everyone else.
I’ve argued that if journalists don’t face up to their ethics obligations and put their own house in order, someone is going to try to do it for them.
Then this happened: Members of the Hawaii state senate on Jan. 23, 2025, introduced a bill calling for penalties against journalists operating in the state for ethics violations.
Evolving media landscape
“The legislature finds that in today’s rapidly evolving media landscape, the need for ethical standards in journalism has never been more urgent,” said the proposal. “The rise of social media, deepfake technologies and generative AI has amplified the spread of misinformation, posing new challenges for journalism and public trust.”
The statement points to “a significant decline in public confidence in media.”
Hard to deny any of that.
But then the legislators dropped a bombshell. The bill says journalists, editors or news media outlets shall “comply with the code of ethics adopted by the Society of Professional Journalists.”
Horrified
You’d think SPJ leaders would take that as a compliment. But SPJ leaders, with a history of doing nothing about ethics except talking about it, were horrified.
“The Society of Professional Journalists views this legislation as patently unconstitutional and calls for the Hawaii legislature to remove it from consideration,” said the organization in a statement.
SPJ’s national president, Emily Bloch, had this to say: “While the Society of Professional Journalists is flattered that the Hawaii State Legislature recognizes the SPJ Code of Ethics as a gold standard for journalistic integrity, we must strongly oppose any attempt to use our code as a tool for policing journalists through legislation. Such measures fundamentally contradict the principles of the First Amendment and the freedom of the press.”
Unconstitutional
SPJ has long argued that any attempt to do something about unethical journalism is unconstitutional. Worse yet, SPJ shows no ambition for addressing the seismic shifts in American journalism that the Hawaii state senate spelled out clearly.
A trailblazer in journalism ethics, SPJ once boasted of having 15,000 members. Then it lost its way, becoming what a consultant once said was “nice but not necessary.” Today, membership reportedly is down to about 4,000.
Founded in 1909, SPJ once touted itself as the oldest and largest journalism organization in the U.S. Its website now describes the organization as “the nation’s most broad-based journalism organization” that, among other things, is dedicated to “stimulating high standards of ethical behavior.” An interesting word choice, since stimulating means “to arouse to activity or heightened action.”
Stimulation
SPJ does not want to stimulate the Hawaii state legislature to act on its code of ethics.
It could be said SPJ lost its way when it went soft on ethics, and might have lost members too.
I’m an SPJ member, and feel personally involved any time the SPJ ethics code is mentioned. I wrote the version adopted in 1973, the first code of ethics that SPJ could call its own.
It happened this way: In 1972, when SPJ was known as Sigma Delta Chi (SDX), I was chairman of the society’s Professional Development Committee.
Abuses
At its national convention in Dallas, the society adopted a resolution asking journalists and the public to be aware “of the importance of objectivity and credibility in the news media by calling attention to abuses of these tenets when they occur.”
That resolution was sent to my committee “for study and program proposals.”
Committee members considered a list of things to do in response to the convention mandate. At the top of the list was a code of ethics SDX could call its own.
While I researched what a modern code of ethics should contain, committee members offered ideas. With that, I batted out a code of ethics on the Underwood typewriter I used at work at the Chicago Tribune. (This was before computers, if you can imagine that.)
Soaring ideals
I wanted it to reflect the ideals of SDX and of journalism in soaring ways, reflecting not only what journalism is but what it wants to be.
The next year, in 1973, I presented the new code of ethics at the national convention in Buffalo, N.Y., calling it “strong stuff.” It outlawed accepting “freebies,” free travel and secondary employment that could damage a journalist’s reputation and credibility.
Most of all, it contained a pledge, saying “journalists should actively censure and try to prevent violations of these standards, and they should encourage their observance by all newspeople.” That became known as the “censure clause.”
For the books
What happened next is one for the history books. I moved for its adoption; it was seconded and adopted unanimously by hundreds of delegates without a word of debate.
Those delegates had copies of the proposed code of ethics in their notebooks. Typically, journalists haggle for hours over the proper use of words, sentences, phrases and even punctuation in written material. But not this time.
What happened next was bizarre, surprising and maybe unprecedented in the history of the world. As I walked from the dais, the society’s executive director, Russ Hurst, grabbed my arm.
Once again
Looking worried, Hurst said maybe the delegates did not realize what they had just done. He expected a long and bitter floor fight over the code, especially the part about censuring journalists. He told me to introduce the proposed ethics code again.
So, I returned to the dais, interrupting the society’s president as he was going on to the next order of business, and told him I was instructed to introduce the code a second time, which I did.
This time, I emphasized it was a tough code “with teeth,” telling journalists to take action on ethics. Ethics requires thought and action. I moved again for its adoption.
Ayes
And a resounding second chorus of louder “ayes” rang out, without objections or debate.
It was the only time in SDX history that a resolution was adopted unanimously twice. And probably the last time ethics was discussed without heated debate.
That year, the organization changed its name to the Society of Professional Journalists, and I became chair of the newly created national Ethics Committee.
SPJ leaders responded cautiously with a go-slow campaign of hanging copies of the ethics code on newsroom and classroom walls.
Next decade
For the next decade, the code nagged at members, as a good code should. It should not simply be words on paper, but a call to action.
On Nov. 19, 1977, an SPJ convention in Detroit adopted a resolution mandating that “chapters be encouraged to develop procedures for dealing with questions of ethics.” That never happened.
SPJ was torn between a desire to lead journalists toward more ethical conduct, and a fear that could lead to “witch hunts” and litigation.
My greatest fear was that 326 SPJ professional and student chapters had no idea how to handle ethics complaints if they arose. It made sense to offer some guidelines, some boundaries.
President’s request
In 1984, while I was national ethics chairman, I drafted procedures for addressing ethics complaints at the request of SPJ president Phil Record. On May 17, 1985, at a meeting in Salt Lake City, the SPJ board of directors unanimously rejected the proposed procedures.
I was not proposing draconian measures. Censure could mean anything we wanted it to mean, including a mild rebuke pointing out that a member or one of the SPJ chapters was doing something contrary to the ethics standards.
I believed that the SPJ code of ethics should be considered a condition of membership, like the bylaws which spelled out the conditions for being a member in good standing. First and foremost, it belonged to SPJ and our first duty was to be sure our own members understood the code and lived up to it.
House in order
SPJ had an obligation to make sure its own house was in order before preaching ethics to others.
If other organizations wanted to adopt the SPJ code, that was their business. And they could decide what to do about it.
The censure clause issue came to a head at the 1986 convention in Atlanta, 13 years after the code’s adoption. A delegate from Mississippi said that his chapter started an investigation into an alleged ethics code violation, but dropped it when national SPJ officials said they would not support any action.
Proper and just
A delegate from Arkansas proposed a resolution asking the SPJ board of directors to recommend, in consultation with the national Ethics Committee and local chapters, procedures for chapters to use to handle ethics complaints, subject to approval by the national convention the following year in Chicago. She wanted guidance for what is “proper and just.” The resolution was adopted.
On April 30, 1987, the SPJ board of directors met in St. Paul and voted to recommend no procedures for chapters to handle ethics complaints, and the board recommended removing the censure clause from the code of ethics.
At that meeting, Bruce Sanford, SPJ’s lawyer, is quoted in the minutes saying “if you believe in ethics, you have to take some risks.” That seemed like a moment of enlightenment. But then Sanford handed the board a memorandum calling ethics enforcement an “oxymoron.” He urged “using hypothetical situations to provoke discussion,” as lawyers do, not real ethics issues. The memo warned enforcing the code “would likely engender a rash of lawsuits.”
A menace
Sanford had been terrifying board members with this kind of language for years, describing the ethics code as a menace to be feared. It should be noted that various professional groups are bound by professional standards, including lawyers. The American Bar Association has model rules of professional conduct, including disciplinary authority.
Lawyers advised SPJ that admitting to having a code of ethics could be held against journalists in court, which never happened.
The trouble with hypotheticals is they are fiction, although a room full of clever journalists can devise some amazing and far-fetched hypotheticals. But that’s just an amusing game. Life often is far more complicated and surprising than anything you can imagine.
Refusal
At the 1987 national convention in Chicago, the SPJ board refused to follow the 1986 convention’s mandate. After delegates voiced disapproval of the board’s failure to act on that mandate, a proposal to delete the controversial censure clause was adopted by a 162-136 vote. It was replaced by a passage calling for ethics education programs and encouraging the adoption of more codes of ethics.
By my reckoning, SPJ leaders by this point had overruled or ignored four convention resolutions mandating action on ethics abuses and procedures for addressing ethics complaints.
For years, SPJ bylaws stated that conventions are “the supreme legislative body of the organization.” Their mandates typically were honored and considered the voice of its membership, helping to set the organization’s agenda.
Bylaws amended
In 2023, the bylaws were amended, deleting references to conventions being the supreme legislative body. Instead, the amended bylaws say the SPJ board of directors “shall determine the priorities of the society’s business in furtherance of its mission…” In effect, SPJ leadership censored its members. This made official the board’s long-held suspicion that the boisterous rank and file can’t be trusted.
This history describes an organization that once led the way on ethics, then lost confidence as its leadership turned timid, out of touch with its membership and deaf to its members’ wishes.
So ended a stormy period that provoked hard feelings and some broken friendships.
All for ethics
Though everyone is for ethics, you’d get an argument on what that means.
The toothless 1973 code and its amendments, though considered a model for journalists for 23 years, were ready for retirement.
SPJ’s national Ethics Committee met in Philadelphia in 1996 with the intention of drafting a new “green light” code of ethics, which it did in two days. The backbone of the new code hinged on four principles: Seek truth and report it, minimize harm, act independently and be accountable. I was told the Poynter Institute suggested that framework.
Four principles
Participants gathered into four groups to suggest standards for each of the four principles. I chaired the “be accountable” section, later changed to “be accountable and transparent.”
The new code of ethics was adopted by a national convention in 1996, including passages urging journalists to be accountable by exposing “unethical conduct in journalism, including within their organizations” and to “abide by the same high standards they expect of others.”
The code was tweaked again in 2014.
I served as SPJ’s national ethics chair from 1983 to 1986, and left the national Ethics Committee in 2010. I also served for many years as Midwest regional director for Illinois, Indiana and Kentucky. In 1983, I was awarded the Wells Memorial Key, the society’s highest honor. I served the society for many years, but also feel an obligation to hold its feet to the fire, as I would with any organization that considers itself vital to the future of journalism. I want SPJ to live up to its own ideals.
Calls for action
Clearly the current SPJ ethics code still calls for action, where it says journalists should expose unethical conduct in journalism. That is something SPJ is unwilling to do, and might be the next thing to disappear from the code.
The Hawaii state legislature is taking aim at wayward journalists, despite SPJ’s protests. And it has a definite plan for doing so. It calls for:
*Establishing baseline ethical standards and transparency requirements for journalists, editors or news media outlets operating in Hawaii.
Training
*Requiring news media to train their employees in ethics.
*Establishing a journalistic ethics commission to render advisory opinions about violations of the journalistic code of ethics.
*Establishing a journalistic ethics review board.
Penalties
The commission “shall enforce penalties” recommended by a review board.
*Creating a dedicated hotline and online reporting system to file complaints related to violations of the code of ethics.
*Creating a complaint and appeals process.
Investigate
Under the legislation, the ethics review board would investigate complaints and file a written determination within 30 calendar days. The board could recommend a penalty for noncompliance, which could include a fine for a second violation.
Penalties could include “suspension or revocation of state media privileges, including press credentials for government-sponsored events.”
The proposed legislation goes on to say, “The state shall not deny or interfere with a journalist’s, editor’s or news media outlet’s right to exercise freedom of speech or freedom of the press…. A journalist, editor or news media outlet shall be responsible for determining the news, opinion, feature and advertising content of their publication.”
Unacceptable
Unacceptable expressions include libel, slander, invasion of personal privacy, obscenity and inciting unlawful acts.
Locally, Hawaiian media express disapproval of the proposals.
The Sunshine Blog in Honolulu Civil Beat said: “Just say no to giving the state power over the press.”
Journalists will say, as they have in the past, that ethics enforcement is a violation of their First Amendment rights, and maybe it is and will be shot down for that reason. Courts these days, however, seem to differ on the meaning of constitutionality and press freedoms as government officials turn increasingly hostile toward the media. Two U.S. Supreme Court justices want to reconsider New York Times v. Sullivan, the landmark 1964 First Amendment decision on libel.
It could be healthy to nudge journalists into thinking about what they should do to keep journalism honest, fair and ethical in these times of political polarization, media fragmentation, a divisive internet and a disappearing newspaper industry.
Freelance writing is a swashbuckling sort of business where practitioners live by their wits and guile.
It’s always been a tough business. But for some it got tougher as the newspaper business, which hired freelancers, took a nosedive. Unemployment among freelance writers rose in 2017, then started dropping in 2020.
Statistics show that freelance writers are becoming a growing force in the media industry, which employs 65 percent of the freelance writers in the United States.
The U.S. has 12,994 employed freelance reporters.
Vanishing newsrooms
Newsrooms once were filled with bustling reporters, but those days are vanishing.
Since 2005, more than 3,300 newspapers have closed in the United States. Newsroom jobs have fallen 26 percent since 2008 in the wake of staff cuts.
Although digital media employment is up, the United States has far fewer journalists today than before.
Remaining news outlets have fewer in-house reporters, but they still need stories to report. To fill that gap, many hire freelance reporters, some of them former newspaper journalists. Writing remotely also fits the new media landscape.
Calamity
Sometimes one person’s calamity is another person’s opportunity.
That’s where freelance writers come in. They fill the need for writers, and like writers everywhere, they are likely to encounter ethics issues.
A Colorado freelance writer came to the Ethics AdviceLine for Journalists for help untangling a problem involving a news source and two national media outlets that employ her.
The freelancer was working on a story for one national media outlet when she discovered that her source had offered the same story to a rival national outlet, which also employs her.
Promises
The freelancer believes the source is trying to be helpful in getting the story out, but does not know how the journalism business works. The freelancer had promised to write the story for one media outlet, but not for the rival outlet.
Hugh Miller, the AdviceLine advisor in this case, asked the freelancer how she would approach the issue ethically herself, unprompted by the advisor.
AdviceLine does not tell journalists what they should do. Instead, AdviceLine advisors help journalists with a troubling ethics issue to arrive at their own conclusions.
Responsibilities
“I asked her to focus on HER responsibilities and courses of action, not those of her editors” at the two national news outlets.
“She responded quite quickly that she thought it would be best if she contacted both editors and informed them fully of the situation,” including showing both editors the email the source had sent to the rival news outlet without the freelancer’s knowledge.
“In this way neither editor would be left in the dark about how matters stood,” said Miller. She agreed to finish the story she was writing for the first news outlet, and told the rival news outlet that she could not be assigned to write the story for them because it would be a conflict of interest.
Telling the source
The freelancer decided against telling the source, who had caused the conflict, anything about the discussions with the editors. The editor at the rival news outlet was free to talk about the case further with the source.
“She thought this approach would allow her to most fully discharge her responsibilities to both of her competing employers,” said Miller. “I told her I had little to add to her analysis, and that I thought it a good one, ethically — however, in practical terms, the editors then dealt with the matter. She was relieved to have had the chance to talk it out.”
Having a place to call and talk is one of the benefits of the Ethics AdviceLine for Journalists. Journalists with an ethics dilemma often have a hunch about how to solve the problem, but they want to know if the hunch is correct, as it was in this case.