Outing Priests Ethically


By David Craig and Casey Bukro

Ethics AdviceLine for Journalists

“A thorny issue is coming up about outing priests,” said the Texas independent documentarian, who was treading new journalistic territory, where changes in public sentiment bring new questions about what is ethically correct.

The journalist was investigating cold cases, involving swirls of rumors that priests having sex in motels with other men were murdered. The priests’ bodies were found nude and bound, possibly victims of sex or hate crimes.

“We can confirm those things were happening,” said Deborah Esquenazi, an investigative filmmaker who was collaborating with Texas Monthly magazine in Austin, Texas. But to do that, “we have to out these individuals. And I just want to discuss the ethics of that,” she told AdviceLine.

Outing dangerous

In the past, she noted, outing a person could be dangerous to the person outed. But in the case involving priests, “all the individuals we’d be outing were deceased. They were also public figures. And nowadays, of course, with the shift in public sentiment, I believe that one of the reasons those individuals didn’t get the justice they deserved was because they were gay.

“And it could be swept under the rug because the church wanted to use sanctuary laws in order to not have these priests outed.” Sanctuary laws refer to the historical and traditional right of a consecrated church building to offer refuge, protecting individuals from pursuit and arrest by secular authorities, historically for those accused of crimes.

This case involved layers of sensitive questions, past and present, and a documentarian with strong feelings about the story. “I am personally a lesbian and do not believe that outing should ever be considered a problem, because I don’t think there is anything wrong with being gay,” Esquenazi said.

Shifting conversation

She added: “I believe we should be shifting the conversation that outing should never be considered disgraceful. In fact, we could out people and be proud of such a thing.”

Esquenazi’s beliefs challenge journalists’ long-standing hesitation to discuss people’s sexual orientation unless subjects are clearly open to it, a hesitation rooted in privacy concerns and in respect for the decision of some gay people not to be out.

This bundle of complexities landed in the lap of AdviceLine advisor David Craig, Presidential Professor and Gaylord chair, Gaylord College of Journalism and Mass Communication at the University of Oklahoma.

Ethical conclusion

In discussing the case with the reporter, Craig’s mission was to help the journalist arrive at the most ethical course of action, guided by ethics standards such as the Society of Professional Journalists code of ethics.

Naming principles in the SPJ code, Craig described advice he gave to the reporter and her reaction:

  1. Regarding “seek truth and report it” – Is the piece of truth that the priests were gay important to the story? Why or why not? She said she thinks that’s the fundamental reason they didn’t get justice. She said there is reason to believe sanctuary laws were not used as a “sacrament” but as a “shield.”
  2. Regarding “minimize harm” – Is there anyone who might be harmed by the disclosure that the priests were gay, such as family members embarrassed to have this disclosed? She said there were few next of kin and no one she thinks would be harmed.
  3. Regarding “be accountable and transparent” – Is there any opportunity within the story or in ancillary material to explain the decision to out these priests? She said she could possibly narrate something or write an op-ed in a magazine working with her.

“She also brought up privacy concerns and whether these would still be relevant if they were murdered,” Craig wrote in his report on this case. “I agreed that saying the priests were gay is relevant, and I noted it would be almost impossible to tell the story without saying that.”

Critique sessions

The AdviceLine team of advisors meets periodically to discuss and critique case reports in an effort to be sure advice given to journalists is helpful and accurate. Presenting the case to the other advisors, Craig said:

“She is trying to approach this in a measured way, specific to the work she does. Should outing be considered in her reporting? What to do seemed clear in this case. There was no way to report the story without saying the priests were gay. She was thinking about harm. She wasn’t concerned about anyone being harmed” by her reporting.

Craig and the filmmaker discussed ways to articulate the sensitive issues in the story and the reporter said they would be mentioned in the film. As for the ethics implications, “she wanted to be comfortable in what she did and how she did it.”

Being gay

David Ozar, an AdviceLine advisor and former professor of social and professional ethics at Loyola University Chicago, asked the question “is it okay to out people?” He knew a gay couple who preferred to keep that part of their relationship quiet.

“That aspect raised strong objections to outing,” he said. “I doubt you would be surprised today that some priests are gay.” Ozar supported what the filmmaker was attempting to do, “but it must be carefully thought out.” A key question, he added, is “should we be identifying these people who are likely to be harmed? Visual arts has its own set of ethical issues.”

Hugh Miller, also an AdviceLine Advisor who formerly taught ethics and business ethics at Loyola University Chicago, argued that “there should neither be absolute bans on publishing facts that might ‘out’ a person as being gay, nor absolute imperatives to publish such facts. Journalists must make a judgment call each time, balancing the public’s right to know against the stricture to minimize harm.”

Miller’s comments highlight a common element of ethical deliberation that AdviceLine advisors bring to their discussions with callers and one another: Careful ethical decisions usually involve considering more than one principle and weighing their importance in the situation — here, for example, the importance of the truth being told versus the harm that it might bring.

********************************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.

AI Puzzles


By Casey Bukro

Ethics AdviceLine for Journalists

It was Brian again, a freelancer calling AdviceLine with another question about writing with the aid of artificial intelligence.

His life as a freelancer was getting complicated because rules governing the use of artificial intelligence in journalism were changing fast, and his supervisors were giving him mixed messages.

“I am reaching out seeking a followup on a past case that I spoke (about) to the Ethics AdviceLine,” Brian said in his email. “I found out that my company will soon be incorporating AI tools after editors/leadership gave me a hard time after I unknowingly used a rephrasing/clarity tool which still does not appear to be against our written policy.

“I want to be proud of these stories and continue to worry that because they (AI tools) were now seemingly the policy, I can’t be.”

Admits using Toolbot

The first time he got into trouble, Brian had admitted to his editors that he used Toolbot to check spelling and grammar, suggest alternate phrasings and ensure his pieces conformed to the AP Stylebook, but not for generating content.

Brian’s editors told him they “would not have used such a tool,” and this caused Brian to fret that he had done something unethical, and that the quality of his former work was tainted by the use of Toolbot.

By chance, Hugh Miller, an AdviceLine ethics expert, was on duty the first time Brian contacted AdviceLine, and Miller happened to be on duty the second time.

Miller assured Brian that his earlier use of Toolbot was not unethical.

New AI tools

But here’s what worries Brian the second time he contacted AdviceLine: The company he works for will be introducing a new content management system (CMS) to its newsroom which will have AI tools built in. Exactly what those tools are, and what they will be capable of doing, has not been made clear by the editors.

This causes Brian renewed anxiety about his past articles, and what might be the ethical use of AI tools being newly introduced in the newsroom where Brian submits his stories.

Here’s a description of the conversation between Miller and Brian as they tried to noodle their way through this new dilemma:

Hugh Miller: Do you know what the tools are?

Brian: I don’t yet.

HM: I presume they will at least have the minimal editing and formatting capabilities of, say, a Toolbot, yes?

B: I assume so, and possibly others.

HM: As a recent post on the Ethics AdviceLine for Journalists website points out, “A 2024 Associated Press survey found nearly 70 percent of newsroom staffers use the technology for basic skills such as producing content, information gathering, story drafts, headlines, translation and transcribing interviews. One-fifth said they used AI for multimedia projects, including graphics and videos. Surveyed were 292 media representatives from legacy media, public broadcasters and magazines, mostly based in the U.S. and Europe.” So AI is already being extensively used in newsrooms in ways far beyond the bare-bones use you were making of Toolbot. Does your organization have an AI use/ethics policy in place?

B: I don’t believe so.

HM: Perhaps it might be a good idea to help craft one.

B: I belong to the union at work, and we have begun to discuss this. But we only meet every few months.

HM: This is an issue on which management and union interests converge. Credibility is the very lifeblood and stock-in-trade of journalism. Readers should know that their human concerns are being reported by human journalists, and there should be transparency about AI use. Perhaps you could get it on the agenda for the next meeting that you want to discuss a company-wide AI use/ethics policy.

B: I think we’re moving in that direction, yes.

HM: And in the meantime, collect examples of such policies from other newsrooms or places like the Society of Professional Journalists and the Poynter Institute.

B: Yes, I’ve already begun looking into those.

HM: Any other issues?

B: Not right now.

HM: It sounds like you have a plan to move forward. Keep me posted.

AdviceLine has four staff members who help journalists solve ethics dilemmas through a discussion leading to a conclusion. The four advisors taught or are teaching ethics at the university level. These advisors meet periodically to review advice that was given to journalists, and whether it could have been better. 

After giving advice to journalists, advisors write case reports for each query handled by AdviceLine. At the periodic Zoom meetings, those case reports are discussed.

At a recent Zoom meeting, Miller described his exchange with Brian, and the advice he gave.

Good advice

In all cases where journalists ask for guidance on the use of artificial intelligence, suggested David Ozar, they should be asked: “Does your editorial workplace have a policy?” That’s good advice.

Ozar is a co-founder of AdviceLine and emeritus professor of the Department of Philosophy, Loyola University Chicago, and a consulting ethicist for the Institutional Ethics Committee, NorthShore University Health System.

Journalists should “encourage collective action; this is an issue where workforce and management might converge,” suggested David Craig, Presidential Professor and Gaylord Chair, Gaylord College of Journalism and Mass Communication, the University of Oklahoma, Norman, Okla.

Also attending was AdviceLine advisor Joe Mathewson, a professor at the Medill School of Journalism, Northwestern University, who teaches ethics and law of journalism.

Miller has been with AdviceLine since 2002 and was assistant professor of philosophy at Loyola University Chicago and taught courses in ethics and business ethics. His areas of specialization were philosophy of religion, philosophical theology, history of metaphysics and contemporary French philosophy.

AdviceLine ethics cases are archived at the Medill School of Journalism.

****************************************************************


AI Soul Searching


By Hugh Miller and Casey Bukro

Ethics AdviceLine for Journalists

A lot of soul-searching is going on over the ethical use of artificial intelligence in the media, a mind-bending exercise pointing out that a tool expected to improve journalism might replace human journalists and doom news outlets that feed AI the information that makes it work.

Some pontificate. Others strategize over this existential moment.

As often happens when science brings us some astonishingly brilliant new idea, using the new technology reveals a few equally astonishing flaws. AI software models used widely today, for example, cannot reliably and accurately cite and quote their sources. Instead, we get gibberish that looks credible, like crediting a real author for words AI “hallucinated.” Since AI “feeds” on the work of others, usually uncredited, news organizations using AI could be accused of plagiarism.

Nothing quite that complicated came to AdviceLine’s attention when a journalist working for a newspaper in Alaska asked for help with an AI issue more likely to confront journalists every day:

“This is kind of a dumb question,” the journalist began, although most journalists know there is no such thing as a dumb question. “But I’ve always struggled with headlines and now I’m hoping to get some help from AI to write them,” he continued. “How/where do other outlets disclose that just the headline of an article was written by AI?”

An answer

Answering that question was Joseph Mathewson, AdviceLine advisor and a professor at the Medill School of Journalism, Northwestern University, who happened to be a personal friend of the journalist calling for help.

“Thanks for the question!” replied Mathewson. “I haven’t confronted it before, but it seems to me that anything you publish written by AI should be identified as such, including headlines…maybe by a blanket note somewhere in the paper to that effect if it’s more than one.”

A direct response to a direct question, which is what AdviceLine has provided since it began operating in 2001, long before artificial intelligence became a burning issue in journalism. But it was the kind of question the AdviceLine staff of ethics experts is qualified to answer.

Artificial Intelligence is a journalism riddle, a kind of technology already in use, but not fully understood. Expected to be a solution, it causes problems of a kind never seen before, like hallucinations, defined as information or responses generated by AI that are fabricated, inaccurate or not grounded in fact. That is hardly a useful tool, but it’s already in widespread use.

Job loss

And conflicts over AI can cost journalists their jobs, as illustrated by the Suncoast Searchlight, a Florida publication covering Sarasota, Manatee and DeSoto counties.

The publication had four full-time staff reporters and two editors.

In November, all four reporters sent a letter to the nonprofit board of directors accusing their editor-in-chief of using generative AI tools, including ChatGPT, to edit stories and hiding that use from staff, according to a report by Nieman Journalism Lab of the Nieman Foundation for Journalism.

As a result, said the reporters, hallucinated quotes, a reference to a nonexistent state law and other factual inaccuracies were introduced into their story drafts. When they questioned the editor about the edits, they said she did not immediately disclose her use of AI tools but instead contended she made the errors herself.

Breach of trust

Said the reporters: “We fear that there may be extensive undisclosed AI-generated content on our website and have questions about what retroactive disclosure is needed for our readers.” They added that the editor had created a breach of trust between herself and her reporters.

The reporters asked the board of directors, consisting of media executives, journalists and local business people, to intervene. They also made several requests: an AI policy, a fact-checking process and an internal audit to identify AI-generated writing that might have been published on the site. And they asked the editor-in-chief to promise not to use AI for editing in the future.

Less than 24 hours after the board received the letter, the editor-in-chief and her deputy editor fired one of the reporters who signed it. Clearly, hazards abound when reporters criticize their editors, who prefer to do the criticizing.

Disruptive

AI is proving to be a disruptive technology, even as it is widely used.

A 2024 Associated Press survey found nearly 70 percent of newsroom staffers use the technology for basic skills such as producing content, information gathering, story drafts, headlines, translation and transcribing interviews. One-fifth said they used AI for multimedia projects, including graphics and videos. Surveyed were 292 media representatives from legacy media, public broadcasters and magazines, mostly based in the U.S. and Europe.

Aimee Rinehart, co-author of the survey and AP’s senior product manager of AI strategy, observed:

“News people have stayed on top of this conversation, which is good because this technology is already presenting significant disruptions to how journalists and newsrooms approach their work and we need everyone to help us figure this technology out for the industry.”

Ethics uneven

Citing the AP survey, Forbes, the American business magazine, headlined: “Newsrooms are already using AI, but ethical considerations are uneven.”

Forbes pointed out that while the news industry’s use of AI is common today, “the question at the heart of the news industry’s mixed feelings about the technology” is whether it is “capable of producing quality results.”

This is oddly reminiscent of football teams that sign rookie quarterbacks to multi-million-dollar contracts, hoping they become champions of the future. Good luck with that. Such hopefuls soon find themselves contending with someone like Dick “Monster of the Midway” Butkus, Chicago Bears linebacker famous for his crushing tackles.

Server farms

The Dick Butkus analogy also applies to the large language models (LLMs) that drive artificial intelligence tools. They are large programs that run on hugely energy-intensive server farms. They take a huge volume of training data (usually sourced without recompense to the originators) and, in response to a prompt, spit out text that is associated with the prompt topic and reads as grammatical and reasonably well-informed.

Such output has no necessary connection with reality, since the LLMs have none. They rely wholly on their input data and their algorithm – they are, in fact, nothing but these.

They cannot fact-check, since they have no access to facts, only “input data,” which itself may have only a tenuous connection to reality if it comes from, say, Fox News, Newsmax or OANN (One America News Network).

No concepts

They cannot conduct interviews, because they cannot tell when an interview subject needs to be pushed on a point, or if he or she is lying. They cannot construct a narrative of events, since they have no understanding of causality or temporal sequence – they have no concepts at all, in fact. And they are subject to “steering” – they can be programmed to exhibit biases, as Elon Musk has said he is doing with Grok, the AI bot on his X platform.

It may be that an AGI (artificial general intelligence) will someday be constructed. AGI is the concept of a machine with human-level cognitive abilities that can learn, understand and apply knowledge. Unlike today’s AI, which excels at specific jobs, AGI would have versatility, adaptability and common sense, allowing it to transfer learning across disciplines like medicine, finance or art without being specifically programmed for each. It is a major goal in AI research, but remains hypothetical. Some will want to prevent it.

LLMs are far from being such a thing, and a true AGI will not be built out of an LLM.

Reshaping newsrooms

Despite AI’s shortcomings, The Poynter Institute for Media Studies points out that it already is reshaping newsroom roles and workflow. In 2024, Poynter introduced a framework to help newsrooms create clear, responsible AI ethics policies – especially for those just beginning to address the role of artificial intelligence in their journalism.

Updated in 2025, Poynter’s AI Ethics Starter Kit helps media organizations define how they will and will not use AI in ways that serve their mission and uphold core journalistic values. It contains a “template for a robust newsroom generative AI policy.”

Near the top of this template is a heading called “transparency,” calling upon journalists using generative AI in a significant way to “document and describe to our audience the tools with specificity in a way that discloses and educates.”

RTDNA guidance

Another major journalism organization, the Radio Television Digital News Association (RTDNA), also offers guidance on the use of artificial intelligence in journalism, pointing out that it has a role in ethical, responsible and truthful journalism.

“However,” says RTDNA, “it should not be used to replace human judgment and critical thinking — essential elements of trusted reporting.”

Getting down to the nitty-gritty, Julie Gerstein and Margaret Sullivan ask “Can AI tools meet journalistic standards?”

Spotty results

“So far, the results are spotty,” they say in the Columbia Journalism Review. AI can crunch numbers at lightning speed and make sense of vast databases.

“But more than two years after the public release of large language models (LLMs), the promise that the media industry might benefit from AI seems unlikely to bear out, or at least not fully.”

Gerstein and Sullivan point out that generative AI tools rely on media companies to feed them accurate and up-to-date information, while at the same time AI products are developing into something like a newsroom competitor that is well-funded, high-volume and sometimes unscrupulous.

Hallucinate

After checking the most common AI software models, Gerstein and Sullivan found that none of them “are able to reliably and accurately cite and quote their sources. These tools commonly ‘hallucinate’ authors and titles. Or they might quote real authors and books, with the content of the quotes invented. The software also fails to cite completely, at times copying text from published sources without attribution. This leaves news organizations open to accusations of plagiarism.”

Whether artificial intelligence babbling can be legally considered plagiarism or copyright infringement remains to be answered by lawsuits filed by the New York Times, the Center for Investigative Reporting and others.

Especially irked, the New York Times accuses OpenAI of trying “to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.” OpenAI created ChatGPT, which allegedly contains text copied from the New York Times archives and reproduced verbatim for ChatGPT users.

Worrying outcome

Say Gerstein and Sullivan: “One possible – and worrying – outcome of all this is that generative AI tools will put news outlets out of business, ironically diminishing the supply of content available for AI tools to train on.”

This is our strange new world: Technology needs other technologies to survive. One feeds upon the other. In a new twist, Microsoft struck a $16 billion deal with Constellation Energy to buy 100 percent of power produced by the Three Mile Island power plant, once it restarts.

Three Mile Island became world famous in 1979 for an accident that caused the fuel in one of its reactors to overheat and crumble, triggering a mass evacuation of thousands of residents in the Harrisburg, Pa. area. The stricken reactor was closed permanently, but a second power-producing reactor on the site continued to operate for 40 years until 2019.

Nuclear power

Microsoft wants all the power the nuclear plant can produce for its energy-hungry data centers. Its 20-year agreement with Constellation is supported by a $1 billion government loan to Constellation. The plant is expected to resume producing electricity in 2027.

This signals a resurrection of sorts for nuclear energy in the United States, brought on by new and growing power demands in our highly technological society. A similar nuclear comeback around the world, after two decades of stagnation, was declared by the International Energy Agency.

In another odd twist, both nuclear energy and artificial intelligence have been criticized as potentially disastrous for the human race. The nuclear hazards include atomic bombs and the risks of operating nuclear power plants.

Scientists point out that with risks come benefits.

**********************************************************************************


Freelance Writing Grows


By Casey Bukro

Ethics AdviceLine for Journalists

Freelance writing is a swashbuckling sort of business where practitioners live by their wits and guile.

It’s always been a tough business. But for some it got tougher as the newspaper business, which hired freelancers, took a nosedive. Unemployment among freelance writers rose in 2017, then started dropping in 2020.

Statistics show that freelance writers are a growing force in the media, which employs 65 percent of the freelance writers in the United States.

The U.S. has 12,994 employed freelance reporters.

Vanishing newsrooms

Newsrooms once were filled with bustling reporters, but those days are vanishing.

Since 2005, more than 3,300 newspapers have closed in the United States. Newsroom jobs have fallen by 26 percent since 2008 in the wake of staff cuts.

Although digital media employment is up, the United States has far fewer journalists today than before.

Remaining news outlets have fewer in-house reporters, but they still need stories. To fill that gap, many hire freelancers, some of them former newspaper journalists. Writing remotely also fits the new media landscape.

Calamity

Sometimes one person’s calamity is another person’s opportunity.

That’s where freelance writers come in. They fill the need for writers, and like writers everywhere, they are likely to encounter ethics issues.

A Colorado freelance writer came to the Ethics AdviceLine for Journalists to help her untangle a problem involving a news source and two national media outlets that employ her.

The freelancer was working on a story for one national media outlet when she discovered that her source had offered the same story to a rival national outlet, which also employs her.

Promises

The freelancer believes the source is trying to be helpful in getting the story out, but does not know how the journalism business works. The freelancer had promised to write the story for one media outlet, but not for the rival outlet.

Hugh Miller, the AdviceLine advisor in this case, asked the freelancer how she would approach the issue ethically herself, unprompted by the advisor.

AdviceLine does not tell journalists what they should do. Instead, AdviceLine advisors help journalists with a troubling ethics issue to arrive at their own conclusions.

Responsibilities

“I asked her to focus on HER responsibilities and courses of action, not those of her editors” at the two national news outlets, Miller said.

“She responded quite quickly that she thought it would be best if she contacted both editors and informed them fully of the situation,” including showing both editors the email the source had sent to the rival news outlet without the freelancer’s knowledge.

“In this way neither editor would be left in the dark about how matters stood,” said Miller. She agreed to finish the story she was writing for the first news outlet, and told the rival news outlet that she could not be assigned to write the story for them because it would be a conflict of interest.

Telling the source

The freelancer decided against telling the source who caused this conflict anything about the discussion with the editors. The editor at the rival news outlet was free to talk about the case further with the source.

“She thought this approach would allow her to most fully discharge her responsibilities to both of her competing employers,” said Miller. “I told her I had little to add to her analysis, and that I thought it a good one, ethically — however, in practical terms, the editors then dealt with the matter. She was relieved to have had the chance to talk it out.”

Having a place to call and talk is one of the benefits of the Ethics AdviceLine for Journalists. Journalists with an ethics dilemma often have a hunch about how to solve the problem, but they want to know if the hunch is correct, as it was in this case.

**********************************************************************************


Keep Words Small For Big Ideas

Keep words small for big ideas: Merrill Perlman notes a trend toward journalists using big words to “sound smart.”

“But a journalist’s job is to inform,” writes Perlman, “and information will not come through if the audience doesn’t understand the words.”

Rather than sending readers to a dictionary, “a writer wants to keep readers reading, to keep them engaged in our stories.”