
Privacy in a Pandemic

Image: http://www.unothegateway.com

By Casey Bukro

Ethics AdviceLine for Journalists

The Covid-19 pandemic commanded the world’s attention, straining medical resources and testing the media’s competence to understand and accurately report such an unprecedented event. 

As often happens in major events, journalists try to tell the story through individuals. They “humanize” it by describing the suffering of patients and the brave attempts of doctors and nurses to treat a highly communicable disease that struck down caregivers as well.

The death toll was one of the highest in pandemic history. The World Health Organization reports 7 million coronavirus deaths worldwide from Dec. 31, 2019, to Feb. 4, 2024. With 1.2 million deaths, the United States had more Covid-19 casualties than any other nation, despite having one of the most advanced health care systems in the world. Brazil was next with 702,000 deaths, followed by India with 533,500.

A horrifying story

It was a dramatic and horrifying story. And one that tested the ethical conduct of journalists. Although their intentions were good, did some of them go too far?

A British Broadcasting Corporation reporter based in Ho Chi Minh City contacted the AdviceLine to ask: “Should journalists enter an operating room where doctors are rescuing a critical patient just to have a good story?” Doctors had consented to a story, with photos, in a hospital in Vietnam. But did their actions “undermine the patient’s privacy?”

The BBC reporter said the patient, an airline pilot, drew national attention because his case was considered so rare in its severity that “every minute detail of his recovery was reported in national newspapers and on TV news bulletins.”

Patient privacy

The case raises questions about a patient’s privacy rights and about how much the public needs to know in a global public health crisis.

The AdviceLine adviser in this case was Joseph Mathewson, who teaches journalism law and ethics at Northwestern University’s Medill School of Journalism, Media & Integrated Marketing Communications.

Mathewson first turned to BBC editorial guidelines on privacy, which state: “We must be able to demonstrate why an infringement of privacy is justified, and, when using the public interest to justify an infringement, consideration should be given to proportionality; the greater the intrusion, the greater the public interest required to justify it.”

Guidelines

The guidelines went on to say: “We must be able to justify an infringement of an individual’s privacy without their consent by demonstrating that the intrusion is outweighed by the public interest…. We must balance the public interest in the full and accurate reporting of stories involving human suffering and distress with an individual’s privacy and respect for their human dignity.”

In this case, it was not known if the patient consented to be interviewed and photographed. Without consent, said Mathewson, “the journalist then needs to weigh the public interest in that infringement to determine whether it was warranted.”

Broadcasting code

The United Kingdom also has a broadcasting code with similar restrictions that take public interest into account, adding: “Examples of public interest would include revealing or detecting crime, protecting public health or safety, exposing misleading claims made by individuals or organizations or disclosing incompetence that affects the public.”

Mathewson observed that the many stories written about the patient probably identified him to some degree. “I can’t help wondering what was in the many previous stories about him,” he told the BBC reporter.

If previous stories, done without his consent, had identified the patient and his employer, “the ethics analysis might be different,” said Mathewson.

*************************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.

AI Born with Warnings

Image: http://www.researchgate.net

By Casey Bukro

Ethics AdviceLine for Journalists

Like nuclear power, artificial intelligence is described as a threat to humanity.

One difference is that the atomic bomb was intentionally invented as a weapon of mass destruction.

For some, artificial intelligence (AI) seems more like a technology that stealthily places a suffocating pillow over the face of sleeping humanity, causing extinction. AI development could lead to machines that think for themselves, and therein lies the problem.

Warnings sounded

Warnings are sounded repeatedly, most recently by the Bletchley Declaration on Artificial Intelligence Safety, signed at a summit held Nov. 1-2, 2023, as part of a new global effort to unlock the benefits of the new technology while ensuring it remains safe.

At the two-day summit in England, 28 governments, including the United States, the United Kingdom, the European Union and China, signed the declaration acknowledging the potentially catastrophic risks posed by artificial intelligence.

The warning seems well-timed, since 2024 is expected to be a transformative year for AI. It is the year, predicts The Economist magazine, that “generative AI will go mainstream.”

Year of experimentation

Large companies spent much of 2023 experimenting with the new technology, while venture-capital investors poured some $36 billion into it. That laid the foundation for what is expected next.

“In 2024 expect companies outside the technology sector to start adopting generative AI with the aim of cutting costs and boosting productivity,” The Economist, a Britain-based publication, predicted.

For some, this is unsettling.

Business leaders, technologists and AI experts are divided on whether the technology will serve as a “renaissance” for humanity or the source of its downfall, according to Fortune Magazine.

At a summit for chief executive officers in June, 42 percent of the executives surveyed said they believe AI “has the potential to destroy humanity within the next five to 10 years.” Fortune added that one AI “godfather” considered such an existential threat “preposterously ridiculous.”

Science fiction

The Washington Post reported similar findings: “Prominent tech leaders are warning that artificial intelligence would take over. Other researchers and executives say that’s science fiction.”

Why should we fear AI?

Among the scenarios postulated is that self-governing AI robots designed to tend to human needs might decide that extermination is the most logical solution to ending human tendencies to wage war. An autonomous machine might think humans are routinely killing themselves in vast numbers anyway. To end such suffering, the machine might decide to copy human behavior. Destroy them for their own good.

Putting a humorous spin on it, a cartoon shows a robot telling a man: “The good news is I have discovered inefficiencies. The bad news is that you’re one of them.”

A conundrum

At the root of this conundrum is the difficulty of trying to think like the AI robots of the future.

At the British AI safety summit at Bletchley Park, tech billionaire and Tesla CEO Elon Musk took a stab at describing the AI future.

“We should be quite concerned” about Terminator-style humanoid robots that “can follow you anywhere. If a robot can follow you anywhere, what if they get a software update one day, and they’re not so friendly anymore?”

Musk added: “There will come a point where no job is needed – you can have a job if you want for personal satisfaction.” He believes one of the challenges of the future will be how to find meaning in life in a world where jobs are unnecessary. In that way, AI will be “the most disruptive force in history.”

Musk made the remarks while being interviewed by British Prime Minister Rishi Sunak, who said AI technology could pose a risk “on a scale like pandemics and nuclear war.” That is why, said Sunak, global leaders have “a responsibility to act to take the steps to protect people.”

Full public disclosure

Nuclear power was unleashed upon the world largely in wartime secrecy. Artificial intelligence is different in that it appears to be getting full disclosure through international public meetings while still in its infancy. The concept is so new that the Associated Press added “generative artificial intelligence” and 10 key AI terms to its stylebook on Aug. 17, 2023.

The role of journalists has never been more important. They have the responsibility to “boldly tell the story of the diversity and magnitude of the human experience,” according to the Society of Professional Journalists code of ethics. And that includes keeping an eye on emerging technology.

The challenge of informing the public of mind-boggling AI technology, which could decide the future welfare of human populations, comes at a tumultuous time in world history.

Journalists already are covering two major wars – one between Ukraine and Russia, and the other between Israel and Hamas. The coming U.S. presidential election finds the country politically fragmented and violently divided.

Weakened mass media

These challenges to keep the public informed about what affects their lives come at a time when U.S. mass media are weakened by downsizing and staff cuts. The Medill School of Journalism reports that since 2005, the country has lost more than one-fourth of its newspapers and is on track to lose a third by 2025.

Now artificial intelligence must be added to the issues demanding journalism’s attention. This is no simple story, like covering fires or the police beat. Artificial intelligence is a story that will require reportorial skill spanning business, economics, the environment, health care and government regulation. And it must be done ethically.

It is a challenge already recognized by the International Consortium of Investigative Journalists (ICIJ), which joined with 16 journalism organizations from around the world to forge a landmark ethical framework for covering the transformative technology.

Paris Charter

The Paris Charter on AI in Journalism, which provides guidelines for responsible journalism practices, was finalized in November during the Paris Peace Forum.

“The fast evolution of artificial intelligence presents new challenges and opportunities,” said Gerard Ryle, ICIJ executive director. “It has unlocked innovative avenues for analyzing data and conducting investigations. But we know that unethical use of these technologies can compromise the very integrity of news.”

The 10-point charter states: “The social role of journalism and media outlets – serving as trustworthy intermediaries for society and individuals – is a cornerstone of democracy and enhances the right to information for all.” Artificial intelligence can assist media in fulfilling their roles, says the charter, “but only if they are used transparently, fairly and responsibly in an editorial environment that staunchly upholds journalistic ethics.”

Among the 10 principles, media outlets are told “they are liable and accountable for every piece of content they publish.” Human decision-making must remain central to long-term strategies and daily editorial choices. Media outlets also must guarantee the authenticity of published content.

“As essential guardians of the right to information, journalists, media outlets and journalism support groups should play an active role in the governance of AI systems,” the Paris Charter states.


Automated Journalism

Automated journalism: Newsrooms have always adapted to new technologies like artificial intelligence, writes Nicholas Diakopoulos.

“Reporting, listening, responding and pushing back, negotiating with sources, and then having the creativity to put it together — AI can do none of these indispensable journalistic tasks,” he writes.


Robot Journalism Ethical Checklist

Robot journalism ethical checklist: As more media organizations deploy artificial intelligence, writes Tom Kent, “we need to keep a focus on the ethics and quality of robot news writing.”

Kent’s checklist touches on the accuracy of underlying data, the risk of automation producing thousands of erroneous stories, and pitfalls like having to defend a robot-written story.


Defining Free Speech For Robots

Defining free speech for robots: Jared Schroeder reports that free expression rights for artificial intelligence communicators may push the Supreme Court to define who is a journalist.

“Courts will soon have to explore whether AI communicators have rights as publishers — and whether a bot can be entitled to journalist protections,” he writes. This requires us to identify what is human about journalism and what is fundamental about it.

Artificial Intelligence and Ethical Journalism

Robots can write, but are they ethical?

Paul Chadwick writes about how artificial intelligence could damage public trust in journalism.

“For the time being, (ethics) codes could simply require that when AI is used the journalists turn their minds to whether the process overall has been compatible with fundamental human values,” he writes.