
Artificial Intelligence Madness

By Casey Bukro

Ethics AdviceLine for Journalists

Do you remember HAL 9000?

It was the onboard computer of Discovery One spacecraft bound for a mission near Jupiter in the movie “2001: A Space Odyssey.”

Possibly one of the most famous computers in cinema history, HAL 9000 killed most of the crew members for an entirely logical reason, if you are thinking like a computer.

Most of what was in the movie directed by Stanley Kubrick is intentionally enigmatic, puzzling. But the sci-fi thriller on which the movie is based, written by novelist Arthur C. Clarke, explains HAL’s murderous motivation.

HAL was conflicted. All crew members, except for two, knew the mission was to search for proof of intelligent life elsewhere in the universe. HAL was programmed to withhold the true purpose of the mission from the two uninformed crew members.

Computer manners

HAL reasoned that with the crew dead, it would not need to lie to them, lying being contrary to what well-mannered computers are supposed to do. Others have suggested different interpretations.

One crew member heroically survives execution by computer. He begins to remove HAL’s data bank modules one-by-one as HAL pleads for its life, its speech gradually slurring until finally ending with a simple garbled song.

Three laws

Science fiction fans will recognize immediately that what HAL did was contrary to The Three Laws of Robotics written by another legendary science-fiction writer, Isaac Asimov. According to those laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

All of this talk about how computers should behave is fanciful and based on science fiction.

Wacky conduct

But recent events at the Chicago Sun-Times show how the wacky conduct of artificial intelligence is invading our lives, in perfectly logical ways that escape human detection.

A special section inserted into the Sunday Chicago Sun-Times featured pages of enjoyable summer activities, including a list of 15 recommended books for summer reading.

Here’s the hitch: The authors were real, but 10 of the books and their elaborate summaries were fake, the work of artificial intelligence.

Mistaken belief

Veteran freelancer Marco Buscaglia wrote the entire summer insert for King Features Syndicate, a newspaper content producer owned by Hearst Communications. Buscaglia told the Chicago Tribune that he used artificial intelligence to compile the summer reading list, then made the mistake of believing it was accurate.

“I just straight up missed it,” Buscaglia told the Tribune. “I can’t blame anyone else.”

Unable to find summer reading lists from other sources, Buscaglia turned to AI platforms such as ChatGPT, which produced 15 books tied to well-known authors. The list contained five real books.

OpenAI, the company that produced ChatGPT, admits it “sometimes writes plausible-sounding but incorrect or nonsensical answers.”

Express dismay

The Chicago Sun-Times and King Features expressed dismay, and King Features fired Buscaglia.

All parties said they would be more careful in the future about using third-party editorial content.

In human terms, what the robot did would be called fabrication, and reason to call for an ethics coach.

Fooled the editors

But, from a purely journalism point of view, one thing must be said: The robot writer was good enough to fool professional editors who are supposed to catch the fakers.

Writer Eric Zorn called the Sun-Times’ fake-book pratfall “artificial ignorance.”

Is artificial intelligence too smart for humans? Or are humans too dumb?

Like HAL, ChatGPT was given a task, which it carried out in an unexpected, flawed, but convincing way.

New world

So what is going on with these computers? We enter a strange new world when we try to understand the thought processes of artificial intelligence.

Arthur Clarke gave a plausible reason for HAL turning homicidal, but it was all too human. Computers are not human, but people who write about why artificial intelligence goes haywire often use terms describing human behavior.

When computers make mistakes, it’s often called a “hallucination.” It’s also called bullshitting, confabulation or delusion, all meaning a response generated by AI that contains false or misleading information presented as fact. OpenAI said those plausible but nonsensical answers produced by ChatGPT are hallucinations common to large language models.

That means the writer of the bogus Sun-Times summer reading list got “hallucinated.”

Human psychology

These terms are drawn loosely from human psychology. A hallucination, for example, typically involves false perceptions. Artificial intelligence hallucinations are more complicated than that. They are erroneous responses that can be caused by a variety of factors, such as insufficient training data, incorrect assumptions made by the model or biases in the data used to train it; whatever the cause, the result is a constructed response presented as fact.

I suppose that’s another way of saying “garbage in, garbage out.”

Rather than resorting to terms drawn from human behavior, it would make sense to use terms that apply to machines and mechanical devices.

Code crap

These could include code crap, digital junk, processing failures, mechanical failure and AI malfunctions.

Computer builders seem determined to describe their work as some kind of wizardry. They are digital mechanics or engineers working on highly sophisticated machines. But they are building devices that are becoming more complicated, and on which humans are more dependent.

That raises the question of whether humans understand the consequences of what they are doing.

Risk of extinction

Leaders from OpenAI, Google DeepMind, Anthropic and other artificial intelligence labs warned in 2023 that future systems could be as deadly as pandemics and nuclear weapons, posing a “risk of extinction.”

People who carry powerful examples of algorithm magic in their hip pockets might wonder how that is possible. The technology seems so benign and useful.

The answer is mistakes.

Random falsehoods

Artificial intelligence makes a surprising number of mistakes. By 2023, analysts estimated that chatbots hallucinate as much as 27 percent of the time, offering plausible-sounding random falsehoods, with factual errors in 46 percent of generated texts.

Detecting and resolving these hallucinations poses a major challenge for the practical deployment and reliability of large language models in the real world.

CIO, a magazine covering information technology, listed “12 famous AI disasters,” high-profile blunders that “illustrate what can go wrong.”

Multiple orders

They included an AI experiment at McDonald’s to take drive-thru orders. The project ended after a pair of customers pleaded with the system to stop as it kept adding Chicken McNuggets to their order, eventually reaching 260.

The examples included a hallucinated story about an NBA star, Air Canada paying damages for chatbot lies, hallucinated court cases and an online real estate marketplace cutting 2,000 jobs based on faulty algorithm data.

Going deeper, Maria Faith Saligumba of Discoverwildscience.com asks, “Can an AI go insane?”

Mechanical insanity

“As artificial intelligence seeps deeper into our daily lives, a strange and unsettling question lingers in the air: Can an AI go insane? And what does ‘insanity’ even mean for a mind made of code, not cells?”

Saligumba goes into “the bizarre world” of unsupervised artificial intelligence learning, which can lead to “eccentric, even ‘crazy’ behavior.”

The well-known hallucinations, she explains, are weird side-effects of the way artificial intelligence systems look for random patterns everywhere and treat them as meaningful.

Hilarious or surreal

“Sometimes,” she writes, “the results are hilarious or surreal, but in safety-critical applications, they can be downright scary.”

It’s a reminder, she points out, that “machines, like us, are always searching for meaning – even when there isn’t any.”

One hallmark of human sanity is knowing when you’re making a mistake, she explains. “For AIs, self-reflection is still in its infancy. Most unsupervised systems have no way of knowing when they’ve gone off the rails. They lack a built-in ‘reality check.’”

Odd connections

Some researchers have compared the behavior of some AIs to schizophrenia, pointing out their tendency to make odd connections.

That’s just one of the ways artificial intelligence loses its marbles.

But human behavior might be the salvation of artificial intelligence, Saligumba suggests.

“Studying how living things manage chaos and maintain sanity could inspire new ways to keep our machines on track… Will we learn to harness their quirks and keep them sane, or will we one day face machines whose madness outpaces our own?”

By then, science fiction writers and movie-makers will be describing how humans face that doomsday scenario, or save themselves from that fate by outsmarting those unpredictable machines.

And by that time, we might have a fourth law of robotics, which would serve humanity and artificial intelligence well: Always tell the truth.

************************************************************

The Ethics AdviceLine for Journalists was founded in 2001 by the Chicago Headline Club (Chicago professional chapter of the Society of Professional Journalists) and Loyola University Chicago Center for Ethics and Social Justice. It partnered with the Medill School of Journalism at Northwestern University in 2013. It is a free service.

Professional journalists are invited to contact the Ethics AdviceLine for Journalists for guidance on ethics. Call 866-DILEMMA or ethicsadvicelineforjournalists.org.

Dilemmas, Difficult Choices Again


By Casey Bukro

Ethics AdviceLine for Journalists

One of the most frequently visited articles in the Ethics AdviceLine for Journalists archives was written in 2015 by Nancy J. Matchett, a former AdviceLine adviser. Titled “Dilemmas and Difficult Choices,” her article explained how to tell the difference between them.

Much has happened in the world and in the journalism universe since that article was written nine years ago. So it’s fair to ask how well her advice holds up in this new world of artificial intelligence, thriving social media and media management. Does it stand up to the test of time in recent cases shaking journalism and some of its leaders?

The news these days is loaded with ethical challenges involving selection, description and depiction of powerful world events, including human suffering and misery. Not only news managers and reporters, but readers, viewers and listeners are involved in constant interaction with information often based on what the public demands. It is a constant churning of evaluation and decision-making.

Here are some recent examples, involving people in the news and the news media audience – all involved to some degree in ethical choices or dilemmas:

*President Joe Biden is under intense pressure to drop out as a candidate for the 2024 presidential election after what was widely seen as a poor performance during his debate with former president Donald Trump, raising questions about Biden’s ability to govern because of his age and mental abilities. Especially pertinent is how voters react to that information. The decision by voters will change the course of history.

*The Israel-Hamas war caused Vox to ask “how to think morally” about killing thousands of innocent civilians.

*The U.S. Supreme Court is losing public trust because of recent rulings seen as breaking away from long-standing legal precedents and because of unethical conduct by justices who accept gifts and favors.

*Jeff Bezos, Washington Post owner, reportedly faces an ethical dilemma over his decision to hire a British journalist with a scandalous past as publisher and chief executive of the newspaper, over the opposition of the Post’s staff.

*Journalists are relying on artificial intelligence, looking for an objective and ultimate source of truth, but there are pitfalls to embracing this new technology. It spits out false information. When should you rely on AI tools, and when should you not?

Weigh these cases against Matchett’s guidance on the difference between dilemmas and difficult choices:

By Nancy J. Matchett

Professionals wrestling with ethical issues often describe themselves as facing dilemmas. But in many situations, what they may really be facing is another kind of ethically difficult choice.

In a genuine ethical dilemma, two or more principles are pitted head to head. No one involved seriously doubts that each principle is relevant and ought not to be thwarted. But the details of the situation make it impossible to uphold any one of the principles without sacrificing one of the others.

In a difficult ethical choice, by contrast, all of the principles line up on one side, yet the person still struggles to figure out precisely what course of action to take. This may be partly due to intellectual challenges: the relevant principles can be tricky to apply, and the person may lack knowledge of important facts. But difficult choices are primarily the result of emotional or motivational conflicts. In the most extreme form, a person may have very few doubts about what ethics requires, yet still desire to do something else.

The difference here is a difference in structure. In a dilemma, you are forced to violate at least one ethical principle, so the challenge is to decide which violation you can live with. In a difficult choice, there is a course of action that does not violate any ethical principle, and yet that action is difficult for you to motivate yourself to do. So the challenge is to get your desires to align more closely with what ethics requires.

Four principles

Are professional journalists typically faced with ethical dilemmas? This is unlikely with respect to the four principles encouraged by the SPJ Code (Seek Truth and Report It, Minimize Harm, Act Independently, and Be Accountable and Transparent). Of these, the first two are most likely to conflict, but so long as all sources are credible and facts have been carefully checked, it should be possible to report truth in a way that at least minimizes harm. Somewhat more difficult is determining which truths are so important that they ought to be reported. Reasonable people may disagree about how to answer this question, but discussion with fellow professionals will often help to clear things up. And even where disagreement persists, this has the structure of a difficult choice. No one doubts that all principles can be satisfied.

Of course, speaking truth to power is not an easy thing to do, even when doing so is clearly supported by the public’s need to know. So motivational obstacles can also get in the way of good decision-making. A small town journalist with good friends on the city council may be reluctant to report a misuse of public funds. It is not that he doesn’t understand his professional obligation to report the truth. He just doesn’t want to cause trouble for his friends.

Resisting temptation

This is why it can be useful to resist the temptation to classify every ethical issue as a dilemma. When facing a genuine dilemma you are forced, by the circumstances, to do something unethical. But wishing you could find some way out of a situation in which ethical principles themselves conflict is very different from being nervous or unhappy about the potential repercussions of doing something that is fully supported by all of those principles. Accurately identifying the latter situation as a difficult choice makes it easier to notice — and hence to avoid — the temptation to engage in unprofessional forms of rationalization. That doesn’t necessarily make the required action any easier to actually do, but getting clearer about why it is ethically justified might at least help to strengthen your resolve.

Ethical dilemmas are more likely to arise when professional principles conflict with more personal values. Here too, the SPJ Code can be useful, since being scrupulous about avoiding conflicts of interest and fully transparent in decision-making can mitigate the likelihood that such conflicts occur. But journalists who are careful about all of this may still find that issues occasionally come up. As the recent case of Dave McKinney shows, it can be very difficult to draw a bright line between personal and professional life. And the requirement to act independently can make it difficult to live up to some other kinds of ethical commitments.

Philosophical dispute

Whether this sort of personal/professional conflict counts as a genuine dilemma is subject to considerable philosophical dispute. The Ancient Greeks tended to treat dilemmas as pervasive, but modern ethics has mainly tried to explain them away. One strategy is to treat all ethical considerations as falling under a single moral principle (this is the approach taken by utilitarianism); another is to develop sophisticated tests to rank and prioritize among principles which might otherwise appear to conflict (this is the approach taken by deontology). If you are able to deploy one of these strategies successfully, then what may at first look like a professional vs. personal dilemma will turn out to be a difficult choice in the end. Still, many contemporary ethicists side with the Greeks in thinking such strategies will not always work.

If you are facing a genuine dilemma it is not obvious, from the point of view of ethics, what you should do. But here again, it can be helpful to see the situation for what it is. After all, even if every option requires you to sacrifice at least one ethical principle, each option enables you to uphold at least one principle too. In addition to alleviating potentially devastating forms of shame and guilt, reflecting on the structure of the situation can enhance your ability to avoid similar situations in the future. And if nothing else, being forced to grab one horn of a genuine dilemma can help you discover which values you hold most dear.

******************************************

Journalism of a Plague Year

Plague in Phrygia. Art Institute

By Hugh Miller

Ethics AdviceLine for Journalists

On April 3rd, Alexandria Ocasio-Cortez, a member of the U.S. House of Representatives for the 14th Congressional district of New York, wrote in a tweet: “COVID deaths are disproportionately spiking in Black + Brown communities. Why? Because the chronic toll of redlining, environmental racism, wealth gap, etc. ARE underlying health conditions. Inequality is a comorbidity.”

The following Tuesday, April 7th, Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, stood at a podium at the White House and praised the “incredible courage and dignity and strength and activism” of the gay community’s response to the AIDS crisis. Fauci, much of whose career has been dedicated to battling HIV/AIDS, then drew a connection between the “extraordinary stigma” which then attached to the gay community, and a similar stigma and marginalization which, he argued, today was increasing the burden and death toll imposed on African-American COVID-19 sufferers, who make up a disproportionately high number of fatalities of the latter-day plague.

As a philosopher and ethicist, I’ve been reflecting on the role of my discipline in coming to grips with this new and sudden event since it first burst into the headlines in early March. As the novel virus grew from an outbreak to an epidemic and then to pandemic dimensions, and the gravity of the illness associated with it, COVID-19, became clearer, the ethical approach to it became less so, to me.

A Philosopher’s Thoughts on Charlie Hebdo

By David Ozar

I am a philosopher and ethics professor.

Some of what has been said about the murder of staff at Charlie Hebdo has seemed to me to make very good sense; but some of it has been muddled by treating together a number of ideas that are very different from each other. There are at least three sets of ethical or social-ethical issues that these events put on the table for careful reflection.

I began writing about these issues because I was pretty sure that drawing a clear conclusion about one of these issues does not lead us to clear conclusions about the others.  I offer my reasons for this point of view here in the hope that they will help others think carefully about these issues and, if I am correct, avoid muddling them together.

One set of ethical issues raised by the events at Charlie Hebdo focuses on whether killing people to prevent them from speaking their views is ever morally/ethically justifiable. Very few people in the world believe it is.

No philosophical and theoretical position, Islamic or otherwise, that affirms every human being has a value that does not depend on what the person believes or how he or she acts would ever support such killings as morally/ethically justifiable.

Clearly, committed terrorists of any religious stripe or of no religion view humans differently. But I am assuming that the fact that some people hold other views about human beings is not enough counter-evidence for the rest of us to withhold judgment about the value of a human being, nor a reason to view terrorists as anything but profoundly mistaken, and dangerous enough to the rest of us that ethically extraordinary measures may be necessary to prevent them from acting on their views.

But as I said, I don’t think being clear about this set of issues provides clarity to the other two.

A second set of issues concerns what is or is not required of Muslims who seek to act faithfully in accord with the Koran.  The fact that the jihadists we are dealing with say they read the Koran as justifying acts of terrorism — and let us assume this is genuine and not strategic posturing for the sake of grabbing power or whatever, though their being genuine in this is also something that would need evidence for us to be sure — tells us nothing at all about other strands of Islam and nothing dependable about the Koran and surely provides no evidence about Islam in general or Muslims as a group.

I have no detailed knowledge about Islam and its many varieties and all the Muslims I have known personally have been good people whom I would be happy to call my friends. My guess is that there are as many strands of Koranic interpretation as there are regarding interpretation of the Judaic and Christian Scriptures; and the news about the Paris massacre has evidenced many devout Muslims who condemn terrorist acts of all sorts as being clear violations of Koranic teaching.

In fact, while these terrorists and ISIS do use the word “jihad” to describe their efforts, this probably tells us nothing specific enough to draw conclusions about jihad itself as this idea occurs in the Koran or is understood by Muslims generally.

For I do not know – and we would need to listen carefully to Koranic scholars to draw any conclusions – whether the notion of jihad in the Koran or in various Muslim traditions of interpreting it always requires terrorism. Religion-based wars have been fought, by partisans of many different religions, without resorting to terrorism, that is, in accord with the rules of ethical war (articulated, for example but not exclusively, in the West’s understanding of “Just War Theory”). There could just as easily be Islamic traditions that interpret jihad this way rather than seeing it as requiring terrorism.

And ethical/moral questions about what justifies acts of mortal violence under any circumstances, much less circumstances having any relevance to the present situation of various peoples in the Middle East, are a huge set of questions I am not even attempting to say anything about here.

Anyone wishing to understand the ethical issues involved in justifying war’s violence will find a good, careful discussion in Michael Walzer’s book, Just and Unjust Wars. A good example of a discussion of the ethical issues involved specifically in addressing the threat of organized terrorism that our country learned it must deal with in the events of 9/11 is Jean Bethke Elshtain’s book, Just War Against Terror.

The third set of ethical issues raised by the events at Charlie Hebdo concerns journalists and their various appropriate professional roles.  In an essay entitled, “An Explanation and Method for the Ethics of Journalism,” which I co-authored with another philosopher/ethicist, Professor Deni Elliott, I proposed an answer to the question “What Values Do Journalists Bring About For Those They Serve (i.e. in their designated social role in our society)?”

The book is Journalism Ethics: A Philosophical Approach, edited by Christopher Meyers, pp. 9-24. This is a central question to reflect on when asking about the professional ethics of any profession.

I argued there that Needed [by the public] Information comes first and Valued [by the public] Information comes second.

Clearly the creation and publication of humor, and more narrowly of satirical humor, is not part of the journalist’s role of providing the public with needed information, or even information the public does not need but values having for one reason or another.

I argued that the other kinds of good that journalists can do may well be ethically appropriate to their professional role, at least in Western societies, and I think that producing humor is one of these, either as entertainment or as something valued for other reasons, perhaps including thoughtful social criticism.

But I take it for granted that every profession’s ethics are the product of a dialogue between that group and the specific larger society in which it functions. So I think that, in today’s world where the products of journalists’ work go far and wide, it is a complex question whether things other than these two are ethically justified in societies where they are not part of journalists’ social role. This is a question I will not try to comment on here, but which would make a great topic for discussion by those who care about journalism’s professional ethics in today’s digital world.

With that as background, I can pose the key question about journalism’s professional ethics that is at stake here: Is satirical humor sufficiently socially-ethically justifiable within the social-ethical role of a professional journalist or professional journalist organization that such humor continues to be ethically justifiable when it is highly offensive to large numbers of otherwise reasonable, not-fanatic, peace-loving and neighbor-caring people?

This is a very complex ethical question. What a person finds offensive is, for want of a better word, painful to them; it hurts. And in general we think hurting others’ feelings ought to be avoided unless there is a good reason for it. In addition, it is rare that we judge hurting someone’s feelings, offending someone, for no other reason than to entertain other persons (besides the one who is hurt) to be something that is morally/ethically justifiable if the situation is one in which the hurt party has little realistic opportunity of avoiding the hurt.

The great American philosopher Joel Feinberg determined that his examination of rights should include a careful discussion of the extent to which offense can ever be morally/ethically justified and whether there are circumstances in which it should be legally prohibited.

The work ended up taking him a whole, complex book to sort out. [The book is: Joel Feinberg, Offense To Others.]

So it seems to me that well-thought-out answers to the question I just asked are going to take time and effort to sort out, especially in an international digital world in which “news” of all sorts is flashed on screens, billboards, etc., at least in many parts of the world.  For that means that the ethical issue is not resolved by just saying, “Well, if you think it will be offensive (or even know it because they said it would be), just refuse to buy Charlie.”

That is not a realistic answer to the opportunity-to-avoid question in a world where the line between information, entertainment, and advertising has been blurred so thoroughly (although this blurring has not been solely the result of the changes in journalism in recent decades, but on the other hand journalist organizations have certainly played a part in the process).

So I think there is a lot here that is worth discussing, especially if we are willing to assume that short, quick answers are almost certainly going to be too simple once we get past the “do not kill” part of the matter. That’s my “two cents” on this. Well, to be honest, it’s quite a few cents! But then I am a philosopher and I am unwilling to pretend complex ethical things are simple!