Tuesday, March 7, 2023

Is ChatGPT Friend or Foe?

Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher deconstruct AI in an opinion piece in The Wall Street Journal: The Challenge to Humanity from ChatGPT.

Equal parts questions and answers, the trio endeavors to explain what ChatGPT (a Generative Pre-trained Transformer) is and how it works, while examining the much larger issue of its potential to influence, if not determine, how we learn, think, and act.

Selected excerpts:

"Sophisticated AI methods produce results without explaining why or how their process works. The GPT computer is prompted by a query from a human. The learning machine answers in literate text within seconds. It is able to do so because it has pregenerated representations of the vast data on which it was trained. Because the process by which it created those representations was developed by machine learning that reflects patterns and connections across vast amounts of text, the precise sources and reasons for any one representation's particular features remain unknown. By what process the learning machine stores its knowledge, distills it, and retrieves it is similarly unknown. Whether the process will ever be discovered, the mystery associated with machine learning will challenge human cognition for the indefinite future."

"What happens if this technology cannot be completely controlled? What if there will always be ways to generate falsehoods, false pictures, and fake videos, and people will never learn to disbelieve what they see and hear? Humans are taught from birth to believe what they see and hear, and that may well no longer be true as a result of generative AI. Even if the big platforms, by custom and regulation, work hard to mark and sort bad content, we know that content once seen cannot be unseen. The ability to manage and control globally distributed content fully is a serious and unsolved problem."

I've asked ChatGPT a number of questions, some already well understood and others a matter of uncertainty and debate. The Q&A below suggests the limitations and risks of automation bias.
