Saturday, March 18, 2023
Banksy's Art Amid Ukraine's Rubble
"The painting is one of seven that turned up in Ukraine in November 2022 – two in Kyiv and five in towns that had lived through the hell of Russian occupation: Irpin, Borodyanka, Hostomel, and Horenka. The surreal compositions painted on the walls of the ruined houses depict subjects such as gymnasts, and old man taking a bath, and a woman in a gas mask holding a fire extinguisher.
"Because the figures look like ghosts among the ruins of once-prosperous suburbs, there was a chance the British artist's visual jokes would offend people who had lost their houses and loved one and had faced intimidation, torture, and worse during the occupation. But the murals' effect was the opposite: Ukraine fell instantly in love with Banksy."
Tuesday, March 7, 2023
Is ChatGPT Friend or Foe?
Equal parts questions and answers, the trio endeavors to explain what ChatGPT's underlying technology (a Generative Pre-trained Transformer, hence the GPT) is and how it works, while examining the much larger issue of its potential to influence, if not determine, how we learn, think, and act.
Selected excerpts:
"Sophisticated AI methods produce results without explaining why or how their process works. The GPT computer is prompted by a query from a human. The learning machine answers in literate text within seconds. It is able to do so because it has pregenerated representations of the vast data on which it was trained. Because the process by which it created those representations was developed by machine learning that reflects patterns and connections across vast amounts of text, the precise sources and reasons for any one representation's particular features remain unknown. By what process the learning machine stores its knowledge, distills it, and retrieves it is similarly unknown. Whether the process will ever be discovered, the mystery associated with machine learning will challenge human cognition for the indefinite future."
"What happens if this technology cannot be completely controlled? What if there will always be ways to generate falsehoods, false pictures, and fake videos, and people will never learn to disbelieve what they see and hear? Humans are taught from birth to believe what they see and hear, and that may well no longer be true as a result of generative AI. Even if the big platforms, by custom and regulation, work hard to mark and sort bad content, we know that content once seen cannot be unseen. The ability to manage and control globally distributed content fully is a serious and unsolved problem."
I've asked ChatGPT a number of questions, some with well-understood answers and others that remain matters of uncertainty and debate. The Q&A below suggests the limitations and risks of automation bias.