No, it wasn’t in his explanation of Chaos Theory.
It’s his description of Gell-Mann amnesia:
“You open the newspaper to an article on some subject you know well […] You read the article and see the journalist has absolutely no understanding of either the facts or the issues. […] you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.”
Most of the breathless hype about ChatGPT seems to come from people asserting that it will radically disrupt the work of someone else. When experts evaluate ChatGPT in their own disciplines, the same optimism is missing:
- Musician Nick Cave has received “dozens” of ChatGPT-created songs that attempt to emulate his work. He is not impressed.
- Stack Overflow, a website for programmers discussing code, found ChatGPT answers to be “substantially harmful” (emphasized in the original statement) for users who want correct information.
- Journalists at Futurism looked at CNET's use of ChatGPT for creating news stories, and they found “a series of boneheaded errors.”
- In cybersecurity discussions, observers have pointed out that ChatGPT can’t do the actual work required for a successful attack.
- One app tried using ChatGPT responses in its online mental health services, and the experiment ended because “messages just felt better” when they were written by humans.
When it comes to overblown predictions about ChatGPT’s effects, it’s odd to see credible sources change gears and forget what they know. At least Michael Crichton can explain what’s happening.
TL;DR: Just invoke Betteridge’s law when an article asks you “Can ChatGPT fill in mentorship gaps for Gen Z workers?”