I tried out ChatGPT yesterday, but the results were not very satisfactory. It writes poetry for other people, but when I asked it for a rhymed couplet about apples, it said "Apples, so red, tasty and sweet / apples are a tasty treat", except that the scansion of the second line was much worse. I pointed out that the scansion was off and got one of its long apologies about how it's just a language model and can't be expected to do (whatever you just asked it to do). So I asked for a rhymed couplet about apples in iambic pentameter and got something that was slightly better, but it still didn't scan (and was neither iambic nor pentameter). I got really tired of its long and elaborate excuses for refusing to do what I asked. I wish it had just said something like "ERROR. ERROR. DOES NOT COMPUTE.", which would have been equivalent but briefer and a little bit amusing.

Mark Dominus  1 hour ago
I had a very good interaction with its predecessor in which I asked it to complete a prompt about how there used to be a fifth suit of tarot cards that fell out of use in the 15th century, and this fifth suit was…, and it wrote about how the fifth suit had been birds or ravens. When I asked ChatGPT about the lost fifth tarot suit, all it gave me was longer and longer mixtures of "there is no fifth tarot suit" and "I am just a language model and can't make up stuff about a fifth tarot suit".

Mark Dominus  1 hour ago
I asked it which U.S. president had been most like Mussolini, and it demurred, so I asked which fictional character was like Mussolini, and it claimed the question did not make sense and that it was not reasonable or appropriate to compare fictional characters with historical ones. ERROR. ERROR. DOES NOT COMPUTE. We wrangled about that for a while, and eventually it claimed that this was because fictional characters are not real, and it is therefore not reasonable to ascribe characteristics to them. I reminded it that in the same conversation it had described Scheherazade as resourceful and eloquent, and it went totally off the rails.

Mark Dominus  1 hour ago
I am reminded of the Star Trek episode in which Kirk points out that the space probe has made errors, and that drives it crazy and it explodes.

Mark Dominus  1 hour ago
On factual stuff, it looks good at first, but since it doesn't actually know anything, you can't trust anything it says. It generates correct bullshit, false bullshit, and completely random nonsense in about equal measure. I didn't save my best example, in which I asked it which thing didn't belong among stop signs, rubies, fire engines, broccoli, and blood. It said rubies, because all the others are physical objects but a ruby is a kind of gemstone. When I asked it what color those five things typically are, it got the colors right. But when I tried to probe a little deeper, it dug in and said that rubies were the thing on the list that was a different color, because they can sometimes have a purplish tint, and when I pointed out that it had said broccoli was dark green, it said no, it hadn't said that, broccoli is red.

Mark Dominus  1 hour ago
We spent a lot of time talking about numbers close to 1000. It started here and just got worse.
[image: Ooops.png]

Mark Dominus  1 hour ago
At one point I said "what about fractions?" and it regurgitated a lot of half-remembered nonsense about "fractions" and "fractional numbers". It even knew that a fraction with numerator 1999 and denominator 2 would reduce to 999.5, but it refused to admit that this was a number bigger than 999 and smaller than 1000.

Mark Dominus  1 hour ago
Here's a neat example of factual stuff where it seems promising at first and then goes off the rails and starts spouting nonsense.

[image: should have quit.png]

Mark Dominus  1 hour ago
Finally, I leave you with this:

[image: beep boop.png]