We come to bury ChatGPT, not to praise it.

February 8, 2023

Unsuspecting users who’ve been conditioned by Siri and Alexa assume that the smooth-talking ChatGPT is somehow tapping into reliable sources of knowledge, but it can only draw on the (admittedly vast) portion of the internet it ingested at training time. Try asking Google’s BERT model about Covid, or ChatGPT about the latest Russian attacks on Ukraine. Ironically, these models are unable to cite their own sources, even in instances where it’s obvious they’re plagiarising their training data.

The nature of ChatGPT as a bullshit generator makes it harmful, and it becomes more harmful the more optimised it becomes. If it produces plausible articles or computer code, the inevitable hallucinations become harder to spot. If a language model suckers us into trusting it, it has achieved the industry’s holy grail of ‘trustworthy AI’. The problem is that trusting any form of machine learning is what leads to a single mother having her front door kicked open by social security officials because a predictive algorithm has fingered her as a probable fraudster, alongside many other instances of algorithmic violence.
