Tom Kochuyt
Apr 26, 2023


Thanks for your thoughtful reply. Allow me to respond to your arguments.

The fact that the sentences produced by ChatGPT exhibit a 'technical coherence' (meaning they are grammatically sound and human-like) of a level not seen before does not mean it is more intelligent than that good old spellchecker of yours. It only means ChatGPT is (much) better at creating technically coherent sentences than the spellchecker is. By the way, it also consumes more energy than that spellchecker...

The same goes for StableDiffusion and similar tools. Yes, the technical quality of the illustrations is high, and on average they look 'nice'. Does that mean what is created is 'art'?
I know, the question 'what is art?' is even harder to answer than 'what is intelligence?'. But as someone once said, 'I recognize it when I see it'. So far, I don't recognize it. That said, there is a lot of work (both old masters and contemporary pieces) in museums that I do not recognize as art either (but maybe that's just me...).

And yes, there are examples of ChatGPT solving complex bugs. One could consider this evidence that it 'understands'...
However, there are also examples of ChatGPT writing erroneous code (with errors that are pretty easy to detect, at that) and failing to correct it even after multiple prompts. Which one could consider evidence that it does not understand and just 'blindly responds'.

All that being said... I'm more in the corner of the pragmatic realists.

While the current approach to creating AI systems has resulted in unprecedented advances in the quality of the outputs produced (and is marketed in a very savvy way), the resulting systems are not 'Intelligent'. It remains to be seen whether this approach will lead to 'Intelligence' in the long term.

Furthermore, although current (and near-future) performance levels create opportunities for application in many domains, we should also consider the risks attached to such applications.
The most important (in my mind at least) is that, on average, people tend to attach meaning to and place trust in things simply because they 'sound/look convincing', without questioning whether those things are actually a correct/fair representation of reality.
Now we have systems that can create very convincing-sounding/looking outputs at high velocity, but with no guarantee whatsoever that those outputs are a correct/fair representation of reality.

Ah well, time will tell.
