Tom Kochuyt
May 5, 2023


Hi Dan,

Thanks for continuing this conversation.

Your comments and remarks help me to clarify my own thoughts on the subject, which is always a good thing in my book.

With regard to expectations, I'd like to clarify my position a bit more.

As far as I am concerned, one can expect anything one wants to expect. But expectations do not live in a void; they exist in a context.

Taking that context into account, one can 'rank' those expectations according to the likelihood of each being met. We also have expectations about whether our expectations will be met; let's call them meta-expectations.

In other words, one can expect a system to do what it was designed/built/trained to do, and it is very likely the system does exactly that. So the meta-expectation is that the expectation will be met. It would be surprising if the expectation was not met.

We can also expect a system to do something it was not designed/built/trained to do. The likelihood it actually does what you expect is low, however. Here the meta-expectation is that the expectation will not be met. It would be (somewhat) surprising (unexpected) if the system actually met your expectation.

Such 'surprise' should trigger a critical reflection about the event itself, about how one interprets it, and about one's related assumptions & beliefs.

Such reflection may lead one to adjust one's beliefs if the evidence is strong enough, or to adjust one's interpretation, or to look for additional evidence supporting or refuting the hypothesis, etc.

Slow thinking in action, as it were ;^).

Which brings us quite naturally to "Thinking, Fast and Slow".

Yes, the capability of GPT (or rather LLMs) seems to resemble, to a certain extent, our Type-1 capability. In a way, we are also nothing more than 'pattern recognition & response generation engines'.

I'm not inclined to assess its current performance as 'very creative' or strong in 'lateral thinking', though.

The responses LLMs generate are continuations of the prompt you give them. In a way, prompting is nothing more than priming the LLM.
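To make that concrete, here is a minimal sketch (assuming the Hugging Face transformers library, with GPT-2 as a small stand-in model; the prompt is just an illustrative example) showing that the generated text is literally the prompt plus a continuation:

```python
# Minimal sketch: an LLM's output is a continuation of the prompt text.
# Assumes the Hugging Face `transformers` library and GPT-2 as a stand-in model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Shall I compare thee to a summer's day?"
result = generator(prompt, max_new_tokens=30, do_sample=True)

# The returned text starts with the prompt itself; the model simply keeps
# predicting the next token given what came before -- priming, in effect.
print(result[0]["generated_text"])
```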

That an LLM can write, e.g., a sonnet (or a haiku or another type of poem) when asked to write one is by itself not evidence of creativity.

Anyone can write a sonnet if they know the 'rules' describing what a sonnet must look like.

So all this shows is that the LLM has learned the pattern of a sonnet (or haiku, or...).

The only evidence of creativity I've seen is that LLMs are perfectly capable of making up facts. Whether that is the type of creativity we should be happy about... well...

I have seen little to no evidence in favor of 'lateral thinking' (unless you count making up facts as evidence of lateral thinking).

If I remember correctly (it's been a while since I read Kahneman's book, so I could be wrong here), our Type-1 capability seems to rely on multiple processes (systems?) which collaborate and compete at the same time to get their response selected.

This implies that, alongside multiple parallel pattern recognition & response generation engines, we also seem to have some kind of (unconscious) selection engine (or maybe engines).

As far as I know, this is not the architecture of GPT and the likes (but as OpenAI and peers are quite secretive about the actual architecture, who knows...).

So, are LLMs at Type-1 level?

They show some attributes of Type-1 capability, but there is still some way to go, I think.

Conscious step-by-step thinking does not describe the full extent of Type-2 (imho). Yes, it is that, but it is also the capability to consciously reflect on / assess whatever the Type-1 layer throws up.

Which is not the same as what that (unconscious) selection engine I mentioned earlier does. I'm sure we all have experience of saying things without consciously realizing/knowing what we are saying (I know I have lots of that type of experience).

Current LLM-based systems are not even close to Type-2.

Will they ever be? Who knows.

Are LLMs all we need to get there? I don't think so.

Are they a key piece of the puzzle? Could be, time will tell...

In the meantime, let us remain vigilant and not mistake our expectations (or savvy marketing) for reality; let us keep looking at the evidence as objectively as possible.

I realize this is not easy: it takes conscious effort and practice, not everyone is equally inclined to do it, our capacity for it is limited, and even then we may still make mistakes or come to wrong conclusions.

Which may be where the real challenge posed by LLMs (and other AI models/systems) lies. I think the 'danger' of AI is not per se that such systems will become smarter than us, but that we may simply not be able to keep up with the speed and volume at which such systems can produce sentences/images/sounds/... which look plausible but bear no relation at all to any reality or fact.

This might overwhelm us, leaving little or no room/time to exercise our Type-2 capability, much as a simple brute-force DoS attack brings down a server.

Let me be clear: I think being pessimistic about this does not help. Thinking/hoping the future will not happen is unrealistic. No one knows what the future will turn out to be, but one can be certain that, in one way or another, the current generation of AI will be further improved upon.

Let us think about if and how we can prepare ourselves and our society for dealing with that.

Let us start with exercising our Type-2 capability.


