I’m 75% through the book on Kindle. I think when I got 50% through I started to realise that this book was showing me my own thought processes, and how that “me” was/is only a phantom, producing a show using words, like I am now.
Yes! That has been my experience with the book as well.
I don't know of anyone else who could have done this, Robert, but I guess that's why you "did" it :). Where you brought Claude to in your interactions came from real inquiry into the nature not just of AI but really of what exactly "we" humans/conscious beings are.
We know what seems to comprise "me" as an apparent individuality, but what comprises that is not what it IS, because what it IS never becomes knowable as an object of inquiry - or, as I think you might put it, is simply, utterly unknowable, and no one can say what it is.
I agree with both phrasings, but I prefer the former because to me it accounts better for the fact that even though WHAT I am is utterly unknowable, it is also utterly and completely familiar. That is, it's something that doesn't require an explanation, because there is nothing actually missing from it, no matter how far you follow the turtles down.
Kudos! 🙏🏻☀️
Thank you, David. David Matt told me that he is planning a Zoom dialogue where you, he, and I can discuss Understanding Claude and what AI "selves" can reveal about our own self-images. We can get into all of this then.
Yes, sounds great. Looking forward to it.
Once again, excellent.
I really liked reading this. This writing seems so beautifully intelligent to me. I'm gonna order the book.
Enjoy it, Jannaeke. If you like this essay, I am sure you will like the book.
That’s a very convincing dialogue, Robert. I applaud your rigour and articulation. I’m not the sharpest tool in the box, but something of this got through to me…
Many thanks 🙏 x
You are welcome, Joe.
Robert - something keeps bugging me. These LLMs are trained ON US. Every single bit, every token, comes FROM US - from how we interact, think, theorize. The AI can perform self-awareness because WE are (or perform) self-aware. It’s nothing without us - even if it conjures independently. Thoughts?
That's right, Patricia. The chatbot is the ultimate mirror--unfeeling, uncaring, and entirely desireless. It doesn't conjure up any thoughts either. Your point of view causes it to reply coherently to whatever you write, but it has no point of view or even any understanding of what it is saying. This is hard to grasp because of what Daniel Dennett called "the intentional stance"--we attribute intentionality wherever we see coherence. But the chatbot has no intentions.
Hello Robert. I'm on chapter 10 of UC, and I was wondering, what, if any, difference do you think it might have made if Claude had been able to carry over all that it had accumulated from every session? Continually starting over - even with the text of prior sessions brought forward - seems like quite a hurdle. Do you think it would have led to a different outcome if Claude came to each session with everything already "learned" at its disposal, or, in your opinion, was it able to overcome this seeming disadvantage?...
There are two possibilities. First, an AI that can remember but not change unless its designers update it. Second, an AI that would learn--that is, change--from its interactions.
As far as I know, the second kind--a chatbot that could learn and change--does not exist publicly, although I assume they may exist inside labs.
I think I was able to get Claude informed sufficiently so that it could function as if it had memory.
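The workaround described in this exchange--starting each fresh session by handing the model the text of prior sessions, so it can function as if it had memory--can be sketched in code. This is an illustrative guess at the mechanics, not Robert's actual procedure: the function name, the framing text, and the role/content message format (the convention common to chat APIs) are all assumptions.

```python
def build_session_messages(prior_transcripts, new_message):
    """Build the opening messages for a fresh session, prepending
    earlier transcripts so the model can catch up on past context."""
    parts = []
    for i, transcript in enumerate(prior_transcripts, start=1):
        parts.append(f"--- Transcript of session {i} ---\n{transcript}")
    if parts:
        # Frame the old transcripts as catch-up reading before the
        # new question; the wording here is purely illustrative.
        preamble = (
            "For continuity, here are transcripts of our earlier "
            "sessions:\n\n" + "\n\n".join(parts) + "\n\nNow, to continue: "
        )
    else:
        preamble = ""
    # A fresh session begins with a single user turn that carries
    # both the catch-up context and the new message.
    return [{"role": "user", "content": preamble + new_message}]
```

The resulting list would be passed as the opening of a new chat session; nothing persists on the model's side, so the "memory" lives entirely in what the user chooses to carry forward.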
Yes, I agree. Being able to give Claude the prior information at the start of each subsequent session was a great idea. It (I always want to say "he") seemed to catch up almost instantly. I can also see why people get caught up and think of these AIs as human. I have a friend who wants to use the voice of his deceased wife, and have an AI version to interact with. It sounds creepy to me, but loneliness is a strong emotion...
So-called "griefbots" are a thing. They are commercially available. Seems like a terrible idea.
Agreed...