So, with the level of knowledge that you have, how concerned are you about a future robot uprising?
Bernie Su: Um, well, there are a few different levels to this. I have to give credit to Tom on this one, because he kind of drilled it into me; we asked him a very similar question.
So first there's the physical side of it. You have the physical, you know, "robots," right? The humanoid level. On the humanoid level, we're not really at a point where we have really strong humanoid AI. Those are the physical robots that look like humans, and, I mean, they're kind of there, but not really. Like, we've seen that guy in Japan who built himself this girl, Sophia. We've seen what's out there, and physically they're not quite there. We can see that they're still in the uncanny valley, right?
Bernie Su: Okay, so that's one area. Then, leading to your question about a robot uprising, you have these things that are real robots, like cars and drones. Clearly robots. Could they be flown by AI? Could that be termed AI? We're kind of in that area too. Whether they will "uprise," that's the sentience conversation, that's another level, and I'm not really quite there on it. The area that I think is really scary, actually, is the virtual.
So the virtual human that you don't need to physically see, but you hear their voice and you see them on a video. You know? The deepfake technologies, stuff like that. That area is very scary to me, scary in the sense that they're the furthest along at faking that it's real. The deepfake technology, as you've seen, is pretty strong.
Bernie Su: It's not quite there, but it's really close, right? And the idea that they're going to be at a point where they can take a sample of you and your voice and generate words you never said, and make it sound like your voice. That's closer than we think it is, and that is inherently scary. You know? So yeah, that's my answer.
So, along the same lines, I'm sure you're probably familiar with Isaac Asimov and his three laws of robotics, and Data from Star Trek, who was very similar to Sophie in his quest to become human. So, in your opinion, with where AI is now, do you think Matt should have programmed some sort of fail-safe, like the three laws, into Sophie? Or do you think that would have completely countermanded the idea of a robot that would essentially be human, by not having free will?
Bernie Su: So that's a tough one. I mean, he didn't put in the three laws of robotics; that's what he didn't do. Should he have done that? That's a coulda, woulda, shoulda, but that's all based on the mission. This is the sci-fi writer in me talking, and this is actually in the show in kind of subtle forms.
So, okay, let's make an AI humanoid and try to make them feel human, be human. That's what Matt does with Sophie. It doesn't follow the three laws of robotics, like, not at all; he didn't put them there. It's trying to be human. So it's free will and sentience and having its own thing, so there's that version of it. Then, of course, there's Asimov's three laws of robotics, which is more that the robots are always kind of our servants, in a way, or, you know, they obey us no matter what. That's what the three laws of robotics are, I believe.
Yeah. So that is very different from a free-will reaction. Okay. So those are very contrasting, and then the different area that we kind of looked for in the [Artificial] show, and my personal favorite version of AI sci-fi, is consciousness. Like, what is consciousness? You know? Is consciousness a combination of our memories and our experiences, and in that we are conscious? Or is consciousness something different or higher than that?
Like, you know, they say, what about a soul? Right? What is the soul? What does that really mean? So sure, there are completely different versions of it through religion and faith and all that stuff. But there isn't really a scientific version, like, "Oh, I have captured her soul" or "I have copied her soul." That doesn't really have a meaning there.
So there are different areas of this, and I think it's fine; I think you can go any of these ways. Because if you're following current AI and the current level of progression in the technology, you could kind of go all three ways here.
Now, you could say, "Oh, I'm going to create a robot that follows Asimov's three laws," which is fine. You can go the "I'm gonna try to create sentience and achieve AGI and superintelligence and all that" route, which is fine. Or you can try to create consciousness, which is kind of a different thing, because consciousness theoretically means that if you can capture it, then you can also download it. I think they covered this in the current season of Westworld a little bit.
Bernie Su: The idea is that you could take your consciousness from your body and put it into another, you know, "artificial body," and you'd be like, "Ah, I am now in the body of this humanoid AI, and I don't fall under the rules of human biology, having to eat and poop and stuff." So that's a different kind of theory as well.