It's a chatbot. You talk to it, and it responds in natural language. That's exactly what it's designed to do - and it does it exceptionally well, far better than any system we've had before.
Faulting it for being untrustworthy just shows most people don't actually understand this tech, even though they claim they do. Like I said before: it's a large language model - not a large knowledge model.
Expecting factual answers from it is like expecting cruise control to steer your car. When you end up in the ditch, it's not because cruise control is some inherently flawed technology with no purpose. It's because you misused the system for something it was never designed to do.
The chatbot isn't the issue here. It's the user treating it like a reliable source of information.
It's a large language model - not a large knowledge model. It gets plenty of stuff right, but that's not because it actually "knows" anything - it's just trained on a massive pile of correct information.
People trash it for the times it gets things wrong, but it should be the other way around. It's honestly amazing how much it gets right when you consider that's not even what it's built to do.
It's like cruise control that turns out to be a surprisingly decent driver too.
I don’t really understand what you mean by suggesting that I have mental imagery I’m unaware of.
I haven't ever claimed such a thing. The point was that when you ask someone about the state of their mind, you're then relying on their report being accurate - with no good way to verify it.
Although in this case, people pointed out that the studies also monitored the visual regions of the brain and whether they lit up.
If someone asked me to visualize an object, I can easily do it. If they then asked whether I can literally see it, I'd say no - but also kind of yes. It's not a photograph I'm viewing in my mind, but there's definitely something there. Both yes and no would be truthful answers to "can I see it?"
Still, there's always a chance that if they could peek inside my mind, they'd find out the thing I report seeing isn't actually there - at least not when compared to someone who really does see it.
I've seen a few videos of this thing in action, and while I like the concept - especially that you can use the same device to mow the lawn too with the lawnmower attachment - it's still quite painful to watch it work.
Especially with snow blowing, it's just so disorganized: driving all over the place and making quite the mess. If I'm dropping 5k on an automatic snow blower, I don't want to have to clean up after it.
The issue with all these studies about people's subjective experiences is that they rely on self-reporting. Just because someone says they have no mental imagery doesn't mean they actually don't. They may simply be unaware of it. After all, how many people spend any significant amount of time learning to pay attention to their own minds? The vast majority don't.
It's a bit like asking people whether they have an optic blind spot in their vision without teaching them how to look for it. Virtually everyone would say they don't, and they'd all be wrong.
when they were getting downvoted to oblivion for their opinion, and holding to it.
As one should. A downvote is not a counter-argument. A single thoughtfully written response is infinitely more likely to change my view than endless downvotes are. Objectively true statements get "downvoted to oblivion" here on a daily basis.
No. Most banks barely have physical locations anywhere, while pretty much every gas station I can think of has been there for as long as I can remember.
As solar and wind power become more popular, so does green hydrogen, because it's a good place to dump excess production when no other storage is available.
For no particular reason, really. I don't think Proton was offering a VPN service when I switched to their email, so I used a few different ones, eventually landed on Mullvad, and have just stuck with it since.
We obviously don't know, but I'd say it's still a pretty good starting assumption that consciousness is an emergent feature of information processing, which is a physical process happening in our brains.
Whatever comes pre-installed on the device. I stopped caring around the time they stopped making cool new smartphones. It has just become a tool since then.