I'm less certain about that than I am about AGI - there may be other ways to produce that same amount of energy with less effort - but generally speaking, yeah, it seems highly probable to me.
First you were implying that today’s AI would bring about AGI
I've never made such a claim. I've been saying the exact same thing since around 2016 or so - long before LLMs were even a thing. It's in no way obvious to me that LLMs are the path to AGI. They could be, but they don't have to be. Either way, it doesn't change my core argument.
My argument is that we'll incrementally keep improving our technology like we have done throughout human history. Assuming that general intelligence is not substrate dependent - meaning that what our brains are doing can be replicated in silicon - and that we don't destroy ourselves before we get there, then it's just a matter of time before we create a system that's as intelligent as we are: AGI.
I already said that the timescale doesn't matter here. It could take a hundred years or two thousand - doesn't matter. We're still moving toward it. It doesn't matter how slowly you move. As long as you keep moving, you'll eventually reach your destination.
So, the way I see it, if we never end up creating AGI, it's either because we destroyed ourselves before we got there or because there's something borderline supernatural about the human brain that makes it impossible to replicate in silicon.
If you're just gonna keep ignoring every single point I make and keep rambling about unrelated shit, then there's nothing left to discuss here. If you actually had an argument, you would've made it by now.
Are we not moving toward AGI? Because from where I stand, I only see three scenarios: either AI research is going backwards, no progress is being made whatsoever, or we're continuing to improve our systems incrementally - inevitably moving toward AGI. Unless, of course, you think we're never going to reach it, which I view as a quite insane claim in itself.
If we're not moving toward it, then I'd love to hear your explanation for why we're moving backwards or not making any progress at all.
Whether we're 5 or 500 years away from AGI is completely irrelevant to the people who worry about it. It's not the speed of the progress - it's the trajectory of it.
I wonder how they define what counts as a repair service. My entire business is based on building and repairing. Is patching a hole in a drywall wall a repair? What about replacing the regulator cartridge in a sink faucet? Replacing a broken roof tile with a new one? Patching a leaking rain gutter?
I still think they deserve some credit for at least trying to do the right thing. I don't envy the position they're in.
Everyone's rushing toward AGI. Trying to do it safely is meaningless if your competition - the ones who don't care about safety - gets there first. You can slow things down if you're in the lead, but if you're second best, it's just posturing. There is no second place in this race.
Anthropic's founders are former OpenAI employees who left specifically because they disagreed with OpenAI's stance on this kind of stuff and wanted nothing to do with it. If this were just a PR stunt, I don't see why they would've left OpenAI in the first place.
Anthropic was founded by former OpenAI employees who left largely due to ethical and safety concerns about how OpenAI was being run. This is just them sticking to their principles.
Because now is the best time to be alive, ever. I could take you back 100, 200, 500, 1,000, or 5,000 years, and things just get shittier and shittier the further back we go - yet people kept having kids.
But in both cases you have the option to pay - yet choose not to. If money weren't an issue, there wouldn't really be any reason to pirate anything. That's why I see piracy as a financial decision, and thus I don't think piracy advocates have any ground to stand on when they criticize AI companies for doing the exact same thing. It's not identical, but it's equivalent.
One could even argue that individual piracy is selfish because it only benefits the one person doing it. AI companies at least are providing a product that hundreds of millions of people get value out of - and the vast majority of them get it for free.
We could've never invented LLMs and I'd still be equally worried about AGI. I've been talking about it since 2016 or so - LLMs aren't the motivation for that worry, since nobody had even heard of them back then.
The timescale is also irrelevant here. I'm not less worried even if we're 500 years away from it. How close to Earth does the asteroid need to get before it's acceptable to start worrying about it?
I didn't think I'd need to explain the difference between saving money and earning money but here we are.
When you earn money, you get a check you can spend on more stuff. When you save money, you don't get a check - that would be earning, not saving. Instead, you're spending less, which means you have that money left to buy something else. Those savings are effectively what you "earn."
When you download a $40 movie for free, you're left with $40 more to spend on something else. It doesn't matter whether I hand you $40 to buy the movie or you pirate it - in both cases, you end up with the exact same amount of money afterward.
Nobody's saying AGI is here right now - it's a concept, like worrying about an asteroid wiping us out before it actually shows up. Dismissing it as "fake" just ignores the trajectory we're on with AI development. If we wait until it's real to start thinking about risks, it might be too late.
In neuroscience and philosophy, when people talk about consciousness, they're typically referring to the fact of experience - that it feels like something to be. That experience has qualia.
Nowhere is it written that this is a requirement for general intelligence. It's perfectly conceivable to imagine a system that's more intelligent than any human but where it doesn't feel like anything to be that system. It could even appear conscious without actually being so. A philosophical zombie, so to speak.
C'mon now.