Posts: 89 · Comments: 40 · Joined: 2 yr. ago

  • This advice generally makes me sad, but still worth thinking about.

  • Ah, so the argument is more general than "reproduction" through running different physical copies, but also includes the AI self-improving? This again seems plausible to me, but still seems like something not everyone would agree with. It's possible, for example, that the "300 IQ AI" only appears at the end of some long process of recursive self-improvement, at which stage physical limits mean it can't get much better without new hardware requiring some kind of human intervention.

    I guess my goal is not to lay out the most likely scenario for AI-risk, but rather the scenario that requires the fewest assumptions, that's the hardest to dispute?

  • I agree with you! There are a lot of things that present non-zero existential risk. I think that my argument is fine as an intellectual exercise, but if you want to use it to advocate for particular policies then you need to make a comparative risk vs. reward assessment just as you say.

    Personally, I think the risk is quite large, and enough to justify a significant expenditure of resources. (Although I'm not quite sure how to use those resources to reduce risk...) But this definitely is not implied by the minimal argument.

  • I certainly agree that makes the scenario more concerning. But I worry that it also increases the "surface area of disagreement". Some people might reject the metaphor on the grounds that they think—say—that AI will require such enormous computational resources and there are physical limits on how quickly more compute can be created that AI can't "reproduce".

  • dynomight internet forum @lemmy.world

    Y’all are over-complicating these AI-risk arguments

    dynomight.net /ai-risk/
  • dynomight internet forum @lemmy.world

    Shoes, Algernon, Pangea, and Sea Peoples

    dynomight.net /shorts-5/
  • It's certainly possible that I'm misinterpreting them, but I don't think I understand what you're suggesting. How do you interpret "Substack eugenics alarm"?

  • Interestingly, lots of people now seem excited about alpha school, where pay-for-performance is apparently a core principle!

  • dynomight internet forum @lemmy.world

    Dear PendingKetchup

    dynomight.net /ketchup/
  • This is a tangent but I've always been fascinated by the question of what people would spend their time on given extremely long lifespans. One theory would be art, literature, etc. But maybe you'd get tired of all that and what you'd really enjoy is more basic things like good meals and physical comfort? Or maybe you'd just meditate all the time?

  • Deciding if you'll like something before you've tasted it is a great example. Probably we all do that to some degree with all sorts of things?

    P.S. Instead of Moby Dick try War and Peace!

  • Thanks, I really like the idea of "performing enjoying". I'd heard of the Ben Franklin effect before, but not the conjectured explanation. (The other conjectured explanations on Wikipedia are interesting, too.)

  • dynomight internet forum @lemmy.world

    You can try to like things

    dynomight.net /liking/
  • That's what I see, too—if I'm able to hold my focus exactly constant. It seems to disappear as soon as I move my eyes even a little bit.

  • I think this is a fair argument. Current AIs are quite bad about "knowing if they know". I think it's likely that we can/will solve this problem, but I don't have any particularly compelling reason for that, and I agree that my argument fails if it never gets solved.

  • FWIW, I think this is a great post. But I really don't like the way people are treating it like a "knockout blow" against AI 2027. It's healthy debate!

  • I'm sure many people feel the same way. But wouldn't that just make that observation even stronger—people care about animal welfare so much that they'd like to go even further than in-ovo testing?

  • Agree with your first point. For the second point, I felt like I had to add some artifice because otherwise the morally correct choice in almost all situations would seem to obviously be "ask humanity and let it choose for itself"! Which is correct, but not very interesting.

    (In any case, I'm not actually that interested in these particular moral puzzles, I have other purposes in asking...)

  • Subscription confirmed!

  • Confirmed!

    (PS I love pedantic emails)

  • I first tried it with my RSS reader, but I also get an error if I just try to load that URL in a web browser. (Any browser.)