
Posts 90 · Comments 41 · Joined 2 yr. ago

  • I'm not sure (literally not sure!) if the Potsdam Declaration meets the spirit of the Franck Report and Szilard Petition. One aspect is this:

    If such public announcement gave assurance to the Japanese that they could look forward to a life devoted to peaceful pursuit in their homeland and if Japan still refused to surrender, our nation might then, in certain circumstances, find itself forced to resort to the use of atomic bombs.

    On the one hand, the Potsdam Declaration asks for unconditional surrender. But I don't think that's disqualifying on its own, and it also includes passages like this, which I think you could reasonably argue do meet the criteria laid out above:

    the Japanese military forces, after being completely disarmed, shall be permitted to return to their homes with the opportunity to lead peaceful and productive lives

    Japan shall be permitted to maintain such industries as will sustain her economy and permit the exaction of just reparations in kind, but not those which would enable her to rearm for war. To this end, access to, as distinguished from control of, raw materials shall be permitted. Eventual Japanese participation in world trade relations shall be permitted.

    But there's another part of both the Franck Report and the Szilard Petition: they were concerned that once nuclear weapons were used, it was inevitable that other nations would develop them, e.g. this part:

    If after the war a situation is allowed to develop in the world which permits rival powers to be in uncontrolled possession of these new means of destruction, the cities of the United States as well as the cities of other nations will be in continuous danger of sudden annihilation.

    Though I suppose they only urge Truman to consider these issues, and maybe he did.

  • I support you. Personally, after spending way too much time thinking about Bourdieu etc., I've come to believe that:

    (1) The majority of worthwhile human activities have some dimension of "showing off" to them. (2) Worrying too much about showing off is best thought of as a form of neuroticism.

    I mean: Marrying someone you don't like because you think they'll impress your friends or spending all your time trying to look good on Instagram is surely bad. But having a personal library is not like that!

  • This advice generally makes me sad, but it's still worth thinking about.

  • Ah, so the argument is more general than "reproduction" through running different physical copies, but also includes the AI self-improving? This again seems plausible to me, but still seems like something not everyone would agree with. It's possible, for example, that the "300 IQ AI" only appears at the end of some long process of recursive self-improvement, at which stage physical limits mean it can't get much better without new hardware requiring some kind of human intervention.

    I guess my goal is not to lay out the most likely scenario for AI-risk, but rather the scenario that requires the fewest assumptions, that's the hardest to dispute?

  • I agree with you! There are a lot of things that present non-zero existential risk. I think that my argument is fine as an intellectual exercise, but if you want to use it to advocate for particular policies then you need to make a comparative risk vs. reward assessment just as you say.

    Personally, I think the risk is quite large, and enough to justify a significant expenditure of resources. (Although I'm not quite sure how to use those resources to reduce risk...) But this definitely is not implied by the minimal argument.

  • I certainly agree that makes the scenario more concerning. But I worry that it also increases the "surface area of disagreement". Some people might reject the metaphor on the grounds that, say, AI will require such enormous computational resources, and there are such hard physical limits on how quickly more compute can be created, that AI can't "reproduce".

  • dynomight internet forum @lemmy.world

    Y’all are over-complicating these AI-risk arguments

    dynomight.net/ai-risk/
  • dynomight internet forum @lemmy.world

    Shoes, Algernon, Pangea, and Sea Peoples

    dynomight.net/shorts-5/
  • It's certainly possible that I'm misinterpreting them, but I don't think I understand what you're suggesting. How do you interpret "Substack eugenics alarm"?

  • Interestingly, lots of people now seem excited about alpha school, where pay-for-performance is apparently a core principle!

  • dynomight internet forum @lemmy.world

    Dear PendingKetchup

    dynomight.net/ketchup/
  • This is a tangent but I've always been fascinated by the question of what people would spend their time on given extremely long lifespans. One theory would be art, literature, etc. But maybe you'd get tired of all that and what you'd really enjoy is more basic things like good meals and physical comfort? Or maybe you'd just meditate all the time?

  • Deciding if you'll like something before you've tasted it is a great example. Probably we all do that to some degree with all sorts of things?

    P.S. Instead of Moby Dick try War and Peace!

  • Thanks, I really like the idea of "performing enjoying". I'd heard of the Ben Franklin effect before, but not the conjectured explanation. (The other conjectured explanations on Wikipedia are interesting, too.)

  • That's what I see, too—if I'm able to hold my focus exactly constant. It seems to disappear as soon as I move my eyes even a little bit.

  • I think this is a fair argument. Current AIs are quite bad about "knowing if they know". I think it's likely that we can/will solve this problem, but I don't have any particularly compelling reason for that, and I agree that my argument fails if it never gets solved.

  • FWIW, I think this is a great post. But I really don't like the way people are treating it like a "knockout blow" against AI 2027. It's healthy debate!

  • I'm sure many people feel the same way. But wouldn't that just make that observation even stronger—people care about animal welfare so much that they'd like to go even further than in-ovo testing?

  • Agree with your first point. For the second point, I felt like I had to add some artifice because otherwise the morally correct choice in almost all situations would seem to obviously be "ask humanity and let it choose for itself"! Which is correct, but not very interesting.

    (In any case, I'm not actually that interested in these particular moral puzzles, I have other purposes in asking...)

  • Subscription confirmed!

  • Confirmed!

    (PS I love pedantic emails)

  • I first tried it with my RSS reader, but I also get an error if I just try to load that URL in a web browser. (Any browser.)