
  • Ah, I see, very nice. I wonder if it might make sense to declare the dimensions that are supposed to match once and for all when you wrap the function?

    E.g. perhaps you could write:

    @new_wrap('m, n, m n->')
    def my_op(x, y, a):
        return y @ jnp.linalg.solve(a, x)

    to declare the matching dimensions of the wrapped function and then call it with something like

    Z = my_op('i [:], j [:], i j [: :]->i j', X, Y, A)

    It's a small thing but it seems like the matching declaration should be done "once and for all"?

    (On the other hand, I guess there might be cases where the way things match depend on the arguments...)

    Edit: Or perhaps if you declare the matching shapes when you wrap the function you wouldn't actually need to use brackets at all, and could just call it as:

    Z = my_op('i :, j :, i j : :->i j', X, Y, A)

    ?
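To make the idea concrete, here is a rough sketch of how such a `new_wrap` decorator might splice the declared core dims into the call-time pattern. To be clear, `new_wrap` and this pattern-merging are entirely hypothetical (not real einx API); the final `einx.vmap` dispatch is left as a comment so that only the splicing logic runs:

```python
from functools import wraps

def new_wrap(core_pattern):
    # Hypothetical decorator (not real einx API): store the core
    # (bracketed) dims declared once at wrap time. 'm, n, m n ->' means
    # arg 1 has core axis m, arg 2 has n, arg 3 has both m and n.
    core_specs = [s.strip() for s in core_pattern.split('->')[0].split(',')]

    def decorator(op):
        @wraps(op)
        def wrapped(call_pattern, *args):
            lhs, rhs = call_pattern.split('->')
            merged = []
            for batch_spec, core in zip(lhs.split(','), core_specs):
                dims = iter(core.split())
                # splice a declared core dim in place of each ':' placeholder
                merged.append(' '.join(
                    '[' + next(dims) + ']' if tok == ':' else tok
                    for tok in batch_spec.split()))
            full = ', '.join(merged) + ' -> ' + rhs.strip()
            # A real version would now dispatch to einx:
            #   return einx.vmap(full, *args, op=op)
            return full
        return wrapped
    return decorator

@new_wrap('m, n, m n ->')
def my_op(x, y, a):
    ...  # would be: return y @ jnp.linalg.solve(a, x)

# The call-time pattern then only names the batch axes:
print(my_op('i :, j :, i j : : -> i j'))
# prints: i [m], j [n], i j [m] [n] -> i j
```

(My understanding is that einx treats `i j [m] [n]` the same as `i j [m n]`, since the brackets just mark which axes are core, but I haven't verified that.)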

  • OK, I gave it a shot on the initial example in my post:

    import einx
    from jax import numpy as jnp
    import numpy as onp
    import jax

    X = jnp.array(onp.random.randn(20, 5))
    Y = jnp.array(onp.random.randn(30, 5))
    A = jnp.array(onp.random.randn(20, 30, 5, 5))

    def my_op(x, y, a):
        print(x.shape)
        return y @ jnp.linalg.solve(a, x)

    Z = einx.vmap("i [m], j [n], i j [m n]->i j", X, Y, A, op=my_op)

    Aaaaand, it seemed to work the first time! Well done!

    I am a little confused though, because if I use "i [a], j [b], i j [c d]->i j" it still seems to work, so maybe I don't actually 100% understand that bracket notation after all...

    Two more thoughts:

    1. I added a link.
    2. You gotta add def wrap(fun): return partial(vmap, op=fun) for easy wrapping. :)
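For anyone following along, here is what that einx.vmap call computes, written as an explicit double loop in plain NumPy (just to make the shapes concrete):

```python
import numpy as np

X = np.random.randn(20, 5)
Y = np.random.randn(30, 5)
A = np.random.randn(20, 30, 5, 5)

# What "i [m], j [n], i j [m n] -> i j" vectorizes: my_op runs once per
# (i, j) pair, receiving the bracketed axes (m and n) whole.
Z = np.empty((20, 30))
for i in range(20):
    for j in range(30):
        Z[i, j] = Y[j] @ np.linalg.solve(A[i, j], X[i])

print(Z.shape)  # (20, 30)
```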
  • Hey, thanks for pointing this out! I quite like the bracket notation for indicating axes that operations should be applied "to" vs. "over".

    One question I have—is it possible for me as a user to define my own function and then apply it with einx-type notation?

  • Thanks. The one problem with that is that you have to use dumpy.wrap whenever you create a function that uses loops and then want to call it inside another loop. But I don't see any way around that.

  • Well, Einstein summation is good, but it only does multiplication and sums. (Or, more generally, some scalar operation and some scalar reduction.) I want a notation that works for ANY type of operation, including non-scalar ones, and that's what DumPy does. So I'd argue it goes further than Einstein summation.
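To make that distinction concrete, here is a small NumPy sketch: the first computation is exactly einsum's multiply-then-sum pattern, while the second applies a non-scalar operation (a matrix solve) per index pair, which einsum cannot express:

```python
import numpy as np

X = np.random.randn(4, 3)
Y = np.random.randn(5, 3)
A = np.random.randn(4, 5, 3, 3)

# Einstein summation: scalar multiply, then a scalar sum reduction.
Z1 = np.einsum('ik,jk->ij', X, Y)
Z1_loops = np.array([[np.sum(X[i] * Y[j]) for j in range(5)]
                     for i in range(4)])
assert np.allclose(Z1, Z1_loops)

# A non-scalar operation per index pair (a full matrix solve) has no
# einsum spelling; it needs loop/vmap-style notation instead.
Z2 = np.array([[Y[j] @ np.linalg.solve(A[i, j], X[i])
                for j in range(5)] for i in range(4)])
print(Z2.shape)  # (4, 5)
```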

  • At one point, I actually had some (LLM-generated) boxes where you could click to switch between the different implementations for the same problem. But in the end I didn't like how it looked, so I switched to simple expandy-boxes. Design is hard...

    There's no magical significance to the assert x.ndim==1 check. I think I just wanted to demonstrate that the softmax code was "simple" and didn't have to think about high dimensions. I think I'll just remove that, thanks.
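For reference, the kind of dimension-asserting softmax under discussion would look something like this (my reconstruction, not the post's exact code):

```python
import numpy as np

def softmax(x):
    # The assert just documents that this code only handles 1-D input;
    # higher-dimensional input would be handled by mapping over axes.
    assert x.ndim == 1
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / np.sum(e)

p = softmax(np.array([1.0, 2.0, 3.0]))
print(p.sum())  # sums to 1 (up to floating point)
```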

  • Yeah, I totally agree with this point! DNA is definitely not sufficient to build an organism. Originally, I thought there was definitely a large (albeit hard to quantify) amount of information embodied in the cells, though there's been some debate about how large it really is. For example, if I provided a single photograph of an adult human and—I don't know—the typical fractions of the different atoms in a human body, could a sufficiently intelligent alien race reverse-engineer how to make a zygote?

    In any case, my (annoying) answer to this challenge is to retreat: I don't technically have to solve this problem because I'm not trying to estimate the amount of information in a cell, just the information in DNA.

  • Yeah, I tried to draw the line at "trading money" as opposed to a general examination of libertarian principles. But I agree that for euthanasia, once you start considering higher-order effects, it's not clear that it's net positive for society. For example, if I definitely never want euthanasia, then legalizing it does seem to hurt me. Because maybe someday I'm old and disabled and my children have to go to enormous effort to take care of me. Even if they'd never consider the idea of euthanasia, the mere possibility of it might make me feel like more of a burden to them and make me feel guilty for not doing it.

    Of course there are obviously downsides to making it illegal, too! I don't really have a strong view on which is net-positive. Seems very hard.

  • I don't think sexism is a very useful concept here. After all, you could equally well argue that it's sexist to forbid surrogacy, since that's removing autonomy.

    Personally, I'm squishy enough that I'm willing to be convinced by empirical data. Like, if there was data that showed a huge percentage of surrogate mothers regret agreeing to it, then that would matter a lot to me, though I'd still probably lean towards education / screening / etc. before jumping all the way to making it illegal.

    "There's a reason that voluntary slavery is illegal: Desperate people would do it (and have historically done it), and that didn't make it right."

    I think this is the point I was trying to make at the end of the post. If someone does surrogacy (or donates a kidney) out of desperation, that seems gross. Whereas if they are OK financially and decide to do it for some "extra money" (whatever that means) then that seems less gross.

  • My instinct is that $20 per A would not be enough to move the needle, and might be net-harmful when you consider intrinsic motivation. But how about $500 per A? (Or $1000 for straight As) Still might be cheaper than tutoring?

  • The response I find really amusing is that lots of people say, basically, "But if you don't do this then it's harder to make money on twitter."

    (OK... If doing plagiarism makes it easier to make money, then it's not plagiarism?)

  • That first study appears to be non-blinded, so I tend to discount it. I wasn't aware of that second review; I'll take a look. At a glance, most of its studies seem to be included in the 2020 review I cited previously, and I don't see much claim that it helps with stress (in fact, the opposite). It looks like the claim is that it helps with sleep and/or ADHD.

    That said, as far as I know, theanine is very likely to be completely safe. And I think it's totally possible, given all the evidence, that it does have a small effect on stress/anxiety and maybe some other things. So I don't think there's really any reason not to take it. I'm just 95% convinced that the people who claim it's life-changing for stress/anxiety are delusional.

  • All fair points!

    1. To be honest, I'm not entirely sure of the difference between stress and anxiety and jitters. For me they're closely related, and I guess I tried to measure some combination of them.

    2. True, more isn't always more. But more does tend to be more, and this is one of the suggestions people made from the first experiment.

    3. I agree. However, I see this in the context of the first post—the scientific literature has tested theanine and found basically nothing! I was originally convinced that the internet was onto something, but now I tend to think the boring scientific literature had it right all along.

  • Paper
  • shilling blogs is encouraged! (at least for anything related, which this is)

  • Is this really the opposite? Reading that post, I find very little to disagree with.

  • Being: "If you move these molecules around you can cure cancer and make a near-infinite amount of money"

    Humans: "OK!"

  • For sure, thinking faster alone will hit diminishing returns pretty fast. I think you need to assume the Being is also much "smarter" along all sorts of other dimensions, too.

  • That's a good point re: biology. It's so vast that everyone seems to sub-sub-sub specialize. It's hard to speculate about what might follow if someone was able to master literally every aspect of biology at the same time.

    Re: Trump, my naive model is that people are just complicated and it's incredibly hard to model them and say how they will respond to a given situation, or how many of the different types of people there are, or exactly what media they've consumed, etc. Do you really mean that just using the existing polling data, etc. it should have been possible to be confident?

    The main thing that gives me pause there is that some people were very confident that Trump would win, most notably that French guy that made millions betting on the outcome. He definitely made some good points regarding polling analysis, though I wonder if there are other people who could have made equally good points if the election had gone the other way...

  • Currency is a great incentive. I think a good way of thinking about "rights" is a sort of structure to encourage transfers of currency. For example, should corporations be allowed to put up surveillance balloons and track every vehicle and sell that data to whoever? Or should that be a voluntary transaction, like in your case? (I don't have an answer, just trying to point to the complexities.)