
Posts 10 · Comments 375 · Joined 2 yr. ago

  • The Rust community really pulled together and made sure that there are Rust alternatives to as many tools out there as possible.

    Deliberately or not, with good appreciative intentions or not, I'm afraid you're perpetuating a myth here (a conspiracy theory even, in some "mentally challenged" circles).

    Most tools are independently created by individuals, or very small independent teams of contributors. And being an "alternative written in rust" is rarely a goal in and of itself (or shouldn't be anyway).

    The notion of a unified central "<lang> community" that is responsible for creating 100s of tools is both silly and fictitious.

    Talking about Rust itself as a good language with good tooling that allows individuals to create good tools, and contribute to a thriving library ecosystem, is okay. Not everything has to be a "community" or a "community effort" or framed that way, however.

  • zswap is not better than modern zram in any way. And you can set up the latter with writeback anyway.

    But that's not OP's problem since "swap gets hardly touched" in OP's case.

  • The point is compression.

    ```
    % swapon
    NAME           TYPE      SIZE USED  PRIO
    /dev/nvme0n1p2 partition   8G   0B     5
    /dev/sda2      partition  32G   0B    -2
    /dev/zram1     partition 3.5G 1.8G 32767
    /dev/zram2     partition 3.5G 1.8G 32767
    /dev/zram3     partition 3.5G 1.8G 32767
    /dev/zram4     partition 3.5G 1.8G 32767
    /dev/zram5     partition 3.5G 1.8G 32767
    /dev/zram6     partition 3.5G 1.8G 32767
    /dev/zram7     partition 3.5G 1.8G 32767
    /dev/zram8     partition 3.5G 1.8G 32767
    ```

    ```
    % zramctl
    NAME       ALGORITHM DISKSIZE   DATA  COMPR  TOTAL STREAMS MOUNTPOINT
    /dev/zram8 zstd          3.5G 293.4M 189.2M 192.5M         [SWAP]
    /dev/zram7 zstd          3.5G 282.1M 187.5M   192M         [SWAP]
    /dev/zram6 zstd          3.5G 284.6M 189.4M 192.9M         [SWAP]
    /dev/zram5 zstd          3.5G 297.8M 197.3M 200.1M         [SWAP]
    /dev/zram4 zstd          3.5G 304.9M 202.9M 206.7M         [SWAP]
    /dev/zram3 zstd          3.5G 300.7M 201.9M 204.6M         [SWAP]
    /dev/zram2 zstd          3.5G 311.3M 207.2M 210.6M         [SWAP]
    /dev/zram1 zstd          3.5G 307.9M 210.5M 213.3M         [SWAP]
    /dev/zram0 zstd          <not used for swap>
    ```
    • Use a number of zram devices equal to the number of threads in your system.
    • Use zstd compression.
    • Mount the zram devices as swap with high priority.
    • Mount disk swap partition(s) with low priority (a rough shell sketch of this setup follows the list).
    • Increase swappiness:

      ```
      sysctl vm.swappiness=<larger number than default>
      ```

    • Use zramctl to see detailed info about your zram disks.
    • Check with iotop to see if something unexpected is using a lot of IO traffic.
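
    A rough, hedged shell sketch of the setup above (device names, sizes, and the swappiness value are illustrative, not tuned recommendations):

    ```
    # load the zram module with one device per thread (8 as an example)
    modprobe zram num_devices=8

    # configure one device (repeat for each /dev/zramN)
    zramctl /dev/zram1 --algorithm zstd --size 3.5G
    mkswap /dev/zram1
    # high priority so the kernel prefers zram over the disk partitions
    swapon --priority 32767 /dev/zram1

    # raise swappiness above the default
    sysctl vm.swappiness=100
    ```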
  • Okay. I thought for a moment that you and everyone else were not on the same page.

  • zram file

    what zram file?

  • lightweight

    😑
    /me looks inside
    /me finds >20 sub-crates and all kinds of dependencies including a web server
    /me starting to think i'm the weird one for saving my passwords in a text file and almost getting annoyed about having to use cotp

  • Brother, some of us have already moved from lld to mold to wild. The project could have waited a little bit, then gone with the implemented-in-Rust option directly.

  • Thank you for the update.

  • Thank you (+ contributors) for doing this work and keeping us posted.

    Last time I took a look, I think LTO was broken. Is that still the case?

  • I always felt "for embedded" was selling this crate short, or rather pushing away potential users who aren't aware that some of the data structures here are useful in non-embedded scenarios.

    I, for example, use the admittedly simple Deque (with its is_full() method) in one of my projects for pushing/popping tasks to/from a fixed-size thread queue, which one could argue is a very un-embedded-like use-case 😉.
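
    A minimal sketch of that pattern, assuming a recent heapless release (the Task type and the capacity of 16 are made up for illustration):

    ```rust
    use heapless::Deque;

    struct Task(u32);

    fn main() {
        // fixed-capacity deque, no heap allocation
        let mut queue: Deque<Task, 16> = Deque::new();

        // producer side: refuse new work when the queue is full
        if !queue.is_full() {
            // push_back hands the item back as Err(..) if the deque is full
            let _ = queue.push_back(Task(1));
        }

        // consumer side: pop tasks until the queue is empty
        while let Some(Task(id)) = queue.pop_front() {
            println!("running task {id}");
        }
    }
    ```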

  • Forgot to mention that I wasn't exactly young at the time. We just didn't have reliable broadband internet back then in my neck of the woods. So I had to download ISOs and save them to a USB thumb drive at a uni computer lab.

  • Early Mandriva with KDE 3.4 or 3.5 I think, but I can barely remember anything with clarity. It couldn't have been bad though, since I haven't used Windows on my own devices since 😉.

    From my foggy memory, I think it was good for my then-nocoder self: easy to use, stable, relatively light, and good-looking.

    I missed the Mandrake and pre-Fedora Red Hat era, but not by much.

  • Be realistic. Filling out forms to get CI runners means no serious users will be attracted. They can just go to GitLab instead. And even then, a migration wouldn't be fully seamless.

    Those are immediate show-stoppers before we get to contributor pool sizes and network effects.

    My purist young self picked Gitorious over GitHub. I even vaguely remember the KDE presence there, so it was a trusted host with big(ish) betters on it. But it closed shop soon after, and that was a quickly learned lesson.

    I will be more intrigued by the first jj-native forge when it appears. I may even help alpha test it, as it may bring a breath of fresh air to the space, unless it's going to be AI buzz-filled. In that case to the ~trash~ blocklist it will go.

  • Yes. Note that I'm replying to this:

    messy Result type just seems like a case of something that should’ve been handled already (or properly propagated up).

    My point was that without flattening, "provide context and propagate" vs. "directly propagate" is always explicit and precise, and is obviously already supported and easy to do.

    Its use with functional chaining, as pointed out by others, wasn't lost on me either. I've been using Option::flatten() for years already, because such considerations don't exist in that case.
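
    A tiny sketch of why Option::flatten() doesn't raise the same concern (values are made up):

    ```rust
    fn main() {
        let nested: Option<Option<u32>> = Some(Some(3));
        assert_eq!(nested.flatten(), Some(3));

        // both of these flatten to None; there is no per-layer context to lose,
        // unlike a nested Result where each layer can carry its own error
        let inner_none: Option<Option<u32>> = Some(None);
        let outer_none: Option<Option<u32>> = None;
        assert_eq!(inner_none.flatten(), None);
        assert_eq!(outer_none.flatten(), None);
    }
    ```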

  • (stating the obvious)

    You can already:

    ```rust
    res_res??;
    // or
    res_res?.map_err(..)?;
    // or
    res_res.map_err(...)??;
    // or
    res_res.map_err(...)?.map_err(...)?;
    ```

    With res_res.flatten()?, you don't know where you got the error from anymore, unless the error type itself is "flatten-aware", which is a bigger adjustment than this simple ergonomic library addition, and can itself become a problematic pattern with its own disadvantages.
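
    To make that concrete, a hedged sketch (the two-layer error and the String contexts are invented for illustration, and it assumes a toolchain where Result::flatten() is available):

    ```rust
    // a nested result, e.g. a "transport" error outside, a "payload" error inside
    fn fetch() -> Result<Result<u32, String>, String> {
        Ok(Err("parse failed".to_string()))
    }

    fn with_context() -> Result<u32, String> {
        // one `?` per layer: each layer gets its own context before propagating
        let v = fetch()
            .map_err(|e| format!("transport: {e}"))?
            .map_err(|e| format!("payload: {e}"))?;
        Ok(v)
    }

    fn flattened() -> Result<u32, String> {
        // both layers collapse into one hop; the caller can no longer tell which layer failed
        Ok(fetch().flatten()?)
    }

    fn main() {
        println!("{:?}", with_context()); // Err("payload: parse failed")
        println!("{:?}", flattened());    // Err("parse failed")
    }
    ```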

  • Result::flatten() is probably my favorite addition

    It's rare to have a negative reaction to a library addition. But I actually don't like this one at all.

    For me, error contexts are as important as the errors themselves. And ergonomically helping with muddying these contexts is not a good thing!

  • With all the surface-level false analogies and the general lack of solid knowledge in the comments here, I truly hope that at least half of them are LLM-generated.

  • This is cool and lies within my area of interest.

    One thing that is not clear is whether there will be a way to do playback outside of custom players. If a stream can't be trivially grabbed, and/or played directly in mpv (via ffmpeg library support or otherwise), this won't be very useful for me.

    It would also be interesting to see how the video streams are encoded: whether the GOP size is forced to 1 so it's all intra frames, and if not, how synchronization after recovery (where FEC is not enough) is done.

    Hopefully this gets posted again when the code is released.