Posts: 2 · Comments: 133 · Joined: 2 yr. ago

  • Cardboard is actually quite good at heat insulation. If you have an electric oven (no flame) and set the temperature below 200°C (cardboard ignites at a slightly higher temperature, but ovens aren't precise), there is no risk. So you can totally reheat a pizza at 180°C on its cardboard.

  • That's well written. I think that requiring 2+ code reviews could also help, because over time more people will gain knowledge of the dark parts of the codebase just by reviewing “Martin's” PRs when he works on them.

  • Same in France

  • Lol. I read “Other oysters gaining more popularity”, and found it very appropriate!

  • Just a remark: C++ has exactly the same issues. In practice both clang and gcc have good ABI stability, but it's not perfect and not between each other. But in any case, templates (and global mutable statics for most use cases) don't work through FFI.
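    To illustrate that last point, here is a minimal hypothetical C++ sketch (the names like clamp_positive are made up): a template has no symbol of its own until it is instantiated, so only concrete extern "C" wrappers can cross an FFI boundary.

```cpp
// Hypothetical header: the generic version lives only in headers and
// never appears in the library's ABI.
template <typename T>
T clamp_positive(T value) {
    return value < T{0} ? T{0} : value;
}

// Hypothetical implementation file: the only things a C (or Rust, Python,
// ...) caller can link against are fixed, concrete instantiations exposed
// with C linkage.
extern "C" int clamp_positive_int(int value) {
    return clamp_positive<int>(value);
}

extern "C" double clamp_positive_double(double value) {
    return clamp_positive<double>(value);
}
```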

  • IIRC the orbit of Mercury doesn't work with the Newtonian model, and astronomers had predicted the discovery of Vulcan, a small planet between Mercury and the Sun. So a new model had to be invented, since Vulcan couldn't be found.

  • Nothing prevents you from using dynamic linking when developing and static linking with aggressive LTO for public releases.

  • Someone found the link to the article I was thinking about.

  • Thank you so much. I read this when it was written, and then totally forgot where I had read that information.

  • Shared libraries save RAM.

    Citation needed :) I was surprised, but I read (sorry, I can't find the source again) that in most cases a dynamically linked library is loaded only once, and usually only a very few times. This makes the RAM gain much less obvious. In addition, static linking allows inlining, which itself allows aggressive constant propagation and dead code elimination, on top of LTO. All of this decreases the binary size, sometimes in non-negligible ways.
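    As a hypothetical sketch of that optimisation argument (the function names are made up): once the library code is part of the same link, LTO can inline and constant-fold a call that would otherwise stay an opaque jump into a shared object.

```cpp
// feature.cpp, assumed to be statically linked into the final binary.
bool feature_enabled(int feature_id) {
    return feature_id == 42;
}

// main.cpp
#include <cstdio>

bool feature_enabled(int feature_id);

int main() {
    // With LTO the call below can be inlined, folded to 'false', and the
    // whole branch dead-code eliminated. Against a shared library, it has
    // to remain an indirect call through the PLT and none of that happens.
    if (feature_enabled(7)) {
        std::puts("feature on");
    }
    return 0;
}
```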

  • I think you don't understand what @CasualTee said. Of course dynamic linking works, but only when properly used. And in practice dynamic linking is a few orders of magnitude more complex to use than static linking. Of course you still have ABI issues when you statically link pre-compiled libraries, but in practice in a statically linked workflow you are usually building the library yourself, removing all ABI issues. Of course, if a library uses a global and you statically link it twice (with 2 different versions) you will have an issue (see the sketch at the end of this comment), but at least you can easily check that a single version is linked.

    There are no problems other than versioning and version conflicts, and even that is a solved problem.

    If it were solved, “DLL hell” wouldn't be a common expression and Docker would never have been invented.

    You get into all kinds of UB when interacting with a separate DSO, especially since there is minimal verification of ABI compatibility when loading a dynamic library.

    This statement makes no sense at all. Undefined behavior is just behavior that the C++ standard intentionally did not impose restrictions upon, by leaving the behavior without a definition. Implementations can and do fill in the blanks.

    @CasualTee was talking specifically about UB related to dynamic linking, which simply does not exist when statically linking.

    Yes, dynamic linking works in theory, but in practice it's hell to make it work properly. And what advantage does it have compared to static linking?

    • Less RAM usage? That's not even guaranteed, because static linking allows aggressive inlining, constant propagation, LTO and other fun optimisations.
    • Easier dependency upgrades? That's mostly true for C, assuming you have perfect backward ABI compatibility. And nothing proves that your binary is really compatible with newer versions of its libraries. And static dependency upgrades are an issue only because most Linux distributions don't have a workflow in which updating a dependency triggers the rebuild of all dependent binaries. If that were done, it would just be a question of download speed. Given the popularity of tools like Docker, which effectively transforms dynamic linking into the equivalent of static linking since all dependencies' versions are known, I would say that a lot of people prefer the comfort of static linking.

    To sum up: are all the complications introduced specifically by dynamic linking, compared to static linking, worth it for a non-guaranteed gain in RAM, a change in the tooling of Linux maintainers, and some extra download time?
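    And a hypothetical sketch of the “global linked in twice” problem mentioned above (the names are made up): if two copies of this code end up in the same process, for instance one in the main binary and one statically linked into a plugin, each copy keeps its own counter and they silently disagree.

```cpp
// tracker.cpp, statically linked wherever it is used.
namespace tracker {

// One instance of this global exists *per linked copy* of the library,
// not per process.
static int open_handles = 0;

void handle_opened() { ++open_handles; }
void handle_closed() { --open_handles; }
int current_count() { return open_handles; }

} // namespace tracker

// Main binary:  tracker::handle_opened();  tracker::current_count() == 1
// Plugin:       tracker::current_count() == 0  (its own private copy)
```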

  • It seems to be a lot of work but could also be a good idea.

    Something that I would also like is a statement on the Rust blog saying that lemmy instance X is the main Rust lemmy instance and that discussion should mostly happen there, so that the migration path is clear for reddit users.

  • That's a very nicely written article.

    Just a quick question: isn't point 8 (misconception: “the Rust borrow checker does advanced lifetime analysis”) outdated due to the introduction of NLL (non-lexical lifetimes) in Rust 2018?

  • Now that std::unique_ptr and std::variant have been introduced in C++, it's possible to just use the default destructor and the default copy and move constructors (see the sketch below)…

    That article would have been useful 15 years ago, but not anymore.
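    A minimal hypothetical sketch of that “rule of zero” point (the Sprite/Texture types are made up): with std::unique_ptr and std::variant owning the resources, the compiler-generated special members are enough, so there is nothing left for a hand-written destructor, copy or move to do.

```cpp
#include <memory>
#include <string>
#include <variant>

struct Texture { /* some heavy resource */ };

class Sprite {
    std::unique_ptr<Texture> texture_;           // sole owner of the resource
    std::variant<std::string, int> identifier_;  // tagged union that manages itself
public:
    explicit Sprite(std::unique_ptr<Texture> t) : texture_(std::move(t)) {}
    // No ~Sprite(), no hand-written copy/move: the defaults already do the
    // right thing (Sprite ends up movable and, because unique_ptr is
    // non-copyable, non-copyable).
};
```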

  • That being said, if you access the database from the GUI there is a high chance that you will repeat yourself, making the whole program bigger.