iirc the main reason for QOI was to have a simple format because "complexity is slow", so by stripping things that the author didn't consider important the idea was the resulting image format would be quicker and smaller than something like PNG or WebP.
Not sure how well that held up in practice; a lot of that complexity is actually necessary for many use cases (e.g. you need colour profiles unless you're only ever dealing with sRGB), and I remember a bunch of low-hanging-fruit optimisations for PNG encoders at the time that improved encoding speed by quite a bit.
AVIF is funny because they kept the worst aspects of WebP (lossy video-based encoding) while removing the best (lossless mode). There was an attempt at WebP2, using AV1 and a proper lossless mode, but Google killed that off as well.
But hey, now that they're releasing AV2 soon, we'll eventually have an incompatible AVIF2 to deal with. Good thing they didn't support JPEG-XL, it'd just be too confusing to have to deal with multiple formats.
Businesses require VPNs to function. Banning them would decimate Michigan’s economy. The only thing these people truly value is money
I mean it's not hard to see them carve out an exception for business uses, and allow them only on business-grade ISP plans. Tech won't stump these people because they don't care about it, when they can just force the people to play along.
They can just make it a legal requirement to allow MITM, like Kazakhstan tried doing back in 2015. If every ISP requires you to have this cert installed before you can go online, you don't have many options.
Or skip things like React entirely and use something declarative like htmx: there's less "glue code" you need to write, and the server response can itself kick off new client-side actions.
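Roughly the shape of that pattern, as a sketch (the route, markup and event name are made up for illustration; hx-get/hx-target and the HX-Trigger response header are real htmx features):

```typescript
// Hypothetical endpoint (names made up). In the markup you'd have something like
//   <button hx-get="/cart/add/42" hx-target="#cart">Add to cart</button>
// htmx makes the request, swaps the returned fragment into #cart, and the
// HX-Trigger response header fires a "cart-updated" event that any other
// element can listen for; no hand-written fetch/glue code on the client.
function handleAddToCart(req: Request): Response {
  const fragment = `<div id="cart">3 items</div>`; // server-rendered partial
  return new Response(fragment, {
    headers: {
      "Content-Type": "text/html",
      "HX-Trigger": "cart-updated", // kicks off further client-side actions
    },
  });
}
```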
I caught his coverage of the fine yesterday on 9 News, he pointed out that their share price actually rose afterwards, so the fine didn't really hurt them in any way.
Also the traffic inspection stuff can do it, anything that disables the hardware offload function will tank the speed.
Odd, the 100/40 plan was the original speed tier, but the NBN got rid of it a while back and everybody should have been moved to 100/20. The NBN then re-introduced it of course, but it's relegated to "business" users, so it's behind the "Pro" selector with Aussie.
We're writing to let you know that, unfortunately, from your billing cycle in November 2020, we will need to migrate you from your current plan NBN 100Mbps/40Mbps Unlimited ($99.00) to NBN 100Mbps/20Mbps Unlimited ($99.00) for the same cost of $99.00
Whilst it is an upload decrease, the price will remain the same and your download speed will also remain the same.
It goes without saying that we did not want to do this, but due to increased costs and the current NBN wholesale pricing model we have been forced to choose between congestion or higher prices. For a more in-depth look into the reasons why we were forced into this decision, click here.
...
In the spirit of 'No Bullsh*t', we are actually losing money on a lot of our legacy plans such as yours. As a smaller provider, we can't offer plans that are below cost price. Given the COVID situation, however, we've waited as long as we could to give everyone full advantage of our old pricing.
We hope you choose to stay with us – if you do, your plan will change automatically from your November bill, and you don't need to do anything else. If you'd prefer to browse all the plans available to you, sign in to the MyAussie app or have a look at the website.
If you wish to stay on the 100/40 plan, for an additional $10 per month, you can select it within MyAussie or by calling our support team on 1300 880 905 at any time.
That's the email I got about it back in 2020
Yep, their frontend used a shared caller that would return the parsed JSON response if the request was successful, and throw an error otherwise. And then the code that called it would use the returned object directly.
So I assume that most of the backend did actually surface error codes via the HTTP layer, and it was just this one endpoint that didn't (which then broke the client-side code when it tried to access non-existent properties of the response object), because otherwise basic testing would have caught it.
That's also another reason to use the HTTP codes: by storing the error in the response body you now need extra code between the function doing the API call and the function handling a successful result, just to examine the body and see whether there was actually an error, all based on an ad-hoc per-endpoint format.
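A sketch of what I mean (the endpoint, types and helper names are invented): the shared caller only looks at the HTTP status, so an endpoint that answers 200 with an error body sails straight through to code that expects the success shape.

```typescript
// Shared caller: resolves with the parsed JSON on 2xx, throws otherwise.
async function apiCall<T>(url: string): Promise<T> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json() as Promise<T>;
}

interface Booking { seats: string[] }
const renderSeat = (seat: string) => console.log("seat:", seat);

// Calling code uses the returned object directly...
const booking = await apiCall<Booking>("/api/booking/123");
booking.seats.forEach(renderSeat);
// ...so if the endpoint replies 200 with { "error": "sold out" } instead,
// booking.seats is undefined and this blows up at runtime. Avoiding that means
// extra per-endpoint code between apiCall and the success handler, inspecting
// the body for an ad-hoc error field.
```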
Ehh, that really feels like "but other people do it wrong too" to me; half the 4xx error codes are application-layer errors, for example (404 ain't a transport-layer error, and neither is 403, 415, 422 or 451).
It also complicates actually processing the request, as you've got to duplicate error handling between "request failed" and "request succeeded but actually failed". My local cinema actually hits exactly that: their web frontend expects the backend to return errors, but the backend lies and says everything was successful, and then certain things break in the UI.
Well no, the HTTP error codes are about the entire request, not just whether or not the actual header part was received and processed right.
Like HTTP 403, HTTP only has a basic form of authentication built in, anything else needs the server to handle it externally (e.g. via session cookies). It wouldn't make sense to send "HTTP 200" in response to trying to access a resource without being logged in just because the request was well formed.
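For example, a sketch using the standard Request/Response types (the handler and cookie check are made up for illustration): the status line reports the outcome of the whole request, so an unauthenticated request for a protected page gets a 403 even though the request itself was perfectly well formed.

```typescript
// Hypothetical handler: auth rides on a session cookie that the server
// checks itself, since HTTP only has basic auth built in.
function getAccountPage(req: Request): Response {
  const cookies = req.headers.get("Cookie") ?? "";
  const loggedIn = cookies.includes("session="); // placeholder check
  if (!loggedIn) {
    // The request was well formed, but the outcome is "not allowed",
    // so the status line says so rather than a 200 with an error body.
    return new Response("Forbidden", { status: 403 });
  }
  return new Response("<h1>Your account</h1>", {
    headers: { "Content-Type": "text/html" },
  });
}
```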
60 in particular is a superior highly composite number, 12 divisors compared to a paltry 8 for 24.
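Easy to sanity-check with a throwaway snippet:

```typescript
// Count the divisors of n the naive way.
function divisorCount(n: number): number {
  let count = 0;
  for (let d = 1; d <= n; d++) {
    if (n % d === 0) count++;
  }
  return count;
}

console.log(divisorCount(24)); // 8  (1, 2, 3, 4, 6, 8, 12, 24)
console.log(divisorCount(60)); // 12 (1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60)
```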
It’s just too complex to implement HTML / CSS / JavaScript and all the other stuff correctly from scratch.
It depends on what you're trying to do really: if you're trying to keep pace with Google then yeah, it's insurmountable (Microsoft literally couldn't do it), but if you just want basic functionality then that's actually rather static and unchanging.
Though it doesn't help when sites use JS for literally everything, and the vast majority do so incorrectly.
1596.645 × .001 = 1.596645kg of heroin by weight
That's 0.1%, not 0.001%
0.001% of 440 gallons is 0.5632 fl oz (or just over 16ml out of 1,666 liters)
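Spelling out the conversion (1 US gallon = 128 fl oz ≈ 3.785 L):

```typescript
// 0.001% of 440 US gallons, converted to fl oz and ml.
const gallons = 440;
const fraction = 0.00001;            // 0.001% as a fraction
const part = gallons * fraction;     // 0.0044 gal
console.log(part * 128);             // 0.5632 fl oz (128 fl oz per gallon)
console.log(part * 3.78541 * 1000);  // ≈ 16.7 ml (3.78541 L per gallon)
console.log(gallons * 3.78541);      // ≈ 1665.6 L, i.e. the "1,666 liters"
```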
So it's an "open standard", not in the sense that anybody can contribute to the development, but in the sense that the details of the standard are open and you can learn about them.
The format itself is an XML version of the existing Office document formats, and those grew organically over decades with random bugs, features, and bug-compatibilities with other programs. e.g. there will be a random flag on an object that makes no sense, but is necessary for interoperating with some Lotus 1-2-3 files a company had, who worked with Microsoft to support it back in the 90s. Things you can't change, that nobody really cares about, but that get written down because the software already implements them (and will sometimes emit them).
You can turn the feature off entirely, or just not talk to people who post them? It's not something like TikTok where you get pushed a bunch of random videos, it's stuff that people you know are sending you.
JXL is two separate image formats stuck together: an improved version of JPEG that can also losslessly and reversibly recode most existing JPEG images at a smaller size, and a PNG-like format (evolved from FLIF/FUIF) that can do lossless or lossy encoding.
"VarDCT" (the improved JPEG) turns out to be good enough that the "Modular" mode (the FLIF/FUIF-like one) isn't needed much outside of lossless encoding. One neat feature of modular mode, though, is that it progressively encodes the image at different sizes: if you decode the stream as the bytes come in, you start with a small version of the image and get progressively larger output until you have the original.
Why is that useful? Well, you can encode a single high-DPI image (e.g. 2x scale), and then clients on 1x-scale monitors can just stop decoding the image at a certain point and get a half-sized image out of it. You don't need separate per-DPI variants.
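Just to illustrate the idea, here's a sketch with a completely hypothetical decoder interface (the real thing would go through libjxl, whose API looks nothing like this): keep feeding bytes until the partially decoded image is big enough for the display, then stop.

```typescript
// Hypothetical progressive decoder interface, purely for illustration.
interface ProgressiveDecoder {
  push(chunk: Uint8Array): void; // feed more of the stream
  currentWidth(): number;        // width of the best image decoded so far
  currentImage(): ImageData;     // that image
}

declare function createDecoder(): ProgressiveDecoder;

// Decode a 2x-scale asset, but stop as soon as we have enough pixels
// for a 1x display: no separate per-DPI files needed.
async function decodeForDisplay(
  stream: AsyncIterable<Uint8Array>,
  targetWidth: number,
): Promise<ImageData> {
  const dec = createDecoder();
  for await (const chunk of stream) {
    dec.push(chunk);
    if (dec.currentWidth() >= targetWidth) break; // good enough, stop reading
  }
  return dec.currentImage();
}
```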