I don't know of any off the top of my head, but with a cheap digital caliper and Tinkercad, I assume you could model one fairly trivially. You could friction-fit two halves around the cable and secure them with a bit of adhesive, or some kind of bolt/nut fastener if you wanted to get clever.
Depends on where you work and what their policies are. My workplace does have strict policies on following licenses, protecting sensitive data, etc.
My solution was to MIT license and open source everything I write. It follows all policies while still giving me the flexibility to fork/share the code with any other institutions that want to run something similar.
It also has the added benefit of forcing me to properly manage secrets, gitignores, etc.
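For anyone curious what that forcing function looks like in practice, here's a minimal sketch (the variable and file names are placeholders, not from any particular project): secrets come from the environment, and the repo only ever tracks the ignore rules.

```python
import os

# Secrets come from the environment (or a vault), never from the repo.
# The names here are placeholders for whatever your project actually needs.
API_TOKEN = os.environ["MYAPP_API_TOKEN"]                              # fails loudly if missing
DB_URL = os.environ.get("MYAPP_DB_URL", "postgresql://localhost/dev")  # safe local default

# The matching .gitignore entries keep local config out of version control:
#   .env
#   *.secret
```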
The canvas API exposes hardware-specific behavior that isn't usually available through other browser APIs. Detailed capability information about a user's GPU, for example, is normally hard to get at. Canvas needs that information to decide how to draw objects across differently capable hardware, and those extra data points make it that much easier to uniquely identify a user. The more data points you can collect, the more unique each visitor becomes.
The EFF has a good utility that demonstrates the concept (Cover Your Tracks, formerly Panopticlick) if you or anyone else is curious.
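To put some numbers behind the "more data points" claim, here's a rough sketch of the surprisal math that fingerprinting demos like the EFF's are based on. The attribute frequencies below are made-up placeholders, not real measurements:

```python
import math

# Each attribute contributes -log2(frequency) bits of identifying information,
# and independent attributes add together. Frequencies here are invented
# for illustration only.
attribute_freqs = {
    "user_agent": 1 / 500,            # 1 in 500 visitors share your exact UA string
    "screen_resolution": 1 / 50,
    "installed_fonts": 1 / 2000,
    "canvas_render_hash": 1 / 10000,  # GPU/driver/font quirks baked into the pixels
}

total_bits = 0.0
for name, freq in attribute_freqs.items():
    bits = -math.log2(freq)   # surprisal of this single attribute
    total_bits += bits
    print(f"{name}: ~{bits:.1f} bits")

print(f"combined: ~{total_bits:.1f} bits (roughly one visitor in {2 ** total_bits:,.0f})")
```

Each attribute on its own narrows you down a little; together they get close to a unique identifier.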
Just think, an extra long shirt can cover that hole, and we could embed a flexible display, wifi module, and a camera in the extra space. This could scan the faces of those around you, and display personalized ads! This is an excellent solution to the hole in your pants, and frankly, the only secure one.
You're correct that nesting namespaces is unlikely to introduce measurable performance degradation. For performance, I was thinking mostly of the nested virtual network stack adding latency; both Docker and LXC run their own virtual interfaces.
There's also the issue of running nested AppArmor, SELinux, and/or seccomp checks on processes in the child containers. I know that a single instance of those is often enough to hurt performance on highly latency-sensitive applications (SAP NetWeaver is the example that comes to mind), so I would imagine two layers of those checks would exacerbate those concerns.
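If you want to actually measure that overhead, a crude approach is to run the same round-trip probe from the host, from a single container, and from a nested container against one fixed echo service and compare the distributions. This is just a sketch of that idea; HOST and PORT are placeholders for wherever you run the echo server:

```python
import socket
import statistics
import time

HOST, PORT = "10.0.0.10", 7   # placeholder: an echo service on another machine

def measure_rtt_ms(samples: int = 200) -> list[float]:
    """Time a 1-byte send/receive round trip over an already-open TCP connection."""
    times = []
    with socket.create_connection((HOST, PORT), timeout=2) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # don't batch tiny writes
        for _ in range(samples):
            start = time.perf_counter()
            s.sendall(b"x")
            s.recv(1)
            times.append((time.perf_counter() - start) * 1000)
    return times

if __name__ == "__main__":
    rtts = measure_rtt_ms()
    print(f"median {statistics.median(rtts):.3f} ms, "
          f"p99 {statistics.quantiles(rtts, n=100)[98]:.3f} ms")
```

Run it in all three environments and the delta between them is the cost of the extra veth hops and LSM checks (plus noise, so take plenty of samples).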
VSCodium has been a very usable replacement for me. You lose some of the MS first-party plugins (the remote SSH one being the most notable), but largely it just works otherwise.
There have been numerous leaks indicating a RAM and compute bump internally. The OG Switch is old, and it was underpowered when it launched. The Switch 1 derivatives (think the OLED variant) had internals comparable to the launch model.
You are correct that nothing has been advertised about the specs, but these are leakers with a good track record, and it stands to reason that Nintendo would want to have modern hardware on the market to make porting and development more attractive prospects for developers. They also have a history of re-selling old content through online e-shops that rarely (if ever?) persist through generations.
Everything definitely lines up with this being a significant hardware change to ensure future revenue streams exist for Nintendo.
The proper DeepSeek R1 requires about 500 GB of RAM/VRAM to run, which is vastly more memory than any modern phone has. The smaller models being called "DeepSeek R1" are distilled variants, not the real DeepSeek model that everyone is talking about.
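As a rough back-of-envelope (the full R1 checkpoint is ~671B parameters; exact figures vary with quantization and runtime overhead like the KV cache, so treat these as ballpark numbers):

```python
# Model memory is roughly parameters x bytes per weight, before runtime overhead.
params = 671e9  # approximate parameter count of the full DeepSeek R1 checkpoint

for label, bytes_per_weight in [("fp16", 2), ("fp8", 1), ("4-bit", 0.5)]:
    print(f"{label}: ~{params * bytes_per_weight / 1e9:,.0f} GB of weights")

print("vs ~12 GB of RAM in a high-end phone")
```

Even the heavily quantized builds are hundreds of gigabytes, which is why the phone-sized "R1" models are something else entirely.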
When you're the size of LMG you don't hire investigative law firms for PR; you do it for liability. The goal is to limit corporate liability by removing individuals likely to get you sued, and most importantly to distance leadership from it with plausible deniability. The firm also has its own reputation to consider, and wouldn't let a client get away with materially misrepresenting their results.
I don't think it's unreasonable to suggest that a positive finding from an investigative firm is evidence supporting their position that they, materially, did nothing wrong. The fact that no one was fired as a result of that investigation is a good sign externally, as it would have opened them up to more liability if they had known about wrongdoing and done nothing.
The source for this compat library is in their published sources last I checked, but because it's not part of their standard repos it doesn't technically have to be. I suspect that's the eventual end goal.
A lot of industries are semi-forced into it. Let me give you an example I know of first-hand. Modern SAP stacks support three operating systems: Windows Server, RHEL, and SuSE.
You're probably thinking to yourself: "but rhel is just regular linux, surely you can install it on anything if you have the appropriate dependencies, I'll bet it even just works on rhel-compatibles like rocky, alma, or centos stream!"
And you would be sort of right, but wrong in the most dystopian way possible. The installer itself does hardcoded checks for "compatible" operating systems, using /etc/os-release and a few other common system files. Spoofing those to rhel 8.5 or whatever is easy enough, but the one that really gets you is a dependency for compat-glibc-X.Y-ZZZZ.x86_64. This "glibc compatibility library" is conveniently only accessible via a super special redhat repository granted by a super special sap license (which is like ~$2,000/year/cpu). Looking at the redhat sources it is actually just a bog-standard semi-modern glibc compile with nothing special. The only other thing you get with this license as far as I can tell is another metapackage that installs dependencies, and makes a few kernel tweaks recommended by SAP.
So you can install it on alma/rocky by impersonating rhel in /etc/os-release and then compiling a version of glibc and linking it into the special hardcoded location, but SAP/Redhat put as many roadblocks in your way as possible. It took me weeks of reverse-engineering the installer to get our farm off the ~$100k/yr that redhat wanted to charge us for what was essentially a stock glibc build and a dependency metapackage.
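For anyone attempting the same thing, here's a minimal sketch of the /etc/os-release side of it. This is not the actual installer logic, just an illustration of the kind of fields a hardcoded "supported OS" check reads; the exact strings the installer accepts are the part you have to reverse-engineer yourself:

```python
from pathlib import Path

def parse_os_release(path: str = "/etc/os-release") -> dict[str, str]:
    """Read the key=value pairs a compatibility check cares about."""
    fields = {}
    for line in Path(path).read_text().splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            fields[key] = value.strip().strip('"')
    return fields

# What an alma/rocky box would need to present to look like rhel 8.5.
# These are plausible values, not ones copied out of the SAP installer.
rhel_like = {
    "ID": "rhel",
    "VERSION_ID": "8.5",
    "PRETTY_NAME": "Red Hat Enterprise Linux 8.5 (Ootpa)",
}

current = parse_os_release()
for key, wanted in rhel_like.items():
    print(f"{key}: have {current.get(key)!r}, need something like {wanted!r}")
```

The glibc half is the tedious part: building the same version the compat package ships and dropping it into the hardcoded path the installer links against.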
It was an adblock-specific paywall.