This is for models that use vertex colors. I want all faces rendered at the object's maximum brightness, so in daytime lighting it should look unshaded, but in a dark environment it should darken (like the thumbnail).
What I've tried:
using vertex_lighting and then setting the normal directly (e.g. NORMAL = vec3(0.0, 1.0, 0.0);)
this works, but only if the light is perfectly overhead... a slightly off angle, or even moving too far toward the center of an OmniLight, creates shading
attempting to save the original color value, change the normal, and then restore the old brightness via max() if the new value is dimmer
lots of moving parts/assumptions on my part here
other math stuff with NORMAL/VERTEX, abs() and normalize() etc
on top of the direction failure, there's also inconsistent brightness (with a spinning animation) and darkening across a TextMesh
I also know I could do this with unshaded and just multiply/mix it in the shader, but I don't r
As a Reddit (and RIF) refugee, I'm searching for a space to share various useful information where it can be easily found later. The challenge is that the definition of "useful" varies among individuals, and I suspect that Reddit and other for-profit moderators are overly influenced by corporate interests.
In the past, I spent considerable time figuring out how to accomplish tasks unrelated to my primary goals: tasks I usually only needed to do once. Now, I tend to ask AI tools like ChatGPT, Perplexity, DeepSeek, or other available models to assist with the task at hand. I've learned that by doing so, I can condense days or even weeks of effort into just a few hours, if not less.
However, a significant issue is that these AIs don't currently learn from my questions, and the knowledge they provide isn't propagated beyond the AI itself, unlike traditional sources. The AI models offering this information are ess
Like if I'm using print statements to test my code. Is it okay to leave stuff like that in there when "publishing" the app/program?
Edit: So I meant logging. Not "tests". Using console.log to see if the code is flowing properly. I'll study up on debugging.
Also, based on what I managed to grasp from your very helpful comments, it's not okay to do, but how much of an issue it is depends on the context?
Either that or it's completely avoidable in the first place if I just use "automated testing" or "loggers".
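For what it's worth, here's a minimal sketch of what I understand "using loggers" to mean (Python purely for illustration; JavaScript's console has a similar split with console.debug vs console.error). The debug calls stay in the code, and "publishing" just means raising the log threshold instead of deleting them:

```python
import logging

# Module-level logger; the name lets this module's output be controlled separately.
logger = logging.getLogger("myapp")

def add(a, b):
    result = a + b
    # The old print() becomes a debug message: kept in the code, emitted only when enabled.
    logger.debug("add(%s, %s) -> %s", a, b, result)
    return result

# During development: show everything, including debug messages.
logging.basicConfig(level=logging.DEBUG)

# When "publishing": raise the threshold; the debug calls become no-ops.
logger.setLevel(logging.WARNING)
```

So the calls don't have to be removed before release, they just stop producing output once the level is raised.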
The transition to the new verdicts will reduce the device signals that need to be collected and evaluated on Google servers by ~90% and our testing indicates verdict latency can improve by up to ~80%.
such a huge reduction suggests to me that the attestation is being offloaded from Google servers to on-device AI, but maybe I assume wrong. My instinct tells me Google would always make this impossible for a 3rd-party OS to implement anyway.
Hypothetically, if implementing that AI in Graphene could allow most attestation-requiring apps to install and run normally, is that something the Graphene devs would do? I know it would have to be secure and private, so assuming there was a way...
In my understanding, the build options need to be customized for each machine, and making packages for tons of distros can be a lot of work for solo or small-team devs; that's why some software is provided as a .tar only.
But it seems like the install process on the user side could be automated down to a single command or drag-and-drop, as long as the script throws informative alerts for any errors and the user is prepared to take over manually.
Does something like this exist in a standalone form that isn't bundled like snaps or Flatpaks?
If not, is there a broadly applicable reason (security, damaging the OS, etc.) that makes it a terrible idea? Or has simply no one gotten to it?
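To make concrete what I mean by "automated with informative alerts," here's a rough sketch (Python purely for illustration; the commented commands are the classic hypothetical configure/make steps, not any real project's):

```python
import subprocess

def run_steps(steps):
    """Run each command in order; stop with an informative message on the first failure,
    so the user knows exactly where to take over manually."""
    for cmd in steps:
        print("==> " + " ".join(cmd))
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(
                "step failed (%s), exit code %d:\n%s"
                % (" ".join(cmd), result.returncode, result.stderr.strip())
            )

# A classic tarball install would then be something like:
# run_steps([["./configure"], ["make"], ["sudo", "make", "install"]])
```

The point is just that the happy path is one command, and any failure surfaces the failing step and its stderr instead of silently breaking.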
Many terminals are capable of displaying multiple fonts at the same time, say Latin Unicode characters in font foo and Japanese Unicode characters in font baz. In urxvt at least, it is also possible to have one font in a certain size and the next font in another size. However, no font can have a size bigger than the base size, the size of a terminal cell.
Why is it not possible to have multiple terminal cell sizes? For example, one line with cell size 8 and the next line with cell size 12.
Limitations of using drag-and-drop images in README.md?
One I am aware of is the size limit: no image should be larger than 10 MB. Are there any other limitations when using this (for example: retention period, storage capacity, etc.)?
I want to link those images outside GitHub.
I am aware of uploading images to the repository and linking by
Hello. I have a Windows/Ubuntu dual boot and I'm trying to move space from Windows to Ubuntu. I've already freed space on the Windows side.
I'm pretty sure I've read online that it can be dangerous to move the unallocated space, because the next boot into Windows can corrupt my Ubuntu system.
Is it true?
Also, when I try to move the unallocated space, there's no "move/resize" option for it, so I swap it with each adjacent partition one at a time. Is that the right way to do it?
So I'm a baby dev, still in Uni and they don't allow internships in 4th year due to some issues with it so not even that exp wise.
I don't know enough, and I'm trying to learn but there's so much! My Uni degree doesn't cover security at all. Which is shit, bc I think I want to work in that? Mostly I'm just spooked and want to understand everything I can 'cause I love the internet and want to feel safer wandering about it.
I'm scared of clicking on links. Even ones here, like there was a post about a book list earlier and I was just there like "Cmoon.... someone please have posted the lissssst."
Would anyone be willing to share what they do for their own security? Especially if it's ridiculously over the top. Included reasonings and details would be adored!
Also, if anyone has any books or references that might be good for learning sec from a programmatic view rather than an IT view, I'd really love that! Anything at all.
Regardless, hope anyone reading this has an absolutely wo
I can't locally download the 41 GB image that's provided to replicate the GitHub Action locally, but I don't want to commit and check every time either. Is there a third option?
I think from what I've read that this is the case, but I've read some other info that's made it less clear to me.
On the second part of the question, regarding container engines, I'm pretty sure that may also be correct, and it makes me wonder a little about the risks of engine lock-in, but that may be a little out of scope.
Short explanation of the title: imagine you have a legacy mudball codebase in which most service methods are usually querying the database (through EF), modifying some data, and then saving it at the end of the method.
This code is hard to debug, impossible to write unit tests for, and generally performs badly because developers often make unoptimized or redundant DB hits in these methods.
What I've started doing is making all the data loads before the method call, putting the results in a generic cache class (it's mostly dictionaries internally), and then passing that as a parameter or member variable to the method. Everything in the method then gets or saves data through that cache; it's not allowed to do DB hits on its own anymore.
I can now also unit test this code, as long as I manually fill the cache with test data beforehand. I just need to make sure that I actually preload everything in advance (which is not al
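In case it helps to see the shape of the pattern: the real code is C#/EF, but the idea is language-agnostic, so here's a rough Python sketch with every name invented for illustration. Everything is preloaded into a dictionary-backed cache, the service method only touches the cache, and a unit test just fills the cache by hand:

```python
class DataCache:
    """Dictionary-backed cache; service methods may only touch this, never the DB."""
    def __init__(self):
        self._tables = {}  # table name -> {id: row}

    def preload(self, table, rows):
        # Done once, up front, before the service method runs.
        self._tables[table] = {row["id"]: row for row in rows}

    def get(self, table, row_id):
        return self._tables[table][row_id]

    def save(self, table, row):
        self._tables[table][row["id"]] = row

def apply_discount(cache, order_id, pct):
    # No DB access in here: everything comes from the preloaded cache.
    order = cache.get("orders", order_id)
    order["total"] = order["total"] * (1 - pct)
    cache.save("orders", order)

# Unit test setup: fill the cache manually instead of hitting a database.
cache = DataCache()
cache.preload("orders", [{"id": 1, "total": 100.0}])
apply_discount(cache, 1, 0.1)
```

The method itself becomes a pure function of the cache's contents, which is what makes it testable and keeps the DB hits visible in one place at the call site.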
What are some alternatives to browsing instagram "outside" of the proper app?
I put nitter.net and Invidious as examples, since they let you browse Twitter and YouTube "outside" the official site/app and without being tracked. Instagram hates me trying to look at anything from the computer.