

I almost spit out my coffee from this meme!
I understand this change by Bitwarden, but I wish they gave us the option to turn it off, or at least gave us more time before forcing it on us.
There are a lot of comments talking about how this increases security, which is true. But it also increases the risk of account lockout, especially in two scenarios: traveling and incapacitation.
Traveling - for those of us who travel frequently, we carry all of our belongings with us, which makes us particularly vulnerable to account lockouts. We can't securely store backup devices or documents in easily accessible locations, and we can't easily rely on trusted friends or family because they are so far away. Also, internet accounts are more likely to lock us out anyway, because logging in from a different country looks like suspicious behavior.
Incapacitation - god forbid, if there comes a time when we are permanently or temporarily incapacitated, it becomes important for our loved ones to access our accounts. I personally have advance directives and have worked with an estate lawyer to make sure that my Bitwarden account becomes available. I also have instructions for immediate trusted family on how to access my vault if I were ever in the hospital. With this short notice, I need to scramble to get all of that updated and provide a way for them to access the account without my 2FA devices.
The above scenarios are based on my real experience. These are real and likely risks that I have to account for. Security is not just making sure that outside bad actors CANNOT gain access; it also means that the right people CAN get access at the right time.
What am I going to do? I'm weighing my options.
- I believe the self-hosted version of Bitwarden does not require this. This comes with its own set of risks though.
- Pay for premium, which comes with lockout support - I need to see if this can cover both of the scenarios above.
- Turn on 2FA and memorize the recovery code. While viable, since I will only use the recovery code once, I'm likely to forget it.
- Change the email to a non-2FA email address, only used by Bitwarden, with a strong but easily memorable password. This email must allow access from foreign countries without lockout (gmail is out). I'm actually strongly considering this.
This is being purposefully obtuse. Choosing to force users to memorize a recovery code increases the likelihood of lockouts.
There is a real risk of account lockout, especially for those of us who travel frequently. Lockouts are a significant risk when you need to carry all your belongings and devices.
There are also some of us who think about what happens when we are incapacitated and a loved one needs access to our passwords. In that situation, it's important to balance security against expedient access to critical information. This new policy disrupts that.
At the very least, I wish Bitwarden had given us more time before enforcing this policy. I have to scramble to make changes to my estate planning documents and get in contact with my lawyer to change my advance healthcare directives.
One of my favorite achievements from a space agency. I hope we can return to the Saturnian system with more landing probes!
Then create one venv for everything
This is a classic piece, and I love the contradictions in the text. It encapsulates my feeling that good software and code are almost more of an art than a science.
PSA for Debian Testing users: read the wiki
https://wiki.debian.org/DebianTesting
Ctrl-F "security" returns 18 results. This is well known, and there are even instructions on how to get faster updates in testing if you want.
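If I remember the wiki right, the gist is to track unstable alongside testing with apt pinning, so fixes can be pulled from unstable before they migrate - roughly something like this in /etc/apt/preferences:

```
# Prefer testing by default, but keep unstable available at low priority
Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 300
```

Then, when a security fix hasn't reached testing yet, you can install it explicitly with `apt install -t unstable <package>`.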
My thought was that a lawsuit is more expensive than arbitration, but settling a class action lawsuit is cheaper than thousands of arbitrations.
Took me a sec.
Thanks for sharing. All of our tests are pytest-style and use pytest fixtures. I'll keep my eyes open for memory issues when we test upgrading to Python 3.12+.
Very helpful info!
I'm most excited about the new REPL. I'm going to push for a 3.13 upgrade as soon as we can (hopefully early next year). I've messed around with rc1, and the REPL is great.
Do you know why pytest was taking up so much RAM? We are also on 3.11, and I'm probably going to wait until 3.13 is usable for us.
EOL for 3.8 is coming up in a few short weeks!
So cool!! Mercury is definitely the most mysterious of the inner planets because of how difficult it is to get a space probe there, even though it's the closest planet to the Sun.
The spacecraft will arrive next year, and I can't wait for all the science it will uncover!
Haha, I've been waiting for the 4K/8K reference in this volume. Poor Anna.
I also like the POSIX “seconds since 1970” standard, but I feel that should only be used in RAM when performing operations (time differences in timers etc.). It irks me when it’s used for serialising to text/JSON/XML/CSV.
I've seen bugs where programmers tried to represent a date as epoch time in seconds or milliseconds in JSON. Something like "pay date" would be represented by a timestamp, and would pick up off-by-one errors because whatever time library the programmer was using would do a time zone conversion on the timestamp and then truncate to the date.
If the programmer had used ISO 8601-style formatting, they wouldn't have included a time part, and the bug could have been avoided.
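A quick illustration of the failure mode in Python (the date is made up, and the local result depends on the machine's time zone):

```python
from datetime import date, datetime, timezone

# A "pay date" of 2024-03-15, serialized as midnight UTC in epoch seconds.
ts = datetime(2024, 3, 15, tzinfo=timezone.utc).timestamp()  # 1710460800.0

# A consumer converts to local time and then truncates to a date.
# In any zone west of UTC this lands on the previous evening:
local = datetime.fromtimestamp(ts)  # e.g. 2024-03-14 17:00 in US Pacific
print(local.date())                 # 2024-03-14 in such zones -- off by one day

# An ISO 8601 date string has no time part, so there is nothing to shift:
print(date.fromisoformat("2024-03-15"))  # 2024-03-15
```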
Use dates when you need dates and timestamps when you need timestamps!
Do you use it? When?
Parquet is mainly used for big-data batch processing. It's a columnar file format optimized for large aggregation queries. It's not human-readable, so you need a library like Apache Arrow to read and write it.
I would use parquet in the following circumstances (or combination of circumstances):
- The data is very large
- I'm integrating this into an analytical query engine (Presto, etc.)
- I'm transporting data that needs to land in an analytical data warehouse (Snowflake, BigQuery, etc.)
- The data is consumed by data scientists, machine learning engineers, or other data engineers
Since the data is columnar, doing queries like `select sum(sales) from revenue` is much cheaper and faster if the underlying data is in Parquet than CSV.
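As a toy sketch with pandas (file and column names made up):

```python
import pandas as pd

df = pd.DataFrame({"region": ["US", "EU", "US"], "sales": [100, 250, 75]})
df.to_parquet("revenue.parquet")  # needs pyarrow or fastparquet installed

# Columnar layout: the reader pulls just the "sales" column and skips the rest.
sales = pd.read_parquet("revenue.parquet", columns=["sales"])
print(sales["sales"].sum())  # 425
```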
The big advantage of CSV is that it's more portable. CSV as a data file format has been around forever, so it is used in a lot of places where Parquet can't be used.
Wow, everyone seems to love P3, but I actually liked P4 better. I mean, I really enjoyed both, but P4 was a more immersive experience for me. I should boot up my Vita and play it again.
I really felt like P4 had deeper connections and relationships between the characters. It felt more real, and that made the tension in the game more exciting. I loved every second of it and am still trying to find a game like it.
Don't get me wrong, P3 was great too. The gameplay was superb and the characters were all great. But P4 still has a special place in my heart.
The autocomplete is nice, but I don't find it a game-changer. The comment about writing tests is on point, though; that's the only place I've found it useful.
They're asking TV manufacturers to block a VPN app on the TV, not to block VPNs in general.
Is there software that tracks internal dependencies for CI/CD?
Here's a hypothetical scenario at a company: we have two repos that build and deploy code as tools and libraries for other apps at the company. Let's call them `lib1` and `lib2`. There's a third repo, let's call it `app`, that is application code depending on `lib1` and `lib2`.
The hard part right now is keeping track of which versions of `lib1` and `lib2` are packaged with `app` at any point in time.
I'd like to know at a glance which versions of `app` were deployed, say, 1 month ago, and which versions of `lib1` and `lib2` they were using. Ideally, I'm looking for a software solution that's agnostic to any CI/CD build system, and doubly ideally an open source one. Maybe a simple web service you call with some metadata, and it displays it in a nice UI.
Right now, we accomplish this by looking at logs and git commit history and sticking things together. I know I could build a custom solution pretty easily, but I'm looking for something more out-of-the-box. A rough sketch of what I mean follows below.
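The shape of the custom solution I'd build if I had to (all names hypothetical): CI/CD appends a deployment manifest per release, and you query it by date:

```python
import json
from datetime import datetime, timezone

MANIFEST_LOG = "deployments.jsonl"  # hypothetical append-only log

def record_deployment(app_version, lib_versions):
    """Called from CI/CD after each deploy; any build system can emit this."""
    entry = {
        "deployed_at": datetime.now(timezone.utc).isoformat(),
        "app": app_version,
        "libs": lib_versions,  # e.g. {"lib1": "1.4.2", "lib2": "0.9.0"}
    }
    with open(MANIFEST_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def versions_as_of(cutoff_iso):
    """Return the latest manifest at or before an ISO timestamp."""
    latest = None
    with open(MANIFEST_LOG) as f:
        for line in f:
            entry = json.loads(line)
            if entry["deployed_at"] <= cutoff_iso:  # ISO strings sort correctly
                latest = entry
    return latest

record_deployment("2.3.1", {"lib1": "1.4.2", "lib2": "0.9.0"})
print(versions_as_of("2099-01-01T00:00:00+00:00"))
```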
Trying to figure out why comments aren't showing up on other instances
The problem with federated web apps
Trying to make web applications federated is a popular effort. Examples include the "fediverse", as well as various other efforts, like attempts to make distributed software forges, and so on. However, all of these efforts suffer from a problem that is fundamental to building federated applications on top of the web platform.
The problem is fundamentally this: when building an application on top of the web platform, an HTTP URL inherently couples an application and a resource. A fediverse post's URL, for example, both names the post and points at one particular server's web application, so anyone following it is handed that server's app rather than their own.

Seth Michael Larson pointed out that the Python gzip module can be used as a CLI tool like this:
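```
# compresses example.txt to example.txt.gz (file name here is just an example)
python -m gzip --best example.txt

# decompresses it back
python -m gzip -d example.txt.gz
```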

For varying levels of seniority, from senior to staff and beyond.
I generally don't like "listicles", especially ones that try to make you feel bad by suggesting that you "need" these skills as a senior engineer.
However, I do find this list valuable because it serves as a self-reflection tool.
Here are some areas I am pretty weak in:
- How to write a design doc, take feedback, and drive it to resolution, in a reasonable period of time
- How to convince management that they need to invest in a non-trivial technical project
- How to repeat yourself enough that people start to listen
Anything here resonate with y'all?
Please Go Home, Akutsu-san! - Ch. 144

