A reflection on control, burnout, and the strange weight of technical fluency.

Reposted from Hacker News; it was #6 when I looked, so you may have already seen it, but I thought it was worth reposting. I am not the author.

I thoroughly agree; you should always have CI tools to ensure it builds, passes tests, and meets whatever formatting and/or linting standards the team sets. I was specifically responding to "Rust makes it harder for a ‘contributor’ to sneak in LLM-generated crap".

If I get a contribution from an untrusted party, I will start with the assumption that it's utter garbage: buggy, broken, and malicious, and review it until I'm convinced it's not. Not because I assume the dev is bad, but because it's safer to assume the code is garbage. If I get a contribution from a trusted party (e.g. a member of the dev team/employee/whatever), I will review the code carefully, though not with as much paranoia.

I don't particularly care if my teammates are using LLMs, but if they're submitting code they don't understand, that's a great way to get ejected from the "trusted contributors" group, and if they're an employee it's a good way to get fired if they keep doing it after being warned not to.

That does sound unpleasant, and I can understand why you prefer Windows. Personally, I rarely have problems with Linux that aren't self-inflicted, and IMO Windows is an absolute garbage fire of an OS, so there's no way I'd ever daily drive it.

In what situation are you accepting contributions that you haven't vetted thoroughly enough to detect crap code? I've seen a lot of crap from developers that's as bad as or worse than LLM-generated crap, so there's no way I'll ever accept contributions to an important system without thoroughly vetting them, unless they're from one of the very few people I trust implicitly.

I’ve had success with Claude, but there’s always a layer of separation. I ask it to do something, read what it produced, decide if it’s garbage or not, and rewrite or discard as necessary. Though, counting by LOC, I’ve mainly used it for writing tests.

I didn't say never copy and paste. I'm saying that when you push a commit, you should understand what all the LOC in that commit do (not counting vendored dependencies). If you don't understand how something works, like crypto (not sure what Hamilton or Euler refers to in this context), ideally you would use a library. If you can't, you should still understand the code sufficiently well to be able to explain how it implements the underlying algorithm. For example, if you're writing a CRC function, you should be able to explain how your function implements the CRC operations, even if you don't have a clue why those operations work.
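To make the CRC example concrete, here's a minimal sketch in Go of the kind of function I mean. You should be able to walk through it operation by operation even if you can't derive the underlying math:

    package main

    import "fmt"

    // crc32 computes a CRC-32 (IEEE, reflected) bit by bit. Each step is a
    // CRC operation you should be able to explain: XOR the next byte into
    // the register, then for each bit, shift right and XOR in the
    // polynomial whenever the low bit was set.
    func crc32(data []byte) uint32 {
        const poly = 0xEDB88320 // reflected IEEE polynomial
        crc := ^uint32(0)       // initial value: all ones
        for _, b := range data {
            crc ^= uint32(b)
            for i := 0; i < 8; i++ {
                if crc&1 == 1 {
                    crc = (crc >> 1) ^ poly
                } else {
                    crc >>= 1
                }
            }
        }
        return ^crc // final XOR
    }

    func main() {
        fmt.Printf("%08x\n", crc32([]byte("123456789"))) // standard check value: cbf43926
    }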

I said you need to understand what the code you wrote (as in, the LOC that git blame will blame on you) does, not that you need to fully understand what the code it calls does. It should be pretty obvious from context that I'm referring to copy-pasting code from Stack Overflow or an LLM or whatever without knowing what it does.

If you are submitting work, you should understand how the code you're submitting works. Sure, you don't have to know exactly how the code it calls works, but if there's a block of code in your submission and you have no clue how that block works, that's a problem.

There's a huge difference between copy-pasting code you don't understand and using a library with the assumption that the library does what it says on the tin. At the very least there's a clear boundary between your code and not-your-code.

Are you seriously trying to equate "I don't know which instructions this code is using" with "I copied code I don't understand"? Are you seriously trying to say that someone who doesn't know how to write x = a + b in assembly doesn't understand that code?

If you’re adding code you don’t understand to a production system, you should be fired.
Edit: I assumed it was obvious from context that I’m referring to copy-pasting code from Stack Overflow or an LLM or whatever without knowing what it does, but apparently that needs to be said explicitly.

I guess I just don't see enough memes to have picked up on that.

Marketing. People expect to see different things on a website vs Twitter/X, so the same content won't perform the same on each. For a business, then, it makes sense to post different things on your website vs Twitter/X.

I’m not sure what to tell you. I just don’t see what you do. And I never bother to look at a meme close enough to notice the kind of details the other user pointed out.

How can you tell?

nasm is an assembler though, not a ‘language’
That's like saying "clang is a compiler though, not a language". It's correct but completely beside the point. Unless you're writing a compiler, "cross-platform assembler" is kind of an insane thing to ask for. If you want to learn low-level programming, pick a platform. If you're trying to write a cross-platform program in assembly, WHY!? Unless you're writing a compiler. But even then, in this day and age, using a cross-platform assembler is still kind of an insane way to approach that problem; take a lesson from decades of progress and do what LLVM did: use an intermediate representation.

I’ve genuinely never had a problem with it. If something is wrong, it was always going to be wrong.
Have you worked on a production code base with more than a few thousand lines of code? A bug is always going to be a bug, but 99% of the time it's far harder to answer "how is this bug triggered?" than it is to actually fix the bug. How the bug is triggered is extremely important.
Why is it preferable to have to write a bunch of boilerplate rather than just dealing with the stack trace when you do encounter a type error?
If you don't validate types, you can easily run into a situation where you write a value to a variable with the wrong type, and then some later event retrieves that value, tries to act on it, and throws an exception. Now you have a stack trace for the event handler, but the actual bug is in the code that set the variable, and thus it's not in your stack trace. Maybe the stack trace is enough that you can figure out which variable caused the problem, and maybe it's obvious where that variable was set, but that can become very difficult very fast in a moderately complex application. Obviously you should write tests, but tests will never catch every weird thing a program might do, especially when a human is involved. When you're working on a moderately large and complex project that needs to have any degree of reliability, catching errors as early as possible is always better.
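A minimal sketch of that failure mode, translated into Go with a map of interface{} values standing in for dynamically typed variables (names are illustrative):

    package main

    import "fmt"

    // state stands in for a dynamically typed variable store.
    var state = map[string]interface{}{}

    func setCount() {
        state["count"] = "5" // the bug: a string is stored where an int is expected
    }

    func handleEvent() {
        // The crash: the failed type assertion panics here, so the stack
        // trace points at the event handler, not at setCount where the
        // wrong value was actually written.
        n := state["count"].(int)
        fmt.Println(n + 1)
    }

    func main() {
        setCount()
        handleEvent() // panic: interface {} is string, not int
    }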

And relying on runtime validation is a horrific way to write production code.

Assembly languages are always architecture-specific. That's kind of their defining feature. Assembly is readable machine code.

“Assume it’s a map, treat it like a map, and then catch the type error if it’s not.” Paraphrased from actual advice by Guido on how you should write Python. Python isn’t a bad language, but the philosophy that comes along with it is so fucked.

What I mean is, from a performance perspective they are very different. In a language like C, where (p)threads are kernel threads, creating a new thread is only marginally less expensive than creating a new process (on Linux; not sure about Windows). In comparison, creating a new 'user thread' in Go is exceedingly cheap: creating tens of thousands of goroutines is feasible, while creating tens of thousands of kernel threads is a problem.
Also, it still uses kernel threads, just not for every single goroutine.
This touches on the other major difference: there is zero connection between the number of goroutines a program spawns and the number of kernel threads it spawns. A program using kernel threads is relying on the kernel's scheduler, which adds a lot of complexity and non-determinism. But a Go program uses the same number of kernel threads (assuming the same hardware and you don't mess with GOMAXPROCS) regardless of the number of goroutines it uses, and goroutines are scheduled by the Go runtime rather than by the kernel.
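A minimal sketch of what I mean by 'exceedingly cheap': spawning 100,000 goroutines is routine, while 100,000 kernel threads would bring most systems to their knees:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        results := make([]int, 100000)
        for i := range results {
            wg.Add(1)
            go func(i int) { // each goroutine starts with a tiny (~2 KB) stack
                defer wg.Done()
                results[i] = i * i
            }(i)
        }
        wg.Wait()
        fmt.Println("spawned and joined", len(results), "goroutines")
    }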

Is GitHub Copilot worth it to you?
As a senior developer, I don't find Copilot particularly useful. Maybe it would have been more useful earlier in my career, but at this point, writing a prompt to get Copilot to regurgitate useful code and massaging the resulting output almost always takes as much or more time than it would take for me to just write whatever it is I need to write. If I'm able to give Copilot a sufficiently specific prompt that it can 'solve' my problem for me, I already know how to solve the problem and how to write the code, so all I'm doing is using Copilot as a ghostwriter instead of writing it myself. And it doesn't seem to be any faster.

The autocomplete features are net helpful, because they're actually what I want often enough to offset the cost of reading the suggestion and deciding if it's useful. But the difference (vs writing it myself) is not big enough to justify paying the cost myself, nor sufficient motivation to go to the effort of convincing my employer to pay for it.

What am I losing by not using the C# Dev Kit?
I exclusively use Visual Studio Code for editing code. I primarily work with Go, and a little bit with JavaScript/TypeScript, but I need to do some C# work.
I have no interest in using Microsoft's proprietary C# Dev Kit or dealing with their licensing terms. What capabilities am I losing? The marketing materials for the Dev Kit talk about a lot of stuff that appears to be features of the open-source C# extension, so it's unclear which features are actually exclusive to the Dev Kit.

Why is crypto.subtle.digest async?
Why is crypto.subtle.digest designed to return a promise? Every other system I've ever worked with has the signature hash(bytes) => bytes, yet whatever committee designed the SubtleCrypto API decided that the browser version should return a promise. Why? I've looked around but I've never found any discussion of the motivation behind that.
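For comparison, this is the synchronous shape I'd expect, sketched in Go:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // Bytes in, bytes out, no promise in sight.
        digest := sha256.Sum256([]byte("hello"))
        fmt.Printf("%x\n", digest)
    }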

What search engine do you use?
Not sure if this is the right community, but I didn't see a general one. Besides Google increasingly spying on its users, the quality of its search results seems to have gotten significantly worse over the last decade. What search engine(s) do you use?

What scientific journals do you recommend?
I have a subscription to Nature, but most of the articles are totally beyond me. I’m thinking of switching to a comp-sci-specific journal. I’m mainly interested in compiler design and the implementation of JIT compilers and VMs like the JVM and .NET.

Self taught = no imposter syndrome?
I am a self-taught programmer and I do not have imposter syndrome. I have a degree in electrical engineering, and when I thought that was going to be my career, I did have imposter syndrome, so I'm not immune. I wonder if there's a correlation. It seems that many, if not most, professionals suffer from imposter syndrome; I wonder if that's related to the way they learned.
When I say self-taught, I don't mean I never took a class, I mean the majority of my programming skill was learned by doing/outside of classes. I took a Java class in high school that helped me graduate from procedural languages to OOP, and I took classes in college but with few exceptions the ones that were practical (vs theoretical) covered material I already knew.

Systems engineering in the software industry
My last job was at a company that designed and built satellites to order. There was a well defined process for this, and systems engineers were a big part of it. Maybe my experience there is distorting my perspective, but it seems to me that any sufficiently complex project needs to include systems engineering, even if the person doing that is not called a systems engineer. Yet as far as I can tell, it isn't really a thing in the software industry. When I look at job postings and "about us" blog posts about how a company operates, I don't see systems engineering mentioned. Am I just not seeing it, is it called something else, or is the majority of the industry somehow operating without it?

What languages are well suited for testing SDKs written in multiple other languages?
I am working on an application that has SDKs in multiple languages. Currently Java, JavaScript, Dart, and Go, but ultimately we'd like to have an SDK for every major language. Our primary test suites are written in Go, which means our other SDKs are not well tested. I do not want to write or maintain test suites in four or ten different languages.
What I would like to do is choose a language to write the tests in, define a test harness interface, implement that test harness for each SDK, and write the tests using that harness. Of course I could do this with RPC/HTTP/etc., but that would add significant complexity. I'd prefer to write the tests in a language that has a meaningful degree of interop/FFI with most of the major languages. Lua comes to mind, since it seems like someone has built a Lua interpreter for basically every language in existence, but I have very little Lua experience and no idea how painful it might be to do this in Lua. I am open to other suggestions besides Lua.
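For illustration, a minimal sketch (in Go, with hypothetical names) of the harness idea: each SDK implements the interface, and the shared tests are written against the interface only:

    package harness

    import "fmt"

    // Client is a hypothetical harness interface; each SDK binding
    // implements it in its own language via whatever interop is available.
    type Client interface {
        Put(key string, value []byte) error
        Get(key string) ([]byte, error)
    }

    // RunBasicTests exercises any SDK through the harness interface.
    func RunBasicTests(c Client) error {
        if err := c.Put("k", []byte("v")); err != nil {
            return fmt.Errorf("put: %w", err)
        }
        got, err := c.Get("k")
        if err != nil {
            return fmt.Errorf("get: %w", err)
        }
        if string(got) != "v" {
            return fmt.Errorf("got %q, want %q", got, "v")
        }
        return nil
    }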

Why should I use rust (as a Go enthusiast)?
I am not hating on Rust. I am honestly looking for reasons why I should learn and use Rust. Currently, I am a Go developer. I haven’t touched any other language for years, except JavaScript for occasional front end work and other languages for OSS contributions.
After working with almost every mainstream language over the years and flitting between them on a whim, I have fallen in love with Go. It feels like ‘home’ to me - it’s comfortable, I enjoy working with it, and I have little motivation to use anything else. I rage every time I get stuck working with JavaScript, because dependency management is pure hell when dealing with the intersection of packages and browsers - by contrast, dependency management is a breeze with Go modules. I’ll grant that it can suck when using private packages, but everything I work on is open.
Rust is intriguing. Controlling the lifecycle of variables in detail appeals to me. I don’t mind garbage collectors but Rust’s approach seems far more elegant.

How often does branchless programming actually matter?
I've started noticing articles and YouTube videos touting the benefits of branchless programming, making it sound like this is a hot new technique (or maybe a hot old technique) that everyone should be using. But it seems like it's only really applicable to data processing applications (as opposed to general programming) and there are very few times in my career where I've needed to use, much less optimize, data processing code. And when I do, I use someone else's library.
How often does branchless programming actually matter in the day to day life of an average developer?
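For anyone who hasn't seen the technique, a minimal sketch in Go: branch-free absolute value via a sign mask.

    package main

    import "fmt"

    // abs returns |x| without a branch. The arithmetic shift makes mask
    // all ones for negative x and all zeros otherwise, so (x ^ mask) - mask
    // negates negative values and leaves non-negative values unchanged.
    // (Like any two's-complement abs, it overflows for math.MinInt64.)
    func abs(x int64) int64 {
        mask := x >> 63
        return (x ^ mask) - mask
    }

    func main() {
        fmt.Println(abs(-42), abs(7)) // 42 7
    }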

How do you organize miscellaneous tasks?
I am an experienced developer, but not an experienced manager. I'd prefer if organizing tasks was not my responsibility, but I work at a small company and no one else is inclined to do it. How do you organize miscellaneous tasks when using a task management system such as Jira? We're using GitLab, but it has the same basic features, such as epics, milestones, tasks, and subtasks.
I don't want to have miscellaneous tasks floating around in the ether, because things like that tend to get lost. But an epic is supposed to have a well-defined end goal, right? A good epic is something like "Implement this complex feature" or "Reach this level of maturity" - not "Miscellaneous stuff".
The majority of the work we do fits fairly clearly into specific goals, such as "Release the next version of <this> feature." But what about bug fixes and other random improvements and miscellaneous tasks? How do you keep those organized?