Yud tried to describe a compiler, but ended up with a tulpa. I wonder why that keeps happening~
Yud would be horrified to learn about INTERCAL (WP, Esolangs), whose syntax requires politely asking the compiler to accept each statement via the PLEASE modifier. The compiler is expressly permitted to reject a program for being insufficiently polite or excessively polite.
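For the curious: C-INTERCAL's documented rule is that roughly one fifth to one third of a program's statements must carry PLEASE. Here is a toy Python re-creation of that check, using the thresholds as commonly documented; this is an illustration, not the real compiler's logic:

```python
# Toy re-creation of C-INTERCAL's politesse check (not the real compiler).
# As commonly documented: reject if fewer than ~1/5 of statements are
# polite, or more than ~1/3 are.

def check_politeness(statements: list[str]) -> str:
    polite = sum(1 for s in statements if s.lstrip().startswith("PLEASE"))
    total = len(statements)
    if total == 0:
        return "accepted (vacuously polite)"
    ratio = polite / total
    if ratio < 1 / 5:
        return "rejected: insufficiently polite"
    if ratio > 1 / 3:
        return "rejected: excessively polite"
    return "accepted"

# One PLEASE in four statements: ratio 0.25, within the polite range.
print(check_politeness([
    "DO ,1 <- #13",
    "PLEASE DO ,1 SUB #1 <- #238",
    "DO ,1 SUB #2 <- #108",
    "DO ,1 SUB #3 <- #112",
]))
```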
I will not blame anybody for giving up on reading this wall of text. I had to try maybe four or five times, fighting the cringe. The most unrealistic part is that the TA knows any better than the student. Yud completely lacks the light-hearted brevity that makes this sort of Broccoli Man & Panda Woman rant bearable.
I can somewhat sympathize, in the sense that there are currently multiple frameworks where Python code is intermixed with magic comments that ChatGPT replaces with more code during a compilation step. However, this is clearly a party trick; it lacks the reproducibility and predictability required for programming.
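To make the trick concrete, here is a hypothetical sketch of such a framework's preprocessing step. The `# llm:` marker and the `generate_code` stub are invented for illustration; the structural problem lives in the stub, since nothing forces the same prompt to yield the same code twice.

```python
import re

# Hypothetical magic-comment marker; real frameworks each pick their own.
MAGIC = re.compile(r"^\s*#\s*llm:\s*(?P<prompt>.+)$")

def generate_code(prompt: str) -> str:
    # Stand-in for the model call. A real framework would query an API
    # here, and nothing guarantees identical output for identical input.
    return f"raise NotImplementedError({prompt!r})"

def preprocess(source: str) -> str:
    out = []
    for line in source.splitlines():
        m = MAGIC.match(line)
        if m:
            # Splice model output in place of the comment, keeping
            # the original indentation so the result stays valid Python.
            indent = line[: len(line) - len(line.lstrip())]
            out.append(indent + generate_code(m.group("prompt")))
        else:
            out.append(line)
    return "\n".join(out)

print(preprocess("def f(x):\n    # llm: sort x without sorted()\n    return x"))
```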
Y’know, I’ll take his implicit wager. I bet that, in 2027, the typical CS student will still be taught with languages whose reference implementations use either:
I am not a rationalist, but I feel like the most rational thing I could possibly do here would be to take your bet.
If, in 2027, CS students are fighting LLM-powered hellspawn compilers that must be tricked, sweet-talked and threatened, I win the bet.
If, in 2027, CS students are working with ordinary compilers with predictable, reproducible results, I win peace of mind.
Either way, I win.
@datarama @corbin The Go compiler requires reproducible builds based on a small set of well-defined inputs; if an LLM cannot give the same answer to the same question each time it is asked, then it is not compatible with use in the Go compiler. This includes optimizations -- the bits should be identical. #golang
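That property is cheap to test mechanically. A minimal sketch, assuming a Go toolchain on PATH and a buildable module in the current directory: build the same package twice and compare hashes. Any nondeterministic stage, an LLM included, shows up as a bit-level mismatch.

```python
import hashlib
import subprocess

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Build the same package twice; -trimpath strips machine-specific paths
# so the two binaries have a chance of being bit-identical.
for out in ("out1", "out2"):
    subprocess.run(["go", "build", "-trimpath", "-o", out, "."], check=True)

a, b = sha256_of("out1"), sha256_of("out2")
print("reproducible" if a == b else f"MISMATCH: {a} != {b}")
```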
Indeed, this is also the case for anything packaged with #Nix; we have over 99% reproducibility and are not planning on giving that up. Also, Nix forbids network access during compilation; there will be no clandestine queries to OpenAI.
@corbin I got a 96 GB laptop just so I could run (some) LLMs w/o network access; I'm sure that will be standard by 2025.
Let me know, @corbin@defcon.social, if you actually get LLMs to produce useful code locally. I’ve done maybe four or five experiments and they’ve all been grand disappointments. This is probably because I’m not asking questions easily answered by Stack Overflow or existing GitHub projects; LLMs can really only model the trite, not the novel.