---
title: Programming education, tacit knowledge and LLMs
created: 02/07/2023
description: Why programming education isn't very good, and my thoughts on AI code generation.
slug: progedu
---

::: epigraph attribution="Randall Munroe" link=https://xkcd.com/2030/
Don't trust voting software and don't listen to anyone who tells you it's safe. I don't quite know how to put this, but our entire field is bad at what we do, and if you rely on us, everyone will die.
:::

It seems to be fairly well-known (or at least widely believed amongst the people I regularly talk to about this) that most people are not very good at writing code, even those who really "should" be because of having (theoretically) been taught to (see e.g. <https://web.archive.org/web/20150624150215/http://blog.codinghorror.com/why-cant-programmers-program/>). Why is this? In this article, I will describe my wild guesses.
General criticisms of formal education have [already been done](https://en.wikipedia.org/wiki/The_Case_Against_Education), probably better than I can manage to do. I was originally going to write about how the incentives of the system are not particularly concerned with testing people in accurate ways, but rather easy and standardizable ways, and the easiest and most standardizable ways are to ask about irrelevant surface details rather than testing skill. But this isn't actually true: automated testing of code to solve problems is scalable enough that things like [Project Euler](https://projecteuler.net/) and [Leetcode](https://leetcode.com/) can test vast numbers of people without human intervention, and it should generally be *less* effort to do this than to manually process written exams. It does seem to be the case that programming education tends to preferentially test bad proxies for actual skill, but the causality probably doesn't flow from testing methods.
I think it's more plausible that teaching focuses on this surface knowledge because it's much easier and more legible, and looks and feels very much like "programming education" to someone who does not have actual domain knowledge (because other subjects are usually done in the same way), or who [isn't thinking very much about it](https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/). Similar problems, plus a notion that testing should be "fair" and "cover what students have learned", then lead to insufficiently outcome-oriented exams, which in turn set up incentives biasing students in similar directions. The underlying issue is a matter of "tacit knowledge": being good at programming requires sets of interlocking and hard-to-describe mental heuristics rather than a long list of memorized rules, and since applying them feels natural and easy - and most people who are now competent don't accurately remember lacking them - it is not immediately obvious that this is the case, so someone asked how they can do something is likely to focus on the things which are, to them, easier to explain and notice.
So why is programming education particularly bad? Shouldn't *every* field be harmed by tacit knowledge transmission problems? My speculative answer is that they generally are, but it's much less noticeable and plausibly also a smaller problem. The heuristics used in programming are strange and unnatural - I'll describe a few of the important ones later - but the overarching theme is that programming is highly reductionist: you have to model a system very different to your own mind, and every abstraction breaks down in some corner case you will eventually have to know about. The human mind very much likes pretending that other systems are more or less identical to it - [animism](https://en.wikipedia.org/wiki/Animism) is no longer a particularly popular explicitly-held belief system, but it's still common to ascribe intention to machinery, "fate" and "karma", animals without very sophisticated cognition, and a wide range of other phenomena. Computers are not at all human, in that they do exactly what someone has set them up to do, which is often [not what they thought they were doing](https://gwern.net/unseeing), while many beginners expect them to "understand what they meant" and act accordingly. Every simple-looking capability is burdened with detail[^1]: the computer "knows what time it is" (thanks to some [nontrivial engineering](https://en.wikipedia.org/wiki/Network_Time_Protocol) with some possible failure points); the out-of-order CPU "runs just like an abstract in-order machine, but very fast" (until security researchers [find a difference](https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability))); DNS "resolves domain names to IPs" (but is frequently intercepted by networks, and can also serve as a covert backchannel); video codecs "make videos smaller" (but are also [complex domain-specific programming languages](https://wrv.github.io/h26forge.pdf)); text rendering "is just copying bitmaps into the right places" ([unless you care about Unicode or antialiasing or kerning](https://faultlore.com/blah/text-hates-you/)).
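
As a small sketch of the kind of hidden detail I mean (the snippet below is generic Python, chosen purely for illustration), even arithmetic and "knowing the time" stop matching the intuitive mental model once you poke at them:

```python
# The computer does what it was set up to do, not what you meant.
# Binary floating point cannot represent 0.1 exactly, so the "obvious" check fails.
total = 0.1 + 0.2
print(total == 0.3)      # False
print(f"{total:.20f}")   # 0.30000000000000004441

# "The computer knows what time it is" also hides detail: a naive datetime carries
# no timezone, so comparing it with a timezone-aware one raises an error.
from datetime import datetime, timezone

naive = datetime(2023, 7, 2, 12, 0)
aware = datetime(2023, 7, 2, 12, 0, tzinfo=timezone.utc)
try:
    print(naive < aware)
except TypeError as error:
    print(error)  # can't compare offset-naive and offset-aware datetimes
```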
The other fields which I think suffer most are maths and physics. Maths education mostly [fails to convey what mathematicians actually care about](https://www.maa.org/external_archive/devlin/LockhartsLament.pdf) and, despite some attempts to vaguely gesture at it, does not so much teach "problem-solving" skills as sometimes set nontrivial multistep problems and see if some people manage to solve them. Years of physics instruction [fail to stop many students falling back to Aristotelian mechanics](https://www.researchgate.net/profile/Richard-Gunstone/publication/238983736_Student_understanding_in_mechanics_A_large_population_survey/links/02e7e52f8a2f984024000000/Student-understanding-in-mechanics-A-large-population-survey.pdf) on qualitative questions. This is apparently mostly ignored, perhaps because knowledge without deep understanding is sufficient for many uses and enough people generalize to the interesting parts to supply research, but programming makes the problems more obvious, since essentially any useful work will rapidly run into things like debugging.
So what can be done? I don't know. Formal education is likely a lost cause: incentives aren't aligned enough that a better way to teach would be adopted any time soon, even if I had one, and enough has been invested in existing methods that any change would be extremely challenging. I do, at least, have a rough idea of what good programmers have which isn't being taught well, but I don't know how you *would* teach these things effectively:
* Intuitive understanding of what a computer is doing, at least to the level of tracing control flow (obviously computers are able to run a lot faster than humans and do complicated maths we usually cannot do mentally). I deride "[helping] you become a computer" in ['Problem Solving' Tasks and Computer Science](/csproblem), but without the ability to do this you cannot really frame problems algorithmically or debug.
* Edge case generation - devising weird edge cases for a particular problem obviously helps with testing, can help with debugging (especially of only partially observable systems), and may elucidate algorithms (see the sketch after this list).
* Effective tool use - things like using Git and basic knowledge of Linux commands are frequently taught, but this is not really what I mean. Good programmers can select a novel tool for a particular task and quickly learn its most important capabilities, and have a smaller set of tools they know very well and can operate fast. A particularly good quick-to-operate tool can become an extension of your mental processes, at least when at a computer. I think fast typing is somewhat underappreciated, though I may be biased.
* Rough full-stack knowledge - while knowing everything about modern computers is probably literally impossible in a human lifetime, broad knowledge is necessary to guess at where bugs might arise, as many of them come from interactions between your code and other systems.
* Security mindset: as well as being directly useful for ensuring security, always thinking about where your assumptions might be flawed or how something might go wrong is vital for reliability.
* Good code structuring, e.g. knowing when to disaggregate or aggregate modules. I think that lots of people, particularly when using OOP, are too quick to try and "break apart" interdependent code in a way which makes development much slower without actually providing much flexibility, but thousand-line files with global variables everywhere are hard to work on.
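
To make the edge case generation point above a little more concrete, here is a toy Python sketch - the function and the inputs are invented for illustration rather than taken from anywhere in particular - showing the sort of inputs worth throwing at even a one-line "get the first word" helper:

```python
# A deliberately naive helper: split on a single space and take the first piece.
def first_word(text: str) -> str:
    return text.split(" ")[0]

# The kind of inputs a suspicious programmer reaches for almost automatically.
cases = [
    "hello world",     # the happy path
    "",                # empty input: returns "" rather than failing loudly
    " leading space",  # the "first word" is the empty string
    "single",          # no separator at all
    "tab\tseparated",  # a different kind of whitespace the split ignores
    "naïve café",      # non-ASCII text, in case anything downstream assumes otherwise
]

for case in cases:
    print(f"{case!r:20} -> {first_word(case)!r}")
```

None of these cases is exotic, but reflexively noticing them is exactly the sort of tacit skill that doesn't show up in a list of memorized rules.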
If you have been paying any attention to anything within the past [two years](https://openai.com/blog/openai-codex) or so, you're probably also aware that AI (specifically large language models) will obsolete, augment, change, or do nothing whatsoever to software engineering jobs. My previous list provides some perspective for this: ChatGPT (GPT-3.5 versions; I haven't used the GPT-4 one) can model computers well enough that it can [pretend to be a Linux shell](https://www.engraved.blog/building-a-virtual-machine-inside/) quite accurately, tracking decent amounts of state while it does so; big language models have vague knowledge of basically everything on the internet, even if they don't always connect it well; ChatGPT can [also](https://twitter.com/gf_256/status/1598104835848798208) find some vulnerabilities in code; [tool use](https://til.simonwillison.net/llms/python-react-pattern) [is continually](https://openai.com/blog/function-calling-and-other-api-updates?ref=upstract.com) [being](https://gorilla.cs.berkeley.edu/) [improved](https://twitter.com/emollick/status/1657050639644360706) (probably their quick-script-writing capability already exceeds most humans'). Not every capability is there yet, of course, and I think LLMs are significantly hampered by issues humans don't have, like context window limitations, lack of online learning, and bad planning ability, but these are probably not that fundamental.
Essentially, your job is probably not safe, as long as development continues (and big organizations actually notice).
You may contend that LLMs lack "general intelligence", and thus can't solve novel problems, devise clever new algorithms, etc. I don't think this is exactly right (it's probably a matter of degree rather than binary), but my more interesting objection is that most code doesn't involve anything like that. Most algorithmic problems have already been solved somewhere if you can frame them right[^2] (which is, in fairness, also a problem of intelligence, but less so than deriving the solution from scratch), and LLMs probably remember more algorithms than you. More than that, however, most code doesn't even involve sophisticated algorithms: it just has to move some data around or convert between formats or call out to libraries or APIs in the right order or process some forms. I don't really like writing that kind of code and try to minimize it, but this only goes so far. You may also have a stronger objection along the lines of "LLMs are just stochastic parrots repeating patterns in their training data": this is wrong, and you may direct complaints regarding this to the comments or [microblog](https://b.osmarks.net/), where I will probably ignore them.
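
For a sense of what "moving some data around" means in practice, here is a hedged sketch (the file names and column names are hypothetical) of the kind of glue code most software consists of:

```python
# Typical glue code: read one format, reshape it slightly, write another.
import csv
import json

def csv_to_json(in_path: str, out_path: str) -> None:
    with open(in_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    # The entire "logic": pick two columns and coerce one of them to an integer.
    records = [{"name": row["name"], "score": int(row["score"])} for row in rows]
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2)

# Hypothetical usage: csv_to_json("scores.csv", "scores.json")
```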
[^1]: The particular examples here are not ones you're likely to run into for a while, but anyone who writes code for long enough is going to encounter *something* weird.
[^2]: Notably, people who have spent more time on Leetcode than me claim that it is actually just about memorizing a few algorithms, which it then uses for a wide range of thinly disguised problems.