blog external link tracking
@@ -29,7 +29,7 @@ So what can be done? I don't know. Formal education is likely a lost cause: ince
* Security mindset: as well as being directly useful for ensuring security, always thinking about where your assumptions might be flawed or how something might go wrong is vital for reliability (see the first sketch after this list).
* Good code structuring, e.g. knowing when to disaggregate or aggregate modules. I think that lots of people, particularly when using OOP, are too quick to try to "break apart" interdependent code in a way which makes development much slower without actually providing much flexibility, but thousand-line files with global variables everywhere are hard to work on (see the second sketch after this list).
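
To make the security-mindset point a little more concrete, here is a minimal, hypothetical sketch (the function and values are invented for illustration, not taken from the post): the habit is simply to write assumptions down as checks instead of trusting that inputs will be sane.

```python
# Hypothetical illustration: state assumptions explicitly and fail loudly when
# one is violated, instead of assuming the caller always passes sane input.
def parse_port(raw: str) -> int:
    """Parse a TCP port number, rejecting anything outside the valid range."""
    try:
        port = int(raw, 10)
    except ValueError:
        raise ValueError(f"port is not an integer: {raw!r}")
    if not 1 <= port <= 65535:
        # "It will always be a reasonable value" is exactly the assumption
        # that fails in production, so check it here rather than downstream.
        raise ValueError(f"port out of range: {port}")
    return port

if __name__ == "__main__":
    print(parse_port("8080"))   # 8080
    # parse_port("70000") would raise ValueError instead of misbehaving later
```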
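
And as a hedged sketch of the structuring trade-off (the classes and names are made up for the example), compare a version that splits interdependent steps into single-method classes with one that keeps the same logic together until a genuine second use case appears:

```python
# Toy, hypothetical example of the same small feature structured two ways.

# Over-disaggregated: interdependent steps pulled into separate single-method
# classes. Nothing is reused or swapped out, so the indirection mostly just
# slows development down.
class WordCounter:
    def count(self, text: str) -> int:
        return len(text.split())

class CountFormatter:
    def format(self, n: int) -> str:
        return f"{n} words"

class ReportBuilder:
    def __init__(self) -> None:
        self.counter = WordCounter()
        self.formatter = CountFormatter()

    def build(self, text: str) -> str:
        return self.formatter.format(self.counter.count(text))

# Aggregated: the same logic as one cohesive function with no global state.
# It stays easy to split out later if the flexibility is ever actually needed.
def word_count_report(text: str) -> str:
    return f"{len(text.split())} words"

if __name__ == "__main__":
    sample = "three word sentence"
    assert ReportBuilder().build(sample) == word_count_report(sample)
    print(word_count_report(sample))  # "3 words"
```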
-If you have been paying any attention to anything within the past [two years](https://openai.com/blog/openai-codex) or so, you're probably also aware that AI (specifically large language models) will obsolete, augment, change, or do nothing whatsoever to software engineering jobs. My previous list provides some perspective for this: ChatGPT (GPT-3.5 versions; I haven't used the GPT-4 one) can model computers well enough that it can [pretend to be a Linux shell](https://www.engraved.blog/building-a-virtual-machine-inside/) quite accurately, tracking decent amounts of state while it does so; big language models have vague knowledge of basically everything on the internet, even if they don't always connect it well; ChatGPT can [also](https://twitter.com/gf_256/status/1598104835848798208) find some vulnerabilities in code; [tool use](https://til.simonwillison.net/llms/python-react-pattern) [is continually](https://openai.com/blog/function-calling-and-other-api-updates?ref=upstract.com) [being](https://gorilla.cs.berkeley.edu/) [improved](https://twitter.com/emollick/status/1657050639644360706) (probably their quick-script-writing capability already exceeds most humans'). Not every capability is there yet, of course, and I think LLMs are significantly hampered by issues humans don't have, like context window limitations, lack of online learning, and bad planning ability, but these are probably not that fundamental.
+If you have been paying any attention to anything within the past [two years](https://openai.com/blog/openai-codex/) or so, you're probably also aware that AI (specifically large language models) will obsolete, augment, change, or do nothing whatsoever to software engineering jobs. My previous list provides some perspective for this: ChatGPT (GPT-3.5 versions; I haven't used the GPT-4 one) can model computers well enough that it can [pretend to be a Linux shell](https://www.engraved.blog/building-a-virtual-machine-inside/) quite accurately, tracking decent amounts of state while it does so; big language models have vague knowledge of basically everything on the internet, even if they don't always connect it well; ChatGPT can [also](https://twitter.com/gf_256/status/1598104835848798208) find some vulnerabilities in code; [tool use](https://til.simonwillison.net/llms/python-react-pattern) [is continually](https://openai.com/blog/function-calling-and-other-api-updates) [being](https://gorilla.cs.berkeley.edu/) [improved](https://twitter.com/emollick/status/1657050639644360706) (probably their quick-script-writing capability already exceeds most humans'). Not every capability is there yet, of course, and I think LLMs are significantly hampered by issues humans don't have, like context window limitations, lack of online learning, and bad planning ability, but these are probably not that fundamental.
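
The "tool use" being linked there is essentially a loop: the model proposes an action, a harness executes it, and the observation is appended to the context. Below is a minimal, hypothetical sketch of that loop with the model call stubbed out; none of the names, prompt formats, or tools here come from the linked posts or any particular API.

```python
# Hypothetical tool-use loop. `call_model` is a stand-in for whatever LLM API
# is actually used; here it returns canned replies so the script runs offline.
import datetime

def call_model(transcript: str) -> str:
    # Placeholder: a real implementation would send `transcript` to an LLM and
    # return its next message, which may contain an "Action:" line.
    return "Action: current_time[]" if "Observation:" not in transcript else "Answer: done"

TOOLS = {
    "current_time": lambda _arg: datetime.datetime.now().isoformat(),
}

def run(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = call_model(transcript)
        transcript += reply + "\n"
        if reply.startswith("Answer:"):
            return reply
        if reply.startswith("Action:"):
            # Parse "Action: name[argument]", run the tool, feed the result back in.
            name, _, arg = reply.removeprefix("Action: ").partition("[")
            result = TOOLS[name](arg.rstrip("]"))
            transcript += f"Observation: {result}\n"
    return "Answer: (gave up)"

if __name__ == "__main__":
    print(run("What time is it?"))
```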
Essentially, your job is probably not safe, as long as development continues (and big organizations actually notice).