This commit is contained in:
osmarks 2024-04-28 21:57:28 +01:00
parent 1f5fc3a18a
commit 943152a1e6
1 changed file with 1 addition and 1 deletion

@@ -14,7 +14,7 @@ I can understand why else people build chatbots. LLM APIs are [right there](http
A good interface makes it clear what functions are available from it and what's out of scope or impossible. A chatbot interface doesn't indicate this, just providing a box which lets you ask anything and a model which can probably do anything, or at least pretend to. If you're using a particular service's chatbot rather than a general-purpose chatbot or general-purpose search engine, you probably have a particular reason for that, a specific task you want to accomplish - and you can only see whether that's possible by negotiating with a machine which might hallucinate or just be confused and misinterpret you. This is especially likely if it's using a last-generation model, which is commonly done to reduce costs.
- Since present models are kind of dumb, you can't even have them provide a particularly sophisticated wrapper over whatever they're attached to. You want a competent agentic professional capable of understanding complex problems for you, retaining memory over interactions, following up, and executing subtasks against its backend: however, [agent](https://github.com/Significant-Gravitas/AutoGPT) [scaffolding](https://github.com/princeton-nlp/SWE-agent) is not reliably able to provide this[^2], so the best your chatbot can realistically be is a cheaper [tier one support agent](https://www.bitsaboutmoney.com/archive/seeing-like-a-bank), or in fact less than that because it's not robust enough to not have nonpublic access[^3]. Essentially, you have replaced some combination of documentation search, forms and dashboards with a superficially-friendlier interface which will probably lie to you and is slower to use. It does have the advantage that unsophisticated users might find it nicer when it works, but it should not be the *only* option to access things.
+ Since present models are kind of dumb, you can't even have them provide a particularly sophisticated wrapper over whatever they're attached to. You want a competent agentic professional capable of understanding complex problems for you, retaining memory over interactions, following up, and executing subtasks against its backend: however, [agent](https://github.com/Significant-Gravitas/AutoGPT) [scaffolding](https://github.com/princeton-nlp/SWE-agent) is not reliably able to provide this[^2], so the best your chatbot can realistically be is a cheaper [tier one support agent](https://www.bitsaboutmoney.com/archive/seeing-like-a-bank), or in fact less than that because it's not robust enough to have nonpublic access[^3]. Essentially, you have replaced some combination of documentation search, forms and dashboards with a superficially-friendlier interface which will probably lie to you and is slower to use. It does have the advantage that unsophisticated users might find it nicer when it works, but it should not be the *only* option to access things.
There's also the issue of predictability: a good interface agrees with the user's internal model of the interface. It should behave reliably and consistently, so that actions which look the same have the same results. LLMs really, really do not offer this. They are famously sensitive to tiny variations in wording, generally sampled nondeterministically, and not well-understood by most users. Dumber "chatbots" which run on hardcoded string matching do better on this front, but at that point are just pointlessly illegible sets of dropdown menus.
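The "hardcoded string matching" style of chatbot mentioned above can be sketched in a few lines; this is a hypothetical illustration (none of the keywords or responses come from the post), showing why such a bot is predictable and deterministic but also amounts to an illegible dropdown menu: the user only discovers what it supports by guessing the right keywords.

```python
# Hypothetical sketch of a keyword-matching "chatbot". Identical inputs
# always produce identical outputs, unlike a sampled LLM, but the set of
# supported intents is invisible to the user until they guess a keyword.
RESPONSES = {
    "refund": "To request a refund, open Billing > Orders and select the item.",
    "password": "Use the 'Forgot password' link on the login page.",
    "cancel": "You can cancel your subscription under Account > Plan.",
}

def reply(message: str) -> str:
    lowered = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in lowered:
            return answer
    # Out-of-scope queries fail explicitly instead of hallucinating an answer.
    return "Sorry, I don't understand. Try asking about refunds, passwords, or cancelling."
```

The failure message is effectively the bot enumerating its dropdown options after the fact, which is exactly the legibility problem: the same information could have been shown up front as a menu.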