I’m sharing a draft of a slightly opinionated survey paper I’ve been working on for the last couple of months: Eight Things to Know about Large Language Models. Here are the eight things:

  1. LLMs predictably get more capable with increasing investment, even without targeted innovation.
  2. Many important LLM behaviors emerge unpredictably as a byproduct of increasing investment.
  3. LLMs often appear to learn and use representations of the outside world.
  4. There are no reliable techniques for steering the behavior of LLMs.
  5. Experts are not yet able to interpret the inner workings of LLMs.
  6. Human performance on a task isn’t an upper bound on LLM performance.
  7. LLMs need not express the values of their creators nor the values encoded in web text.
  8. Brief interactions with LLM chatbots are often misleading.
An enormous number of people—including journalists, advocates, lawmakers, and academics—have started to pay attention to this technology in the last few months. This is appropriate: The technology is on track to be really impactful, and we want the full force of government and civil society to be involved in figuring out what we do with it. I’m aiming for this paper to cover points that are relevant to some of these decisions, but that might be easy to miss for someone just starting to follow the technology. I also considered calling it “Eight Ways that Large Language Models are a Weird Technology”. 
It’s a survey: All of the evidence I use was published by others, and most of the arguments have already been stated clearly by others. (When in doubt, cite them, not me.)
Each of these claims should seem obvious to at least a large subset of the researchers who build and test these models, and there’s good evidence for each, though some remain controversial; I try to point out where that’s the case.
I also close with some less survey-ish discussion that riffs on the above. Teasers:
  • We should expect some of the prominent flaws of current LLMs to improve significantly.
  • There will be incentives to deploy LLMs as agents that flexibly pursue goals.
  • LLM developers have limited influence over what is developed.
  • LLMs are likely to produce a rapidly growing array of risks.
  • Negative results with LLMs can be difficult to interpret but point to areas of real weakness.
  • The science and scholarship around LLMs is especially immature.