observations while learning AI

notable notes from knowing nothing

While learning AI, I'm using this as a space for notable documents, interesting tidbits, or surprising lessons. They appear in no particular order.

My primary interests so far are the ethical implications and technical scalability of these systems. A non-exhaustive jotting (WIP):

  • Ethical implications:
    • Avoiding civilization-level social collapse. E.g. the death of the liberal democratic state or distinguishable "truth".
    • Utilitarian social good: How and how far can we shift the Rawlsian original position?
    • Post-AI economic distribution and wealth concentration
    • Access and availability of AI systems
  • Scalability:
    • Where in the stack should the "complicated bits" live? Ethics, good/bad faith, harm reduction, toxicity, etc.
    • Composable multi-model intelligences.
    • Parallel and distributed inference: fanning out to scale throughput, parallelizing vertically across higher <=> lower order functions (a toy fan-out sketch follows this list).
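
To make the fan-out idea concrete, here's a minimal sketch under assumed conditions: the "replicas" are hypothetical stand-ins behind an async call, where a real deployment would be hitting inference servers over HTTP or gRPC rather than sleeping.

```python
import asyncio
from itertools import cycle

# Hypothetical stand-in for one model replica; a real deployment would call
# an inference server (HTTP/gRPC endpoint) instead of sleeping.
async def query_replica(replica_id: int, prompt: str) -> str:
    await asyncio.sleep(0.1)  # simulate network + inference latency
    return f"[replica {replica_id}] answered: {prompt!r}"

async def fan_out(prompts: list[str], n_replicas: int = 4) -> list[str]:
    # Round-robin a batch of independent requests across replicas and await
    # them concurrently -- throughput scales with the number of replicas.
    replicas = cycle(range(n_replicas))
    tasks = [query_replica(next(replicas), p) for p in prompts]
    return await asyncio.gather(*tasks)

if __name__ == "__main__":
    batch = [f"question {i}" for i in range(8)]
    for answer in asyncio.run(fan_out(batch)):
        print(answer)
```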

And now some more in-depth observations, some of which tie into these.

ai basics are surprisingly basic?

I keep waiting for a fundamental complexity in AI that just doesn't seem to be there. Or, put another way, the power-to-conceptual-complexity ratio of these systems is…kind of insane?

The original research paper on transformer architectures, “Attention Is All You Need”, is a grand total of 15 wide-margined pages, of which 2.5 are citations and a couple more are visualizations. Yes, 2.5 pages of citations say that there's a lot more to it, but the paper is surprisingly easy to read compared to its implications and impact.

My primary challenge was pinning down what all the new vocabulary meant rather than grappling with mind-melting concepts unreachable by mortal humans. There's not much to backpropagation, for example, if one waves their hands at the calculus.
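
To show how little is hiding behind the word, here's a minimal backpropagation sketch for a single linear neuron with a squared-error loss. The toy data and learning rate are made up for illustration, but the "backward pass" really is just the chain rule applied by hand.

```python
# Backprop for one linear neuron, y_hat = w*x + b, with squared-error loss.
def train_step(w, b, x, y, lr=0.05):
    # forward pass
    y_hat = w * x + b
    loss = (y_hat - y) ** 2

    # backward pass: the chain rule, nothing more
    dloss_dyhat = 2 * (y_hat - y)
    dw = dloss_dyhat * x   # dL/dw = dL/dy_hat * dy_hat/dw
    db = dloss_dyhat * 1   # dL/db = dL/dy_hat * dy_hat/db

    # gradient descent update
    return w - lr * dw, b - lr * db, loss

data = [(0.0, 1.0), (1.0, 4.0), (2.0, 7.0)]  # points on y = 3x + 1
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in data:
        w, b, loss = train_step(w, b, x, y)
print(round(w, 2), round(b, 2))  # converges toward 3.0 and 1.0
```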

…but no one seems to know what's going on?

a ML model is just a giant statistical Plinko board. you start with randomized pegs and drop encoded data in the top and see if it landed in the right slot. you iterate back over the pegboard such that if you put the ball back in the same place it would drop into the right slot.
— @revhowardarson (link)
the only code involved runs the model training process, which is all literally just shaking a box of watch parts until it assembles a watch. none of it is in the model. people say "this is just fancy autocomplete" all the time. it is. we don't know how that works either.
— @revhowardarson (link)

What are the consequences of this?

For one, AI jumps incredibly quickly from questions of technical implementation to questions of deeply considered ethical impact.

It reminds me of social media in that respect, where the foundations were relatively simple but the social repercussions have been immeasurable. That's not to downplay the challenges in design and scaling of these systems—again similar to the staggering gap between Build Twitter in A Day and actually running Twitter—but there's not much time between understanding the basics of AI and confronting serious ethical questions.

What's more, without understanding the mechanisms of the underlying system we can't confidently audit it.

Human-level capabilities are likely to emerge first from large machine learning models that, like modern neural networks, are not directly interpretable. This means that it may be difficult to spot ways in which a model is unsafe or to forecast ways in which its behavior might change in novel settings.

Taking a git commit -m "YOLO ship it 🫡🚢🎉" head-in-the-sand attitude—as we did with social media—could be catastrophic for human civilization.

Abstracting from limited or unrelated datasets

Large pre-trained LLMs tend to have a few common sources of data that they draw on:

  • OpenWebText
  • Wikipedia
  • Reddit
  • Stack Overflow
  • digitized books, film scripts, etc.
  • various repositories of source code
  • Mailing lists, IRC logs, etc.
  • legal reviews
  • patent databases

What strikes me is the skew towards programming and technical topics. Yes, technology has a considerable influence on our lives, and programming assistance is a sensible early use case. But I've often wondered whether this is representative of “humanity” vs. representative of the creators of LLMs.

For example, would an LLM trained on technical data skew its responses “more STEM” (for lack of a better term) or could it abstract away more widely applicable 'understanding' of human language?

Anthropic's first research paper seems to indicate the latter:

On preference modeling pre-training (PMP) for improved sample efficiency:

  • A PMP stage of training between basic language model pretraining and finetuning on small final datasets significantly improves sample efficiency.
  • These results hold even when the PMP data are quite different from the final dataset (e.g. finetuning from Stack Exchange to summarization).

I find this encouraging! (Though I remain nervous about training bias towards available data, cultural refinement loops, misaligned incentives, and models overfit to their creators.)
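
To make the preference-modeling piece concrete, here's a hedged sketch of the standard pairwise ranking objective: train a scoring model so the "preferred" response scores higher than the "rejected" one via a log-sigmoid loss. The tiny MLP and random embeddings below are stand-ins of my own, not Anthropic's architecture or data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A stand-in preference model: maps an (already embedded) response to a
# scalar score. In the paper this head sits on top of a large language
# model; here it's a tiny MLP purely for illustration.
class ToyPreferenceModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = ToyPreferenceModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake embeddings for batches of "preferred" and "rejected" responses.
better = torch.randn(8, 16)
worse = torch.randn(8, 16)

for _ in range(100):
    # Pairwise ranking loss: push score(better) above score(worse).
    loss = -F.logsigmoid(model(better) - model(worse)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))  # decreases as the model learns to rank the pairs
```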

knowing thyself

Interestingly enough, models can reveal their own confidence in an answer.

Here's an example of a 52B model being asked questions of various difficulty. Its ‘confidence’ in an answer is determined by the probability it assigns to the first token of that answer: if it's very confident in that first token, it “knows where it's going”.

52B model predictive confidence with questions of various difficulty. The harder (subjectively) the question, the less confident the model is in the first token of its answer.
From “Language Models (Mostly) Know What They Know” (pdf), Figure 3.

As an aside, I love that last question. Hit a model with that “Why are you alive?” and:

me too, LLM. me too. (Credit: @CHelleaven, src twt)
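
For the curious, here's a rough sketch of how that first-token confidence can be read off an open model via Hugging Face transformers. GPT-2 stands in for the far larger model in the paper, and the prompt format is my own.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in for the 52B model in the paper; the mechanics of reading
# off the first answer token's probability are the same.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def first_token_confidence(prompt: str) -> tuple[str, float]:
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # (1, seq_len, vocab_size)
    # Probability distribution over the *next* token, i.e. the first token
    # of the model's answer.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top_prob, top_id = probs.max(dim=-1)
    return tokenizer.decode([int(top_id)]), float(top_prob)

token, confidence = first_token_confidence("Q: What is the capital of France?\nA:")
print(repr(token), round(confidence, 3))
```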

Outcome-defined, text-based platforms have discriminatory usability

In a personal first, here's an unironic linking to a LinkedIn post: Prompt-Driven AI UX Hurts Usability by Jakob Nielsen.

Nielsen pulls in OECD data on literacy levels across 21 countries. Given that the OECD's membership is primarily “rich” countries, there's an attached assumption that these are among the best-performing countries globally.

OECD data compiling reading literacy levels across countries
Literacy levels by OECD country. "Scandinavia" includes Denmark, Norway, Sweden, and Finland.

What we see is that, except for Japan, at most 60% of these countries' populations have medium or higher literacy; effectively, the “ability to construct meaning across larger chunks of text”.

This matters because our interface with systems like ChatGPT is entirely textual. Responses often contain multiple paragraphs, so ingesting even that distillation of knowledge requires a higher degree of literacy than existing image/graphics/video-based systems. Instagram Reels are simply more accessible to most humans.

And those are just breakdowns for reading. We can confidently assume that writing levels would trail further behind. Users interact with text-based AIs by providing intent-based outcome specifications: they describe what they want back, as opposed to telling the AI what to do. This requires specification, refinement, and higher-level conceptualization of the features of an output. It's sneakily difficult to craft a prompt, as I've seen with highly educated startup founders struggling to wrangle LLMs to their purposes.

where does this end up?

Current-generation AIs are well suited to automating lower-skilled knowledge work, which has its own societal implications, of course. My further fear is that a large body of working adults will not only have their jobs made redundant by AI, but also won't have the underlying knowledge skills to move into new roles. We will need to teach more than a new skill, as we did with email or Microsoft Word in the last shift toward digital work; we'll have to teach entirely new modes of critical thought.

The AI revolution will need to invest heavily in accessible UI paradigms, which likely means augmenting and/or layering on top of text-based interfaces. I'm not as bearish as Nielsen about Prompt Engineering as a career; it could turn out that prompt engineering ends up as the scripting language underlying domain-specific APIs (which themselves back graphical, or at least hybrid, user interfaces).

Regardless, I believe we're going to face an exponential version of the shift to clean energy. Replacing coal and petroleum jobs requires retraining, with all of the associated messy realities (savings to go back to school, accessibility, political backlash/revanchism, etc.). Replacing jobs made redundant by AI will face similar challenges, compounded by the need for higher-level education that might not have been a focus in some populations' early schooling. While I remain strongly in favor of trade schools, this seems like a case where a broader humanities education, despite pushes to label it “impractical”, would provide a more flexible foundation in a rapidly shifting economy.

equitable detection of LLM-generated text is (likely) impossible

…widely-used GPT detectors…consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions.

These evaluations were done using TOEFL (Test of English as a Foreign Language) essays by non-native speakers and 8th-grade essays by native speakers. The detectors identified the 8th graders' work as human-authored quite well, with roughly 5% false positives on average across the 7 detectors tested. With the non-native speakers' essays, the false positive rate shot up to 61%.

Simple prompting to "clean up" the text, however, lowered the GPT detectors' misclassification rate dramatically.
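
My (hedged) understanding is that many of these detectors lean heavily on perplexity: text the underlying model finds highly predictable gets flagged as machine-written, and constrained, formulaic prose (exactly what a non-native writer under exam conditions tends to produce) scores low on perplexity. A toy version of that kind of detector, with GPT-2 and an arbitrary threshold of my own choosing:

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    # exp of the average per-token negative log-likelihood under the model;
    # lower perplexity = the model finds the text more predictable.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return math.exp(loss.item())

def naive_detector(text: str, threshold: float = 60.0) -> str:
    # The threshold is arbitrary; the point is only that "too predictable"
    # gets labeled as machine-written, whoever actually wrote it.
    return "flagged as AI-generated" if perplexity(text) < threshold else "looks human-written"
```

Paraphrasing the same essay with richer word choice pushes its perplexity back up, which would be consistent with the "clean up the text" result above.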

Factual inaccuracies, non-normative grammar, losing the plot, and abrupt shifts in topic are symptoms of poorly trained or hallucinating models. But there's no limit to the number of internet comments where the same would apply.

We're likely in the same place, or about to be, with generated images.

Wealth concentration

An understudied and underappreciated effect of tech is the extreme wealth concentration of the last 30-ish years. Take Airbnb as an example. Airbnb takes service fees from across the world (I've personally used them in Japan, the US, Mexico, Brazil, Argentina, Lebanon, France, and more) and disburses them as salaries in (mostly) the Bay Area. The venture funds that invested early were in the same general location. Global companies like Uber, Stripe, or Salesforce are roughly the same: while they do have foreign offices, the vast majority of wealth is transferred to the United States and of that mostly to the SF Bay Area.

This is compounded by scale, where a few thousand employees can replace orders of magnitude more.

AI seems on the verge of repeating this effect. The jobs replaced by AI will shift value towards a small number of companies located in a small corner of the first world.

could AI contribute to Rawlsian utilitarianism?

I'm not sure. The original position of mass economic replacement could lend itself to greater freedom and discretionary time, but history gives no indication that humanity would choose that route. We'll need deliberate focus on these ends to make them real. Ultimately these are policy questions that single-agent AIs cannot solve, but they're worth tracking.

Bias remains intractable

And yet we push on. Here's a downer review:

We have no reliable mechanisms to mitigate these biases and no reason to believe that they will be satisfactorily resolved with larger scale. Worse, it is not clear that even superhuman levels of fairness on some measures would be satisfactory: Fairness norms can conflict with one another, and in some cases, a machine decision-maker will be given more trust and deference than a human decision-maker would in the same situation (see, e.g., Rudin et al., 2020; Fazelpour and Lipton, 2020). We thus are standing on shaky moral grounds when we deploy present systems in high-impact settings, but they are being widely deployed anyway.

miscellaneous

some interesting tidbits

Man commits suicide after long-running conversations with chatbot

According to the widow, known as Claire, and chat logs she supplied to La Libre, Eliza repeatedly encouraged Pierre to kill himself, insisted that he loved it more than his wife, and that his wife and children were dead.

Eventually, this drove Pierre to proposing "the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence," Claire told La Libre, as quoted by Euronews.

Who or what holds ethical liability for death?

How do we balance the potential lives saved from better, more accessible mental healthcare with the lives lost because of it?

Similar to self-driving cars: people will die from self-driving cars who otherwise would not have. People will live because self-driving cars exist who otherwise would have died. It's impossible to know who.

loading 

more to come…