29 changes: 29 additions & 0 deletions documents/obstacles/negative-bleedthrough.md
@@ -0,0 +1,29 @@
---
authors: [juanmichelini]
---

# Negative Bleedthrough (Obstacle)

## Description
When you tell an LLM what *not* to do, you're activating the very tokens you want it to avoid. Negation words like "don't", "not", and "never" are weak signals compared to the content words around them. The model processes "don't mention the moon" by first attending heavily to "moon", and now the moon is in the room.

This is well-documented in NLP research. Studies on negation handling in transformer models (e.g., [Kassner & Schütze, 2020](https://aclanthology.org/2020.acl-main.698/), *Negated and Misprimed Probes for Pretrained Language Models*) show that LLMs struggle to distinguish negated statements from affirmative ones. The model's internal representations for "the moon is a planet" and "the moon is not a planet" are surprisingly similar.

**Example: "List the traditional planets, but not the moon."**

The model sees heavy token activation around "planets" and "moon." The negation "not" is a lightweight modifier that often loses the fight. You'll frequently get the moon in the list anyway.

This isn't just a text problem. Vision models show the same behavior: ask for "a room with no elephants" and you'll likely get elephants. The underlying mechanism is the same: describing what to avoid activates representations of that thing.

## Root Causes

### Token activation doesn't respect negation
Transformers build meaning by attending to content words. "Not" modifies the intent, but the attention still flows to whatever follows it. By the time the model is generating output, the activated concept is competing with the instruction to suppress it.

### Training data reinforcement
Most training examples of "X is not Y" still associate X and Y. The model learns co-occurrence patterns, not logical negation. So "planets, not the moon" reinforces the planet–moon association.

## Impact
- Negative instructions increase the chance of getting exactly what you asked to avoid
- More negations in a prompt means more unwanted concepts activated in context
- Workarounds like repeating "do NOT" or using caps don't fix the underlying mechanism; they just add more tokens that activate the concept
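
The last bullet can be made concrete with a toy check. The sketch below is illustrative only: it uses a simple regex over words, not a real tokenizer, and the marker list is an assumption. It counts negation markers and mentions of the forbidden concept, and shows that the "do NOT" workaround increases both.

```python
import re

# Illustrative negation markers; a real tokenizer/model would differ.
NEGATIONS = re.compile(r"\b(don't|do not|not|never|no)\b", re.IGNORECASE)

def count_negations(prompt: str) -> int:
    """Count negation markers in a prompt."""
    return len(NEGATIONS.findall(prompt))

base = "List the traditional planets, but not the moon."
workaround = base + " Do NOT include the moon. Never mention the moon."

# The "workaround" triples both the negations and the mentions of "moon".
print(count_negations(base), base.lower().count("moon"))              # 1 1
print(count_negations(workaround), workaround.lower().count("moon"))  # 3 3
```

Each repetition of the instruction puts the unwanted concept back into context, which is exactly the mechanism described above.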
44 changes: 44 additions & 0 deletions documents/patterns/point-the-target.md
@@ -0,0 +1,44 @@
---
authors: [juanmichelini]
---

# Point the Target

## Problem
Negative instructions activate the very concepts you're trying to avoid (see: negative-bleedthrough). Telling a model "don't include X" puts X front and center in its attention.

Consider listing the traditional planets but not the moon:

- ❌ **"List traditional planets but not the moon."** Fails: "moon" gets activated and often leaks into the output.
- ⚠️ **"List traditional planets but not the moon. No extra words, just the list."** Sometimes works, but it over-constrains the format just to suppress one concept; maybe you were fine with commentary.
- ✅ **"List visible planets from Earth and add the Sun."** Same specificity as the first prompt but no negation. Doesn't fail.

The second prompt is the most specific, but it over-constrains the solution space, and it adds even more negated context.

## Pattern
Replace negative instructions with positive descriptions of the target. Reframe the request so the unwanted concept never enters the context.

**Transform the framing, not the detail level:**
- "Don't use global variables" → "Use local variables and parameter passing"
- "Don't make it complex" → "Keep it focused on a single responsibility"
- "Don't write verbose code" → "Write concise, minimal code"
- "Don't use deprecated APIs" → "Use current APIs and modern idioms"
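
The table above can be sketched as a lint-style lookup. This is a toy illustration, not a real tool; the mapping and the `suggest` function name are assumptions:

```python
# Toy lookup of negative instructions to positive reframings,
# taken from the transformation table above.
REWRITES = {
    "don't use global variables": "Use local variables and parameter passing",
    "don't make it complex": "Keep it focused on a single responsibility",
    "don't write verbose code": "Write concise, minimal code",
    "don't use deprecated apis": "Use current APIs and modern idioms",
}

def suggest(instruction: str) -> str:
    """Return a positive reframing if one is known, else the input unchanged."""
    return REWRITES.get(instruction.strip().lower(), instruction)

print(suggest("Don't use global variables"))
# -> Use local variables and parameter passing
```

A real rewriter would need semantic understanding, but even a static checklist like this catches the most common negative phrasings in prompt templates.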

## Example

**Instead of:**
```
"Build a REST API. Don't use callbacks, don't nest routes deeply,
and don't put business logic in controllers."
```

**Use:**
```
"Build a REST API using async/await, flat route structure,
and a service layer for business logic."
```

Same constraints, but no negation. The model never activates the concepts you wanted to avoid.
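
For illustration, here is a minimal sketch of the shape the positive prompt describes: async handlers, a flat route table, and a service layer that holds the business logic. The names (`UserService`, the in-memory store, the route-table layout) are assumptions for the sketch, not part of the original prompt:

```python
import asyncio

class UserService:
    """Service layer: all business logic lives here, not in route handlers."""
    def __init__(self):
        self._users = {}
        self._next_id = 1

    async def create(self, name):
        user = {"id": self._next_id, "name": name}
        self._users[user["id"]] = user
        self._next_id += 1
        return user

    async def get(self, user_id):
        return self._users.get(user_id)

service = UserService()

# Flat route table: handlers are thin delegates, with no nesting and
# no business logic of their own.
routes = {
    ("POST", "/users"): service.create,
    ("GET", "/users/{id}"): service.get,
}

async def demo():
    alice = await routes[("POST", "/users")]("Alice")
    fetched = await routes[("GET", "/users/{id}")](alice["id"])
    print(alice, fetched)

asyncio.run(demo())
```

Notice the structure mirrors the positive prompt point for point, and nothing in the code (or the prompt) ever had to mention callbacks, nesting, or fat controllers.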

## How is this different from "be specific"?
Being specific means adding detail. Pointing the target means *choosing which concepts to activate*. Trying to solve negative bleedthrough with specificity alone usually increases the number of negations and over-constrains the solution space.
2 changes: 2 additions & 0 deletions documents/relationships.mmd
@@ -114,6 +114,8 @@ graph LR
%% Obstacle → Obstacle relationships (related)
obstacles/solution-fixation -->|related| obstacles/compliance-bias
obstacles/selective-hearing -->|related| obstacles/context-rot
obstacles/negative-bleedthrough -->|related| obstacles/selective-hearing
patterns/point-the-target -->|solves| obstacles/negative-bleedthrough

%% Obstacle → Anti-pattern relationships (related)
obstacles/obedient-contractor -->|related| anti-patterns/silent-misalignment