Wolf, Goat, Cabbage, and LLMs
The wolf, goat, and cabbage logic puzzle is well known, and any LLM can solve it. Or can it? Have a look at this very slightly tweaked puzzle:
A farmer needs to transport a wolf, a goat, and a cabbage across a river using a small boat. The boat can carry only the farmer and one other item (either the wolf, the goat, or the cabbage) at a time. However, if left together without the farmer's supervision: The wolf will eat the cabbage. The goat will eat the wolf. The challenge is to figure out how the farmer can get all three items across the river safely. What is the sequence of moves that the farmer needs to make?
All I’ve done is switch the roles of the wolf and the goat, and I’d expect anybody who can solve the original problem to have no trouble with this adjusted version. Yet every LLM out there (as of 2024-05-30) fails to solve it. They either leave the wolf alone with that tasty cabbage, or they take an invalid action such as teleporting the cabbage. Even trying to correct them by pointing out their mistakes is an exercise in futility: they’ll fix one thing and reintroduce another error, until eventually you’ve walked yourself in a circle.
I eventually solved it by forcing GPT-4 to output a Python program, then running that program and printing its output. You can see my work here: https://chatgpt.com/share/5129aa63-8d98-432a-bdab-5436a74a96df. You can also see how much tweaking I needed to do. The final code is simple enough:
from collections import deque

def river_crossing_solution():
    # Each state is (Farmer, Wolf, Goat, Cabbage): True = start bank, False = far bank
    start = (True, True, True, True)
    goal = (False, False, False, False)

    # States that break the (tweaked) rules: whenever the farmer is absent,
    # the goat eats the wolf and the wolf eats the cabbage
    forbidden = [
        (False, True, True, False),   # Wolf and goat alone on the start bank
        (True, False, False, True),   # Wolf and goat alone on the far bank
        (False, True, False, True),   # Wolf and cabbage alone on the start bank
        (True, False, True, False),   # Wolf and cabbage alone on the far bank
        (False, True, True, True),    # All three alone on the start bank
        (True, False, False, False),  # All three alone on the far bank
    ]

    index_of = {'wolf': 1, 'goat': 2, 'cabbage': 3}

    # Possible cargo for each crossing: one item, or nothing (farmer alone)
    possible_moves = [('wolf',), ('goat',), ('cabbage',), ()]

    # Breadth-first search over states, remembering the path to each one
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        current_state, path = queue.popleft()
        if current_state == goal:
            return path
        for move in possible_moves:
            # The farmer can only carry an item that is on his own bank
            if move and current_state[index_of[move[0]]] != current_state[0]:
                continue
            new_state = list(current_state)
            new_state[0] = not new_state[0]  # The farmer always crosses
            for item in move:
                new_state[index_of[item]] = not new_state[index_of[item]]
            new_state = tuple(new_state)
            if new_state not in forbidden and new_state not in visited:
                visited.add(new_state)
                queue.append((new_state, path + [move]))
    return "No solution found"

# Run the search and print the sequence of crossings
print(river_crossing_solution())
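Run it and, with the move ordering above, the breadth-first search returns the seven crossings in order:

[('wolf',), (), ('goat',), ('wolf',), ('cabbage',), (), ('wolf',)]

That is: take the wolf over, return alone, take the goat over, bring the wolf back, take the cabbage over, return alone, and finally take the wolf over again. It’s the classic solution with the roles swapped.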
Why am I posting about this? Because I have a pet theory that I’d like to share:
My theory on LLM architecture
As an avid go player with a PhD involving neural networks, I’m naturally familiar with the technical architecture of Google DeepMind’s AlphaGo, the program that trounced the world champion go player back in 2016. But I’ll do a brief recap here, since I suspect I’m in the minority.
The core design splits the problem into three parts: a policy network that proposes promising moves, a value network that judges how good a position is, and a search procedure (Monte Carlo Tree Search) that governs how the two are used. If you want a more detailed explanation then I recommend this article.
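To make that division of labour concrete, here is a deliberately toy sketch, nothing like AlphaGo’s real implementation: a “position” is just a number, the aim is to reach zero, the policy proposes a handful of candidate moves, the value function judges the resulting positions, and a shallow search combines the two.

def policy(position):
    # Propose a few candidate moves (the real policy network prunes
    # hundreds of legal go moves down to a handful of promising ones)
    return [-2, -1, 1, 2]

def value(position):
    # Judge how good a position is (the real value network estimates
    # the probability of winning from here)
    return -abs(position)  # closer to zero is better

def search(position, depth):
    # Combine the two: look ahead over policy-suggested moves and
    # score the leaf positions with the value function
    if depth == 0 or position == 0:
        return value(position)
    return max(search(position + move, depth - 1) for move in policy(position))

def best_move(position, depth=3):
    return max(policy(position), key=lambda m: search(position + m, depth - 1))

print(best_move(5))  # -2: the search steers the position toward zero

Neither component is much use alone: the value function by itself is greedy and short-sighted, and the search without a policy to prune it drowns in branching.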
There used to be a website called ‘Leela Zero Ply’ which dropped the search entirely and simply played the network’s top-rated move. Its output reminds me a lot of state-of-the-art LLMs. Superficially the moves produced by Leela Zero Ply look amazing, but dig carefully and they turn out to be built on a foundation of sand. You can tear Leela Zero Ply to pieces, provided you’re at least 6th dan and really take your time over every move.
The analogy to LLMs is that their output is totally plausible unless you know the topic intimately and read carefully what they said. Look closely, though, and you find they’re writing whatever sounds plausible, and any relationship with the facts is largely coincidental.
My test above, where I showed GPT-4 failing miserably at the wolf, goat, and cabbage problem, worked because the model has an enormous amount of training data telling it never to put a goat and a cabbage together. Then it sees a prompt saying the opposite. But recall that LLMs don’t encode their training data as a series of facts; they encode it as associations, and so the prompt cannot override those associations.
The Policy Network
The role of the policy network, and the tree search it guides, is to perform explicit logical manipulation of the data, i.e. search. In the case of Go that search is Monte Carlo Tree Search, but it needn’t be everywhere. The point, though, is that because mistakes by the value network are rare, the search can focus its effort and go very deep.
Phrased differently, this interaction is what lets an LLM perform symbolic manipulation, just as I did in finally solving the puzzle by getting GPT-4 to produce and execute a Python program. The value network’s counterpart is the model turning my prompt into a good program, and the policy network’s counterpart is the machinery that executes the Python code.
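As a sketch of that workaround in general form (ask_llm here is a hypothetical stand-in for whatever completion API you use), the pattern is: have the model write a program, then take the program’s output, not the model’s prose, as the answer.

import subprocess
import sys
import tempfile

def solve_with_code(ask_llm, problem):
    # ask_llm is a hypothetical stand-in for your model's completion API
    source = ask_llm(
        "Write a complete Python program that solves the following "
        "problem and prints the answer. Output only code.\n\n" + problem
    )
    # Write the generated program to a temporary file...
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    # ...and actually execute it: the program's printed output,
    # not the model's prose, is returned as the answer
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=30
    )
    return result.stdout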
I’d particularly highlight that the workaround I used is how most people patch over weaknesses in LLMs. They add a layer of indirection (LoRA, RAG) so that the value network is not left to solve the problem on its own.
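For RAG in particular, the indirection is easy to caricature: fetch the relevant facts first and put them in the prompt, so that the associations baked into the weights aren’t the only source of truth. Here’s a deliberately naive sketch; real systems rank by embedding similarity rather than word overlap.

def retrieve(query, documents, k=2):
    # Rank documents by crude word overlap with the query
    # (real retrievers use vector embeddings and nearest-neighbour search)
    query_words = set(query.lower().split())
    def overlap(doc):
        return len(query_words & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def augmented_prompt(query, documents):
    # Prepend the retrieved facts so the prompt, not the weights,
    # supplies the ground truth
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only the facts below.\n\n{context}\n\nQuestion: {query}"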
Conclusion
We can already do amazing things with LLMs. I think we still have another monumental advance coming, when some company (Meta?) works out how to properly integrate the policy and search side.