
Using a Greedy Solver as a Difficulty Baseline for Apple Game

Suppose we want to estimate how difficult an Apple Game board is. The most direct method is to let many human players play the board and average their scores, but this is expensive and slow. At the other extreme, we could run an exhaustive search and compute the true maximum score. In practice this is not realistic, because the number of possible move sequences grows too quickly, even on a single board.

So we needed something between these two extremes: a fast and consistent approximation that can assign one difficulty-related score to any board.

Our goal was not to build a perfect solver, nor to reproduce every detail of human thinking. We wanted a simple baseline that is cheap to run, easy to explain, and stable across many boards. When the score varies from board to board, that variation should come mainly from the structure of the board itself, not from a complicated evaluation method.

For this reason, we selected a greedy solver.

In algorithm design, a greedy method chooses the best-looking local action at each step, without searching deeply into future states. Our solver looks at the current board, finds all valid moves, selects one move by a fixed rule, removes it, and repeats this process until no valid move remains. There is no backtracking, no search tree, and no probabilistic sampling.

The move selection rule is also simple. At each step, the solver enumerates all rectangles whose cell sum is 10. These rectangles are valid moves. Among them, it prefers the move that contains the largest single cell value. The intuition is that large values are harder to combine flexibly, so removing them early often leaves the board in a more usable state. If there is a tie, the solver prefers the rectangle with fewer cells, because a smaller rectangle preserves more cells for later moves.
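This selection rule can be expressed as a single key comparison. Below is a minimal Python sketch, assuming a move is represented simply as the list of cell values inside its rectangle; the name `select_best_move` is illustrative, not part of the actual solver.

```python
def select_best_move(valid_moves):
    """Prefer the move containing the largest single cell value;
    break ties by preferring the rectangle with fewer cells."""
    # Negating the length makes "fewer cells" win among equal max values.
    return max(valid_moves, key=lambda move: (max(move), -len(move)))
```

For example, among the candidates `[1, 9]`, `[2, 3, 5]`, and `[4, 6]`, the rule picks `[1, 9]` because it contains the largest single value, 9.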

At first glance, this greedy behavior may look like a weakness. But for our purpose, it is actually useful. In some sense, the fact that it is greedy makes it closer to human play.

The Apple Game has a two-minute time limit. Under this constraint, almost nobody tries to solve the board by planning twenty moves ahead; most players look at most one or two moves ahead. Since what we want to measure is not mathematical optimality but perceived difficulty, a greedy approach is a reasonable approximation.

From the perspective of an average player, one main difficulty is not long-term strategy but simply recognizing the remaining possible moves. Many players already struggle to see all valid rectangles on the current board. In actual play, they often proceed in a greedy way: they start from the area they noticed first, make a move there, and then continue with nearby areas. They do not globally optimize the whole board at every turn.

Because of this, our solver behaves somewhat like a player who has played the Apple Game several times and has learned to score reasonably well. It is not an expert player, but it is also not random. It has perfect arithmetic and complete visibility of the board, but its decision rule is still myopic. This combination turned out to be a useful mechanical baseline.

The algorithm can be written as follows:

function SolveBoard(board): integer;
var
  validMoves: list of Move;
  bestMove: Move;
  totalScore: integer;
begin
  totalScore := 0;
  while HasValidMove(board) do
  begin
    validMoves := FindAllRectanglesSummingTo10(board);
    bestMove := SelectBestMove(validMoves);
    RemoveRectangle(board, bestMove);
    totalScore := totalScore + ScoreOf(bestMove);
  end;
  SolveBoard := totalScore;
end;

function SelectBestMove(validMoves): Move;
begin
  SelectBestMove := the move in validMoves with the maximum
    LargestCellValue(move); if tied, the minimum CellCount(move);
end;

This is the whole solver. It is deterministic, so the same board always produces the same result.
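The pseudocode above is small enough to make runnable. Below is a Python sketch under a few assumptions not stated explicitly so far: the board is a grid of digits in which cleared cells become 0, a move is any axis-aligned rectangle of the current grid whose cells sum to exactly 10, and the score of a move is the number of apples (non-zero cells) it removes. The names `find_moves` and `greedy_score` are illustrative.

```python
from itertools import product

def find_moves(board):
    """Enumerate all axis-aligned rectangles whose cell sum is exactly 10.
    Each move is a tuple (r1, c1, r2, c2) of inclusive corners."""
    rows, cols = len(board), len(board[0])
    moves = []
    for r1, c1 in product(range(rows), range(cols)):
        for r2 in range(r1, rows):
            for c2 in range(c1, cols):
                total = sum(board[r][c]
                            for r in range(r1, r2 + 1)
                            for c in range(c1, c2 + 1))
                if total == 10:
                    moves.append((r1, c1, r2, c2))
                elif total > 10:
                    break  # cell values are non-negative: widening only grows the sum
    return moves

def greedy_score(board):
    """Run the greedy loop: repeatedly pick the move containing the largest
    single cell value (ties: fewest cells), clear it, and count apples removed."""
    board = [row[:] for row in board]  # work on a copy

    def key(move):
        r1, c1, r2, c2 = move
        cells = [board[r][c]
                 for r in range(r1, r2 + 1)
                 for c in range(c1, c2 + 1)]
        # Larger max cell value first; among ties, fewer cells (hence negation).
        return (max(cells), -len(cells))

    score = 0
    while True:
        moves = find_moves(board)
        if not moves:
            break
        r1, c1, r2, c2 = max(moves, key=key)
        for r in range(r1, r2 + 1):
            for c in range(c1, c2 + 1):
                if board[r][c] > 0:
                    score += 1  # one point per apple removed (an assumption)
                board[r][c] = 0
    return score
```

Because Python's `max` returns the first of equally ranked candidates and the moves are enumerated in a fixed order, the sketch is deterministic in the same way as the pseudocode: the same board always yields the same score.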

The main advantages of this solver are speed and consistency. We can evaluate many boards quickly and assign a repeatable score to each one. If a board receives a high score even from this limited solver, it is likely to be genuinely easy. If the solver performs poorly, the board often has structural properties that also make it harder for human players: scattered small values, inefficient spacing, or few visible opportunities.

Of course, this solver has clear limitations. It is not optimal. Because it does not look ahead, it can miss cases where a weaker-looking move now sets up a much better sequence later. Some boards reward a non-intuitive sacrifice, much like a brilliant move in chess, and a greedy solver will never choose such a move on purpose. For that reason, the solver score should be interpreted as a practical lower bound on what is achievable, not as the true maximum score.

There are also edge cases where the heuristic becomes unstable. When many candidate moves are similar, the tie-breaking rule can influence the whole trajectory. On boards where success depends on preserving certain cells for later, the greedy choice can enter a dead end too early. These limitations are real, and they mean the greedy score is only a proxy for difficulty, not an absolute measurement.

Still, for the original purpose, this tradeoff is acceptable. We did not need a perfect player. We needed a simple and reproducible lens for comparing boards. The greedy solver serves this role well. It is transparent, deterministic, and cheap to execute.
