Anti-Patterns
Common testing mistakes to avoid
Learning from Mistakes
The fastest way to write better tests is to recognize what bad tests look like. This page catalogs testing anti-patterns we've encountered, explains why they cause problems, and shows how to fix them.
Sleeping Instead of Synchronizing
Using time.Sleep() to wait for asynchronous operations is the most common testing mistake. It makes tests slow and flaky: if the sleep is too short, the test fails intermittently; if it's too long, CI takes forever.
The fix depends on what you're waiting for. For most cases, require.Eventually from testify is the right tool. It polls a condition until it becomes true or times out, with clear failure messages.
The API is require.Eventually(t, condition, timeout, pollInterval, message). It checks the condition every poll interval until it returns true or the timeout expires. This is much cleaner than manual channel and select patterns.
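For example, here is a minimal self-contained sketch in which an atomic flag stands in for the asynchronous operation:

```go
package example

import (
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestAsyncWorkCompletes(t *testing.T) {
	var done atomic.Bool
	go func() {
		time.Sleep(20 * time.Millisecond) // stands in for real async work
		done.Store(true)
	}()

	// Polls every 10ms; fails only if 2s pass without the condition holding.
	// No guessing at sleep durations, and no wasted time once it succeeds.
	require.Eventually(t, func() bool { return done.Load() },
		2*time.Second, 10*time.Millisecond, "async work never completed")
}
```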
For cases where you need to make assertions inside the polling function, use require.EventuallyWithT. This gives you an *assert.CollectT that collects assertion failures without immediately failing the test:
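A sketch of the pattern, again with an atomic counter standing in for the async system:

```go
package example

import (
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestEventuallyWithAssertions(t *testing.T) {
	var count atomic.Int64
	go func() {
		time.Sleep(20 * time.Millisecond) // simulated propagation delay
		count.Store(3)
	}()

	// Assertions against c are collected on each attempt; the test fails
	// only if the last attempt before the timeout still has failures.
	require.EventuallyWithT(t, func(c *assert.CollectT) {
		assert.Equal(c, int64(3), count.Load())
	}, 2*time.Second, 10*time.Millisecond, "count never reached 3")
}
```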
This is particularly useful for integration tests that wait for data to propagate through async systems like message queues or eventually-consistent databases.
For time-dependent logic like expiration or scheduled jobs, inject a test clock rather than waiting for real time:
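A minimal sketch; the Clock interface and token type here are illustrative, and libraries such as github.com/jonboulle/clockwork provide the same seam off the shelf:

```go
package example

import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// Clock is the seam: production code calls clock.Now() instead of time.Now().
type Clock interface{ Now() time.Time }

// fakeClock is fully controlled by the test.
type fakeClock struct{ now time.Time }

func (f *fakeClock) Now() time.Time          { return f.now }
func (f *fakeClock) Advance(d time.Duration) { f.now = f.now.Add(d) }

type token struct {
	clock   Clock
	expires time.Time
}

func (tok *token) Valid() bool { return tok.clock.Now().Before(tok.expires) }

func TestTokenExpires(t *testing.T) {
	clk := &fakeClock{now: time.Now()}
	tok := &token{clock: clk, expires: clk.Now().Add(time.Hour)}

	require.True(t, tok.Valid())

	clk.Advance(2 * time.Hour) // jump past expiry instantly, no real waiting
	require.False(t, tok.Valid())
}
```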
Reserve manual channel patterns for cases where you need to verify specific synchronization behavior, like testing that a stop signal actually terminates a goroutine.
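A sketch of that one legitimate case (the worker loop is illustrative):

```go
package example

import (
	"testing"
	"time"
)

func TestStopTerminatesWorker(t *testing.T) {
	stop := make(chan struct{})
	done := make(chan struct{})

	go func() {
		defer close(done) // proves the goroutine actually exited
		for {
			select {
			case <-stop:
				return
			case <-time.After(10 * time.Millisecond):
				// simulated periodic work
			}
		}
	}()

	close(stop)

	select {
	case <-done:
		// the goroutine observed the stop signal and returned
	case <-time.After(2 * time.Second):
		t.Fatal("worker did not stop within 2s")
	}
}
```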
Testing Implementation Instead of Behavior
Tests that reach into internal state are brittle. They break when you refactor, even if the behavior stays the same.
Test the public API instead. If the behavior is correct, the implementation doesn't matter.
This anti-pattern is especially tempting when testing constructors. You want to verify that configuration was applied correctly, so you peek at internal fields:
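A sketch of the anti-pattern; the Buffer type is illustrative:

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// Buffer is illustrative; imagine it belongs to the package under test.
type Buffer struct {
	items      chan int
	dropOnFull bool
}

func NewBuffer(capacity int, dropOnFull bool) *Buffer {
	return &Buffer{items: make(chan int, capacity), dropOnFull: dropOnFull}
}

// Anti-pattern: the test reaches into unexported fields. It breaks if the
// fields are renamed or the channel becomes a ring buffer, even though
// callers observe identical behavior.
func TestNewBuffer_Internals(t *testing.T) {
	b := NewBuffer(100, true)
	require.Equal(t, 100, cap(b.items))
	require.True(t, b.dropOnFull)
}
```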
Instead, verify configuration through observable behavior. If the buffer has capacity 100, prove it by adding 100 elements without blocking. If drop is enabled, prove it by filling the buffer and observing that additional writes don't block.
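Sketched below, extending the illustrative Buffer from the previous example with Put and Get methods so the test can observe everything from the outside:

```go
// Put stores v. When the buffer is full it blocks, unless drop-on-full is
// enabled, in which case v is silently discarded.
func (b *Buffer) Put(v int) {
	if b.dropOnFull {
		select {
		case b.items <- v:
		default: // full: drop instead of blocking
		}
		return
	}
	b.items <- v
}

// Get returns the oldest value, or ok=false when the buffer is empty.
func (b *Buffer) Get() (int, bool) {
	select {
	case v := <-b.items:
		return v, true
	default:
		return 0, false
	}
}

func TestNewBuffer_Behavior(t *testing.T) {
	b := NewBuffer(100, true)

	// Capacity 100: all 100 writes return without blocking.
	for i := 0; i < 100; i++ {
		b.Put(i)
	}

	// Drop-on-full: the 101st write returns immediately; if dropping were
	// broken, this call would hang and the test would time out.
	b.Put(100)

	// Exactly the first 100 values come back out; the extra one was dropped.
	for i := 0; i < 100; i++ {
		v, ok := b.Get()
		require.True(t, ok)
		require.Equal(t, i, v)
	}
	_, ok := b.Get()
	require.False(t, ok, "dropped value should not appear")
}
```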
This test is longer but more valuable. It verifies actual behavior that matters to callers, and it won't break if you rename internal fields or change the underlying data structure.
Shared Mutable State
Tests that share mutable state affect each other. One test's leftovers become another test's pollution. This causes flaky failures that depend on test execution order.
Give each test its own isolated state:
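A sketch; the store type and newTestStore helper are illustrative:

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// store is illustrative; imagine any stateful component under test.
type store struct{ users map[string]string }

func (s *store) Add(name, role string)           { s.users[name] = role }
func (s *store) Role(name string) (string, bool) { r, ok := s.users[name]; return r, ok }

// newTestStore gives each test its own instance, so no state leaks between
// tests and execution order stops mattering.
func newTestStore(t *testing.T) *store {
	t.Helper()
	return &store{users: map[string]string{}}
}

func TestAddUser(t *testing.T) {
	t.Parallel()
	s := newTestStore(t) // private state
	s.Add("alice", "admin")
	role, ok := s.Role("alice")
	require.True(t, ok)
	require.Equal(t, "admin", role)
}

func TestUnknownUser(t *testing.T) {
	t.Parallel()
	s := newTestStore(t) // unaffected by whatever TestAddUser did
	_, ok := s.Role("alice")
	require.False(t, ok)
}
```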
Ignoring Setup Errors
Errors during test setup are easy to overlook but cause confusing failures later: the test fails with a seemingly unrelated error because a nil pointer from the failed setup gets used somewhere downstream.
Check errors immediately and fail with a clear message:
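A sketch using a config fixture:

```go
package example

import (
	"os"
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestConfigLoads(t *testing.T) {
	dir := t.TempDir() // cleaned up automatically

	path := filepath.Join(dir, "app.yaml")
	err := os.WriteFile(path, []byte("port: 8080\n"), 0o644)
	// Fail here, with context, rather than later with a baffling nil-pointer
	// error from code that assumed the fixture exists.
	require.NoError(t, err, "writing config fixture")

	data, err := os.ReadFile(path)
	require.NoError(t, err, "reading config fixture")
	require.Contains(t, string(data), "port: 8080")
}
```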
Hardcoded Identifiers
Hardcoded IDs like "user_123" work fine until two tests run in parallel and collide. Or until cleanup fails and the next test run finds stale data.
Generate unique identifiers for each test:
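One common approach, sketched with a hypothetical uniqueID helper built from t.Name() and a timestamp (a UUID library works just as well):

```go
package example

import (
	"fmt"
	"testing"
	"time"
)

// uniqueID is a hypothetical helper: the test name keeps IDs traceable, and
// the nanosecond timestamp keeps parallel runs and stale leftovers apart.
func uniqueID(t *testing.T) string {
	t.Helper()
	return fmt.Sprintf("%s_%d", t.Name(), time.Now().UnixNano())
}

func TestCreateUser(t *testing.T) {
	t.Parallel()
	userID := uniqueID(t) // e.g. "TestCreateUser_1712345678901234567"
	// ... create and exercise a user with userID instead of "user_123" ...
	_ = userID
}
```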
Over-Mocking
Mocks are useful but addictive. A test with five mocks might pass while the real system is completely broken, because the test only verifies that mocks were called correctly.
Use real implementations when practical. The test harness makes this easy:
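The harness API is project-specific, so here is a generic sketch instead: a tiny but real in-memory implementation replaces a pile of mock expectations.

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// Store is the dependency; the interface and types are illustrative.
type Store interface {
	Put(key, value string)
	Get(key string) (string, bool)
}

// memStore is small but real: it has actual behavior, so the test exercises
// the same semantics production code sees, not a script of expected calls.
type memStore struct{ m map[string]string }

func newMemStore() *memStore        { return &memStore{m: map[string]string{}} }
func (s *memStore) Put(k, v string) { s.m[k] = v }
func (s *memStore) Get(k string) (string, bool) {
	v, ok := s.m[k]
	return v, ok
}

// Greeter is the unit under test.
type Greeter struct{ store Store }

func (g *Greeter) Greet(user string) string {
	name, ok := g.store.Get(user)
	if !ok {
		return "hello, stranger"
	}
	return "hello, " + name
}

func TestGreeter(t *testing.T) {
	store := newMemStore()
	store.Put("u1", "Ada")

	g := &Greeter{store: store}
	require.Equal(t, "hello, Ada", g.Greet("u1"))
	require.Equal(t, "hello, stranger", g.Greet("u2"))
}
```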
Missing Subtests
Table-driven tests without t.Run() make failures hard to diagnose. You see a line number but not which case failed.
Wrap each case in t.Run():
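A standard table-driven sketch:

```go
package example

import (
	"strings"
	"testing"
)

func TestToUpper(t *testing.T) {
	cases := []struct {
		name string
		in   string
		want string
	}{
		{name: "empty string", in: "", want: ""},
		{name: "lowercase ascii", in: "abc", want: "ABC"},
		{name: "already upper", in: "ABC", want: "ABC"},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			// A failure now reads "TestToUpper/lowercase_ascii", and that one
			// case can be rerun with: go test -run 'TestToUpper/lowercase_ascii'
			if got := strings.ToUpper(tc.in); got != tc.want {
				t.Errorf("ToUpper(%q) = %q, want %q", tc.in, got, tc.want)
			}
		})
	}
}
```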
Forgetting t.Helper()
Without t.Helper(), test helpers that make assertions report failures at the wrong location. You see the line inside the helper instead of the line that called it.
Add t.Helper() as the first line of any helper that makes assertions:
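A sketch with a hypothetical requireStatus helper:

```go
package example

import "testing"

// requireStatus is a hypothetical assertion helper. t.Helper() makes a
// failure point at the caller's line, not at the Fatalf below.
func requireStatus(t *testing.T, got, want int) {
	t.Helper()
	if got != want {
		t.Fatalf("status = %d, want %d", got, want)
	}
}

func TestHandler(t *testing.T) {
	got := 200                 // stands in for a real handler call
	requireStatus(t, got, 200) // failures are reported at this line
}
```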
Redundant Test Cases
Each test case should verify a distinct behavior. If two test cases would fail for the same bug, one of them is redundant.
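Consider this pair, sketched with an illustrative conn type whose Close is guarded by sync.Once:

```go
package example

import (
	"sync"
	"testing"
)

type conn struct {
	once sync.Once
	ch   chan struct{}
}

func newConn() *conn   { return &conn{ch: make(chan struct{})} }
func (c *conn) Close() { c.once.Do(func() { close(c.ch) }) }

func TestCloseTwice(t *testing.T) {
	c := newConn()
	c.Close()
	c.Close() // would panic with "close of closed channel" without the Once
}

func TestCloseIsIdempotent(t *testing.T) {
	c := newConn()
	for i := 0; i < 3; i++ {
		c.Close() // exactly the same failure mode as TestCloseTwice
	}
}
```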
Both tests verify that calling Close() multiple times doesn't panic. If the sync.Once protection were removed, both would fail. Keep one and delete the other.
The fix is to ask: "What bug would cause this test to fail?" If two tests have the same answer, consolidate them. Each test case should target a specific failure mode.
The Common Thread
Most of these anti-patterns come from the same root cause: optimizing for writing the test quickly instead of maintaining it long-term. Taking an extra minute to generate unique IDs, check errors, and use proper synchronization saves hours of debugging flaky tests later.