Go’s concurrency model is genuinely elegant. Goroutines are cheap, channels provide safe communication, and the runtime handles scheduling transparently. The language makes concurrent code look simple.

That simplicity is a trap. Launching a goroutine takes one keyword; managing its lifecycle correctly takes real discipline. The bugs goroutines invite - goroutine leaks, race conditions, deadlocks - are among the hardest to debug in any language.

Here are the patterns developers get wrong most often.

Goroutine Leaks: The Silent Memory Drain

A goroutine leak is a goroutine that starts and never terminates. Each leaked goroutine holds its stack memory (2KB minimum, growing as needed) and any resources it holds references to. A service with goroutine leaks grows memory usage indefinitely until it crashes or gets restarted.

The most common cause: an unread channel.

// This goroutine leaks if nobody reads from results
func processData(data []string) {
    results := make(chan string)
    for _, d := range data {
        go func(d string) {
            results <- process(d) // blocks forever if nobody reads
        }(d)
    }
    // forgot to drain results channel
}

The fix requires ensuring goroutines can always exit:

func processData(ctx context.Context, data []string) []string {
    results := make(chan string, len(data)) // buffered so sends never block
    var wg sync.WaitGroup

    for _, d := range data {
        wg.Add(1)
        go func(d string) {
            defer wg.Done()
            select {
            case results <- process(d):
            case <-ctx.Done():
                return // exit if context cancelled
            }
        }(d)
    }

    go func() {
        wg.Wait()
        close(results)
    }()

    var out []string
    for r := range results {
        out = append(out, r)
    }
    return out
}

Rule: Every goroutine must have a clear exit condition. If a goroutine blocks on a channel send, someone must receive from that channel. If a goroutine blocks on a channel receive, the channel must be closed or a value must be sent.

Closing Channels From the Wrong End

Sending to a closed channel panics. Closing an already-closed channel panics. These rules lead to a common mistake: the receiver closes the channel instead of the sender.

// WRONG: receiver closes the channel
func consumer(ch chan int) {
    for {
        v := <-ch
        if v == 0 {
            close(ch) // panic if producer sends after this
            return
        }
        process(v)
    }
}

The correct pattern: only the producer (sender) closes a channel, and only when it is done sending.

// CORRECT: sender closes
func producer(ch chan int, data []int) {
    defer close(ch) // closed when function returns
    for _, v := range data {
        ch <- v
    }
}

func consumer(ch chan int) {
    for v := range ch { // loop exits when ch is closed
        process(v)
    }
}

When multiple goroutines write to a channel, have a single coordinator close it after all writers finish (the WaitGroup-plus-closer-goroutine pattern shown earlier), or guard the close with sync.Once so it runs exactly once.

WaitGroup Misuse

sync.WaitGroup is the standard tool for waiting on a group of goroutines. The most common mistake is calling wg.Add(1) inside the goroutine rather than before launching it.

// WRONG: race condition between Add and Wait
func wrong() {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        go func() {
            wg.Add(1) // too late - Wait might return before this runs
            defer wg.Done()
            doWork()
        }()
    }
    wg.Wait()
}

// CORRECT: Add before launch
func correct() {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            doWork()
        }()
    }
    wg.Wait()
}

Mutex Mistakes

Copying a mutex. A mutex has internal state. Copying it (passing by value, storing in a struct that gets copied) copies the state and breaks the mutex. Always use pointers to mutexes or embed the mutex in a struct that is used by pointer.

Returning references to protected data. Lock then defer unlock is the standard pattern, but it does not protect data that escapes the function:

// WRONG: returns a slice that aliases protected data
func (s *Store) GetAll() []Item {
    s.mu.Lock()
    defer s.mu.Unlock()
    // the lock is released when this returns, yet the caller
    // still holds a slice pointing into s.items
    return s.items
}

Returning a reference to protected data while holding the lock does not protect that data after the function returns. Return a copy.

Inconsistent locking. If 9 out of 10 places that access a shared variable use a mutex, the one place that does not causes a race condition. Use the race detector during testing:

go test -race ./...
go run -race main.go

The race detector adds real overhead (typically severalfold slowdowns and extra memory, which is why it is off by default), but it catches real races. Run it in CI.

Context Propagation

Context is how cancellation and deadlines propagate through a Go program. The most common mistake is ignoring context or not propagating it.

// WRONG: ignores cancellation
func (s *Service) FetchUser(id int) (*User, error) {
    return s.db.QueryUser(id) // what if the client disconnected?
}

// CORRECT: propagates context
func (s *Service) FetchUser(ctx context.Context, id int) (*User, error) {
    return s.db.QueryUserContext(ctx, id)
}

The rule: any function that does I/O, calls another service, or may take significant time should accept a context.Context as its first parameter.

Do not store context in a struct. Context is request-scoped. Storing it in a struct makes its lifetime unclear. Pass it as a function parameter.

The Select Statement and Default

select with a default case is non-blocking. Without default, it blocks until one case is ready. Mixing these up causes subtle bugs.

// Non-blocking send attempt
select {
case ch <- value:
    // sent
default:
    // channel full, drop or handle
}

// Block with timeout
select {
case result := <-ch:
    use(result)
case <-time.After(5 * time.Second):
    return errors.New("timeout")
case <-ctx.Done():
    return ctx.Err()
}

Note: before Go 1.23, time.After created a timer that was not garbage collected until it fired, which could pile up in tight loops. Current versions collect unfired timers, but reusing a single time.NewTimer still avoids allocating a new timer on every iteration.

The errgroup Pattern

For coordinating multiple goroutines with error handling, golang.org/x/sync/errgroup is cleaner than WaitGroup:

import "golang.org/x/sync/errgroup"

func fetchAll(ctx context.Context, ids []int) ([]User, error) {
    g, ctx := errgroup.WithContext(ctx)
    users := make([]User, len(ids))

    for i, id := range ids {
        i, id := i, id // capture loop variables (unnecessary as of Go 1.22's per-iteration variables, but harmless)
        g.Go(func() error {
            u, err := fetchUser(ctx, id)
            if err != nil {
                return err
            }
            users[i] = u
            return nil
        })
    }

    if err := g.Wait(); err != nil {
        return nil, err
    }
    return users, nil
}

errgroup cancels the context when any goroutine returns an error, propagating cancellation to all other goroutines in the group.

Bottom Line

Go’s concurrency primitives are powerful and easy to misuse. The fundamental rules: every goroutine must have a clear exit path, only senders close channels, always call WaitGroup.Add before launching goroutines, propagate context through your call stack, and run the race detector in CI. These are not advanced techniques - they are baseline correctness requirements for concurrent Go code.