Go Concurrency

Overview

Go's concurrency model is built around goroutines and channels, providing a simple yet powerful way to write concurrent programs. This guide covers everything from basic goroutines to advanced concurrency patterns and best practices.

Key Concepts:

  • Goroutines: Lightweight threads managed by the Go runtime
  • Channels: Typed conduits for communication between goroutines
  • Select: Multi-way communication control structure
  • Synchronization: WaitGroups, Mutexes, and other primitives

Basic Goroutines

Simple Goroutine

Goroutines are lightweight threads that run concurrently with the main program.

package main

import (
    "fmt"
    "time"
)

func say(s string) {
    for i := 0; i < 5; i++ {
        time.Sleep(100 * time.Millisecond)
        fmt.Println(s)
    }
}

func main() {
    go say("world") // Run in background
    say("hello")    // Run in the main goroutine
}

When to use: Start background tasks, handle multiple operations concurrently, improve performance for I/O-bound operations.

Anonymous Goroutines

Create goroutines inline for simple tasks.

func main() {
    go func() {
        fmt.Println("Anonymous goroutine")
    }()

    time.Sleep(time.Millisecond) // Give the goroutine a chance to run before main exits
}

Benefits: No need to define separate functions for simple operations, cleaner code for one-off tasks.

Goroutine with Parameters

Pass data to goroutines safely.

func worker(id int, jobs <-chan int, results chan<- int) {
for j := range jobs {
fmt.Printf("worker %d processing job %d\n", id, j)
time.Sleep(time.Second)
results <- j * 2
}
}

func main() {
jobs := make(chan int, 100)
results := make(chan int, 100)

// Start 3 workers
for w := 1; w <= 3; w++ {
go worker(w, jobs, results)
}

// Send jobs
for j := 1; j <= 9; j++ {
jobs <- j
}
close(jobs)

// Collect results
for a := 1; a <= 9; a++ {
<-results
}
}

Best Practice: Always pass data by value or through channels to avoid race conditions.
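
One classic race comes from a goroutine closure capturing the loop variable. A minimal sketch of the safe form, passing the variable as a parameter (before Go 1.22, all iterations shared a single i, so closures that captured it directly raced on one variable):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(n int) { // n is a copy of i, safe for this goroutine
            defer wg.Done()
            fmt.Println(n)
        }(i)
    }

    wg.Wait()
}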

Channels

Basic Channels

Channels are typed conduits for communication between goroutines.

func main() {
    ch := make(chan string)

    go func() {
        ch <- "Hello from goroutine"
    }()

    msg := <-ch
    fmt.Println(msg)
}

Key Points:

  • Channels are blocking by default (unbuffered)
  • Sending blocks until someone receives
  • Receiving blocks until someone sends

Buffered Channels

Buffered channels can hold multiple values before blocking.

func main() {
    ch := make(chan int, 2) // Buffer size of 2

    ch <- 1 // Doesn't block
    ch <- 2 // Doesn't block
    // A third send would block until a receive frees buffer space;
    // with no other goroutine receiving, it would deadlock here.

    fmt.Println(<-ch) // 1
    fmt.Println(<-ch) // 2
}

When to use: When you know the maximum number of values that will be sent, or to decouple sender and receiver timing.

Channel Direction

Specify channel direction for better API design and safety.

// Send-only channel: receiving from ch inside this function
// would not compile
func sendOnly(ch chan<- int) {
    ch <- 42
}

// Receive-only channel
func receiveOnly(ch <-chan int) {
    value := <-ch
    fmt.Println(value)
}

// Bidirectional channel (note: sending and then receiving in the
// same goroutine only works if the channel is buffered)
func bidirectional(ch chan int) {
    ch <- 42
    value := <-ch
    fmt.Println(value)
}

Benefits: Prevents accidental misuse at compile time and makes the intent of an API clearer.

Closing Channels

Close channels to signal completion and prevent deadlocks.

func producer(ch chan<- int) {
    for i := 0; i < 5; i++ {
        ch <- i
    }
    close(ch) // Signal no more values
}

func consumer(ch <-chan int) {
    for value := range ch { // Range until closed
        fmt.Println(value)
    }
}

func main() {
    ch := make(chan int)
    go producer(ch)
    consumer(ch)
}

Best Practices:

  • Only the sender should close channels
  • Check whether a channel is closed with the two-value receive: value, ok := <-ch (see the sketch below)
  • Use range to iterate until closed
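
A minimal sketch of the two-value receive form:

package main

import "fmt"

func main() {
    ch := make(chan int, 2)
    ch <- 1
    close(ch)

    v, ok := <-ch      // ok == true: a buffered value was still available
    fmt.Println(v, ok) // 1 true

    v, ok = <-ch       // ok == false: channel closed and drained
    fmt.Println(v, ok) // 0 false
}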

Synchronization Primitives

WaitGroup

Coordinate multiple goroutines and wait for completion.

package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done() // Signal completion when the function exits

    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup

    for i := 1; i <= 5; i++ {
        wg.Add(1) // Increment the counter before starting the goroutine
        go worker(i, &wg)
    }

    wg.Wait() // Wait for all workers to complete
    fmt.Println("All workers completed")
}

When to use: When you need to wait for multiple goroutines to finish before proceeding.

Mutex

Protect shared data from concurrent access.

type SafeCounter struct {
    mu    sync.Mutex
    count int
}

func (c *SafeCounter) Increment() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count++
}

func (c *SafeCounter) GetCount() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count
}

func main() {
    counter := &SafeCounter{}

    for i := 0; i < 1000; i++ {
        go counter.Increment()
    }

    time.Sleep(time.Second) // Crude; a sync.WaitGroup is the reliable way to wait
    fmt.Println(counter.GetCount()) // Should print 1000
}

Best Practices:

  • Always use defer to unlock mutexes
  • Keep critical sections as small as possible
  • Consider using channels instead when possible
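
For a counter this simple, the sync/atomic package (atomic.Int64, available since Go 1.19) avoids locks entirely; a minimal sketch:

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    var count atomic.Int64
    var wg sync.WaitGroup

    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            count.Add(1) // atomic increment, no mutex needed
        }()
    }

    wg.Wait()
    fmt.Println(count.Load()) // 1000
}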

RWMutex

Read-Write mutex for better performance with multiple readers.

type SafeMap struct {
    mu   sync.RWMutex
    data map[string]int
}

// NewSafeMap initializes the inner map; a zero-value SafeMap would
// panic on the first Set because the map would be nil.
func NewSafeMap() *SafeMap {
    return &SafeMap{data: make(map[string]int)}
}

func (sm *SafeMap) Set(key string, value int) {
    sm.mu.Lock()
    defer sm.mu.Unlock()
    sm.data[key] = value
}

func (sm *SafeMap) Get(key string) (int, bool) {
    sm.mu.RLock() // Read lock: multiple readers allowed
    defer sm.mu.RUnlock()
    value, exists := sm.data[key]
    return value, exists
}

func (sm *SafeMap) GetAll() map[string]int {
    sm.mu.RLock()
    defer sm.mu.RUnlock()

    // Return a copy so callers can't mutate the protected map
    result := make(map[string]int, len(sm.data))
    for k, v := range sm.data {
        result[k] = v
    }
    return result
}

When to use: When you have many readers and few writers; the shared read lock performs better than an exclusive Mutex under read-heavy load.

Select Statement

Basic Select

Wait on multiple channel operations at once; select blocks until one of its cases can proceed.

func main() {
    ch1 := make(chan string)
    ch2 := make(chan string)

    go func() {
        time.Sleep(time.Second)
        ch1 <- "one"
    }()

    go func() {
        time.Sleep(2 * time.Second)
        ch2 <- "two"
    }()

    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-ch1:
            fmt.Println("received", msg1)
        case msg2 := <-ch2:
            fmt.Println("received", msg2)
        }
    }
}

Select with Default

Non-blocking channel operations.

func main() {
    ch := make(chan string)

    select {
    case msg := <-ch:
        fmt.Println("received", msg)
    default:
        fmt.Println("no message received")
    }
}

Select with Timeout

Add timeouts to channel operations.

func main() {
    ch := make(chan string)

    go func() {
        time.Sleep(2 * time.Second)
        ch <- "result"
    }()

    select {
    case res := <-ch:
        fmt.Println(res)
    case <-time.After(1 * time.Second):
        fmt.Println("timeout")
    }
}

Advanced Patterns

Future/Promise

The Future/Promise pattern allows executing tasks in the background and obtaining results asynchronously without blocking the main thread. This is similar to async/await in JavaScript.

package main

import (
    "fmt"
    "time"
)

func Promise(task func() int) chan int {
    resultCh := make(chan int, 1) // Buffered so the goroutine never blocks on send

    go func() {
        result := task()   // Execute task
        resultCh <- result // Send result to channel
        close(resultCh)    // Close channel after completion
    }()

    return resultCh
}

func main() {
    // Task that takes 2 seconds
    longRunningTask := func() int {
        time.Sleep(2 * time.Second)
        return 42
    }

    // Start task via Promise
    future := Promise(longRunningTask)

    fmt.Println("Task started, can do other things...")

    // Wait for result
    result := <-future
    fmt.Println("Result:", result)
}

When to use: When you need to execute long-running tasks without blocking the main thread, handle multiple async operations, or implement non-blocking API calls.

Benefits: Non-blocking execution, improved responsiveness, better resource utilization.
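
Because each future is just a channel, several tasks can run concurrently and be awaited independently; a short sketch reusing the Promise helper above:

func main() {
    slow := func() int { time.Sleep(2 * time.Second); return 1 }
    fast := func() int { time.Sleep(time.Second); return 2 }

    f1 := Promise(slow)
    f2 := Promise(fast)

    // Both tasks run concurrently, so the total wait is ~2s, not 3s.
    fmt.Println(<-f1, <-f2)
}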

Future/Promise with Error Handling

Extended version that handles both results and errors.

type Result struct {
    Value int
    Error error
}

func Promise(task func() (int, error)) chan Result {
    resultCh := make(chan Result, 1)

    go func() {
        value, err := task()
        resultCh <- Result{Value: value, Error: err}
        close(resultCh)
    }()

    return resultCh
}

func main() {
    task := func() (int, error) {
        time.Sleep(time.Second)
        return 42, nil
    }

    future := Promise(task)
    result := <-future

    if result.Error != nil {
        fmt.Printf("Error: %v\n", result.Error)
    } else {
        fmt.Printf("Result: %d\n", result.Value)
    }
}

Generator Pattern

The Generator pattern provides a simple and convenient way to create data streams. It allows you to start a goroutine that generates values and passes them through a channel.

func generator(input []int) chan int {
    inputCh := make(chan int)

    go func() {
        defer close(inputCh)
        for _, data := range input {
            inputCh <- data
        }
    }()

    return inputCh
}

func main() {
    data := []int{1, 2, 3, 4, 5}
    stream := generator(data)

    for value := range stream {
        fmt.Println(value)
    }
}

When to use: When you need to create data streams, process large datasets incrementally, or implement iterator-like functionality.

Benefits: Memory efficient for large datasets, clean separation of concerns, easy to compose with other patterns.

Generator with Error Handling

Generator that can handle errors during data generation.

type DataResult struct {
    Data  int
    Error error
}

func generatorWithErrors(input []int) chan DataResult {
    resultCh := make(chan DataResult)

    go func() {
        defer close(resultCh)
        for _, data := range input {
            if data < 0 {
                resultCh <- DataResult{Error: fmt.Errorf("invalid data: %d", data)}
                continue
            }
            resultCh <- DataResult{Data: data}
        }
    }()

    return resultCh
}

func main() {
    data := []int{1, -2, 3, 4, -5}
    stream := generatorWithErrors(data)

    for result := range stream {
        if result.Error != nil {
            fmt.Printf("Error: %v\n", result.Error)
        } else {
            fmt.Printf("Data: %d\n", result.Data)
        }
    }
}

Semaphore Pattern

The Semaphore pattern provides a convenient tool for controlling the number of concurrently executing goroutines, protecting against system overload.

type Semaphore struct {
    sem chan struct{}
}

func NewSemaphore(maxConcurrent int) *Semaphore {
    return &Semaphore{
        sem: make(chan struct{}, maxConcurrent),
    }
}

func (s *Semaphore) Acquire() {
    s.sem <- struct{}{}
}

func (s *Semaphore) Release() {
    <-s.sem
}

func worker(id int, sem *Semaphore) {
    sem.Acquire()
    defer sem.Release()

    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    sem := NewSemaphore(3) // Maximum 3 concurrent workers
    var wg sync.WaitGroup

    for i := 1; i <= 10; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            worker(id, sem)
        }(i)
    }

    wg.Wait()
    fmt.Println("All workers completed")
}

When to use: When you need to limit concurrent operations (API calls, database connections, file operations), prevent resource exhaustion, or implement rate limiting.

Benefits: Prevents system overload, controls resource usage, improves stability under high load.
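
For production use, the golang.org/x/sync/semaphore package provides a weighted semaphore with context-aware acquisition; a minimal sketch of the same 3-worker limit:

import (
    "context"
    "fmt"
    "time"

    "golang.org/x/sync/semaphore"
)

func main() {
    ctx := context.Background()
    sem := semaphore.NewWeighted(3) // at most 3 units held at once

    for i := 1; i <= 10; i++ {
        if err := sem.Acquire(ctx, 1); err != nil {
            break // only fails if ctx is cancelled
        }
        go func(id int) {
            defer sem.Release(1)
            fmt.Printf("Worker %d running\n", id)
            time.Sleep(time.Second)
        }(i)
    }

    // Acquiring the full weight waits for all workers to finish.
    _ = sem.Acquire(ctx, 3)
}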

Worker Pool

Efficiently manage a pool of workers for processing tasks.

type Job struct {
    ID     int
    Data   string
    Result chan string
}

func worker(id int, jobs <-chan Job) {
    for job := range jobs {
        fmt.Printf("Worker %d processing job %d\n", id, job.ID)
        time.Sleep(time.Millisecond * 100) // Simulate work
        job.Result <- fmt.Sprintf("Job %d completed by worker %d", job.ID, id)
    }
}

func main() {
    const numWorkers = 3
    const numJobs = 10

    jobs := make(chan Job, numJobs)
    results := make(chan string, numJobs)

    // Start workers
    for w := 1; w <= numWorkers; w++ {
        go worker(w, jobs)
    }

    // Send jobs
    for j := 1; j <= numJobs; j++ {
        job := Job{
            ID:     j,
            Data:   fmt.Sprintf("data-%d", j),
            Result: results,
        }
        jobs <- job
    }
    close(jobs)

    // Collect results
    for a := 1; a <= numJobs; a++ {
        result := <-results
        fmt.Println(result)
    }
}

Benefits: Controls resource usage, prevents overwhelming the system, provides predictable performance.

Pipeline

Chain goroutines together to process data in stages.

func generator(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for _, n := range nums {
            out <- n
        }
    }()
    return out
}

func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            out <- n * n
        }
    }()
    return out
}

func filter(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            if n%2 == 0 {
                out <- n
            }
        }
    }()
    return out
}

func main() {
    // Pipeline: generator -> square -> filter
    nums := generator(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
    squares := square(nums)
    evens := filter(squares)

    for result := range evens {
        fmt.Println(result) // Prints: 4, 16, 36, 64, 100
    }
}

Benefits: Modular design, easy to test individual stages, can be composed flexibly.

Fan-Out, Fan-In

Distribute work across multiple workers and collect results.

func producer(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for _, n := range nums {
            out <- n
        }
    }()
    return out
}

func worker(id int, in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            time.Sleep(time.Millisecond * 100) // Simulate work
            out <- n * n
        }
    }()
    return out
}

func fanIn(inputs ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup

    for _, input := range inputs {
        wg.Add(1)
        go func(ch <-chan int) {
            defer wg.Done()
            for n := range ch {
                out <- n
            }
        }(input)
    }

    go func() {
        wg.Wait()
        close(out)
    }()

    return out
}

func main() {
    // Fan-out: distribute work across multiple workers
    input := producer(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

    worker1 := worker(1, input)
    worker2 := worker(2, input)
    worker3 := worker(3, input)

    // Fan-in: collect results from all workers
    results := fanIn(worker1, worker2, worker3)

    for result := range results {
        fmt.Println(result)
    }
}

Context for Cancellation

Use context to cancel operations and propagate cancellation signals.

func worker(ctx context.Context, id int, jobs <-chan int) {
    for {
        select {
        case job, ok := <-jobs:
            if !ok {
                return // jobs channel closed and drained
            }
            fmt.Printf("Worker %d processing job %d\n", id, job)
            time.Sleep(time.Millisecond * 100)
        case <-ctx.Done():
            fmt.Printf("Worker %d cancelled: %v\n", id, ctx.Err())
            return
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    jobs := make(chan int, 10)

    // Start workers
    for i := 1; i <= 3; i++ {
        go worker(ctx, i, jobs)
    }

    // Send jobs
    for j := 1; j <= 20; j++ {
        select {
        case jobs <- j:
        case <-ctx.Done():
            fmt.Println("Cancelled sending jobs")
            return
        }
    }
    close(jobs)

    // Wait for the timeout to fire
    <-ctx.Done()
    fmt.Println("All done")
}

Benefits: Graceful shutdown, resource cleanup, timeout handling, cancellation propagation.

Error Handling in Concurrency

Error Channels

Pass errors through channels for proper error handling.

type Result struct {
    Value int
    Error error
}

func worker(id int, jobs <-chan int, results chan<- Result) {
    for job := range jobs {
        if job < 0 {
            results <- Result{Error: fmt.Errorf("invalid job %d", job)}
            continue
        }

        time.Sleep(time.Millisecond * 100)
        results <- Result{Value: job * 2}
    }
}

func main() {
    jobs := make(chan int, 5)
    results := make(chan Result, 5)

    // Start worker
    go worker(1, jobs, results)

    // Send jobs including invalid ones
    jobs <- 1
    jobs <- -1 // Invalid
    jobs <- 3
    close(jobs)

    // Collect results
    for i := 0; i < 3; i++ {
        result := <-results
        if result.Error != nil {
            fmt.Printf("Error: %v\n", result.Error)
        } else {
            fmt.Printf("Result: %d\n", result.Value)
        }
    }
}

Error Groups

Use errgroup for coordinated error handling across multiple goroutines.

import "golang.org/x/sync/errgroup"

func worker(id int) error {
time.Sleep(time.Millisecond * 100)
if id == 3 {
return fmt.Errorf("worker %d failed", id)
}
fmt.Printf("Worker %d completed\n", id)
return nil
}

func main() {
var g errgroup.Group

for i := 1; i <= 5; i++ {
id := i // Capture loop variable
g.Go(func() error {
return worker(id)
})
}

if err := g.Wait(); err != nil {
fmt.Printf("Error: %v\n", err)
} else {
fmt.Println("All workers completed successfully")
}
}

When to use: When you need to execute multiple parallel tasks and ensure they all complete successfully, or when you want to handle the first error that occurs and cancel remaining operations.

Benefits: The first error is returned from Wait, error propagation stays clean, and with errgroup.WithContext the remaining goroutines are cancelled automatically on the first failure.
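
errgroup can also cap concurrency directly via (*Group).SetLimit (available in recent versions of golang.org/x/sync); a short sketch reusing worker from above:

func main() {
    var g errgroup.Group
    g.SetLimit(2) // at most 2 goroutines at once; Go blocks until a slot frees

    for i := 1; i <= 5; i++ {
        id := i
        g.Go(func() error {
            return worker(id)
        })
    }

    if err := g.Wait(); err != nil {
        fmt.Printf("Error: %v\n", err)
    }
}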

Error Groups with Context

Use errgroup with context for timeout and cancellation support.

import (
    "context"
    "golang.org/x/sync/errgroup"
)

func workerWithContext(ctx context.Context, id int) error {
    select {
    case <-time.After(time.Second):
        fmt.Printf("Worker %d completed\n", id)
        return nil
    case <-ctx.Done():
        return ctx.Err()
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
    defer cancel()

    g, ctx := errgroup.WithContext(ctx)

    for i := 1; i <= 5; i++ {
        id := i
        g.Go(func() error {
            return workerWithContext(ctx, id)
        })
    }

    if err := g.Wait(); err != nil {
        fmt.Printf("Error: %v\n", err)
    } else {
        fmt.Println("All workers completed successfully")
    }
}

Benefits: Automatic timeout handling, graceful cancellation, context propagation to all goroutines.

Performance and Best Practices

When to Use Concurrency

Important: Don't use concurrency patterns unnecessarily. Simple sequential execution is often better than over-engineered concurrent solutions.

// ❌ Bad: Unnecessary concurrency for simple operations
func badExample() {
    var wg sync.WaitGroup
    results := make([]int, 100)

    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(index int) {
            defer wg.Done()
            results[index] = index * 2 // Simple operation
        }(i)
    }
    wg.Wait()
}

// ✅ Good: Simple sequential execution
func goodExample() {
    results := make([]int, 100)
    for i := 0; i < 100; i++ {
        results[i] = i * 2 // Simple operation
    }
}

Use concurrency when:

  • Operations are I/O bound (network calls, file operations)
  • You have CPU-intensive tasks that can be parallelized
  • You need to handle multiple independent operations
  • You want to improve responsiveness in user-facing applications

Avoid concurrency when:

  • Operations are simple and fast
  • The overhead of goroutines exceeds the benefit
  • You're dealing with small datasets
  • The logic becomes unnecessarily complex

Avoid Goroutine Leaks

Always ensure goroutines terminate properly.

// ❌ Bad: Potential goroutine leak
func badExample() {
    ch := make(chan int)
    go func() {
        for {
            select {
            case <-ch:
                return
            default:
                // Busy-loops doing work; if nothing ever sends on ch,
                // this goroutine never terminates
            }
        }
    }()
}

// ✅ Good: Proper termination via a dedicated done channel
func goodExample() {
    ch := make(chan int)
    done := make(chan struct{})

    go func() {
        for {
            select {
            case <-ch:
                // Do work
            case <-done:
                return
            }
        }
    }()

    // Signal termination; close done in exactly one place to
    // avoid a double-close panic
    close(done)
}

Channel Buffering Guidelines

Choose appropriate buffer sizes based on your use case.

// Unbuffered: Synchronous communication
ch := make(chan int)

// Buffered: Asynchronous communication
ch := make(chan int, 10)

// Buffered with known capacity: Worker pools
ch := make(chan Job, numWorkers)

// Buffered with estimated capacity: Batch processing
ch := make(chan Result, estimatedBatchSize)

Memory Management

Be mindful of memory usage in concurrent programs.

// ❌ Bad: Buffering many large payloads holds ~1GB in flight
func badMemoryUsage() {
    ch := make(chan []byte, 1000)
    for i := 0; i < 1000; i++ {
        data := make([]byte, 1024*1024) // 1MB per message
        ch <- data
    }
}

// ✅ Good: Keep the buffer small so producer and consumer stay in step
// (a slice is already a small header, so sending *[]byte instead of
// []byte saves almost nothing)
func goodMemoryUsage(process func([]byte)) {
    ch := make(chan []byte, 4) // bounds the data in flight
    go func() {
        defer close(ch)
        for i := 0; i < 1000; i++ {
            ch <- make([]byte, 1024*1024)
        }
    }()
    for data := range ch {
        process(data)
    }
}

Race Condition Prevention

Use proper synchronization to prevent race conditions.

// ❌ Bad: Race condition
type Counter struct {
    count int
}

func (c *Counter) Increment() {
    c.count++ // Race condition!
}

// ✅ Good: Use mutex
type SafeCounter struct {
    mu    sync.Mutex
    count int
}

func (c *SafeCounter) Increment() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count++
}

// ✅ Better: Use channels (CSP style)
type ChannelCounter struct {
    count chan int
}

func NewChannelCounter() *ChannelCounter {
    c := &ChannelCounter{count: make(chan int, 1)}
    c.count <- 0
    return c
}

func (c *ChannelCounter) Increment() {
    current := <-c.count
    c.count <- current + 1
}

Debugging Concurrent Code

Race Detection

Use Go's built-in race detector.

go run -race main.go
go test -race ./...
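
A minimal program the detector flags; go run -race prints a DATA RACE report pointing at both unsynchronized accesses:

package main

import "fmt"

func main() {
    counter := 0
    done := make(chan struct{})

    go func() {
        counter++ // write from this goroutine...
        close(done)
    }()

    counter++ // ...races with this write from main
    <-done
    fmt.Println(counter)
}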

Deadlock Detection

Common deadlock scenarios and solutions.

// ❌ Deadlock: both goroutines block on their sends and never
// reach their receives
func deadlock() {
    ch1 := make(chan int)
    ch2 := make(chan int)

    go func() {
        ch1 <- 1 // blocks: no one is receiving on ch1
        <-ch2
    }()

    go func() {
        ch2 <- 1 // blocks: no one is receiving on ch2
        <-ch1
    }()
}

// ✅ Solution: order the operations so one side receives first
func noDeadlock() {
    ch1 := make(chan int)
    ch2 := make(chan int)

    go func() {
        ch1 <- 1
        <-ch2
    }()

    go func() {
        <-ch1 // Receive first
        ch2 <- 1
    }()
}

Profiling Concurrent Code

Use Go's profiling tools to identify bottlenecks.

import (
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof handlers
)

func main() {
    // Start pprof server
    go func() {
        http.ListenAndServe("localhost:6060", nil)
    }()

    // Your concurrent code here
}
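
With the server running, profiles can be pulled from the standard /debug/pprof endpoints, for example:

go tool pprof http://localhost:6060/debug/pprof/goroutine
go tool pprof http://localhost:6060/debug/pprof/profile?seconds=10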

Common Patterns and Idioms

Rate Limiting

Control the rate of operations.

func rateLimiter(limit time.Duration) chan struct{} {
    ch := make(chan struct{})
    // Note: this goroutine and its ticker run for the life of the
    // program; long-lived code should add a done channel to stop them.
    go func() {
        ticker := time.NewTicker(limit)
        defer ticker.Stop()
        for range ticker.C {
            ch <- struct{}{}
        }
    }()
    return ch
}

func main() {
    limiter := rateLimiter(100 * time.Millisecond)

    for i := 0; i < 10; i++ {
        <-limiter
        fmt.Printf("Request %d\n", i)
    }
}
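
For production rate limiting, golang.org/x/time/rate implements a token bucket with burst support and context-aware waiting; a minimal sketch:

import (
    "context"
    "fmt"

    "golang.org/x/time/rate"
)

func main() {
    limiter := rate.NewLimiter(rate.Limit(10), 1) // 10 events/sec, burst of 1
    ctx := context.Background()

    for i := 0; i < 10; i++ {
        if err := limiter.Wait(ctx); err != nil {
            break // only fails if ctx is cancelled
        }
        fmt.Printf("Request %d\n", i)
    }
}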

Circuit Breaker

Handle failures gracefully.

type CircuitBreaker struct {
    failures    int
    lastFailure time.Time
    mu          sync.Mutex
}

func (cb *CircuitBreaker) Call(fn func() error) error {
    cb.mu.Lock()
    if cb.failures >= 3 && time.Since(cb.lastFailure) < time.Second*5 {
        cb.mu.Unlock()
        return fmt.Errorf("circuit breaker open")
    }
    cb.mu.Unlock()

    err := fn()

    cb.mu.Lock()
    if err != nil {
        cb.failures++
        cb.lastFailure = time.Now()
    } else {
        cb.failures = 0
    }
    cb.mu.Unlock()

    return err
}
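
A usage sketch, with a hypothetical always-failing call standing in for a real network operation:

func main() {
    var cb CircuitBreaker

    flakyCall := func() error { // hypothetical failing operation
        return fmt.Errorf("connection refused")
    }

    for i := 0; i < 5; i++ {
        err := cb.Call(flakyCall)
        fmt.Printf("attempt %d: %v\n", i, err)
    }
    // After 3 failures, Call short-circuits with "circuit breaker open"
    // until 5 seconds have passed since the last failure.
}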

Pub/Sub Pattern

Implement publish-subscribe pattern.

type PubSub struct {
    subscribers map[string][]chan interface{}
    mu          sync.RWMutex
}

func NewPubSub() *PubSub {
    return &PubSub{
        subscribers: make(map[string][]chan interface{}),
    }
}

func (ps *PubSub) Subscribe(topic string) <-chan interface{} {
    ps.mu.Lock()
    defer ps.mu.Unlock()

    ch := make(chan interface{}, 1)
    ps.subscribers[topic] = append(ps.subscribers[topic], ch)
    return ch
}

func (ps *PubSub) Publish(topic string, data interface{}) {
    ps.mu.RLock()
    defer ps.mu.RUnlock()

    for _, ch := range ps.subscribers[topic] {
        select {
        case ch <- data:
        default:
            // Channel is full, skip
        }
    }
}
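
A usage sketch wiring one subscriber to a topic; the buffer of 1 in Subscribe lets a single Publish succeed without a concurrent reader:

func main() {
    ps := NewPubSub()
    ch := ps.Subscribe("orders")

    ps.Publish("orders", "order #1 created")

    fmt.Println(<-ch) // order #1 created
}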

Testing Concurrent Code

Testing Goroutines

Test concurrent code effectively.

func TestWorker(t *testing.T) {
    jobs := make(chan int, 5)
    results := make(chan int, 5)

    // Start worker
    go worker(1, jobs, results)

    // Send test jobs
    jobs <- 1
    jobs <- 2
    close(jobs)

    // Collect results
    result1 := <-results
    result2 := <-results

    if result1 != 2 || result2 != 4 {
        t.Errorf("Expected 2,4 got %d,%d", result1, result2)
    }
}
}

Benchmarking Concurrent Code

Measure performance of concurrent operations.

func BenchmarkWorkerPool(b *testing.B) {
    jobs := make(chan int, 100)
    results := make(chan int, 100)

    // Start workers
    for i := 0; i < 4; i++ {
        go worker(i, jobs, results)
    }

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        jobs <- i
        <-results
    }
}

Summary

Go's concurrency model provides powerful tools for writing concurrent programs:

Core Concepts:

  • Goroutines: Lightweight threads for concurrent execution
  • Channels: Communication mechanism between goroutines
  • Select: Multi-way communication control
  • Synchronization: WaitGroups, Mutexes, and other primitives

Best Practices:

  • Use channels for communication, mutexes for shared state
  • Always ensure goroutines terminate properly
  • Handle errors appropriately in concurrent code
  • Use context for cancellation and timeouts
  • Test concurrent code thoroughly
  • Use race detection during development

Advanced Patterns:

  • Worker pools for controlled concurrency
  • Pipelines for data processing
  • Fan-out/Fan-in for work distribution
  • Future/Promise for async task execution
  • Generator pattern for data streams
  • Semaphore for concurrency control
  • Circuit breakers for fault tolerance
  • Pub/Sub for decoupled communication

Remember: "Don't communicate by sharing memory; share memory by communicating." - Go's concurrency philosophy.