005 · HTTP · CLIENT · SERVER

# Client-Server Architecture

The fundamental request-response model everything else builds on.

If you are new here: In client–server architecture, programs on user devices (clients) ask programs you operate (servers) to do work: read data, check passwords, charge cards. The network sits in the middle. Almost every web app, mobile API, and B2B integration is a variation on this pattern.

| Role | Examples | Typical responsibility |
| --- | --- | --- |
| Client | Browser, mobile app, CLI, partner’s server | UI, retries, tokens, request shaping |
| Server | App process behind nginx/ALB | Business rules, authz, database access |

## The Problem

You shipped a static marketing site as a bundle of HTML and CSS. It loads fast because the browser fetches files and renders them locally. Then product asks for accounts, carts, and inventory — and suddenly you need state and logic somewhere other than the user's laptop.

Client-server architecture is the default answer: programs on user devices (clients) talk over the network to programs on machines you control (servers) that hold data and enforce rules. Everything else — load balancers, caches, microservices — is elaboration on this spine.

In plain terms: the client asks; the server is the source of truth for data you do not want scattered on laptops.

Analogy: A waiter (client) walks to the kitchen (server) with your order — diners do not rummage in the fridge; the kitchen enforces food safety and inventory rules.

Tiny example: Your phone’s app (client) sends GET /me with a Bearer token; the API (server) verifies the token against Postgres and returns JSON — the phone never connects to the database directly.
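The client side of that tiny example can be sketched with Python's standard library. The URL and token are placeholders for illustration, and the request is only constructed, never sent:

```python
from urllib.request import Request

# Hypothetical endpoint and token -- placeholders, not a real API.
req = Request(
    "https://api.example.com/me",
    headers={"Authorization": "Bearer <token>", "Accept": "application/json"},
)

# The client only shapes the request; the server owns the data.
print(req.get_method(), req.selector)   # GET /me
print(req.get_header("Authorization"))  # Bearer <token>
```

Passing this `Request` to `urllib.request.urlopen` would perform the actual network round trip.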

## A single request

The client opens a connection (often TCP + TLS) and sends a request: method, path, headers, maybe a body. The diagram shows that first hop — browser, the network, and the app server listening on a port.

Raw HTTP shape (simplified): After TLS is established, the bytes on the wire look conceptually like this:

```
GET /api/v1/profile HTTP/1.1
Host: api.example.com
Accept: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...
```

The server parses the request line (GET, path, HTTP version), headers, then (for POST/PUT) a body — often JSON.
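A toy parser illustrates those three parsing steps. This is a sketch that assumes a well-formed request; a real server handles folding, encodings, and many more edge cases:

```python
def parse_request(raw: str):
    """Split a raw HTTP/1.1 request into request line, headers, and body (toy version)."""
    head, _, body = raw.partition("\r\n\r\n")          # blank line ends the headers
    request_line, *header_lines = head.split("\r\n")
    method, path, version = request_line.split(" ")    # e.g. GET /api/v1/profile HTTP/1.1
    headers = dict(line.split(": ", 1) for line in header_lines)
    return method, path, version, headers, body

raw = (
    "GET /api/v1/profile HTTP/1.1\r\n"
    "Host: api.example.com\r\n"
    "Accept: application/json\r\n"
    "\r\n"
)
method, path, version, headers, body = parse_request(raw)
print(method, path, headers["Host"])  # GET /api/v1/profile api.example.com
```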

## Processing on the server

The server accepts the connection, parses the request, and runs your application code. That code might query PostgreSQL, call an internal service, read from Redis, or run pure business rules.
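One way to picture that application layer: a route table mapping (method, path) to a handler function. This is an illustrative sketch, not any particular framework's API, and the handlers return plain dicts instead of hitting a real database:

```python
# Toy route table: (method, path) -> handler. A real handler would query
# PostgreSQL, call an internal service, or read from Redis.
def get_profile(request):
    return {"status": 200, "body": {"id": "user_42", "plan": "pro"}}

ROUTES = {("GET", "/api/v1/profile"): get_profile}

def handle(method, path, request=None):
    """Dispatch a parsed request to its handler, or 404 if no route matches."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return {"status": 404, "body": {"error": "not found"}}
    return handler(request)

print(handle("GET", "/api/v1/profile")["status"])  # 200
print(handle("POST", "/nope")["status"])           # 404
```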

## The response

The server assembles a response: status code, headers, body — and the client renders a page or hands JSON to your app.

Example JSON response:

```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Cache-Control: private, max-age=0

{"id":"user_42","display_name":"Sam","plan":"pro"}
```

The response travels the same logical path in reverse. For HTTPS, the tunnel already exists; the payload is encrypted on the wire. The browser or mobile app updates the UI, stores cookies or tokens, and the user sees a result.
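Assembling that response can be sketched as a pure function over status, headers, and body. Again a toy: it skips chunked transfer, content negotiation, and other real-server concerns:

```python
import json

def build_response(status: int, reason: str, headers: dict, body: dict) -> bytes:
    """Serialize a status line, headers, and JSON body into raw HTTP/1.1 bytes."""
    payload = json.dumps(body, separators=(",", ":")).encode("utf-8")
    headers = {**headers, "Content-Length": str(len(payload))}
    head = f"HTTP/1.1 {status} {reason}\r\n" + "".join(
        f"{k}: {v}\r\n" for k, v in headers.items()
    )
    return head.encode("ascii") + b"\r\n" + payload

raw = build_response(200, "OK", {"Content-Type": "application/json"}, {"id": "user_42"})
print(raw.decode().splitlines()[0])  # HTTP/1.1 200 OK
```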
 
None of this requires magic — it requires clear contracts: HTTP semantics, authentication, and idempotency when clients retry.
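The idempotency point can be demonstrated with a toy server that deduplicates on an idempotency key, so a retried charge is applied once. The key and amounts are invented; real APIs name the header differently:

```python
# Server-side dedup: the client resends the same idempotency key on retry,
# so the charge runs once even if the first response was lost in transit.
processed = {}   # idempotency_key -> stored response
charges = []     # side effects actually performed

def charge(key: str, amount: int):
    if key in processed:          # retry: replay the stored response
        return processed[key]
    charges.append(amount)        # first attempt: do the work
    processed[key] = {"status": 201, "charged": amount}
    return processed[key]

first = charge("key-abc", 500)
retry = charge("key-abc", 500)    # client retried after a timeout
print(len(charges), first == retry)  # 1 True
```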
 
## Many clients, one server
 
<FrameMarker frameId="many-clients" />
 
In production, "the server" is often one VM or container listening on a port — and **every** concurrent user shares that process's CPU, memory, and file descriptors. A thousand tabs hitting one box means a thousand open connections and scheduling contention.
 
This is the first scalability wall: not "the network is slow," but **one machine can only do so much work per second**.
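Back-of-the-envelope capacity math makes that wall concrete. With hypothetical numbers (8 workers, 50 ms average service time per request), the throughput ceiling is roughly workers divided by service time:

```python
# Rough ceiling: concurrent workers divided by how long each request holds one.
workers = 8             # hypothetical worker threads/processes on the box
service_time_s = 0.050  # hypothetical 50 ms of server work per request

max_rps = workers / service_time_s
print(max_rps)  # 160.0 requests/second before queueing begins
```

Past that point, requests queue and latency climbs even though nothing is "broken."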
 
## What breaks under load
 
<FrameMarker frameId="what-breaks" />
 
When traffic spikes, something concrete gives way: CPU pegged at 100%, memory pressure and garbage collection pauses, **connection pools** to the database exhausted, or the kernel refusing new sockets. "Slow" is a symptom — the metric tells you whether you are compute-bound, memory-bound, or connection-bound.
 
Blindly adding CPU without fixing N+1 queries or pool sizes often wastes money.
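The N+1 point in miniature: a fake database that only counts round trips shows why batching beats adding CPU. The tables and query strings are invented for illustration:

```python
query_count = 0

def db_query(sql, *params):
    """Fake database call that just counts round trips and returns stub rows."""
    global query_count
    query_count += 1
    return [{"id": p} for p in params] or [{"id": i} for i in range(100)]

# N+1 pattern: one query for the list, then one more per row.
users = db_query("SELECT id FROM users")
for u in users:
    db_query("SELECT * FROM orders WHERE user_id = ?", u["id"])
n_plus_one = query_count

# Batched: one query for the list, one IN (...) query for all the orders.
query_count = 0
users = db_query("SELECT id FROM users")
db_query("SELECT * FROM orders WHERE user_id IN (...)", *[u["id"] for u in users])
print(n_plus_one, query_count)  # 101 2
```

Same data fetched, 101 round trips versus 2 — no amount of extra CPU on the app server closes that gap.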
 
## What comes next
 
<FrameMarker frameId="preview" />
 
The usual evolution: **stateless** app servers behind a **load balancer**, a shared database or cache tier, and autoscaling on the app layer. The next lessons in this track unpack HTTP, TLS, and load balancing — but they all assume this client-server spine.
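Because stateless app servers make any instance interchangeable, the load balancer's core job can be sketched in a few lines of round-robin. The instance names are invented:

```python
from itertools import cycle

# Stateless app servers: any instance can serve any request.
servers = cycle(["app-1", "app-2", "app-3"])  # hypothetical instance names

def route(request_path: str) -> str:
    """Round-robin: hand the next request to the next server in the ring."""
    return next(servers)

assigned = [route(f"/r/{i}") for i in range(6)]
print(assigned)  # ['app-1', 'app-2', 'app-3', 'app-1', 'app-2', 'app-3']
```

Real balancers add health checks, connection draining, and weighting — but the interchangeability of stateless servers is what makes any of it possible.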
 
### Why this matters for you
 
Before you draw 47 microservices on a whiteboard, be fluent in the round trip: request in, work on the server, response out. Every optimization — caching, CDNs, connection pooling — plugs into that path. Client-server is not legacy; it is the shape of the web.