Architecting a Feature Before Anyone Opens an Editor
A PM, an engineer, and a designer walk into a planning meeting and someone says, "We need this feature on iOS, Android, and the web." That isn't the setup for a joke; it's the start of a multi-day architectural discussion.
I wrote previously about the cost of skipping architecture reviews. This is the other side of that conversation: what actually happens in a good one. I'm going to walk through a multi-platform feature from database to API to client and lay out the decisions I'd work through before anyone opens an editor.
Start with the data
The first question to ask is what the data looks like. Relational vs. non-relational is a structural decision. If the data has clear relationships between entities, if you need joins, if referential integrity matters, if you're going to query the data in ways you can't fully predict yet, a relational database is probably the right call. PostgreSQL is my default, and I'm not apologetic about that. It handles most workloads well, it scales better than most teams will ever need, and the ecosystem is super mature.
Non-relational makes sense when the shape of the data isn't uniform, when you're storing documents or event streams or configuration blobs that don't map cleanly to rows and columns. But I've seen teams reach for MongoDB because "it's flexible" and then spend months fighting that flexibility when they realize they need to query across document boundaries or enforce constraints the database was never designed to enforce. Flexibility is a great feature, right up until you need structure the database can't give you.
The data model drives everything downstream. Get this wrong and you'll feel it in every query, every migration, and every report someone asks for six months later that you didn't anticipate.
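To make the referential-integrity point concrete, here's a minimal sketch using Python's built-in SQLite driver as a stand-in for PostgreSQL. The `users` and `policies` tables are hypothetical, chosen to match the example later in this post; the point is that the database itself, not application code, rejects a row that points at a user who doesn't exist.

```python
import sqlite3

# In-memory SQLite stands in for PostgreSQL; table and column names
# (users, policies) are illustrative, not a real schema.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default

conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE
    )
""")
conn.execute("""
    CREATE TABLE policies (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        name TEXT NOT NULL
    )
""")

conn.execute("INSERT INTO users (id, email) VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO policies (user_id, name) VALUES (1, 'home')")

# The database rejects a policy pointing at a user that doesn't exist.
try:
    conn.execute("INSERT INTO policies (user_id, name) VALUES (99, 'orphan')")
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True
```

This is the structure a document store won't enforce for you, which is exactly the tradeoff the MongoDB teams above ran into.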
Design the API like a contract
For a multi-platform feature, the REST API is the shared language between your backend and every client that consumes it. Three different platforms are going to depend on the shape of your responses, and changing that shape later means three separate client updates and a coordination problem you didn't budget for.
Start with resource modeling. What are the nouns? What are the relationships between them? Your URL structure should reflect the domain model, not the database schema. /users/{id}/policies is a relationship that makes sense to the consumer. /policy_user_junction_records is a database leak.
Version from day one, even if you think you won't need it. /v1/ in the path costs you nothing now and saves you a painful migration when you discover your initial response shape doesn't work for a use case you hadn't considered. And you will have use cases you hadn't considered.
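Both ideas, resource-oriented paths and day-one versioning, fit in a few lines. This route helper is hypothetical (the function name and version constant are mine, not from any framework), but it shows the shape: the path reads as a domain relationship, and the version prefix is baked in from the first release.

```python
# Hypothetical helper: versioned, resource-oriented paths that mirror
# the domain model, not the database schema.
API_VERSION = "v1"

def resource_path(*segments) -> str:
    """Build a path like /v1/users/42/policies from domain nouns."""
    return "/" + "/".join([API_VERSION, *map(str, segments)])

# Reads as a relationship a consumer understands,
# not a leaked junction table:
path = resource_path("users", 42, "policies")
```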
Error contracts matter as much as success responses. Every client needs to know what a 400 looks like compared to a 422 or a 500, and the response body needs to be actionable. A generic "something went wrong" message helps, as long as you only care about knowing if something went wrong. A response that tells the client which field failed validation and why turns a debugging session into a one-line fix.
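Here's one reasonable shape for that actionable error body. The field names (`code`, `message`, `errors`) are a convention I'm assuming for illustration, not something the clients are forced into; if you'd rather not invent a shape, RFC 9457 "problem details" defines a standard one.

```python
# Sketch of an actionable 422 body. Field names are one reasonable
# convention, not a standard this post prescribes.
def validation_error(field: str, reason: str) -> dict:
    """Shape an error response so the client knows exactly what to fix."""
    return {
        "code": "validation_failed",
        "message": f"Validation failed for '{field}'",
        "errors": [{"field": field, "reason": reason}],
    }

body = validation_error("email", "must be a valid email address")
```

Whatever shape you pick, pick it once, document it, and return it consistently from every endpoint, because three client teams are going to write parsers against it.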
And if a list endpoint could ever return more than a screenful of data, paginate it from the start. Cursor-based if the dataset is large or changes frequently, offset-based if the data is stable and you need random page access. Retrofitting pagination onto an endpoint that returns unpaginated arrays is one of those changes that breaks every client simultaneously.
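A minimal cursor-based sketch, assuming the rows come back already ordered by `id` (in practice this is an indexed `WHERE id > cursor ORDER BY id LIMIT n` query, not a list scan): the cursor is just the last-seen id, so inserts and deletes between requests can't shift the page boundaries the way offsets do.

```python
# Minimal cursor pagination over ordered rows. "rows" stands in for
# an indexed, ordered query; the cursor is the last-seen id.
def paginate(rows, cursor=None, limit=2):
    start = 0
    if cursor is not None:
        # Skip everything up to and including the cursor id.
        start = next(i + 1 for i, r in enumerate(rows) if r["id"] == cursor)
    page = rows[start:start + limit]
    has_more = start + limit < len(rows)
    next_cursor = page[-1]["id"] if page and has_more else None
    return {"items": page, "next_cursor": next_cursor}

rows = [{"id": i} for i in range(1, 6)]
first = paginate(rows)             # ids 1, 2
second = paginate(rows, cursor=2)  # ids 3, 4
```

The client's contract is simple: keep passing back `next_cursor` until it comes back empty.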
Polling or websockets
Not every feature needs real-time updates, and not every real-time need requires a persistent connection.
Polling works when the data changes infrequently and the client just needs to check periodically. A 30-second interval on a status endpoint is simple, reliable, and cache-friendly. It puts the complexity on the client side, which is easier to debug and deploy than managing a socket server.
Websockets make sense when the data changes frequently, latency matters, and the user should see updates within seconds. Chat, live dashboards, collaborative editing, real-time notifications, and so on. If the user is actively watching the screen and the data is actively changing, a persistent connection is worth the infrastructure cost.
The mistake I see most often is teams reaching for websockets because they sound more modern, without accounting for the operational weight. Websocket connections need to be managed, reconnected on failure, authenticated independently, and load-balanced differently than stateless HTTP requests. If polling at a reasonable interval solves your problem, then poll. You can always add websockets later if the use case demands it, but you cannot easily walk it back once clients depend on the push behavior.
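The client-side complexity that polling buys you is small enough to sketch in full. This is a hedged example, not a prescription: `fetch_status` stands in for your real HTTP call, and the sleep function is injectable so the loop can be tested without waiting. The one refinement worth copying is the backoff on errors, so a struggling backend isn't hammered at the normal interval.

```python
import time

# Poll a status endpoint at a base interval, backing off on failures.
# "fetch_status" is a stand-in for the real HTTP call.
def poll(fetch_status, base_interval=30, max_interval=300, sleep=time.sleep):
    interval = base_interval
    while True:
        try:
            status = fetch_status()
            interval = base_interval  # reset backoff after a success
            if status == "done":
                return status
        except Exception:
            interval = min(interval * 2, max_interval)  # back off on errors
        sleep(interval)

# Simulated run: two "pending" checks, then "done"; no real sleeping.
calls = iter(["pending", "pending", "done"])
slept = []
result = poll(lambda: next(calls), base_interval=1, sleep=slept.append)
```

Compare that to a socket server's reconnect logic, heartbeats, and auth handshake, and "simple, reliable, cache-friendly" starts to look like an understatement.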
There's also a third option that comes up more than it should: WebRTC. On the web, WebRTC is fantastic. The browser gives you a native API for peer-to-peer audio, video, and data channels, and for use cases like video calls or screen sharing it's the right tool. But on mobile, WebRTC is a different animal. There's no native SDK on iOS or Android, so you're pulling in Google's WebRTC framework, which adds meaningful binary size, introduces platform-specific quirks around camera and microphone permissions, background execution, and call lifecycle, and creates a maintenance surface that's fundamentally different from your web implementation. I've seen teams choose WebRTC for a feature because it worked great in the browser prototype, only to spend weeks wrestling with the mobile side just to reach parity. If your real-time needs are data-only (status updates, notifications, live feeds), websockets are almost always the simpler path on mobile. Save WebRTC for when you genuinely need peer-to-peer media, and go in knowing the mobile cost is real.
Platform constraints
When you're shipping across iOS, Android, and the web, each platform brings constraints that affect your architecture whether you plan for them or not.
On iOS, App Transport Security (ATS) requires HTTPS with TLS 1.2 or higher, forward secrecy, and specific cipher suites, and this is not optional. If your backend doesn't meet ATS requirements, your iOS app will not connect. You can request exceptions in the Info.plist, but Apple scrutinizes those during App Store review, and "our backend doesn't support modern TLS" is not an argument they find compelling. Build to ATS spec from the start and this becomes invisible.
Android has its own network security configuration that blocks cleartext traffic by default in newer API levels. Similar principle, different mechanism. And on the web, CORS policies determine which origins can call your API from a browser. If you're serving a web app from a different domain than your API, the CORS headers need to be configured correctly or the browser will reject the requests before they ever reach your server.
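The CORS piece reduces to a server-side decision about which origins get which headers. The header names below are standard CORS; the allowlist, origin, and function shape are illustrative assumptions, since in practice this lives in your framework or reverse-proxy config rather than hand-rolled code.

```python
# Sketch of the server side of a CORS preflight. Header names are
# standard; the allowlist and function shape are illustrative.
ALLOWED_ORIGINS = {"https://app.example.com"}

def preflight_headers(origin: str) -> dict:
    if origin not in ALLOWED_ORIGINS:
        return {}  # no CORS headers -> the browser blocks the request
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE",
        "Access-Control-Allow-Headers": "Authorization, Content-Type",
        "Access-Control-Max-Age": "600",  # let the browser cache the preflight
    }
```

Note that the rejection case returns nothing: CORS failures happen silently in the browser, which is exactly why they're miserable to debug after the fact.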
None of these are edge cases you can deal with later. They're platform requirements that exist whether you architect for them or not. Discovering them after the API is built means retrofitting, and retrofitting security and networking requirements is never as clean as building them in from the start.
Encryption is not one decision
When somebody says "we need encryption," that's at least three separate conversations.
Start with what's in transit. TLS 1.2 minimum, TLS 1.3 preferred. This covers the connection between client and API. If you've already built to ATS spec, you've handled most of this. Consider certificate pinning on mobile if the threat model warrants it, but understand the tradeoff: a pinned certificate that expires without a corresponding client update means your app stops working for every user who hasn't updated.
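On the backend side, "TLS 1.2 minimum" is a one-line policy if your stack exposes it. In Python's standard library it looks like this; the same floor exists in most HTTP stacks under a different name.

```python
import ssl

# A client context that refuses anything below TLS 1.2 and negotiates
# 1.3 when both sides support it.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() already verifies certificates and hostnames.
# Pinning, if your threat model calls for it, is an extra check on the
# server's certificate after the handshake, with the update-coordination
# tradeoff described above.
```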
I learned this one the hard way. At a previous client I was rebuilding the mobile apps for a medical monitoring device, a camera that streams video over a local network for caregivers of people with epilepsy and infants. I pushed hard for TLS 1.3 from the start. The business side pushed back because they just wanted to move fast. The conversation I kept having was some version of "it works fine now, why would we slow down to change the encryption?" And the answer was always the same: because these devices transmit medical video of vulnerable people in their homes, and "works fine now" is not a security posture. We got it done, but the amount of time I spent making the case for something that should have been a given is time I could have spent on actual product work. If you're building anything that touches health data, financial data, or anything a reasonable person would consider private, start with the strongest encryption your stack supports. You will never regret over-investing in security on day one, but you will always regret under-investing in it.
Then there's encryption at rest. How is the data encrypted in the database? Most managed services (RDS, Cloud SQL, Azure SQL) offer encryption at rest, but "offer" and "enabled" are different things. Verify it's turned on. If you're storing files, documents, images, or exports, those need at-rest encryption too, whether that's S3 server-side encryption, Azure Blob encryption, or whatever your cloud provider supports.
And then there's field-level encryption, which is the conversation most teams skip entirely. If you're storing PII, financial records, or health information, there may be regulatory or compliance reasons to encrypt specific fields beyond what database-level encryption provides. This adds complexity to your application layer because you're encrypting and decrypting in code, but it also means a database breach doesn't expose your most sensitive data in plain text. You've seen Inception? You're Leonardo DiCaprio, and your data is Cillian Murphy. Sometimes you need to go deeper.
Key management ties all of this together. Where do the encryption keys live? How are they rotated? Who has access? A managed KMS (be that AWS KMS, Azure Key Vault, GCP Cloud KMS, etc.) is the right answer for almost every team. Rolling your own key management is one of those decisions that sounds reasonable in a planning meeting and becomes a total PITA the first time you need to rotate a key in production. Don't skimp here: do it right and never look back.
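One piece of rotation is worth sketching because it's bookkeeping, not cryptography: tag every encrypted value with the id of the key version that produced it. All names here are hypothetical, the ciphertext is opaque bytes from your KMS, and no actual encryption happens in this sketch; the point is that rotation becomes "new writes use the new key, old records re-encrypt lazily" instead of "re-encrypt everything tonight."

```python
from dataclasses import dataclass

# Bookkeeping only, no real cryptography. The keys themselves live in a
# managed KMS; records just remember which key version encrypted them.
@dataclass
class EncryptedField:
    key_id: str        # which KMS key version produced this ciphertext
    ciphertext: bytes  # opaque bytes from the KMS, never decoded here

ACTIVE_KEY_ID = "kms-key-v2"  # hypothetical current key version

def wrap(ciphertext: bytes) -> EncryptedField:
    """New writes are always tagged with the active key version."""
    return EncryptedField(key_id=ACTIVE_KEY_ID, ciphertext=ciphertext)

def needs_reencryption(field: EncryptedField) -> bool:
    """Old records get re-encrypted lazily as they're read or written."""
    return field.key_id != ACTIVE_KEY_ID
```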
This is the work
Every decision in this post happens before the first pull request. That's not overhead, it's software craftsmanship. Measure twice and cut once, right?
The alternative is making these decisions one production incident at a time while three platform teams try to coordinate fixes they didn't plan for. I've watched that play out, and the cost is always the same: far more time spent fixing the architecture nobody designed than would have been spent designing it in the first place.
If you read my post on architecture reviews and thought "sure, but what would I even put in the review," this is it. Data modeling, API contracts, communication patterns, platform constraints, and encryption. Not a slide deck, not a rubber stamp, but a set of real decisions that determine whether the feature is maintainable or whether it becomes the thing everyone dreads touching six months from now.
That's what makes a great software team.