
PostgreSQL buckles at 100M rows.

JSONB columns balloon in size. Vacuuming stalls. Index bloat becomes a full-time job. It was built for structured rows — not 850 KB JSON blobs at scale.

Record type: EMR Patient Chart
Raw size: 850 KB
Hydrous compressed: ~255 KB gzip
Composite indexes: 0 — forever
The Problem

We kept hitting

the same wall.

Every project reached a point where storing large JSON records got painful. Firestore document limits. MongoDB costs exploding at scale. DynamoDB complexity. There had to be a better way.

Storage cost comparison
MongoDB Atlas (1B × 1KB): $460/mo
DynamoDB (1B × 1KB): $280/mo
Firestore (1B docs): $180/mo
Hydrous (1B × 850KB): $3/mo
* 150KB avg gzip compressed, Archive class blended
Platform Capabilities

Everything your app needs, built in.

Hydrous is a managed database platform purpose-built for massive-scale JSON records.

🪣 Buckets — CORE DB
Store, manage & query billion-scale JSON records with date-encoded IDs and GCS-native compression.
1B+ records · 72% compressed · $0.02/GB GCS cost
260201-rec_01A · 847 KB · compressed
260201-rec_02B · 312 KB · compressed
260115-rec_09X · 1.1 MB · archived
251230-rec_44Z · 620 KB · coldline
Path: projects/my-app/buckets/orders/records/26/02/01/
Max records: 1B+ · Record size: 1 MB · Compression: 72%
🔐 Authentication — AUTH
First-class email/password auth with sessions, refresh tokens, rate limiting, and full user management.
→ POST /auth/:key/signup
← 200 { token, refreshToken }
→ POST /auth/:key/signin
← 200 { session, user }
→ POST /auth/:key/session/validate
← 200 { valid: true }
ada@hydrous.dev · admin
bob@hydrous.dev · user
cat@hydrous.dev · user
bcrypt·12 · JWT sessions · rate limits · email verify
Session TTL: 24h · Refresh: 30d · bcrypt: cost·12
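Because auth is plain HTTP, a client needs no SDK — it just builds the request. A minimal sketch (the endpoint shape comes from the card above; `buildSignupRequest`, `baseUrl`, and `authKey` are illustrative names, not part of a published client):

```javascript
// Sketch: assemble the signup call for the POST /auth/:key/signup endpoint
// listed above. Returns the url and fetch options; the caller runs
// fetch(url, options) and reads { token, refreshToken } from the response.
function buildSignupRequest(baseUrl, authKey, email, password) {
  return {
    url: `${baseUrl}/auth/${authKey}/signup`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ email, password }),
    },
  };
}
```

Usage: `const { url, options } = buildSignupRequest(...); const res = await fetch(url, options);`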
📦 File Storage — STORAGE
General-purpose file storage with direct-to-GCS uploads, public/private visibility, and signed URLs.
📄 report.pdf · 4.2 MB · PRIVATE
🖼 logo.png · 89 KB · PUBLIC
📊 export.csv · 12 MB · PRIVATE
🌐 avatar.webp · 210 KB · PUBLIC
Max file: 500 MB · Batch: 50 files · Direct PUT: zero-proxy
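The limits above can be enforced client-side before requesting signed URLs. A small sketch (`validateBatch` is an illustrative helper, not part of the Hydrous API):

```javascript
// Sketch: pre-flight validation of the storage limits listed above
// (500 MB per file, 50 files per batch) before asking for signed URLs.
const MAX_FILE_BYTES = 500 * 1024 * 1024;
const MAX_BATCH = 50;

function validateBatch(files) {
  if (files.length > MAX_BATCH) {
    throw new Error(`batch exceeds ${MAX_BATCH} files`);
  }
  for (const f of files) {
    if (f.size > MAX_FILE_BYTES) {
      throw new Error(`${f.name} exceeds 500 MB`);
    }
  }
  return true;
}
```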
📊 BigQuery Analytics — ANALYTICS
Run full SQL analytics over your GCS buckets directly — zero ETL, no data duplication, live results.
SELECT field, COUNT(*) as count
FROM `my-proj.hydrous.orders`
WHERE date > '2026-01-01'
GROUP BY field
ORDER BY count DESC
LIMIT 100
Zero-ETL · reads GCS directly · ✓ Standard SQL
ETL: none · SQL: Standard · Data: live
Query Engine — QUERIES
4-path intelligent router that picks the cheapest query path — GCS-first, Firestore-minimal by design.
queryRecords() — 4-path intelligent router
Path A — GCS folder scan · timeScope only · 0 Firestore reads
Path B — Firestore + GCS · time + filters · minimal reads
Path C — Firestore cache · filters only · cached 30s
Path D — monthly GCS walk · no scope · 0 Firestore reads
getRecord → zero Firestore reads · compute path from ID
Paths: 4 · Cache TTL: 30s · Concurrency: 20
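The routing rules above amount to a single dispatch on the query shape. A sketch (illustrative only — `pickQueryPath` is not the actual Hydrous internals):

```javascript
// Sketch: choose the cheapest of the four paths described above
// from whether the query carries a time scope and/or field filters.
function pickQueryPath({ timeScope, filters }) {
  const hasFilters = Array.isArray(filters) && filters.length > 0;
  if (timeScope && !hasFilters) return "A"; // GCS folder scan, 0 Firestore reads
  if (timeScope && hasFilters)  return "B"; // Firestore index + GCS, minimal reads
  if (hasFilters)               return "C"; // Firestore cache, 30s TTL
  return "D";                               // monthly GCS walk, 0 Firestore reads
}
```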
🧠 Analytics Engine — ENGINE
Rich in-app analytics with timeSeries, distribution, topN, stats, multiMetric, and crossBucket queries.
Time Series · Distribution · Top N · Field Stats · Multi-Metric · Cross-Bucket
POST /api/analytics/:bucketKey/:key → read-only
Query types: 10 · Max limit: 1,000 · Access: read-only
Zero-ETL · GCS-native · Firestore-indexed · Built for 1B+ records
Cost Analysis

Built for scale. Priced for reality.

Most databases charge per document read or scale storage costs linearly with record size. Hydrous stores records as GCS blobs — paying object-storage prices, not database prices.

[Interactive cost calculator: adjust record size and record count to compare monthly cost across Hydrous DB, Firestore, MongoDB Atlas, Postgres / RDS, and DynamoDB — Hydrous comes out cheaper on average against all four.]
* Estimates based on published pricing. Costs vary by usage pattern, region, and tier. Hydrous cost assumes GCS Standard with gzip compression (avg 72% reduction).
Scalability Cliff
[Chart: cost growth as record count scales 1000× — monthly cost at 1 MB record size, 1M → 1B records (log scale), comparing Hydrous, Firestore, MongoDB Atlas, Postgres (RDS), and DynamoDB.]
Why the gap exists
GCS blob storage, not database rows

Hydrous pays ~$0.02/GB/month with no per-read pricing. Competitors charge per document read — at 1B records, that delta is enormous.

gzip compression is always on

Every record is compressed before upload. JSON typically compresses 60–80%, making effective storage cost 3–5× lower than the nominal size.

Zero-Firestore reads on getRecord

The GCS path is derived from the record ID in memory. No index lookup, no read unit consumed — just one network call to GCS.

Lifecycle tiering to Archive

Records age into Nearline → Coldline → Archive automatically. At 1B records, this drops storage cost from $2,860 to $708/mo.
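The tiering math can be sketched with GCS list prices per GB/month; the tier mix below is an invented example, not Hydrous's actual lifecycle policy:

```javascript
// GCS list prices per GB/month by storage class.
const PRICE = { standard: 0.020, nearline: 0.010, coldline: 0.004, archive: 0.0012 };

// Blend storage cost across tiers. mix maps tier -> fraction of data (sums to 1).
function blendedMonthlyCost(totalGB, mix) {
  return Object.entries(mix)
    .reduce((sum, [tier, frac]) => sum + totalGB * frac * PRICE[tier], 0);
}

// 1B records at ~143 KB compressed ≈ 143,000 GB.
const allStandard = blendedMonthlyCost(143000, { standard: 1 });
const tiered = blendedMonthlyCost(143000, {
  standard: 0.1, nearline: 0.15, coldline: 0.25, archive: 0.5,
});
```

All-Standard works out to $2,860/mo; this example mix lands in the same ballpark as the $708 figure above — the exact number depends on the real age distribution of your records.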

◉ Live Architecture Tour

The record that
never touches a database

Phase 1 · Ingestion → Phase 2 · Retrieval → Phase 3 · At Scale → Phase 4 · Smart Query
Pipeline: API (request in) → Gzip (compress) → Route (ID → path) → GCS (upload)
Phase 1 · Ingestion

Every medical record starts as raw JSON. We stamp it with a time-encoded ID, shrink it with gzip, and drop it straight to cloud storage — zero database involved.

Architecture Overview
Phase 1 · Ingestion — A record arrives. Watch it get prepared, compressed, and stored.
Phase 2 · Retrieval — Fetching a record without ever touching a database.
Phase 3 · At Scale — One billion records. The math that changes everything.
Phase 4 · Smart Query — Query an entire month of records. Still zero Firestore reads.
Quick Start

REST API.
No SDK required.

Every operation is a plain HTTP call. Any language, any runtime. Your API key and bucket key are the only credentials you need.

Base URL: BASE_URL/api/:bucketKey/:apiKey
Auth: API key in URL path — no headers needed
Format: application/json request & response
Batch: up to 500 records per call
// Insert a new record — auto-generates a date-prefixed ID
const res = await fetch(
  `${BASE_URL}/api/${bucketKey}/${apiKey}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      values: { name: "Alice", score: 42 },
      queryableFields: ["name", "score"]
    })
  }
);
const { data } = await res.json();

// data.record.id → "260601-rec_01JA2XYZ"
REST · JSON · No SDK
Node.js · Python · Go · curl · any
FAQ

Real questions.
Direct answers.

Mostly the "but why not just use X" questions that every developer asks before switching.

vs Firestore
vs MongoDB
vs Postgres/RDS
Architecture
Performance

Firestore charges per document read. At 300 KB–1 MB per record and 1B records, that's financially catastrophic. Hydrous stores the actual blobs in GCS ($0.02/GB, no per-read fee) and uses Firestore only for the tiny index document — paying Firestore prices only for metadata, not payloads. At 1B records, the monthly cost difference is over $59,000.

Both MongoDB and Postgres store full document payloads inside the database engine. At 1 MB records, you're paying full database storage and compute pricing for bytes that could live in object storage at 1/10th the cost. Hydrous separates the index (Firestore, tiny) from the payload (GCS, cheap). You get queryability without paying database-storage prices for large blobs.

Every record ID is date-prefixed: 260601-rec_01JA2XYZ. The full GCS path is deterministically computed from that ID in memory — no network call, no lookup. The server builds the path, downloads the blob from GCS, decompresses it, and returns it. One network call total. Firestore is only consulted for filter queries, never for direct record fetches.

projects/{projectId}/buckets/{bucketKey}/records/{YY}/{MM}/{DD}/{recordId}.json.gz
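Given that template, path derivation is pure string work — a sketch (assuming the YYMMDD prefix shown above; `recordPath` is an illustrative name):

```javascript
// Sketch: derive the GCS object path from a date-prefixed record ID,
// following the template above. No network call, no index lookup.
function recordPath(projectId, bucketKey, recordId) {
  // The date prefix YYMMDD is the first six characters of the ID.
  const yy = recordId.slice(0, 2);
  const mm = recordId.slice(2, 4);
  const dd = recordId.slice(4, 6);
  return `projects/${projectId}/buckets/${bucketKey}/records/${yy}/${mm}/${dd}/${recordId}.json.gz`;
}
```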

Queryable fields are the fields you declare at write time that get indexed in Firestore. Firestore indexing is what makes filter queries fast. The 3-field limit exists because each additional Firestore-indexed field increases write cost and index storage. For most workloads, 3 queryable fields is more than enough — time-scoped queries use GCS folder structure and need zero Firestore reads at all.

No. The crash-safety system handles this. Every write starts by setting _repairNeeded: true in the Firestore index doc. Only after the GCS blob is confirmed uploaded does the flag clear. A scheduled reconciliation job scans for stale _repairNeeded flags, checks whether the blob exists, and either completes the write or removes the orphaned index entry. The GCS blob is always the source of truth — the index can always be rebuilt from it.
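The write ordering can be simulated with in-memory stand-ins for the Firestore index and the GCS blob store (a sketch — the real system runs reconciliation on a schedule, and these structures are invented for illustration):

```javascript
// In-memory stand-ins: recordId -> index doc, recordId -> blob payload.
const index = new Map();
const blobs = new Map();

function writeRecord(id, payload, { failUpload = false } = {}) {
  index.set(id, { _repairNeeded: true });   // 1. mark intent before touching GCS
  if (failUpload) return;                   //    simulate a crash mid-write
  blobs.set(id, payload);                   // 2. upload the blob
  index.set(id, { _repairNeeded: false });  // 3. clear the flag only after upload
}

function reconcile() {
  for (const [id, doc] of index) {
    if (!doc._repairNeeded) continue;
    if (blobs.has(id)) index.set(id, { _repairNeeded: false }); // complete the write
    else index.delete(id);                                      // drop the orphan
  }
}
```

After a simulated crash, `reconcile()` removes the orphaned index entry while completed writes keep their cleared flag — the blob store remains the source of truth.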

Every record is gzip-compressed unconditionally before upload (ALWAYS_COMPRESS = true). JSON typically compresses 60–80%, so a 1 MB record becomes ~200–400 KB stored. You cannot opt out per-record, but you wouldn't want to — it's the primary reason GCS storage costs stay so low. On download, the magic-byte detector (0x1f 0x8b header) auto-decompresses. Legacy uncompressed blobs are handled transparently.

Hydration runs at 20 concurrent GCS downloads (DEFAULT_HYDRATION_CONCURRENCY). Each download is cache-checked first — if the record is in the in-process LRU cache (1,000 entries), zero GCS reads are made for it. Records not in cache are fetched in parallel batches of 20. The query result is assembled and returned. For very hot data, consider Redis as a shared cache layer — zero code changes required in the operations layer.
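The cache-first, batched-parallel pattern described above can be sketched as follows (`hydrate` and `fetchBlob` are illustrative names; a plain Map stands in for the LRU cache):

```javascript
// Sketch: check the in-process cache first, then fetch misses from GCS
// in parallel batches. The batch size mirrors the concurrency of 20.
async function hydrate(ids, cache, fetchBlob, concurrency = 20) {
  const out = new Map();
  const misses = ids.filter((id) => {
    if (cache.has(id)) { out.set(id, cache.get(id)); return false; }
    return true;
  });
  for (let i = 0; i < misses.length; i += concurrency) {
    const batch = misses.slice(i, i + concurrency);
    const results = await Promise.all(batch.map(fetchBlob));
    batch.forEach((id, j) => { out.set(id, results[j]); cache.set(id, results[j]); });
  }
  return out;
}
```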

Yes. The built-in Analytics Engine (analytics.js) handles timeSeries, distribution, topN, stats, multiMetric, and crossBucket queries directly over your record data. BigQuery is the additional layer for raw SQL access and large-scale aggregations that need columnar performance. For most app-level analytics needs, the built-in engine is sufficient and cheaper.

Pricing

Credit-based.
Pay exactly for what you use.

1 Hydrous Credit (HC) = $0.00001. Every operation costs a precise number of credits. No surprise bills — the calculator below shows your exact cost.

Cost Calculator — adjust the inputs to estimate your monthly bill
Record size: 300 KB
Saves per day: 1,000
Gets per day: 5,000
Queries per day: 200
Total stored records: 1.0M
Monthly active users: 5,000
Monthly Breakdown (based on the inputs above)
Save (300 KB → 84 KB gz) × 30,000 — $2.40 (240K HC)
Get record (GCS path from ID) × 150,000 — $3.00 (300K HC)
Query (50 results avg) × 6,000 — $3.18 (318K HC)
Storage (82,031 MB compressed) — $1.23 (123K HC)
Estimated monthly: $25.00 (minimum monthly applies · $25/mo floor)
1 HC = $0.00001 · Billed to credit balance · Unused credits roll over
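The credit arithmetic is direct: dollars = credits × $0.00001. For example, the save line above (240K HC for 30,000 saves) implies 8 HC per save — a quick sketch:

```javascript
// 1 Hydrous Credit (HC) in dollars, per the pricing above.
const HC_USD = 0.00001;
const toUSD = (credits) => credits * HC_USD;

// From the breakdown above: 240K HC across 30,000 saves → 8 HC per save.
const perSaveHC = 240000 / 30000;
```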
Community · Active now

Built with the people who use it.

Every shipped feature started as a community thread. Our team reads every post, replies to every question, and builds what you actually need.

2,400+
community members
340+
feature requests
98%
team response rate
<24h
avg first response
Feature Request · In Progress
Redis distributed cache support
At 50+ Cloud Run instances our per-instance LRU cache hit rate is near zero. Would love a shared Redis layer behind the same cache interface.
priya_builds · 3d ago · 18 replies · #performance #caching
Kwame · Core Team (TEAM): This is Priority 1 on our roadmap — zero changes to operations.js required, we just swap the cache interface. ETA: next sprint.

Question · Answered
How does getRecord avoid Firestore on every read?
I noticed getRecord seems very fast even at scale. Can someone explain how it skips Firestore entirely? The ID format is a date prefix, right?
sorenl · 5d ago · 7 replies · #core-db #architecture
Nadia · Docs Team (TEAM): Exactly right — the ID encodes YYMMDD, so the GCS path is computed in memory from the ID alone. Zero Firestore reads, one GCS download.

Feature Request · In Progress
Streaming batch hydration — first record latency
Large queries currently wait for the full batch before emitting. An async generator approach would drop first-record latency from O(batch) to O(1).
ravi_sys · 8d ago · 15 replies · #performance #queries
Kwame · Core Team (TEAM): In active development — async generator replaces the batch-then-emit pattern. Early benchmarks show 5–10× improvement in perceived query speed.
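The async-generator approach discussed in this thread might look like this sketch (illustrative only, not the shipped implementation; `streamHydrate` and `fetchBlob` are invented names):

```javascript
// Sketch: yield each record as its download resolves, instead of
// collecting the whole batch before emitting anything.
async function* streamHydrate(ids, fetchBlob, concurrency = 20) {
  for (let i = 0; i < ids.length; i += concurrency) {
    const batch = ids.slice(i, i + concurrency);
    // Start all downloads in the batch, then yield in order as each settles.
    for (const pending of batch.map(fetchBlob)) yield await pending;
  }
}
```

A consumer sees the first record after one download instead of after the whole batch: `for await (const rec of streamHydrate(ids, fetchBlob)) render(rec);`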

Feature Request · Shipped ✓
Batch upload URL generation (50 files at once)
We're ingesting large datasets and generating signed URLs one-by-one is too slow. Batch endpoint would be a huge win.
diegodev · 12d ago · 11 replies · #storage #uploads
Ama · Storage Team (TEAM): Shipped in v2.1! POST /storage/batch-upload-urls now accepts up to 50 files in a single call. Closing this ✓

Feature Request · Planned
Webhook callbacks on record create/update/delete
We're building event-driven workflows and currently polling. HTTP callbacks via Cloud Tasks on record events would eliminate the polling entirely.
mina_k · 1d ago · 24 replies · #events #integrations
Yaw · Platform Team (TEAM): On the roadmap as Priority 10 — HTTP callbacks via Cloud Tasks. We'll be opening a design doc for community feedback before building.

Question · Answered
Can I use BigQuery for ML training on my bucket data?
We have 80M records and want to run a feature extraction pipeline. Does the BigQuery external table support ML.PREDICT or just standard SQL?
leila_ml · 2d ago · 6 replies · #bigquery #ml
Ama · Storage Team (TEAM): Full Standard SQL including ML functions — the external table exposes your GCS blobs directly to BigQuery ML. No data copy needed.

Have an idea or a question?
Post in the community — our team responds within 24 hours.
Get started today

Your first bucket
in under 5 minutes.

No SDK required
Any language
Free tier
hydrous — quick start
✓ Bucket "orders" created · API key: sk_orders_••••••••
Ready. Base URL: https://api.hydrous.dev/api/orders/sk_orders_...