Vector Database Cost Calculator
Compare monthly cost of ownership across Pinecone, Weaviate, Qdrant, Zilliz, Chroma, MongoDB Atlas, and self-hosted pgvector / Qdrant / Milvus — for any vector count, dimension, and query volume. All math runs in your browser.
What it does
10 providers in a single comparison
Pinecone Serverless, Pinecone Pod-based (s1/p2 tiers), Weaviate Cloud, Qdrant Cloud, Zilliz Cloud (Milvus), Chroma Cloud, MongoDB Atlas Vector Search, plus self-hosted pgvector, Qdrant, and Milvus on AWS r6g instances. One workload, ten estimates, no marketing copy.
Quantization-aware storage math
Toggle between float32 (4 B/dim), int8 (1 B/dim), and binary (0.125 B/dim) quantization. Storage cost updates instantly so you can see why dropping to int8 typically cuts your bill by ~75% with under 2% recall loss.
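The per-dimension byte counts make this easy to verify by hand. A minimal Python sketch of the storage math (names are illustrative; the calculator itself runs in your browser):

```python
# Bytes per dimension at each quantization level used by the calculator.
BYTES_PER_DIM = {"float32": 4.0, "int8": 1.0, "binary": 0.125}

def vector_storage_gb(num_vectors: int, dims: int, quant: str = "float32") -> float:
    """Raw vector bytes for one replica, no metadata, in decimal GB."""
    return num_vectors * dims * BYTES_PER_DIM[quant] / 1e9

fp32 = vector_storage_gb(10_000_000, 1536, "float32")   # ~61.4 GB
int8 = vector_storage_gb(10_000_000, 1536, "int8")      # ~15.4 GB, a 75% cut
binary = vector_storage_gb(10_000_000, 1536, "binary")  # ~1.9 GB
```

Dropping from float32 to int8 always divides the raw vector bytes by exactly 4, whatever the vector count or dimensionality.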
Honest self-hosted comparisons
Self-hosted estimates include EC2 r6g hourly, gp3 EBS storage, and an optional $500–$750/month DevOps overhead line. Toggle the overhead off to see the raw infrastructure cost, on for the true total cost of ownership.
"Show calculation" for every provider
Every row expands to show the exact formula: vectors × dimensions × bytes/dim × replicas, queries × RU/query × $/M RU, pod tier × hourly × 730 hours. No black boxes — you can audit and reproduce every number.
Shareable URLs
Every input is encoded to the URL using compact single-letter keys. Copy the link to send a fully reproducible scenario to a teammate or paste it in a procurement comparison doc.
Replication & metadata baked in
Most calculators forget that production deployments run with 2× or 3× replication and store ~1 KB of metadata per vector. This calculator surfaces both as first-class inputs so the totals reflect what you actually pay.
How to use Vector Database Cost Calculator
1. Set your vector count and dimensions
Drag the logarithmic slider from 10K to 1B vectors, or click a quick-select pill (Prototype, Small, Medium, Large, Huge). Pick a dimension preset matching your embedding model — 1536 for OpenAI text-embedding-3-small, 768 for BGE-base, 384 for BGE-small.
2. Set your monthly query and write volume
Use the Queries per month slider for similarity searches and Writes per month for upserts and re-embeddings. Read-bound and write-bound workloads pick very different winners — getting these right changes the recommendation.
3. Open Advanced settings to refine the comparison
Set metadata size per vector (default 1 KB), choose float32/int8/binary quantization, set replication factor (1/2/3), and toggle "Include DevOps overhead" to add the human cost to self-hosted estimates.
4. Read the comparison table
Default sort is ascending by Total/mo. The cheapest viable option gets a "Best value" badge. Click any column header to re-sort. Use the Filter chips to focus on specific vendors. Providers that can't handle your scale show "Not recommended" instead of misleading numbers.
5. Expand "Show calculation" on any row
Every total is backed by an explicit formula — vectors × dimensions × bytes/dim × replicas, queries × RU/query × $/M RU, pod tier × hourly × 730. Audit and reproduce every number; the source link verifies the rate against the vendor's pricing page.
6. Share or export your scenario
Copy the URL to share a fully reproducible comparison with a teammate. Copy as Markdown for documentation or as CSV for procurement spreadsheets. URLs encode inputs in compact single-letter keys; the canonical URL stays clean for SEO.
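Round-tripping state through compact query-string keys can be sketched in a few lines of Python. The key names below are hypothetical stand-ins, not the calculator's actual encoding:

```python
from urllib.parse import urlencode, parse_qs

# Hypothetical single-letter keys; the real calculator's mapping may differ.
KEYS = {"v": "vectors", "d": "dimensions", "q": "queries",
        "w": "writes", "r": "replication"}

def encode_state(state: dict) -> str:
    """Map long input names to single-letter query-string keys."""
    inv = {long: short for short, long in KEYS.items()}
    return urlencode({inv[k]: v for k, v in state.items()})

def decode_state(query: str) -> dict:
    """Recover the full input names and integer values from a query string."""
    return {KEYS[k]: int(v[0]) for k, v in parse_qs(query).items()}

qs = encode_state({"vectors": 1_000_000, "dimensions": 1536, "replication": 2})
# qs == "v=1000000&d=1536&r=2"
```

Single-letter keys keep even a fully specified scenario well under typical URL-length limits.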
When to use this
RAG chatbot for a 1M-document knowledge base
~1M vectors at 1536 dim with 100K queries/month and weekly re-embedding. The calculator shows whether Pinecone Serverless trips its $50 minimum, where Qdrant Cloud sits, and whether pgvector saves enough to justify the DevOps cost.
Semantic search for an ecommerce catalog of 50M SKUs
50M vectors at 768 dim with 10M queries/month. Use the int8 quantization toggle to compare a high-recall float32 index against an aggressive int8 setup; replication factor 2 reflects a production HA deployment.
AI agent with high write churn
5M vectors with 500K writes/month from continuous learning. Self-hosted Qdrant on a single r6g.xlarge typically wins on raw cost, but the calculator highlights when DevOps overhead flips the verdict back to managed.
Prototype scoping before signing an annual contract
Set the workload to your one-year projection (e.g., 100M vectors / 50M queries / 5M writes per month) and copy the shareable URL into your procurement justification doc. Comparing Pinecone Pod-based vs Zilliz vs MongoDB Atlas in one view often surfaces substantial annual savings.
Common errors & fixes
- Forgot to set replication factor and your "comparison" is 50% understated
- Production workloads almost always run with replicationFactor 2 (recommended) or 3 (high availability). The default in this calculator is 2; switch it to 1 only for staging/POC. MongoDB Atlas dedicated tiers ignore this input because they include a 3-node replica set automatically.
- Pinecone Serverless looks suspiciously cheap at small scale
- The Standard plan has a $50/month minimum spend that overrides usage-based math below ~50 GB at low query volumes. The "Show calculation" disclosure for Pinecone Serverless surfaces this — if your computed subtotal is under $50, the plan minimum is what you actually pay.
- Self-hosted pgvector "wins" by 10× but you have no DevOps team
- Toggle "Include DevOps overhead" in Advanced settings. The default $500/month estimate covers patching, backups, monitoring, and incident response — typically ~5 hours of senior engineer time per month at a fully loaded $100/hour. Without an existing platform team, the real overhead is often higher.
- Cost spikes when you switch from 1024 to 1536 dimensions
- Storage cost scales linearly with dimensions for most providers, and Weaviate's per-dimension pricing model amplifies this. If recall with a lower-dimensional model (1024-dim Cohere or 768-dim BGE-base) is acceptable for your use case, the dimension-cost line in "Show calculation" makes the trade-off explicit.
Technical details
| Pricing source | Each vendor's public pricing page; verified May 3, 2026 |
| Providers covered | 10 (7 managed + 3 self-hosted reference deployments) |
| Storage formula | (numVectors × dimensions × bytes/dim) + (numVectors × metadata KB × 1024), all × replication factor |
| Self-hosted baseline | AWS r6g instances on-demand, gp3 EBS at $0.10/GB/mo, ~$500–$750/mo DevOps overhead estimate |
| Currency | USD only (USD listed prices used for non-USD vendors at 1:1) |
| Privacy | Zero server-side processing — all math runs in your browser; URL state encodes inputs, never identity |
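The storage formula in the table translates directly to code. A hedged sketch (decimal GB; 1 KB = 1024 bytes for metadata; function name is illustrative):

```python
def total_storage_gb(num_vectors: int, dims: int, bytes_per_dim: float,
                     metadata_kb: float, replication: int) -> float:
    """(vectors x dims x bytes/dim + vectors x metadata bytes) x replicas, in GB."""
    vector_bytes = num_vectors * dims * bytes_per_dim
    metadata_bytes = num_vectors * metadata_kb * 1024
    return (vector_bytes + metadata_bytes) * replication / 1e9

# 1M vectors, 1536-dim float32, 1 KB metadata, 2 replicas:
gb = total_storage_gb(1_000_000, 1536, 4, 1, 2)  # ~14.3 GB
```

Note that at 1536 dimensions the 1 KB of metadata is a sixth of each vector's footprint, which is why the calculator treats it as a first-class input.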
How vector database pricing actually works
Vector database pricing breaks down into three core components: storage, queries (reads), and writes (upserts/updates). Storage is what most people focus on, and it scales linearly with the product of vector count, dimensions, and bytes per element (4 for float32, 1 for int8, 0.125 for binary). Replication multiplies storage by your replica count, and metadata adds a per-vector overhead that is invisible in toy calculations but easily becomes 10–30% of total bytes at scale.
Queries and writes are where the pricing models diverge sharply. Pinecone Serverless meters individual operations as Read Units (RU) and Write Units (WU), where a typical similarity search costs ~2 RU and an upsert costs ~3 WU. Weaviate Cloud charges per million stored dimensions per month and bundles queries into the price. Qdrant Cloud and MongoDB Atlas charge for the cluster and let you query as much as the hardware can serve. Self-hosted databases charge for the instance hours regardless of utilisation. This means the "cheapest" provider depends entirely on whether your workload is storage-bound, query-bound, or write-bound — and the same workload can swing the answer by 10× across providers.
A fourth component that calculators often ignore: minimum spend. Pinecone Serverless has a $50/mo Standard plan minimum and a $500/mo Enterprise minimum. Weaviate Cloud's Standard tier has a $25/mo minimum. Below those thresholds, computed usage-based pricing is irrelevant; the minimum is what you pay.
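Applying a plan minimum is just a clamp over the metered subtotal. A sketch, using the $50 Pinecone Serverless Standard minimum mentioned above:

```python
def monthly_bill(computed_usage: float, plan_minimum: float) -> float:
    """You pay the larger of metered usage and the plan's minimum spend."""
    return max(computed_usage, plan_minimum)

# $15 of computed Serverless usage still bills at the $50 Standard minimum:
small = monthly_bill(15.0, 50.0)   # 50.0
# Past the threshold, usage-based pricing takes over:
large = monthly_bill(180.0, 50.0)  # 180.0
```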
Pinecone vs Weaviate vs Qdrant vs Milvus — which is cheapest at each scale?
The answer changes with scale. At prototype scale (100K vectors, 10K queries/month), the free tiers and below-minimum economics make every managed provider look about the same — $25 to $60 per month is a typical band. Self-hosted starts to look attractive only when you already have idle infrastructure to run on; otherwise the EC2 baseline alone is $70+/month.
At small production scale (1M vectors, 100K queries/month), Pinecone Serverless is bound by its $50 minimum, Weaviate Cloud is around $30–60, Qdrant Cloud starts at ~$36 (the smallest cluster), and Zilliz starts at ~$110 (one Compute Unit). Self-hosted pgvector on the smallest r6g.large is ~$80 + EBS, which beats most managed options without DevOps overhead.
At medium scale (10M vectors, 1M queries/month), the differences become material. Pinecone Serverless is typically $200–400/mo dominated by reads; Pinecone Pod-based on s1.x2 is a flat ~$140/mo; Qdrant Cloud needs a 2 vCPU/8 GB tier at ~$70/mo plus replication; Zilliz needs ~7 CUs ≈ $760/mo. Self-hosted on r6g.2xlarge is ~$295 + EBS + DevOps.
At large scale (100M+ vectors), the picture flips. Self-hosted Milvus or Qdrant on r6g.4xlarge ($590 + EBS + DevOps ≈ $1,200/mo) typically beats every managed option, but only if you have engineers to run it. Pinecone Serverless can creep into the thousands due to Read Units; Pinecone Pod-based scales linearly via larger pods; Qdrant Cloud needs 16 vCPU+ tiers ($570+/mo). The right answer at this scale is often "managed for the first 12 months while you learn the workload, self-hosted after that."
Serverless vs pod-based vs self-hosted economics
Serverless pricing (Pinecone Serverless, Chroma Cloud) charges per operation. It wins for spiky, low-utilisation workloads — if you serve 10,000 queries during a daily 2-hour window and nothing the rest of the day, you pay only for what you used. It loses for steady high-QPS workloads because Read Units add up: 10M queries/month × 2 RU × $8.25/M RU = $165/mo just for reads.
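The Read Unit arithmetic above can be checked directly; 2 RU per unfiltered query and $8.25 per million RU are the assumptions stated in the text:

```python
PRICE_PER_MILLION_RU = 8.25  # Pinecone Serverless list price, USD

def serverless_read_cost(queries: int, ru_per_query: float = 2.0) -> float:
    """Monthly read cost: queries x RU/query x $/million RU."""
    return queries * ru_per_query / 1e6 * PRICE_PER_MILLION_RU

steady = serverless_read_cost(10_000_000)       # $165.00 at 2 RU/query
filtered = serverless_read_cost(10_000_000, 7)  # $577.50 at 7 RU/query
```

The same 10M queries served from an always-on pod cost the pod's flat hourly rate regardless, which is the crux of the serverless-vs-pod decision.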
Pod-based and cluster-based pricing (Pinecone Pod, Qdrant Cloud, Zilliz, MongoDB Atlas) charges for capacity, regardless of utilisation. The pod is always running and serving; you pay the same hourly rate whether you do 1 query or 1 million. This wins when QPS is high and steady — a 24/7 RAG service hitting 1,000 QPS on average is much cheaper on a single Pinecone p2.x4 pod than on Serverless.
Self-hosted is the same model as cluster-based but at raw cloud prices: an r6g.2xlarge runs ~$295/mo on-demand or ~$180/mo on a 1-year reserved instance. The savings are real but the operational cost is also real — backups, snapshots, version upgrades, security patches, monitoring, and the on-call burden of being the database team. For a typical 5-engineer startup, self-hosted is rarely worth it below ~$2,000/mo of managed equivalent. For a Series B+ company with a platform team, the break-even is closer to $500/mo.
Quantization tradeoffs: int8 4× cheaper, binary 32× cheaper
Quantization compresses each vector element from 4 bytes (float32) to 1 byte (int8) or 1 bit (binary). The storage savings are immediate and proportional: int8 cuts storage by 75%, binary by 96.875%. This matters most for storage-dominated workloads — large indices with infrequent queries.
The recall trade-off is real but smaller than most people expect. Modern int8 quantization (used internally by Qdrant, FAISS, and most recent vector DB versions) typically loses under 2% recall@10 on standard benchmarks. Binary quantization combined with re-ranking can preserve 95%+ of float32 recall while cutting both storage and query latency by 8–10×.
Two caveats: quantization is not free CPU-wise — query-time decompression adds a small overhead, and re-ranking requires keeping the original full-precision vectors somewhere (often on disk). Some managed providers (Pinecone, Zilliz) use quantization internally without exposing it to you, so toggling the quantization input here only affects providers where it's a user choice (Qdrant, Weaviate, self-hosted).
Replication, HA, and multi-region cost multipliers
Replication is the most-skipped input in cost estimation. A production deployment typically runs with replicationFactor 2 (active-passive) or 3 (active-active), which doubles or triples the storage and per-pod compute cost. MongoDB Atlas is the exception — its dedicated tiers always include a 3-node replica set, so the replication input does not multiply Atlas cost. Pinecone Serverless replicates internally and rolls the cost into Read/Write Unit pricing, so the input also does not multiply Serverless cost (this calculator ignores it for those two providers).
For Pinecone Pod, Qdrant Cloud, Zilliz, and self-hosted setups, replicationFactor multiplies the pod/cluster/VM cost directly. A self-hosted Qdrant deployment on r6g.2xlarge × 3 replicas is ~$885/mo for compute alone before storage and DevOps.
Multi-region (active-active across us-east + eu-west, for example) effectively means running independent clusters in each region — multiply by the number of regions for both compute and storage. Plus inter-region egress fees, which neither this calculator nor most vendor calculators include because they depend on your traffic patterns.
Hidden costs to watch for
The pricing displayed by every vendor is missing several real-world costs. The biggest is data egress: pulling query results out to your application server costs $0.05–$0.09/GB on most clouds. For a high-QPS application this is small but non-zero. Cross-region egress (your Pinecone is in us-east, your app is in eu-west) is significantly more expensive and worth co-locating to avoid.
Backup and snapshot storage is typically billed separately at ~$0.02/GB/month on managed providers and at S3 rates on self-hosted. For a 100 GB index with 30 days of daily snapshots, that's $60–80/mo of backup cost that doesn't appear in any "Vector DB Pricing" page.
Filtered search costs more than unfiltered search on most providers. Pinecone charges 5–10 RU per filtered query versus 1–2 RU for unfiltered. If your application heavily uses metadata filters (e.g., per-tenant search in a multi-tenant SaaS), your effective Read Unit consumption can be 3× the calculator estimate.
Finally, re-embedding when you change models is rarely budgeted for. Switching from text-embedding-3-small to text-embedding-3-large requires re-embedding every document: 100% Write Unit consumption against your entire corpus, plus the embedding model API cost. For a 10M-document corpus, the embedding API bill alone runs from hundreds to thousands of dollars depending on document length, before counting the Write Units to re-upsert every vector.
When to switch from managed to self-hosted
The break-even between managed and self-hosted is not just about list price. A reasonable framework: total managed cost > 3× the raw infrastructure cost AND your team has bandwidth to operate a stateful service. The 3× multiplier accounts for the loaded cost of an on-call rotation, monitoring, backups, and incident response.
Concretely: at $500/mo of managed cost, raw infra equivalent is ~$170/mo and self-hosted is rarely worth the operational investment. At $2,000/mo of managed cost, infra equivalent is ~$670/mo, and self-hosted starts making sense for teams that already run other stateful services. At $10,000/mo of managed cost, the infra cost is ~$3,300/mo and the savings clearly justify a dedicated platform team.
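The break-even framework reduces to a one-line predicate. A sketch using the 3× rule of thumb (function and parameter names are illustrative):

```python
def self_hosting_worth_it(managed_monthly: float, infra_monthly: float,
                          can_operate_stateful: bool) -> bool:
    """Rule of thumb: managed spend must exceed ~3x raw infra AND the team
    must have bandwidth to operate a stateful service."""
    return managed_monthly > 3 * infra_monthly and can_operate_stateful

a = self_hosting_worth_it(500, 170, True)        # False: savings too thin
b = self_hosting_worth_it(10_000, 3_300, True)   # True: clear savings
c = self_hosting_worth_it(10_000, 3_300, False)  # False: nobody to run it
```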
The other consideration is feature drift. Managed providers ship new features (filtered search optimisations, hybrid search, sparse vector support, GPU acceleration) faster than you can keep up. The "save 60% by self-hosting" calculation looks worse if you also need to backport six months of upstream improvements every year.
Choosing dimensions: 1536 vs 768 vs 384
Dimensions are largely a function of the embedding model you pick. OpenAI text-embedding-3-small produces 1536-dim vectors; Cohere embed-english-v3 produces 1024-dim; BGE-base and the original Sentence-BERT models produce 768-dim; BGE-small and lightweight models produce 384-dim. The dimensionality directly multiplies storage cost.
The MTEB leaderboard suggests that lower-dim models from 2025–2026 (BGE-base v2, Voyage-3-lite, Cohere embed-v4) often match or beat older 1536-dim models on retrieval benchmarks. If you're storage-constrained or cost-constrained, evaluating a 768-dim alternative to your current 1536-dim setup is the single biggest lever — it cuts storage 50% and often improves latency.
A pragmatic approach: build your retrieval pipeline with 1536 dim during prototyping, then run a recall@10 evaluation on a 768-dim alternative before going to production. Most teams find the recall delta is under 2% but the cost delta is exactly 50%.
Real-world worked examples
Example 1 — RAG chatbot, 1M docs at 1536 dim, 100K queries/month, weekly re-embed: raw vector data is ~6 GB at float32; with 1 KB metadata per vector and 2 replicas, total storage is ~14 GB. Pinecone Serverless ≈ $50/mo (minimum applies; computed usage ~$15). Pinecone Pod s1.x1 ≈ $140/mo with 2 replicas. Weaviate Cloud ≈ $25/mo (minimum). Qdrant Cloud 1 vCPU tier × 2 ≈ $72/mo. Self-hosted pgvector on r6g.large × 2 ≈ $147/mo + $1 EBS (+ $500 DevOps if enabled). Verdict at this scale: Weaviate Cloud or Pinecone Serverless, both at their plan minimums.
Example 2 — Ecommerce semantic search, 50M products at 768 dim, 10M queries/month: raw vector data ≈ 154 GB at float32; with 1 KB metadata and 2 replicas, total storage ≈ 410 GB. Pinecone Serverless ≈ $300+/mo (reads dominate at ~$165). Qdrant Cloud 4 vCPU tier × 2 ≈ $285/mo. Zilliz ≈ 33 CUs × $0.15/hr × 730 hr × 2 ≈ $7,200/mo. Self-hosted pgvector on r6g.4xlarge × 2 ≈ $1,180/mo + $15 EBS + DevOps. Verdict at this scale: Qdrant Cloud or self-hosted, depending on team capability.
Example 3 — High-write AI agent, 5M vectors at 1536 dim, 100K queries/mo, 500K writes/mo: raw vector data ≈ 31 GB at float32 (~62 GB with 2 replicas, before metadata). Pinecone Serverless computes to ≈ $20 but bills the $50 minimum. Pinecone Pod s1.x1 × 2 ≈ $140/mo. Qdrant Cloud 1 vCPU × 2 ≈ $72/mo. Self-hosted Qdrant on r6g.large × 2 ≈ $147/mo + $4 EBS + DevOps. Verdict: Qdrant Cloud beats both Pinecone tiers and the self-hosted route at this scale, especially given the write-heavy pattern.
2026 vector database pricing trends
Three trends are reshaping vector DB economics in 2026. First, quantization is becoming the default in managed offerings: Pinecone, Zilliz, and Weaviate now apply scalar quantization automatically for index storage, which has cut effective storage prices by ~40% since 2024 even though headline list prices haven't moved much. The per-million-dimensions rate Weaviate charges buys meaningfully more than it did in 2024 because they're storing fewer bytes per dimension.
Second, Postgres extensions (pgvector and pgvector-rs) have closed most of the recall gap with purpose-built vector DBs, especially after pgvector 0.5 added HNSW indexing. For workloads under ~50M vectors, self-hosted Postgres on a single beefy instance is now a credible alternative to a dedicated vector DB — and it brings the cost savings of running your existing application database.
Third, hybrid pricing models are emerging. Pinecone introduced "Reserved" billing in late 2025 (commit to monthly spend, get 25–40% discount). Qdrant offers BYOC deployments where you bring your own cloud infrastructure and pay only for the control plane. Zilliz has tiered "Cloud Lite" and "Cloud Standard" SKUs. The list-price comparisons in this calculator should be treated as upper bounds; negotiate with vendors at scale and the real numbers come down 25–50%.
Frequently Asked Questions
What is a vector database and why does it cost so much?
- A vector database stores high-dimensional numerical embeddings of your content (text, images, audio) and supports nearest-neighbour search over them — the foundation of RAG, semantic search, and recommendation systems. The cost is high because each vector is large (1536 dim × 4 bytes = 6 KB per vector before metadata and replication), and the indices that make nearest-neighbour search fast (HNSW, IVF, ScaNN) consume substantial RAM. A 100M-vector index at 1536 dim with HNSW typically needs 600+ GB of memory across the cluster, which drives the underlying instance cost.
Is Pinecone really more expensive than Weaviate or Qdrant?
- It depends on your workload shape. Pinecone Serverless is competitive at low scale (when bound by its $50 minimum) and at very low utilisation. Pinecone Pod-based is competitive against Qdrant and Weaviate at medium scale. Pinecone Serverless can become significantly more expensive than the alternatives at high query volume because Read Units accumulate quickly. Use the calculator above to compare your specific workload — there is no universal answer.
How much does it cost to host 1 million vectors?
- For 1M vectors at 1536 dimensions with 100K queries/month and 2× replication, expect roughly $25–$50/mo on Weaviate Cloud or Pinecone Serverless (both bound by minimum spend), $70/mo on Qdrant Cloud (smallest cluster × 2), $140/mo on Pinecone s1.x1 pod × 2, $110/mo on Zilliz Cloud (1 CU), or $80–$150/mo self-hosted on r6g.large × 2 (excluding DevOps overhead). MongoDB Atlas Vector Search rolls in with the M10 tier at $60/mo if you're already using Atlas.
What is the cheapest vector database for prototyping?
- For pure prototyping, all of the major providers have free tiers: Pinecone Free Starter (2 GB), Weaviate Cloud Sandbox (14-day free), Qdrant Cloud free 1 GB cluster, Chroma Cloud usage-based with no minimum, and Zilliz Cloud Serverless free tier. Self-hosted Chroma or pgvector on a t4g.medium ($25/mo) is the cheapest non-free option. The calculator focuses on production-scale economics; for prototypes, the free tiers cover the first few months for most teams.
Does pgvector scale to 100 million vectors?
- Yes, but with caveats. pgvector 0.5+ supports HNSW indexing, which makes 100M-vector indices feasible on a single r6g.4xlarge or larger. The constraints are RAM (HNSW indices want to be in memory), index build time (can take hours for 100M+ rows), and concurrency (Postgres locks during HNSW maintenance). For workloads under 50M vectors, pgvector is usually a great choice; above 100M, consider sharding or migrating to a purpose-built vector DB. The calculator estimates the r6g instance and EBS cost for self-hosted pgvector deployments.
Why are read units more expensive than I expected?
- Pinecone Serverless charges $8.25 per million Read Units, and a typical similarity search consumes ~2 RU. Filtered queries (with metadata filters) consume 5–10 RU each. So 10M queries/month at 2 RU each = 20M RU = $165/mo just for reads, or 10M filtered queries at 7 RU each = 70M RU = $578/mo. This is intentional — Pinecone's economics encourage either Pod-based pricing for high-QPS workloads or filter-light query patterns. The calculator uses 2 RU/query as the default; adjust for your filter patterns.
How much does OpenAI text-embedding-3-large cost to store?
- text-embedding-3-large produces 3072-dimensional vectors. At float32 (4 bytes per dim), each vector takes 12 KB before metadata and replication. 1M vectors × 12 KB × 2 replicas = 24 GB. At Pinecone Serverless $0.33/GB/mo that's ~$8/mo for storage alone; at Weaviate per-dimension pricing it's ~$58/mo. Compared to text-embedding-3-small (1536 dim, half the storage), large is a 2× cost increase for the storage line. Whether the recall improvement justifies the cost is workload-dependent.
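The per-vector arithmetic in this answer is easy to reproduce. A sketch, assuming float32 and the 1 KB = 1024 bytes convention:

```python
def per_vector_kb(dims: int, bytes_per_dim: float = 4.0) -> float:
    """Per-vector size in KB, before metadata and replication."""
    return dims * bytes_per_dim / 1024

large = per_vector_kb(3072)  # 12.0 KB, text-embedding-3-large
small = per_vector_kb(1536)  # 6.0 KB, text-embedding-3-small

# 1M vectors x 12 KB x 2 replicas, in decimal GB (the "24 GB" above):
total_gb = 1_000_000 * large * 1024 * 2 / 1e9  # ~24.6 GB
```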
What is the cheapest vector database in 2026?
- For very small workloads (under 100K vectors), the free tiers of any major provider — none of them cost anything. For small production workloads (1M vectors, low QPS), Weaviate Cloud and Pinecone Serverless tie at their plan minimums (~$25–$50/mo). For medium workloads (10M–50M), Qdrant Cloud and self-hosted pgvector are typically cheapest. For large workloads (100M+), self-hosted Milvus or Qdrant on AWS r6g.4xlarge+ wins on raw cost but only if you have a platform team. Always run the comparison for your specific workload — the answer changes by an order of magnitude across scales.
Should I use Pinecone Serverless or Pod-based?
- Use Serverless if your workload is bursty, low-utilisation, or you genuinely don't want to think about capacity. Use Pod-based if your workload is steady-state with ≥ 1,000 QPS sustained — the fixed pod cost amortises across requests far better than per-RU pricing. The break-even is roughly: if your monthly Read Unit spend exceeds the cost of the smallest pod that fits your data (s1.x1 at ~$70/mo), Pod-based is cheaper. The calculator shows both side-by-side for any workload.
How do I reduce vector database costs?
- In rough order of impact: (1) Use int8 quantization where supported — saves ~75% storage with under 2% recall loss. (2) Drop dimensions if your embedding model has a smaller variant (1536 → 768 cuts storage 50%). (3) Audit your replication factor — many staging environments accidentally run with replication 3 when 1 is fine. (4) For Pinecone Serverless, switch to Pod-based once your steady-state QPS exceeds the break-even. (5) For self-hosted on AWS, switch from on-demand to 1-year reserved instances (~40% savings). (6) For high-volume customers on any provider, negotiate — list prices are not the actual prices anyone pays at scale.
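Note that the first two levers multiply rather than add. A quick check, assuming both int8 quantization and the smaller model keep recall acceptable for your workload:

```python
# Multipliers for the two biggest storage levers:
int8_factor = 0.25  # 1 byte/dim instead of 4 (float32)
dims_factor = 0.5   # 768 dims instead of 1536

combined_savings = 1 - int8_factor * dims_factor  # 0.875
# Stacking both levers cuts storage 87.5%, not 75% + 50%.
```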
Does this calculator include embedding generation costs?
- No. Embedding generation is a separate cost charged by your embedding provider (OpenAI, Cohere, Voyage, etc.), not the vector database. To estimate embedding costs, use our companion LLM Cost Calculator and multiply per-document tokens by per-million-token pricing. A reasonable rule of thumb: 1M documents at ~500 tokens each = 500M tokens × $0.02/M (text-embedding-3-small) = $10 one-time. Re-embedding at scale (when you switch models) is where this cost can balloon: it means 100% Write Unit consumption against your entire corpus, on top of the embedding API cost.
How accurate are these estimates?
- Within 10% of vendor calculators in the scenarios we've tested (1M, 10M, 100M, 1B vectors at typical query volumes). Differences come from a few sources: (1) RU/WU per operation is provider-specific and varies with query complexity — we use 2 RU/query and 3 WU/write as reasonable averages. (2) Self-hosted estimates use AWS r6g on-demand prices; reserved instances reduce real cost ~40%. (3) Most providers offer enterprise discounts not reflected in list prices. For procurement-grade numbers, always validate the top 2–3 candidates against the vendor's own calculator before signing.
Related Tools
LLM Cost Calculator
Compare token counts and API costs across GPT-5, Claude Opus 4.7, Gemini 3, and more — all in your browser.
IP Calculator
Calculate subnet mask, network address, broadcast, wildcard mask, usable hosts, and binary representation for any IPv4 or IPv6 CIDR. Includes VLSM, subnet splitter, and range-to-CIDR converter.
JSON Formatter
Clean, minify, and validate JSON data structures.
Hash Generator
Generate MD5, SHA-1, SHA-256, and SHA-512 hashes from text or files. Supports HMAC authentication codes.
UUID Generator
Generate random UUID v4 identifiers. Bulk generate up to 100 UUIDs, toggle uppercase/lowercase and hyphen formatting.