Disclosure up front: I built basedonb. That makes this comparison biased by definition. I'm publishing the methodology and raw numbers below so you can audit it. If you find a mistake, comment and I'll edit.
A reader asked: "Why would I use a third-party Maps scraper when Google has an official Places API?" It's a fair question. So I ran the same workload through both and wrote down what happened.
TL;DR: They're not the same product. Use Places API if you need stable IDs, single-place lookups, or are integrating place picking into a UI. Use basedonb (or a similar Maps scraper) if you need bulk B2B leads by category and geography. Mixing them is also a valid strategy.
Test setup
- Workload: 100 queries of "dentists in {city}, {state}" across the 50 most populous US metros, target 200 leads each → ~20,000 leads ceiling.
- Date: ran 2026-04-22 to 2026-04-24.
- Places API path: Text Search → Place Details (for each result, to get phone/website).
- basedonb path: single POST /scrapes per metro.
- Field comparison: I diffed name, address, phone, website, rating, reviews_count, lat/lng, business_status.
Caveat: basedonb's data ultimately comes from Google Maps too — you should not expect a different truth, you should expect a different shape, cost, and friction. That's what I measured.
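For the field comparison above, "diffed" means normalize-then-compare: strip the formatting differences that don't mean anything (case, phone punctuation, URL scheme) before calling a field a mismatch. A minimal sketch — the helper names are mine, not a published script:

```javascript
// Fields compared in the test setup.
const FIELDS = ["name", "address", "phone", "website", "rating", "reviews_count", "lat", "lng", "business_status"];

// Normalize away cosmetic differences so only real disagreements count.
function normalizeField(field, value) {
  if (value === null || value === undefined || value === "") return null;
  const s = String(value).trim().toLowerCase();
  if (field === "phone") return s.replace(/[^\d+]/g, "");              // "+1 (212) 555-0199" -> "+12125550199"
  if (field === "website") return s.replace(/^https?:\/\//, "").replace(/\/+$/, "");
  return s;
}

// Returns the list of fields where the two sources disagree after normalization.
function diffLead(a, b) {
  return FIELDS.filter((f) => normalizeField(f, a[f]) !== normalizeField(f, b[f]));
}
```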
Pricing
Both are usage-priced. The Places API charges per call; basedonb charges per returned lead.
| Workload | Places API | basedonb (Starter) | basedonb (volume tiers) |
|---|---|---|---|
| 1× place lookup (you have an ID) | ~$0.017 | n/a — bulk only | n/a |
| 1k leads, single metro | ~$32 (Text Search + Details + small enrichment) | $10 | $9 (Growth) |
| 20k leads, multi-metro | ~$640 | $200 | $180 (Growth) / $140 (Business) |
| Free tier | $200/mo Maps credit | 50 free leads | 50 free leads |
Places API math: Text Search is $32 per 1k calls, and Place Details (for phone/website) is $17 per 1k at the Basic Data SKU, with extra fields charged separately. The first $200/mo is free under Google's standard Maps credit, which buys you ~6k Text Search calls and roughly proportional Details — generous if you're doing place pickers, tight if you're prospecting.
basedonb math: 1 credit = 1 returned lead. Starter is $10/1k. Volume tiers down to $6/1k at $500+ top-ups. Credits don't expire.
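The arithmetic behind those two paragraphs is worth making explicit, since it's the whole case for one tool over the other. A sketch with my own helper names, using only the rates quoted above (check current pricing pages before relying on them):

```javascript
// Per-lead credit math for the basedonb column: 1 credit = 1 returned lead.
function creditCost(leads, perThousand) {
  return (leads / 1000) * perThousand;
}

// How many Text Search calls Google's monthly credit covers at $32 per 1k calls.
function freeTextSearchCalls(monthlyCredit = 200, dollarsPerThousandCalls = 32) {
  return Math.floor((monthlyCredit / dollarsPerThousandCalls) * 1000);
}

creditCost(20000, 10);  // 200  — Starter tier, matching the table
creditCost(20000, 9);   // 180  — Growth tier
freeTextSearchCalls();  // 6250 — why the free tier is generous for pickers, tight for prospecting
```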
Verdict: for bulk prospecting, basedonb is meaningfully cheaper. For low-volume or interactive lookups, Places API's free $200 credit is hard to beat.
Data completeness
Sample: 100 queries, picking the first 50 leads of each (so 5,000 leads per source).
| Field | Places API (Text+Details) | basedonb |
|---|---|---|
| name | 100% | 100% |
| address | 100% | 99.4% |
| phone | 89.1% | 87.3% |
| website | 76.4% | 74.8% |
| rating | 92.7% | 91.9% |
| reviews_count | 92.7% | 91.9% |
| business_status | 100% | 99.1% |
| lat/lng | 100% | 99.8% |
| email | 0% (Google doesn't expose) | 0% (basedonb doesn't expose) |
Both sources draw from Google's underlying business graph, so completeness is within ~2 points. Places API edges slightly ahead because Google's official endpoint is the freshest hop. The gap is not material for cold-outreach work.
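For anyone re-running this: "completeness" in the table is just the share of leads where the field is present and non-empty, rounded to one decimal. A sketch of the counting (my helper name):

```javascript
// Percentage of leads with a non-empty value for `field`, one decimal place.
function fillRate(leads, field) {
  const filled = leads.filter(
    (l) => l[field] !== null && l[field] !== undefined && String(l[field]).trim() !== ""
  ).length;
  return Math.round((filled / leads.length) * 1000) / 10;
}
```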
If you saw "Maps scraper with 95% email coverage" in some pitch, ask where the emails come from. The truthful answer is: a separate enrichment step that crawls the business website's contact page or hits Hunter.io. Don't pay for that bundled in if you can run it as a discrete step you control.
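If you do want that enrichment step, it's small enough to own. A sketch, assuming the business website is stored with a scheme and has a reachable /contact page — a real crawler also needs robots.txt checks, rate limiting, and better parsing than a regex:

```javascript
// Naive email extraction from page HTML. The filter drops obvious false
// positives like "logo@2x.png" that the regex would otherwise match.
const EMAIL_RE = /[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}/gi;

function extractEmails(html) {
  const hits = html.match(EMAIL_RE) || [];
  return [...new Set(hits.map((e) => e.toLowerCase()))].filter(
    (e) => !/\.(png|jpg|jpeg|gif|svg|webp)$/.test(e)
  );
}

// Hypothetical enrichment step: fetch the contact page and attach any emails found.
async function enrichLead(lead) {
  if (!lead.website) return { ...lead, emails: [] };
  const html = await fetch(new URL("/contact", lead.website)).then((r) => r.text());
  return { ...lead, emails: extractEmails(html) };
}
```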
Latency
| Operation | Places API | basedonb |
|---|---|---|
| 1 lead | 200–600ms (Text Search) + 200–500ms (Details) | n/a (bulk) |
| 50 leads, single metro (cache hit) | ~25–60s sequential, ~3–5s parallel | <1s (cached) |
| 200 leads, fresh scrape | ~100–240s parallel + your retry/backoff code | 30–120s (server-side grid) |
basedonb's "fresh" path is async by design (202 + poll), which is annoying for an interactive UI but right for a cron job. Places API forces you to write the fan-out, dedupe, and rate-limit logic yourself.
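The poll side of that 202 + poll pattern is only a few lines of cron-job glue. A sketch — the GET /scrapes/{id} status endpoint, the status strings, and the interval defaults are my assumptions, not a documented contract:

```javascript
const KEY = process.env.BASEDONB_KEY; // assumption: API key supplied via env
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Default job fetcher; assumed status endpoint shape.
const defaultFetchJob = (id) =>
  fetch(`https://www.basedonb.com/api/v1/scrapes/${id}`, {
    headers: { Authorization: `Bearer ${KEY}` },
  }).then((r) => r.json());

// Poll until the job reports done, with a ceiling so a stuck job can't hang the cron.
async function waitForScrape(jobId, fetchJob = defaultFetchJob, { intervalMs = 5000, maxAttempts = 60 } = {}) {
  for (let i = 0; i < maxAttempts; i++) {
    const job = await fetchJob(jobId);
    if (job.status === "done") return job;
    if (job.status === "failed") throw new Error(`scrape ${jobId} failed`);
    await sleep(intervalMs);
  }
  throw new Error(`scrape ${jobId} still running after ${maxAttempts} polls`);
}
```

The injectable `fetchJob` parameter is there so the loop can be tested without hitting the network.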
Stable IDs vs. transient IDs
This is the one where Places API wins clearly. Place IDs from Google are stable (with caveats; they can change but Google publishes redirects). basedonb returns place_id strings sourced from Google but does not guarantee stability over time the way Google's contract does. If your product stores a place_id and re-queries it months later, use Places API.
If your product is "pull a fresh list of dentists in Phoenix once a quarter," you don't care.
Code shape
Places API for a "give me 200 dentists in Manhattan" task:
```js
// Pseudo — real code is longer because of pagination tokens.
const text = await fetch("https://places.googleapis.com/v1/places:searchText", {
  method: "POST",
  headers: {
    "X-Goog-Api-Key": KEY,
    "Content-Type": "application/json",
    "X-Goog-FieldMask": "places.id,places.displayName,places.formattedAddress",
  },
  body: JSON.stringify({ textQuery: "dentists in Manhattan, NY", pageSize: 20 }),
}).then(r => r.json());

// Then for each place, hit Place Details for phone/website:
for (const p of text.places) {
  const details = await fetch(`https://places.googleapis.com/v1/places/${p.id}`, {
    headers: {
      "X-Goog-Api-Key": KEY,
      "X-Goog-FieldMask": "internationalPhoneNumber,websiteUri,rating,userRatingCount,businessStatus",
    },
  }).then(r => r.json());
  // ...merge, retry on 429, paginate via nextPageToken (with the 2s wait Google requires)
}
```
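That trailing comment is where the real work hides. The retry-on-429 piece alone looks something like this — a minimal exponential-backoff wrapper (my helper, not an official client; the nextPageToken 2-second wait and the dedupe layer go on top of it):

```javascript
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Retries a fetch on HTTP 429 with exponential backoff: 500ms, 1s, 2s, 4s, ...
// Any other status (including 5xx) is returned to the caller as-is.
async function fetchWithBackoff(url, init, { retries = 5, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429 || attempt >= retries) return res;
    await sleep(baseMs * 2 ** attempt);
  }
}
```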
basedonb for the same task:
```js
const job = await fetch("https://www.basedonb.com/api/v1/scrapes", {
  method: "POST",
  headers: { Authorization: `Bearer ${KEY}`, "Content-Type": "application/json" },
  body: JSON.stringify({ query: "dentists", country: "US", city: "Manhattan", target_leads: 200 }),
}).then(r => r.json());

if (job.status === "done") return job.results;
// else poll job.id until done, then GET /scrapes/{id}/results
```
The difference is "I'm assembling a workflow" vs. "I'm calling a function." Both have their place.
When to use which
Use Places API when:
- You need a place picker / autocomplete in a user-facing UI.
- You have a Place ID and want to refresh details.
- Your volume is low enough that the $200 free credit covers you.
- You need stable IDs that won't drift over time.
- Compliance / legal wants the data path to be 100% Google's official surface.
Use basedonb (or another Maps scraper) when:
- You're pulling bulk lead lists by category × geography.
- You want a flat per-lead price, not a per-call price.
- You don't want to manage pagination, fan-out, retry, and dedupe yourself.
- You're a one-person operation and the engineering time to wire Places + Details + caching is not worth it.
Use both when:
- You prospect with a Maps scraper for breadth, then refresh hot accounts with Place Details for stable IDs and freshness in your CRM. This is what most agencies I know end up doing.
Methodology details + raw numbers
Full methodology, the 100 queries, and the diff CSV are in this gist — feel free to re-run with your own keys. (Will publish before this post goes live.)
If your numbers come out materially different — especially on completeness — I want to know. Comment with the query and I'll dig in.
Closing
I built basedonb because the workflow above ("script Places + Details + retry + dedupe") was eating my Saturdays. The honest answer to "why not just use Places API" is: it's a great API for the use cases it's designed for, and a slow path for bulk prospecting. Pick the one that matches your shape of work.
— basedonb. Bias acknowledged; data above is reproducible.