Linabase Documentation
Learn how Linabase works, integrate the SDK, use the REST API, connect your ORM, or self-host on your own infrastructure.
Concepts
Before diving into code, here is how Linabase is organized and what the key building blocks are.
How Linabase Works
Linabase gives you a full Postgres database, a user authentication system, and S3-compatible file storage behind a single API. You interact with it through the JavaScript SDK, the REST API, or directly via a Postgres connection with your ORM of choice.
Every request is scoped to a project. Row-level security (RLS) policies you define on your tables are enforced automatically on every query, whether it comes from the SDK, the REST API, or a direct database connection.
Projects & Organizations
Your account can have multiple organizations (teams or companies). Each organization contains one or more projects. A project is an isolated environment with its own database schema, API keys, auth users, and storage bucket.
| Concept | What it is | Example |
|---|---|---|
| Organization | A team or company that owns projects | "Acme Corp" |
| Project | An isolated backend environment | "production", "staging" |
| API Key | Credentials scoped to a single project | lb_... (anon or service_role) |
Projects are fully isolated. Two projects can have tables with the same name, different RLS policies, and separate auth user pools without any conflict.
API Keys
Each project has two types of API keys:
| Key type | RLS | Use case |
|---|---|---|
| anon | Enforced | Client-side code, browser apps, mobile apps |
| service_role | Bypassed | Server-side code, admin scripts, migrations |
The service_role key bypasses all row-level security. Never expose it in client-side code or commit it to version control.
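One common way to keep the two keys from crossing over is to resolve them from the environment at startup. A minimal sketch — the variable names `PUBLIC_LINABASE_ANON_KEY` and `LINABASE_SERVICE_ROLE_KEY` are illustrative, not a Linabase convention:

```typescript
// Sketch: pick the right key for the current runtime.
// Env variable names here are illustrative, not a Linabase convention.
function pickApiKey(env: Record<string, string | undefined>): string {
  const inBrowser = typeof (globalThis as any).window !== "undefined";
  if (inBrowser) {
    // Client-side code must only ever see the anon key.
    if (!env.PUBLIC_LINABASE_ANON_KEY) throw new Error("anon key missing");
    return env.PUBLIC_LINABASE_ANON_KEY;
  }
  // Server-side code may use service_role, falling back to anon.
  return env.LINABASE_SERVICE_ROLE_KEY ?? env.PUBLIC_LINABASE_ANON_KEY ?? "";
}
```

Keeping the service_role key out of any bundle shipped to the browser is the important part; how you name or load the variables is up to you.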
Schema Isolation
Each project gets four Postgres schemas that keep your data, auth, storage, and logs separate:
| You see | Actual schema | Contents |
|---|---|---|
| public | proj_{id} | Your tables, views, and functions |
| auth | proj_{id}_auth | Auth users, sessions, identities (read-only) |
| storage | proj_{id}_storage | Buckets and file metadata (read-only) |
| logs | proj_{id}_logs | Audit and email logs (read-only) |
When you query from("users") in the SDK or /rest/v1/users in the REST API, it resolves to proj_{id}.users automatically. You never need to reference the internal schema name.
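As a mental model, the resolution described above can be sketched as a small lookup (illustrative only — the real mapping happens server-side):

```typescript
// Toy model of the schema mapping from the table above (illustrative only).
const SCHEMA_MAP: Record<string, (projectId: string) => string> = {
  public: (id) => `proj_${id}`,
  auth: (id) => `proj_${id}_auth`,
  storage: (id) => `proj_${id}_storage`,
  logs: (id) => `proj_${id}_logs`,
};

// Resolve a logical table reference to its fully qualified name.
function resolveTable(projectId: string, table: string, schema = "public"): string {
  const toSchema = SCHEMA_MAP[schema];
  if (!toSchema) throw new Error(`unknown schema: ${schema}`);
  return `${toSchema(projectId)}.${table}`;
}
```

So `from("users")` in project `abc123` resolves to `proj_abc123.users`, without the internal schema name ever appearing in your code.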
Quick Start
Get a working query running in under 2 minutes. You need a Linabase account and a project with at least one table.
1. Install the SDK
npm install @linabase/js
2. Initialize the client
Get your project URL and anon key from the API Keys page in your dashboard.
import { createClient } from "@linabase/js";
const linabase = createClient({
url: "https://your-project.linabase.com",
anonKey: "lb_your_anon_key_here",
});
export default linabase;
3. Run your first query
The SDK returns { data, error } from every operation. Always check error before using data.
import linabase from "./lib/linabase";
// Fetch rows (RLS is enforced automatically)
const { data, error } = await linabase
.from("posts")
.select("id, title, author:profiles(name)")
.order("created_at", { ascending: false })
.limit(10);
if (error) {
console.error("Query failed:", error.message);
} else {
console.log("Posts:", data);
}
The author:profiles(name) syntax joins the profiles table via a foreign key and returns the name column as author. This works for any FK relationship in your schema.
Ready to try it?
Create a free account and get your API keys in seconds.
JavaScript SDK
The @linabase/js SDK provides a Supabase-compatible client for database queries, authentication, and file storage. If you have used the Supabase JS client, the API is nearly identical.
The SDK automatically manages access tokens. After a successful sign-in, every subsequent database and storage request includes the user's JWT. You do not need to pass tokens manually.
Authentication
Linabase provides a per-project auth engine. Each project has its own user pool, so a user signed up in one project does not exist in another.
Sign up and sign in
When a user signs up, Linabase creates an auth record and returns a session with an access token. The SDK stores this token and attaches it to all future requests.
const { data, error } = await linabase.auth.signUp({
email: "user@example.com",
password: "securepassword",
data: { full_name: "Jane Doe" }, // optional metadata
});
const { data, error } = await linabase.auth.signIn({
email: "user@example.com",
password: "securepassword",
});
// data.session.access_token is set automatically for subsequent requests
OAuth providers
Redirect users to a third-party provider. After authentication, they are redirected back to your app with a valid session.
await linabase.auth.signInWithOAuth({
provider: "google", // or github, apple, discord, etc.
redirectTo: "https://yourapp.com/callback",
});
Session management
Check the current session, listen for auth state changes, or sign the user out.
// Get current session
const { data: session } = await linabase.auth.getSession();
// Get current user
const { data: user } = await linabase.auth.getUser();
// Listen for auth changes
const { unsubscribe } = linabase.auth.onAuthStateChange((event, session) => {
console.log(event); // "SIGNED_IN" | "SIGNED_OUT" | "TOKEN_REFRESHED"
});
// Sign out
await linabase.auth.signOut();
Password reset
Send a reset email, then update the password after the user clicks the link. The SDK handles the token exchange.
// Request reset email
await linabase.auth.resetPasswordForEmail("user@example.com");
// Update password (after clicking reset link)
await linabase.auth.updatePassword("new_password");
Admin operations
Server-side code using a service_role key can manage users directly without requiring them to go through the sign-up flow.
const { data } = await linabase.auth.admin.listUsers({ page: 1, per_page: 50 });
const { data: user } = await linabase.auth.admin.createUser({
email: "new@example.com",
password: "password",
email_confirm: true,
});
await linabase.auth.admin.deleteUser(user.id);
Admin methods require a service_role key. Never call these from client-side code.
Database
Query your Postgres database with a chainable builder API. Every query goes through the REST API and respects row-level security. The user making the request only sees rows their RLS policies allow.
Reading data
Use .select() to fetch rows. You can filter, sort, paginate, and join related tables in a single call.
const { data } = await linabase
.from("products")
.select("id, name, price, category:categories(name)")
.gte("price", 10)
.lte("price", 100)
.order("price", { ascending: true })
.limit(20);
Writing data
Insert single rows or batches. Use .upsert() when you want to update an existing row or insert a new one based on a unique constraint.
// Insert (single or batch)
await linabase.from("products").insert({ name: "Widget", price: 9.99 });
await linabase.from("products").insert([{ name: "A" }, { name: "B" }]);
// Upsert (insert or update on conflict)
await linabase.from("products").upsert({ id: 1, name: "Updated Widget" });
// Update with filter
await linabase.from("products").update({ price: 14.99 }).eq("id", 1);
// Delete with filter
await linabase.from("products").delete().eq("id", 1);
Filter reference
Chain any of these methods to narrow your query. They map directly to Postgres operators.
| Method | SQL Equivalent | Example |
|---|---|---|
| .eq(col, val) | = val | .eq("status", "active") |
| .neq(col, val) | != val | .neq("role", "admin") |
| .gt(col, val) | > val | .gt("age", 18) |
| .gte(col, val) | >= val | .gte("price", 10) |
| .lt(col, val) | < val | .lt("stock", 5) |
| .lte(col, val) | <= val | .lte("rating", 3) |
| .like(col, pat) | LIKE pat | .like("name", "%widget%") |
| .ilike(col, pat) | ILIKE pat | .ilike("email", "%@gmail%") |
| .in(col, vals) | IN (...) | .in("id", [1, 2, 3]) |
| .is(col, val) | IS val | .is("deleted_at", null) |
| .contains(col, val) | @> val | .contains("tags", ["new"]) |
| .textSearch(col, q) | to_tsquery | .textSearch("body", "hello & world") |
| .not(col, op, val) | NOT op | .not("age", "lt", "18") |
| .or(filter) | OR | .or("age.gt.20,name.eq.John") |
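To see how these methods relate to the REST API's filter syntax, here is a toy sketch of how a filter chain could serialize into PostgREST-style query parameters (illustrative only, not the SDK's actual internals):

```typescript
// Toy sketch: serialize a filter chain into PostgREST-style query params.
// Illustrative only -- not the SDK's real implementation.
class FilterBuilder {
  private params: string[] = [];

  eq(col: string, val: string | number) { return this.add(col, "eq", val); }
  gte(col: string, val: string | number) { return this.add(col, "gte", val); }

  in(col: string, vals: (string | number)[]) {
    this.params.push(`${col}=in.(${vals.join(",")})`);
    return this;
  }

  private add(col: string, op: string, val: string | number) {
    this.params.push(`${col}=${op}.${val}`);
    return this;
  }

  toQueryString(): string {
    return this.params.join("&");
  }
}

// A chain like .eq("status", "active").gte("price", 10) becomes
// "status=eq.active&price=gte.10" -- the same format the REST API accepts.
const qs = new FilterBuilder().eq("status", "active").gte("price", 10).toQueryString();
```

This is why the SDK filter table above and the REST operator table in the Filtering section line up one-to-one.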
Calling Postgres functions
Execute server-side functions defined in your schema. Arguments are passed as a JSON object.
const { data } = await linabase.rpc("get_top_users", { limit_count: 10 });
TypeScript type generation
Generate types from your database schema so every query is type-checked at compile time.
const types = await linabase.generateTypes();
// Write to src/types/database.ts for type-safe queries
Run type generation in CI or as a post-migration hook so your types stay in sync with your schema.
Storage
Each project gets an S3-compatible storage bucket. You create logical "buckets" (like avatars or documents) to organize files. Buckets can be public (anyone can read) or private (requires a signed URL).
Upload and download
const { data, error } = await linabase.storage
.from("avatars")
.upload("user-123/avatar.png", file, { contentType: "image/png" });
const { data: blob } = await linabase.storage
.from("avatars")
.download("user-123/avatar.png");
Public and signed URLs
Public buckets serve files directly. Private buckets require a time-limited signed URL.
// Public URL (for public buckets)
const { data: { publicUrl } } = linabase.storage
.from("avatars")
.getPublicUrl("user-123/avatar.png", {
transform: { width: 200, height: 200, quality: 80 },
});
// Signed URL (for private buckets, expires in 1 hour)
const { data: { signedUrl } } = await linabase.storage
.from("documents")
.createSignedUrl("report.pdf", 3600);
File management
// List files in a bucket
const { data: files } = await linabase.storage.from("avatars").list();
// Delete a file
await linabase.storage.from("avatars").remove("user-123/avatar.png");
// Move or copy
await linabase.storage.from("avatars").move("old.png", "new.png");
await linabase.storage.from("avatars").copy("orig.png", "backup.png");
Realtime (Event Webhooks)
Linabase fires webhooks on INSERT, UPDATE, and DELETE operations. Configure them in the dashboard under your project's webhook settings, or via the API.
Each webhook is signed with HMAC-SHA256 using a per-webhook secret. Verify the X-Webhook-Signature header on your server to confirm the payload came from Linabase.
{
"event": "insert",
"table": "posts",
"tenant_id": "abc123",
"timestamp": "2026-04-04T12:00:00.000Z",
"record": { "id": 1, "title": "New post" },
"old_record": null
}
For UPDATE events, old_record contains the row before the change. For DELETE events, record is null and old_record contains the deleted row.
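A minimal Node sketch of verifying the signature, assuming the X-Webhook-Signature value is the hex-encoded HMAC-SHA256 of the raw request body (the exact encoding is an assumption — confirm it in your webhook settings):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify an incoming webhook payload against its signature header.
// Assumes the signature is the hex-encoded HMAC-SHA256 of the raw body;
// check your webhook settings for the exact encoding.
function verifyWebhook(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Always verify against the raw request body, not a re-serialized JSON object — re-serialization can change key order or whitespace and break the comparison.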
REST API
The REST API provides PostgREST-compatible endpoints for direct database access. Use it when you are not using the JavaScript SDK, for example from a Go, Python, or Ruby backend, or from curl scripts.
Every request requires an Authorization header with a Bearer token. The token can be an API key or a user JWT. The token type determines the caller's identity and whether RLS is enforced.
# Pass your anon key as a Bearer token. RLS is enforced.
curl https://your-project.linabase.com/rest/v1/posts \
-H "Authorization: Bearer lb_your_anon_key"
# After sign-in, pass the user's JWT. RLS sees auth.uid() = the user's ID.
curl https://your-project.linabase.com/rest/v1/posts \
-H "Authorization: Bearer eyJhbGciOiJIUzI1NiIs..."
# The service_role key bypasses all RLS. Use only from server-side code.
curl https://your-project.linabase.com/rest/v1/posts \
-H "Authorization: Bearer lb_your_service_role_key"
The SDK handles token management automatically. After auth.signIn(), it attaches the user's JWT to every request. You only need to manage tokens manually when calling the REST API directly.
Auth Endpoints
These endpoints manage your project's end-user authentication. They are compatible with the GoTrue API format.
POST /auth/v1/signup
curl -X POST /auth/v1/signup \
-H "Authorization: Bearer lb_your_anon_key" \
-H "Content-Type: application/json" \
-d '{"email": "user@example.com", "password": "password123"}'
POST /auth/v1/token?grant_type=password
POST /auth/v1/token?grant_type=refresh_token
POST /auth/v1/logout
GET /auth/v1/user
PUT /auth/v1/user
POST /auth/v1/recover
POST /auth/v1/verify
GET /auth/v1/settings
CRUD Operations
Read, insert, update, and delete rows from any table in your project. Filters, sorting, and pagination are applied via query parameters.
GET /rest/v1/:table
# Select specific columns with filters
curl "/rest/v1/posts?select=id,title,author:profiles(name)&status=eq.published&order=created_at.desc&limit=10" \
-H "Authorization: Bearer lb_your_anon_key"
POST /rest/v1/:table
curl -X POST /rest/v1/posts \
-H "Authorization: Bearer lb_your_anon_key" \
-H "Content-Type: application/json" \
-H "Prefer: return=representation" \
-d '{"title": "Hello", "body": "World"}'
PATCH /rest/v1/:table?filters
DELETE /rest/v1/:table?filters
Use Prefer: return=representation to get the modified rows back in the response. Use Prefer: resolution=merge-duplicates for upsert behavior on POST.
Filtering
Append filters as query parameters using the format ?column=operator.value. These map directly to Postgres operators.
| Operator | Description | Example |
|---|---|---|
| eq | Equal | ?status=eq.active |
| neq | Not equal | ?role=neq.admin |
| gt / gte | Greater than (or equal) | ?age=gt.18 |
| lt / lte | Less than (or equal) | ?price=lt.100 |
| like / ilike | Pattern match | ?name=ilike.*widget* |
| in | In list | ?id=in.(1,2,3) |
| is | IS (null/true/false) | ?deleted_at=is.null |
| cs / cd | Contains / contained by | ?tags=cs.{new,sale} |
| ov | Overlaps | ?dates=ov.[2026-01-01,2026-12-31] |
| fts / plfts / phfts | Full-text search | ?body=fts.hello+world |
| not | Negate | ?age=not.lt.18 |
| or | OR conditions | ?or=(age.gt.20,name.eq.John) |
RPC / Functions
Call Postgres functions you have defined in your schema. Arguments are passed as a JSON body. The function must be in the project's public schema.
POST /rest/v1/rpc/:function_name
curl -X POST /rest/v1/rpc/get_top_users \
-H "Authorization: Bearer lb_your_anon_key" \
-H "Content-Type: application/json" \
-d '{"limit_count": 10}'
Schema Selection
By default, the REST API reads and writes to your project's public schema (your tables). To access other schemas, use the Accept-Profile header for reads and Content-Profile for writes.
| Schema | Description | Access |
|---|---|---|
| public | Your project tables (default) | Read/Write |
| auth | Auth users, sessions, identities | Read-only |
| storage | Buckets and file metadata | Read-only |
| logs | Audit logs, email logs | Read-only |
# Read from auth schema (requires service_role key)
curl /rest/v1/auth_users \
-H "Accept-Profile: auth" \
-H "Authorization: Bearer lb_your_service_role_key"
The auth, storage, and logs schemas are read-only via the REST API. Writes to auth users go through the /auth/v1/* endpoints; file writes go through the storage API.
Admin Endpoints
These endpoints require a service_role API key and allow you to manage users and inspect the API schema.
GET /auth/v1/admin/users
POST /auth/v1/admin/users
GET /auth/v1/admin/users/:id
PUT /auth/v1/admin/users/:id
DELETE /auth/v1/admin/users/:id
GET /rest/v1/openapi.json
GET /rest/v1/types
Use /rest/v1/openapi.json to auto-generate client libraries in any language, or import into Postman/Insomnia for interactive testing.
ORMs (Drizzle, Prisma, Knex)
Every Linabase project exposes a standard Postgres connection. You can use any ORM or database client that supports Postgres. This means you are never locked into the Linabase SDK; your existing codebase works as-is.
Get your connection string from Project Settings > Database Connection in the dashboard. It includes the host, port, database name, role, and password.
ORM connections go directly to Postgres, not through the REST API. RLS is still enforced because the connection uses a tenant-scoped role with a preset search_path.
Drizzle ORM
npm install drizzle-orm pg
For runtime queries, point Drizzle at your connection string. Table names resolve automatically because the role's search_path is set to your project schema.
import { drizzle } from "drizzle-orm/node-postgres";
import { profiles } from "./schema"; // table objects generated by drizzle-kit pull
// Connection string from Project Settings > Database Connection
const db = drizzle("postgresql://tenant_abc:password@host:3105/linabase");
// Queries work with unqualified table names
const users = await db.select().from(profiles);
For drizzle-kit (schema pull, push, migrations), you need to specify a schema filter so it only reads your project's schema, not system schemas.
import { defineConfig } from "drizzle-kit";
export default defineConfig({
dialect: "postgresql",
dbCredentials: {
url: "postgresql://tenant_abc:password@host:3105/linabase",
},
// Copy the schema name from Project Settings > Database Connection > Schema
schemaFilter: ["proj_abc123def456"],
});
# Pull existing schema into Drizzle schema files
npx drizzle-kit pull
# Push schema changes to the database
npx drizzle-kit push
# Generate migration files
npx drizzle-kit generate
Prisma
Prisma requires the multiSchema preview feature to work with Linabase's schema-per-project architecture.
datasource db {
provider = "postgresql"
url = "postgresql://tenant_abc:password@host:3105/linabase"
schemas = ["proj_abc123def456"]
}
generator client {
provider = "prisma-client-js"
previewFeatures = ["multiSchema"]
}
# Pull existing schema
npx prisma db pull
# Push changes
npx prisma db push
# Generate client
npx prisma generate
Knex / raw pg
Knex and raw pg clients work without any schema configuration because the tenant role's search_path resolves unqualified table names automatically.
import knex from "knex";
const db = knex({
client: "pg",
connection: "postgresql://tenant_abc:password@host:3105/linabase",
});
const posts = await db("posts").select("*").limit(10);
MCP Server
The Linabase MCP server lets AI coding assistants (Claude Code, Cursor, Windsurf) interact with your project directly. It provides 28 tools for schema exploration, CRUD operations, DDL, RLS management, storage, auth user management, TypeScript generation, and performance advice.
OAuth connection (recommended)
Add this to your AI client's MCP configuration. It opens a browser window for login, no API keys needed.
{
"mcpServers": {
"linabase": {
"type": "http",
"url": "https://your-instance.linabase.com/mcp"
}
}
}
Direct connection (stdio)
For local development or self-hosted instances, connect directly via the database URL.
{
"mcpServers": {
"linabase": {
"command": "npx",
"args": ["@linabase/mcp-server"],
"env": {
"LINABASE_DB_URL": "postgresql://tenant_abc:password@host:5432/linabase"
}
}
}
}
Add LINABASE_API_URL and LINABASE_API_KEY environment variables to enable auth user management tools alongside the database tools.
Self-Hosting
Linabase runs as a set of Docker containers orchestrated with Docker Compose. You need a Linux server with Docker installed, at minimum 2 CPU cores and 4 GB RAM.
1. Clone and configure
git clone https://github.com/linabase/linabase.git
cd linabase
# Copy the example env file and edit it
cp .env.example .env
At minimum, set POSTGRES_PASSWORD, BETTER_AUTH_SECRET, and JWT_SECRET to strong random values.
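Any secure random generator works for these values; for example, a small Node helper (a sketch, not a Linabase tool):

```typescript
import { randomBytes } from "node:crypto";

// Generate a URL-safe random secret. 32 bytes ~= 256 bits of entropy,
// which is plenty for POSTGRES_PASSWORD, BETTER_AUTH_SECRET, or JWT_SECRET.
function generateSecret(bytes = 32): string {
  return randomBytes(bytes).toString("base64url");
}
```

32 bytes yields a 43-character URL-safe string; `openssl rand -base64 32` on the command line works just as well.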
2. Start the stack
docker compose up -d
This starts Postgres, the web dashboard, REST API, storage proxy, and all supporting services. The dashboard is available at http://localhost:3100.
3. Run migrations
pnpm db:migrate
4. Create your first account
Open http://localhost:3100 in your browser and register. The first account becomes the admin.
For production deployments, use the production compose overlay: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d. This enables TLS via Caddy, resource limits, and restart policies.
| Service | Port | Description |
|---|---|---|
| Web dashboard | 3100 | TanStack React Start + Nitro |
| Caddy (reverse proxy) | 3101 | TLS termination, routing |
| Storage proxy | 3102 | S3-compatible file uploads |
| PostgreSQL + Citus | 3105 | Database |
| pgBouncer | 3106 | Connection pooling |
| REST API | 3107 | PostgREST-compatible API |
| MCP server | 3109 | AI assistant integration |
| Backup service | 3110 | Scheduled backups to S3 |
Pricing & Usage
Linabase uses pay-for-what-you-use pricing. There are no tiers and no feature gates. Every feature is available to every account.
Free Credit
Every organization gets $10/month in free credit that resets each billing cycle. No credit card is required to start.
bill = max(0, monthly_usage_cost - $10_credit)
If your usage stays under $10/month, you pay nothing. Most hobby and test apps will never exceed this. You will be prompted to add a payment method when your usage reaches 80% of the credit.
Usage Metering
Three resources are metered. Usage is pooled across all projects in your organization.
| Resource | Price | What counts |
|---|---|---|
| Database storage | $0.25/GB/month | Total data across all project schemas (tables, indexes, auth users) |
| File storage | $0.10/GB/month | Files stored in all project buckets |
| Bandwidth | $0.01/GB | API responses and file downloads (uploads are free) |
Email sends and database connections are not billed. Backups do not count against your storage. The billing page shows your current cost, credit applied, and estimated bill in real time.
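Putting the prices and the credit formula together, a sketch of the bill calculation (prices taken from the table above):

```typescript
// Estimate a monthly bill from the metered prices in the table above.
const FREE_CREDIT = 10; // USD per organization per month

function estimateBill(usage: { dbGb: number; fileGb: number; bandwidthGb: number }): number {
  const cost =
    usage.dbGb * 0.25 +       // database storage, $0.25/GB/month
    usage.fileGb * 0.10 +     // file storage, $0.10/GB/month
    usage.bandwidthGb * 0.01; // bandwidth, $0.01/GB (uploads are free)
  return Math.max(0, cost - FREE_CREDIT);
}
```

For example, 8 GB of database storage, 20 GB of files, and 100 GB of bandwidth costs $2 + $2 + $1 = $5, which stays entirely within the free credit.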
What Happens When You Exceed the Credit
When your monthly usage exceeds $10 and you have no payment method on file:
| Operation | Status |
|---|---|
| Read operations (SELECT, GET, downloads) | Allowed |
| Delete operations (helps you free space) | Allowed |
| Write operations (INSERT, UPDATE, uploads) | Blocked (402) |
Add a payment method to resume write access. You will only be charged for usage above the $10 credit at the end of each billing cycle.
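The gating rules in the table can be summarized as a simple predicate (illustrative only):

```typescript
type Operation = "select" | "download" | "delete" | "insert" | "update" | "upload";

// Sketch of the over-credit gating rules from the table above: reads and
// deletes stay available; writes are blocked with a 402 until a payment
// method is added.
function isAllowedOverCredit(op: Operation): boolean {
  return op === "select" || op === "download" || op === "delete";
}
```

The asymmetry is deliberate: deletes stay open so you can free up storage and drop back under the credit without adding a card.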
Self-Hosted
Self-hosted Linabase is completely free with no usage limits, no telemetry, and no phone-home. You run it on your own infrastructure and pay only for your server costs.
Next steps
- Create a free account to get your API keys
- Install the SDK with npm install @linabase/js
- Follow the Quick Start above to run your first query