Supabase answers

· 10 min read
Serhii Hrekov
software engineer, creator, artist, programmer, projects founder

Custom Postgres Roles and RLS in Supabase

Yes, you can define your own custom PostgreSQL roles and use them with RLS in Supabase. This is a powerful feature for implementing sophisticated Role-Based Access Control (RBAC) [1, 2].

The key is to understand that Supabase's built-in auth.uid() function and authenticated role are tied to the user's session. To use custom roles, you need to store this role information and make it accessible to your RLS policies.


The Workflow

  1. Create a roles table: Create a new table to define your custom roles and associate them with user IDs.

    CREATE TABLE user_roles (
      user_id uuid PRIMARY KEY REFERENCES auth.users (id),
      role TEXT NOT NULL
    );
  2. Make the roles available: The most efficient way to expose this role data to RLS is to store it in a custom claim on the user's JWT. Supabase supports this via a Custom Access Token hook: a database function that Supabase Auth calls whenever it mints a token, letting you copy the role from your new table into the token's claims.

  3. Use the custom role in RLS: Once the role is in the JWT, you can access it in your RLS policies. The auth.jwt() function exposes the JWT claims as a JSON object, so you can filter based on them.

    CREATE POLICY "Allow editors to update content" ON articles
    FOR UPDATE
    USING ( (auth.jwt() ->> 'user_role') = 'editor' );

    This policy ensures that only users with a user_role of 'editor' in their JWT can update articles [1]. This approach is secure and performs well because the data is available in the JWT and doesn't require a separate database lookup for every query.
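
Step 2 above can be sketched with a Custom Access Token hook. The function below is a sketch assuming the user_roles table from step 1; the claim name user_role matches the policy in step 3, and the hook must still be enabled in the dashboard under Authentication → Hooks.

```sql
-- Sketch: copy the user's role from user_roles into a
-- 'user_role' claim each time Supabase Auth mints a JWT.
CREATE OR REPLACE FUNCTION public.custom_access_token_hook(event jsonb)
RETURNS jsonb
LANGUAGE plpgsql
AS $$
DECLARE
  claims jsonb := event->'claims';
  found_role text;
BEGIN
  SELECT role INTO found_role
  FROM public.user_roles
  WHERE user_id = (event->>'user_id')::uuid;

  IF found_role IS NOT NULL THEN
    claims := jsonb_set(claims, '{user_role}', to_jsonb(found_role));
  END IF;

  -- Return the event with the enriched claims object
  RETURN jsonb_set(event, '{claims}', claims);
END;
$$;
```

Supabase Auth executes hooks as the supabase_auth_admin role, so that role also needs EXECUTE on this function and SELECT on user_roles (grant both explicitly).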


Event Sourcing and CQRS with Supabase

Yes, Supabase can be used to implement event sourcing or Command Query Responsibility Segregation (CQRS) patterns, but it requires a careful architectural approach. Supabase's core features are well-suited for this [3].

Event Sourcing

  • The event store: You can use a standard PostgreSQL table as your event store. This table would store every event that occurs in your application, with a strict append-only policy. Each row would represent a single, immutable event.
  • The projection: You can use a PostgreSQL function or a trigger to "project" these events into a different table (a "read model") that is optimized for queries.
  • Real-time: Supabase's real-time features can be used to subscribe to the event store, allowing your front-end to react to new events as they are created.
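
As a concrete sketch of the first two bullets, here is a hypothetical append-only events table plus a trigger that folds deposit events into an account_balances read model (all table, column, and event names are illustrative):

```sql
-- Append-only event store: one row per immutable event
CREATE TABLE events (
  id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  aggregate_id uuid NOT NULL,
  event_type text NOT NULL,
  payload jsonb NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);

-- Read model, kept up to date by the projection below
CREATE TABLE account_balances (
  aggregate_id uuid PRIMARY KEY,
  balance numeric NOT NULL DEFAULT 0
);

-- Projection: fold each newly appended event into the read model
CREATE OR REPLACE FUNCTION project_event() RETURNS trigger AS $$
BEGIN
  IF NEW.event_type = 'amount_deposited' THEN
    INSERT INTO account_balances (aggregate_id, balance)
    VALUES (NEW.aggregate_id, (NEW.payload->>'amount')::numeric)
    ON CONFLICT (aggregate_id)
    DO UPDATE SET balance = account_balances.balance + EXCLUDED.balance;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER on_event_appended
AFTER INSERT ON events
FOR EACH ROW EXECUTE FUNCTION project_event();
```

To enforce the append-only policy, you could additionally revoke UPDATE and DELETE on events from the application roles.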

CQRS

  • Command side: You can use Supabase's built-in APIs or Edge Functions to handle the "command" side of your application (the writes). Commands would validate requests and append new events to your event store table.
  • Query side: The "query" side (reads) can use a separate, denormalized table (the read model) that is optimized for fast queries. This read model can be a simple table, a materialized view, or even a separate database, populated by the events in your event store [3].

Supabase provides the necessary building blocks—a relational database, triggers, and Edge Functions—to implement these advanced patterns without requiring a separate, specialized platform.
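
On the command side, the validate-then-append step can live in a single Postgres function. This sketch assumes a hypothetical append-only events table with columns aggregate_id, event_type, and payload serving as the event store:

```sql
-- Command handler: validate the request, then append an immutable event.
-- Never mutates state directly; the read model is updated by projections.
CREATE OR REPLACE FUNCTION deposit(p_account uuid, p_amount numeric)
RETURNS void AS $$
BEGIN
  IF p_amount <= 0 THEN
    RAISE EXCEPTION 'deposit amount must be positive';
  END IF;
  INSERT INTO events (aggregate_id, event_type, payload)
  VALUES (p_account, 'amount_deposited',
          jsonb_build_object('amount', p_amount));
END;
$$ LANGUAGE plpgsql;
```

A client could then issue the command through the API, e.g. supabase.rpc('deposit', ...), while all reads go against the read model.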


Streaming Data into Supabase

Yes, it is possible to stream data into Supabase, but it's not a native feature and requires an external service to manage the stream.

The Process

  1. External Stream: Your data stream (e.g., from Kafka, Kinesis, or another message queue) exists outside of Supabase.
  2. Streaming Consumer: You need to run a dedicated service (a "streaming consumer" or "background worker") that connects to your data stream. This service can be an application running on a server or a container in a service like AWS Fargate.
  3. Supabase Connection: This consumer service connects to your Supabase project using a direct connection string to the PostgreSQL database.
  4. Insert data: The consumer reads data from the stream and inserts it into your Supabase database using a standard database insert. This approach decouples the streaming process from your Supabase instance, preventing resource bottlenecks [4].

You could also use a dedicated database function to handle the inserts, which can be more efficient for bulk data.
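
For example, a function that accepts a whole batch as jsonb lets the consumer insert many rows in one round trip. The sensor_readings table and its columns here are made up for illustration:

```sql
-- Bulk ingest: the streaming consumer sends one jsonb array per batch
CREATE OR REPLACE FUNCTION ingest_readings(batch jsonb)
RETURNS integer AS $$
DECLARE
  inserted integer;
BEGIN
  INSERT INTO sensor_readings (device_id, reading, recorded_at)
  SELECT (r->>'device_id')::uuid,
         (r->>'reading')::numeric,
         (r->>'recorded_at')::timestamptz
  FROM jsonb_array_elements(batch) AS r;

  GET DIAGNOSTICS inserted = ROW_COUNT;
  RETURN inserted;  -- number of rows written from this batch
END;
$$ LANGUAGE plpgsql;
```

The consumer can call this over the direct connection or via an RPC, amortizing connection and transaction overhead across the whole batch.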


Optimistic Concurrency Control and Upsert Patterns

Supabase, through PostgreSQL, provides robust support for both optimistic concurrency control and upsert patterns.

Optimistic Concurrency Control (OCC)

OCC is a method for handling concurrent updates to the same data without using locking [5]. In Supabase, the best way to implement OCC is by using a versioning column in your table.

  1. Add a version column: Add a version column (e.g., an integer) to your table.

  2. Read and update: When you read a record, you also read its version number. When you update the record, you must include the version number in your WHERE clause.

    UPDATE products
    SET name = 'New Name', version = version + 1
    WHERE id = '...' AND version = <current_version_number>;
  3. Check for conflicts: If the update affects zero rows, you know a conflict occurred because another user updated the record first. Your application can then handle this conflict (e.g., by prompting the user to refresh) [5].
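
The read-update-check cycle can also be packaged into one database function, so the caller gets a clean conflict signal back. This sketch assumes the products table and version column from the snippet above:

```sql
-- Returns true when the update won the race,
-- false when another writer bumped the version first.
CREATE OR REPLACE FUNCTION rename_product(
  p_id uuid,
  p_expected_version integer,
  p_name text
) RETURNS boolean AS $$
DECLARE
  updated integer;
BEGIN
  UPDATE products
  SET name = p_name, version = version + 1
  WHERE id = p_id AND version = p_expected_version;

  GET DIAGNOSTICS updated = ROW_COUNT;
  RETURN updated = 1;
END;
$$ LANGUAGE plpgsql;
```

A false return tells the application to re-read the record and retry (or surface the conflict to the user).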

Upsert Patterns

The term "upsert" is a combination of "insert" and "update." It means inserting a record if it doesn't exist or updating it if it does. PostgreSQL has a native ON CONFLICT clause that provides a highly efficient way to handle this [6].
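
In raw SQL, the pattern looks like this (assuming a users table keyed on id):

```sql
INSERT INTO users (id, name)
VALUES (3, 'Charlie')
ON CONFLICT (id)
DO UPDATE SET name = EXCLUDED.name;
```

EXCLUDED refers to the row that was proposed for insertion, so an existing row with the same id is overwritten with the new values instead of raising a unique-violation error.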

In Supabase, you can use the .upsert() method in the client library.

const { data, error } = await supabase
  .from('users')
  .upsert([
    { id: 1, name: 'Alice' },
    { id: 2, name: 'Bob' },
    // Rows whose id already exists are updated; the rest are inserted
    { id: 3, name: 'Charlie' }
  ], { onConflict: 'id' });

This is a single database call, making it a very performant way to handle this pattern [6].


Multi-region Failover or Hot Standby

Supabase does not currently offer multi-region failover or hot standby for its managed PostgreSQL databases [7].

  • Managed Service: Supabase operates as a managed service in a single cloud region (e.g., us-east-1), and your database resides entirely within that region.
  • High Availability: Supabase provides a high-availability architecture within a single region, with automated backups and recovery features. However, it is not designed to failover seamlessly to a different geographic region.
  • The Future: While this feature is not available, it is on the Supabase roadmap for the Enterprise plan [7]. For now, you can use Read Replicas in different regions to improve read performance for global users, but the primary write database will always be in a single region.

Sources

  1. Supabase. "Postgres with JWT - Supabase". https://supabase.com/blog/postgres-with-jwt
  2. Supabase. "Row Level Security | Supabase Docs". https://supabase.com/docs/guides/database/postgres/row-level-security
  3. Supabase. "Event Sourcing on Supabase". https://www.youtube.com/watch?v=FqM5gG3E_R4
  4. Supabase. "Streaming data to Supabase". https://www.youtube.com/watch?v=zJvV3yVl6xY
  5. Supabase. "Optimistic Concurrency Control with Supabase". https://supabase.com/blog/optimistic-concurrency-control
  6. Supabase. "Upserting data". https://supabase.com/docs/guides/database/upsert
  7. Supabase. "Supabase Roadmap". https://supabase.com/changelog/2023-01-27-multi-region