Update README and sample docker compose
jovezhong committed Oct 16, 2024
1 parent c01bc3f commit 803e902
Showing 11 changed files with 41 additions and 44 deletions.
18 changes: 10 additions & 8 deletions README.md
The [Docker Compose stack](https://github.com/timeplus-io/proton/tree/develop/ex

### Timeplus Cloud:

Don't want to set it up yourself? Try Timeplus in [Cloud](https://us-west-2.timeplus.cloud/)


### 🔎 Usage
SQL is the main interface. Open a new terminal window and run `proton client` to start the SQL shell.
> [!NOTE]
> You can also integrate Timeplus Proton with Python/Java/Go SDK, REST API, or BI plugins. Please check <a href="#-integrations"><strong>Integrations</strong></a>
In the `proton client`, you can write SQL to create [External Stream for Kafka](https://docs.timeplus.com/proton-kafka) or [External Table for ClickHouse](https://docs.timeplus.com/proton-clickhouse-external-table).

You can also run the following SQL to create a stream of random data:

```sql
-- Create a stream with random data
CREATE RANDOM STREAM devices(
device string default 'device'||to_string(rand()%4),
temperature float default rand()%1000/10);

-- Run the streaming SQL
SELECT device, count(*), min(temperature), max(temperature)
FROM devices GROUP BY device;
```

You should see data like the following:
What features are available with Timeplus Proton versus Timeplus Enterprise?
| | **Timeplus Proton** | **Timeplus Enterprise** |
| ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Deployment** | <ul><li>Single-node Docker image</li><li>Single binary on Mac/Linux</li></ul> | <ul><li>Single node, or</li><li>Cluster</li><li>Kubernetes-based self-hosting, or</li><li>Fully-managed cloud service</li></ul> |
| **Data sources** | <ul><li>Random streams</li><li>External streams to Apache Kafka, Confluent Cloud, Redpanda</li><li>Streaming ingestion via REST API (compact mode only)</li></ul> | <ul><li>Everything in Timeplus Proton</li><li>External streams to another Timeplus Proton or Timeplus Enterprise deployment</li><li>WebSocket and HTTP Stream</li><li>NATS</li><li>CSV upload</li><li>Streaming ingestion via REST API (with API key and flexible modes)</li><li>Hundreds of connectors from Redpanda Connect</li></ul> |
| **Data destinations (sinks)** | <ul><li>External streams to Apache Kafka, Confluent Cloud, Redpanda</li></ul> | <ul><li>Everything in Timeplus Proton</li><li>External streams to another Timeplus Proton or Timeplus Enterprise deployment</li><li>Slack</li><li>Webhook</li><li>Hundreds of connectors from Redpanda Connect</li></ul> |
| **Support** | <ul><li>Community support from GitHub and Slack</li></ul> | <ul><li>Enterprise support via email, Slack, and Zoom, with an SLA</li></ul> |

## 🧩 Integrations
The following drivers are available:
Integrations with other systems:

* ClickHouse https://docs.timeplus.com/proton-clickhouse-external-table
* Docker and Testcontainers https://docs.timeplus.com/tutorial-testcontainers-java
* Sling https://docs.timeplus.com/sling
* Grafana https://github.com/timeplus-io/proton-grafana-source
* Metabase https://github.com/timeplus-io/metabase-proton-driver
6 changes: 3 additions & 3 deletions docker/compose/docker-compose.yml
version: "3"

services:
proton-server:
image: d.timeplus.com/timeplus-io/proton:latest
pull_policy: always
ports:
- "8123:8123" # HTTP
- redpanda
- start
- --smp
- "1"
- --memory
- 1G
- --reserve-memory
- 0M
- --overprovisioned
- --node-id
- "0"
- --kafka-addr
- PLAINTEXT://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092
- --advertise-kafka-addr
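
Once the stack is up, one way to verify the Kafka wiring from `proton client` is an external stream against the Redpanda broker. A minimal sketch — the stream and topic names below are placeholders, not part of this compose file; create the topic in Redpanda first:

```sql
-- Sketch: read a Redpanda topic from Proton
-- (stream and topic names are placeholders)
CREATE EXTERNAL STREAM redpanda_probe(raw string)
SETTINGS type='kafka',
         brokers='redpanda:9092',
         topic='test';

SELECT raw FROM redpanda_probe;
```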
17 changes: 8 additions & 9 deletions examples/carsharing/README.md



This Docker Compose file demonstrates some typical query patterns that you can achieve in Proton to solve various use cases.

For more details, please check https://docs.timeplus.com/usecases


Simply run `docker compose up` in this folder. Two docker containers in the stack:

1. d.timeplus.com/timeplus-io/proton:latest, as the streaming database
2. timeplus/cardemo:latest, as the data generator

## Customer Scenario and Data Model
Please check https://docs.timeplus.com/usecases for more sample queries.

```sql
-- List live data
SELECT * FROM car_live_data;

-- Filter data
SELECT time,cid,gas_percent FROM car_live_data WHERE gas_percent < 25;

-- Downsampling
SELECT window_start,cid, avg(gas_percent) AS avg_gas_percent,avg(speed_kmh) AS avg_speed FROM
tumble(car_live_data,1m) GROUP BY window_start, cid;

-- Create materialized view
CREATE MATERIALIZED VIEW car_live_data_1min as
SELECT window_start AS time,cid, avg(gas_percent) AS avg_gas,avg(speed_kmh) AS avg_speed
FROM tumble(car_live_data,1m) GROUP BY window_start, cid;
SELECT * FROM car_live_data_1min;

-- Top K
SELECT avg(gap) FROM
( SELECT
date_diff('second', bookings.booking_time, trips.start_time) AS gap
FROM bookings
INNER JOIN trips ON (bookings.bid = trips.bid)
AND date_diff_within(2m, bookings.booking_time, trips.start_time)
) WHERE _tp_time >= now()-1d;

```

2 changes: 1 addition & 1 deletion examples/cdc/README.md
This docker compose file demonstrates how to capture live database change from a

Simply run `docker compose up` in this folder. Five docker containers in the stack:

1. d.timeplus.com/timeplus-io/proton:latest, as the streaming database.
2. docker.redpanda.com/redpandadata/redpanda, as the Kafka-compatible streaming message bus
3. docker.redpanda.com/redpandadata/console, as the web UI to explore data in Kafka/Redpanda
4. debezium/connect, as the CDC engine to read changes from OLTP and send data to Kafka/Redpanda
2 changes: 1 addition & 1 deletion examples/clickhouse/README.md
A YouTube video tutorial is available for visual learners: https://youtu.be/ga_D

Simply run `docker compose up` in this folder. Four docker containers in the stack:

1. d.timeplus.com/timeplus-io/proton:latest, as the streaming SQL engine.
2. clickhouse/clickhouse-server:latest
3. quay.io/cloudhut/owl-shop:latest, as the data generator. [Owl Shop](https://github.com/cloudhut/owl-shop) is an imaginary ecommerce shop that simulates microservices exchanging data via Apache Kafka.
4. docker.redpanda.com/redpandadata/redpanda, as the Kafka-compatible streaming message bus
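
The shape of the external-table DDL is sketched below — the ClickHouse address and table name are assumptions for illustration; see https://docs.timeplus.com/proton-clickhouse-external-table for the full option list:

```sql
-- Sketch: expose a ClickHouse table to Proton as an external table
-- (address and table name are placeholders)
CREATE EXTERNAL TABLE ch_events
SETTINGS type='clickhouse',
         address='clickhouse:9000',
         table='events';

-- Query it like a regular table
SELECT count(*) FROM ch_events;
```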
8 changes: 3 additions & 5 deletions examples/coinbase/README.md



This docker compose file demonstrates how to ingest WebSocket data into Proton using a Benthos pipeline.



## Start the stack

Simply run `docker compose up` in this folder. Three docker containers in the stack:

1. d.timeplus.com/timeplus-io/proton:latest, as the streaming database
2. jeffail/benthos:latest, a [Benthos](https://www.benthos.dev/) service as the data pipeline
3. an init container, which creates the tickers stream once the Proton database server is ready

output:
http_client:
url: http://proton:8123/proton/v1/ingest/streams/tickers
verb: POST
headers:
Content-Type: application/json
batching:
count: 10
WHERE
GROUP BY
window_start, product_id
```


4 changes: 2 additions & 2 deletions examples/ecommerce/README.md
For more details, please check https://docs.timeplus.com/proton-kafka#tutorial
## Start the example

Simply run `docker compose up` in this folder. Four docker containers in the stack:
1. d.timeplus.com/timeplus-io/proton:latest, as the streaming database.
2. quay.io/cloudhut/owl-shop:latest, as the data generator. [Owl Shop](https://github.com/cloudhut/owl-shop) is an imaginary ecommerce shop that simulates microservices exchanging data via Apache Kafka.
3. docker.redpanda.com/redpandadata/redpanda, as the Kafka-compatible streaming message bus
4. docker.redpanda.com/redpandadata/console, as the web UI to explore data in Kafka/Redpanda
Run the following commands:
```sql
-- Create an external stream to read data from Kafka/Redpanda
CREATE EXTERNAL STREAM frontend_events(raw string)
SETTINGS type='kafka',
brokers='redpanda:9092',
topic='owlshop-frontend-events';

```
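
Since the external stream stores each message as a single `raw` string, you can pull JSON fields out with the `raw:path` shortcut. A sketch — the field names below are assumptions about the owlshop event schema, not taken from this README, so inspect a few rows first:

```sql
-- Sketch: peek at raw Kafka payloads, then extract JSON fields
-- (field names are assumptions about the event schema)
SELECT raw FROM frontend_events LIMIT 3;

SELECT raw:ipAddress AS ip, raw:requestedUrl AS url
FROM frontend_events;
```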
3 changes: 1 addition & 2 deletions examples/fraud_detection/README.md
This docker compose file demonstrates how to leverage proton to build a real-tim
## Start the example

Simply run `docker compose up` in this folder. Three docker containers in the stack:
1. d.timeplus.com/timeplus-io/proton:latest, as the streaming database.
2. timeplus/fraud:latest, an online payment transaction data generator
3. jupyter/scipy-notebook:latest, a Jupyter notebook server


## Run Notebook

Visit `http://localhost:8888/notebooks/work/fraud_detection.ipynb` to access the notebook, then follow the code in the notebook step by step.

15 changes: 7 additions & 8 deletions examples/grafana/README.md
A YouTube video tutorial is available for visual learners: https://www.youtube.c

Simply run `docker compose up` in this folder. Three docker containers in the stack:

1. d.timeplus.com/timeplus-io/proton:latest, as the streaming SQL engine. Ports 8463 and 3218 are exposed so that Grafana can connect to it.
2. timeplus/cardemo:latest, as the data generator
3. grafana/grafana:latest, with pre-configured Proton dashboard and a live dashboard

Please check https://docs.timeplus.com/usecases for more sample queries.

```sql
-- List live data
SELECT * FROM car_live_data;

-- Filter data
SELECT time,cid,gas_percent FROM car_live_data WHERE gas_percent < 25;

-- Downsampling
SELECT window_start,cid, avg(gas_percent) AS avg_gas_percent,avg(speed_kmh) AS avg_speed FROM
tumble(car_live_data,1m) GROUP BY window_start, cid;

-- Create materialized view
CREATE MATERIALIZED VIEW car_live_data_1min as
SELECT window_start AS time,cid, avg(gas_percent) AS avg_gas,avg(speed_kmh) AS avg_speed
FROM tumble(car_live_data,1m) GROUP BY window_start, cid;
SELECT * FROM car_live_data_1min;

-- Top K
SELECT avg(gap) FROM
( SELECT
date_diff('second', bookings.booking_time, trips.start_time) AS gap
FROM bookings
INNER JOIN trips ON (bookings.bid = trips.bid)
AND date_diff_within(2m, bookings.booking_time, trips.start_time)
) WHERE _tp_time >= now()-1d;

```

4 changes: 2 additions & 2 deletions examples/hackernews/README.md
Inspired by https://bytewax.io/blog/polling-hacker-news, you can call Hacker New
## Start the example

Simply run `docker compose up` in this folder and it will start
1. d.timeplus.com/timeplus-io/proton:latest, with pre-configured streams, materialized views and views.
2. timeplus/hackernews_bytewax:latest, leveraging [bytewax](https://bytewax.io) to call Hacker News HTTP API with Bytewax and send latest news to Proton. [Source code](https://github.com/timeplus-io/proton-python-driver/tree/develop/example/bytewax)
3. A pre-configured Grafana instance to visualize the live data.

With all those streams and views, you can query the data in whatever way you like, e.g.
```sql
select * from comment;

select
story._tp_time as story_time,comment._tp_time as comment_time,
story.id as story_id, comment.id as comment_id,
substring(story.title,1,20) as title,substring(comment.raw:text,1,20) as comment
6 changes: 3 additions & 3 deletions examples/jdbc/README.md
This docker compose file demonstrates how to connect to Proton via the JDBC driver.

Simply run `docker compose up` in this folder. Two docker containers in the stack:

1. d.timeplus.com/timeplus-io/proton:latest, as the streaming database. Port 8123 is exposed so that the JDBC driver can connect to it.
2. timeplus/cardemo:latest, as the data generator

Please note that port 8123 of the Proton container is exposed to the host. You need this port to connect to Proton via the JDBC driver.
public class App {
}
return count;
}
}
public static void main(String[] args) {
String url = "jdbc:proton://localhost:8123";
String user = System.getProperty("user", "default");
First add the Proton JDBC driver to DBeaver. Taking DBeaver 23.2.3 as an example
* Default User: default
* Allow Empty Password

In the "Libraries" tab, click "Add Artifact" and type `com.timeplus:proton-jdbc:0.6.0`. Click the "Find Class" button to load the class.

Create a new database connection, choose "Timeplus Proton" and accept the default settings. Click "Test Connection..." to verify the connection is okay.

