
Commit 63a8f4a

Fix broken links in docs (#1448)
1 parent 5753286 commit 63a8f4a

7 files changed: +292 -292 lines changed

pgml-cms/docs/SUMMARY.md

Lines changed: 2 additions & 3 deletions
@@ -14,7 +14,7 @@
 ## API
-* [Overview](api/apis.md)
+* [Overview](api/overview.md)
 * [SQL extension](api/sql-extension/README.md)
 * [pgml.embed()](api/sql-extension/pgml.embed.md)
 * [pgml.transform()](api/sql-extension/pgml.transform/README.md)
@@ -84,8 +84,7 @@
 * [Architecture](resources/architecture/README.md)
 * [Why PostgresML?](resources/architecture/why-postgresml.md)
 * [FAQs](resources/faqs.md)
-* [Data Storage & Retrieval](resources/data-storage-and-retrieval/tabular-data.md)
-* [Tabular data](resources/data-storage-and-retrieval/tabular-data.md)
+* [Data Storage & Retrieval](resources/data-storage-and-retrieval/README.md)
 * [Documents](resources/data-storage-and-retrieval/documents.md)
 * [Partitioning](resources/data-storage-and-retrieval/partitioning.md)
 * [LLM based pipelines with PostgresML and dbt (data build tool)](resources/data-storage-and-retrieval/llm-based-pipelines-with-postgresml-and-dbt-data-build-tool.md)
File renamed without changes.

pgml-cms/docs/product/vector-database.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ Vectors can be stored in columns, just like any other data type. To add a vector
 #### Adding a vector column
-Using the example from [Tabular data](../resources/data-storage-and-retrieval/tabular-data.md), let's add a vector column to our USA House Prices table:
+Using the example from [Tabular data](../resources/data-storage-and-retrieval/README.md), let's add a vector column to our USA House Prices table:
 {% tabs %}
 {% tab title="SQL" %}
Lines changed: 240 additions & 1 deletion
@@ -1,2 +1,241 @@
-# Data Storage & Retrieval
+# Tabular data

Tabular data is data stored in tables. A table is a format that defines rows and columns, and is the most common type of data organization. Examples of tabular data are spreadsheets, database tables, CSV files, and Pandas dataframes.

Storing and accessing tabular data efficiently has been the subject of decades of study, and is the core purpose of most database systems. PostgreSQL has been leading the charge on optimal tabular storage for a long time, and remains one of the most popular and effective ways to store, organize and retrieve tabular data today.
### Creating tables

Postgres makes it easy to create and use tables. If you're looking to use PostgresML for a supervised learning project, a Postgres table will feel very similar to a Pandas dataframe, except it's durable and accessible for as long as the database exists.

For the rest of this guide, we'll use the [USA House Prices](https://www.kaggle.com/code/fatmakursun/supervised-unsupervised-learning-examples/) dataset from Kaggle, store it in a Postgres table and run some basic queries. The dataset has seven (7) columns and 5,000 rows:

| Column                       | Data type | Postgres data type |
| ---------------------------- | --------- | ------------------ |
| Avg. Area Income             | Float     | REAL               |
| Avg. Area House Age          | Float     | REAL               |
| Avg. Area Number of Rooms    | Float     | REAL               |
| Avg. Area Number of Bedrooms | Float     | REAL               |
| Area Population              | Float     | REAL               |
| Price                        | Float     | REAL               |
| Address                      | String    | VARCHAR            |

Once we know the column names and data types, the Postgres table definition is pretty straightforward:
```plsql
CREATE TABLE usa_house_prices (
    "Avg. Area Income" REAL NOT NULL,
    "Avg. Area House Age" REAL NOT NULL,
    "Avg. Area Number of Rooms" REAL NOT NULL,
    "Avg. Area Number of Bedrooms" REAL NOT NULL,
    "Area Population" REAL NOT NULL,
    "Price" REAL NOT NULL,
    "Address" VARCHAR NOT NULL
);
```

The column names are double quoted because they contain special characters like `.` and spaces, which would otherwise be interpreted as part of the SQL syntax. Generally speaking, it's good practice to double quote all entity names when using them in a query, although most of the time it's not needed.
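To make the quoting rule concrete, here's a small example of our own (not part of the original walkthrough): the quoted identifier is the only form Postgres will accept for these column names.

```sql
-- Works: the quoted identifier matches the column name exactly, including the dot and spaces.
SELECT "Avg. Area Income" FROM usa_house_prices LIMIT 1;

-- Fails with a syntax error: without quotes, the dot and spaces are parsed as SQL tokens.
-- SELECT Avg. Area Income FROM usa_house_prices LIMIT 1;
```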
If you run this using `psql`, you'll get something like this:

```
postgresml=# CREATE TABLE usa_house_prices (
    "Avg. Area Income" REAL NOT NULL,
    "Avg. Area House Age" REAL NOT NULL,
    "Avg. Area Number of Rooms" REAL NOT NULL,
    "Avg. Area Number of Bedrooms" REAL NOT NULL,
    "Area Population" REAL NOT NULL,
    "Price" REAL NOT NULL,
    "Address" VARCHAR NOT NULL
);
CREATE TABLE
postgresml=#
```
### Ingesting data

When created for the first time, the table is empty. Let's import our example data using one of the fastest ways to do so in Postgres: with `COPY`.

If you're like me and prefer to use the terminal, you can open up `psql` and ingest the data like this:

```
postgresml=# \copy usa_house_prices FROM 'USA_Housing.csv' CSV HEADER;
COPY 5000
```
As expected, Postgres copied all 5,000 rows into the `usa_house_prices` table. `COPY` accepts CSV, text, and Postgres binary formats, but CSV is definitely the most common.

You may have noticed that we used the `\copy` command in the terminal, not `COPY`. The command comes in two forms: `\copy` is a `psql` command that copies data from local files to a (possibly remote) database, while `COPY` is the server-side SQL command more commonly used in applications to send data from other sources, like standard input, files, other databases and streams.

If you're writing your own application to ingest large amounts of data into Postgres, you should use `COPY` for maximum throughput.
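To make the distinction concrete, here's a minimal sketch of the server-side form; the path is hypothetical and must be readable by the Postgres server process itself, not by your laptop:

```sql
-- Server-side COPY: '/var/lib/postgresql/USA_Housing.csv' is a hypothetical path on the
-- database host. \copy, by contrast, reads the file on the machine running psql and
-- streams it over the connection.
COPY usa_house_prices FROM '/var/lib/postgresql/USA_Housing.csv' CSV HEADER;
```

Applications more commonly use the `COPY ... FROM STDIN` variant, which streams data over the connection without needing filesystem access on the server; the Python example further down uses exactly that form.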
### Querying data

Querying data stored in tables is what makes PostgresML so powerful. Postgres has one of the most comprehensive querying languages of all the databases we've worked with, so for our example, we won't have any trouble calculating some statistics:

```sql
SELECT
    count(*),
    avg("Avg. Area Income"),
    max("Avg. Area Income"),
    min("Avg. Area Income"),
    percentile_cont(0.75)
        WITHIN GROUP (ORDER BY "Avg. Area Income") AS percentile_75,
    stddev("Avg. Area Income")
FROM usa_house_prices;
```
```
 count |        avg        |    max    |   min    | percentile_75  |      stddev
-------+-------------------+-----------+----------+----------------+-------------------
  5000 | 68583.10897773437 | 107701.75 | 17796.63 | 75783.33984375 | 10657.99120344229
```

The SQL language is expressive and allows you to select, filter and aggregate any number of columns with a single query.
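As a small illustration of that expressiveness (our own sketch, not part of the original walkthrough; the income cutoff is simply the average computed above), filtering, grouping and aggregation can all happen in one statement:

```sql
-- Average price by (rounded) room count, restricted to above-average-income areas.
SELECT
    round("Avg. Area Number of Rooms") AS rooms,
    count(*) AS houses,
    avg("Price") AS avg_price
FROM usa_house_prices
WHERE "Avg. Area Income" > 68583
GROUP BY 1
ORDER BY 1;
```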
### Adding more data

Because databases store data permanently, adding more data to Postgres can be done in many ways. The simplest and most common way is to just insert it into a table you already have. Using the same example dataset, we can add a new row with just one query:

```sql
INSERT INTO usa_house_prices (
    "Avg. Area Income",
    "Avg. Area House Age",
    "Avg. Area Number of Rooms",
    "Avg. Area Number of Bedrooms",
    "Area Population",
    "Price",
    "Address"
) VALUES (
    199778.0,
    43.0,
    3.0,
    2.0,
    57856.0,
    5000000000.0,
    '1 Infinite Loop, Cupertino, California'
);
```
If you have more CSV files you'd like to ingest, you can run `COPY` for each one. Many ETL pipelines from Snowflake or Redshift chunk their output into multiple CSVs, which can be individually imported into Postgres using `COPY`:

{% tabs %}
{% tab title="Python" %}
```python
import psycopg
from glob import glob

with psycopg.connect("postgres:///postgresml") as conn:
    cur = conn.cursor()

    with cur.copy("COPY usa_house_prices FROM STDIN CSV") as copy:
        for csv_file in glob("*.csv"):
            with open(csv_file) as f:
                next(f)  # Skip header
                for line in f:
                    copy.write(line)
```
{% endtab %}

{% tab title="Bash" %}
```bash
#!/bin/bash

for f in $(ls *.csv); do
    psql postgres:///postgresml \
        -c "\copy usa_house_prices FROM '$f' CSV HEADER"
done
```
{% endtab %}
{% endtabs %}
Now that our dataset is changing, we should explore some tools to protect it against bad values.

### Data integrity

Databases store important data, so they were built with many safety features in mind to protect it from common errors. In machine learning, one of the most common errors is data duplication, i.e. having the same row appear in a table twice. Postgres can protect us against this with unique indexes.

Looking at the USA House Prices dataset, we can find its natural key pretty easily. Since most columns are aggregates, the only column that seems like it should contain unique values is the "Address", i.e. there should never be more than one house for sale at a single address.

To ensure that our table reflects this, let's add a unique index:
```sql
CREATE UNIQUE INDEX ON usa_house_prices USING btree("Address");
```

When creating a unique index, Postgres scans the whole table, checks to ensure there are no duplicates in the indexed column, and writes the column into an index using the B-Tree algorithm.

If we attempt to insert the same row again, we'll get an error:

```
ERROR: duplicate key value violates unique constraint "usa_house_prices_Address_idx"
DETAIL: Key ("Address")=(1 Infinite Loop, Cupertino, California) already exists.
```
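If your ingestion job should skip duplicates instead of failing (an assumption about your workload, not something the guide prescribes), the same unique index lets `INSERT` handle the conflict:

```sql
-- ON CONFLICT relies on the unique index on "Address" created above;
-- rows whose address already exists are silently skipped.
INSERT INTO usa_house_prices (
    "Avg. Area Income", "Avg. Area House Age", "Avg. Area Number of Rooms",
    "Avg. Area Number of Bedrooms", "Area Population", "Price", "Address"
) VALUES (
    199778.0, 43.0, 3.0, 2.0, 57856.0, 5000000000.0,
    '1 Infinite Loop, Cupertino, California'
)
ON CONFLICT ("Address") DO NOTHING;
```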
Postgres supports many more indexing algorithms, e.g. GiST, BRIN, GIN, and Hash. Many extensions, e.g. `pgvector`, implement their own index types like HNSW and IVFFlat, which help efficiently search and retrieve vector values. We explore those in our guide about [Vectors](../../product/vector-database.md).
### Accelerating recall

Once the dataset gets large enough, and we're talking millions of rows, it's no longer practical to query the table directly. The amount of data Postgres has to scan becomes large and queries become slow. At that point, tables should have indexes that order and organize commonly read columns. Searching an index can be done in _O(log n)_ time, which is orders of magnitude faster than the _O(n)_ full table scan.
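As a sketch of what that looks like in practice, a plain B-tree index is a single statement; the column choice here is our assumption about what might be read often, not something the dataset prescribes:

```sql
-- Hypothetical example: speeds up filters and sorts on "Price",
-- at the cost of extra storage and slightly slower writes.
CREATE INDEX ON usa_house_prices ("Price");
```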
#### Querying an index

Postgres automatically uses indexes when it's possible and optimal to do so. From our example, if we filter the dataset by the "Address" column, Postgres will use the index we created and return a result quickly:

```sql
SELECT
    "Avg. Area House Age",
    "Address"
FROM usa_house_prices
WHERE "Address" = '1 Infinite Loop, Cupertino, California';
```

```
 Avg. Area House Age |                Address
---------------------+----------------------------------------
                  43 | 1 Infinite Loop, Cupertino, California
(1 row)
```

Since we have a unique index on the table, we expect to see only one row with that address.

#### Query plan

To double check that Postgres is using an index, we can take a look at the query execution plan. A query plan is a list of steps that Postgres will take to get the result of the query. To see the query plan, prepend the keyword `EXPLAIN` to the query you'd like to run:

```
postgresml=# EXPLAIN (FORMAT JSON) SELECT
    "Avg. Area House Age",
    "Address"
FROM usa_house_prices
WHERE "Address" = '1 Infinite Loop, Cupertino, California';

                                          QUERY PLAN
----------------------------------------------------------------------------------------------
 [                                                                                            +
   {                                                                                          +
     "Plan": {                                                                                +
       "Node Type": "Index Scan",                                                             +
       "Parallel Aware": false,                                                               +
       "Async Capable": false,                                                                +
       "Scan Direction": "Forward",                                                           +
       "Index Name": "usa_house_prices_Address_idx",                                          +
       "Relation Name": "usa_house_prices",                                                   +
       "Alias": "usa_house_prices",                                                           +
       "Startup Cost": 0.28,                                                                  +
       "Total Cost": 8.30,                                                                    +
       "Plan Rows": 1,                                                                        +
       "Plan Width": 51,                                                                      +
       "Index Cond": "((\"Address\")::text = '1 Infinite Loop, Cupertino, California'::text)"+
     }                                                                                        +
   }                                                                                          +
 ]
```
The plan indicates that Postgres will use an "Index Scan" on `usa_house_prices_Address_idx`, which is what we're expecting. Using `EXPLAIN` doesn't actually run the query, so it's safe to use on production systems.
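If you want measured timings rather than estimates, `EXPLAIN ANALYZE` executes the query and reports actual row counts and run time. This is a general Postgres note rather than part of this walkthrough, so reserve it for statements you're happy to really run:

```sql
-- Executes the query and prints measured timings alongside the plan.
EXPLAIN ANALYZE
SELECT "Avg. Area House Age", "Address"
FROM usa_house_prices
WHERE "Address" = '1 Infinite Loop, Cupertino, California';
```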
The ability to create indexes on datasets of any size, and to efficiently query that data using them, is what separates Postgres from most ad-hoc tools like Pandas and Arrow. Postgres can store and query data that would never fit in memory, and it can do that quicker and more efficiently than most other databases used in the industry.
#### Maintaining an index
Postgres indexes require no special maintenance. They are automatically updated when data is added and removed. Postgres also ensures that indexes are efficiently organized and are ACID compliant: the database guarantees that the data is always consistent, no matter how many concurrent changes are made.
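If an index ever does become bloated after heavy insert/delete churn, it can be rebuilt online. This is an optional operation we mention for completeness, not something this guide requires:

```sql
-- Rebuilds the address index without blocking concurrent reads or writes (Postgres 12+).
REINDEX INDEX CONCURRENTLY "usa_house_prices_Address_idx";
```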

pgml-cms/docs/resources/data-storage-and-retrieval/partitioning.md

Lines changed: 1 addition & 1 deletion
@@ -108,7 +108,7 @@ This reduces the number of rows Postgres has to scan by half. By adding more par
 Partitioning by hash, unlike by range, can be applied to any data type, including text. A hash function is executed on the partition key to create a reasonably unique number, and that number is then divided by the number of partitions to find the right child table for the row.
-To create a table partitioned by hash, the syntax is similar to partition by range. Let's use the USA House Prices dataset we used in [Vectors ](broken-reference)and [Tabular data](tabular-data.md), and split that table into two (2) roughly equal parts. Since we already have the `usa_house_prices` table, let's create a new one with the same columns, except this one will be partitioned:
+To create a table partitioned by hash, the syntax is similar to partition by range. Let's use the USA House Prices dataset we used in [Vectors](../../product/vector-database.md) and [Tabular data](README.md), and split that table into two (2) roughly equal parts. Since we already have the `usa_house_prices` table, let's create a new one with the same columns, except this one will be partitioned:
 ```sql
 CREATE TABLE usa_house_prices_partitioned (
