From 456306663ecc7d0fa52279f9da25b213e3ef68a5 Mon Sep 17 00:00:00 2001
From: Lev Kokotov
Date: Thu, 25 Aug 2022 11:30:51 -0700
Subject: [PATCH] Editors pass over the blog

---
 .../blog/data-is-living-and-relational.md     | 31 ++++++++++++-------
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/pgml-docs/docs/blog/data-is-living-and-relational.md b/pgml-docs/docs/blog/data-is-living-and-relational.md
index 1a821ff19..2efe249bc 100644
--- a/pgml-docs/docs/blog/data-is-living-and-relational.md
+++ b/pgml-docs/docs/blog/data-is-living-and-relational.md
@@ -37,28 +37,35 @@ A common problem with data science and machine learning tutorials is the publish
-- It’s usually denormalized into a single tabular form, e.g. csv file
-- It’s often relatively tiny to medium amounts of data, not big data
-- It’s always static, new rows are never added
-- It’s sometimes been pre-treated to clean or simplify the data
+They are:
 
-As Data Science transitions from academia into industry, those norms influence organizations and applications. Professional Data Scientists now need teams of Data Engineers to move the data from production databases into centralized data warehouses and denormalized schemas that are more familiar, and ideally easier to work with. Large offline batch jobs are a typical integration point between Data Scientists and their Engineering counterparts who deal with online systems. As the systems grow more complex, additional specialized Machine Learning Engineers are required to optimize performance and scalability bottlenecks between databases, warehouses, models and applications.
+- usually denormalized into a single tabular form, e.g. a CSV file,
+- often relatively tiny to medium amounts of data, not big data,
+- always static, with new rows never added,
+- and sometimes pre-treated to clean or simplify the data.
 
-This eventually leads to expensive maintenance and then to terminal complexity where new improvements to the system become exponentially more difficult. Ultimately, previously working models start getting replaced by simpler solutions, so the business can continue to iterate. This is not a new phenomenon, see the fate of the Netflix Prize.
+As Data Science transitions from academia into industry, these norms influence organizations and applications. Professional Data Scientists need teams of Data Engineers to move data from production databases into data warehouses and denormalized schemas which are more familiar, and ideally easier to work with. Large offline batch jobs are a typical integration point between Data Scientists and their Engineering counterparts, who primarily deal with online systems. As the systems grow more complex, additional specialized Machine Learning Engineers are required to optimize performance and scalability bottlenecks between databases, warehouses, models and applications.
+
+This eventually leads to expensive maintenance and to terminal complexity: new improvements to the system become exponentially more difficult. Ultimately, previously working models start getting replaced by simpler solutions, so the business can continue to iterate. This is not a new phenomenon, see the fate of the Netflix Prize.
 
 Announcing the PostgresML Gym 🎉
 -------------------------------
 
-Instead of starting from the academic perspective that data is dead, PostgresML embraces the living and dynamic nature of data inside modern organizations. It's relational and growing in multiple dimensions.
+Instead of starting from the academic perspective that data is dead, PostgresML embraces the living and dynamic nature of data produced by modern organizations. It's relational and growing in multiple dimensions.
 
 ![relational data](/images/illustrations/uml.png)
 
-- Schemas are normalized for real time performance and correctness considerations
-- New rows are constantly added and updated, which form the incomplete features for a prediction
-- Denormalized datasets may grow to billions of rows, and terabytes of data
-- The data often spans multiple iterations of the schema, and software bugs can introduce outlier data
+Relational data:
+
+- is normalized for real time performance and correctness considerations,
+- and has new rows added and updated constantly, which form the incomplete features for a prediction.
+
+Meanwhile, denormalized data sets:
+
+- may grow to billions of rows, and terabytes of data,
+- and often span multiple iterations of the schema, with software bugs introducing outliers.
 
-We think it’s worth attempting to move the machine learning process and modern data architectures beyond the status quo. To that end, we’re building the PostgresML Gym to provide a test bed for real world ML experimentation in a Postgres database. Your personal gym will include the PostgresML dashboard and several tutorial notebooks to get you started.
+We think it’s worth attempting to move the machine learning process and modern data architectures beyond the status quo. To that end, we’re building the PostgresML Gym, a free offering, to provide a test bed for real world ML experimentation in a Postgres database. Your personal Gym will include the PostgresML dashboard, several tutorial notebooks to get you started, and access to your own personal PostgreSQL database, supercharged with our machine learning extension.