Interact with Amazon S3 using SQL with BryteFlow Blend

SQL Based Data Management

Run and schedule complex Hadoop/Spark data transformations using only SQL; no PySpark coding is required.
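As an illustration of the SQL-only approach, a transformation that might otherwise require PySpark can be expressed as a single SQL statement. The table and column names below are hypothetical, not taken from the product:

```sql
-- Hypothetical sketch: aggregate raw order events landed in the data lake
-- into an analytics-ready daily summary, using SQL alone.
CREATE TABLE daily_order_summary AS
SELECT
    order_date,
    customer_id,
    COUNT(*)         AS order_count,
    SUM(order_total) AS total_spend
FROM raw_orders
GROUP BY order_date, customer_id;
```

A statement like this could then be scheduled and orchestrated with the product's workflow features rather than hand-written Spark jobs.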

Workflow Automation

Run complex jobs and orchestrate your dependencies using SQL and integration with Amazon Simple Notification Service (SNS).


Build an environment of analytics-ready data assets for data consumers.

Key Features

Choices of destinations

Persist Data Assets in Amazon S3 and optionally export data assets to Amazon Redshift, Amazon Aurora or Snowflake.

Simple flow chart interface

Data Preparation on your BryteFlow Data Lake on Amazon S3 with a self-service point-and-click workbench to select, join and transform data.

Handshaking with BryteFlow Ingest

Integrates with BryteFlow Ingest to run data transformation jobs when required.

Full metadata and data lineage

All data assets will have automated metadata and data lineage.

Versioning of SQL

SQL statements are integrated with AWS CodeCommit for version control.

Integrate with Amazon CloudWatch Logs

Get monitoring and alerting capabilities through integration with Amazon CloudWatch Logs.

View landed Amazon S3 data as tables

View Amazon S3 data from within the workbench.
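For instance, once a dataset has landed in Amazon S3 it can be queried like an ordinary table from the workbench. The schema and table names below are hypothetical, for illustration only:

```sql
-- Hypothetical sketch: landed S3 data exposed as a queryable table
SELECT customer_id, order_total
FROM s3_landed.raw_orders
WHERE order_date >= '2020-01-01'
LIMIT 100;
```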

Classification of sensitive data

Blend ensures users see only the data they have been granted access to.

Download Data Sheet

BryteFlow addresses these challenges by providing a point-and-click SQL editor that harnesses the power of Amazon S3. Compared with other storage solutions, Amazon S3's cents-per-GB economics are too hard to ignore in an era of ever-growing data volumes. S3 has become the de facto object storage standard for the full spectrum of data, from raw to normalised, structured and unstructured.
