What is a data lake? Massively scalable storage for big data analytics

In 2011, James Dixon, then CTO of the business intelligence firm Pentaho, coined the term data lake. He described the data lake in contrast to the data silos typical of data marts, which were popular at the time:

If you think of a data mart as a store of bottled water, cleansed and packaged and structured for easy consumption, the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.

Data lakes have evolved since then, and now compete with data warehouses for a share of big data storage and analytics. Various tools and products support faster SQL querying in data lakes, and all three major cloud providers offer data lake storage and analytics. There's even the new data lakehouse concept, which combines governance, security, and analytics with affordable storage. This article is a high dive into data lakes, including what they are, how they're used, and how to ensure your data lake doesn't become a data swamp.

The data lake explained

A data lake is essentially a single data repository that holds all your data until it is ready for analysis, or possibly only the data that doesn't fit into your data warehouse. Typically, a data lake stores data in its native file format, but the data may be transformed to another format to make analysis more efficient. The goal of having a data lake is to extract business or other analytic value from the data.

Data lakes can host binary data, such as images and video; unstructured data, such as PDF documents; and semi-structured data, such as CSV and JSON files, as well as structured data, typically from relational databases. Structured data is more useful for analysis, but semi-structured data can easily be imported into a structured form. Unstructured data can often be converted to structured data using intelligent automation.
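As a rough sketch of that last point about semi-structured data, here is one way JSON event records might be flattened into structured CSV rows using only the Python standard library. The field names and events are hypothetical, invented for illustration:

```python
import csv
import io
import json

# Hypothetical semi-structured events, one JSON document per line,
# as they might land raw in a data lake.
raw_events = """\
{"user": "alice", "action": "login", "ts": "2022-01-01T09:00:00"}
{"user": "bob", "action": "purchase", "ts": "2022-01-01T09:05:00", "amount": 19.99}
"""

# Flatten each JSON object onto a fixed set of columns;
# fields missing from a record become empty strings.
columns = ["user", "action", "ts", "amount"]
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=columns)
writer.writeheader()
for line in raw_events.splitlines():
    record = json.loads(line)
    writer.writerow({col: record.get(col, "") for col in columns})

structured_csv = buffer.getvalue()
print(structured_csv)
```

In practice this kind of flattening is done by ETL tools or Spark jobs rather than hand-written scripts, but the principle is the same: pick a column set, then map each semi-structured record onto it.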

Data lake vs. data warehouse

The question isn't whether you need a data lake or a data warehouse; you most likely need both, but for different purposes. It is also possible to combine them, as we'll discuss soon. To start, let's look at the major differences between data lakes and data warehouses:

  • Data sources: Typical sources of data for data lakes include log files, data from click-streams, social media posts, and data from internet-connected devices. Data warehouses typically store data extracted from transactional databases, line-of-business applications, and operational databases for analysis.
  • Schema strategy: The database schema for a data lake is usually applied at analysis time, which is called schema-on-read. The database schema for enterprise data warehouses is usually designed prior to the creation of the data store and applied to the data as it is imported. This is called schema-on-write.
  • Storage infrastructure: Data warehouses often have significant amounts of expensive RAM and SSD disks in order to provide query results quickly. Data lakes often use cheap spinning disks on clusters of commodity computers. Both data warehouses and data lakes use massively parallel processing (MPP) to speed up SQL queries.
  • Raw vs. curated data: The data in a data warehouse is supposed to be curated to the point where the data warehouse can be treated as the "single source of truth" for an organization. Data in a data lake may or may not be curated: data lakes typically start with raw data, which can later be filtered and transformed for analysis.
  • Who uses it: Data warehouse users are usually business analysts. Data lake users are more often data scientists or data engineers, at least initially. Business analysts get access to the data once it has been curated.
  • Type of analytics: Typical analysis for data warehouses includes business intelligence, batch reporting, and visualizations. For data lakes, typical analysis includes machine learning, predictive analytics, data discovery, and data profiling.
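To make the schema-on-read idea from the list above concrete, here is a minimal Python sketch; the log format and field names are invented for illustration. Raw lines are stored in the lake untouched, and the schema is applied only at the moment the data is read for analysis:

```python
from datetime import datetime

# Raw log lines land in the lake as-is; no schema is enforced at write time.
raw_lines = [
    "2022-03-01T12:00:00|/home|200",
    "2022-03-01T12:00:01|/checkout|500",
]

# Schema-on-read: structure and types are imposed only when analyzing.
def read_with_schema(line):
    ts, path, status = line.split("|")
    return {
        "timestamp": datetime.fromisoformat(ts),
        "path": path,
        "status": int(status),
    }

records = [read_with_schema(line) for line in raw_lines]
errors = [r for r in records if r["status"] >= 500]
```

A schema-on-write system would instead validate and type each line before storing it, rejecting anything that doesn't fit; schema-on-read defers that cost and that rigidity until query time.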

What about data marts?

Data marts are analysis databases that are restricted to data from a single department or business unit, as opposed to data warehouses, which combine all of a company's relational data in a form suitable for analysis. Data marts offer efficient analysis by containing only data relevant to the department; as such, they are inherently siloed. Some claim the siloing doesn't matter because the business unit doesn't need the excluded data. In real life, it often does matter: there's always a higher-up who needs reports based on combined data from multiple business units. That's one reason why we currently see many data lakes and data warehouses, and few data marts.

Faster SQL queries on a data lake

When you store raw data in a data lake, the data may be useless for business analysts until it has been processed by a data engineer or data scientist. In addition to filtering and data transformations, data lakes need data catalogs, data security, and schema definitions. The unfortunate shorthand term for a data lake without these features is data swamp.

Fortunately, there are plenty of tools to help filter and organize the data in your data lake. For example, you could address the need for a schema by creating a Hive metastore in ORC format. Once it's set up, the metastore supports fast SQL queries via a massively parallel SQL engine like Presto. (The Optimized Row Columnar format is a compressed columnar store that is optimized for Hive and works well with Presto.)
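The benefit of a columnar store like ORC can be illustrated with a toy model in plain Python. This is not the actual ORC format, which adds compression, stripes, and indexes, just the layout idea it relies on: each column is stored contiguously, so a query that needs only one column never touches the others.

```python
# Row-oriented data, as it might arrive in the lake.
rows = [
    {"region": "us", "sales": 100},
    {"region": "eu", "sales": 250},
    {"region": "us", "sales": 75},
]

# "Write" the rows into a column-oriented layout: one list per column.
columnar = {key: [row[key] for row in rows] for key in rows[0]}

# An aggregate over "sales" scans just that column;
# the "region" column is never read at all.
total_sales = sum(columnar["sales"])
```

Real columnar formats add run-length and dictionary encoding on top of this layout, which is why they compress well: values within a single column tend to be similar.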

Apache Spark is another massively parallel SQL engine. While it can work with the ORC format, it works even better with Parquet, another compressed columnar store. Spark can perform both vertical and horizontal partitioning on Parquet files, producing a query plan that requires reading only the necessary data and can skip irrelevant data.
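Here is a rough, stdlib-only sketch of horizontal partitioning and partition pruning, in the spirit of Spark reading Parquet files laid out under `date=...` directories; the partition names and records are hypothetical:

```python
# Files grouped by partition key, mimicking a date=.../part.parquet layout.
partitions = {
    "date=2022-01-01": [{"user": "alice", "amount": 10}],
    "date=2022-01-02": [{"user": "bob", "amount": 20}],
    "date=2022-01-03": [{"user": "carol", "amount": 30}],
}

def query(partitions, wanted_date):
    # Partition pruning: a filter on the partition key selects which
    # partitions to scan, so non-matching partitions are never read.
    key = f"date={wanted_date}"
    scanned = partitions.get(key, [])
    return sum(row["amount"] for row in scanned)

result = query(partitions, "2022-01-02")
```

In a real engine the pruning happens in the query planner: a `WHERE date = '2022-01-02'` predicate is matched against the directory layout before any file I/O is scheduled.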

The data lakehouse

Databricks, the company behind Spark and MLflow, offers what it calls a data lakehouse. According to Databricks, the lakehouse combines the best features of data warehouses and data lakes:

A data lakehouse unifies the best of data warehouses and data lakes in one simple platform to handle all your data, analytics, and AI use cases. It's built on an open and reliable data foundation that efficiently handles all data types and applies one common security and governance approach across all of your data and cloud platforms.

Delta Lake, which Databricks released to open source, forms the foundation of the lakehouse by providing reliability and high performance directly on data in the data lake. The Databricks Lakehouse Platform also includes Unity Catalog, which provides fine-grained governance for data and AI. Databricks claims that its data lakehouse offers 12 times the price/performance ratio of a data warehouse.

Data lakes on-prem and in public clouds

Historically, data lakes were implemented on-premises using Apache Hadoop clusters of commodity computers and HDFS (Hadoop Distributed File System). Hadoop clusters were once big business for Cloudera, Hortonworks, and others. Cloudera and Hortonworks merged in 2018, which tells you something about the direction of the market.

What changed was the cloud, specifically the hyperscale public cloud vendors Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). All three cloud providers offer data lake storage products: Amazon Simple Storage Service (Amazon S3) and Amazon EMR (formerly Amazon Elastic MapReduce), Azure Data Lake Store (ADLS), and Google Cloud Storage (GCS). All three also offer services for data ingestion, data processing, analytics, and machine learning. It's much easier and faster to create, manage, and scale cloud data lakes than to manage Hadoop clusters in your data center; the tradeoff is that long-term operational expenditures in the cloud will eventually become significant.

Analytics and machine learning for data lakes

Earlier, I discussed using Presto and Apache Spark for faster SQL queries on a data lake. SQL is just one of the ways to analyze data, although it's quite important and is often the first step. In addition, consider business intelligence tools such as Power BI, Tableau, or Qlik; Jupyter, Zeppelin, or Spark notebooks; machine learning, such as scikit-learn, SparkML, or KNIME; and deep learning, such as TensorFlow or PyTorch.

The hyperscale cloud vendors have analytics and machine learning tools of their own that connect to their data lakes.

Amazon Athena uses Presto and Hive to perform SQL queries on data in Amazon S3. Amazon EMR is a cloud big data platform for running large-scale distributed data processing jobs, interactive SQL queries, and machine learning applications using open source analytics frameworks such as Apache Spark, Apache Hive, and Presto. Amazon SageMaker is a fully managed service to build, train, and deploy machine learning models.

Azure Data Lake Analytics (ADLA) is an older on-demand (serverless) analytics job service that simplifies big data, and uses U-SQL, which is SQL plus C#. ADLA is being replaced by Azure Synapse Analytics, a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics. It gives you the freedom to query data on your own terms, using either serverless or dedicated options, at scale. Synapse combines data lake, enterprise data warehouse, and in-place operational data query functionality, and can automatically migrate data and code from ADLA as well as from data warehouses. Synapse has deep integration with Azure Machine Learning, Azure Cognitive Services, and Power BI.

Google Cloud Storage provides native integration with a number of powerful Google Cloud services, such as BigQuery (a data warehouse), Dataproc (Hadoop ecosystem), Dataflow (serverless streaming analytics), Video Intelligence API, Cloud Vision API, and AI Platform.

Altogether, you have considerable flexibility to choose the right tool to analyze your data.


Data lakes have become much more useful since the days of Hadoop clusters and MapReduce. Presto and Apache Spark offer much faster SQL processing than MapReduce, thanks to in-memory and massively parallel processing and Hive-based schemas. Cloud-based data lakes are much easier and faster to create, manage, and scale than on-prem clusters of commodity computers. And cloud data lakes integrate tightly with a wide range of analytics and artificial intelligence tools.

Copyright © 2022 IDG Communications, Inc.
