A DataMap can be created using the following DDL:
```
CREATE DATAMAP [IF NOT EXISTS] datamap_name
[ON TABLE main_table]
USING "datamap_provider"
[WITH DEFERRED REBUILD]
DMPROPERTIES ('key'='value', ...)
AS
  SELECT statement
```
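For example, a pre-aggregate datamap could be created as follows (a minimal sketch; the `sales` table and its columns are hypothetical):

```sql
-- Sketch: create a pre-aggregate datamap on a hypothetical sales table,
-- pre-computing the sum of price per country.
CREATE DATAMAP agg_sales
ON TABLE sales
USING "preaggregate"
AS
  SELECT country, sum(price)
  FROM sales
  GROUP BY country
```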
Currently, there are 5 DataMap implementations in CarbonData.
DataMap Provider | Description | DMPROPERTIES | Management
---|---|---|---
preaggregate | single-table pre-aggregate table | No DMPROPERTY is required | Automatic
timeseries | time dimension rollup table | `event_time`, `xx_granularity`; please refer to Timeseries DataMap | Automatic
mv | multi-table pre-aggregate table | No DMPROPERTY is required | Manual
lucene | Lucene indexing for text columns | `index_columns` to specify the index columns | Automatic
bloomfilter | bloom filter for high-cardinality columns or geospatial columns | `index_columns` to specify the index columns | Automatic
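For example, the `index_columns` property for a bloomfilter datamap might be specified as follows (a sketch; the table and column names are hypothetical):

```sql
-- Sketch: bloom filter index on hypothetical high-cardinality columns.
CREATE DATAMAP dm_bloom
ON TABLE sales
USING "bloomfilter"
DMPROPERTIES ('index_columns' = 'device_id, user_id')
```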
There are two kinds of management semantics for DataMap:

1. Automatic Refresh: Create the datamap without `WITH DEFERRED REBUILD` in the statement; this is the default.
2. Manual Refresh: Create the datamap with `WITH DEFERRED REBUILD` in the statement.

CAUTION:
If a user creates an MV datamap without specifying `WITH DEFERRED REBUILD`, CarbonData will give a warning and treat the datamap as deferred rebuild.
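For illustration, an MV datamap with manual refresh semantics could be created like this (a sketch; the tables and columns are hypothetical):

```sql
-- Sketch: a multi-table MV datamap created with deferred rebuild; it stays
-- disabled until REBUILD DATAMAP is issued.
CREATE DATAMAP dm_sales_mv
USING "mv"
WITH DEFERRED REBUILD
AS
  SELECT s.country, sum(s.price)
  FROM sales s
  JOIN regions r ON s.region_id = r.region_id
  GROUP BY s.country
```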
When a user creates a datamap on the main table without using the `WITH DEFERRED REBUILD` syntax, the datamap will be managed by the system automatically.
For every data load to the main table, the system will immediately trigger a load to the datamap automatically. These two data loads (to the main table and to the datamap) are executed in a transactional manner, meaning that either both succeed or both fail.
The data loading to the datamap is incremental, based on the Segment concept, avoiding an expensive total rebuild.
If a user performs any of the following commands on the main table, the system will return failure and reject the operation:

- Data management commands: `UPDATE`, `DELETE`, `DELETE SEGMENT`.
- Schema management commands: `ALTER TABLE DROP COLUMN`, `ALTER TABLE CHANGE DATATYPE`, `ALTER TABLE RENAME`. Note that adding a new column is supported; for the drop column and change datatype commands, CarbonData will check whether the operation impacts the pre-aggregate table. If not, the operation is allowed; otherwise it will be rejected by throwing an exception.

If a user does want to perform the above operations on the main table, they can first drop the datamap, perform the operation, and then re-create the datamap, as shown in the sketch below.
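For example, such a workflow might look like this (a sketch with hypothetical names):

```sql
-- Sketch: drop the datamap, perform the otherwise-rejected operation,
-- then re-create the datamap.
DROP DATAMAP IF EXISTS agg_sales ON TABLE sales;
ALTER TABLE sales DROP COLUMNS (discount);
CREATE DATAMAP agg_sales
ON TABLE sales
USING "preaggregate"
AS
  SELECT country, sum(price)
  FROM sales
  GROUP BY country
```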
If the user drops the main table, the datamap will be dropped immediately too.
We recommend using this management semantic for index datamaps.
When a user creates a datamap specifying the manual refresh semantic, the datamap is created with status disabled, and queries will NOT use it until the user issues the REBUILD DATAMAP command. Every REBUILD DATAMAP command triggers a full rebuild of the datamap. After the rebuild is done, the system changes the datamap status to enabled, so that it can be used in query rewrite.
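For example (assuming the hypothetical `dm_sales_mv` datamap sketched earlier):

```sql
-- Sketch: trigger a full rebuild; on success the datamap becomes enabled.
REBUILD DATAMAP dm_sales_mv
```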
For every new data load, data update, or delete, the related datamap will be disabled, which means that subsequent queries will not benefit from it until it becomes enabled again.
If the main table is dropped by the user, the related datamap will be dropped immediately.
Note:
Currently, when a user creates a datamap, the system stores the datamap metadata in a configurable system folder in HDFS or S3. This system folder contains the datamap schema and status information.
There is a DataMapCatalog interface to retrieve the schemas of all datamaps; it can be used by the optimizer to get datamap metadata.
How can a user know whether a datamap is used in a query?
The user can set `enable.query.statistics = true` and use the EXPLAIN command; it will print output like:
```
== CarbonData Profiler ==
Hit mv DataMap: datamap1
Scan Table: default.datamap1_table
+- filter:
+- pruning by CG DataMap
+- all blocklets: 1
   skipped blocklets: 0
```
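For instance, output like the above could be obtained as follows (a sketch; the query is hypothetical, and it is assumed the property can be set per session via SQL `SET`):

```sql
-- Sketch: enable query statistics, then inspect the plan for datamap hits.
SET enable.query.statistics = true;
EXPLAIN SELECT country, sum(price) FROM sales GROUP BY country;
```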
There is a SHOW DATAMAPS command. When it is issued, the system will read all datamaps from the system folder and print their information on screen, including the datamap name, the provider name, and the associated table.
This feature applies to the preaggregate datamap only.
Running the compaction command (`ALTER TABLE COMPACT`) on the main table will not automatically compact the pre-aggregate tables created on it. The user needs to run the compaction command separately on each pre-aggregate table to compact them.
Compaction is an optional operation for pre-aggregate tables. If compaction is performed on the main table but not on the pre-aggregate tables, all queries can still benefit from the pre-aggregate tables. To further improve query performance, compaction can be triggered on the pre-aggregate tables to merge their segments and files.
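For example (a sketch; `sales_agg_sales` assumes the conventional `maintable_datamapname` naming for the pre-aggregate table, and all names are hypothetical):

```sql
-- Sketch: compacting the main table does not compact its pre-aggregate
-- tables; each pre-aggregate table must be compacted separately.
ALTER TABLE sales COMPACT 'minor';
ALTER TABLE sales_agg_sales COMPACT 'minor';
```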