Querying uncompressed data is slower than querying compressed data

My hypertable has 37 chunks, and each chunk holds 5 million rows.
Querying the uncompressed data takes 16 seconds.
Querying the compressed data takes 9.5 seconds.

CREATE TABLE "public"."sensordatagps_ts" (
  "id" serial8,
  "station_id" int4,
  "sn" varchar(255) COLLATE "pg_catalog"."default",
  "car_no" varchar(255) COLLATE "pg_catalog"."default",
  "enable_level" varchar(255) COLLATE "pg_catalog"."default",
  "grid_key" varchar(255) COLLATE "pg_catalog"."default",
  "group" int4,
  "lat" float8,
  "lng" float8,
  "loc_time" int8,
  "time" timestamptz(6),
  "direction" int4,
  "speed" float8,
  "pm10" float8,
  "pm25" float8,
  "satellite_num" int4,
  "status" int4,
  "co" int4,
  "so2_f" float8,
  "no_f" float8,
  "no2" float8,
  "o3" float8,
  "tvoc" float8,
  "co_f" float8,
  "content" jsonb
);

SELECT create_hypertable(
  'sensordatagps_ts', 
  'time', 
  chunk_time_interval => INTERVAL '1 day', 
  associated_schema_name => '_timescaledb_internal', 
  associated_table_prefix => '_sensordatagps_ts'
);

I assume you want to ask why this is the case, don’t you?

A common misconception is that querying compressed data should be much slower (due to the overhead of decompressing it). That is not how Timescale compression works, though.

The individual columns are compressed independently, in a columnar-like fashion (internally into arrays). That means every column can be decompressed individually, which minimizes the actual data read from disk and yields far less IO traffic and far fewer operations. Hence there is often less work to do, and queries speed up.
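For illustration, enabling this columnar compression on a hypertable looks roughly like the following. This is a minimal sketch; the segmentby and orderby choices here are assumptions for this schema, not settings confirmed anywhere in this thread:

-- Enable native compression on the hypertable.
-- The segmentby/orderby columns below are illustrative assumptions.
ALTER TABLE sensordatagps_ts SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'station_id',
  timescaledb.compress_orderby   = 'time DESC'
);

-- Compress every chunk older than one day.
SELECT compress_chunk(c)
FROM show_chunks('sensordatagps_ts', older_than => INTERVAL '1 day') AS c;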

You can see that the compressed chunk (even though it uses a sequential scan) reads far fewer buffers. More can be found in the docs and blog posts.

Thanks a lot for your answer; I see now why querying compressed data is fast.
But I still don’t know why the query is so slow. It hits the compound index, so I think it should complete within 500 ms.
And how can I see that the compressed data reads far fewer buffers?

And how can I see that the compressed data reads far fewer buffers?

Use EXPLAIN (ANALYZE, BUFFERS). In your screenshot the normal table read 16498 buffers and the compressed chunk only 224.
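For example, prefixing the query with it prints buffer counts for every plan node. The WHERE clause below is only a placeholder, since the original query isn’t shown in the thread:

-- Placeholder query; substitute the actual predicate being tested.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM sensordatagps_ts
WHERE sn = 'some-sn'
  AND time >= now() - INTERVAL '1 day';
-- Look for "Buffers: shared hit=... read=..." under each plan node.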

But I still don’t know why the query is so slow. It hits the compound index, so I think it should complete within 500 ms.

What are your index definition and the compression settings (timescaledb.compress_orderby and timescaledb.compress_segmentby)? The query plan says it is using a sequential scan over the compressed chunks. Try adding the sn column to compress_segmentby; that way the scan can filter by sn and the query will be faster.
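A rough sketch of how that change could be applied, assuming TimescaleDB 2.x (existing chunks have to be decompressed before segmentby can be changed):

-- Inspect the current compression settings (TimescaleDB 2.x view).
SELECT * FROM timescaledb_information.compression_settings
WHERE hypertable_name = 'sensordatagps_ts';

-- segmentby cannot change while compressed chunks exist, so decompress first.
SELECT decompress_chunk(c, if_compressed => true)
FROM show_chunks('sensordatagps_ts') AS c;

ALTER TABLE sensordatagps_ts SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'sn',
  timescaledb.compress_orderby   = 'time DESC'
);

-- Recompress all chunks with the new settings.
SELECT compress_chunk(c, if_not_compressed => true)
FROM show_chunks('sensordatagps_ts') AS c;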

I created a normal table (sensordatagps_ts_o) with the same structure as the hypertable, stored one chunk’s worth of data in it, and created the same btree index (on sn). I found that there is a huge difference between querying a single chunk directly and querying the normal table. What could be the reason?
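For reference, the comparison setup described above can be reproduced roughly like this. This is a sketch; the time bounds are placeholders for whichever chunk was actually copied:

-- Plain table with the same structure as the hypertable.
CREATE TABLE sensordatagps_ts_o (LIKE sensordatagps_ts INCLUDING DEFAULTS);
CREATE INDEX ON sensordatagps_ts_o (sn);

-- Copy roughly one chunk's worth of rows (placeholder time bounds).
INSERT INTO sensordatagps_ts_o
SELECT * FROM sensordatagps_ts
WHERE time >= '2022-01-01' AND time < '2022-01-02';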