Thanks for the question @anthonycorbacho
We don’t currently have documentation that points to this specifically because each situation is so different. Because TimescaleDB is PostgreSQL, there are certainly some more generalized rules of thumb that can help.
From an ingest perspective, your description would equate to ~7,500 rows/second (give or take), which is a very low threshold even for a low-powered TimescaleDB instance. In our various benchmarks over the years, even a “micro” 0.5 CPU / 1 GB RAM instance can achieve tens of thousands of rows/second insert rates.
But… even at 7,500 rows/second, you’ll still end up with ~650 million rows of data a day (it really does add up quickly, eh!?!), and obviously, that’s going to take a lot of storage over time. The good news is that TimescaleDB has best-in-class compression and data retention features to help you manage that growth and still maintain good query performance.
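If you want to sanity-check the daily volume yourself, it’s a one-liner in `psql`:

```sql
-- 7,500 rows/second × 86,400 seconds/day
SELECT 7500 * 60 * 60 * 24 AS rows_per_day;  -- 648,000,000
```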
So, inserting data at a consistent rate given your specs isn’t the limiting factor. Storing that much data and querying it efficiently - and right-sizing your server - is the main question. Our best practices section of the docs, which discusses chunk sizes, memory, etc., is a good place to start. Knowing your insert patterns, how long you need to retain raw data, how much space compression saves you, and how much continuous aggregates can help your current query patterns will all factor into figuring out how much memory, disk space, etc. you need.
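To make those pieces concrete, here’s a rough sketch of how chunk sizing, compression, retention, and a continuous aggregate fit together. The table, column names, and intervals (`metrics`, `device_id`, 1-day chunks, compress after 7 days, drop after 90 days, hourly rollup) are purely illustrative - you’d tune them to your own schema, query patterns, and retention requirements:

```sql
-- Hypothetical hypertable for illustration; adjust names/types to your schema.
CREATE TABLE metrics (
  time      timestamptz NOT NULL,
  device_id int         NOT NULL,
  value     double precision
);
SELECT create_hypertable('metrics', 'time',
         chunk_time_interval => INTERVAL '1 day');

-- Compress chunks once the data is (mostly) immutable;
-- segment by the column your queries filter on most.
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id'
);
SELECT add_compression_policy('metrics', INTERVAL '7 days');

-- Drop raw chunks after your retention window.
SELECT add_retention_policy('metrics', INTERVAL '90 days');

-- A continuous aggregate to serve common rollup queries cheaply.
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(value) AS avg_value
FROM metrics
GROUP BY bucket, device_id;
```

The chunk interval and policy windows are where the sizing questions above come in: you generally want recent, actively-queried chunks (plus indexes) to fit in memory, and the compression/retention windows follow directly from when your data becomes immutable and how long you must keep it.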
If you want to share some of those requirements - what your typical query patterns look like with regard to time, how much data you have to retain, and when data becomes (mostly) “immutable,” which informs compression settings to some extent - I could suggest a few ways to start testing a setup that closely mimics your data.