Would Iceberg/Hudi run as a continuous service?
Hello,
Noob alert - I'm new to lakehouse architecture and trying to understand it better. Is a lakehouse table format such as Iceberg/Hudi (usually) going to run as a continuous process/service the way a DB engine would?
My understanding is that:
- Storage and compute are being decoupled.
- Iceberg/Hudi would run on cloud/clustered nodes (e.g. inside Spark), providing table services as an abstraction over the underlying Parquet files/other open formats. This acts as a centralized "blackbox" that provides CRUD and transaction services on top of the data.
- Any other data consumers looking to use that data (let's say a Java application or a Spark application) would use the Iceberg/Hudi API to interact with the "blackbox" above to read/write the data? (A rough sketch of what I imagine follows below.)
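
For concreteness, here is a minimal sketch of what I imagine, in Scala/Spark, assuming the Iceberg Spark runtime jar is on the classpath; the catalog name (`demo`), database/table names, and warehouse path are made-up placeholders:

```scala
import org.apache.spark.sql.SparkSession

object IcebergSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("iceberg-sketch")
      // Iceberg is wired in purely through Spark configuration; the table
      // format logic runs inside this application's driver and executors.
      .config("spark.sql.extensions",
        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
      .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.demo.type", "hadoop")
      .config("spark.sql.catalog.demo.warehouse", "s3://my-bucket/warehouse") // assumed path
      .getOrCreate()

    // Writes commit new Parquet data files plus Iceberg metadata to storage.
    spark.sql("CREATE TABLE IF NOT EXISTS demo.db.events (id BIGINT, msg STRING) USING iceberg")
    spark.sql("INSERT INTO demo.db.events VALUES (1, 'hello'), (2, 'world')")

    // Reads resolve the latest table snapshot from the metadata files.
    spark.sql("SELECT * FROM demo.db.events").show()

    spark.stop()
  }
}
```

If I understand correctly, nothing in this sketch is a long-running server: the Iceberg code executes inside the Spark job itself, and the only shared state is the data/metadata files in object storage. Is that right, or is there still some continuously running service involved?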
Thank you!