Hey everyone,
In all my previous projects, I've used a combination of Cloud Run (or similar) and managed compute services (EC2, Compute Engine, etc.) to deploy my services, and this has worked great.
With my new project, I've deployed a prototype on a managed compute service and it works fine. However, to scale the solution, I'll need to programmatically scale up persistent services with permanent storage volumes. This isn't a requirement I've had in previous projects, so I'm coming to you all for advice on how to proceed.
This seems to be exactly what k8s is for. If I take the time to learn how to use Kubernetes, I’m sure I could use it to solve my problem.
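For reference (hedged, since I've only skimmed the docs so far): the Kubernetes primitive that seems to match "persistent services with permanent storage volumes" is a StatefulSet with volumeClaimTemplates, which gives each replica a stable identity and its own volume. A rough sketch, where the name, image, and storage size are all placeholders of mine:

```yaml
# Hypothetical StatefulSet: each replica gets a stable identity
# (worker-0, worker-1, ...) and its own PersistentVolumeClaim.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: worker                # placeholder name
spec:
  serviceName: worker
  replicas: 3                 # scale programmatically via `kubectl scale` or the API
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: myrepo/worker:latest   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/worker
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi             # placeholder size
```

Scaling this up or down is then a one-line API call against `spec.replicas`, and the claims persist across restarts.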
However, I also feel confident that I can build my own solution to auto scale up these persistent containers.
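For what it's worth, a hand-rolled version still needs the same core control loop. A minimal sketch of just the scaling decision (the function name, thresholds, and metric are my own assumptions, not any real API):

```python
import math

def desired_replicas(active_jobs: int, jobs_per_instance: int,
                     min_replicas: int = 1, max_replicas: int = 20) -> int:
    """Hypothetical scaling rule: one persistent container per batch of
    active jobs, clamped to a sane range.

    A real homegrown loop would also have to reattach each container's
    storage volume, drain instances before tearing them down, and survive
    its own crashes -- which is exactly the part Kubernetes already solves.
    """
    wanted = math.ceil(active_jobs / jobs_per_instance)
    return max(min_replicas, min(max_replicas, wanted))
```

The decision logic is the easy 10%; the volume lifecycle and failure handling are where a quick-and-dirty solution tends to get expensive later.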
After some (admittedly brief) research into how k8s works, I'm beginning to realize that learning Kubernetes will be a task in itself and will eat up free time I could otherwise spend building out the product. However, I'm worried that going my own way will inevitably lead to scaling issues down the road.
What do you guys think: spend the time to do it properly, or build a quick-and-dirty solution for now and worry about it later?
Post Details
- Posted to r/kubernetes 2 years ago
- reddit.com/r/kubernetes/...