@kemonine lol, you warned me many months ago to self-host docker images, and here we are today, with docker hub pricing changes that significantly limit their plans (6-month retention, and 100 pulls per 6 hours)
None that I've ever seen. The best I was ever able to figure out was configuring the docker registry self-hosted stuff to use an S3 back end and then using B2 or Wasabi for the storage
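That setup can be sketched in the registry's config file. This is a minimal, hedged example: the bucket name, region, endpoint, and credentials below are placeholders, and the exact endpoint depends on which provider (B2, Wasabi, etc.) you point it at.

```yaml
# config.yml for the self-hosted registry (distribution/distribution),
# using the s3 storage driver against an S3-compatible backend.
# All values below are placeholders, not a working deployment.
version: 0.1
storage:
  s3:
    accesskey: <key-id>
    secretkey: <secret-key>
    region: us-west-002
    regionendpoint: https://s3.example-provider.com
    bucket: my-registry-bucket
http:
  addr: :5000
```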
The problem with that is you're going to be charged for egress in most cases, so you have to take the size of the images, how popular they are, and some other stuff into account.
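The egress math above is roughly multiplicative, which is why popular images get expensive fast. A back-of-envelope sketch (all prices here are illustrative placeholders, not any real provider's rates):

```python
# Rough monthly egress cost for serving images from S3-style storage.
# price_per_gb is a placeholder; check your provider's actual egress rate.
def monthly_egress_cost(image_size_gb, pulls_per_month, price_per_gb=0.01):
    """Cost = image size * pull count * per-GB egress price."""
    return image_size_gb * pulls_per_month * price_per_gb

# e.g. a 500 MB image pulled 10,000 times a month at $0.01/GB egress
print(monthly_egress_cost(0.5, 10_000))  # 50.0
```

Small image, modest popularity, and it is already $50/mo in bandwidth alone — hence B2/Wasabi-style providers with cheap or free egress being attractive.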
I also evaluated Harbor, but they didn't have 'fully public, no login necessary' features working at the time I evaluated the system.
I did briefly look at some of the cloud provider 'container registry' offerings from the likes of Azure/AWS/etc and... the pricing wasn't in line with what I needed. They have some egress/storage billing models that can be problematic depending on who the end consumer of the stored data is (internal services vs external users)
@kemonine @selea no specific community model for docker registry that I know of, although perhaps a community model along the lines of https://jortage.com/ could work for docker as well. At $dayjob I already operate a registry, and (in free time) have contributed code to a docker_auth project, and all that is needed is an interface for creating new users and adding permissions.
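For context, the user/permission side that project handles is driven by a YAML config of users and ACL rules. A hedged sketch of what that looks like (paths, the bcrypt hash, and the issuer string are placeholders, and the exact keys may differ between versions):

```yaml
# auth server config sketch: static users plus an ACL.
# All values are placeholders for illustration.
server:
  addr: ":5001"
token:
  issuer: "example auth server"
  expiration: 900
  certificate: /path/to/server.pem
  key: /path/to/server.key
users:
  "admin":
    password: "$2y$05$placeholder-bcrypt-hash"
  "": {}   # anonymous users
acl:
  - match: {account: "admin"}
    actions: ["*"]
  - match: {account: ""}
    actions: ["pull"]   # anonymous pull-only, i.e. 'fully public'
```

The missing piece described above would be a UI that writes entries into `users:` and `acl:` instead of editing this file by hand.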
I think I've just been nerdsniped, and may end up creating this project 😆
@selea@kemonine I use distributed minio on several kimsufi (2TB per 10$/mo) servers. So I get ~4TB of space, after erasure-encoding, for 40$/mo. Because of CPU, Diskspeed &network limitations, this solution "works", but isn't great (not fault of minio). This is a good $/GB that you can get w/ no extra costs for BW, but I'm looking at just spending 40 on hetz storage box, which gets me 5TB for slightly more, and faster network access, and putting one minio node in front, less headache.
@selea @kemonine As for the registry project, I'd probably also just get a Hetzner storage box and a single minio node in front until scale is needed. I've already written an integration for docker's registry to use BunnyCDN (as an alternative to AWS CloudFront, the only built-in CDN option), so that alleviates having to worry about the location of the server.
@selea @kemonine Then when scale is needed I'd probably move to something like ceph, as it has radosgw, which provides an S3-compatible API, and the dedupe is nice. Plus, even though it isn't as nice to set up, I have experience with it running the storage for gitea.com
@kemonine that's sad to hear about 🍭 ☁️. Sadly, open source relies on too many individuals, so when personal stuff happens, things just fade away. That's what I like about Gitea's contributor model: the load is spread out.