@silmathoron I'm a big fan of Drone.io for CI. The Gitea project uses it to build Gitea, and focuses a lot on ensuring compatibility. Several Gitea maintainers contribute to Drone as well.
@Gargron Still better than the 10% off the top that Patreon takes.
@Gargron Definitely would be useful for others. Platforms such as OpenCollective and Patreon take a significant amount off the top, so other OSS projects could use this and have less taken by intermediaries.
@travis woo! Gitea 4 lyfe. Are you using the helm chart?
@selea I think it was you (not entirely sure), but did you ask about logging into a Gitea instance with Mastodon? If so, I just submitted a PR to the auth library Gitea uses to support Mastodon, and it was merged today. One step closer.
@firstname.lastname@example.org That looks great! Gitea has a list of user created themes at https://gitea.com/gitea/awesome-gitea/ in case you were interested in sending a PR to be included.
@slapula nice! yeah, arm64 is a must these days.
@slapula You are in luck, there is an official one here: https://github.com/tootsuite/mastodon/tree/master/chart
@kemonine It definitely is great at limiting cost exposure. It actually has allowed us to use more powerful (read: expensive) machines, because we didn't have to keep them running all the time. I think the DO agents we use would be $160/mo (or $240, not 100% sure off the top of my head) if always on, and we can reach up to 2 or 3 at a time, yet it costs us only about $30/mo total for the agents specifically (we have other non-agent costs on DO, so our costs are slightly higher than that).
@kemonine Yeah, that's right. We use it for overflow builds: we have one long-running server on Packet, and the autoscaler set with DRONE_POOL_MIN=0 and DRONE_POOL_MIN_AGE=45m for DO, so that once servers have been alive for 45 minutes and are idle, they get terminated (bringing the pool back to 0 total). Also make sure to set a sane MAX on your pool. Should be the same w/ EC2 as DO.
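For anyone wanting to replicate the scale-to-zero setup described above, here's a rough sketch of the relevant autoscaler environment. The pool variable names come from the Drone autoscaler docs; the DigitalOcean token, region, and droplet size are placeholders you'd swap for your own:

```shell
# Drone autoscaler scale-to-zero config for DigitalOcean (sketch).
# Placeholder credentials and sizes -- adjust for your own setup.
export DRONE_POOL_MIN=0              # no idle agents when the build queue is empty
export DRONE_POOL_MAX=3              # sane cap on the pool (we peak around 2-3 agents)
export DRONE_POOL_MIN_AGE=45m        # idle agents older than 45 minutes get terminated
export DRONE_DIGITALOCEAN_TOKEN=your-do-api-token
export DRONE_DIGITALOCEAN_REGION=nyc1
export DRONE_DIGITALOCEAN_SIZE=s-4vcpu-8gb
```

The same pool variables should apply when pointing the autoscaler at EC2 instead of DO; only the provider-specific credential/size variables change.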
@kemonine I love the autoscaler (and have contributed code to improve it). We use it for Gitea, and you can 100% set it to 0 and only create servers when builds are in the queue.
@selea @kemonine As for the registry project, I'd probably also just get a Hetzner storage box with a single minio node in front until scale is needed. I've already written an integration for Docker's registry to use BunnyCDN (as an alternative to the only built-in CDN option, AWS CloudFront), so that alleviates having to worry about server location too.
@selea @kemonine I use distributed minio on several Kimsufi servers (2TB per $10/mo), so I get ~4TB of usable space after erasure coding, for $40/mo. Because of CPU, disk speed, and network limitations, this solution "works" but isn't great (no fault of minio's). It's a good $/GB with no extra costs for bandwidth, but I'm looking at just spending ~$40 on a Hetzner storage box instead, which gets me 5TB for slightly more, plus faster network access, and putting one minio node in front of it. Less headache.
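For the curious, a distributed minio cluster like the one above is started by running one command on every node. This is a sketch assuming four nodes with the data disk mounted at /data; the hostnames are placeholders, and the cost/capacity numbers in the comments just restate the arithmetic from the toot:

```shell
# Distributed MinIO sketch: run this same command on each of the 4 nodes.
# The {1...4} ellipsis syntax is MinIO's host-expansion notation.
# 4 nodes x 2TB raw = 8TB; with erasure coding roughly half is parity,
# leaving ~4TB usable -- at $10/mo per node that's $40/mo total.
minio server http://node{1...4}.example.com/data
```

With only one node in front of a single big storage box (the Hetzner plan mentioned above), you'd drop the host expansion entirely and just run `minio server /data` on that one machine.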