#78 Concept for the Codeberg CI

Open
opened 2 months ago by momar · 14 comments
momar commented 2 months ago

So, I’m basically planning to build a CI which is integrated with Gitea and can handle all limitations and requirements of Codeberg. This is the proposed concept, I’d be glad to get a lot of feedback!

Requirements

  • Securely build and deploy every commit, tag and pull request
  • Use as little resources per build as possible
  • Be as easy to use as possible, with the least possible effort for most repositories
  • Show a progress report of each build and make artifacts available for download
  • Automatically deploy & allocate servers on Hetzner
  • Limit resource consumption by each user
  • Limit monthly overall budget
  • Show build status and possibly progress and artifact links directly in Gitea
  • Possibly support for multi-arch builds
  • Probably a lot more I can’t currently think of

The plan

  • Use Docker to build stuff and isolate builds from each other
  • Use annotated Dockerfiles to specify all build instructions
    → many repositories already have a Dockerfile, so they would build automatically. If that’s not wanted for a repository, a .cifile would override the Dockerfile; an empty .cifile would basically disable the CI for this repository.
  • Keep the Docker Cache between builds, so new builds are faster
  • Show the build progress as a graph derived from the annotations in the Dockerfile
  • Use server-side receive hooks to trigger builds, and POST /repos/{owner}/{repo}/statuses/{sha} to set the build status of a commit
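As a minimal sketch of the last point: the receive hook could report back through Gitea's commit status endpoint. This assumes a bot token and Gitea's documented status states; the `status_payload` helper and the `codeberg-ci` context name are made up for illustration.

```python
import json
import urllib.request

GITEA_URL = "https://codeberg.org"  # assumption: API root of the Gitea instance


def status_payload(state, target_url, description, context="codeberg-ci"):
    """Build the JSON body for Gitea's commit status endpoint.

    Gitea's API accepts pending, success, error, failure and warning
    (worth double-checking against the deployed Gitea version).
    """
    assert state in {"pending", "success", "error", "failure", "warning"}
    return {
        "state": state,
        "target_url": target_url,
        "description": description,
        "context": context,
    }


def post_status(owner, repo, sha, token, payload):
    """POST /repos/{owner}/{repo}/statuses/{sha} with a bot token."""
    req = urllib.request.Request(
        f"{GITEA_URL}/api/v1/repos/{owner}/{repo}/statuses/{sha}",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"token {token}",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)


# e.g. post_status("momar", "exampleapp", sha, token,
#                  status_payload("pending", build_url, "build queued"))
```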

Annotations could look like this:

FROM golang:1 AS build
# define and run a subtask (if the "{" is left out, the file "setup" would be pasted here):
# @include setup {
COPY . /app
WORKDIR /app
# @}

# run two tasks simultaneously, then wait for them to complete:

# @required build {
RUN go build -o exampleapp .
# @}

# @optional test {
RUN go test
# @}

# any normal docker command or annotation except for "@optional" and "@required" would also wait for completion
# @complete

# deploy the binary as a download and push it to the codeberg.org Docker registry:

# @required deploy {
#   @artifact /app/exampleapp  # make artifact available for download
#   @if status != WARNING {  # if the test succeeded...
#     # ...deploy as a docker image
#     @push codeberg.org/momar/myexampleapp:latest
#     @if tag != "" {  # if the current commit has a tag attached...
#       # ...deploy as a tagged docker image
#       @push codeberg.org/momar/myexampleapp:${tag}
#     @}
#   @}
# @}
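To show that the annotation syntax above is mechanically parseable, here is a hypothetical parser sketch: comment lines starting with `# @` either open a block (trailing `{`), close one (`@}`), or stand alone, while everything else remains a plain Docker instruction. The node structure and function name are invented for illustration, not part of the proposal.

```python
import re

OPEN = re.compile(r"#\s*@(\w+)\s*([^{]*?)\s*\{\s*(?:#.*)?$")
CLOSE = re.compile(r"#\s*@\}\s*$")
SINGLE = re.compile(r"#\s*@(\w+)\s*(.*)$")


def parse_annotations(text):
    """Parse an annotated Dockerfile into a tree of annotation blocks."""
    root = {"type": "root", "body": []}
    stack = [root]
    for line in text.splitlines():
        stripped = line.strip()
        if CLOSE.match(stripped):
            stack.pop()                      # end of the current @block
        elif m := OPEN.match(stripped):
            node = {"type": m.group(1), "args": m.group(2), "body": []}
            stack[-1]["body"].append(node)   # open a nested @block
            stack.append(node)
        elif m := SINGLE.match(stripped):
            stack[-1]["body"].append({"type": m.group(1), "args": m.group(2)})
        elif stripped and not stripped.startswith("#"):
            stack[-1]["body"].append({"type": "docker", "args": stripped})
    return root
```

Running this over the example above would yield an `include` block, a `required build` block, an `optional test` block, and so on, from which the build graph could be derived.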

The resulting graph would then look like this:

[image: grafik, the resulting build graph (attachment /attachments/e7a11e93-ac2b-4246-9935-fdcefbd9212f)]

Plugins

We could link to other Dockerfile templates (e.g. using pongo2, https://github.com/flosch/pongo2) to run specific tasks:

# @do codeberg.org/momar/ci-plugins/create-release {
# # this is the YAML plugin configuration
# title: ${version}
# tag: ${tag}
# content: ${commit.message}
# files:
#   build: /app/exampleapp # from the step "build", include the file "/app/exampleapp"
# @if tag != version {
# isPrerelease: true
# @}
# @}

Finances

I’d suggest limiting by CPU cores, memory and parallel builds. An example, based on the current membership fees and with Hetzner CX21 build servers, could look like this:

  • each user and organisation (with more than 1 member) gets its own queue, resources are allocated within that queue
  • the owner with the biggest plan determines the plan of the organisation
  • non-members get 0.5 cores with 1GB of RAM, and can only have one build running at a time
  • members who pay 24€/year get 2 cores with 4GB of RAM, and can have 2 builds running at the same time (which then share those resources)
  • for each additional 12€/year, they get an additional core, an additional 2GB of RAM, and an additional simultaneous build
  • that way, members get 1 core for 1€ (excluding any other costs). Assuming that the average user runs 72 hours’ worth of builds in 30 days, 40 paying users could fit on a single CX21 server for <6€; with 24 paying users per server (so 25% of the income goes towards the CI servers), there would be space for 64 non-paying users.
  • the Docker cache could be stored on a Hetzner Storage Box, so limiting each repo to 5GB cache (and allowing exceptions) should be totally fine.
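The fee-to-resources mapping above can be written down as a small sketch. The numbers mirror the tiers suggested here; the function name and return shape are illustrative, not a decided policy.

```python
def build_plan(fee_eur_per_year):
    """Resource allocation per user under the proposed tiers:
    non-members get 0.5 cores / 1 GB / 1 build; 24 EUR/year gets
    2 cores / 4 GB / 2 builds; each additional 12 EUR/year adds
    one core, 2 GB of RAM and one simultaneous build."""
    if fee_eur_per_year < 24:  # non-members
        return {"cores": 0.5, "ram_gb": 1, "parallel_builds": 1}
    extra = (fee_eur_per_year - 24) // 12
    return {
        "cores": 2 + extra,
        "ram_gb": 4 + 2 * extra,
        "parallel_builds": 2 + extra,
    }
```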

I’m not sure about how wrong my calculations are, and have no overview about our general finances, but it seems like it would be possible to support a CI system like that.

Rinma commented 2 months ago

I really don’t like the idea of misusing Dockerfiles for annotations. This could lead to unwanted errors or problems with Docker itself. I would prefer a separate file like the proposed .cifile to configure CI/CD.

Also, I don’t really get what you mean with “Automatically deploy & allocate servers on Hetzner”? Why not think about the possibility to host own servers? What if I want to use a different hoster? Otherwise, the possibility to get a server from codeberg, if I don’t want to host my own, sounds like a good idea.

Do you want to build this CI server only for codeberg or should this be a standalone product and you want to provide integration in gitea? Can users with self hosted gitea instances use this CI server?

momar commented 2 months ago
Poster

I really don’t like the idea to misuse the dockerfiles for annotations. This could lead to unwanted errors or problems with docker itself. I would prefer a separate file like the proposed .cifile to configure CI/CD.

The .cifile would just be another Dockerfile that replaces the main one in CI builds for repositories which don’t want the main Dockerfile to specify the CI behaviour. I think using the main Dockerfile by default would lead to more people using the CI “by accident”, which hopefully leads to an “oh, that just works” moment.

In my opinion, Dockerfiles are a great way to specify what to do when building an application in an isolated way - if they fail, the build failed, if they succeed, the build was successful. I think annotations are required for CI features not possible with only Docker, but you’re right that they should not break the normal Docker build process.

Parallel tasks are probably the biggest feature that would break Docker, and maybe was a bit overengineered by me, so maybe we could leave that away - to keep further compatibility, @if could be replaced with @include <filename> if <condition>, so it would only apply to CI builds anyways. @do is probably unnecessary if there are easy-to-use Docker images for the most common CI tasks (e.g. with FROM codeberg.org/ci/create-release and RUN create-release ...).

That would leave us with the following annotations:

  • @artifact - I think it’s important that builds can have files attached; it not being executed by Docker wouldn’t be a problem
  • @push - After the Dockerfile has been built, it will need to be pushed somewhere - we could make that a repository setting, or automatically push to codeberg.org/<user>/<repo> as an alternative.
  • @include - This leaves a possibility to import another file only within the CI when using the Dockerfile directly, or under certain circumstances. Maybe the blocks should be removed though, so only other files could be included.

Also, I don’t really get what you mean with “Automatically deploy & allocate servers on Hetzner”? Why not think about the possibility to host own servers? What if I want to use a different hoster? Otherwise, the possibility to get a server from codeberg, if I don’t want to host my own, sounds like a good idea.

Codeberg is currently hosted on Hetzner AFAIK, so that would be the simplest way here to provide build resources to users - I’d propose to make it flexible enough so other hosters could be added later though.
My idea would be to have a “standalone” server, and a “master” server - the master server wouldn’t run any jobs by itself (but would host the build output pages and artifacts), and would then start/stop new servers as needed (or use a fixed set of standalone servers when hosted manually), and distribute the tasks across the servers.
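The per-user queueing that the master server would do can be sketched as a toy scheduler: one queue per user/organisation, each owner capped at its own parallel-build limit. Class and method names are invented for illustration.

```python
from collections import deque


class Scheduler:
    """Toy sketch of the proposed master server's dispatching:
    one queue per owner, at most `limit` running builds each."""

    def __init__(self):
        self.queues = {}   # owner -> deque of pending build ids
        self.running = {}  # owner -> set of running build ids
        self.limits = {}   # owner -> max parallel builds

    def submit(self, owner, build_id, limit):
        self.queues.setdefault(owner, deque()).append(build_id)
        self.running.setdefault(owner, set())
        self.limits[owner] = limit

    def next_builds(self):
        """One scheduling pass: start every pending build whose owner
        still has free parallel-build slots."""
        started = []
        for owner, queue in self.queues.items():
            while queue and len(self.running[owner]) < self.limits[owner]:
                build = queue.popleft()
                self.running[owner].add(build)
                started.append((owner, build))
        return started
```

A real master would additionally track which standalone server each build lands on and start or stop servers as the total load changes.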


Do you want to build this CI server only for codeberg or should this be a standalone product and you want to provide integration in gitea? Can users with self hosted gitea instances use this CI server?

The plan is to keep to the requirements of Codeberg, but in the end it should be compatible with any Gitea instance, or basically any Git server that supports statuses and receive hooks.

Rinma commented 2 months ago

I think using the main Dockerfile by default would lead to more people using the CI “by accident”, which hopefully leads to an “oh, that just works” moment.

So if I have a normal Dockerfile without any of the annotations the CI will work also?

momar commented 2 months ago
Poster

That’s the idea - the annotations are just to do extra stuff Docker itself is not capable of within the CI.

hw commented 2 months ago
Owner

@momar : thumbs up, great concept. A few random notes, by no means intended to derail anything:

  • Most critical issue imo is the UI integration with Gitea; this will probably be the harder part.

  • Assuming API calls to launch VMs are nicely wrapped and isolated in a module, various backends should be straightforward to integrate, including VMs on own servers (libvirt & friends come to mind; there are surely other options?). This seems to be a logical 2nd step.

  • Docker is nice and popular, but a huge number of large and very popular projects running CI unit tests do not use docker (llvm, gcc, +all compilers, tensorflow, mxnet, +all deep learning frameworks, just to name a few examples -- they are not at Codeberg yet, but we surely want to keep the door open;).

  • Otoh, a native script can always invoke docker. Having the .ciconfig for example as simple script (or some format embedding the build+test script) is far more flexible, even simpler to implement.

@Rinma :

Why not think about the possibility to host own servers?

Yes, this is long-term the most economical option, for sure. Also guarantees a fixed cost budget, and compute pool we can fairly distribute between projects.

Please don’t hesitate to contact us if helpful.


varshitbhat commented 2 months ago

What about hosting drone.io and integrating Gitea with it? The alternative would be Jenkins. Integrating with Gitea is just adding a “Jenkins” user as a member of the project. And Jenkins/drone.io would have another login page.

momar commented 2 months ago
Poster

Docker is nice and popular, but a huge number of large and very popular projects running CI unit tests do not use docker (llvm, gcc, +all compilers, tensorflow, mxnet, +all deep learning frameworks, just to name a few examples -- they are not at Codeberg yet, but we surely want to keep the door open;).

The problem is that we’d then need a solution to set up an environment - the CI scripts for some of them might work best under Debian, and for others under Ubuntu. With Docker, they can choose, add the files they need and run the scripts they need with minimal effort:

FROM ubuntu:disco
RUN apt-get update && apt-get -y install ...
COPY . /build
WORKDIR /build
RUN make
RUN make test
RUN make install

As Docker is an established standard to set up an isolated environment, I think it’s suited perfectly as a basis for a CI - projects who don’t want to offer Docker support could still use the example above as a .cifile and could skip writing documentation for Docker.

For a native script, we’d have to think about isolation and a lot more (so, every build gets its own VM? how about caching of build steps? what about providing multiple distributions? and so on…)

What about hosting drone.io and integrating Gitea with it? The alternative would be Jenkins. Integrating with Gitea is just adding “Jenkins” user as one of member of project. And jenkins/drone.io would have another login page.

In my experience, Jenkins needs to be constantly updated to be secure and is mostly suited for single-user/-company deployments. Drone is a great alternative, and I thought a lot about how it could be used with Codeberg, but we’d need to add a possibility to limit resources per user (I think that’s not yet possible), and to add/remove servers on demand (which probably somehow would work with Kubernetes). I also miss the possibility to host build artifacts - maybe we could add this to Drone.

So yeah, Drone would probably be the easiest and fastest to set up (although we might need to add some features ourselves), while my idea would be using a more widespread standard (Drone also uses Docker, but additionally requires a pipeline file) and could integrate more tightly with a Codeberg Docker Registry (which I think is also being planned in some issue).

Maybe we could also use Drone, but if there’s no .drone.yml but a Dockerfile, use a default .drone.yml:

kind: pipeline
name: default

steps:
- name: docker  
  image: plugins/docker
  settings:
    registry: codeberg.org
    username: momar
    password: blah
    repo: momar/example
    tags: latest
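That fallback could be a small generator on the CI side: emit a default pipeline only when the repository ships a Dockerfile but no .drone.yml. This sketch is an assumption, not an existing Drone feature; it swaps the inline password for Drone's `from_secret` mechanism, and the function name is made up.

```python
DEFAULT_DRONE_YML = """\
kind: pipeline
name: default

steps:
- name: docker
  image: plugins/docker
  settings:
    registry: codeberg.org
    username: {username}
    password:
      from_secret: registry_password
    repo: {owner}/{repo}
    tags: latest
"""


def default_pipeline(owner, repo, username, files):
    """Return a fallback .drone.yml when the repo has a Dockerfile but
    no pipeline file; return None when the repo configures Drone itself
    or has nothing to build."""
    if ".drone.yml" in files or "Dockerfile" not in files:
        return None
    return DEFAULT_DRONE_YML.format(owner=owner, repo=repo, username=username)
```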
hw commented 2 months ago
Owner

@varshitbhat :

What about hosting drone.io and integrating Gitea with it?

Technically very appealing, indeed.

And strategically? For a startup the most common outcome is binary: success and getting bought out or gone bust. Both scenarios seem problematic in the future … even if the source code is public, which developer or community would take it over and continue to maintain it? Isn’t an organically grown developer community preferable?

Jenkins is old-fashioned but really great for single projects (have used this at scale and a lot in the past). Unfortunately it lacks the necessary built-in security measures, and as a mature project it might be somewhat hard to add them ex post (running arbitrary code on the build nodes, not necessarily isolated in VMs or containers)?

cc: @kolaente who brought this up elsewhere

hw commented 2 months ago
Owner

@momar :

The problem is that we’d then need a solution to set up an environment - the CI scripts for some of them might work best under Debian, and for others under Ubuntu.

This is actually pretty similar to docker or lxc containers. For provisioning tools like virt-install the environment is specified by command line arguments: a .ciconfig-parser would, after checking and sanitizing the input, pass the appropriate parameters to the tool.
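A minimal sketch of such a parser step: check and sanitize the values, then build a virt-install argv. The allowed-image table, paths and exact flag choices are illustrative assumptions, not a tested setup.

```python
import re

# assumption: prebuilt base images the CI admin has whitelisted
ALLOWED_IMAGES = {
    "debian10": "/var/lib/ci/images/debian10.qcow2",
    "ubuntu1904": "/var/lib/ci/images/ubuntu1904.qcow2",
}


def virt_install_args(name, image, vcpus, memory_mb):
    """Translate sanitized .ciconfig values into a virt-install argv."""
    if not re.fullmatch(r"[a-z0-9-]{1,32}", name):
        raise ValueError("unsafe VM name")
    if image not in ALLOWED_IMAGES:
        raise ValueError("unknown base image")
    return [
        "virt-install",
        "--name", name,
        "--vcpus", str(int(vcpus)),
        "--memory", str(int(memory_mb)),  # virt-install takes MiB
        "--disk", ALLOWED_IMAGES[image],
        "--import",           # boot the prebuilt image directly
        "--noautoconsole",
    ]
```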

For a native script, we’d have to think about isolation and a lot more (so, every build gets its own VM? how about caching of build steps? what about providing multiple distributions? and so on…)

A VM can be snapshotted and suspended/resumed. The overhead compared to user-space containers like Docker or LXC is mostly the kernel (nowadays still small in relation to build tools etc.).

On a second thought, docker containers can run arbitrary scripts as well (commonly exemplified in first-step tutorials), so the practical difference is maybe not that big, and both solutions are probably workable for projects.

What do you think is the best approach for UI-integration with Gitea? Do you have a commit-status-API in mind to render the results, or something else?


kolaente commented 2 months ago

IMHO we should avoid building our own CI as much as we can. Drone took a few years to get to the point where it is now, and it has a whole community behind it (nothing against the Codeberg community - it just is a huge effort to make). I would prefer a tighter integration of Drone with Gitea (and I think this is the general view among Gitea’s maintainers) instead of building our own thing.

We, as a global open source community (this includes Codeberg, Drone, Gitea), should stand together and not double-spend resources on things which are already solved in that community by reinventing the wheel. Instead, we should improve the tools we all use in a way everyone can profit.

I tweeted about this before: https://twitter.com/kolaente/status/1159588122914643968

Ok, so let me start off with a few things. It looks like you’ve put a ton of thought into this, and please don’t feel like I am dismissing it in any way. While I am one of the project leads of Gitea (for verification you can see the email in my profile), I am speaking in a personal capacity right now. I also have contributed some code to Drone (to aid in the integration between the two projects). There are also several others on the Gitea team who contribute to Drone (one even has merge access to the main Drone repo). I also have built an integration for Gitea with another CI (Buildkite).

With my background on this subject established, I agree with what @kolaente has said: we (Gitea) should focus on the core project, as building yet another CI with a small team is not doable, but we can instead work on better integration.

If you (and the Codeberg team) do decide to go ahead, that is still great news. As with Git forges (Gitea, GitLab, Pagure, etc.) there are many, but each has its own project goals. I caution against starting a new open-source project when your goals may already be met by other projects.

Maybe you could even build an interface between Gitea and Drone that parses your annotated Dockerfile and generates a Drone file, also handles the allowed number of builds and build minutes per user, and then passes it to Drone. I also have a half-built integration between the Docker registry and Gitea that I could open source, which may help with hosting Docker artifacts (I just need to find time to clean it up and fix some minor issues).
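If such an interface were built, a first pass could translate the `@required`/`@optional` annotations from the proposal above into Drone pipeline steps. A rough sketch, assuming the annotation syntax from the issue description; the emitted step shape follows Drone's docker pipeline format, and everything else (function names, default image) is hypothetical:

```python
import re

# Matches the proposed annotations: "# @required <name> {" / "# @optional <name> {"
ANNOT_OPEN = re.compile(r"#\s*@(required|optional)\s+(\S+)\s*\{")
ANNOT_CLOSE = re.compile(r"#\s*@\}")

def dockerfile_to_steps(dockerfile_text, image="golang:1"):
    """Translate annotated RUN blocks into a list of Drone-style steps."""
    steps, current = [], None
    for line in dockerfile_text.splitlines():
        m = ANNOT_OPEN.search(line)
        if m:
            current = {"name": m.group(2), "image": image, "commands": []}
            if m.group(1) == "optional":
                # "@optional" tasks should not fail the whole build;
                # Drone expresses this with failure: ignore
                current["failure"] = "ignore"
            continue
        if ANNOT_CLOSE.search(line):
            if current is not None:
                steps.append(current)
            current = None
            continue
        if current is not None and line.startswith("RUN "):
            current["commands"].append(line[len("RUN "):].strip())
    return steps

example = """\
# @required build {
RUN go build -o exampleapp .
# @}
# @optional test {
RUN go test
# @}
"""

for step in dockerfile_to_steps(example):
    print(step["name"])  # prints "build" then "test"
```

This only handles the block annotations; `@include` and plain Dockerfile instructions outside blocks would need extra handling.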

Some general thoughts about the longevity of Drone: I see Drone the company as already being successful, as it has paying customers that appear to be enough to support all of Drone's development (at least paying Brad a salary). These are personal guesses about Drone's financial stability, and I have no secret insight into anything. I think if Drone gets bought by a company, that will be OK, as there are many community contributors to Drone who can continue its development.

momar commented 2 months ago
Poster

Hm, you’re right on the one hand (no need to reinvent the wheel), but it’s still unclear to me how we should manage servers and resources with Drone.

We'll probably need a layer between Gitea and whatever CI is used in the end anyway, which would mean we could build plain .drone.yml files with Drone, and Dockerfiles with a wrapper around Drone (to keep their UI & Gitea integration).

For server resources, I see 3 possibilities:

  • provide everything through Codeberg, handle finances through the build management server
  • let users use their own Cloud, which would require supporting more cloud providers
  • use servers provided by third parties for free (e.g. other non-profits), but that would require trust, so every new server would have to be approved by a vote. It also probably wouldn't work out with our number of users, especially if we're growing exponentially

I won’t get to start working on this until October, so I’ll be open to more ideas.
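For the first option (providing everything through Codeberg), the build management server would need per-user accounting. A minimal sketch of a monthly build-minute quota check, where the class name and the 500-minute default are invented for illustration, not settled values:

```python
from dataclasses import dataclass

@dataclass
class UserQuota:
    """Hypothetical per-user build-minute accounting for one month."""
    used_minutes: int = 0
    monthly_limit: int = 500  # assumed default; would be configurable

    def can_build(self, estimated_minutes: int) -> bool:
        # Refuse to start a build that would push the user over the limit
        return self.used_minutes + estimated_minutes <= self.monthly_limit

    def record(self, minutes: int) -> None:
        # Called after a build finishes, with its actual duration
        self.used_minutes += minutes

quota = UserQuota()
quota.record(480)
print(quota.can_build(30))  # 480 + 30 > 500, so this prints False
```

The overall monthly budget limit from the requirements could be a second instance of the same check at the instance level.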


Perhaps you could reach out to Brad, as he has experience monitoring Drone use at scale. He supposedly has automated systems that monitor for abuse (lots of cryptominers, and people building custom personal Android ROMs). He also had to disable cron tasks.

As for users bringing their own cloud, Drone now has "ssh runners", which means users could use their own servers for builds. This also means you don't need to hand out your agent secret to potentially untrusted users. I know Docker can also limit the CPU/RAM allocated to each container; maybe Drone has that concept too, so that users don't consume 100% of resources.
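On the resource-limit point: Docker does support per-container caps via `docker run` flags, so a wrapper could enforce them regardless of whether Drone exposes the concept. A minimal illustration, where the image and build command are placeholders:

```shell
# Cap a build container at 2 CPUs, 2 GiB of RAM, and 256 processes
# so one job can't starve the host or fork-bomb it:
docker run --rm --cpus=2 --memory=2g --pids-limit=256 \
  golang:1 go build ./...
```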
