More System Modules #11

Closed
opened 1 year ago by JorwLNKwpH · 10 comments

Not a feature request to have everything at once :-), but it would be nice to have some more system information modules, which other popular status bars have. Or maybe a way to run custom modules with shell scripts, like how i3status-rust works.

  • CPU use
  • Memory use
  • Temperature
  • Disk space use
  • Uptime
  • More network options, like upload/download speed
  • Power use (watts)
  • Mail
dnkl commented 1 year ago
Owner

In general, all of those require polling, something I don't want in my own bar setup (since it causes battery drain).

That doesn't mean they cannot, or should not, be added to yambar. But it is the reason why they aren't already implemented - I don't use them myself :)

Most of them are pretty much trivial to implement.

But, implementing a shell script backed custom module is probably the most efficient way to "implement" most of them. At least initially.

Now, since the "output" from modules isn't simply "printed" on the bar, the output of this custom module would have to be in some structured format that can be machine-parsed into tags and tag values.

Other than that it should be straightforward; the way I see it being implemented is a script that "pushes" data over stdout to the module, which translates the output to tags and updates the bar. I.e. the polling interval is controlled by the script itself (for example, by having the script execute a loop with a sleep at the end).
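A minimal sketch of that push model, assuming a line-oriented `name|type|value` output (the tag name, the data source, and the iteration count are illustrative; a concrete protocol is proposed further down in the thread):

```shell
#!/bin/sh
# Hypothetical sketch of the proposed design: the script, not the bar,
# controls the polling interval by looping with a sleep. Each iteration
# pushes one update over stdout. Reads Linux-specific /proc/loadavg.
emit_update() {
    load=$(cut -d ' ' -f 1 /proc/loadavg)   # example data source
    printf 'load|string|%s\n\n' "$load"     # trailing blank line ends the update
}

for _ in 1 2 3; do   # bounded here so the sketch terminates; a real script loops forever
    emit_update
    sleep 1
done
```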

dnkl commented 1 year ago
Owner

Here's a suggestion for a script protocol. The custom module will run a script specified by the user in `config.yml`, and read the script's stdout.

The script can send an atomic update with one or more tag/value pairs by writing a tag/value pair per line, and ending the atomic update with a single empty line.

When yambar sees the empty line, it will replace **all** old tag/value pairs with the new ones. Tags present in the previous update, but not in the latest one, will be removed.

As usual, the user configures (in `config.yml`) _how_ the tags are to be rendered.

The format of a tag/value line is:

```
<name>|<type>|<value>
```

Thus, a CPU meter script, sending a single `percent` tag in each update, might write something like the following. Note that it must **always** end each update with an empty line:

```
percent|range:0-100|45

percent|range:0-100|23

percent|range:0-100|100
```
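A shell sketch of such a CPU meter (the sampling of `/proc/stat` and the helper names are illustrative assumptions; only the output format comes from the proposal above):

```shell
#!/bin/sh
# Hypothetical CPU meter following the proposed protocol: emit one
# "percent" tag per update, terminated by a blank line. Linux-specific.
cpu_sample() {
    # print total and idle jiffies from the aggregate "cpu" line
    awk '/^cpu / { idle = $5; total = 0
                   for (i = 2; i <= NF; i++) total += $i
                   print total, idle; exit }' /proc/stat
}

emit_percent() {
    set -- $(cpu_sample); t1=$1 i1=$2
    sleep 1
    set -- $(cpu_sample); t2=$1 i2=$2
    dt=$((t2 - t1)); [ "$dt" -gt 0 ] || dt=1   # guard against zero delta
    printf 'percent|range:0-100|%d\n\n' $(( 100 * (dt - (i2 - i1)) / dt ))
}

emit_percent
```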

A disk usage script might do something like:

```
/dev/sda1-usage|int|12345000
/dev/sda1-size|int|100000000000
/dev/sda2-usage|int|82384
/dev/sda2-size|int|200000000000

/dev/sda1-usage|int|12345999
/dev/sda1-size|int|100000000000
/dev/sda2-usage|int|82384
/dev/sda2-size|int|200000000000
```
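A sketch of one way such a script could gather its numbers (assumptions: tags are keyed by mount point rather than device node, and POSIX `df -P` supplies the values; the output lines follow the format above):

```shell
#!/bin/sh
# Hypothetical disk-usage script for the proposed protocol. Tag names
# are keyed by mount point here (an assumption; the example above keys
# them by device node). `df -P` reports 1K blocks, converted to bytes.
emit_disk() {
    mnt=$1
    set -- $(df -P "$mnt" | awk 'NR == 2 { print $3, $2 }')  # used, total
    printf '%s-usage|int|%s\n' "$mnt" $(( $1 * 1024 ))
    printf '%s-size|int|%s\n\n' "$mnt" $(( $2 * 1024 ))
}

emit_disk /
```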

dnkl commented 1 year ago
Owner

(A simpler version of the disk usage script would take a disk name as input, and **only** emit _usage_ and _size_ for that disk. The user would then instantiate multiple "custom" scripts in their config - one for each disk of interest.)

JorwLNKwpH commented 1 year ago
Poster

Thanks, this seems like a good design. I have a few questions though.

How would yambar behave if a script stopped working? The module would stop updating, but the bar wouldn't know that there is a problem. Essentially, the tag would be frozen. Is there a better way to deal with this?

Would yambar ignore bogus input? If the module sends bad input, the tag would not show anything. Should the bar send a debug error and silently ignore the module? Will it start updating again if the module receives "good" output again? Or should the module fail hard?

dnkl commented 1 year ago
Owner

> How would yambar behave if a script stopped working? The module would stop updating, but the bar wouldn't know that there is a problem. Essentially, the tag would be frozen. Is there a better way to deal with this?

Yes, the script module can detect a broken pipe, or when the script subprocess dies. It can then terminate itself.

The hard part is visualizing this to the user. Logging to syslog/stderr is easy. But, how do we show this in the bar? Currently, the modules are the ones holding references to the particles, and the modules "instantiate" the particles into something the bar can render.

If a module dies, the bar would have to short-circuit this code path and insert e.g. a label with a "module died" error message.

An easier, perhaps interim, solution would be to handle this in the script module specifically instead of trying to handle generically in the bar. The script module could then, probably, return a label-particle with an error message after the script has died.

> Would yambar ignore bogus input? If the module sends bad input, the tag would not show anything. Should the bar send a debug error and silently ignore the module? Will it start updating again if the module receives "good" output again? Or should the module fail hard?

I think there are two slightly different cases to consider. One is a buggy script. In that case, I think the best thing to do is log an error and terminate the script, and the script module.

The other case is when the script doesn't sanitize its input correctly, i.e. it reads an unaccounted-for value and fails to format it properly.

If the script module would silently ignore these, you'd probably never notice them, and the script would never get fixed. So at the very least it should log an error. But, since you probably won't check the log unless you have a reason to, I'd say it should terminate the script (and render the "fail" label as described above).

JorwLNKwpH commented 1 year ago
Poster

> Yes, the script module can detect a broken pipe, or when the script subprocess dies. It can then terminate itself.

Would this still work if you wanted to run a script from systemd-timers or with [snooze](https://github.com/leahneukirchen/snooze)? Or if the script just hangs?

> But, how do we show this in the bar? Currently, the modules are the ones holding references to the particles, and the modules "instantiate" the particles into something the bar can render.

I definitely agree that this is something desirable to solve. However, it does seem like it requires some thought and runs parallel to this ticket, so the interim solution is ok for now.

> The other case is when the script doesn't sanitize its input correctly, i.e. it reads an unaccounted-for value and fails to format it properly.

Yes, this is the case that would be nice to handle. A completely broken script should hopefully be easy to detect and fix, but a script that mostly seems to work will be much harder to fix if you can't tell that there is anything wrong with it.

dnkl commented 1 year ago
Owner

> Would this still work if you wanted to run a script from systemd-timers or with snooze? Or if the script just hangs?

In the solution I'm envisioning, each script module in the yambar configuration runs, and "owns", the script process.

To be able to start a script from somewhere else, each script module would have to be more like a server, listening on e.g. a unix socket.

A hanging script on the other hand is hard to detect, unless you enforce a heartbeat of some kind. Of course, we could add an attribute to the script module that says "fail if script doesn't produce any output for X seconds".
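From the script's side, such a heartbeat could look like the following sketch (the `uptime` tag and the one-second interval are hypothetical): the script re-emits its current tags on a fixed short interval even when nothing changed, so a "no output for X seconds" watchdog would never trip for a healthy script.

```shell
#!/bin/sh
# Hypothetical heartbeat sketch for the proposed protocol: re-emit the
# current tags every second, even when the value is unchanged, so a
# "fail if the script produces no output for X seconds" check holds.
heartbeat() {
    n=$1   # iteration count; bounded here so the sketch terminates
    while [ "$n" -gt 0 ]; do
        printf 'uptime|string|%s\n\n' "$(cut -d ' ' -f 1 /proc/uptime)"
        n=$((n - 1))
        sleep 1
    done
}

heartbeat 3
```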

JorwLNKwpH commented 1 year ago
Poster

> In the solution I'm envisioning, each script module in the yambar configuration runs, and "owns", the script process. To be able to start a script from somewhere else, each script module would have to be more like a server, listening on e.g. a unix socket.

Oh okay, then scratch that idea :)
I am assuming that using a socket would be more resource-intensive.

> A hanging script on the other hand is hard to detect, unless you enforce a heartbeat of some kind. Of course, we could add an attribute to the script module that says "fail if script doesn't produce any output for X seconds".

Is there an advantage to having the script control the update timing, rather than the bar's module controlling the update interval?

dnkl commented 1 year ago
Owner

> I am assuming that using a socket would be more resource-intensive.

I think resource-wise it's all the same; in one case you use slightly more RAM to hold a process in memory, while in the other case you waste more CPU cycles to start up a new process and connect to a socket each time you want to update the bar.

I'd say it's more a design issue than a "minimize resource usage" issue.

Technically, there's nothing preventing us from supporting both variants; either as two different modes in the same script module, or as two separate script modules. I think it makes most sense to start with the first variant (i.e. no socket).

> Is there an advantage to doing it this way and having the script control the update timing? Rather than the bar's module controlling the update interval.

A script module with "heartbeat" disabled means no unnecessary polling; the script updates the bar when there is something to update it with. This fits well with yambar's design goals. It also means there's no "lag" in what the bar is displaying; the script (can) update the bar as soon as new data is available.

Even with heartbeat enabled, you still get lag-free behavior, and furthermore, the heartbeat can be set to a much much lower interval than what you'd want to use as a polling interval.

dnkl commented 1 year ago
Owner

#14 (https://codeberg.org/dnkl/yambar/pulls/14) is a work-in-progress PR that implements the new script module.

It works well enough to start testing scripts against it.

dnkl added the enhancement label 1 year ago
dnkl referenced this issue from a commit 1 year ago
dnkl closed this issue 1 year ago