A fast MQTT dashboard application and rule engine framework written in C for Linux, Raspberry Pi, and Windows.



for MQTT Hyperdash V.1.02 (c) by Markus Hoffmann (concept study from 2010)

1. Abstract

Based on rule engines, an automation concept can be described which theoretically leads to a fully autonomous ensemble of machines, all connected to each other via the MQTT framework.

The rule engines allow the creation of any level of abstraction on the raw data and thus reduce the complexity of the individual signals, as well as maintaining the actual/measured states the subsystems are supposed to be in. A strong distinction between set-point and actual (measured) values is essential here (actual value vs. target value). Both are supported by the MQTT-Hyperdash naming convention.

The framework is highly decentralized and allows for maximally autonomous control of large devices. Furthermore, it can be expanded with the help of machine learning. We see applications in large facility operating systems, where expert knowledge can be absorbed and more and more operation modes can be established automatically. Many sudden errors can also be reacted to and circumvented automatically. Finally, it teaches the designers of all subsystems where additional sensors need to be implemented or installed, and where and when manual intervention is necessary.

2. Introduction

Thinking of modern machine learning concepts, like artificial neural networks and "deep learning", we think that they can be applied only if there is a suitable interface to the device which presents the data and the automation control in a more standardized manner, so that the learning techniques can be plugged onto the system. The neural networks always operate on input values and produce output values, which have to be integrated with the system to be automated. And there is currently no generic way to do this.

The automation concept consists of the following ingredients:

  • a decentralized parameter database,

  • rule engines,

  • actual value vs. target value distinction,

  • state transition heuristics,

  • path-length calculations, and

  • machine learning extensions.

Most of these building blocks are well established in other fields. We combine them to form a maximally autonomous automation framework for large devices with an unmanageable abundance of measured values and state parameters.

The concept allows the (human) expert knowledge on operating each subsystem, as well as the systems on each level, to be absorbed. Furthermore, it would be possible to improve this knowledge in the long term with machine learning techniques.


3. Automation

The automation concept presented here uses two distinct directions for data or information flow. One direction is meant to reduce the data complexity from raw data to higher levels, and the other direction (which moves the system into desired states) makes the system autonomous.

4. Reducing Complexity

Rule engines allow the creation of any level of abstraction on the (measured) raw data. They can reduce the complexity of the individual signals. Usually additional parameters are created to form the next level of abstraction. There can be several hierarchically ordered abstraction layers. They can be stacked on top of each other (and so further reduce the complexity of the raw data) or organized horizontally to provide different views of the same data. This not only allows more physical values to be derived, but also virtual states of the machine or its subsystems to be maintained and detected, usually by aggregating measurements and error bits of the hardware into more general classifications. In this way, the complexity is reduced in every layer of abstraction, so that at the top only very few but very meaningful states and main values show up.

A strong distinction between set-point and actual (measured) values is essential here (actual value vs. target value), as described above. Both must be supported by a parameter naming convention or the like.

In particular, rule engines are also suitable for generating status displays for a higher-level system from a set of other, more subordinate status parameters: e.g. the state "The power supply is switched on and ready" can be generated from a series of individual messages from the individual power supply units and current or power measurements. A well-designed system also allows a number of fault conditions to be identified on its own. For this purpose, the hardware usually already provides error bits that can be evaluated accordingly. (These error bits are therefore also of the class "measured values".)
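As a minimal sketch of such a rule, the following C fragment aggregates the error bits and current readings of a few hypothetical power supply units into one higher-level status parameter. All names and thresholds here are illustrative assumptions, not actual MQTT-Hyperdash code:

```c
/* Sketch: aggregate low-level PSU observables into one status parameter.
 * Names and thresholds are made up for illustration. */
#include <assert.h>

enum psu_status { PSU_OFF = 0, PSU_READY = 1, PSU_FAULT = 2 };

struct psu { int error_bits; double current_a; };

/* "The power supply is switched on and ready" holds only if no unit
 * reports an error bit and every unit delivers a plausible current. */
enum psu_status psu_aggregate(const struct psu *units, int n)
{
    int any_on = 0;
    for (int i = 0; i < n; i++) {
        if (units[i].error_bits != 0) return PSU_FAULT;
        if (units[i].current_a > 0.1) any_on = 1;
        else return PSU_OFF;   /* one unit idle: whole group not ready */
    }
    return any_on ? PSU_READY : PSU_OFF;
}
```

A rule engine triggered by any of these observables would republish the aggregated status as a new parameter, forming the next abstraction layer.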

A total status for a subsystem or even the entire area can be derived from the aggregated reports (with the help of the rule engines). This is an essential task of the control systems and helps the human operator to get an overview before having to deal with all details and individual messages for troubleshooting, for example. The prerequisite here, however, is that each individual component down to the hardware level must also generate adequate signals which permit such a complete determination of the state. The criteria for the design of (new) hardware components must be aligned accordingly.

Aggregating and reducing complexity with abstraction levels (middle layers) is already standard in many control systems. Automation needs this, but it is not really new, and it is not sufficient for real and full automation. We will focus on the missing parts in the following chapters.

5. Inverse Complexity

Pure state detection (based on measured values) can be realized in the way described above. But there must also be a way to change the state of a system in a targeted manner (e.g. switching on a previously switched-off power supply, or switching on an entire subsystem including its conditioning). This is the other direction, where we need to derive desired subsystem target states from a more global overall target state. In contrast to the path of measured values, from many raw values to fewer aggregated values, in this direction the complexity naturally increases. Also, there is no unique way to achieve a state; many sub-configurations would be possible. The task here is therefore to choose the optimal sub-configuration for a given overall state.

On a hardware level this means that the hardware makes sure that a set-point value is achieved and a target state is reached (like "on" and "current applied").

On higher levels, rule engines make sure that a more global state is broken down into lower-level subsystem states. Here, too, only set-point parameters would be involved. Breaking down states can be a really difficult task. It can happen that there are many ways to achieve a higher-level state and many configurations of the underlying subsystems. The process may involve dedicated sequences, decision trees, and the like. Also waiting times and feedback loops may play a role. Generally speaking, complexity increases.

Directed rule engines alone cannot map the processes and algorithms needed to really automate the system. Therefore we have to introduce another, extended class of rules, called {\bf Intentions}. Intentions can be implemented as rule engines with some additional special state parameters. However, we want to look at them from a different perspective first. Since this is the new part of the presented automation concept, we will go into more detail here for the rest of this paper.

6. Intentions for process automation

Let's define the terms state, complete system, exception, intention, and observable:

A state is an indicator which represents the status of a system. The state can be an (integer) number \(s\in \mathbb{N}_0 ; 0\le s < n\) which uniquely identifies the system's status. \(n\) is the number of distinct states the system can be in. The complete set of states of a system forms a partition. A state can be computationally represented by a parameter which gives the number corresponding to the state. We assume that every system has a mechanism (using rule engines) to measure its current state. In the simplest case the states could be "on", "off", "standby", "error", represented by \(s \in \{0,1,2,3\}\). But more useful states like "on and current value within operation boundaries" can also be thought of.

A complete system provides not only the functionality to detect the state it is in, but also the functionality to reach every state in a targeted manner. We assume that there is a set-point parameter which can be set to the state the system should be in. And the complete system will be able to perform actions to actually achieve the target state.

Of course this cannot be fully realized in practice, because there might be external conditions which force the system to go into an error state instead of the desired (e.g. "on") state. However, it is practicable that all relevant operating states can be reached under normal conditions, and that there are functions which allow the system to recover from error states. If the system reaches an error state which it cannot come out of by itself, this is called an exception. Practically, that would mean the system indicates that it is broken and that a manual/human intervention is needed to recover.

It is worth noting that this stands in the way of (complete) automation of the overall system. For this reason, as much expert knowledge as possible about troubleshooting, conditioning, operational sequences, etc. must be made available as a functionality. This happens on the lowest level (near the hardware) and on all levels above, up to the highest level, where ideally only one switch chooses between the states "on" and "off". So every system must be designed and improved such that it is as complete as possible.

In order to achieve this full automation, not only must rules be created that reliably recognize all states, but also regulations/procedures to achieve desired states. We will call these procedures "intentions". These are usually instructions to the subsystems, which occur in a specific order and vary depending on the state of the system that has already been reached.

The abundance of instructions for all components of the overall system to achieve a normal operating state can quickly become confusing and ultimately very inflexible. It is therefore important to create a system and an implementation framework in which the automation in a hierarchy can be carried out as context-locally as possible with individual, clear instructions.

Large procedures are to be factored into smaller sub-steps where possible; then an automatism can ultimately also find detours in the overall procedure if an originally planned route (due to e.g. the failure of a subsystem, or a sudden change in status at one point) can no longer be followed. Possibly, the higher-level status can still be achieved in this way, even though there is a deviation from the standard procedure.

With the appropriate formulation of the individual tasks, a large part of the compositions can already be done dynamically and therefore automatically. This goal is to be achieved by the concept presented here.

Observables are system parameters from which a state can be derived using a rule or a set of rules. This process is also called quantization of the multidimensional and continuous real state space.

The measured values stored in parameters, e.g. from an ADC card to which temperature sensors are connected, are observables for the thermal state of a device. This state can be, depending on the measured temperatures, e.g. "too cold", "correct", "too warm" or "critically hot".
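This quantization of a continuous observable into discrete states can be sketched in a few lines of C. The threshold values below are made up for illustration; only the state names come from the text:

```c
/* Sketch: quantize a continuous observable (a temperature reading in
 * degrees Celsius) into the discrete thermal states named above.
 * Thresholds are illustrative assumptions. */
#include <assert.h>

enum thermal_state { T_TOO_COLD, T_CORRECT, T_TOO_WARM, T_CRITICAL };

enum thermal_state thermal_classify(double celsius)
{
    if (celsius < 15.0)  return T_TOO_COLD;
    if (celsius <= 35.0) return T_CORRECT;
    if (celsius <= 60.0) return T_TOO_WARM;
    return T_CRITICAL;
}
```

The resulting state number would itself be published as a parameter and can then serve as an observable for higher-level rules.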

An observable can also be a state of another system on which that system depends. Ultimately, observables should be based on measured values and enable statements to be made about the condition of the device to be checked.

Expressions of desire of the operator, e.g. an entered numerical value, could also be regarded as an observable (looking at the human as a "device"), but this is not particularly useful in this concept. Instead we will refer to the desire of the operator as an intention.

7. States and State Detection

A (complete) set of \(n\) states is defined for each system (e.g. "on", "off" and "broken"). For this set, a status parameter (integer) is kept ready in the control system, and a rule engine maps (starting from a series of observables) to this parameter (see Figure State Detection). This ensures that the actual status of the system - based on the respective state set - is known at all times.

Figure: State Detection. Actual states (A,B,C,...,F,...) are derived from observables (from the parameter database) using a set of rules. The observables can also include a history which is stored in memory or is also available as a parameter in the parameter database. The state detection algorithm is triggered e.g. by a change of the value of any of its observables.

Determining meaningful states is a non-trivial matter and is usually done by the person who developed the system or who knows it best. The states can be defined in different ways, but it must always be ensured that the abstract states clearly reflect the operating state of the device. A state "undefined" is also a state in the strictest sense, but hardly anything can be derived from such a state, in particular nothing that leads to an automatic finding of a state appropriate to the function (e.g. "ready for operation").

It is also possible to define several sets of states that are each complete in themselves (i.e. each represents a partition of the entire state space), e.g. "on" and "off", as well as "ready for operation" and "faulty", but where states from one set may overlap with states from another. Each set of states must be reflected in its own parameter.

One interesting application of artificial intelligence can be seen here. State detection can of course also be performed with black-box algorithms, e.g. neural networks. Such algorithms performing "classification" have proven to be useful in image detection, and the author sees no reason why they would not be just as useful for state detection, using all observables as inputs for the AI algorithm. If no classic algorithm can be created for the state detection of a system and instead plenty of training data is available for training a neural network, this can be a way to follow.

However, for automation this approach has limitations, because with black-box state detection it would still not be clear how to reach a system's state in a targeted manner. There is no generic way to derive procedures (intentions) if it is not clear how the state detection works. But this might become a research topic.

7.1. State Changes and Transitions

A change of state can happen on its own, e.g. by the failure of a system, by reaching a temperature threshold or by the expiry of a preheating time or by other physical processes which are detected by the diagnostic system.

Secondly, these physical processes can also be triggered by the diverse control functions. In this case, an action has resulted in a transition from state A to state B. Each action, on the other hand, can in turn be given by an intention, whereby we understand an intention as the desired target state combined with a procedure that should lead to that target state from theoretical understanding (and hopefully also practically, if the system works as intended). Whether the target state is actually reached is not guaranteed (however, this should be the case if the system functions normally). This can be verified or falsified by observing the detected actual state, which should correspond to the intended state. If they are different, action needs to be taken.

Figure: Transition Table. Transition matrix from one state (initial) to another (final) and the various possible and impossible actions. The path to a desired final state can be found automatically.


The problem of automation can now be summed up quite simply, independently of the complexity of the system and regardless of the number of subsystems involved:


The following applies to all systems: if the system is not in the desired state, it will do nothing other than (permanently) try, by itself, to get into this state.


The system automatically waits for a (possibly system-external) condition to occur and then continues. If the state can be reached automatically, it will be. Otherwise, manual intervention is usually required (with a screwdriver). Then either the hardware is physically broken and has to be repaired or replaced, or there is no automatic procedure for this fault situation that can bypass or remedy the fault. This procedure would then have to be retrofitted.

No other case can occur in a system for which all relevant observables have been provided. So it should not happen that operating the system (manually, but with the functionality provided) only continues if external information (e.g. evaluating an oscilloscope image) is used to make a decision. If the latter is the case with a system, the design of the system in question has to be rethought and a new measurement signal may have to be installed so that automatic operation (only then) becomes possible.

7.2. Evolution

All in all, this process of continuous improvement, if resolved consistently, should ultimately lead to complete automation, with all of the knowledge about the operation of the individual components gradually being incorporated into the control system software framework (from which it can then be extracted, for example, for documentation purposes).

From the viewpoint of quality management (QM) and quality assurance, the standard formalism and process follow-up for improvement, fault recognition and repair could be applied here, too. At the very least, the framework makes documentation, error follow-up, fault statistics, etc. much simpler.

What the author considers important is a formal language (like a computer programming language, but specifically fitted to this particular automation concept without unnecessary overhead) in which the framework is defined. A language in which the implementation can be carried out as easily as possible. As many people as possible should be able to contribute. A large part of the process should already take place automatically in a standardized way according to defined rules (maybe according to QM standards). The knowledge then only has to be poured into simple formal rules that only describe formalized transitions or actions. In their entirety, these rules automatically allow the automation of the entire system, but with only local knowledge. The detailed knowledge of the lower-level subsystems should not be needed on the higher hierarchical levels.

Rules and intentions are closely related. In fact, an action associated with an intention is intended to reverse the rule engine graph. If a rule is a pure conversion formula, an inverse function can easily be defined if necessary. But most of the rules associated with state detection are only surjective (quantization) and no longer injective. A unique inverse function therefore cannot be specified. However, control and regulation tasks can usually still be solved by iteration, in which case an (optimal) initial state is selected and found from all possible ones that lead to the same final state. The author sees optimization algorithms and statistical and heuristic algorithms playing a central role here. These can not only be classical singular-value-decomposition inverse linear algebra functions, but also, where it makes sense and if the system can be trained, neural network algorithms and probably others evolving from AI. It is important that the presented framework is flexible enough to allow for both.

If the directed graph of the rules projects upwards from the observables to a state, the chained intentions represent a reversal of the direction, from top to bottom. This direction is usually more difficult to achieve, and the rules for it are also more complicated, since they must contain knowledge of how the system works and an expectation of its likely behavior (usually from a model or a (physical) theory). However, since these models can only access quantized information that is disturbed by the measurement accuracy, they have to make certain assumptions that are not guaranteed to apply, but probably do. There are therefore "good" and "bad" procedures, in the sense that the former are more likely to achieve the desired goal. This is the reason for the particular difficulty in automation.

7.2.1. Implementing Intentions

We will not stop here, but try to provide a way to implement intentions. There may also be other ways to achieve this, but we found a quite handy concept to do it without introducing too many fancy new algorithms. It is also important that one can understand how it works. We define a framework in which intentions can be realized in a quite formal way, but without giving up the flexibility to later incorporate more modern AI-like algorithms. We are not going to use neurons or a neural network, and "machine learning" is done in a controlled manner which involves human (expert) review. The main key parts are heuristics (defined by the human expert) and a generic path finding algorithm.

You can imagine a street map (the heuristics) in which all possible ways are defined to go from one state (location) to another. The decision which way to follow is delegated to an automatism, the path finding algorithm, like the route calculation in a navigation device. The map can be continuously modified or extended; it contains all present knowledge about all systems and how to operate them. The information can come from either a human expert or an artificial intelligence.

Note that, as with street maps, the topology (connections and individual streets, as well as street conditions) can be changed locally, and it is not necessary to consider any other part of the street system further away. This means the whole automation system can be worked on at different places, even at once, without much interference. This is one of the essential features of the concept; otherwise maintenance, improvement, repair etc. would hardly be possible. However, local changes may well have global impacts. In the street analogy this would mean traffic jams, closed roads, and unreachable places.

The following belongs to an "intention":

  1. two corresponding parameters: a) the detected state (actual state) \(s_a\) and corresponding b) the desired state (target state) \(s_t\),

  2. a representation of the transition matrix with entries about internal actions, combinations of intentions and prohibited transitions,

  3. a table with heuristic evaluation factors ("lengths"),

  4. optionally one or more rules \(R\) and

  5. optionally one or more internal actions with associated ratings.
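One possible C data layout for these ingredients might look as follows. The field names, the state count, and the action encoding are illustrative assumptions, not actual MQTT-Hyperdash structures:

```c
/* Sketch of a possible data layout for an intention, following the
 * ingredients listed above. All names and sizes are illustrative. */
#include <assert.h>

#define N_STATES 4

struct intention {
    int s_a;                            /* 1a) detected (actual) state  */
    int s_t;                            /* 1b) desired (target) state   */
    int action[N_STATES][N_STATES];     /* 2) transition matrix: action
                                              id, -1 marks a prohibited
                                              transition                */
    double length[N_STATES][N_STATES];  /* 3) heuristic lengths         */
};

/* The triggering rule R fires only while actual and target differ. */
int intention_needs_action(const struct intention *it)
{
    return it->s_a != it->s_t;
}
```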

<img src="images/intention.png"> Figure: How an intention works in the automation framework.

A rule \(R\) using the two state parameters as trigger inputs (and optionally any number of others) is triggered whenever either the currently detected state changes or the target state changes. The target state can be set externally, and the detected state is the result of a state detection rule engine which permanently computes the system's state from observables. If the detected state is equal to the target state, nothing needs to be done. If the states are different, then the rule will try to perform an action which brings the system one step closer to the desired state.

To do this, the rule selects a transition from the set of at most \(n\times(n-1)\) possible transitions which are arranged in a transition matrix. This can be a direct transition from state \(s_a\) to \(s_t\) if such an entry exists, or the first step of a chain of transitions arranged on a path from state \(s_a\) to \(s_t\), also visiting intermediate states.

Typically there are many options and therefore many ways from state \(s_a\) to \(s_t\), but they are not equally optimal. So the best path has to be chosen from all possible ones. To do this, the rule takes weight factors (lengths) into account to calculate the lengths of all paths and then chooses the shortest one.
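This path search can be sketched with a plain shortest-path algorithm (Dijkstra's, here) over the length matrix. This is only one possible realization under stated assumptions; the constant 65000 as a stand-in for infinity follows the convention described in the heuristics section below, and the demo matrix encodes the ON/STANDBY/OFF example discussed later:

```c
/* Sketch: given the length matrix of all transitions, find the next
 * transition on the shortest path from actual state s_a to target
 * state s_t. LEN_INF marks prohibited transitions. */
#include <assert.h>

#define N_STATES 4
#define LEN_INF  65000.0

/* Returns the first intermediate state on the shortest s_a -> s_t path,
 * s_a itself if nothing needs to be done, or -1 if no route exists. */
int next_step(const double len[N_STATES][N_STATES], int s_a, int s_t)
{
    double dist[N_STATES];
    int prev[N_STATES], done[N_STATES] = {0};
    for (int i = 0; i < N_STATES; i++) { dist[i] = LEN_INF; prev[i] = -1; }
    dist[s_a] = 0.0;
    for (int k = 0; k < N_STATES; k++) {
        int u = -1;                       /* pick nearest unfinished state */
        for (int i = 0; i < N_STATES; i++)
            if (!done[i] && (u < 0 || dist[i] < dist[u])) u = i;
        if (dist[u] >= LEN_INF) break;    /* remaining states unreachable */
        done[u] = 1;
        for (int v = 0; v < N_STATES; v++)
            if (len[u][v] < LEN_INF && dist[u] + len[u][v] < dist[v]) {
                dist[v] = dist[u] + len[u][v];
                prev[v] = u;
            }
    }
    if (s_a == s_t) return s_a;
    if (dist[s_t] >= LEN_INF) return -1;  /* "no route to destination" */
    int v = s_t;
    while (prev[v] != s_a) v = prev[v];
    return v;
}

/* Demo: states 0=ON, 1=STANDBY, 2=OFF, 3=ERROR. ON can reach OFF
 * directly (length 1) or via STANDBY (total length 2). */
int demo_next_step(int target)
{
    double len[N_STATES][N_STATES];
    for (int i = 0; i < N_STATES; i++)
        for (int j = 0; j < N_STATES; j++)
            len[i][j] = LEN_INF;
    len[0][2] = 1.0;                      /* ON -> OFF directly   */
    len[0][1] = 1.0;                      /* ON -> STANDBY        */
    len[1][2] = 1.0;                      /* STANDBY -> OFF       */
    return next_step(len, 0, target);
}
```

The direct ON-to-OFF route wins because its total length (1) is smaller than via STANDBY (2); an unreachable target such as ERROR yields -1.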

However, lengths of parts of the path can change, and some paths may not be possible at the moment (length \(=\infty\)), due to errors in the subsystems, etc. The whole topology may be very dynamic.

So the special rule for evaluating the cheapest or shortest paths needs to dynamically access the length information of all possible paths and this length information has to be permanently updated.

This sounds like a highly complex task, but keep in mind that the lengths are normal observables, they can be computed with regular rule machines distributed over the whole system in no time. Each subsystem provides such measurement rules that calculate the lengths from internal states and observables, maybe accessing other lengths from their subsystem as well in a recursive manner. Complexity is always dealt with locally.

We will briefly outline how this can be realized in practice later. First, let us recapitulate: these paths consist of a chain of transitions that should lead to the target state via any detour states. The first of these actions is then triggered and should bring the system into a state that is closer to the target. Then the rule is triggered again and the next step is carried out, until the goal is reached, in which case nothing more is done. If the target cannot be reached this way, something is fundamentally wrong and needs to be (manually) fixed. This can easily be detected by a permanent discrepancy between the target and actual state parameters, which could flag a warning. However, if state detection as well as length calculation work correctly, the faulty path would have marked itself with an infinite length so that it would not have been entered. Instead the intention would have detected this as a "There is no route to destination" error, which of course would need to be (manually) fixed as well.

Now let's look more closely at the individual actions to be performed. There are two basic types of actions, called internal and external.

An action can either be carried out within the system, e.g. if the system is directly connected to the hardware, or it is defined by a number of other intentions that affect other (subordinate) systems. In the first case, the action executes a procedure that does something locally (i.e. on the computer where it runs, interfacing the connected hardware). We want to call these internal actions. In the latter case, it is sufficient to trigger certain other intentions (whereby the order should not matter, since the intentions are expressed almost simultaneously). (Sequences have to be implemented in a different way, see a chapter below.)

7.2.2. Internal Actions

Every action, or each path of actions, which is followed by intentions, can be broken down into finer and finer actions, which at the bottom end always result in internal actions. These are then ultimately carried out by the servers of the hardware devices. The individual actions take place autonomously and, if necessary, simultaneously on each server/device. In order to find favorable ways out of the considerable number of different possible paths of a transition, a criterion has to be used which takes into account which of the routes is the shortest; that route is accordingly preferred if it is not blocked.

Suppose a system that is in the "ON" state is to change to the "OFF" state, and assume there are two ways to do this. First, it can change directly to the off state; secondly, it can go to the standby state and then to the off state. The latter route is obviously longer, and therefore the direct route should be followed.

Or another example: if several systems are intended to go to other states, the evaluation of this transition depends on how many systems are already in the desired state. Thus, for the transition of an exemplary system from the "not ready" state to the "all ready" state, all 300 subsystems must be in the "ready" state. The "length" of this route is certainly dependent on the number of subsystems that are already in the desired state.

Rules must therefore be found to evaluate these actions.
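For the "all ready" example above, one conceivable evaluation rule simply counts the subsystems that still have to change state; the function name and the linear weighting are illustrative assumptions:

```c
/* Sketch of a length rule for the "not ready" -> "all ready" transition:
 * the length shrinks with the number of subsystems already reporting
 * "ready". Names and the linear weighting are illustrative. */
#include <assert.h>

double all_ready_length(const int *sub_state, int n, int ready_value)
{
    int missing = 0;
    for (int i = 0; i < n; i++)
        if (sub_state[i] != ready_value) missing++;
    return (double)missing;   /* one unit step per subsystem still to go */
}
```

A length of 0 then means the overall state is already reached and nothing needs to be triggered.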

7.3. Action Heuristics

We first look at the internal actions, because they only depend on local states (on the hardware), which can be evaluated locally or which are even constant and hard-coded into the system.

The following approach is suggested here:

  1. Internal actions get a heuristically found evaluation factor ("length"), whereby a length of 0 means that the action does not need to be carried out because the state has already been reached, and it does no harm at all to perform the action any number of times.

    • A length of 1 means a normal step, e.g. a switching process with a duration of 100 ms, or an action that leads to resource consumption (wear and tear due to switching or similar), in the sense that switching may occur approx. once per minute without reducing the lifetime of the part to less than the total lifetime of the overall system. The reference for the unit step would be the normal function, normal lifetime and normal resource consumption of this action, if performed once.

    • A length less than 1 means the step is shorter or faster than a "normal step", or cheaper in terms of resources.

    • A length greater than 1 means that the action is more expensive, takes longer or cannot be carried out as often because it consumes more resources.

    • An infinite length means that the action is prohibited and must not be carried out. In practice, a high maximum value is used instead of infinity, e.g. 65000. Lengths greater than or equal to this value are then considered to be infinite. Length calculations that reach or exceed this value are canceled.

  2. The length of actions to be carried out in parallel is calculated according to:

\[l=w\sum_i l_i \quad,\]

where \(l_i\) are the lengths of the individual actions and \(w\) is a positive weight factor \(w>0\), which takes into account the fact that the lengths of an overall action need not necessarily be the sum of the lengths of the individual actions, e.g. setting multiple bits of the same hardware cannot consume additional resources because it happens simultaneously or because the entire register is always set anyway. In this case \(w=\frac{1}{n}\).

Individual intentions that do not lead to any action because the state has already been reached do not make any contribution; in this case the length calculation for the individual system will deliver \(l_i = 0\). If a prohibited action is involved, the overall action is also prohibited, namely if \(w \ge 1\).

In this way, "lengths" can be calculated (recursively) for all transitions.
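The parallel composition rule \(l = w\sum_i l_i\) above, including the finite stand-in for infinity and the prohibition convention for \(w \ge 1\), can be sketched as:

```c
/* Sketch of the parallel length composition l = w * sum(l_i), with
 * 65000 treated as infinity per the convention above. Function name
 * is illustrative. */
#include <assert.h>

#define LEN_INF 65000.0

double parallel_length(const double *l, int n, double w)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        /* a prohibited part makes the whole action prohibited if w >= 1 */
        if (l[i] >= LEN_INF && w >= 1.0) return LEN_INF;
        sum += l[i];
    }
    double total = w * sum;
    return total >= LEN_INF ? LEN_INF : total;   /* cap at "infinity" */
}
```

With \(w = 1/n\) this reproduces the case where \(n\) simultaneous register writes cost no more than one.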

This also includes external actions. To access the individual lengths of external actions, the corresponding length matrices of the subsystems involved need to be evaluated. This requires that each (sub-)system publishes its current length matrix. Appropriate rule machinery must take care of this.

The path with the smallest overall length is then the cheapest and thus the optimal one. So the intention triggers a transition which represents a step in this direction.

For further reference we are going to introduce some more terms to address specific types of intentions:

Elementary intentions are those that are only defined through (or using) internal actions and the distance/length matrix is fixed/constant and does not need to be calculated dynamically.

Sufficient intentions are those that are defined only through external actions. They can be located anywhere in the system and need not be implemented and run on any specific hardware. In consequence the whole set of rules can be generated automatically. This includes condition detection, distance matrix calculation and automatic path finding of the intention. The advantage is that only a few lines of formal description suffice for a full implementation of these intentions, which makes them simple to create, simple to maintain and easy to read and understand (for documentation purposes).

Free parameters are those that are not in any rule for determining a state that is part of an intention. Rules whose inputs only consist of free parameters produce free parameters as outputs. User inputs can also be free parameters, e.g. a temperature set-point.

Free states are those that are not part of an intention. A free state is represented by a free (integer) parameter. You cannot get to a free state in a targeted manner. It is only suitable for diagnosis.

Common pitfalls the system designer has to consider while designing intentions are:

  • The order of the two individual actions (inside the same intention) must not matter.

  • Changing non-free parameters can trigger uncontrolled changes of state within other intentions. In this case the action is not internal, even if it looks internal at first glance. This must be avoided (through careful planning), otherwise contradictions arise in the rules and the automation becomes unstable.

Intention with autonomous path length calculation: rule R_2 always knows the path lengths from the current state to all other reachable states. In the form of a distance matrix, these are also made known to all other intentions as parameters, which in turn recalculate their own lengths accordingly. Rule R_1 is used to determine the current state, and rule R_3 monitors changes in the target state and in the actual state and, if necessary, applies a step from the shortest path toward the target state.

The complete concept of the intention framework is illustrated in fig. Intention Framework.

8. No sequences

We want to outline how one can avoid sequences and still achieve the same result.

The implementation of procedures in so-called sequences, i.e. chronologically ordered instructions, which are processed sequentially (i.e. one after the other), seems to be more suitable and easier to implement at least for some tasks than the definition of many extra states with corresponding dependencies.

However, a well designed implementation of a sequence must always take into account the risk that certain instructions from the sequence are not executed correctly. After each step in the sequence, one must actually first carefully check whether the desired action was carried out without errors. If not, the sequence is usually not allowed to continue and would either have to be terminated or to run in one of a number of branches that take account of the error that occurred and, if possible, return to the actual sequence. It is hardly possible to catch all possible errors in this way, so the sequence will very likely miss an error, and the machine will not end up in the intended state, but in some other, uncontrolled state (because, for example, the sequence simply continued even though an error in a sub-step was overlooked). A complex error analysis procedure is then required to find out which state that is.

In short: For small and reliable steps, a sequence can make sense in terms of simplicity, clarity and quick implementation. Larger sequences would not allow good automation.

Small mini-sequences make sense, for example, in the way they can appear in the definition of rules and intentions. In the case of the rules, however, they should mainly be used to implement algorithms, i.e. pure computing processes, where only one step should be calculated in iterative processes. The mini-sequence must logically be atomic in the sense that its execution appears to take no time, that no intermediate (error) states can occur, and that no waiting, polling or blocking may occur. Sequences in rule definitions should therefore not be used to query and set parameters outside those agreed in the set of input and output parameters, although this is not explicitly prohibited and can in some cases make sense. For intentions, the consideration only applies to the "internal" actions anyway, since by definition all others cannot be sequenced, but may trigger additional rules and intentions. In the case of the internal actions, it must therefore also be ensured that a possible success or failure of the sequence can later be detected on the basis of suitable measured states. In this sense the internal actions can be seen as "fire and forget" sequences.

Relatively quickly, however, a sequence becomes susceptible to incomplete implementation and the resulting uncontrolled changes in state, which, after a certain level of complexity (which is reached quite quickly), becomes uncontrollable and therefore unreliable. This is contrary to the desired robustness.

This concept is therefore intended to limit the use of sequences to the absolutely necessary extent and only allow them where the instructions can be safely executed or where there is (yet) no diagnosis for the detection of errors anyway. In all other cases the use of sequences should be avoided; instead, a chain of carefully designed intermediate states should be implemented. The sequence can then be broken up into individual intentions which, under normal conditions, perform actions one after another, visiting all intermediate states.

A consistent implementation using the intentions and rules automatically checks the statuses achieved and finds its way independently. The sequence of actions to be processed one after the other is realized by the dependencies of the states, which ensures that an action is only triggered when the target state of the preceding action has actually been reached. Otherwise, another action is automatically triggered, which tries to correct the previous error and then continues normally in the chain of actions. The sequence then arises from the continuous path of the states.
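The chain-of-states idea can be sketched in C as follows. The enum values and the `next_state` helper are purely illustrative names invented for this document: each rule only fires once the measured state confirms that the previous target was actually reached, so the "sequence" emerges from the state chain itself.

```c
/* Hypothetical state chain replacing a sequence (names are illustrative).
   The order of the enum values defines the chain of intermediate states. */
typedef enum { S_IDLE, S_VALVE_OPEN, S_PUMP_ON, S_RUNNING } state_t;

/* One intention step: move the measured (actual) state one link along the
   chain toward the target state. If a step fails, the actual state does not
   advance and the same corrective step is simply triggered again. */
static state_t next_state(state_t actual, state_t target)
{
    if (actual == target)
        return actual;        /* target reached, nothing to do */
    if (actual < target)
        return actual + 1;    /* one step forward along the chain */
    return actual - 1;        /* one step back toward the target */
}
```

Because every step is re-derived from the actual state, an overlooked error cannot silently propagate: the intention keeps issuing the step for the state it really measures.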

8.1. Avoiding contradictions in the rule machinery

What happens if design errors are (accidentally) built into the system? It is often very difficult to predict all global consequences of local design changes, and even the integration of a new subsystem can lead to unforeseen behavior. So it is unavoidable that contradictions are eventually introduced into the rule and intention machinery. We will see that robustness against such faults is inherent in the concept; you get it for free.

An interesting feature of the automation concept presented here is that contradictions automatically prohibit themselves. That is, states that are involved in such contradictions cannot be reached automatically.

However, this does not mean that no contradictions can be constructed. For example, two rules can form a cycle that causes the parameters involved to oscillate and thus show unstable behavior. Great care must therefore always be taken when cycles are used in the rules.

Contradictions in intentions can also appear in another form: an action of one intention can demand a state which causes a second intention to change, which in turn demands a different state from the first.

This can also form cycles. Cycles of this type, however, are already noticeable in the autonomous length calculation and lead to the lengths of the paths concerned growing with every iterative step and diverging. Eventually a maximum length is reached where the process stops. However, maximum length means that this path becomes a forbidden path. In this way, all paths that lead to conflicting states can be prohibited. These problems are then exposed through a permanent discrepancy between target and actual states of some intentions, where the problem can be easily localized and hopefully remedied. Debugging will be easy.
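The divergence argument can be made concrete with a tiny C sketch (purely illustrative; `LEN_INF` mirrors the 65000 cutoff introduced earlier): each recalculation pass adds the cycle's own cost, so the length grows until it reaches the cutoff and the path prohibits itself.

```c
#define LEN_INF 65000  /* lengths >= LEN_INF count as infinite */

/* Illustrative only: a cyclic dependency adds its own cost on every
   recalculation pass, so the length diverges until it reaches LEN_INF
   and the affected path becomes prohibited. */
static int iterate_cycle_length(int step_cost, int max_iter)
{
    int len = step_cost;
    while (max_iter-- > 0) {
        len += step_cost;          /* length grows with each pass */
        if (len >= LEN_INF)
            return LEN_INF;        /* the cycle prohibits itself */
    }
    return len;                    /* converged within max_iter passes */
}
```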

8.2. Consequences

From the described automation concept there are some (quite desirable) implications which should be considered:

  1. Non-converging (unstable) cycles in the set of rules eliminate themselves since their length adds up to infinity, and these paths are then prohibited.

  2. Distance computing will consume a great deal of computing power, since every change in the state of the subordinate systems, if it leads to a change in length there, triggers the recalculation of the distance matrices of all super-ordinate systems. But since everything happens in parallel, the load for each single computer is low.

8.3. A path finder

One more thought on the algorithm, which picks the shortest path in an intention.

The shortest path is to be found from a series of possible paths from an initial state A to a final state B.

Given a transition matrix with weights (or lengths), standard algorithms from graph theory can be used. For example, Dijkstra’s algorithm. Our problem presents itself as an edge-weighted graph, in which the edge weights have to be calculated according to the actual state and possibly recursively from the states and actions of the external intentions involved or through the autonomous path length calculation.

Dijkstra’s algorithm (after its inventor Edsger W. Dijkstra) is used to calculate a shortest path between a start node and any node in an edge-weighted graph. The weights must not be negative.

For disconnected, undirected graphs, the distance to certain nodes can also be infinite if no path exists between the start node and this node.

The same applies to directed, not strongly connected graphs. These requirements apply to our problem.

The algorithm works as follows: the unvisited node with the shortest known distance is successively added to a result set and removed from the set of nodes still to be processed; the distances of its neighbors are then updated.
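A minimal C sketch of Dijkstra's algorithm on a small adjacency matrix may look like this. `N`, `INF` and the matrix layout are assumptions for illustration; in the framework the edge weights would come from the (possibly recursively computed) distance matrices of the intentions involved.

```c
#define N   5      /* number of states (illustrative) */
#define INF 65000  /* "infinite" length: edge absent / path prohibited */

/* Shortest path length from src to dst; g[i][j] == INF means no edge.
   Classic Dijkstra: repeatedly finalize the nearest unvisited node. */
static int dijkstra(int g[N][N], int src, int dst)
{
    int dist[N], done[N] = {0};
    for (int i = 0; i < N; i++)
        dist[i] = (i == src) ? 0 : INF;
    for (int iter = 0; iter < N; iter++) {
        int u = -1;
        for (int i = 0; i < N; i++)          /* pick nearest unvisited node */
            if (!done[i] && (u < 0 || dist[i] < dist[u]))
                u = i;
        if (dist[u] >= INF)
            break;                           /* remaining nodes unreachable */
        done[u] = 1;
        for (int v = 0; v < N; v++)          /* relax the edges leaving u */
            if (g[u][v] < INF && dist[u] + g[u][v] < dist[v])
                dist[v] = dist[u] + g[u][v];
    }
    return dist[dst];
}
```

Note that the INF convention matches the 65000 cutoff used for prohibited actions: a forbidden transition is simply a missing edge.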

Route planners are a prominent example where this algorithm can be used. The graph here represents the road network that connects different points, and we are looking for the shortest route between two of them. Dijkstra’s algorithm is also used on the Internet as the routing algorithm in OSPF (Open Shortest Path First), a routing protocol used in computer networks.

An alternative algorithm for finding the shortest paths, which is based on Bellman’s principle of optimality, is the Floyd–Warshall algorithm. The principle of optimality states that if the shortest path from A to C leads via B, then the partial path from A to B must also be the shortest path between A and B.
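For completeness, a C sketch of Floyd–Warshall (again with illustrative `N` and `INF`): it computes the shortest path between every pair of nodes at once, which corresponds to filling in a complete distance matrix rather than answering a single query.

```c
#define N   4      /* number of states (illustrative) */
#define INF 65000  /* "infinite" length: edge absent / path prohibited */

/* Floyd–Warshall: after the call, d[i][j] is the shortest path length from
   i to j. Bellman's principle: the shortest i->j route via k is the shortest
   i->k route plus the shortest k->j route. */
static void floyd_warshall(int d[N][N])
{
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
}
```

Since the intention framework publishes whole distance matrices anyway, an all-pairs algorithm like this can be the more natural fit than repeated single-source runs.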

9. Conclusion

The described concept for full automation can be used to automate a big and complex machine to a maximal degree.

A lot of design decisions would have to be made, like parameter naming conventions and probably a scripting language to describe rules and intentions in a convenient fashion, so that after a very short learning time nearly everybody involved in the system can participate and add new procedures, rules and intentions to the framework.

The concept leaves enough room to also include black-box algorithms found in modern AI. It unifies the whole environment so that, for the first time, a full automation looks possible.