1. Introduction
Systems of linear equations are useful for approximate analysis of vector addition systems, or Petri nets. For instance, relaxing the semantics of Petri nets so that the configurations along a run are not required to be nonnegative yields the so-called state equation of a Petri net, which is a system of linear equations with a nonnegative-integer restriction on solutions. This is equivalent to integer linear programming, a well-known NP-complete problem (Karp21NPcompleteProblems, ). If the nonnegative-integer restriction is further relaxed to a nonnegative-rational one (or a nonnegative-real one), we get a weaker but computationally more tractable approximation, equivalent to linear programming and solvable in polynomial time. We refer to (SilvaTC96, )
for an exhaustive overview of linear-algebraic and integer-linear-programming techniques in the analysis of Petri nets; the usefulness of these techniques is confirmed by multiple applications including, for instance, recently proposed efficient tools for the coverability problem of Petri nets
(GeffroyLS16, ; BlondinFHH16, ).

Motivations. A starting point for this paper is an extension of the model of Petri nets, or vector addition systems, with data (LNORW08, ; HLLLST2016, ). This is a powerful extension of the model, which significantly enhances its expressive power but also increases the complexity of analysis. In the case of unordered data (a countable set of data values that can be tested for equality only), the coverability problem is decidable (in nonelementary complexity) (LNORW08, ) but the decidability status of the reachability problem remains open. In the case of ordered data, the coverability problem is still decidable while reachability is undecidable. (Petri nets with ordered data are equivalent to timed Petri nets, as shown in (rrlsv1023, ).) One can also consider other data domains, and the coverability problem remains decidable as long as the data domain is homogeneous (Las16, ) (not to be confused with homogeneous systems of linear equations), but always in nonelementary complexity. In view of these high complexities, a natural need arises for efficient overapproximations.
A configuration of a Petri net with data is a nonnegative integer data vector, i.e., a function that maps data values to nonnegative-integer vectors, and maps only finitely many data values to a nonzero vector. In the search for efficient overapproximations of Petri nets with data, a natural question appears: can linear-algebra techniques be generalised so that the role of vectors is played by data vectors? In the case of unordered data, this question was addressed in (HLT2017LICS, ), where the first promising results have been shown; namely, nonnegative-integer solvability of linear equations over an unordered data domain is NP-complete. Thus, for unordered data, the problem remains within the same complexity class as its plain (dataless) counterpart. The same question for the second most natural data domain, i.e., ordered data, seems to be even more important; ordered data enables modelling features like fresh name creation or time dependencies.
Contributions. In this paper we make a further step and investigate linear equations with ordered data, for which we fully characterise the complexity of the solvability problem. Firstly, we show that nonnegative-integer solvability of linear equations is computationally equivalent (up to an exponential blowup) to the reachability problem for plain Petri nets (or vector addition systems). This high complexity is surprising, and contrasts with NP-completeness for unordered data vectors. Secondly, we prove, also surprisingly, that the complexity of the solvability problem drops back to polynomial time when the nonnegative-integer restriction on solutions is relaxed to a nonnegative-rational, integer, or rational one. Thirdly, we offer a conceptual contribution and notice that systems of linear equations with (unordered or) ordered data are a special case of systems of linear equations which are infinite but finite up to an automorphism of the data domain. This recalls the setting of sets with atoms (atombook, ; TMatoms, ; locfin, ), with the data domain being a parameter, and the notion of orbit-finiteness relaxing the classical notion of finiteness.
Outline. In Section 2 we introduce the setting we work in, and formulate our results. The rest of the paper is devoted to proofs. First, in Section 3 we provide a lower bound for the nonnegative-integer solvability problem, by a reduction from the VAS reachability problem. Then, in Section 4 we suitably reformulate our problem in terms of multihistograms, which are matrices satisfying a certain combinatorial property. This reformulation is used in Section 5 to provide a reduction from the nonnegative-integer solvability problem to the reachability problem of vector addition systems, thus proving decidability of our problem. Finally, in Section 6 we investigate various relaxations of the nonnegative-integer restriction on solutions and work out a polynomial-time decision procedure in each case. In the concluding Section 7 we sketch a generalised setting of orbit-finite systems of linear equations.
2. Vector addition systems and linear equations
In this section we introduce the setting of linear equations with data, and formulate our results. For a gentle introduction to the setting, we start by recalling classical linear equations.
Let $\mathbb{Q}$ denote the set of rationals, and let $\mathbb{Q}_{\geq 0}$, $\mathbb{Z}$ and $\mathbb{N}$ denote the subsets of nonnegative rationals, integers, and nonnegative integers, respectively. Classical linear equations are of the form

$a_1 x_1 + a_2 x_2 + \dots + a_n x_n \;=\; b$
where $x_1, \dots, x_n$ are variables (unknowns), and $a_1, \dots, a_n, b$ are rational coefficients. For a system of such equations over the same variables $x_1, \dots, x_n$, a solution is a vector $\mathbf{u} \in \mathbb{Q}^n$ such that the valuation $x_i \mapsto \mathbf{u}_i$ satisfies all equations in the system. In the sequel we are most often interested in nonnegative-integer solutions $\mathbf{u} \in \mathbb{N}^n$, but one may also consider solution domains other than $\mathbb{N}$. It is well known that the nonnegative-integer solvability problem ($\mathbb{N}$-solvability problem) of linear equations, i.e. the question whether a system has a solution in $\mathbb{N}^n$, is NP-complete (taming, ). The complexity remains the same for other natural variants of this problem, for instance for inequalities instead of equations (a.k.a. integer linear programming). On the other hand, for $X \in \{\mathbb{Z}, \mathbb{Q}, \mathbb{Q}_{\geq 0}\}$, the $X$-solvability problem, i.e., the question whether a system has a solution in $X^n$, is decidable in polynomial time.
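As a small illustration of the $\mathbb{N}$-solvability problem, the following sketch (hypothetical helper name; a naive bounded search, not an NP decision procedure) looks for a nonnegative-integer solution of a tiny system with entries up to a given bound:

```python
from itertools import product

def n_solvable_bounded(A, b, bound):
    """Naively search for x in {0..bound}^n with A @ x == b.

    A is a list of rows (each a list of integer coefficients), b the
    right-hand sides; only a sketch for tiny systems, since the search
    space has (bound+1)**n candidates.
    """
    n = len(A[0])
    for x in product(range(bound + 1), repeat=n):
        if all(sum(a * xi for a, xi in zip(row, x)) == rhs
               for row, rhs in zip(A, b)):
            return x
    return None

# 3*x1 + 2*x2 = 12 and x1 + x2 = 5 have the solution x1 = 2, x2 = 3
print(n_solvable_bounded([[3, 2], [1, 1]], [12, 5], bound=12))  # (2, 3)
```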
Remark (integer coefficients). A system of linear equations with rational coefficients can be transformed in polynomial time into a system of linear equations with integer coefficients, while preserving the set of solutions. Thus from now on we allow only integer coefficients in linear equations.
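The transformation behind the remark is simply clearing denominators; a minimal sketch (hypothetical helper name), scaling each equation by the least common multiple of all denominators:

```python
from fractions import Fraction
from math import lcm

def clear_denominators(equation):
    """Scale one equation a_1 x_1 + ... + a_n x_n = b with rational
    coefficients by the lcm of all denominators, yielding integer
    coefficients with exactly the same solution set."""
    coeffs, rhs = equation
    denoms = [Fraction(c).denominator for c in coeffs] + [Fraction(rhs).denominator]
    m = lcm(*denoms)
    return [int(Fraction(c) * m) for c in coeffs], int(Fraction(rhs) * m)

# (1/2) x1 + (2/3) x2 = 5/6  becomes  3 x1 + 4 x2 = 5
print(clear_denominators(([Fraction(1, 2), Fraction(2, 3)], Fraction(5, 6))))
```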
The $\mathbb{N}$-solvability problem is equivalently formulated as follows: for a given finite set of coefficient vectors $\mathbf{a}_1, \dots, \mathbf{a}_k$ and a target vector $\mathbf{b}$ (we use bold fonts to distinguish vectors from other elements), check whether $\mathbf{b}$ is an $\mathbb{N}$-sum of $\mathbf{a}_1, \dots, \mathbf{a}_k$, i.e., a sum of the following form

(1)  $\mathbf{b} \;=\; n_1 \mathbf{a}_1 + n_2 \mathbf{a}_2 + \dots + n_k \mathbf{a}_k$

for some $n_1, \dots, n_k \in \mathbb{N}$. The number of coordinates of the vectors corresponds to the number of equations and is called the dimension.
Linear equations may serve as an overapproximation of the reachability set of a Petri net, or equivalently, of a vector addition system – we prefer to work with the latter model. A vector addition system (VAS) is defined, similarly as above, by a finite set of integer vectors together with two nonnegative vectors, the initial one and the final one. The finite set determines a transition relation between configurations, which are nonnegative integer vectors: there is a transition from one configuration to another if their difference belongs to the set. The VAS reachability problem asks whether the final configuration is reachable from the initial one by a sequence of transitions. It is important to stress that intermediate configurations are required to be nonnegative. In other words, the reachability problem asks whether there is a sequence of transitions (called a run) such that
it starts in the initial configuration, ends in the final one, and all partial sums stay nonnegative; $\mathbf{0}$ denotes a zero vector (its length will always be clear from the context). The problem is decidable (mayr81, ; kosaraju82, ) and EXPSPACE-hard (Lipton76, ), and nothing more is known about its complexity except for the cubic-Ackermann upper bound of (demystifying, ). For a given VAS, a necessary condition for reachability is that the difference of the final and initial configurations is an $\mathbb{N}$-sum of the transition vectors, which is equivalent to solvability of a system of linear equations, called (in the case of Petri nets) the state equation. For further details we refer the reader to an exhaustive overview of linear-algebraic approximations for Petri nets (SilvaTC96, ), where both the nonnegative-integer and the nonnegative-rational solvability problems are considered.
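The VAS semantics just described can be sketched as a bounded search (hypothetical helper name; the coordinate cap makes the search finite, so this is only an illustration, not the much harder general decision procedure):

```python
from collections import deque

def vas_reachable(vectors, init, final, max_coord=10):
    """Bounded breadth-first search over VAS configurations.

    A transition adds some v in `vectors` to the current configuration,
    and every intermediate configuration must stay nonnegative.
    """
    init, final = tuple(init), tuple(final)
    seen, queue = {init}, deque([init])
    while queue:
        c = queue.popleft()
        if c == final:
            return True
        for v in vectors:
            nxt = tuple(ci + vi for ci, vi in zip(c, v))
            if all(0 <= x <= max_coord for x in nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# (1,0) -> (0,2) via the single transition (-1,2)
print(vas_reachable([(-1, 2), (1, -1)], (1, 0), (0, 2)))  # True
```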
2.1. Vector addition systems and linear equations, with ordered data
The model of VAS, and linear equations, can be naturally extended with data. In this paper we assume that the data domain is a countable set ordered by a dense total order with no minimal nor maximal element. Thus, up to isomorphism, the data domain is the set of rational numbers with the natural ordering. Its elements we call data values. In the sequel we use order-preserving permutations of the data domain (called data permutations in short), i.e. bijections $\pi$ such that $\alpha < \beta$ implies $\pi(\alpha) < \pi(\beta)$.
A data vector is a function from data values to rational vectors whose support, i.e. the set of data values mapped to a nonzero vector, is finite (similarly as for vectors, we use bold fonts to distinguish data vectors from other elements). Vector addition is lifted to data vectors pointwise. A data vector is nonnegative if all its values are nonnegative vectors, and integer if all its values are integer vectors.
Writing $\mathbf{v} \circ \pi$ for function composition, we see that $\mathbf{v} \circ \pi$ is a data vector for any data vector $\mathbf{v}$ and any order-preserving data permutation $\pi$. For a set of data vectors we consider its closure under composition with data permutations.
A data vector $\mathbf{v}$ is said to be an $\mathbb{N}$-sum of a finite set of data vectors if there are $\mathbf{v}_1, \dots, \mathbf{v}_m$ in the set, not necessarily pairwise different, and data permutations $\pi_1, \dots, \pi_m$ such that $\mathbf{v} = \mathbf{v}_1 \circ \pi_1 + \dots + \mathbf{v}_m \circ \pi_m$. In the generalisations of the classical $\mathbb{N}$-solvability problem, to be defined now, we allow as input only integer data vectors (cf. Remark 2):
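Data vectors can be represented concretely as finite maps; the following sketch (dicts keyed by data values, hypothetical helper names) shows pointwise addition with support kept finite, and the action of a data permutation:

```python
def dv_add(u, v):
    """Pointwise addition of two data vectors (dict: data value -> tuple)."""
    out = dict(u)
    for a, vec in v.items():
        cur = out.get(a, (0,) * len(vec))
        s = tuple(x + y for x, y in zip(cur, vec))
        if any(s):
            out[a] = s
        else:
            out.pop(a, None)   # keep the support minimal
    return out

def dv_permute(v, pi):
    """Apply a data permutation pi: the result maps pi(a) to v(a)."""
    return {pi(a): vec for a, vec in v.items()}

u = {1: (1, 0), 2: (0, -1)}
v = {2: (0, 1), 3: (2, 2)}
print(dv_add(u, v))                       # {1: (1, 0), 3: (2, 2)}
print(dv_permute(u, lambda a: a + 10))    # {11: (1, 0), 12: (0, -1)}
```

Note that `a + 10` is order-preserving on the rationals, as required of data permutations.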
Input: a finite set of integer data vectors and an integer data vector. Question: is the latter an $\mathbb{N}$-sum of the former?
In the special case when the supports of the target and of all vectors in the given set are singletons, the problem is just $\mathbb{N}$-solvability of linear equations, and is thus trivially NP-hard. As the first main result, we prove the following interreducibility:
Theorem 2.1.
The $\mathbb{N}$-solvability problem with ordered data and the VAS reachability problem are interreducible, with an exponential blowup.
Our setting generalises the setting of unordered data, where the data domain is not ordered, and hence data permutations are arbitrary bijections of the data domain. In the case of unordered data the $\mathbb{N}$-solvability problem is NP-complete, as shown in (HLT2017LICS, ). The increase of complexity caused by the order in data is thus remarkable.
Similarly as linear equations in the dataless setting, the $\mathbb{N}$-solvability problem may be used as an overapproximation of reachability in vector addition systems with ordered data, which are defined exactly as ordinary VAS but in terms of data vectors instead of ordinary vectors. A VAS with ordered data consists of a finite set of integer data vectors, and initial and final nonnegative integer data vectors. The configurations are nonnegative integer data vectors, and the finite set induces a transition relation between configurations: there is a transition from one configuration to another if their difference is a permuted copy of a vector in the set. The reachability problem asks whether the final configuration is reachable from the initial one by a sequence of transitions; it is undecidable (LNORW08, ). (The decidability status of the reachability problem for VAS with unordered data is unknown.) As far as reachability is concerned, VAS with (un)ordered data are equivalent to Petri nets with (un)ordered data (HLLLST2016, ).
The $\mathbb{N}$-solvability problem is easily generalised to other domains of solutions. To this end we introduce scalar multiplication: for a rational $c$ and a data vector $\mathbf{v}$, the data vector $c \cdot \mathbf{v}$ maps each data value $\alpha$ to $c \cdot \mathbf{v}(\alpha)$. For $X \subseteq \mathbb{Q}$, a data vector is said to be an $X$-sum of a finite set of data vectors if it is a sum of permuted copies of vectors from the set, each multiplied by a coefficient from $X$ (cf. (1)).
This leads to the following version of the solvability problem, parametrised by the choice of solution domain $X$:
Input: a finite set of integer data vectors and an integer data vector. Question: is the latter an $X$-sum of the former?
The $\mathbb{N}$-solvability problem is a particular case, for $X = \mathbb{N}$. Our second main result is the following:
Theorem 2.2.
For any $X \in \{\mathbb{Z}, \mathbb{Q}, \mathbb{Q}_{\geq 0}\}$, the $X$-solvability problem is in PTime.
For $X \in \{\mathbb{Z}, \mathbb{Q}\}$, the above theorem is a direct consequence of a more general fact, where $\mathbb{Z}$ or $\mathbb{Q}$ is replaced by an arbitrary commutative ring, under the proviso that data vectors are defined in a more general way, as finitely supported functions into vectors over that ring. With this more general notion, we prove that the solvability problem reduces in polynomial time to the solvability of linear equations with coefficients from the ring (cf. Theorem 6.6 in Section 6.2).
The case $X = \mathbb{Q}_{\geq 0}$ in Theorem 2.2 is more involved but of particular interest, as it recalls continuous Petri nets (serge, ; sergecompl, ), where fractional firing of a transition is allowed, and leads to a similarly elegant theory and efficient algorithms based on solvability of linear equations. Moreover, faced with the high complexity of Theorem 2.1, it is expected that Theorem 2.2 may become a cornerstone of linear-algebraic techniques for VAS with ordered data.
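To give a flavour of the polynomial-time cases of Theorem 2.2, here is a sketch (hypothetical helper name) deciding rational solvability of a dataless system by Gaussian elimination over exact fractions:

```python
from fractions import Fraction

def q_solvable(A, b):
    """Decide whether A x = b has a rational solution, via Gaussian
    elimination over exact fractions (polynomial time)."""
    rows = [[Fraction(x) for x in row] + [Fraction(r)]
            for row, r in zip(A, b)]
    n = len(A[0])
    pivot_row = 0
    for col in range(n):
        piv = next((r for r in range(pivot_row, len(rows))
                    if rows[r][col] != 0), None)
        if piv is None:
            continue
        rows[pivot_row], rows[piv] = rows[piv], rows[pivot_row]
        p = rows[pivot_row]
        for r in range(len(rows)):
            if r != pivot_row and rows[r][col] != 0:
                f = rows[r][col] / p[col]
                rows[r] = [x - f * y for x, y in zip(rows[r], p)]
        pivot_row += 1
    # inconsistent iff some row reads "0 = nonzero"
    return not any(all(x == 0 for x in row[:-1]) and row[-1] != 0
                   for row in rows)

print(q_solvable([[2, 4], [1, 2]], [6, 3]))  # True  (one variable stays free)
print(q_solvable([[2, 4], [1, 2]], [6, 4]))  # False (a "0 = 1" row appears)
```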
3. Lower bound for the $\mathbb{N}$-solvability problem
In this section, all data vectors are silently assumed to be integer data vectors. We are going to show a reduction from the VAS reachability problem to the $\mathbb{N}$-solvability problem. Fix a VAS, given by a finite set of integer vectors together with an initial and a final configuration. We are going to define a finite set of data vectors and a target data vector such that the following conditions are equivalent:

1. the final configuration is reachable from the initial one in the VAS;

2. the target data vector is an $\mathbb{N}$-sum of the defined set.
W.l.o.g. we assume that the initial configuration is $\mathbf{0}$.
We need some auxiliary notation. First, note that every integer vector $\mathbf{v}$ is uniquely presented as a difference $\mathbf{v} = \mathbf{v}^{+} - \mathbf{v}^{-}$ of two nonnegative vectors with disjoint supports, defined as follows: $\mathbf{v}^{+}(i) = \max(\mathbf{v}(i), 0)$ and $\mathbf{v}^{-}(i) = \max(-\mathbf{v}(i), 0)$.
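The positive/negative decomposition is a one-liner in code (hypothetical helper name):

```python
def pos_neg(v):
    """Split an integer vector v into (v_plus, v_minus): the unique pair
    of nonnegative vectors with disjoint supports s.t. v = v_plus - v_minus."""
    v_plus = tuple(max(x, 0) for x in v)
    v_minus = tuple(max(-x, 0) for x in v)
    return v_plus, v_minus

print(pos_neg((3, -2, 0, 1)))  # ((3, 0, 0, 1), (0, 2, 0, 0))
```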
For a nonnegative vector $\mathbf{v}$, by a data spread of $\mathbf{v}$ we mean any nonnegative integer data vector $\mathbf{w}$ such that

$\sum_{\alpha} \mathbf{w}(\alpha) \;=\; \mathbf{v}.$

In words, for every coordinate $i$, the value $\mathbf{v}(i)$ is spread among the values $\mathbf{w}(\alpha)(i)$, over all data values $\alpha$; clearly, $\mathbf{w}$ is finitely supported.
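A data spread is easy to verify mechanically; a minimal sketch (hypothetical helper name, data vectors as dicts of tuples):

```python
def is_data_spread(w, v):
    """Check that data vector w (dict: data value -> tuple) is a data
    spread of the nonnegative vector v: the values of w are nonnegative
    and sum up, coordinatewise, to v."""
    if any(x < 0 for vec in w.values() for x in vec):
        return False
    total = tuple(sum(col) for col in zip(*w.values())) if w else (0,) * len(v)
    return total == tuple(v)

# spread v = (2, 1) over data values 0.5 and 0.7
print(is_data_spread({0.5: (1, 1), 0.7: (1, 0)}, (2, 1)))  # True
```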
The rough idea of the reduction is to simulate every transition vector $\mathbf{v}$ by a data spread of $\mathbf{v}$ such that, intuitively, all positive numbers in $\mathbf{v}$ use larger data values than all negative ones. By a data realisation of a vector $\mathbf{v}$ we mean any data vector of the form $\mathbf{w}_{+} - \mathbf{w}_{-}$, where the data vector $\mathbf{w}_{+}$ is a data spread of $\mathbf{v}^{+}$, the data vector $\mathbf{w}_{-}$ is a data spread of $\mathbf{v}^{-}$, and every element of the support of $\mathbf{w}_{-}$ is smaller than every element of the support of $\mathbf{w}_{+}$. Intuitively, the effect of a data realisation is like the effect of $\mathbf{v}$, but additionally the data values involved are increased. Clearly, a nonzero vector has infinitely many different data realisations; on the other hand, there are only finitely many of them up to data permutation. For each vector of the VAS, let us fix a set of data realisations containing representatives up to data permutation. The cardinality of this set is exponential with respect to the size of the VAS.
Now we are ready to define the input to the $\mathbb{N}$-solvability problem: the finite set collects the representative data realisations of all the vectors of the VAS, and as the target vector we take some arbitrary data spread of the final configuration (recall that the initial configuration is $\mathbf{0}$).
It remains to prove the equivalence of conditions 1. and 2. First, 1. easily implies 2., as every run of the VAS can be transformed into an $\mathbb{N}$-sum that adds up to the target vector, using suitable data realisations of the vectors used in the run.
For the converse implication, suppose the target data vector is presented as a sum of data vectors, each a permuted copy of a data realisation of some vector of the VAS. We claim that this multiset of data vectors can be arranged into a sequence yielding a correct run of the VAS from the initial to the final configuration. For this purpose we define a binary relation of immediate consequence on the data vectors appearing in the sum: one data vector is an immediate consequence of another if the support of the negative part of the former intersects the support of the positive part of the latter. We observe that the reflexive-transitive closure of the immediate-consequence relation is a partial order. Indeed, antisymmetry follows from the fact that in every data vector in the sum, all elements of the support of the negative part are smaller than all elements of the support of the positive part. Extend this partial order arbitrarily to a total order, and suppose w.l.o.g. that the data vectors in the sum are listed in this order.
We should prove that the corresponding sequence of vectors of the VAS is a correct run from the initial to the final configuration. This will follow once we demonstrate that the sequence of data vectors, say $\mathbf{w}_1, \dots, \mathbf{w}_m$, is a correct run in the VAS with ordered data whose transition vectors are the chosen data realisations and whose initial configuration is $\mathbf{0}$; that is, we need to prove that every partial sum $\mathbf{w}_1 + \dots + \mathbf{w}_j$ is a nonnegative data vector. To this aim fix a data value $\alpha$ and a coordinate $i$, and consider the sequence of numbers

(2)  $\mathbf{w}_1(\alpha)(i), \quad (\mathbf{w}_1 + \mathbf{w}_2)(\alpha)(i), \quad \dots, \quad (\mathbf{w}_1 + \dots + \mathbf{w}_m)(\alpha)(i)$

appearing as the values of the consecutive partial sums at data value $\alpha$ and coordinate $i$. We know that the first and the last element of the sequence are nonnegative. Furthermore, by the definition of the ordering we know that the sequence (2) is first nondecreasing, and then nonincreasing. These conditions imply nonnegativity of all numbers in the sequence.
The exponential blowup in the reduction is caused only by the binary encoding of numbers in vector addition systems; it can be avoided if numbers are assumed to be encoded in unary or, equivalently, if instead of vector addition systems one uses counter machines without zero tests.
4. Histograms
The purpose of this section is to transform the $\mathbb{N}$-solvability problem into a more manageable form. As the first step, we eliminate data by rephrasing the problem in terms of matrices. Then, we distinguish matrices with a certain combinatorial property, called histograms, and use them to further simplify the problem. In Lemma 4.9 at the end of this section we provide a final characterisation of the problem, using multihistograms. The characterisation will be crucial for effectively solving the problem in Section 5.
In this section, all matrices are integer matrices, and all data vectors are integer data vectors.
Eliminating data. Rational matrices with $d$ rows and $n$ columns we call $d \times n$ matrices, and $d$ (resp. $n$) we call the row (resp. column) dimension of a $d \times n$ matrix. We are going to represent any data vector as a matrix as follows: if the support of the data vector is $\alpha_1 < \alpha_2 < \dots < \alpha_n$, the $j$-th column of the corresponding matrix is the value of the data vector at $\alpha_j$.
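The representation just described can be sketched as follows (hypothetical helper name; data vectors as dicts, matrices as lists of rows):

```python
def to_matrix(v):
    """Represent a data vector (dict: data value -> tuple of ints) as a
    matrix whose columns are the values on the sorted support."""
    support = sorted(v)
    d = len(next(iter(v.values())))
    # row i collects coordinate i over the support, in data order
    return [[v[a][i] for a in support] for i in range(d)]

# support {1.5, 2.5} with values (0,1) and (2,3): columns in data order
print(to_matrix({2.5: (2, 3), 1.5: (0, 1)}))  # [[0, 2], [1, 3]]
```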
A 0-extension of a $d \times n$ matrix is any $d \times m$ matrix, $m \geq n$, obtained from it by inserting $m - n$ additional zero columns at arbitrary positions. Thus the row dimension is preserved by 0-extension, while the column dimension may grow arbitrarily. The (infinite) set of all 0-extensions of a matrix contains, in particular, the matrix itself; for a set of matrices, we consider likewise the set of all 0-extensions of all matrices in the set.
Example 4.1.
For a data vector with a two-element support, here is the corresponding matrix and two of its exemplary 0-extensions:
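The 0-extensions of a given column dimension can be enumerated mechanically; a sketch with hypothetical helper names (each choice of positions for the original columns gives one 0-extension):

```python
from itertools import combinations

def zero_extensions(M, m):
    """Enumerate all 0-extensions of matrix M (list of rows) of column
    dimension m: insert m - n zero columns at arbitrary positions."""
    n = len(M[0])
    for cols in combinations(range(m), n):   # positions of original columns
        yield [[row[cols.index(j)] if j in cols else 0 for j in range(m)]
               for row in M]

for E in zero_extensions([[1, 2]], 3):
    print(E)
# [[1, 2, 0]]
# [[1, 0, 2]]
# [[0, 1, 2]]
```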
Below, whenever we add matrices we silently assume that they have the same row and column dimensions. For a finite set of matrices, we say that a matrix $M$ is a sum of 0-extensions of the set if

(3)  $M \;=\; E_1 + \dots + E_m$

for some matrices $E_1, \dots, E_m$, each a 0-extension of some matrix from the set, necessarily all of the same row and column dimension. We claim that the $\mathbb{N}$-solvability problem is equivalent to the question whether some 0-extension of a given matrix is a sum of 0-extensions of a given finite set of matrices.
Input: a finite set of matrices and a target matrix, all of the same row dimension. Question: is some 0-extension of the target matrix a sum of 0-extensions of the set?
Lemma 4.2.
The $\mathbb{N}$-solvability problem is polynomially equivalent to the above matrix problem.
Proof.
We describe the reduction of the $\mathbb{N}$-solvability problem to the matrix problem. (The opposite reduction is shown similarly and is omitted here.)
Given an instance of the former problem, consisting of a finite set of data vectors and a target data vector, we define as the instance of the latter one the corresponding matrices. We need to show that the target data vector is an $\mathbb{N}$-sum of the given set if, and only if some 0-extension of the target matrix is a sum of 0-extensions of the corresponding set of matrices. In one direction, suppose the target data vector is an $\mathbb{N}$-sum of the set, i.e.,
(4) 
and let $D$ be the union of the supports of all the data vectors involved (thus also necessarily including the support of the target). We will define a 0-extension of the target matrix and summand matrices, as required in (3), all of column dimension equal to the cardinality of $D$; thus their columns will correspond to the data values in $D$. As the 0-extension of the target matrix take the unique one of this column dimension whose nonzero columns are exactly those corresponding to the support of the target. Similarly, for each summand of the $\mathbb{N}$-sum take the unique 0-extension of its matrix whose nonzero columns correspond to the support of that summand. The so-defined matrices satisfy the equality (3).
In the other direction, suppose the equality (3) holds for some matrices, and let $m$ be their common column dimension. Choose arbitrary data values $\alpha_1 < \dots < \alpha_m$ to correspond to the columns, and define data permutations that map the support of each data vector to the data values corresponding to the nonzero columns of its 0-extension. One easily verifies that (4) holds, as required. ∎
From now on we concentrate on solving the matrix problem.
Histograms. We write briefly $[n]$ as a shorthand for $\{1, \dots, n\}$; in particular, $[0] = \emptyset$ by convention. An integer matrix we call nonnegative if it only contains nonnegative integers. Histograms, to be defined now, are an extension of the histograms of (HLT2017LICS, ) to ordered data.
Definition 4.3.
A nonnegative integer matrix $H$ we call a histogram if the following conditions are satisfied:

there is $k \in \mathbb{N}$ such that every row of $H$ sums up to $k$; the number $k$ is called the degree of $H$;

for every row index $i$ and column index $j$, the following inequality between partial sums of consecutive rows holds: $\sum_{l \leq j} H(i+1, l) \;\leq\; \sum_{l < j} H(i, l)$.
Note that the definition enforces that the column dimension of a histogram is at least as large as its row dimension. Indeed, the inequality forces the first $i - 1$ entries of the $i$-th row to be zero,
so for positive degree the last row needs a nonzero entry in a column whose index is at least the row dimension. Histograms of degree 1 we call simple histograms.
Example 4.4.
A histogram of degree 2 decomposed as a sum of two simple histograms:
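A checker for the histogram conditions can be sketched as follows (hypothetical helper name; assuming the two defining conditions are equal row sums, the degree, and a staircase bound relating partial sums of consecutive rows):

```python
def is_histogram(H):
    """Check the histogram conditions for a nonnegative integer matrix H,
    given as a list of rows: all rows sum up to the same degree, and the
    first j entries of row i+1 never sum to more than the first j-1
    entries of row i."""
    if any(x < 0 for row in H for x in row):
        return False
    if len({sum(row) for row in H}) != 1:
        return False
    n = len(H[0])
    for i in range(len(H) - 1):
        for j in range(n):
            if sum(H[i + 1][:j + 1]) > sum(H[i][:j]):
                return False
    return True

# a degree-2 histogram: each row sums to 2, rows shift to the right
print(is_histogram([[1, 1, 0], [0, 1, 1]]))  # True
print(is_histogram([[1, 1], [1, 1]]))        # False: row 2 starts too early
```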
The following combinatorial property of histograms will be crucial in the sequel:
Lemma 4.5.
A matrix is a histogram of degree $k$ if, and only if it is a sum of $k$ simple histograms.
Below, whenever we multiply matrices we silently assume that the column dimension of the first one is the same as the row dimension of the second one. Simple histograms are useful for characterising 0-extensions:
Lemma 4.6.
For matrices $M$ and $M'$, the matrix $M'$ is a 0-extension of $M$ if, and only if $M' = M \cdot H$ for a simple histogram $H$.
Example 4.7.
Recall the matrix from Example 4.1. One of its 0-extensions is presented as the multiplication of the matrix by a simple histogram as follows:
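The multiplication in Lemma 4.6 can be checked on a small instance (plain list-of-rows matrices; hypothetical helper name): a simple histogram has a single 1 per row, shifting strictly to the right, and multiplying by it scatters the columns of the matrix into the chosen positions.

```python
def matmul(A, B):
    """Plain matrix product of A (d x n) and B (n x m), lists of rows."""
    return [[sum(A[i][l] * B[l][j] for l in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

M = [[1, 2],
     [3, 4]]
# simple histogram scattering the two columns of M into columns 0 and 2
H = [[1, 0, 0],
     [0, 0, 1]]
print(matmul(M, H))  # [[1, 0, 2], [3, 0, 4]] -- a 0-extension of M
```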
Lemma 4.8.
For a matrix and a finite set of matrices of the same row dimension, the following conditions are equivalent:

the matrix is a sum of 0-extensions of the set;

the matrix can be presented as a sum of products $N \cdot H_N$, where $N$ ranges over the set and each $H_N$ is a histogram.
Proof.
In one direction, assume condition 1. holds, i.e.,
(5) 
for some 0-extensions of matrices from the set, and then apply Lemma 4.6 to present each of them as a product of a matrix from the set with a simple histogram. Grouping the products by the matrix from the set, we may now apply the "if" direction of Lemma 4.5 to get the histograms as required in condition 2. In the other direction, assume condition 2. holds, and use the "only if" direction of Lemma 4.5 to decompose every histogram involved into simple histograms. This yields
where all the matrices involved are simple histograms. Finally we apply Lemma 4.6 to get matrices satisfying (5). This completes the proof. ∎
Multihistograms. Using Lemma 4.8 we are now going to work out our final characterisation of the $\mathbb{N}$-solvability problem, as formulated in Lemma 4.9 below. We refer below to the $i$-th row and the $j$-th column of a matrix. For an indexed family of matrices, its $j$-th column is defined as the indexed family of the $j$-th columns of the respective matrices.
Fix an input of the problem: a matrix and a finite set of matrices, all of the same row dimension. Let $n$ stand for the column dimension of the target matrix. Suppose that some 0-extension of the target matrix and some family of histograms, one per matrix of the set, satisfy the equality of condition 2 of Lemma 4.8.
(The row dimension of each histogram is necessarily the column dimension of the corresponding matrix of the set.) Boiling down the equality to a single entry of the 0-extension we get a linear equation:
By grouping the equations concerning all entries of a single column of the 0-extension we get a system of as many linear equations as the common row dimension:
Therefore, the $j$-th column of the family of histograms, treated as a single column vector, is a nonnegative-integer solution of a system of linear equations of the form:
Observe that the system depends on the input, and on the corresponding column of the target matrix, but not on the position $j$. For succinctness we put
(6) 
to denote the set of all nonnegative-integer solutions of the respective system. Therefore, every column of the family of histograms belongs to the respective solution set.
Now recall the equality of Lemma 4.8. Treating the family of histograms as the sequence of its column vectors (we call this sequence the word of the family), we arrive at the condition that this sequence belongs to the following language:
(7) 
where $n$ denotes the column dimension of the target matrix. If this is the case, we say that the family is a multihistogram. As the reasoning above is reversible, we have thus shown:
Lemma 4.9.
The matrix problem is equivalent to the following one:
Input: a finite set of matrices, and a matrix, all of the same row dimension. Question: does there exist a multihistogram?
5. Upper bound for the $\mathbb{N}$-solvability problem
We reduce in this section the multihistogram problem (and hence also the $\mathbb{N}$-solvability problem, due to Lemmas 4.2 and 4.9) to the VAS reachability problem, with single exponential blowup, thus obtaining decidability. Fix in this section an input to the problem: a matrix of column dimension $n$ and a finite set of matrices, all of the same row dimension. We proceed in two steps: we start by proving an effective exponential bound on vectors appearing as columns of multihistograms; then we construct a VAS whose runs correspond to the words of exponentially bounded multihistograms. For measuring the complexity we assume that all numbers in the input are encoded in binary.
Exponentially bounded multihistograms. We first need to recall a characterisation of the nonnegative-integer solution sets of systems of linear equations as (effectively) exponentially bounded hybrid-linear sets, i.e., sets of the form $B + P^{*}$ for finite $B, P \subseteq \mathbb{N}^{n}$, where $n$ is the number of variables and $P^{*}$ stands for the set of all finite sums of vectors from $P$ (see e.g. (taming, ) (Prop. 2), (Dom91, ), (Pot1991, )). Below, a system of linear equations is determined by a matrix of coefficients and a column vector of right-hand sides; the corresponding homogeneous system has the same coefficients but a zero right-hand side. Again, for measuring the size of a system we assume that all numbers are encoded in binary.
Lemma 5.1 ((taming, ) Prop. 2).
The set of nonnegative-integer solutions of a system of linear equations is of the form $B + P^{*}$, where $B$ is a finite set of solutions and $P$ a finite set of solutions of the homogeneous system, such that all vectors in $B$ and $P$ are bounded exponentially w.r.t. the size of the system.
We will use Lemma 5.1 together with the following operation on multihistograms. A smear of a histogram is any nonnegative matrix obtained by replacing one column of the histogram by two adjacent columns that sum up to it. Here is an example:
Formally, a smear of is any nonnegative matrix satisfying:
One easily verifies that smear preserves the defining condition of histogram: A smear of a histogram is a histogram. Finally, a smear of a family of matrices is any indexed family of matrices obtained by applying a smear simultaneously to all matrices . We omit the index when it is irrelevant.
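The smear operation is straightforward to implement (hypothetical helper name; matrices as lists of rows, `split` giving the first of the two new columns):

```python
def smear(H, j, split):
    """Replace column j of matrix H by two adjacent columns that sum up
    to it; split[i] is the entry of the first new column in row i."""
    assert all(0 <= s <= row[j] for row, s in zip(H, split))
    return [row[:j] + [s, row[j] - s] + row[j + 1:]
            for row, s in zip(H, split)]

H = [[1, 1, 0],
     [0, 1, 1]]
print(smear(H, 1, [1, 0]))  # [[1, 1, 0, 0], [0, 0, 1, 1]]
```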
So prepared, we claim that every multihistogram can be transformed by a number of smears into a multihistogram containing only numbers exponentially bounded with respect to the input. Indeed, recall (7) and let
Take an arbitrary (say the $j$-th) column of the multihistogram (recall (6)), treated as a single column vector (of length the sum of the row dimensions of the histograms), and present it (using Lemma 5.1) as a sum
for some exponentially bounded base vector and period vectors. Apply smear repeatedly, replacing the $j$-th column by the base vector followed by the period vectors. As the base vector is a solution of the system and every period vector is a solution of the homogeneous system,
the so-obtained family still satisfies the defining condition. Using Claim 5 we deduce that the family is again a multihistogram. Repeating the same operation for every column yields the required exponential bound.
Construction of a VAS. Given the input we now construct a VAS whose runs correspond to the words of exponentially bounded multihistograms. Think of the VAS as reading (or nondeterministically guessing) consecutive column vectors (i.e., the word) of a potential multihistogram. The VAS has to check two conditions:

the word of the multihistogram belongs to the language (7);

the matrices of the family satisfy the histogram condition.
The first condition, under the exponential bound proved above, amounts to the membership in a regular language and can be imposed by a VAS in a standard way. The second condition is a conjunction of histogram conditions, and again the conjunction can be realised in a standard way. We thus focus, from now on, only on showing that a VAS can check that its input is a histogram.
To this aim it will be profitable to have the following characterisation of histograms. For an arbitrary matrix , define the matrix as:
where is an matrix which extends by the th zero column.
Lemma 5.2.
A nonnegative matrix is a histogram if, and only if is nonnegative and .
Proof.
Indeed, nonnegativeness of is equivalent to saying that
for every and ; moreover, is equivalent to saying that is the same for every . ∎
For technical convenience, we always extend with an additional very first zero column. Here is a formula relating two consecutive columns of the matrix and the corresponding two consecutive columns of the associated matrix,
(8) 
that will guide our construction.
We now define a VAS that reads consecutive columns of an exponentially bounded matrix and accepts if, and only if the matrix is a histogram. According to the convention above, all the counters are initially set to 0. One group of counters of the VAS is used as a buffer to temporarily store the input; the remaining counters ultimately store the current column. According to (8), the VAS obeys the following invariant: after a number of steps,
(9) 
Let us fix the exponential set of all column vectors that can appear in a histogram, as derived above. For every vector in this set, the VAS has a 'reading' transition that adds the vector to the buffer counters and subtracts it from the column counters (think of the equation (8)). Furthermore, for every coordinate the VAS has a 'moving' transition that subtracts 1 from the buffer counter and adds 1 to the corresponding column counter, i.e., moves a unit between the two counters (recall the corresponding summand in the equation (8)). Observe that these transitions preserve the invariant (9).
Relying on Lemma 5.2 we claim that the so-defined VAS reaches nontrivially (i.e., along a nonempty run) the zero configuration (all counters equal to 0) if, and only if its input, treated as a matrix, is a histogram with all entries belonging to the fixed exponential set. In one direction, the invariant (9) ensures that the associated matrix is nonnegative, and the final zero configuration ensures the remaining condition of Lemma 5.2. In the opposite direction, if a histogram is input, the VAS has a run ending in the zero configuration. The VAS is computable in exponential time (as the fixed set of column vectors is).
We have shown that, given the input, one can effectively build a VAS which admits reachability if, and only if there exists a multihistogram. The (exponential-blowup) reduction of the multihistogram problem to the VAS reachability problem is thus completed.
6. Polynomial-time decision procedures
In this section we prove Theorem 2.2, namely we provide polynomial-time decision procedures for the $X$-solvability problem, for $X \in \{\mathbb{Z}, \mathbb{Q}, \mathbb{Q}_{\geq 0}\}$. The most interesting case, $X = \mathbb{Q}_{\geq 0}$, is treated in Section 6.1. The remaining ones are in fact special cases of a more general result, shown in Section 6.2, that applies to an arbitrary commutative ring.
6.1. $\mathbb{Q}_{\geq 0}$-solvability
We start by noticing that the whole development of (multi)histograms in Section 4 is not at all specific to $\mathbb{N}$ and works equally well for $\mathbb{Q}_{\geq 0}$. It is enough to relax the definition of histogram: instead of a nonnegative integer matrix, let a histogram now be a nonnegative rational matrix satisfying exactly the same conditions as in Definition 4.3 in Section 4. In particular, the degree of a histogram is now a nonnegative rational. Accordingly, one considers a sum of 0-extensions multiplied by nonnegative rationals. The same relaxation as for histograms we apply to multihistograms, and in the definition of the latter (cf. the language (7) at the end of Section 4) we consider nonnegative-rational solutions of linear equations instead of nonnegative-integer ones. With these adaptations, the $\mathbb{Q}_{\geq 0}$-solvability problem is equivalent to the following decision problem (whenever a risk of confusion arises, we specify explicitly which matrices are integer ones, and which rational ones):
Input: a finite set of integer matrices, and an integer matrix, all of the same row dimension. Question: does there exist a rational multihistogram?
From now on we concentrate on the polynomial-time decision procedure for this problem. We proceed in two steps. First, we define homogeneous linear Petri nets, a variant of Petri nets generalising continuous PNs (serge, ), and show how to reduce their reachability problem to solvability of a slight generalisation of linear equations (linear equations with implications), following the approach of (sergecompl, ). Next, using a construction similar to that of Section 5, combined with the above characterisation of reachability, we encode the existence of a rational multihistogram as a system of linear equations with implications.
Homogeneous linear Petri nets. A homogeneous linear Petri net (homogeneous linear PN) is a finite set of homogeneous systems of linear equations, called transition rules, all over the same variables. (If non-homogeneous systems were allowed, the model would subsume ordinary Petri nets.) The transition rules determine a transition relation between configurations, which are nonnegative rational vectors, as follows: there is a transition if, for some transition rule and some nonnegative solution of it, the intermediate vector is still a configuration, and
(The subtracted and added vectors are projections of the solution onto the respective coordinates.) The reachability relation holds if there is a sequence of transitions (called a run) from one configuration to the other.
The class of continuous PNs (serge, ) is a subclass of homogeneous linear PNs, where every system of linear equations has a 1-dimensional solution set of the form $\{c \cdot \mathbf{v} : c \in \mathbb{Q}_{\geq 0}\}$, for some fixed vector $\mathbf{v}$.
Linear equations with implications. A system of linear equations with implications is a finite set of linear equations, all over the same variables, plus a finite set of implications of the form

$x > 0 \;\Longrightarrow\; y > 0$
where $x$ and $y$ are variables appearing in the linear equations. The solutions of such a system are defined as usual, but additionally they must satisfy all implications. The solvability problem asks if there is a nonnegative-rational solution. In (sergecompl, ) (Algorithm 2) it has been shown (within a different notation) how to solve the problem in polynomial time; another proof is derivable from (BlondinH17, ), where a polynomial-time fragment of existential FO($\mathbb{Q}$, +, <) has been identified that captures the problem:
Lemma 6.1 ((sergecompl, ; BlondinH17, )).
The solvability problem for linear equations with implications is decidable in polynomial time.
Due to (sergecompl, ), the reachability problem for continuous PNs reduces to solvability of linear equations with implications. We generalise this result and prove that the reachability relation of a homogeneous linear PN is effectively described by a system of linear equations with implications:
Lemma 6.2.
Given a homogeneous linear PN (with numbers encoded in binary) one can compute in polynomial time a system of linear equations with implications whose solution set, projected onto a subset of variables, describes the reachability relation of the net.
We return to the proof of this lemma after first using it in the decision procedure for our problem.
Polynomial-time decision procedure. Now we are ready to describe a decision procedure for the $\mathbb{Q}_{\geq 0}$-solvability problem, by a polynomial-time reduction to the solvability problem of linear equations with implications.
Fix an input to the problem: a finite set of integer matrices and an integer matrix, all of the same row dimension. Analogously as in (6) in Section 4, we put for succinctness
to denote the set of all nonnegative-rational solutions of the system of linear equations determined by the coefficient matrix
and the respective column vector. Recall the language (7):
(10) 
where is the column dimension of . Our aim is to check existence of an multihis