A Unified View on Semantic Information and Communication: A Probabilistic Logic Approach

01/16/2022
by Jinho Choi, et al., Deakin University

This article aims to provide a unified and technical approach to semantic information, communication, and their interplay through the lens of probabilistic logic. To this end, on top of the existing technical communication (TC) layer, we additionally introduce a semantic communication (SC) layer that exchanges logically meaningful clauses in knowledge bases. To make these SC and TC layers interact, we propose various measures based on the entropy of a clause in a knowledge base. These measures allow us to delineate various technical issues on SC such as a message selection problem for improving the knowledge at a receiver. Extending this, we showcase selected examples in which SC and TC layers interact with each other while taking into account constraints on physical channels.


I Introduction

Communication systems have been significantly transformed over the last decades, yet the foundation of the underlying information and communication technology has been consistently laid by Shannon's theory [1]. In his theory, information is characterized as randomness in variables. This allows one to calculate the fundamental limits and performance of communication, and to design efficient schemes for compression and transmission through noisy channels. Despite this success in the domain of technical communication (TC) since its introduction in 1948, the fact that Shannon's theory ignores the meaning of information [2] has long been tackled, particularly in the field of the philosophy of information. Meanwhile, overcoming this limitation of Shannon theory has recently been regarded as one of the key enablers for the upcoming sixth generation (6G) communication systems [3, 4, 5].

To fill this void, one needs to develop a theory of meaningful information, i.e., semantic information, as well as a novel communication technology based on semantic information, i.e., semantic communication (SC). For SC, existing works can be categorized into model-free methods leveraging machine learning [4], and model-based approaches that quantify semantic information [6] or specify the emergence of meanings through communication [5]. Our work falls into the latter category, in the hope of unifying our analysis of SC with the existing model-based analysis of TC.

In regard to semantic information, there are two different views in the philosophy of information. One angle focuses on measuring semantic similarity [7, 8], which often encourages an entirely new way to define meaningful information. For instance, each meaning can be identified as a group that is invariant to various nuisances (e.g., a so-called topos in category theory [9]), across which semantic similarity can be compared. The other end of the spectrum focuses on quantifying semantic uncertainty [10], in a similar way to Shannon theory, where message occurrences are counted to measure semantic-agnostic uncertainty. As an example, Shannon information can be extended to semantic information by leveraging the theory of inductive probability [11] (see also [12, 13]). This makes it possible to measure the likelihood of a sentence/clause being true using logical probability, upon which an SC system can be constructed [6]. Our view is aligned with the latter angle (i.e., like [6], a probabilistic logic approach is taken), while we focus on making SC interact with TC under Shannon theory.

In particular, in this paper, we consider an approach to SC based on the theory of probabilistic logic, which assigns probabilities to logical clauses [11, 14]. This allows us to make inferences over clauses and to quantify their truthfulness or provability in a probabilistic way. We showcase that the process of inference and its provability analysis can be performed using ProbLog (ProbLog tools are available at https://dtai.cs.kuleuven.be/problog), a practical logic-based probabilistic programming language that has been widely used in the field of symbolic artificial intelligence (AI).

Furthermore, based on [15, 10], we consider a two-layer SC system comprising: (i) the conventional TC layer, where data symbols can be transmitted without taking into account their meanings; and (ii) an SC layer, where one exploits semantic information that can be obtained from background knowledge or by updating a knowledge base. We demonstrate the interaction between the TC and SC layers with selected examples showing how SC improves the efficiency of TC, i.e., SC for TC, as well as how to design TC to achieve maximal gains in SC under limited communication resources, i.e., TC for SC. For simplicity and consistency throughout the paper, we confine ourselves to a simple scenario where a human user or an intelligent device stores logical clauses in a knowledge base and intends to improve the knowledge by seeking answers to a number of queries.

II Background

In this section, we briefly present a background on information theory [16] and probabilistic logic [14].

II-A Preliminaries for Information Theory

Although information theory originally started as a mathematical theory for communications, it has been applied in diverse fields ranging from biology to neuroscience. In information theory, random variables are used to represent symbols to be transmitted. The entropy of a random variable X, denoted by H(X), is the number of bits required to represent it, which is given by

H(X) = -E[log P(X)] = -Σ_x P(X = x) log P(X = x)

(taking log to base 2 in the rest of the paper) when X is a discrete random variable, where P(X = x) stands for the probability that X = x and E[·] represents the statistical expectation. The entropy of X can also be interpreted as the amount of information of X.

The joint entropy of X and Y is defined as H(X, Y) = -Σ_{x,y} P(X = x, Y = y) log P(X = x, Y = y), and the conditional entropy is given by H(Y | X) = H(X, Y) - H(X).

The mutual information between X and Y is defined as I(X; Y) = H(Y) - H(Y | X). It can also be shown that I(X; Y) = H(X) - H(X | Y). If X and Y are assumed to be the transmitted and received signals over a noisy channel, I(X; Y) can be seen as the number of bits that can be reliably transmitted over this channel. Thus, C = max_{P(x)} I(X; Y) is called the channel capacity, which is the maximum achievable transmission rate for a given channel characterized by the transition probability P(y | x).
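
To make these definitions concrete, the following Python sketch (not part of the original formulation) computes H(X), H(Y), H(Y | X), and I(X; Y) for a small, purely illustrative joint distribution; all numbers are made up.

import math

def entropy(probs):
    # Shannon entropy (in bits) of a distribution given as a list of probabilities
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution P(X = x, Y = y) over x, y in {0, 1}
P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

Px = [sum(p for (x, y), p in P.items() if x == xv) for xv in (0, 1)]
Py = [sum(p for (x, y), p in P.items() if y == yv) for yv in (0, 1)]

H_X = entropy(Px)                    # H(X)
H_Y = entropy(Py)                    # H(Y)
H_XY = entropy(list(P.values()))     # H(X, Y)
H_Y_given_X = H_XY - H_X             # H(Y | X) = H(X, Y) - H(X)
I_XY = H_Y - H_Y_given_X             # I(X; Y) = H(Y) - H(Y | X)

print(H_X, H_Y, H_Y_given_X, I_XY)   # 1.0, 1.0, ~0.72, ~0.28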

As pointed out in [15], information theory is not interested in the content or meaning of the symbols, but in quantifying the amount of information based on the frequency of their occurrence (i.e., the distribution of symbols as random variables). For example, H(X) measures the amount of information, or the number of bits needed to represent a symbol X, regardless of what X means. However, this does not mean that information theory is useless in dealing with the meaning or content of information, as will be discussed in this paper.

II-B Preliminaries for Probability and Logic

Following the theory of probabilistic logic, we assign probabilities to logical clauses, and carry out probabilistic reasoning using a practical logic programming language, ProbLog. In ProbLog, each logical clause (e.g., a rule or a fact) is annotated with a probability (by a programmer) that indicates the degree of (the programmer's) belief in the clause.

Precisely, for facts a and b, where a is assigned probability p_a and b is assigned probability p_b, the probability of the conjunction (a, b) is computed as the product of the probabilities, i.e., p_a × p_b, and that of the disjunction is computed as 1 - (1 - p_a)(1 - p_b), since a and b are treated as independent. Similar calculations apply to deductive reasoning: for example, suppose we have a rule of the form a → b (where "→" is "implies") annotated with probability p_r, and a with probability p_a; then we can infer b with probability p_r × p_a. In ProbLog, a clause a → b with probability p is written as p::b :- a, where ":-" can be read as "if".
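
As a minimal illustration (an added sketch with arbitrary probability values), the same arithmetic can be written out in Python:

p_a, p_b, p_r = 0.2, 0.3, 0.5            # illustrative probabilities for facts a, b and rule a -> b

p_and = p_a * p_b                        # P(a, b): product of independent facts
p_or = 1 - (1 - p_a) * (1 - p_b)         # P(a or b): noisy-or of independent facts
p_b_inferred = p_r * p_a                 # inferring b from "p_r::b :- a" together with "p_a::a"

print(p_and, p_or, p_b_inferred)         # 0.06, 0.44, 0.1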

In general, a knowledge base K is regarded as a set of clauses (where a clause is a rule or a fact). Given a rule of the form b :- a, the head of the rule is b and the body is a. Note that a fact b is basically a rule of the form b :- true, which can just be written as b. One can make inferences about the truth value of a query q, provided that q matches the head of a clause in K, with the outcome being the probability of q. If q does not match any head of a clause in K, K cannot say anything about q. We denote by P_K(q) the probability of q computed as the answer when q is posed as a query to the knowledge base K. We assume that inferences are made as defined by the semantics of ProbLog.

In addition, for the purposes of the discussion in this paper, we consider mostly the propositional logic fragment of ProbLog for simplicity (and if variables are involved in some examples, we assume that their values range over a finite set, i.e., they are just abbreviations for a finite set of propositional clauses, so that the set of queries that can be answered via a knowledge base is finite).

III Entropy and Knowledge Bases: Communicating Informative Messages

III-A Entropy of a Clause

We consider the entropy of a given clause c, whose truth value can be regarded as a random variable with outcomes "true" with probability P(c) and "false" with probability 1 - P(c), as follows:

H_c(c) = -P(c) log P(c) - (1 - P(c)) log (1 - P(c)).

Here, the subscript c is used to differentiate the entropy of a random variable from that of a clause.

When a given query q is posed to the knowledge base K, and a probability is computed with respect to K, i.e., when q matches a clause's head in K, as in the semantics of ProbLog, then P(q) = P_K(q), and we denote the entropy of q with respect to K as H_c(q | K), i.e.:

H_c(q | K) = -P_K(q) log P_K(q) - (1 - P_K(q)) log (1 - P_K(q)).

Note that if q does not match the head of any clause in K, then the result of the query is undefined; alternatively, for an application, this can be set to P_K(q) = 1/2 (i.e., a random guess).

III-B Uncertainty of a Knowledge Base

Let Q(K) denote the set of the terms which are the heads of all clauses in K. We consider the heads of the clauses as these correspond to the set of different queries for which the knowledge base can compute a meaningful probability.

Given a knowledge base K, we can then define an uncertainty measure of K, denoted by U(K), which takes into account the entropy of the answers it computes, i.e., the average entropy of the queries computable from K:

U(K) = (1 / |Q(K)|) Σ_{q ∈ Q(K)} H_c(q | K).     (1)

Ideally, if a knowledge base can answer all its queries with certainty (i.e., each query is true with probability 1 or false with probability 1), then U(K) = 0 (assuming that 0 log 0 = 0), while U(K) = 1 in the worst case.

Example 1

Suppose we have a knowledge base K as follows, in ProbLog:

0.2::a.
0.3::b.
0.5::a :- b.

The set of the heads of all clauses in K is {a, b}; the possible queries K can answer are a and b, i.e., Q(K) = {a, b}, with P_K(a) = 1 - (1 - 0.2)(1 - 0.3 × 0.5) = 0.32 and P_K(b) = 0.3. Thus, U(K) = (H_c(a | K) + H_c(b | K)) / 2 ≈ (0.904 + 0.881) / 2 ≈ 0.89.
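
As a check, the following minimal Python sketch reproduces these numbers, hardcoding the ProbLog inference for this particular knowledge base (a is true if the fact 0.2::a holds or if both 0.3::b and the rule 0.5::a :- b hold, treated as independent):

import math

def h_bin(p):
    # binary entropy of a clause that is true with probability p
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

p_b = 0.3                               # P_K(b) from the fact 0.3::b
p_a = 1 - (1 - 0.2) * (1 - 0.3 * 0.5)   # P_K(a) = 0.32

U_K = (h_bin(p_a) + h_bin(p_b)) / 2     # average entropy over the heads {a, b}, as in (1)
print(p_a, U_K)                         # 0.32, ~0.89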

III-C Sender's Message Choice Problem

Communication plays a crucial role in reducing the uncertainty of a knowledge base. To illustrate this, suppose that Alice has a set of clauses M and Bob has a knowledge base K. In order to minimize the average entropy of K after assimilating a message (with K replaced by K ∪ {m}), Alice can choose and send a message m among those in M to Bob, i.e.,

m* = argmin_{m ∈ M} U(K ∪ {m}).     (2)

However, this requires Alice to have complete knowledge of K. Alternatively, Alice might have a statistical approximation of K in which K = K_i with probability w_i. In this case, Alice's choice of message is recast as:

m* = argmin_{m ∈ M} Σ_i w_i U(K_i ∪ {m}).     (3)

One way to realize this idea is to allow Bob to keep feeding the entropy of his knowledge base back to Alice. Then, through iterative communication, Alice can gradually improve the accuracy of her approximation of K.
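
The selection rule in (2) can be sketched as follows. Here avg_entropy plays the role of U(·) in (1), and each candidate message is modelled abstractly by the query probabilities that Bob's knowledge base would yield after assimilating it; all values are hypothetical:

import math

def h_bin(p):
    # binary entropy of a clause that is true with probability p
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def avg_entropy(query_probs):
    # U(K) in (1): average clause entropy over the queries the knowledge base can answer
    return sum(h_bin(p) for p in query_probs.values()) / len(query_probs)

# Hypothetical result of assimilating each candidate message into Bob's knowledge base,
# expressed as the query probabilities the updated knowledge base would yield.
candidates = {
    "m1": {"a": 0.32, "b": 0.3},
    "m2": {"a": 0.32, "b": 0.9, "c": 0.95},
    "m3": {"a": 0.5, "b": 0.5},
}

best = min(candidates, key=lambda m: avg_entropy(candidates[m]))   # argmin in (2)
print(best, avg_entropy(candidates[best]))                         # m2 minimizes the average entropy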

III-D Semantic Content of a Message

We can define the notion of the semantic content of a message m (where a message in this case is a clause labelled with a probability) with respect to the receiver's background knowledge base K as the change in the average entropy of the knowledge base with respect to its queries:

S(m; K) = U(K ∪ {m}) - U(K).     (4)

Each message changes K, and the receiver wants to decrease the entropy, i.e., S(m; K) < 0, or wants S(m; K) to be as low as possible, as the message should decrease the average entropy of the computed queries (of course, it could also increase the average entropy!). In the following example, we show why this definition helps.

Example 2

Suppose Alice has a knowledge base K as follows, in ProbLog:

0.3::b.
0.5::a :- b.

Suppose Alice receives the labelled clause 0.2::m, i.e., m labelled with probability 0.2, forming K' as follows:

0.3::b.
0.5::a :- b.
0.2::m.

Then, P_{K'}(a) = 0.15, P_{K'}(b) = 0.3, and P_{K'}(m) = 0.2, and so Q(K') = {a, b, m}. We have:

U(K') = (H_c(a | K') + H_c(b | K') + H_c(m | K')) / 3 ≈ (0.610 + 0.881 + 0.722) / 3 ≈ 0.738 < U(K) ≈ 0.746.

The uncertainty in the knowledge base with respect to the queries it can answer has decreased, which is what we expect when Alice receives a clause with a lower entropy relative to the existing clauses in K. Also, if instead Alice received 0.9::b, then Alice's knowledge base K'' becomes:

0.9::b.
0.5::a :- b.

Then P_{K''}(b) = 0.9 and P_{K''}(a) = 0.45, that is, we have:

U(K'') = (H_c(a | K'') + H_c(b | K'')) / 2 ≈ (0.993 + 0.469) / 2 ≈ 0.731 < U(K) ≈ 0.746.

The uncertainty in the knowledge base with respect to the queries it can answer has decreased, which is what we expect when Alice receives a clause with a lower entropy replacing an existing clause in K. If instead we use set union "∪" to assimilate 0.9::b, keeping the existing clause 0.3::b, then we have:

0.9::b.
0.3::b.
0.5::a :- b.

where P(b) = 1 - (1 - 0.9)(1 - 0.3) = 0.93 and P(a) = 0.5 × 0.93 = 0.465, and

U = (H_c(a) + H_c(b)) / 2 ≈ (0.996 + 0.366) / 2 ≈ 0.681,

which is also a decrease in average entropy.

III-E Inference Can Reduce the Need for Communication

In general, suppose there is no background knowledge, i.e., K = ∅, and the uncertainty of a query q is H_c(q | ∅) = 1, i.e., the truth or falsity of q is merely a random guess. But with a knowledge base K, we expect to have: H_c(q | K) ≤ H_c(q | ∅). Furthermore, for two different knowledge bases K_1 and K_2, if

H_c(q | K_1) ≤ H_c(q | K_2),

then we say K_1 is less uncertain than K_2 with respect to q. In particular, for any K, we can easily show that H_c(q | K) ≤ H_c(q | ∅).

This can lead to a reduction in the need to obtain information about q, given that we can make inferences about q with K. For example, suppose H_c(q | K) ≤ ε, where ε > 0 is a small tolerance, is good enough; then there is no need to receive further information about q. In fact, with respect to q, we want only to receive information that reduces the entropy for q, that is, we want only to receive a message m such that:

H_c(q | K ∪ {m}) < H_c(q | K).

This can also be generalized if there is a set of available messages, say M, as follows:

m* = argmin_{m ∈ M} H_c(q | K ∪ {m}).     (5)

Here, m* is the best message among those in M to reduce the entropy for q. This implies that one might want to consider the consequences of receiving and assimilating a message (or, from the sender's side, the implications of sending a message) on the uncertainty of a knowledge base, i.e., whether it would increase or decrease the entropy with respect to q or with respect to the overall uncertainty of the knowledge base as defined above.
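
The per-query rule in (5), together with the tolerance ε discussed above, can be sketched as a simple decision procedure; the entropy values below are hypothetical:

EPSILON = 0.1   # tolerated residual uncertainty for query q

H_q_current = 0.86                                   # H_c(q | K) before receiving anything
H_q_after = {"m1": 0.72, "m2": 0.47, "m3": 0.91}     # H_c(q | K + m) for each candidate message

if H_q_current <= EPSILON:
    print("no further information about q is needed")
else:
    # keep only messages that actually reduce the entropy for q, then pick the best one as in (5)
    useful = {m: h for m, h in H_q_after.items() if h < H_q_current}
    best = min(useful, key=useful.get) if useful else None
    print(best)   # m2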

III-F Improved Security via Semantic Messages

As we have seen, the semantic content of a message helps reduce the receiver’s uncertainty about one or more queries. We can then define a notion of semantically secure messages, in that, without the receiver’s knowledge base, someone who has gotten hold of the message might not be able to use it to answer a query (or a set of queries).

For example, suppose Eve has knowledge base K_E and Alice sends a message m to Bob, who has knowledge base K_B. With respect to a query q, we can represent the fact that Eve has little use for the message m as follows:

H_c(q | K_E ∪ {m}) ≈ H_c(q | K_E).     (6)

In other words, suppose Eve managed to intercept the communication and gain the message m (and forwards it to Bob, pretending that nothing has happened, as in a man-in-the-middle attack); combined with her knowledge base K_E, Eve is still just as uncertain about q as before.

However, Bob, who receives m and has K_B, finds the message meaningful, that is, with respect to q:

H_c(q | K_B ∪ {m}) < H_c(q | K_B).     (7)

Hence, as long as Bob and Alice have an a priori shared context, as represented by the knowledge base K_B that Bob has (and Alice knows that Bob has K_B), it might be possible for Alice to transmit m so that Eve (who does not know K_B), an eavesdropper, will not be able to make much use of it with respect to some "sought after" answer for q.

Note that one can see this as analogous to the typical security encryption scenario: the plaintext m is encoded as a ciphertext using some key k; then Bob, who has knowledge of the key k, can decrypt the ciphertext to recover m, but Eve, after getting hold of the ciphertext, does not have k and cannot use it to obtain m. But there are key differences. There could be multiple ways to infer q with different sets of clauses. K_E and K_B may have different clauses, but both could allow some inferences about q. Alice needs to ensure that K_E is such that (6) holds and K_B is such that (7) holds before sending m.

We can consider semantic information security based on the previous discussion. Conventional information security [17, 18] is based on different channel reliability (e.g., the eavesdropper's channel is a degraded channel in wiretap channel models). On the other hand, semantic information security is based on the different reliability of knowledge bases.
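
Conditions (6) and (7) can be checked numerically, as in the toy sketch below, where the clause entropies of the query q with respect to Eve's and Bob's knowledge bases (before and after assimilating m) are hypothetical values:

TOL = 0.01   # tolerance for "approximately equal" in (6)

H_q_Eve, H_q_Eve_with_m = 0.99, 0.985   # H_c(q | K_E) and H_c(q | K_E + m)
H_q_Bob, H_q_Bob_with_m = 0.86, 0.35    # H_c(q | K_B) and H_c(q | K_B + m)

eve_learns_little = abs(H_q_Eve_with_m - H_q_Eve) <= TOL   # condition (6)
bob_finds_meaningful = H_q_Bob_with_m < H_q_Bob            # condition (7)

print(eve_learns_little and bob_finds_meaningful)   # True: m is semantically "secure" w.r.t. q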

IV Key Issues in Designing SC Systems

In this section, we discuss several issues in designing SC systems, in relation to the interactions between TC and SC.

IV-A A Structure of SC with TC

In [6], a model of SC was presented, which is illustrated in Fig. 1. The message generator, which is also called a semantic encoder, produces a message (syntax) that will be transmitted by a conventional/technical transmitter. As a result, it is possible to design an SC system with two different layers: the TC and SC layers.

Fig. 1: A model of SC from [6].

In particular, the output of the sender at the SC layer is a message to be transmitted over a conventional physical channel as shown in Fig. 2. The output of the decoder at the TC layer is a decoded message that becomes the input of the SC decoder. From this view, a conventional TC system can be used without any significant changes for SC. However, without any meaningful interactions between TC and SC, there is no way for TC to exploit the background knowledge in SC and use the information obtained from semantic inference.

Fig. 2: A two-layer model for SC over TC.

For interactions between TC and SC, the notion of conditional entropy [16] can be employed. In SC, we can assume that V is the information that can be obtained from the background knowledge at the receiver; in particular, V is a clause or an element of the clauses in the knowledge base at the receiver. For a clause W to be conveyed, the entropy of W given V becomes H(W | V) ≤ H(W). In this case, the sender only needs to send the information of W at a rate of H(W | V). In Fig. 3, we illustrate a model for exploiting the external and internal knowledge bases to reduce the number of bits to transmit. For a given query, Bob can extract partial information, V, from his knowledge base, which can be seen as data transmitted through internal communication, and seek additional information, W, from others' knowledge bases, e.g., Alice's knowledge base. In this case, the number of bits to be transmitted is H(W | V), which will be available through external TC.

Fig. 3: Exploiting the external and internal knowledge bases to reduce the number of bits to transmit.
Example 3

Suppose that Alice and Bob are the sender and receiver, respectively. In previous conversations, Alice told Bob that "Tom has passed an exam and his score is 75 out of 100," which becomes part of the background knowledge. Then, Bob asked Alice the pass score, which is denoted by X. Clearly, based on the knowledge base from the previous conversation, the pass score has to be less than or equal to 75, i.e., X ≤ 75, which can be regarded as the side information V. Thus, to encode X, the number of bits becomes H(X | V) = log 75 ≈ 6.23. If X is a positive integer and uniformly distributed over {1, ..., 100}, the number of bits without the background knowledge would be H(X) = log 100 ≈ 6.64, not H(X | V).
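
The bit counts in this example can be reproduced with a quick arithmetic check:

import math

H_without_knowledge = math.log2(100)   # X uniform over {1, ..., 100}
H_with_knowledge = math.log2(75)       # X uniform over {1, ..., 75}, given the side information X <= 75

print(H_without_knowledge, H_with_knowledge)   # ~6.64 vs ~6.23 bits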

Example 4

Suppose that Eve told Bob that "Tom's score is 75," which is denoted by fact f1. In addition, Alice sends additional information that "The pass score is 70," which is denoted by fact f2. Knowing the mark alone, Bob still does not know whether Tom has passed. Bob can ask Alice, but does not need to ask Alice or Eve whether or not Tom has passed, because Bob can tell that Tom passes from facts f1 and f2 via inference. If P(f1) = 0.8 and P(f2) = 0.9, the probability that Tom has passed is 0.8 × 0.9 = 0.72. Thus, in order to encode the fact that Tom has passed, which is a binary random variable (e.g., 1 (resp. 0) represents that Tom passes (resp. fails)), the number of bits becomes -0.72 log 0.72 - 0.28 log 0.28 ≈ 0.86, which is less than 1. This demonstrates that the background knowledge in SC can help compress the information in TC. A logic programming perspective on this example can also be considered. Suppose we model the knowledge Bob has with a rule that says that a person passes if the mark is at least a threshold, and also that Bob has been told by Eve Tom's score:

0.8::mark(tom,75).
1.0::pass(X) :- mark(X,M), pass_score(S), M >= S.

But Bob still does not know if Tom has passed. Bob could ask Alice but does not need to if he also knows the passing mark:

0.9::pass_score(70).
0.8::mark(tom,75).
1.0::pass(X) :- mark(X,M), pass_score(S), M >= S.

Bob can then answer the query pass(tom) himself, with computed probability 0.9 × 0.8 = 0.72. Now Bob knows not only Tom's mark but also whether Tom has passed, if this probability of 0.72 is good enough for Bob. With K_B representing Bob's knowledge base, note that H_c(pass(tom) | K_B) ≈ 0.86 < 1. Note also that if Charlie later tells Bob that Tom has passed with some lower probability, then Bob perhaps should discard Charlie's message if assimilating it would increase Bob's uncertainty about pass(tom). Inference can go far: by inferring about Tom, Bob has reduced the need for communication, and this extends beyond Tom to many others, saving a lot of communication. To put it another way, suppose Bob knows the marks of 1000 students but does not know the pass score; then Bob does not know whether any of them passed, but on receiving the single message with the pass score, Bob can infer which of the 1000 students passed and which did not. Also, rather than sending facts stating who passed and who did not, sending just the pass score is more efficient. Lastly, if Bob is uncertainty tolerant and guesses the pass score himself with some probability, then he does not even need to ask for the pass score, and he concludes that Tom passes with a correspondingly lower probability, which might be good enough for tolerant Bob.
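
The numbers in this example can be reproduced directly, assuming (as in ProbLog) that the two probabilistic facts are independent; this is only an arithmetic sketch of the inference and the resulting bit count:

import math

def h_bin(p):
    # binary entropy of a clause that is true with probability p
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

p_mark = 0.8        # 0.8::mark(tom, 75)
p_pass_score = 0.9  # 0.9::pass_score(70)

p_pass = p_mark * p_pass_score        # pass(tom) requires both facts: 0.72
bits_to_encode_pass = h_bin(p_pass)   # ~0.86 bits, instead of 1 bit without inference

print(p_pass, bits_to_encode_pass)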

In general, the notion of Slepian-Wolf coding [19] can be employed in order to efficiently exploit the background knowledge in SC. Suppose that there are two sources at two separate senders, denoted by X and Y, for distributed source coding. In Slepian-Wolf coding, sender 1 that has X can transmit at a rate of H(X), while sender 2 that has Y can transmit at a rate of H(Y | X), not H(Y). As a result, the total rate becomes H(X) + H(Y | X) = H(X, Y). In the context of SC, X can be seen as the information that is available from the background knowledge and through semantic inference.

IV-B SC for Efficient TC

As discussed in Subsection III-E, an optimal message can be chosen to minimize the entropy for a given query (see (5)). If a message is to be sent over a TC channel, the length of the message can be regarded as the cost of TC. Let l(m) denote the length (in bits) of a message m among all the available messages in M at the sender, while K represents the knowledge base at the receiver that has query q. Provided that the maximum length of a message is limited to L, the optimal message for query q can be given by

m* = argmin_{m ∈ M} H_c(q | K ∪ {m})     (8)
subject to l(m) ≤ L.     (9)

While the optimization in (8) and (9) would be tractable, it requires the sender to know or estimate the receiver's knowledge base, K, so that it can find m*. Thus, in general, it is expected that the sender has a larger knowledge base than the receiver and knows the receiver's knowledge base. For example, the sender can be a server in the cloud and the receiver can be a mobile user in a cellular system. The server needs to keep all the registered users' knowledge bases updated. In addition, the server is connected to base stations and needs to estimate the length of the message to be transmitted through TC, which may vary depending on the time-varying physical channel condition between the user and the associated base station. In this case, l(m) is also a function of the channel condition and the parameters of the physical layer (e.g., modulation order, code rate, and so on).
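
The constrained selection in (8) and (9) can be sketched as below. The message lengths l(m) and post-assimilation entropies are hypothetical, and in practice l(m) would also depend on the physical-layer parameters mentioned above:

L_MAX = 64   # maximum message length in bits allowed by the TC channel

# For each candidate message: (length in bits, H_c(q | K + m)); all values are illustrative.
candidates = {
    "m1": (32, 0.72),
    "m2": (128, 0.20),   # best entropy reduction, but too long for the channel
    "m3": (48, 0.47),
}

feasible = {m: h for m, (l, h) in candidates.items() if l <= L_MAX}   # constraint (9)
best = min(feasible, key=feasible.get)                                # objective (8)
print(best)   # m3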

IV-C Integration with Distributed Sources

In this subsection, we discuss an approach to efficiently selecting distributed sources by minimizing the entropy gap, and extend it to the semantic context.

Suppose that there are multiple senders and one receiver. Let X_k denote the information that sender k has. The receiver has a query whose answer is a function of the variables at the senders, given by f(X_1, ..., X_N), where N stands for the number of senders. For a large N, with a limited bandwidth, collecting all information from the distributed senders may take a long time. Furthermore, if the X_k's are correlated, it may not be necessary to collect all variables. For efficient data collection from distributed senders/sources (or sensors), the notion of data-aided sensing (DAS) has been considered in [20, 21]. If only one sender can be chosen in each round, the following selection criterion is proposed in [22]:

k_{t+1} = argmin_{k ∈ I_t^c} H(f(X_1, ..., X_N) | X_{I_t}, X_k),     (10)

where I_t represents the index set of the senders that have sent their information up to iteration t and X_{I_t} is the set of the variables of the senders corresponding to I_t. Here, I_t^c stands for the complement of the set I_t. In (10), H(f(X_1, ..., X_N) | X_{I_t}, X_k) represents the total amount of remaining uncertainty of f(X_1, ..., X_N) given X_{I_t}, which is available at the receiver up to iteration t, together with the candidate X_k. Thus, in the next iteration, t + 1, the sender that minimizes the remaining uncertainty is to be chosen.

While no semantic information is taken into account in (10), it is possible to extend it to consider semantic information. Let m_k be the message at node k (for a set of queries) and let K_t represent the updated knowledge base at iteration t. Then, from (4), the node (or source) selection criterion becomes:

k_{t+1} = argmin_{k ∈ I_t^c} U(K_t ∪ {m_k}).     (11)

That is, the receiver can actively seek the most effective message among multiple sources and iterate this process to rapidly improve the knowledge base. In addition, as in (8) and (9), constraints on TC can be imposed if TC channels are limited (e.g., in terms of capacity and channel resource sharing).
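
The semantic source-selection rule in (11) can be sketched as a greedy loop: in each iteration, the receiver picks the not-yet-selected source whose message would yield the lowest average entropy of the updated knowledge base. The per-source values below are hypothetical:

# Hypothetical U(K_t + m_k) for each remaining source k at the current iteration.
remaining = {1: 0.74, 2: 0.81, 3: 0.62, 4: 0.69}
selected = []

while remaining:
    k_best = min(remaining, key=remaining.get)   # argmin in (11)
    selected.append(k_best)
    remaining.pop(k_best)
    # In a real system, U(K_t + m_k) for the remaining sources would be re-evaluated here,
    # after assimilating the chosen source's message into the knowledge base.

print(selected)   # [3, 4, 1, 2]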

V Conclusions and Open Issues

In this paper, we have proposed an SC layer that can simply be added on top of the conventional TC layer. In addition, we have discussed how such SC and TC layers can jointly operate and interact, with selected examples. While we have focused mainly on the Shannon-Weaver semantics problem (Level B), it has been presumed that all semantic contents can be useful for some generic tasks in terms of the effectiveness problem (Level C). However, such SC strategies may not be sustainable under limited memory for storing the ever-growing amount of knowledge, not to mention incurring redundant communication costs. To address this issue, an interesting topic for future research is to investigate feedback and prediction mechanisms to estimate the semantic content's task effectiveness based on pragmatic information theory [23], where one may first focus on a given task, and then count the usefulness of semantic contents based on their effectiveness in the task. In addition, there are recently proposed semantics-empowered and goal-oriented SC frameworks that commonly rest on AI-native operations with neural networks, as opposed to our knowledge-based SC layer. We expect that AI-native and knowledge-based SC frameworks are complementary, possibly even creating a synergistic effect. This is another important open issue, where effective integration in terms of various performance criteria is to be studied.

References

  • [1] C. E. Shannon, “A mathematical theory of communication,” The Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948.
  • [2] W. Weaver, “Recent contributions to the mathematical theory of communication,” ETC: A Review of General Semantics, vol. 10, no. 4, pp. 261–281, 1953.
  • [3] E. Calvanese Strinati and S. Barbarossa, “6G networks: Beyond Shannon towards semantic and goal-oriented communications,” Computer Networks, vol. 190, p. 107930, 2021.
  • [4] Z. Weng, Z. Qin, and G. Y. Li, “Semantic Communications for Speech Signals,” arXiv, 2020.
  • [5] H. Seo, J. Park, M. Bennis, and M. Debbah, “Semantics-native communication with contextual reasoning,” arXiv preprint arXiv:2108.05681, 2021.
  • [6] J. Bao, P. Basu, M. Dean, C. Partridge, A. Swami, W. Leland, and J. A. Hendler, “Towards a theory of semantic communication,” in 2011 IEEE Network Science Workshop, pp. 110–117, 2011.
  • [7] L. Floridi, “Is semantic information meaningful data?,” Philosophy and Phenomenological Research, vol. 70, pp. 351–370, Mar. 2005.
  • [8] L. Floridi, “Trends in the philosophy of information,” in Philosophy of Information (P. Adriaans and J. van Benthem, eds.), pp. 113–131, 2008.
  • [9] J.-C. Belfiore and D. Bennequin, “Topos and stacks of deep neural networks,” arXiv preprint arXiv:2106.14587, 2021.
  • [10] P. Adriaans, “A critical analysis of Floridi’s theory of semantic information,” Knowledge, Technology & Policy, vol. 23, pp. 41–56, June 2010.
  • [11] R. Carnap, Logical Foundations of Probability. University of Chicago Press, 1950.
  • [12] T. Hailperin, “Probability logic,” Notre Dame Journal of Formal Logic, vol. 25, pp. 198–212, July 1984.
  • [13] J. Williamson, “Probability logic,” in Handbook of the Logic of Argument and Inference (D. M. Gabbay, R. H. Johnson, H. J. Ohlbach, and J. Woods, eds.), vol. 1 of Studies in Logic and Practical Reasoning, pp. 397–424, Elsevier, 2002.
  • [14] N. J. Nilsson, “Probabilistic logic,” Artificial Intelligence, vol. 28, no. 1, pp. 71–87, 1986.
  • [15] Y. Bar-Hillel and R. Carnap, “Semantic information,” The British Journal for the Philosophy of Science, vol. 4, no. 14, pp. 147–157, 1953.
  • [16] T. M. Cover and J. A. Thomas, Elements of Information Theory. NJ: John Wiley, second ed., 2006.
  • [17] M. Bloch and J. Barros, Physical-Layer Security: From Information Theory to Security Engineering. Cambridge University Press, 2011.
  • [18] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Cambridge University Press, 2 ed., 2011.
  • [19] D. Slepian and J. Wolf, “Noiseless coding of correlated information sources,” IEEE Transactions on Information Theory, vol. 19, no. 4, pp. 471–480, 1973.
  • [20] J. Choi, “A cross-layer approach to data-aided sensing using compressive random access,” IEEE Internet of Things Journal, vol. 6, no. 4, pp. 7093–7102, 2019.
  • [21] J. Choi, “Gaussian data-aided sensing with multichannel random access and model selection,” IEEE Internet of Things Journal, vol. 7, no. 3, pp. 2412–2420, 2020.
  • [22] J. Choi, “Data-aided sensing where communication and sensing meet: An introduction,” in 2020 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), pp. 1–6, 2020.
  • [23] D. Gernert, “Pragmatic information: Historical exposition and general overview,” Mind and Matter, vol. 4, no. 2, pp. 141–167, 2006.