The intelligence community is cool stuff. Since I could never be a G-man, I tend to be interested in the SIGINT side of things. At least I can relate to what those folks are doing, and the technology side is rather deep as well.
Now that I've ventured onto one of the better known technical surveillance counter-measures (TSCM, "bug sweeping") sites and started reading through the material, one simple but potentially quite powerful idea has come to me. In fact, it's so embarrassingly simple that I can't really understand why I'm not able to find any online references to existing implementations.
The basic premise of bugs is to capture and retransmit data, most commonly audio or video, where remote surveillance does not suffice. Essentially they are about capturing, amplifying and retransmitting data that is otherwise too weak to be detected at a distance. The processing also has to be achieved in a manner which is surreptitious enough not to be noticed. The problem of the bug sweeper is to detect whether that sort of thing is going on, which then boils down to a) identifying and enumerating the possible covert channels which could be used to retransmit the information, and b) systematically, physically going through the weak spots to verify the presence or absence of machinery ("bugs") that could be doing the job. The problem is that there are incredible numbers of such channels, so that even checking for the presence of unwanted radio transmissions is very laborious. Especially when the adversary is actively trying to hide his presence.
Still, there are two distinct advantages on the side of the bug hunter. The first is that he's almost by definition closer to the transmitting device than the adversary, since range extension is what bugs are installed for in the first place. That means that any signals they send out are going to be stronger at the source, and thus in theory easier to detect. This advantage is commonly exploited in the TSCM community, in the form of frequency scanners and the like. The adversary in turn tries to hide his presence in the complexity of the problem, and to spread out, encrypt and randomize the outward information flow. Cutting through such counter-counter-measures becomes the well-known arms race.
Then there is the second advantage, which is that by definition, bugs are about retransmitting a signal produced by the party that is bugged. This means that the signal is under the control of the target, and so potentially under the control of the TSCM expert as well. At some level this idea has been exploited, too -- the typical warning sign that bugging is going on is when outsiders seem to know stuff that they shouldn't, and of course no spy novel is ever complete without at least some amount of deliberately planted misinformation.
However, at the purely technical level, I've never seen this idea utilized to its fullest. My point is that at one level or another, the functioning and often even the transmissions of a bug of any kind must show some kind of correlation to the message being retransmitted. Since you can control the message, sorting out whether a particular information channel is retransmitting it becomes tantamount to measuring some form of cross-correlation or cross-information measure. And at least as far as cross-correlation goes, there are highly efficient means of computing it, especially with input sequences such as maximum length sequences. So why not start by (ideally) convolving/correlating the entire useful radio spectrum against a low-level noise signal that you inserted yourself? Quite a number of continuous retransmission schemes could be uncovered that way, once the matched filter presented by your sequences brought out dependent structure from the RF chaos.
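As a rough sketch of the idea, here is a toy simulation in which every number, name and channel model is made up purely for illustration: inject a maximum length sequence as a low-level probe, then pull a weak linear retransmission of it out of noise with an FFT-based correlator.

    import numpy as np
    from scipy.signal import max_len_seq

    rng = np.random.default_rng(0)

    # 2**14 - 1 chips of a maximum length sequence, mapped from {0,1} to {+1,-1}.
    probe = 2.0 * max_len_seq(14)[0] - 1.0

    # Hypothetical bug: re-emits whatever it hears with some delay and a small
    # gain, buried in receiver noise. (A real test would feed the probe to a
    # speaker and correlate against a wideband RF capture instead.)
    delay, gain = 2500, 0.08
    received = rng.normal(size=probe.size + delay)
    received[delay:] += gain * probe

    # FFT-based cross-correlation, i.e. a matched filter for the probe.
    n = probe.size + received.size
    xc = np.fft.irfft(np.fft.rfft(received, n) * np.conj(np.fft.rfft(probe, n)), n)

    peak = int(np.argmax(np.abs(xc)))
    print(f"correlation peak at lag {peak} (true delay {delay}); "
          f"peak/median magnitude ratio {np.abs(xc[peak]) / np.median(np.abs(xc)):.1f}")

The per-sample probe level sits well below the noise floor here, yet the processing gain of the matched filter, roughly the sequence length, makes the lag of the retransmission stand out clearly.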
The beauty of this kind of thinking is that it is bug-agnostic and employs your natural advantage to its fullest. As a systems-level concept it is equally applicable to video -- just perturb the lighting conditions instead and proceed the same way. It is well suited for continuous monitoring, because spread-spectrum probes can be quiet/dim enough not to be disturbing (or even noticeable) to anybody in the space. Algorithms and hardware to accomplish the feat are generally available, and can easily stretch to hundreds of millions of samples per second, which covers quite a bandwidth already. Narrowband scanning techniques are just as adaptable to this kind of treatment as wider-band ones. High-speed entropy estimation algorithms are available which can help sort out whether there is something anomalous ("I hear structure") in the filter output. Sure, store-and-burst type bugs would not be easily detected this way -- but then, even they have to record continuously, which means that their input stage will probably be noisy in the RF range in a way that is correlated to the acoustics, so that local detection is possible. (In the longer term, their aggregate RF output will also have to be correlated over time with the envelope of the bits actually captured and retransmitted.) And perhaps most beautifully, when you're only trying to detect the presence of anomalous transmissions and not trying to decode them in any particular way, correlations show even in heavily aliased signals, which means that the correlator stage can theoretically be dropped to a much lower intermediate sampling frequency; all that is really needed is a wideband sample-and-hold circuit without antialiasing of any kind, and a long enough correlation length.
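The aliasing point can be sketched the same way. In the toy model below (again with invented numbers and a bare delay-plus-noise channel), the capture side keeps only every eighth raw sample with no anti-alias filtering, so the probe itself could never be reconstructed from the captured stream; brute-force correlation against correspondingly decimated shifts of the probe still finds the retransmission lag, just with a smaller processing gain.

    import numpy as np
    from scipy.signal import max_len_seq

    rng = np.random.default_rng(1)

    probe = 2.0 * max_len_seq(16)[0] - 1.0         # 65535 chips at the full rate
    delay, gain = 1234, 0.1                        # unknown lag and level of the bug
    decim, max_lag = 8, 2048                       # slow-sampler factor; lag search window

    received = rng.normal(size=probe.size + max_lag)
    received[delay:delay + probe.size] += gain * probe

    # Cheap front end: a sample-and-hold at 1/8 of the chip rate, no filtering.
    captured = received[::decim]

    # Brute-force correlation of the slow capture against decimated shifts of
    # the known full-rate probe.
    score = np.empty(max_lag)
    for k in range(max_lag):
        template = np.zeros(received.size)
        template[k:k + probe.size] = probe         # probe delayed by k chips
        score[k] = captured @ template[::decim]    # correlate at the slow rate only

    k_hat = int(np.argmax(score))
    print(f"estimated lag {k_hat} (true {delay}); "
          f"peak/background ratio {score[k_hat] / np.std(np.delete(score, k_hat)):.1f}")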
This sort of thing is not fool-proof; nothing ever is. But I have a hunch that it could catch a relatively high proportion of low to middle tier intercept devices in a completely automated way. And because of the systems level approach, it would also catch many spurious and passive covert channels as well.
2009-12-31
On divine intervention
I've been browsing through Contact, which is, for some reason, one of the most evocative movies for me.
In this and quite a number of other movies I continually wonder why divinity intersects the plot so strongly without ever making itself known. I mean, in the fully extraphysical, miraculous sense. Sure, they talk about miraculous "voices from the sky"...but then those have been detected by purpose-built radio telescopes. In earlier lore the same thing often takes the form of other means of communication through...stuff that was physically well known, but not popularly well understood at the time.
Even the stuff that *really* stretches the functioning scientist's mind is always left out. We never see the kinds of mental gymnastics that, exempli gratia, Greg Egan imposes upon us. And that's just self-limiting, if experimental, hard sci-fi. It reaches nowhere *near* the kind of awe divinity is supposed to inspire in us all.
I find just about all descriptions of the preternatural in fiction wanting. They fail to awe. And I'm not just talking about the Bible -- the lousy piece of pseudo-history that tome is -- I mean God's-honest, meant-to-be-awesome pieces of hardcore fiction. The kinds which make me both laugh and cry, possibly at the same time.
Even they fail to carry any sense of divinity. As much as they perhaps try.
2009-12-15
Accounting wonders
Once upon a time I (unsuccessfully) worked as the fund-keeper of a small private organization. It was my first time, and I had very little practical understanding of accounting. Sure, I knew precisely how it was done in theory, and that wasn't the problem; my eventual failure had more to do with motivation than technique. Still, I never had an intuitive grasp of what accounting is about. Recently I've also started wondering whether accountants themselves really know why they do it the way they do.
My favourite example is double-entry bookkeeping. Just about every source I've ever consulted tells me that its raison d'être is that it serves as a kind of primitive error correction scheme. Well, it certainly does that, but why do people keep on using that sort of primitive in the age of computers? Some sources additionally suggest that there is a practical benefit to being able to subtract from an account by using the credit column, or to repeating numbers across accounts without sign changes, but those are really just other ways of saying the same thing. Neither gets us around subtracting from one account and adding to another, which is the more basic description of double-entry.
In this one precise case, the likeliest explanation dawned on me thanks to my ample use of alcohol.
At the time I was dating someone who lived in a commune. That commune had a common liquor cabinet, with more or less free access as long as any drain on it was eventually settled with the owner. The result was a much more varied, practically inexhaustible and eminently available and convenient cabinet for everybody.
As I came to be accepted as a permanent fixture of the communal life, I was granted my own box in the diagram, and due to my thirst, quickly became adept at keeping tabs on my usage. That bookkeeping went through a couple of revisions, but eventually settled into a form where you tallied a standardized shot ("ravintola-annos", Finnish for a standard restaurant serving) on 1) whose bottles you touched, and 2) your own name. That way both who suffered the blow and who should be the one to compensate were recorded. On one tipsy night I finally grasped the connection: what we had there was picture-perfect double-entry accounting using a single account, which just happened to be in base one, so the analogy didn't show as easily as it otherwise would have. And still we pretty much never balanced the book, so that couldn't have been why we really went with two columns to begin with. (In case you wonder, yes, there was a price premium above buying cost; this commune had little to do with communism.)
The real reason was that we had a multiple-input, multiple-output stock, where only the aggregate balances accrued mattered. Each transaction needed to go from a single owner of a bottle to a single consumer of a drink, but that was only a mechanism which maintained the constraint that every shot eventually had to be paid for by someone. The real beef was that the aggregate contribution to and withdrawal from the cabinet needed to be kept in check; every transaction basically lost its meaning after those sums had been updated. The stock was there to permit us to share efficiencies of scale, and the numbers were there to account for the aggregate inflows and outflows which helped maintain that beneficial scale.
This is exactly what happens in firms. They have multiple inputs and multiple outputs. From the accounting perspective, all that matters are the sums total of incoming and outgoing money flows. (Or, vice versa, material flows, if you want to do some kind of materials accounting as well.) In theory you could treat the finances of a large multinational using a matrix of all inputs vs. all outputs, sure, but you really don't want to be doing that with even a dozen accounts when what you're actually dealing with is an additive, homogeneous, non-perishable thing like money. You're only interested in what goes in and what comes out in toto. So that is the basis you will choose, inputs minus outputs equals balance, and since every single transaction you do influences both, you'll have to incrementally maintain those numbers in two different places. Voilà: double entries.
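A minimal sketch of that mechanism in code, with made-up account names and entries (nothing resembling a real chart of accounts), just to show how small the core of it is:

    from collections import defaultdict

    journal = []                     # the full, disaggregated record of transactions
    balances = defaultdict(int)      # the running aggregates, one per account

    def post(debit, credit, amount, memo=""):
        """Record one transaction: value flows into `debit` and out of `credit`."""
        journal.append((debit, credit, amount, memo))
        balances[debit] += amount    # first entry...
        balances[credit] -= amount   # ...second entry -- hence "double entry"

    post("cash", "equity", 1000, "members chip in")
    post("stock", "cash", 400, "bottles bought for the cabinet")
    post("cash", "sales", 60, "shots settled at a premium")

    # The classic error check: entries over all accounts must sum to zero.
    assert sum(balances.values()) == 0
    print(dict(balances))

Debits and credits here are just the two signs of one and the same movement; the point is that every posting touches exactly two running sums.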
P.S. There are still at least three things unaccounted for, if you pardon the pun. First, journals do exist. They do so for, let's say, forensic purposes. They record the transactions so that if the abstraction that only summed inputs and outputs matter somehow falls apart, we can still get at the details. That can happen when the inherent error check double-entry affords us signals an error, doubly so in case of intentional fraud, and on the other hand when we want to perform closer analysis on the disaggregated flow of transactions (what would today be called OLAP).
Second, why have multiple accounts? That's because only homogeneous stocks can be reasonably tracked in the sum-in, sum-out manner: you can have an account in money, but not one in money plus steel, or one in yen plus dollars. When we want to make finer distinctions, we usually require that certain sorts of transactions going out from account a have to go to account b and not account c. That's a rudimentary means of business analytics: the sums on each account tell us how we're doing with its underlying stock, even if the account does not denote anything physical at all. Double-entry then helps us even further there: you only need to consider the accounts, that is the stocks tracked, relevant to your current task.
And third, why do we still have this system when in principle we could calculate all of the data on the fly from the general journal? This can be explained as a form of precalculation and caching, which 1) enables distributed, asynchronous and write-only maintenance of the relevant balances, 2) makes the aggregates available early because of that, and 3) permits local analysis of aggregate inflows and outflows in the absence of access to a central repository of data.
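In code terms, reusing the made-up entries from the sketch above, the general journal alone is enough to rebuild every balance; the double-entry balances are just an incrementally maintained cache of this fold:

    from collections import defaultdict

    # A journal alone is enough to rebuild every balance.
    journal = [("cash", "equity", 1000), ("stock", "cash", 400), ("cash", "sales", 60)]

    balances = defaultdict(int)
    for debit, credit, amount in journal:
        balances[debit] += amount
        balances[credit] -= amount

    assert sum(balances.values()) == 0   # the error check falls out of the fold, too
    print(dict(balances))                # {'cash': 660, 'equity': -1000, 'stock': 400, 'sales': -60}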
2009-12-10
The improperness of improper priors
Over the past ten days or so I've once again been diving into Bayesian statistics; let us say I'm experiencing a serious, personal paradigm shift wrt statistical inference. My main starting point as a practicing database guy is the easy correspondence between embedded multivalued dependency (EMVD) on the one hand and conditional independence (CI) in Bayesian networks on the other—EMVD has for the first time made the basic ideas of BNs accessible to me at the intuitive level, and of course it's pretty nice in other ways, since we're now only talking about the easy, finite, always continuous cases.
Still, you have to mind the infinite frame of mind as well. And since I happen to think like I do, I'm from the start strongly attached to the objective Bayesian viewpoint, in preference to the more conventional subjective one. In fact, my revival is intimately tied with my finally having learned that the Bayesian framework can also be described in purely objective, information theoretical, measurement-of-ignorance terms instead of vague references to "degrees of belief".
Here, the nastiest counter-example seems to be the problem with unnormalized (improper) priors, and the subsequent marginalization paradox, which is absent from the theory dealing purely with proper priors/normalized probability measures. At least to me it implies that there might not be a coherent description of complete uncertainty within the Bayesian framework. That is rather bad, because even as we already know that Bayes's theorem is just about the neatest framework for consolidating uncertain information in a provably coherent way (no Dutch Book! eventual stabilization among a group of Bayesian learners with shared priors, modulo common knowledge concerns and the like! a neat generalization of Popperian falsificationism!!!), we do always need a coherent starting point which has an objective, not a purely subjective, interpretation.
Now, I had a little bit of an intuitive flash there, as I'm prone to. While it is true that we cannot normalize many of the most natural "distributions" that would go along with common improper priors, perhaps that has less to do with the impossibility of bringing formal rigor to bear on them than we might think. I mean, sure, we cannot make even the simplest of such priors, the flat one, normalize into a probability measure in the infinite base set case; not even using the theory of general distributions. But sure enough we can make it work if we let the prior become a general linear functional, restricted to a class of arguments on which it is continuous.
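To make the simplest case concrete, here is roughly the shape of what I mean, in my own notation (a sketch, not a claim about any established treatment):

    The flat ``prior'' $\pi(\theta) \equiv 1$ on an infinite base set cannot be
    normalized, since
    \[
      \int_{\mathbb{R}} 1 \, d\theta = \infty ,
    \]
    but viewed as a linear functional restricted to integrable arguments,
    \[
      \Lambda(f) = \int_{\mathbb{R}} f(\theta) \, d\theta , \qquad f \in L^1(\mathbb{R}) ,
    \]
    it is perfectly well defined. The familiar ``posterior under a flat prior'' is then
    \[
      \pi(\theta \mid x) = \frac{p(x \mid \theta)}{\Lambda\bigl( p(x \mid \cdot\,) \bigr)} ,
    \]
    which is a proper density exactly when the likelihood
    $\theta \mapsto p(x \mid \theta)$ is itself integrable.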
So, maybe the marginalization problem, or the common use of improper priors, wasn't so much about the impossibility of representing noninformative priors "cleanly", after all. Maybe it was just about our choice of representation?
I haven't gone through the particulars, but I have a strong sense that this sort of approach could lead to a natural, structural encoding of the special nature of not-knowing-shit versus sorta-probabilistically-knowing-something. Structural restrictions on the resulting operator algebra would simply make certain kinds of calculations inadmissible, those restrictions would propagate in the intuitively proper manner through things like Bayes's rule, and things that manifest themselves as the marginalization paradox (or others like it?) would probably be forced out into the open as topological limitations on the compatibility of operators which act not only on distributions proper but on the dual space as well. In particular, Bayes's rule with the usual pointwise interpretation would only apply to the density function side of things, and would probably have to be qualified in the case of functional/function, and certainly functional/functional, interactions. (E.g. you can't reasonably deal with products of functions and functionals; you always have to go to the inner product, i.e. in conventional terms marginalize, when both are present. And in the case of functionals, the topology of the dual function space is rather different from the conventional one, seriously affecting the theory of integration.)